content list (lengths 1 to 171) | tag dict |
---|---|
[
{
"data": "title: Monitoring Weave Net with Prometheus menu_order: 90 search_type: Documentation Two endpoints are exposed: one for the Weave Net router, and, when deployed as a , one for the [network policy controller](/site/kubernetes/kube-addon.md#npc). `weave_connections` - Number of peer-to-peer connections. `weaveconnectionterminations_total` - Number of peer-to-peer connections terminated. `weave_ips` - Number of IP addresses. `weavemaxips` - Size of IP address space used by allocator. `weavednsentries` - Number of DNS entries. `weave_flows` - Number of FastDP flows. `weaveipamunreachable_count` - Number of unreachable peers that own IPAM addresses. `weaveipamunreachable_percentage` - Percentage of all IP addresses owned by unreachable peers. `weaveipampending_allocates` - Number of pending allocates. `weaveipampending_claims` - Number of pending claims. The following metric is exposed: `weavenpcblockedconnections_total` - Connection attempts blocked by policy controller. When installed as a Kubernetes Addon, the router listens for metrics requests on 0.0.0.0:6782 and the Network Policy Controller listens on 0.0.0.0:6781. No other requests are served on these endpoints. Note: If your Kubernetes hosts are exposed to the public internet then these metrics endpoints will also be exposed. When started via `weave launch`, by default weave listens on its local interface to serve metrics and other read-only status requests. To publish your metrics throughout your cluster, you can set `WEAVESTATUSADDR`: `WEAVESTATUSADDR=X.X.X.X:PORT` Set it to an empty string to disable. You can also pass the parameter `--metrics-addr=X.X.X.X:PORT` to `weave launch` to specify an address to listen for metrics only. Weave Net monitoring can be setup in Kubernetes using the kube-prometheus for Weave Net. You can read about the example document . Let's setup weave monitoring using . Follow this ``` go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb ``` Follow this ``` go get github.com/brancz/gojsontoyaml ``` ``` jb update ``` Note: Some alert configurations are environment specific and may require modifications of alert thresholds. 
``` cat << EOF > weave-net.jsonnet local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + (import 'kube-prometheus/kube-prometheus-weave-net.libsonnet') + { _config+:: { namespace: 'monitoring', }, prometheusAlerts+:: { groups: std.map( function(group) if group.name == 'weave-net' then group { rules: std.map(function(rule) if rule.alert == \"WeaveNetFastDPFlowsLow\" then rule { expr: \"sum(weave_flows) < 20000\" } else if rule.alert == \"WeaveNetIPAMUnreachable\" then rule { expr: \"weave_ipam_unreachable_percentage > 25\" } else rule , group.rules ) } else group, super.groups ), }, }; { ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + { ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + { ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + { ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + { ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + { ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } + { ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } EOF ``` ``` jsonnet -J vendor -m manifests weave-net.jsonnet | xargs -I{} sh -c 'cat $1 | gojsontoyaml > $1.yaml; rm -f $1' -- {} ``` Applying the created manifests will install the following components in your Kubernetes cluster: the \"monitoring\" namespace, an Operator to manage Prometheus, dashboards on top of Prometheus metrics, a ServiceMonitor that brings the Weave Net metrics to Prometheus, and the Weave Net Service which the ServiceMonitor scrapes. ``` cd manifests ls *.yaml | grep -e ^prometheus -e ^node -e ^grafana -e ^kube | xargs -I {} kubectl create -f {} -n monitoring kubectl create -f prometheus-serviceWeaveNet.yaml -n kube-system kubectl create -f prometheus-serviceMonitorWeaveNet.yaml -n monitoring ``` Note: If you want to make changes to the created yamls, please modify the weave-net.jsonnet and recreate the manifests before applying."
}
] |
{
"category": "Runtime",
"file_name": "metrics.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
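The row above states that the router serves metrics on 0.0.0.0:6782 and the network policy controller on 0.0.0.0:6781 when deployed as a Kubernetes Addon. The following is a minimal Prometheus scrape configuration sketch for those two endpoints; the node IPs and job names are placeholders, and a real cluster would more likely use `kubernetes_sd_configs` than static targets.

```yaml
# Minimal scrape config for the Weave Net endpoints described above.
# 10.0.0.11 / 10.0.0.12 are placeholder node addresses.
scrape_configs:
  - job_name: weave-net-router        # router metrics, port 6782
    static_configs:
      - targets:
          - 10.0.0.11:6782
          - 10.0.0.12:6782
  - job_name: weave-net-npc           # network policy controller, port 6781
    static_configs:
      - targets:
          - 10.0.0.11:6781
          - 10.0.0.12:6781
```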
[
{
"data": "When unpausing schedule, a backup could be due immediately. New Schedules also create new backup immediately. This design allows user to skip immediately due backup run upon unpausing or schedule creation. Currently, the default behavior of schedule when `.Status.LastBackup` is nil or is due immediately after unpausing, a backup will be created. This may not be a desired by all users (https://github.com/vmware-tanzu/velero/issues/6517) User want ability to skip the first immediately due backup when schedule is unpaused and or created. If you create a schedule with cron \"45 \" and pause it at say the 43rd minute and then unpause it at say 50th minute, a backup gets triggered (since .Status.LastBackup is nil or >60min ago). With this design, user can skip the first immediately due backup when schedule is unpaused and or created. Add an option so user can when unpausing (when immediately due) or creating new schedule, to not create a backup immediately. Changing the default behavior Add a new field with to the schedule spec and as a new cli flags for install, server, schedule commands; allowing user to skip immediately due backup when unpausing or schedule creation. If CLI flag is specified during schedule unpause, velero will update the schedule spec accordingly and override prior spec for `skipImmediately``. `velero schedule unpause` will now take an optional bool flag `--skip-immediately` to allow user to override the behavior configured for velero server (see `velero server` below). `velero schedule unpause schedule-1 --skip-immediately=false` will unpause the schedule but not skip the backup if due immediately from `Schedule.Status.LastBackup` timestamp. Backup will be run at the next cron schedule. `velero schedule unpause schedule-1 --skip-immediately=true` will unpause the schedule and skip the backup if due immediately from `Schedule.Status.LastBackup` timestamp. Backup will also be run at the next cron schedule. `velero schedule unpause schedule-1` will check `.spec.SkipImmediately` in the schedule to determine behavior. This field will default to false to maintain prior behavior. `velero server` will add a new flag `--schedule-skip-immediately` to configure default value to patch new schedules created without the field. This flag will default to false to maintain prior behavior if not set. `velero install` will add a new flag `--schedule-skip-immediately` to configure default value to patch new schedules created without the field. This flag will default to false to maintain prior behavior if not set. `pkg/apis/velero/v1/schedule_types.go` ```diff // ScheduleSpec defines the specification for a Velero schedule type ScheduleSpec struct { // Template is the definition of the Backup to be run // on the provided schedule Template BackupSpec `json:\"template\"` // Schedule is a Cron expression defining when to run // the"
},
{
"data": "Schedule string `json:\"schedule\"` // UseOwnerReferencesBackup specifies whether to use // OwnerReferences on backups created by this Schedule. // +optional // +nullable UseOwnerReferencesInBackup *bool `json:\"useOwnerReferencesInBackup,omitempty\"` // Paused specifies whether the schedule is paused or not // +optional Paused bool `json:\"paused,omitempty\"` // SkipImmediately specifies whether to skip backup if schedule is due immediately from `Schedule.Status.LastBackup` timestamp when schedule is unpaused or if schedule is new. // If true, backup will be skipped immediately when schedule is unpaused if it is due based on .Status.LastBackupTimestamp or schedule is new, and will run at next schedule time. // If false, backup will not be skipped immediately when schedule is unpaused, but will run at next schedule time. // If empty, will follow server configuration (default: false). // +optional SkipImmediately bool `json:\"skipImmediately,omitempty\"` } ``` `LastSkipped` will be added to `ScheduleStatus` struct to track the last time a schedule was skipped. ```diff // ScheduleStatus captures the current state of a Velero schedule type ScheduleStatus struct { // Phase is the current phase of the Schedule // +optional Phase SchedulePhase `json:\"phase,omitempty\"` // LastBackup is the last time a Backup was run for this // Schedule schedule // +optional // +nullable LastBackup *metav1.Time `json:\"lastBackup,omitempty\"` // LastSkipped is the last time a Schedule was skipped // +optional // +nullable LastSkipped *metav1.Time `json:\"lastSkipped,omitempty\"` // ValidationErrors is a slice of all validation errors (if // applicable) // +optional ValidationErrors []string `json:\"validationErrors,omitempty\"` } ``` When `schedule.spec.SkipImmediately` is `true`, `LastSkipped` will be set to the current time, and `schedule.spec.SkipImmediately` set to nil so it can be used again. The `getNextRunTime()` function below is updated so `LastSkipped` which is after `LastBackup` will be used to determine next run time. ```go func getNextRunTime(schedule *velerov1.Schedule, cronSchedule cron.Schedule, asOf time.Time) (bool, time.Time) { var lastBackupTime time.Time if schedule.Status.LastBackup != nil { lastBackupTime = schedule.Status.LastBackup.Time } else { lastBackupTime = schedule.CreationTimestamp.Time } if schedule.Status.LastSkipped != nil && schedule.Status.LastSkipped.After(lastBackupTime) { lastBackupTime = schedule.Status.LastSkipped.Time } nextRunTime := cronSchedule.Next(lastBackupTime) return asOf.After(nextRunTime), nextRunTime } ``` When schedule is unpaused, and `Schedule.Status.LastBackup` is not nil, if `Schedule.Status.LastSkipped` is recent, a backup will not be created. When schedule is unpaused or created with `Schedule.Status.LastBackup` set to nil or schedule is newly created, normally a backup will be created immediately. If `Schedule.Status.LastSkipped` is recent, a backup will not be created. Backup will be run at the next cron schedule based on LastBackup or LastSkipped whichever is more recent. N/A None Upon upgrade, the new field will be added to the schedule spec automatically and will default to the prior behavior of running a backup when schedule is unpaused if it is due based on .Status.LastBackup or schedule is new. Since this is a new field, it will be ignored by older versions of velero. TBD N/A"
}
] |
{
"category": "Runtime",
"file_name": "schedule-skip-immediately-config_design.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
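The row above includes Velero's `getNextRunTime()` and explains that a recent `LastSkipped` timestamp defers the next run. The sketch below is a self-contained illustration of that rule, not Velero code: it substitutes a fixed one-hour interval for a real cron parser, and the type and variable names are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// schedule mirrors just the fields the design above cares about.
type schedule struct {
	CreationTimestamp time.Time
	LastBackup        *time.Time // Schedule.Status.LastBackup
	LastSkipped       *time.Time // Schedule.Status.LastSkipped
}

// nextRunTime follows the same rule as the design's getNextRunTime():
// the more recent of LastBackup and LastSkipped anchors the next run.
// A fixed interval stands in for a parsed cron expression.
func nextRunTime(s schedule, interval time.Duration, asOf time.Time) (bool, time.Time) {
	last := s.CreationTimestamp
	if s.LastBackup != nil {
		last = *s.LastBackup
	}
	if s.LastSkipped != nil && s.LastSkipped.After(last) {
		last = *s.LastSkipped
	}
	next := last.Add(interval)
	return asOf.After(next), next
}

func main() {
	created := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
	now := created.Add(90 * time.Minute)

	// Without LastSkipped the backup is already due.
	due, next := nextRunTime(schedule{CreationTimestamp: created}, time.Hour, now)
	fmt.Println(due, next) // true 2024-01-01 01:00:00 +0000 UTC

	// Marking the schedule as skipped "now" defers the run by one interval.
	skipped := now
	due, next = nextRunTime(schedule{CreationTimestamp: created, LastSkipped: &skipped}, time.Hour, now)
	fmt.Println(due, next) // false 2024-01-01 02:30:00 +0000 UTC
}
```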
[
{
"data": "Longhorn will leave the failed backups behind and will not delete the backups automatically either until the backup target is removed. Failed backup cleanup will be occurred when making a backup to remote backup target failed. This LEP will trigger the deletion of failed backups automatically. Support the auto-deletion of failed backups that exceeded the TTL. Support the global auto-deletion option of failed backups cleanup for users. The process should not be stuck in the reconciliation of the controllers. Clean up unknown files or directories on the remote backup target. The `backupvolumecontroller` will be responsible for deleting Backup CR when there is a backup which state is in `Error` or `Unknown`. The reconciliation procedure of the `backupvolumecontroller` gets the latest failed backups from the datastore and delete the failed backups. ```text queue ... syncHandler() | reconcile() | get failed backups | | then delete them ``` When a user or recurring job tries to make a backup and store it in the remote backup target, many situations will cause the backup procedure failed. In some cases, there will be some failed backups still staying in the Longhorn system and this kind of backups are not handled by the Longhorn system until user removes the backup target. Or users can manage the failed backups via Longhorn GUI or command line tools manually. After the enhancement, Longhorn can delete the failed backups automatically after enabling auto-deletion. Via Longhorn GUI Users can be aware of that backup was failed if auto-deletion is disabled. Users can check the event log to understand why the backup failed and deleted. Via `kubectl` Users can list the failed backups by `kubectl -n longhorn-system get backups` if auto-deletion is disabled. Settings Add setting `failed-backup-ttl`. Default value is `1440` minutes and set to `0` to disable the auto-deletion. Failed Backup Backups in the state `longhorn.BackupStateError` or `longhorn.BackupStateUnknown`. Backup Controller Start the monitor and sync the backup status with the monitor in each reconcile loop. Update the backup status. Trigger `backupvolumecontroller` to delete the failed backups. Backup Volume controller Reconcile loop usually is triggered after backupstore polling which is controlled by Backupstore Poll Interval setting. Start to get all backups in each reconcile loop Tell failed backups from all backups and try to delete failed backups by default. Update the backup volume CR status. Integration tests `backups` CRs with `Error` or `Unknown` state will be removed by `backupvolumecontroller` triggered by backupstore polling when the `backup_monitor` detects the backup failed. `backups` CRs with `Error` or `Unknown` state will not be handled if the auto-deletion is disabled. We already have the backup CR to handle the backup resources and failed backup is not like orphaned replica which is not owned by any volume at the beginning. Cascading deletion of orphaned CR and backup CR would be more complicated than we just handle the failed backups immediately when backup procedure failed. Both in this LEP or orphan framework we would delete the failed backups by `backupvolumecontroller`. Listing orphaned backups and failed backups on both two UI pages `Orphaned Data` and `Backup` might be a bit confusing for users. Deleting items manually on either of two pages would be involved in what it mentioned at statement 2."
}
] |
{
"category": "Runtime",
"file_name": "20220801-failed-backups-cleanup.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
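The proposal above deletes backups whose state is `Error` or `Unknown` once they exceed the `failed-backup-ttl` setting. Below is a minimal, self-contained sketch of that selection rule for illustration only; it is not Longhorn code, and the type names are hypothetical stand-ins for the Backup CR fields.

```go
package main

import (
	"fmt"
	"time"
)

// backup is a stand-in for the Backup CR fields the LEP above relies on.
type backup struct {
	Name      string
	State     string // "Completed", "Error", "Unknown", ...
	Timestamp time.Time
}

// failedBackupsToDelete applies the rule from the proposal: a backup is a
// cleanup candidate when its state is Error or Unknown and it is older than
// the failed-backup-ttl setting; ttl == 0 disables auto-deletion.
func failedBackupsToDelete(backups []backup, ttl time.Duration, now time.Time) []backup {
	if ttl == 0 {
		return nil
	}
	var out []backup
	for _, b := range backups {
		if (b.State == "Error" || b.State == "Unknown") && now.Sub(b.Timestamp) > ttl {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	now := time.Now()
	backups := []backup{
		{Name: "backup-ok", State: "Completed", Timestamp: now.Add(-2 * time.Hour)},
		{Name: "backup-old-error", State: "Error", Timestamp: now.Add(-25 * time.Hour)},
		{Name: "backup-new-error", State: "Error", Timestamp: now.Add(-10 * time.Minute)},
	}
	// 1440 minutes is the proposal's default TTL.
	for _, b := range failedBackupsToDelete(backups, 1440*time.Minute, now) {
		fmt.Println("would delete:", b.Name)
	}
}
```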
[
{
"data": "(cluster-member-config)= Each cluster member has its own key/value configuration with the following supported namespaces: `user` (free form key/value for user metadata) `scheduler` (options related to how the member is automatically targeted by the cluster) The following keys are currently supported: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group cluster-cluster start --> :end-before: <!-- config group cluster-cluster end --> ```"
}
] |
{
"category": "Runtime",
"file_name": "cluster_member_config.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
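The row above describes the `user` and `scheduler` namespaces for per-member configuration. A short usage sketch with the `lxc` CLI follows; the member name `server1` is a placeholder, and `scheduler.instance` is shown as a typical key from the scheduler namespace (confirm the exact keys against the included config table).

```bash
# Hypothetical member name "server1"; scheduler.* and user.* are the two
# namespaces described above.
lxc cluster set server1 scheduler.instance manual   # exclude member from automatic targeting
lxc cluster get server1 scheduler.instance
lxc cluster set server1 user.rack A12               # free-form user metadata
lxc cluster unset server1 user.rack
```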
[
{
"data": "title: Quickstart Welcome to Rook! We hope you have a great experience installing the Rook cloud-native storage orchestrator platform to enable highly available, durable Ceph storage in Kubernetes clusters. Don't hesitate to ask questions in our . Sign up for the Rook Slack . This guide will walk through the basic setup of a Ceph cluster and enable K8s applications to consume block, object, and file storage. Always use a virtual machine when testing Rook. Never use a host system where local devices may mistakenly be consumed. Kubernetes versions v1.25 through v1.30 are supported. Architectures released are `amd64 / x86_64` and `arm64`. To check if a Kubernetes cluster is ready for `Rook`, see the . To configure the Ceph storage cluster, at least one of these local storage options are required: Raw devices (no partitions or formatted filesystem) Raw partitions (no formatted filesystem) LVM Logical Volumes (no formatted filesystem) Encrypted devices (no formatted filesystem) Multipath devices (no formatted filesystem) Persistent Volumes available from a storage class in `block` mode A simple Rook cluster is created for Kubernetes with the following `kubectl` commands and . ```console $ git clone --single-branch --branch master https://github.com/rook/rook.git cd rook/deploy/examples kubectl create -f crds.yaml -f common.yaml -f operator.yaml kubectl create -f cluster.yaml ``` After the cluster is running, applications can consume storage. The first step is to deploy the Rook operator. !!! important The is available to deploy the operator instead of creating the below manifests. !!! note Check that the are from a tagged release of Rook. !!! note These steps are for a standard production Rook deployment in Kubernetes. For Openshift, testing, or more options, see the . ```console cd deploy/examples kubectl create -f crds.yaml -f common.yaml -f operator.yaml kubectl -n rook-ceph get pod ``` Before starting the operator in production, consider these settings: Some Rook features are disabled by default. See the for these and other advanced settings. Device discovery: Rook will watch for new devices to configure if the `ROOKENABLEDISCOVERY_DAEMON` setting is enabled, commonly used in bare metal clusters. Node affinity and tolerations: The CSI driver by default will run on any node in the cluster. To restrict the CSI driver affinity, several settings are available. If deploying Rook into a namespace other than the default `rook-ceph`, see the topic on . The Rook documentation is focused around starting Rook in a variety of environments. While creating the cluster in this guide, consider these example cluster manifests: : Cluster settings for a production cluster running on bare metal. Requires at least three worker nodes. : Cluster settings for a production cluster running in a dynamic cloud environment. : Cluster settings for a test environment such as minikube. See the for more details. Now that the Rook operator is running we can create the Ceph cluster. !!! important The is available to deploy the operator instead of creating the below manifests. !!! important For the cluster to survive reboots, set the `dataDirHostPath` property that is valid for the"
},
{
"data": "For more settings, see the documentation on . Create the cluster: ```console kubectl create -f cluster.yaml ``` Verify the cluster is running by viewing the pods in the `rook-ceph` namespace. The number of osd pods will depend on the number of nodes in the cluster and the number of devices configured. For the default `cluster.yaml` above, one OSD will be created for each available device found on each node. !!! hint If the `rook-ceph-mon`, `rook-ceph-mgr`, or `rook-ceph-osd` pods are not created, please refer to the for more details and potential solutions. ```console $ kubectl -n rook-ceph get pod NAME READY STATUS RESTARTS AGE csi-cephfsplugin-provisioner-d77bb49c6-n5tgs 5/5 Running 0 140s csi-cephfsplugin-provisioner-d77bb49c6-v9rvn 5/5 Running 0 140s csi-cephfsplugin-rthrp 3/3 Running 0 140s csi-rbdplugin-hbsm7 3/3 Running 0 140s csi-rbdplugin-provisioner-5b5cd64fd-nvk6c 6/6 Running 0 140s csi-rbdplugin-provisioner-5b5cd64fd-q7bxl 6/6 Running 0 140s rook-ceph-crashcollector-minikube-5b57b7c5d4-hfldl 1/1 Running 0 105s rook-ceph-mgr-a-64cd7cdf54-j8b5p 2/2 Running 0 77s rook-ceph-mgr-b-657d54fc89-2xxw7 2/2 Running 0 56s rook-ceph-mon-a-694bb7987d-fp9w7 1/1 Running 0 105s rook-ceph-mon-b-856fdd5cb9-5h2qk 1/1 Running 0 94s rook-ceph-mon-c-57545897fc-j576h 1/1 Running 0 85s rook-ceph-operator-85f5b946bd-s8grz 1/1 Running 0 92m rook-ceph-osd-0-6bb747b6c5-lnvb6 1/1 Running 0 23s rook-ceph-osd-1-7f67f9646d-44p7v 1/1 Running 0 24s rook-ceph-osd-2-6cd4b776ff-v4d68 1/1 Running 0 25s rook-ceph-osd-prepare-node1-vx2rz 0/2 Completed 0 60s rook-ceph-osd-prepare-node2-ab3fd 0/2 Completed 0 60s rook-ceph-osd-prepare-node3-w4xyz 0/2 Completed 0 60s ``` To verify that the cluster is in a healthy state, connect to the and run the `ceph status` command. All mons should be in quorum A mgr should be active At least three OSDs should be `up` and `in` If the health is not `HEALTH_OK`, the warnings or errors should be investigated ```console $ ceph status cluster: id: a0452c76-30d9-4c1a-a948-5d8405f19a7c health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 3m) mgr:a(active, since 2m), standbys: b osd: 3 osds: 3 up (since 1m), 3 in (since 1m) []...] ``` !!! hint If the cluster is not healthy, please refer to the for potential solutions. For a walkthrough of the three types of storage exposed by Rook, see the guides for: *: Create block storage to be consumed by a pod (RWO) *: Create a filesystem to be shared across multiple pods (RWX) *: Create an object store that is accessible with an S3 endpoint inside or outside the Kubernetes cluster Ceph has a dashboard to view the status of the cluster. See the . Create a toolbox pod for full access to a ceph admin client for debugging and troubleshooting the Rook cluster. See the for setup and usage information. The provides commands to view status and troubleshoot issues. See the document for helpful maintenance and tuning examples. Each Rook cluster has built-in metrics collectors/exporters for monitoring with Prometheus. To configure monitoring, see the . The Rook maintainers would like to receive telemetry reports for Rook clusters. The data is anonymous and does not include any identifying information. Enable the telemetry reporting feature with the following command in the toolbox: ```console ceph telemetry on ``` For more details on what is reported and how your privacy is protected, see the . When finished with the test cluster, see ."
}
] |
{
"category": "Runtime",
"file_name": "quickstart.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Id | Pointer to string | | [optional] DesiredRam | Pointer to int64 | desired memory zone size in bytes | [optional] `func NewVmResizeZone() *VmResizeZone` NewVmResizeZone instantiates a new VmResizeZone object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVmResizeZoneWithDefaults() *VmResizeZone` NewVmResizeZoneWithDefaults instantiates a new VmResizeZone object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VmResizeZone) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o VmResizeZone) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmResizeZone) SetId(v string)` SetId sets Id field to given value. `func (o *VmResizeZone) HasId() bool` HasId returns a boolean if a field has been set. `func (o *VmResizeZone) GetDesiredRam() int64` GetDesiredRam returns the DesiredRam field if non-nil, zero value otherwise. `func (o VmResizeZone) GetDesiredRamOk() (int64, bool)` GetDesiredRamOk returns a tuple with the DesiredRam field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmResizeZone) SetDesiredRam(v int64)` SetDesiredRam sets DesiredRam field to given value. `func (o *VmResizeZone) HasDesiredRam() bool` HasDesiredRam returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "VmResizeZone.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
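The accessors documented above follow the usual pattern of OpenAPI-generated Go clients, where optional fields are pointers and the Get/Has/Set methods wrap the nil checks. The snippet below is a self-contained re-implementation of that pattern for illustration only; it is not the generated package itself.

```go
package main

import "fmt"

// VmResizeZone mirrors the documented shape: optional fields are pointers.
type VmResizeZone struct {
	Id         *string
	DesiredRam *int64
}

func NewVmResizeZone() *VmResizeZone { return &VmResizeZone{} }

// GetDesiredRamOk returns the value and whether it has been set,
// matching the documented GetDesiredRamOk contract.
func (o *VmResizeZone) GetDesiredRamOk() (*int64, bool) {
	if o == nil || o.DesiredRam == nil {
		return nil, false
	}
	return o.DesiredRam, true
}

func (o *VmResizeZone) SetDesiredRam(v int64) { o.DesiredRam = &v }
func (o *VmResizeZone) HasId() bool           { return o != nil && o.Id != nil }

func main() {
	zone := NewVmResizeZone()
	zone.SetDesiredRam(2 * 1024 * 1024 * 1024) // desired memory zone size in bytes

	if ram, ok := zone.GetDesiredRamOk(); ok {
		fmt.Println("resize zone to", *ram, "bytes; id set:", zone.HasId())
	}
}
```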
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Run cilium-operator-aws ``` cilium-operator-aws [flags] ``` ``` --auto-create-cilium-pod-ip-pools map Automatically create CiliumPodIPPool resources on startup. Specify pools in the form of <pool>=ipv4-cidrs:<cidr>,[<cidr>...];ipv4-mask-size:<size> (multiple pools can also be passed by repeating the CLI flag) --aws-enable-prefix-delegation Allows operator to allocate prefixes to ENIs instead of individual IP addresses --aws-instance-limit-mapping map Add or overwrite mappings of AWS instance limit in the form of {\"AWS instance type\": \"Maximum Network Interfaces\",\"IPv4 Addresses per Interface\",\"IPv6 Addresses per Interface\"}. cli example: --aws-instance-limit-mapping=a1.medium=2,4,4 --aws-instance-limit-mapping=a2.somecustomflavor=4,5,6 configmap example: {\"a1.medium\": \"2,4,4\", \"a2.somecustomflavor\": \"4,5,6\"} --aws-release-excess-ips Enable releasing excess free IP addresses from AWS ENI. --aws-use-primary-address Allows for using primary address of the ENI for allocations on the node --bgp-announce-lb-ip Announces service IPs of type LoadBalancer via BGP --bgp-config-path string Path to file containing the BGP configuration (default \"/var/lib/cilium/bgp/config.yaml\") --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cilium-endpoint-gc-interval duration GC interval for cilium endpoints (default 5m0s) --cilium-pod-labels string Cilium Pod's labels. Used to detect if a Cilium pod is running to remove the node taints where its running and set NetworkUnavailable to false (default \"k8s-app=cilium\") --cilium-pod-namespace string Name of the Kubernetes namespace in which Cilium is deployed in. Defaults to the same namespace defined in k8s-namespace --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --cluster-pool-ipv4-cidr strings IPv4 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' --cluster-pool-ipv4-mask-size int Mask size for each IPv4 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' (default 24) --cluster-pool-ipv6-cidr strings IPv6 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' --cluster-pool-ipv6-mask-size int Mask size for each IPv6 podCIDR per node. 
Requires 'ipam=cluster-pool' and 'enable-ipv6=true' (default 112) --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster"
},
{
"data": "Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --cnp-status-cleanup-burst int Maximum burst of requests to clean up status nodes updates in CNPs (default 20) --cnp-status-cleanup-qps float Rate used for limiting the clean up of the status nodes updates in CNP, expressed as qps (default 10) --config string Configuration file (default \"$HOME/ciliumd.yaml\") --config-dir string Configuration directory that contains a file for each option --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. -D, --debug Enable debugging mode --ec2-api-endpoint string AWS API endpoint for the EC2 service --enable-cilium-endpoint-slice If set to true, the CiliumEndpointSlice feature is enabled. If any CiliumEndpoints resources are created, updated, or deleted in the cluster, all those changes are broadcast as CiliumEndpointSlice updates to all of the Cilium agents. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-ipv4 Enable IPv4 support (default true) --enable-ipv6 Enable IPv6 support (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-metrics Enable Prometheus metrics --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --eni-gc-interval duration Interval for garbage collection of unattached ENIs. Set to 0 to disable (default 5m0s) --eni-gc-tags map Additional tags attached to ENIs created by"
},
{
"data": "Dangling ENIs with this tag will be garbage collected --eni-tags map ENI tags in the form of k1=v1 (multiple k/v pairs can be passed by repeating the CLI flag) --excess-ip-release-delay int Number of seconds operator would wait before it releases an IP previously marked as excess (default 180) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for cilium-operator-aws --identity-allocation-mode string Method to use for identity allocation (default \"kvstore\") --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --instance-tags-filter map EC2 Instance tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --ipam string Backend to use for IPAM (default \"eni\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-namespace string Name of the Kubernetes namespace in which Cilium Operator is deployed in --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --kvstore string Key-value store type --kvstore-opt map Key-value store options e.g."
},
{
"data": "--leader-election-lease-duration duration Duration that non-leader operator candidates will wait before forcing to acquire leadership (default 15s) --leader-election-renew-deadline duration Duration that current acting master will retry refreshing leadership in before giving up the lock (default 10s) --leader-election-retry-period duration Duration that LeaderElector clients should wait between retries of the actions (default 2s) --limit-ipam-api-burst int Upper burst limit when accessing external APIs (default 20) --limit-ipam-api-qps float Queries per second limit when accessing external IPAM APIs (default 4) --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-operator, configmap example for syslog driver: {\"syslog.level\":\"info\",\"syslog.facility\":\"local4\"} --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --nodes-gc-interval duration GC interval for CiliumNodes (default 5m0s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --parallel-alloc-workers int Maximum number of parallel IPAM workers (default 50) --pod-restart-selector string cilium-operator will delete/restart any pods with these labels if the pod is not managed by Cilium. 
If this option is empty, then all pods may be restarted (default \"k8s-app=kube-dns\") --remove-cilium-node-taints Remove node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes once Cilium is up and running (default true) --set-cilium-is-up-condition Set CiliumIsUp Node condition to mark a Kubernetes Node that a Cilium pod is up and running in that node (default true) --set-cilium-node-taints Set node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes if Cilium is scheduled but not up and running --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created --subnet-ids-filter strings Subnets IDs (separated by commas) --subnet-tags-filter map Subnets tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --synchronize-k8s-nodes Synchronize Kubernetes nodes to kvstore and perform CNP GC (default true) --synchronize-k8s-services Synchronize Kubernetes services to kvstore (default true) --unmanaged-pod-watcher-interval int Interval to check for unmanaged kube-dns pods (0 to disable) (default 15) --update-ec2-adapter-limit-via-api Use the EC2 API to update the instance type to adapter limits (default true) --version Print version information ``` - Generate the autocompletion script for the specified shell - Inspect the hive - Access metric status of the operator - Display status of operator - Run troubleshooting utilities to check control-plane connectivity"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-aws.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "name: Feature request about: Suggest an idea for this project. title: \"[FEATURE] \" labels: '' assignees: '' Yes/No All fs objects: Total space: Free space: RAM used: last metadata save duration:"
}
] |
{
"category": "Runtime",
"file_name": "feature_request.md",
"project_name": "MooseFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Example Configurations Configuration for Rook and Ceph can be configured in multiple ways to provide block devices, shared filesystem volumes or object storage in a kubernetes namespace. While several examples are provided to simplify storage setup, settings are available to optimize various production environments. See the folder for all the rook/ceph setup example spec files. The first step to deploy Rook is to create the CRDs and other common resources. The configuration for these resources will be the same for most deployments. The and sets these resources up. ```console kubectl create -f crds.yaml -f common.yaml ``` The examples all assume the operator and all Ceph daemons will be started in the same namespace. If deploying the operator in a separate namespace, see the comments throughout `common.yaml`. After the common resources are created, the next step is to create the Operator deployment. Several spec file examples are provided in : : The most common settings for production deployments `kubectl create -f operator.yaml` : Includes all of the operator settings for running a basic Rook cluster in an OpenShift environment. You will also want to review the to confirm the settings. `oc create -f operator-openshift.yaml` Settings for the operator are configured through environment variables on the operator deployment. The individual settings are documented in . Now that the operator is running, create the Ceph storage cluster with the CephCluster CR. This CR contains the most critical settings that will influence how the operator configures the storage. It is important to understand the various ways to configure the cluster. These examples represent several different ways to configure the storage. : Common settings for a production storage cluster. Requires at least three worker nodes. : Settings for a test cluster where redundancy is not configured. Requires only a single node. : Common settings for backing the Ceph Mons and OSDs by PVs. Useful when running in cloud environments or where local PVs have been created for Ceph to consume. : Connect to an with minimal access to monitor the health of the cluster and connect to the storage. : Connect to an with the admin key of the external cluster to enable remote creation of pools and configure services such as an or a . : Create a cluster in \"stretched\" mode, with five mons stretched across three zones, and the OSDs across two zones. See the . See the topic for more details and more examples for the settings. Now we are ready to setup Block, Shared Filesystem or Object storage in the Rook cluster. These storage types are respectively created with the CephBlockPool, CephFilesystem and CephObjectStore CRs. Ceph provides raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in application"
},
{
"data": "The storage class is defined with a Ceph pool which defines the level of data redundancy in Ceph: : This example illustrates replication of 3 for production scenarios and requires at least three worker nodes. Data is replicated on three different kubernetes worker nodes. Intermittent or long-lasting single node failures will not result in data unavailability or loss. : Configures erasure coding for data durability rather than replication. Ceph's erasure coding is more efficient than replication so you can get high reliability without the 3x replication cost of the preceding example (but at the cost of higher computational encoding and decoding costs on the worker nodes). Erasure coding requires at least three worker nodes. See the documentation. : Replication of 1 for test scenarios. Requires only a single node. Do not use this for production applications. A single node failure can result in full data loss. The block storage classes are found in the examples directory: `csi/rbd`: the CSI driver examples for block devices See the topic for more block storage settings. Ceph filesystem (CephFS) allows the user to mount a shared posix-compliant folder into one or more application pods. This storage is similar to NFS shared storage or CIFS shared folders, as explained . Shared Filesystem storage contains configurable pools for different scenarios: : Replication of 3 for production scenarios. Requires at least three worker nodes. : Erasure coding for production scenarios. Requires at least three worker nodes. : Replication of 1 for test scenarios. Requires only a single node. Dynamic provisioning is possible with the CSI driver. The storage class for shared filesystems is found in the directory. See the topic for more details on the settings. Ceph supports storing blobs of data called objects that support HTTP(s)-type get/put/post and delete semantics. This storage is similar to AWS S3 storage, for example. Object storage contains multiple pools that can be configured for different scenarios: : Replication of 3 for production scenarios. Requires at least three worker nodes. : Replication of 3 with rgw in a port range valid for OpenShift. Requires at least three worker nodes. : Erasure coding rather than replication for production scenarios. Requires at least three worker nodes. : Replication of 1 for test scenarios. Requires only a single node. See the topic for more details on the settings. : Creates a simple object storage user and generates credentials for the S3 API The Ceph operator also runs an object store bucket provisioner which can grant access to existing buckets or dynamically provision new buckets. Creates a request for a new bucket by referencing a StorageClass which saves the bucket when the initiating OBC is deleted. Creates a request for a new bucket by referencing a StorageClass which deletes the bucket when the initiating OBC is deleted. Creates a new StorageClass which defines the Ceph Object Store and retains the bucket after the initiating OBC is deleted. Creates a new StorageClass which defines the Ceph Object Store and deletes the bucket after the initiating OBC is deleted."
}
] |
{
"category": "Runtime",
"file_name": "example-configurations.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
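To make the block-storage flow described above concrete, here is a minimal PersistentVolumeClaim that consumes a StorageClass created from one of the CephBlockPool examples. The class name `rook-ceph-block` is assumed from the common CSI RBD examples and may differ in your deployment.

```yaml
# Minimal PVC consuming a Rook/Ceph RBD StorageClass; adjust
# storageClassName to match the class created from the csi/rbd examples.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce          # block storage is consumed RWO, as noted above
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
```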
[
{
"data": "Get information of the specified data partition. ```bash cfs-cli datapartition info [Partition ID] ``` Decommission the specified data partition on the target node and automatically transfer it to other available nodes. ```bash cfs-cli datapartition decommission [Address] [Partition ID] ``` Add a new data partition on the target node. ```bash cfs-cli datapartition add-replica [Address] [Partition ID] ``` Delete the data partition on the target node. ```bash cfs-cli datapartition del-replica [Address] [Partition ID] ``` Fault diagnosis, find data partitions that are mostly unavailable and missing. ```bash cfs-cli datapartition check ``` ```bash cfs-cli datapartition set-discard [DATA PARTITION ID] [DISCARD] ```"
}
] |
{
"category": "Runtime",
"file_name": "datapartition.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Fixes build break for plan9, nacl, solaris This new release introduces: Enhance TextFormatter to not print caller information when they are empty (#944) Remove dependency on golang.org/x/crypto (#932, #943) Fixes: Fix Entry.WithContext method to return a copy of the initial entry (#941) This new release introduces: Add `DeferExitHandler`, similar to `RegisterExitHandler` but prepending the handler to the list of handlers (semantically like `defer`) (#848). Add `CallerPrettyfier` to `JSONFormatter` and `TextFormatter (#909, #911) Add `Entry.WithContext()` and `Entry.Context`, to set a context on entries to be used e.g. in hooks (#919). Fixes: Fix wrong method calls `Logger.Print` and `Logger.Warningln` (#893). Update `Entry.Logf` to not do string formatting unless the log level is enabled (#903) Fix infinite recursion on unknown `Level.String()` (#907) Fix race condition in `getCaller` (#916). This new release introduces: Log, Logf, Logln functions for Logger and Entry that take a Level Fixes: Building prometheus node_exporter on AIX (#840) Race condition in TextFormatter (#468) Travis CI import path (#868) Remove coloured output on Windows (#862) Pointer to func as field in JSONFormatter (#870) Properly marshal Levels (#873) This new release introduces: A new method `SetReportCaller` in the `Logger` to enable the file, line and calling function from which the trace has been issued A new trace level named `Trace` whose level is below `Debug` A configurable exit function to be called upon a Fatal trace The `Level` object now implements `encoding.TextUnmarshaler` interface This is a bug fix"
},
{
"data": "fix the build break on Solaris don't drop a whole trace in JSONFormatter when a field param is a function pointer which can not be serialized This new release introduces: several fixes: a fix for a race condition on entry formatting proper cleanup of previously used entries before putting them back in the pool the extra new line at the end of message in text formatter has been removed a new global public API to check if a level is activated: IsLevelEnabled the following methods have been added to the Logger object IsLevelEnabled SetFormatter SetOutput ReplaceHooks introduction of go module an indent configuration for the json formatter output colour support for windows the field sort function is now configurable for text formatter the CLICOLOR and CLICOLOR\\_FORCE environment variable support in text formater This new release introduces: a new api WithTime which allows to easily force the time of the log entry which is mostly useful for logger wrapper a fix reverting the immutability of the entry given as parameter to the hooks a new configuration field of the json formatter in order to put all the fields in a nested dictionnary a new SetOutput method in the Logger a new configuration of the textformatter to configure the name of the default keys a new configuration of the text formatter to disable the level truncation Fix hooks race (#707) Fix panic deadlock (#695) Fix race when adding hooks (#612) Fix terminal check in AppEngine (#635) Replace example files with testable examples bug: quote non-string values in text formatter (#583) Make (Logger) SetLevel a public method bug: fix escaping in text formatter (#575) Officially changed name to lower-case bug: colors on Windows 10 (#541) bug: fix race in accessing level (#512) feature: add writer and writerlevel to entry (#372) bug: fix undefined variable on solaris (#493) formatter: configure quoting of empty values (#484) formatter: configure quoting character (default is `\"`) (#484) bug: fix not importing io correctly in non-linux environments (#481) bug: fix windows terminal detection (#476) bug: fix tty detection with custom out (#471) performance: Use bufferpool to allocate (#370) terminal: terminal detection for app-engine (#343) feature: exit handler (#375) feature: Add a test hook (#180) feature: `ParseLevel` is now case-insensitive (#326) feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308) performance: avoid re-allocations on `WithFields` (#335) logrus/text_formatter: don't emit empty msg logrus/hooks/airbrake: move out of main repository logrus/hooks/sentry: move out of main repository logrus/hooks/papertrail: move out of main repository logrus/hooks/bugsnag: move out of main repository logrus/core: run tests with `-race` logrus/core: detect TTY based on `stderr` logrus/core: support `WithError` on logger logrus/core: Solaris support logrus/core: fix possible race (#216) logrus/doc: small typo fixes and doc improvements hooks/raven: allow passing an initialized client logrus/core: revert #208 formatter/text: fix data race (#218) logrus/core: fix entry log level (#208) logrus/core: improve performance of text formatter by 40% logrus/core: expose `LevelHooks` type logrus/core: add support for DragonflyBSD and NetBSD formatter/text: print structs more verbosely logrus: fix more Fatal family functions logrus: fix not exiting on `Fatalf` and `Fatalln` logrus: defaults to stderr instead of stdout hooks/sentry: add special field for `http.Request` formatter/text: ignore Windows for colors formatter/\\: 
allow configuration of timestamp layout formatter/text: Add configuration option for time format (#158)"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG.md",
"project_name": "Soda Foundation",
"subcategory": "Cloud Native Storage"
}
|
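Several of the changelog entries above (SetReportCaller, WithContext, Trace level, IsLevelEnabled, JSONFormatter) are easiest to see together in a short program. This is a generic usage sketch of the public logrus API, not code taken from the changelog itself.

```go
package main

import (
	"context"

	log "github.com/sirupsen/logrus"
)

func main() {
	// JSONFormatter and SetReportCaller correspond to features listed above.
	log.SetFormatter(&log.JSONFormatter{})
	log.SetReportCaller(true) // adds file, line and calling function to each entry
	log.SetLevel(log.DebugLevel)

	ctx := context.Background()

	// WithContext attaches a context that hooks can later inspect.
	log.WithContext(ctx).
		WithFields(log.Fields{"component": "example", "attempt": 1}).
		Info("starting up")

	if log.IsLevelEnabled(log.TraceLevel) { // Trace sits below Debug
		log.Trace("this only runs when the trace level is enabled")
	}
}
```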
[
{
"data": "English | Spiderpool can be used as a solution to provide fixed IPs in an Underlay network scenario, and this article will use , , and as examples to build a complete Underlay network solution that exposes the available bridges as node resources for use by the cluster. is a Kubernetes CNI plugin that utilizes Open vSwitch (OVS) to enable network virtualization within a Kubernetes cluster. Make sure a multi-node Kubernetes cluster is ready. has been already installed. Open vSwitch must be installed and running on the host. It could refer to . The following examples are based on Ubuntu 22.04.1. installation may vary depending on the host system. ```bash ~# sudo apt-get install -y openvswitch-switch ~# sudo systemctl start openvswitch-switch ``` If your OS is such as Fedora and CentOS and uses NetworkManager to manage network configurations, you need to configure NetworkManager in the following scenarios: If you are using Underlay mode, the `coordinator` will create veth interfaces on the host. To prevent interference from NetworkManager with the veth interface. It is strongly recommended that you configure NetworkManager. If you create VLAN and Bond interfaces through Ifacer, NetworkManager may interfere with these interfaces, leading to abnormal pod access. It is strongly recommended that you configure NetworkManager. ```shell ~# IFACER_INTERFACE=\"<NAME>\" ~# cat > /etc/NetworkManager/conf.d/spidernet.conf <<EOF [keyfile] unmanaged-devices=interface-name:^veth*;interface-name:${IFACER_INTERFACE} EOF ~# systemctl restart NetworkManager ``` The following is an example of creating and configuring a persistent OVS Bridge. This article takes the `eth0` network card as an example and needs to be executed on each node. If you are using an Ubuntu system, you can refer to this chapter to configure OVS Bridge through `netplan`. Create OVS Bridge ```bash ~# ovs-vsctl add-br br1 ~# ovs-vsctl add-port br1 eth0 ~# ip link set br1 up ``` After creating 12-br1.yaml in the /etc/netplan directory, run `netplan apply` to take effect. To ensure that br1 is still available in scenarios such as restarting the host, please check whether the eth0 network card is also managed by netplan. ```yaml: 12-br1.yaml network: version: 2 renderer: networkd ethernets: br1: addresses: \"<IP address>/<Subnet mask>\" # 172.18.10.10/16 ``` After creation, you can view the following bridge information on each node: ```bash ~# ovs-vsctl show ec16d9e1-6187-4b21-9c2f-8b6cb75434b9 Bridge br1 Port eth0 Interface eth0 Port br1 Interface br1 type: internal Port veth97fb4795 Interface veth97fb4795 ovs_version: \"2.17.3\" ~# ip a show br1 208: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 00:50:56:b4:5f:fd brd ff:ff:ff:ff:ff:ff inet 172.18.10.10/16 brd 172.18.255.255 scope global noprefixroute br1 validlft forever preferredlft forever inet6 fe80::4f28:8ef1:6b82:a9e4/64 scope link noprefixroute validlft forever preferredlft forever ``` If you use OS such as Fedora, Centos, etc., it is recommended to use NetworkManager persistent OVS Bridge. Persisting OVS Bridge through NetworkManager is a more general method that is not limited to operating systems. To use NetworkManager to persist OVS Bridge, you need to install the OVS NetworkManager plug-in. The example is as follows: ```bash ~# sudo dnf install -y NetworkManager-ovs ~# sudo systemctl restart NetworkManager ``` Create ovs bridges, ports and interfaces."
},
{
"data": "~# sudo nmcli con add type ovs-bridge conn.interface br1 con-name br1 ~# sudo nmcli con add type ovs-port conn.interface br1-port master br1 con-name br1-port ~# sudo nmcli con add type ovs-interface slave-type ovs-port conn.interface br1 master br1-port con-name br1-int ``` Create another port on the bridge and select the eth0 NIC in the physical device as its Ethernet interface so that real traffic can flow on the network. ```bash ~# sudo nmcli con add type ovs-port conn.interface ovs-port-eth0 master br1 con-name ovs-port-eth0 ~# sudo nmcli con add type ethernet conn.interface eth0 master ovs-port-eth0 con-name ovs-port-eth0-int ``` Configure and activate the ovs bridge. Configure the bridge by setting a static IP ```bash ~# sudo nmcli con modify br1-int ipv4.method static ipv4.address \"<IP>/<>\" # 172.18.10.10/16 ``` Activate bridge ```bash ~# sudo nmcli con down \"eth0\" ~# sudo nmcli con up ovs-port-eth0-int ~# sudo nmcli con up br1-int ``` After creation, you can view information similar to the following on each node. ```bash ~# nmcli c br1-int dbb1c9be-e1ab-4659-8d4b-564e3f8858fa ovs-interface br1 br1 a85626c1-2392-443b-a767-f86a57a1cff5 ovs-bridge br1 br1-port fe30170f-32d2-489e-9ca3-62c1f5371c6c ovs-port br1-port ovs-port-eth0 a43771a9-d840-4d2d-b1c3-c501a6da80ed ovs-port ovs-port-eth0 ovs-port-eth0-int 1334f49b-dae4-4225-830b-4d101ab6fad6 ethernet eth0 ~# ovs-vsctl show 203dd6d0-45f4-4137-955e-c4c36b9709e6 Bridge br1 Port ovs-port-eth0 Interface eth0 type: system Port br1-port Interface br1 type: internal ovs_version: \"3.2.1\" ~# ip a show br1 208: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 00:50:56:b4:5f:fd brd ff:ff:ff:ff:ff:ff inet 172.18.10.10/16 brd 172.18.255.255 scope global noprefixroute br1 validlft forever preferredlft forever inet6 fe80::4f28:8ef1:6b82:a9e4/64 scope link noprefixroute validlft forever preferredlft forever ``` Install Spiderpool. ```bash helm repo add spiderpool https://spidernet-io.github.io/spiderpool helm repo update spiderpool helm install spiderpool spiderpool/spiderpool --namespace kube-system --set multus.multusCNI.defaultCniCRName=\"ovs-conf\" --set plugins.installOvsCNI=true ``` > If ovs-cni is not installed, you can install it by specifying the Helm parameter `--set plugins.installOvsCNI=true`. > > If you are a mainland user who is not available to access ghcr.io, you can specify the parameter `-set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pulling failures for Spiderpool. > > Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus. 
Please check if `Spidercoordinator.status.phase` is `Synced`: ```shell ~# kubectl get spidercoordinators.spiderpool.spidernet.io default -o yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderCoordinator metadata: finalizers: spiderpool.spidernet.io name: default spec: detectGateway: false detectIPConflict: false hijackCIDR: 169.254.0.0/16 hostRPFilter: 0 hostRuleTable: 500 mode: auto podCIDRType: calico podDefaultRouteNIC: \"\" podMACPrefix: \"\" tunePodRoutes: true status: overlayPodCIDR: 10.244.64.0/18 phase: Synced serviceCIDR: 10.233.0.0/18 ``` At present: Spiderpool prioritizes obtaining the cluster's Pod and Service subnets by querying the kube-system/kubeadm-config ConfigMap. If the kubeadm-config does not exist, causing the failure to obtain the cluster subnet, Spiderpool will attempt to retrieve the cluster Pod and Service subnets from the kube-controller-manager"
},
{
"data": "If the kube-controller-manager component in your cluster runs in systemd mode instead of as a static Pod, Spiderpool still cannot retrieve the cluster's subnet information. If both of the above methods fail, Spiderpool will synchronize the status.phase as NotReady, preventing Pod creation. To address such abnormal situations, we can manually create the kubeadm-config ConfigMap and correctly configure the cluster's subnet information: ```shell export PODSUBNET=<YOURPOD_SUBNET> export SERVICESUBNET=<YOURSERVICE_SUBNET> cat << EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap metadata: name: kubeadm-config namespace: kube-system data: ClusterConfiguration: | networking: podSubnet: ${POD_SUBNET} serviceSubnet: ${SERVICE_SUBNET} EOF ``` Create a SpiderIPPool instance. The Pod will obtain an IP address from the IP pool for underlying network communication, so the subnet of the IP pool needs to correspond to the underlying subnet being accessed. Here is an example of creating a SpiderSubnet instance: ```bash cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: ippool-test spec: ips: \"172.18.30.131-172.18.30.140\" subnet: 172.18.0.0/16 gateway: 172.18.0.1 multusName: kube-system/ovs-conf EOF ``` Verify the installation ```bash ~# kubectl get po -n kube-system |grep spiderpool spiderpool-agent-7hhkz 1/1 Running 0 13m spiderpool-agent-kxf27 1/1 Running 0 13m spiderpool-controller-76798dbb68-xnktr 1/1 Running 0 13m spiderpool-init 0/1 Completed 0 13m ~# kubectl get sp ippool-test NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT ippool-test 4 172.18.0.0/16 0 10 false ~# ``` To simplify writing Multus CNI configuration in JSON format, Spiderpool provides SpiderMultusConfig CR to automatically manage Multus NetworkAttachmentDefinition CR. Here is an example of creating an ovs-cni SpiderMultusConfig configuration: Confirm the bridge name for ovs-cni. Take the host bridge: `br1` as an example: ```shell BRIDGE_NAME=\"br1\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ovs-conf namespace: kube-system spec: cniType: ovs ovs: bridge: \"${BRIDGE_NAME}\" EOF ``` In the following example Yaml, 2 copies of the Deployment are created, of which: `v1.multus-cni.io/default-network`: used to specify Multus' NetworkAttachmentDefinition configuration, which will create a default NIC for the application. 
```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app spec: replicas: 2 selector: matchLabels: app: test-app template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"ippool-test\"] } v1.multus-cni.io/default-network: kube-system/ovs-conf labels: app: test-app spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: labelSelector: matchLabels: app: test-app topologyKey: kubernetes.io/hostname containers: name: test-app image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP EOF ``` SpiderIPPool assigns an IP to the application, and the application's IP will be automatically fixed within this IP range: ```bash ~# kubectl get po -l app=test-app -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-6f8dddd88d-hstg7 1/1 Running 0 3m37s 172.18.30.131 ipv4-worker <none> <none> test-app-6f8dddd88d-rj7sm 1/1 Running 0 3m37s 172.18.30.132 ipv4-control-plane <none> <none> ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT DISABLE ippool-test 4 172.18.0.0/16 2 2 false false ~# kubectl get spiderendpoints NAME INTERFACE IPV4POOL IPV4 IPV6POOL IPV6 NODE test-app-6f8dddd88d-hstg7 eth0 ippool-test 172.18.30.131/16 ipv4-worker test-app-6f8dddd88d-rj7sm eth0 ippool-test 172.18.30.132/16 ipv4-control-plane ``` Testing Pod communication with cross-node Pods: ```shell ~# kubectl exec -ti test-app-6f8dddd88d-hstg7 -- ping 172.18.30.132 -c 2 PING 172.18.30.132 (172.18.30.132): 56 data bytes 64 bytes from 172.18.30.132: seq=0 ttl=64 time=1.882 ms 64 bytes from 172.18.30.132: seq=1 ttl=64 time=0.195 ms --- 172.18.30.132 ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.195/1.038/1.882 ms ```"
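As an optional sanity check (assuming Multus registers its CRD under the usual `k8s.cni.cncf.io` API group, and that you kept the `ovs-conf` name used above), you can confirm that the SpiderMultusConfig created earlier was rendered into a Multus NetworkAttachmentDefinition:

```bash
# Inspect the NetworkAttachmentDefinition generated from the SpiderMultusConfig;
# its spec.config should contain the ovs CNI configuration referencing bridge br1.
~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system ovs-conf -o yaml
```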
}
] |
{
"category": "Runtime",
"file_name": "get-started-ovs.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Inspect the hive ``` cilium-operator-azure hive [flags] ``` ``` --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. 
Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host"
},
{
"data": "--gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for hive --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created ``` - Run cilium-operator-azure - Output the dependencies graph in graphviz dot format"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-azure_hive.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at"
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "| Author | | | | - | | Date | 2021-01-22 | | Email | - | The cni-operator module encapsulates the libcni module, provides a more reasonable and friendly network management interface for generation, and is responsible for loading and updating network configuration files. ````c /* Description: Network management module initialization: Initialize libcni network module and network management layer data;; cache_dir: Network cache configuration file storage directory; conf_path: cni configuration file storage directory; bin_paths: cni plugin storage directory list; binpathslen: directory listing length; Return value: return 0 on success, non-zero on failure */ int cnimanagerstoreinit(const char *cachedir, const char conf_path, const char const *binpaths, sizet binpathslen); /* Description: According to the filtering rules, load the cni configuration file to the memory; store: cni configuration list; res_len: length of cni configuration list; filter_ops: Customize the cni configuration loading rules, and load the configuration files that meet the rules; Return value: return 0 on success, non-zero on failure */ int getnetconflistfromdir(struct cninetworklistconf ***store, sizet *reslen, cniconffiltert filter_ops); /* Description: Create a container loopback network id: container id; netns: container network namespace; Return value: return 0 on success, non-zero on failure */ int attach_loopback(const char id, const char netns); /* Description: delete the container loopback network id: container id; netns: container network namespace; Return value: return 0 on success, non-zero on failure */ int detach_loopback(const char id, const char netns); /* Description: Create a container single network plane; manager: The set of parameters required for container network creation; list: network configuration; result: record necessary network information; Return value: return 0 on success, non-zero on failure */ int attachnetworkplane(const struct cnimanager *manager, const struct cninetworklistconf list, struct cni_opt_result *result); /* Description: delete the container single network plane; manager: The set of parameters required for container network deletion; list: network configuration; result: record necessary network information; Return value: return 0 on success, non-zero on failure */ int detachnetworkplane(const struct cnimanager *manager, const struct cninetworklistconf list, struct cni_opt_result *result); /* Description: Check the status of the single network plane of the container; manager: set of parameters required for container network check; list: network configuration; result: record necessary network information; Return value: return 0 on success, non-zero on failure */ int checknetworkplane(const struct cnimanager *manager, const struct cninetworklistconf list, struct cni_opt_result *result); /* Description: get the CNI version information supported by the plugins required for the single network plane of the container; list: network configuration; resultversionlist: record the CNI version supported by the plugins; Return value: return 0 on success, non-zero on failure */ int versionnetworkplane(const struct cninetworklistconf *list, struct cniresultversionlist resultversionlist); ````"
}
] |
{
"category": "Runtime",
"file_name": "cni_operator_design.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The following sections give answers to frequently asked questions. They explain how to resolve common issues and point you to more detailed information. Most likely, your firewall blocks network access for your instances. See {ref}`network-bridge-firewall` for more information about the problem and how to fix it. Another frequent reason for connectivity issues is running Incus and Docker on the same host. See {ref}`network-incus-docker` for instructions on how to fix such issues. By default, the Incus server is not accessible from the network, because it only listens on a local Unix socket. You can enable it for remote access by following the instructions in {ref}`server-expose`. To be able to access the remote API, clients must authenticate with the Incus server. See {ref}`server-authenticate` for instructions on how to authenticate using a trust token. A privileged container can do things that affect the entire host - for example, it can use things in `/sys` to reset the network card, which will reset it for the entire host, causing network blips. See {ref}`container-security` for more information. Almost everything can be run in an unprivileged container, or - in cases of things that require unusual privileges, like wanting to mount NFS file systems inside the container - you might need to use bind mounts. Yes, you can do this by using a {ref}`disk device <devices-disk>`: incus config device add container-name home disk source=/home/${USER} path=/home/ubuntu For unprivileged containers, you need to make sure that the user in the container has working read/write permissions. Otherwise, all files will show up as the overflow UID/GID (`65536:65536`) and access to anything that's not world-readable will fail. Use either of the following methods to grant the required permissions: Pass `shift=true` to the call. This depends on the kernel and file system supporting either idmapped mounts (see ). Add a `raw.idmap` entry (see ). Place recursive POSIX ACLs on your home directory. Privileged containers do not have this issue because all UID/GID in the container are the same as outside. But that's also the cause of most of the security issues with such privileged containers. To run Docker inside an Incus container, set the {config:option}`instance-security:security.nesting` property of the container to `true`: incus config set <container> security.nesting true Note that Incus containers cannot load kernel modules, so depending on your Docker configuration, you might need to have extra kernel modules loaded by the host. You can do so by setting a comma-separated list of kernel modules that your container needs: incus config set <containername> linux.kernelmodules <modules> In addition, creating a `/.dockerenv` file in your container can help Docker ignore some errors it's getting due to running in a nested"
},
{
"data": "The command stores its configuration under `~/.config/incus`. Various configuration files are stored in that directory, for example: `client.crt`: client certificate (generated on demand) `client.key`: client key (generated on demand) `config.yml`: configuration file (info about `remotes`, `aliases`, etc.) `clientcerts/`: directory with per-remote client certificates `servercerts/`: directory with server certificates belonging to `remotes` Many switches do not allow MAC address changes, and will either drop traffic with an incorrect MAC or disable the port totally. If you can ping an Incus instance from the host, but are not able to ping it from a different host, this could be the cause. The way to diagnose this problem is to run a `tcpdump` on the uplink and you will see either ``ARP Who has `xx.xx.xx.xx` tell `yy.yy.yy.yy` ``, with you sending responses but them not getting acknowledged, or ICMP packets going in and out successfully, but never being received by the other host. (faq-monitor)= To see detailed information about what Incus is doing and what processes it is running, use the command. For example, to show a human-readable output of all types of messages, enter the following command: incus monitor --pretty See for all options, and {doc}`debugging` for more information. Check if your storage pool is out of space (by running ). In that case, Incus cannot finish unpacking the image, and the instance that you're trying to create shows up as stopped. To get more insight into what is happening, run (see {ref}`faq-monitor`), and check `sudo dmesg` for any I/O errors. If starting containers suddenly fails with a cgroup-related error message (`Failed to mount \"/sys/fs/cgroup\"`), this might be due to running a VPN client on the host. This is a known issue for both and , but might occur for other VPN clients as well. The problem is that the VPN client mounts the `net_cls` cgroup1 over cgroup2 (which Incus uses). The easiest fix for this problem is to stop the VPN client and unmount the `net_cls` cgroup1 with the following command: umount /sys/fs/cgroup/net_cls If you need to keep the VPN client running, mount the `net_cls` cgroup1 in another location and reconfigure your VPN client accordingly. See for instructions for Mullvad VPN. When setting the `bridge.mtu` option on an Incus managed bridge network, Incus will create a dummy network interface named `BRIDGE-mtu`. <!-- wokeignore:rule=dummy --> That interface will never be used to carry traffic but it has the requested MTU set to it and is bridged into the network bridge. This has the effect of forcing the bridge to adopt that MTU and avoids issues where the bridge's configured MTU would change as interfaces get added to it."
}
] |
{
"category": "Runtime",
"file_name": "faq.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Exports an exited, single-app pod to an App Container Image (.aci) ``` $ rkt export UUID .aci ``` | Flag | Default | Options | Description | | | | | | | `--overwrite` | `false` | `true` or `false` | Overwrite the output ACI if it exists | See the table with ."
}
] |
{
"category": "Runtime",
"file_name": "export.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Essentially, it is init process inside container. In runc, init process eventually executes the entrypoint of container defined in config.json. In rune, init process never call execve() syscall. Instead, it serves for the communications between Enclave Runtime PAL and the host side through Enclave Runtime PAL API. init-runelet is created by `rune create`, and runelet process on behalf of enclave application is created by `rune exec`. This API defines the function calls between Enclave Runtime PAL and init-runelet. The implementer of Enclave Runtime PAL API, on behalf of Enclave Runtime. The implementer of enclave. Occlum, Graphene-SGX and WAMR (WebAssembly Micro Runtime) are all the so-called Enclave Runtime. The actual running entity inside Enclave Runtime. A new class of container managed by OCI Runtime `rune`."
}
] |
{
"category": "Runtime",
"file_name": "terminology.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "name: Improvement request about: Suggest an improvement of an existing feature title: \"[IMPROVEMENT] \" labels: [\"kind/improvement\", \"require/doc\", \"require/manual-test-plan\", \"require/backport\"] assignees: '' <!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]--> <!--A clear and concise description of what you want to happen.--> <!--A clear and concise description of any alternative solutions or features you've considered.--> <!--Add any other context or screenshots about the feature request here.-->"
}
] |
{
"category": "Runtime",
"file_name": "improvement.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "piadm(8) -- Manage SmartOS Platform Images =========================================== /usr/sbin/piadm [-v | -vv] <command> [command-specific arguments] piadm activate|assign <PI-stamp> [ZFS-pool-name] piadm avail piadm bootable piadm bootable [-dr] <ZFS-pool-name> piadm bootable -e [ -i <source> ] <ZFS-pool-name> piadm install <source> [ZFS-pool-name] piadm list [ZFS-pool-name] piadm remove <PI-stamp> [ZFS-pool-name] piadm update [ZFS-pool-name] Historically, SmartOS booted off of a USB key or a read-only media like CD-ROM. The copy and version of the SmartOS software on one of these media is called a Platform Image. A Platform Image is described in detail in the next section. The piadm(8) utility enables and manages the ability to instead boot directly off of a ZFS pool. piadm(8) manages multiple Platform Images on a bootable ZFS pool, allowing easier updates to Platform Images and maintaining multiple Platform Images on a single boot media. The method and implementation of SmartOS booting does not change vs. a USB key or CD-ROM, but merely uses a bootable ZFS pool as the source of the Platform Image, which can be the traditional SmartOS `zones` pool if it is a bootable pool. A SmartOS Platform Image (PI) is identified by creation timestamp, referred to here as a PI-stamp. One can see it in uname(1): smartos-build(~)[0]% uname -a SunOS smartos-build 5.11 joyent_20200602T173751Z i86pc i386 i86pc smartos-build(~)[0]% The PI-stamp for this system's Platform Image is `20200602T173751Z`. The Platform Image is a directory containing: A directory structure in a format used by loader(7). The SmartOS `unix` kernel The SmartOS boot archive containing kernel modules, libraries, commands, and more. A manifest and hash. A file containing the PI-stamp. The SmartOS loader(7) will find a path to a Platform Image on the bootable ZFS pool, and will load `unix` and then the boot archive. Platform images are supplied by either a gzipped tarball containing the above. Or inside an ISO image file which contains the above AND the boot image as well (see below). In addition to platform images, the loader(7) also has a directory structure containing the loader itself and its support files. These are stamped as well with PI stamps, but are distinct from the contents of a gzipped PI tarball. Often, a PI can use an older Boot Image to boot itself without issue. Occasionally, however, a PI will have Boot Image changes also that need to accompany it. The behavior of loader can be controlled by providing loader.conf.local and/or loader.rc.local files in the ${BOOTPOOL}/boot-${VERSION} directory. Loader config files can also be placed in ${BOOTPOOL}/custom, and will be used by all subsequently installed boot images. See loader.conf(5) and loader(7) for the format of these files. A SmartOS bootable pool (POOL in the examples) contains: A dataset named POOL/boot A `bootfs` pool property set to POOL/boot. At least an MBR on its physical disks for BIOS booting, or if the pool was created with `zpool create -B`, an EFI System Partition (ESP) with loader(7) installed in it. At least one Platform Image in /POOL/boot/platform-<PI-stamp>. At least one Boot Image in /POOL/boot/boot-<PI-stamp>. A /POOL/boot/etc directory that indicates the PI-stamp for the Boot Image. Symbolic links /POOL/boot/platform and /POOL/boot/boot that point to the Platform Image and Boot Image that will be used at the next boot. For example: ```"
},
{
"data": "~]# piadm bootable standalone ==> BIOS and UEFI zones ==> non-bootable [root@smartos ~]# piadm list PI STAMP BOOTABLE FILESYSTEM BOOT IMAGE NOW NEXT 20200714T195617Z standalone/boot next yes yes [root@smartos ~]# ls -l /standalone/boot total 7 lrwxrwxrwx 1 root root 23 Jul 15 04:22 boot -> ./boot-20200714T195617Z drwxr-xr-x 4 root root 15 Jul 15 04:12 boot-20200714T195617Z drwxr-xr-x 3 root root 3 Jul 15 04:22 etc lrwxrwxrwx 1 root root 27 Jul 15 04:22 platform -> ./platform-20200714T195617Z drwxr-xr-x 4 root root 5 Jul 15 04:12 platform-20200714T195617Z [root@smartos ~]# ``` The Triton Cloud Orchestration system is constructed to contain a Head Node (sometimes more than one) and several Compute Nodes. The Compute Nodes use iPXE, an improved Preboot eXecution Environment (PXE) for network booting. Originally Triton Compute Nodes required a USB key which contained iPXE and booted directly into iPXE. piadm(8) can enable a Triton Compute Node's ZFS pool to boot iPXE, obviating the need for a USB key. It detects if a machine is a Triton Compute Node, and enables maintenance of iPXE on the bootable pool. Many piadm(8) subcommands are disabled on a Triton Compute Node. The layout of a Triton Compute Node bootable pool is limited to `boot` and `platform` symbolic links to a populated-with-iPXE `boot-ipxe` directory, and a mostly empty `platform-ipxe` directory. There is an additional platform-STAMP for a backup on-disk PI, in case of emergency. This directory contains an additional in-directory `platform` link to enable its selection as a backup. The piadm(8) command can convert a USB-key-booting Triton Head Node into a ZFS-pool-booting one. It can also transfer boot duties from an existing ZFS pool to a new one. The `piadm list` subcommand will also show Platform Images available to the Head Node, but it is highly recommended that Head Nodes continue to use sdcadm(1) for such information. The piadm(8) command will produce more verbose output if -v is stated prior to the command. If -vv is stated prior to the command, piadm(8) will produce both -v output and enable the shell's -x flag, which produces output of all of the commands run in the piadm(8) script. piadm(8) commands and options are: piadm activate <PI-stamp> [ZFS-pool-name] piadm assign <PI-stamp> [ZFS-pool-name] Activate a Platform Image for the next boot, on a specified ZFS pool if there are more than one bootable pools imported. It is up to the administrator to know which pool the system will actually boot. If a boot image with the specified PI-stamp is unavailable, a warning will be issued but the new PI will be activated anyway. `activate` and `assign` are synonyms, for those used to other distros' `beadm`, or Triton's `sdcadm platform`, respectively. This command is disallowed on Triton Compute Nodes. piadm avail Query the well-known SmartOS PI repository for available ISO images, listed by PI-Stamp. No PI-Stamps older than the currently running PI stamp will be listed. This command is disallowed on Triton Compute Nodes. piadm bootable [-d | -e [-i <source>] | -r] [ZFS-pool-name] Query or upgrade a ZFS pool's bootable status. With no arguments, the status of all imported pools will be queried. -d will disable a pool from being bootable, and -e will enable"
},
{
"data": "If the -i flag specifies an installation source, see below in the `install` subcommand, it will be used. Lack of -i is equivalent to `-i media`. As mentioned earlier, it is up to the administrator to know which pool the system will actually boot. Unlike install, this command will always attempt to install a corresponding boot image as well. The -r flag will refresh a bootable pool's MBR and/or ESP. This is especially useful on mirror or raidz pools that have new devices attached. Some pools can only be bootable on systems configured to boot in legacy BIOS mode, while others can also be bootable from UEFI systems. The `bootable` subcommand will indicate this. For Triton Compute Nodes, the -i option is disallowed. Otherwise, this will enable a Triton Compute Node to boot iPXE from the disk, obviating the need for USB key with iPXE on it. It will also allow boot to a backup PI that is either the currently-running PI, or the Triton default PI if the currently-running one is not available. The iPXE is provided by the Triton Head Node, and if it needs updating, the `sdcadm experimental update-gz-tools` command will update it on the head node. See below for post-bootable iPXE updating on the Triton Compute Node. For Triton Head Nodes, the -i option is also disabled. When invoked with -e on a Head Node, the piadm(8) command will attempt to convert a pool to be bootable for a Triton Head Node. If a Head Node is booting from a USB key, the boot data comes from the USB Key. If a Head Node is booting from another pool, the boot data comes from the current booted pool. After invoking `piadm bootable -e $POOL`, $POOL can boot the Triton Head Node, BUT any pre-reboot operations (regardless if the current Head Node boot comes from USB or an existing bootable pool), will not copy over to the newly-enabled bootable pool. It is therefore recommended that a Head Node reboot to the newly-enabled pool as soon as possible. piadm install <source> [ZFS-pool-name] Installs a new Platform Image into the bootable pool. If the source also contains the boot image (like an ISO does), the Boot Image will also be installed, if available. If there are more than one bootable pools, a pool name will be required. piadm(8) requires a Platform Image source. That source can be: A PI-stamp, which will consult the well-known SmartOS PI repository for an ISO image. This requires network reachability and working name resolution. The word \"latest\", which will consult the well-known SmartOS PI repository for the latest ISO image. This requires network reachability and working name resolution. The word \"media\", which will attempt to find a mountable optical media (CD or DVD) or USB-key with SmartOS on it. The SmartOS installer uses this keyword. An ISO image file path. A PI gzipped tarball file path. NOTE this source does not have a boot image in it. A URL to either one of an ISO image or a gzipped PI tarball. This command is disallowed on Triton Compute Nodes. piadm list [ZFS-pool-name] Lists the available platform images (and boot images) on bootable pools. piadm remove <PI-stamp> [ZFS-pool-name] The opposite of `install`, and only accepts a"
},
{
"data": "If a boot image exists with the specified PI-stamp, it will also be removed unless it is the only boot image available. This command is disallowed on Triton Compute Nodes. piadm update [ZFS-pool-name] This command is exclusive to Triton Compute Nodes. This command updates iPXE and loader (boot) for the specified pool on the Triton Compute Node. If the Triton Compute Node has booted to a different PI than what is currently cached as the bootable backup PI, this command will update the bootable backup PI as well, or attempt to refresh the the Triton default PI. ``` [root@smartos ~]# zpool create -f -B standalone c1t1d0 [root@smartos ~]# piadm bootable standalone ==> non-bootable zones ==> non-bootable [root@smartos ~]# piadm -v bootable -e -i latest standalone Installing PI 20200701T231659Z Platform Image 20200701T231659Z will be loaded on next boot, with a new boot image, boot image 20200701T231659Z [root@smartos ~]# piadm bootable standalone ==> BIOS and UEFI zones ==> non-bootable [root@smartos ~]# piadm list PI STAMP BOOTABLE FILESYSTEM BOOT IMAGE NOW NEXT 20200701T231659Z standalone/boot next no yes [root@smartos ~]# ``` ``` [root@smartos ~]# piadm list PI STAMP BOOTABLE FILESYSTEM BOOT IMAGE NOW NEXT 20200714T195617Z standalone/boot next yes yes [root@smartos ~]# piadm -v install https://example.com/PIs/platform-20200715T192200Z.tgz Installing https://example.com/PIs/platform-20200715T192200Z.tgz (downloaded to /tmp/tmp.Bba0Ac) Installing PI 20200715T192200Z [root@smartos ~]# piadm list PI STAMP BOOTABLE FILESYSTEM BOOT IMAGE NOW NEXT 20200714T195617Z standalone/boot next yes yes 20200715T192200Z standalone/boot none no no [root@smartos ~]# piadm -v activate 20200715T192200Z Platform Image 20200715T192200Z will be loaded on next boot, WARNING: 20200715T192200Z has no matching boot image, using boot image 20200714T195617Z [root@smartos ~]# piadm list PI STAMP BOOTABLE FILESYSTEM BOOT IMAGE NOW NEXT 20200714T195617Z standalone/boot next yes no 20200715T192200Z standalone/boot none no yes [root@smartos ~]# ``` The following exit values are returned: 0 Successful completion. 1 An error occurred, but no change was made 2 A fatal error occurred, and there may be partial change or other residual files or directories. 3 A corrupt environment on what is supposed to be a bootable pool. sdcadm(1), loader.conf(5), loader(7), zpool(8) Many ZFS pool types are not allowed to be bootable. The system's BIOS or UEFI must locate a bootable disk on a bootable pool in order to boot. Future work in illumos will enable more ZFS pool types to be bootable, but for now a ZFS pool should be a single-level-vdev pool, namely one of: Single disk Mirror RaidZ (any parity) SmartOS still loads a ramdisk root with a read-only /usr filesystem, even when booted from a bootable pool. This means a bootable pool that isn't the SmartOS `zones` pool receives relatively few writes unless it is used for some other purpose as well. A bootable pool created without the -B option, but using whole disks, will be BIOS bootable thanks to space for the MBR, but not bootable with UEFI. A hand-partitioned GPT disk may be able to be bootable with both BIOS and UEFI, and can have some of its other GPT parititions used for other purposes. If a bootable pool's boot image or platform image becomes corrupt, even if it's `zones`, a machine can still be booted with a USB stick, CD-ROM, or other method of booting SmartOS. A bootable pool can then be repaired using piadm(8) from the USB stick or CD-ROM."
}
] |
{
"category": "Runtime",
"file_name": "piadm.8.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Status: Approved Velero supports restoring resources into different namespaces than they were backed up from. This enables a user to, among other things, clone a namespace within a cluster. However, if the namespace being cloned uses persistent volume claims, Velero cannot currently create a second copy of the original persistent volume when restoring. This limitation is documented in detail in . This document proposes a solution that allows new copies of persistent volumes to be created during a namespace clone. Enable persistent volumes to be cloned when using `velero restore create --namespace-mappings ...` to create a second copy of a namespace within a cluster. Cloning of persistent volumes in any scenario other than when using `velero restore create --namespace-mappings ...` flag. . (Omitted, see introduction) During a restore, Velero will detect that it needs to assign a new name to a persistent volume being restored if and only if both of the following conditions are met: the persistent volume is claimed by a persistent volume claim in a namespace that's being remapped using `velero restore create --namespace-mappings ...` a persistent volume already exists in the cluster with the original name If these conditions exist, Velero will give the persistent volume a new arbitrary name before restoring it. It will also update the `spec.volumeName` of the related persistent volume claim. In `pkg/restore/restore.go`, around , Velero has special-case code for persistent volumes. This code will be updated to check for the two preconditions described in the previous section. If the preconditions are met, the object will be given a new name. The persistent volume will also be annotated with the original name, e.g. `velero.io/original-pv-name=NAME`. Importantly, the name change will occur before , where Velero checks to see if it should restore the persistent volume. Additionally, the old and new persistent volume names will be recorded in a new field that will be added to the `context` struct, `renamedPVs map[string]string`. In the special-case code for persistent volume claims starting on , Velero will check to see if the claimed persistent volume has been renamed by looking in `ctx.renamedPVs`. If so, Velero will update the persistent volume claim's `spec.volumeName` to the new name. One alternative approach is to add a new CLI flag and API field for restores, e.g. `--clone-pvs`, that a user could provide to indicate they want to create copies of persistent volumes. This approach would work fine, but it does require the user to be aware of this flag/field and to properly specify it when needed. It seems like a better UX to detect the typical conditions where this behavior is needed, and to automatically apply it. Additionally, the design proposed here does not preclude such a flag/field from being added later, if it becomes necessary to cover other use cases. N/A"
}
] |
{
"category": "Runtime",
"file_name": "pv-cloning.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Toolbox The Rook toolbox is a container with common tools used for rook debugging and testing. The toolbox is based on CentOS, so more tools of your choosing can be easily installed with `yum`. The toolbox can be run in two modes: : Start a toolbox pod where you can connect and execute Ceph commands from a shell : Run a script with Ceph commands and collect the results from the job log !!! hint Before running the toolbox you should have a running Rook cluster deployed (see the ). !!! note The toolbox is not necessary if you are using to execute Ceph commands. The rook toolbox can run as a deployment in a Kubernetes cluster where you can connect and run arbitrary Ceph commands. Launch the rook-ceph-tools pod: ```console kubectl create -f deploy/examples/toolbox.yaml ``` Wait for the toolbox pod to download its container and get to the `running` state: ```console kubectl -n rook-ceph rollout status deploy/rook-ceph-tools ``` Once the rook-ceph-tools pod is running, you can connect to it with: ```console kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash ``` All available tools in the toolbox are ready for your troubleshooting needs. Example: `ceph status` `ceph osd status` `ceph df` `rados df` When you are done with the toolbox, you can remove the deployment: ```console kubectl -n rook-ceph delete deploy/rook-ceph-tools ``` If you want to run Ceph commands as a one-time operation and collect the results later from the logs, you can run a script as a Kubernetes Job. The toolbox job will run a script that is embedded in the job spec. The script has the full flexibility of a bash script. In this example, the `ceph status` command is executed when the job is created. Create the toolbox job: ```console kubectl create -f deploy/examples/toolbox-job.yaml ``` After the job completes, see the results of the script: ```console kubectl -n rook-ceph logs -l job-name=rook-ceph-toolbox-job ```"
}
] |
{
"category": "Runtime",
"file_name": "ceph-toolbox.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The Antrea build system relies on Docker to build container images, which can then be used to test Antrea locally. As an Antrea developer, if you run `make`, `docker build` will be invoked to build the `antrea-ubuntu` container image. On Linux, Docker Engine (based on moby) runs natively, but if you use macOS or Windows for Antrea development, Docker needs to run inside a Linux Virtual Machine (VM). This VM is typically managed by [Docker Desktop](https://www.docker.com/products/docker-desktop). Starting January 31 2022, Docker Desktop requires a per user paid subscription for professional use in \"large\" companies (more than 250 employees or more than $10 million in annual revenue). See <https://www.docker.com/pricing/faq> for details. For developers who contribute to Antrea as an employee of such a company (and not in their own individual capacity), it is no longer possible to use Docker Desktop to build (and possibly run) Antrea Docker images locally, unless they have a Docker subscription. For contributors who do not have a Docker subscription, we recommend the following Docker Desktop alternatives. is a UI built with . It supports running a container runtime (docker, containerd or kuberneters) on macOS, inside a Lima VM. Major benefits of Colima include its ability to be used as a drop-in replacement for Docker Desktop and its ability to coexist with Docker Desktop on the same macOS machine. To install and run Colima, follow these steps: `brew install colima` `colima start` to start Colima (the Linux VM) with the default configuration. Check the Colima documentation for configuration options. By default, Colima will use the Docker runtime. This means that you can keep using the `docker` CLI and that no changes are required to build Antrea. we recommend increasing the CPU and memory resources allocated to the VM as by default it only has 2 vCPUs and 2GiB of memory. For example, you can use: `colima start --cpu 4 --memory 8`. Otherwise, building Antrea container images may be slow, and your Kind clusters may run out of memory. `docker context list` and check that the `colima` context is selected. You can use `docker context use desktop-linux` to go back to Docker Desktop. `make` to build Antrea locally. Check that the `antrea-ubuntu` image is available by listing all images with `docker images`. We have validated that Kind clusters with Antrea can run inside Colima without any issue (confirmed for IPv4, IPv6 single-stack clusters, as well as for dual-stack clusters). At any time, you can stop the VM with `colima stop` and restart it with `colima start` (you do not need to specify configuration flags again, unless you want to change the current values). You can also check the status of the VM with `colima ls`. While it should be possible to have multiple Colima instances simultaneously, this is not something that we have tested. Rancher Desktop is another possible alternative to Docker Desktop, which supports Windows in addition to macOS. On macOS, it also uses Lima as the Linux VM. Two major differences with Colima are that Rancher Desktop will always run Kubernetes, and that Rancher Desktop uses the UI for container management instead of `docker`. However, the `nerdctl` and `docker` UIs are supposed to be compatible, so in theory it should be possible to alias `docker` to `nerdctl` and keep using the Antrea build system as is (to be tested)."
}
] |
{
"category": "Runtime",
"file_name": "docker-desktop-alternatives.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Velero 1.2 Sets Sail by Shifting Plugins Out of Tree, Adding a Structural Schema, and Sharpening Usability excerpt: With this release, weve focused on extracting in-tree cloud provider plugins into their own repositories, making further usability improvements to the restic integration, preparing for the general availability of Kubernetes custom resource definitions (CRDs) by adding a structural schema to our CRDs, and many other new features and usability improvements. author_name: Steve Kriss slug: Velero-1.2-Sets-Sail categories: ['velero','release'] image: /img/posts/sailboat.jpg tags: ['Velero Team', 'Steve Kriss'] Velero continues to evolve with the release of version 1.2. With this release, weve focused on extracting in-tree cloud provider plugins into their own repositories, making further usability improvements to the restic integration, preparing for the general availability of Kubernetes custom resource definitions (CRDs) by adding a structural schema to our CRDs, and many other new features and usability improvements. Lets take a look at the highlights for this release. Velero has had built-in support for AWS, Microsoft Azure, and Google Cloud Platform (GCP) since day 1. When Velero moved to a plugin architecture for object store providers and volume snapshotters in version 0.6, the code for these three providers was converted to use the plugin interface provided by this new architecture, but the cloud provider code still remained inside the Velero codebase. This put the AWS, Azure, and GCP plugins in a different position compared with other providers plugins, since they automatically shipped with the Velero binary and could include documentation in-tree. With version 1.2, weve extracted the AWS, Azure, and GCP plugins into their own repositories, one per provider. We now also publish one plugin image per provider. This change brings these providers to parity with other providers plugin implementations, reduces the size of the core Velero binary by not requiring each providers SDK to be included, and opens the door for the plugins to be maintained and released independently of core Velero. Weve continued to work on improving Veleros restic integration. With this release, weve made the following enhancements: Restic backup and restore progress is now captured during execution and visible to the user through the `velero backup/restore describe --details` command. The details are updated every 10 seconds. This provides a new level of visibility into restic operations for users. Restic backups of persistent volume claims (PVCs) now remain incremental across the rescheduling of a pod. Previously, if the pod using a PVC was rescheduled, the next restic backup would require a full rescan of the volumes contents. This improvement potentially makes such backups significantly faster. Read-write-many volumes are no longer backed up once for every pod using the volume, but instead just once per Velero backup. This improvement speeds up backups and prevents potential restore issues due to multiple copies of the backup being processed simultaneously. Before version"
},
{
"data": "you could clone a Kubernetes namespace by backing it up and then restoring it to a different namespace in the same cluster by using the `--namespace-mappings` flag with the `velero restore create` command. However, in this scenario, Velero was unable to clone persistent volumes used by the namespace, leading to errors for users. In version 1.2, Velero automatically detects when you are trying to clone an existing namespace, and clones the persistent volumes used by the namespace as well. This doesnt require the user to specify any additional flags for the `velero restore create` command. This change lets you fully achieve your goal of cloning namespaces using persistent storage within a cluster. To help you secure your important backup data, weve added support for more forms of server-side encryption of backup data on both AWS and GCP. Specifically: On AWS, Velero now supports Amazon S3-managed encryption keys (SSE-S3), which uses AES256 encryption, by specifying `serverSideEncryption: AES256` in a backup storage locations config. On GCP, Velero now supports using a specific Cloud KMS key for server-side encryption by specifying `kmsKeyName: <key name>` in a backup storage locations config. In Kubernetes 1.16, custom resource definitions (CRDs) reached general availability. Structural schemas are required for CRDs created in the `apiextensions.k8s.io/v1` API group. Velero now defines a structural schema for each of its CRDs and automatically applies it the user runs the `velero install` command. The structural schemas enable the user to get quicker feedback when their backup, restore, or schedule request is invalid, so they can immediately remediate their request. There are too many new features and improvements to cover in this short blog post. For full details on all of the changes, see the . Veleros user and contributor community continues to grow, and it is a huge part of this projects success. This release includes many community contributions, including from (GitHub handles listed): Thank you for helping improve the Velero project! If youre going to KubeCon + CloudNativeCon North America 2019 in San Diego, come hang out with us.! The Velero maintainers will all be attending and would love to chat with you. Well be having a Velero community lunch on Wednesday, November 20, at 12:30PM in the convention center. Come to the VMware booth or look for the Velero signs in the lunch area. Check out these talks related to Velero: , by Adnan Abdulhussein and Nolan Brubaker, both from VMware (and core maintainers) , by Annette Clewett and Dylan Murray, both from Red Hat (Dylan is a Velero contributor) Velero is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our and catch up with past meetings on YouTube on the . You can always find the latest project information at . Look for issues on GitHub marked or if you want to roll up your sleeves and write some code with us. You can chat with us on and follow us on Twitter at ."
}
] |
{
"category": "Runtime",
"file_name": "2019-11-07-Velero-1.2-Sets-Sail.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: CephNFS CRD Rook allows exporting NFS shares of a CephFilesystem or CephObjectStore through the CephNFS custom resource definition. ```yaml apiVersion: ceph.rook.io/v1 kind: CephNFS metadata: name: my-nfs namespace: rook-ceph spec: server: active: 1 placement: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: matchExpressions: key: role operator: In values: nfs-node topologySpreadConstraints: tolerations: key: nfs-node operator: Exists podAffinity: podAntiAffinity: annotations: my-annotation: something labels: my-label: something resources: limits: memory: \"8Gi\" requests: cpu: \"3\" memory: \"8Gi\" priorityClassName: \"\" logLevel: NIV_INFO security: kerberos: principalName: \"nfs\" domainName: \"DOMAIN1.EXAMPLE.COM\" configFiles: volumeSource: configMap: name: my-krb5-config-files keytabFile: volumeSource: secret: secretName: my-nfs-keytab defaultMode: 0600 # mode must be 0600 sssd: sidecar: image: registry.access.redhat.com/rhel7/sssd:latest sssdConfigFile: volumeSource: configMap: name: my-nfs-sssd-config defaultMode: 0600 # mode must be 0600 debugLevel: 0 resources: {} ``` The `server` spec sets configuration for Rook-created NFS-Ganesha server pods. `active`: The number of active NFS servers. Rook supports creating more than one active NFS server, but cannot guarantee high availability. For values greater than 1, see the below. `placement`: Kubernetes placement restrictions to apply to NFS server Pod(s). This is similar to placement defined for daemons configured by the . `annotations`: Kubernetes annotations to apply to NFS server Pod(s) `labels`: Kubernetes labels to apply to NFS server Pod(s) `resources`: Kubernetes resource requests and limits to set on NFS server containers `priorityClassName`: Set priority class name for the NFS server Pod(s) `logLevel`: The log level that NFS-Ganesha servers should output.</br> Default value: `NIV_INFO`</br> Supported values: `NIVNULL | NIVFATAL | NIVMAJ | NIVCRIT | NIVWARN | NIVEVENT | NIVINFO | NIVDEBUG | NIVMIDDEBUG | NIVFULLDEBUG | NBLOGLEVEL` `hostNetwork`: Whether host networking is enabled for the NFS server pod(s). If not set, the network settings from the CephCluster CR will be applied. The `security` spec sets security configuration for the NFS cluster. `kerberos`: Kerberos configures NFS-Ganesha to secure NFS client connections with Kerberos. `principalName`: this value is combined with (a) the namespace and name of the CephNFS (with a hyphen between) and (b) the Realm configured in the user-provided kerberos config file(s) to determine the full service principal name: `<principalName>/<namespace>-<name>@<realm>`. e.g., nfs/[email protected]. For full details, see the . `domainName`: this is the domain name used in the kerberos credentials. This is used to configure idmap to map the kerberos credentials to uid/gid. Without this configured, NFS-Ganesha will use the anonuid/anongid configured (default: -2) when accessing the local filesystem. eg., DOMAIN1.EXAMPLE.COM. . `configFiles`: defines where the Kerberos configuration should be sourced from. Config files will be placed into the `/etc/krb5.conf.rook/` directory. For advanced usage, see the . `volumeSource`: this is a standard Kubernetes VolumeSource for Kerberos configuration files like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. The volume may contain multiple files, all of which will be loaded. `keytabFile`: defines where the Kerberos keytab should be sourced from. 
The keytab file will be placed into `/etc/krb5.keytab`. For advanced usage, see the . `volumeSource`: this is a standard Kubernetes VolumeSource for the Kerberos keytab file like what is normally used to configure Volumes for a Pod. For example, a Secret or HostPath. There are two requirements for the source's content: The config file must be mountable via `subPath: krb5.keytab`. For example, in a Secret, the data item must be named `krb5.keytab`, or `items` must be defined to select the key and give it path `krb5.keytab`. A HostPath directory must have the"
},
{
"data": "file. The volume or config file must have mode 0600. `sssd`: SSSD enables integration with System Security Services Daemon (SSSD). See also: . `sidecar`: Specifying this configuration tells Rook to run SSSD in a sidecar alongside the NFS server in each NFS pod. `image`: defines the container image that should be used for the SSSD sidecar. `sssdConfigFile`: defines where the SSSD configuration should be sourced from. The config file will be placed into `/etc/sssd/sssd.conf`. For advanced usage, see the . `volumeSource`: this is a standard Kubernetes like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. There are two requirements for the source's content: The config file must be mountable via `subPath: sssd.conf`. For example, in a ConfigMap, the data item must be named `sssd.conf`, or `items` must be defined to select the key and give it path `sssd.conf`. A HostPath directory must have the `sssd.conf` file. The volume or config file must have mode 0600. `additionalFiles`: adds any number of additional files into the SSSD sidecar. All files will be placed into `/etc/sssd/rook-additional/<subPath>` and can be referenced by the SSSD config file. For example, CA and/or TLS certificates to authenticate with Kerberos. `subPath`: the sub-path of `/etc/sssd/rook-additional` to add files into. This can include `/` to create arbitrarily deep sub-paths if desired. If the `volumeSource` is a file, this will refer to a file name. `volumeSource`: this is a standard Kubernetes VolumeSource for additional files like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. The volume may contain multiple files, a single file, or may be a file on its own (e.g., a host path with `type: File`). `debugLevel`: sets the debug level for SSSD. If unset or `0`, Rook does nothing. Otherwise, this may be a value between 1 and 10. See the for more info. `resources`: Kubernetes resource requests and limits to set on NFS server containers It is possible to scale the size of the cluster up or down by modifying the `spec.server.active` field. Scaling the cluster size up can be done at will. Once the new server comes up, clients can be assigned to it immediately. The CRD always eliminates the highest index servers first, in reverse order from how they were started. Scaling down the cluster requires that clients be migrated from servers that will be eliminated to others. That process is currently a manual one and should be performed before reducing the size of the cluster. !!! warning See the below about setting this value greater than one. Active-active scale out does not work well with the NFS protocol. If one NFS server in a cluster is offline, other servers may block client requests until the offline server returns, which may not always happen due to the Kubernetes scheduler. Workaround: It is safest to run only a single NFS server, but we do not limit this if it benefits your use case. Ceph NFS management with the Rook mgr module enabled has a breaking regression with the Ceph Quincy v17.2.1 release. Workaround: Leave Ceph's Rook orchestrator mgr module disabled. If you have enabled it, you must disable it using the snippet below from the toolbox. ```console ceph orch set backend \"\" ceph mgr module disable rook ```"
}
] |
{
"category": "Runtime",
"file_name": "ceph-nfs-crd.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "As a CNCF project, WasmEdge follows the . In addition to this code of conduct, we have also implemented guidelines for the use of other developers' open-source work in your code. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the WasmEdge team via <[email protected]>. As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, or nationality. Examples of unacceptable behavior by participants include: The use of sexualized language or imagery Personal attacks Trolling or insulting/derogatory comments Public or private harassment Publishing others' private information, such as physical or electronic addresses, without explicit permission Other unethical or unprofessional conduct. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect of managing this project. Project maintainers who do not follow or enforce the Code of Conduct may be permanently removed from the project team. This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. In addition to the CNCF code of conduct, we would like to introduce a new code of conduct that demonstrates respect for open-source contributors. If you plan to use other developers' open-source work in your code, please make sure that the license of the original repository allows this. For instance, the Apache 2.0 license allows for free use of the code, provided that authors receive proper credit for their contributions. In order to appropriately recognize and acknowledge others' work, we kindly request that you add the following messages before you raise a PR or show your work in public: At the top of any files you use, please include: \"This file is licensed under Apache 2.0 and was originally developed by [the author name].\" Include a statement in the README like \"This work was made possible by [the author name].\" See a best practice ."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"Build from source\" layout: docs Access to a Kubernetes cluster, version 1.7 or later. A DNS server on the cluster `kubectl` installed installed (minimum version 1.8) ```bash mkdir $HOME/go export GOPATH=$HOME/go go get github.com/vmware-tanzu/velero ``` Where `go` is your for Go. For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path. Download the archive named `Source code` from the and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`. Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below. There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities. When building by using `make`, it will place the binaries under `output/bin/$GOOS/$GOARCH`. For example, you will find the binary for darwin here: `output/bin/darwin/amd64/velero`, and the binary for linux here: `_output/bin/linux/amd64/velero`. `make` will also splice version and git commit information in so that `velero version` displays proper output. Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to overwrite what image gets deployed, use the `image` flag (see below for instructions on how to build images). To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands: ```bash go build ./cmd/velero ``` ```bash make local ``` To build the velero binary targeting linux/amd64 within a build container on your local machine, run: ```bash make build ``` For any specific platform, run `make build-<GOOS>-<GOARCH>`. For example, to build for the Mac, run `make build-darwin-amd64`. Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms: linux-amd64 linux-arm linux-arm64 linux-ppc64le darwin-amd64 windows-amd64 If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image: ```bash kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION ``` To build a Velero container image, you need to configure `buildx` first. Docker Buildx is a CLI plugin that extends the docker command with the full support of the features provided by Moby BuildKit builder toolkit. It provides the same user experience as docker build with many new features like creating scoped builder instances and building against multiple nodes concurrently. More information in the and in the repo. Set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:main` image, set `$REGISTRY` to"
},
{
"data": "If this variable is not set, the default is `velero`. Optionally, set the `$VERSION` environment variable to change the image tag or `$BIN` to change which binary to build a container image for. Then, run: ```bash make container ``` Note: To build build container images for both `velero` and `velero-restic-restore-helper`, run: `make all-containers` To publish container images to a registry, the following one time setup is necessary: If you are building cross platform container images ```bash $ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes ``` Create and bootstrap a new docker buildx builder ```bash $ docker buildx create --use --name builder builder $ docker buildx inspect --bootstrap [+] Building 2.6s (1/1) FINISHED => [internal] booting buildkit 2.6s => => pulling image moby/buildkit:buildx-stable-1 1.9s => => creating container buildxbuildkitbuilder0 0.7s Name: builder Driver: docker-container Nodes: Name: builder0 Endpoint: unix:///var/run/docker.sock Status: running Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6 ``` NOTE: Without the above setup, the output of `docker buildx inspect --bootstrap` will be: ```bash $ docker buildx inspect --bootstrap Name: default Driver: docker Nodes: Name: default Endpoint: default Status: running Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6 ``` And the `REGISTRY=myrepo BUILDXOUTPUTTYPE=registry make container` will fail with the below error: ```bash $ REGISTRY=ashishamarnath BUILDXPLATFORMS=linux/arm64 BUILDXOUTPUT_TYPE=registry make container auto-push is currently not implemented for docker driver make: * [container] Error 1 ``` Having completed the above one time setup, now the output of `docker buildx inspect --bootstrap` should be like ```bash $ docker buildx inspect --bootstrap Name: builder Driver: docker-container Nodes: Name: builder0 Endpoint: unix:///var/run/docker.sock Status: running Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v ``` Now build and push the container image by running the `make container` command with `$BUILDXOUTPUTTYPE` set to `registry` ```bash $ REGISTRY=myrepo BUILDXOUTPUTTYPE=registry make container ``` Docker `buildx` platforms supported: `linux/amd64` `linux/arm64` `linux/arm/v7` `linux/ppc64le` For any specific platform, run `BUILDX_PLATFORMS=<GOOS>/<GOARCH> make container` For example, to build an image for arm64, run: ```bash BUILDX_PLATFORMS=linux/arm64 make container ``` Note: By default, `$BUILDXPLATFORMS` is set to `linux/amd64`_ With `buildx`, you can also build all supported platforms at the same time and push a multi-arch image to the registry. For example: ```bash REGISTRY=myrepo VERSION=foo BUILDXPLATFORMS=linux/amd64,linux/arm64,linux/arm/v7,linux/ppc64le BUILDXOUTPUT_TYPE=registry make all-containers ``` Note: when building for more than 1 platform at the same time, you need to set `BUILDXOUTPUTTYPE` to `registry` as local multi-arch images are not supported . Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod: ```bash kubectl -n velero delete pods -l deploy=velero ```"
}
] |
{
"category": "Runtime",
"file_name": "build-from-source.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": ".. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 http://creativecommons.org/licenses/by/3.0/legalcode ====================================== OpenSDS SourthBound Ceph Driver Design ====================================== Problem description =================== As a SDS controller, it's essential for OpenSDS to build its eco-system of sourthbound interface. At the first stage, our strategy is to quickly make up for our lack with the help of OpenStack(Cinder, Manila). Now it's time to move to next stage where we should build our own eco-system competitive. After a careful consideration, we plan to select Ceph as the first OpenSDS native sourthbound backend driver. The reasons are as follows: 1) Ceph is one of the most popular distributed storage systems in the world and it holds a large number of users. 2) Ceph has a good performance in IO stream and data high availability. 3) It's open-source and has a large number of active contributors. This proposal is launched mainly for the design of OpenSDS sourthbound Ceph driver. With this standalone driver, OpenSDS can directly manage resources in Ceph cluster and provide these storage resources for bare metals, VMs and containers. Proposed Change =============== Since OpenSDS repo has OpenSDS-plugins, Controller and Dock these three parts, most of the work will be done in Dock module. we found that Ceph maintains an official project \"go-ceph\", and we can manage resources of Ceph(pools, images and so on) in Golang.The main jobs are as follows: 1) We need to implement CreateVolume, GetVolume, ListVolumes, DeleteVolume, AttachVolume, DetachVolume, MountVolume and UnmountVolume in Ceph driver. Here is the standardized interface: type VolumeDriver interface { //Any initialization the volume driver does while starting. Setup() //Any operation the volume driver does while stoping. Unset() CreateVolume(name string, volType string, size int32) (string, error) GetVolume(volID string) (string, error) GetAllVolumes(allowDetails bool) (string, error) DeleteVolume(volID string) (string, error) AttachVolume(volID string) (string, error) DetachVolume(device string) (string, error) MountVolume(mountDir, device, fsType string) (string, error) UnmountVolume(mountDir string) (string, error) } 2) From step 1 we can find that the return value is string type, which does not seem to be a standardized description. And at the second step, what we are going to do is that we will leverage Ceph and Cinder and design a new unified sourthbound interface. 3) After those two steps, we will start to draft the V0.01 Spec of OpenSDS sourthbound interface. Data model impact -- Add ceph_driver element in Backends description. REST API impact None Security impact None Other end user impact None Performance impact None Other deployer impact None Dependencies ============ None Testing ======= None Documentation Impact ==================== None References ========== https://github.com/noahdesu/go-ceph"
}
] |
{
"category": "Runtime",
"file_name": "ceph_driver.md",
"project_name": "Soda Foundation",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|"
}
] |
{
"category": "Runtime",
"file_name": "fuzzy_mode_convert_table.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Since you are reading this from the Singularity source code, it will be assumed that you are building/compiling from source. Singularity packages are available for various Linux distributions, but may not always be up-to-date with the latest source release version. For full instructions on installation, including building RPMs, installing pre-built EPEL packages etc. please check the . You must first install development tools and libraries to your host. On Debian-based systems, including Ubuntu: ```sh sudo apt-get update sudo apt-get install -y \\ build-essential \\ libseccomp-dev \\ pkg-config \\ squashfs-tools \\ cryptsetup \\ curl wget git ``` On CentOS/RHEL: ```sh sudo yum groupinstall -y 'Development Tools' sudo yum install -y epel-release sudo yum install -y \\ libseccomp-devel \\ squashfs-tools \\ cryptsetup \\ wget git ``` Singularity is written in Go, and may require a newer version of Go than is available in the repositories of your distribution. We recommend installing the latest version of Go from the . First, download the Go tar.gz archive to `/tmp`, then extract the archive to `/usr/local`. _NOTE: if you are updating Go from a older version, make sure you remove `/usr/local/go` before reinstalling it._ ```sh export GOVERSION=1.17.3 OS=linux ARCH=amd64 # change this as you need wget -O /tmp/go${GOVERSION}.${OS}-${ARCH}.tar.gz \\ https://dl.google.com/go/go${GOVERSION}.${OS}-${ARCH}.tar.gz sudo tar -C /usr/local -xzf /tmp/go${GOVERSION}.${OS}-${ARCH}.tar.gz ``` Finally, add `/usr/local/go/bin` to the `PATH` environment variable: ```sh echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc source ~/.bashrc ``` If you will be making changes to the source code, and submitting PRs, you should install `golangci-lint`, which is the linting tool used in the Singularity project to ensure code consistency. Every pull request must pass the `golangci-lint` checks, and these will be run automatically before attempting to merge the code. If you are modifying Singularity and contributing your changes to the repository, it's faster to run these checks locally before uploading your pull request. In order to download and install the latest version of `golangci-lint`, you can run: <!-- markdownlint-disable MD013 --> ```sh curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.43.0 ``` <!-- markdownlint-enable MD013 --> Add `$(go env GOPATH)` to the `PATH` environment variable: ```sh echo 'export PATH=$PATH:$(go env GOPATH)/bin' >> ~/.bashrc source ~/.bashrc ``` With the adoption of Go modules you no longer need to clone the Singularity repository to a specific location. Clone the repository with `git` in a location of your choice: ```sh git clone"
},
{
"data": "cd singularity ``` By default your clone will be on the `master` branch which is where development of Singularity happens. To build a specific version of Singularity, check out a before compiling, for example: ```sh git checkout v3.8.4 ``` You can configure, build, and install Singularity using the following commands: ```sh ./mconfig cd ./builddir make sudo make install ``` And that's it! Now you can check your Singularity version by running: ```sh singularity --version ``` The `mconfig` command accepts options that can modify the build and installation of Singularity. For example, to build in a different folder and to set the install prefix to a different path: ```sh ./mconfig -b ./buildtree -p /usr/local ``` See the output of `./mconfig -h` for available options. On a RHEL / CentOS / Fedora machine you can build a Singularity into an rpm package, and install it from the rpm. This is useful if you need to install Singularity across multiple machines, or wish to manage all software via `yum/dnf`. To build the rpm, in addition to the , install `rpm-build`, `wget`, and `golang`: ```sh sudo yum install -y rpm-build wget golang ``` The rpm build can use the distribution or EPEL version of Go, even though as of this writing that version is older than the default minimum version of Go that Singularity requires. This is because the rpm applies a source code patch to lower the minimum required. To build from a release source tarball do these commands: <!-- markdownlint-disable MD013 --> ```sh export VERSION=3.8.4 # this is the singularity version, change as you need wget https://github.com/hpcng/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz rpmbuild -tb singularity-${VERSION}.tar.gz sudo rpm -ivh ~/rpmbuild/RPMS/x8664/singularity-${VERSION}-1.el7.x8664.rpm rm -rf ~/rpmbuild singularity-${VERSION}*.tar.gz ``` <!-- markdownlint-enable MD013 --> Alternatively, to build an RPM from the latest master you can . Create the build configuration using the `--only-rpm` option of `mconfig` if you're using the system's too-old golang installation, to lower the minimum required version. Then use the `rpm` make target to build Singularity as an rpm package: <!-- markdownlint-disable MD013 --> ```sh ./mconfig --only-rpm make -C builddir rpm sudo rpm -ivh ~/rpmbuild/RPMS/x8664/singularity-3.8.4*.x8664.rpm # or whatever version you built ``` <!-- markdownlint-enable MD013 --> By default, the rpm will be built so that Singularity is installed under `/usr/local`. To build an rpm with an alternative install prefix set RPMPREFIX on the make step, for example: ```sh make -C builddir rpm RPMPREFIX=/opt/singularity ``` For more information on installing/updating/uninstalling the RPM, check out our . Additional information on how to build a Debian package can be found in ."
}
] |
{
"category": "Runtime",
"file_name": "INSTALL.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Using block devices in containers is usually restricted via the . This is done to prevent users from doing dangerous things like creating a physical disk device via `mknod` and writing data to it. ``` brw-rw- 1 root disk 259, 1 Nov 3 15:13 /dev/nvme0n1p1 / # mknod /dev/nvme0n1p1 b 259 1 mknod: /dev/nvme0n1p1: Operation not permitted ``` When accessing devices from inside the container is actually desired, users can set up rkt volumes and mounts. In that case, rkt will automatically configure the device cgroup controller for the container without the aforementioned restriction. ``` --interactive \\ kinvolk.io/aci/busybox \\ --mount volume=disk,target=/dev/nvme0n1p1 / # ls -l /dev/nvme0n1p1 brw-rw- 1 root disk 259, 1 Nov 3 14:13 /dev/nvme0n1p1 / # head /dev/nvme0n1p1 -c11 Xmkfs.fat/ # / # echo 1 > /dev/nvme0n1p1 /bin/sh: can't create /dev/nvme0n1p1: Operation not permitted ``` Note that the volume is read-only, so we can't write to it because rkt sets a read-only policy in the device cgroup. For completeness, users can also use `--insecure-options=paths`, which disables any block device protection. Then, users can just create devices with `mknod`: ``` --interactive \\ kinvolk.io/aci/busybox / # mknod /dev/nvme0n1p1 b 259 1 / # ls -l /dev/nvme0n1p1 brw-r--r-- 1 root root 259, 1 Nov 3 15:43 /dev/nvme0n1p1 ``` Here are some real-world examples that use block devices. SSHFS allows mounting remote directories over ssh. In this example we'll mount a remote directory on `/mnt` inside the container. For this to work, we need to be able to mount and umount filesystems inside the container so we pass the appropriate seccomp and capability options: ``` --dns=8.8.8.8 \\ --interactive \\ --volume fuse,kind=host,source=/dev/fuse \\ docker://ubuntu \\ --mount volume=fuse,target=/dev/fuse \\ --seccomp mode=retain,@rkt/default-whitelist,mount,umount2 \\ --caps-retain=CAPSETUID,CAPSETGID,CAPDACOVERRIDE,CAPCHOWN,CAPFOWNER,CAPSYSADMIN root@rkt-f2098164-b207-41d0-b62b-745659725aee:/# apt-get update && apt-get install sshfs [...] root@rkt-f2098164-b207-41d0-b62b-745659725aee:/# sshfs [email protected]: /mnt The authenticity of host 'host.com (12.34.56.78)' can't be established. ECDSA key fingerprint is"
},
{
"data": "Are you sure you want to continue connecting (yes/no)? yes [email protected]'s password: root@rkt-f2098164-b207-41d0-b62b-745659725aee:/# cat /mnt/remote-file.txt HELLO FROM REMOTE root@rkt-f2098164-b207-41d0-b62b-745659725aee:/# fusermount -u /mnt/ ``` CUDA allows using GPUs for general purpose processing and it needs access to the gpu devices. In this example we also mount the CUDA SDK binaries and the host libraries, and we do some substitution magic to have appc-compliant volume names: ``` $(for f in /dev/nvidia /opt/bin/nvidia /usr/lib/; \\ do echo \"--volume $(basename $f | sed 's/\\./-/g'),source=$f,kind=host \\ --mount volume=$(basename $f | sed 's/\\./-/g'),target=$f\"; \\ done) \\ docker://nvidia/cuda:latest \\ --exec=/opt/bin/nvidia-smi Wed Sep 7 21:25:22 2016 +--+ | NVIDIA-SMI 367.35 Driver Version: 367.35 | |-+-+-+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 780 Off | 0000:01:00.0 N/A | N/A | | 33% 61C P2 N/A / N/A | 474MiB / 3018MiB | N/A Default | +-+-+-+ +--+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 Not Supported | +--+ ``` You can mount a disk block device (for example, an external USB stick) and format it inside a container. Like before, if you want to mount it inside the container, you need to pass the appropriate seccomp and capability options: ``` --volume disk,kind=host,source=/dev/sda,readOnly=false \\ --interactive \\ docker://ubuntu \\ --mount volume=disk,target=/dev/sda root@rkt-72bd9a93-2e89-4515-8b46-44e0e11c4c79:/# mkfs.ext4 /dev/sda mke2fs 1.42.13 (17-May-2015) /dev/sda contains a ext4 file system last mounted on Fri Nov 3 17:15:56 2017 Proceed anyway? (y,n) y Creating filesystem with 491520 4k blocks and 122880 inodes Filesystem UUID: 9ede01b1-e35b-46a0-b224-24e879973582 Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912 Allocating group tables: done Writing inode tables: done Creating journal (8192 blocks): done Writing superblocks and filesystem accounting information: done root@rkt-72bd9a93-2e89-4515-8b46-44e0e11c4c79:/# mount /dev/sda /mnt/ root@rkt-72bd9a93-2e89-4515-8b46-44e0e11c4c79:/# echo HELLO > /mnt/hi.txt root@rkt-72bd9a93-2e89-4515-8b46-44e0e11c4c79:/# cat /mnt/hi.txt HELLO root@rkt-72bd9a93-2e89-4515-8b46-44e0e11c4c79:/# umount /mnt/ ```"
}
] |
{
"category": "Runtime",
"file_name": "block-devices.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Learn about creating replicated volumes, making your data accessible on any cluster node. In this tutorial you will learn how to create a replicated volume, and verify that the data stays accessible when moving Pods from one node to another. A Kubernetes Cluster with at least two nodes. An installed and configured Piraeus Datastore. Learn how to get started in our First, we will create a new for our replicated volumes. We will be using the `pool1` storage pool from the , but this time also set the `placementCount` to 2, telling LINSTOR to store the volume data on two nodes. ``` $ kubectl apply -f - <<EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: piraeus-storage-replicated provisioner: linstor.csi.linbit.com allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer parameters: linstor.csi.linbit.com/storagePool: pool1 linstor.csi.linbit.com/placementCount: \"2\" EOF ``` Next, we will again create a , requesting a 1G replicated volume from our newly created `StorageClass`. ``` $ kubectl apply -f - <<EOF apiVersion: v1 kind: PersistentVolumeClaim metadata: name: replicated-volume spec: storageClassName: piraeus-storage-replicated resources: requests: storage: 1Gi accessModes: ReadWriteOnce EOF ``` For our workload, we will create a Pod which will use the replicated volume to log its name, the current date, and the node it is running on. ``` $ kubectl apply -f - <<EOF apiVersion: apps/v1 kind: Deployment metadata: name: volume-logger spec: selector: matchLabels: app.kubernetes.io/name: volume-logger strategy: type: Recreate template: metadata: labels: app.kubernetes.io/name: volume-logger spec: terminationGracePeriodSeconds: 0 containers: name: volume-logger image: busybox args: sh -c | echo \"Hello from \\$HOSTNAME, running on \\$NODENAME, started at \\$(date)\" >> /volume/hello tail -f /dev/null env: name: NODENAME valueFrom: fieldRef: fieldPath: spec.nodeName volumeMounts: mountPath: /volume name: replicated-volume volumes: name: replicated-volume persistentVolumeClaim: claimName: replicated-volume EOF ``` After a short wait, the Pod is `Running`, our `PersistentVolumeClaim` is now `Bound`, and we can see that LINSTOR placed the volume on two nodes: ``` $ kubectl wait pod --for=condition=Ready -l app.kubernetes.io/name=volume-logger pod/volume-logger-84dd47f4cb-trh4l $ kubectl get persistentvolumeclaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE replicated-volume Bound pvc-dbe422ac-c5ae-4786-a624-74d2be8a262d 1Gi RWO piraeus-storage-replicated 1m $ kubectl -n piraeus-datastore exec deploy/linstor-controller -- linstor resource list-volumes +-+ | Node | Resource | StoragePool | VolNr | MinorNr | DeviceName | Allocated | InUse | State | |===========================================================================================================================================| | n1.example.com | pvc-dbe422ac-c5ae-4786-a624-74d2be8a262d | pool1 | 0 | 1000 | /dev/drbd1000 | 16.91 MiB | InUse | UpToDate | | n2.example.com | pvc-dbe422ac-c5ae-4786-a624-74d2be8a262d | pool1 | 0 | 1000 | /dev/drbd1000 | 876 KiB | Unused | UpToDate | +-+ ``` NOTE: If your cluster has three or more nodes, you will actually see a third volume, marked as"
},
{
"data": "This is intentional and improves the behaviour should one of the cluster nodes become unavailable. Now, we can check that our Pod actually logged the expected information by reading `/volume/hello` in the Pod: ``` $ kubectl exec deploy/volume-logger -- cat /volume/hello Hello from volume-logger-84dd47f4cb-trh4l, running on n1.example.com, started at Fri Feb 3 08:53:47 UTC 2023 ``` Now, we will verify that when we move the Pod to another node, we still have access to the same data. To test this, we will disable scheduling on the node the Pod is currently running on. This forces Kubernetes to move the Pod to another node once we trigger a restart. In our examples, the Hello message tells us that the Pod was started on `n1.example.com`, so this is the node we disable. Replace the name with your own node name. ``` $ kubectl cordon n1.example.com node/n1.example.com cordoned ``` Now, we can trigger a new rollout of the deployment. Since we disabled scheduling `n1.example.com`, another node will have to take over our Pod. ``` $ kubectl rollout restart deploy/volume-logger deployment.apps/volume-logger restarted $ kubectl wait pod --for=condition=Ready -l app.kubernetes.io/name=volume-logger pod/volume-logger-5db9dd7b87-lps2f condition met $ kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES volume-logger-5db9dd7b87-lps2f 1/1 Running 0 26s 10.125.97.9 n2.example.com <none> <none> ``` As expected, the Pod is now running on a different node, in this case on `n2.example.com`. Now, we can verify that the message from the original pod is still present: ``` $ kubectl exec deploy/volume-logger -- cat /volume/hello Hello from volume-logger-84dd47f4cb-trh4l, running on n1.example.com, started at Fri Feb 3 08:53:47 UTC 2023 Hello from volume-logger-5db9dd7b87-lps2f, running on n2.example.com, started at Fri Feb 3 08:55:42 UTC 2023 ``` As expected, we still see the message from `n1.example.com`, as well as the message from the new Pod on `n2.example.com`. We can also see that LINSTOR now shows the volume as `InUse` on the new node: ``` $ kubectl -n piraeus-datastore exec deploy/linstor-controller -- linstor resource list-volumes +-+ | Node | Resource | StoragePool | VolNr | MinorNr | DeviceName | Allocated | InUse | State | |===========================================================================================================================================| | n1.example.com | pvc-dbe422ac-c5ae-4786-a624-74d2be8a262d | pool1 | 0 | 1000 | /dev/drbd1000 | 16.91 MiB | Unused | UpToDate | | n2.example.com | pvc-dbe422ac-c5ae-4786-a624-74d2be8a262d | pool1 | 0 | 1000 | /dev/drbd1000 | 952 KiB | InUse | UpToDate | +-+ ``` You have now successfully created a replicated volume and verified that the data is accessible from multiple nodes. Now that we have verified replication works, we can reset the disabled node: ``` $ kubectl uncordon n1.example.com node/n1.example.com uncordoned ```"
}
] |
{
"category": "Runtime",
"file_name": "replicated-volumes.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This page shows how to create a single control-plane Kubernetes and install the software required to run confidential computing containers with Occlum in the Kubernetes cluster. A machine with Intel SGX hardware support. Make sure you have one of the following operating systems: Ubuntu 18.04 server 64bits Install the Intel SGX software stack. Install the Occlum software stack. Create a single control-plane Kubernetes cluster for running confidential computing containers with Occlum. On Ubuntu ```bash sudo apt-get install -y gnupg wget echo 'deb [arch=amd64] https://mirrors.openanolis.cn/inclavare-containers/deb-repo bionic main' | tee /etc/apt/sources.list.d/inclavare-containers.list wget -qO - https://mirrors.openanolis.cn/inclavare-containers/deb-repo/DEB-GPG-KEY.key | sudo apt-key add - cat << EOF >/etc/apt/preferences.d/inclavare-containers Package: epm Pin: origin mirrors.openanolis.cn Pin-Priority: 1000 EOF sudo apt-get update ``` The Linux SGX software stack is comprised of Intel SGX driver, Intel SGX SDK, and Intel SGX PSW. Please follow to install SGX driver, SDK and PSW, the recommended version is 2.14. is the only enclave runtime supported by shim-rune currently. `enable-rdfsdbase` is a Linux kernel module which enables Occlum to use rdfsbase-family instructions in enclaves. Step 1. Install kernel module enable-rdfsdbase Please follow the to install `enable-rdfsdbase`. Step 2. Install package libsgx-uae-service `libsgx-uae-service` package is required by occlum, install libsgx-uae-service use the following command: On Ubuntu ``` sudo apt-get install libsgx-uae-service ``` Step 3. Install occlum On Ubuntu ```bash sudo apt-get install occlum ``` `rune` is a CLI tools for spawning and running containers according to the OCI specification. The codebase of the `rune` is a fork of , so `rune` can be used as `runc` if enclave is not configured or available. The difference between them is `rune` can run a so-called enclave which is referred to as protected execution environment, preventing the untrusted entity from accessing the sensitive and confidential assets in use in containers.<br /> <br /> Install rune use the following commands: On Ubuntu ```bash sudo apt-get install rune ``` `epm` epm is a service that is used to manage the cache pools to optimize the startup time of enclave.<br /> <br /> Install epm use the following commands: On Ubuntu ```bash sudo apt-get install epm ``` `shim-rune` resides in between `containerd` and `rune`, conducting enclave signing and management beyond the normal `shim` basis. `shim-rune` and `rune` can compose a basic enclave containerization stack for the cloud-native ecosystem. On Ubuntu ```bash sudo apt-get install shim-rune ``` containerd is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc.<br />You can download one of the containerd binaries on the page. Step 1. Download and install containerd-1.3.4 as follows: ```bash curl -LO https://github.com/containerd/containerd/releases/download/v1.3.4/containerd-1.3.4.linux-amd64.tar.gz tar -xvf containerd-1.3.4.linux-amd64.tar.gz cp bin/* /usr/local/bin ``` Step 2. 
Configure the containerd.service You can use systemd to manage the containerd daemon, and place the `containerd.service` unit file at `/etc/systemd/system/containerd.service`. ```bash cat << EOF >/etc/systemd/system/containerd.service [Unit] Description=containerd container runtime Documentation=https://containerd.io After=network.target [Service] ExecStartPre=/sbin/modprobe overlay ExecStart=/usr/local/bin/containerd Restart=always RestartSec=5 Delegate=yes KillMode=process OOMScoreAdjust=-999 LimitNOFILE=1048576 LimitNPROC=infinity LimitCORE=infinity [Install] WantedBy=multi-user.target EOF ``` Step 3. Configure the containerd configuration The daemon also uses a configuration file located in `/etc/containerd/config.toml` for specifying daemon level options. ```bash mkdir /etc/containerd cat << EOF >/etc/containerd/config.toml [plugins] [plugins.cri] sandbox_image = \"registry.cn-hangzhou.aliyuncs.com/acs/pause-amd64:3.1\" [plugins.cri.containerd] default_runtime_name = \"rune\" snapshotter = \"overlayfs\" [plugins.cri.containerd.runtimes.rune] runtime_type = \"io.containerd.rune.v2\" EOF ``` Step"
},
{
"data": "Enable and restart the containerd.service ```bash sudo systemctl enable containerd.service sudo systemctl restart containerd.service ``` Step 1. Set the kernel parameters Make sure that the `brnetfilter` module is loaded and both `net.bridge.bridge-nf-call-iptables` and `net.ipv4.ipforward` are set to 1 in your sysctl config. ```bash sudo modprobe br_netfilter cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 EOF sudo sysctl --system ``` Step 2. Configure the kubernets package repository for downloading kubelet, kubeadm and kubelet On Ubuntu ```bash sudo apt update && sudo apt install -y apt-transport-https curl curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - echo \"deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main\" >>/etc/apt/sources.list.d/kubernetes.list ``` Step 3. Install kubelet, kubeadm and kubectl Set SELinux in permissive mode and install kubelet, kubeadm and kubectl of version v1.16.9, you can choose other versions you like, but it is recommend that you use the versions greater than or equal to v1.16. On Ubuntu ```bash sudo setenforce 0 sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config kubernetes_version=1.16.9 sudo apt update && apt install -y kubelet=${kubernetes_version}-00 \\ kubeadm=${kubernetesversion}-00 kubectl=${kubernetesversion}-00 ``` Step 4. Configure the kubelet configuration file Configure the kubelet configuration file `10-kubeadm.conf`, specify the runtime to containerd by arguments `--container-runtime=remote` and `--container-runtime-endpoint`. On Ubuntu ```bash cat << EOF >/etc/resolv.conf.kubernetes nameserver 8.8.8.8 options timeout:2 attempts:3 rotate single-request-reopen EOF cat << EOF >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf [Service] Environment=\"KUBELETKUBECONFIGARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf\" Environment=\"KUBELETCONFIGARGS=--config=/var/lib/kubelet/config.yaml\" Environment=\"KUBELETSYSTEMPODS_ARGS=--max-pods 64 --pod-manifest-path=/etc/kubernetes/manifests\" Environment=\"KUBELETNETWORKARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin\" Environment=\"KUBELETDNSARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/acs/pause-amd64:3.0 --cluster-domain=cluster.local --cloud-provider=external --resolv-conf=/etc/resolv.conf.kubernetes\" Environment=\"KUBELETEXTRAARGS=--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock\" ExecStart= ExecStart=/usr/bin/kubelet \\$KUBELETKUBECONFIGARGS \\$KUBELETCONFIGARGS \\$KUBELETSYSTEMPODSARGS \\$KUBELETNETWORKARGS \\$KUBELETDNSARGS \\$KUBELETEXTRA_ARGS EOF ``` Step 5. Enable the kubelet.service ```bash sudo systemctl enable kubelet.service ``` Step 6. Initialize the Kubernetes cluster with kubeadm The version of Kubernetes must match with the kubelet version. You can specify the Kubernetes Pod and Service CIDR block with arguments `pod-network-cidr` and `service-cidr`, and make sure the CIDRs are not conflict with the host IP address. For example, if the host IP address is `192.168.1.100`, you can initialize the cluster as follows: ```bash kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \\ --kubernetes-version=v1.16.9 \\ --pod-network-cidr=\"172.21.0.0/20\" --service-cidr=\"172.20.0.0/20\" ``` Step 7. 
Configure kubeconfig To make kubectl work, run these commands, which are also part of the `kubeadm init` output: ```bash mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` Step 8. Install the network addon Install the network addon `flannel` and wait for the node status to `Ready`. ```bash kubectl taint nodes $(hostname | tr 'A-Z' 'a-z') node.cloudprovider.kubernetes.io/uninitialized- kubectl taint nodes $(hostname | tr 'A-Z' 'a-z') node-role.kubernetes.io/master- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml ``` Step 9. Check the pod status Check the pod status with command `kubectl get pod -A` and wait until all pods status are `Ready` , the output should like this: ``` $ kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-67c766df46-bzmwx 1/1 Running 0 74s kube-system coredns-67c766df46-l6blz 1/1 Running 0 74s kube-system etcd-izuf68q2tx28s7tel52vb0z 1/1 Running 0 20s kube-system kube-apiserver-izuf68q2tx28s7tel52vb0z 1/1 Running 0 12s kube-system kube-controller-manager-izuf68q2tx28s7tel52vb0z 1/1 Running 0 28s kube-system kube-flannel-ds-amd64-s542d 1/1 Running 0 56s kube-system kube-proxy-fpwnh 1/1 Running 0 74s kube-system kube-scheduler-izuf68q2tx28s7tel52vb0z 1/1 Running 0 20s ``` Step 1. Apply the following yaml files to create `rune` RuntimeClass object ```yaml cat << EOF | kubectl apply -f - apiVersion: node.k8s.io/v1beta1 handler: rune kind: RuntimeClass metadata: name: rune EOF ``` Step 2. Make sure the `rune` RuntimeClass object is created List the runtimeClasses with command `kubectl get runtimeclass` and the output should like this: ``` $ kubectl get runtimeclass NAME CREATED AT rune 2020-05-06T06:57:48Z ```"
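With the `rune` RuntimeClass in place, a workload opts into the confidential runtime by setting `runtimeClassName` in its Pod spec. The manifest below is only a sketch; the image name is a placeholder for an Occlum-packaged application image.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: occlum-app
spec:
  runtimeClassName: rune            # use the rune handler configured above
  containers:
  - name: occlum-app
    image: registry.example.com/occlum-app:latest   # placeholder image
```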
}
] |
{
"category": "Runtime",
"file_name": "create_a_confidential_computing_kubernetes_cluster_with_inclavare_containers.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Installation sidebar_position: 1 description: Learn how to install JuiceFS on Linux, macOS, and Windows, including one-click installation, pre-compiled installation, and containerized deployment methods. JuiceFS has good cross-platform capability and supports various operating systems across almost all major architectures, including but not limited to Linux, macOS, and Windows. The JuiceFS client has only one binary file. You can either download the pre-compiled version to unzip it and use it directly, or manually compile it from the provided source code. The one-click installation script is available for Linux and macOS systems. It automatically downloads and installs the latest version of the JuiceFS client based on your hardware architecture. Here is how to use it: ```shell curl -sSL https://d.juicefs.com/install | sh - ``` ```shell curl -sSL https://d.juicefs.com/install | sh -s /tmp ``` You can download the latest version of the client at . Pre-compiled versions for different CPU architectures and operating systems are available in the download list of each client version. Please select the version that best suits your application. For example: | File Name | Description | |--|-| | `juicefs-x.y.z-darwin-amd64.tar.gz` | For macOS systems with Intel chips | | `juicefs-x.y.z-darwin-arm64.tar.gz` | For macOS systems with M1 series chips | | `juicefs-x.y.z-linux-amd64.tar.gz` | For Linux distributions on x86 architecture | | `juicefs-x.y.z-linux-arm64.tar.gz` | For Linux distributions on ARM architecture | | `juicefs-x.y.z-windows-amd64.tar.gz` | For Windows on x86 architecture | | `juicefs-hadoop-x.y.z.jar` | Hadoop Java SDK on x86 and ARM architectures (supports Linux, macOS, and Windows systems) | For Linux systems with x86 architecture, download the file with the file name `linux-amd64` and execute the following commands in the terminal. Get the latest version number: ```shell JFSLATESTTAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '\"' -f 4 | tr -d 'v') ``` Download the client to the current directory: ```shell wget \"https://github.com/juicedata/juicefs/releases/download/v${JFSLATESTTAG}/juicefs-${JFSLATESTTAG}-linux-amd64.tar.gz\" ``` Unzip the installation package: ```shell tar -zxf \"juicefs-${JFSLATESTTAG}-linux-amd64.tar.gz\" ``` Install the client: ```shell sudo install juicefs /usr/local/bin ``` After completing the above 4 steps, execute the `juicefs` command in the terminal. If the client installation is successful, a help message will be returned. :::info If the terminal prompts `command not found`, it is probably because `/usr/local/bin` is not in your system's `PATH` environment variable. You can execute `echo $PATH` to see which executable paths are set in your system. Based on the returned result, select an appropriate path, adjust, and re-execute the installation command in step 4. ::: JuiceFS also provides a repository, which makes it easy to install the latest version of the client on Ubuntu systems. Choose the corresponding PPA repository based on your CPU architecture: x86 architecture`ppa:juicefs/ppa` ARM architecture`ppa:juicefs/arm64` For example, on a Ubuntu"
},
{
"data": "system with x86 architecture, execute the following commands: ```shell sudo add-apt-repository ppa:juicefs/ppa sudo apt-get update sudo apt-get install juicefs ``` JuiceFS also provides a repository, which allows for easy installation of the latest version of the client on Red Hat and its derivatives. The supported systems currently include: Amazonlinux 2023 CentOS 8, 9 Fedora 37, 38, 39, rawhide RHEL 7, 8, 9 Taking Fedora 38 as an example, execute the following commands to install the client: ```shell sudo dnf copr enable -y juicedata/juicefs sudo dnf install juicefs ``` We have also packaged and released the on the platform. For Ubuntu 16.04 and above and other operating systems that support Snap, you can install it using the following commands: ```shell sudo snap install juicefs sudo ln -s -f /snap/juicefs/current/juicefs /snap/bin/juicefs ``` When there is a new version, execute the following command to update the client: ```shell sudo snap refresh juicefs ``` JuiceFS also provides an repository, which makes it convenient to install the latest version of the client on Arch Linux and its derivatives. For systems using the Yay package manager, execute the following command to install the client: ```shell yay -S juicefs ``` :::info There are multiple JuiceFS client packages available on AUR. The following are versions officially maintained by JuiceFS: : A stable compiled version that fetches the latest stable source code and compiles it during installation. : A stable pre-compiled version that directly downloads and installs the latest stable pre-compiled program. : A development version that fetches the latest development source code and compiles it during installation. ::: Additionally, you can manually compile and install using `makepkg`, as shown for an Arch Linux system: ```shell sudo pacman -S base-devel git go git clone https://aur.archlinux.org/juicefs.git cd juicefs makepkg -si ``` There are three ways to use JuiceFS on Windows systems. The Windows client of JuiceFS is also a standalone binary. After you download and extract it, you can run it right away. Install dependencies. Since Windows does not natively support the FUSE interface, you need to download and install first in order to implement FUSE support. :::tip is an open source Windows file system agent. It provides a FUSE emulation layer that allows JuiceFS clients to mount file systems on Windows systems for use. ::: Install the client. Take the Windows 10 system as an example, download the file with the file name `windows-amd64`, unzip it, and get `juicefs.exe` which is the JuiceFS client binary. To make it easier to use, it is recommended to create a folder named `juicefs` in the root directory of the `C:\\` disk and extract `juicefs.exe` to that folder. Then add `C:\\juicefs` to the environment variables of your system and restart the system to let the settings take effect. Lastly, you can run `juicefs` commands directly using the \"Command Prompt\" or \"PowerShell\" terminal that comes with your"
},
{
"data": "If you have installed in your Windows system, you can use the following command to install the latest version of the JuiceFS client: ```shell scoop install juicefs ``` is short for Windows Subsystem for Linux, which is supported from Windows 10 version 2004 onwards or Windows 11. It allows you to run most of the command-line tools, utilities, and applications of GNU/Linux natively on a Windows system without incurring the overhead of a traditional virtual machine or dual-boot setup. For details, see . Since macOS does not support the FUSE interface by default, you need to install first to implement the support for FUSE. :::tip is an open source file system enhancement tool that allows macOS to mount third-party file systems. It enables JuiceFS clients to mount file systems on macOS systems. ::: If you have the package manager installed on your system, you can install the JuiceFS client by executing the following command: ```shell brew install juicefs ``` For more information about this command, please refer to page. You can also download the binary with the file name `darwin-amd64`. After downloading, unzip the file and install the program to any executable path on your system using the `install` command, for example: ```shell sudo install juicefs /usr/local/bin ``` For those interested in using JuiceFS in a Docker container, a `Dockerfile` for building a JuiceFS client image is provided below. It can be used as a base to build a JuiceFS client image alone or packaged together with other applications. ```dockerfile FROM ubuntu:20.04 RUN apt update && apt install -y curl fuse && \\ apt-get autoremove && \\ apt-get clean && \\ rm -rf \\ /tmp/* \\ /var/lib/apt/lists/* \\ /var/tmp/* RUN set -x && \\ mkdir /juicefs && \\ cd /juicefs && \\ JFSLATESTTAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '\"' -f 4 | tr -d 'v') && \\ curl -s -L \"https://github.com/juicedata/juicefs/releases/download/v${JFSLATESTTAG}/juicefs-${JFSLATESTTAG}-linux-amd64.tar.gz\" \\ | tar -zx && \\ install juicefs /usr/bin && \\ cd .. && \\ rm -rf /juicefs CMD [ \"juicefs\" ] ``` If there is no pre-compiled client versions that are suitable for your operating system, such as FreeBSD, you can manually compile the JuiceFS client. One advantage of manual compilation is that you have priority access to various new features in JuiceFS development, but it requires some basic knowledge of software compilation. :::tip For users in China, in order to speed up the acquisition of Go modules, it is recommended to set the `GOPROXY` environment variable to the domestic mirror server by executing `go env -w GOPROXY=https://goproxy.cn,direct`. For details, see . ::: Compiling clients for Linux, macOS, BSD and other Unix-like systems requires the following dependencies: 1.20+ GCC 5.4+ Clone the source code: ```shell git clone https://github.com/juicedata/juicefs.git ``` Enter the source code directory: ```shell cd juicefs ``` Switch to the desired branch, such as release v1.0.0: The source code uses the `main` branch by default. You can switch to any official release, for example, to the release"
},
{
"data": "```shell git checkout v1.0.0 ``` :::caution The development branch often involves large changes, so do not use the clients compiled in the \"development branch\" for the production environment. ::: Compile: ```shell make ``` The compiled `juicefs` binary is located in the current directory. To compile the JuiceFS client on Windows systems, you need to install the following dependencies: 1.20+ GCC 5.4+ Among them, WinFsp and Go can be downloaded and installed directly. GCC needs to use a version provided by a third party, which can use or . Here we take MinGW-w64 as an example. On the , select a precompiled version for Windows, such as . After downloading, extract it to the root directory of the `C` drive, then find PATH in the system environment variable settings and add the `C:\\mingw64\\bin` directory. After restarting the system, execute the `gcc -v` command in the command prompt or PowerShell. If you can see version information, it means that MingGW-w64 is successfully installed, and you can start compiling. Clone and enter the project directory: ```shell git clone https://github.com/juicedata/juicefs.git && cd juicefs ``` Copy WinFsp headers: ```shell mkdir \"C:\\WinFsp\\inc\\fuse\" ``` ```shell copy .\\hack\\winfsp_headers\\* C:\\WinFsp\\inc\\fuse\\ ``` ```shell dir \"C:\\WinFsp\\inc\\fuse\" ``` ```shell set CGO_CFLAGS=-IC:/WinFsp/inc/fuse ``` ```shell go env -w CGO_CFLAGS=-IC:/WinFsp/inc/fuse ``` Compile the client: ```shell go build -ldflags=\"-s -w\" -o juicefs.exe . ``` The compiled `juicefs.exe` binary program is located in the current directory. For convenience, it can be moved to the `C:\\Windows\\System32` directory, so that the `juicefs.exe` command can be used directly anywhere. Compiling a specific version of the client for Windows is essentially the same as and can be done directly on a Linux system. However, in addition to `go` and `gcc`, you also need to install . The latest version can be installed from software repositories on many Linux distributions. For example, on Ubuntu 20.04+, you can install `mingw-w64` with the following command: ```shell sudo apt install mingw-w64 ``` Compile the Windows client: ```shell make juicefs.exe ``` The compiled client is a binary file named `juicefs.exe`, located in the current directory. Clone and enter the project directory: ```shell git clone https://github.com/juicedata/juicefs.git && cd juicefs ``` Install dependencies: ```shell brew install FiloSottile/musl-cross/musl-cross ``` Compile the client: ```shell make juicefs.linux ``` The JuiceFS client has only one binary file, so it can be easily deleted once you find the location of the program. For example, to uninstall the client that is installed on the Linux system as described above, you only need to execute the following command: ```shell sudo rm /usr/local/bin/juicefs ``` You can also check where the program is located by using the `which` command: ```shell which juicefs ``` The path returned by the command is the location where the JuiceFS client is installed on your system. The uninstallation of the JuiceFS client on other operating systems follows the same way."
}
] |
{
"category": "Runtime",
"file_name": "installation.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "% runc-list \"8\" runc-list - lists containers runc list [option ...] The list commands lists containers. Note that a global --root option can be specified to change the default root. For the description of --root, see runc(8). --format|-f table|json : Specify the format. Default is table. The json format provides more details. --quiet|-q : Only display container IDs. To list containers created with the default root: To list containers in a human-readable JSON (with the help of jq(1) utility): To list containers created with the root of /tmp/myroot: runc(8)."
}
] |
{
"category": "Runtime",
"file_name": "runc-list.8.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
[
{
"data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at ."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The following guide will show you how to build and run a self-contained Go app using rkt, the reference implementation of the . If you're not on Linux, you should do all of this inside . For a more complex example, please check the . ```go package main import ( \"log\" \"net/http\" ) func main() { http.HandleFunc(\"/\", func(w http.ResponseWriter, r *http.Request) { log.Printf(\"request from %v\\n\", r.RemoteAddr) w.Write([]byte(\"hello\\n\")) }) log.Fatal(http.ListenAndServe(\":5000\", nil)) } ``` Next we need to build our application. We are going to statically link our app so we can ship an App Container Image with no external dependencies. With Go 1.9: ``` $ CGO_ENABLED=0 go build -ldflags '-extldflags \"-static\"' ``` Note that if you use , the command is instead: ``` $ go build -compiler gccgo -gccgoflags '-static' ``` Before proceeding, verify that the produced binary is statically linked: ``` $ file hello hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped $ ldd hello not a dynamic executable ``` To create the image, we can use , which can be downloaded via one of the . The following commands will create an ACI containing our application and important metadata. ```bash acbuild begin acbuild set-name example.com/hello acbuild copy hello /bin/hello acbuild set-exec /bin/hello acbuild port add www tcp 5000 acbuild label add version 0.0.1 acbuild label add arch amd64 acbuild label add os linux acbuild annotation add authors \"Carly Container <[email protected]>\" acbuild write hello-0.0.1-linux-amd64.aci acbuild end ``` ``` ``` Note that `--insecure-options=image` is required because, by default, rkt expects our images to be signed. See the for more details. At this point our hello app is running and ready to handle HTTP requests. To stop the container, pass three escape characters (`^]^]^]`), which is generated by `Ctrl-]` on a US keyboard. You can also . By default, rkt will assign the running container an IP address. Use `rkt list` to discover what it is: ``` UUID APP IMAGE NAME STATE NETWORKS 885876b0 hello example.com/hello:0.0.1 running default:ip4=172.16.28.2 ``` Then you can `curl` that IP on port 5000: ``` $ curl 172.16.28.2:5000 hello ```"
}
] |
{
"category": "Runtime",
"file_name": "getting-started-guide.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "(devices-proxy)= ```{note} The `proxy` device type is supported for both containers (NAT and non-NAT modes) and VMs (NAT mode only). It supports hotplugging for both containers and VMs. ``` Proxy devices allow forwarding network connections between host and instance. This method makes it possible to forward traffic hitting one of the host's addresses to an address inside the instance, or to do the reverse and have an address in the instance connect through the host. In {ref}`devices-proxy-nat-mode`, a proxy device can be used for TCP and UDP proxying. In non-NAT mode, you can also proxy traffic between Unix sockets (which can be useful to, for example, forward graphical GUI or audio traffic from the container to the host system) or even across protocols (for example, you can have a TCP listener on the host system and forward its traffic to a Unix socket inside a container). The supported connection types are: `tcp <-> tcp` `udp <-> udp` `unix <-> unix` `tcp <-> unix` `unix <-> tcp` `udp <-> tcp` `tcp <-> udp` `udp <-> unix` `unix <-> udp` To add a `proxy` device, use the following command: incus config device add <instancename> <devicename> proxy listen=<type>:<addr>:<port> connect=<type>:<addr>:<port> bind=<host/instance_name> (devices-proxy-nat-mode)= The proxy device also supports a NAT mode (`nat=true`), where packets are forwarded using NAT rather than being proxied through a separate connection. This mode has the benefit that the client address is maintained without the need for the target destination to support the HAProxy PROXY protocol (which is the only way to pass the client address through when using the proxy device in non-NAT mode). However, NAT mode is supported only if the host that the instance is running on is the gateway (which is the case if you're using `incusbr0`, for"
},
{
"data": "In NAT mode, the supported connection types are: `tcp <-> tcp` `udp <-> udp` When configuring a proxy device with `nat=true`, you must ensure that the target instance has a static IP configured on its NIC device. Use the following command to configure a static IP for an instance NIC: incus config device set <instancename> <nicname> ipv4.address=<ipv4address> ipv6.address=<ipv6address> To define a static IPv6 address, the parent managed network must have `ipv6.dhcp.stateful` enabled. When defining IPv6 addresses, use the square bracket notation, for example: connect=tcp:[2001:db8::1]:80 You can specify that the connect address should be the IP of the instance by setting the connect IP to the wildcard address (`0.0.0.0` for IPv4 and `[::]` for IPv6). ```{note} The listen address can also use wildcard addresses when using non-NAT mode. However, when using NAT mode, you must specify an IP address on the Incus host. ``` `proxy` devices have the following device options: Key | Type | Default | Required | Description :-- | :-- | :-- | :-- | :-- `bind` | string | `host` | no | Which side to bind on (`host`/`instance`) `connect` | string | - | yes | The address and port to connect to (`<type>:<addr>:<port>`) `gid` | int | `0` | no | GID of the owner of the listening Unix socket `listen` | string | - | yes | The address and port to bind and listen (`<type>:<addr>:<port>`) `mode` | int | `0644` | no | Mode for the listening Unix socket `nat` | bool | `false` | no | Whether to optimize proxying via NAT (requires that the instance NIC has a static IP address) `proxy_protocol`| bool | `false` | no | Whether to use the HAProxy PROXY protocol to transmit sender information `security.gid` | int | `0` | no | What GID to drop privilege to `security.uid` | int | `0` | no | What UID to drop privilege to `uid` | int | `0` | no | UID of the owner of the listening Unix socket"
}
] |
{
"category": "Runtime",
"file_name": "devices_proxy.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The Firecracker microVM Metadata Service (MMDS) is a mutable data store which can be used for sharing information between host and guests, in a secure and easy at hand way. By default, MMDS is not reachable from the guest operating system. At microVM runtime, MMDS is tightly coupled with a network interface, which allows MMDS requests. When configuring the microVM, if MMDS needs to be activated, a network interface has to be configured to allow MMDS requests. This can be achieved in two steps: Attach one (or more) network interfaces through an HTTP `PUT` request to `/network-interfaces/${MMDSNETIF}`. The full network configuration API can be found in the . Configure MMDS through an HTTP `PUT` request to `/mmds/config` resource and include the IDs of the network interfaces that should allow forwarding requests to MMDS in the `network_interfaces` list. The complete MMDS API is described in the . Attaching a network device with ID `MMDSNETIF`: ```bash MMDSNETIF=eth0 curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/network-interfaces/${MMDSNETIF}' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"ifaceid\": \"${MMDSNET_IF}\", \"guest_mac\": \"AA:FC:00:00:00:01\", \"hostdevname\": \"tap0\" }' ``` Configuring MMDS to receive requests through the `MMDSNETIF` network interface ID: ```bash MMDSIPV4ADDR=169.254.170.2 curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT \"http://localhost/mmds/config\" \\ -H \"Content-Type: application/json\" \\ -d '{ \"networkinterfaces\": [\"${MMDSNET_IF}\"] }' ``` MMDS can be configured pre-boot only, using the Firecracker API server. Enabling MMDS without at least a network device attached will return an error. The IPv4 address used by guest applications when issuing requests to MMDS can be customized through the same HTTP `PUT` request to `/mmds/config` resource, by specifying the IPv4 address to the `ipv4_address` field. If the IP configuration is not provided before booting up the guest, the MMDS IPv4 address defaults to `169.254.169.254`. ```bash MMDSIPV4ADDR=169.254.170.2 curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT \"http://localhost/mmds/config\" \\ -H \"Content-Type: application/json\" \\ -d '{ \"networkinterfaces\": [\"${MMDSNET_IF}\"], \"ipv4address\": \"${MMDSIPV4_ADDR}\" }' ``` MMDS is tightly coupled with a network interface which is used to route MMDS packets. To send MMDS intended packets, guest applications must insert a new rule into the routing table of the guest OS. This new rule must forward MMDS intended packets to a network interface which allows MMDS requests. For example: ```bash MMDSIPV4ADDR=169.254.170.2 MMDSNETIF=eth0 ip route add ${MMDSIPV4ADDR} dev ${MMDSNETIF} ``` MMDS supports two methods to access the contents of the metadata store from the guest operating system: `V1` and `V2`. More about the particularities of the two mechanisms can be found in the section. The MMDS version used can be specified when configuring MMDS, through the `version` field of the HTTP `PUT` request to `/mmds/config` resource. Accepted values are `V1`(deprecated) and `V2` and the default MMDS version used in case the `version` field is missing is . ```bash MMDSIPV4ADDR=169.254.170.2 curl --unix-socket"
},
{
"data": "-i \\ -X PUT \"http://localhost/mmds/config\" \\ -H \"Content-Type: application/json\" \\ -d '{ \"networkinterfaces\": [\"${MMDSNET_IF}\"], \"version\": \"V2\", \"ipv4address\": \"${MMDSIPV4_ADDR}\" }' ``` Inserting and updating metadata is possible through the Firecracker API server. The metadata inserted in MMDS must be any valid JSON. A user can create or update the MMDS data store before the microVM is started or during its operation. To insert metadata into MMDS, an HTTP `PUT` request to the `/mmds` resource has to be issued. This request must have a payload with metadata structured in format. To replace existing metadata, a subsequent HTTP `PUT` request to the `/mmds` resource must be issued, using as a payload the new metadata. A complete description of metadata insertion firecracker API can be found in the . An example of an API request for inserting metadata is provided below: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT \"http://localhost/mmds\" \\ -H \"Content-Type: application/json\" \\ -d '{ \"latest\": { \"meta-data\": { \"ami-id\": \"ami-12345678\", \"reservation-id\": \"r-fea54097\", \"local-hostname\": \"ip-10-251-50-12.ec2.internal\", \"public-hostname\": \"ec2-203-0-113-25.compute-1.amazonaws.com\", \"network\": { \"interfaces\": { \"macs\": { \"02:29:96:8f:6a:2d\": { \"device-number\": \"13345342\", \"local-hostname\": \"localhost\", \"subnet-id\": \"subnet-be9b61d\" } } } } } } }' ``` To partially update existing metadata, an HTTP `PATCH` request to the `/mmds` resource has to be issued, using as a payload the metadata patch, as functionality describes. A complete description of updating metadata Firecracker API can be found in the . An example API for how to update existing metadata is offered below: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PATCH \"http://localhost/mmds\" \\ -H \"Content-Type: application/json\" \\ -d '{ \"latest\": { \"meta-data\": { \"ami-id\": \"ami-87654321\", \"reservation-id\": \"r-79054aef\", } } }' ``` MicroVM metadata can be retrieved both from host and guest operating systems. For the scope of this chapter, let's assume the data store content is the JSON below: ```json { \"latest\": { \"meta-data\": { \"ami-id\": \"ami-87654321\", \"reservation-id\": \"r-79054aef\" } } } ``` To retrieve existing MMDS metadata from host operating system, an HTTP `GET` request to the `/mmds` resource must be issued. The HTTP response returns the existing metadata, as a JSON formatted text. A complete description of retrieving metadata Firecracker API can be found in the . Below you can see how to retrieve metadata from the host: ```bash curl -s --unix-socket /tmp/firecracker.socket http://localhost/mmds ``` Output: ```json { \"latest\": { \"meta-data\": { \"ami-id\": \"ami-87654321\", \"reservation-id\": \"r-79054aef\" } } } ``` Accessing the contents of the metadata store from the guest operating system can be done using one of the following methods: `V1`: simple request/response method (deprecated) `V2`: session-oriented method Version 1 is deprecated and will be removed in the next major version change. Version 2 should be used instead. To retrieve existing MMDS metadata using MMDS version 1, an HTTP `GET` request must be"
},
{
"data": "The requested resource can be referenced by its corresponding , which is also the path of the MMDS request. The HTTP response content will contain the referenced metadata resource. The only HTTP method supported by MMDS version 1 is `GET`. Requests containing any other HTTP method will receive 405 Method Not Allowed error. ```bash MMDSIPV4ADDR=169.254.170.2 RESOURCEPOINTEROBJ=latest/meta-data curl -s \"http://${MMDSIPV4ADDR}/${RESOURCEPOINTEROBJ}\" ``` Similar to , MMDS version 2 (`V2`) is a session oriented method, which makes use of a session token in order to allow fetching metadata contents. The session must start with an HTTP `PUT` request that generates the session token. In order to be successful, the request must respect the following constraints: must be directed towards `/latest/api/token` path must contain a `X-metadata-token-ttl-seconds` header specifying the token lifetime in seconds. The value cannot be lower than 1 or greater than 21600 (6 hours). must not contain a `X-Forwarded-For` header. ```bash MMDSIPV4ADDR=169.254.170.2 TOKEN=`curl -X PUT \"http://${MMDSIPV4ADDR}/latest/api/token\" \\ -H \"X-metadata-token-ttl-seconds: 21600\"` ``` The HTTP response from MMDS is a plaintext containing the session token. During the duration specified by the token's time to live value, all subsequent `GET` requests must specify the session token through the `X-metadata-token` header in order to fetch data from MMDS. ```bash MMDSIPV4ADDR=169.254.170.2 RESOURCEPOINTEROBJ=latest/meta-data curl -s \"http://${MMDSIPV4ADDR}/${RESOURCEPOINTEROBJ}\" \\ -H \"X-metadata-token: ${TOKEN}\" ``` After the token expires, it becomes unusable and a new session token must be issued. The data store is not persisted across snapshots, in order to avoid leaking vm-specific information that may need to be reseeded into the data store for a new clone. The MMDS version, network stack configuration and IP address used for accessing the service are persisted across snapshot-restore. If the targeted snapshot version does not support Mmds Version 2, it will not be persisted in the snapshot (the clone will use the default, V1). Similarly, if a snapshotted Vm state contains the Mmds version but the Firecracker version used for restoring does not support persisting the version, the default will be used. The response format can be JSON or IMDS. The IMDS documentation can be found . The output format can be selected by specifying the optional `Accept` header. Using `Accept: application/json` will format the output to JSON, while using `Accept: plain/text` or not specifying this optional header at all will format the output to IMDS. Retrieving MMDS resources in IMDS format, other than JSON `string` and `object` types, is not supported. Below is an example on how to retrieve the `latest/meta-data` resource in JSON format: ```bash MMDSIPV4ADDR=169.254.170.2 RESOURCEPOINTEROBJ=latest/meta-data curl -s -H \"Accept: application/json\" \"http://${MMDSIPV4ADDR}/${RESOURCEPOINTEROBJ}\" ``` Output: ```json { \"ami-id\": \"ami-87654321\", \"reservation-id\": \"r-79054aef\" } ``` Retrieving the `latest/meta-data/ami-id` resource in JSON format: ```bash"
},
{
"data": "RESOURCEPOINTERSTR=latest/meta-data/ami-id curl -s -H \"Accept: application/json\" \"http://${MMDSIPV4ADDR}/${RESOURCEPOINTERSTR}\" ``` Output: ```json \"ami-87654321\" ``` Retrieving the `latest` resource in IMDS format: ```bash MMDSIPV4ADDR=169.254.170.2 RESOURCE_POINTER=latest curl -s \"http://${MMDSIPV4ADDR}/${RESOURCE_POINTER}\" ``` Output: ```text meta-data/ ``` Retrieving the `latest/meta-data/` resource in IMDS format: ```bash MMDSIPV4ADDR=169.254.170.2 RESOURCE_POINTER=latest/meta-data curl -s \"http://${MMDSIPV4ADDR}/${RESOURCE_POINTER}\" ``` Output: ```text ami-id reservation-id ``` Retrieving the `latest/meta-data/ami-id` resource in IMDS format: ```bash MMDSIPV4ADDR=169.254.170.2 RESOURCE_POINTER=latest/meta-data/ami-id curl -s \"http://${MMDSIPV4ADDR}/${RESOURCE_POINTER}\" ``` Output: ```text ami-87654321 ``` 200 - `Ok` The request was successfully processed and a response was successfully formed. 400 - `Bad Request` The request was malformed. 401 - `Unauthorized` Only when using MMDS `V2`. The HTTP request either lacks the session token, or the token specified is invalid. A token is invalid if it was not generated using an HTTP `PUT` request or if it has expired. 404 - `Not Found` The requested resource can not be found in the MMDS data store. 405 - `Method Not Allowed` The HTTP request uses a not allowed HTTP method and a response with the `Allow` header was formed. When using MMDS `V1`, this is returned for any HTTP method other than `GET`. When MMDS `V2` is configured, the only accepted HTTP methods are `PUT` and `GET`. 501 - `Not Implemented` The requested HTTP functionality is not supported by MMDS or the requested resource is not supported in IMDS format. For this example, the guest expects to find some sort of credentials (say, a secret access key) by issuing a `GET` request to `http://169.254.169.254/latest/meta-data/credentials/secret-key`. Most similar use cases will encompass the following sequence of steps: Some agent running on the host sends a `PUT` request with the initial contents of the MMDS, using the Firecracker API. This most likely takes place before the microVM starts running, but may also happen at a later time. Guest MMDS requests which arrive prior to contents being available receive a NotFound response. The contents are saved to MMDS. The guest sends a `GET` request for the secret key, which is intercepted by MMDS. MMDS processes the request and sends back an HTTP response with the ensembled secret key as a JSON string. After a while, the host agent decides to rotate the secret key. It does so by updating the data store with a new value. This can be done via a `PUT` request to the `/mmds` API resource, which replaces everything, or with a `PATCH` request that only touches the desired key. This effectively triggers the first two steps again. The guest reads the new secret key, going one more time through the last three steps. This can happen after a notification from the host agent, or discovered via periodic polling, or some other mechanism. Since access to the data store is thread safe, the guest can only receive either the old version, or the new version of the key, and not some intermediate state caused by the update."
}
] |
{
"category": "Runtime",
"file_name": "mmds-user-guide.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Starting, Stopping and Removing Peers menu_order: 40 search_type: Documentation You may wish to `weave stop` and re-launch to change some config or to upgrade to a new version. Provided that the underlying protocol hasn't changed, Weave Net picks up where it left off and learns from peers in the network which address ranges it was previously using. If, however, you run `weave reset` this removes the peer from the network so if Weave Net is run again on that node it will start from scratch. For failed peers, the `weave rmpeer` command can be invoked to permanently remove the ranges allocated to said peers. This allows other peers to allocate IPs in the ranges previously owned by the removed peers, and as such should be used with extreme caution: if the removed peers had transferred some range of IP addresses to other peers but this is not known to the whole network, or if some of them later rejoin the Weave network, the same IP address may be allocated twice. Assume you had started the three peers in the , and then host3 caught fire, you can go to one of the other hosts and run: host1$ weave rmpeer host3 524288 IPs taken over from host3 Weave Net takes all the IP address ranges owned by host3 and transfers them to be owned by host1. The name \"host3\" is resolved via the 'nickname' feature of Weave Net, which defaults to the local host name. Alternatively, you can supply a peer name as shown in `weave status`. Do not invoke `weave rmpeer` for the same peer on more than one host. The removed peer's address range cannot be left dangling and is therefore reassigned to the peer on which `weave rmpeer` was run. Consequently, if you run `weave rmpeer` for the same peer on more than one host, the removed peer's address range will be owned by multiple peers. Once the peers detect this inconsistency, they log the error and drop the connection that supplied the inconsistent data. The rest of the peers will carry on with their view of the world, but the network will not function correctly. Depending on the timing of the `rmpeer` and the communication between peers, several peer cliques may form - groups of peers that are talking to each other, but repeatedly drop attempted connections with peers in other cliques. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "stop-remove-peers-ipam.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: CSI provisioner and driver !!! attention This feature is experimental and will not support upgrades to future versions. For this section, we will refer to Rook's deployment examples in the directory. The Ceph CSI NFS provisioner and driver require additional RBAC to operate. Apply the `deploy/examples/csi/nfs/rbac.yaml` manifest to deploy the additional resources. Rook will only deploy the Ceph CSI NFS provisioner and driver components when the `ROOKCSIENABLE_NFS` config is set to `\"true\"` in the `rook-ceph-operator-config` configmap. Change the value in your manifest, or patch the resource as below. ```console kubectl --namespace rook-ceph patch configmap rook-ceph-operator-config --type merge --patch '{\"data\":{\"ROOKCSIENABLE_NFS\": \"true\"}}' ``` !!! note The rook-ceph operator Helm chart will deploy the required RBAC and enable the driver components if `csi.nfs.enabled` is set to `true`. In order to create NFS exports via the CSI driver, you must first create a CephFilesystem to serve as the underlying storage for the exports, and you must create a CephNFS to run an NFS server that will expose the exports. RGWs cannot be used for the CSI driver. From the examples, `filesystem.yaml` creates a CephFilesystem called `myfs`, and `nfs.yaml` creates an NFS server called `my-nfs`. You may need to enable or disable the Ceph orchestrator. You must also create a storage class. Ceph CSI is designed to support any arbitrary Ceph cluster, but we are focused here only on Ceph clusters deployed by Rook. Let's take a look at a portion of the example storage class found at `deploy/examples/csi/nfs/storageclass.yaml` and break down how the values are determined. ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-nfs provisioner: rook-ceph.nfs.csi.ceph.com # [1] parameters: nfsCluster: my-nfs # [2] server: rook-ceph-nfs-my-nfs-a # [3] clusterID: rook-ceph # [4] fsName: myfs # [5] pool: myfs-replicated # [6] csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph ``` `provisioner`: rook-ceph.nfs.csi.ceph.com because rook-ceph is the namespace where the CephCluster is installed `nfsCluster`: my-nfs because this is the name of the CephNFS `server`: rook-ceph-nfs-my-nfs-a because Rook creates this Kubernetes Service for the CephNFS named my-nfs `clusterID`: rook-ceph because this is the namespace where the CephCluster is installed `fsName`: myfs because this is the name of the CephFilesystem used to back the NFS exports `pool`: myfs-replicated because myfs is the name of the CephFilesystem defined in `fsName` and because replicated is the name of a data pool defined in the CephFilesystem `csi.storage.k8s.io/*`: note that these values are shared with the Ceph CSI CephFS provisioner See"
},
{
"data": "for an example of how to create a PVC that will create an NFS export. The export will be created and a PV created for the PVC immediately, even without a Pod to mount the PVC. See `deploy/examples/csi/nfs/pod.yaml` for an example of how a PVC can be connected to an application pod. After a PVC is created successfully, the `share` parameter set on the resulting PV contains the `share` path which can be used as the export path when . In the example below `/0001-0009-rook-ceph-0000000000000001-55c910f9-a1af-11ed-9772-1a471870b2f5` is the export path. ```console $ kubectl get pv pvc-b559f225-de79-451b-a327-3dbec1f95a1c -o jsonpath='{.spec.csi.volumeAttributes}' /0001-0009-rook-ceph-0000000000000001-55c910f9-a1af-11ed-9772-1a471870b2f5 ``` NFS export PVCs can be snapshotted and later restored to new PVCs. First, create a VolumeSnapshotClass as in the example . The `csi.storage.k8s.io/snapshotter-secret-name` parameter should reference the name of the secret created for the cephfsplugin . ```console kubectl create -f deploy/examples/csi/nfs/snapshotclass.yaml ``` In , `volumeSnapshotClassName` should be the name of the VolumeSnapshotClass previously created. The `persistentVolumeClaimName` should be the name of the PVC which is already created by the NFS CSI driver. ```console kubectl create -f deploy/examples/csi/nfs/snapshot.yaml ``` ```console $ kubectl get volumesnapshotclass NAME DRIVER DELETIONPOLICY AGE csi-nfslugin-snapclass rook-ceph.nfs.csi.ceph.com Delete 3h55m ``` ```console $ kubectl get volumesnapshot NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE nfs-pvc-snapshot true nfs-pvc 1Gi csi-nfsplugin-snapclass snapcontent-34476204-a14a-4d59-bfbc-2bbba695652c 3h50m 3h51m ``` The snapshot will be ready to restore to a new PVC when `READYTOUSE` field of the volumesnapshot is set to true. In , `dataSource` name should be the name of the VolumeSnapshot previously created. The `dataSource` kind should be \"VolumeSnapshot\". Create a new PVC from the snapshot. ```console kubectl create -f deploy/examples/csi/nfs/pvc-restore.yaml ``` ```console $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-pvc Bound pvc-74734901-577a-11e9-b34f-525400581048 1Gi RWX rook-nfs 55m nfs-pvc-restore Bound pvc-95308c75-6c93-4928-a551-6b5137192209 1Gi RWX rook-nfs 34s ``` To clean your cluster of the resources created by this example, run the following: ```console kubectl delete -f deploy/examples/csi/nfs/pvc-restore.yaml kubectl delete -f deploy/examples/csi/nfs/snapshot.yaml kubectl delete -f deploy/examples/csi/nfs/snapshotclass.yaml ``` In , `dataSource` should be the name of the PVC which is already created by NFS CSI driver. The `dataSource` kind should be \"PersistentVolumeClaim\" and also storageclass should be same as the source PVC. Create a new PVC Clone from the PVC as in the example . ```console kubectl create -f deploy/examples/csi/nfs/pvc-clone.yaml ``` ```console kubectl get pvc ``` ```console NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-pvc Bound pvc-1ea51547-a88b-4ab0-8b4a-812caeaf025d 1Gi RWX rook-nfs 39m nfs-pvc-clone Bound pvc-b575bc35-d521-4c41-b4f9-1d733cd28fdf 1Gi RWX rook-nfs 8s ``` To clean your cluster of the resources created by this example, run the following: ```console kubectl delete -f deploy/examples/csi/nfs/pvc-clone.yaml ```"
}
] |
{
"category": "Runtime",
"file_name": "nfs-csi-driver.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Datenlord is using Grafana dashboard to show all the metrics, which can be configured dynamically to display different information. Currently Datenlord has two dashboards. `Node resources` and `Application and Pod resources`. Both of them have 9 panels currently. All the nodes resources information is displayed in this dashboard, including memory usage, CPU usage, network information, etc. The default data is all nodes resources, you can filter the data to a specific node by clicking the top `Node` button. Currently it has 9 panels: Node memory usage rate, Node memory usage, Pods number, Node storage information, Namespace details, Node CPU usage rate, Node CPU usage, network and node information details. To add a new panel, just click the top side `Add Panel` button Then choose the `Pramethues` as the Data source and input the query to collect the metrics data. All the application, instance and Pod information are displayed in this dashboard. To display a specific application's resource information, you can use the top side filter panel: When you choose a specific application, all the following metrics (tables, charts, guages) will be filtered to only display the resources for this application."
}
] |
{
"category": "Runtime",
"file_name": "datenlord_dashboard.md",
"project_name": "DatenLord",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "In this chapter, we will cover two demo apps. We will build them from Rust source code, build OCI images around them, and then publish the images to Docker Hub. If you have not done so, please Next, explore the examples Since we have already built and published those demo apps on Docker Hub, you could also just go straight to the container runtime sections to use these images."
}
] |
{
"category": "Runtime",
"file_name": "demo.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Here are templates for jenkins-job-builder. Template is a template for jenkins job which basically runs pybot command to run one file with robot tests e.g. ``` pybot --loglevel ${LOGLEVEL} -v AGENTVPPIMAGENAME:${IMAGENAME} -v AGENTIMAGENAME:${IMAGENAME} -v DOCKERHOSTIP:${DOCKERHOSTIP} -v VARIABLES:${VARIABLESFILE} /root/vpp-agent/tests/robot/suites/crud/tap_crud.robot ``` Note: the variables enclosed in curly braces will be replaced in time of execution of jenkins job Note: This procedure suppose that some preliminary steps are done - preparation of repository to the folder /root. This is done by other jenkins job (. This is done to prevent recurrent downloading of repository for each single robot file. Template is a template which amasses the jobs to the groups to run them together. Project ligato/vpp-agent contains Script will collect all these robot test to the file `listofallrobottests2` sort them will run python script which will prepare the data for templates (outputed to `p.yaml`) Example of p.yaml ``` project: name: ligato/vpp-agent all tests on arm64 jobs: '05{inodeoffolder}{nameoftest}job': HOWTOBUILD_INCLTAGPRESENT: ' ' HOWTOBUILD_EXCLTAGPRESENT: ' ' inodeoffolder: API dateofjjb_generation: 2019-02-22 05:45:55 nameoftest: bfd_api pathtotest: /root/vpp-agent/tests/robot/suites/api/BFD/bfd_api.robot localvariablesfile: arm64_local arm64_node: 147.75.72.194 ... ... '05{inodeoffolder}{nameoftest}job': HOWTOBUILD_INCLTAGPRESENT: ' ' HOWTOBUILD_EXCLTAGPRESENT: ' ' inodeoffolder: CRUDIPV6_ dateofjjb_generation: 2019-02-22 05:45:55 nameoftest: loopback_crudIPv6 pathtotest: /root/vpp-agent/tests/robot/suites/crudIPv6/loopback_crudIPv6.robot localvariablesfile: arm64_local arm64_node: 147.75.72.194 ... ... '04{nameofpipeline}_pipeline': nameofpipeline: IPv4arm64node_I dateofjjb_generation: 2019-02-22 05:45:55 localvariablesfile: 'arm64_local' arm64_node: '147.75.72.194' listofjenkins_jobs: |- stage 'test' build job: '05CRUDaclcrudjob', parameters: [string(name: 'HOWTOBUILD', value: \"${{HOWTOBUILD}}\"), string(name: 'LOGLEVEL', value: \"${{LOGLEVEL}}\"), string(name: 'VARIABLES_FIL E', value: \"${{VARIABLESFILE}}\"), string(name: 'DOCKERHOSTIP', value: \"${{DOCKERHOSTIP}}\"), string(name: 'IMAGENAME', value: \"${{IMAGE_NAME}}\")], propagate: false build job: '05CRUDafpacketcrudjob', parameters: [string(name: 'HOWTOBUILD', value: \"${{HOWTOBUILD}}\"), string(name: 'LOGLEVEL', value: \"${{LOGLEVEL}}\"), string(name: 'VARIABLE SFILE', value: \"${{VARIABLESFILE}}\"), string(name: 'DOCKERHOSTIP', value: \"${{DOCKERHOSTIP}}\"), string(name: 'IMAGENAME', value: \"${{IMAGENAME}}\")], propagate: false build job: '05CRUDappnamespacescrud_job', parameters: [string(name: 'HOWTOBUILD', value: \"${{HOWTOBUILD}}\"), string(name: 'LOGLEVEL', value: \"${{LOGLEVEL}}\"), string(name: 'VA RIABLESFILE', value: \"${{VARIABLESFILE}}\"), string(name: 'DOCKERHOSTIP', value: \"${{DOCKERHOSTIP}}\"), string(name: 'IMAGENAME', value: \"${{IMAGENAME}}\")], propagate: false build job: '05CRUDarpcrudjob', parameters: [string(name: 'HOWTOBUILD', value: \"${{HOWTOBUILD}}\"), string(name: 'LOGLEVEL', value: \"${{LOGLEVEL}}\"), string(name: 'VARIABLES_FIL E', value: \"${{VARIABLESFILE}}\"), string(name: 'DOCKERHOSTIP', value: \"${{DOCKERHOSTIP}}\"), string(name: 'IMAGENAME', value: \"${{IMAGE_NAME}}\")], propagate: false build job: '05CRUDbdcrudjob', parameters: [string(name: 'HOWTOBUILD', value: \"${{HOWTOBUILD}}\"), string(name: 'LOGLEVEL', value: \"${{LOGLEVEL}}\"), string(name: 'VARIABLES_FILE ', value: 
\"${{VARIABLESFILE}}\"), string(name: 'DOCKERHOSTIP', value: \"${{DOCKERHOSTIP}}\"), string(name: 'IMAGENAME', value: \"${{IMAGE_NAME}}\")], propagate: false ... ... ``` Prepared templates and data will be merged by jenkins-job-builder to deploy jenkins jobs defaults.yaml is a file with setting for all jobs"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Ligato",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "To contribute code, please read the . Please note that the applies to community forums as well as technical participation. The project maintains two mailing lists: for accouncements and general discussion. for development and contribution. We also have a . We'd love to hear from you! The community calendar shows upcoming public meetings and opportunities to collaborate or discuss the project. Meetings are planned and announced ahead of time via the mailing list. These meetings are public: anyone can join. <iframe src=\"https://calendar.google.com/calendar/b/1/embed?showTitle=0&height=600&wkst=1&bgcolor=%23FFFFFF&src=bd6f4k210u3ukmlj9b8vl053fk%40group.calendar.google.com&color=%23AB8B00&ctz=America%2FLos_Angeles\" style=\"border-width:0\" width=\"600\" height=\"400\" frameborder=\"0\" scrolling=\"no\"></iframe>"
}
] |
{
"category": "Runtime",
"file_name": "community.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Velero is an open source tool with a growing community devoted to safe backup and restore, disaster recovery, and data migration of Kubernetes resources and persistent volumes. The community has adopted this security disclosure and response policy to ensure we responsibly handle critical issues. The Velero project maintains the following , , and . Please refer to these for release and related details. Only the most recent version of Velero is supported. Each includes information about upgrading to the latest version. Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Velero privately, to minimize attacks against current users of Velero before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project. If you know of a publicly disclosed security vulnerability for Velero, please IMMEDIATELY contact the VMware Security Team ([email protected]). IMPORTANT: Do not file public issues on GitHub for security vulnerabilities To report a vulnerability or a security-related issue, please contact the VMware email address with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use instead. Provide a descriptive subject line and in the body of the email include the following information: Basic identity information, such as your name and your affiliation or company. Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us). Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the VMware Security Team can reproduce it. How the vulnerability affects Velero usage and an estimation of the attack surface, if there is one. List other projects or dependencies that were used in conjunction with Velero to produce the vulnerability. When you think Velero has a potential security vulnerability. When you suspect a potential vulnerability but you are unsure that it impacts Velero. When you know of or suspect a potential vulnerability on another project that is used by Velero. The VMware Security Team will respond to vulnerability reports as follows: The Security Team will investigate the vulnerability and determine its effects and criticality. If the issue is not deemed to be a vulnerability, the Security Team will follow up with a detailed reason for rejection. The Security Team will initiate a conversation with the reporter within 3 business"
},
{
"data": "If a vulnerability is acknowledged and the timeline for a fix is determined, the Security Team will work on a plan to communicate with the appropriate community, including identifying mitigating steps that affected users can take to protect themselves until the fix is rolled out. The Security Team will also create a using the . The Security Team makes the final call on the calculated CVSS; it is better to move quickly than making the CVSS perfect. Issues may also be reported to using this . The CVE will initially be set to private. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix. The Security Team will provide early disclosure of the vulnerability by emailing the mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Velero team. See the section Early Disclosure to Velero Distributors List for details about how to join this mailing list. A public disclosure date is negotiated by the VMware SecurityTeam, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if its already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the report date for the public disclosure date to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Velero, we will follow the Public Disclosure Process. The Security Team publishes a to the Velero community via GitHub. In most cases, additional communication via Slack, Twitter, mailing lists, blog and other channels will assist in educating Velero users and rolling out the patched release to affected users. The Security Team will also publish any mitigating steps users can take until the fix can be applied to their Velero instances. Velero distributors will handle creating and publishing their own security advisories. Use [email protected] to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure. Join the mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Velero distributors or vendors can apply to join this"
},
{
"data": "The private list is intended to be used primarily to provide actionable information to multiple distributor projects at once. This list is not intended to inform individuals about security issues. To be eligible to join the mailing list, you should: Be an active distributor of Velero. Have a user base that is not limited to your own organization. Have a publicly verifiable track record up to the present day of fixing security issues. Not be a downstream or rebuild of another distributor. Be a participant and active contributor in the Velero community. Accept the Embargo Policy that is outlined below. Have someone who is already on the list vouch for the person requesting membership on behalf of your distribution. The terms and conditions of the Embargo Policy apply to all members of this mailing list. A request for membership represents your acceptance to the terms and conditions of the Embargo Policy. The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users. Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis. In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team ([email protected]) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list. Send new membership requests to [email protected]. In the body of your request please specify how you qualify for membership and fulfill each criterion listed in the Membership Criteria section above. We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner. Note that we do not currently consider the default settings for Velero to be secure-by-default. It is necessary for operators to explicitly configure settings, role based access control, and other resource related features in Velero to provide a hardened Velero environment. We will not act on any security disclosure that relates to a lack of safe defaults. Over time, we will work towards improved safe-by-default configuration, taking into account backwards compatibility."
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Use IBM Cloud Object Storage as Velero's storage destination.\" layout: docs You can deploy Velero on IBM or clouds, or even on any other Kubernetes cluster, but anyway you can use IBM Cloud Object Store as a destination for Velero's backups. To set up IBM Cloud Object Storage (COS) as Velero's destination, you: Download an official release of Velero Create your COS instance Create an S3 bucket Define a service that can store data in the bucket Configure and start the Velero server Download the tarball for your client platform. _We strongly recommend that you use an of Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!_ Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` The directory you extracted is called the \"Velero directory\" in subsequent steps. Move the `velero` binary from the Velero directory to somewhere in your PATH. If you dont have a COS instance, you can create a new one, according to the detailed instructions in . Velero requires an object storage bucket to store backups in. See instructions in . The process of creating service credentials is described in . Several comments: The Velero service will write its backup into the bucket, so it requires the Writer access role. Velero uses an AWS S3 compatible API. Which means it authenticates using a signature created from a pair of access and secret keysa set of HMAC credentials. You can create these HMAC credentials by specifying `{HMAC:true}` as an optional inline parameter. See guide. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `coshmackeys` entry there are `accesskeyid` and `secretaccesskey`. Use them in the next step. Create a Velero-specific credentials file (`credentials-velero`) in your local directory: ``` [default] awsaccesskeyid=<ACCESSKEY_ID> awssecretaccesskey=<SECRETACCESS_KEY> ``` Where the access key id and secret are the values that you got above. Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it. ```bash velero install \\ --provider aws \\ --bucket <YOUR_BUCKET> \\ --secret-file ./credentials-velero \\ --use-volume-snapshots=false \\ --backup-location-config region=<YOURREGION>,s3ForcePathStyle=\"true\",s3Url=<YOURURLACCESSPOINT> ``` Velero does not have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled. Additionally, you can specify `--use-restic` to enable , and `--wait` to wait for the deployment to be ready. (Optional) Specify for the Velero/restic pods. Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud: ```bash kubectl -n velero delete volumesnapshotlocation.velero.io default ``` For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation. If you run the nginx example, in file `examples/nginx-app/with-pv.yaml`: Uncomment `storageClassName: <YOURSTORAGECLASS_NAME>` and replace with your `StorageClass` name."
}
] |
{
"category": "Runtime",
"file_name": "ibm-config.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Currently, Firecracker supports uncompressed ELF kernel images on x86_64 while on aarch64 it supports PE formatted images. Here's a quick step-by-step guide to building your own kernel that Firecracker can boot: Get the Linux source code: ```bash git clone https://github.com/torvalds/linux.git linux.git cd linux.git ``` Check out the Linux version you want to build (e.g. we'll be using v4.20 here): ```bash git checkout v4.20 ``` You will need to configure your Linux build. You can start from our recommended by copying the relevant one to `.config` (under the Linux sources dir). You can make interactive config adjustments using: ```bash make menuconfig ``` Note: there are many ways of building a kernel config file, other than `menuconfig`. You are free to use whichever one you choose. Build the kernel image: ```bash arch=$(uname -m) if [ \"$arch\" = \"x86_64\" ]; then make vmlinux elif [ \"$arch\" = \"aarch64\" ]; then make Image fi ``` Upon a successful build, you can find the kernel image under `./vmlinux` (for x86) or `./arch/arm64/boot/Image` (for aarch64). For a list of currently supported kernel versions, check out the . The kernel images used in our CI to test Firecracker's features are obtained by using the recipe inside devtool: ```bash config=\"resources/guestconfigs/microvm-kernel-x8664-4.14.config\" ./tools/devtool build_kernel -c $config -n 8 ``` or ```bash config=\"resources/guest_configs/microvm-kernel-arm64-4.14.config\" ./tools/devtool build_kernel -c $config -n 8 ``` on an aarch64 machine. A rootfs image is just a file system image, that hosts at least an init system. For instance, our getting started guide uses an ext4 filesystem image. Note that, whichever file system you choose to use, support for it will have to be compiled into the kernel, so it can be mounted at boot time. In order to obtain an ext4 image that you can use with Firecracker, you have the following options: Prepare a properly-sized file. We'll use 50MiB here, but this depends on how much data you'll want to fit inside: ```bash dd if=/dev/zero"
},
{
"data": "bs=1M count=50 ``` Create an empty file system on the file you created: ```bash mkfs.ext4 rootfs.ext4 ``` You now have an empty EXT4 image in `rootfs.ext4`, so let's prepare to populate it. First, you'll need to mount this new file system, so you can easily access its contents: ```bash mkdir /tmp/my-rootfs sudo mount rootfs.ext4 /tmp/my-rootfs ``` The minimal init system would be just an ELF binary, placed at `/sbin/init`. The final step in the Linux boot process executes `/sbin/init` and expects it to never exit. More complex init systems build on top of this, providing service configuration files, startup / shutdown scripts for various services, and many other features. For the sake of simplicity, let's set up an Alpine-based rootfs, with OpenRC as an init system. To that end, we'll use the official Docker image for Alpine Linux: First, let's start the Alpine container, bind-mounting the EXT4 image created earlier, to `/my-rootfs`: ```bash docker run -it --rm -v /tmp/my-rootfs:/my-rootfs alpine ``` Then, inside the container, install the OpenRC init system, and some basic tools: ```bash apk add openrc apk add util-linux ``` And set up userspace init (still inside the container shell): ```bash ln -s agetty /etc/init.d/agetty.ttyS0 echo ttyS0 > /etc/securetty rc-update add agetty.ttyS0 default rc-update add devfs boot rc-update add procfs boot rc-update add sysfs boot for d in bin etc lib root sbin usr; do tar c \"/$d\" | tar x -C /my-rootfs; done for dir in dev proc run sys var; do mkdir /my-rootfs/${dir}; done exit ``` Finally, unmount your rootfs image: ```bash sudo umount /tmp/my-rootfs ``` The disk images used in our CI to test Firecracker's features are obtained by using the recipe (in a Ubuntu 22.04 host): ```bash ./resources/rebuild.sh ``` The images resulting using this method are minimized Ubuntu 22.04. Feel free to adjust the script(s) to suit your use case. You should now have a kernel image (`vmlinux`) and a rootfs image (`rootfs.ext4`), that you can boot with Firecracker."
}
] |
{
"category": "Runtime",
"file_name": "rootfs-and-kernel-setup.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The source code developed under the OpenEBS Project is licensed under Apache 2.0. However, the OpenEBS project contains unmodified/modified subcomponents from other Open Source Projects with separate copyright notices and license terms. Your use of the source code for these subcomponents is subject to the terms and conditions as defined by those source projects. OpenEBS uses to perform license scan on all its projects. The links to the auto-generated FOSSA reports on the 3rd Party Software used by each of the OpenEBS project are listed below: | Component | Repo Link | License | Dependency and Notes ||||| | Storage Device Management - NDM | https://github.com/openebs/node-disk-manager | Apache 2.0 | | Storage Management - Maya | https://github.com/openebs/maya | Apache 2.0 | | Data Engine - Jiva | https://github.com/openebs/jiva | Apache 2.0 | | Data Engine - cStor Replica | https://github.com/openebs/libcstor | Apache 2.0 | | Data Engine - cStor Target | https://github.com/openebs/istgt | Apache 2.0 | | Data Engine - cStor (uZFS) | https://github.com/openebs/cstor | CDDL 1.0 |"
}
] |
{
"category": "Runtime",
"file_name": "NOTICE.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Copyright (c) 2017-2018, Sylabs, Inc. All rights reserved. Copyright (c) 2015-2017, Gregory M. Kurtzer. All rights reserved. Copyright (c) 2016-2017, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved."
}
] |
{
"category": "Runtime",
"file_name": "COPYRIGHT.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"File System Backup\" layout: docs Velero supports backing up and restoring Kubernetes volumes attached to pods from the file system of the volumes, called File System Backup (FSB shortly) or Pod Volume Backup. The data movement is fulfilled by using modules from free open-source backup tools to understand if it fits your use case. Velero allows you to take snapshots of persistent volumes as part of your backups if youre using one of the supported cloud providers block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks). It also provides a plugin model that enables anyone to implement additional object and block storage backends, outside the main Velero repository. If your storage supports CSI (Container Storage Interface) snapshots, Velero also allows you to take snapshots through CSI and then optionally move the snapshot data to a different storage location. Velero's File System Backup is an addition to the aforementioned snapshot approaches. Its pros and cons are listed below: Pros: It is capable of backing up and restoring almost any type of Kubernetes volume. Therefore, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir, local, or any other volume type that doesn't have a native snapshot concept, FSB might be for you. It is not tied to a specific storage platform, so you could save the backup data to a different storage platform from the one backing Kubernetes volumes, for example, a durable storage. Cons: It backs up data from the live file system, in which way the data is not captured at the same point in time, so is less consistent than the snapshot approaches. It access the file system from the mounted hostpath directory, so Velero Node Agent pods need to run as root user and even under privileged mode in some environments. NOTE: hostPath volumes are not supported, but the is supported. Understand how Velero performs . the latest Velero release. Kubernetes v1.16.0 or later are required. Velero's File System Backup requires the Kubernetes . Velero Node Agent is a Kubernetes daemonset that hosts FSB modules, i.e., restic, kopia uploader & repository. To install Node Agent, use the `--use-node-agent` flag in the `velero install` command. See the for more details on other flags for the install command. ``` velero install --use-node-agent ``` When using FSB on a storage that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation. At present, Velero FSB supports object storage as the backup storage only. Velero gets the parameters from the to compose the URL to the backup storage. Velero's known object storage providers are include here , for which, Velero pre-defines the endpoints; if you want to use a different backup storage, make sure it is S3 compatible and you provide the correct bucket name and endpoint in BackupStorageLocation. Alternatively, for Restic, you could set the `resticRepoPrefix` value in BackupStorageLocation. For example, on AWS, `resticRepoPrefix` is something like `s3:s3-us-west-2.amazonaws.com/bucket` (note that `resticRepoPrefix` doesn't work for Kopia). Velero handles the creation of the backup repo prefix in the backup storage, so make sure it is specified in BackupStorageLocation correctly. Velero creates one backup repo per"
},
{
"data": "For example, if backing up 2 namespaces, namespace1 and namespace2, using kopia repository on AWS S3, the full backup repo path for namespace1 would be `https://s3-us-west-2.amazonaws.com/bucket/kopia/ns1` and for namespace2 would be `https://s3-us-west-2.amazonaws.com/bucket/kopia/ns2`. There may be additional installation steps depending on the cloud provider plugin you are using. You should refer to the for the most up to date information. Note: Currently, Velero creates a secret named `velero-repo-credentials` in the velero install namespace, containing a default backup repository password. You can update the secret with your own password encoded as base64 prior to the first backup (i.e., FS Backup, data mover) targeting to the backup repository. The value of the key to update is ``` data: repository-password: <custom-password> ``` Backup repository is created during the first execution of backup targeting to it after installing Velero with node agent. If you update the secret password after the first backup which created the backup repository, then Velero will not be able to connect with the older backups. After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the node-agent DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, Nutanix, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure. RancherOS Update the host path for volumes in the node-agent DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`. ```yaml hostPath: path: /var/lib/kubelet/pods ``` to ```yaml hostPath: path: /opt/rke/var/lib/kubelet/pods ``` Nutanix Update the host path for volumes in the node-agent DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/var/nutanix/var/lib/kubelet`. ```yaml hostPath: path: /var/lib/kubelet/pods ``` to ```yaml hostPath: path: /var/nutanix/var/lib/kubelet ``` OpenShift To mount the correct hostpath to pods volumes, run the node-agent pod in `privileged` mode. Add the `velero` ServiceAccount to the `privileged` SCC: ``` oc adm policy add-scc-to-user privileged -z velero -n velero ``` Install Velero with the '--privileged-node-agent' option to request a privileged mode: ``` velero install --use-node-agent --privileged-node-agent ``` If node-agent is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can to relax the security in your cluster so that node-agent pods are allowed to use the hostPath volume plugin without granting them access to the `privileged` SCC. By default a userland openshift namespace will not schedule pods on all nodes in the cluster. To schedule on all nodes the namespace needs an annotation: ``` oc annotate namespace <velero namespace> openshift.io/node-selector=\"\" ``` This should be done before velero installation. Or the ds needs to be deleted and recreated: ``` oc get ds node-agent -o yaml -n <velero namespace> > ds.yaml oc annotate namespace <velero namespace> openshift.io/node-selector=\"\" oc create -n <velero namespace> -f ds.yaml ``` VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS) You need to enable the `Allow Privileged` option in your plan configuration so that Velero is able to mount the hostpath. 
The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods` ```yaml hostPath: path: /var/vcap/data/kubelet/pods ``` Velero supports two approaches of discovering pod volumes that need to be backed up using FSB: Opt-in approach: Where every pod containing a volume to be backed up using FSB must be annotated with the volume's name. Opt-out approach: Where all pod volumes are backed up using FSB, with the ability to opt-out any volumes that should not be backed"
},
{
"data": "The following sections provide more details on the two approaches. In this approach, Velero will back up all pod volumes using FSB with the exception of: Volumes mounting the default service account token, Kubernetes Secrets, and ConfigMaps Hostpath volumes It is possible to exclude volumes from being backed up using the `backup.velero.io/backup-volumes-excludes` annotation on the pod. Instructions to back up using this approach are as follows: Run the following command on each pod that contains volumes that should not be backed up using FSB ```bash kubectl -n YOURPODNAMESPACE annotate pod/YOURPODNAME backup.velero.io/backup-volumes-excludes=YOURVOLUMENAME1,YOURVOLUMENAME2,... ``` where the volume names are the names of the volumes in the pod spec. For example, in the following pod: ```yaml apiVersion: v1 kind: Pod metadata: name: app1 namespace: sample spec: containers: image: k8s.gcr.io/test-webserver name: test-webserver volumeMounts: name: pvc1-vm mountPath: /volume-1 name: pvc2-vm mountPath: /volume-2 volumes: name: pvc1-vm persistentVolumeClaim: claimName: pvc1 name: pvc2-vm claimName: pvc2 ``` to exclude FSB of volume `pvc1-vm`, you would run: ```bash kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm ``` Take a Velero backup: ```bash velero backup create BACKUPNAME --default-volumes-to-fs-backup OTHEROPTIONS ``` The above steps uses the opt-out approach on a per backup basis. Alternatively, this behavior may be enabled on all velero backups running the `velero install` command with the `--default-volumes-to-fs-backup` flag. Refer for details. When the backup completes, view information about the backups: ```bash velero backup describe YOURBACKUPNAME ``` ```bash kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOURBACKUPNAME -o yaml ``` Velero, by default, uses this approach to discover pod volumes that need to be backed up using FSB. Every pod containing a volume to be backed up using FSB must be annotated with the volume's name using the `backup.velero.io/backup-volumes` annotation. Instructions to back up using this approach are as follows: Run the following for each pod that contains a volume to back up: ```bash kubectl -n YOURPODNAMESPACE annotate pod/YOURPODNAME backup.velero.io/backup-volumes=YOURVOLUMENAME1,YOURVOLUMENAME2,... ``` where the volume names are the names of the volumes in the pod spec. For example, for the following pod: ```yaml apiVersion: v1 kind: Pod metadata: name: sample namespace: foo spec: containers: image: k8s.gcr.io/test-webserver name: test-webserver volumeMounts: name: pvc-volume mountPath: /volume-1 name: emptydir-volume mountPath: /volume-2 volumes: name: pvc-volume persistentVolumeClaim: claimName: test-volume-claim name: emptydir-volume emptyDir: {} ``` You'd run: ```bash kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume ``` This annotation can also be provided in a pod template spec if you use a controller to manage your pods. Take a Velero backup: ```bash velero backup create NAME OPTIONS... ``` When the backup completes, view information about the backups: ```bash velero backup describe YOURBACKUPNAME ``` ```bash kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOURBACKUPNAME -o yaml ``` Regardless of how volumes are discovered for backup using FSB, the process of restoring remains the same. Restore from your Velero backup: ```bash velero restore create --from-backup BACKUP_NAME OPTIONS... 
``` When the restore completes, view information about your pod volume restores: ```bash velero restore describe YOURRESTORENAME ``` ```bash kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOURRESTORENAME -o yaml ``` `hostPath` volumes are not supported. are supported. At present, Velero uses a static, common encryption key for all backup repositories it creates. This means that anyone who has access to your backup storage can decrypt your backup data. Make sure that you limit access to the backup storage appropriately. An incremental backup chain will be maintained across pod reschedules for"
},
{
"data": "However, for pod volumes that are not PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (for example, by a ReplicaSet/Deployment), the next backup of those volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod. Even though the backup data could be incrementally preserved, for a single file data, FSB leverages on deduplication to find the difference to be saved. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual difference is small. You may need to to make sure backups complete successfully for massive small files or large backup size cases, for more details refer to . Velero's File System Backup reads/writes data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, FSB can only backup volumes that are mounted by a pod and not directly from the PVC. For orphan PVC/PV pairs (without running pods), some Velero users overcame this limitation running a staging pod (i.e. a busybox or alpine container with an infinite sleep) to mount these PVC/PV pairs prior taking a Velero backup. Velero File System Backup expects volumes to be mounted under `<hostPath>/<pod UID>` (`hostPath` is configurable as mentioned in ). Some Kubernetes systems (i.e., ) don't mount volumes under the `<pod UID>` sub-dir, Velero File System Backup is not working with them. File system restores of the same pod won't start until all the volumes of the pod get bound, even though some of the volumes have been bound and ready for restore. An a result, if a pod has multiple volumes, while only part of the volumes are restored by file system restore, these file system restores won't start until the other volumes are restored completely by other restore types (i.e., , ), the file system restores won't happen concurrently with those other types of restores. Velero uses a helper init container when performing a FSB restore. By default, the image for this container is `velero/velero-restore-helper:<VERSION>`, where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with the alternate image. In addition, you can customize the resource requirements for the init container, should you need. The ConfigMap must look like the following: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: fs-restore-action-config namespace: velero labels: velero.io/plugin-config: \"\" velero.io/pod-volume-restore: RestoreItemAction data: image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG] cpuRequest: 200m memRequest: 128Mi cpuLimit: 200m memLimit: 128Mi secCtxRunAsUser: 1001 secCtxRunAsGroup: 999 secCtxAllowPrivilegeEscalation: false secCtx: | capabilities: drop: ALL add: [] allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 999 ``` Run the following checks: Are your Velero server and daemonset pods running? ```bash kubectl get pods -n velero ``` Does your backup repository exist, and is it ready? ```bash velero repo get velero repo get REPO_NAME -o yaml ``` Are there any errors in your Velero backup/restore? ```bash velero backup describe BACKUP_NAME velero backup logs BACKUP_NAME velero restore describe RESTORE_NAME velero restore logs RESTORE_NAME ``` What is the status of your pod volume backups/restores? 
```bash kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml kubectl -n velero get podvolumerestores -l"
},
{
"data": "-o yaml ``` Is there any useful information in the Velero server or daemon pod logs? ```bash kubectl -n velero logs deploy/velero kubectl -n velero logs DAEMONPODNAME ``` NOTE: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument to the container command in the deployment/daemonset pod template spec. Velero integrate Restic binary directly, so the operations are done by calling Restic commands: Run `restic init` command to initialize the Run `restic prune` command periodically to prune restic repository Run `restic backup` commands to backup pod volume data Run `restic restore` commands to restore pod volume data Velero integrate Kopia modules into Velero's code, primarily two modules: Kopia Uploader: Velero makes some wrap and isolation around it to create a generic file system uploader, which is used to backup pod volume data Kopia Repository: Velero integrates it with Velero's Unified Repository Interface, it is used to preserve the backup data and manage the backup storage For more details, refer to and Velero's Velero has three custom resource definitions and associated controllers: `BackupRepository` - represents/manages the lifecycle of Velero's backup repositories. Velero creates a backup repository per namespace when the first FSB backup/restore for a namespace is requested. The backup repository is backed by restic or kopia, the `BackupRepository` controller invokes restic or kopia internally, refer to and for details. You can see information about your Velero's backup repositories by running `velero repo get`. `PodVolumeBackup` - represents a FSB backup of a volume in a pod. The main Velero backup process creates one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. `PodVolumeBackup` is backed by restic or kopia, the controller invokes restic or kopia internally, refer to and for details. `PodVolumeRestore` - represents a FSB restore of a pod volume. The main Velero restore process creates one or more of these when it encounters a pod that has associated FSB backups. Each node in the cluster runs a controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods on that node. `PodVolumeRestore` is backed by restic or kopia, the controller invokes restic or kopia internally, refer to and for details. Velero's FSB supports two data movement paths, the restic path and the kopia path. Velero allows users to select between the two paths: For backup, the path is specified at the installation time through the `uploader-type` flag, the valid value is either `restic` or `kopia`, or default to `kopia` if the value is not specified. The selection is not allowed to be changed after the installation. For restore, the path is decided by the path used to back up the data, it is automatically selected. For example, if you've created a backup with restic path, then you reinstall Velero with `uploader-type=kopia`, when you create a restore from the backup, the restore still goes with restic path. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using"
},
{
"data": "When found, Velero first ensures a backup repository exists for the pod's namespace, by: checking if a `BackupRepository` custom resource already exists if not, creating a new one, and waiting for the `BackupRepository` controller to init/connect it Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation The main Velero process now waits for the `PodVolumeBackup` resources to complete or fail Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which: has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data finds the pod volume's subdirectory within the above volume based on the path selection, Velero invokes restic or kopia for backup updates the status of the custom resource to `Completed` or `Failed` As each `PodVolumeBackup` finishes, the main Velero process adds it to the Velero backup in a file named `<backup-name>-podvolumebackups.json.gz`. This file gets uploaded to object storage alongside the backup tarball. It will be used for restores, as seen in the next section. The main Velero restore process checks each existing `PodVolumeBackup` custom resource in the cluster to backup from. For each `PodVolumeBackup` found, Velero first ensures a backup repository exists for the pod's namespace, by: checking if a `BackupRepository` custom resource already exists if not, creating a new one, and waiting for the `BackupRepository` controller to connect it (note that in this case, the actual repository should already exist in backup storage, so the Velero controller will simply check it for integrity and make a location connection) Velero adds an init container to the pod, whose job is to wait for all FSB restores for the pod to complete (more on this shortly) Velero creates the pod, with the added init container, by submitting it to the Kubernetes API. Then, the Kubernetes scheduler schedules this pod to a worker node. If the pod fails to be scheduled for some reason (i.e. lack of cluster resources), the FSB restore will not be done. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which: has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data waits for the pod to be running the init container finds the pod volume's subdirectory within the above volume based on the path selection, Velero invokes restic or kopia for restore on success, writes a file into the pod volume, in a `.velero` subdirectory, whose name is the UID of the Velero restore that this pod volume restore is for updates the status of the custom resource to `Completed` or `Failed` The init container that was added to the pod is running a process that waits until it finds a file within each restored volume, under `.velero`, whose name is the UID of the Velero restore being run Once all such files are found, the init container's process terminates successfully and the pod moves on to running other init containers/the main containers. Velero won't restore a resource if a that resource is scaled to 0 and already exists in the cluster. If Velero restored the requested pods in this scenario, the Kubernetes reconciliation loops that manage resources would delete the running pods because its scaled to be 0. 
Velero will be able to restore once the resource is scaled up, and the pods are created and remain running. Velero does not provide a mechanism to detect persistent volume claims that are missing the File System Backup annotation. To solve this,"
}
] |
{
"category": "Runtime",
"file_name": "file-system-backup.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(instance-config)= The instance configuration consists of different categories: Instance properties : Instance properties are specified when the instance is created. They include, for example, the instance name and architecture. Some of the properties are read-only and cannot be changed after creation, while others can be updated by {ref}`setting their property value <instances-configure-properties>` or {ref}`editing the full instance configuration <instances-configure-edit>`. In the YAML configuration, properties are on the top level. See {ref}`instance-properties` for a reference of available instance properties. Instance options : Instance options are configuration options that are related directly to the instance. They include, for example, startup options, security settings, hardware limits, kernel modules, snapshots and user keys. These options can be specified as key/value pairs during instance creation (through the `--config key=value` flag). After creation, they can be configured with the and commands. In the YAML configuration, options are located under the `config` entry. See {ref}`instance-options` for a reference of available instance options, and {ref}`instances-configure-options` for instructions on how to configure the options. Instance devices : Instance devices are attached to an instance. They include, for example, network interfaces, mount points, USB and GPU devices. Devices are usually added after an instance is created with the command, but they can also be added to a profile or a YAML configuration file that is used to create an instance. Each type of device has its own specific set of options, referred to as instance device options. In the YAML configuration, devices are located under the `devices` entry. See {ref}`devices` for a reference of available devices and the corresponding instance device options, and {ref}`instances-configure-devices` for instructions on how to add and configure instance devices. ```{toctree} :maxdepth: 1 :hidden: ../reference/instance_properties.md ../reference/instance_options.md ../reference/devices.md ../reference/instance_units.md ```"
}
] |
{
"category": "Runtime",
"file_name": "instance_config.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "sidebar_position: 1 sidebar_label: \"K8s Storage\" Kubernetes has made several enhancements to support running Stateful Workloads by providing the required abstractions for Platform (or Cluster Administrators) and Application developers. The abstractions ensure that different types of file and block storage (whether ephemeral or persistent, local or remote) are available wherever a container is scheduled (including provisioning/creating, attaching, mounting, unmounting, detaching, and deleting of volumes), storage capacity management (container ephemeral storage usage, volume resizing, etc.), influencing scheduling of containers based on storage (data gravity, availability, etc.), and generic operations on storage (snapshotting, etc.). The most important Kubernetes Storage abstractions to be aware of for running Stateful workloads using HwameiStor are: - - - The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI third-party storage providers like HwameiStor can write and deploy plugins exposing new storage volumes like HwameiStor Local and Replicated Volumes in Kubernetes without ever having to touch the core Kubernetes code. When cluster administrators install HwameiStor, the required HwameiStor CSI driver components are installed into the Kubernetes cluster. ```csharp Prior to CSI, Kubernetes supported adding storage providers using out-of-tree provisioners referred to as external provisioners. And Kubernetes in-tree volumes pre-date the external provisioners. There is an ongoing effort in the Kubernetes community to deprecate in-tree volumes with CSI based volumes. ``` A StorageClass provides a way for administrators to describe the \"classes\" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. This concept is sometimes called \"profiles\" in other storage systems. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. The implementation of dynamic volume provisioning is based on the StorageClass abstraction. A cluster administrator can define as many StorageClass objects as needed, each specifying a volume plugin (aka provisioner) that provisions a volume and the set of parameters to pass to that provisioner when"
},
{
"data": "A cluster administrator can define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. This design also ensures that end users don't have to worry about the complexity and nuances of how storage is provisioned, but still have the ability to select from multiple storage options. When HwameiStor is installed, it ships with a couple of default storage classes that allow users to create either local (HwameiStor LocalVolume) or replicated (HwameiStor LocalVolumeReplica) volumes. The cluster administrator can enable the required storage engines and then create Storage Classes for the required Data Engines. PersistentVolumeClaim (PVC) is a users storage request that is served by a StorageClass offered by the cluster administrator. An application running on a container can request a certain type of storage. For example, a container can specify the size of storage it needs or the way it needs to access the data (read only, read/write, read-write many, etc.,). Beyond storage size and access mode, administrators create Storage Classes to provided PVs with custom properties, such as the type of disk (HDD vs. SSD), the level of performance, or the storage tier (regular or cold storage). The PersistentVolume(PV) is dynamically provisioned by the storage providers when users request for a PVC. PV contains the details on how the storage can be consumed by the container. Kubernetes and the Volume Drivers use the details in the PV to attach/detach the storage to the node where the container is running and mount/unmount storage to a container. HwameiStor Control Plane dynamically provisions HwameiStor Local and Replicated volumes and helps in creating the PV objects in the cluster. Kubernetes provides several built-in workload resources such as StatefulSets and Deployments that let application developers define an application running on Kubernetes. You can run a stateful application by creating a Kubernetes Deployment/Statefulset and connecting it to a PersistentVolume using a PersistentVolumeClaim. For example, you can create a MySQL Deployment YAML that references a PersistentVolumeClaim. The MySQL PersistentVolumeClaim referenced by the Deployment should be created with the requested size and StorageClass. Once the HwameiStor control plane provisions a PersistenceVolume for the required StorageClass and requested capacity, the claim is set as satisfied. Kubernetes will then mount the PersistentVolume and launch the MySQL Deployment."
}
] |
{
"category": "Runtime",
"file_name": "k8s_storage.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Note: If you just want to experience the deployment and basic functions of Curve, you do not need to compile Curve, please refer to . This document is only used to help you build the Curve code compilation environment, which is convenient for you to participate in the development, debugging and run tests of Curve. The following image and build procedures are currently only supported on x86 systems. To compile , please follow to package and compile the image. Currently the master branch does not support compiling and running on the arm system Recommend using Debian 10 or later versions of the operating system. Other operating systems have not been thoroughly tested. Method 1: Pull the docker image from the docker hub image library (recommended) ```bash docker pull opencurvedocker/curve-base:build-debian9 ``` Method 2: Build docker image manually Use the Dockerfile in the project directory to build. The command is as follows: ```bash docker build -t opencurvedocker/curve-base:build-debian9 ``` Note: The above operations are not recommended to be performed in the Curve project directory, otherwise the files in the current directory will be copied to the docker image when building the image. It is recommended to copy the Dockerfile to the newly created clean directory to build the docker image. ```bash git clone https://github.com/opencurve/curve.git git clone https://gitee.com/mirrors/curve.git cd curve docker run --rm -v $(pwd):/curve -w /curve -v ${HOME}:${HOME} --user $(id -u ${USER}):$(id -g ${USER}) -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro --privileged -it opencurvedocker/curve-base:build-debian9 bash make tar dep=1 compile curvebs and make tar package make deb dep=1 compile curvebs and make debian package make build stor=bs dep=1 make dep stor=bs && make build stor=bs make build stor=fs dep=1 make dep stor=fs && make build stor=fs ``` Note: `make tar` and `make deb` are used for compiling and packaging curve v2.0. They are no longer maintained after v2.0. Curve compilation depends on: | Dependency | Version | |:-- |:-- | | bazel | 4.2.2 | | gcc | Compatible version supporting C++11 | Other dependencies of Curve are managed by bazel and do not need to be installed separately. Note: The 4.* version of bazel can successfully compile the curve project, other versions are not compatible. 4.2.2 is the recommended version. For dependencies, you can refer to the installation steps in . ```bash git clone https://github.com/opencurve/curve.git or git clone"
},
{
"data": "make tar dep=1 compile curvebs and make tar package make deb dep=1 compile curvebs and make debian package make build stor=bs dep=1 make dep stor=bs && make build stor=bs make build stor=fs dep=1 make dep stor=fs && make build stor=fs ``` This step can be performed in a container or on a physical machine. Note that if it is executed in a container, you need to add `-v /var/run/docker.sock:/var/run/docker.sock -v /root/.docker:/root/.docker when executing the `docker run` command ` parameter. ```bash make image stor=bs tag=test make image stor=fs tag=test ``` ```bash docker push test ``` Only compile all modules without packaging ```bash $ bash ./build.sh ``` ```bash bazel query '//test/...' bazel query '//curvefs/test/...' ``` Compile corresponding modules, such as common-test in the `test/common` directory ```bash $ bazel build test/common:common-test --copt -DHAVE_ZLIB=1 \\ $ --define=withglog=true --compilationmode=dbg \\ $ --define=libunwind=true ``` Before executing the test, you need to prepare the dependencies required for the test case to run: execute unit tests: build module tests: ```bash $ bazel build xxx/...//:xxx_test ``` run module tests: ```bash $ bazel run xxx/xxx//:xxx_test ``` compile all tests ```bash $ bazel build \"...\" ``` sometimes the bazel compiling cache will be failure. clean the project cache: ```bash $ bazel clean ``` clean the project cache and deps cache.(bazel will also save project cache). ```bash $ bazel clean --expunge ``` debug mode build: ```bash $ bazel build xxx//:xxx_test -c dbg ``` releases mode build ```bash $ bazel build xxx//:xxx_test -c opt ``` more about bazel docs, please go . ```bash $ export LDLIBRARYPATH=<CURVE-WORKSPACE>/thirdparties/etcdclient:<CURVE-WORKSPACE>/thirdparties/aws-sdk/usr/lib:/usr/local/lib:${LDLIBRARYPATH} ``` In the snapshot clone integration test, the open source was used to simulate the real s3 service. ```bash $ apt install ruby -y OR yum install ruby -y $ gem install fakes3 $ fakes3 -r /S3DATADIR -p 9999 --license YOURLICENSEKEY ``` Remarks: `-r S3DATADIR`: The directory where data is stored `--license YOURLICENSEKEY`: fakes3 needs a key to run, please refer to `-p 9999`: The port where the fake-s3 service starts, no need to change ```bash $ wget -ct0 https://github.com/etcd-io/etcd/releases/download/v3.4.10/$ etcd-v3.4.10-linux-amd64.tar.gz $ tar zxvf etcd-v3.4.10-linux-amd64. tar.gz $ cd etcd-v3.4.10-linux-amd64 && cp etcd etcdctl /usr/bin ``` ```bash $ ./bazel-bin/test/common/common-test ``` The executable programs compiled by bazel are all in the `./bazel-bin` directory, for example, the test program corresponding to the test code in the test/common directory is `./bazel-bin/test/common/common-test`, this program can be run directly for testing. CurveBS-related unit test program directory is under the `./bazel-bin/test` directory CurveFS-related unit test program directory is under the `./bazel-bin/curvefs/test` directory The integration test is under the `./bazel-bin/test/integration` directory NEBD-related unit test programs are in the `./bazel-bin/nebd/test` directory NBD-related unit test programs are in the `./bazel-bin/nbd/test` directory If you want to run all unit tests and integration tests, you can execute the ut.sh script in the project directory: ```bash $ bash ut.sh ```"
}
] |
{
"category": "Runtime",
"file_name": "build_and_run_en.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The content of Curve v2.0 includes CurveBS v1.3, CurveFS v0.2.0 and some other content listed below. Previous change logs can be found at"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-2.0.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Antrea is super easy to install. All the Antrea components are containerized and can be installed using the Kubernetes deployment manifest. Antrea relies on `NodeIPAM` for per-Node CIDR allocation. `NodeIPAM` can run within the Kubernetes `kube-controller-manager`, or within the Antrea Controller. When using `kubeadm` to create the Kubernetes cluster, passing `--pod-network-cidr=<CIDR Range for Pods>` to `kubeadm init` will enable `NodeIpamController`. Clusters created with kubeadm will always have `CNI` plugins enabled. Refer to for more information about setting up a Kubernetes cluster with `kubeadm`. When the cluster is deployed by other means then: To enable `NodeIpamController`, `kube-controller-manager` should be started with the following flags: `--cluster-cidr=<CIDR Range for Pods>` `--allocate-node-cidrs=true` To enable `CNI` network plugins, `kubelet` should be started with the `--network-plugin=cni` flag. To enable masquerading of traffic for Service cluster IP via iptables, `kube-proxy` should be started with the `--cluster-cidr=<CIDR Range for Pods>` flag. For further info about running NodeIPAM within Antrea Controller, see As for OVS, when using the built-in kernel module, kernel version >= 4.6 is required. On the other hand, when building it from OVS sources, OVS version >= 2.6.0 is required. Red Hat Enterprise Linux and CentOS 7.x use kernel 3.10, but as changes to OVS kernel modules are regularly backported to these kernel versions, they should work with Antrea, starting with version 7.4. In case a node does not have a supported OVS module installed, you can install it following the instructions at: . Please be aware that the `vport-stt` module is not in the Linux tree and needs to be built from source, please build and load it manually before STT tunneling is enabled. Some experimental features disabled by default may have additional requirements, please refer to the to determine whether it applies to you. Antrea will work out-of-the-box on most popular Operating Systems. Known issues encountered when running Antrea on specific OSes are documented . There are also a few network prerequisites which need to be satisfied, and they depend on the tunnel mode you choose, please check . To deploy a released version of Antrea, pick a deployment manifest from the . For any given release `<TAG>` (e.g. `v0.1.0`), you can deploy Antrea as follows: ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml ``` To deploy the latest version of Antrea (built from the main branch), use the checked-in : ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml ``` You can use the same `kubectl apply` command to upgrade to a more recent version of Antrea. Antrea supports some experimental features that can be enabled or disabled, please refer to the for more information. If you want to add Windows Nodes to your cluster, please refer to these . Starting with v1.0, Antrea supports arm64 and arm/v7"
},
{
"data": "The installation instructions do not change when some (or all) Linux Nodes in a cluster use an ARM architecture: the same deployment YAML can be used, as the `antrea/antrea-agent-ubuntu` and `antrea/antrea-controller-ubuntu` Docker images are actually manifest lists with support for the amd64, arm64 and arm/v7 architectures. Note that while we do run a subset of the Kubernetes conformance tests on both the arm/v7 and arm64 Docker images (using as the Kubernetes distribution), our testing is not as thorough as for the amd64 image. However, we do not anticipate any issue. Starting with v1.8, Antrea can be installed and updated with Helm. Please refer to these . The instructions above only apply when deploying Antrea in a new cluster. If you need to migrate your existing cluster from another CNI plugin to Antrea, you will need to do the following: Delete previous CNI, including all resources (K8s objects, iptables rules, interfaces, ...) created by that CNI. Deploy Antrea. Restart all Pods in the CNI network in order for Antrea to set-up networking for them. This does not apply to Pods which use the Node's network namespace (i.e. Pods configured with `hostNetwork: true`). You may use `kubectl drain` to drain each Node or reboot all your Nodes. While this is in-progress, networking will be disrupted in your cluster. After deleting the previous CNI, existing Pods may not be reachable anymore. For example, when migrating from Flannel to Antrea, you will need to do the following: Delete Flannel with `kubectl delete -f <path to your Flannel YAML manifest>`. Delete Flannel bridge and tunnel interface with `ip link delete flannel.1 && ip link delete flannel cni0` on each Node. Ensure are satisfied. . Drain and uncordon Nodes one-by-one. For each Node, run `kubectl drain --ignore-daemonsets <node name> && kubectl uncordon <node name>`. The `--ignore-daemonsets` flag will ignore DaemonSet-managed Pods, including the Antrea Agent Pods. If you have any other DaemonSet-managed Pods (besides the Antrea ones and system ones such as kube-proxy), they will be ignored and will not be drained from the Node. Refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for more information. Alternatively, you can also restart all the Pods yourself, or simply reboot your Nodes. To build the image locally, you can follow the instructions in the [Contributor Guide](../CONTRIBUTING.md#building-and-testing-your-change). To deploy Antrea in a cluster, please refer to this . To deploy Antrea in a cluster, please refer to this . To deploy Antrea in a managed cluster, please refer to this . Antrea can work with cloud managed Kubernetes services, and can be deployed to AKS, EKS, and GKE clusters. To deploy Antrea to an AKS or an AKS Engine cluster, please refer to . To deploy Antrea to an EKS cluster, please refer to . To deploy Antrea to a GKE cluster, please refer to . By default, Antrea generates the certificates needed for itself to"
},
{
"data": "To provide your own certificates, please refer to . To use antctl, the Antrea command-line tool, please refer to . Besides Kubernetes NetworkPolicy, Antrea also implements its own Network Policy CRDs, which provide advanced features including: policy priority, tiering, deny action, external entity, and policy statistics. For more information on usage of Antrea Network Policies, refer to the . Antrea supports specifying which egress (SNAT) IP the traffic from the selected Pods to the external network should use and which Node the traffic should leave the cluster from. For more information, refer to the . Antrea supports exporting network flow information using IPFIX, and provides a reference cookbook on how to visualize the exported network flows using Elastic Stack and Kibana dashboards. For more information, refer to the [network flow visibility document](network-flow-visibility.md). Besides the default `Encap` mode, in which Pod traffic across Nodes will be encapsulated and sent over tunnels, Antrea also supports `NoEncap` and `Hybrid` traffic modes. In `NoEncap` mode, Antrea does not encapsulate Pod traffic, but relies on the Node network to route the traffic across Nodes. In `Hybrid` mode, Antrea encapsulates Pod traffic when the source Node and the destination Node are in different subnets, but does not encapsulate when the source and the destination Nodes are in the same subnet. Refer to to learn how to configure Antrea with `NoEncap` or `Hybrid` mode. Antrea comes with a web UI, which can show runtime information of Antrea components and perform Antrea Traceflow operations. Please refer to the [Antrea UI repository](https://github.com/antrea-io/antrea-ui) for installation instructions and more information. Antrea can offload OVS flow processing to the NICs that support OVS kernel hardware offload using TC. The hardware offload can improve OVS performance significantly. For more information on how to configure OVS offload, refer to the . Antrea supports exporting metrics to Prometheus. For more information, refer to the . By leveraging Antrea's Service external IP management feature or configuring MetalLB to work with Antrea, Services of type LoadBalancer can be supported without requiring an external LoadBalancer. To learn more information, please refer to the . Traceflow is a very useful network diagnosis feature in Antrea. It can trace and report the forwarding path of a specified packet in the Antrea network. For usage of this feature, refer to the . Antrea supports encrypting traffic between Linux Nodes using IPsec or WireGuard. To deploy Antrea with traffic encryption enabled, please refer to . Antrea Multi-cluster implements Multi-cluster Service API, which allows users to create multi-cluster Services that can be accessed cross clusters in a ClusterSet. Antrea Multi-cluster also supports Antrea ClusterNetworkPolicy replication. Multi-cluster admins can define ClusterNetworkPolicies to be replicated across the entire ClusterSet, and enforced in all member clusters. To learn more information about Antrea Multi-cluster, please refer to the ."
}
] |
{
"category": "Runtime",
"file_name": "getting-started.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "- https://github.com/heptio/ark/releases/tag/v0.8.3 Don't restore backup and restore resources to avoid possible data corruption (#622, @ncdc) https://github.com/heptio/ark/releases/tag/v0.8.2 Don't crash when a persistent volume claim is missing spec.volumeName (#520, @ncdc) https://github.com/heptio/ark/releases/tag/v0.8.1 Azure: allow pre-v0.8.0 backups with disk snapshots to be restored and deleted (#446 #449, @skriss) https://github.com/heptio/ark/releases/tag/v0.8.0 Backup deletion has been completely revamped to make it simpler and less error-prone. As a user, you still use the `ark backup delete` command to request deletion of a backup and its associated cloud resources; behind the scenes, we've switched to using a new `DeleteBackupRequest` Custom Resource and associated controller for processing deletion requests. We've reduced the number of required fields in the Ark config. For Azure, `location` is no longer required, and for GCP, `project` is not needed. Ark now copies tags from volumes to snapshots during backup, and from snapshots to new volumes during restore. Ark has moved back to a single namespace (`heptio-ark` by default) as part of #383. Add global `--kubecontext` flag to Ark CLI (#296, @blakebarnett) Azure: support cross-resource group restores of volumes (#356 #378, @skriss) AWS/Azure/GCP: copy tags from volumes to snapshots, and from snapshots to volumes (#341, @skriss) Replace finalizer for backup deletion with `DeleteBackupRequest` custom resource & controller (#383 #431, @ncdc @nrb) Don't log warnings during restore if an identical object already exists in the cluster (#405, @nrb) Add bash & zsh completion support (#384, @containscafeine) Error from the Ark CLI if attempting to restore a non-existent backup (#302, @ncdc) Enable running the Ark server locally for development purposes (#334, @ncdc) Add examples to `ark schedule create` documentation (#331, @lypht) GCP: Remove `project` requirement from Ark config (#345, @skriss) Add `--from-backup` flag to `ark restore create` and allow custom restore names (#342 #409, @skriss) Azure: remove `location` requirement from Ark config (#344, @skriss) Add documentation/examples for storing backups in IBM Cloud Object Storage (#321, @roytman) Reduce verbosity of hooks logging (#362, @skriss) AWS: Add minimal IAM policy to documentation (#363 #419, @hopkinsth) Don't restore events (#374, @sanketjpatel) Azure: reduce API polling interval from 60s to 5s (#359, @skriss) Switch from hostPath to emptyDir volume type for minio example (#386, @containscafeine) Add limit ranges as a prioritized resource for restores (#392, @containscafeine) AWS: Add documentation on using Ark with kube2iam (#402, @domderen) Azure: add node selector so Ark pod is scheduled on a linux node (#415, @ffd2subroutine) Error from the Ark CLI if attempting to get logs for a non-existent restore (#391, @containscafeine) GCP: Add minimal IAM policy to documentation (#429, @skriss @jody-frankowski) Ark v0.7.1 moved the Ark server deployment into a separate namespace, `heptio-ark-server`. As of v0.8.0 we've returned to a single namespace, `heptio-ark`, for all Ark-related resources. 
If you're currently running v0.7.1, here are the steps you can take to upgrade: Execute the steps from the Credentials and configuration section for your cloud: * When you get to the secret creation step, if you don't have your `credentials-ark` file handy, you can copy the existing secret from your `heptio-ark-server` namespace into the `heptio-ark` namespace: ```bash kubectl get secret/cloud-credentials -n heptio-ark-server --export -o json | \\ jq '.metadata.namespace=\"heptio-ark\"' | \\ kubectl apply -f - ``` You can now safely delete the `heptio-ark-server` namespace: ```bash kubectl delete namespace heptio-ark-server ``` Execute the commands from the Start the server section for your cloud: *"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-0.8.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Configuring IP Routing on an Amazon Web Services Virtual Private Cloud menu_order: 80 search_type: Documentation If your container infrastructure is running entirely within Amazon Web Services (AWS) Elastic Compute Cloud (EC2), then you can enable AWS-VPC mode with Weave Net. In AWS-VPC mode, containers are networked without using an overlay and allows network speeds close to that of the underlying network. With AWS-VPC enabled, Weave Net manages IP addresses and connects containers to the network as usual, but instead of wrapping each packet and sending it to its destination, Weave Net instructs the AWS network router with the ranges of container IP addresses and the instances on which they live. First, your AWS instances need to be given write access to the route table via its . If you have an existing IAM Role then extend it, otherwise create a new role. The role must have a attached which allows the following : ``` { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:CreateRoute\", \"ec2:DeleteRoute\", \"ec2:ReplaceRoute\", \"ec2:DescribeRouteTables\", \"ec2:DescribeInstances\" ], \"Resource\": [ \"*\" ] } ] } ``` Secondly, your Security Group must allow network traffic between instances. You must open TCP port 6783 which is the port that Weave Net uses to manage the network and also allow any ports which your own containers use. >Remember: There is no network overlay in this mode, and so, IP packets with container addresses will flow over the AWS network unmodified. Finally, since Weave will be operating with IP addresses outside of the range allocated by Amazon, you must disable \"Source/Destination check\" on each machine. Launch Weave Net with the `--awsvpc` flag: $ weave launch --awsvpc [other hosts] >>Note: You will still need to supply the names or IP addresses of other hosts in your cluster. AWS-VPC mode does not inter-operate with other Weave Net modes; it is all or nothing. In this mode, all hosts in a cluster must be AWS instances. (We hope to ease this limitation in future.) The `weave launch` command waits until the is ready, i.e. until after this peer has been able to make contact with other peers and confirm that it has joined the cluster the bridge. Without AWS-VPC, `weave launch` returns without waiting. The AWS network does not support multicast. The number of hosts in a cluster is limited by the maximum size of your AWS route table. This is limited to 50 entries though you can request an increase to 100 by contacting Amazon. All of your containers must be on the same network, with no subnet isolation. (We hope to ease this limitation in future.) The Maximum Transmission Unit, or MTU, is the technical term for the limit on how big a single packet can be on the network. Weave Net defaults to 1376 bytes. This default works across almost all networks, but for better performance you can set it to a larger MTU size. The AWS network supports packets of up to 9000 bytes. In AWS-VPC mode you can run the following: $ WEAVE_MTU=9000 weave launch --awsvpc host2 host3 See Also"
}
] |
{
"category": "Runtime",
"file_name": "awsvpc.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "carina-node runs in a cumtomized containers, embedding lvm2. Refer `docs/runtime-container` to learn abount how it works. building carina-node runtime image ```shell $ cd docs/runtime-container $ docker build -t runtime-container:latest . ``` multi-arch buildin ```shell $ cd docs/runtime-container $ docker buildx build -t centos-mutilarch-lvm2:runtime --platform=linux/arm,linux/arm64,linux/amd64 . --push ```"
}
] |
{
"category": "Runtime",
"file_name": "runtime-container.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
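The multi-arch `docker buildx build` command in the Carina note above assumes a Buildx builder that can target foreign architectures. A minimal preparation sketch is shown below; it assumes a recent Docker with the buildx plugin, and the builder name is arbitrary.

```shell
# One-off per host: register QEMU emulators so non-native stages can run.
docker run --privileged --rm tonistiigi/binfmt --install all

# Create and select a builder instance that supports multi-platform builds.
docker buildx create --name carina-builder --use
docker buildx inspect --bootstrap

# Then run the multi-arch build from docs/runtime-container as shown above.
```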
[
{
"data": "This page shows you how to deploy a sample site using . to install runsc with Docker. This document assumes that the runtime name chosen is `runsc`. Now, let's deploy a WordPress site using Docker. WordPress site requires two containers: web server in the frontend, MySQL database in the backend. Note: This example uses gVisor to sandbox the frontend web server, but not the MySQL database backend. In a production setup, due to imposed by gVisor, it is not recommended to run your database in a sandbox. The frontend is the critical component with the largest outside attack surface, where gVisor's security/performance trade-off makes the most sense. See the [Production guide] for more details. First, let's define a few environment variables that are shared between both containers: ```bash export MYSQLPASSWORD=${YOURSECRETPASSWORDHERE?} export MYSQL_DB=wordpress export MYSQL_USER=wordpress ``` Next, let's start the database container running MySQL and wait until the database is initialized: ```shell $ docker run --name mysql -d \\ -e MYSQLRANDOMROOT_PASSWORD=1 \\ -e MYSQLPASSWORD=\"${MYSQLPASSWORD}\" \\ -e MYSQLDATABASE=\"${MYSQLDB}\" \\ -e MYSQLUSER=\"${MYSQLUSER}\" \\ mysql:5.7 $ docker logs mysql |& grep 'port: 3306 MySQL Community Server (GPL)' ``` Once the database is running, you can start the WordPress frontend. We use the `--link` option to connect the frontend to the database, and expose the WordPress to port 8080 on the localhost. ```shell $ docker run --runtime=runsc --name wordpress -d \\ --link mysql:mysql \\ -p 8080:80 \\ -e WORDPRESSDBHOST=mysql \\ -e WORDPRESSDBUSER=\"${MYSQL_USER}\" \\ -e WORDPRESSDBPASSWORD=\"${MYSQL_PASSWORD}\" \\ -e WORDPRESSDBNAME=\"${MYSQL_DB}\" \\ -e WORDPRESSTABLEPREFIX=wp_ \\ wordpress ``` Now, you can access the WordPress website pointing your favorite browser to <http://localhost:8080>. Congratulations! You have just deployed a WordPress site using Docker and gVisor. Learn how to deploy WordPress with or . Before deploying this to production, see the [Production guide] for how to take full advantage of gVisor."
}
] |
{
"category": "Runtime",
"file_name": "docker.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
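A quick, hedged way to confirm that only the frontend from the gVisor walkthrough above is sandboxed: inspect each container's configured runtime, and optionally look at `dmesg` output, which under runsc shows gVisor's own sentry boot messages rather than the host kernel log. The container names match the walkthrough; the `dmesg` check is illustrative and assumes an image that ships the `dmesg` tool (alpine/busybox does).

```shell
# Expect "runsc" for the frontend and the default runtime for the database.
docker inspect --format '{{.HostConfig.Runtime}}' wordpress
docker inspect --format '{{.HostConfig.Runtime}}' mysql

# Inside a sandbox, dmesg prints gVisor's own messages instead of the host kernel log.
docker run --rm --runtime=runsc alpine dmesg | head
```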
[
{
"data": "The libseccomp-golang Security Vulnerability Handling Process =============================================================================== https://github.com/seccomp/libseccomp-golang This document document attempts to describe the processes through which sensitive security relevant bugs can be responsibly disclosed to the libseccomp-golang project and how the project maintainers should handle these reports. Just like the other libseccomp-golang process documents, this document should be treated as a guiding document and not a hard, unyielding set of regulations; the bug reporters and project maintainers are encouraged to work together to address the issues as best they can, in a manner which works best for all parties involved. Problems with the libseccomp-golang library that are not suitable for immediate public disclosure should be emailed to the current libseccomp-golang maintainers, the list is below. We typically request at most a 90 day time period to address the issue before it is made public, but we will make every effort to address the issue as quickly as possible and shorten the disclosure window. Paul Moore, [email protected] Tom Hromatka, [email protected] Kir Kolyshkin, [email protected] Upon disclosure of a bug, the maintainers should work together to investigate the problem and decide on a solution. In order to prevent an early disclosure of the problem, those working on the solution should do so privately and outside of the traditional libseccomp-golang development practices. One possible solution to this is to leverage the GitHub \"Security\" functionality to create a private development fork that can be shared among the maintainers, and optionally the reporter. A placeholder GitHub issue may be created, but details should remain extremely limited until such time as the problem has been fixed and responsibly disclosed. If a CVE, or other tag, has been assigned to the problem, the GitHub issue title should include the vulnerability tag once the problem has been disclosed. Whenever possible, responsible reporting and patching practices should be followed, including notification to the linux-distros and oss-security mailing lists. https://oss-security.openwall.org/wiki/mailing-lists/distros https://oss-security.openwall.org/wiki/mailing-lists/oss-security"
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This document details the versioning and release plan for containerd. Stability is a top goal for this project, and we hope that this document and the processes it entails will help to achieve that. It covers the release process, versioning numbering, backporting, API stability and support horizons. If you rely on containerd, it would be good to spend time understanding the areas of the API that are and are not supported and how they impact your project in the future. This document will be considered a living document. Supported timelines, backport targets and API stability guarantees will be updated here as they change. If there is something that you require or this document leaves out, please reach out by . Releases of containerd will be versioned using dotted triples, similar to . For the purposes of this document, we will refer to the respective components of this triple as `<major>.<minor>.<patch>`. The version number may have additional information, such as alpha, beta and release candidate qualifications. Such releases will be considered \"pre-releases\". Major and minor releases of containerd will be made from main. Releases of containerd will be marked with GPG signed tags and announced at https://github.com/containerd/containerd/releases. The tag will be of the format `v<major>.<minor>.<patch>` and should be made with the command `git tag -s v<major>.<minor>.<patch>`. After a minor release, a branch will be created, with the format `release/<major>.<minor>` from the minor tag. All further patch releases will be done from that branch. For example, once we release `v1.0.0`, a branch `release/1.0` will be created from that tag. All future patch releases will be done against that branch. Pre-releases, such as alphas, betas and release candidates will be conducted from their source branch. For major and minor releases, these releases will be done from main. For patch releases, these pre-releases should be done within the corresponding release branch. While pre-releases are done to assist in the stabilization process, no guarantees are provided. The upgrade path for containerd is such that the 0.0.x patch releases are always backward compatible with its major and minor version. Minor (0.x.0) version will always be compatible with the previous minor release. i.e. 1.2.0 is backwards compatible with 1.1.0 and 1.1.0 is compatible with 1.0.0. There is no compatibility guarantees for upgrades that span multiple, minor releases. For example, 1.0.0 to 1.2.0 is not supported. One should first upgrade to 1.1, then 1.2. There are no compatibility guarantees with upgrades to major versions. For example, upgrading from 1.0.0 to 2.0.0 may require resources to migrated or integrations to change. Each major version will be supported for at least 1 year with bug fixes and security patches. The activity for the next release will be tracked in the . If your issue or PR is not present in a milestone, please reach out to the maintainers to create the milestone or add an issue or PR to an existing milestone. Support horizons will be defined corresponding to a release branch, identified by `<major>.<minor>`. Release branches will be in one of several states: *Next*: The next planned release"
},
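The versioning section above describes GPG-signed `v<major>.<minor>.<patch>` tags and `release/<major>.<minor>` branches cut from the minor tag. The console sketch below walks through that convention for an illustrative 1.0 line; it assumes maintainer permissions, GPG signing already configured, and a remote named `upstream` (the remote name is an assumption, not part of the source).

```console
# After releasing v1.0.0 from main, create the matching release branch from the tag.
$ git checkout -b release/1.0 v1.0.0
$ git push upstream release/1.0

# Later, cut a patch release from that branch with a GPG-signed tag.
$ git checkout release/1.0
$ git tag -s v1.0.1 -m "containerd 1.0.1"
$ git push upstream v1.0.1
```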
{
"data": "*Active*: The release is a stable branch which is currently supported and accepting patches. *Extended*: The release branch is only accepting security patches. *LTS*: The release is a long term stable branch which is currently supported and accepting patches. *End of Life*: The release branch is no longer supported and no new patches will be accepted. Releases will be supported at least one year after a minor release. This means that we will accept bug reports and backports to release branches until the end of life date. If no new minor release has been made, that release will be considered supported until 6 months after the next minor is released or one year, whichever is longer. Additionally, releases may have an extended security support period after the end of the active period to accept security backports. This timeframe will be decided by maintainers before the end of the active status. Long term stable (LTS) releases will be supported for at least three years after their initial minor release. These branches will accept bug reports and backports until the end of life date. They may also accept a wider range of patches than non-LTS releases to support the longer term maintainability of the branch, including library dependency, toolchain (including Go) and other version updates which are needed to ensure each release is built with fully supported dependencies and remains usable by containerd clients. There should be at least a 6-month overlap between the end of life of an LTS release and the initial release of a new LTS release. Up to 6 months before the announced end of life of an LTS branch, the branch may convert to a regular Active release with stricter backport criteria. The current state is available in the following tables: | Release | Status | Start | End of Life | | | - | | - | | | End of Life | Dec 4, 2015 | - | | | End of Life | Mar 21, 2016 | - | | | End of Life | Apr 21, 2016 | December 5, 2017 | | | End of Life | December 5, 2017 | December 5, 2018 | | | End of Life | April 23, 2018 | October 23, 2019 | | | End of Life | October 24, 2018 | October 15, 2020 | | | End of Life | September 26, 2019 | March 4, 2021 | | | End of Life | August 17, 2020 | March 3, 2022 | | | End of Life | May 3, 2021 | February 28, 2023 | | | LTS | February 15, 2022 | max(February 15, 2025 or next LTS + 6 months) | | | Active | March 10, 2023 | max(March 10, 2024 or release of 2.0 + 6 months) | | | Next | TBD | TBD | The Kubernetes version matrix represents the versions of containerd which are recommended for a Kubernetes"
},
{
"data": "Any actively supported version of containerd may receive patches to fix bugs encountered in any version of Kubernetes, however, our recommendation is based on which versions have been the most thoroughly tested. See the for the list of actively tested versions. Kubernetes only supports n-3 minor release versions and containerd will ensure there is always a supported version of containerd for every supported version of Kubernetes. | Kubernetes Version | containerd Version | CRI Version | |--|--|--| | 1.24 | 1.7.0+, 1.6.4+ | v1, v1alpha2 | | 1.25 | 1.7.0+, 1.6.4+ | v1, v1alpha2 | | 1.26 | 1.7.0+, 1.6.15+ | v1 | Note: containerd v1.6., and v1.7. support CRI v1 and v1alpha2 through EOL as those releases continue to support older versions of k8s, cloud providers, and other clients using CRI v1alpha2. CRI v1alpha2 is deprecated in v1.7 and will be removed in containerd v2.0. Deprecated containerd and kubernetes versions | Containerd Version | Kubernetes Version | CRI Version | |--|--|-| | v1.0 (w/ cri-containerd) | 1.7, 1.8, 1.9 | v1alpha1 | | v1.1 | 1.10+ | v1alpha2 | | v1.2 | 1.10+ | v1alpha2 | | v1.3 | 1.12+ | v1alpha2 | | v1.4 | 1.19+ | v1alpha2 | | v1.5 | 1.20+ | v1 (1.23+), v1alpha2 | Backports in containerd are community driven. As maintainers, we'll try to ensure that sensible bugfixes make it into active release, but our main focus will be features for the next minor or major release. For the most part, this process is straightforward, and we are here to help make it as smooth as possible. If there are important fixes that need to be backported, please let us know in one of three ways: Open an issue. Open a PR with cherry-picked change from main. Open a PR with a ported fix. If you are reporting a security issue: Please follow the instructions at Remember that backported PRs must follow the versioning guidelines from this document. Any release that is \"active\" can accept backports. Opening a backport PR is fairly straightforward. The steps differ depending on whether you are pulling a fix from main or need to draft a new commit specific to a particular branch. To cherry-pick a straightforward commit from main, simply use the cherry-pick process: Pick the branch to which you want backported, usually in the format `release/<major>.<minor>`. The following will create a branch you can use to open a PR: ```console $ git checkout -b my-backport-branch release/<major>.<minor>. ``` Find the commit you want backported. Apply it to the release branch: ```console $ git cherry-pick -xsS <commit> ``` (Optional) If other commits exist in the main branch which are related to the cherry-picked commit; eg: fixes to the main PR. It is recommended to cherry-pick those commits also into this same `my-backport-branch`. Push the branch and open up a PR against the release branch: ``` $ git push -u stevvooe my-backport-branch ``` Make sure to replace `stevvooe` with whatever fork you are using to open the PR. When you open the PR, make sure to switch `main` with whatever release branch you are targeting with the fix. Make sure the PR title has `[<release branch>]` prefixed."
},
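Putting the backport steps described above together, the following console sketch shows one complete cherry-pick flow. The commit hash, fork name, and target release branch are hypothetical; the commands themselves come from the process described in the text.

```console
# Branch off the target release branch (hypothetical release/1.6).
$ git checkout -b my-backport-branch release/1.6

# Cherry-pick the fix from main, recording the origin commit and adding a sign-off.
$ git cherry-pick -xsS 0123abcd

# Push to your fork (replace "myfork") and open a PR against release/1.6,
# titled e.g. "[release/1.6] Fix foo in bar".
$ git push -u myfork my-backport-branch
```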
{
"data": "``` [release/1.4] Fix foo in bar ``` If there is no existing fix in main, you should first fix the bug in main, or ask us a maintainer or contributor to do it via an issue. Once that PR is completed, open a PR using the process above. Only when the bug is not seen in main and must be made for the specific release branch should you open a PR with new code. The following table provides an overview of the components covered by containerd versions: | Component | Status | Stabilized Version | Links | ||-|--|| | GRPC API | Stable | 1.0 | | | Metrics API | Stable | 1.0 | - | | Runtime Shim API | Stable | 1.2 | - | | Daemon Config | Stable | 1.0 | - | | CRI GRPC API | Stable | 1.6 (CRI v1) | | | Go client API | Unstable | future | | | `ctr` tool | Unstable | Out of scope | - | From the version stated in the above table, that component must adhere to the stability constraints expected in release versions. Unless explicitly stated here, components that are called out as unstable or not covered may change in a future minor version. Breaking changes to \"unstable\" components will be avoided in patch versions. The primary product of containerd is the GRPC API. As of the 1.0.0 release, the GRPC API will not have any backwards incompatible changes without a major version jump. To ensure compatibility, we have collected the entire GRPC API symbol set into a single file. At each minor release of containerd, we will move the current `next.pb.txt` file to a file named for the minor version, such as `1.0.pb.txt`, enumerating the support services and messages. Note that new services may be added in minor releases. New service methods and new fields on messages may be added if they are optional. `*.pb.txt` files are generated at each API release. They prevent unintentional changes to the API by having a diff that the CI can run. These files are not intended to be consumed or used by clients. The metrics API that outputs prometheus style metrics will be versioned independently, prefixed with the API version. i.e. `/v1/metrics`, `/v2/metrics`. The metrics API version will be incremented when breaking changes are made to the prometheus output. New metrics can be added to the output in a backwards compatible manner without bumping the API version. containerd is based on a modular design where plugins are implemented to provide the core functionality. Plugins implemented in tree are supported by the containerd community unless explicitly specified as non-stable. Out of tree plugins are not supported by the containerd maintainers. Currently, the Windows runtime and snapshot plugins are not stable and not supported. Please refer to the GitHub milestones for Windows support in a future release. Error codes will not change in a patch release, unless a missing error code causes a blocking"
},
{
"data": "Error codes of type \"unknown\" may change to more specific types in the future. Any error code that is not \"unknown\" that is currently returned by a service will not change without a major release or a new version of the service. If you find that an error code that is required by your application is not well-documented in the protobuf service description or tested explicitly, please file an issue and we will clarify. Unless explicitly stated, the formats of certain fields may not be covered by this guarantee and should be treated opaquely. For example, don't rely on the format details of a URL field unless we explicitly say that the field will follow that format. The Go client API, documented in , is currently considered unstable. It is recommended to vendor the necessary components to stabilize your project build. Note that because the Go API interfaces with the GRPC API, clients written against a 1.0 Go API should remain compatible with future 1.x series releases. We intend to stabilize the API in a future release when more integrations have been carried out. Any changes to the API should be detectable at compile time, so upgrading will be a matter of fixing compilation errors and moving from there. The CRI (Container Runtime Interface) GRPC API is used by a Kubernetes kubelet to communicate with a container runtime. This interface is used to manage container lifecycles and container images. Currently, this API is under development and unstable across Kubernetes releases. Each Kubernetes release only supports a single version of CRI and the CRI plugin only implements a single version of CRI. Each minor release will support one version of CRI and at least one version of Kubernetes. Once this API is stable, a minor will be compatible with any version of Kubernetes which supports that version of CRI. The `ctr` tool provides the ability to introspect and understand the containerd API. It is not considered a primary offering of the project and is unsupported in that sense. While we understand its value as a debug tool, it may be completely refactored or have breaking changes in minor releases. Targeting `ctr` for feature additions reflects a misunderstanding of the containerd architecture. Feature addition should focus on the client Go API and additions to `ctr` may or may not be accepted at the discretion of the maintainers. We will do our best to not break compatibility in the tool in patch releases. The daemon's configuration file, commonly located in `/etc/containerd/config.toml` is versioned and backwards compatible. The `version` field in the config file specifies the config's version. If no version number is specified inside the config file then it is assumed to be a version 1 config and parsed as such. Please use `version = 2` to enable version 2 config as version 1 has been deprecated. As a general rule, anything not mentioned in this document is not covered by the stability guidelines and may change in any"
},
{
"data": "Explicitly, this pertains to this non-exhaustive list of components: File System layout Storage formats Snapshot formats Between upgrades of subsequent, minor versions, we may migrate these formats. Any outside processes relying on details of these file system layouts may break in that process. Container root file systems will be maintained on upgrade. We may make exceptions in the interest of security patches. If a break is required, it will be communicated clearly and the solution will be considered against total impact. The deprecated features are shown in the following table: | Component | Deprecation release | Target release for removal | Recommendation | |-||-|| | Runtime V1 API and implementation (`io.containerd.runtime.v1.linux`) | containerd v1.4 | containerd v2.0 | Use `io.containerd.runc.v2` | | Runc V1 implementation of Runtime V2 (`io.containerd.runc.v1`) | containerd v1.4 | containerd v2.0 | Use `io.containerd.runc.v2` | | config.toml `version = 1` | containerd v1.5 | containerd v2.0 | Use config.toml `version = 2` | | Built-in `aufs` snapshotter | containerd v1.5 | containerd v2.0 | Use `overlayfs` snapshotter | | Container label `containerd.io/restart.logpath` | containerd v1.5 | containerd v2.0 | Use `containerd.io/restart.loguri` label | | `cri-containerd-.tar.gz` release bundles | containerd v1.6 | containerd v2.0 | Use `containerd-.tar.gz` bundles | | Pulling Schema 1 images (`application/vnd.docker.distribution.manifest.v1+json`) | containerd v1.7 | containerd v2.0 | Use Schema 2 or OCI images | | CRI `v1alpha2` | containerd v1.7 | containerd v2.0 | Use CRI `v1` | The deprecated properties in are shown in the following table: | Property Group | Property | Deprecation release | Target release for removal | Recommendation | |-|||-|-| |`[plugins.\"io.containerd.grpc.v1.cri\"]` | `systemd_cgroup` | containerd v1.3 | containerd v2.0 | Use `SystemdCgroup` in runc options (see below) | |`[plugins.\"io.containerd.grpc.v1.cri\".containerd]` | `untrustedworkloadruntime` | containerd v1.2 | containerd v2.0 | Create `untrusted` runtime in `runtimes` | |`[plugins.\"io.containerd.grpc.v1.cri\".containerd]` | `defaultruntime` | containerd v1.3 | containerd v2.0 | Use `defaultruntime_name` | |`[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.*]` | `runtime_engine` | containerd v1.3 | containerd v2.0 | Use runtime v2 | |`[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.*]` | `runtime_root` | containerd v1.3 | containerd v2.0 | Use `options.Root` | |`[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.*.options]` | `CriuPath` | containerd v1.7 | containerd v2.0 | Set `$PATH` to the `criu` binary | |`. See also . | |` | |` | Note CNI Config Template (`plugins.\"io.containerd.grpc.v1.cri\".cni.conf_template`) was once deprecated in v1.7.0, but its deprecation was cancelled in v1.7.3. <details><summary>Example: runc option <code>SystemdCgroup</code></summary><p> ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options] SystemdCgroup = true ``` </p></details> <details><summary>Example: runc option <code>Root</code></summary><p> ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options] Root = \"/path/to/runc/root\" ``` </p></details> Experimental features are new features added to containerd which do not have the same stability guarantees as the rest of containerd. 
An effort is made to avoid breaking interfaces between versions, but changes to experimental features before being fully supported is possible. Users can still expect experimental features to be high quality and are encouraged to use new features to help them stabilize more quickly. | Component | Initial Release | Target Supported Release | |-|--|--| | | containerd v1.7 | containerd v2.0 | | | containerd v1.7 | containerd v2.0 | | | containerd v1.7 | containerd v2.0 | | | containerd v1.7 | containerd v2.0 | | | containerd v1.7 | containerd v2.0 | | | containerd v1.7 | containerd v2.0 | | | containerd v1.7 | containerd v2.0 |"
}
] |
{
"category": "Runtime",
"file_name": "RELEASES.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
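As a companion to the deprecation guidance in the release document above (config `version = 2`, runc v2 runtime, and the `SystemdCgroup` runc option replacing `systemd_cgroup`), here is a minimal, illustrative fragment of a version-2 daemon config. It is a sketch, not a complete `config.toml`; the file path is the commonly used default mentioned in the text.

```toml
# /etc/containerd/config.toml -- minimal, illustrative fragment
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```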
[
{
"data": "Targeted for v1.6 In clusters with large numbers of OSDs, it can take a very long time to update all of the OSDs. This occurs on updates of Rook and Ceph both for major as well as the most minor updates. To better support large clusters, Rook should be able to update (and upgrade) multiple OSDs in parallel. In the worst (but unlikely) case, all OSDs which are updated for a given parallel update operation might fail to come back online after they are updated. Users may wish to limit the number of OSDs updated in parallel in order to avoid too many OSDs failing in this way. Adding new OSDs to a cluster should occur as quickly as possible. This allows users to make use of newly added storage as quickly as possible, which they may need for critical applications using the underlying Ceph storage. In some degraded cases, adding new storage may be necessary in order to allow currently-running Ceph OSDs to be updated without experiencing storage cluster downtime. This does not necessarily mean that adding new OSDs needs to happen before updates. This prioritization might delay updates significantly since adding OSDs not only adds capacity to the Ceph cluster but also necessitates data rebalancing. Rebalancing generates data movement which needs to settle for updates to be able to proceed. For Ceph cluster with huge numbers of OSDs, Rook's process to update OSDs should not starve other resources out of the opportunity to get configuration updates. The Ceph manager (mgr) will add functionality to allow querying the maximum number of OSDs that are okay to stop safely. The command will take an initial OSD ID to include in the results. It should return error if the initial OSD cannot be stopped safely. Otherwise it returns a list of 1 or more OSDs that can be stopped safely in parallel. It should take a `--max=<int>` parameter that limits the number of OSDs returned. It will look similar to this on the command line `ceph osd ok-to-stop $id --max $int`. The command will have an internal algorithm that follows the flow below: Query `ok-to-stop` for the \"seed\" OSD ID. This represents the CRUSH hierarchy bucket at the \"osd\" (or \"device\") level. If the previous operation reports that it safe to update, batch query `ok-to-stop` for all OSDs that fall under the CRUSH bucket one level up from the current"
},
{
"data": "Repeat step 3 moving up the CRUSH hierarchy until one of the following two conditions: The number of OSDs in the batch query is greater than or equal to the `max` parameter, OR It is no longer `ok-to-stop` all OSDs in the CRUSH bucket. Update OSD Deployments in parallel for the last CRUSH bucket where it was `ok-to-stop` the OSDs. If there are more OSDs in the CRUSH bucket than allowed by the user that are okay to stop, return only the `max` number of OSD IDs from the CRUSH bucket. The pull request for this feature in the Ceph project can be found at https://github.com/ceph/ceph/pull/39455. Build an \"existence list\" of OSDs which already have Deployments created for them. Build an \"update queue\" of OSD Deployments which need updated. Start OSD prepare Jobs as needed for OSDs on PVC and OSDs on nodes. Note which prepare Jobs are started Provision Loop If all prepare Jobs have been completed and the update queue is empty, stop Provision Loop. If there is a `CephCluster` update/delete, stop Provision Loop with a special error. Create OSDs: if a prepare Job has completed, read the results. If any OSDs reported by prepare Job do not exist in the \"existence list\", create them. Mark the prepare Job as completed. Restart Provision Loop. Update OSDs: if the update queue is not empty, update a batch of OSD Deployments. Query `ceph osd ok-to-stop <osd-id> --max=<int>` for each OSD in the update queue until a list of OSD IDs is returned. If no OSDs in the update queue are okay to stop, Restart Provision Loop. Update all of the OSD Deployments in parallel. Record any failures. Remove all OSDs from the batch from the update queue (even failures). Restart Provision Loop. If there are any recorded errors/failures, return with an error. Otherwise return success. Because , it could take a long time for all OSDs in a cluster to be updated. In order for Rook to have opportunity to reconcile other components of a Ceph cluster's `CephCluster` resource, Rook should ensure that the OSD update reconciliation does not create a scenario where the `CephCluster` cannot be modified in other ways. https://github.com/rook/rook/pull/6693 introduced a means of interrupting the current OSD orchestration to handle newer `CephCluster` resource changes. This functionality should remain so that user changes to the `CephCluster` can begin reconciliation quickly. The Rook Operator should stop OSD orchestration on any updates to the `CephCluster` spec and be able to resume OSD orchestration with the next"
},
{
"data": "List all OSD Deployments belonging to the Rook cluster. Build a list of OSD IDs matching the OSD Deployments. Record this in a data structure that allows O(1) lookup. List all OSD Deployments belonging to the Rook cluster to use as the update queue. All OSDs should be updated in case there are changes to the CephCluster resource that result in OSD deployments being updated. The minimal information each item in the queue needs is only the OSD ID. The OSD Deployment managed by Rook can easily be inferred from the OSD ID. Note: A previous version of this design planned to ignore OSD Deployments which are already updated. The plan was to identify OSD Deployments which need updated by looking at the OSD Deployments for: (1) a `rook-version` label that does not match the current version of the Rook operator AND/OR (2) a `ceph-version` label that does not match the current Ceph version being deployed. This is an invalid optimization that does not account for OSD Deployments changing due to CephCluster resource updates. Instead of trying to optimize, it is better to always update OSD Deployments and rely on the lower level update calls to finish quickly when there is no update to apply. Establish a new `updatePolicy` section in the `CephCluster` `spec`. In this section, users can set options for how OSDs should be updated in parallel. Additionally, we can move some existing one-off configs related to updates to this section for better coherence. This also allows for a natural location where future update options can be added. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster spec: skipUpgradeChecks: continueUpgradeAfterChecksEvenIfNotHealthy: removeOSDsIfOutAndSafeToDestroy: updatePolicy: skipUpgradeChecks: <bool, default=false> continueUpgradeAfterChecksEvenIfNotHealthy: <bool, default=false, relocated from spec> osds: removeIfOutAndSafeToDestroy: <bool, default=false> maxInParallelPerCluster: <k8s.io/apimachinery/pkg/util/intstr.intOrString, default=15%> ``` Default `maxInParallelPerCluster`: Ceph defaults to keeping 3 replicas of an item or 2+1 erasure coding. It should be impossible to update more than one-third (33.3%) of a default Ceph cluster at any given time. It should be safe and fairly easy to update slightly less than half of one-third at once, which rounds down to 16%. 15% is a more round number, so that is chosen instead. Some users may wish to update OSDs in a particular failure domain or zone completely before moving onto updates in another zone to minimize risk from updates to a single failure domain. This is out of scope for this initial design, but we should consider how to allow space to more easily implement this change when it is needed."
}
] |
{
"category": "Runtime",
"file_name": "update-osds-in-parallel.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
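The Rook design above builds on the proposed `ceph osd ok-to-stop <id> --max <int>` query. A hedged sketch of how the operator (or an administrator) would use it is shown below; the OSD IDs and the output shape are illustrative, and the `--max` flag only exists with the Ceph mgr change referenced in the design.

```shell
# Ask whether OSD 3 can be stopped, allowing up to 5 OSD IDs in the answer.
# (--max is the proposed extension from the referenced Ceph PR.)
ceph osd ok-to-stop 3 --max 5

# Illustrative outcome: a list of OSDs that are safe to stop together,
# e.g. [3, 7, 11]; an error return means OSD 3 is not ok to stop right now.
```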
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List current state of all peers List state of all peers defined in CiliumBGPPeeringPolicy ``` cilium-dbg bgp peers [flags] ``` ``` -h, --help help for peers -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Access to BGP control plane"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bgp_peers.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
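For the command reference above, a few hedged usage examples follow. The flag names come from the help text itself; the socket URI shown for `-H` is an illustrative value, not something stated in the reference.

```shell
# Human-readable table of all BGP peers known to the local agent.
cilium-dbg bgp peers

# Same data as JSON, e.g. for scripting.
cilium-dbg bgp peers -o json

# Point the client at a specific server-side API endpoint (illustrative URI).
cilium-dbg bgp peers -H unix:///var/run/cilium/cilium.sock
```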
[
{
"data": "slug: welcome title: Welcome authors: [Michael] tags: [hello, Hwameistor] Welcome to the Hwameistor blog space. Here you can keep up with the progress of the Hwameistor open source project and recent hot topics. We also plan to include release notes for major releases, guidance articles, community-related events, and possibly some development tips, and interesting topics within the team. If you are interested in contributing to this open source project and would like to join the discussion or make some guest blog posts, please contact us. GitHub address is: https://github.com/hwameistor"
}
] |
{
"category": "Runtime",
"file_name": "2022-04-22-welcome.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "oep-number: CSI Volume Provisioning 20190606 title: CSI Volume Provisioning authors: \"@amitkumardas\" \"@payes\" \"@prateekpandey14\" owners: \"@kmova\" \"@vishnuitta\" editor: \"@amitkumardas\" creation-date: 2019-06-06 last-updated: 2019-07-04 status: provisional see-also: NA replaces: current cstor volume provisioning in v1.0.0 superseded-by: NA * * * * * * * * * * This proposal charts out the design details to implement Container Storage Interface commonly referred to as CSI to create and delete openebs volumes. CSI implementation provides the necessary abstractions for any storage provider to provision volumes and related use-cases on any container orchestrator. This is important from the perspective of all OpenEBS storage engines, since it will make all OpenEBS storage engines adhere to CSI specifications and hence abstract the same from inner workings of a container orchestrator. Ability to create and delete a volume in a kubernetes cluster using CSI Enable higher order applications to consume this volume Handling high availability in cases of node restarts Handling volume placement related requirements As an application developer I should be able to provide a volume that can be consumed by my application. This volume should get created dynamically during application creation time. As an application developer I should be able to delete the volume that was being consumed by my application. This volume should get deleted only when the application is deleted. Current provisioning approach works well given day 1 operations. However, we consider this approach as limiting when day 2 operations come into picture. There has been a general consensus to adopt to reconcile driven approach (pushed by Kubernetes ecosystem) that handles day 2 operations effectively. The sections below can be assumed to be specific to `CStor` unless mentioned otherwise. CSI executes REST API calls to maya api server to create and delete volume Most of volume creation logic is handled via CASTemplate (known as CAST) name: cstor-volume-create-default CAST makes use of various configurations required for volume operations Config (can be set at PVC, SC or at CASTemplate itself) StoragePoolClaim -- pools of this claim are selected VolumeControllerImage -- docker image VolumeTargetImage -- docker image VolumeMonitorImage -- docker image ReplicaCount -- no. 
of replicas for this volume TargetDir -- TargetResourceRequests TargetResourceLimits AuxResourceRequests AuxResourceLimits RunNamespace ServiceAccountName FSType Lun ResyncInterval TargetNodeSelector TargetTolerations Volume (runtime configuration specific to the targeted volume) runNamespace capacity pvc isCloneEnable isRestoreVolume sourceVolume storageclass owner snapshotName Above configurations can be categorized broadly into two groups: Volume specific Kubernetes specific Uses PVC for below actions: get .metadata.annotations.volume.kubernetes.io/selected-node get .metadata.labels.openebs.io/replica-anti-affinity get .metadata.labels.openebs.io/preferred-replica-anti-affinity get .metadata.labels.openebs.io/target-affinity get .metadata.labels.openebs.io/sts-target-affinity derive statefulset application name Uses CStorPoolList for below actions: filter CSPs based on StoragePoolClaim name compares desired volume replica count against available number of pools maps CSP uid with its .metadata.labels.kubernetes.io/hostname Uses StorageClass for below actions: get .metadata.resourceVersion get .metadata.labels.openebs.io/sts-target-affinity Creates a Kubernetes Service for cstor target stores its name stores its cluster IP Creates CStorVolume custom resource Creates Kubernetes deployment for cstor target lot of config items are applied here conditionally Create CStorVolumeReplica custom resource pool selection is done here lot of config items are applied here conditionally lot of CVR properties get derived from above config items OpenEBS volumes are aware of higher order resources"
},
{
"data": "kubernetes based PVC & SC In other words OpenEBS volumes cannot be managed without the presence of PVC & SC Inorder to implement CSI as per its standards, OpenEBS volumes should decouple itself from being aware of container orchestrator (read Kubernetes native resources which are higher order entities). Logic like pool selection are currently done via go-templating which is not sustainable going forward go based templating cannot replace a high level language Building cstor volume target deployment is currently done via go-templating which has become quite complex to maintain Since OpenEBS itself runs on Kubernetes; limiting OpenEBS provisioning to be confined to CSI standards will prove difficult to develop and improve upon OpenEBS features. This proposal tries to adhere to CSI standards and make itself CSI compliant. At the same time, this proposal lets OpenEBS embrace use of Kubernetes custom resources and custom controllers to provide storage for stateful applications. This also makes OpenEBS implementation idiomatic to Kubernetes practices which is implement features via Custom Resources. CSI driver will handle CSI request for volume create Kubernetes will have following as part of the infrastructure required to operate OpenEBS CSI driver CStorVolumeConfig (Kubernetes custom resource) CStorVolumeConfig will be watched and reconciled by a dedicated controller CSI driver will handle CSI request for volume delete CSI driver will read the request parameters and delete corresponding CStorVolume resource The resources owned by CStorVolume will be garbage collected Below represents `cstor volume lifecycle` with Kubernetes as the container orchestrator. ``` [PVC]--1-->(CSI Provisioner)--2-->(OpenEBS CSI Driver)--3--> | |4 \\|/ [PV] [CStorVolumePolicy] | |6 \\|/ --3-->[CStorVolumeConfig]--5-->(CStorVolumeConfigController) [Mount]--7-->(CSI Node)--8-->[CStorVolumeConfig]--9-->(CStorVolumeConfigController)--10--> --10-->[CStorVolume + Deployment + Service + CStorVolumeReplica(s)] ``` Below represents owner and owned resources ``` | CStorVolume..................K8s Custom Resource | CStorVolumeConfig --| CStor Target.................K8s Deployment | | CStor Target Service.........K8s Service | | CStorVolumeReplica(1..n).....K8s Custom Resource \\-/ \\-/ | | owner owned ``` Bow represents resources that are in a constant state of reconciliation ``` | CStorVolume CStorVolumeConfig | | CStorVolumeReplica ``` Next sections provide details on these resources that work in tandem to ensure smooth volume create & delete operations. This is a new Kubernetes custom resource that holds volume policy information to be used by cstor volume config controller while provisioning cstor volumes. Whenever a cstor volume policy field is changed by the user, webhook inturrupts and updates the phase as reconciling. It is assumed if webhook server is not running, the editing of the resource would not be allowed. On detecting the phase as reconciling, the CVC controller will reverify all the policies and update the change to the corresponding resource. After successful changes, phase is updated back to bound. NOTE: This resource kind does not have a controller i.e. watcher logic. It holds the config policy information to be utilized by cstor volume config controller. 
Following is the proposed schema for `CStorVolumePolicy`: ```go type CStorVolumePolicy struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` // Spec defines a configuration info of a cstor volume required // to provisione cstor volume resources Spec CStorVolumePolicySpec `json:\"spec\"` Status CStorVolumePolicyStatus `json:\"status,omitempty\"` } // CStorVolumePolicySpec ... type CStorVolumePolicySpec struct { // replicaAffinity is set to true then volume replica resources need to be // distributed across the pool instances Provision Provision `json:\"provision,omitempty\"` // TargetSpec represents configuration related to cstor target and its resources Target TargetSpec `json:\"target,omitempty\"` // ReplicaSpec represents configuration related to replicas resources Replica ReplicaSpec `json:\"replica,omitempty\"` // ReplicaPoolInfo holds the pool information of volume"
},
{
"data": "// Ex: If volume is provisioned on which CStor pool volume replicas exist ReplicaPoolInfo []ReplicaPoolInfo `json:\"replicaPoolInfo,omitempty\"` } // TargetSpec represents configuration related to cstor target and its resources type TargetSpec struct { // QueueDepth sets the queue size at iSCSI target which limits the // ongoing IO count from client QueueDepth string `json:\"queueDepth,omitempty\"` // IOWorkers sets the number of threads that are working on above queue IOWorkers int64 `json:\"luWorkers,omitempty\"` // Monitor enables or disables the target exporter sidecar Monitor bool `json:\"monitor,omitempty\"` // ReplicationFactor represents maximum number of replicas // that are allowed to connect to the target ReplicationFactor int64 `json:\"replicationFactor,omitempty\"` // Resources are the compute resources required by the cstor-target // container. Resources *corev1.ResourceRequirements `json:\"resources,omitempty\"` // AuxResources are the compute resources required by the cstor-target pod // side car containers. AuxResources *corev1.ResourceRequirements `json:\"auxResources,omitempty\"` // Tolerations, if specified, are the target pod's tolerations Tolerations []corev1.Toleration `json:\"tolerations,omitempty\"` // PodAffinity if specified, are the target pod's affinities PodAffinity *corev1.PodAffinity `json:\"affinity,omitempty\"` // NodeSelector is the labels that will be used to select // a node for target pod scheduleing // Required field NodeSelector map[string]string `json:\"nodeSelector,omitempty\"` // PriorityClassName if specified applies to this target pod // If left empty, no priority class is applied. PriorityClassName string `json:\"priorityClassName,omitempty\"` } // ReplicaSpec represents configuration related to replicas resources type ReplicaSpec struct { // IOWorkers represents number of threads that executes client IOs IOWorkers string `json:\"zvolWorkers,omitempty\"` // Controls the compression algorithm used for this volumes // examples: on|off|gzip|gzip-N|lz4|lzjb|zle // // Setting compression to \"on\" indicates that the current default compression // algorithm should be used.The default balances compression and decompression // speed, with compression ratio and is expected to work well on a wide variety // of workloads. Unlike all other settings for this property, on does not // select a fixed compression type. As new compression algorithms are added // to ZFS and enabled on a pool, the default compression algorithm may change. // The current default compression algorithm is either lzjb or, if the // `lz4_compress feature is enabled, lz4. // The lz4 compression algorithm is a high-performance replacement for the lzjb // algorithm. It features significantly faster compression and decompression, // as well as a moderately higher compression ratio than lzjb, but can only // be used on pools with the lz4_compress // feature set to enabled. See zpool-features(5) for details on ZFS feature // flags and the lz4_compress feature. // The lzjb compression algorithm is optimized for performance while providing // decent data compression. // The gzip compression algorithm uses the same compression as the gzip(1) // command. You can specify the gzip level by using the value gzip-N, // where N is an integer from 1 (fastest) to 9 (best compression ratio). // Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). // The zle compression algorithm compresses runs of zeros. 
Compression string `json:\"compression,omitempty\"` } // Provision represents different provisioning policy for cstor volumes type Provision struct { // replicaAffinity is set to true then volume replica resources need to be // distributed across the cstor pool instances based on the given topology ReplicaAffinity bool `json:\"replicaAffinity\"` // BlockSize is the logical block size in multiple of 512 bytes // BlockSize specifies the block size of the volume. The blocksize // cannot be changed once the volume has been written, so it should be // set at volume creation time. The default blocksize for volumes is 4"
},
{
"data": "// Any power of 2 from 512 bytes to 128 Kbytes is valid. BlockSize uint32 `json:\"blockSize,omitempty\"` } // ReplicaPoolInfo represents the pool information of volume replica type ReplicaPoolInfo struct { // PoolName represents the pool name where volume replica exists PoolName string `json:\"poolName\"` // UID also can be added } ``` `spec.targetDeployment.spec`, `spec.replica.spec` and `spec.targetService.spec` will be of same data type It will have below fields: labels, annotations, env, owners, nodeAffinity, podAntiAffinity, containers, initContainers, & so on `spec.*.spec` will mostly reflect the fields supported by Pod kind This resource is built & created by the CSI driver on a volume create request. This resource will be the trigger to get the desired i.e. requested cstor volume into the actual state in Kubernetes cluster. CStorVolumeConfig controller will reconcile this config into actual resources. CStorVolumeConfig controller will will be deployed as k8s deployment in k8s cluster. CStorVolumeConfig will be reconciled into following owned resources: CStorVolume (Kubernetes custom resource) CStor volume target (Kubernetes Deployment) CStor volume service (Kubernetes Service) CStorVolumeReplica (Kubernetes custom resource) Following is the proposed schema for `CStorVolumeConfig`: ```go type CStorVolumeConfig struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` // Spec defines a specification of a cstor volume config required // to provisione cstor volume resources Spec CStorVolumeConfigSpec `json:\"spec\"` // Publish contains info related to attachment of a volume to a node. // i.e. NodeId etc. Publish CStorVolumeConfigPublish `json:\"publish,omitempty\"` // Status represents the current information/status for the cstor volume // config, populated by the controller. Status CStorVolumeConfigStatus `json:\"status\"` VersionDetails VersionDetails `json:\"versionDetails\"` } // CStorVolumeConfigSpec is the spec for a CStorVolumeConfig resource type CStorVolumeConfigSpec struct { // Capacity represents the actual resources of the underlying // cstor volume. Capacity corev1.ResourceList `json:\"capacity\"` // CStorVolumeRef has the information about where CstorVolumeClaim // is created from. CStorVolumeRef *corev1.ObjectReference `json:\"cstorVolumeRef,omitempty\"` // CStorVolumeSource contains the source volumeName@snapShotname // combaination. This will be filled only if it is a clone creation. CStorVolumeSource string `json:\"cstorVolumeSource,omitempty\"` // Provision represents the initial volume configuration for the underlying // cstor volume based on the persistent volume request by user. Provision // properties are immutable Provision VolumeProvision `json:\"provision\"` // Policy contains volume specific required policies target and replicas Policy CStorVolumePolicySpec `json:\"policy\"` } type VolumeProvision struct { // Capacity represents initial capacity of volume replica required during // volume clone operations to maintain some metadata info related to child // resources like snapshot, cloned volumes. Capacity corev1.ResourceList `json:\"capacity\"` // ReplicaCount represents initial cstor volume replica count, its will not // be updated later on based on scale up/down operations, only readonly // operations and validations. ReplicaCount int `json:\"replicaCount\"` } // CStorVolumeConfigPublish contains info related to attachment of a volume to a node. // i.e. NodeId etc. 
type CStorVolumeConfigPublish struct { // NodeID contains publish info related to attachment of a volume to a node. NodeID string `json:\"nodeId,omitempty\"` } // CStorVolumeConfigPhase represents the current phase of CStorVolumeConfig. type CStorVolumeConfigPhase string const ( //CStorVolumeConfigPhasePending indicates that the cvc is still waiting for //the cstorvolume to be created and bound CStorVolumeConfigPhasePending CStorVolumeConfigPhase = \"Pending\" //CStorVolumeConfigPhaseBound indiacates that the cstorvolume has been //provisioned and bound to the cstor volume config CStorVolumeConfigPhaseBound CStorVolumeConfigPhase = \"Bound\" //CStorVolumeConfigPhaseFailed indiacates that the cstorvolume provisioning //has failed CStorVolumeConfigPhaseFailed CStorVolumeConfigPhase = \"Failed\" ) // CStorVolumeConfigStatus is for handling status of CstorVolume Claim. // defines the observed state of CStorVolumeConfig type CStorVolumeConfigStatus struct { // Phase represents the current phase of"
},
{
"data": "Phase CStorVolumeConfigPhase `json:\"phase,omitempty\"` // PoolInfo represents current pool names where volume replicas exists PoolInfo []string `json:\"poolInfo,omitempty\"` // Capacity the actual resources of the underlying volume. Capacity corev1.ResourceList `json:\"capacity,omitempty\"` Conditions []CStorVolumeConfigCondition `json:\"condition,omitempty\"` } // CStorVolumeConfigCondition contains details about state of cstor volume type CStorVolumeConfigCondition struct { // Current Condition of cstor volume config. If underlying persistent volume is being // resized then the Condition will be set to 'ResizeStarted' etc Type CStorVolumeConfigConditionType `json:\"type\"` // Last time we probed the condition. // +optional LastProbeTime metav1.Time `json:\"lastProbeTime,omitempty\"` // Last time the condition transitioned from one status to another. // +optional LastTransitionTime metav1.Time `json:\"lastTransitionTime,omitempty\"` // Reason is a brief CamelCase string that describes any failure Reason string `json:\"reason\"` // Human-readable message indicating details about last transition. Message string `json:\"message\"` } // CStorVolumeConfigConditionType is a valid value of CstorVolumeConfigCondition.Type type CStorVolumeConfigConditionType string // These constants are CVC condition types related to resize operation. const ( // CStorVolumeConfigResizePending ... CStorVolumeConfigResizing CStorVolumeConfigConditionType = \"Resizing\" // CStorVolumeConfigResizeFailed ... CStorVolumeConfigResizeFailed CStorVolumeConfigConditionType = \"VolumeResizeFailed\" // CStorVolumeConfigResizeSuccess ... CStorVolumeConfigResizeSuccess CStorVolumeConfigConditionType = \"VolumeResizeSuccessful\" // CStorVolumeConfigResizePending ... CStorVolumeConfigResizePending CStorVolumeConfigConditionType = \"VolumeResizePending\" ) ``` Refer Refer These are some of the controller patterns that can be followed while implementing controllers. 
Status.Phase of a CStorVolumeConfig (CVC) resource can have following: Pending Bound Reconciling Status.Phase is only set during creation Once the phase is Bound it should never revert to Pending Status.Conditions will be used to track an on-going operation or sub status-es It will be an array It will have ConditionStatus as a field which can have below values: True False Unknown An item within a condition can be added, updated & deleted by the resources own controller An item within a condition can be updated by a separate controller The resources own controller still holds the right to delete this condition item Below is a sample schema snippet for `CStorVolumeConfigStatus` resource ```go type CStorVolumeConfigStatus struct { Phase CStorVolumeConfigPhase `json:\"phase\"` Conditions []CStorVolumeConfigCondition `json:\"conditions\"` } type CStorVolumeConfigCondition struct { Type CStorVolumeConfigConditionType `json:\"type\"` Status CStorVolumeConfigConditionStatus `json:\"status\"` LastTransitionTime metav1.Time `json:\"lastTransitionTime\"` LastUpdateTime metav1.Time `json:\"lastUpdateTime\"` Reason string `json:\"reason\"` Message string `json:\"message\"` } type CStorVolumeConfigConditionType string const ( CVCConditionResizing CStorVolumeConfigConditionType = \"resizing\" ) type CStorVolumeConfigConditionStatus string const ( CVCConditionStatusTrue CStorVolumeConfigConditionStatus = \"true\" CVCConditionStatusFalse CStorVolumeConfigConditionStatus = \"false\" CVCConditionStatusUnknown CStorVolumeConfigConditionStatus = \"unknown\" ) ``` This proposal tries to do away with existing ways to provision and delete volume. OpenEBS control plane (known as Maya) currently handles all the volume provisioning requirements. This will involve considerable risk in terms of time and effort since existing way that works will be done away with. We shall try to avoid major disruptions, by having the following processes in place: Test-Driven Development (TDD) methodology Need to implement automated test cases in parallel to the development Try to reuse custom resources that enable provisioning openebs volumes Integration test covers volume creation & volume deletion usecases Implementation does not need a custom (i.e. forked) CSI Kubernetes provisioner Existing/Old way of volume provisioning is not impacted Kubernetes CSI testsuite passes this CSI implementation Owner acceptance of `Summary` and `Motivation` sections - YYYYMMDD Agreement on `Proposal` section - YYYYMMDD Date implementation started - YYYYMMDD First OpenEBS release where an initial version of this OEP was available - YYYYMMDD Version of OpenEBS where this OEP graduated to general availability - YYYYMMDD If this OEP was retired or superseded - YYYYMMDD NA NA Availability of github.com/openebs/csi repo Enable integration with Travis as the minimum CI tool"
}
] |
{
"category": "Runtime",
"file_name": "20190606-csi-volume-provisioning.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
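To make the `CStorVolumePolicySpec` schema in the OEP above more concrete, here is a hedged sample manifest assembled from the JSON tags in that schema. The `apiVersion`, namespace, and all values (queue depth, worker counts, pool names) are assumptions for illustration only; the field names are the ones defined by the Go types.

```yaml
# Illustrative CStorVolumePolicy built from the schema's JSON tags.
apiVersion: cstor.openebs.io/v1        # assumed API group/version
kind: CStorVolumePolicy
metadata:
  name: sample-cstor-volume-policy
  namespace: openebs                   # assumed namespace
spec:
  provision:
    replicaAffinity: true              # spread replicas across pool instances
  target:
    queueDepth: "32"
    luWorkers: 6
    monitor: true
  replica:
    zvolWorkers: "4"
    compression: lz4
  replicaPoolInfo:                     # hypothetical pool names
    - poolName: cstor-pool-1
    - poolName: cstor-pool-2
    - poolName: cstor-pool-3
```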
[
{
"data": "Returns a set of temporary security credentials for applications/clients who have been authenticated through client credential grants provided by identity provider. Example providers include KeyCloak, Okta etc. Calling AssumeRoleWithClientGrants does not require the use of MinIO default credentials. Therefore, client application can be distributed that requests temporary security credentials without including MinIO default credentials. Instead, the identity of the caller is validated by using a JWT access token from the identity provider. The temporary security credentials returned by this API consists of an access key, a secret key, and a security token. Applications can use these temporary security credentials to sign calls to MinIO API operations. By default, the temporary security credentials created by AssumeRoleWithClientGrants last for one hour. However, use the optional DurationSeconds parameter to specify the duration of the credentials. This value varies from 900 seconds (15 minutes) up to the maximum session duration of 365 days. The OAuth 2.0 access token that is provided by the identity provider. Application must get this token by authenticating the application using client credential grants before the application makes an AssumeRoleWithClientGrants call. | Params | Value | | :-- | :-- | | Type | String | | Length Constraints | Minimum length of 4. Maximum length of 2048. | | Required | Yes | Indicates STS API version information, the only supported value is '2011-06-15'. This value is borrowed from AWS STS API documentation for compatibility reasons. | Params | Value | | :-- | :-- | | Type | String | | Required | Yes | The duration, in seconds. The value can range from 900 seconds (15 minutes) up to 365 days. If value is higher than this setting, then operation fails. By default, the value is set to 3600 seconds. If no DurationSeconds is specified expiry seconds is obtained from Token. | Params | Value | | :-- | :-- | | Type | Integer | | Valid Range | Minimum value of 900. Maximum value of 31536000. | | Required | No | An IAM policy in JSON format that you want to use as an inline session policy. This parameter is optional. Passing policies to this operation returns new temporary credentials. The resulting session's permissions are the intersection of the canned policy name and the policy set here. You cannot use this policy to grant more permissions than those allowed by the canned policy name being assumed. | Params | Value | | :-- | :-- | | Type | String | | Valid Range | Minimum length of 1. Maximum length of 2048. 
| | Required | No | XML response for this API is similar to XML error response for this API is similar to ``` http://minio.cluster:9000?Action=AssumeRoleWithClientGrants&DurationSeconds=3600&Token=eyJ4NXQiOiJOVEF4Wm1NeE5ETXlaRGczTVRVMVpHTTBNekV6T0RKaFpXSTRORE5sWkRVMU9HRmtOakZpTVEiLCJraWQiOiJOVEF4Wm1NeE5ETXlaRGczTVRVMVpHTTBNekV6T0RKaFpXSTRORE5sWkRVMU9HRmtOakZpTVEiLCJhbGciOiJSUzI1NiJ9.eyJhdWQiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiYXpwIjoiUG9FZ1hQNnVWTzQ1SXNFTlJuZ0RYajVBdTVZYSIsImlzcyI6Imh0dHBzOlwvXC9sb2NhbGhvc3Q6OTQ0M1wvb2F1dGgyXC90b2tlbiIsImV4cCI6MTU0MTgwOTU4MiwiaWF0IjoxNTQxODA1OTgyLCJqdGkiOiI2Y2YyMGIwZS1lNGZmLTQzZmQtYTdiYS1kYTc3YTE3YzM2MzYifQ.Jm29jPliRvrK6Os34nSK3rhzIYLFjEzdVGNng3uGKXGKzP3Wei6NPnhA0szJXMOKglXzUF1UgSz8MctbaxFS8XDusQPVe4LkB45hwBm6TmBxzui911nt-1RbBLNjZIlvl2lPrbTUH5hSn9kEkph6seWanTNQpz9tNEoVa6ROX3kpJqxe8tLQUWw453A1JTwFNhdHa6-f1K8QeEZ4gOYINQ9tfhTibdbkXZkJQFLop-Jwoybi9s4nwQU_dATocgcufq5eCeNItQeleT-23lGxIz0X7CiJrJynYLdd-ER0F77SumqEb5iCxhxuf4H7dovwd1kAmyKzLxpw&Version=2011-06-15 ``` ``` <?xml version=\"1.0\" encoding=\"UTF-8\"?> <AssumeRoleWithClientGrantsResponse xmlns=\"https://sts.amazonaws.com/doc/2011-06-15/\"> <AssumeRoleWithClientGrantsResult> <AssumedRoleUser> <Arn/> <AssumeRoleId/> </AssumedRoleUser> <Credentials> <AccessKeyId>Y4RJU1RNFGK48LGO9I2S</AccessKeyId> <SecretAccessKey>sYLRKS1Z7hSjluf6gEbb9066hnx315wHTiACPAjg</SecretAccessKey> <Expiration>2019-08-08T20:26:12Z</Expiration> <SessionToken>eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJZNFJKVTFSTkZHSzQ4TEdPOUkyUyIsImF1ZCI6IlBvRWdYUDZ1Vk80NUlzRU5SbmdEWGo1QXU1WWEiLCJhenAiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiZXhwIjoxNTQxODExMDcxLCJpYXQiOjE1NDE4MDc0NzEsImlzcyI6Imh0dHBzOi8vbG9jYWxob3N0Ojk0NDMvb2F1dGgyL3Rva2VuIiwianRpIjoiYTBiMjc2MjktZWUxYS00M2JmLTg3MzktZjMzNzRhNGNkYmMwIn0.ewHqKVFTaP-jkgZrcOEKroNUjk10GEp8bqQjxBbYVovV0nHO985VnRESFbcT6XMDDKHZiWqN2viETX_u3Q-w</SessionToken> </Credentials> </AssumeRoleWithClientGrantsResult> <ResponseMetadata/> </AssumeRoleWithClientGrantsResponse> ``` ``` export MINIOROOTUSER=minio export MINIOROOTPASSWORD=minio123 export MINIOIDENTITYOPENIDCONFIGURL=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration export MINIOIDENTITYOPENIDCLIENTID=\"843351d4-1080-11ea-aa20-271ecba3924a\" minio server /mnt/export ``` Testing with an example Obtaining client ID and secrets follow ``` $ go run client-grants.go -cid PoEgXP6uVO45IsENRngDXj5Au5Ya -csec eKsw6z8CtOJVBtrOWvhRWL4TUCga { \"accessKey\": \"NUIBORZYTV2HG2BMRSXR\", \"secretKey\": \"qQlP5O7CFPc5m5IXf1vYhuVTFj7BRVJqh0FqZ86S\", \"expiration\": \"2018-08-21T17:10:29-07:00\", \"sessionToken\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJOVUlCT1JaWVRWMkhHMkJNUlNYUiIsImF1ZCI6IlBvRWdYUDZ1Vk80NUlzRU5SbmdEWGo1QXU1WWEiLCJhenAiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiZXhwIjoxNTM0ODk2NjI5LCJpYXQiOjE1MzQ4OTMwMjksImlzcyI6Imh0dHBzOi8vbG9jYWxob3N0Ojk0NDMvb2F1dGgyL3Rva2VuIiwianRpIjoiNjY2OTZjZTctN2U1Ny00ZjU5LWI0MWQtM2E1YTMzZGZiNjA4In0.eJONnVaSVHypiXKEARSMnSKgr-2mlC2Sr4fEGJitLcJFat3LeNdTHv0oHsv6ZZA3zueVGgFlVXMlREgr9LXA\" } ```"
}
] |
{
"category": "Runtime",
"file_name": "client-grants.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
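As a rough illustration of the AssumeRoleWithClientGrants request shape documented above, the following Go sketch builds the call with only the standard library, mirroring the sample URL (Action, Version, DurationSeconds and Token as query parameters). The endpoint and token values are placeholders and this is a sketch rather than an official client; in practice MinIO's SDK credential helpers are the usual route.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Placeholder values: substitute your MinIO endpoint and a real OAuth 2.0
	// access token obtained from the identity provider (KeyCloak, Okta, ...).
	endpoint := "http://minio.cluster:9000"
	accessToken := "REPLACE_WITH_JWT_ACCESS_TOKEN"

	q := url.Values{}
	q.Set("Action", "AssumeRoleWithClientGrants")
	q.Set("Version", "2011-06-15")   // only supported version
	q.Set("DurationSeconds", "3600") // optional: 900 .. 31536000
	q.Set("Token", accessToken)

	req, err := http.NewRequest(http.MethodPost, endpoint+"?"+q.Encode(), nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// On success the XML body carries AccessKeyId, SecretAccessKey,
	// SessionToken and Expiration; on failure an STS-style error document.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```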
[
{
"data": "Join the [kubernetes-security-announce] group for security and vulnerability announcements. You can also subscribe to an RSS feed of the above using . Instructions for reporting a vulnerability can be found on the [Kubernetes Security and Disclosure Information] page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website."
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "`Scheduler` is a background task management module, mainly responsible for balancing, disk repair, node offline, deletion, repair and other background tasks. The configuration of the Scheduler is based on the , and the following configuration instructions mainly focus on the private configuration of the Scheduler. | Configuration Item | Description | Required | |:-|:--|:--| | Public configuration | Such as service port, running logs, audit logs, etc., refer to the section | Yes | | cluster_id | Cluster ID, unified ID within the cluster | Yes | | services | List of all nodes of the Scheduler | Yes, refer to the example | | service_register | Service registration information | Yes, refer to the example | | clustermgr | Clustermgr client initialization configuration | Yes, clustermgr service address needs to be configured | | proxy | Proxy client initialization configuration | No, refer to the rpc configuration example | | blobnode | BlobNode client initialization configuration | No, refer to the rpc configuration example | | kafka | Kafka related configuration | Yes | | balance | Load balancing task parameter configuration | No | | disk_drop | Disk offline task parameter configuration | No | | disk_repair | Disk repair task parameter configuration | No | | volume_inspect | Volume inspection task parameter configuration (this volume refers to the volume in the erasure code subsystem) | No | | shard_repair | Repair task parameter configuration | Yes, the directory for storing orphan data logs needs to be configured | | blob_delete | Deletion task parameter configuration | Yes, the directory for storing deletion logs needs to be configured | | topologyupdateinterval_min | Configure the time interval for updating the cluster topology | No, default is 1 minute | | volumecacheupdateintervals | Volume cache update frequency to avoid frequent updates of volumes in a short period of time | No, default is 10s | | freechunkcounter_buckets | Bucket access for freechunk indicators | No, default is \\[1, 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000\\] | | task_log | Record information of completed background tasks for backup | Yes, directory needs to be configured, chunkbits default is 29 | leader, ID of the main node node_id, ID of the current node members, all Scheduler nodes ```json { \"leader\": 1, \"node_id\": 1, \"members\": { \"1\": \"127.0.0.1:9800\", \"2\": \"127.0.0.2:9800\" } } ``` host, service address of the current node idc, IDC information of the current node ```json { \"host\": \"http://127.0.0.1:9800\", \"idc\": \"z0\" } ``` hosts, clustermgr service list Other RPC parameters, refer to the general module description ```json { \"hosts\": [ \"http://127.0.0.1:9998\", \"http://127.0.0.2:9998\","
},
{
"data": "] } ``` ::: tip Note Starting from v3.3.0, consumer groups are supported. For previous versions, please refer to the corresponding configuration file. ::: broker_list, Kafka node list failmsgsendertimeoutms, timeout for resending messages to the failed topic after message consumption fails, default is 1000ms version, kafka version, default is 2.1.0 topicsconsume topics shardrepair, normal topic, default are `shardrepair` and `shardrepairprior` shardrepairfailed, failed topic, default is `shardrepairfailed` blobdelete, normal topic, default is `blobdelete` blobdeletefailed, failed topic, default is `blobdeletefailed` ```json { \"broker_list\": [\"127.0.0.1:9095\",\"127.0.0.1:9095\",\"127.0.0.1:9095\"], \"failmsgsendertimeoutms\": 1000, \"version\": \"0.10.2.0\", \"topics\": { \"shard_repair\": [ \"shard_repair\", \"shardrepairprior\" ], \"shardrepairfailed\": \"shardrepairfailed\", \"blobdelete\": \"blobdelete\", \"blobdeletefailed\": \"blobdeletefailed\" } } ``` diskconcurrency, the maximum number of disks allowed to be balanced simultaneously, default is 1 (before v3.3.0, this value was balancediskcntlimit, default is 100) maxdiskfreechunkcnt, when balancing, it will be judged whether there are disks with freechunk greater than or equal to this value in the current IDC. If not, no balance will be initiated. The default is 1024. mindiskfreechunkcnt, disks with freechunk less than this value will be balanced, default is 20 preparequeueretrydelays, retry interval for the preparation queue when a task in the preparation queue fails to execute, default is 10 finishqueueretrydelays, retry interval for the completion queue, default is 10 cancelpunishduration_s, retry interval after task cancellation, default is 20 workqueuesize, size of the queue for executing tasks, default is 20 collecttaskinterval_s, time interval for collecting tasks, default is 5 checktaskinterval_s, time interval for task verification, default is 5 ```json { \"disk_concurrency\": 700, \"maxdiskfreechunkcnt\": 500, \"mindiskfreechunkcnt\": 105, \"preparequeueretrydelays\": 60, \"finishqueueretrydelays\": 60, \"cancelpunishduration_s\": 60, \"workqueuesize\": 600, \"collecttaskinterval_s\": 10, \"checktaskinterval_s\": 1 } ``` ::: tip Note Starting from version v3.3.0, concurrent disk offline is supported. ::: preparequeueretrydelays, retry interval for the preparation queue when a task in the preparation queue fails to execute, default is 10 finishqueueretrydelays, retry interval for the completion queue, default is 10 cancelpunishduration_s, retry interval after task cancellation, default is 20 workqueuesize, size of the queue for executing tasks, default is 20 collecttaskinterval_s, time interval for collecting tasks, default is 5 checktaskinterval_s, time interval for task verification, default is 5 disk_concurrency, the number of disks to be offline concurrently, default is 1 ```json { \"preparequeueretrydelays\": 60, \"finishqueueretrydelays\": 60, \"cancelpunishduration_s\": 60, \"workqueuesize\": 600, \"collecttaskinterval_s\": 10, \"checktaskinterval_s\": 1, \"disk_concurrency\": 1 } ``` ::: tip Note Starting from version v3.3.0, concurrent disk repair is"
},
{
"data": "::: preparequeueretrydelays, retry interval for the preparation queue when a task in the preparation queue fails to execute, default is 10 finishqueueretrydelays, retry interval for the completion queue, default is 10 cancelpunishduration_s, retry interval after task cancellation, default is 20 workqueuesize, size of the queue for executing tasks, default is 20 collecttaskinterval_s, time interval for collecting tasks, default is 5 checktaskinterval_s, time interval for task verification, default is 5 disk_concurrency, the number of disks to be repaired concurrently, default is 1 ```json { \"preparequeueretrydelays\": 60, \"finishqueueretrydelays\": 60, \"cancelpunishduration_s\": 60, \"workqueuesize\": 600, \"collecttaskinterval_s\": 10, \"checktaskinterval_s\": 1, \"disk_concurrency\": 1 } ``` inspectintervals, inspection time interval, default is 1s inspect_batch, batch inspection volume size, default is 1000 listvolstep, the size of requesting clustermgr to list volumes, which can control the QPS of requesting clustermgr, default is 100 listvolinterval_ms, time interval for requesting clustermgr to list volumes, default is 10ms timeout_ms, time interval for checking whether a batch of inspection tasks is completed, default is 10000ms ```json { \"inspectintervals\": 100, \"inspect_batch\": 10, \"listvolstep\": 20, \"listvolinterval_ms\": 10, \"timeout_ms\": 10000 } ``` taskpoolsize, concurrency of repair tasks, default is 10 messagepunishthreshold, Punishment threshold, if the corresponding number of failed attempts to consume a message exceeds this value, a punishment will be imposed for a period of time to avoid excessive retries within a short period. The default value is 3. messagepunishtime_m, punishment time, default 10 minutes orphanshardlog, record information of orphan data repair failures, directory needs to be configured, chunkbits is the log file rotation size, default is 29 (2^29 bytes) ```json { \"taskpoolsize\": 10, \"messagepunishthreshold\": 3, \"messagepunishtime_m\": 10, \"orphanshardlog\": { \"dir\": \"/home/service/scheduler/package/orphanshard_log\", \"chunkbits\": 29 } } ``` ::: tip Note Starting from version v3.3.0, it is supported to configure the data deletion time period. ::: taskpoolsize, concurrency of deletion tasks, default is 10 safedelaytime_h, deletion protection period, default is 72h. If a negative value is configured, the data will be deleted directly. messagepunishthreshold, Punishment threshold, if the corresponding number of failed attempts to consume a message exceeds this value, a punishment will be imposed for a period of time to avoid excessive retries within a short period. The default value is 3. messagepunishtime_m, punishment time, default 10 minutes messageslowdowntimes, slow down when it overload, default 3 second delete_log, directory for storing deletion logs, needs to be configured, chunkbits default is 29 deletehourrange, supports configuring the deletion time period in 24-hour format. For example, the following configuration indicates that deletion requests will only be initiated during the time period between 1:00 a.m. and 3:00 a.m. If not configured, deletion will be performed all day. maxbatchsize, batch consumption size of kafka messages, default is 10. 
If the batch is full or the time interval is reached, the Kafka messages accumulated during this period are consumed. batchintervals, time interval for consuming kafka messages, default is 2s ```json { \"taskpoolsize\": 400, \"messagepunishthreshold\": 3, \"messagepunishtime_m\": 10, \"messageslowdowntimes\": 3, \"safedelaytime_h\": 12, \"deletehourrange\": { \"from\": 1, \"to\": 3 }, \"maxbatchsize\": 10, \"batchintervals\": 2, \"delete_log\": { \"dir\": \"/home/service/scheduler/package/deletelog\", \"chunkbits\": 29 } } ```"
}
] |
{
"category": "Runtime",
"file_name": "scheduler.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
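The `deletehourrange` setting above only gates when deletion requests are issued. As a quick sanity check of that behaviour, here is a small self-contained Go sketch (not CubeFS code) that decides whether a given hour falls inside a configured from/to window; the inclusive-boundary interpretation and the zero-value-means-unrestricted rule are assumptions made for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// DeleteHourRange mirrors the scheduler's deletehourrange config block.
// The exact boundary rule is an assumption here; consult the CubeFS source
// for the authoritative behaviour.
type DeleteHourRange struct {
	From int `json:"from"`
	To   int `json:"to"`
}

// inDeleteWindow reports whether deletions may run at the given time.
// A zero-value range is treated as "no restriction" (delete all day).
func inDeleteWindow(r DeleteHourRange, t time.Time) bool {
	if r.From == 0 && r.To == 0 {
		return true
	}
	h := t.Hour()
	return h >= r.From && h <= r.To
}

func main() {
	r := DeleteHourRange{From: 1, To: 3} // delete only between 01:00 and 03:00
	fmt.Println(inDeleteWindow(r, time.Date(2023, 1, 1, 2, 30, 0, 0, time.UTC))) // true
	fmt.Println(inDeleteWindow(r, time.Date(2023, 1, 1, 14, 0, 0, 0, time.UTC))) // false
}
```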
[
{
"data": "These document describes concepts and terminology used by the Sysbox container runtime. We use these throughout our documents. The software that given the container's configuration and root filesystem (i.e., a directory that has the contents of the container) interacts with the Linux kernel to create the container. Sysbox and the are examples of container runtimes. The entity that provides the container's configuration and root filesystem to the container runtime is typically a container manager (e.g., Docker, containerd). The container manager manages the container's lifecycle, from image transfer and storage to container execution (by interacting with the container runtime). Examples are Docker, containerd, etc. The describes the interface between the container manager and the container runtime. We call the containers deployed by Sysbox system containers, to highlight the fact that they can run not just micro-services (as regular containers do), but also system software such as Docker, Kubernetes, Systemd, inner containers, etc. Traditionally, containers package a single application / micro-service. This makes sense for application containers, where multiple such containers form the application and separation of concerns is important. However, system containers deviate from this a bit: they are meant to be used as light-weight, super-efficient \"virtual hosts\", and thus typically bundle multiple services. Within the system container you can run the services of your choice (e.g., Systemd, sshd, Docker, etc.), and even launch (inner) containers just as you would on a physical host of VM. You can think of it as a \"virtual host\" or a \"container of containers\". Of course, you can package a single service (e.g., Docker daemon) if you so desire; the choice is yours. System containers provide an alternative to VMs in many scenarios, but are much more flexible, efficient, and portable. They offer strong isolation (in fact stronger than regular Docker containers) but to a lesser degree than the isolation provided by a VM. For more info on system containers, see this . Sysbox is a container runtime that creates system containers. When launching Docker inside a system container, terminology can quickly get confusing due to container nesting. To prevent confusion we refer to the containers as the \"outer\" and \"inner\" containers. The outer container is a system container, created at the host level; it's launched with Docker + Sysbox. The inner container is an application container, created within the outer container (e.g., it's created by the Docker or Kubernetes instance running inside the system container). DinD refers to deploying Docker (CLI + Daemon) inside Docker containers. Sysbox supports DinD using well-isolated (unprivileged) containers and without the need for complex Docker run commands or specialized images. KinD refers to deploying Kubernetes inside Docker containers. Each Docker container acts as a K8s node (replacing a VM or physical host). A K8s cluster is composed of one or more of these containers, connected via an overlay network (e.g., Docker bridge). Sysbox supports KinD with high efficiency, using well-isolated (unprivileged) containers, and without the need for complex Docker run commands or specialized images."
}
] |
{
"category": "Runtime",
"file_name": "concepts.md",
"project_name": "Sysbox",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Architecture description: How Virtual Kubelet works weight: 3 This document provides a high-level overview of how Virtual Kubelet works. It begins by explaining how normali.e. non-virtual and then by way of contrast. Ordinarily, Kubernetes implement Pod and container operations for each Kubernetes Node. They run as an agent on each Node, whether that Node is a physical server or a virtual machine, and handles Pod/container operations on that Node. kubelets take a configuration called a PodSpec as input and work to ensure that containers specified in the PodSpec are running and healthy. From the standpoint of the Kubernetes API server, Virtual Kubelets seem like normal kubelets, but with the crucial difference that they scheduler containers elsewhere, for example in a cloud serverless API, and not on the Node. below shows a Kubernetes cluster with a series of standard kubelets and one Virtual Kubelet: {{< svg src=\"img/diagram.svg\" caption=\"Standard vs. Virtual Kubelets\" >}}"
}
] |
{
"category": "Runtime",
"file_name": "architecture.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
}
|
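To make the contrast above concrete, the sketch below shows, in plain Go, the general shape of a provider that a Virtual Kubelet could delegate Pod operations to. The `Provider` interface and method names are simplified illustrations invented for this example, not the actual virtual-kubelet provider API, which defines a richer pod-lifecycle interface.

```go
package main

import (
	"context"
	"fmt"
)

// Pod is a minimal stand-in for a Kubernetes PodSpec plus metadata.
type Pod struct {
	Namespace string
	Name      string
	Image     string
}

// Provider is an illustrative, simplified version of what a Virtual Kubelet
// delegates to: instead of starting containers on the local Node, each call
// forwards the work to an external system (serverless API, remote cluster, ...).
type Provider interface {
	CreatePod(ctx context.Context, p Pod) error
	DeletePod(ctx context.Context, p Pod) error
	GetPodStatus(ctx context.Context, namespace, name string) (string, error)
}

// serverlessProvider pretends to run Pods on a cloud serverless API.
type serverlessProvider struct {
	running map[string]Pod
}

func (s *serverlessProvider) key(ns, name string) string { return ns + "/" + name }

func (s *serverlessProvider) CreatePod(ctx context.Context, p Pod) error {
	// A real provider would call the external API here.
	s.running[s.key(p.Namespace, p.Name)] = p
	return nil
}

func (s *serverlessProvider) DeletePod(ctx context.Context, p Pod) error {
	delete(s.running, s.key(p.Namespace, p.Name))
	return nil
}

func (s *serverlessProvider) GetPodStatus(ctx context.Context, ns, name string) (string, error) {
	if _, ok := s.running[s.key(ns, name)]; ok {
		return "Running", nil
	}
	return "Unknown", fmt.Errorf("pod %s/%s not found", ns, name)
}

func main() {
	var p Provider = &serverlessProvider{running: map[string]Pod{}}
	_ = p.CreatePod(context.Background(), Pod{Namespace: "default", Name: "web", Image: "nginx"})
	status, _ := p.GetPodStatus(context.Background(), "default", "web")
	fmt.Println(status) // Running
}
```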
[
{
"data": "Velero currently does not support any restore policy on Kubernetes resources that are already present in-cluster. Velero skips over the restore of the resource if it already exists in the namespace/cluster irrespective of whether the resource present in the restore is the same or different from the one present on the cluster. It is desired that Velero gives the option to the user to decide whether or not the resource in backup should overwrite the one present in the cluster. As of Today, Velero will skip over the restoration of resources that already exist in the cluster. The current workflow followed by Velero is (Using a `service` that is backed up for example): Velero tries to attempt restore of the `service` Fetches the `service` from the cluster If the `service` exists then: Checks whether the `service` instance in the cluster is equal to the `service` instance present in backup If not equal then skips the restore of the `service` and adds a restore warning (except for ) If equal then skips the restore of the `service` and mentions that the restore of resource `service` is skipped in logs It is desired to add the functionality to specify whether or not to overwrite the instance of resource `service` in cluster with the one present in backup during the restore process. Related issue: https://github.com/vmware-tanzu/velero/issues/4066 Add support for `ExistingResourcePolicy` to restore API for Kubernetes resources. Change existing restore workflow for `ServiceAccount` objects Add support for `ExistingResourcePolicy` as `recreate` for Kubernetes resources. (Future scope feature) Add support for `ExistingResourcePolicy` to restore API for Non-Kubernetes resources. Add support for `ExistingResourcePolicy` to restore API for `PersistentVolume` data. Let's say you have a Backup Cluster which is identical to the Production Cluster. After some operations/usage/time the Production Cluster had changed itself, there might be new deployments, some secrets might have been updated. Now, this means that the Backup cluster will no longer be identical to the Production Cluster. In order to keep the Backup Cluster up to date/identical to the Production Cluster with respect to Kubernetes resources except PV data we would like to use Velero for scheduling new backups which would in turn help us update the Backup Cluster via Velero restore. Reference: https://github.com/vmware-tanzu/velero/issues/4066#issuecomment-954320686 Here delta resources mean the resources restored by a previous backup, but they are no longer in the latest backup. Let's follow a sequence of steps to understand this scenario: Consider there are 2 clusters, Cluster A, which has 3 resources - P1, P2 and P3. Create a Backup1 from Cluster A which has P1, P2 and P3. Perform restore on a new Cluster B using Backup1. Now, Lets say in Cluster A resource P1 gets deleted and resource P2 gets"
},
{
"data": "Create a new Backup2 with the new state of Cluster A, keep in mind Backup1 has P1, P2 and P3 while Backup2 has P2' and P3. So the Delta here is (|Cluster B - Backup2|), Delete P1 and Update P2. During Restore time we would want the Restore to help us identify this resource delta. Reference: https://github.com/vmware-tanzu/velero/pull/4613#issuecomment-1027260446 In this approach we do not change existing velero behavior. If the resource to restore in cluster is equal to the one backed up then do nothing following current Velero behavior. For resources that already exist in the cluster that are not equal to the resource in the backup (other than Service Accounts). We add a new optional spec field `existingResourcePolicy` which can have the following values: `none`: This is the existing behavior, if Velero encounters a resource that already exists in the cluster, we simply skip restoration. `update`: This option would provide the following behavior. Unchanged resources: Velero would update the backup/restore labels on the unchanged resources, if labels patch fails Velero adds a restore error. Changed resources: Velero will first try to patch the changed resource, Now if the patch: succeeds: Then the in-cluster resource gets updated with the labels as well as the resource diff fails: Velero adds a restore warning and tries to just update the backup/restore labels on the resource, if the labels patch also fails then we add restore error. `recreate`: If resource already exists, then Velero will delete it and recreate the resource. Note: The `recreate` option is a non-goal for this enhancement proposal, but it is considered as a future scope. Another thing to highlight is that Velero will not be deleting any resources in any of the policy options proposed in this design but Velero will patch the resources in `update` policy option. Example: A. The following Restore will execute the `existingResourcePolicy` restore type `none` for the `services` and `deployments` present in the `velero-protection` namespace. ``` Kind: Restore includeNamespaces: velero-protection includeResources: services deployments existingResourcePolicy: none ``` B. The following Restore will execute the `existingResourcePolicy` restore type `update` for the `secrets` and `daemonsets` present in the `gdpr-application` namespace. ``` Kind: Restore includeNamespaces: gdpr-application includeResources: secrets daemonsets existingResourcePolicy: update ``` In this approach we give user the ability to specify which resources are to be included for a particular kind of force update behaviour, essentially a more granular approach where in the user is able to specify a resource:behaviour mapping. It would look like: `existingResourcePolicyConfig`: `patch:` `includedResources:` [ ]string `recreate:` `includedResources:` [ ]string Note: There is no `none` behaviour in this approach as that would conform to the current/default Velero restore behaviour. The `recreate` option is a non-goal for this enhancement proposal, but it is considered as a future scope. Example: A. The following Restore will execute the restore type `patch` and apply the `existingResourcePolicyConfig` for `secrets` and `daemonsets` present in the `inventory-app`"
},
{
"data": "``` Kind: Restore includeNamespaces: inventory-app existingResourcePolicyConfig: patch: includedResources secrets daemonsets ``` Now, this approach is somewhat a combination of the aforementioned approaches. Here we propose addition of two spec fields to the Restore API - `existingResourceDefaultPolicy` and `existingResourcePolicyOverrides`. As the names suggest ,the idea being that `existingResourceDefaultPolicy` would describe the default velero behaviour for this restore and `existingResourcePolicyOverrides` would override the default policy explicitly for some resources. Example: A. The following Restore will execute the restore type `patch` as the `existingResourceDefaultPolicy` but will override the default policy for `secrets` using the `existingResourcePolicyOverrides` spec as `none`. ``` Kind: Restore includeNamespaces: inventory-app existingResourceDefaultPolicy: patch existingResourcePolicyOverrides: none: includedResources secrets ``` The `existingResourcePolicy` spec field will be an `PolicyType` type field. Restore API: ``` type RestoreSpec struct { . . . // ExistingResourcePolicy specifies the restore behaviour for the Kubernetes resource to be restored // +optional ExistingResourcePolicy PolicyType } ``` PolicyType: ``` type PolicyType string const PolicyTypeNone PolicyType = \"none\" const PolicyTypePatch PolicyType = \"update\" ``` The `existingResourcePolicyConfig` will be a spec of type `PolicyConfiguration` which gets added to the Restore API. Restore API: ``` type RestoreSpec struct { . . . // ExistingResourcePolicyConfig specifies the restore behaviour for a particular/list of Kubernetes resource(s) to be restored // +optional ExistingResourcePolicyConfig []PolicyConfiguration } ``` PolicyConfiguration: ``` type PolicyConfiguration struct { PolicyTypeMapping map[PolicyType]ResourceList } ``` PolicyType: ``` type PolicyType string const PolicyTypePatch PolicyType = \"patch\" const PolicyTypeRecreate PolicyType = \"recreate\" ``` ResourceList: ``` type ResourceList struct { IncludedResources []string } ``` Restore API: ``` type RestoreSpec struct { . . . // ExistingResourceDefaultPolicy specifies the default restore behaviour for the Kubernetes resource to be restored // +optional existingResourceDefaultPolicy PolicyType // ExistingResourcePolicyOverrides specifies the restore behaviour for a particular/list of Kubernetes resource(s) to be restored // +optional existingResourcePolicyOverrides []PolicyConfiguration } ``` PolicyType: ``` type PolicyType string const PolicyTypeNone PolicyType = \"none\" const PolicyTypePatch PolicyType = \"patch\" const PolicyTypeRecreate PolicyType = \"recreate\" ``` PolicyConfiguration: ``` type PolicyConfiguration struct { PolicyTypeMapping map[PolicyType]ResourceList } ``` ResourceList: ``` type ResourceList struct { IncludedResources []string } ``` The restore workflow changes will be done We would introduce a new CLI flag called `existing-resource-policy` of string type. This flag would be used to accept the policy from the user. The velero restore command would look somewhat like this: ``` velero create restore <restore_name> --existing-resource-policy=update ``` Help message `Restore Policy to be used during the restore workflow, can be - none, update` The CLI changes will go at `pkg/cmd/cli/restore/create.go` We would also add a validation which checks for invalid policy values provided to this flag. 
The restore describer will also be updated to reflect the policy (`pkg/cmd/util/output/restore_describer.go`). We have decided to go ahead with the implementation of Approach 1 because: it is easier to implement; it is easier to scale, leaves room for improvement, and keeps the door open to expanding to Approach 3; and it provides an option to preserve the existing Velero restore workflow."
}
] |
{
"category": "Runtime",
"file_name": "existing-resource-policy_design.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
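The Approach 1 policy values translate into a small branch in the restore flow. The following Go sketch is a hypothetical illustration of that branching, not Velero's actual restore code: given whether the in-cluster resource differs from the backed-up copy, it decides to skip, refresh labels only, or patch the full diff based on the `PolicyType`.

```go
package main

import "fmt"

// PolicyType mirrors the proposed restore API field values.
type PolicyType string

const (
	PolicyTypeNone  PolicyType = "none"
	PolicyTypePatch PolicyType = "update"
)

// Action is what the restore loop decides to do for one existing resource.
type Action string

const (
	ActionSkip       Action = "skip"
	ActionLabelsOnly Action = "patch-labels-only"
	ActionFullPatch  Action = "patch-diff-and-labels"
)

// decide is an illustrative decision function; `changed` says whether the
// in-cluster resource differs from the copy in the backup.
func decide(policy PolicyType, changed bool) Action {
	switch policy {
	case PolicyTypePatch:
		if changed {
			// Try to patch the diff; on failure the proposed flow falls back
			// to a labels-only patch and records a restore warning.
			return ActionFullPatch
		}
		// Unchanged resources still get backup/restore labels refreshed.
		return ActionLabelsOnly
	default: // PolicyTypeNone or unset keeps existing Velero behaviour.
		return ActionSkip
	}
}

func main() {
	fmt.Println(decide(PolicyTypeNone, true))   // skip
	fmt.Println(decide(PolicyTypePatch, false)) // patch-labels-only
	fmt.Println(decide(PolicyTypePatch, true))  // patch-diff-and-labels
}
```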
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-operator-azure completion fish | source To load completions for every new session, execute once: cilium-operator-azure completion fish > ~/.config/fish/completions/cilium-operator-azure.fish You will need to start a new shell for this setup to take effect. ``` cilium-operator-azure completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-azure_completion_fish.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Dump StateDB contents as JSON ``` cilium-dbg statedb dump [flags] ``` ``` -h, --help help for dump ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Inspect StateDB"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_statedb_dump.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - File | string | | Size | Pointer to int64 | | [optional] Iommu | Pointer to bool | | [optional] [default to false] DiscardWrites | Pointer to bool | | [optional] [default to false] PciSegment | Pointer to int32 | | [optional] Id | Pointer to string | | [optional] `func NewPmemConfig(file string, ) *PmemConfig` NewPmemConfig instantiates a new PmemConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewPmemConfigWithDefaults() *PmemConfig` NewPmemConfigWithDefaults instantiates a new PmemConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *PmemConfig) GetFile() string` GetFile returns the File field if non-nil, zero value otherwise. `func (o PmemConfig) GetFileOk() (string, bool)` GetFileOk returns a tuple with the File field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PmemConfig) SetFile(v string)` SetFile sets File field to given value. `func (o *PmemConfig) GetSize() int64` GetSize returns the Size field if non-nil, zero value otherwise. `func (o PmemConfig) GetSizeOk() (int64, bool)` GetSizeOk returns a tuple with the Size field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PmemConfig) SetSize(v int64)` SetSize sets Size field to given value. `func (o *PmemConfig) HasSize() bool` HasSize returns a boolean if a field has been set. `func (o *PmemConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o PmemConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PmemConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *PmemConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set. `func (o *PmemConfig) GetDiscardWrites() bool` GetDiscardWrites returns the DiscardWrites field if non-nil, zero value otherwise. `func (o PmemConfig) GetDiscardWritesOk() (bool, bool)` GetDiscardWritesOk returns a tuple with the DiscardWrites field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PmemConfig) SetDiscardWrites(v bool)` SetDiscardWrites sets DiscardWrites field to given value. `func (o *PmemConfig) HasDiscardWrites() bool` HasDiscardWrites returns a boolean if a field has been set. `func (o *PmemConfig) GetPciSegment() int32` GetPciSegment returns the PciSegment field if non-nil, zero value otherwise. `func (o PmemConfig) GetPciSegmentOk() (int32, bool)` GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PmemConfig) SetPciSegment(v int32)` SetPciSegment sets PciSegment field to given value. `func (o *PmemConfig) HasPciSegment() bool` HasPciSegment returns a boolean if a field has been set. `func (o *PmemConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o PmemConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. 
`func (o *PmemConfig) SetId(v string)` SetId sets Id field to given value. `func (o *PmemConfig) HasId() bool` HasId returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "PmemConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
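Putting the generated accessors documented above together, a caller might build a `PmemConfig` as below. To keep the example self-contained and runnable, the type here is a local stand-in that mirrors the documented constructor and setters; the real type comes from the generated Cloud Hypervisor API client used by Kata Containers.

```go
package main

import "fmt"

// PmemConfig is a local stand-in mirroring the generated API documented above.
type PmemConfig struct {
	File          string
	Size          *int64
	Iommu         *bool
	DiscardWrites *bool
	PciSegment    *int32
	Id            *string
}

// NewPmemConfig mirrors the documented constructor: File is required and
// optional fields keep their defaults until explicitly set.
func NewPmemConfig(file string) *PmemConfig {
	iommu, discard := false, false
	return &PmemConfig{File: file, Iommu: &iommu, DiscardWrites: &discard}
}

func (o *PmemConfig) SetSize(v int64)         { o.Size = &v }
func (o *PmemConfig) SetDiscardWrites(v bool) { o.DiscardWrites = &v }
func (o *PmemConfig) SetId(v string)          { o.Id = &v }
func (o *PmemConfig) HasSize() bool           { return o.Size != nil }

func main() {
	cfg := NewPmemConfig("/dev/pmem0")
	cfg.SetSize(16 << 30) // 16 GiB backing size
	cfg.SetDiscardWrites(true)
	cfg.SetId("pmem-0")
	fmt.Printf("file=%s hasSize=%v\n", cfg.File, cfg.HasSize())
}
```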
[
{
"data": "(instance-options)= Instance options are configuration options that are directly related to the instance. See {ref}`instances-configure-options` for instructions on how to set the instance options. The key/value configuration is namespaced. The following options are available: {ref}`instance-options-misc` {ref}`instance-options-boot` {ref}`instance-options-limits` {ref}`instance-options-migration` {ref}`instance-options-nvidia` {ref}`instance-options-raw` {ref}`instance-options-security` {ref}`instance-options-snapshots` {ref}`instance-options-volatile` Note that while a type is defined for each option, all values are stored as strings and should be exported over the REST API as strings (which makes it possible to support any extra values without breaking backward compatibility). (instance-options-misc)= In addition to the configuration options listed in the following sections, these instance options are supported: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-miscellaneous start --> :end-before: <!-- config group instance-miscellaneous end --> ``` ```{config:option} environment.* instance-miscellaneous :type: \"string\" :liveupdate: \"yes (exec)\" :shortdesc: \"Environment variables for the instance\" You can export key/value environment variables to the instance. These are then set for . ``` (instance-options-boot)= The following instance options control the boot-related behavior of the instance: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-boot start --> :end-before: <!-- config group instance-boot end --> ``` (instance-options-cloud-init)= The following instance options control the configuration of the instance: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-cloud-init start --> :end-before: <!-- config group instance-cloud-init end --> ``` Support for these options depends on the image that is used and is not guaranteed. If you specify both `cloud-init.user-data` and `cloud-init.vendor-data`, the content of both options is merged. Therefore, make sure that the `cloud-init` configuration you specify in those options does not contain the same keys. (instance-options-limits)= The following instance options specify resource limits for the instance: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-resource-limits start --> :end-before: <!-- config group instance-resource-limits end --> ``` ```{config:option} limits.kernel.* instance-resource-limits :type: \"string\" :liveupdate: \"no\" :condition: \"container\" :shortdesc: \"Kernel resources per instance\" You can set kernel limits on an instance, for example, you can limit the number of open files. See {ref}`instance-options-limits-kernel` for more information. ``` You have different options to limit CPU usage: Set `limits.cpu` to restrict which CPUs the instance can see and use. See {ref}`instance-options-limits-cpu` for how to set this option. Set `limits.cpu.allowance` to restrict the load an instance can put on the available CPUs. This option is available only for containers. See {ref}`instance-options-limits-cpu-container` for how to set this option. It is possible to set both options at the same time to restrict both which CPUs are visible to the instance and the allowed usage of those instances. 
However, if you use `limits.cpu.allowance` with a time limit, you should avoid using `limits.cpu` in addition, because that puts a lot of constraints on the scheduler and might lead to less efficient allocations. The CPU limits are implemented through a mix of the `cpuset` and `cpu` cgroup controllers. (instance-options-limits-cpu)= `limits.cpu` results in CPU pinning through the `cpuset` controller. You can specify either which CPUs or how many CPUs are visible and available to the instance: To specify which CPUs to use, set `limits.cpu` to either a set of CPUs (for example, `1,2,3`) or a CPU range (for example,"
},
{
"data": "To pin to a single CPU, use the range syntax (for example, `1-1`) to differentiate it from a number of CPUs. If you specify a number (for example, `4`) of CPUs, Incus will do dynamic load-balancing of all instances that aren't pinned to specific CPUs, trying to spread the load on the machine. Instances are re-balanced every time an instance starts or stops, as well as whenever a CPU is added to the system. ```{note} Incus supports live-updating the `limits.cpu` option. However, for virtual machines, this only means that the respective CPUs are hotplugged. Depending on the guest operating system, you might need to either restart the instance or complete some manual actions to bring the new CPUs online. ``` Incus virtual machines default to having just one vCPU allocated, which shows up as matching the host CPU vendor and type, but has a single core and no threads. When `limits.cpu` is set to a single integer, Incus allocates multiple vCPUs and exposes them to the guest as full cores. Those vCPUs are not pinned to specific physical cores on the host. The number of vCPUs can be updated while the VM is running. When `limits.cpu` is set to a range or comma-separated list of CPU IDs (as provided by ), the vCPUs are pinned to those physical cores. In this scenario, Incus checks whether the CPU configuration lines up with a realistic hardware topology and if it does, it replicates that topology in the guest. When doing CPU pinning, it is not possible to change the configuration while the VM is running. For example, if the pinning configuration includes eight threads, with each pair of thread coming from the same core and an even number of cores spread across two CPUs, the guest will show two CPUs, each with two cores and each core with two threads. The NUMA layout is similarly replicated and in this scenario, the guest would most likely end up with two NUMA nodes, one for each CPU socket. In such an environment with multiple NUMA nodes, the memory is similarly divided across NUMA nodes and be pinned accordingly on the host and then exposed to the guest. All this allows for very high performance operations in the guest as the guest scheduler can properly reason about sockets, cores and threads as well as consider NUMA topology when sharing memory or moving processes across NUMA nodes. (instance-options-limits-cpu-container)= `limits.cpu.allowance` drives either the CFS scheduler quotas when passed a time constraint, or the generic CPU shares mechanism when passed a percentage value: The time constraint (for example, `20ms/50ms`) is a hard limit. For example, if you want to allow the container to use a maximum of one CPU, set `limits.cpu.allowance` to a value like `100ms/100ms`. The value is relative to one CPU worth of time, so to restrict to two CPUs worth of time, use something like `100ms/50ms` or `200ms/100ms`. When using a percentage value, the limit is a soft limit that is applied only when under load. It is used to calculate the scheduler priority for the instance, relative to any other instance that is using the same CPU or"
},
{
"data": "For example, to limit the CPU usage of the container to one CPU when under load, set `limits.cpu.allowance` to `100%`. `limits.cpu.priority` is another factor that is used to compute the scheduler priority score when a number of instances sharing a set of CPUs have the same percentage of CPU assigned to them. (instance-options-limits-hugepages)= Incus allows to limit the number of huge pages available to a container through the `limits.hugepage.[size]` key. Architectures often expose multiple huge-page sizes. The available huge-page sizes depend on the architecture. Setting limits for huge pages is especially useful when Incus is configured to intercept the `mount` syscall for the `hugetlbfs` file system in unprivileged containers. When Incus intercepts a `hugetlbfs` `mount` syscall, it mounts the `hugetlbfs` file system for a container with correct `uid` and `gid` values as mount options. This makes it possible to use huge pages from unprivileged containers. However, it is recommended to limit the number of huge pages available to the container through `limits.hugepages.[size]` to stop the container from being able to exhaust the huge pages available to the host. Limiting huge pages is done through the `hugetlb` cgroup controller, which means that the host system must expose the `hugetlb` controller in the legacy or unified cgroup hierarchy for these limits to apply. (instance-options-limits-kernel)= Incus exposes a generic namespaced key `limits.kernel.*` that can be used to set resource limits for an instance. It is generic in the sense that Incus does not perform any validation on the resource that is specified following the `limits.kernel.*` prefix. Incus cannot know about all the possible resources that a given kernel supports. Instead, Incus simply passes down the corresponding resource key after the `limits.kernel.*` prefix and its value to the kernel. The kernel does the appropriate validation. This allows users to specify any supported limit on their system. Some common limits are: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group kernel-limits start --> :end-before: <!-- config group kernel-limits end --> ``` A full list of all available limits can be found in the manpages for the `getrlimit(2)`/`setrlimit(2)` system calls. To specify a limit within the `limits.kernel.*` namespace, use the resource name in lowercase without the `RLIMIT_` prefix. For example, `RLIMIT_NOFILE` should be specified as `nofile`. A limit is specified as two colon-separated values that are either numeric or the word `unlimited` (for example, `limits.kernel.nofile=1000:2000`). A single value can be used as a shortcut to set both soft and hard limit to the same value (for example, `limits.kernel.nofile=3000`). A resource with no explicitly configured limit will inherit its limit from the process that starts up the instance. Note that this inheritance is not enforced by Incus but by the kernel. (instance-options-migration)= The following instance options control the behavior if the instance is {ref}`moved from one Incus server to another <move-instances>`: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-migration start --> :end-before: <!-- config group instance-migration end --> ``` (instance-options-nvidia)= The following instance options specify the NVIDIA and CUDA configuration of the instance: % Include content from ```{include}"
},
{
"data": ":start-after: <!-- config group instance-nvidia start --> :end-before: <!-- config group instance-nvidia end --> ``` (instance-options-raw)= The following instance options allow direct interaction with the backend features that Incus itself uses: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-raw start --> :end-before: <!-- config group instance-raw end --> ``` ```{important} Setting these `raw.*` keys might break Incus in non-obvious ways. Therefore, you should avoid setting any of these keys. ``` (instance-options-qemu)= For VM instances, Incus configures QEMU through a configuration file that is passed to QEMU with the `-readconfig` command-line option. This configuration file is generated for each instance before boot. It can be found at `/run/incus/<instance_name>/qemu.conf`. The default configuration works fine for Incus' most common use case: modern UEFI guests with VirtIO devices. In some situations, however, you might need to override the generated configuration. For example: To run an old guest OS that doesn't support UEFI. To specify custom virtual devices when VirtIO is not supported by the guest OS. To add devices that are not supported by Incus before the machines boots. To remove devices that conflict with the guest OS. To override the configuration, set the `raw.qemu.conf` option. It supports a format similar to `qemu.conf`, with some additions. Since it is a multi-line configuration option, you can use it to modify multiple sections or keys. To replace a section or key in the generated configuration file, add a section with a different value. For example, use the following section to override the default `virtio-gpu-pci` GPU driver: ``` raw.qemu.conf: |- [device \"qemu_gpu\"] driver = \"qxl-vga\" ``` To remove a section, specify a section without any keys. For example: ``` raw.qemu.conf: |- [device \"qemu_gpu\"] ``` To remove a key, specify an empty string as the value. For example: ``` raw.qemu.conf: |- [device \"qemu_gpu\"] driver = \"\" ``` To add a new section, specify a section name that is not present in the configuration file. The configuration file format used by QEMU allows multiple sections with the same name. Here's a piece of the configuration generated by Incus: ``` [global] driver = \"ICH9-LPC\" property = \"disable_s3\" value = \"1\" [global] driver = \"ICH9-LPC\" property = \"disable_s4\" value = \"1\" ``` To specify which section to override, specify an index. 
For example: ``` raw.qemu.conf: |- [global][1] value = \"0\" ``` Section indexes start at 0 (which is the default value when not specified), so the above example would generate the following configuration: ``` [global] driver = \"ICH9-LPC\" property = \"disable_s3\" value = \"1\" [global] driver = \"ICH9-LPC\" property = \"disable_s4\" value = \"0\" ``` (instance-options-security)= The following instance options control the {ref}`security` policies of the instance: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-security start --> :end-before: <!-- config group instance-security end --> ``` (instance-options-snapshots)= The following instance options control the creation and expiry of {ref}`instance snapshots <instances-snapshots>`: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-snapshots start --> :end-before: <!-- config group instance-snapshots end --> ``` (instance-options-snapshots-names)= {{snapshotpatterndetail}} (instance-options-volatile)= The following volatile keys are currently used internally by Incus to store internal data specific to an instance: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group instance-volatile start --> :end-before: <!-- config group instance-volatile end --> ``` ```{note} Volatile keys cannot be set by the user. ```"
}
] |
{
"category": "Runtime",
"file_name": "instance_options.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
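The time-constraint form of `limits.cpu.allowance` maps onto CFS quota/period semantics. As a quick way to reason about such values, here is a small self-contained Go sketch (not Incus code) that parses a `<quota>/<period>` string such as `100ms/50ms` and reports how many CPUs' worth of time it allows.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseAllowance parses a hard-limit allowance such as "100ms/50ms" and
// returns the quota and period. This mirrors only the documented syntax;
// it is an illustration, not Incus' parser.
func parseAllowance(s string) (quota, period time.Duration, err error) {
	parts := strings.SplitN(s, "/", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("expected <quota>/<period>, got %q", s)
	}
	if quota, err = time.ParseDuration(parts[0]); err != nil {
		return 0, 0, err
	}
	if period, err = time.ParseDuration(parts[1]); err != nil {
		return 0, 0, err
	}
	return quota, period, nil
}

func main() {
	for _, v := range []string{"100ms/100ms", "100ms/50ms", "20ms/50ms"} {
		q, p, err := parseAllowance(v)
		if err != nil {
			panic(err)
		}
		// 100ms/50ms allows 2.0 CPUs worth of time, 20ms/50ms allows 0.4.
		fmt.Printf("%-12s => %.1f CPU(s)\n", v, float64(q)/float64(p))
	}
}
```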
[
{
"data": "| Author | zhangxiaoyu | | | - | | Date | 2022-09-17 | | Email | [email protected] | The events module mainly records the relevant operation events of the container or image. Users can view events through the isula events client, or view events for a container in a certain period of time through parameters. E.g ```bash $ isula events --help Usage: isula events [OPTIONS] Get real time events from the server -n, --name Name of the container -S, --since Show all events created since this timestamp -U, --until Show all events created until this timestamp $ isula events 2022-05-10T17:20:34.661862100+08:00 container start 3277ec2e57cde72cbd20a1fea4bb4444e29df67f6fc27e60f8532b733b7ef400 (image=busybox, name=3277ec2e57cde72cbd20a1fea4bb4444e29df67f6fc27e60f8532b733b7ef400, pid=9007) ```` ````mermaid sequenceDiagram participant client participant containereventscb participant clients_checker participant gcontextlist participant monitored participant geventsbuffer client->>containereventscb: events request containereventscb->>geventsbuffer: traverse containereventscb->>client: write events to client containereventscb->>gcontextlist: add containereventscb->>containereventscb: sem wait [events]->>monitored: add event loop epoll monitored->>geventsbuffer: add monitored->>gcontextlist: traverse monitored->>client: write event to client end loop clientschecker->>gcontext_list: traverse alt cancelled or now > until clientschecker->>gcontext_list: delete clientschecker->>containerevents_cb: sem post end end containereventscb->>client: return ```` ````c // Send container event to monitor fifo int isuladmonitorsendcontainerevent(const char *name, runtimestatet state, int pid, int exit_code, const char args, const char extra_annations); // Send mirror event to monitor fifo int isuladmonitorsendimageevent(const char *name, imagestatet state); ```` ````c // Process the newly generated event, including stopped event processing, and forward the event to the client in the context list of the subscription list void eventshandler(struct monitordmsg *msg); // Add the client to the events subscription list: context list int addmonitorclient(char name, const types_timestamp_t since, const typestimestampt *until, const streamfuncwrapper *stream); // Write the eligible events in the events lists back to the client int eventssubscribe(const char *name, const typestimestampt *since, const typestimestamp_t *until, const streamfuncwrapper *stream); // copy event struct isuladeventsformat dup_event(const struct isulad_events_format event); // Create collector thread and monitored thread int eventsmoduleinit(char msg); ````"
}
] |
{
"category": "Runtime",
"file_name": "events_design.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The MIT License (MIT) Copyright (c) 2013 Fatih Arslan Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Refer to for introduction and `nydus` has supported Kata Containers with hypervisor `QEMU` and `CLH` currently. You can use Kata Containers with `nydus` as follows, Use ; Deploy `nydus` environment as ; Start `nydus-snapshotter` with `enablenydusoverlayfs` enabled; Use `latest` branch to compile and build `kata-containers.img`; Update `configuration-qemu.toml` or `configuration-clh.toml`to include: ```toml shared_fs = \"virtio-fs-nydus\" virtiofsdaemon = \"<nydusd binary path>\" virtiofsextra_args = [] ``` run `crictl run -r kata nydus-container.yaml nydus-sandbox.yaml`; The `nydus-sandbox.yaml` looks like below: ```yaml metadata: attempt: 1 name: nydus-sandbox uid: nydus-uid namespace: default log_directory: /tmp linux: security_context: namespace_options: network: 2 annotations: \"io.containerd.osfeature\": \"nydus.remoteimage.v1\" ``` The `nydus-container.yaml` looks like below: ```yaml metadata: name: nydus-container image: image: localhost:5000/ubuntu-nydus:latest command: /bin/sleep args: 600 log_path: container.1.log ```"
}
] |
{
"category": "Runtime",
"file_name": "how-to-use-virtio-fs-nydus-with-kata.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- toc --> - - - <!-- /toc --> `NodePortLocal` (NPL) is a feature that runs as part of the Antrea Agent, through which each port of a Service backend Pod can be reached from the external network using a port of the Node on which the Pod is running. NPL enables better integration with external Load Balancers which can take advantage of the feature: instead of relying on NodePort Services implemented by kube-proxy, external Load-Balancers can consume NPL port mappings published by the Antrea Agent (as K8s Pod annotations) and load-balance Service traffic directly to backend Pods. NodePortLocal was introduced in v0.13 as an alpha feature, and was graduated to beta in v1.4, at which time it was enabled by default. Prior to v1.4, a feature gate, `NodePortLocal`, must be enabled on the antrea-agent for the feature to work. Starting from Antrea v1.7, NPL is supported on the Windows antrea-agent. From Antrea v1.14, NPL is GA. In addition to enabling the NodePortLocal feature gate (if needed), you need to ensure that the `nodePortLocal.enable` flag is set to true in the Antrea Agent configuration. The `nodePortLocal.portRange` parameter can also be set to change the range from which Node ports will be allocated. Otherwise, the range of `61000-62000` will be used by default on Linux, and the range `40000-41000` will be used on Windows. When using the NodePortLocal feature, your `antrea-agent` ConfigMap should look like this: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: nodePortLocal: enable: true ``` Pods can be selected for `NodePortLocal` by tagging a Service with annotation: `nodeportlocal.antrea.io/enabled: \"true\"`. Consequently, `NodePortLocal` is enabled for all the Pods which are selected by the Service through a selector, and the ports of these Pods will be reachable through Node ports allocated from the port range. The selected Pods will be annotated with the details about allocated Node port(s) for the Pod. For example, given the following Service and Deployment definitions: ```yaml apiVersion: v1 kind: Service metadata: name: nginx annotations: nodeportlocal.antrea.io/enabled: \"true\" spec: ports: name: web port: 80 protocol: TCP targetPort: 8080 selector: app: nginx apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx replicas: 3 template: metadata: labels: app: nginx spec: containers: name: nginx image: nginx ``` If the NodePortLocal feature gate is enabled, then all the Pods in the Deployment will be annotated with the `nodeportlocal.antrea.io` annotation. The value of this annotation is a serialized JSON array. In our example, a given Pod in the `nginx` Deployment may look like this: ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-6799fc88d8-9rx8z labels: app: nginx annotations: nodeportlocal.antrea.io: '[{\"podPort\":8080,\"nodeIP\":\"10.10.10.10\",\"nodePort\":61002,\"protocol\":\"tcp\"}]' ``` This annotation indicates that port 8080 of the Pod can be reached through port 61002 of the Node with IP Address 10.10.10.10 for TCP traffic. The `nodeportlocal.antrea.io` annotation is generated and managed by"
},
{
"data": "It is not meant to be created or modified by users directly. A user-provided annotation is likely to be overwritten by Antrea, or may lead to unexpected behavior. NodePortLocal can only be used with Services of type `ClusterIP` or `LoadBalancer`. The `nodeportlocal.antrea.io` annotation has no effect for Services of type `NodePort` or `ExternalName`. The annotation also has no effect for Services with an empty or missing Selector. Starting from Antrea v2.0, the `protocols` field is removed. Prior to the Antrea v1.7 minor release, the `nodeportlocal.antrea.io` annotation could contain multiple members in `protocols`. An example may look like this: ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-6799fc88d8-9rx8z labels: app: nginx annotations: nodeportlocal.antrea.io: '[{\"podPort\":8080,\"nodeIP\":\"10.10.10.10\",\"nodePort\":61002}, \"protocols\":[\"tcp\",\"udp\"]]' ``` This annotation indicates that port 8080 of the Pod can be reached through port 61002 of the Node with IP Address 10.10.10.10 for both TCP and UDP traffic. Prior to v1.7, the implementation would always allocate the same nodePort value for all the protocols exposed for a given podPort. Starting with v1.7, there will be multiple annotations for the different protocols for a given podPort, and the allocated nodePort may be different for each one. Prior to the Antrea v1.4 minor release, the `nodePortLocal` option group in the Antrea Agent configuration did not exist. To enable the NodePortLocal feature, one simply needed to enable the feature gate, and the port range could be configured using the (now removed) `nplPortRange` parameter. Prior to the Antrea v1.2 minor release, the NodePortLocal feature suffered from a known . In order to use the feature, the correct list of ports exposed by each container had to be provided in the Pod specification (`.spec.containers[*].Ports`). The NodePortLocal implementation would then use this information to decide which ports to map for each Pod. In the above example, the Deployment definition would need to be changed to: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx replicas: 3 template: metadata: labels: app: nginx spec: containers: name: nginx image: nginx ports: containerPort: 80 ``` This was error-prone because providing this list of ports is typically optional in K8s and omitting it does not prevent ports from being exposed, which means that many user may omit this information and expect NPL to work. Starting with Antrea v1.2, we instead rely on the `service.spec.ports[*].targetPort` information, for each NPL-enabled Service, to determine which ports need to be mapped. This feature is currently only supported for Nodes running Linux or Windows with IPv4 addresses. Only TCP & UDP Service ports are supported (not SCTP). When using AVI and the AVI Kubernetes Operator (AKO), the AKO `serviceType` configuration parameter can be set to `NodePortLocal`. After that, annotating Services manually with `nodeportlocal.antrea.io` is no longer required. AKO will automatically annotate Services of type `LoadBalancer`, along with backend ClusterIP Services used by Ingress resources (for which AVI is the Ingress class). For more information refer to the [AKO documentation](https://avinetworks.com/docs/ako/1.5/handling-objects/)."
}
] |
{
"category": "Runtime",
"file_name": "node-port-local.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
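For quick experimentation with the NodePortLocal flow described above, the following shell sketch annotates an existing Service and reads back the resulting Pod annotation. It assumes a cluster where Antrea with NPL is already configured, and a Service and Deployment named `nginx` in the `default` Namespace as in the example; adjust names as needed.

```bash
# Enable NodePortLocal for all Pods selected by the "nginx" Service (assumed to exist).
kubectl annotate service nginx nodeportlocal.antrea.io/enabled="true"

# Inspect the allocated Node port mapping published on one of the backend Pods.
# Dots in the annotation key are escaped for JSONPath.
kubectl get pod -l app=nginx \
  -o jsonpath='{.items[0].metadata.annotations.nodeportlocal\.antrea\.io}'
```

The returned JSON array lists one entry per exposed port, with the Node IP and Node port that an external load balancer can target directly.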
[
{
"data": "The containerd client was built to be easily extended by consumers. The goal is that the execution flow of the calls remain the same across implementations while `Opts` are written to extend functionality. To accomplish this we depend on the `Opts` pattern in Go. For many functions and methods within the client package you will generally see variadic args as the last parameter. If we look at the `NewContainer` method on the client we can see that it has a required argument of `id` and then additional `NewContainerOpts`. There are a few built in options that allow the container to be created with an existing spec, `WithSpec`, and snapshot opts for creating or using an existing snapshot. ```go func (c *Client) NewContainer(ctx context.Context, id string, opts ...NewContainerOpts) (Container, error) { } ``` As a consumer of the containerd client you need to be able to add your domain specific functionality. There are a few ways of doing this, changing the client code, submitting a PR to the containerd client, or forking the client. These ways of extending the client should only be considered after every other method has been tried. The proper and supported way of extending the client is to build a package of `Opts` that define your application specific logic. As an example, if Docker is integrating containerd support and needs to add concepts such as Volumes, they would create a `docker` package with options. ```go // example code container, err := client.NewContainer(ctx, id) // add volumes with their config and bind mounts container.Labels[\"volumes\"] = VolumeConfig{} container.Spec.Binds = append({\"/var/lib/docker/volumes...\"}) ``` ```go // example code import \"github.com/docker/docker\" import \"github.com/docker/libnetwork\" container, err := client.NewContainer(ctx, id, docker.WithVolume(\"volume-name\"), libnetwork.WithOverlayNetwork(\"cluster-network\"), ) ``` There are a few advantages using this model. Your application code is not scattered in the execution flow of the containerd client. Your code can be unit tested without mocking the containerd client. Contributors can better follow your containerd implementation and understand when and where your application logic is added to standard containerd client calls. If we want to make a `SpecOpt` to setup a container to monitor the host system with `htop` it can be easily done without ever touching a line of code in the containerd repository. ```go package monitor import ( \"github.com/containerd/containerd/v2/pkg/oci\" specs \"github.com/opencontainers/runtime-spec/specs-go\" ) // WithHtop configures a container to monitor the host system via `htop` func WithHtop(s *specs.Spec) error { // make sure we are in the host pid namespace if err := oci.WithHostNamespace(specs.PIDNamespace)(s); err != nil { return err } // make sure we set htop as our arg s.Process.Args = []string{\"htop\"} // make sure we have a tty set for htop if err := oci.WithTTY(s); err != nil { return err } return nil } ``` Adding your new option to spec generation is as easy as importing your new package and adding the option when creating a spec. ```go import \"github.com/crosbymichael/monitor\" container, err := client.NewContainer(ctx, id, containerd.WithNewSpec(oci.WithImageConfig(image), monitor.WithHtop), ) ``` You can see the full code and run the monitor container ."
}
] |
{
"category": "Runtime",
"file_name": "client-opts.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The table below is not comprehensive. Antrea should work with most K8s installers and distributions. The table refers to specific version combinations which are known to work and have been tested, but support is not limited to that list. Each Antrea version supports , and installers / distributions based on any one of these K8s versions should work with that Antrea version. | Antrea Version | Installer / Distribution | Cloud Infra | Node Info | Node Size | Conformance Results | Comments | |-|-|-|-|-|-|-| | v1.0.0 | Kubeadm v1.21.0 | AWS EC2 | Ubuntu 20.04.2 LTS (5.4.0-1045-aws) amd64, docker://20.10.6 | t3.medium | | | | - | - | - | Windows Server 2019 Datacenter (10.0.17763.1817), docker://19.3.14 | t3.medium | | | | - | - | - | Ubuntu 20.04.2 LTS (5.4.0-1045-aws) arm64, docker://20.10.6 | t3.medium | | | | - | Cluster API Provider vSphere (CAPV), K8s 1.19.1 | VMC on AWS, vSphere 7.0.1 | Ubuntu 18.04, containerd | 2 vCPUs, 8GB RAM | | Antrea CI | | - | K3s v1.19.8+k3s1 | [OSUOSL] | Ubuntu 20.04.1 LTS (5.4.0-66-generic) arm64, containerd://1.4.3-k3s3 | 2 vCPUs, 4GB RAM | | Antrea CI, cluster installed with [k3sup] 0.9.13 | | - | Kops v1.20, K8s v1.20.5 | AWS EC2 | Ubuntu 20.04.2 LTS (5.4.0-1041-aws) amd64, containerd://1.4.4 | t3.medium | | | | - | EKS, K8s v1.17.12 | AWS | AmazonLinux2, docker | t3.medium | | Antrea CI | | - | GKE, K8s v1.19.8-gke.1600 | GCP | Ubuntu 18.04, docker | e2-standard-4 | | Antrea CI | | - | AKS, K8s v1.18.14 | Azure | Ubuntu 18.04, moby | StandardDS2v2 | | Antrea CI | | - | AKS, K8s v1.19.9 | Azure | Ubuntu 18.04, containerd | StandardDS2v2 | | Antrea CI | | - | Kind v0.9.0, K8s v1.19.1 | N/A | Ubuntu 20.10, containerd://1.4.0 | N/A | | | | - | Minikube v1.25.0 | N/A | Ubuntu 20.04.2 LTS (5.10.76-linuxkit) arm64, docker://20.10.12 | 8GB RAM | | | | v1.10.0 | Rancher v2.7.0, K8s v1.24.10 | vSphere | Ubuntu 22.04.1 LTS (5.15.0-57-generic) amd64, docker://20.10.21 | 4 vCPUs, 4GB RAM | | | | v1.11.0 | Kubeadm v1.20.2 | N/A | openEuler 22.03 LTS, docker://18.09.0 | 10GB RAM | | | | v1.11.0 | Kubeadm v1.25.5 | N/A | openEuler 22.03 LTS, containerd://1.6.18 | 10GB RAM | | | | v1.15.0 | Talos v1.5.5 | Docker provisioner | Talos | 2 vCPUs, 2.1 GB RAM | Pass | Requires Antrea v1.15 or above | | - | - | QEMU provisioner | Talos | 2 vCPUs, 2.1 GB RAM | Pass | Requires Antrea"
},
{
"data": "or above | When running `kubeadm init` to create a cluster, you need to provide a range of IP addresses for the Pod network using `--pod-network-cidr`. By default, a /24 subnet will be allocated out of the CIDR to every Node which joins the cluster, so make sure you use a large enough CIDR to accommodate the number of Nodes you want. Once the cluster has been created, this CIDR cannot be changed. Follow these steps to deploy Antrea (as a ) on cluster: Edit the cluster YAML and set the `network-plugin` option to none. Add an addon for Antrea, in the following manner: ```yaml addons_include: <link of the antrea.yml file> ``` When creating a cluster, run K3s with the following options: `--flannel-backend=none`, which lets you run the [CNI of your choice](https://rancher.com/docs/k3s/latest/en/installation/network-options/) `--disable-network-policy`, to disable the K3s NetworkPolicy controller When creating a cluster, run Kops with `--networking cni`, to enable CNI for the cluster without deploying a specific network plugin. To deploy Antrea on Kind, please follow these . To deploy Antrea on minikube, please follow these . is a Linux distribution designed for running Kubernetes. Antrea can be used as the CNI on Talos clusters (tested with both the Docker provisioner and the QEMU provisioner). However, because of some built-in security settings in Talos, the default configuration values cannot be used when installing Antrea. You will need to install Antrea using Helm, with a few custom values. Antrea v1.15 or above is required. Follow these steps to deploy Antrea on a Talos cluster: Make sure that your Talos cluster is created without a CNI. To ensure this, you can use a config patch. For example, to create a Talos cluster without a CNI, using the Docker provisioner: ```bash cat << EOF > ./patch.yaml cluster: network: cni: name: none EOF talosctl cluster create [email protected] --wait=false --workers 2 ``` Notice how we use `--wait=false`: the cluster will never be \"ready\" until a CNI is installed. Note that while we use the Docker provisioner here, you can use the Talos platform of your choice. Ensure that you retrieve the Kubeconfig for your new cluster once it is available. You may need to use the `talosctl kubeconfig` command for this. Install Antrea using Helm, with the appropriate values: ```bash cat << EOF > ./values.yaml agent: dontLoadKernelModules: true installCNI: securityContext: capabilities: [] EOF helm install -n kube-system antrea -f value.yml antrea/antrea ``` The above configuration will drop all capabilities from the `installCNI` container, and instruct the Antrea Agent not to try loading any Kernel module explicitly. You can to: Add a new K8s installer or distribution to the table above. Add a new combination of versions that you have tested successfully to the table above. Please make sure that you run conformance tests with [sonobuoy] and consider uploading the test results to a publicly accessible location. You can run sonobuoy with: ```bash sonobuoy run --mode certified-conformance ```"
}
] |
{
"category": "Runtime",
"file_name": "kubernetes-installers.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
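To tie the Talos steps above together, here is a hedged sketch of the Helm install that keeps the values file name consistent between creation and installation. The nesting of the values is a best-effort reconstruction of the flattened snippet above, and the chart repository URL is an assumption based on the official Antrea Helm charts; verify both for your environment.

```bash
# Assumed Antrea chart repository; confirm before use.
helm repo add antrea https://charts.antrea.io
helm repo update

# Values required for Talos, reconstructed from the snippet above.
cat << EOF > ./values.yaml
agent:
  dontLoadKernelModules: true
  installCNI:
    securityContext:
      capabilities: []
EOF

# Install using the same file that was just written.
helm install antrea antrea/antrea -n kube-system -f ./values.yaml
```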
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Introspect or mangle pcap recorder ``` -h, --help help for recorder ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Delete individual pcap recorder - Display individual pcap recorder - List current pcap recorders - Update individual pcap recorder"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_recorder.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Object Store Multisite Multisite is a feature of Ceph that allows object stores to replicate their data over multiple Ceph clusters. Multisite also allows object stores to be independent and isolated from other object stores in a cluster. When a ceph-object-store is created without the `zone` section; a realm, zone group, and zone is created with the same name as the ceph-object-store. Since it is the only ceph-object-store in the realm, the data in the ceph-object-store remain independent and isolated from others on the same cluster. When a ceph-object-store is created with the `zone` section, the ceph-object-store will join a custom created zone, zone group, and realm each with a different names than its own. This allows the ceph-object-store to replicate its data over multiple Ceph clusters. To review core multisite concepts please read the . This guide assumes a Rook cluster as explained in the . If an admin wants to set up multisite on a Rook Ceph cluster, the following resources must be created: A A A A ceph object store with the `zone` section object-multisite.yaml in the directory can be used to create the multisite CRDs. ```console kubectl create -f object-multisite.yaml ``` The first zone group created in a realm is the master zone group. The first zone created in a zone group is the master zone. When a non-master zone or non-master zone group is created, the zone group or zone is not in the Ceph Radosgw Multisite until an object-store is created in that zone (and zone group). The zone will create the pools for the object-store(s) that are in the zone to use. When one of the multisite CRs (realm, zone group, zone) is deleted the underlying ceph realm/zone group/zone is not deleted, neither are the pools created by the zone. See the \"Multisite Cleanup\" section for more information. For more information on the multisite CRDs, see the related CRDs: If an admin wants to sync data from another cluster, the admin needs to pull a realm on a Rook Ceph cluster from another Rook Ceph (or Ceph) cluster. To begin doing this, the admin needs 2 pieces of information: An endpoint from the realm being pulled from The access key and the system key of the system user from the realm being pulled from. To pull a Ceph realm from a remote Ceph cluster, an `endpoint` must be added to the CephObjectRealm's `pull` section in the `spec`. This endpoint must be from the master zone in the master zone group of that realm. If an admin does not know of an endpoint that fits this criteria, the admin can find such an endpoint on the remote Ceph cluster (via the tool box if it is a Rook Ceph Cluster) by running: ```console $ radosgw-admin zonegroup get --rgw-realm=$REALMNAME --rgw-zonegroup=$MASTERZONEGROUP_NAME {"
},
{
"data": "\"endpoints\": [http://10.17.159.77:80], ... } ``` A list of endpoints in the master zone group in the master zone is in the `endpoints` section of the JSON output of the `zonegoup get` command. This endpoint must also be resolvable from the new Rook Ceph cluster. To test this run the `curl` command on the endpoint: ```console $ curl -L http://10.17.159.77:80 <?xml version=\"1.0\" encoding=\"UTF-8\"?><ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult> ``` Finally add the endpoint to the `pull` section of the CephObjectRealm's spec. The CephObjectRealm should have the same name as the CephObjectRealm/Ceph realm it is pulling from. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectRealm metadata: name: realm-a namespace: rook-ceph spec: pull: endpoint: http://10.17.159.77:80 ``` The access key and secret key of the system user are keys that allow other Ceph clusters to pull the realm of the system user. When an admin creates a ceph-object-realm a system user automatically gets created for the realm with an access key and a secret key. This system user has the name \"$REALM_NAME-system-user\". For the example if realm name is `realm-a`, then uid for the system user is \"realm-a-system-user\". These keys for the user are exported as a kubernetes called \"$REALM_NAME-keys\" (ex: realm-a-keys). This system user used by RGW internally for the data replication. To get these keys from the cluster the realm was originally created on, run: ```console kubectl -n $ORIGINALCLUSTERNAMESPACE get secrets realm-a-keys -o yaml > realm-a-keys.yaml ``` Edit the `realm-a-keys.yaml` file, and change the `namespace` with the namespace that the new Rook Ceph cluster exists in. Then create a kubernetes secret on the pulling Rook Ceph cluster with the same secrets yaml file. ```console kubectl create -f realm-a-keys.yaml ``` The access key and the secret key of the system user can be found in the output of running the following command on a non-rook ceph cluster: ```console radosgw-admin user info --uid=\"realm-a-system-user\" ``` ```json { ... \"keys\": [ { \"user\": \"realm-a-system-user\" \"access_key\": \"aSw4blZIKV9nKEU5VC0=\" \"secret_key\": \"JSlDXFt5TlgjSV9QOE9XUndrLiI5JEo9YDBsJg==\", } ], ... } ``` Then base64 encode the each of the keys and create a `.yaml` file for the Kubernetes secret from the following template. Only the `access-key`, `secret-key`, and `namespace` sections need to be replaced. ```yaml apiVersion: v1 data: access-key: YVN3NGJsWklLVjluS0VVNVZDMD0= secret-key: SlNsRFhGdDVUbGdqU1Y5UU9FOVhVbmRyTGlJNUpFbzlZREJzSmc9PQ== kind: Secret metadata: name: realm-a-keys namespace: $NEWROOKCLUSTER_NAMESPACE type: kubernetes.io/rook ``` Finally, create a kubernetes secret on the pulling Rook Ceph cluster with the new secrets yaml file. ```console kubectl create -f realm-a-keys.yaml ``` Once the admin knows the endpoint and the secret for the keys has been created, the admin should create: A matching to the realm on the other Ceph cluster, with an endpoint as described above. A matching the master zone group name or the master CephObjectZoneGroup from the cluster the realm was pulled from. A referring to the CephObjectZoneGroup created above. A CephObjectStore referring to the new CephObjectZone resource."
},
{
"data": "(with changes) in the directory can be used to create the multisite CRDs. ```console kubectl create -f object-multisite-pull-realm.yaml ``` Scaling the number of gateways that run the synchronization thread to 2 or more can increase the latency of the replication of each S3 object. The recommended way to scale a multisite configuration is to dissociate the gateway dedicated to the synchronization from gateways that serve clients. The two types of gateways can be deployed by creating two CephObjectStores associated with the same CephObjectZone. The objectstore that deploys the gateway dedicated to the synchronization must have `spec.gateway.instances` set to `1`, while the objectstore that deploys the client gateways have multiple replicas and should disable the synchronization thread on the gateways by setting `spec.gateway.disableMultisiteSyncTraffic` to `true`. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStore metadata: name: replication namespace: rook-ceph spec: gateway: port: 80 instances: 1 disableMultisiteSyncTraffic: false zone: name: zone-a apiVersion: ceph.rook.io/v1 kind: CephObjectStore metadata: name: clients namespace: rook-ceph spec: gateway: port: 80 instances: 5 disableMultisiteSyncTraffic: true zone: name: zone-a ``` Multisite configuration must be cleaned up by hand. Deleting a realm/zone group/zone CR will not delete the underlying Ceph realm, zone group, zone, or the pools associated with a zone. Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-realm resource is deleted or modified, the realm is not deleted from the Ceph cluster. Realm deletion must be done via the toolbox. The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. The following command, run via the toolbox, deletes the realm. ```console radosgw-admin realm delete --rgw-realm=realm-a ``` Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-zone group resource is deleted or modified, the zone group is not deleted from the Ceph cluster. Zone Group deletion must be done through the toolbox. The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. The following command, run via the toolbox, deletes the zone group. ```console radosgw-admin zonegroup delete --rgw-realm=realm-a --rgw-zonegroup=zone-group-a radosgw-admin period update --commit --rgw-realm=realm-a --rgw-zonegroup=zone-group-a ``` Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-zone resource is deleted or modified, the zone is not deleted from the Ceph cluster. Zone deletion must be done through the toolbox. The Rook toolbox can change the master zone in a zone group. ```console radosgw-admin zone modify --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a --master radosgw-admin zonegroup modify --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --master radosgw-admin period update --commit --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a ``` The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. There are two scenarios possible when deleting a zone. The following commands, run via the toolbox, deletes the zone if there is only one zone in the zone"
},
{
"data": "```console radosgw-admin zone delete --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a radosgw-admin period update --commit --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a ``` In the other scenario, there are more than one zones in a zone group. Care must be taken when changing which zone is the master zone. Please read the following before running the below commands: The following commands, run via toolboxes, remove the zone from the zone group first, then delete the zone. ```console radosgw-admin zonegroup rm --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a radosgw-admin period update --commit --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a radosgw-admin zone delete --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a radosgw-admin period update --commit --rgw-realm=realm-a --rgw-zonegroup=zone-group-a --rgw-zone=zone-a ``` When a zone is deleted, the pools for that zone are not deleted. The Rook toolbox can delete pools. Deleting pools should be done with caution. The following on pools should be read before deleting any pools. When a zone is created the following pools are created for each zone: ```console $ZONE_NAME.rgw.control $ZONE_NAME.rgw.meta $ZONE_NAME.rgw.log $ZONE_NAME.rgw.buckets.index $ZONE_NAME.rgw.buckets.non-ec $ZONE_NAME.rgw.buckets.data ``` Here is an example command to delete the .rgw.buckets.data pool for zone-a. ```console ceph osd pool rm zone-a.rgw.buckets.data zone-a.rgw.buckets.data --yes-i-really-really-mean-it ``` In this command the pool name must be mentioned twice for the pool to be removed. When an object-store (created in a zone) is deleted, the endpoint for that object store is removed from that zone, via ```console kubectl delete -f object-store.yaml ``` Removing object store(s) from the master zone of the master zone group should be done with caution. When all of these object-stores are deleted the period cannot be updated and that realm cannot be pulled. When an object store is configured by Rook, it internally creates a zone, zone group, and realm with the same name as the object store. To enable multisite, you will need to create the corresponding zone, zone group, and realm CRs with the same name as the object store. For example, to create multisite CRs for an object store named `my-store`: ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectRealm metadata: name: my-store namespace: rook-ceph # namespace:cluster apiVersion: ceph.rook.io/v1 kind: CephObjectZoneGroup metadata: name: my-store namespace: rook-ceph # namespace:cluster spec: realm: my-store apiVersion: ceph.rook.io/v1 kind: CephObjectZone metadata: name: my-store namespace: rook-ceph # namespace:cluster spec: zoneGroup: my-store metadataPool: replicated: size: 3 dataPool: replicated: size: 3 preservePoolsOnDelete: false ``` Now modify the existing `CephObjectStore` CR to exclude pool settings and add a reference to the zone. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStore metadata: name: my-store namespace: rook-ceph # namespace:cluster spec: gateway: port: 80 instances: 1 zone: name: my-store ``` If names different from the object store need to be set for the realm, zone, or zone group, first rename them in the backend via toolbox pod, then following the procedure above. 
```console radosgw-admin realm rename --rgw-realm=my-store --realm-new-name=<new-realm-name> radosgw-admin zonegroup rename --rgw-zonegroup=my-store --zonegroup-new-name=<new-zonegroup-name> --rgw-realm=<new-realm-name> radosgw-admin zone rename --rgw-zone=my-store --zone-new-name=<new-zone-name> --rgw-zonegroup=<new-zonegroup-name> --rgw-realm=<new-realm-name> radosgw-admin period update --commit ``` !!! important Renaming in the toolbox must be performed before creating the multisite CRs"
}
] |
{
"category": "Runtime",
"file_name": "ceph-object-multisite.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
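To make the realm-pull preparation above concrete, the sketch below copies the system user's keys Secret from the cluster that owns the realm to the cluster that will pull it, and checks that the master zone group endpoint is reachable. The kubectl contexts (`cluster-1`, `cluster-2`), the target namespace, and the endpoint are placeholders for illustration.

```bash
# Export the realm keys Secret from the original cluster and rewrite its namespace.
kubectl --context cluster-1 -n rook-ceph get secret realm-a-keys -o yaml \
  | sed 's/namespace: rook-ceph/namespace: rook-ceph-secondary/' \
  > realm-a-keys.yaml

# Recreate the Secret on the cluster that will pull the realm.
kubectl --context cluster-2 create -f realm-a-keys.yaml

# Verify the master zone group endpoint is reachable from the pulling cluster.
curl -L http://10.17.159.77:80
```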
[
{
"data": "title: VMware vSphere link: https://github.com/vmware-tanzu/velero-plugin-for-vsphere objectStorage: false volumesnapshotter: true localStorage: true supportedByVeleroTeam: true This repository contains a volume snapshotter plugin to support running Velero on VMware vSphere."
}
] |
{
"category": "Runtime",
"file_name": "01-vsphere.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(storage-cephobject)= % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph intro --> :end-before: <!-- Include end Ceph intro --> ``` is an object storage interface built on top of to provide applications with a RESTful gateway to . It provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API. % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph terminology --> :end-before: <!-- Include end Ceph terminology --> ``` A Ceph Object Gateway consists of several OSD pools and one or more Ceph Object Gateway daemon (`radosgw`) processes that provide object gateway functionality. ```{note} The `cephobject` driver can only be used for buckets. For storage volumes, use the {ref}`Ceph <storage-ceph>` or {ref}`CephFS <storage-cephfs>` drivers. ``` % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph driver cluster --> :end-before: <!-- Include end Ceph driver cluster --> ``` You must set up a `radosgw` environment beforehand and ensure that its HTTP/HTTPS endpoint URL is reachable from the Incus server or servers. See for information on how to set up a Ceph cluster and on how to set up a `radosgw` environment. The `radosgw` URL can be specified at pool creation time using the option. Incus uses the `radosgw-admin` command to manage buckets. So this command must be available and operational on the Incus servers. % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph driver remote --> :end-before: <!-- Include end Ceph driver remote --> ``` % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph driver control --> :end-before: <!-- Include end Ceph driver control --> ``` The following configuration options are available for storage pools that use the `cephobject` driver and for storage buckets in these pools. (storage-cephobject-pool-config)= Key | Type | Default | Description :-- | : | : | :- `cephobject.bucket.name_prefix` | string | - | Prefix to add to bucket names in Ceph `cephobject.cluster_name` | string | `ceph` | The Ceph cluster to use `cephobject.radosgw.endpoint` | string | - | URL of the `radosgw` gateway process `cephobject.radosgw.endpointcertfile` | string | - | Path to the file containing the TLS client certificate to use for endpoint communication `cephobject.user.name` | string | `admin` | The Ceph user to use `volatile.pool.pristine` | string | `true` | Whether the `radosgw` `incus-admin` user existed at creation time Key | Type | Default | Description :-- | : | : | :- `size` | string | - | Quota of the storage bucket"
}
] |
{
"category": "Runtime",
"file_name": "storage_cephobject.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
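Putting the configuration keys above together, a minimal command-line sketch for creating a `cephobject` pool and a bucket might look like the following. The pool name, bucket name, and `radosgw` endpoint URL are placeholders, and the commands assume a working Ceph Object Gateway plus the `radosgw-admin` tool on the Incus server.

```bash
# Create a storage pool backed by the Ceph Object Gateway (placeholder endpoint).
incus storage create s3 cephobject \
    cephobject.radosgw.endpoint=http://ceph-rgw.example.com:7480

# Create a bucket in that pool and cap its size with the bucket-level "size" key.
incus storage bucket create s3 demo-bucket size=10GiB
```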
[
{
"data": "Antrea Multi-cluster implements , which allows users to create multi-cluster Services that can be accessed cross clusters in a ClusterSet. Antrea Multi-cluster also supports Antrea ClusterNetworkPolicy replication. Multi-cluster admins can define ClusterNetworkPolicies to be replicated across the entire ClusterSet, and enforced in all member clusters. An Antrea Multi-cluster ClusterSet includes a leader cluster and multiple member clusters. Antrea Multi-cluster Controller needs to be deployed in the leader and all member clusters. A cluster can serve as the leader, and meanwhile also be a member cluster of the ClusterSet. The diagram below depicts a basic Antrea Multi-cluster topology with one leader cluster and two member clusters. <img src=\"assets/basic-topology.svg\" width=\"650\" alt=\"Antrea Multi-cluster Topology\"> ClusterSet is a placeholder name for a group of clusters with a high degree of mutual trust and shared ownership that share Services amongst themselves. Within a ClusterSet, Namespace sameness applies, which means all Namespaces with a given name are considered to be the same Namespace. The ClusterSet Custom Resource Definition(CRD) defines a ClusterSet including the leader and member clusters information. The MemberClusterAnnounce CRD declares a member cluster configuration to the leader cluster. The Common Area is an abstraction in the Antrea Multi-cluster implementation provides a storage interface for resource export/import that can be read/written by all member and leader clusters in the ClusterSet. The Common Area is implemented with a Namespace in the leader cluster for a given ClusterSet. Antrea Multi-cluster Controller implements ClusterSet management and resource export/import in the ClusterSet. In either a leader or a member cluster, Antrea Multi-cluster Controller is deployed with a Deployment of a single replica, but it takes different responsibilities in leader and member clusters. In a member cluster, Multi-cluster Controller watches and validates the ClusterSet, and creates a MemberClusterAnnounce CR in the Common Area of the leader cluster to join the ClusterSet. In the leader cluster, Multi-cluster controller watches, validates and initializes the ClusterSet. It also validates the MemberClusterAnnounce CR created by a member cluster and updates the member cluster's connection status to `ClusterSet.Status`. In a member cluster, Multi-cluster controller watches exported resources (e.g. ServiceExports, Services, Multi-cluster Gateways), encapsulates an exported resource into a ResourceExport and creates the ResourceExport CR in the Common Area of the leader cluster. In the leader cluster, Multi-cluster Controller watches ResourceExports created by member clusters (in the case of Service and ClusterInfo export), or by the ClusterSet admin (in the case of Multi-cluster NetworkPolicy), converts ResourceExports to ResourceImports, and creates the ResourceImport CRs in the Common Area for member clusters to import them. Multi-cluster Controller also merges ResourceExports from different member clusters to a single ResourceImport, when these exported resources share the same kind, name, and original Namespace (matching Namespace sameness). Multi-cluster Controller in a member cluster also watches ResourceImports in the Common Area of the leader cluster, decapsulates the resources from them, and creates the resources (e.g. Services, Endpoints, Antrea ClusterNetworkPolicies, ClusterInfoImports) in the member"
},
{
"data": "For more information about multi-cluster Service export/import, please also check the section. <img src=\"assets/resource-export-import-pipeline.svg\" width=\"1500\" alt=\"Antrea Multi-cluster Service Export/Import Pipeline\"> Antrea Multi-cluster Controller implements Service export/import among member clusters. The above diagram depicts Antrea Multi-cluster resource export/import pipeline, using Service export/import as an example. Given two Services with the same name and Namespace in two member clusters - `foo.ns.cluster-a.local` and `foo.ns.cluster-b.local`, a multi-cluster Service can be created by the following resource export/import workflow. User creates a ServiceExport `foo` in Namespace `ns` in each of the two clusters. Multi-cluster Controllers in `cluster-a` and `cluster-b` see ServiceExport `foo`, and both create two ResourceExports for the Service and Endpoints respectively in the Common Area of the leader cluster. Multi-cluster Controller in the leader cluster sees the ResourcesExports in the Common Area, including the two for Service `foo`: `cluster-a-ns-foo-service`, `cluster-b-ns-foo-service`; and the two for the Endpoints: `cluster-a-ns-foo-endpoints`, `cluster-b-ns-foo-endpoints`. It then creates a ResourceImport `ns-foo-service` for the multi-cluster Service; and a ResourceImport `ns-foo-endpoints` for the Endpoints, which includes the exported endpoints of both `cluster-a-ns-foo-endpoints` and `cluster-b-ns-foo-endpoints`. Multi-cluster Controller in each member cluster watches the ResourceImports from the Common Area, decapsulates them and gets Service `ns/antrea-mc-foo` and Endpoints `ns/antrea-mc-foo`, and creates the Service and Endpoints, as well as a ServiceImport `foo` in the local Namespace `ns`. Since Antrea v1.7.0, the Service's ClusterIP is exported as the multi-cluster Service's Endpoints. Multi-cluster Gateways must be configured to support multi-cluster Service access across member clusters, and Service CIDRs cannot overlap between clusters. Please refer to for more information. Before Antrea v1.7.0, Pod IPs are exported as the multi-cluster Service's Endpoints. Pod IPs must be directly reachable across clusters for multi-cluster Service access, and Pod CIDRs cannot overlap between clusters. Antrea Multi-cluster only supports creating multi-cluster Services for Services of type ClusterIP. Antrea started to support Multi-cluster Gateway since v1.7.0. User can choose one K8s Node as the Multi-cluster Gateway in a member cluster. The Gateway Node is responsible for routing all cross-clusters traffic from the local cluster to other member clusters through tunnels. The diagram below depicts Antrea Multi-cluster connectivity with Multi-cluster Gateways. <img src=\"assets/mc-gateway.svg\" width=\"800\" alt=\"Antrea Multi-cluster Gateway\"> Antrea Agent is responsible for setting up tunnels between Gateways of member clusters. The tunnels between Gateways use Antrea Agent's configured tunnel type. All member clusters in a ClusterSet need to deploy Antrea with the same tunnel type. The Multi-cluster Gateway implementation introduces two new CRDs `Gateway` and `ClusterInfoImport`. `Gateway` includes the local Multi-cluster Gateway information including: `internalIP` for tunnels to local Nodes, and `gatewayIP` for tunnels to remote cluster Gateways. `ClusterInfoImport` includes Gateway and network information of member clusters, including Gateway IPs and Service CIDRs. 
The existing resource export/import pipeline is leveraged to exchange the cluster network information among member clusters, generating ClusterInfoImports in each member cluster. Let's use the ClusterSet in the above diagram as an"
},
{
"data": "As shown in the diagram: Cluster A has a client Pod named `pod-a` running on a regular Node, and a multi-cluster Service named `antrea-mc-nginx` with ClusterIP `10.112.10.11` in the `default` Namespace. Cluster B exported a Service named `nginx` with ClusterIP `10.96.2.22` in the `default` Namespace. The Service has one Endpoint `172.170.11.22` which is `pod-b`'s IP. Cluster C exported a Service named `nginx` with ClusterIP `10.11.12.33` also in the `default` Namespace. The Service has one Endpoint `172.10.11.33` which is `pod-c`'s IP. The multi-cluster Service `antrea-mc-nginx` in cluster A will have two Endpoints: `nginx` Service's ClusterIP `10.96.2.22` from cluster B. `nginx` Service's ClusterIP `10.11.12.33` from cluster C. When the client Pod `pod-a` on cluster A tries to access the multi-cluster Service `antrea-mc-nginx`, the request packet will first go through the Service load balancing pipeline on the source Node `node-a2`, with one endpoint of the multi-cluster Service being chosen as the destination. Let's say endpoint `10.11.12.33` from cluster C is chosen, then the request packet will be DNAT'd with IP `10.11.12.33` and tunnelled to the local Gateway Node `node-a1`. `node-a1` knows from the destination IP (`10.11.12.33`) the packet is multi-cluster Service traffic destined for cluster C, and it will tunnel the packet to cluster C's Gateway Node `node-c1`, after performing SNAT and setting the packet's source IP to its own Gateway IP. On `node-c1`, the packet will go through the Service load balancing pipeline again with an endpoint of Service `nginx` being chosen as the destination. As the Service has only one endpoint - `172.10.11.33` of `pod-c`, the request packet will be DNAT'd to `172.10.11.33` and tunnelled to `node-c2` where `pod-c` is running. Finally, on `node-c2` the packet will go through the normal Antrea forwarding pipeline and be forwarded to `pod-c`. At this moment, Antrea does not support Pod-level policy enforcement for cross-cluster traffic. Access towards multi-cluster Services can be regulated with Antrea ClusterNetworkPolicy `toService` rules. In each member cluster, users can create an Antrea ClusterNetworkPolicy selecting Pods in that cluster, with the imported Mutli-cluster Service name and Namespace in an egress `toService` rule, and the Action to take for traffic matching this rule. For more information regarding Antrea ClusterNetworkPolicy (ACNP), refer to . Multi-cluster admins can also specify certain ClusterNetworkPolicies to be replicated across the entire ClusterSet. The ACNP to be replicated should be created as a ResourceExport in the leader cluster, and the resource export/import pipeline will ensure member clusters receive this ACNP spec to be replicated. Each member cluster's Multi-cluster Controller will then create an ACNP in their respective clusters. Multi-cluster Gateway supports all of `encap`, `noEncap`, `hybrid`, and `networkPolicyOnly` modes. In all supported modes, the cross-cluster traffic is routed by Multi-cluster Gateways of member clusters, and the traffic goes through Antrea overlay tunnels between Gateways. In `noEncap`, `hybrid`, and `networkPolicyOnly` modes, even when in-cluster Pod traffic does not go through tunnels, antrea-agent still creates tunnels between the Gateway Node and other Nodes, and routes cross-cluster traffic to reach the Gateway through the tunnels. Specially for , Antrea only handles multi-cluster traffic routing, while the primary CNI takes care of in-cluster traffic routing."
}
] |
{
"category": "Runtime",
"file_name": "architecture.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
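The Service export step described above is triggered by creating a ServiceExport in each member cluster. Below is a minimal sketch, assuming the upstream Kubernetes Multi-cluster Service API group used by Antrea Multi-cluster and a Service `foo` in Namespace `ns`; the name and Namespace of the ServiceExport must match the Service being exported.

```bash
cat << EOF | kubectl apply -f -
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: foo        # must match the Service name
  namespace: ns    # must match the Service Namespace
EOF

# After the import completes, the derived multi-cluster Service appears locally.
kubectl get service antrea-mc-foo -n ns
```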
[
{
"data": "title: Clone Files or Directories sidebar_position: 6 This command makes a 1:1 clone of your data by creating a mere metadata copy, without creating any new data in the object storage, thus cloning is very fast regardless of target file / directory size. Under JuiceFS, this command is a better alternative to `cp`, moreover, for Linux clients using kernels with support, then the `cp` command achieves the same result as `juicefs clone`. The clone result is a metadata copy only, where all the files are still referencing the same underlying object storage blocks, that's why a clone behaves the same in every way as its originals. When either of them go through actual file data modification, the affected data blocks will be copied on write, and become new blocks after write, while the unchanged part of the files remains the same, still referencing the original blocks. Please note that system tools like disk-free or disk-usage (`df`, `du`) will report the space used by the cloned data, but the underlying object storage space will not grow as blocks remains the same. On the same way, as metadata is actually replicated, the clone will take the same metadata engine storage space as the original. Clones takes up both file system storage space, inodes and metadata engine storage space. Pay special attention when making clones on large size directories. ```shell juicefs clone SRC DST juicefs clone /mnt/jfs/file1 /mnt/jfs/file2 juicefs clone /mnt/jfs/dir1 /mnt/jfs/dir2 ``` In terms of transaction consistency, cloning behaves as follows: Before `clone` command finishes, destination file is not visible. For file: The `clone` command ensures atomicity, meaning that the cloned file will always be in a correct and consistent state. For directory: The `clone` command does not guarantee atomicity for directories. In other words, if the source directory changes during the cloning process, the target directory may be different from the source directory. Only one `clone` can be successfully created from the same location at the same time. The failed clone will clean up the temporarily created directory tree. The clone is done by the mount process, it will be interrupted if `clone` command is terminated. If the clone fails or is interrupted, `mount` process will cleanup any created inodes. If the mount process fails to do that, there could be some leaking the metadata engine and object storage, because the dangling tree still hold the references to underlying data blocks. They could be cleaned up by the command."
}
] |
{
"category": "Runtime",
"file_name": "clone.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
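A short usage sketch of the behavior described above: the clone returns almost immediately because only metadata is copied, and both trees report the same logical size even though no new blocks are written to the object storage. The paths are placeholders under an assumed mount point `/mnt/jfs`.

```bash
# Metadata-only copy; fast regardless of the amount of data in the tree.
juicefs clone /mnt/jfs/dataset /mnt/jfs/dataset-copy

# Both paths report the same logical usage, while object storage usage stays flat.
du -sh /mnt/jfs/dataset /mnt/jfs/dataset-copy
```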
[
{
"data": "This guide is designed for developers and is - same as the Developer Guide - not intended for production systems or end users. It is advisable to only follow this guide on non-critical development systems. To run Kata Containers in SNP-VMs, the following software stack is used. The host BIOS and kernel must be capable of supporting AMD SEV-SNP and configured accordingly. For Kata Containers, the host kernel with branch and commit is known to work in conjunction with SEV Firmware version 1.51.3 (0xh\\_1.33.03) available on AMD's . See to configure the host accordingly. Verify that you are able to run SEV-SNP encrypted VMs first. The guest components required for Kata Containers are built as described below. Tip: It is easiest to first have Kata Containers running on your system and then modify it to run containers in SNP-VMs. Follow the and then follow the below steps. Nonetheless, you can just follow this guide from the start. Follow all of the below steps to install Kata Containers with SNP-support from scratch. These steps mostly follow the developer guide with modifications to support SNP Steps from the Developer Guide: Get all the for building the kata-runtime by first building a rootfs, then building the initrd based on the rootfs, use a custom agent and install. `ubuntu` works as the distribution of choice. Get the to build a custom kernel SNP-specific steps: Build the SNP-specific kernel as shown below (see this for more information) ```bash $ pushd kata-containers/tools/packaging/ $ ./kernel/build-kernel.sh -a x86_64 -x snp setup $ ./kernel/build-kernel.sh -a x86_64 -x snp build $ sudo -E PATH=\"${PATH}\" ./kernel/build-kernel.sh -x snp install $ popd ``` Build a current OVMF capable of SEV-SNP: ```bash $ pushd kata-containers/tools/packaging/static-build/ovmf $ ./build.sh $ tar -xvf edk2-x86_64.tar.gz $ popd ``` Build a custom QEMU ```bash $ source kata-containers/tools/packaging/scripts/lib.sh $ qemuurl=\"$(getfromkatadeps \"assets.hypervisor.qemu-snp-experimental.url\")\" $ qemutag=\"$(getfromkatadeps \"assets.hypervisor.qemu-snp-experimental.tag\")\" $ git clone \"${qemu_url}\" $ pushd qemu $ git checkout \"${qemu_tag}\" $ ./configure --enable-virtfs --target-list=x86_64-softmmu --enable-debug $ make -j \"$(nproc)\" $ popd ``` The configuration file located at `/etc/kata-containers/configuration.toml` must be adapted as follows to support SNP-VMs: Use the SNP-specific kernel for the guest VM (change path) ```toml kernel = \"/usr/share/kata-containers/vmlinuz-snp.container\" ``` Enable the use of an initrd (uncomment) ```toml initrd ="
},
{
"data": "``` Disable the use of a rootfs (comment out) ```toml ``` Use the custom QEMU capable of SNP (change path) ```toml path = \"/path/to/qemu/build/qemu-system-x86_64\" ``` Use `virtio-9p` device since `virtio-fs` is unsupported due to bugs / shortcomings in QEMU version for SEV and SEV-SNP (change value) ```toml shared_fs = \"virtio-9p\" ``` Disable `virtiofsd` since it is no longer required (comment out) ```toml ``` Disable NVDIMM (uncomment) ```toml disableimagenvdimm = true ``` Disable shared memory (uncomment) ```toml filemembackend = \"\" ``` Enable confidential guests (uncomment) ```toml confidential_guest = true ``` Enable SNP-VMs (uncomment) ```toml sevsnpguest = true ``` Configure an OVMF (add path) ```toml firmware = \"/path/to/kata-containers/tools/packaging/static-build/ovmf/opt/kata/share/ovmf/OVMF.fd\" ``` With Kata Containers configured to support SNP-VMs, we use containerd to test and deploy containers in these VMs. If not already present, follow to install containerd and its related components including `CNI` and the `cri-tools` (skip Kata Containers since we already installed it) Follow to configure containerd to use Kata Containers Run the below commands to start a container. See for more information ```bash $ sudo ctr image pull docker.io/library/busybox:latest $ sudo ctr run --cni --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/busybox:latest hello sh ``` Inside the running container, run the following commands to check if SNP is active. It should look something like this: ``` / # dmesg | grep -i sev [ 0.299242] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP [ 0.472286] SEV: Using SNP CPUID table, 31 entries present. [ 0.514574] SEV: SNP guest platform device initialized. [ 0.885425] sev-guest sev-guest: Initialized SEV guest driver (using vmpck_id 0) ``` To obtain an attestation report inside the container, the `/dev/sev-guest` must first be configured. As of now, the VM does not perform this step, however it can be performed inside the container, either in the terminal or in code. Example for shell: ``` / # SNP_MAJOR=$(cat /sys/devices/virtual/misc/sev-guest/dev | awk -F: '{print $1}') / # SNP_MINOR=$(cat /sys/devices/virtual/misc/sev-guest/dev | awk -F: '{print $2}') / # mknod -m 600 /dev/sev-guest c \"${SNPMAJOR}\" \"${SNPMINOR}\" ``` Support for cgroups v2 is still . If issues occur due to cgroups v2 becoming the default in newer systems, one possible solution is to downgrade cgroups to v1: ```bash sudo sed -i 's/^\\(GRUBCMDLINELINUX=\".*\\)\"/\\1 systemd.unifiedcgrouphierarchy=0\"/' /etc/default/grub sudo update-grub sudo reboot ``` If both SEV and SEV-SNP are supported by the host, Kata Containers uses SEV-SNP by default. You can verify what features are enabled by checking `/sys/module/kvmamd/parameters/sev` and `sevsnp`. This means that Kata Containers can not run both SEV-SNP-VMs and SEV-VMs at the same time. If SEV is to be used by Kata Containers instead, reload the `kvm_amd` kernel module without SNP-support, this will disable SNP-support for the entire platform. ```bash sudo rmmod kvmamd && sudo modprobe kvmamd sev_snp=0 ```"
}
] |
{
"category": "Runtime",
"file_name": "how-to-run-kata-containers-with-SNP-VMs.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
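Before changing `configuration.toml` as described above, it can help to confirm that the host actually exposes SEV-SNP to KVM. A small sketch based on the sysfs paths mentioned in the troubleshooting note; values of `Y` or `1` indicate the corresponding feature is enabled.

```bash
# SEV / SEV-SNP support flags of the kvm_amd module on the host.
cat /sys/module/kvm_amd/parameters/sev
cat /sys/module/kvm_amd/parameters/sev_snp

# Kernel messages confirming SEV/SNP initialization on the host.
sudo dmesg | grep -i sev
```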
[
{
"data": "Starting from version 1.6.0, Antrea supports the `antctl mc` commands, which can collect information from a leader cluster for troubleshooting Antrea Multi-cluster issues, deploy Antrea Multi-cluster and set up ClusterSets in both leader and member clusters. The `antctl mc get` command is supported since Antrea v1.6.0, while other commands are supported since v1.8.0. These commands cannot run inside the `antrea-controller`, `antrea-agent` or `antrea-mc-controller` Pods. antctl needs a kubeconfig file to access the target cluster's API server, and it will look for the kubeconfig file at `$HOME/.kube/config` by default. You can select a different file by setting the `KUBECONFIG` environment variable or with the `--kubeconfig` option of antctl. `antctl mc get clusterset` (or `get clustersets`) command prints all ClusterSets, a specified Clusterset, or the ClusterSet in a specified Namespace. `antctl mc get resourceimport` (or `get resourceimports`, `get ri`) command prints all ResourceImports, a specified ResourceImport, or ResourceImports in a specified Namespace. `antctl mc get resourceexport` (or `get resourceexports`, `get re`) command prints all ResourceExports, a specified ResourceExport, or ResourceExports in a specified Namespace. `antctl mc get joinconfig` command prints member cluster join parameters of the ClusterSet in a specified leader cluster Namespace. `antctl mc get membertoken` (or `get membertokens`) command prints all member tokens, a specified token, or member tokens in a specified Namespace. The command is supported only on a leader cluster. Using the `json` or `yaml` antctl output format can print more information of ClusterSet, ResourceImport, and ResourceExport than using the default table output format. ```bash antctl mc get clusterset [NAME] [-n NAMESPACE] [-o json|yaml] [-A] antctl mc get resourceimport [NAME] [-n NAMESPACE] [-o json|yaml] [-A] antctl mc get resourceexport [NAME] [-n NAMESPACE] [-clusterid CLUSTERID] [-o json|yaml] [-A] antctl mc get joinconfig [--member-token TOKEN_NAME] [-n NAMESPACE] antctl mc get membertoken [NAME] [-n NAMESPACE] [-o json|yaml] [-A] ``` To see the usage examples of these commands, you may also run `antctl mc get [subcommand] --help`. `antctl mc create` command creates a token for member clusters to join a ClusterSet. The command will also create a Secret to store the token, as well as a ServiceAccount and a RoleBinding. The `--output-file` option saves the member token Secret manifest to a file. ```bash anctcl mc create membertoken NAME -n NAMESPACE [-o OUTPUT_FILE] ``` To see the usage examples of these commands, you may also run `antctl mc create [subcommand] --help`. `antctl mc delete` command deletes a member token of a ClusterSet. The command will delete the corresponding Secret, ServiceAccount and RoleBinding if they"
},
{
"data": "```bash anctcl mc delete membertoken NAME -n NAMESPACE ``` To see the usage examples of these commands, you may also run `antctl mc delete [subcommand] --help`. `antctl mc deploy` command deploys Antrea Multi-cluster Controller to a leader or member cluster. `antctl mc deploy leadercluster` command deploys Antrea Multi-cluster Controller to a leader cluster and imports all the Antrea Multi-cluster CRDs. `antctl mc deploy membercluster` command deploys Antrea Multi-cluster Controller to a member cluster and imports all the Antrea Multi-cluster CRDs. ```bash antctl mc deploy leadercluster -n NAMESPACE [--antrea-version ANTREAVERSION] [-f PATHTO_MANIFEST] antctl mc deploy membercluster -n NAMESPACE [--antrea-version ANTREAVERSION] [-f PATHTO_MANIFEST] ``` To see the usage examples of these commands, you may also run `antctl mc deploy [subcommand] --help`. `antctl mc init` command initializes an Antrea Multi-cluster ClusterSet in a leader cluster. It will create a ClusterSet for the leader cluster. If the `-j|--join-config-file` option is specified, the ClusterSet join parameters will be saved to the specified file, which can be used in the `antctl mc join` command for a member cluster to join the ClusterSet. ```bash antctl mc init -n NAMESPACE --clusterset CLUSTERSETID --clusterid CLUSTERID [--create-token] [-j JOINCONFIG_FILE] ``` To see the usage examples of this command, you may also run `antctl mc init --help`. `antctl mc join` command lets a member cluster join an existing Antrea Multi-cluster ClusterSet. It will create a ClusterSet for the member cluster. Users can use command line options or a config file (which can be the output file of the `anctl mc init` command) to specify the ClusterSet join arguments. When the config file is provided, the command line options may be overridden by the file. A token is needed for a member cluster to access the leader cluster API server. Users can either specify a pre-created token Secret with the `--token-secret-name` option, or pass a Secret manifest to create the Secret with either the `--token-secret-file` option or the config file. ```bash antctl mc join --clusterset=CLUSTERSET_ID \\ --clusterid=CLUSTER_ID \\ --namespace=[MEMBER_NAMESPACE] \\ --leader-clusterid=LEADERCLUSTERID \\ --leader-namespace=LEADER_NAMESPACE \\ --leader-apiserver=LEADER_APISERVER \\ --token-secret-name=[TOKENSECRETNAME] \\ --token-secret-file=[TOKENSECRETFILE] antctl mc join --config-file JOINCONFIGFILE [--clusterid=CLUSTERID] [--token-secret-name=TOKENSECRETNAME] [--token-secret-file=TOKENSECRET_FILE] ``` Below is a config file example: ```yaml apiVersion: multicluster.antrea.io/v1alpha1 kind: ClusterSetJoinConfig clusterSetID: clusterset1 clusterID: cluster-east namespace: kube-system leaderClusterID: cluster-north leaderNamespace: antrea-multicluster leaderAPIServer: https://172.18.0.3:6443 tokenSecretName: cluster-east-token ``` `antctl mc leave` command lets a member cluster leave a ClusterSet. It will delete the ClusterSet and other resources created by antctl for the member cluster. ```bash antctl mc leave --clusterset CLUSTERSET_ID --namespace [NAMESPACE] ``` `antctl mc destroy` command can destroy an Antrea Multi-cluster ClusterSet in a leader cluster. It will delete the ClusterSet and other resources created by antctl for the leader cluster. ```bash antctl mc destroy --clusterset=CLUSTERSET_ID --namespace NAMESPACE ```"
}
] |
{
"category": "Runtime",
"file_name": "antctl.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
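Tying the `init` and `join` commands above together, a typical ClusterSet bootstrap might look like the sketch below. The ClusterSet ID, cluster IDs, and Namespaces mirror the example config file shown earlier and are placeholders for your own values.

```bash
# On the leader cluster: create the ClusterSet, a member token, and a join config file.
antctl mc init -n antrea-multicluster \
  --clusterset clusterset1 --clusterid cluster-north \
  --create-token -j join-config.yml

# On a member cluster: join using the generated config, overriding the member cluster ID.
antctl mc join --config-file join-config.yml --clusterid cluster-east
```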
[
{
"data": "``` bash curl -v http://192.168.0.22:17220/getInode?pid=100&ino=1024 ``` Request Parameters: | Parameter | Type | Description | |--||-| | pid | Integer | Shard ID | | ino | Integer | Inode ID | ``` bash curl -v http://192.168.0.22:17220/getExtentsByInode?pid=100&ino=1024 ``` Request Parameters: | Parameter | Type | Description | |--||-| | pid | Integer | Shard ID | | ino | Integer | Inode ID | ``` bash curl -v http://192.168.0.22:17220/getAllInodes?pid=100 ``` Request Parameters: | Parameter | Type | Description | |--||-| | pid | Integer | Shard ID | ``` bash curl -v '192.168.0.22:17220/getEbsExtentsByInode?pid=282&ino=16797167' ``` Request Parameters: | Parameter | Type | Description | |--||-| | pid | Integer | Shard ID | | ino | Integer | Inode ID |"
}
] |
{
"category": "Runtime",
"file_name": "inode.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
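A small usage note for the endpoints above: quoting the URL keeps the shell from treating `&` as a background operator, and piping through a JSON formatter (here `jq`, assumed to be installed) makes the response easier to read.

```bash
# Query an inode and pretty-print the JSON response.
curl -s 'http://192.168.0.22:17220/getInode?pid=100&ino=1024' | jq .
```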
[
{
"data": "All communication between Incus and its clients happens using a RESTful API over HTTP. This API is encapsulated over either TLS (for remote operations) or a Unix socket (for local operations). See {ref}`authentication` for information about how to access the API remotely. ```{tip} For examples on how the API is used, run any command of the Incus client () with the `--debug` flag. The debug information displays the API calls and the return values. For quickly querying the API, the Incus client provides a command. ``` The list of supported major API versions can be retrieved using `GET /`. The reason for a major API bump is if the API breaks backward compatibility. Feature additions done without breaking backward compatibility only result in addition to `api_extensions` which can be used by the client to check if a given feature is supported by the server. There are three standard return types: Standard return value Background operation Error For a standard synchronous operation, the following JSON object is returned: ```js { \"type\": \"sync\", \"status\": \"Success\", \"status_code\": 200, \"metadata\": {} // Extra resource/action specific metadata } ``` HTTP code must be 200. When a request results in a background operation, the HTTP code is set to 202 (Accepted) and the Location HTTP header is set to the operation URL. The body is a JSON object with the following structure: ```js { \"type\": \"async\", \"status\": \"OK\", \"status_code\": 100, \"operation\": \"/1.0/instances/<id>\", // URL to the background operation \"metadata\": {} // Operation metadata (see below) } ``` The operation metadata structure looks like: ```js { \"id\": \"a40f5541-5e98-454f-b3b6-8a51ef5dbd3c\", // UUID of the operation \"class\": \"websocket\", // Class of the operation (task, websocket or token) \"created_at\": \"2015-11-17T22:32:02.226176091-05:00\", // When the operation was created \"updated_at\": \"2015-11-17T22:32:02.226176091-05:00\", // Last time the operation was updated \"status\": \"Running\", // String version of the operation's status \"status_code\": 103, // Integer version of the operation's status (use this rather than status) \"resources\": { // Dictionary of resource types (container, snapshots, images) and affected resources \"instances\": [ \"/1.0/instances/test\" ] }, \"metadata\": { // Metadata specific to the operation in question (in this case, exec) \"fds\": { \"0\": \"2a4a97af81529f6608dca31f03a7b7e47acc0b8dc6514496eb25e325f9e4fa6a\", \"control\": \"5b64c661ef313b423b5317ba9cb6410e40b705806c28255f601c0ef603f079a7\" } }, \"may_cancel\": false, // Whether the operation can be canceled (DELETE over REST) \"err\": \"\" // The error string should the operation have failed } ``` The body is mostly provided as a user friendly way of seeing what's going on without having to pull the target operation, all information in the body can also be retrieved from the background operation URL. There are various situations in which something may immediately go wrong, in those cases, the following return value is used: ```js { \"type\": \"error\", \"error\": \"Failure\", \"error_code\": 400, \"metadata\": {} // More details about the error } ``` HTTP code must be one of of 400, 401, 403, 404, 409, 412 or 500. The Incus REST API often has to return status information, be that the reason for an error, the current state of an operation or the state of the various resources it exports. To make it simple to debug, all of those are always doubled. 
There is a numeric representation of the state which is guaranteed never to change and can be relied on by API clients. Then there is a text version meant to make it easier for people manually using the API to figure out what's happening. In most cases, those will be called `status` and `status_code`, the former being the user-friendly string representation and the latter the fixed numeric representation."
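As a rough illustration (not taken from the Incus documentation), a client can branch on the `type` field of the returned envelope and then use `status_code` or `error_code` accordingly; the Unix-socket path and the use of `curl`/`jq` below are assumptions.

```shell
# Hedged sketch: query the API over the local Unix socket and dispatch on the
# three standard return types (sync / async / error). Socket path is an assumption.
resp=$(curl -s --unix-socket /var/lib/incus/unix.socket http://incus/1.0)
case "$(echo "$resp" | jq -r .type)" in
  sync)  echo "synchronous result, status_code=$(echo "$resp" | jq -r .status_code)" ;;
  async) echo "background operation at $(echo "$resp" | jq -r .operation)" ;;
  error) echo "error $(echo "$resp" | jq -r .error_code): $(echo "$resp" | jq -r .error)" ;;
esac
```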
},
{
"data": "The codes are always 3 digits, with the following ranges: 100 to 199: resource state (started, stopped, ready, ...) 200 to 399: positive action result 400 to 599: negative action result 600 to 999: future use Code | Meaning : | : 100 | Operation created 101 | Started 102 | Stopped 103 | Running 104 | Canceling 105 | Pending 106 | Starting 107 | Stopping 108 | Aborting 109 | Freezing 110 | Frozen 111 | Thawed 112 | Error 113 | Ready 200 | Success 400 | Failure 401 | Canceled (rest-api-recursion)= To optimize queries of large lists, recursion is implemented for collections. A `recursion` argument can be passed to a GET query against a collection. The default value is 0 which means that collection member URLs are returned. Setting it to 1 will have those URLs be replaced by the object they point to (typically another JSON object). Recursion is implemented by simply replacing any pointer to an job (URL) by the object itself. (rest-api-filtering)= To filter your results on certain values, filter is implemented for collections. A `filter` argument can be passed to a GET query against a collection. Filtering is available for the instance, image and storage volume endpoints. There is no default value for filter which means that all results found will be returned. The following is the language used for the filter argument: ?filter=fieldname eq desiredfield_assignment The language follows the OData conventions for structuring REST API filtering logic. Logical operators are also supported for filtering: not (`not`), equals (`eq`), not equals (`ne`), and (`and`), or (`or`). Filters are evaluated with left associativity. Values with spaces can be surrounded with quotes. Nesting filtering is also supported. For instance, to filter on a field in a configuration you would pass: ?filter=config.fieldname eq desiredfield_assignment For filtering on device attributes you would pass: ?filter=devices.devicename.fieldname eq desiredfieldassignment Here are a few GET query examples of the different filtering methods mentioned above: containers?filter=name eq \"my container\" and status eq Running containers?filter=config.image.os eq ubuntu or devices.eth0.nictype eq bridged images?filter=Properties.os eq Centos and not UpdateSource.Protocol eq simplestreams Any operation which may take more than a second to be done must be done in the background, returning a background operation ID to the client. The client will then be able to either poll for a status update or wait for a notification using the long-poll API. A WebSocket-based API is available for notifications, different notification types exist to limit the traffic going to the client. It's recommended that the client always subscribes to the operations notification type before triggering remote operations so that it doesn't have to then poll for their status. The Incus API supports both PUT and PATCH to modify existing objects. PUT replaces the entire object with a new definition, it's typically called after the current object state was retrieved through GET. To avoid race conditions, the ETag header should be read from the GET response and sent as If-Match for the PUT request. This will cause Incus to fail the request if the object was modified between GET and PUT. PATCH can be used to modify a single field inside an object by only specifying the property that you want to change. To unset a key, setting it to empty will usually do the trick, but there are cases where PATCH won't work and PUT needs to be used instead. 
Incus has an auto-generated specification describing its API endpoints. The YAML version of this API specification can be found in . See {doc}`api` for a convenient web rendering of it."
}
] |
{
"category": "Runtime",
"file_name": "rest-api.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. Traffic is encrypted and encapsulated in UDP packets. WireGuard creates a virtual network device that is accessed via netlink. It appears like any network device and currently has a hardcoded name `subwg0`. WireGuard identifies peers by their cryptographic public key without the need to exchange shared secrets. The owner of the public key must have the corresponding private key to prove identity. The driver creates the key pair and adds the public key to the local endpoint so other clusters can connect. Like `ipsec`, the node IP address is used as the endpoint udp address of the WireGuard tunnels. A fixed port is used for all endpoints. The driver adds routing rules to redirect cross cluster communication through the virtual network device `subwg0`. (*note: this is different from `ipsec`, which intercepts packets at netfilter level.*) The driver uses , a go package that enables control of WireGuard devices on multiple platforms. Link creation and removal are done through . Currently assuming Linux Kernel WireGuard (`wgtypes.LinuxKernel`). WireGuard needs to be on the gateway nodes. For example, (Ubuntu < 19.04), ```shell sudo add-apt-repository ppa:wireguard/wireguard sudo apt-get update sudo apt-get install linux-headers-`uname -r` -y sudo apt-get install wireguard ``` The driver needs to be enabled with ```shell bin/subctl join --cable-driver wireguard --disable-nat broker-info.subm ``` The default UDP listen port for submariner WireGuard driver is `4500`. It can be changed by setting the env var `CEIPSECNATTPORT` It is assumed that the wireguard network device named `submariner` is exclusively used by submariner-gateway and should not be edited manually. If you get the following message ```text Fatal error occurred creating engine: failed to add wireguard device: operation not supported ``` you probably did not install WireGuard on the Gateway node. The e2e tests can be run with WireGuard by calling `make e2e` with `using=wireguard`: ```shell make e2e using=wireguard ``` No new `iptables` rules were added, although source NAT needs to be disabled for cross cluster communication. This is similar to disabling SNAT when sending cross-cluster traffic between nodes to `submariner-gateway`, so the existing rules should be enough. The driver will fail if the CNI does SNAT before routing to Wireguard (e.g., failed with Calico, works with Flannel). The following metrics are exposed per gateway: `connection_status`: indicates whether or not the connection is established where the value 1 means connected and 0 means disconnected. `connectionestablishedtimestamp` the Unix timestamp at which the connection established. `gatewaytxbytes` Bytes transmitted for the connection. `gatewayrxbytes` Bytes received for the connection. bctl`* command line utility and Helm charts wrap the Operator. The recommended deployment method is `subctl`, as it is currently the default in CI and provides diagnostic features. See the on Submariner's website. Submariner provides the `subctl` CLI utility to simplify the deployment and maintenance of Submariner across your clusters. See the on Submariner's website. See the on Submariner's website. See the and [Automated Troubleshooting docs](https://submariner.io/operations/troubleshooting/#automated-troubleshooting) on Submariner's website. See the on Submariner's website. See the on Submariner's website. See the of Submariner's website."
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Submariner",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "You may have noticed that the `markdown` files in the `/docs` directory are also displayed on . If you want to add documentation to Kilo, you can start a local webserver to check out how the website would look like. Install . The markdown files for the website are located in `/website/docs` and are generated from the like-named markdown files in the `/docs` directory and from the corresponding header files without the `.md` extension in the `/website/docs` directory. To generate the markdown files in `/website/docs`, run: ```shell make website/docs/README.md ``` Next, build the website itself by installing the `node_modules` and building the website's HTML from the generated markdown: ```shell make website/build/index.html ``` Now, start the website server with: ```shell yarn --cwd website start ``` This command should have opened a browser window with the website; if not, open your browser and point it to `http://localhost:3000`. If you make changes to any of the markdown files in `/docs` and want to reload the local `node` server, run: ```shell make website/docs/README.md -B ``` You can execute the above while the node server is running and the website will be rebuilt and reloaded automatically. If you add a new file to the `/docs` directory, you also need to create a corresponding header file containing the front-matter in `/website/docs/`. Then, regenerate the markdown for the website with the command: ```shell make website/docs/README.md ``` Edit `/website/sidebars.js` accordingly. Note: The `id` in the header file `/website/docs/<new file>` must match the `id` specified in `website/sidebars.js`."
}
] |
{
"category": "Runtime",
"file_name": "building_website.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The solution plan agreed upon with the telemetry team is for the Rook operator to add telemetry to the Ceph mon `config-key` database, and Ceph will read each of those items for telemetry retrieval. Users must opt in to telemetry Users must also opt into the new (delta) telemetry items added between Ceph versions Rook should not add information to `config-key` keys that can grow arbitrarily large to keep space usage of the mon database low (limited growth is still acceptable) Metric names will indicate a hierarchy that can be parsed to add it to Ceph telemetry collection in a more ordered fashion. For example `rook/version` and `rook/kubernetes/version` would be put into a structure like shown: ```json \"rook\": { \"version\": \"vx.y.z\" \"kubernetes\": { \"version\": \"vX.Y.Z\" } } ``` `rook/version` - Rook version. `rook/kubernetes/...` `rook/kubernetes/version` - Kubernetes version. Ceph already collects os/kernel versions. `rook/csi/...` `rook/csi/version` - Ceph CSI version. `rook/node/count/...` - Node scale information `rook/node/count/kubernetes-total` - Total number of Kubernetes nodes `rook/node/count/with-ceph-daemons` - Number of nodes running Ceph daemons. Since clusters with portable PVCs have one \"node\" per PVC, this will help show the actual node count for Rook installs in portable environments We can get this by counting the number of crash collector pods; however, if any users disable the crash collector, we will report `-1` to represent \"unknown\" `rook/node/count/with-csi-rbd-plugin` - Number of nodes with CSI RBD plugin pods `rook/node/count/with-csi-cephfs-plugin` - Number of nodes with CSI CephFS plugin pods `rook/node/count/with-csi-nfs-plugin` - Number of nodes with CSI NFS plugin pods `rook/usage/storage-class/...` - Info about storage classes related to the Ceph cluster `rook/usage/storage-class/count/...` - Number of storage classes of a given type `rook/usage/storage-class/count/total` - This is additionally useful in the case of a newly-added storage class type not recognized by an older Ceph telemetry version `rook/usage/storage-class/count/rbd` `rook/usage/storage-class/count/cephfs` `rook/usage/storage-class/count/nfs` `rook/usage/storage-class/count/bucket` `rook/cluster/storage/...` - Info about storage configuration `rook/cluster/storage/device-set/...` - Info about storage class device sets `rook/cluster/storage/device-set/count/...` - Number of device sets of given types `rook/cluster/storage/device-set/count/total` `rook/cluster/storage/device-set/count/portable` `rook/cluster/storage/device-set/count/non-portable` `rook/cluster/mon/...` - Info about monitors and mon configuration `rook/cluster/mon/count` - The desired mon count `rook/cluster/mon/allow-multiple-per-node` - true/false if allowing multiple mons per node 'true' shouldn't be used in production clusters, so this can give an idea of production count `rook/cluster/mon/max-id` - The highest mon ID, which increases as mons fail over `rook/cluster/mon/pvc/enabled` - true/false whether mons are on PVC `rook/cluster/mon/stretch/enabled` - true/false if mons are in a stretch configuration `rook/cluster/network/...` `rook/cluster/network/provider` - The network provider used for the cluster (default, host, multus) `rook/cluster/external-mode` - true/false if the cluster is in external mode RBD pools (name is stripped), some config info, and the number of them. Ceph Filesystems and MDS info. RGW count, zonegroups, zones. RBD mirroring info. 
Rook, with input from Ceph telemetry, will approve a version of this design doc and treat all noted metric names as final."
},
{
"data": "Ceph will add all of the noted telemetry keys from the config-key database to its telemetry and backport to supported major Ceph versions. Ceph code should handle the case where Rook has not set these fields. Rook will implement each metric (or related metric group) individually over time. This strategy will allow Rook time to add telemetry items as it is able without rushing. Because the telemetry fields will be approved all at once, it will also minimize the coordination that is required between Ceph and Rook. The Ceph team will not need to create PRs one-to-one with Rook, and we can limit version mismatch issues as the telemetry is being added. Future updates will follow a similar pattern where new telemetry is suggested by updates to this design doc in Rook, then batch-added by Ceph. Rook will define all telemetry config-keys in a common file to easily understand from code what telemetry is implemented by a given code version of Rook. The below one-liner should list each individual metric in this design doc, which can help in creating Ceph issue trackers for adding Rook telemetry features. ```console grep -E -o -e '- `rook/.[^\\.]`' design/ceph/ceph-telemetry.md | grep -E -o -e 'rook/.[^`]' ``` Rejected metrics are included to capture the full discussion, and they can be revisited at any time with new information or desires. Count of each type of CR: cluster, object, file, object store, mirror, bucket topic, bucket notification, etc. This was rejected for version one for a few reasons: In most cases, the Ceph telemetry should be able to report this implicitly via its information. Block pools, filesystems, and object stores can already be inferred easily It would require a config-key for each CR sub-resource, and Rook has many. This could be simpler if we had a json blob for the value, but the Ceph telemetry team has set a guideline not to do this. We can revisit this on a case-by-case basis for specific CRs or features. For example, we may wish to have ideas about COSI usage when that is available. The memory/CPU requests/limits set on Ceph daemon types. This was rejected for a few reasons: Ceph telemetry already collects general information about OSDs and available space. This would require config-keys for each Ceph daemon, for memory and CPU, and for requests and limits, which is a large matrix to provide config keys for. This could be simpler if we had a json blob for the value, but the Ceph telemetry team has set a guideline not to do this. This is further exacerbated because OSDs can have different configurations for different storage class device sets. Unless we can provide good reasoning for why this particular metric is valuable, this is likely too much work for too little benefit. The number of PVCs/PVs of the different CSI types. This was rejected primarily because it would require adding new get/list permissions to the Rook operator which is antithetical to Rook's desires to keep permissions as minimal as possible."
}
] |
{
"category": "Runtime",
"file_name": "ceph-telemetry.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Adopters sidebar_position: 1 slug: /adopters | Company/Team | Environment | Use Cases | User Story | |-|-|--|--| | | Production | AI | | | | Production | Big Data, AI | | | | Production | Big Data | | | | Production | Big Data | | | | Production | Big Data | | | | Production | Big Data | | | | Production | Big Data | | | | Production | Big Data, File Sharing | | | | Production | AI | | | | Production | File Sharing | | | | Production | File Sharing | | | | Production | AI | | | | Production | DevOps | | | | Production | AI | | | | Production | Big Data, File Sharing | | | | Production | Big Data | | | | Production | Big Data, File Sharing | | | | Production | AI, File Sharing | | | | Production | AI | | | | Production | File Sharing | | | | Production | AI | | | | Production | File Sharing, VFX Rendering | | | | Production | AI | | | | Production | Big Data | | | | Production | AI, File Sharing | | | | Production | AI | | | | Production | File Sharing | | | | Production | AI, File Sharing | | | | Production | AI, File Sharing | | | | Production | AI, File Sharing | | | | Production | Big Data | | | | Production | File Sharing, VFX Rendering | | | | Production | AI, File Sharing | | | | Production | Big Data | | | | Production | Big Data, File Sharing | | | | Testing | File Sharing | | | | Production | Big Data, File Sharing | | | | Production | File Sharing | | | | Production | AI | | | | Production | Big Data | | | | Production | Big Data | | | | Production | HPC, File Sharing | | | | Production | AI | | | | Production | AI | | You are welcome to share your experience after using JuiceFS, either by submitting a Pull Request directly to this list, or by contacting us at ."
}
] |
{
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Container Object Storage Interface (COSI) is a specification for container orchestration frameworks to manage object storage. Even though there is no standard protocol defined for Object Store, it has flexibility to add support for all. The COSI spec abstracts common storage features such as create/delete buckets, grant/revoke access to buckets, attach/detach buckets, and more. COSI is released v1alpha1 with Kubernetes 1.25. More details about COSI can be found . It is projected that COSI will be the only supported object storage driver in the future and OBCs will be deprecated after some time. The is deployed as container in the default namespace. It is not deployed by Rook. The Ceph COSI driver is deployed as a deployment with a . The Ceph COSI driver is deployed in the same namespace as the Rook operator. The Ceph COSI driver is deployed with a service account that has the following RBAC permissions: The aim to support the v1alpha1 version of COSI in Rook v1.12. It will be extended to beta and release versions as appropriate. There will be one COSI Driver support from Rook. The driver will be started automatically with default settings when first Object Store gets created. The driver will be deleted when Rook Operator is uninstalled. The driver will be deployed in the same namespace as Rook operator. The user can provide additional setting via new `CephCOSIDriver` CRD which is owned by Rook. COSI CRDs should be installed in the cluster via following command ```bash kubectl apply -k github.com/kubernetes-sigs/container-object-storage-interface-api ``` COSI controller should be deployed in the cluster via following command ```bash kubectl apply -k github.com/kubernetes-sigs/container-object-storage-interface-controller ``` Following contents need to be append to `common.yaml` : <https://github.com/ceph/ceph-cosi/blob/master/resources/sa.yaml> <https://github.com/ceph/ceph-cosi/blob/master/resources/rbac.yaml> The `ceph-object-cosi-controller` will start the Ceph COSI Driver pod with default settings on the Rook Operator Namespace. The controller will bring up the Ceph COSI driver when first object store is created and will stop the COSI driver when Rook Operator is uninstalled only if it detect COSI Controller is running on default namespace. The controller will also watch for CephCosiDriver CRD, if defined the driver will be started with the settings provided in the CRD. If the Ceph COSI driver if up and running,it will also create `CephObjectStoreUser` named `cosi` for each object store which internally creates a secret rook-ceph-object-user-<objectstore-name>-cosi provides credentials for the object store. This can be specified in the BucketClass and BucketAccessClass. Also this controller ensures maximum one CephCosiDriver CRD exists in the cluster. For v1.12 the Ceph COSI Driver will be supported only for single site CephObjectStore aka object stores not configured with multisite settings like zone/zonegroup/realm. The users can define following CRD so that configuration related the Ceph COSI driver can be passed. This is not mandatory and the driver will be started with default settings if this CRD is not defined. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCosiDriver metadata: name: rook-ceph-cosi namespace: rook-ceph spec: deploymentStrategy: \"Auto\" placement: resource: ``` The `deploymentStrategy` can be `Auto` or `Force` or `Disable`. 
If `Auto` is specified, the driver will be started automatically when the first object store is created and will be stopped when the last object store is deleted. If `Force` is specified, the driver will be started when the CRD is created and will be stopped when the CRD is deleted. The user can also disable the driver by specifying `Disable` as the `deploymentStrategy`."
},
{
"data": "The `placement` field can be used to specify the placement settings for the driver. The `resource` field can be used to specify the resource requirements for the driver. Both can refer from . There are five different kubernetes resources related to COSI. They are `Bucket`, `BucketAccess`, `BucketAccessClass`, `BucketClass` and `BucketClaim`. The user can create these resources using following `kubectl` command. All the examples will be added to deploy/examples/cosi directory in the Rook repository. ```bash kubectl create -f deploy/examples/cosi/bucketclass.yaml kubectl create -f deploy/examples/cosi/bucketclaim.yaml kubectl create -f deploy/examples/cosi/bucketaccessclass.yaml kubectl create -f deploy/examples/cosi/bucketaccess.yaml ``` User need refer Secret which contains access credentials for the object store in the `Parameter` field of `BucketClass` and `BucketAccessClass` CRD as below: ```yaml Parameter objectUserSecretName: \"rook-ceph-object-user-<objectstore-name>-cosi\" objectStoreNamespace: \"<objectstore-store-namespace>\" ``` The user needs to mount the secret as volume created by `BucketAccess` to the application pod. The user can use the secret to access the bucket by parsing the mounted file. ```yaml spec: containers: volumeMounts: name: cosi-secrets mountPath: /data/cosi volumes: name: cosi-secrets secret: secretName: ba-secret ``` ```bash ``` ```json { apiVersion: \"v1alpha1\", kind: \"BucketInfo\", metadata: { name: \"ba-$uuid\" }, spec: { bucketName: \"ba-$uuid\", authenticationType: \"KEY\", endpoint: \"https://rook-ceph-my-store:443\", accessKeyID: \"AKIAIOSFODNN7EXAMPLE\", accessSecretKey: \"wJalrXUtnFEMI/K...\", region: \"us-east-1\", protocols: [ \"s3\" ] } } ``` Currently the ceph object store provisioned via Object Bucket Claim (OBC). They both can coexist and can even use same backend bucket from ceph storage. No deployment/configuration changes are required to support both. The lib-bucket-provisioner is deprecated and eventually will be replaced by COSI when it becomes more and more stable. The CRDs used by both are different hence there is no conflicts between them. The user can create OBC and COSI BucketClaim for same backend bucket which have always result in conflicts. Even though credentials for access the buckets are different, both have equal permissions on accessing the bucket. If Rooks creates OBC with `Delete` reclaim policy and same backend bucket is used by COSI BucketClaim with same policy, then bucket will be deleted when either of them is removed. This applied to OBC with reclaim policy is `Retain` otherwise the bucket will be deleted when OBC is deleted. So no point in migrating the OBC with `Delete` reclaim policy. First the user need to create a COSI Bucket resource pointing to the backend bucket. Then user can create BucketAccessClass and BucketAccess using the COSI Bucket CRD. Now the update application's credentials with BucketAccess secret, for OBC it was combination of secret and config map with keys word like AccessKey, SecretKey, Bucket, BucketHost etc. Here details in as JSON format in the . Finally the user need to delete the existing OBC. The credentials/endpoint for the CephObjectStore should be available by creating CephObjectStoreUser with proper permissions The COSI controller should be deployed in the cluster Rook can manage one Ceph COSI driver per Rook operator Rook should not modify COSI resources like Bucket, BucketAccess, BucketAccessClass, or BucketClass. 
Users should be able to manage both OBC and COSI Bucket/BucketAccess resources with the same Rook operator. When provisioning the Ceph COSI driver, Rook must uniquely identify the driver name, for example <namespace of rook operator>-cosi-driver, so that multiple COSI drivers or multiple Rook instances within a Kubernetes cluster will not collide. Rook needs to dynamically create/update the secret containing the credentials of the Ceph object store for the Ceph COSI driver when the user creates/updates the CephObjectStoreUser keys."
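To illustrate how an application might consume the mounted credentials (a hedged sketch: the mount path comes from the example above, while the file name under it and the use of `jq` and the AWS CLI are assumptions):

```shell
# Parse the BucketInfo JSON mounted into the pod and point any S3 client at it.
creds=/data/cosi/BucketInfo
endpoint=$(jq -r .spec.endpoint "$creds")
bucket=$(jq -r .spec.bucketName "$creds")
export AWS_ACCESS_KEY_ID=$(jq -r .spec.accessKeyID "$creds")
export AWS_SECRET_ACCESS_KEY=$(jq -r .spec.accessSecretKey "$creds")

aws --endpoint-url "$endpoint" s3 ls "s3://$bucket"
```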
}
] |
{
"category": "Runtime",
"file_name": "ceph-cosi-driver.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "name: Bug Report about: Report a bug if something isn't working as expected <!-- Make sure to include as much information as possible so we can fix it as quickly as possible. --> <!-- If you know how to fix this bug, please open a pull request on https://github.com/openebs/openebs/compare/?template=bugs.md --> <!-- If you can't answer some sections, please delete them --> <!-- Provide a description of this bug --> <!-- Tell us what should happen --> <!-- Tell us what happens instead --> <!-- Suggest a fix or reason for this bug --> <!-- Provide a link to a live example or steps to reproduce this bug --> <!-- Add screenshots of this bug --> <!-- Include as many relevant details about the environment you experienced the bug in --> `kubectl get nodes`: `kubectl get pods --all-namespaces`: `kubectl get services`: `kubectl get sc`: `kubectl get pv`: `kubectl get pvc`: OS (from `/etc/os-release`): Kernel (from `uname -a`): Install tools: Others:"
}
] |
{
"category": "Runtime",
"file_name": "bug-report.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "All notable changes to this project will be documented in this file. The format is based on . This project adheres to . Add `Recorder` in `go.opentelemetry.io/otel/log/logtest` to facilitate testing the log bridge implementations. (#5134) Add span flags to OTLP spans and links exported by `go.opentelemetry.io/otel/exporters/otlp/otlptrace`. (#5194) Make the initial alpha release of `go.opentelemetry.io/otel/sdk/log`. This new module contains the Go implementation of the OpenTelemetry Logs SDK. This module is unstable and breaking changes may be introduced. See our for more information about these stability guarantees. (#5240) Make the initial alpha release of `go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp`. This new module contains an OTLP exporter that transmits log telemetry using HTTP. This module is unstable and breaking changes may be introduced. See our for more information about these stability guarantees. (#5240) Make the initial alpha release of `go.opentelemetry.io/otel/exporters/stdout/stdoutlog`. This new module contains an exporter prints log records to STDOUT. This module is unstable and breaking changes may be introduced. See our for more information about these stability guarantees. (#5240) The `go.opentelemetry.io/otel/semconv/v1.25.0` package. The package contains semantic conventions from the `v1.25.0` version of the OpenTelemetry Semantic Conventions. (#5254) Update `go.opentelemetry.io/proto/otlp` from v1.1.0 to v1.2.0. (#5177) Improve performance of baggage member character validation in `go.opentelemetry.io/otel/baggage`. (#5214) Add `WithProxy` option in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4906) Add `WithProxy` option in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlptracehttp`. (#4906) Add `AddLink` method to the `Span` interface in `go.opentelemetry.io/otel/trace`. (#5032) The `Enabled` method is added to the `Logger` interface in `go.opentelemetry.io/otel/log`. This method is used to notify users if a log record will be emitted or not. (#5071) Add `SeverityUndefined` `const` to `go.opentelemetry.io/otel/log`. This value represents an unset severity level. (#5072) Add `Empty` function in `go.opentelemetry.io/otel/log` to return a `KeyValue` for an empty value. (#5076) Add `go.opentelemetry.io/otel/log/global` to manage the global `LoggerProvider`. This package is provided with the anticipation that all functionality will be migrate to `go.opentelemetry.io/otel` when `go.opentelemetry.io/otel/log` stabilizes. At which point, users will be required to migrage their code, and this package will be deprecated then removed. (#5085) Add support for `Summary` metrics in the `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` and `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` exporters. (#5100) Add `otel.scope.name` and `otel.scope.version` tags to spans exported by `go.opentelemetry.io/otel/exporters/zipkin`. (#5108) Add support for `AddLink` to `go.opentelemetry.io/otel/bridge/opencensus`. (#5116) Add `String` method to `Value` and `KeyValue` in `go.opentelemetry.io/otel/log`. (#5117) Add Exemplar support to `go.opentelemetry.io/otel/exporters/prometheus`. (#5111) Add metric semantic conventions to `go.opentelemetry.io/otel/semconv/v1.24.0`. Future `semconv` packages will include metric semantic conventions as well. 
(#4528) `SpanFromContext` and `SpanContextFromContext` in `go.opentelemetry.io/otel/trace` no longer make a heap allocation when the passed context has no span. (#5049) `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc` and `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` now create a gRPC client in idle mode and with \"dns\" as the default resolver using . (#5151) Because of that `WithDialOption` ignores , , and . Notice that which was used before is now deprecated. Clarify the documentation about equivalence guarantees for the `Set` and `Distinct` types in `go.opentelemetry.io/otel/attribute`. (#5027) Prevent default `ErrorHandler` self-delegation. (#5137) Update all dependencies to address [GO-2024-2687]. (#5139) Drop support for [Go 1.20]. (#4967) Deprecate `go.opentelemetry.io/otel/attribute.Sortable` type. (#4734) Deprecate `go.opentelemetry.io/otel/attribute.NewSetWithSortable` function. (#4734) Deprecate `go.opentelemetry.io/otel/attribute.NewSetWithSortableFiltered` function. (#4734) This release is the last to support [Go 1.20]. The next release will require at least [Go 1.21]. Support [Go 1.22]. (#4890) Add exemplar support to `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4900) Add exemplar support to `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4900) The `go.opentelemetry.io/otel/log` module is added. This module includes OpenTelemetry Go's implementation of the Logs Bridge API. This module is in an alpha state, it is subject to breaking changes. See our for more info. (#4961) ARM64 platform to the compatibility testing suite. (#4994) Fix registration of multiple callbacks when using the global meter provider from `go.opentelemetry.io/otel`. (#4945) Fix negative buckets in output of exponential"
},
{
"data": "(#4956) Register all callbacks passed during observable instrument creation instead of just the last one multiple times in `go.opentelemetry.io/otel/sdk/metric`. (#4888) This release contains the first stable, `v1`, release of the following modules: `go.opentelemetry.io/otel/bridge/opencensus` `go.opentelemetry.io/otel/bridge/opencensus/test` `go.opentelemetry.io/otel/example/opencensus` `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` See our for more information about these stability guarantees. Add `WithEndpointURL` option to the `exporters/otlp/otlpmetric/otlpmetricgrpc`, `exporters/otlp/otlpmetric/otlpmetrichttp`, `exporters/otlp/otlptrace/otlptracegrpc` and `exporters/otlp/otlptrace/otlptracehttp` packages. (#4808) Experimental exemplar exporting is added to the metric SDK. See for more information about this feature and how to enable it. (#4871) `ErrSchemaURLConflict` is added to `go.opentelemetry.io/otel/sdk/resource`. This error is returned when a merge of two `Resource`s with different (non-empty) schema URL is attempted. (#4876) The `Merge` and `New` functions in `go.opentelemetry.io/otel/sdk/resource` now returns a partial result if there is a schema URL merge conflict. Instead of returning `nil` when two `Resource`s with different (non-empty) schema URLs are merged the merged `Resource`, along with the new `ErrSchemaURLConflict` error, is returned. It is up to the user to decide if they want to use the returned `Resource` or not. It may have desired attributes overwritten or include stale semantic conventions. (#4876) Fix `ContainerID` resource detection on systemd when cgroup path has a colon. (#4449) Fix `go.opentelemetry.io/otel/sdk/metric` to cache instruments to avoid leaking memory when the same instrument is created multiple times. (#4820) Fix missing `Mix` and `Max` values for `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` by introducing `MarshalText` and `MarshalJSON` for the `Extrema` type in `go.opentelemetry.io/sdk/metric/metricdata`. (#4827) This is a release candidate for the v1.23.0 release. That release is expected to include the `v1` release of the following modules: `go.opentelemetry.io/otel/bridge/opencensus` `go.opentelemetry.io/otel/bridge/opencensus/test` `go.opentelemetry.io/otel/example/opencensus` `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` See our for more information about these stability guarantees. The `go.opentelemetry.io/otel/semconv/v1.22.0` package. The package contains semantic conventions from the `v1.22.0` version of the OpenTelemetry Semantic Conventions. (#4735) The `go.opentelemetry.io/otel/semconv/v1.23.0` package. The package contains semantic conventions from the `v1.23.0` version of the OpenTelemetry Semantic Conventions. (#4746) The `go.opentelemetry.io/otel/semconv/v1.23.1` package. The package contains semantic conventions from the `v1.23.1` version of the OpenTelemetry Semantic Conventions. (#4749) The `go.opentelemetry.io/otel/semconv/v1.24.0` package. The package contains semantic conventions from the `v1.24.0` version of the OpenTelemetry Semantic Conventions. (#4770) Add `WithResourceAsConstantLabels` option to apply resource attributes for every metric emitted by the Prometheus exporter. 
(#4733) Experimental cardinality limiting is added to the metric SDK. See for more information about this feature and how to enable it. (#4457) Add `NewMemberRaw` and `NewKeyValuePropertyRaw` in `go.opentelemetry.io/otel/baggage`. (#4804) Upgrade all use of `go.opentelemetry.io/otel/semconv` to use `v1.24.0`. (#4754) Update transformations in `go.opentelemetry.io/otel/exporters/zipkin` to follow `v1.24.0` version of the OpenTelemetry specification. (#4754) Record synchronous measurements when the passed context is canceled instead of dropping in `go.opentelemetry.io/otel/sdk/metric`. If you do not want to make a measurement when the context is cancelled, you need to handle it yourself (e.g `if ctx.Err() != nil`). (#4671) Improve `go.opentelemetry.io/otel/trace.TraceState`'s performance. (#4722) Improve `go.opentelemetry.io/otel/propagation.TraceContext`'s performance. (#4721) Improve `go.opentelemetry.io/otel/baggage` performance. (#4743) Improve performance of the `(*Set).Filter` method in `go.opentelemetry.io/otel/attribute` when the passed filter does not filter out any attributes from the set. (#4774) `Member.String` in `go.opentelemetry.io/otel/baggage` percent-encodes only when necessary. (#4775) Improve `go.opentelemetry.io/otel/trace.Span`'s performance when adding multiple attributes. (#4818) `Property.Value` in `go.opentelemetry.io/otel/baggage` now returns a raw string instead of a percent-encoded value. (#4804) Fix `Parse` in `go.opentelemetry.io/otel/baggage` to validate member value before percent-decoding. (#4755) Fix whitespace encoding of `Member.String` in `go.opentelemetry.io/otel/baggage`. (#4756) Fix observable not registered error when the asynchronous instrument has a drop aggregation in `go.opentelemetry.io/otel/sdk/metric`. (#4772) Fix baggage item key so that it is not canonicalized in `go.opentelemetry.io/otel/bridge/opentracing`. (#4776) Fix `go.opentelemetry.io/otel/bridge/opentracing` to properly handle baggage values that requires escaping during propagation. (#4804) Fix a bug where using multiple readers resulted in incorrect asynchronous counter values in"
},
{
"data": "(#4742) Remove the deprecated `go.opentelemetry.io/otel/bridge/opencensus.NewTracer`. (#4706) Remove the deprecated `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` module. (#4707) Remove the deprecated `go.opentelemetry.io/otel/example/view` module. (#4708) Remove the deprecated `go.opentelemetry.io/otel/example/fib` module. (#4723) Do not parse non-protobuf responses in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4719) Do not parse non-protobuf responses in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`. (#4719) This release brings a breaking change for custom trace API implementations. Some interfaces (`TracerProvider`, `Tracer`, `Span`) now embed the `go.opentelemetry.io/otel/trace/embedded` types. Implementors need to update their implementations based on what they want the default behavior to be. See the \"API Implementations\" section of the [trace API] package documentation for more information about how to accomplish this. Add `go.opentelemetry.io/otel/bridge/opencensus.InstallTraceBridge`, which installs the OpenCensus trace bridge, and replaces `opencensus.NewTracer`. (#4567) Add scope version to trace and metric bridges in `go.opentelemetry.io/otel/bridge/opencensus`. (#4584) Add the `go.opentelemetry.io/otel/trace/embedded` package to be embedded in the exported trace API interfaces. (#4620) Add the `go.opentelemetry.io/otel/trace/noop` package as a default no-op implementation of the trace API. (#4620) Add context propagation in `go.opentelemetry.io/otel/example/dice`. (#4644) Add view configuration to `go.opentelemetry.io/otel/example/prometheus`. (#4649) Add `go.opentelemetry.io/otel/metric.WithExplicitBucketBoundaries`, which allows defining default explicit bucket boundaries when creating histogram instruments. (#4603) Add `Version` function in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4660) Add `Version` function in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4660) Add Summary, SummaryDataPoint, and QuantileValue to `go.opentelemetry.io/sdk/metric/metricdata`. (#4622) `go.opentelemetry.io/otel/bridge/opencensus.NewMetricProducer` now supports exemplars from OpenCensus. (#4585) Add support for `WithExplicitBucketBoundaries` in `go.opentelemetry.io/otel/sdk/metric`. (#4605) Add support for Summary metrics in `go.opentelemetry.io/otel/bridge/opencensus`. (#4668) Deprecate `go.opentelemetry.io/otel/bridge/opencensus.NewTracer` in favor of `opencensus.InstallTraceBridge`. (#4567) Deprecate `go.opentelemetry.io/otel/example/fib` package is in favor of `go.opentelemetry.io/otel/example/dice`. (#4618) Deprecate `go.opentelemetry.io/otel/trace.NewNoopTracerProvider`. Use the added `NewTracerProvider` function in `go.opentelemetry.io/otel/trace/noop` instead. (#4620) Deprecate `go.opentelemetry.io/otel/example/view` package in favor of `go.opentelemetry.io/otel/example/prometheus`. (#4649) Deprecate `go.opentelemetry.io/otel/exporters/otlp/otlpmetric`. (#4693) `go.opentelemetry.io/otel/bridge/opencensus.NewMetricProducer` returns a `*MetricProducer` struct instead of the metric.Producer interface. (#4583) The `TracerProvider` in `go.opentelemetry.io/otel/trace` now embeds the `go.opentelemetry.io/otel/trace/embedded.TracerProvider` type. This extends the `TracerProvider` interface and is is a breaking change for any existing implementation. Implementors need to update their implementations based on what they want the default behavior of the interface to be. 
See the \"API Implementations\" section of the `go.opentelemetry.io/otel/trace` package documentation for more information about how to accomplish this. (#4620) The `Tracer` in `go.opentelemetry.io/otel/trace` now embeds the `go.opentelemetry.io/otel/trace/embedded.Tracer` type. This extends the `Tracer` interface and is is a breaking change for any existing implementation. Implementors need to update their implementations based on what they want the default behavior of the interface to be. See the \"API Implementations\" section of the `go.opentelemetry.io/otel/trace` package documentation for more information about how to accomplish this. (#4620) The `Span` in `go.opentelemetry.io/otel/trace` now embeds the `go.opentelemetry.io/otel/trace/embedded.Span` type. This extends the `Span` interface and is is a breaking change for any existing implementation. Implementors need to update their implementations based on what they want the default behavior of the interface to be. See the \"API Implementations\" section of the `go.opentelemetry.io/otel/trace` package documentation for more information about how to accomplish this. (#4620) `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` does no longer depend on `go.opentelemetry.io/otel/exporters/otlp/otlpmetric`. (#4660) `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` does no longer depend on `go.opentelemetry.io/otel/exporters/otlp/otlpmetric`. (#4660) Retry for `502 Bad Gateway` and `504 Gateway Timeout` HTTP statuses in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4670) Retry for `502 Bad Gateway` and `504 Gateway Timeout` HTTP statuses in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`. (#4670) Retry for `RESOURCE_EXHAUSTED` only if RetryInfo is returned in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4669) Retry for `RESOURCE_EXHAUSTED` only if RetryInfo is returned in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc`. (#4669) Retry temporary HTTP request failures in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4679) Retry temporary HTTP request failures in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`. (#4679) Fix improper parsing of characters such us `+`, `/` by `Parse` in `go.opentelemetry.io/otel/baggage` as they were rendered as a whitespace. (#4667) Fix improper parsing of characters such us `+`, `/` passed via `OTELRESOURCEATTRIBUTES` in `go.opentelemetry.io/otel/sdk/resource` as they were rendered as a"
},
{
"data": "(#4699) Fix improper parsing of characters such us `+`, `/` passed via `OTELEXPORTEROTLPHEADERS` and `OTELEXPORTEROTLPMETRICS_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` as they were rendered as a whitespace. (#4699) Fix improper parsing of characters such us `+`, `/` passed via `OTELEXPORTEROTLPHEADERS` and `OTELEXPORTEROTLPMETRICS_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` as they were rendered as a whitespace. (#4699) Fix improper parsing of characters such us `+`, `/` passed via `OTELEXPORTEROTLPHEADERS` and `OTELEXPORTEROTLPTRACES_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlptracegrpc` as they were rendered as a whitespace. (#4699) Fix improper parsing of characters such us `+`, `/` passed via `OTELEXPORTEROTLPHEADERS` and `OTELEXPORTEROTLPTRACES_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlptracehttp` as they were rendered as a whitespace. (#4699) In `go.opentelemetry.op/otel/exporters/prometheus`, the exporter no longer `Collect`s metrics after `Shutdown` is invoked. (#4648) Fix documentation for `WithCompressor` in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc`. (#4695) Fix documentation for `WithCompressor` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4695) This release contains the first stable release of the OpenTelemetry Go [metric SDK]. Our project stability guarantees now apply to the `go.opentelemetry.io/otel/sdk/metric` package. See our for more information about these stability guarantees. Add the \"Roll the dice\" getting started application example in `go.opentelemetry.io/otel/example/dice`. (#4539) The `WithWriter` and `WithPrettyPrint` options to `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` to set a custom `io.Writer`, and allow displaying the output in human-readable JSON. (#4507) Allow '/' characters in metric instrument names. (#4501) The exporter in `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` does not prettify its output by default anymore. (#4507) Upgrade `gopkg.io/yaml` from `v2` to `v3` in `go.opentelemetry.io/otel/schema`. (#4535) In `go.opentelemetry.op/otel/exporters/prometheus`, don't try to create the Prometheus metric on every `Collect` if we know the scope is invalid. (#4499) Remove `\"go.opentelemetry.io/otel/bridge/opencensus\".NewMetricExporter`, which is replaced by `NewMetricProducer`. (#4566) This is a release candidate for the v1.19.0/v0.42.0 release. That release is expected to include the `v1` release of the OpenTelemetry Go metric SDK and will provide stability guarantees of that SDK. See our for more information about these stability guarantees. Allow '/' characters in metric instrument names. (#4501) In `go.opentelemetry.op/otel/exporters/prometheus`, don't try to create the prometheus metric on every `Collect` if we know the scope is invalid. (#4499) This release drops the compatibility guarantee of [Go 1.19]. Add `WithProducer` option in `go.opentelemetry.op/otel/exporters/prometheus` to restore the ability to register producers on the prometheus exporter's manual reader. (#4473) Add `IgnoreValue` option in `go.opentelemetry.io/otel/sdk/metric/metricdata/metricdatatest` to allow ignoring values when comparing metrics. (#4447) Use a `TestingT` interface instead of `*testing.T` struct in `go.opentelemetry.io/otel/sdk/metric/metricdata/metricdatatest`. 
(#4483) The `NewMetricExporter` in `go.opentelemetry.io/otel/bridge/opencensus` was deprecated in `v0.35.0` (#3541). The deprecation notice format for the function has been corrected to trigger Go documentation and build tooling. (#4470) Removed the deprecated `go.opentelemetry.io/otel/exporters/jaeger` package. (#4467) Removed the deprecated `go.opentelemetry.io/otel/example/jaeger` package. (#4467) Removed the deprecated `go.opentelemetry.io/otel/sdk/metric/aggregation` package. (#4468) Removed the deprecated internal packages in `go.opentelemetry.io/otel/exporters/otlp` and its sub-packages. (#4469) Dropped guaranteed support for versions of Go less than 1.20. (#4481) Export the `ManualReader` struct in `go.opentelemetry.io/otel/sdk/metric`. (#4244) Export the `PeriodicReader` struct in `go.opentelemetry.io/otel/sdk/metric`. (#4244) Add support for exponential histogram aggregations. A histogram can be configured as an exponential histogram using a view with `\"go.opentelemetry.io/otel/sdk/metric\".ExponentialHistogram` as the aggregation. (#4245) Export the `Exporter` struct in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4272) Export the `Exporter` struct in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4272) The exporters in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` now support the `OTELEXPORTEROTLPMETRICSTEMPORALITY_PREFERENCE` environment variable. (#4287) Add `WithoutCounterSuffixes` option in `go.opentelemetry.io/otel/exporters/prometheus` to disable addition of `_total` suffixes. (#4306) Add info and debug logging to the metric SDK in `go.opentelemetry.io/otel/sdk/metric`. (#4315) The `go.opentelemetry.io/otel/semconv/v1.21.0` package. The package contains semantic conventions from the `v1.21.0` version of the OpenTelemetry Semantic Conventions. (#4362) Accept 201 to 299 HTTP status as success in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` and `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`. (#4365) Document the `Temporality` and `Aggregation` methods of the `\"go.opentelemetry.io/otel/sdk/metric\".Exporter\"` need to be concurrent safe. (#4381) Expand the set of units supported by the Prometheus exporter, and don't add unit suffixes if they are already present in"
},
{
"data": "(#4374) Move the `Aggregation` interface and its implementations from `go.opentelemetry.io/otel/sdk/metric/aggregation` to `go.opentelemetry.io/otel/sdk/metric`. (#4435) The exporters in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` now support the `OTELEXPORTEROTLPMETRICSDEFAULTHISTOGRAMAGGREGATION` environment variable. (#4437) Add the `NewAllowKeysFilter` and `NewDenyKeysFilter` functions to `go.opentelemetry.io/otel/attribute` to allow convenient creation of allow-keys and deny-keys filters. (#4444) Support Go 1.21. (#4463) Starting from `v1.21.0` of semantic conventions, `go.opentelemetry.io/otel/semconv/{version}/httpconv` and `go.opentelemetry.io/otel/semconv/{version}/netconv` packages will no longer be published. (#4145) Log duplicate instrument conflict at a warning level instead of info in `go.opentelemetry.io/otel/sdk/metric`. (#4202) Return an error on the creation of new instruments in `go.opentelemetry.io/otel/sdk/metric` if their name doesn't pass regexp validation. (#4210) `NewManualReader` in `go.opentelemetry.io/otel/sdk/metric` returns `*ManualReader` instead of `Reader`. (#4244) `NewPeriodicReader` in `go.opentelemetry.io/otel/sdk/metric` returns `*PeriodicReader` instead of `Reader`. (#4244) Count the Collect time in the `PeriodicReader` timeout in `go.opentelemetry.io/otel/sdk/metric`. (#4221) The function `New` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` returns `*Exporter` instead of `\"go.opentelemetry.io/otel/sdk/metric\".Exporter`. (#4272) The function `New` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` returns `*Exporter` instead of `\"go.opentelemetry.io/otel/sdk/metric\".Exporter`. (#4272) If an attribute set is omitted from an async callback, the previous value will no longer be exported in `go.opentelemetry.io/otel/sdk/metric`. (#4290) If an attribute set is observed multiple times in an async callback in `go.opentelemetry.io/otel/sdk/metric`, the values will be summed instead of the last observation winning. (#4289) Allow the explicit bucket histogram aggregation to be used for the up-down counter, observable counter, observable up-down counter, and observable gauge in the `go.opentelemetry.io/otel/sdk/metric` package. (#4332) Restrict `Meter`s in `go.opentelemetry.io/otel/sdk/metric` to only register and collect instruments it created. (#4333) `PeriodicReader.Shutdown` and `PeriodicReader.ForceFlush` in `go.opentelemetry.io/otel/sdk/metric` now apply the periodic reader's timeout to the operation if the user provided context does not contain a deadline. (#4356, #4377) Upgrade all use of `go.opentelemetry.io/otel/semconv` to use `v1.21.0`. (#4408) Increase instrument name maximum length from 63 to 255 characters in `go.opentelemetry.io/otel/sdk/metric`. (#4434) Add `go.opentelemetry.op/otel/sdk/metric.WithProducer` as an `Option` for `\"go.opentelemetry.io/otel/sdk/metric\".NewManualReader` and `\"go.opentelemetry.io/otel/sdk/metric\".NewPeriodicReader`. (#4346) Remove `Reader.RegisterProducer` in `go.opentelemetry.io/otel/metric`. Use the added `WithProducer` option instead. (#4346) Remove `Reader.ForceFlush` in `go.opentelemetry.io/otel/metric`. Notice that `PeriodicReader.ForceFlush` is still available. (#4375) Correctly format log messages from the `go.opentelemetry.io/otel/exporters/zipkin` exporter. (#4143) Log an error for calls to `NewView` in `go.opentelemetry.io/otel/sdk/metric` that have empty criteria. 
(#4307) Fix `\"go.opentelemetry.io/otel/sdk/resource\".WithHostID()` to not set an empty `host.id`. (#4317) Use the instrument identifying fields to cache aggregators and determine duplicate instrument registrations in `go.opentelemetry.io/otel/sdk/metric`. (#4337) Detect duplicate instruments for case-insensitive names in `go.opentelemetry.io/otel/sdk/metric`. (#4338) The `ManualReader` will not panic if `AggregationSelector` returns `nil` in `go.opentelemetry.io/otel/sdk/metric`. (#4350) If a `Reader`'s `AggregationSelector` returns `nil` or `DefaultAggregation` the pipeline will use the default aggregation. (#4350) Log a suggested view that fixes instrument conflicts in `go.opentelemetry.io/otel/sdk/metric`. (#4349) Fix possible panic, deadlock and race condition in batch span processor in `go.opentelemetry.io/otel/sdk/trace`. (#4353) Improve context cancellation handling in batch span processor's `ForceFlush` in `go.opentelemetry.io/otel/sdk/trace`. (#4369) Decouple `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal` from `go.opentelemetry.io/otel/exporters/otlp/internal` using gotmpl. (#4397, #3846) Decouple `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc/internal` from `go.opentelemetry.io/otel/exporters/otlp/internal` and `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal` using gotmpl. (#4404, #3846) Decouple `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp/internal` from `go.opentelemetry.io/otel/exporters/otlp/internal` and `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal` using gotmpl. (#4407, #3846) Decouple `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal` from `go.opentelemetry.io/otel/exporters/otlp/internal` and `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal` using gotmpl. (#4400, #3846) Decouple `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp/internal` from `go.opentelemetry.io/otel/exporters/otlp/internal` and `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal` using gotmpl. (#4401, #3846) Do not block the metric SDK when OTLP metric exports are blocked in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` and `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#3925, #4395) Do not append `_total` if the counter already has that suffix for the Prometheus exproter in `go.opentelemetry.io/otel/exporter/prometheus`. (#4373) Fix resource detection data race in `go.opentelemetry.io/otel/sdk/resource`. (#4409) Use the first-seen instrument name during instrument name conflicts in `go.opentelemetry.io/otel/sdk/metric`. (#4428) The `go.opentelemetry.io/otel/exporters/jaeger` package is deprecated. OpenTelemetry dropped support for Jaeger exporter in July 2023. Use `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp` or `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc` instead. (#4423) The `go.opentelemetry.io/otel/example/jaeger` package is deprecated. (#4423) The `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal` package is deprecated. (#4420) The `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal/oconf` package is deprecated. (#4420) The `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal/otest` package is deprecated. (#4420) The `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal/transform` package is deprecated. (#4420) The `go.opentelemetry.io/otel/exporters/otlp/internal` package is"
},
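The entry above adds `NewAllowKeysFilter` and `NewDenyKeysFilter` to `go.opentelemetry.io/otel/attribute`. Below is a minimal sketch of applying such a filter to an attribute set; the attribute keys and values are illustrative only.

```
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	// Build a set containing one attribute we want to keep and one we do not.
	set := attribute.NewSet(
		attribute.String("http.method", "GET"),
		attribute.String("user.email", "someone@example.com"), // illustrative, to be dropped
	)

	// Keep only explicitly allowed keys.
	allow := attribute.NewAllowKeysFilter("http.method")
	kept, dropped := set.Filter(allow)

	fmt.Println(kept.Encoded(attribute.DefaultEncoder())) // http.method=GET
	fmt.Println(len(dropped))                             // 1
}
```

`NewDenyKeysFilter` works the same way but inverts the decision: the listed keys are dropped and everything else is kept.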
{
"data": "(#4421) The `go.opentelemetry.io/otel/exporters/otlp/internal/envconfig` package is deprecated. (#4421) The `go.opentelemetry.io/otel/exporters/otlp/internal/retry` package is deprecated. (#4421) The `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal` package is deprecated. (#4425) The `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/envconfig` package is deprecated. (#4425) The `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/otlpconfig` package is deprecated. (#4425) The `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/otlptracetest` package is deprecated. (#4425) The `go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/retry` package is deprecated. (#4425) The `go.opentelemetry.io/otel/sdk/metric/aggregation` package is deprecated. Use the aggregation types added to `go.opentelemetry.io/otel/sdk/metric` instead. (#4435) This release contains the first stable release of the OpenTelemetry Go [metric API]. Our project stability guarantees now apply to the `go.opentelemetry.io/otel/metric` package. See our for more information about these stability guarantees. The `go.opentelemetry.io/otel/semconv/v1.19.0` package. The package contains semantic conventions from the `v1.19.0` version of the OpenTelemetry specification. (#3848) The `go.opentelemetry.io/otel/semconv/v1.20.0` package. The package contains semantic conventions from the `v1.20.0` version of the OpenTelemetry specification. (#4078) The Exponential Histogram data types in `go.opentelemetry.io/otel/sdk/metric/metricdata`. (#4165) OTLP metrics exporter now supports the Exponential Histogram Data Type. (#4222) Fix serialization of `time.Time` zero values in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` and `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` packages. (#4271) Use `strings.Cut()` instead of `string.SplitN()` for better readability and memory use. (#4049) `MeterProvider` returns noop meters once it has been shutdown. (#4154) The deprecated `go.opentelemetry.io/otel/metric/instrument` package is removed. Use `go.opentelemetry.io/otel/metric` instead. (#4055) Fix build for BSD based systems in `go.opentelemetry.io/otel/sdk/resource`. (#4077) This is a release candidate for the v1.16.0/v0.39.0 release. That release is expected to include the `v1` release of the OpenTelemetry Go metric API and will provide stability guarantees of that API. See our for more information about these stability guarantees. Support global `MeterProvider` in `go.opentelemetry.io/otel`. (#4039) Use `Meter` for a `metric.Meter` from the global `metric.MeterProvider`. Use `GetMeterProivder` for a global `metric.MeterProvider`. Use `SetMeterProivder` to set the global `metric.MeterProvider`. Move the `go.opentelemetry.io/otel/metric` module to the `stable-v1` module set. This stages the metric API to be released as a stable module. (#4038) The `go.opentelemetry.io/otel/metric/global` package is removed. Use `go.opentelemetry.io/otel` instead. (#4039) Remove unused imports from `sdk/resource/hostidbsd.go` which caused build failures. (#4040, #4041) The `go.opentelemetry.io/otel/metric/embedded` package. (#3916) The `Version` function to `go.opentelemetry.io/otel/sdk` to return the SDK version. (#3949) Add a `WithNamespace` option to `go.opentelemetry.io/otel/exporters/prometheus` to allow users to prefix metrics with a namespace. 
(#3970) The following configuration types were added to `go.opentelemetry.io/otel/metric/instrument` to be used in the configuration of measurement methods. (#3971) The `AddConfig` used to hold configuration for addition measurements `NewAddConfig` used to create a new `AddConfig` `AddOption` used to configure an `AddConfig` The `RecordConfig` used to hold configuration for recorded measurements `NewRecordConfig` used to create a new `RecordConfig` `RecordOption` used to configure a `RecordConfig` The `ObserveConfig` used to hold configuration for observed measurements `NewObserveConfig` used to create a new `ObserveConfig` `ObserveOption` used to configure an `ObserveConfig` `WithAttributeSet` and `WithAttributes` are added to `go.opentelemetry.io/otel/metric/instrument`. They return an option used during a measurement that defines the attribute Set associated with the measurement. (#3971) The `Version` function to `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` to return the OTLP metrics client version. (#3956) The `Version` function to `go.opentelemetry.io/otel/exporters/otlp/otlptrace` to return the OTLP trace client version. (#3956) The `Extrema` in `go.opentelemetry.io/otel/sdk/metric/metricdata` is redefined with a generic argument of `[N int64 | float64]`. (#3870) Update all exported interfaces from `go.opentelemetry.io/otel/metric` to embed their corresponding interface from `go.opentelemetry.io/otel/metric/embedded`. This adds an implementation requirement to set the interface default behavior for unimplemented methods. (#3916) Move No-Op implementation from `go.opentelemetry.io/otel/metric` into its own package `go.opentelemetry.io/otel/metric/noop`. (#3941) `metric.NewNoopMeterProvider` is replaced with `noop.NewMeterProvider` Add all the methods from `\"go.opentelemetry.io/otel/trace\".SpanContext` to `bridgeSpanContext` by embedding `otel.SpanContext` in `bridgeSpanContext`. (#3966) Wrap `UploadMetrics` error in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/` to improve error message when encountering generic grpc errors. (#3974) The measurement methods for all instruments in `go.opentelemetry.io/otel/metric/instrument` accept an option instead of the variadic `\"go.opentelemetry.io/otel/attribute\".KeyValue`. (#3971) The `Int64Counter.Add` method now accepts `...AddOption` The"
},
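Several entries above describe the now-stable metric API and the global `MeterProvider` helpers in `go.opentelemetry.io/otel` (`SetMeterProvider`, `GetMeterProvider`, `Meter`). A small sketch of registering an SDK provider globally and creating an instrument from it follows; the scope and instrument names are made up for the example.

```
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// Register an SDK MeterProvider as the global provider.
	// A Reader/exporter would normally be attached via sdkmetric.WithReader.
	mp := sdkmetric.NewMeterProvider()
	defer func() { _ = mp.Shutdown(ctx) }()
	otel.SetMeterProvider(mp)

	// Elsewhere in the program, obtain a Meter from the global provider.
	meter := otel.Meter("example/instrumentation")

	counter, err := meter.Int64Counter("requests.total")
	if err != nil {
		panic(err)
	}
	counter.Add(ctx, 1)
}
```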
{
"data": "method now accepts `...AddOption` The `Int64UpDownCounter.Add` method now accepts `...AddOption` The `Float64UpDownCounter.Add` method now accepts `...AddOption` The `Int64Histogram.Record` method now accepts `...RecordOption` The `Float64Histogram.Record` method now accepts `...RecordOption` The `Int64Observer.Observe` method now accepts `...ObserveOption` The `Float64Observer.Observe` method now accepts `...ObserveOption` The `Observer` methods in `go.opentelemetry.io/otel/metric` accept an option instead of the variadic `\"go.opentelemetry.io/otel/attribute\".KeyValue`. (#3971) The `Observer.ObserveInt64` method now accepts `...ObserveOption` The `Observer.ObserveFloat64` method now accepts `...ObserveOption` Move global metric back to `go.opentelemetry.io/otel/metric/global` from `go.opentelemetry.io/otel`. (#3986) `TracerProvider` allows calling `Tracer()` while it's shutting down. It used to deadlock. (#3924) Use the SDK version for the Telemetry SDK resource detector in `go.opentelemetry.io/otel/sdk/resource`. (#3949) Fix a data race in `SpanProcessor` returned by `NewSimpleSpanProcessor` in `go.opentelemetry.io/otel/sdk/trace`. (#3951) Automatically figure out the default aggregation with `aggregation.Default`. (#3967) The `go.opentelemetry.io/otel/metric/instrument` package is deprecated. Use the equivalent types added to `go.opentelemetry.io/otel/metric` instead. (#4018) This is a release candidate for the v1.15.0/v0.38.0 release. That release will include the `v1` release of the OpenTelemetry Go metric API and will provide stability guarantees of that API. See our for more information about these stability guarantees. The `WithHostID` option to `go.opentelemetry.io/otel/sdk/resource`. (#3812) The `WithoutTimestamps` option to `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` to sets all timestamps to zero. (#3828) The new `Exemplar` type is added to `go.opentelemetry.io/otel/sdk/metric/metricdata`. Both the `DataPoint` and `HistogramDataPoint` types from that package have a new field of `Exemplars` containing the sampled exemplars for their timeseries. (#3849) Configuration for each metric instrument in `go.opentelemetry.io/otel/sdk/metric/instrument`. (#3895) The internal logging introduces a warning level verbosity equal to `V(1)`. (#3900) Added a log message warning about usage of `SimpleSpanProcessor` in production environments. (#3854) Optimize memory allocation when creation a new `Set` using `NewSet` or `NewSetWithFiltered` in `go.opentelemetry.io/otel/attribute`. (#3832) Optimize memory allocation when creation new metric instruments in `go.opentelemetry.io/otel/sdk/metric`. (#3832) Avoid creating new objects on all calls to `WithDeferredSetup` and `SkipContextSetup` in OpenTracing bridge. (#3833) The `New` and `Detect` functions from `go.opentelemetry.io/otel/sdk/resource` return errors that wrap underlying errors instead of just containing the underlying error strings. (#3844) Both the `Histogram` and `HistogramDataPoint` are redefined with a generic argument of `[N int64 | float64]` in `go.opentelemetry.io/otel/sdk/metric/metricdata`. (#3849) The metric `Export` interface from `go.opentelemetry.io/otel/sdk/metric` accepts a `*ResourceMetrics` instead of `ResourceMetrics`. (#3853) Rename `Asynchronous` to `Observable` in `go.opentelemetry.io/otel/metric/instrument`. (#3892) Rename `Int64ObserverOption` to `Int64ObservableOption` in `go.opentelemetry.io/otel/metric/instrument`. 
(#3895) Rename `Float64ObserverOption` to `Float64ObservableOption` in `go.opentelemetry.io/otel/metric/instrument`. (#3895) The internal logging changes the verbosity level of info to `V(4)`, the verbosity level of debug to `V(8)`. (#3900) `TracerProvider` consistently doesn't allow a `SpanProcessor` to be registered after shutdown. (#3845) The deprecated `go.opentelemetry.io/otel/metric/global` package is removed. (#3829) The unneeded `Synchronous` interface in `go.opentelemetry.io/otel/metric/instrument` was removed. (#3892) The `Float64ObserverConfig` and `NewFloat64ObserverConfig` in `go.opentelemetry.io/otel/sdk/metric/instrument`. Use the added `float64` instrument configuration instead. (#3895) The `Int64ObserverConfig` and `NewInt64ObserverConfig` in `go.opentelemetry.io/otel/sdk/metric/instrument`. Use the added `int64` instrument configuration instead. (#3895) The `NewNoopMeter` function in `go.opentelemetry.io/otel/metric`, use `NewMeterProvider().Meter(\"\")` instead. (#3893) This is a release candidate for the v1.15.0/v0.38.0 release. That release will include the `v1` release of the OpenTelemetry Go metric API and will provide stability guarantees of that API. See our versioning policy for more information about these stability guarantees. This release drops the compatibility guarantee of [Go 1.18]. Support global `MeterProvider` in `go.opentelemetry.io/otel`. (#3818) Use `Meter` for a `metric.Meter` from the global `metric.MeterProvider`. Use `GetMeterProvider` for a global `metric.MeterProvider`. Use `SetMeterProvider` to set the global `metric.MeterProvider`. Dropped compatibility testing for [Go 1.18]. The project no longer guarantees support for this version of Go. (#3813) Handle empty environment variables as if they were not set. (#3764) Clarify the `httpconv` and `netconv` packages in `go.opentelemetry.io/otel/semconv/*` provide tracing semantic conventions. (#3823) Fix race conditions in `go.opentelemetry.io/otel/exporters/metric/prometheus` that could cause a panic.
},
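The measurement methods listed above take option values such as `metric.WithAttributes` instead of raw `attribute.KeyValue` variadics. A brief sketch follows; it uses a no-op meter purely to keep the example self-contained, and the attribute keys are illustrative.

```
package main

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
	"go.opentelemetry.io/otel/metric/noop"
)

func main() {
	ctx := context.Background()

	// A no-op meter keeps the sketch self-contained; a real SDK meter would be used in practice.
	meter := noop.NewMeterProvider().Meter("example")

	hist, _ := meter.Float64Histogram("request.duration", metric.WithUnit("ms"))

	// Attributes are now passed as measurement options.
	hist.Record(ctx, 12.3,
		metric.WithAttributes(
			attribute.String("http.route", "/items"), // illustrative attribute
			attribute.Int("http.status_code", 200),
		),
	)
}
```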
{
"data": "(#3899) Fix sending nil `scopeInfo` to metrics channel in `go.opentelemetry.io/otel/exporters/metric/prometheus` that could cause a panic in `github.com/prometheus/client_golang/prometheus`. (#3899) The `go.opentelemetry.io/otel/metric/global` package is deprecated. Use `go.opentelemetry.io/otel` instead. (#3818) The deprecated `go.opentelemetry.io/otel/metric/unit` package is removed. (#3814) This release is the last to support [Go 1.18]. The next release will require at least [Go 1.19]. The `event` type semantic conventions are added to `go.opentelemetry.io/otel/semconv/v1.17.0`. (#3697) Support [Go 1.20]. (#3693) The `go.opentelemetry.io/otel/semconv/v1.18.0` package. The package contains semantic conventions from the `v1.18.0` version of the OpenTelemetry specification. (#3719) The following `const` renames from `go.opentelemetry.io/otel/semconv/v1.17.0` are included: `OtelScopeNameKey` -> `OTelScopeNameKey` `OtelScopeVersionKey` -> `OTelScopeVersionKey` `OtelLibraryNameKey` -> `OTelLibraryNameKey` `OtelLibraryVersionKey` -> `OTelLibraryVersionKey` `OtelStatusCodeKey` -> `OTelStatusCodeKey` `OtelStatusDescriptionKey` -> `OTelStatusDescriptionKey` `OtelStatusCodeOk` -> `OTelStatusCodeOk` `OtelStatusCodeError` -> `OTelStatusCodeError` The following `func` renames from `go.opentelemetry.io/otel/semconv/v1.17.0` are included: `OtelScopeName` -> `OTelScopeName` `OtelScopeVersion` -> `OTelScopeVersion` `OtelLibraryName` -> `OTelLibraryName` `OtelLibraryVersion` -> `OTelLibraryVersion` `OtelStatusDescription` -> `OTelStatusDescription` A `IsSampled` method is added to the `SpanContext` implementation in `go.opentelemetry.io/otel/bridge/opentracing` to expose the span sampled state. See the for more information. (#3570) The `WithInstrumentationAttributes` option to `go.opentelemetry.io/otel/metric`. (#3738) The `WithInstrumentationAttributes` option to `go.opentelemetry.io/otel/trace`. (#3739) The following environment variables are supported by the periodic `Reader` in `go.opentelemetry.io/otel/sdk/metric`. (#3763) `OTELMETRICEXPORT_INTERVAL` sets the time between collections and exports. `OTELMETRICEXPORT_TIMEOUT` sets the timeout an export is attempted. Fall-back to `TextMapCarrier` when it's not `HttpHeader`s in `go.opentelemetry.io/otel/bridge/opentracing`. (#3679) The `Collect` method of the `\"go.opentelemetry.io/otel/sdk/metric\".Reader` interface is updated to accept the `metricdata.ResourceMetrics` value the collection will be made into. This change is made to enable memory reuse by SDK users. (#3732) The `WithUnit` option in `go.opentelemetry.io/otel/sdk/metric/instrument` is updated to accept a `string` for the unit value. (#3776) Ensure `go.opentelemetry.io/otel` does not use generics. (#3723, #3725) Multi-reader `MeterProvider`s now export metrics for all readers, instead of just the first reader. (#3720, #3724) Remove use of deprecated `\"math/rand\".Seed` in `go.opentelemetry.io/otel/example/prometheus`. (#3733) Do not silently drop unknown schema data with `Parse` in `go.opentelemetry.io/otel/schema/v1.1`. (#3743) Data race issue in OTLP exporter retry mechanism. (#3755, #3756) Wrapping empty errors when exporting in `go.opentelemetry.io/otel/sdk/metric`. (#3698, #3772) Incorrect \"all\" and \"resource\" definition for schema files in `go.opentelemetry.io/otel/schema/v1.1`. (#3777) The `go.opentelemetry.io/otel/metric/unit` package is deprecated. Use the equivalent unit string instead. 
(#3776) Use `\"1\"` instead of `unit.Dimensionless` Use `\"By\"` instead of `unit.Bytes` Use `\"ms\"` instead of `unit.Milliseconds` Attribute `KeyValue` creation functions to `go.opentelemetry.io/otel/semconv/v1.17.0` for all non-enum semantic conventions. These functions ensure semantic convention type correctness. (#3675) Removed the `http.target` attribute from being added by `ServerRequest` in the following packages. (#3687) `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv` `go.opentelemetry.io/otel/semconv/v1.14.0/httpconv` `go.opentelemetry.io/otel/semconv/v1.15.0/httpconv` `go.opentelemetry.io/otel/semconv/v1.16.0/httpconv` `go.opentelemetry.io/otel/semconv/v1.17.0/httpconv` The deprecated `go.opentelemetry.io/otel/metric/instrument/asyncfloat64` package is removed. (#3631) The deprecated `go.opentelemetry.io/otel/metric/instrument/asyncint64` package is removed. (#3631) The deprecated `go.opentelemetry.io/otel/metric/instrument/syncfloat64` package is removed. (#3631) The deprecated `go.opentelemetry.io/otel/metric/instrument/syncint64` package is removed. (#3631) The `WithInt64Callback` option to `go.opentelemetry.io/otel/metric/instrument`. This option is used to configure `int64` Observer callbacks during their creation. (#3507) The `WithFloat64Callback` option to `go.opentelemetry.io/otel/metric/instrument`. This option is used to configure `float64` Observer callbacks during their creation. (#3507) The `Producer` interface and `Reader.RegisterProducer(Producer)` to `go.opentelemetry.io/otel/sdk/metric`. These additions are used to enable external metric Producers. (#3524) The `Callback` function type to `go.opentelemetry.io/otel/metric`. This new named function type is registered with a `Meter`. (#3564) The `go.opentelemetry.io/otel/semconv/v1.13.0` package. The package contains semantic conventions from the `v1.13.0` version of the OpenTelemetry specification. (#3499) The `EndUserAttributesFromHTTPRequest` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is merged into `ClientRequest` and `ServerRequest` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`. The `HTTPAttributesFromHTTPStatusCode` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is merged into `ClientResponse` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`. The `HTTPClientAttributesFromHTTPRequest` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is replaced by `ClientRequest` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`. The `HTTPServerAttributesFromHTTPRequest` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is replaced by `ServerRequest` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`. The `HTTPServerMetricAttributesFromHTTPRequest` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is replaced by `ServerRequest` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`. The `NetAttributesFromHTTPRequest` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is split into `Transport` in `go.opentelemetry.io/otel/semconv/v1.13.0/netconv` and `ClientRequest` or `ServerRequest` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`.
},
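The entries above add `OTEL_METRIC_EXPORT_INTERVAL` and `OTEL_METRIC_EXPORT_TIMEOUT` support to the periodic `Reader`. The sketch below shows the equivalent in-code configuration; the stdout exporter and the chosen durations are assumptions made for the example.

```
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/exporters/stdout/stdoutmetric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	exp, err := stdoutmetric.New()
	if err != nil {
		panic(err)
	}

	// OTEL_METRIC_EXPORT_INTERVAL / OTEL_METRIC_EXPORT_TIMEOUT configure the same
	// settings from the environment; the options below set them in code instead.
	reader := sdkmetric.NewPeriodicReader(exp,
		sdkmetric.WithInterval(30*time.Second),
		sdkmetric.WithTimeout(5*time.Second),
	)

	mp := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))
	defer func() { _ = mp.Shutdown(ctx) }()
}
```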
{
"data": "The `SpanStatusFromHTTPStatusCode` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is replaced by `ClientStatus` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`. The `SpanStatusFromHTTPStatusCodeAndSpanKind` function in `go.opentelemetry.io/otel/semconv/v1.12.0` is split into `ClientStatus` and `ServerStatus` in `go.opentelemetry.io/otel/semconv/v1.13.0/httpconv`. The `Client` function is included in `go.opentelemetry.io/otel/semconv/v1.13.0/netconv` to generate attributes for a `net.Conn`. The `Server` function is included in `go.opentelemetry.io/otel/semconv/v1.13.0/netconv` to generate attributes for a `net.Listener`. The `go.opentelemetry.io/otel/semconv/v1.14.0` package. The package contains semantic conventions from the `v1.14.0` version of the OpenTelemetry specification. (#3566) The `go.opentelemetry.io/otel/semconv/v1.15.0` package. The package contains semantic conventions from the `v1.15.0` version of the OpenTelemetry specification. (#3578) The `go.opentelemetry.io/otel/semconv/v1.16.0` package. The package contains semantic conventions from the `v1.16.0` version of the OpenTelemetry specification. (#3579) Metric instruments to `go.opentelemetry.io/otel/metric/instrument`. These instruments are use as replacements of the deprecated `go.opentelemetry.io/otel/metric/instrument/{asyncfloat64,asyncint64,syncfloat64,syncint64}` packages.(#3575, #3586) `Float64ObservableCounter` replaces the `asyncfloat64.Counter` `Float64ObservableUpDownCounter` replaces the `asyncfloat64.UpDownCounter` `Float64ObservableGauge` replaces the `asyncfloat64.Gauge` `Int64ObservableCounter` replaces the `asyncint64.Counter` `Int64ObservableUpDownCounter` replaces the `asyncint64.UpDownCounter` `Int64ObservableGauge` replaces the `asyncint64.Gauge` `Float64Counter` replaces the `syncfloat64.Counter` `Float64UpDownCounter` replaces the `syncfloat64.UpDownCounter` `Float64Histogram` replaces the `syncfloat64.Histogram` `Int64Counter` replaces the `syncint64.Counter` `Int64UpDownCounter` replaces the `syncint64.UpDownCounter` `Int64Histogram` replaces the `syncint64.Histogram` `NewTracerProvider` to `go.opentelemetry.io/otel/bridge/opentracing`. This is used to create `WrapperTracer` instances from a `TracerProvider`. (#3116) The `Extrema` type to `go.opentelemetry.io/otel/sdk/metric/metricdata`. This type is used to represent min/max values and still be able to distinguish unset and zero values. (#3487) The `go.opentelemetry.io/otel/semconv/v1.17.0` package. The package contains semantic conventions from the `v1.17.0` version of the OpenTelemetry specification. (#3599) Jaeger and Zipkin exporter use `github.com/go-logr/logr` as the logging interface, and add the `WithLogr` option. (#3497, #3500) Instrument configuration in `go.opentelemetry.io/otel/metric/instrument` is split into specific options and configuration based on the instrument type. (#3507) Use the added `Int64Option` type to configure instruments from `go.opentelemetry.io/otel/metric/instrument/syncint64`. Use the added `Float64Option` type to configure instruments from `go.opentelemetry.io/otel/metric/instrument/syncfloat64`. Use the added `Int64ObserverOption` type to configure instruments from `go.opentelemetry.io/otel/metric/instrument/asyncint64`. Use the added `Float64ObserverOption` type to configure instruments from `go.opentelemetry.io/otel/metric/instrument/asyncfloat64`. Return a `Registration` from the `RegisterCallback` method of a `Meter` in the `go.opentelemetry.io/otel/metric` package. 
This `Registration` can be used to unregister callbacks. (#3522) Global error handler uses an atomic value instead of a mutex. (#3543) Add `NewMetricProducer` to `go.opentelemetry.io/otel/bridge/opencensus`, which can be used to pass OpenCensus metrics to an OpenTelemetry Reader. (#3541) Global logger uses an atomic value instead of a mutex. (#3545) The `Shutdown` method of the `\"go.opentelemetry.io/otel/sdk/trace\".TracerProvider` releases all computational resources when called the first time. (#3551) The `Sampler` returned from `TraceIDRatioBased` `go.opentelemetry.io/otel/sdk/trace` now uses the rightmost bits for sampling decisions. This fixes random sampling when using ID generators like `xray.IDGenerator` and increasing parity with other language implementations. (#3557) Errors from `go.opentelemetry.io/otel/exporters/otlp/otlptrace` exporters are wrapped in errors identifying their signal name. Existing users of the exporters attempting to identify specific errors will need to use `errors.Unwrap()` to get the underlying error. (#3516) Exporters from `go.opentelemetry.io/otel/exporters/otlp` will print the final retryable error message when attempts to retry time out. (#3514) The instrument kind names in `go.opentelemetry.io/otel/sdk/metric` are updated to match the API. (#3562) `InstrumentKindSyncCounter` is renamed to `InstrumentKindCounter` `InstrumentKindSyncUpDownCounter` is renamed to `InstrumentKindUpDownCounter` `InstrumentKindSyncHistogram` is renamed to `InstrumentKindHistogram` `InstrumentKindAsyncCounter` is renamed to `InstrumentKindObservableCounter` `InstrumentKindAsyncUpDownCounter` is renamed to `InstrumentKindObservableUpDownCounter` `InstrumentKindAsyncGauge` is renamed to `InstrumentKindObservableGauge` The `RegisterCallback` method of the `Meter` in `go.opentelemetry.io/otel/metric` changed. The named `Callback` replaces the inline function parameter. (#3564) `Callback` is required to return an error. (#3576) `Callback` accepts the added `Observer` parameter added. This new parameter is used by `Callback` implementations to observe values for asynchronous instruments instead of calling the `Observe` method of the instrument directly. (#3584) The slice of `instrument.Asynchronous` is now passed as a variadic argument. (#3587) The exporter from `go.opentelemetry.io/otel/exporters/zipkin` is updated to use the `v1.16.0` version of semantic"
},
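The record above notes that `Meter.RegisterCallback` now returns a `Registration` that can be used to unregister callbacks. A minimal sketch, with a hypothetical observable gauge name:

```
package main

import (
	"context"

	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	meter := sdkmetric.NewMeterProvider().Meter("example")

	gauge, err := meter.Int64ObservableGauge("queue.length") // hypothetical instrument
	if err != nil {
		panic(err)
	}

	// RegisterCallback returns a Registration that can later be unregistered.
	reg, err := meter.RegisterCallback(
		func(ctx context.Context, o metric.Observer) error {
			o.ObserveInt64(gauge, 42)
			return nil
		},
		gauge,
	)
	if err != nil {
		panic(err)
	}
	defer func() { _ = reg.Unregister() }()
}
```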
{
"data": "This means it no longer uses the removed `net.peer.ip` or `http.host` attributes to determine the remote endpoint. Instead it uses the `net.sock.peer` attributes. (#3581) The `Min` and `Max` fields of the `HistogramDataPoint` in `go.opentelemetry.io/otel/sdk/metric/metricdata` are now defined with the added `Extrema` type instead of a `*float64`. (#3487) Asynchronous instruments that use sum aggregators and attribute filters correctly add values from equivalent attribute sets that have been filtered. (#3439, #3549) The `RegisterCallback` method of the `Meter` from `go.opentelemetry.io/otel/sdk/metric` only registers a callback for instruments created by that meter. Trying to register a callback with instruments from a different meter will result in an error being returned. (#3584) The `NewMetricExporter` in `go.opentelemetry.io/otel/bridge/opencensus` is deprecated. Use `NewMetricProducer` instead. (#3541) The `go.opentelemetry.io/otel/metric/instrument/asyncfloat64` package is deprecated. Use the instruments from `go.opentelemetry.io/otel/metric/instrument` instead. (#3575) The `go.opentelemetry.io/otel/metric/instrument/asyncint64` package is deprecated. Use the instruments from `go.opentelemetry.io/otel/metric/instrument` instead. (#3575) The `go.opentelemetry.io/otel/metric/instrument/syncfloat64` package is deprecated. Use the instruments from `go.opentelemetry.io/otel/metric/instrument` instead. (#3575) The `go.opentelemetry.io/otel/metric/instrument/syncint64` package is deprecated. Use the instruments from `go.opentelemetry.io/otel/metric/instrument` instead. (#3575) The `NewWrappedTracerProvider` in `go.opentelemetry.io/otel/bridge/opentracing` is now deprecated. Use `NewTracerProvider` instead. (#3116) The deprecated `go.opentelemetry.io/otel/sdk/metric/view` package is removed. (#3520) The `InstrumentProvider` from `go.opentelemetry.io/otel/sdk/metric/asyncint64` is removed. Use the new creation methods of the `Meter` in `go.opentelemetry.io/otel/sdk/metric` instead. (#3530) The `Counter` method is replaced by `Meter.Int64ObservableCounter` The `UpDownCounter` method is replaced by `Meter.Int64ObservableUpDownCounter` The `Gauge` method is replaced by `Meter.Int64ObservableGauge` The `InstrumentProvider` from `go.opentelemetry.io/otel/sdk/metric/asyncfloat64` is removed. Use the new creation methods of the `Meter` in `go.opentelemetry.io/otel/sdk/metric` instead. (#3530) The `Counter` method is replaced by `Meter.Float64ObservableCounter` The `UpDownCounter` method is replaced by `Meter.Float64ObservableUpDownCounter` The `Gauge` method is replaced by `Meter.Float64ObservableGauge` The `InstrumentProvider` from `go.opentelemetry.io/otel/sdk/metric/syncint64` is removed. Use the new creation methods of the `Meter` in `go.opentelemetry.io/otel/sdk/metric` instead. (#3530) The `Counter` method is replaced by `Meter.Int64Counter` The `UpDownCounter` method is replaced by `Meter.Int64UpDownCounter` The `Histogram` method is replaced by `Meter.Int64Histogram` The `InstrumentProvider` from `go.opentelemetry.io/otel/sdk/metric/syncfloat64` is removed. Use the new creation methods of the `Meter` in `go.opentelemetry.io/otel/sdk/metric` instead. (#3530) The `Counter` method is replaced by `Meter.Float64Counter` The `UpDownCounter` method is replaced by `Meter.Float64UpDownCounter` The `Histogram` method is replaced by `Meter.Float64Histogram` The `WithView` `Option` is added to the `go.opentelemetry.io/otel/sdk/metric` package. 
This option is used to configure the view(s) a `MeterProvider` will use for all `Reader`s that are registered with it. (#3387) Add Instrumentation Scope and Version as info metric and label in Prometheus exporter. This can be disabled using the `WithoutScopeInfo()` option added to that package. (#3273, #3357) OTLP exporters now recognize: (#3363) `OTEL_EXPORTER_OTLP_INSECURE` `OTEL_EXPORTER_OTLP_TRACES_INSECURE` `OTEL_EXPORTER_OTLP_METRICS_INSECURE` `OTEL_EXPORTER_OTLP_CLIENT_KEY` `OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY` `OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY` `OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE` `OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE` `OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE` The `View` type and related `NewView` function to create a view according to the OpenTelemetry specification are added to `go.opentelemetry.io/otel/sdk/metric`. These additions are replacements for the `View` type and `New` function from `go.opentelemetry.io/otel/sdk/metric/view`. (#3459) The `Instrument` and `InstrumentKind` type are added to `go.opentelemetry.io/otel/sdk/metric`. These additions are replacements for the `Instrument` and `InstrumentKind` types from `go.opentelemetry.io/otel/sdk/metric/view`. (#3459) The `Stream` type is added to `go.opentelemetry.io/otel/sdk/metric` to define a metric data stream a view will produce. (#3459) The `AssertHasAttributes` allows instrument authors to test that datapoints returned have appropriate attributes. (#3487) The `\"go.opentelemetry.io/otel/sdk/metric\".WithReader` option no longer accepts views to associate with the `Reader`. Instead, views are now registered directly with the `MeterProvider` via the new `WithView` option. The views registered with the `MeterProvider` apply to all `Reader`s. (#3387) The `Temporality(view.InstrumentKind) metricdata.Temporality` and `Aggregation(view.InstrumentKind) aggregation.Aggregation` methods are added to the `\"go.opentelemetry.io/otel/sdk/metric\".Exporter` interface. (#3260) The `Temporality(view.InstrumentKind) metricdata.Temporality` and `Aggregation(view.InstrumentKind) aggregation.Aggregation` methods are added to the `\"go.opentelemetry.io/otel/exporters/otlp/otlpmetric\".Client` interface. (#3260) The `WithTemporalitySelector` and `WithAggregationSelector` `ReaderOption`s have been changed to `ManualReaderOption`s in the `go.opentelemetry.io/otel/sdk/metric` package. (#3260) The periodic reader in the `go.opentelemetry.io/otel/sdk/metric`
},
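The record above introduces the `View`, `Instrument`, and `Stream` types and the `WithView` option on the `MeterProvider`. A sketch of a view that renames an instrument's stream and sets explicit histogram buckets, written against recent SDK versions where the aggregation types live in `go.opentelemetry.io/otel/sdk/metric` (per the entry at the top of this changelog); the instrument names and bucket boundaries are illustrative:

```
package main

import (
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	// Rename a matching instrument's output stream and set explicit histogram buckets.
	view := sdkmetric.NewView(
		sdkmetric.Instrument{Name: "request.duration"},
		sdkmetric.Stream{
			Name: "http.request.duration",
			Aggregation: sdkmetric.AggregationExplicitBucketHistogram{
				Boundaries: []float64{10, 50, 100, 250, 500, 1000},
			},
		},
	)

	reader := sdkmetric.NewManualReader()
	_ = sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(reader),
		sdkmetric.WithView(view), // views are registered with the MeterProvider, not the Reader
	)
}
```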
{
"data": "package now uses the temporality and aggregation selectors from its configured exporter instead of accepting them as options. (#3260) The `go.opentelemetry.io/otel/exporters/prometheus` exporter fixes duplicated `_total` suffixes. (#3369) Remove comparable requirement for `Reader`s. (#3387) Cumulative metrics from the OpenCensus bridge (`go.opentelemetry.io/otel/bridge/opencensus`) are defined as monotonic sums, instead of non-monotonic. (#3389) Asynchronous counters (`Counter` and `UpDownCounter`) from the metric SDK now produce delta sums when configured with delta temporality. (#3398) Exported `Status` codes in the `go.opentelemetry.io/otel/exporters/zipkin` exporter are now exported as all upper case values. (#3340) `Aggregation`s from `go.opentelemetry.io/otel/sdk/metric` with no data are not exported. (#3394, #3436) Re-enabled Attribute Filters in the Metric SDK. (#3396) Asynchronous callbacks are only called if they are registered with at least one instrument that does not use drop aggragation. (#3408) Do not report empty partial-success responses in the `go.opentelemetry.io/otel/exporters/otlp` exporters. (#3438, #3432) Handle partial success responses in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` exporters. (#3162, #3440) Prevent duplicate Prometheus description, unit, and type. (#3469) Prevents panic when using incorrect `attribute.Value.As[Type]Slice()`. (#3489) The `go.opentelemetry.io/otel/exporters/otlp/otlpmetric.Client` interface is removed. (#3486) The `go.opentelemetry.io/otel/exporters/otlp/otlpmetric.New` function is removed. Use the `otlpmetric[http|grpc].New` directly. (#3486) The `go.opentelemetry.io/otel/sdk/metric/view` package is deprecated. Use `Instrument`, `InstrumentKind`, `View`, and `NewView` in `go.opentelemetry.io/otel/sdk/metric` instead. (#3476) The Prometheus exporter in `go.opentelemetry.io/otel/exporters/prometheus` registers with a Prometheus registerer on creation. By default, it will register with the default Prometheus registerer. A non-default registerer can be used by passing the `WithRegisterer` option. (#3239) Added the `WithAggregationSelector` option to the `go.opentelemetry.io/otel/exporters/prometheus` package to change the default `AggregationSelector` used. (#3341) The Prometheus exporter in `go.opentelemetry.io/otel/exporters/prometheus` converts the `Resource` associated with metric exports into a `target_info` metric. (#3285) The `\"go.opentelemetry.io/otel/exporters/prometheus\".New` function is updated to return an error. It will return an error if the exporter fails to register with Prometheus. (#3239) The URL-encoded values from the `OTELRESOURCEATTRIBUTES` environment variable are decoded. (#2963) The `baggage.NewMember` function decodes the `value` parameter instead of directly using it. This fixes the implementation to be compliant with the W3C specification. (#3226) Slice attributes of the `attribute` package are now comparable based on their value, not instance. (#3108 #3252) The `Shutdown` and `ForceFlush` methods of the `\"go.opentelemetry.io/otel/sdk/trace\".TraceProvider` no longer return an error when no processor is registered. (#3268) The Prometheus exporter in `go.opentelemetry.io/otel/exporters/prometheus` cumulatively sums histogram buckets. (#3281) The sum of each histogram data point is now uniquely exported by the `go.opentelemetry.io/otel/exporters/otlpmetric` exporters. 
(#3284, #3293) Recorded values for asynchronous counters (`Counter` and `UpDownCounter`) are interpreted as exact, not incremental, sum values by the metric SDK. (#3350, #3278) `UpDownCounters` are now correctly output as Prometheus gauges in the `go.opentelemetry.io/otel/exporters/prometheus` exporter. (#3358) The Prometheus exporter in `go.opentelemetry.io/otel/exporters/prometheus` no longer describes the metrics it will send to Prometheus on startup. Instead the exporter is defined as an \"unchecked\" collector for Prometheus. This fixes the `reader is not registered` warning currently emitted on startup. (#3291 #3342) The `go.opentelemetry.io/otel/exporters/prometheus` exporter now correctly adds `_total` suffixes to counter metrics. (#3360) The `go.opentelemetry.io/otel/exporters/prometheus` exporter now adds a unit suffix to metric names. This can be disabled using the `WithoutUnits()` option added to that package. (#3352) Add default User-Agent header to OTLP exporter requests (`go.opentelemetry.io/otel/exporters/otlptrace/otlptracegrpc` and `go.opentelemetry.io/otel/exporters/otlptrace/otlptracehttp`). (#3261) `span.SetStatus` has been updated such that calls that lower the status are now no-ops. (#3214) Upgrade `golang.org/x/sys/unix` from `v0.0.0-20210423185535-09eb48e85fd7` to `v0.0.0-20220919091848-fb04ddd9f9c8`. This addresses . (#3235) Added an example of using metric views to customize instruments. (#3177) Add default User-Agent header to OTLP exporter requests (`go.opentelemetry.io/otel/exporters/otlpmetric/otlpmetricgrpc` and `go.opentelemetry.io/otel/exporters/otlpmetric/otlpmetrichttp`). (#3261) Flush pending measurements with the `PeriodicReader` in the `go.opentelemetry.io/otel/sdk/metric` when `ForceFlush` or `Shutdown` are called. (#3220) Update histogram default bounds to match the requirements of the latest specification. (#3222) Encode the HTTP status code in the OpenTracing bridge (`go.opentelemetry.io/otel/bridge/opentracing`) as an"
},
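Several entries above concern the `go.opentelemetry.io/otel/exporters/prometheus` exporter: `New` now returns an error, and `WithRegisterer` selects a non-default registerer. A sketch of wiring it up; the registry, port, and handler path are assumptions made for the example:

```
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"

	otelprom "go.opentelemetry.io/otel/exporters/prometheus"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	// A dedicated registry keeps the example isolated; by default the exporter
	// registers with prometheus.DefaultRegisterer.
	registry := prometheus.NewRegistry()

	exporter, err := otelprom.New(otelprom.WithRegisterer(registry))
	if err != nil {
		panic(err) // New returns an error if registration with Prometheus fails.
	}

	// The exporter acts as a Reader for the MeterProvider.
	_ = sdkmetric.NewMeterProvider(sdkmetric.WithReader(exporter))

	http.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
	_ = http.ListenAndServe(":2222", nil) // illustrative address
}
```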
{
"data": "(#3265) Use default view if instrument does not match any registered view of a reader. (#3224, #3237) Return the same instrument every time a user makes the exact same instrument creation call. (#3229, #3251) Return the existing instrument when a view transforms a creation call to match an existing instrument. (#3240, #3251) Log a warning when a conflicting instrument (e.g. description, unit, data-type) is created instead of returning an error. (#3251) The OpenCensus bridge no longer sends empty batches of metrics. (#3263) The Prometheus exporter sanitizes OpenTelemetry instrument names when exporting. Invalid characters are replaced with `_`. (#3212) The metric portion of the OpenCensus bridge (`go.opentelemetry.io/otel/bridge/opencensus`) has been reintroduced. (#3192) The OpenCensus bridge example (`go.opentelemetry.io/otel/example/opencensus`) has been reintroduced. (#3206) Updated go.mods to point to valid versions of the sdk. (#3216) Set the `MeterProvider` resource on all exported metric data. (#3218) The metric SDK in `go.opentelemetry.io/otel/sdk/metric` is completely refactored to comply with the OpenTelemetry specification. Please see the package documentation for how the new SDK is initialized and configured. (#3175) Update the minimum supported go version to go1.18. Removes support for go1.17 (#3179) The metric portion of the OpenCensus bridge (`go.opentelemetry.io/otel/bridge/opencensus`) has been removed. A new bridge compliant with the revised metric SDK will be added back in a future release. (#3175) The `go.opentelemetry.io/otel/sdk/metric/aggregator/aggregatortest` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/aggregator/histogram` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/aggregator/lastvalue` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/aggregator/sum` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/aggregator` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/controller/basic` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/controller/controllertest` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/controller/time` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/export/aggregation` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/export` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/metrictest` package is removed. A replacement package that supports the new metric SDK will be added back in a future release. (#3175) The `go.opentelemetry.io/otel/sdk/metric/number` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/processor/basic` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/processor/processortest` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/processor/reducer` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/registry` package is removed, see the new metric SDK. (#3175) The `go.opentelemetry.io/otel/sdk/metric/sdkapi` package is removed, see the new metric SDK. 
(#3175) The `go.opentelemetry.io/otel/sdk/metric/selector/simple` package is removed, see the new metric SDK. (#3175) The `\"go.opentelemetry.io/otel/sdk/metric\".ErrUninitializedInstrument` variable was removed. (#3175) The `\"go.opentelemetry.io/otel/sdk/metric\".ErrBadInstrument` variable was removed. (#3175) The `\"go.opentelemetry.io/otel/sdk/metric\".Accumulator` type was removed, see the `MeterProvider`in the new metric SDK. (#3175) The `\"go.opentelemetry.io/otel/sdk/metric\".NewAccumulator` function was removed, see `NewMeterProvider`in the new metric SDK. (#3175) The deprecated `\"go.opentelemetry.io/otel/sdk/metric\".AtomicFieldOffsets` function was removed. (#3175) Support Go 1.19. (#3077) Include compatibility testing and document support. (#3077) Support the OTLP ExportTracePartialSuccess response; these are passed to the registered error handler. (#3106) Upgrade go.opentelemetry.io/proto/otlp from v0.18.0 to v0.19.0 (#3107) Fix misidentification of OpenTelemetry `SpanKind` in OpenTracing bridge (`go.opentelemetry.io/otel/bridge/opentracing`). (#3096) Attempting to start a span with a nil `context` will no longer cause a panic. (#3110) All exporters will be shutdown even if one reports an error (#3091) Ensure valid UTF-8 when truncating over-length attribute values. (#3156) Add support for Schema Files format 1.1.x (metric \"split\" transform) with the new `go.opentelemetry.io/otel/schema/v1.1` package. (#2999) Add the `go.opentelemetry.io/otel/semconv/v1.11.0` package. The package contains semantic conventions from the `v1.11.0` version of the OpenTelemetry specification. (#3009) Add the `go.opentelemetry.io/otel/semconv/v1.12.0`"
},
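One of the entries above notes that the `MeterProvider` resource is now set on all exported metric data. A brief sketch of attaching a resource when constructing the provider; the attribute values are illustrative:

```
package main

import (
	"go.opentelemetry.io/otel/attribute"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// Describe the entity producing the telemetry; these values are made up.
	res := resource.NewSchemaless(
		attribute.String("service.name", "example-service"),
		attribute.String("deployment.environment", "staging"),
	)

	// Every metric exported through this provider's readers carries the resource above.
	_ = sdkmetric.NewMeterProvider(
		sdkmetric.WithResource(res),
		sdkmetric.WithReader(sdkmetric.NewManualReader()),
	)
}
```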
{
"data": "The package contains semantic conventions from the `v1.12.0` version of the OpenTelemetry specification. (#3010) Add the `http.method` attribute to HTTP server metric from all `go.opentelemetry.io/otel/semconv/*` packages. (#3018) Invalid warning for context setup being deferred in `go.opentelemetry.io/otel/bridge/opentracing` package. (#3029) Add support for `opentracing.TextMap` format in the `Inject` and `Extract` methods of the `\"go.opentelemetry.io/otel/bridge/opentracing\".BridgeTracer` type. (#2911) The `crosslink` make target has been updated to use the `go.opentelemetry.io/build-tools/crosslink` package. (#2886) In the `go.opentelemetry.io/otel/sdk/instrumentation` package rename `Library` to `Scope` and alias `Library` as `Scope` (#2976) Move metric no-op implementation form `nonrecording` to `metric` package. (#2866) Support for go1.16. Support is now only for go1.17 and go1.18 (#2917) The `Library` struct in the `go.opentelemetry.io/otel/sdk/instrumentation` package is deprecated. Use the equivalent `Scope` struct instead. (#2977) The `ReadOnlySpan.InstrumentationLibrary` method from the `go.opentelemetry.io/otel/sdk/trace` package is deprecated. Use the equivalent `ReadOnlySpan.InstrumentationScope` method instead. (#2977) Add the `go.opentelemetry.io/otel/semconv/v1.8.0` package. The package contains semantic conventions from the `v1.8.0` version of the OpenTelemetry specification. (#2763) Add the `go.opentelemetry.io/otel/semconv/v1.9.0` package. The package contains semantic conventions from the `v1.9.0` version of the OpenTelemetry specification. (#2792) Add the `go.opentelemetry.io/otel/semconv/v1.10.0` package. The package contains semantic conventions from the `v1.10.0` version of the OpenTelemetry specification. (#2842) Added an in-memory exporter to metrictest to aid testing with a full SDK. (#2776) Globally delegated instruments are unwrapped before delegating asynchronous callbacks. (#2784) Remove import of `testing` package in non-tests builds of the `go.opentelemetry.io/otel` package. (#2786) The `WithLabelEncoder` option from the `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` package is renamed to `WithAttributeEncoder`. (#2790) The `LabelFilterSelector` interface from `go.opentelemetry.io/otel/sdk/metric/processor/reducer` is renamed to `AttributeFilterSelector`. The method included in the renamed interface also changed from `LabelFilterFor` to `AttributeFilterFor`. (#2790) The `Metadata.Labels` method from the `go.opentelemetry.io/otel/sdk/metric/export` package is renamed to `Metadata.Attributes`. Consequentially, the `Record` type from the same package also has had the embedded method renamed. (#2790) The `Iterator.Label` method in the `go.opentelemetry.io/otel/attribute` package is deprecated. Use the equivalent `Iterator.Attribute` method instead. (#2790) The `Iterator.IndexedLabel` method in the `go.opentelemetry.io/otel/attribute` package is deprecated. Use the equivalent `Iterator.IndexedAttribute` method instead. (#2790) The `MergeIterator.Label` method in the `go.opentelemetry.io/otel/attribute` package is deprecated. Use the equivalent `MergeIterator.Attribute` method instead. (#2790) Removed the `Batch` type from the `go.opentelemetry.io/otel/sdk/metric/metrictest` package. (#2864) Removed the `Measurement` type from the `go.opentelemetry.io/otel/sdk/metric/metrictest` package. (#2864) The metrics global package was added back into several test files. 
(#2764) The `Meter` function is added back to the `go.opentelemetry.io/otel/metric/global` package. This function is a convenience function equivalent to calling `global.MeterProvider().Meter(...)`. (#2750) Removed module the `go.opentelemetry.io/otel/sdk/export/metric`. Use the `go.opentelemetry.io/otel/sdk/metric` module instead. (#2720) Don't panic anymore when setting a global MeterProvider to itself. (#2749) Upgrade `go.opentelemetry.io/proto/otlp` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` from `v0.12.1` to `v0.15.0`. This replaces the use of the now deprecated `InstrumentationLibrary` and `InstrumentationLibraryMetrics` types and fields in the proto library with the equivalent `InstrumentationScope` and `ScopeMetrics`. (#2748) Allow non-comparable global `MeterProvider`, `TracerProvider`, and `TextMapPropagator` types to be set. (#2772, #2773) Don't panic anymore when setting a global TracerProvider or TextMapPropagator to itself. (#2749) Upgrade `go.opentelemetry.io/proto/otlp` in `go.opentelemetry.io/otel/exporters/otlp/otlptrace` from `v0.12.1` to `v0.15.0`. This replaces the use of the now deprecated `InstrumentationLibrary` and `InstrumentationLibrarySpans` types and fields in the proto library with the equivalent `InstrumentationScope` and `ScopeSpans`. (#2748) The `go.opentelemetry.io/otel/schema/*` packages now use the correct schema URL for their `SchemaURL` constant. Instead of using `\"https://opentelemetry.io/schemas/v<version>\"` they now use the correct URL without a `v` prefix, `\"https://opentelemetry.io/schemas/<version>\"`. (#2743, #2744) Upgrade `go.opentelemetry.io/proto/otlp` from `v0.12.0` to `v0.12.1`. This includes an indirect upgrade of `github.com/grpc-ecosystem/grpc-gateway` which resolves from `gopkg.in/yaml.v2` in version `v2.2.3`. (#2724, #2728) This update is a breaking change of the unstable Metrics API. Code instrumented with the `go.opentelemetry.io/otel/metric` will need to be modified. Add metrics exponential histogram"
},
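The record above deprecates `Iterator.Label` and `Iterator.IndexedLabel` in favor of `Iterator.Attribute` and `Iterator.IndexedAttribute`. A short sketch of iterating an attribute set with the replacement method; the keys are illustrative:

```
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	set := attribute.NewSet(
		attribute.String("service.name", "example"), // illustrative keys
		attribute.Int("retry.count", 2),
	)

	// Iterator.Attribute replaces the deprecated Iterator.Label.
	iter := set.Iter()
	for iter.Next() {
		kv := iter.Attribute()
		fmt.Printf("%s=%s\n", kv.Key, kv.Value.Emit())
	}
}
```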
{
"data": "New mapping functions have been made available in `sdk/metric/aggregator/exponential/mapping` for other OpenTelemetry projects to take dependencies on. (#2502) Add Go 1.18 to our compatibility tests. (#2679) Allow configuring the Sampler with the `OTELTRACESSAMPLER` and `OTELTRACESSAMPLER_ARG` environment variables. (#2305, #2517) Add the `metric/global` for obtaining and setting the global `MeterProvider`. (#2660) The metrics API has been significantly changed to match the revised OpenTelemetry specification. High-level changes include: Synchronous and asynchronous instruments are now handled by independent `InstrumentProvider`s. These `InstrumentProvider`s are managed with a `Meter`. Synchronous and asynchronous instruments are grouped into their own packages based on value types. Asynchronous callbacks can now be registered with a `Meter`. Be sure to check out the metric module documentation for more information on how to use the revised API. (#2587, #2660) Fallback to general attribute limits when span specific ones are not set in the environment. (#2675, #2677) Log the Exporters configuration in the TracerProviders message. (#2578) Added support to configure the span limits with environment variables. The following environment variables are supported. (#2606, #2637) `OTELSPANATTRIBUTEVALUELENGTH_LIMIT` `OTELSPANATTRIBUTECOUNTLIMIT` `OTELSPANEVENTCOUNTLIMIT` `OTELEVENTATTRIBUTECOUNTLIMIT` `OTELSPANLINKCOUNTLIMIT` `OTELLINKATTRIBUTECOUNTLIMIT` If the provided environment variables are invalid (negative), the default values would be used. Rename the `gc` runtime name to `go` (#2560) Add resource container ID detection. (#2418) Add span attribute value length limit. The new `AttributeValueLengthLimit` field is added to the `\"go.opentelemetry.io/otel/sdk/trace\".SpanLimits` type to configure this limit for a `TracerProvider`. The default limit for this resource is \"unlimited\". (#2637) Add the `WithRawSpanLimits` option to `go.opentelemetry.io/otel/sdk/trace`. This option replaces the `WithSpanLimits` option. Zero or negative values will not be changed to the default value like `WithSpanLimits` does. Setting a limit to zero will effectively disable the related resource it limits and setting to a negative value will mean that resource is unlimited. Consequentially, limits should be constructed using `NewSpanLimits` and updated accordingly. (#2637) Drop oldest tracestate `Member` when capacity is reached. (#2592) Add event and link drop counts to the exported data from the `oltptrace` exporter. (#2601) Unify path cleaning functionally in the `otlpmetric` and `otlptrace` configuration. (#2639) Change the debug message from the `sdk/trace.BatchSpanProcessor` to reflect the count is cumulative. (#2640) Introduce new internal `envconfig` package for OTLP exporters. (#2608) If `http.Request.Host` is empty, fall back to use `URL.Host` when populating `http.host` in the `semconv` packages. (#2661) Remove the OTLP trace exporter limit of SpanEvents when exporting. (#2616) Default to port `4318` instead of `4317` for the `otlpmetrichttp` and `otlptracehttp` client. (#2614, #2625) Unlimited span limits are now supported (negative values). (#2636, #2637) Deprecated `\"go.opentelemetry.io/otel/sdk/trace\".WithSpanLimits`. Use `WithRawSpanLimits` instead. That option allows setting unlimited and zero limits, this option does not. This option will be kept until the next major version incremented release. 
(#2637) Fix race condition in reading the dropped spans number for the `BatchSpanProcessor`. (#2615) Use `OTEL_EXPORTER_ZIPKIN_ENDPOINT` environment variable to specify zipkin collector endpoint. (#2490) Log the configuration of `TracerProvider`s, and `Tracer`s for debugging. To enable, use a logger with Verbosity (V level) `>=1`. (#2500) Added support to configure the batch span-processor with environment variables. The following environment variables are used. (#2515) `OTEL_BSP_SCHEDULE_DELAY` `OTEL_BSP_EXPORT_TIMEOUT` `OTEL_BSP_MAX_QUEUE_SIZE`. `OTEL_BSP_MAX_EXPORT_BATCH_SIZE` Zipkin exporter exports `Resource` attributes in the `Tags` field. (#2589) Deprecate the module `go.opentelemetry.io/otel/sdk/export/metric`. Use the `go.opentelemetry.io/otel/sdk/metric` module instead. (#2382) Deprecate `\"go.opentelemetry.io/otel/sdk/metric\".AtomicFieldOffsets`. (#2445) Fixed the instrument kind for noop async instruments to correctly report an implementation. (#2461) Fix UDP packets overflowing with Jaeger payloads. (#2489, #2512) Change the `otlpmetric.Client` interface's `UploadMetrics` method to accept a single `ResourceMetrics` instead of a slice of them. (#2491) Specify explicit buckets in Prometheus example, fixing issue where example only has `+inf` bucket. (#2419, #2493) W3C baggage will now decode URL-escaped values.
},
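The record above introduces span limit environment variables and the `WithRawSpanLimits` option. A sketch of the in-code equivalent; the specific limit values are illustrative:

```
package main

import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Start from the defaults, then tighten selected limits. Unlike WithSpanLimits,
	// WithRawSpanLimits passes zero and negative (unlimited) values through unchanged.
	limits := sdktrace.NewSpanLimits()
	limits.AttributeValueLengthLimit = 256
	limits.AttributeCountLimit = 64

	_ = sdktrace.NewTracerProvider(
		sdktrace.WithRawSpanLimits(limits),
	)
}
```

The `OTEL_SPAN_*` and `OTEL_LINK_*`/`OTEL_EVENT_*` environment variables listed above configure the same limits without code changes.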
{
"data": "(#2529) Baggage members are now only validated once, when calling `NewMember` and not also when adding it to the baggage itself. (#2522) The order attributes are dropped from spans in the `go.opentelemetry.io/otel/sdk/trace` package when capacity is reached is fixed to be in compliance with the OpenTelemetry specification. Instead of dropping the least-recently-used attribute, the last added attribute is dropped. This drop order still only applies to attributes with unique keys not already contained in the span. If an attribute is added with a key already contained in the span, that attribute is updated to the new value being added. (#2576) Updated `go.opentelemetry.io/proto/otlp` from `v0.11.0` to `v0.12.0`. This version removes a number of deprecated methods. (#2546) - We have updated the project minimum supported Go version to 1.16 Added an internal Logger. This can be used by the SDK and API to provide users with feedback of the internal state. To enable verbose logs configure the logger which will print V(1) logs. For debugging information configure to print V(5) logs. (#2343) Add the `WithRetry` `Option` and the `RetryConfig` type to the `go.opentelemetry.io/otel/exporter/otel/otlpmetric/otlpmetrichttp` package to specify retry behavior consistently. (#2425) Add `SpanStatusFromHTTPStatusCodeAndSpanKind` to all `semconv` packages to return a span status code similar to `SpanStatusFromHTTPStatusCode`, but exclude `4XX` HTTP errors as span errors if the span is of server kind. (#2296) The `\"go.opentelemetry.io/otel/exporter/otel/otlptrace/otlptracegrpc\".Client` now uses the underlying gRPC `ClientConn` to handle name resolution, TCP connection establishment (with retries and backoff) and TLS handshakes, and handling errors on established connections by re-resolving the name and reconnecting. (#2329) The `\"go.opentelemetry.io/otel/exporter/otel/otlpmetric/otlpmetricgrpc\".Client` now uses the underlying gRPC `ClientConn` to handle name resolution, TCP connection establishment (with retries and backoff) and TLS handshakes, and handling errors on established connections by re-resolving the name and reconnecting. (#2425) The `\"go.opentelemetry.io/otel/exporter/otel/otlpmetric/otlpmetricgrpc\".RetrySettings` type is renamed to `RetryConfig`. (#2425) The `go.opentelemetry.io/otel/exporter/otel/*` gRPC exporters now default to using the host's root CA set if none are provided by the user and `WithInsecure` is not specified. (#2432) Change `resource.Default` to be evaluated the first time it is called, rather than on import. This allows the caller the option to update `OTELRESOURCEATTRIBUTES` first, such as with `os.Setenv`. (#2371) The `go.opentelemetry.io/otel/exporter/otel/*` exporters are updated to handle per-signal and universal endpoints according to the OpenTelemetry specification. Any per-signal endpoint set via an `OTELEXPORTEROTLP<signal>ENDPOINT` environment variable is now used without modification of the path. When `OTELEXPORTEROTLP_ENDPOINT` is set, if it contains a path, that path is used as a base path which per-signal paths are appended to. (#2433) Basic metric controller updated to use sync.Map to avoid blocking calls (#2381) The `go.opentelemetry.io/otel/exporter/jaeger` correctly sets the `otel.status_code` value to be a string of `ERROR` or `OK` instead of an integer code. (#2439, #2440) Deprecated the `\"go.opentelemetry.io/otel/exporter/otel/otlpmetric/otlpmetrichttp\".WithMaxAttempts` `Option`, use the new `WithRetry` `Option` instead. 
(#2425) Deprecated the `\"go.opentelemetry.io/otel/exporter/otel/otlpmetric/otlpmetrichttp\".WithBackoff` `Option`, use the new `WithRetry` `Option` instead. (#2425) Remove the metric Processor's ability to convert cumulative to delta aggregation temporality. (#2350) Remove the metric Bound Instruments interface and implementations. (#2399) Remove the metric MinMaxSumCount kind aggregation and the corresponding OTLP export path. (#2423) Metric SDK removes the \"exact\" aggregator for histogram instruments, as it performed a non-standard aggregation for OTLP export (creating repeated Gauge points) and worked its way into a number of confusing examples. (#2348) Metric SDK `export.ExportKind`, `export.ExportKindSelector` types have been renamed to `aggregation.Temporality` and `aggregation.TemporalitySelector` respectively to keep in line with current specification and protocol along with built-in selectors (e.g., `aggregation.CumulativeTemporalitySelector`, ...). (#2274) The Metric `Exporter` interface now requires a `TemporalitySelector` method instead of an `ExportKindSelector`. (#2274) Metrics API"
},
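One entry above notes that `resource.Default` is now evaluated lazily, so `OTEL_RESOURCE_ATTRIBUTES` can be set before it is first used. A short sketch; the attribute values are illustrative:

```
package main

import (
	"fmt"
	"os"

	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// resource.Default is evaluated on first use, so the environment can be
	// prepared beforehand (the values here are made up for the example).
	os.Setenv("OTEL_RESOURCE_ATTRIBUTES", "service.name=checkout,deployment.environment=staging")

	res := resource.Default()
	fmt.Println(res.String())
}
```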
{
"data": "The `metric/sdkapi` package has been created to relocate the API-to-SDK interface: The following interface types simply moved from `metric` to `metric/sdkapi`: `Descriptor`, `MeterImpl`, `InstrumentImpl`, `SyncImpl`, `BoundSyncImpl`, `AsyncImpl`, `AsyncRunner`, `AsyncSingleRunner`, and `AsyncBatchRunner` The following struct types moved and are replaced with type aliases, since they are exposed to the user: `Observation`, `Measurement`. The No-op implementations of sync and async instruments are no longer exported, new functions `sdkapi.NewNoopAsyncInstrument()` and `sdkapi.NewNoopSyncInstrument()` are provided instead. (#2271) Update the SDK `BatchSpanProcessor` to export all queued spans when `ForceFlush` is called. (#2080, #2335) Add the `\"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc\".WithGRPCConn` option so the exporter can reuse an existing gRPC connection. (#2002) Added a new `schema` module to help parse Schema Files in OTEP 0152 format. (#2267) Added a new `MapCarrier` to the `go.opentelemetry.io/otel/propagation` package to hold propagated cross-cutting concerns as a `map[string]string` held in memory. (#2334) Add the `\"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc\".WithGRPCConn` option so the exporter can reuse an existing gRPC connection. (#2002) Add the `go.opentelemetry.io/otel/semconv/v1.7.0` package. The package contains semantic conventions from the `v1.7.0` version of the OpenTelemetry specification. (#2320) Add the `go.opentelemetry.io/otel/semconv/v1.6.1` package. The package contains semantic conventions from the `v1.6.1` version of the OpenTelemetry specification. (#2321) Add the `go.opentelemetry.io/otel/semconv/v1.5.0` package. The package contains semantic conventions from the `v1.5.0` version of the OpenTelemetry specification. (#2322) When upgrading from the `semconv/v1.4.0` package note the following name changes: `K8SReplicasetUIDKey` -> `K8SReplicaSetUIDKey` `K8SReplicasetNameKey` -> `K8SReplicaSetNameKey` `K8SStatefulsetUIDKey` -> `K8SStatefulSetUIDKey` `k8SStatefulsetNameKey` -> `K8SStatefulSetNameKey` `K8SDaemonsetUIDKey` -> `K8SDaemonSetUIDKey` `K8SDaemonsetNameKey` -> `K8SDaemonSetNameKey` Links added to a span will be dropped by the SDK if they contain an invalid span context (#2275). The `\"go.opentelemetry.io/otel/semconv/v1.4.0\".HTTPServerAttributesFromHTTPRequest` now correctly only sets the HTTP client IP attribute even if the connection was routed with proxies and there are multiple addresses in the `X-Forwarded-For` header. (#2282, #2284) The `\"go.opentelemetry.io/otel/semconv/v1.4.0\".NetAttributesFromHTTPRequest` function correctly handles IPv6 addresses as IP addresses and sets the correct net peer IP instead of the net peer hostname attribute. (#2283, #2285) The simple span processor shutdown method deterministically returns the exporter error status if it simultaneously finishes when the deadline is reached. (#2290, #2289) json stdout exporter no longer crashes due to concurrency bug. (#2265) NoopMeterProvider is now private and NewNoopMeterProvider must be used to obtain a noopMeterProvider. (#2237) The Metric SDK `Export()` function takes a new two-level reader interface for iterating over results one instrumentation library at a time. (#2197) The former `\"go.opentelemetry.io/otel/sdk/export/metric\".CheckpointSet` is renamed `Reader`. The new interface is named `\"go.opentelemetry.io/otel/sdk/export/metric\".InstrumentationLibraryReader`. 
This is the first stable release for the project. This release includes an API and SDK for the tracing signal that will comply with the stability guarantees defined by the projects . OTLP trace exporter now sets the `SchemaURL` field in the exported telemetry if the Tracer has `WithSchemaURL` option. (#2242) Slice-valued attributes can correctly be used as map keys. (#2223) Removed the `\"go.opentelemetry.io/otel/exporters/zipkin\".WithSDKOptions` function. (#2248) Removed the deprecated package `go.opentelemetry.io/otel/oteltest`. (#2234) Removed the deprecated package `go.opentelemetry.io/otel/bridge/opencensus/utils`. (#2233) Removed deprecated functions, types, and methods from `go.opentelemetry.io/otel/attribute` package. Use the typed functions and methods added to the package instead. (#2235) The `Key.Array` method is removed. The `Array` function is removed. The `Any` function is removed. The `ArrayValue` function is removed. The `AsArray` function is removed. Added `ErrorHandlerFunc` to use a function as an `\"go.opentelemetry.io/otel\".ErrorHandler`. (#2149) Added `\"go.opentelemetry.io/otel/trace\".WithStackTrace` option to add a stack trace when using `span.RecordError` or when panic is handled in `span.End`. (#2163) Added typed slice attribute types and functionality to the `go.opentelemetry.io/otel/attribute` package to replace the existing array type and functions. (#2162) `BoolSlice`, `IntSlice`, `Int64Slice`, `Float64Slice`, and `StringSlice` replace the use of the `Array` function in the package. Added the `go.opentelemetry.io/otel/example/fib` example package. Included is an example application that computes Fibonacci"
},
{
"data": "(#2203) Metric instruments have been renamed to match the (feature-frozen) metric API specification: ValueRecorder becomes Histogram ValueObserver becomes Gauge SumObserver becomes CounterObserver UpDownSumObserver becomes UpDownCounterObserver The API exported from this project is still considered experimental. (#2202) Metric SDK/API implementation type `InstrumentKind` moves into `sdkapi` sub-package. (#2091) The Metrics SDK export record no longer contains a Resource pointer, the SDK `\"go.opentelemetry.io/otel/sdk/trace/export/metric\".Exporter.Export()` function for push-based exporters now takes a single Resource argument, pull-based exporters use `\"go.opentelemetry.io/otel/sdk/metric/controller/basic\".Controller.Resource()`. (#2120) The JSON output of the `go.opentelemetry.io/otel/exporters/stdout/stdouttrace` is harmonized now such that the output is \"plain\" JSON objects after each other of the form `{ ... } { ... } { ... }`. Earlier the JSON objects describing a span were wrapped in a slice for each `Exporter.ExportSpans` call, like ``. Outputting JSON object directly after each other is consistent with JSON loggers, and a bit easier to parse and read. (#2196) Update the `NewTracerConfig`, `NewSpanStartConfig`, `NewSpanEndConfig`, and `NewEventConfig` function in the `go.opentelemetry.io/otel/trace` package to return their respective configurations as structs instead of pointers to the struct. (#2212) The `go.opentelemetry.io/otel/bridge/opencensus/utils` package is deprecated. All functionality from this package now exists in the `go.opentelemetry.io/otel/bridge/opencensus` package. The functions from that package should be used instead. (#2166) The `\"go.opentelemetry.io/otel/attribute\".Array` function and the related `ARRAY` value type is deprecated. Use the typed `*Slice` functions and types added to the package instead. (#2162) The `\"go.opentelemetry.io/otel/attribute\".Any` function is deprecated. Use the typed functions instead. (#2181) The `go.opentelemetry.io/otel/oteltest` package is deprecated. The `\"go.opentelemetry.io/otel/sdk/trace/tracetest\".SpanRecorder` can be registered with the default SDK (`go.opentelemetry.io/otel/sdk/trace`) as a `SpanProcessor` and used as a replacement for this deprecated package. (#2188) Removed metrics test package `go.opentelemetry.io/otel/sdk/export/metric/metrictest`. (#2105) The `fromEnv` detector no longer throws an error when `OTELRESOURCEATTRIBUTES` environment variable is not set or empty. (#2138) Setting the global `ErrorHandler` with `\"go.opentelemetry.io/otel\".SetErrorHandler` multiple times is now supported. (#2160, #2140) The `\"go.opentelemetry.io/otel/attribute\".Any` function now supports `int32` values. (#2169) Multiple calls to `\"go.opentelemetry.io/otel/sdk/metric/controller/basic\".WithResource()` are handled correctly, and when no resources are provided `\"go.opentelemetry.io/otel/sdk/resource\".Default()` is used. (#2120) The `WithoutTimestamps` option for the `go.opentelemetry.io/otel/exporters/stdout/stdouttrace` exporter causes the exporter to correctly omit timestamps. (#2195) Fixed typos in resources.go. (#2201) Added `WithOSDescription` resource configuration option to set OS (Operating System) description resource attribute (`os.description`). (#1840) Added `WithOS` resource configuration option to set all OS (Operating System) resource attributes at once. (#1840) Added the `WithRetry` option to the `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp` package. 
This option is a replacement for the removed `WithMaxAttempts` and `WithBackoff` options. (#2095) Added API `LinkFromContext` to return Link which encapsulates SpanContext from provided context and also encapsulates attributes. (#2115) Added a new `Link` type under the SDK `otel/sdk/trace` package that counts the number of attributes that were dropped for surpassing the `AttributePerLinkCountLimit` configured in the Span's `SpanLimits`. This new type replaces the equal-named API `Link` type found in the `otel/trace` package for most usages within the SDK. For example, instances of this type are now returned by the `Links()` function of `ReadOnlySpan`s provided in places like the `OnEnd` function of `SpanProcessor` implementations. (#2118) Added the `SpanRecorder` type to the `go.opentelemetry.io/otel/skd/trace/tracetest` package. This type can be used with the default SDK as a `SpanProcessor` during testing. (#2132) The `SpanModels` function is now exported from the `go.opentelemetry.io/otel/exporters/zipkin` package to convert OpenTelemetry spans into Zipkin model spans. (#2027) Rename the `\"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc\".RetrySettings` to `RetryConfig`. (#2095) The `TextMapCarrier` and `TextMapPropagator` from the `go.opentelemetry.io/otel/oteltest` package and their associated creation functions (`TextMapCarrier`, `NewTextMapPropagator`) are deprecated. (#2114) The `Harness` type from the `go.opentelemetry.io/otel/oteltest` package and its associated creation function, `NewHarness` are deprecated and will be removed in the next release. (#2123) The `TraceStateFromKeyValues` function from the `go.opentelemetry.io/otel/oteltest` package is deprecated. Use the `trace.ParseTraceState` function"
},
{
"data": "(#2122) Removed the deprecated package `go.opentelemetry.io/otel/exporters/trace/jaeger`. (#2020) Removed the deprecated package `go.opentelemetry.io/otel/exporters/trace/zipkin`. (#2020) Removed the `\"go.opentelemetry.io/otel/sdk/resource\".WithBuiltinDetectors` function. The explicit `With*` options for every built-in detector should be used instead. (#2026 #2097) Removed the `WithMaxAttempts` and `WithBackoff` options from the `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp` package. The retry logic of the package has been updated to match the `otlptracegrpc` package and accordingly a `WithRetry` option is added that should be used instead. (#2095) Removed `DroppedAttributeCount` field from `otel/trace.Link` struct. (#2118) When using WithNewRoot, don't use the parent context for making sampling decisions. (#2032) `oteltest.Tracer` now creates a valid `SpanContext` when using `WithNewRoot`. (#2073) OS type detector now sets the correct `dragonflybsd` value for DragonFly BSD. (#2092) The OTel span status is correctly transformed into the OTLP status in the `go.opentelemetry.io/otel/exporters/otlp/otlptrace` package. This fix will by default set the status to `Unset` if it is not explicitly set to `Ok` or `Error`. (#2099 #2102) The `Inject` method for the `\"go.opentelemetry.io/otel/propagation\".TraceContext` type no longer injects empty `tracestate` values. (#2108) Use `6831` as default Jaeger agent port instead of `6832`. (#2131) Adds HTTP support for OTLP metrics exporter. (#2022) Removed the deprecated package `go.opentelemetry.io/otel/exporters/metric/prometheus`. (#2020) With this release we are introducing a split in module versions. The tracing API and SDK are entering the `v1.0.0` Release Candidate phase with `v1.0.0-RC1` while the experimental metrics API and SDK continue with `v0.x` releases at `v0.21.0`. Modules at major version 1 or greater will not depend on modules with major version 0. Adds `otlpgrpc.WithRetry`option for configuring the retry policy for transient errors on the otlp/gRPC exporter. (#1832) The following status codes are defined as transient errors: | gRPC Status Code | Description | | - | -- | | 1 | Cancelled | | 4 | Deadline Exceeded | | 8 | Resource Exhausted | | 10 | Aborted | | 10 | Out of Range | | 14 | Unavailable | | 15 | Data Loss | Added `Status` type to the `go.opentelemetry.io/otel/sdk/trace` package to represent the status of a span. (#1874) Added `SpanStub` type and its associated functions to the `go.opentelemetry.io/otel/sdk/trace/tracetest` package. This type can be used as a testing replacement for the `SpanSnapshot` that was removed from the `go.opentelemetry.io/otel/sdk/trace` package. (#1873) Adds support for scheme in `OTELEXPORTEROTLP_ENDPOINT` according to the spec. (#1886) Adds `trace.WithSchemaURL` option for configuring the tracer with a Schema URL. (#1889) Added an example of using OpenTelemetry Go as a trace context forwarder. (#1912) `ParseTraceState` is added to the `go.opentelemetry.io/otel/trace` package. It can be used to decode a `TraceState` from a `tracestate` header string value. (#1937) Added `Len` method to the `TraceState` type in the `go.opentelemetry.io/otel/trace` package. This method returns the number of list-members the `TraceState` holds. (#1937) Creates package `go.opentelemetry.io/otel/exporters/otlp/otlptrace` that defines a trace exporter that uses a `otlptrace.Client` to send data. 
Creates package `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc` implementing a gRPC `otlptrace.Client` and offers convenience functions, `NewExportPipeline` and `InstallNewPipeline`, to setup and install a `otlptrace.Exporter` in tracing. (#1922) Added `Baggage`, `Member`, and `Property` types to the `go.opentelemetry.io/otel/baggage` package along with their related functions. (#1967) Added `ContextWithBaggage`, `ContextWithoutBaggage`, and `FromContext` functions to the `go.opentelemetry.io/otel/baggage` package. These functions replace the `Set`, `Value`, `ContextWithValue`, `ContextWithoutValue`, and `ContextWithEmpty` functions from that package and directly work with the new `Baggage` type. (#1967) The `OTEL_SERVICE_NAME` environment variable is the preferred source for `service.name`, used by the environment resource detector if a service name is present both there and in `OTEL_RESOURCE_ATTRIBUTES`. (#1969) Creates package `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp` implementing an HTTP `otlptrace.Client` and offers convenience functions, `NewExportPipeline` and `InstallNewPipeline`, to setup and install a `otlptrace.Exporter` in tracing. (#1963) Changes `go.opentelemetry.io/otel/sdk/resource.NewWithAttributes` to require a schema URL.
},
{
"data": "The old function is still available as `resource.NewSchemaless`. This is a breaking change. (#1938) Several builtin resource detectors now correctly populate the schema URL. (#1938) Creates package `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` that defines a metrics exporter that uses a `otlpmetric.Client` to send data. Creates package `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` implementing a gRPC `otlpmetric.Client` and offers convenience functions, `New` and `NewUnstarted`, to create an `otlpmetric.Exporter`.(#1991) Added `go.opentelemetry.io/otel/exporters/stdout/stdouttrace` exporter. (#2005) Added `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` exporter. (#2005) Added a `TracerProvider()` method to the `\"go.opentelemetry.io/otel/trace\".Span` interface. This can be used to obtain a `TracerProvider` from a given span that utilizes the same trace processing pipeline. (#2009) Make `NewSplitDriver` from `go.opentelemetry.io/otel/exporters/otlp` take variadic arguments instead of a `SplitConfig` item. `NewSplitDriver` now automatically implements an internal `noopDriver` for `SplitConfig` fields that are not initialized. (#1798) `resource.New()` now creates a Resource without builtin detectors. Previous behavior is now achieved by using `WithBuiltinDetectors` Option. (#1810) Move the `Event` type from the `go.opentelemetry.io/otel` package to the `go.opentelemetry.io/otel/sdk/trace` package. (#1846) CI builds validate against last two versions of Go, dropping 1.14 and adding 1.16. (#1865) BatchSpanProcessor now report export failures when calling `ForceFlush()` method. (#1860) `Set.Encoded(Encoder)` no longer caches the result of an encoding. (#1855) Renamed `CloudZoneKey` to `CloudAvailabilityZoneKey` in Resource semantic conventions according to spec. (#1871) The `StatusCode` and `StatusMessage` methods of the `ReadOnlySpan` interface and the `Span` produced by the `go.opentelemetry.io/otel/sdk/trace` package have been replaced with a single `Status` method. This method returns the status of a span using the new `Status` type. (#1874) Updated `ExportSpans` method of the`SpanExporter` interface type to accept `ReadOnlySpan`s instead of the removed `SpanSnapshot`. This brings the export interface into compliance with the specification in that it now accepts an explicitly immutable type instead of just an implied one. (#1873) Unembed `SpanContext` in `Link`. (#1877) Generate Semantic conventions from the specification YAML. (#1891) Spans created by the global `Tracer` obtained from `go.opentelemetry.io/otel`, prior to a functioning `TracerProvider` being set, now propagate the span context from their parent if one exists. (#1901) The `\"go.opentelemetry.io/otel\".Tracer` function now accepts tracer options. (#1902) Move the `go.opentelemetry.io/otel/unit` package to `go.opentelemetry.io/otel/metric/unit`. (#1903) Changed `go.opentelemetry.io/otel/trace.TracerConfig` to conform to the (#1921) Changed `go.opentelemetry.io/otel/trace.SpanConfig` to conform to the . (#1921) Changed `span.End()` now only accepts Options that are allowed at `End()`. (#1921) Changed `go.opentelemetry.io/otel/metric.InstrumentConfig` to conform to the . (#1921) Changed `go.opentelemetry.io/otel/metric.MeterConfig` to conform to the . (#1921) Refactored option types according to the contribution style guide. (#1882) Move the `go.opentelemetry.io/otel/trace.TraceStateFromKeyValues` function to the `go.opentelemetry.io/otel/oteltest` package. 
This function is preserved for testing purposes where it may be useful to create a `TraceState` from `attribute.KeyValue`s, but it is not intended for production use. The new `ParseTraceState` function should be used to create a `TraceState`. (#1931) Updated `MarshalJSON` method of the `go.opentelemetry.io/otel/trace.TraceState` type to marshal the type into the string representation of the `TraceState`. (#1931) The `TraceState.Delete` method from the `go.opentelemetry.io/otel/trace` package no longer returns an error in addition to a `TraceState`. (#1931) Updated `Get` method of the `TraceState` type from the `go.opentelemetry.io/otel/trace` package to accept a `string` instead of an `attribute.Key` type. (#1931) Updated `Insert` method of the `TraceState` type from the `go.opentelemetry.io/otel/trace` package to accept a pair of `string`s instead of an `attribute.KeyValue` type. (#1931) Updated `Delete` method of the `TraceState` type from the `go.opentelemetry.io/otel/trace` package to accept a `string` instead of an `attribute.Key` type. (#1931) Renamed `NewExporter` to `New` in the `go.opentelemetry.io/otel/exporters/stdout` package. (#1985) Renamed `NewExporter` to `New` in the `go.opentelemetry.io/otel/exporters/metric/prometheus` package. (#1985) Renamed `NewExporter` to `New` in the `go.opentelemetry.io/otel/exporters/trace/jaeger` package. (#1985) Renamed `NewExporter` to `New` in the `go.opentelemetry.io/otel/exporters/trace/zipkin` package. (#1985) Renamed `NewExporter` to `New` in the `go.opentelemetry.io/otel/exporters/otlp`"
},
{
"data": "(#1985) Renamed `NewUnstartedExporter` to `NewUnstarted` in the `go.opentelemetry.io/otel/exporters/otlp` package. (#1985) The `go.opentelemetry.io/otel/semconv` package has been moved to `go.opentelemetry.io/otel/semconv/v1.4.0` to allow for multiple versions to be used concurrently. (#1987) Metrics test helpers in `go.opentelemetry.io/otel/oteltest` have been moved to `go.opentelemetry.io/otel/metric/metrictest`. (#1988) The `go.opentelemetry.io/otel/exporters/metric/prometheus` is deprecated, use `go.opentelemetry.io/otel/exporters/prometheus` instead. (#1993) The `go.opentelemetry.io/otel/exporters/trace/jaeger` is deprecated, use `go.opentelemetry.io/otel/exporters/jaeger` instead. (#1993) The `go.opentelemetry.io/otel/exporters/trace/zipkin` is deprecated, use `go.opentelemetry.io/otel/exporters/zipkin` instead. (#1993) Removed `resource.WithoutBuiltin()`. Use `resource.New()`. (#1810) Unexported types `resource.FromEnv`, `resource.Host`, and `resource.TelemetrySDK`, Use the corresponding `With*()` to use individually. (#1810) Removed the `Tracer` and `IsRecording` method from the `ReadOnlySpan` in the `go.opentelemetry.io/otel/sdk/trace`. The `Tracer` method is not a required to be included in this interface and given the mutable nature of the tracer that is associated with a span, this method is not appropriate. The `IsRecording` method returns if the span is recording or not. A read-only span value does not need to know if updates to it will be recorded or not. By definition, it cannot be updated so there is no point in communicating if an update is recorded. (#1873) Removed the `SpanSnapshot` type from the `go.opentelemetry.io/otel/sdk/trace` package. The use of this type has been replaced with the use of the explicitly immutable `ReadOnlySpan` type. When a concrete representation of a read-only span is needed for testing, the newly added `SpanStub` in the `go.opentelemetry.io/otel/sdk/trace/tracetest` package should be used. (#1873) Removed the `Tracer` method from the `Span` interface in the `go.opentelemetry.io/otel/trace` package. Using the same tracer that created a span introduces the error where an instrumentation library's `Tracer` is used by other code instead of their own. The `\"go.opentelemetry.io/otel\".Tracer` function or a `TracerProvider` should be used to acquire a library specific `Tracer` instead. (#1900) The `TracerProvider()` method on the `Span` interface may also be used to obtain a `TracerProvider` using the same trace processing pipeline. (#2009) The `http.url` attribute generated by `HTTPClientAttributesFromHTTPRequest` will no longer include username or password information. (#1919) Removed `IsEmpty` method of the `TraceState` type in the `go.opentelemetry.io/otel/trace` package in favor of using the added `TraceState.Len` method. (#1931) Removed `Set`, `Value`, `ContextWithValue`, `ContextWithoutValue`, and `ContextWithEmpty` functions in the `go.opentelemetry.io/otel/baggage` package. Handling of baggage is now done using the added `Baggage` type and related context functions (`ContextWithBaggage`, `ContextWithoutBaggage`, and `FromContext`) in that package. (#1967) The `InstallNewPipeline` and `NewExportPipeline` creation functions in all the exporters (prometheus, otlp, stdout, jaeger, and zipkin) have been removed. These functions were deemed premature attempts to provide convenience that did not achieve this aim. (#1985) The `go.opentelemetry.io/otel/exporters/otlp` exporter has been removed. 
Use `go.opentelemetry.io/otel/exporters/otlp/otlptrace` instead. (#1990) The `go.opentelemetry.io/otel/exporters/stdout` exporter has been removed. Use `go.opentelemetry.io/otel/exporters/stdout/stdouttrace` or `go.opentelemetry.io/otel/exporters/stdout/stdoutmetric` instead. (#2005) Only report errors from the `\"go.opentelemetry.io/otel/sdk/resource\".Environment` function when they are not `nil`. (#1850, #1851) The `Shutdown` method of the simple `SpanProcessor` in the `go.opentelemetry.io/otel/sdk/trace` package now honors the context deadline or cancellation. (#1616, #1856) BatchSpanProcessor now drops span batches that failed to be exported. (#1860) Use `http://localhost:14268/api/traces` as default Jaeger collector endpoint instead of `http://localhost:14250`. (#1898) Allow trailing and leading whitespace in the parsing of a `tracestate` header. (#1931) Add logic to determine if the channel is closed to fix a Jaeger exporter test panic from closing an already closed channel. (#1870, #1973) Avoid transport security when OTLP endpoint is a Unix socket. (#2001) The OTLP exporter now has two new convenience functions, `NewExportPipeline` and `InstallNewPipeline`, to setup and install the exporter in tracing and metrics pipelines. (#1373) Adds semantic conventions for exceptions. (#1492) Added Jaeger Environment variables: `OTEL_EXPORTER_JAEGER_AGENT_HOST` and `OTEL_EXPORTER_JAEGER_AGENT_PORT`. These environment variables can be used to override the Jaeger agent hostname and port. (#1752) Option `ExportTimeout` was added to batch span processor. (#1755)
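A minimal sketch of wiring the batch span processor options referred to above onto a tracer provider; the stdout trace exporter and the specific timeout values are assumptions chosen only to keep the example self-contained.
```
package main

import (
	\"context\"
	\"log\"
	\"time\"

	\"go.opentelemetry.io/otel/exporters/stdout/stdouttrace\"
	sdktrace \"go.opentelemetry.io/otel/sdk/trace\"
)

func main() {
	// The stdout trace exporter is used only to keep the sketch self-contained.
	exp, err := stdouttrace.New()
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exp,
			sdktrace.WithBatchTimeout(5*time.Second),   // max delay before a batch is exported
			sdktrace.WithExportTimeout(30*time.Second), // deadline applied to each export call
		),
	)
	defer func() { _ = tp.Shutdown(context.Background()) }()
}
```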
},
{
"data": "is now a defined type over `byte` and `WithSampled(bool) TraceFlags` and `IsSampled() bool` methods have been added to it. (#1770) The `Event` and `Link` struct types from the `go.opentelemetry.io/otel` package now include a `DroppedAttributeCount` field to record the number of attributes that were not recorded due to configured limits being reached. (#1771) The Jaeger exporter now reports dropped attributes for a Span event in the exported log. (#1771) Adds test to check BatchSpanProcessor ignores `OnEnd` and `ForceFlush` post `Shutdown`. (#1772) Extract resource attributes from the `OTELRESOURCEATTRIBUTES` environment variable and merge them with the `resource.Default` resource as well as resources provided to the `TracerProvider` and metric `Controller`. (#1785) Added `WithOSType` resource configuration option to set OS (Operating System) type resource attribute (`os.type`). (#1788) Added `WithProcess*` resource configuration options to set Process resource attributes. (#1788) `process.pid` `process.executable.name` `process.executable.path` `process.command_args` `process.owner` `process.runtime.name` `process.runtime.version` `process.runtime.description` Adds `k8s.node.name` and `k8s.node.uid` attribute keys to the `semconv` package. (#1789) Added support for configuring OTLP/HTTP and OTLP/gRPC Endpoints, TLS Certificates, Headers, Compression and Timeout via Environment Variables. (#1758, #1769 and #1811) `OTELEXPORTEROTLP_ENDPOINT` `OTELEXPORTEROTLPTRACESENDPOINT` `OTELEXPORTEROTLPMETRICSENDPOINT` `OTELEXPORTEROTLP_HEADERS` `OTELEXPORTEROTLPTRACESHEADERS` `OTELEXPORTEROTLPMETRICSHEADERS` `OTELEXPORTEROTLP_COMPRESSION` `OTELEXPORTEROTLPTRACESCOMPRESSION` `OTELEXPORTEROTLPMETRICSCOMPRESSION` `OTELEXPORTEROTLP_TIMEOUT` `OTELEXPORTEROTLPTRACESTIMEOUT` `OTELEXPORTEROTLPMETRICSTIMEOUT` `OTELEXPORTEROTLP_CERTIFICATE` `OTELEXPORTEROTLPTRACESCERTIFICATE` `OTELEXPORTEROTLPMETRICSCERTIFICATE` Adds `otlpgrpc.WithTimeout` option for configuring timeout to the otlp/gRPC exporter. (#1821) Adds `jaeger.WithMaxPacketSize` option for configuring maximum UDP packet size used when connecting to the Jaeger agent. (#1853) The `Span.IsRecording` implementation from `go.opentelemetry.io/otel/sdk/trace` always returns false when not being sampled. (#1750) The Jaeger exporter now correctly sets tags for the Span status code and message. This means it uses the correct tag keys (`\"otel.statuscode\"`, `\"otel.statusdescription\"`) and does not set the status message as a tag unless it is set on the span. (#1761) The Jaeger exporter now correctly records Span event's names using the `\"event\"` key for a tag. Additionally, this tag is overridden, as specified in the OTel specification, if the event contains an attribute with that key. (#1768) Zipkin Exporter: Ensure mapping between OTel and Zipkin span data complies with the specification. (#1688) Fixed typo for default service name in Jaeger Exporter. (#1797) Fix flaky OTLP for the reconnnection of the client connection. (#1527, #1814) Fix Jaeger exporter dropping of span batches that exceed the UDP packet size limit. Instead, the exporter now splits the batch into smaller sendable batches. (#1828) Span `RecordError` now records an `exception` event to comply with the semantic convention specification. (#1492) Jaeger exporter was updated to use thrift v0.14.1. (#1712) Migrate from using internally built and maintained version of the OTLP to the one hosted at `go.opentelemetry.io/proto/otlp`. 
Migrate from using `github.com/gogo/protobuf` to `google.golang.org/protobuf` to match `go.opentelemetry.io/proto/otlp`. (#1713) The storage of a local or remote Span in a `context.Context` using its SpanContext is unified to store just the current Span. The Span's SpanContext can now self-identify as being remote or not. This means that `\"go.opentelemetry.io/otel/trace\".ContextWithRemoteSpanContext` will now overwrite any existing current Span, not just existing remote Spans, and make it the current Span in a `context.Context`. (#1731) Improve OTLP/gRPC exporter connection errors. (#1737) Information about a parent span context in a `\"go.opentelemetry.io/otel/export/trace\".SpanSnapshot` is unified in a new `Parent` field. The existing `ParentSpanID` and `HasRemoteParent` fields are removed in favor of this. (#1748) The `ParentContext` field of the `\"go.opentelemetry.io/otel/sdk/trace\".SamplingParameters` is updated to hold a `context.Context` containing the parent span. This changes it to make `SamplingParameters` conform with the OpenTelemetry specification. (#1749) Updated Jaeger Environment Variables: `JAEGER_ENDPOINT`, `JAEGER_USER`, `JAEGER_PASSWORD` to `OTEL_EXPORTER_JAEGER_ENDPOINT`, `OTEL_EXPORTER_JAEGER_USER`, `OTEL_EXPORTER_JAEGER_PASSWORD` in compliance with OTel specification. (#1752) Modify `BatchSpanProcessor.ForceFlush` to abort after timeout/cancellation. (#1757) The `DroppedAttributeCount` field of the `Span` in the `go.opentelemetry.io/otel` package now only represents the number of attributes dropped for the span
},
{
"data": "It no longer is a conglomerate of itself, events, and link attributes that have been dropped. (#1771) Make `ExportSpans` in Jaeger Exporter honor context deadline. (#1773) Modify Zipkin Exporter default service name, use default resource's serviceName instead of empty. (#1777) The `go.opentelemetry.io/otel/sdk/export/trace` package is merged into the `go.opentelemetry.io/otel/sdk/trace` package. (#1778) The prometheus.InstallNewPipeline example is moved from comment to example test (#1796) The convenience functions for the stdout exporter have been updated to return the `TracerProvider` implementation and enable the shutdown of the exporter. (#1800) Replace the flush function returned from the Jaeger exporter's convenience creation functions (`InstallNewPipeline` and `NewExportPipeline`) with the `TracerProvider` implementation they create. This enables the caller to shutdown and flush using the related `TracerProvider` methods. (#1822) Updated the Jaeger exporter to have a default endpoint, `http://localhost:14250`, for the collector. (#1824) Changed the function `WithCollectorEndpoint` in the Jaeger exporter to no longer accept an endpoint as an argument. The endpoint can be passed with the `CollectorEndpointOption` using the `WithEndpoint` function or by setting the `OTELEXPORTERJAEGER_ENDPOINT` environment variable value appropriately. (#1824) The Jaeger exporter no longer batches exported spans itself, instead it relies on the SDK's `BatchSpanProcessor` for this functionality. (#1830) The Jaeger exporter creation functions (`NewRawExporter`, `NewExportPipeline`, and `InstallNewPipeline`) no longer accept the removed `Option` type as a variadic argument. (#1830) Removed Jaeger Environment variables: `JAEGERSERVICENAME`, `JAEGERDISABLED`, `JAEGERTAGS` These environment variables will no longer be used to override values of the Jaeger exporter (#1752) No longer set the links for a `Span` in `go.opentelemetry.io/otel/sdk/trace` that is configured to be a new root. This is unspecified behavior that the OpenTelemetry community plans to standardize in the future. To prevent backwards incompatible changes when it is specified, these links are removed. (#1726) Setting error status while recording error with Span from oteltest package. (#1729) The concept of a remote and local Span stored in a context is unified to just the current Span. Because of this `\"go.opentelemetry.io/otel/trace\".RemoteSpanContextFromContext` is removed as it is no longer needed. Instead, `\"go.opentelemetry.io/otel/trace\".SpanContextFromContex` can be used to return the current Span. If needed, that Span's `SpanContext.IsRemote()` can then be used to determine if it is remote or not. (#1731) The `HasRemoteParent` field of the `\"go.opentelemetry.io/otel/sdk/trace\".SamplingParameters` is removed. This field is redundant to the information returned from the `Remote` method of the `SpanContext` held in the `ParentContext` field. (#1749) The `trace.FlagsDebug` and `trace.FlagsDeferred` constants have been removed and will be localized to the B3 propagator. (#1770) Remove `Process` configuration, `WithProcessFromEnv` and `ProcessFromEnv`, and type from the Jaeger exporter package. The information that could be configured in the `Process` struct should be configured in a `Resource` instead. (#1776, #1804) Remove the `WithDisabled` option from the Jaeger exporter. To disable the exporter unregister it from the `TracerProvider` or use a no-operation `TracerProvider`. 
(#1806) Removed the functions `CollectorEndpointFromEnv` and `WithCollectorEndpointOptionFromEnv` from the Jaeger exporter. These functions for retrieving specific environment variable values are redundant of other internal functions and are not intended for end user use. (#1824) Removed the Jaeger exporter `WithSDKOptions` `Option`. This option was used to set SDK options for the exporter creation convenience functions. These functions are provided as a way to easily setup or install the exporter with what are deemed reasonable SDK settings for common use cases. If the SDK needs to be configured differently, the `NewRawExporter` function and direct setup of the SDK with the desired settings should be used. (#1825) The `WithBufferMaxCount` and `WithBatchMaxCount` `Option`s from the Jaeger exporter are removed. The exporter no longer batches exports, instead relying on the SDK's `BatchSpanProcessor` for this"
},
{
"data": "(#1830) The Jaeger exporter `Option` type is removed. The type is no longer used by the exporter to configure anything. All the previous configurations these options provided were duplicates of SDK configuration. They have been removed in favor of using the SDK configuration and focuses the exporter configuration to be only about the endpoints it will send telemetry to. (#1830) Added `Marshaler` config option to `otlphttp` to enable otlp over json or protobufs. (#1586) A `ForceFlush` method to the `\"go.opentelemetry.io/otel/sdk/trace\".TracerProvider` to flush all registered `SpanProcessor`s. (#1608) Added `WithSampler` and `WithSpanLimits` to tracer provider. (#1633, #1702) `\"go.opentelemetry.io/otel/trace\".SpanContext` now has a `remote` property, and `IsRemote()` predicate, that is true when the `SpanContext` has been extracted from remote context data. (#1701) A `Valid` method to the `\"go.opentelemetry.io/otel/attribute\".KeyValue` type. (#1703) `trace.SpanContext` is now immutable and has no exported fields. (#1573) `trace.NewSpanContext()` can be used in conjunction with the `trace.SpanContextConfig` struct to initialize a new `SpanContext` where all values are known. Update the `ForceFlush` method signature to the `\"go.opentelemetry.io/otel/sdk/trace\".SpanProcessor` to accept a `context.Context` and return an error. (#1608) Update the `Shutdown` method to the `\"go.opentelemetry.io/otel/sdk/trace\".TracerProvider` return an error on shutdown failure. (#1608) The SimpleSpanProcessor will now shut down the enclosed `SpanExporter` and gracefully ignore subsequent calls to `OnEnd` after `Shutdown` is called. (#1612) `\"go.opentelemetry.io/sdk/metric/controller.basic\".WithPusher` is replaced with `WithExporter` to provide consistent naming across project. (#1656) Added non-empty string check for trace `Attribute` keys. (#1659) Add `description` to SpanStatus only when `StatusCode` is set to error. (#1662) Jaeger exporter falls back to `resource.Default`'s `service.name` if the exported Span does not have one. (#1673) Jaeger exporter populates Jaeger's Span Process from Resource. (#1673) Renamed the `LabelSet` method of `\"go.opentelemetry.io/otel/sdk/resource\".Resource` to `Set`. (#1692) Changed `WithSDK` to `WithSDKOptions` to accept variadic arguments of `TracerProviderOption` type in `go.opentelemetry.io/otel/exporters/trace/jaeger` package. (#1693) Changed `WithSDK` to `WithSDKOptions` to accept variadic arguments of `TracerProviderOption` type in `go.opentelemetry.io/otel/exporters/trace/zipkin` package. (#1693) Removed `serviceName` parameter from Zipkin exporter and uses resource instead. (#1549) Removed `WithConfig` from tracer provider to avoid overriding configuration. (#1633) Removed the exported `SimpleSpanProcessor` and `BatchSpanProcessor` structs. These are now returned as a SpanProcessor interface from their respective constructors. (#1638) Removed `WithRecord()` from `trace.SpanOption` when creating a span. (#1660) Removed setting status to `Error` while recording an error as a span event in `RecordError`. (#1663) Removed `jaeger.WithProcess` configuration option. (#1673) Removed `ApplyConfig` method from `\"go.opentelemetry.io/otel/sdk/trace\".TracerProvider` and the now unneeded `Config` struct. (#1693) Jaeger Exporter: Ensure mapping between OTEL and Jaeger span data complies with the specification. (#1626) `SamplingResult.TraceState` is correctly propagated to a newly created span's `SpanContext`. 
(#1655) The `otel-collector` example now correctly flushes metric events prior to shutting down the exporter. (#1678) Do not set span status message in `SpanStatusFromHTTPStatusCode` if it can be inferred from `http.status_code`. (#1681) Synchronization issues in global trace delegate implementation. (#1686) Reduced excess memory usage by global `TracerProvider`. (#1687) Added `resource.Default()` for use with meter and tracer providers. (#1507) `AttributePerEventCountLimit` and `AttributePerLinkCountLimit` for `SpanLimits`. (#1535) Added `Keys()` method to `propagation.TextMapCarrier` and `propagation.HeaderCarrier` to adapt `http.Header` to this interface. (#1544) Added `code` attributes to `go.opentelemetry.io/otel/semconv` package. (#1558) Compatibility testing suite in the CI system for the following systems. (#1567) | OS | Go Version | Architecture | | - | - | | | Ubuntu | 1.15 | amd64 | | Ubuntu | 1.14 | amd64 | | Ubuntu | 1.15 | 386 | | Ubuntu | 1.14 | 386 | | MacOS | 1.15 | amd64 | | MacOS | 1.14 | amd64 | | Windows | 1.15 | amd64 | | Windows | 1.14 | amd64 | | Windows | 1.15 | 386 | | Windows |"
},
{
"data": "| 386 | Replaced interface `oteltest.SpanRecorder` with its existing implementation `StandardSpanRecorder`. (#1542) Default span limit values to 128. (#1535) Rename `MaxEventsPerSpan`, `MaxAttributesPerSpan` and `MaxLinksPerSpan` to `EventCountLimit`, `AttributeCountLimit` and `LinkCountLimit`, and move these fields into `SpanLimits`. (#1535) Renamed the `otel/label` package to `otel/attribute`. (#1541) Vendor the Jaeger exporter's dependency on Apache Thrift. (#1551) Parallelize the CI linting and testing. (#1567) Stagger timestamps in exact aggregator tests. (#1569) Changed all examples to use `WithBatchTimeout(5 * time.Second)` rather than `WithBatchTimeout(5)`. (#1621) Prevent end-users from implementing some interfaces (#1575) ``` \"otel/exporters/otlp/otlphttp\".Option \"otel/exporters/stdout\".Option \"otel/oteltest\".Option \"otel/trace\".TracerOption \"otel/trace\".SpanOption \"otel/trace\".EventOption \"otel/trace\".LifeCycleOption \"otel/trace\".InstrumentationOption \"otel/sdk/resource\".Option \"otel/sdk/trace\".ParentBasedSamplerOption \"otel/sdk/trace\".ReadOnlySpan \"otel/sdk/trace\".ReadWriteSpan ``` Removed attempt to resample spans upon changing the span name with `span.SetName()`. (#1545) The `test-benchmark` is no longer a dependency of the `precommit` make target. (#1567) Removed the `test-386` make target. This was replaced with a full compatibility testing suite (i.e. multi OS/arch) in the CI system. (#1567) The sequential timing check of timestamps in the stdout exporter are now setup explicitly to be sequential (#1571). (#1572) Windows build of Jaeger tests now compiles with OS specific functions (#1576). (#1577) The sequential timing check of timestamps of go.opentelemetry.io/otel/sdk/metric/aggregator/lastvalue are now setup explicitly to be sequential (#1578). (#1579) Validate tracestate header keys with vendors according to the W3C TraceContext specification (#1475). (#1581) The OTLP exporter includes related labels for translations of a GaugeArray (#1563). (#1570) Rename project default branch from `master` to `main`. (#1505) Reverse order in which `Resource` attributes are merged, per change in spec. (#1501) Add tooling to maintain \"replace\" directives in go.mod files automatically. (#1528) Create new modules: otel/metric, otel/trace, otel/oteltest, otel/sdk/export/metric, otel/sdk/metric (#1528) Move metric-related public global APIs from otel to otel/metric/global. (#1528) Fixed otlpgrpc reconnection issue. The example code in the README.md of `go.opentelemetry.io/otel/exporters/otlp` is moved to a compiled example test and used the new `WithAddress` instead of `WithEndpoint`. (#1513) The otel-collector example now uses the default OTLP receiver port of the collector. Add the `ReadOnlySpan` and `ReadWriteSpan` interfaces to provide better control for accessing span data. (#1360) `NewGRPCDriver` function returns a `ProtocolDriver` that maintains a single gRPC connection to the collector. (#1369) Added documentation about the project's versioning policy. (#1388) Added `NewSplitDriver` for OTLP exporter that allows sending traces and metrics to different endpoints. (#1418) Added codeql workflow to GitHub Actions (#1428) Added Gosec workflow to GitHub Actions (#1429) Add new HTTP driver for OTLP exporter in `exporters/otlp/otlphttp`. Currently it only supports the binary protobuf payloads. (#1420) Add an OpenCensus exporter bridge. (#1444) Rename `internal/testing` to `internal/internaltest`. 
(#1449) Rename `export.SpanData` to `export.SpanSnapshot` and use it only for exporting spans. (#1360) Store the parent's full `SpanContext` rather than just its span ID in the `span` struct. (#1360) Improve span duration accuracy. (#1360) Migrated CI/CD from CircleCI to GitHub Actions (#1382) Remove duplicate checkout from GitHub Actions workflow (#1407) Metric `array` aggregator renamed `exact` to match its `aggregation.Kind` (#1412) Metric `exact` aggregator includes per-point timestamps (#1412) Metric stdout exporter uses MinMaxSumCount aggregator for ValueRecorder instruments (#1412) `NewExporter` from `exporters/otlp` now takes a `ProtocolDriver` as a parameter. (#1369) Many OTLP Exporter options became gRPC ProtocolDriver options. (#1369) Unify endpoint API that related to OTel exporter. (#1401) Optimize metric histogram aggregator to re-use its slice of buckets. (#1435) Metric aggregator Count() and histogram Bucket.Counts are consistently `uint64`. (1430) Histogram aggregator accepts functional options, uses default boundaries if none given. (#1434) `SamplingResult` now passed a `Tracestate` from the parent `SpanContext` (#1432) Moved gRPC driver for OTLP exporter to `exporters/otlp/otlpgrpc`. (#1420) The `TraceContext` propagator now correctly propagates `TraceState` through the"
},
{
"data": "(#1447) Metric Push and Pull Controller components are combined into a single \"basic\" Controller: `WithExporter()` and `Start()` to configure Push behavior `Start()` is optional; use `Collect()` and `ForEach()` for Pull behavior `Start()` and `Stop()` accept Context. (#1378) The `Event` type is moved from the `otel/sdk/export/trace` package to the `otel/trace` API package. (#1452) Remove `errUninitializedSpan` as its only usage is now obsolete. (#1360) Remove Metric export functionality related to quantiles and summary data points: this is not specified (#1412) Remove DDSketch metric aggregator; our intention is to re-introduce this as an option of the histogram aggregator after are released (#1412) `BatchSpanProcessor.Shutdown()` will now shutdown underlying `export.SpanExporter`. (#1443) The `WithIDGenerator` `TracerProviderOption` is added to the `go.opentelemetry.io/otel/trace` package to configure an `IDGenerator` for the `TracerProvider`. (#1363) The Zipkin exporter now uses the Span status code to determine. (#1328) `NewExporter` and `Start` functions in `go.opentelemetry.io/otel/exporters/otlp` now receive `context.Context` as a first parameter. (#1357) Move the OpenCensus example into `example` directory. (#1359) Moved the SDK's `internal.IDGenerator` interface in to the `sdk/trace` package to enable support for externally-defined ID generators. (#1363) Bump `github.com/google/go-cmp` from 0.5.3 to 0.5.4 (#1374) Bump `github.com/golangci/golangci-lint` in `/internal/tools` (#1375) Metric SDK `SumObserver` and `UpDownSumObserver` instruments correctness fixes. (#1381) An `EventOption` and the related `NewEventConfig` function are added to the `go.opentelemetry.io/otel` package to configure Span events. (#1254) A `TextMapPropagator` and associated `TextMapCarrier` are added to the `go.opentelemetry.io/otel/oteltest` package to test `TextMap` type propagators and their use. (#1259) `SpanContextFromContext` returns `SpanContext` from context. (#1255) `TraceState` has been added to `SpanContext`. (#1340) `DeploymentEnvironmentKey` added to `go.opentelemetry.io/otel/semconv` package. (#1323) Add an OpenCensus to OpenTelemetry tracing bridge. (#1305) Add a parent context argument to `SpanProcessor.OnStart` to follow the specification. (#1333) Add missing tests for `sdk/trace/attributes_map.go`. (#1337) Move the `go.opentelemetry.io/otel/api/trace` package into `go.opentelemetry.io/otel/trace` with the following changes. (#1229) (#1307) `ID` has been renamed to `TraceID`. `IDFromHex` has been renamed to `TraceIDFromHex`. `EmptySpanContext` is removed. Move the `go.opentelemetry.io/otel/api/trace/tracetest` package into `go.opentelemetry.io/otel/oteltest`. (#1229) OTLP Exporter updates: supports OTLP v0.6.0 (#1230, #1354) supports configurable aggregation temporality (default: Cumulative, optional: Stateless). (#1296) The Sampler is now called on local child spans. (#1233) The `Kind` type from the `go.opentelemetry.io/otel/api/metric` package was renamed to `InstrumentKind` to more specifically describe what it is and avoid semantic ambiguity. (#1240) The `MetricKind` method of the `Descriptor` type in the `go.opentelemetry.io/otel/api/metric` package was renamed to `Descriptor.InstrumentKind`. This matches the returned type and fixes misuse of the term metric. (#1240) Move test harness from the `go.opentelemetry.io/otel/api/apitest` package into `go.opentelemetry.io/otel/oteltest`. 
(#1241) Move the `go.opentelemetry.io/otel/api/metric/metrictest` package into `go.opentelemetry.io/oteltest` as part of #964. (#1252) Move the `go.opentelemetry.io/otel/api/metric` package into `go.opentelemetry.io/otel/metric` as part of #1303. (#1321) Move the `go.opentelemetry.io/otel/api/metric/registry` package into `go.opentelemetry.io/otel/metric/registry` as a part of #1303. (#1316) Move the `Number` type (together with related functions) from `go.opentelemetry.io/otel/api/metric` package into `go.opentelemetry.io/otel/metric/number` as a part of #1303. (#1316) The function signature of the Span `AddEvent` method in `go.opentelemetry.io/otel` is updated to no longer take an unused context and instead take a required name and a variable number of `EventOption`s. (#1254) The function signature of the Span `RecordError` method in `go.opentelemetry.io/otel` is updated to no longer take an unused context and instead take a required error value and a variable number of `EventOption`s. (#1254) Move the `go.opentelemetry.io/otel/api/global` package to `go.opentelemetry.io/otel`. (#1262) (#1330) Move the `Version` function from `go.opentelemetry.io/otel/sdk` to `go.opentelemetry.io/otel`. (#1330) Rename correlation context header from `\"otcorrelations\"` to `\"baggage\"` to match the OpenTelemetry specification. (#1267) Fix `Code.UnmarshalJSON` to work with valid JSON only. (#1276) The `resource.New()` method changes signature to support builtin attributes and functional options, including `telemetry.sdk.*` and `host.name` semantic conventions; the former method is renamed `resource.NewWithAttributes`. (#1235) The Prometheus exporter now exports non-monotonic counters"
},
{
"data": "`UpDownCounter`s) as gauges. (#1210) Correct the `Span.End` method documentation in the `otel` API to state updates are not allowed on a span after it has ended. (#1310) Updated span collection limits for attribute, event and link counts to 1000 (#1318) Renamed `semconv.HTTPUrlKey` to `semconv.HTTPURLKey`. (#1338) The `ErrInvalidHexID`, `ErrInvalidTraceIDLength`, `ErrInvalidSpanIDLength`, `ErrInvalidSpanIDLength`, or `ErrNilSpanID` from the `go.opentelemetry.io/otel` package are unexported now. (#1243) The `AddEventWithTimestamp` method on the `Span` interface in `go.opentelemetry.io/otel` is removed due to its redundancy. It is replaced by using the `AddEvent` method with a `WithTimestamp` option. (#1254) The `MockSpan` and `MockTracer` types are removed from `go.opentelemetry.io/otel/oteltest`. `Tracer` and `Span` from the same module should be used in their place instead. (#1306) `WorkerCount` option is removed from `go.opentelemetry.io/otel/exporters/otlp`. (#1350) Remove the following labels types: INT32, UINT32, UINT64 and FLOAT32. (#1314) Rename `MergeItererator` to `MergeIterator` in the `go.opentelemetry.io/otel/label` package. (#1244) The `go.opentelemetry.io/otel/api/global` packages global TextMapPropagator now delegates functionality to a globally set delegate for all previously returned propagators. (#1258) Fix condition in `label.Any`. (#1299) Fix global `TracerProvider` to pass options to its configured provider. (#1329) Fix missing handler for `ExactKind` aggregator in OTLP metrics transformer (#1309) OTLP Metric exporter supports Histogram aggregation. (#1209) The `Code` struct from the `go.opentelemetry.io/otel/codes` package now supports JSON marshaling and unmarshaling as well as implements the `Stringer` interface. (#1214) A Baggage API to implement the OpenTelemetry specification. (#1217) Add Shutdown method to sdk/trace/provider, shutdown processors in the order they were registered. (#1227) Set default propagator to no-op propagator. (#1184) The `HTTPSupplier`, `HTTPExtractor`, `HTTPInjector`, and `HTTPPropagator` from the `go.opentelemetry.io/otel/api/propagation` package were replaced with unified `TextMapCarrier` and `TextMapPropagator` in the `go.opentelemetry.io/otel/propagation` package. (#1212) (#1325) The `New` function from the `go.opentelemetry.io/otel/api/propagation` package was replaced with `NewCompositeTextMapPropagator` in the `go.opentelemetry.io/otel` package. (#1212) The status codes of the `go.opentelemetry.io/otel/codes` package have been updated to match the latest OpenTelemetry specification. They now are `Unset`, `Error`, and `Ok`. They no longer track the gRPC codes. (#1214) The `StatusCode` field of the `SpanData` struct in the `go.opentelemetry.io/otel/sdk/export/trace` package now uses the codes package from this package instead of the gRPC project. (#1214) Move the `go.opentelemetry.io/otel/api/baggage` package into `go.opentelemetry.io/otel/baggage`. (#1217) (#1325) A `Shutdown` method of `SpanProcessor` and all its implementations receives a context and returns an error. (#1264) Copies of data from arrays and slices passed to `go.opentelemetry.io/otel/label.ArrayValue()` are now used in the returned `Value` instead of using the mutable data itself. (#1226) The `ExtractHTTP` and `InjectHTTP` functions from the `go.opentelemetry.io/otel/api/propagation` package were removed. (#1212) The `Propagators` interface from the `go.opentelemetry.io/otel/api/propagation` package was removed to conform to the OpenTelemetry specification. 
The explicit `TextMapPropagator` type can be used in its place as this is the `Propagator` type the specification defines. (#1212) The `SetAttribute` method of the `Span` from the `go.opentelemetry.io/otel/api/trace` package was removed given its redundancy with the `SetAttributes` method. (#1216) The internal implementation of Baggage storage is removed in favor of using the new Baggage API functionality. (#1217) Remove duplicate hostname key `HostHostNameKey` in Resource semantic conventions. (#1219) Nested array/slice support has been removed. (#1226) A `SpanConfigure` function in `go.opentelemetry.io/otel/api/trace` to create a new `SpanConfig` from `SpanOption`s. (#1108) In the `go.opentelemetry.io/otel/api/trace` package, `NewTracerConfig` was added to construct new `TracerConfig`s. This addition was made to conform with our project option conventions. (#1155) Instrumentation library information was added to the Zipkin exporter. (#1119) The `SpanProcessor` interface now has a `ForceFlush()` method. (#1166) More semantic conventions for k8s as resource attributes. (#1167) Add reconnecting udp connection type to Jaeger exporter. This change adds a new optional implementation of the udp conn interface used to detect changes to an agent's host dns"
},
{
"data": "It then adopts the new destination address to ensure the exporter doesn't get stuck. This change was ported from jaegertracing/jaeger-client-go#520. (#1063) Replace `StartOption` and `EndOption` in `go.opentelemetry.io/otel/api/trace` with `SpanOption`. This change is matched by replacing the `StartConfig` and `EndConfig` with a unified `SpanConfig`. (#1108) Replace the `LinkedTo` span option in `go.opentelemetry.io/otel/api/trace` with `WithLinks`. This is be more consistent with our other option patterns, i.e. passing the item to be configured directly instead of its component parts, and provides a cleaner function signature. (#1108) The `go.opentelemetry.io/otel/api/trace` `TracerOption` was changed to an interface to conform to project option conventions. (#1109) Move the `B3` and `TraceContext` from within the `go.opentelemetry.io/otel/api/trace` package to their own `go.opentelemetry.io/otel/propagators` package. This removal of the propagators is reflective of the OpenTelemetry specification for these propagators as well as cleans up the `go.opentelemetry.io/otel/api/trace` API. (#1118) Rename Jaeger tags used for instrumentation library information to reflect changes in OpenTelemetry specification. (#1119) Rename `ProbabilitySampler` to `TraceIDRatioBased` and change semantics to ignore parent span sampling status. (#1115) Move `tools` package under `internal`. (#1141) Move `go.opentelemetry.io/otel/api/correlation` package to `go.opentelemetry.io/otel/api/baggage`. (#1142) The `correlation.CorrelationContext` propagator has been renamed `baggage.Baggage`. Other exported functions and types are unchanged. Rename `ParentOrElse` sampler to `ParentBased` and allow setting samplers depending on parent span. (#1153) In the `go.opentelemetry.io/otel/api/trace` package, `SpanConfigure` was renamed to `NewSpanConfig`. (#1155) Change `dependabot.yml` to add a `Skip Changelog` label to dependabot-sourced PRs. (#1161) The has been updated to recommend the use of `newConfig()` instead of `configure()`. (#1163) The `otlp.Config` type has been unexported and changed to `otlp.config`, along with its initializer. (#1163) Ensure exported interface types include parameter names and update the Style Guide to reflect this styling rule. (#1172) Don't consider unset environment variable for resource detection to be an error. (#1170) Rename `go.opentelemetry.io/otel/api/metric.ConfigureInstrument` to `NewInstrumentConfig` and `go.opentelemetry.io/otel/api/metric.ConfigureMeter` to `NewMeterConfig`. ValueObserver instruments use LastValue aggregator by default. (#1165) OTLP Metric exporter supports LastValue aggregation. (#1165) Move the `go.opentelemetry.io/otel/api/unit` package to `go.opentelemetry.io/otel/unit`. (#1185) Rename `Provider` to `MeterProvider` in the `go.opentelemetry.io/otel/api/metric` package. (#1190) Rename `NoopProvider` to `NoopMeterProvider` in the `go.opentelemetry.io/otel/api/metric` package. (#1190) Rename `NewProvider` to `NewMeterProvider` in the `go.opentelemetry.io/otel/api/metric/metrictest` package. (#1190) Rename `Provider` to `MeterProvider` in the `go.opentelemetry.io/otel/api/metric/registry` package. (#1190) Rename `NewProvider` to `NewMeterProvider` in the `go.opentelemetry.io/otel/api/metri/registryc` package. (#1190) Rename `Provider` to `TracerProvider` in the `go.opentelemetry.io/otel/api/trace` package. (#1190) Rename `NoopProvider` to `NoopTracerProvider` in the `go.opentelemetry.io/otel/api/trace` package. 
Rename `Provider` to `TracerProvider` in the `go.opentelemetry.io/otel/api/trace/tracetest` package. (#1190) Rename `NewProvider` to `NewTracerProvider` in the `go.opentelemetry.io/otel/api/trace/tracetest` package. (#1190) Rename `WrapperProvider` to `WrapperTracerProvider` in the `go.opentelemetry.io/otel/bridge/opentracing` package. (#1190) Rename `NewWrapperProvider` to `NewWrapperTracerProvider` in the `go.opentelemetry.io/otel/bridge/opentracing` package. (#1190) Rename `Provider` method of the pull controller to `MeterProvider` in the `go.opentelemetry.io/otel/sdk/metric/controller/pull` package. (#1190) Rename `Provider` method of the push controller to `MeterProvider` in the `go.opentelemetry.io/otel/sdk/metric/controller/push` package. (#1190) Rename `ProviderOptions` to `TracerProviderConfig` in the `go.opentelemetry.io/otel/sdk/trace` package. (#1190) Rename `ProviderOption` to `TracerProviderOption` in the `go.opentelemetry.io/otel/sdk/trace` package. (#1190) Rename `Provider` to `TracerProvider` in the `go.opentelemetry.io/otel/sdk/trace` package. (#1190) Rename `NewProvider` to `NewTracerProvider` in the `go.opentelemetry.io/otel/sdk/trace` package. (#1190) Renamed `SamplingDecision` values to comply with OpenTelemetry specification change. (#1192) Renamed Zipkin attribute names from `ot.status_code & ot.status_description` to `otel.status_code & otel.status_description`. (#1201) The default SDK now invokes registered `SpanProcessor`s in the order they were registered with the `TracerProvider`. (#1195) Add test of spans being processed by the `SpanProcessor`s in the order they were registered. (#1203) Remove the B3 propagator from `go.opentelemetry.io/otel/propagators`. It is now located in the `go.opentelemetry.io/contrib/propagators/` module. (#1191) Remove the semantic convention for HTTP status text, `HTTPStatusTextKey` from package `go.opentelemetry.io/otel/semconv`. (#1194) Zipkin example no longer mentions `ParentSampler`, corrected to `ParentBased`. (#1171) Fix missing shutdown processor in otel-collector
},
{
"data": "(#1186) Fix missing shutdown processor in basic and namedtracer examples. (#1197) Support for exporting array-valued attributes via OTLP. (#992) `Noop` and `InMemory` `SpanBatcher` implementations to help with testing integrations. (#994) Support for filtering metric label sets. (#1047) A dimensionality-reducing metric Processor. (#1057) Integration tests for more OTel Collector Attribute types. (#1062) A new `WithSpanProcessor` `ProviderOption` is added to the `go.opentelemetry.io/otel/sdk/trace` package to create a `Provider` and automatically register the `SpanProcessor`. (#1078) Rename `sdk/metric/processor/test` to `sdk/metric/processor/processortest`. (#1049) Rename `sdk/metric/controller/test` to `sdk/metric/controller/controllertest`. (#1049) Rename `api/testharness` to `api/apitest`. (#1049) Rename `api/trace/testtrace` to `api/trace/tracetest`. (#1049) Change Metric Processor to merge multiple observations. (#1024) The `go.opentelemetry.io/otel/bridge/opentracing` bridge package has been made into its own module. This removes the package dependencies of this bridge from the rest of the OpenTelemetry based project. (#1038) Renamed `go.opentelemetry.io/otel/api/standard` package to `go.opentelemetry.io/otel/semconv` to avoid the ambiguous and generic name `standard` and better describe the package as containing OpenTelemetry semantic conventions. (#1016) The environment variable used for resource detection has been changed from `OTELRESOURCELABELS` to `OTELRESOURCEATTRIBUTES` (#1042) Replace `WithSyncer` with `WithBatcher` in examples. (#1044) Replace the `google.golang.org/grpc/codes` dependency in the API with an equivalent `go.opentelemetry.io/otel/codes` package. (#1046) Merge the `go.opentelemetry.io/otel/api/label` and `go.opentelemetry.io/otel/api/kv` into the new `go.opentelemetry.io/otel/label` package. (#1060) Unify Callback Function Naming. Rename `Callback` with `Func`. (#1061) CI builds validate against last two versions of Go, dropping 1.13 and adding 1.15. (#1064) The `go.opentelemetry.io/otel/sdk/export/trace` interfaces `SpanSyncer` and `SpanBatcher` have been replaced with a specification compliant `Exporter` interface. This interface still supports the export of `SpanData`, but only as a slice. Implementation are also required now to return any error from `ExportSpans` if one occurs as well as implement a `Shutdown` method for exporter clean-up. (#1078) The `go.opentelemetry.io/otel/sdk/trace` `NewBatchSpanProcessor` function no longer returns an error. If a `nil` exporter is passed as an argument to this function, instead of it returning an error, it now returns a `BatchSpanProcessor` that handles the export of `SpanData` by not taking any action. (#1078) The `go.opentelemetry.io/otel/sdk/trace` `NewProvider` function to create a `Provider` no longer returns an error, instead only a `*Provider`. This change is related to `NewBatchSpanProcessor` not returning an error which was the only error this function would return. (#1078) Duplicate, unused API sampler interface. (#999) Use the provided by the SDK instead. The `grpctrace` instrumentation was moved to the `go.opentelemetry.io/contrib` repository and out of this repository. This move includes moving the `grpc` example to the `go.opentelemetry.io/contrib` as well. (#1027) The `WithSpan` method of the `Tracer` interface. The functionality this method provided was limited compared to what a user can provide themselves. 
It was removed with the understanding that if there is sufficient user need it can be added back based on actual user usage. (#1043) The `RegisterSpanProcessor` and `UnregisterSpanProcessor` functions. These were holdovers from an approach prior to the TracerProvider design. They were not used anymore. (#1077) The `oterror` package. (#1026) The `othttp` and `httptrace` instrumentations were moved to `go.opentelemetry.io/contrib`. (#1032) The `semconv.HTTPServerMetricAttributesFromHTTPRequest()` function no longer generates the high-cardinality `http.request.content.length` label. (#1031) Correct instrumentation version tag in Jaeger exporter. (#1037) The SDK span will now set an error event if the `End` method is called during a panic (i.e. it was deferred). (#1043) Move internally generated protobuf code from the `go.opentelemetry.io/otel` to the OTLP exporter to reduce dependency overhead. (#1050) The `otel-collector` example referenced outdated collector processors. (#1006) This release migrates the default OpenTelemetry SDK into its own Go module, decoupling the SDK from the API and reducing dependencies for instrumentation packages. The Zipkin exporter now has `NewExportPipeline` and `InstallNewPipeline` constructor functions to match the common pattern.
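Since this release splits the SDK (and, per the entries above, the OpenTracing bridge) into separate Go modules, a consumer now pulls each module in explicitly. The following is a minimal, illustrative sketch of that update; the `v0.11.0` tags are assumptions used for illustration and should be replaced with whichever release is actually being targeted.
```
# Illustrative only: fetch the API, the now-separate SDK module, and the
# OpenTracing bridge module. The v0.11.0 tags are assumed for this sketch;
# pin to the release your project actually uses.
go get go.opentelemetry.io/otel@v0.11.0
go get go.opentelemetry.io/otel/sdk@v0.11.0
go get go.opentelemetry.io/otel/bridge/opentracing@v0.11.0
go mod tidy
```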
},
{
"data": "These function build a new exporter with default SDK options and register the exporter with the `global` package respectively. (#944) Add propagator option for gRPC instrumentation. (#986) The `testtrace` package now tracks the `trace.SpanKind` for each span. (#987) Replace the `RegisterGlobal` `Option` in the Jaeger exporter with an `InstallNewPipeline` constructor function. This matches the other exporter constructor patterns and will register a new exporter after building it with default configuration. (#944) The trace (`go.opentelemetry.io/otel/exporters/trace/stdout`) and metric (`go.opentelemetry.io/otel/exporters/metric/stdout`) `stdout` exporters are now merged into a single exporter at `go.opentelemetry.io/otel/exporters/stdout`. This new exporter was made into its own Go module to follow the pattern of all exporters and decouple it from the `go.opentelemetry.io/otel` module. (#956, #963) Move the `go.opentelemetry.io/otel/exporters/test` test package to `go.opentelemetry.io/otel/sdk/export/metric/metrictest`. (#962) The `go.opentelemetry.io/otel/api/kv/value` package was merged into the parent `go.opentelemetry.io/otel/api/kv` package. (#968) `value.Bool` was replaced with `kv.BoolValue`. `value.Int64` was replaced with `kv.Int64Value`. `value.Uint64` was replaced with `kv.Uint64Value`. `value.Float64` was replaced with `kv.Float64Value`. `value.Int32` was replaced with `kv.Int32Value`. `value.Uint32` was replaced with `kv.Uint32Value`. `value.Float32` was replaced with `kv.Float32Value`. `value.String` was replaced with `kv.StringValue`. `value.Int` was replaced with `kv.IntValue`. `value.Uint` was replaced with `kv.UintValue`. `value.Array` was replaced with `kv.ArrayValue`. Rename `Infer` to `Any` in the `go.opentelemetry.io/otel/api/kv` package. (#972) Change `othttp` to use the `httpsnoop` package to wrap the `ResponseWriter` so that optional interfaces (`http.Hijacker`, `http.Flusher`, etc.) that are implemented by the original `ResponseWriter`are also implemented by the wrapped `ResponseWriter`. (#979) Rename `go.opentelemetry.io/otel/sdk/metric/aggregator/test` package to `go.opentelemetry.io/otel/sdk/metric/aggregator/aggregatortest`. (#980) Make the SDK into its own Go module called `go.opentelemetry.io/otel/sdk`. (#985) Changed the default trace `Sampler` from `AlwaysOn` to `ParentOrElse(AlwaysOn)`. (#989) The `IndexedAttribute` function from the `go.opentelemetry.io/otel/api/label` package was removed in favor of `IndexedLabel` which it was synonymous with. (#970) Bump github.com/golangci/golangci-lint from 1.28.3 to 1.29.0 in /tools. (#953) Bump github.com/google/go-cmp from 0.5.0 to 0.5.1. (#957) Use `global.Handle` for span export errors in the OTLP exporter. (#946) Correct Go language formatting in the README documentation. (#961) Remove default SDK dependencies from the `go.opentelemetry.io/otel/api` package. (#977) Remove default SDK dependencies from the `go.opentelemetry.io/otel/instrumentation` package. (#983) Move documented examples for `go.opentelemetry.io/otel/instrumentation/grpctrace` interceptors into Go example tests. (#984) A new Resource Detector interface is included to allow resources to be automatically detected and included. (#939) A Detector to automatically detect resources from an environment variable. (#939) Github action to generate protobuf Go bindings locally in `internal/opentelemetry-proto-gen`. (#938) OTLP .proto files from `open-telemetry/opentelemetry-proto` imported as a git submodule under `internal/opentelemetry-proto`. 
References to `github.com/open-telemetry/opentelemetry-proto` changed to `go.opentelemetry.io/otel/internal/opentelemetry-proto-gen`. (#942) Non-nil value `struct`s for key-value pairs will be marshalled using JSON rather than `Sprintf`. (#948) Removed dependency on `github.com/open-telemetry/opentelemetry-collector`. (#943) The `B3Encoding` type to represent the B3 encoding(s) the B3 propagator can inject. A value for HTTP supported encodings (Multiple Header: `MultipleHeader`, Single Header: `SingleHeader`) are included. (#882) The `FlagsDeferred` trace flag to indicate if the trace sampling decision has been deferred. (#882) The `FlagsDebug` trace flag to indicate if the trace is a debug trace. (#882) Add `peer.service` semantic attribute. (#898) Add database-specific semantic attributes. (#899) Add semantic convention for `faas.coldstart` and `container.id`. (#909) Add http content size semantic conventions. (#905) Include `http.request_content_length` in HTTP request basic attributes. (#905) Add semantic conventions for operating system process resource attribute keys. (#919) The Jaeger exporter now has a `WithBatchMaxCount` option to specify the maximum number of spans sent in a batch. (#931) Update `CONTRIBUTING.md` to ask for updates to `CHANGELOG.md` with each pull request. (#879) Use lowercase header names for B3 Multiple Headers. (#881) The B3 propagator `SingleHeader` field has been replaced with `InjectEncoding`. This new field can be set to combinations of the `B3Encoding` bitmasks and will inject trace information in these
},
{
"data": "If no encoding is set, the propagator will default to `MultipleHeader` encoding. (#882) The B3 propagator now extracts from either HTTP encoding of B3 (Single Header or Multiple Header) based on what is contained in the header. Preference is given to Single Header encoding with Multiple Header being the fallback if Single Header is not found or is invalid. This behavior change is made to dynamically support all correctly encoded traces received instead of having to guess the expected encoding prior to receiving. (#882) Extend semantic conventions for RPC. (#900) To match constant naming conventions in the `api/standard` package, the `FaaS*` key names are appended with a suffix of `Key`. (#920) `\"api/standard\".FaaSName` -> `FaaSNameKey` `\"api/standard\".FaaSID` -> `FaaSIDKey` `\"api/standard\".FaaSVersion` -> `FaaSVersionKey` `\"api/standard\".FaaSInstance` -> `FaaSInstanceKey` The `FlagsUnused` trace flag is removed. The purpose of this flag was to act as the inverse of `FlagsSampled`, the inverse of `FlagsSampled` is used instead. (#882) The B3 header constants (`B3SingleHeader`, `B3DebugFlagHeader`, `B3TraceIDHeader`, `B3SpanIDHeader`, `B3SampledHeader`, `B3ParentSpanIDHeader`) are removed. If B3 header keys are needed should be used instead. (#882) The B3 Single Header name is now correctly `b3` instead of the previous `X-B3`. (#881) The B3 propagator now correctly supports sampling only values (`b3: 0`, `b3: 1`, or `b3: d`) for a Single B3 Header. (#882) The B3 propagator now propagates the debug flag. This removes the behavior of changing the debug flag into a set sampling bit. Instead, this now follow the B3 specification and omits the `X-B3-Sampling` header. (#882) The B3 propagator now tracks \"unset\" sampling state (meaning \"defer the decision\") and does not set the `X-B3-Sampling` header when injecting. (#882) Bump github.com/itchyny/gojq from 0.10.3 to 0.10.4 in /tools. (#883) Bump github.com/opentracing/opentracing-go from v1.1.1-0.20190913142402-a7454ce5950e to v1.2.0. (#885) The tracing time conversion for OTLP spans is now correctly set to `UnixNano`. (#896) Ensure span status is not set to `Unknown` when no HTTP status code is provided as it is assumed to be `200 OK`. (#908) Ensure `httptrace.clientTracer` closes `http.headers` span. (#912) Prometheus exporter will not apply stale updates or forget inactive metrics. (#903) Add test for api.standard `HTTPClientAttributesFromHTTPRequest`. (#905) Bump github.com/golangci/golangci-lint from 1.27.0 to 1.28.1 in /tools. (#901, #913) Update otel-colector example to use the v0.5.0 collector. (#915) The `grpctrace` instrumentation uses a span name conforming to the OpenTelemetry semantic conventions (does not contain a leading slash (`/`)). (#922) The `grpctrace` instrumentation includes an `rpc.method` attribute now set to the gRPC method name. (#900, #922) The `grpctrace` instrumentation `rpc.service` attribute now contains the package name if one exists. This is in accordance with OpenTelemetry semantic conventions. (#922) Correlation Context extractor will no longer insert an empty map into the returned context when no valid values are extracted. (#923) Bump google.golang.org/api from 0.28.0 to 0.29.0 in /exporters/trace/jaeger. (#925) Bump github.com/itchyny/gojq from 0.10.4 to 0.11.0 in /tools. (#926) Bump github.com/golangci/golangci-lint from 1.28.1 to 1.28.2 in /tools. (#930) This release implements the v0.5.0 version of the OpenTelemetry specification. The othttp instrumentation now includes default metrics. 
(#861) This CHANGELOG file to track all changes in the project going forward. Support for array type attributes. (#798) Apply transitive dependabot go.mod dependency updates as part of a new automatic Github workflow. (#844) Timestamps are now passed to exporters for each export. (#835) Add new `Accumulation` type to metric SDK to transport telemetry from `Accumulator`s to `Processor`s. This replaces the prior `Record` `struct` use for this purpose. (#835) New dependabot integration to automate package upgrades. (#814) `Meter` and `Tracer` implementations accept an instrumentation version as an optional argument. This instrumentation version is passed on to exporters. (#811)"
},
{
"data": "(#802) The OTLP exporter includes the instrumentation version in telemetry it exports. (#811) Environment variables for Jaeger exporter are supported. (#796) New `aggregation.Kind` in the export metric API. (#808) New example that uses OTLP and the collector. (#790) Handle errors in the span `SetName` during span initialization. (#791) Default service config to enable retries for retry-able failed requests in the OTLP exporter and an option to override this default. (#777) New `go.opentelemetry.io/otel/api/oterror` package to uniformly support error handling and definitions for the project. (#778) New `global` default implementation of the `go.opentelemetry.io/otel/api/oterror.Handler` interface to be used to handle errors prior to an user defined `Handler`. There is also functionality for the user to register their `Handler` as well as a convenience function `Handle` to handle an error with this global `Handler`(#778) Options to specify propagators for httptrace and grpctrace instrumentation. (#784) The required `application/json` header for the Zipkin exporter is included in all exports. (#774) Integrate HTTP semantics helpers from the contrib repository into the `api/standard` package. #769 Rename `Integrator` to `Processor` in the metric SDK. (#863) Rename `AggregationSelector` to `AggregatorSelector`. (#859) Rename `SynchronizedCopy` to `SynchronizedMove`. (#858) Rename `simple` integrator to `basic` integrator. (#857) Merge otlp collector examples. (#841) Change the metric SDK to support cumulative, delta, and pass-through exporters directly. With these changes, cumulative and delta specific exporters are able to request the correct kind of aggregation from the SDK. (#840) The `Aggregator.Checkpoint` API is renamed to `SynchronizedCopy` and adds an argument, a different `Aggregator` into which the copy is stored. (#812) The `export.Aggregator` contract is that `Update()` and `SynchronizedCopy()` are synchronized with each other. All the aggregation interfaces (`Sum`, `LastValue`, ...) are not meant to be synchronized, as the caller is expected to synchronize aggregators at a higher level after the `Accumulator`. Some of the `Aggregators` used unnecessary locking and that has been cleaned up. (#812) Use of `metric.Number` was replaced by `int64` now that we use `sync.Mutex` in the `MinMaxSumCount` and `Histogram` `Aggregators`. (#812) Replace `AlwaysParentSample` with `ParentSample(fallback)` to match the OpenTelemetry v0.5.0 specification. (#810) Rename `sdk/export/metric/aggregator` to `sdk/export/metric/aggregation`. #808 Send configured headers with every request in the OTLP exporter, instead of just on connection creation. (#806) Update error handling for any one off error handlers, replacing, instead, with the `global.Handle` function. (#791) Rename `plugin` directory to `instrumentation` to match the OpenTelemetry specification. (#779) Makes the argument order to Histogram and DDSketch `New()` consistent. (#781) `Uint64NumberKind` and related functions from the API. (#864) Context arguments from `Aggregator.Checkpoint` and `Integrator.Process` as they were unused. (#803) `SpanID` is no longer included in parameters for sampling decision to match the OpenTelemetry specification. (#775) Upgrade OTLP exporter to opentelemetry-proto matching the opentelemetry-collector v0.4.0 release. (#866) Allow changes to `go.sum` and `go.mod` when running dependabot tidy-up. (#871) Bump github.com/stretchr/testify from 1.4.0 to 1.6.1. 
(#824) Bump github.com/prometheus/client_golang from 1.7.0 to 1.7.1 in /exporters/metric/prometheus. (#867) Bump google.golang.org/grpc from 1.29.1 to 1.30.0 in /exporters/trace/jaeger. (#853) Bump google.golang.org/grpc from 1.29.1 to 1.30.0 in /exporters/trace/zipkin. (#854) Bumps github.com/golang/protobuf from 1.3.2 to 1.4.2 (#848) Bump github.com/stretchr/testify from 1.4.0 to 1.6.1 in /exporters/otlp (#817) Bump github.com/golangci/golangci-lint from 1.25.1 to 1.27.0 in /tools (#828) Bump github.com/prometheus/client_golang from 1.5.0 to 1.7.0 in /exporters/metric/prometheus (#838) Bump github.com/stretchr/testify from 1.4.0 to 1.6.1 in /exporters/trace/jaeger (#829) Bump github.com/benbjohnson/clock from 1.0.0 to 1.0.3 (#815) Bump github.com/stretchr/testify from 1.4.0 to 1.6.1 in /exporters/trace/zipkin (#823) Bump github.com/itchyny/gojq from 0.10.1 to 0.10.3 in /tools (#830) Bump github.com/stretchr/testify from 1.4.0 to 1.6.1 in /exporters/metric/prometheus (#822) Bump google.golang.org/grpc from 1.27.1 to 1.29.1 in /exporters/trace/zipkin (#820) Bump google.golang.org/grpc from 1.27.1 to 1.29.1 in /exporters/trace/jaeger (#831) Bump github.com/google/go-cmp from 0.4.0 to"
},
{
"data": "(#836) Bump github.com/google/go-cmp from 0.4.0 to 0.5.0 in /exporters/trace/jaeger (#837) Bump github.com/google/go-cmp from 0.4.0 to 0.5.0 in /exporters/otlp (#839) Bump google.golang.org/api from 0.20.0 to 0.28.0 in /exporters/trace/jaeger (#843) Set span status from HTTP status code in the othttp instrumentation. (#832) Fixed typo in push controller comment. (#834) The `Aggregator` testing has been updated and cleaned. (#812) `metric.Number(0)` expressions are replaced by `0` where possible. (#812) Fixed `global` `handler_test.go` test failure. #804 Fixed `BatchSpanProcessor.Shutdown` to wait until all spans are processed. (#766) Fixed OTLP example's accidental early close of exporter. (#807) Ensure zipkin exporter reads and closes response body. (#788) Update instrumentation to use `api/standard` keys instead of custom keys. (#782) Clean up tools and RELEASING documentation. (#762) Support for `Resource`s in the prometheus exporter. (#757) New pull controller. (#751) New `UpDownSumObserver` instrument. (#750) OpenTelemetry collector demo. (#711) New `SumObserver` instrument. (#747) New `UpDownCounter` instrument. (#745) New timeout `Option` and configuration function `WithTimeout` to the push controller. (#742) New `api/standards` package to implement semantic conventions and standard key-value generation. (#731) Rename `Register` functions in the metric API to `New` for all `Observer` instruments. (#761) Use `[]float64` for histogram boundaries, not `[]metric.Number`. (#758) Change OTLP example to use exporter as a trace `Syncer` instead of as an unneeded `Batcher`. (#756) Replace `WithResourceAttributes()` with `WithResource()` in the trace SDK. (#754) The prometheus exporter now uses the new pull controller. (#751) Rename `ScheduleDelayMillis` to `BatchTimeout` in the trace `BatchSpanProcessor`.(#752) Support use of synchronous instruments in asynchronous callbacks (#725) Move `Resource` from the `Export` method parameter into the metric export `Record`. (#739) Rename `Observer` instrument to `ValueObserver`. (#734) The push controller now has a method (`Provider()`) to return a `metric.Provider` instead of the old `Meter` method that acted as a `metric.Provider`. (#738) Replace `Measure` instrument by `ValueRecorder` instrument. (#732) Rename correlation context header from `\"Correlation-Context\"` to `\"otcorrelations\"` to match the OpenTelemetry specification. (#727) Ensure gRPC `ClientStream` override methods do not panic in grpctrace package. (#755) Disable parts of `BatchSpanProcessor` test until a fix is found. (#743) Fix `string` case in `kv` `Infer` function. (#746) Fix panic in grpctrace client interceptors. (#740) Refactor the `api/metrics` push controller and add `CheckpointSet` synchronization. (#737) Rewrite span batch process queue batching logic. (#719) Remove the push controller named Meter map. (#738) Fix Histogram aggregator initial state (fix #735). (#736) Ensure golang alpine image is running `golang-1.14` for examples. (#733) Added test for grpctrace `UnaryInterceptorClient`. (#695) Rearrange `api/metric` code layout. (#724) Batch `Observer` callback support. (#717) Alias `api` types to root package of project. (#696) Create basic `othttp.Transport` for simple client instrumentation. (#678) `SetAttribute(string, interface{})` to the trace API. (#674) Jaeger exporter option that allows user to specify custom http client. (#671) `Stringer` and `Infer` methods to `key`s. (#662) Rename `NewKey` in the `kv` package to just `Key`. 
(#721) Move `core` and `key` to `kv` package. (#720) Make the metric API `Meter` a `struct` so the abstract `MeterImpl` can be passed and simplify implementation. (#709) Rename SDK `Batcher` to `Integrator` to match draft OpenTelemetry SDK specification. (#710) Rename SDK `Ungrouped` integrator to `simple.Integrator` to match draft OpenTelemetry SDK specification. (#710) Rename SDK `SDK` `struct` to `Accumulator` to match draft OpenTelemetry SDK specification. (#710) Move `Number` from `core` to `api/metric` package. (#706) Move `SpanContext` from `core` to `trace` package. (#692) Change traceparent header from `Traceparent` to `traceparent` to implement the W3C specification. (#681) Update tooling to run generators in all submodules. (#705) gRPC interceptor regexp to match methods without a service name. (#683) Use a `const` for padding 64-bit B3 trace"
},
{
"data": "(#701) Update `mockZipkin` listen address from `:0` to `127.0.0.1:0`. (#700) Left-pad 64-bit B3 trace IDs with zero. (#698) Propagate at least the first W3C tracestate header. (#694) Remove internal `StateLocker` implementation. (#688) Increase instance size CI system uses. (#690) Add a `key` benchmark and use reflection in `key.Infer()`. (#679) Fix internal `global` test by using `global.Meter` with `RecordBatch()`. (#680) Reimplement histogram using mutex instead of `StateLocker`. (#669) Switch `MinMaxSumCount` to a mutex lock implementation instead of `StateLocker`. (#667) Update documentation to not include any references to `WithKeys`. (#672) Correct misspelling. (#668) Fix clobbering of the span context if extraction fails. (#656) Bump `golangci-lint` and work around the corrupting bug. (#666) (#670) `Dockerfile` and `docker-compose.yml` to run example code. (#635) New `grpctrace` package that provides gRPC client and server interceptors for both unary and stream connections. (#621) New `api/label` package, providing common label set implementation. (#651) Support for JSON marshaling of `Resources`. (#654) `TraceID` and `SpanID` implementations for `Stringer` interface. (#642) `RemoteAddrKey` in the othttp plugin to include the HTTP client address in top-level spans. (#627) `WithSpanFormatter` option to the othttp plugin. (#617) Updated README to include section for compatible libraries and include reference to the contrib repository. (#612) The prometheus exporter now supports exporting histograms. (#601) A `String` method to the `Resource` to return a hashable identifier for a now unique resource. (#613) An `Iter` method to the `Resource` to return an array `AttributeIterator`. (#613) An `Equal` method to the `Resource` test the equivalence of resources. (#613) An iterable structure (`AttributeIterator`) for `Resource` attributes. zipkin export's `NewExporter` now requires a `serviceName` argument to ensure this needed values is provided. (#644) Pass `Resources` through the metrics export pipeline. (#659) `WithKeys` option from the metric API. (#639) Use the `label.Set.Equivalent` value instead of an encoding in the batcher. (#658) Correct typo `trace.Exporter` to `trace.SpanSyncer` in comments. (#653) Use type names for return values in jaeger exporter. (#648) Increase the visibility of the `api/key` package by updating comments and fixing usages locally. (#650) `Checkpoint` only after `Update`; Keep records in the `sync.Map` longer. (#647) Do not cache `reflect.ValueOf()` in metric Labels. (#649) Batch metrics exported from the OTLP exporter based on `Resource` and labels. (#626) Add error wrapping to the prometheus exporter. (#631) Update the OTLP exporter batching of traces to use a unique `string` representation of an associated `Resource` as the batching key. (#623) Update OTLP `SpanData` transform to only include the `ParentSpanID` if one exists. (#614) Update `Resource` internal representation to uniquely and reliably identify resources. (#613) Check return value from `CheckpointSet.ForEach` in prometheus exporter. (#622) Ensure spans created by httptrace client tracer reflect operation structure. (#618) Create a new recorder rather than reuse when multiple observations in same epoch for asynchronous instruments. #610 The default port the OTLP exporter uses to connect to the OpenTelemetry collector is updated to match the one the collector listens on by default. (#611) Fix `pre_release.sh` to update version in `sdk/opentelemetry.go`. 
(#607) Fix time conversion from internal to OTLP in OTLP exporter. (#606) Update `tag.sh` to create signed tags. (#604) New API package `api/metric/registry` that exposes a `MeterImpl` wrapper for use by SDKs to generate unique instruments. (#580) Script to verify examples after a new release. (#579) The dogstatsd exporter due to lack of support. This additionally removes support for statsd. (#591) `LabelSet` from the metric API. This is replaced by a `[]core.KeyValue` slice. (#595) `Labels` from the metric API's `Meter` interface. (#595) The metric"
},
{
"data": "became an interface which the SDK implements and the `export` package provides a simple, immutable implementation of this interface intended for testing purposes. (#574) Renamed `internal/metric.Meter` to `MeterImpl`. (#580) Renamed `api/global/internal.obsImpl` to `asyncImpl`. (#580) Corrected missing return in mock span. (#582) Update License header for all source files to match CNCF guidelines and include a test to ensure it is present. (#586) (#596) Update to v0.3.0 of the OTLP in the OTLP exporter. (#588) Update pre-release script to be compatible between GNU and BSD based systems. (#592) Add a `RecordBatch` benchmark. (#594) Moved span transforms of the OTLP exporter to the internal package. (#593) Build both go-1.13 and go-1.14 in circleci to test for all supported versions of Go. (#569) Removed unneeded allocation on empty labels in OLTP exporter. (#597) Update `BatchedSpanProcessor` to process the queue until no data but respect max batch size. (#599) Update project documentation godoc.org links to pkg.go.dev. (#602) This is a first official beta release, which provides almost fully complete metrics, tracing, and context propagation functionality. There is still a possibility of breaking changes. Add `Observer` metric instrument. (#474) Add global `Propagators` functionality to enable deferred initialization for propagators registered before the first Meter SDK is installed. (#494) Simplified export setup pipeline for the jaeger exporter to match other exporters. (#459) The zipkin trace exporter. (#495) The OTLP exporter to export metric and trace telemetry to the OpenTelemetry collector. (#497) (#544) (#545) Add `StatusMessage` field to the trace `Span`. (#524) Context propagation in OpenTracing bridge in terms of OpenTelemetry context propagation. (#525) The `Resource` type was added to the SDK. (#528) The global API now supports a `Tracer` and `Meter` function as shortcuts to getting a global `*Provider` and calling these methods directly. (#538) The metric API now defines a generic `MeterImpl` interface to support general purpose `Meter` construction. Additionally, `SyncImpl` and `AsyncImpl` are added to support general purpose instrument construction. (#560) A metric `Kind` is added to represent the `MeasureKind`, `ObserverKind`, and `CounterKind`. (#560) Scripts to better automate the release process. (#576) Default to to use `AlwaysSampler` instead of `ProbabilitySampler` to match OpenTelemetry specification. (#506) Renamed `AlwaysSampleSampler` to `AlwaysOnSampler` in the trace API. (#511) Renamed `NeverSampleSampler` to `AlwaysOffSampler` in the trace API. (#511) The `Status` field of the `Span` was changed to `StatusCode` to disambiguate with the added `StatusMessage`. (#524) Updated the trace `Sampler` interface conform to the OpenTelemetry specification. (#531) Rename metric API `Options` to `Config`. (#541) Rename metric `Counter` aggregator to be `Sum`. (#541) Unify metric options into `Option` from instrument specific options. (#541) The trace API's `TraceProvider` now support `Resource`s. (#545) Correct error in zipkin module name. (#548) The jaeger trace exporter now supports `Resource`s. (#551) Metric SDK now supports `Resource`s. The `WithResource` option was added to configure a `Resource` on creation and the `Resource` method was added to the metric `Descriptor` to return the associated `Resource`. (#552) Replace `ErrNoLastValue` and `ErrEmptyDataSet` by `ErrNoData` in the metric SDK. (#557) The stdout trace exporter now supports `Resource`s. 
(#558) The metric `Descriptor` is now included at the API instead of the SDK. (#560) Replace `Ordered` with an iterator in `export.Labels`. (#567) The vendor specific Stackdriver. It is now hosted on 3rd party vendor infrastructure. (#452) The `Unregister` method for metric observers as it is not in the OpenTelemetry specification. (#560) `GetDescriptor` from the metric SDK. (#575) The `Gauge` instrument from the metric API. (#537) Make histogram aggregator checkpoint consistent. (#438) Update README with import instructions and how to build and test. (#505) The default label encoding was updated to be unique. (#508) Use `NewRoot` in the othttp plugin for public endpoints. (#513) Fix data race in"
},
{
"data": "(#518) Skip test-386 for Mac OS 10.15.x (Catalina and upwards). #521 Use a variable-size array to represent ordered labels in maps. (#523) Update the OTLP protobuf and update changed import path. (#532) Use `StateLocker` implementation in `MinMaxSumCount`. (#546) Eliminate goroutine leak in histogram stress test. (#547) Update OTLP exporter with latest protobuf. (#550) Add filters to the othttp plugin. (#556) Provide an implementation of the `Header*` filters that do not depend on Go 1.14. (#565) Encode labels once during checkpoint. The checkpoint function is executed in a single thread so we can do the encoding lazily before passing the encoded version of labels to the exporter. This is a cheap and quick way to avoid encoding the labels on every collection interval. (#572) Run coverage over all packages in `COVERAGEMODDIR`. (#573) `RecordError` method on `Span`s in the trace API to Simplify adding error events to spans. (#473) Configurable push frequency for exporters setup pipeline. (#504) Rename the `exporter` directory to `exporters`. The `go.opentelemetry.io/otel/exporter/trace/jaeger` package was mistakenly released with a `v1.0.0` tag instead of `v0.1.0`. This resulted in all subsequent releases not becoming the default latest. A consequence of this was that all `go get`s pulled in the incompatible `v0.1.0` release of that package when pulling in more recent packages from other otel packages. Renaming the `exporter` directory to `exporters` fixes this issue by renaming the package and therefore clearing any existing dependency tags. Consequentially, this action also renames all exporter packages. (#502) The `CorrelationContextHeader` constant in the `correlation` package is no longer exported. (#503) `HTTPSupplier` interface in the propagation API to specify methods to retrieve and store a single value for a key to be associated with a carrier. (#467) `HTTPExtractor` interface in the propagation API to extract information from an `HTTPSupplier` into a context. (#467) `HTTPInjector` interface in the propagation API to inject information into an `HTTPSupplier.` (#467) `Config` and configuring `Option` to the propagator API. (#467) `Propagators` interface in the propagation API to contain the set of injectors and extractors for all supported carrier formats. (#467) `HTTPPropagator` interface in the propagation API to inject and extract from an `HTTPSupplier.` (#467) `WithInjectors` and `WithExtractors` functions to the propagator API to configure injectors and extractors to use. (#467) `ExtractHTTP` and `InjectHTTP` functions to apply configured HTTP extractors and injectors to a passed context. (#467) Histogram aggregator. (#433) `DefaultPropagator` function and have it return `trace.TraceContext` as the default context propagator. (#456) `AlwaysParentSample` sampler to the trace API. (#455) `WithNewRoot` option function to the trace API to specify the created span should be considered a root span. (#451) Renamed `WithMap` to `ContextWithMap` in the correlation package. (#481) Renamed `FromContext` to `MapFromContext` in the correlation package. (#481) Move correlation context propagation to correlation package. (#479) Do not default to putting remote span context into links. (#480) `Tracer.WithSpan` updated to accept `StartOptions`. (#472) Renamed `MetricKind` to `Kind` to not stutter in the type usage. (#432) Renamed the `export` package to `metric` to match directory structure. (#432) Rename the `api/distributedcontext` package to `api/correlation`. 
(#444) Rename the `api/propagators` package to `api/propagation`. (#444) Move the propagators from the `propagators` package into the `trace` API package. (#444) Update `Float64Gauge`, `Int64Gauge`, `Float64Counter`, `Int64Counter`, `Float64Measure`, and `Int64Measure` metric methods to use value receivers instead of pointers. (#462) Moved all dependencies of tools package to a tools directory. (#466) Binary propagators. (#467) NOOP propagator. (#467) Upgraded `github.com/golangci/golangci-lint` from `v1.21.0` to `v1.23.6` in `tools/`. (#492) Fix a possible nil-dereference crash (#478) Correct comments for `InstallNewPipeline` in the stdout exporter. (#483) Correct comments for `InstallNewPipeline` in the dogstatsd"
},
{
"data": "(#484) Correct comments for `InstallNewPipeline` in the prometheus exporter. (#482) Initialize `onError` based on `Config` in prometheus exporter. (#486) Correct module name in prometheus exporter README. (#475) Removed tracer name prefix from span names. (#430) Fix `aggregator_test.go` import package comment. (#431) Improved detail in stdout exporter. (#436) Fix a dependency issue (generate target should depend on stringer, not lint target) in Makefile. (#442) Reorders the Makefile targets within `precommit` target so we generate files and build the code before doing linting, so we can get much nicer errors about syntax errors from the compiler. (#442) Reword function documentation in gRPC plugin. (#446) Send the `span.kind` tag to Jaeger from the jaeger exporter. (#441) Fix `metadataSupplier` in the jaeger exporter to overwrite the header if existing instead of appending to it. (#441) Upgraded to Go 1.13 in CI. (#465) Correct opentelemetry.io URL in trace SDK documentation. (#464) Refactored reference counting logic in SDK determination of stale records. (#468) Add call to `runtime.Gosched` in instrument `acquireHandle` logic to not block the collector. (#469) Use stateful batcher on Prometheus exporter fixing regression introduced in #395. (#428) Global meter forwarding implementation. This enables deferred initialization for metric instruments registered before the first Meter SDK is installed. (#392) Global trace forwarding implementation. This enables deferred initialization for tracers registered before the first Trace SDK is installed. (#406) Standardize export pipeline creation in all exporters. (#395) A testing, organization, and comments for 64-bit field alignment. (#418) Script to tag all modules in the project. (#414) Renamed `propagation` package to `propagators`. (#362) Renamed `B3Propagator` propagator to `B3`. (#362) Renamed `TextFormatPropagator` propagator to `TextFormat`. (#362) Renamed `BinaryPropagator` propagator to `Binary`. (#362) Renamed `BinaryFormatPropagator` propagator to `BinaryFormat`. (#362) Renamed `NoopTextFormatPropagator` propagator to `NoopTextFormat`. (#362) Renamed `TraceContextPropagator` propagator to `TraceContext`. (#362) Renamed `SpanOption` to `StartOption` in the trace API. (#369) Renamed `StartOptions` to `StartConfig` in the trace API. (#369) Renamed `EndOptions` to `EndConfig` in the trace API. (#369) `Number` now has a pointer receiver for its methods. (#375) Renamed `CurrentSpan` to `SpanFromContext` in the trace API. (#379) Renamed `SetCurrentSpan` to `ContextWithSpan` in the trace API. (#379) Renamed `Message` in Event to `Name` in the trace API. (#389) Prometheus exporter no longer aggregates metrics, instead it only exports them. (#385) Renamed `HandleImpl` to `BoundInstrumentImpl` in the metric API. (#400) Renamed `Float64CounterHandle` to `Float64CounterBoundInstrument` in the metric API. (#400) Renamed `Int64CounterHandle` to `Int64CounterBoundInstrument` in the metric API. (#400) Renamed `Float64GaugeHandle` to `Float64GaugeBoundInstrument` in the metric API. (#400) Renamed `Int64GaugeHandle` to `Int64GaugeBoundInstrument` in the metric API. (#400) Renamed `Float64MeasureHandle` to `Float64MeasureBoundInstrument` in the metric API. (#400) Renamed `Int64MeasureHandle` to `Int64MeasureBoundInstrument` in the metric API. (#400) Renamed `Release` method for bound instruments in the metric API to `Unbind`. (#400) Renamed `AcquireHandle` method for bound instruments in the metric API to `Bind`. 
(#400) Renamed the `File` option in the stdout exporter to `Writer`. (#404) Renamed all `Options` to `Config` for all metric exports where this wasn't already the case. Aggregator import path corrected. (#421) Correct links in README. (#368) The README was updated to match latest code changes in its examples. (#374) Don't capitalize error statements. (#375) Fix ignored errors. (#375) Fix ambiguous variable naming. (#375) Removed unnecessary type casting. (#375) Use named parameters. (#375) Updated release schedule. (#378) Correct http-stackdriver example module name. (#394) Removed the `http.request` span in `httptrace` package. (#397) Add comments in the metrics SDK (#399) Initialize checkpoint when creating ddsketch aggregator to prevent panic when merging into a empty one. (#402) (#403) Add documentation of compatible exporters in the README. (#405) Typo"
},
{
"data": "(#408) Simplify span check logic in SDK tracer implementation. (#419) Unary gRPC tracing example. (#351) Prometheus exporter. (#334) Dogstatsd metrics exporter. (#326) Rename `MaxSumCount` aggregation to `MinMaxSumCount` and add the `Min` interface for this aggregation. (#352) Rename `GetMeter` to `Meter`. (#357) Rename `HTTPTraceContextPropagator` to `TraceContextPropagator`. (#355) Rename `HTTPB3Propagator` to `B3Propagator`. (#355) Rename `HTTPTraceContextPropagator` to `TraceContextPropagator`. (#355) Move `/global` package to `/api/global`. (#356) Rename `GetTracer` to `Tracer`. (#347) `SetAttribute` from the `Span` interface in the trace API. (#361) `AddLink` from the `Span` interface in the trace API. (#349) `Link` from the `Span` interface in the trace API. (#349) Exclude example directories from coverage report. (#365) Lint make target now implements automatic fixes with `golangci-lint` before a second run to report the remaining issues. (#360) Drop `GO111MODULE` environment variable in Makefile as Go 1.13 is the project specified minimum version and this is environment variable is not needed for that version of Go. (#359) Run the race checker for all test. (#354) Redundant commands in the Makefile are removed. (#354) Split the `generate` and `lint` targets of the Makefile. (#354) Renames `circle-ci` target to more generic `ci` in Makefile. (#354) Add example Prometheus binary to gitignore. (#358) Support negative numbers with the `MaxSumCount`. (#335) Resolve race conditions in `push_test.go` identified in #339. (#340) Use `/usr/bin/env bash` as a shebang in scripts rather than `/bin/bash`. (#336) Trace benchmark now tests both `AlwaysSample` and `NeverSample`. Previously it was testing `AlwaysSample` twice. (#325) Trace benchmark now uses a `[]byte` for `TraceID` to fix failing test. (#325) Added a trace benchmark to test variadic functions in `setAttribute` vs `setAttributes` (#325) The `defaultkeys` batcher was only using the encoded label set as its map key while building a checkpoint. This allowed distinct label sets through, but any metrics sharing a label set could be overwritten or merged incorrectly. This was corrected. (#333) Optimized the `simplelru` map for attributes to reduce the number of allocations. (#328) Removed unnecessary unslicing of parameters that are already a slice. (#324) This release contains a Metrics SDK with stdout exporter and supports basic aggregations such as counter, gauges, array, maxsumcount, and ddsketch. Metrics stdout export pipeline. (#265) Array aggregation for raw measure metrics. (#282) The core.Value now have a `MarshalJSON` method. (#281) `WithService`, `WithResources`, and `WithComponent` methods of tracers. (#314) Prefix slash in `Tracer.Start()` for the Jaeger example. (#292) Allocation in LabelSet construction to reduce GC overhead. (#318) `trace.WithAttributes` to append values instead of replacing (#315) Use a formula for tolerance in sampling tests. (#298) Move export types into trace and metric-specific sub-directories. (#289) `SpanKind` back to being based on an `int` type. (#288) URL to OpenTelemetry website in README. (#323) Name of othttp default tracer. (#321) `ExportSpans` for the stackdriver exporter now handles `nil` context. (#294) CI modules cache to correctly restore/save from/to the cache. (#316) Fix metric SDK race condition between `LoadOrStore` and the assignment `rec.recorder = i.meter.exporter.AggregatorFor(rec)`. (#293) README now reflects the new code structure introduced with these changes. 
(#291) Make the basic example work. (#279) This is the first release of open-telemetry go library. It contains api and sdk for trace and meter. Initial OpenTelemetry trace and metric API prototypes. Initial OpenTelemetry trace, metric, and export SDK packages. A wireframe bridge to support compatibility with OpenTracing. Example code for a basic, http-stackdriver, http, jaeger, and named tracer setup. Exporters for Jaeger, Stackdriver, and stdout. Propagators for binary, B3, and trace-context protocols. Project information and guidelines in the form of a README and CONTRIBUTING. Tools to build the project and a Makefile to automate the process. Apache-2.0 license. CircleCI build CI manifest files. CODEOWNERS file to track owners of"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This document explains how to trace Kata Containers components. The Kata Containers runtime and agent are able to generate trace spans, which allow the administrator to observe what those components are doing and how much time they are spending on each operation. An OpenTelemetry-enabled application creates a number of trace \"spans\". A span contains the following attributes: A name A pair of timestamps (recording the start time and end time of some operation) A reference to the span's parent span All spans need to be finished, or completed, to allow the OpenTelemetry framework to generate the final trace information (by effectively closing the transaction encompassing the initial (root) span and all its children). For Kata, the root span represents the total amount of time taken to run a particular component from startup to its shutdown (the \"run time\"). The runtime, which runs in the host environment, has been modified to optionally generate trace spans which are sent to a trace collector on the host. An OpenTelemetry system (such as ) uses a collector to gather up trace spans from the application for viewing and processing. For an application to use the collector, it must run in the same context as the collector. This poses a problem for tracing the Kata Containers agent since it does not run in the same context as the collector: it runs inside a virtual machine (VM). To allow spans from the agent to be sent to the trace collector, Kata provides a component. This runs in the same context as the collector (generally on the host system) and listens on a channel for traces generated by the agent, forwarding them on to the trace collector. Note: This design supports agent tracing without having to make changes to the image, but also means that can also benefit from agent tracing. The following diagram summarises the architecture used to trace the Kata Containers agent: ``` +--+ | Host | | | | ++ | | | OpenTelemetry | | | | Trace | | | | Collector | | | ++ | | ^ ++ | | | spans | Kata VM | | | +--+--+ | | | | | Kata | spans o +-+ | | | | Trace |<--| Kata | | | | | Forwarder | VSOCK o | Agent | | | | +--+ Channel | +-+ | | | ++ | +--+ ``` You must have a trace collector running. Although the collector normally runs on the host, it can also be run from inside a Docker image configured to expose the appropriate host ports to the collector. The method is the quickest and simplest way to run the collector for testing. If you wish to trace the agent, you must start the"
},
{
"data": "Notes: If agent tracing is enabled but the forwarder is not running, the agent will log an error (signalling that it cannot generate trace spans), but continue to work as normal. The trace forwarder requires a trace collector (such as Jaeger) to be running before it is started. If a collector is not running, the trace forwarder will exit with an error. By default, tracing is disabled for all components. To enable any form of tracing an `enable_tracing` option must be enabled for at least one component. Note: Enabling this option will only allow tracing for subsequently started containers. To enable runtime tracing, set the tracing option as shown: ```toml [runtime] enable_tracing = true ``` To enable agent tracing, set the tracing option as shown: ```toml [agent.kata] enable_tracing = true ``` Note: If both agent tracing and runtime tracing are enabled, the resulting trace spans will be \"collated\": expanding individual runtime spans in the Jaeger web UI will show the agent trace spans resulting from the runtime operation. The host kernel must support the VSOCK socket type. This will be available if the kernel is built with the `CONFIGVHOSTVSOCK` configuration option. The VSOCK kernel module must be loaded: ``` $ sudo modprobe vhost_vsock ``` The guest kernel must support the VSOCK socket type: This will be available if the kernel is built with the `CONFIGVIRTIOVSOCKETS` configuration option. > Note: The default Kata Containers guest kernel provides this feature. Agent tracing is only \"completed\" when the workload and the Kata agent process have exited. Although trace information can be inspected before the workload and agent have exited, it is incomplete. This is shown as `<trace-without-root-span>` in the Jaeger web UI. If the workload is still running, the trace transaction -- which spans the entire runtime of the Kata agent -- will not have been completed. To view the complete trace details, wait for the workload to end, or stop the container. is designed for high performance. It combines the best of two previous generation projects (OpenTracing and OpenCensus) and uses a very efficient mechanism to capture trace spans. Further, the trace points inserted into the agent are generated dynamically at compile time. This is advantageous since new versions of the agent will automatically benefit from improvements in the tracing infrastructure. Overall, the impact of enabling runtime and agent tracing should be extremely low. In normal operation, the Kata runtime manages the VM shutdown and performs certain optimisations to speed up this process. However, if agent tracing is enabled, the agent itself is responsible for shutting down the VM. This it to ensure all agent trace transactions are completed. This means there will be a small performance impact for container shutdown when agent tracing is enabled as the runtime must wait for the VM to shutdown fully. If you want to debug, further develop, or test tracing, is highly recommended. For working with the agent, you may also wish to to allow you to access the VM environment."
}
] |
{
"category": "Runtime",
"file_name": "tracing.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|