diff --git "a/12584701.csv" "b/12584701.csv" deleted file mode 100644--- "a/12584701.csv" +++ /dev/null @@ -1,4732 +0,0 @@ -issuekey,created,title,description,storypoints -110783356,2022-06-27 17:07:00.301,Support SGDbOps tolerations on the web console,"When #1707 is done, the web console should support for setting up SGDbOps tolerations. - -**Acceptance Criteria:** - -* [x] Support adding tolerations to any type of SGDbOps on the web console. -* [x] List the toleration info on the op summary -* [x] List the toleration info on the op details -* [x] Test the implementation",12 -110756642,2022-06-27 09:56:10.017,Change backup ConrJob concurrencyPolicy to Forbid,"When the automatic backup scheduling is too aggressive the backup Job are created in parallel. There is not really a good reason to run backups in parallel so we should avoid that. - -## Implementation plan - -Change CronJob field `.spec.concurrencyPolicy` to `Forbid` - -## Acceptance Criteria - -* [ ] Implement the change -* [ ] Tests",8 -110508474,2022-06-21 21:18:36.954,Support OIDC Auth for the AdminUI," - -### Problem to solve - -We would like to use KeyCloak as an OIDC provider for Web Console Auth - -### Further details - -Context: https://stackgres.slack.com/archives/C014UGLAD9R/p1655845008957569 - -We are wanting to avoid using Kubernetes Secrets to enable auth to the web console. Being able to leverage an OIDC provider, in our case KeyCloak, would be killer. - -### Proposal - - - -### Testing - - - - -## Acceptance Criteria - - - -### Links / references",40 -110428371,2022-06-20 15:55:29.889,Manual backups shouldnt delete the job when done,"Backups started from the SGBackupConfig ( aka automatic backups ) don't clean the Job when completed, so logs can be reviewed afterwards, when starting a backup from the UI ( aka Manual backups ) the job is cleaned when the backup is completed, that is very annoying, the only way is to get the logs is to wait for them in real time, not a good option if the process takes long, much better to simply not clean job, so log can be reviewed later..",4 -110424243,2022-06-20 15:07:50.622,Edit SGPostgresConfig form should only list custom parameters,"### Summary - -When editing an SGPostgresConfig from the web console, the form shows all the parameters on the configuration, including the default values which have not been set by the user, but by StackGres. - -![image](/uploads/ed367bd9779c447502e6d4e29289b0a2/image.png) - - -### Expected Behaviour - -The form should only list parameters which have been explicitly set by the user. - - -- StackGres version: `1.2.0`",12 -110252827,2022-06-16 17:10:07.653,Simplify action buttons names on CRD Details,"### Summary -CRDs Details views have a row of action buttons: `Edit`, `Clone`, and `Delete`. All those buttons are followed by the CRD Kind. This makes the actions section very verbose, specially on the Configurations: -- `Edit Configuration` -- `Clone Configuration` -- `Delete Configuration` - -Since there are more action items on that same row, we should simplify the naming of these buttons. We propose removing the CRD Kind and leave just the action itself: `Edit`, `Clone`, and `Delete`. - -Furthermore, on the Cluster Details view, the `Clone` button is actually named `Clone Cluster Configuration`. We propose renaming it to just `Clone`, because: -1. When the User clicks on `Clone Cluster Configuration`, a warning appears explaining that what's being cloned is the Cluster Configuration. -2. 
On the List views, all `Clone` icons are identical for all Kinds, and this has never felt misleading. - -### Environment -- StackGres version: 1\.2.0 - -### Relevant logs and/or screenshots -![Screenshot_2022-06-16_at_18.51.41](/uploads/19e91e1cec8197d0305f4c6999e7581e/Screenshot_2022-06-16_at_18.51.41.png) -![Screenshot_2022-06-16_at_18.51.49](/uploads/ac3743bc296cebe7d174f05abc79bdf0/Screenshot_2022-06-16_at_18.51.49.png)",4 -110251894,2022-06-16 16:45:05.621,Add button to go back to List view on Cluster Details,"### Summary -CRDs Details on the Web Console include a button to go back to the full List of resources of the same kind. This button exists on all CRDs except the Cluster. - -### Environment -- StackGres version: 1\.2.0",4 -110247195,2022-06-16 15:12:09.408,Pods can't start in Namespaces with Resource Quotas," -### Summary - -In Namespaces with ResourceQuotas assigned, the StatefulSet cannot start pods because not all containers have requests and limits set. - -### Current Behaviour - -Pods fail to create because not all containers have requests and limits set. - -#### Steps to reproduce - -1. Create a Namespace -2. Create a ResourceQuota - -```yaml - -apiVersion: v1 -kind: ResourceQuota -metadata: - labels: - app.kubernetes.io/instance: ttd-namespaces - name: kpop-test-quota - namespace: kpop-test -spec: - hard: - limits.cpu: 2 - limits.memory: 4Gi - requests.cpu: 1 - requests.memory: 2Gi -``` - -3. Create a Cluster -4. Inspect the StatefulSet - -### Expected Behaviour - -The pods should be able to be scheduled. - -### Possible Solution - -All containers (including initContainers) should have requests and limits set. - -### Environment - -- StackGres version: - -- Kubernetes version: 1.19-1.21 -- Cloud provider or hardware configuration: EKS - - -### Relevant logs and/or screenshots - -``` - Warning FailedCreate 4s (x104 over 42h) statefulset-controller create Pod pobrien-test-0 in StatefulSet pobrien-test failed error: pods ""pobrien-test-0"" is forbidden: failed quota: kpop-test-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory -```",8 -110244908,2022-06-16 14:31:18.795,Add in the UI support to the new minor PG version 14.4,"We need to add support to a new minor version. PostgreSQL version 14.4. - -We need also to add an expressive warning for each minor previous release in the 14 major. The reason is that this minor versions could generate a data corruption and different bugs that are fixed and we should be clear that is dangerous. - -Acceptance criteria: -- [x] Add the support to the new minor release. -- [x] Create the Warning messages for the UI.",4 -110196135,2022-06-15 21:32:03.094,Support cert-manager Certificates for the Stackgres Operator," - -### Problem to solve - -We would like to use [cert-manager](https://cert-manager.io/) issued TLS certificates instead of passing our own in via values.yaml or using a self-signed certificate. - -### Further details - -This would considerably ease Operations on our side as we already have a working cert-manager installation in all of our clusters. 
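For illustration only, this is roughly how such a certificate would be requested from an existing cert-manager installation; all names below are hypothetical and the issuer is assumed to already exist. cert-manager then materializes it as a `kubernetes.io/tls` Secret of the shape referenced in the proposal below.

```yaml
# Hypothetical cert-manager Certificate; cert-manager writes the resulting
# key pair and CA into the Secret named by .spec.secretName.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: stackgres-operator-tls          # illustrative name
  namespace: stackgres
spec:
  secretName: stackgres-operator-tls    # Secret the operator would mount
  dnsNames:
    - stackgres-operator.stackgres.svc  # service DNS name used by the operator webhooks
  issuerRef:
    name: internal-ca                   # illustrative; must reference an existing issuer
    kind: ClusterIssuer
```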
- -### Proposal - -I think the first step would be to generate the self-signed certificates as a `kubernetes.io/tls` Secret that matches the format that cert-manager creates: - -``` -apiVersion: v1 -data: - ca.crt: - tls.crt: - tls.key: -kind: Secret -metadata: - annotations: - name: test-tls-cert - namespace: test -type: kubernetes.io/tls -``` - -After that I believe we would want a flag in values.yaml that would allow us to easily turn off cert generation: - -``` -cert: - auto_generate: true -``` - -I think we would also probably want to be able to name the Secret that would get volume mounted? - -If you know of a workaround here I am all ears. I searched through the docs and Issues and couldn't find anything relevant. - -### Testing - - - - -## Acceptance Criteria - -* [x] Allow to use an already existent secrets for both operator and REST API -* [x] Add a flag in order to create cert manager custom resources that allow to create the operator secret automatically -* [x] Tests -### Links / references",16 -110108366,2022-06-14 16:06:51.483,Cluster bootstrap completed event is updated continuously,"### Summary - -Cluster bootstrap completed event is updated continuously. - -``` -Normal ClusterBootstrapCompleted 8s (x178 over 28m) cluster-controller Cluster bootstrap completed -``` - -#### Steps to reproduce - -1. Create a cluster - -### Expected Behaviour - -Cluster bootstrap completed event is create once. - -### Environment - -- StackGres version: 1.2.0 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ?",1 -110099182,2022-06-14 14:02:03.667,Resource name not shown on breadcrumbs,"### Summary - -When browsing any resource on the web console, the breadcrumbs do not include the name of the requested resource. - -![image](/uploads/487e33bb2206767e8c4f71463a870cbf/image.png) - - -#### Steps to reproduce - -- Enter the web console -- Select any resource and click on its name to see its details -- On the resource details screen the name is not included on the breadcrumbs - - - -### Environment - -- StackGres version: `1.2.0`",4 -110098121,2022-06-14 13:52:16.928,Missing service status on SGCluster and SGDistributedLogs details,"### Summary - -When listing services details for SGClusters and SGDistributedLogs, there's no distinction between enabled and disabled services, which might be misleading. - -![image](/uploads/e9519916330549568b0103557933f35f/image.png) - -![image](/uploads/73fbf5964287dbb112bbfae355beade4/image.png) - - -#### Steps to reproduce - -* Enter the web console -* Create an SGCluster or an SGDistributedLog disabling either the Primary or Replicas service -* Enter the resource details -* The service is shown regardless of its status, but the later is not shown at all - - -### Expected Behaviour - -Services statuses should always be shown - - -### Environment - -- StackGres version: `1.2.0`",2 -110036659,2022-06-13 15:32:04.139,Change text of Close Details button,"### Summary -On the web console, CRD details sections have a `Close Details` button. The `Close Details` button redirects to the List view of the parent Kind. The text of the button can be misleading, since the User may think it literally closes the details view, in the sense of it redirects back to the previous page. The text of the button should be improved to make it clear it will show the List view of the parent Kind. 
- -Text proposal: `Go to [kind] List` - -### Environment -- StackGres version: 1\.2.0",2 -110021894,2022-06-13 11:36:22.122,Boolean specs in negative should be expressed in positive,"### Summary -On the web console, there are boolean specs that are expressed in negative (e.g. `Disable Metrics Exporter`, `No Kill Backend`). Some of them are expressed in a 'reverse' manner (in positive, i.e. `Metrics Exporter`), but others are not. All specs expressed in negative should be expressed in positive. - -### Environment -- StackGres version: 1\.2.0",8 -110021056,2022-06-13 11:20:06.336,Wait Timeout on Repack databases appears empty,"### Summary -When adding databases to a Repack operation, the `Wait Timeout` field appears empty on all databases after the first one. - -#### Steps to reproduce -1. Go to `Create DbOps` and select `Repack`. -2. Enable `Database Specific Options`. -3. Add a new database. -4. The `Wait Timeout` field of the new database appears empty. - -### Expected Behaviour -The default value of `Wait Timeout` should be `Inherit from global settings`. - -### Environment -- StackGres version: 1\.2.0 - -### Relevant logs and/or screenshots -![Screenshot_2022-06-13_at_12.30.53](/uploads/49ea9802dab5b558b3c0635e294e665b/Screenshot_2022-06-13_at_12.30.53.png)",6 -109930967,2022-06-10 15:57:05.338,Fix specs display on details views,"### Summary -Some specs of the details view need to be fixed: -- Major Version Upgrade details: Backup Path appears even if it has no value. -- Cluster details: Cluster Pod Anti Affinity only appears if disabled. -- Logs Server details: Cluster Pod Anti Affinity only appears if disabled. - -### Environment -- StackGres version: 1\.2.0",2 -109844025,2022-06-09 12:13:47.242,Close Details should return to previous screen,"### Summary -`Close Details` button redirects to the List view of the CRD, instead of showing the previous screen. - - -#### Steps to reproduce -1. Go to any Cluster Details. -2. Click on the eye icon of any CRD (e.g. the Instance Profile). -3. The eye icon opens the Details view of that specific CRD (in this case, the details of the Instance Profile). -4. Click on `Close Details`. -5. The User is redirected to the List view of that CRD (in this case, the Instance Profiles List), instead of returning to the previous screen (in this case, Cluster Details). - -### Expected Behaviour -When clicked on `Close Details` the details should close and show the previous screen. - -### Possible Solution -Make `Close Details` behave as the `Cancel` button on the forms. - -### Environment - -- StackGres version: 1\.2.0",2 -109653188,2022-06-06 14:24:03.527,Support topologySpreadConstraints for SGCluster,"### Problem to solve - -Support [`topologySpreadConstraints`](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) in the `SGCluster` custom resource. 
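For context, a single entry of the upstream Kubernetes type that the proposal below would map verbatim looks roughly like this (the label value is illustrative):

```yaml
# Sketch of one TopologySpreadConstraint as defined by the Kubernetes v1.22 API.
topologySpreadConstraints:
  - maxSkew: 1                                  # max allowed pod count difference between domains
    topologyKey: topology.kubernetes.io/zone    # spread across availability zones
    whenUnsatisfiable: DoNotSchedule            # or ScheduleAnyway
    labelSelector:
      matchLabels:
        cluster-name: my-cluster                # illustrative selector for the cluster pods
```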
- -### Proposal - -Proposed section to map to `.spec.template.spec.topologySpreadConstraints` section of generated `StatefulSet`: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -spec: - pods: - scheduling: - topologySpreadConstraints: [ ] # the same as https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#topologyspreadconstraint-v1-core -``` - -## Acceptance Criteria - -* [ ] Implement the change in the SGCluster CRD -* [ ] Implement the change in the REST API -* [ ] Tests -* [ ] Documentation - -### Links / references - -* https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#topologyspreadconstraint-v1-core",8 -109648195,2022-06-06 13:12:52.194,Remove support for StackGres 1.0,"Remove support for StackGres version 1.0. In particular: - -* [ ] Remove code that support StackGres version 1.0 in the reconciliation cycle. -* [ ] Remove code that support StackGres version 1.0 in the webhooks. -* [ ] Remove code that support StackGres version 1.0 in the E2E tests. -* [ ] When upgrading the helm chart from StackGres 1.0 an error will be showed and the upgrade must fail. -* [ ] Upgrading from StackGres 1.0 or any other previous version, including alpha and beta version may be performed if value `forceUpgradeFromUnsupportedVersion` is set to `true`. The message when trying to upgrade from an unsupported version have to mention that it is possible to perform the upgrade using the `forceUpgradeFromUnsupportedVersion` value but has the drawback that the cluster will not be reconciled anymore and some operation may stop working until the user performs a security upgrade database operation.",8 -109645391,2022-06-06 12:17:57.184,Unify switches texts on forms,"### Summary -We should unify switches texts. Right now we use ON/OFF and YES/NO. We should use `ENABLE: YES/NO` in all switches on all forms. - -### Environment - -- StackGres version: 1\.2.0 - -### Relevant logs and/or screenshots -![image-1](/uploads/181d84bb8c2e7bc362213b976c679f8c/image-1.png)",8 -109641754,2022-06-06 10:58:50.023,Namespaces Overview header appears when logged out but won't show on login,"### Summary - -On the web console, when users log in successfully, they are taken to the ""Namespaces Overview"" section, but there's no header shown on top of the namespaces' cards. - -![image](/uploads/bf7a6c906846650088506e86272fa544/image.png) - - -Also, once the user has navigated through different sections and goes back to the ""Namespaces Overview"". If the user logs out of the web console, the ""Namespaces Overview"" header still appears on top of the login form. - -![image](/uploads/ac61c7ca3d1d73b0e21220ceec010bcc/image.png) - - - -#### Steps to reproduce - -- Login to the web console -- The ""Namespaces Overview"" section loads with no header on top (which is wrong) -- Navigate to any other section -- Head back to the ""Namespaces Overview"" section by clicking on the StackGres logo -- The ""Namespaces Overview"" header appears (which is right) -- Logout of the web console -- The ""Namespaces Overview"" header still appears (which is wrong) - - -### Expected Behaviour - -The behavior on every header on the web console should be consistent. 
They should always appear when logged in and should never be shown when logged out - - -### Environment - -- StackGres version: `1.2.0`",2 -109569116,2022-06-03 15:37:20.291,Review and adjust tooltips that won't match reverse-logic specs on the web console,"## Summary - -On the web console, when showing boolean specs which are related to disabling options like `disableMetricsExporter` or `disableClusterPodAntiaffinity`, the UI acts in a ""reverse"" manner, in the sense that to ""disable"" the corresponding spec you must set the value to false. - -![image](/uploads/e27915d6f1f5af80a1d1567e72dd5f26/image.png) - -For such reasons, some tooltips might not match properly the information shown on the web console: - -![image](/uploads/bc8dbdfcfb2215197ce3bba95e9939ca/image.png) - - -## Proposed Solution - -We should review such specs and their corresponding tooltips and rewrite on the web console those who do not match. - - -## Acceptance criteria - -- [ ] Implement the proposed solution",2 -109518872,2022-06-02 16:17:57.143,Pods and time range selectors missing on monitoring tab,"### Summary - -Pods and time range selector missing on monitoring tab - -![image](/uploads/0d641e6aa7bd1f79453e33292f014c22/image.png) - - - -#### Steps to reproduce - -- Enter the web console -- Select any cluster with one or more active pods -- Enter the ""Monitoring"" tab -- The ""Time Range"" and ""Pods"" dropdown selectors are not shown - -### Expected Behaviour - -Both dropdown selectors should appear on the right side of the cluster tabs - - -### Possible Solution - -- The code for such selectors is missing after the changes introduced on https://gitlab.com/ongresinc/stackgres/-/merge_requests/1084 - -### Environment - -- StackGres version: `1.2.0`",2 -109506041,2022-06-02 13:00:54.358,Grafana tab is visible even if Monitoring is not enabled,"### Summary -Grafana tab is visible even if Monitoring is not available. - -This happens when clusters are created without enabling `prometheusAutobind`, which causes the tab to appear and show all graphs with ""No data"" messages on them - -### Environment -- StackGres version: 1.2.0 - -### Relevant logs and/or screenshots -![image](/uploads/0cfe4b3033c8a14c4ce30bfd78407477/image.png)",2 -109429338,2022-06-01 08:30:54.488,Update and improve the UI Connection Info popup,"Current pop-up looks like this: - -![image](/uploads/f49c7a1427b4fbe1283fa2b9807aeec0/image.png) - -There are two ideas to update/improve here: - -* ~The DNS proposed is still using the legacy `${clusterName}-primary` DNS name. Should be replaced by the now current `${clusterName}`.~ (will fixed by #1868) -* If the cluster is created with the Babelfish flavor, an additional text should appear to explain how to connect via the SQL Server protocol. The command should look like `kubectl -n ${namespace} run usql --rm -it --image ongres/postgres-util --restart=Never -- usql --password ms://babelfish@${clusterName}:1433` and include information on how to retrieve the secret (`kubectl -n ${namespace} get secret ${clusterName} --template '{{ printf ""%s"" (index .data ""babelfish-password"" | base64decode) }}'`) - -Acceptance criteria: -* [x] Implement the change",4 -109204776,2022-05-27 09:14:52.325,Namespace selector won't stay open,"### Summary - -Sometimes, after navigating through different sections of the web console, when clicking the namespace selector on the sidebar, it won't remain open. 
- -![namespace-error](/uploads/38399c76e954410726b6b977f1f67824/namespace-error.mov) - - -#### Steps to reproduce - -- Enter the web console -- Navigate to any section on any namespace -- Go to Namespaces Overview (click on StackGres logo) -- Navigate to any section on any namespace -- Click on the namespace selector on the sidebar - -### Expected Behaviour - -The behavior on the namespace selector should be consistent and it should remain open when toggling such mode. - - -### Environment - -- StackGres version: `1.2.0`",1 -109201197,2022-05-27 08:00:04.448,Backups Job show some permission errors in the log,"### Summary - -Backups Job show some permission errors in the log - -### Current Behaviour - -Permission errors are showed in the logs - -#### Steps to reproduce - -1. Create a cluster with backup config in a namespace -2. Create a cluster with backup config in another namespace -3. Create a backup for the first cluster -4. Create a backup for the second cluster - -### Expected Behaviour - -Permission errors are not showed in the logs - -### Environment - -- StackGres version: 1.2.0 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ? - -### Relevant logs and/or screenshots - -``` -Lock acquired -Error from server (Forbidden): sgbackups.stackgres.io is forbidden: User ""system:serviceaccount:ndb-stage1:ndb-metadata-store-stage-backup"" cannot list resource ""sgbackups"" in API group ""stackgres.io"" in the namespace ""ndb-dev2"" -Error from server (Forbidden): sgbackups.stackgres.io is forbidden: User ""system:serviceaccount:ndb-stage1:ndb-metadata-store-stage-backup"" cannot list resource ""sgbackups"" in API group ""stackgres.io"" in the namespace ""ndb-dev3"" -Creating backup CR -apiVersion: stackgres.io/v1 -kind: SGBackup -metadata: - annotations: - scheduled-backup: ""true"" - stackgres.io/operatorVersion: 1.2.0-SNAPSHOT - creationTimestamp: ""2022-05-18T00:00:08Z"" - generation: 1 - name: ndb-metadata-store-stage-2022-05-18-00-00-07 - namespace: ndb-stage1 - resourceVersion: ""58474934"" - uid: 8588c36a-d51c-49fc-9479-40c93d36dfbc -spec: - managedLifecycle: true - sgCluster: ndb-metadata-store-stage -status: - process: - jobPod: ndb-metadata-store-stage-backup-27547200--1-f4ljp - status: Running - sgBackupConfig: - baseBackups: - compression: lz4 - storage: - s3: - awsCredentials: - secretKeySelectors: - accessKeyId: - key: accessKeyId - name: eks-backup-bucket-secret - secretAccessKey: - key: secretAccessKey - name: eks-backup-bucket-secret - bucket: era-stackgres-backup - type: s3 -Retrieving primary and replica -Primary is pod/ndb-metadata-store-stage-0 -Replica not found, primary will be used for cleanups -Performing backup -Backup completed -Extracting pg_controldata -Extraction of pg_controldata completed -Retain backups -Check if backup base_000000010000000000000004 has to be retained and will retain 3 backups -Mark base_000000010000000000000004 as permanent and will retain 2 more backups -INFO: 2022/05/18 00:00:15.029810 Retrieving previous related backups to be marked: toPermanent=true -INFO: 2022/05/18 00:00:15.310107 Retrieved backups to be marked, marking: [base_000000010000000000000004] -Cleaning up impermanent backups -INFO: 2022/05/18 00:00:15.856539 retrieving permanent objects -INFO: 2022/05/18 00:00:16.027291 Found permanent objects: backups=map[base_000000010000000000000004:true], wals=map[000000010000000000000004:true] -INFO: 2022/05/18 00:00:16.186865 Start delete -INFO: 2022/05/18 00:00:16.369698 Objects in folder: -INFO: 2022/05/18 
00:00:16.369744 will be deleted: wal_005/000000010000000000000001.lz4 -INFO: 2022/05/18 00:00:16.369752 will be deleted: wal_005/000000010000000000000002.lz4 -INFO: 2022/05/18 00:00:16.369759 will be deleted: wal_005/000000010000000000000003.lz4 -Check if backup base_000000010000000000000004 has to be set permanent or impermanent -Mark base_000000010000000000000004 as impermanent -INFO: 2022/05/18 00:00:17.458217 Retrieving previous related backups to be marked: toPermanent=false -INFO: 2022/05/18 00:00:17.602567 retrieving permanent objects -INFO: 2022/05/18 00:00:17.863245 Retrieved backups to be marked, marking: [base_000000010000000000000004] -Reconciliation of backups completed -Listing existing backups -Updating backup CR as completed -sgbackup.stackgres.io/ndb-metadata-store-stage-2022-05-18-00-00-07 patched -Backup CR updated as completed -Reconcile backup CRs -Reconciliation of backup CRs completed -Lock released -```",8 -109167697,2022-05-26 15:36:17.431,Details about Distributed logs configuration not shown in logs server section,"### Summary - -In a cluster with distributed logs configured in a different namespace of the SGCluster, the information is not show in the Logs servers section. - -![image](/uploads/4a6c85e656ae2c56e915733478194684/image.png) - -But the configuration exist(SGCluster configurations): - -![image](/uploads/38a9358ae7015efdb716f53d70b87b68/image.png) - - -and it is working: - -![image](/uploads/4a4afa5929a3fd7b94f4d5889a828561/image.png) - -if you go to `/namespace/SGDistributedLogs/monitoring.distributedlogs`: - -![image](/uploads/301cbee8a031223d780b224b17275aa1/image.png) - - -#### Steps to reproduce - -1- Configure a SGCluster with a SGDistributed logs in a different namespace: - -``` -apiVersion: stackgres.io/v1 -kind: SGDistributedLogs -metadata: - name: distributedlogs - namespace: monitoring -spec: - persistentVolume: - size: 100Gi -``` - -2. Add to the SGCluster definition: - -``` - distributedLogs: - sgDistributedLogs: 'monitoring.distributedlogs' - retention: ""15 days"" -``` -3. Then check the UI Logs server section. - -### Expected Behaviour - -The UI show information about the SGDistributedlogs configured in the Cluster. - -### Possible Solution - -For the UI the Logs Servers section information should be global and not namespace dependent. - -### Environment - -- StackGres version: 1.2.0 -- Kubernetes version: 1.21 -- Cloud provider or hardware configuration: GKE - - -### Relevant logs and/or screenshots -``` -kubectl get pods -n monitoring distributedlogs-0 - -NAME READY STATUS RESTARTS AGE -distributedlogs-0 3/3 Running 0 23h -```",1 -109163975,2022-05-26 14:14:44.252,Lower the initial param autovacuum_work_mem,"The objective of this change is lower the amount of memory allocated to the parameter: `autovacuum_work_mem` to the value 512 MB. Currently, our initial setup is 2 GB. 
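As a sketch, this is how the value maps into an `SGPostgresConfig`; the resource name and Postgres version below are illustrative:

```yaml
# Illustrative SGPostgresConfig pinning the proposed lower value explicitly.
apiVersion: stackgres.io/v1
kind: SGPostgresConfig
metadata:
  name: pgconfig              # illustrative name
spec:
  postgresVersion: '14'       # any supported major version
  postgresql.conf:
    autovacuum_work_mem: '512MB'
```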
- -For reference: https://postgresqlco.nf/doc/en/param/autovacuum_work_mem/ - - -**Acceptance criteria:** -- [ ] Set the new lower value -- [ ] Test the new default config",4 -109152094,2022-05-26 10:16:29.955,Remove support for 1.0,"Remove support for version 1.0 of StackGres - -## Acceptance Criteria - -* [ ] Remove the version 1.0 from `StackGresVersion` and `Components` classes -* [ ] Remove all the code that is specifically handling the 1.0 version (conciliation, conversion, validation and mutation) -* [ ] Remove all 1.0 E2E tests",2 -109118887,2022-05-25 17:03:31.313,Monitoring tab is empty when there are no active pods,"### Summary - -When there are no active pods for a cluster, the monitoring tab appears empty with no information whatsoever. - -![image](/uploads/88aa9ee41e56ae8c73eb390787811240/image.png) - - - -#### Steps to reproduce - -- Enter the web console -- Look for any cluster which has no active pods -- Enter the ""Monitoring"" tab -- The tab shows no content - -### Expected Behaviour - -- If there is no actual data or reports coming from Grafana, there should be some kind of message shown to the user - - -### Environment - -- StackGres version: `1.2.0` - - -### Acceptance criteria - -* [x] Fix the bug",1 -109101992,2022-05-25 13:09:33.782,Not Found appears on top of Header on Details views,"### Summary -When requesting details of a non-existing CRD, the Not Found section appears on top of the Header section. -This only happens on the Details views. - -#### Steps to reproduce -1. Modify the URL of any Details view to point to a non-existing CRD. -2. The Not Found section will appear on top of the Header. - -### Expected Behaviour -The Header section should not be displayed on Not Found. - -### Environment -- StackGres version: 1.2.0 - -### Relevant logs and/or screenshots -![Screenshot_2022-05-25_at_14.49.34](/uploads/b81ed861e2de5e48d54065df60c8ad6f/Screenshot_2022-05-25_at_14.49.34.png) - - -Acceptance criteria: - - [x] Implement the feature. - - [x] Create tests",2 -109053786,2022-05-24 17:53:42.571,Error message for pods scheduling is repeated during SGCluster creation,"### Summary - -When creating an SGCluster with errors on the `operator` spec for `spec.pods.scheduling.tolerations`, the text for the error message coming from the REST API is repeated. - -- How it looks on the web console: - -![image](/uploads/8b37c232f17f4d311e1c5fc982cd9d0e/image.png) - - -- How it comes on the REST API response: - -``` -{ - ""type"":""https://stackgres.io/doc/1.2-dev/api/responses/error#constraint-violation"", - ""title"":""Some fields do not comply with the syntactic rules"", - ""detail"":""operator must be Exists when key is empty.\noperator must be Exists when key is empty."", - ""status"":422, - ""fields"":[ - ""spec.pods.scheduling.tolerations[0].key"", - ""spec.pods.scheduling.tolerations[0].operator"" - ] -} -``` - - -#### Steps to reproduce - -- Enter the web console -- Enter the Create Cluster form -- Try to create a cluster with a toleration with no key set - - -### Expected Behaviour - -- The text should only appear once on the REST API response - - - -### Environment - -- StackGres version: `1.2.0`",4 -108970705,2022-05-23 11:32:59.969,"Implement ""ManagedSQL"" as a new SGScripts CRD.","The description on the task is on the epic#18. - -The goal is to implement ""ManagedSQL"" as a new SGScripts CRD. 
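As a rough orientation only, a minimal manifest following the model sketched in the epic could look like the following; the resource name, database, and inline SQL are purely illustrative:

```yaml
# Hypothetical minimal SGScripts, per the model proposed in the referenced epic.
apiVersion: stackgres.io/v1beta1
kind: SGScripts
metadata:
  name: create-app-user        # illustrative name
spec:
  scripts:
    - name: create-user
      database: postgres       # defaults to postgres per the proposed spec
      script: |
        CREATE ROLE app LOGIN;
```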
- -The implementation is defined on the epic &18 - -**Acceptance criteria:** -- [ ] Implement the feature -- [ ] Create the tests -- [ ] Add the docuementation",24 -108961749,2022-05-23 09:25:12.630,Default boundaries for version binder are not the latest and oldest," -Default boundaries for version binder are not the latest and oldest - - -## Implementation plan - -Set latest and oldest as default boundaries for version binder - -## Acceptance Criteria - -* [ ] All tests pass",1 -108874949,2022-05-20 13:10:46.919,View Connection Info is not styled for clusters with no active pods,"### Summary -When no pods are available yet for a cluster, the `View Connection Info` popup appears unstyled. - -#### Steps to reproduce -1. Go to a cluster that has no active pods. -2. Click on `View Connection Info`. -3. A popup appears, but it is not styled. - -### Expected Behaviour -The same styles should be applied, no matter if pods are available or not. - -### Environment - -- StackGres version: `1.2.0` - -### Relevant logs and/or screenshots -![Screenshot_2022-05-20_at_15.08.17](/uploads/598c5584dbc7e1b8105c5ae17cf057ea/Screenshot_2022-05-20_at_15.08.17.png)",1 -108747185,2022-05-18 13:28:19.464,Parameter max_wal_senders not applied during cluster creation,"### Summary - -Parameter [`max_wal_senders`](https://postgresqlco.nf/doc/en/param/max_wal_senders/) is not applied on a new SGCluster. - -#### Steps to reproduce - -1. Create a SGPostgresConfig with `max_wal_senders` set to 50 -2. Create an SGCluster referencing the above configuration - -### Expected Behaviour - -Parameter [`max_wal_senders`](https://postgresqlco.nf/doc/en/param/max_wal_senders/) is applied to the new SGCluster. - -### Environment - -- StackGres version: 1.1.0 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ? - -Acceptance criteria: -- [ ] Fix the bug -- [ ] Create the test",4 -108509222,2022-05-13 13:07:31.229,"""Create Resource"" option available on SGCluster form when user has no permissions","### Summary - -When creating an SGCluster from the web console, it is possible for users to create configurations and other resources by clicking on the ""Create New"" from the dropdown selectors. Even for users who have no permission to create the specified resource, this option is still available. - -![image](/uploads/2924367c98757bd40731804acb52bacb/image.png) - - -#### Steps to reproduce - -- Create a user with permissions to create SGClusters but no permissions to create dependencies (SGInstanceProfiles, SGPostgresConfigs, etc.) -- Log in to the web console with the specified user -- Enter the Create Cluster form -- Toggle the corresponding dependencies dropdown -- The option to create the dependency will appear even though it shouldn't - -### Expected Behaviour - -There should be no CTAs on the web console for operations the user is not allowed to do - -### Possible Solution - -Validate RBAC permissions before loading the ""create"" option - -### Environment - -- StackGres version: `1.2.0-RC`",1 -108508402,2022-05-13 12:51:33.081,Allow to start cluster pods in parallel,"### Problem to solve - -SGCluster Pods are started using the [OrderedReady management policy of the StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management). 
In some scenario a user may want to spin up a cluster and his replicas as soon as possible by starting them in parallel using the [Parallel management policy of the StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management). This may also benefit the CI/CD pipepline (see #630) - -### Proposal - -Add property `.spec.pods.managementPolicy` that default to `OrderedReady` and can be also set to `Parallel` - -### Testing - -Set the default as Parallel for testing - -## Acceptance Criteria - -* [x] Implement the feature -* [x] Set the feature behavior as default in tests -* [ ] Review and merge !1112",1 -108507890,2022-05-13 12:42:41.932,Cluster cloning fails when scripts include references to configMaps,"### Summary - -The web console implements cloning configurations for a SGCluster. - -Even though the cloning process works fine, there might be cases on which the cluster to clone contains references to configMaps, which causes the request to fail because of extra information included in the payload. - -![image](/uploads/278d9f73b71a09fe9b9456356ff6b226/image.png) - - -The reason why this fails is because the request includes both, the reference to the configMap and the script associated to it. The later shouldn't be included on the payload. - -``` - ""spec"":{ - ... - ""initialData"":{ - ""scripts"":[ - { - ""name"":""map"", - ""database"":""db"", - ""scriptFrom"":{ - ""configMapScript"":""CREATE USER pguser2 WITH PASSWORD 'pguser2';"", - ""configMapKeyRef"":{ - ""key"":""script-key"", - ""name"":""script-configmap"" - } - } - } - ] - } - ... - }, -``` - - -#### Steps to reproduce - -- Enter the web console -- Create any cluster with script references to configMaps -- Clone the created cluster -- An error like the one on the image above will appear - -### Expected Behaviour - -The cloning should work fine even when referencing configMaps - -### Possible Solution - -Remove the additional `configMapScript` property from the payload - -### Environment - -- StackGres version: `1.2.0`",1 -108424538,2022-05-12 09:20:29.472,Improve operator helm chart install / upgrade times,"Currentrly the operator helm chart takes between 1 and 2 minutes regardless the fact that it needs to update CRDs or not (it always update them) or it has to recreate the certificate or not (it always recreate the certificate). Also the secret for the Web Console user is recreated regardless of the fact that it already exists or not. - -## Implementation plan - -Detect if CRDs exists and are already upgraded to the latest version by annotating them using the helm chart version. If the CRDs already exists and they are all annotated with the latest version they are considered to be valid. - Create a flag in order to indicate that the certificate have to be created and in such case - -## Acceptance Criteria - -* [x] Implement skip of CRD upgrade -* [x] Implement skip of certificate upgrade -* [x] Implement skip of Web Console secret upgrade",1 -108356624,2022-05-11 09:20:29.284,Prevent showing multiple notifications for the same error message,"### Summary - -Whenever there's an error response coming from the REST API, the web console shows a notification related to it. - -There might be cases where a `502 Bad Gateway` error affects the whole set of requests made to the API, which generates several notifications with the exact same message. 
- -![image](/uploads/933add537f99c8cfa44e98e6beee6c5c/image.png) - - -### Expected Behaviour - -If there is more than one message with the exact same info, only one message should be shown to the user. - -### Environment - -- StackGres version: `1.1.0` - -**Acceptance criteria:** -- [x] Fix the bug",2 -108255015,2022-05-09 15:34:10.590,Add missing resources to the can-i REST API endpoint,"### Summary - -The `/stackgres/can-i` REST API endpoint does not return permissions for the following resources: - -- stats -- events -- logs -- pod/exec - -Some permission dependencies related to those resources are required by the web console in order to ensure proper access restrictions. - - -#### Steps to reproduce - -1. Call the `/stackgres/can-i` REST API endpoint -2. The response won't include any of the resources listed above - - -### Expected Behaviour - -The `/stackgres/can-i` REST API endpoint returns permissions for every resource directly accesible via the REST API - - -### Possible Solution - -Add `events`, `stats`, `logs` and `pod/exec` to the list of resources for which permissions are returned in the `caniList()` method of the class `RbacResource`. - -### Environment - -- StackGres version: 1.2.0 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ?",4 -107281751,2022-04-27 10:03:50.604,Web console won't match field validations on repeaters,"### Summary - -When receiving error responses for POST/PUT requests made to the REST API, the web console might receive information about the fields that caused the error. In such cases, the web console highlights the input with a red border, letting the user know which fields are causing the issue. - -This works fine for single fields which appear only once on the form, but when matching fields contained inside repeaters, the web console does not consider the index of the field coming from the REST API response. - - -### Expected Behaviour - -Web console should match the index of the field coming from the REST API response. - - -### Environment - -- StackGres version: `1.1.0` - -**Acceptance criteria:** -- [x] Fix the bug -- [x] Create a test",2 -107280310,2022-04-27 09:39:19.652,Backup creation should fail validation if the referenced cluster is not configured for backups and is not a copy of another backup," -### Summary - -Creating a cluster without any backup config allow to create backup from the Web Console and they only show a `pending` state without showing any validation message. - -![image](/uploads/a78a5318f9b646f8442bb5bf6ac3f2df/image.png) - -![image](/uploads/4de8ef65f55ccad6d0d01966506930f2/image.png) - -K8s events shows the validation message: - -``` -0s Warning BackupConfigFailed sgbackup/test-backup-1 Missing SGBackupConfig for cluster demo-db.demo-db -0s Warning BackupConfigFailed sgbackup/test-backup-1 Backup reconciliation cycle failed: SGBackup demo-db/test-backup-1 target SGCluster demo-db without a SGBackupConfig - -``` - -Web console events also does not show any backups events: - -![image](/uploads/2a40cbb24d2e00dfb7909f83d8b53a8e/image.png) - -#### Steps to reproduce - -1- Create a SGCluster without a Backup Config - -2- Create a Backup from the Web console - -3- Check the state from the Web Console - -4- Check the k8s Events - -### Expected Behaviour - -Web console showing a message indicating that Backup config is missing. 
- -### Possible Solution - -Check that a `SGBackupConfig` is referenced by the associated `SGCluster` in the validating webhook during creation of any `SGBackup` that does not have section `status.sgBackupConfig` defined. If, in such case, there is no `SGBackupConfig` reference in the associated `SGCluster` a validation error is returned saying ""To create a backup you must first configure the SGCluster for backups"". - -### Environment - -- StackGres version: 1.2.0-beta1 - -- Kubernetes version: 1.21 - -- Cloud provider or hardware configuration: Minikube - - -### Relevant logs and/or screenshots - -",8 -107136105,2022-04-25 12:38:44.955,Certificate Singing Request does not return the certificate after approval in EKS 1.22,"Using EKS 1.22 the certified is approved but not issued: - -``` -❯ kubectl get csr -n stackgres -NAME AGE SIGNERNAME REQUESTOR -stackgres-operator 2m39s kubernetes.io/kubelet-serving system:serviceaccount:stackgres:stackgres-operator-init Approved -``` - -``` -kubectl version -Client Version: version.Info{Major:""1"", Minor:""23"", GitVersion:""v1.23.5"", GitCommit:""c285e781331a3785a7f436042c65c5641ce8a9e9"", GitTreeState:""clean"", BuildDate:""2022-03-16T15:51:05Z"", GoVersion:""go1.17.8"", Compiler:""gc"", Platform:""darwin/arm64""} -Server Version: version.Info{Major:""1"", Minor:""22+"", GitVersion:""v1.22.6-eks-7d68063"", GitCommit:""f24e667e49fb137336f7b064dba897beed639bad"", GitTreeState:""clean"", BuildDate:""2022-02-23T19:29:12Z"", GoVersion:""go1.16.12"", Compiler:""gc"", Platform:""linux/amd64""} -``` - -**Acceptance criteria:** -- [ ] Fix the issue",3 -106960975,2022-04-21 10:12:10.325,Upgrade E2E framework to use kind 0.14.0,"The following discussion from !1089 should be addressed: - -- [ ] @jorsol started a [discussion](https://gitlab.com/ongresinc/stackgres/-/merge_requests/1089#note_919378721): (+1 comment) - - > Ideally use 1.18.20 that resolve a regression introduced in 1.18.19 - -Acceptance criteria -- [ ] execute upgrade",8 -106134901,2022-04-06 00:54:57.563,Tolerations for SGDbOps," - -### Problem to solve - -Currently, we are running StackGres Cluster on our k8s cluster where all the nodes have taints. There is a need to add taints to even the pods for SGDbOps. -Currently, this is not supported and it would be helpful to add these tolerations to all the SGDbOps. - - - -### Further details - - -Consider a case where all the nodes are tainted in a cluster and we need to run SGDbOps on this cluster. - -### Proposal - - -Please add tolerations capability for SGDbOps. - -SGCluster example: - -```yaml -sgCluster: -scheduling: - tolerations: - items: - key: - operator: - value: - effect: - tolerationSeconds: -``` - -### Testing - - -Seems like a minor change that wouldn't affect the functionality much. - - -**Acceptance Criteria:** -- [ ] Able to add tolerations to any type of SGDbOps. -- [ ] Update documentation of the CRD -- [ ] Test the implementation",16 -105464612,2022-03-24 21:59:42.488,"Enhance Web Console's usability/discoverability of the ""enable monitoring"" option when creating a cluster","Monitoring is one of the most important features you may want from a Postgres cluster. In SG it is not enabled by default, and that's probably fine. It also depends on the user installing the optional dependencies that the user requires for monitoring, and we don't have much control over that. - -However, if you have installed the dependencies, discovering how to enable monitoring for the cluster is not immediate. 
It is ""hidden"" under `Advanced options` and then `Sidecars`. Not very intuitive. - -It actually makes sense where it is, since it requires deploying a sidecar. However, we should *also* make it more accessible. Therefore I propose to add in the main screen, `Cluster` section, at the bottom, a simple checkbox with the following text and tooltip: - -* Text: _Enable monitoring_ -* Tooltip: _StackGres supports enabling automatic monitoring for your Postgres cluster, but you need to provide or install the [Prometheus stack as a pre-requisite](https://stackgres.io/doc/latest/install/prerequisites/monitoring/). Then, check this option to configure automatically sending metrics to the Prometheus stack_. - -This won't change anything under the `Sidecars` section, other than the enabled/disabled status in both the `Cluster` and `Sidecars` section needs to stay in sync. - -**Acceptance criteria:** -- [x] Adapt the UI to attend the request -- [x] Tests of the implementation",2 -105038385,2022-03-18 01:52:03.202,"Annotations, affinity, tolerations, and nodeSelector to be added in operator helm chart","Requesting a change to the operator helm chart. - -The helm chart is missing these scheduling properties (affinity, tolerations and nodeSelector) and the ability to add annotations to Pods, Services and ServiceAccounts. - -These are required if it is preferred for the StackGres operator to be deployed to a particular node only or for any special configuration that some tools or cloud environments provide. - -# Proposal - -Add the following sections to the operator helm chart `values.yaml` - -```yaml -operator: - annotations: {} - service: - annotations: {} - serviceAccount: - annotations: {} - affinity: {} - tolerations: {} - nodeSelector: {} -restapi: - annotations: {} - service: - annotations: {} - serviceAccount: - annotations: {} - affinity: {} - tolerations: {} - nodeSelector: {} -jobs: - annotations: {} - service: - annotations: {} - serviceAccount: - annotations: {} - affinity: {} - tolerations: {} - nodeSelector: {} -``` - -**Acceptance criteria:** -- [x] Add the node property on the helm chart -- [ ] Update the documentation on the CRD -- [ ] Test the implementation",8 -104379545,2022-03-08 11:37:00.602,Validate cloning of resources from the web console when the resource contains references to secrets,"### The Problem - -The web console implements cloning features for most of the CRDs supported by StackGres. - -Even though the cloning process works fine, there might be cases on which the resource to clone contains references to secrets which cannot be accessed from the REST API, so it is quite possible that the cloned resource does not take into account such specs. - - -### The Request - -We should validate what would happen for such cases and, if needed, create an issue to implement any adjustments that might be needed to let the user know about such limitations during the cloning process. - -**Acceptance criteria:** -- [x] Make a POC to verify there is a bug",1 -102047019,2022-02-08 09:51:25.210,Allow specify loadBalancerIP for postgres services,"### Problem to solve - -Currently, it is not possible to specify a custom load balancer IP for the Postgres services, the idea of this is to be able to set a custom load balancer IP for R/W and R/O connections, this also allows to keep the same IP in case if it's necessary to recreate the services. 
- -### Proposal - -In the SGCluster services section allows to add a custom load balancer IP (SGCluster and DistributedLogs) - -```yaml -postgresServices: - primary: - type: LoadBalancer - loadBalancerIP: 80.11.12.10 - replicas: - type: LoadBalancer - loadBalancerIP: 80.11.12.11 -``` - -And generate a service like: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-cluster -spec: - selector: - app: StackGresCluster - cluster-name: my-cluster - role: master - ports: - - name: pgport - protocol: TCP - port: 5432 - targetPort: pgport - loadBalancerIP: 80.11.12.10 -``` - -### Links / references - -https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer - -**Acceptance criteria:** -- [x] Implement the CRDs (SGCluster and DistributedLogs) -- [x] Implement the change on the REST API(SGCluster and DistributedLogs) -- [x] Implement the change on the cluster demo helm chart(SGCluster and DistributedLogs) -- [ ] Create an Issue for the UI @jose_oss_pg todo -- [x] Create tests -- [x] Documentation(SGCluster and DistributedLogs)",16 -101997159,2022-02-07 13:17:11.547,"Support cascading, remote and WAL-based SGCluster replication on the web console","Once we're done with #864 / #1547, we must add support for cascading, remote and WAL-based SGCluster replication specs as proposed on #866 - -**Acceptance criteria:** -- [ ] Define and implement the UI changes -- [ ] Create tests",16 -101233260,2022-01-25 16:23:27.688,Support SGObjectStorage CRD on the web console,"The web console should support management of SGObjectStorage CRDs. This resource is, basically, the sub-object of `SGBackupConfig.storage`. - -A new entry on the sidebar should be included, with the corresponding icon/links/actions for such new resource. - -**Acceptance criteria:** -- [x] Development of the feature -- [x] Test the feature",2 -101231211,2022-01-25 15:52:13.506,Create SGObjectStorage CRD,"Related to #862. - -This issue only require the following feature: - -* Create an `SGObjectStorage` CRD extracting the model from `SGBackupConfig` `.spec.storage` into the field `.spec` of the new `SGObjectStorage`. -* Add endpoints in the REST API to list, get, create, patch and delete `SGObjectStorage`. - -**Acceptance criteria:** -- [x] Implement the CRD -- [x] Implement REST API -- [x] Document the implementation -- [x] Test the implementation",24 -100837004,2022-01-18 12:28:28.758,Support new backup configuration in SGCluster on the web console,"When #862 is done, apply the following proposed spec on the SGCluster CRD: - -```yaml -spec: - configurations: - sgBackupConfig: # this does not change bu will be deprecated and mutually exclusive with `backups` field. - backups: - - path: - compression: - cronSchedule: - performance: - maxDiskBandwitdh: - maxNetworkBandwitdh: - uploadDiskConcurrency: - retention: - sgObjectStorage: # name of an SGObjectStorage in the same namespace -``` - -**Implementation plan:** - -- [ ] Detail the CRUD model for this CRD. - -**Acceptance criteria:** -- [x] Implement the CRUD. -- [x] Implement some tests.",16 -100784327,2022-01-17 15:35:01.785,Support ManagedSQL operation on the web console,"When #970 is done, the web console should include ManagedSQL in the SGDbOps list. The operation specs details are yet to be defined on #970",24 -100131128,2022-01-05 08:49:52.719,Set default log_statement value to none for SGPostgresConfig,"Default `log_statement` value is set to `ddl` when creating an `SGPostgresConfig`. 
With such configuration value users passwords will be logged during creation or modification of users. This is a security flaw allowing someone with access to the database logs to see users credentials if the statement is not protected by setting `log_statement` to `none` for local session prior execution of any of those commands. - -## Implementation plan - -Set default `log_statement` to `none` for `SGPostgresConfig`.",4 -97570645,2021-11-18 16:42:34.437,Allow specify resources to assign to Distributedlogs pods," -### Problem to solve - -Currently, it is not possible to specify the amount of resources assigned to distributed logs pod. - -### Further details - -Some environments generate a lot of logs and log database grow very fast and getting information from the logs becomes very slow. - -### Proposal - -Allow apply an `SGInstanceProfile` to Distributed logs and also an `SGPostgresConfig`. - -```yaml -apiVersion: stackgres.io/v1 -kind: SGDistributedLogs -metadata: - name: distributedlogs -spec: - sgInstanceProfile: 'size-m' - configurations: - sgPostgresConfig: 'pgconfig' - persistentVolume: - size: 100Gi -``` - -## Acceptance Criteria - -- [ ] Implement the feature in the CRD -- [ ] Implement the logic in the mutating and validating webhook similar to the one existing for `SGCluster` CRD -- [ ] Implement the change in the REST API -- [ ] Add tests",2 -96943073,2021-11-08 14:41:13.179,Set default_toast_compression=lz4 on PostgreSQL 14's default config,"Based on the benchmarks on this [post](https://www.postgresql.fastware.com/blog/what-is-the-new-lz4-toast-compression-in-postgresql-14) I consider LZ4 highly beneficial compared to PGLZ. - -For this reason I'd propose making it the default TOAST compression algorithm for Postgres 14, adding to our default configuration (only for v14 and future later versions) [`default_toast_compression=lz4`](https://postgresqlco.nf/doc/en/param/default_toast_compression/).",4 -95371922,2021-10-13 16:12:16.150,benchmark is missing jq and kubectl," -### Summary - - - -When running a benchmark with SGDBOps, it keeps complaining about the missing `kubectl` and `jq` tools. - -### Current Behaviour - - -#### Steps to reproduce - -1. create a cluster with pg 12, like below: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -metadata: - name: backup-test - namespace: default -spec: - instances: 2 - pods: - persistentVolume: - size: 10Gi - storageClass: gp2 - postgres: - version: ""12.8"" -``` - -1. wait for the cluster be up and running -1. create the sgdbops: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGDbOps -metadata: - name: database-fill - namespace: default -spec: - benchmark: - pgbench: - concurrentClients: 1 - databaseSize: 8Gi - duration: P0DT0H1M0S - threads: 1 - usePreparedStatements: false - type: pgbench - maxRetries: 0 - op: benchmark - sgCluster: backup-test -``` - - -When checking the logs, lots of messages like this appears: - -``` -++ kubectl get '' -n '' '' -o json -/usr/local/bin/shell-utils: line 142: kubectl: command not found -+ RESULT= -++ jq -R . 
-/usr/local/bin/shell-utils: line 143: jq: command not found -++ printf %s 'Database pgbench_6166fbe8 created' -``` - -When checking the job, I've found that is using the images below: - -``` -❯ kubectl get jobs.batch database-fill-benchmark-fea9f890aa0a46dd-0 -o json | gojq '.spec.template.spec.initContainers |map( { name: .name, image: .image })' -[ - { - ""image"": ""docker.io/ongres/kubectl:v1.19.14-build-6.6"", - ""name"": ""set-dbops-running"" - } -] - -❯ kubectl get jobs.batch database-fill-benchmark-fea9f890aa0a46dd-0 -o json | gojq '.spec.template.spec.containers |map( { name: .name, image: .image })' -[ - { - ""image"": ""docker.io/ongres/patroni:v2.1.1-pg12.8-build-6.6"", - ""name"": ""run-dbops"" - }, - { - ""image"": ""docker.io/ongres/kubectl:v1.19.14-build-6.6"", - ""name"": ""set-dbops-result"" - } -] -``` - -### Expected Behaviour - -No errors from `kubectl` or `jq`. - - -### Possible Solution - -Since both `postgres-util` and `patroni` doesn't have the binaries installed will be necessary to use another image. - -Use the `kubectl` image or mount/copy the binaries from it. - -``` -❯ docker run --rm -it docker.io/ongres/kubectl:v1.19.14-build-6.6 bash -bash-4.4$ jq --version -jq-1.5 -bash-4.4$ kubectl version -Client Version: version.Info{Major:""1"", Minor:""19"", GitVersion:""v1.19.14"", GitCommit:""0fd2b5afdfe3134d6e1531365fdb37dd11f54d1c"", GitTreeState:""clean"", BuildDate:""2021-08-11T18:07:41Z"", GoVersion:""go1.15.15"", Compiler:""gc"", Platform:""linux/amd64""} -The connection to the server localhost:8080 was refused - did you specify the right host or port? -``` - -### Environment - -- StackGres version: - -``` -❯ kubectl get deployments -n stackgres stackgres-operator --template '{{ printf ""%s\n"" (index .spec.template.spec.containers 0).image }}' -stackgres/operator:1.0.0-RC1 -``` - -- Kubernetes version: - -``` -❯ kubectl version -Client Version: version.Info{Major:""1"", Minor:""22"", GitVersion:""v1.22.0"", GitCommit:""c2b5237ccd9c0f1d600d3072634ca66cefdf272f"", GitTreeState:""clean"", BuildDate:""2021-08-04T17:56:19Z"", GoVersion:""go1.16.6"", Compiler:""gc"", Platform:""darwin/amd64""} -Server Version: version.Info{Major:""1"", Minor:""20+"", GitVersion:""v1.20.7-eks-d88609"", GitCommit:""d886092805d5cc3a47ed5cf0c43de38ce442dfcb"", GitTreeState:""clean"", BuildDate:""2021-07-31T00:29:12Z"", GoVersion:""go1.15.12"", Compiler:""gc"", Platform:""linux/amd64""} -WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1 -``` - - -- Cloud provider or hardware configuration: AWS with EKS - - -### Relevant logs and/or screenshots - - - -* [dbops-full-log.txt](/uploads/d50c6e60d7e370fc720b2088c4562680/dbops-full-log.txt) -* [cluster-objects.yaml](/uploads/7ca1789084f84b2793fd59f5afa18d6e/cluster-objects.yaml) - -**Acceptance criteria:** -- [ ] Fix the bug -- [ ] create the test",16 -90026365,2021-07-09 13:03:13.142,Refactor and improve unit test data fixtures,"We have many files in the `test-util` module. - -We want to have test classes to: - -* Load such files with strong typing (e.g. 
return a `StackGresCluster` class instance) -* Enumerate all the available fixtures for ease of use -* Reuse the fixture data instance in all the StackGres Java modules -* Introduce fixture variations easier to be placed using code instead to create a full copy of JSON file with small differences - -Other possible uses: - -* Generate random values by default depending on the fixture (since random values should be used or not depending on the test usage of the fixture) - -# Proposition - -Move JSON files in the `common` module and wrap them in fixture classes. Make the `common` module install the `test-jar` package following this guide: https://maven.apache.org/guides/mini/guide-attached-tests.html",1 -85763678,2021-04-22 17:48:50.636,"Implement ""ManagedSQL"" as a new SGScripts CRD referenced by SGCluster","We want more and more to offer a fully-featured GitOps approach to managing Postgres databases. - -One such areas that are not fully covered is managing a database DDL, or even more generic, managing a database with SQL. While you'd normally interact with a database with a CLI or GUI client, having the ability to programmatically maintain database DDL with a CRD is a well justified use-case. The current ability to execute operations by a component of the operator make this system very powerful for programmatic creation and update of DDL. - -This issue aims to capture this and define a new CRD, called `SGScripts`, to be added to StackGres' set of CRDs. This would allow to ""define in YAML"" commands to execute on a cluster. - -Note that it won't be a goal to support DDL migrations, deltas and others --that would be relied on the user or other external tools. If any edit is performed to the `SGScripts`, it will be blindly re-executed. If it is deleted, no action will be executed. - -The internal spec of the CRD would be very similar to the actual [`initialData.scripts`](https://stackgres.io/doc/latest/reference/crd/sgcluster/#scripts-configuration) spec: - -```yaml - - name: $scriptName - database: $dbname #optional, default to 'postgres' - scriptFrom | script: read the script from either a ConfigMap, Secret or inline. - - name: ... - ... -``` - -There is some overlap with the `spec.initialData.scripts` section of the `SGCluster` and for this reason the intermediary CRD called `SGScripts` will be created in order to be used as a reference by both `SGCluster` (and in the future `SGDbOps`). Section `spec.initialData.scripts` of the `SGCluster` will be deprecated and the new array field `spec.managedSQL.sgScripts` will be added in order to reference an array of `SGScripts` instances. `SGScripts` CRD will have the following model: - -```yaml -apiVersion: stackgres.io/v1beta1 -kind: SGScripts -spec: - managedVersions: # If `true` the versions will be managed by the operator automatically. The user will still be able to update them. `true` by default. - continueOnError: # if true, when any entry fail will not prevent subsequent entries from being executed. `false` by default. - scripts: - - name: # Optional. Does not determine order of execution but creates a label to identify this script externally (like in events and logs). - id: # The id is immutable and must be unique across all the script entries. It is replaced by the operator and and is used to identify the script for the whole life of the `SGScript` object. - version: # The version of the script. Set to `1` by default. - database: # Database where the script is executed. 
Defaults to the `postgres` database - user: # User that will execute the script. Defaults to the `postgres` user - wrapInTransaction: # Wrap the script in a transaction using the specified transaction mode (`NONE` (The default, means that the script will not be wrapped in a transaction), `READ COMMITTED`, `REPEATABLE READ` and `SERIALIZABLE`). - storeStatusInDatabase: # When specified `inTransaction` field must not to be set to `null` or `NONE` and the script execution will include storing the status of the execution of this script in the table `managed_sql.status` that will be created in the specified `database`. This will avoid an operation that fails partially to be unrecoverable requiring the intervention from the user. This is `false` by default. - retryOnError: # if true it retries to execute the script entry with an exponential back-off algorithm limited to 5 minutes with a variation of 1 minute. Default is false. - script: # Raw SQL script to execute. This field is mutually exclusive with - `scriptFrom` field. - scriptFrom: - configMapKeyRef: - name: # The name of the ConfigMap that contains the SQL script to execute. - key: # The key name within the ConfigMap that contains the SQL script to execute. - secretKeyRef: - name: # The name of the Secret that contains the SQL script to execute. - key: # The key name within the Secret that contains the SQL script to execute. -status: - scripts: - - id: # Identify the associated script entry with the same value in the `id` field. - hash: # Hash of the value of the ConfigMap or Secret referenced by `scriptFrom.configMapKeyRef` and `scriptFrom.secretKeyRef` respectively. -``` - -However, old `spec.initialData.scripts` section of the `SGCluster` was intended to be run only once, and only if a backup is not restored. We instead allow all referenced `SGScripts` from field `.spec.managedSQL.sgScripts` to be modified and to be applied ""live"" to the `SGCluster`. The new `version` field will allow to detect if script have to be executed and will be managed by the operator (default) or manually if the `.spec.managedVersions` is set to `false`. When `.spec.managedVersions` is set to `true` in the `SGScripts` following rules will apply: - -* Initially set version to 1. -* Secret or ConfigMap values hash is stored in the `SGScripts` `.status.scripts[].hash` section -* When an inline value is changed the version will be automatically incremented by 1. -* When a Secret or a ConfigMap is changed, the operator reconciliation cycle will update the hash and the version will be automatically incremented by 1. -* User will still be able to increment the value manually if needed. - -Here is the proposed change to `SGCluster` in order to reference an array of `SGScripts` and maintain the status of applied `SGScripts`: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -spec: - managedSQL: - continueOnSGScriptsError: # if true, when any entry of any `SGScripts` fail will not prevent subsequent `SGScripts` from being executed. `false` by default. - sgScripts: - - id: # The id is immutable and must be unique across all the `SGScript` entries. It is replaced by the operator and is used to identify the `SGScript` entry. - name: # a reference to an `SGScript` -status: - managedSQL: - sgScripts: - - id: # Identify the associated `SGScript` entry with the same value in the `id` field. - startedAt: # ISO-8601 datetime of when the script execution has been started. Will be reset each time the referenced `SGScripts` will be re-applied. 
- failedAt: # ISO-8601 datetime of when the script execution had failed (mutually exclusive with `completedAt`). Will be reset each time the referenced `SGScripts` will be re-applied. - completedAt: # ISO-8601 datetime of when the script execution had completed (mutually exclusive with `failedAt`). Will be reset each time the referenced `SGScripts` will be re-applied. - scripts: - - id: # Identify the associated script entry with the same value in the `id` field. - version: # The latest version applied - intents: # Indicates the number of intents or failures occurred - failureCode: # If failed, the error code of the failure. See also https://www.postgresql.org/docs/current/errcodes-appendix.html - failure: # If failed, a message of the failure -``` - -The user will be able to add or replace any entry of the `.spec.managedSQL.sgScripts` array in the `SGCluster`. When replacing a value the associated status entry will also be emptied as it was never executed. - -The order of execution and rules of the execution flow will be as follow: - -* Each `SGScripts` will run in sequence one after another following the order of the `.spec.managedSQL. -* If any entry of any `SGScripts` fails next `SGScripts` will not be executed unless `.spec.managedSQL.continueOnSGScriptsError` is set to `true` in `SGCluster`. -* An entry of an `SGScripts` that has not been executed will be executed only if there are no other previous entry (following the array order) of the same `SGScripts` that has not been executed. -* If execution of an `SGScripts`'s entry fail no subsequent entries will be executed. Execution of subsequent entries is performed when previous entry execution failed only if field `.spec.continueOnError` is set to `true` in `SGScripts`. -* If any entry of an `SGScripts` has been already executed it will not be re-executed even if it failed. -* An entry is detected as ""not being executed"" if the entry version is missing from the status or different than the entry version in the status. - -This is inspired by the original ideas by @stoetti1 and subsequent discussion on #956. - -**Acceptance criteria:** - -- [ ] Implement the feature, TODO raise to epic and create sub epics. Epic measured in 10 days to accomplish. -Sub tasks: - - [ ] #1545 The UI part. to be evaluated. - - [x] Creating a CRD to implement sgcripts. - - [x] Adapting webhooks and rest API. - - [x] Tests CRD - - [x] Documentation CRD - - [x] Implement applying the SQL - - [x] Implement migrating from `.spec.initialData.scripts` - - [x] Implement the test and documentation.",24 -79593823,2021-02-23 01:04:45.175,Use SGObjectStorage CRD for Postgres base backup and WAL storage,"The proposed CRD is basically a small evolution over the currently existing [`SGBackupConfig.storage` subobject](https://stackgres.io/doc/latest/reference/crd/sgbackupconfig/#storage-configuration). The main goal of extracting it into a separate CRD is for being able to re-use and reference it from different contexts. - -This would enable to support clusters initialized from a Postgres backup and WAL archive. Having a CRD that references this archive, there could be two use cases perfectly supported by this CRD: -* A user wants to have a cluster replicating from the archive, and there's an existing CRD representing that archive within the same K8s cluster (and namespace). In this case, a reference to the existing CRD should be enough. -* A user wants to have a cluster replicating from the archive, on a different K8s cluster (i.e. a DR scenario). 
In this case, a local (to the destination K8s cluster) CRD needs to be created, but could have the same or similar contents from the source CRD, which could be easily copied, and then referenced. - -The proposed name for this new CRD is `SGObjectStorage`. - -More precisely, this change would imply the following changes: -1. ~~Extract the `storage` subobject from the `SGBackupConfig` CRD and ""upgrade"" it to a full CRD called `SGObjectStorage`.~~ (See #1564) -1. ~~Change the `SGBackupConfig` to include a new property, `.spec.sgObjectStorage`, that will become a reference to the new CRD.~~ -1. Add a new property to the new CRD, called `SGObjectStorage.mode`, which will be an enum with possible values `RW, RO`. This would be useful for standby clusters, which might be provided with more restricted user access credentials. -1. Create the section `.spec.configurations.backups` with an array (only allowed to have just 1 element) that will replace the field `.spec.configurations.sgBackupConfig` including a reference to an `SGObjectStorage` and the other properties of `SGBackupConfig`. -1. Deprecate the field `.spec.configurations.sgBackupConfig` in `SGCluster` CRD and forbid create new `SGCluster` using such field. -1. Deprecate `SGBackupConfig` CRD and forbid creation of `SGBackupConfig` CRs. - -Note that this scheme also allows for creating standby clusters processing WALs from an object storage that might not be the original source one, but possibly one replicated in other region (eg. an S3 bucket replicated to another S3 bucket on another region), in which case source and destination CRDs will only differ on the bucket name and region (and potentially, user credentials). - -Also note that this issue doesn't deal with how to modify a `SGCluster` to support standby cluster creation. That will be addressed on a separate issue (see #866). - -Ideally, some strong validation support should be built into the operator to validate access credentials when creating the CRD (i.e., validate that we can read or read and write to the object storage). This will improve the current user experience of creating an invalid `SGBackupConfig`. - -Proposed change to `SGCluster`: - -```yaml -spec: - configurations: - backupPath: - sgBackupConfig: # this does not change but will be deprecated and mutually exclusive with `backups` field. - backups: # The backups is an array that can contain at most 1 element to allow in the future specify multiple backup configuration so that backup may be stored on multiple storages. - - path: - compression: - cronSchedule: - performance: - maxDiskBandwitdh: - maxNetworkBandwitdh: - uploadDiskConcurrency: - retention: - sgObjectStorage: # name of an SGObjectStorage in the same namespace -``` - -**Acceptance criteria:** -- [x] Implement a mutating webhook in order to migrate from `SGBackupConfig` + backupPath to `SGObjectStorage` + new backups section in `SGCluster` -- [x] Implement a validating webhook in order to disallow usage of `SGBackupConfig` in `SGCluster`",24 -78219415,2021-02-01 16:15:36.448,Allow specify request and limit in sginstanceprofile for non-production,"Currently sginstanceprofile only allow to specify a single value for cpu and memory that will be used for both request and limit each of those resources. - -Some non-production environment requires to specify both to make it possible to create a cluster that can be deployed in node with less resources and are not critical environment. 
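-
-For illustration, the goal is to be able to define a profile similar to the sketch below (the name and values are only examples; the exact fields are the ones proposed next):
-
-```yaml
-apiVersion: stackgres.io/v1
-kind: SGInstanceProfile
-metadata:
-  name: size-s-nonprod # hypothetical name
-spec:
-  cpu: '2'      # limit applied to the patroni container
-  memory: 4Gi   # limit applied to the patroni container
-  requests:
-    cpu: 500m   # lower request so the Pod fits on smaller nodes
-    memory: 1Gi # lower request so the Pod fits on smaller nodes
-```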
- -Proposed change for the `SGInstanceProfile` CRD to allow to specify a request for resources different from the limit: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGInstanceProfile -spec: - cpu: # CPU(s) (cores) limit for every Pod of an `SGCluster`. Please note that - # every Pod contains not only the `patroni` (Patroni and Postgres) container, but - # several other sidecar containers. While the majority of the resources are - # devoted to the main Postgres container, some CPU is needed for the - # sidecars. - # - # The number of cores set is applied only to the `patroni` container of each Pod while the - # other sidecars will not be subjected to any resource restriction. - # - # A minimum of 2 cores is recommended. - memory: # RAM limit for every Pod of an `SGCluster`. The suffix `Mi` or `Gi` - # specifies Mebibytes or Gibibytes, respectively. Please note that every - # Pod contains not only the `patroni` (Patroni and Postgres) container, but several - # other sidecar containers. While the majority of the resources are devoted - # to the main Postgres container, some RAM is needed for the sidecars. - # - # The amount of RAM set is applied only to the `patroni` container of each Pod while the - # other sidecars will not be subjected to any resource restriction. - # - # A minimum of 2-4Gi is recommended. - requests: - cpu: # CPU(s) (cores) requested for every Pod of an `SGCluster`. Please note that - # every Pod contains not only the `patroni` (Patroni and Postgres) container, but - # several other sidecar containers. While the majority of the resources are - # devoted to the main Postgres container, some CPU is needed for the - # sidecars. - # - # The number of cores set is applied only to the `patroni` container of each Pod while the - # other sidecars will not be subjected to any resource restriction. - # - # A minimum of 2 cores is recommended. - # - # If not specified or if field `.spec.nonProductionOptions.enableInstanceProfileRequests` - # is not set to `true` in the associated `SGCluster` will be equals to `.spec.cpu`. - memory: # RAM requested for every Pod of an `SGCluster`. The suffix `Mi` or `Gi` - # specifies Mebibytes or Gibibytes, respectively. Please note that every - # Pod contains not only the `patroni` (Patroni and Postgres) container, but several - # other sidecar containers. While the majority of the resources are devoted - # to the main Postgres container, some RAM is needed for the sidecars. - # - # The amount of RAM set is applied only to the `patroni` container of each Pod while the - # other sidecars will not be subjected to any resource restriction. - # - # A minimum of 2-4Gi is recommended. - # - # If not specified or if field `.spec.nonProductionOptions.enableInstanceProfileRequests` - # is not set to `true` in the associated `SGCluster` will be equals to `.spec.memory`. -``` - -The fields `.spec.requests.cpu` and `.spec.requests.memory` will be optional and their values will be the same as respectively `.spec.cpu` and `.spec.memory` if the former are not specified. - -A flag `enableInstanceProfileRequests` will be added under section `.spec.nonProductionOptions` in `SGCluster` in order to be able to use the new proposed fields of `SGInstanceProfile`. - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -spec: - nonProductionOptions: - enableInstanceProfileRequests: # If true, will allow to use fields `.spec.requests.cpu` and `.spec.requests.memory` from `SGInstanceProfile`. By default is `false`. 
-``` - -**Acceptance criteria:** -- [ ] Implement the feature -- [ ] Test the feature -- [ ] Create issue for UI @jose_oss_pg TODO",16 -71194118,2020-09-15 11:58:25.473,Generated prometheus stats services return ambiguous stats,"Currently two service are create for each cluster. One points to Envoy port 8008 and other point to Postgres Exporter port 9187: - -```shell -$ kubectl get service -n operator-upgrade-5f60a6c7 | grep 'prometheus' -operator-upgrade-3-prometheus-envoy ClusterIP 10.111.97.168 8001/TCP 19m -operator-upgrade-3-prometheus-postgres-exporter ClusterIP 10.99.148.144 9187/TCP 19m -``` - -The service points to every pod in the cluster so that stats returned are from the pod that is routed by the service during the call. This lead to ambiguous result. - -```shell -$ curl -s operator-upgrade-3-prometheus-postgres-exporter:9187/metrics | grep pg_wal_ -# HELP pg_wal_position_bytes Postgres LSN (log sequence number) being generated on primary or replayed on replica (truncated to low 52 bits) -# TYPE pg_wal_position_bytes counter -pg_wal_position_bytes{server=""/var/run/postgresql:5432""} 1.006666e+08 -$ curl -s operator-upgrade-3-prometheus-postgres-exporter:9187/metrics | grep pg_wal_ -# HELP pg_stat_replication_pg_wal_lsn_diff Lag in bytes between master and slave -# TYPE pg_stat_replication_pg_wal_lsn_diff gauge -pg_stat_replication_pg_wal_lsn_diff{application_name=""operator-upgrade-3-0"",client_addr=""127.0.0.1"",server=""/var/run/postgresql:5432"",slot_name=""169"",state=""streaming""} 0 -pg_stat_replication_pg_wal_lsn_diff{application_name=""operator-upgrade-3-2"",client_addr=""127.0.0.1"",server=""/var/run/postgresql:5432"",slot_name=""168"",state=""streaming""} 0 -# HELP pg_wal_position_bytes Postgres LSN (log sequence number) being generated on primary or replayed on replica (truncated to low 52 bits) -# TYPE pg_wal_position_bytes counter -pg_wal_position_bytes{server=""/var/run/postgresql:5432""} 1.006666e+08 -$ -``` - -Those services are not used currently so the proposal is to only point service to the primary pod. - -**Acceptance criteria:** -- [x] fix the bug -- [x] Create a test",8 -71112775,2020-09-14 13:12:46.228,Validate and integrate into tests OpenShift 4.9+,"Acceptance criteria: - -- [x] Create the e2e test environment using one of the following options: - - Use [Code Ready Containers](https://github.com/code-ready/crc) using a [Google Cloud Nested Virtualization Instance](https://cloud.google.com/compute/docs/instances/nested-virtualization/overview). :white_check_mark: - - Use [Rosa](https://docs.openshift.com/rosa/welcome/index.html). -- [x] all the StackGres e2e tests should pass for the latest stable release - - Except operator-demo and ui (due to cypress out of memory bug)",5 -53106182,2020-07-01 17:02:24.786,Support user-supplied sidecars for SGCluster pods,"Some applications might want to access Postgres, or its filesystem, or part of other components, directly (i.e. not through the Postgres wire protocol). For this cases, highly user dependent, we might want to add the possibility to ""inject"" a user sidecar into the SGCluster pods. - -This could potentially imply many options like: -* Container image. -* Annotations. -* Environment variables, config maps? -* Inject some additional env vars to the container, like the postgres superuser and password. -* Potentially access the Postgres filesystem (PGDATA) and/or the logs, etc. -* Resource limits for the user-added sidecar, to control its potential impact. 
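-
-As a rough illustration of the use case (the image, names and mount path below are made up; the exact fields are the ones defined in the proposal that follows), a user could end up attaching something like:
-
-```yaml
-apiVersion: stackgres.io/v1
-kind: SGCluster
-spec:
-  pods:
-    customVolumes:
-    - name: app-config # referenced from custom containers as custom-app-config
-      configMap:
-        name: my-app-config
-    customContainers:
-    - name: metrics-sidecar
-      image: registry.example.com/metrics-sidecar:latest
-      volumeMounts:
-      - name: custom-app-config
-        mountPath: /etc/metrics-sidecar
-```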
- -# Proposal - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -spec: - postgresServices: - primary: - customPorts: # The list of extra ports that will be exposed by the primary service - - # See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#serviceport-v1-core - replicas: - customPorts: # The list of extra ports that will be exposed by the replicas service - - # See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#serviceport-v1-core - pods: - customVolumes: - - # See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#volume-v1-core - customInitContainers: - - # See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#container-v1-core - customContainers: - - # See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#container-v1-core -``` - -Sections `.spec.postgresServices.primary.customPorts` and `.spec.postgresServices.replicas.customPorts` will allow to expose any custom service defined by a custom sidecars. We may think about renaming section `.spec.postgresServices` to `.spec.services` in a future version of `SGCluster` CRD since we will expose ports from custom containers and user may not want to expose Postgres ports at all but just those of their custom services. - -The `.spec.pods.customContainers[]` and `.spec.pods.customInitContainers[]` are the same as the `Container` object present in the Pod resource under section `.spec.containers[]` and `.spec.initContainers[]`. - -The `.name` field in `Container` object will forbid using any container name that is used by StackGres. - -~~We will forbid following fields: - -* `.securityContext`: The users of the container will use the main security context and will be replaced by mounting `/etc/passwd`, `/etc/shadow`, `/etc/groups` and `/etc/gshadow`. This will allow to use postgres UNIX socket and the data volume with the same user as postgres. Also it will allow to use the same container in OpenShift with arbitrary user ids. -* `.stdin`: This will be left unset -* `.stdinOnce`: This will be left unset -* `.terminationMessagePath`: This will be left unset -* `.terminationMessagePolicy`: This will be left unset -* `.tty`: This will be left unset -* `.volumeDevices`: This will be left unset since StatefulSet can not work with it. -* `.resources`: This will be handled by `SGInstanceProfile` (see also #144) and first implementation should be the same as for other containers and init containers other than `patroni`. We will open an issue to implement resources for custom containers.~~ - -> Instead of forbidding we should document the fields that should not be touch and why. This is a very advanced feature and user that does not know how to define correctly a container and how StackGres internal works will be warned that using such feature may break the cluster in unexpected ways. - -We may ~~also forbid~~ reduce definition of following fields in the UI to make a first implementation easier: - -* `.lifecycle` -* `.startupProbe` -* `.readinessProbe` -* `.livenessProbe` - -User will be able to use any volume that is defined in the StatefulSet including the data PV by setting the section `.volumeMounts[]` in `Container` object. To use the custom volumes defined in the section `.spec.pods.customVolumes` the names user in the `.volumeMounts[]` section of `Container` object will have to be prepended by the `custom-` prefix string (this is to avoid collisions). 
This will be documented properly in the `SGCluster` CRD to specify the names of such container and what they do. - -The `.name` and `.ports[].name` fields of `Container` object as the `.name` and `.targetPort` fields of `ServicePort` object will also be prepended with the `custom-` string prefix to avoid collisions. - -## Acceptance Criteria - -* [ ] Implement the feature -* [ ] Create tests -* [ ] Documentation",32 -31938948,2020-03-12 20:40:34.692,Develop pod-local StackGres Controller,"Currently, StackGres works as an operator, that runs as a separate pod (from the Postgres cluster pods). From there, it interacts with the Kubernetes API to perform StackGres actions, including creating and destroying pods and many other actions. - -However, it doesn't have a clear way to interact with the clusters once created. In particular, it has no way to explicitly run commands / take some actions inside of some of the containers of the StackGres pods. And some maintenance operations (like for example reloading configuration files) require to run some commands on those containers. - -Rather than creating custom scripts, or executing commands with the Kubernetes API, the team has decided to continue leveraging the _sidecar pattern_ and implement this via a sidecar container that will act as a pod-local StackGres controller. - -This controller will expose an HTTP API that the ""main/central"" controller will be called. This would also help to abstract the implementation of how those actions are executed within the pod containers. - -Similarly to other HTTP APIs used by StackGres, this HTTP API will follow [RFC-7807](https://tools.ietf.org/html/rfc7807) for communicating back errors. This pod-local controller will be a separate container, where a Java-developed software compiled to native code via GraalVM will be running. - -For a first implementation, this pod-local controller must implement the following operations. Also suggested implementation details are proposed as part of this issue: - -* [ ] ~~Reload Postgres configuration. Postgres configuration (both `postgresql.conf` and `pg_hba.conf`) is currently handled via Patroni. Patroni stores this configuration as a JSON within and Endpoint annotation --which is backed by K8s etcd--. Whenever it is changed, Patroni propagates these changes to all nodes of the cluster automatically. However, they are not refreshed. Implementation: ideally, Patroni's own HTTP API should be called. It exposes a `POST /reload` method that handles precisely this use case. It may also be an option to execute the command `patronictl reload`, but is less desirable.~~ - -* [ ] Reload Pgbouncer configuration. PGbouncer provides a command to reload the configuration (see [RELOAD](http://www.pgbouncer.org/usage.html)) that will perform this task. This requires access to the administrative database of pgbouncer (typically called `pgbouncer`, but this depends on the configuration). Ideally, this may be over localhost using the Postgres protocol, but may happen otherwise. Another possible, although less desirable implementation, is to `kill -HUP` pgbouncer's process, getting the pid from the `pgbouncer.pid` or similar file. - - -One common issue to these methods is to ensure that the configuration change has been adequately propagated to the pod **before** the reload method is called. There may be a race condition here since propagation of changes via the downward API or annotations on objects is asynchronous with respect to the change operation. 
One way to avoid this problem is to introduce a version field as part of every configuration, that is monotonically increased whenever any change is performed. Then, the reload operations must also specify which version they are expecting the configuration to be at. The reload mechanism (executed by the pod-local controller) must check that the version of the configuration materialized on the filesystem matches the one requested via the web service; or wait with some retries until it gets updated. Return an error if after several retries that version got never updated.",2 -26732841,2019-11-05 18:14:36.142,Implement a correct solution for resources of cluster's pods,"Currently the cluster is created setting uniquely resources request and limit on the patroni container but a correct solution would be to split the resources limit and request among all containers of the cluster's pods. We should research the correct solution to this problem making sure that: - -* Pod total resources limit and request are shared among all containers and the sum of resources limit and request is equals to the one specified in the profile. -* Pod resources limit and request should be balanced in a way that respect the role of each container. We should make sure each container has the right amount of memory and cpu for it to operate correctly and at best performance possible with that profile configuration. -* Resources limit and request are correctly specified in the configuration of a profile. Currently we set a value for cpu and memory and use that in both resource limit and request, we should check this is a correct approach. - -# Proposal - -This is a proposal to enhance `SGInstanceProfile` in order to set the resource requirements for containers and init containers and a similar section to include with #820 or implement with this issue if #820 get implemented before this one: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGInstanceProfile -spec: - cpu: 16000 - memory: 64Gi - containers: - pgbouncer: - cpu: 1000 - memory: 64Mi - envoy: - cpu: 1000 - memory: 64Mi - prometheus-postgres-exporter: - cpu: 1000 - memory: 8Mi - postgres-util: - cpu: 1000 - memory: 8Mi - fluent-bit: - cpu: 1000 - memory: 8Mi - fluentd: - cpu: 4000 - memory: 2Gi - cluster-controller: # Could be applied also to distributedlogs-controller - cpu: 1000 - memory: 512Mi # Fix #1566 to improve this - initContainers: - setup-arbitrary-user: - cpu: 1000 - memory: 8Mi - setup-data-paths: - cpu: 1000 - memory: 8Mi - relocate-binaries: - cpu: 1000 - memory: 8Mi - setup-scripts: - cpu: 1000 - memory: 8Mi - pgbouncer-auth-file: - cpu: 1000 - memory: 8Mi - cluster-reconciliation-cycle: # Could be applied also to distributedlogs-reconciliation-cycle - cpu: 1000 - memory: 512Mi # Fix #1566 to improve this - major-version-upgrade: - cpu: 16000 - memory: 64Gi - reset-patroni: - cpu: 1000 - memory: 8Mi -``` - -If the user do not specify a container sub-section in the `.spec.containers` or `.spec.initContainers` section here are the proposed formulas to calculate the values of such sub-sections on creation of `sGInstanceProfile`: - -> Those values may be lowered. Also in those cases the requests may be lower than the limit when it comes to apply this resource requirements to specified containers in order to use the shared CPUs when [CPU manager policy is set to static](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy) in order to be in the `Burstable` group and use the shared pool of CPUs. 
- -* For `pgbouncer`: - - * `cpu` as millicpu: `min(1000, floor(toMillicpu("".spec.cpu"") / 16))` - * `memory` as Mi: `64` - -With `64Mi` PgBouncer should be able to handle 4096 connections. See https://www.pgbouncer.org/features.html - -> Low memory requirements (2 kB per connection by default). This is because PgBouncer does not need to see full packets at once. - -* For `envoy`: - - * `cpu` as millicpu: `min(4000, floor(toMillicpu("".spec.cpu"") / 4))` - * `memory` as Mi: `64` - -Scaling connections with pgbench from 1 connection to 100 only add 1Mi of used memory to the 20Mi initially used and with another increment of 11Mi with 1000 connections. If we consider 12Mi each 1000 connections with 64Mi envoy should be able to handle up to 3500 connections. - -* For `prometheus-postgres-exporter`: - - * `cpu` as millicpu: `min(1000, floor(toMillicpu("".spec.cpu"") / 16))` - * `memory` as Mi: `8Mi` - -* For `postgres-util`: - - * `cpu` as millicpu: `min(1000, floor(toMillicpu("".spec.cpu"") / 16))` - * `memory` as Mi: `8Mi` - -* For `fluent-bit`: - - * `cpu` as millicpu: `min(1000, floor(toMillicpu("".spec.cpu"") / 16))` - * `memory` as Mi: `8Mi` - -* For `fluentd`: - - * `cpu` as millicpu: `min(4000, floor(toMillicpu("".spec.cpu"") / 4))` - * `memory` as Mi: `2Gi` - -Memory usage is quite high. Starting from 512Mi it easily increases to 612Mi with a cluster of 3 instances with low log usage. So `2Gi` seems a quite safe value. - -For `cluster-controller` / `distributedlogs-controller` / `cluster-reconciliation-cycle` / `distributedlogs-reconciliation-cycle`: - - * `cpu` as millicpu: `min(1000, floor(toMillicpu("".spec.cpu"") / 4))` - * `memory` as Mi: `512Mi` - -See #1566. - -For `major-version-upgrade`: - - * `cpu` as millicpu: `toMillicpu("".spec.cpu"")` - * `memory` as Mi: `toMi("".spec.memory"")` - -Major version upgrade will run `pg_upgrade` command that may or may not require all of that CPU and memory. In any case better to give all the available resources to this container since it runs alone and we may find out a faster alternative that requires more memory and CPU in the future. - -* For `prometheus-postgres-exporter` / `postgres-util` / `fluent-bit` / `setup-arbitrary-user` / `setup-data-paths` / `relocate-binaries` / `setup-scripts` / `pgbouncer-auth-file` / `reset-patroni`: - - * `cpu` as millicpu: `toMillicpu("".spec.cpu"")` - * `memory` as Mi: `toMi("".spec.memory"")` - -Init containers may or may not require all of that CPU and memory. In any case better to give all the available resources to any init container since they runs alone. - -Reference: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled - -** Acceptance criteria: ** -- [x] Implement the feature -- [ ] Implement the changhe in the REST API. -- [x] create test -- [ ] Documentation",24 -126929411,2023-04-19 14:18:32.786,SGBackups list won't load when start time is not present on a backup,"### Summary - -At the moment the web console reads the SGBackups list from the REST API, it calculates the time it takes to create each backup. This causes the UI to fail loading the complete backups list if any of the properties used to calculate such time doesn't exist. - -An error like the following might be seen on the browser's console. - -![image](/uploads/24ed1363afc36d965309252ffbb27752/image.png) - - -### Expected Behaviour - -The web console should list every backup regardless of the possibility to calculate its duration or not. 
- - -### Environment - -- StackGres version: `1.4.2`",4 -126272022,2023-04-05 10:01:48.528,Show secretKeySelectors for GCS service account json on edit mode,"### Summary - -When editing a GCS SGObjectStorage, even when the `serviceAccountJSON` has been set, the UI shows a file input as if it wasn't. - -![image](/uploads/f72d8107e77963d28317b2dc28584d59/image.png) - -Instead of just showing the plain input with no file associated, the form should show the `secretKeySelectors` that match the uploaded file, while also giving the user the option to replace such selectors by uploading a new file. - - -#### Steps to reproduce - -- Enter the web console -- Create a GCS SGObjectStorage with an associated `serviceAccountJSON` -- Edit the recently created resource -- No secret key selectors info is shown - - -### Expected Behaviour - -Even though the `secretKeySelectors` are generally the same for every GCS storage, there should be a way to show users that the credentials have been set. - - -### Environment - -- StackGres version: `1.5.0-beta1`",8 -125958990,2023-03-29 17:42:35.796,Script content not shown on summary when set from a ConfigMap,"### Summary - -When editing an SGCluster from the web console, if the resource has any scripts associated to it which have been read from a ConfigMap, once the user clicks on ""View Summary"", te reference to the ConfigMap is shown, but not the script itself (even though it shows on the form) - - -#### Steps to reproduce - -- Enter the web console -- Enter the Create SGCluster form -- Create any cluster with at least one script associated to a ConfigMap -- Edit the cluster -- The script should appear on the ""Scripts"" step -- Click on ""View Summary"" -- The script is not shown, only the ConfigMap references - - -### Expected Behaviour - -Whenever the web console has access to the script, it should be shown on the summary - - -### Environment - -- StackGres version: `1.5.0-beta1` - -### Relevant logs and/or screenshots - -![image](/uploads/5057e6c65dc3f62b6b8344043a032121/image.png) - -![image](/uploads/8ac7be53d979b45d043e5099dcc2e2ae/image.png)",4 -125386054,2023-03-16 17:25:29.348,Replace instances dropdown with numeric input on SGCluster form,"### Summary - -When creating or editing SGClusters on the web console, the number of instances is defined by using a dropdown list with a maximum of 10 instances, even though there's no actual limit of instances for StackGres. - -![image](/uploads/a980eac99e7b3ac072e6bba1da18da52/image.png) - - -### Possible Solution - -Replace the dropdown with a numeric input. - -### Environment - -- StackGres version: `1.4.x`",4 -125041552,2023-03-09 11:35:03.741,Backup path is not updated after major version upgrade,"### Summary - -After performing a major version upgrade the backup `path` parameter is not updated and this leads to errors when starting the cluster. 
- -![image](/uploads/8075a5d50259bb2c26a0812236570864/image.png) - -### Current Behaviour - -If the backup path is set even with the default values, if you have a cluster with pg 14 and upgrade it to pg 15, the value is not updated - -before upgrade (pg14): - -``` -spec: - configurations: - backups: - - compression: lz4 - cronSchedule: 30 01 * * 6 - path: sgbackups.stackgres.io/my-db/my-db/14 -``` - -after upgrade (pg15): - -``` -spec: - configurations: - backups: - - compression: lz4 - cronSchedule: 30 01 * * 6 - path: sgbackups.stackgres.io/my-db/my-db/14 -``` - -The cluster does not start and throws the next type of error: - -``` -/var/run/postgresql:5432 - rejecting connections -ERROR: 2023/03/02 13:06:37.118279 Archive '00000003000002D800000001' does not exist. -ERROR: 2023/03/02 13:06:37.347286 Archive '00000002000002D800000001' does not exist. -ERROR: 2023/03/02 13:06:37.588631 Archive '00000001000002D800000001' does not exist. -2023-03-02 13:06:37 UTC [7051]: db=,user=,app=,client= FATAL: requested timeline 5 is not a child of this server's history -2023-03-02 13:06:37 UTC [7051]: db=,user=,app=,client= DETAIL: Latest checkpoint is at 2D8/1000228 on timeline 1, but in the history of the requested timeline, the server forked off from that timeline at 2D7/EF00E128. -``` - -``` -NAME READY STATUS RESTARTS AGE -my-db-0 6/7 Running 0 1h -``` - -After changing the value of the backup path to the new version the cluster starts normally. - -``` -spec: - configurations: - backups: - - compression: lz4 - cronSchedule: 30 01 * * 6 - path: sgbackups.stackgres.io/my-db/my-db/15 -``` - - -``` -NAME READY STATUS RESTARTS AGE -my-db-0 7/7 Running 0 1m -``` - - -#### Steps to reproduce - -1. Create a cluster with the backup configuration (pg14) -2. Execute a Major version upgrade from pg14 to pg15 -3. Check the backup `path` parameter - -### Expected Behaviour - -If the value is not set, the operator also upgrades the value to the newer version in order to avoid cluster issues. 
- -### Environment - -- StackGres version: 1.4.0 -- Kubernetes version: 1.24 -- Cloud provider or hardware configuration: DO and AWS",2 -124451744,2023-02-28 19:23:11.475,[UI] Support user-supplied sidecars for pods customVolumes,"Implement support for user-supplied sidecars on SGCluster pods' customVolumes based on the following spec proposal: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -spec: - pods: - customVolumes: - - name: - emptyDir: - medium: - sizeLimit: - configMap: - name: - optional: - defaultMode: - items: - - key: - mode: - path: - secret: - secretName: - optional: - defaultMode: - items: - - key: - mode: - path: -``` - -For more info see #538 - -**Acceptance criteria:** - -* [ ] Implement the proposed changes on the UI -* [ ] Setup test for this feature",24 -124451637,2023-02-28 19:20:15.219,[UI] Support user-supplied sidecars for SGCluster services,"Implement support for user-supplied sidecars on SGCluster services based on the following spec proposal: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -spec: - postgresServices: - primary: - customPorts: # The list of extra ports that will be exposed by the primary service - - appProtocol: - name: - nodePort: - port: - protocol: - targetPort: - replicas: - customPorts: # The list of extra ports that will be exposed by the replicas service - - appProtocol: - name: - nodePort: - port: - protocol: - targetPort: -``` - -For more info see #538 - -**Acceptance criteria:** - -* [x] Implement the proposed changes on the UI -* [x] Setup test for this feature",16 -122284272,2023-01-23 10:25:04.014,Allow empty Backup path generate errors after update," -### Summary -Performing major version upgrade from 13.8 to 14.4 using the UI, I didn't fill the backup path field, and the SGDbOps continue to use the old one, provoking the cluster to not start: - -``` -2023-01-23 09:50:11.200 UTC,,,8369,,63ce5853.20b1,3,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""listening on IPv4 address """"127.0.0.1"""", port 5432"",,,,,,,,,"""",""postmaster"",,0 -2023-01-23 09:50:11.206 UTC,,,8369,,63ce5853.20b1,4,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""listening on Unix socket """"/var/run/postgresql/.s.PGSQL.5432"""""",,,,,,,,,"""",""postmaster"",,0 -2023-01-23 09:50:11.217 UTC,,,8373,,63ce5853.20b5,1,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""database system was shut down in recovery at 2023-01-23 09:47:41 UTC"",,,,,,,,,"""",""startup"",,0 -2023-01-23 09:50:11.416 UTC,,,8373,,63ce5853.20b5,2,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""restored log file """"00000002.history"""" from archive"",,,,,,,,,"""",""startup"",,0 -2023-01-23 09:50:11.627 UTC,,,8373,,63ce5853.20b5,3,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""restored log file """"00000003.history"""" from archive"",,,,,,,,,"""",""startup"",,0 -2023-01-23 09:50:11.873 UTC,,,8373,,63ce5853.20b5,4,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""entering standby mode"",,,,,,,,,"""",""startup"",,0 -2023-01-23 09:50:12.086 UTC,,,8373,,63ce5853.20b5,5,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""restored log file """"00000003.history"""" from archive"",,,,,,,,,"""",""startup"",,0 -2023-01-23 09:50:12.095 UTC,,,8373,,63ce5853.20b5,6,,2023-01-23 09:50:11 UTC,,0,FATAL,XX000,""requested timeline 3 is not a child of this server's history"",""Latest checkpoint is at 2D7/EF00E0B0 on timeline 1, but in the history of the requested timeline, the server forked off from that timeline at 2D7/EB0093D0."",,,,,,,,"""",""startup"",,0 -2023-01-23 09:50:12.096 UTC,,,8369,,63ce5853.20b1,5,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""startup 
process (PID 8373) exited with exit code 1"",,,,,,,,,"""",""postmaster"",,0 -2023-01-23 09:50:12.096 UTC,,,8369,,63ce5853.20b1,6,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""aborting startup due to startup process failure"",,,,,,,,,"""",""postmaster"",,0 -2023-01-23 09:50:12.111 UTC,,,8369,,63ce5853.20b1,7,,2023-01-23 09:50:11 UTC,,0,LOG,00000,""database system is shut down"",,,,,,,,,"""",""postmaster"",,0 - -``` - - -This is the the SGDbOps manifest generated: - -``` -apiVersion: stackgres.io/v1 -kind: SGDbOps -metadata: - annotations: - stackgres.io/operatorVersion: 1.4.0 - creationTimestamp: ""2023-01-23T09:35:50Z"" - generation: 5 - name: op2023-01-23-10-35-04 - namespace: my-db - resourceVersion: ""252313591"" - uid: ec77f0c9-7728-45f2-a70c-64efb01b51c8 -spec: - majorVersionUpgrade: - backupPath: """" - check: false - clone: false - link: true - postgresVersion: ""14.4"" - sgPostgresConfig: pgconfig-14 - maxRetries: 0 - op: majorVersionUpgrade - scheduling: - tolerations: - - key: stackgres - operator: Exists - sgCluster: my-db -status: - conditions: - - lastTransitionTime: ""2023-01-23T09:59:33Z"" - reason: OperationNotRunning - status: ""False"" - type: Running - - lastTransitionTime: ""2023-01-23T09:59:33Z"" - reason: OperationCompleted - status: ""True"" - type: Completed - - lastTransitionTime: ""2023-01-23T09:59:33Z"" - reason: OperationNotFailed - status: ""False"" - type: Failed - majorVersionUpgrade: - initialInstances: - - my-db-0 - pendingToRestartInstances: [] - primaryInstance: my-db-0 - restartedInstances: - - my-db-0 - sourcePostgresVersion: ""13.8"" - targetPostgresVersion: ""14.4"" - opRetries: 0 - opStarted: ""2023-01-23T09:35:52Z"" -``` - -#### Steps to reproduce - -- Create an SGCluster with backup configuration and backup path set, for example: - -``` -configurations: - backups: - - compression: lz4 - cronSchedule: 30 01 * * 6 - path: sgbackups.stackgres.io/my-db/my-db/13 -``` - -- Create a Major version Upgrade SGDbOps without filling the backup path field (from the UI) -- The major version upgrade finished but the cluster will not start because of the value of the backup path - -### Expected Behaviour - -The Cluster is upgraded successfully and the sgbackups path is set with a new valid value. - -### Possible Solution - -In the case to solve the issue I follow the next steps: - -1. Delete the StackGres validation webhook (I did a backup first) -2. Edit the SGCluster and set the new value for the backup path -3. Re-create the validation webhook -4. Delete the sgcluster pod -5. wait for the cluster to start. - -It would be nice to have a warning message or error if you do not specify a correct value and prevent the SGDbOps to be created. - - -### Environment - -- StackGres version: 1.4.0 - -- Kubernetes version: 1.23.14 - -- Cloud provider or hardware configuration: DigitalOcean - - -### Relevant logs and/or screenshots - -",4 -122139927,2023-01-19 18:09:05.356,Web console sends each script entry as a base script source,"### Summary - -When more than one script entry is set for a given script during cluster creation, the web console sends each entry as a single script containing all of the sibling entries. 
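-
-For instance, a script defined with two entries should be sent as a single script whose entry list contains both of them, roughly as in the simplified sketch below (entry names and SQL are made up); instead, one script is currently produced per entry, each of them carrying all the sibling entries:
-
-```yaml
-# expected: a single script with its two entries
-scripts:
-- name: create-schema
-  script: CREATE SCHEMA app;
-- name: seed-data
-  script: INSERT INTO app.seed VALUES (1);
-```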
- - -#### Steps to reproduce - -* Enter the web console -* Enter the ""Create Cluster"" form -* Click on the ""ManagedSQL"" step -* Create one script with two or more script entries -* Click on create cluster -* The request sent to the API includes one script set for each single script entry - -### Expected Behaviour - -Script Entries from the same parent script should all be sent in a single script set - -### Possible Solution - -Review `cleanupScripts` function on the `CreateSGClusters.vue` file - -### Environment - -- StackGres version: `1.4.0`",8 -122139528,2023-01-19 18:00:25.587,Script source not cleared when deleting a script from the web console,"### Summary - -When creating a cluster with ManagedSQL specs, users can define one or more scripts. These scripts can be chosen from already existent SGScripts, or created inline by the user. But when one of the scripts is deleted from the set, the index for such script source is kept on memory, which causes inconsistencies on the scripts source set. - - -#### Steps to reproduce - -- Enter the web console -- Enter the ""Create Cluster"" form -- Click on the ""ManagedSQL"" step -- Create/choose two or more scripts -- Delete any of the scripts from the set -- A mismatch will appear on the set of sources, because the source from the deleted script has not been removed - - -### Expected Behaviour - -When a script is deleted, its source reference should be deleted also. - - -### Environment - -- StackGres version: `1.4.0`",8 -122101278,2023-01-19 08:29:38.130,Web console allows setting PITR specs when it's not possible,"### Summary - -As stated [on the SG docs](https://stackgres.io/doc/latest/reference/crd/sgcluster/#restore-from-backup): - -> It is possible to restore the database to its state at any time since your backup was taken using Point-in-Time Recovery (PITR) **as long as another backup newer than the PITR requested restoration date does not exists.** - -Yet the web console allows for users to set PITR specs even if one or several backups have been taken after the backup chosen for the initialization. - - -#### Steps to reproduce - -- Enter the web console - -- **Create Base Backups:** - - Create backup `X` for any cluster - - Wait for backup `X` to be completed - - Crate backup `Y` for the same cluster - - Wait for backup `Y` to be completed - -- Enter the ""Create Cluster"" form: - - Enable Advanced Options - - Click on the ""Initialization"" step - - Select backup `X` from the dropdown - -- PITR inputs will appear, even though backup `Y` exists and it has been taken after backup `X` - - -### Expected Behaviour - -PITR inputs can only be shown if no backups have been taken after the one chosen for cluster initialization - - -### Environment - -- StackGres version: `1.4.0`",4 -121970514,2023-01-17 13:53:11.633,Fix Extensions table layout,"### Summary -On the Cluster form, the Extensions table layout is too narrow and the names of the extensions do not fit properly. - -### Possible Solution -Add ellipsis for long extension names. - -### Environment -- StackGres version: 1\.4.0 - -### Relevant logs and/or screenshots -![Screenshot_2023-01-17_at_14.49.13](/uploads/411779ce9b95900b1684b71824ef2e4d/Screenshot_2023-01-17_at_14.49.13.png)",4 -121951234,2023-01-17 08:34:36.386,Cluster name on SGDbOp details is not clickable,"### Summary - -When listing details for any CRD, if there's any reference to another CRD on the web console, the name should be clickable and take the user to the corresponding CRD details page. 
- -This is not the case for cluster names on the SGDbOp screen. - -![image](/uploads/d9cb25a69bab9e5e12994866730ea793/image.png) - - - -#### Steps to reproduce - -- Enter the web console -- Create any SGDbOp -- Enter the op details -- Cluster name is not clickable - - -### Expected Behaviour - -Any CRD listed on the web console should include a link to see its details. - -### Note: -Same issue spotted on SGBackups details. - -### Environment - -- StackGres version: `1.4.0`",4 -121911544,2023-01-16 16:18:13.803,Unify click behavious when clicked on a switch and on its label,"### Summary -On the Web Console, clicking on the label of a switch and on the switch itself has the same effect. This is not the case for the `Enable Path Style Addressing` switch: when clicked on the label, the click has no effect. - -**Note:** Although this issue was spotted on the `Enable Path Style Addressing` switch, we need to make sure it does not happen on other switches as well. - -### Expected Behaviour -Clicking on the label of a switch and on the switch itself should have the same effect. - -### Environment -- StackGres version: 1\.4.0",8 -121910734,2023-01-16 16:06:20.691,Remove Advanced switch from Azure section,"### Summary -On the Object Storage form, the Azure section has an `Advanced Options` switch, but Azure has no Advanced Options. The switch has no effect. - -### Possible Solution -Remove the `Advanced Options` switch from the Azure section. - -### Environment -- StackGres version: 1\.4.0 - - -### Relevant logs and/or screenshots -![Screenshot_2023-01-16_at_17.02.03](/uploads/54603b16c4b411520e1e29a786dc98bb/Screenshot_2023-01-16_at_17.02.03.png)",4 -121525920,2023-01-10 15:09:44.765,View Script button text and icon on Cluster Configuration tab have different behaviours,"### Summary -When trying to view the Managed SQL Script within the SGCluster Configuration tab, the button text (`View Script`) and the eye icon behave in different ways: -- `View Script` opens a pop-up containing the Script. -- The eye icon opens a pop-up containing the text `undefined`. - -**Note:** Although this issue was spotted on the `View Script` button, we need to make sure it does not happen on other buttons as well. - -#### Steps to reproduce -1. Create a Cluser. -2. Go to the Configuration tab. Scroll to Managed SQL section and click on `View Entries`. A pop-up with the Script Entries will appear. -4. Click on the `View Script` text. A pop-up containing the Script will appear. -5. Close the Script pop-up and click on the eye icon next to the `View Script` text. A pop-up with the text `undefined` will appear. - -### Expected Behaviour -Both the text and the eye icon of the button should behave in the same way and open the Script pop-up. - -### Environment -- StackGres version: 1\.4.0 - -### Relevant logs and/or screenshots -![image](https://gitlab.com/ongresinc/stackgres/uploads/674bbb18beea6f1f9325b1aa222de97e/image.png)",4 -121468080,2023-01-09 17:03:54.608,notValid class not being removed on Babelfish Experimental Feature,"### Summary -`notValid` class is not being removed on `Babelfish Experimental Feature` field. - -**Note:** Although this issue was spotted on the `Babelfish Experimental Feature` field, we need to make sure it does not happen on other fields as well. - -#### Steps to reproduce -1. Go to Cluster Creation form. -2. Give the cluster a name. Select Babelfish flavor but do not enable `Babelfish Experimental Feature (required)`. -3. Click on `View Summary`. -4. 
The Summary won't open because not all required fields are filled in. The unfilled required inputs and the Step containing them now have a `notValid` class and appear in red. -5. Enable `Babelfish Experimental Feature (required)`. -6. The class `notValid` remains, though the Summary now opens. - -### Expected Behaviour -Once `Babelfish Experimental Feature (required)` is enabled, `notValid` class should be removed. - -### Environment -- StackGres version: 1\.4.0 - -### Relevant logs and/or screenshots -![Screen_Recording_2023-01-09_at_17.31.53](/uploads/32b204610965b9239409fc5c5e49face/Screen_Recording_2023-01-09_at_17.31.53.mov)",8 -121463695,2023-01-09 16:03:25.387,Service Account JSON not shown on Summary,"### Summary -Service Account JSON is not shown on the Summary on Object Storage Creation form (Google Cloud Storage). - -#### Steps to reproduce -1. Go to Object Storage Creation form. -2. Select Google Cloud Storage. -3. Fill in all required fields, including the `Service Account JSON`. -4. Click con `View Summary`. -5. The file name and contents do not appear on the Summary. - -### Expected Behaviour -Service Account JSON name should be shown on Object Storage Summary. - -### Environment -- StackGres version: 1\.4.0",4 -121448426,2023-01-09 12:07:48.838,Continue on SGScripts Error should not be visible if there are no Scripts set,"### Summary -On Cluster Creation form, Scripts step, the field `Continue on SGScripts Error` is placed at the very top. `Continue on SGScripts Error` depends on the existence of Scripts, and enabling/disabling it will have no impact if no Scripts are set. Even though `Continue on SGScripts Error` depends on Scripts existence, it is always visible at the top and the User can enable/disable it (with no effects). - -### Expected Behaviour -If enabling/disabling `Continue on SGScripts Error` won't have any effects unless there are Scripts set, `Continue on SGScripts Error` field should not appear unless there are Scripts set. Also, since it depends on Scripts being set, it should appear _after_ the Scripts fieldset. - -Also, on the Cluster tab, the value for `Continue on SGScripts Error` does not appear. - -### Environment -- StackGres version: 1\.4.0",4 -121446670,2023-01-09 11:33:43.458,Summaries should not open unless all required fields are filled in,"### Summary -On all Creation forms through the Web UI, Summaries can only be opened when all required fields are filled in. This does not happen for Storage Type (SGObjectStorage form) and Database Operation (SGDbOps form) fields, both of them required. - -### Current Behaviour -The User can open the Summary without selecting a Storage Type (required) or Database Operation (required). - -#### Steps to reproduce -1. Go to Object Storage Creation form. -2. Give the configuration a name. -3. Do not select any Storage Types (required). -4. Click on `View Summary`. -5. The Summary will open even though no Storage Type (required) was set. - -### Expected Behaviour -- Object Storage Summary should not open unless a Storage Type (required) is selected. -- DbOp Summary should not open unless a Database Operation (required) is selected. - -### Environment -- StackGres version: 1\.4.0",8 -121442290,2023-01-09 10:23:02.078,Disable Connection Pooling not working properly on Cluster Form,"### Summary -On the Cluster Creation form, Connection Pooling Configuration is set even if Connection Pooling is disabled. - -#### Steps to reproduce -1. Go to Create Cluster form. -2. Select a Connection Pooling Configuration. -3. 
Disabled Connection Pooling. -4. The input for Connection Pooling Configuration still appears (disabled) and shows the selected Connection Pooling Configuration. -5. Create the Cluster. -6. On the Configuration tab it is shown that the Cluster has a Connection Pooling Configuration. - -**Note**: if no Connection Pooling Configuration is selected, a default Connection Pooling Configuration is created even if Connection Pooling is disabled. - -### Expected Behaviour -If Connection Pooling is disabled: -- The Connection Pooling Configuration field should not be shown. -- No Connection Pooling Configuration should be set or created when creating the Cluster. - -### Environment -- StackGres version: 1\.4.0 - -### Relevant logs and/or screenshots -**Cluster Creation form. Connection Pooling is disabled, but the selected configuration `test` still appears:** -![Screenshot_2023-01-09_at_11.07.06](/uploads/ae0a70fe23531087dae34887e4c7f0d8/Screenshot_2023-01-09_at_11.07.06.png) - -**Cluster Configuration tab. Even though Connection Pooling was disabled, the configuration `test` appears:** -![Screenshot_2023-01-09_at_11.07.49](/uploads/005f264d71b18957a3f151520b75621c/Screenshot_2023-01-09_at_11.07.49.png)",8 -121439458,2023-01-09 09:39:34.529,Script Source on Create Cluster form not working,"### Summary -When trying to select a Script Source on the Create Cluster form, the select is not working. No errors appear on the Console. - -#### Steps to reproduce -1. Go to Create Cluster. -2. Try to select a Source on the Scripts step. - -### Expected Behaviour -The Script Source should load. - - -### Environment -- StackGres version: 1\.4.0 - - -### Relevant logs and/or screenshots -![Screen_Recording_2023-01-09_at_10.33.44](/uploads/4d5e3480d94854388b93830230a227b1/Screen_Recording_2023-01-09_at_10.33.44.mov)",8 -121306538,2023-01-05 16:37:28.301,Object Storage selector on Cluster form shows all Object Storages from all Namespaces,"### Summary -When selecting an Object Storage on the Create Cluster form, the dropdown displays all Object Storages from all Namespaces. If the selected Object Storage is from a different Namespace, the Cluster creation will fail. - -#### Steps to reproduce -1. Create Object Storages on different Namespaces. -2. Go to Create Cluster, Backups step. Select an Object Storage. -3. The dropdown shows a list of all Object Storages from all Namespaces. - -### Expected Behaviour -Only Object Storages from the Namespace the User is on should be shown. - -### Possible Solution -Object Storages should be filtered by Namespace on the Create Cluster form. - -### Environment -- StackGres version: 1\.4.0",4 -120947636,2022-12-27 12:32:54.528,Minimized sidebar namespaces list is open when tab is reloaded,"On the web console, when the sidebar is minimized, a cookie is set for the browser to remember user's preference. - -When the browser tab is reloaded, even though the sidebar appears minimized, the namespaces list is always shown floating next to the sidebar and it hides only when passing the mouse hover it. - -![image](/uploads/47dc537c89eb724ba15108f4aaee7e12/image.png)",4 -120347338,2022-12-13 15:57:33.159,Web console sidebar items hidden behind dialog popups,"### Summary - -On the web console, when a dialog box is shown on the main container, even though users can interact with the sidebar, its nav items are hidden behind the dialog box. 
- -![image](/uploads/6c0f0dc410434ead96a82e94a9c7ccec/image.png) - - -#### Steps to reproduce - -- Enter the web console -- Edit any resource -- Click on ""View Summary"" (a dialog box will open) -- Interact with sidebar items -- Links to access resources will be hidden behind the dialog box - - -### Expected Behaviour - -Whenever interaction with the sidebar items is possible, they must be on top of any other content available. - - -### Environment - -- StackGres version: `1.4.0`",1 -119480593,2022-11-28 11:51:44.493,"Implement the UI for containers, initContainers and requests for SGInstanceProfile","The objective of this issue is to create the necessary requirements for the UI to support containers, initContainers and requests for SGInstanceProfile. - -Based on the development of the issue: #144 and #820 - -**Acceptance criteria:** - -* [ ] Implement the UI for containers, initContainers and requests for SGInstanceProfile -* [ ] Test the implementation by setting value for containers, initContainers and resources section.",2 -118687421,2022-11-14 16:52:32.508,Replicate from a cluster that has backup configuration make reconciliation cycle fail,"### Summary - -Replicate from a cluster that has backup configuration make reconciliation cycle fail - -### Current Behaviour - -The operator fail to reconcile the cluster - -#### Steps to reproduce - -1. Create a Cluster with backup configuration -2. Create a Cluster that replicates from the previous one - -### Expected Behaviour - -The operator reconcile the cluster - -### Environment - -- StackGres version: ? -- Kubernetes version: ? -- Cloud provider or hardware configuration: ? - -### Relevant logs and/or screenshots - -``` -2022-11-14 16:46:47,560 ERROR [io.st.op.conciliation] (SGCluster-ReconciliationLoop) Reconciliation of SGCluster ui/ui-replica failed: java.util.NoSuchElementException: No value present - at java.base/java.util.Optional.orElseThrow(Optional.java:377) - at io.stackgres.operator.conciliation.factory.cluster.replicate.ReplicateConfigMap.lambda$buildSource$0(ReplicateConfigMap.java:73) - at java.base/java.util.Optional.ifPresent(Optional.java:178) - at io.stackgres.operator.conciliation.factory.cluster.replicate.ReplicateConfigMap.buildSource(ReplicateConfigMap.java:70) - at io.stackgres.operator.conciliation.factory.cluster.replicate.ReplicateConfigMap.buildVolumes(ReplicateConfigMap.java:52) - at io.stackgres.operator.conciliation.factory.cluster.replicate.ReplicateConfigMap.buildVolumes(ReplicateConfigMap.java:35) - at io.stackgres.operator.conciliation.factory.cluster.ClusterVolumeDiscoverer.lambda$discoverVolumes$1(ClusterVolumeDiscoverer.java:41) - at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273) - at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) - at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) - at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) - at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) - at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) - at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) - at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) - at io.stackgres.operator.conciliation.factory.cluster.ClusterVolumeDiscoverer.discoverVolumes(ClusterVolumeDiscoverer.java:42) - at 
io.stackgres.operator.conciliation.factory.cluster.ClusterVolumeDiscoverer.discoverVolumes(ClusterVolumeDiscoverer.java:24) - at io.stackgres.operator.conciliation.factory.cluster.ClusterVolumeDiscoverer_ClientProxy.discoverVolumes(Unknown Source) - at io.stackgres.operator.conciliation.factory.cluster.ClusterStatefulSet.generateResource(ClusterStatefulSet.java:94) - at io.stackgres.operator.conciliation.factory.cluster.ClusterStatefulSet.generateResource(ClusterStatefulSet.java:42) - at io.stackgres.operator.conciliation.AbstractRequiredResourceDecorator.lambda$decorateResources$0(AbstractRequiredResourceDecorator.java:30) - at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273) - at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) - at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) - at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) - at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) - at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) - at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616) - at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622) - at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627) - at io.stackgres.operator.conciliation.AbstractRequiredResourceDecorator.decorateResources(AbstractRequiredResourceDecorator.java:30) - at io.stackgres.operator.conciliation.cluster.ClusterRequiredResourceDecorator_ClientProxy.decorateResources(Unknown Source) - at io.stackgres.operator.conciliation.cluster.ClusterRequiredResourcesGenerator.getRequiredResources(ClusterRequiredResourcesGenerator.java:223) - at io.stackgres.operator.conciliation.cluster.ClusterRequiredResourcesGenerator.getRequiredResources(ClusterRequiredResourcesGenerator.java:63) - at io.stackgres.operator.conciliation.cluster.ClusterRequiredResourcesGenerator_ClientProxy.getRequiredResources(Unknown Source) - at io.stackgres.operator.conciliation.Conciliator.evalReconciliationState(Conciliator.java:28) - at io.stackgres.operator.conciliation.cluster.ClusterConciliator.evalReconciliationState(ClusterConciliator.java:30) - at io.stackgres.operator.conciliation.cluster.ClusterConciliator.evalReconciliationState(ClusterConciliator.java:18) - at io.stackgres.operator.conciliation.cluster.ClusterConciliator_ClientProxy.evalReconciliationState(Unknown Source) - at io.stackgres.operator.conciliation.AbstractReconciliator.lambda$reconciliationCycle$4(AbstractReconciliator.java:108) - at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) - at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) - at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) - at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) - at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) - at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) - at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) - at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) - at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) - at 
io.stackgres.operator.conciliation.AbstractReconciliator.reconciliationCycle(AbstractReconciliator.java:100) - at io.stackgres.operator.conciliation.AbstractReconciliator.reconciliationLoop(AbstractReconciliator.java:90) - at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) - at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) - at java.base/java.lang.Thread.run(Thread.java:833) -```",2 -117512170,2022-10-25 07:33:43.262,Update Babelfish Compass to version 2022-10,"Update babelfish compass component to version 2022-10 - -## Acceptance Criteria - -* [ ] Update version -* [ ] Pass tests",2 -117153033,2022-10-19 08:14:08.745,Cluster is not reconciled when prometheus auto bind is disabled in the operator,"### Summary - -Cluster is not reconciled when prometheus auto bind is disabled in the operator - -### Current Behaviour - -The cluster is not created and the operator logs the following exception: - -```java -Cluster reconciliation cycle failed: {""kind"":""Status"",""apiVersion"":""v1"",""metadata"":{},""status"":""Failure"",""message"":""servicemonitors.monitoring.coreos.com is forbidden: User \""system:serviceaccount:stackgres:stackgres-operator\"" cannot list resource \""servicemonitors\"" in API group \""monitoring.coreos.com\"" in the namespace \""keycloak\"""",""reason"":""Forbidden"",""details"":{""group"":""monitoring.coreos.com"",""kind"":""servicemonitors""},""code"":403} -``` - -#### Steps to reproduce - -1. Install the operator with `--set prometheus.allowAutobind=false` -2. Create a SGCluster - -The test may require the prometheus CRDs to be installed - -### Expected Behaviour - -The cluster is created and no exception is logged. - -### Possible Solution - -Skip scanning of prometheus CRs in the reconciliation cycle when the promethues auto bind is disabled. - -### Environment - -- StackGres version: 1.3.3 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ?",2 -117099639,2022-10-18 11:48:47.055,Backup are marked completed when they are not,"### Summary - -Backup are marked completed when they are not. - -### Current Behaviour - -The backup has the `Completed` status but no information about the backup name and other related properties are present: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGBackup -status: - backupPath: sgbackups.stackgres.io/test/test/14 - process: - jobPod: bk2022-10-17-17-10-58-backup-8542j - status: Completed - sgBackupConfig: - baseBackups: - compression: lz4 - storage: - s3Compatible: - awsCredentials: - secretKeySelectors: - accessKeyId: - key: accessKey - name: test-backup-credentials - secretAccessKey: - key: secretKey - name: test-backup-credentials - bucket: 4amlunch-db-backups - enablePathStyleAddressing: true - endpoint: https://minio - region: k8s - type: s3Compatible -``` - -#### Steps to reproduce - -1. Create a SGCluster with backup configuration -2. Create a SGBackup -3. While the backup is running and is executing the remote backup with wal-g change StackGres operator deployment to have 0 replicas -4. Wait for the Job to complete -5. Change StackGres operator deployment to have 1 replicas - -### Expected Behaviour - -The backup has the `Completed` status and all the information about the backup name and other related properties must be present. - -### Environment - -- StackGres version: 1.3.2 (but probably this affects also previous versions) -- Kubernetes version: ? 
-- Cloud provider or hardware configuration: ?",2 -117081197,2022-10-18 06:43:26.898,"setup-data-path.sh not executing, resulting in broken permissions"," -### Summary - -In v1.3.2 there is a script, that can't be executed before the postgresql instance starts: `/etc/patroni/setup-data-path.sh`. This script seems to set some folder permissions for the postgresql data. - -I believe this is related to #2052. - -### Current Behaviour - -There's an error logged before the postgresql instance is started: -``` -2022-10-18 04:58:35,278 ERROR: Failed to execute ['/etc/patroni/setup-data-path.sh', 'on_start', 'replica', 'clustername'] -Traceback (most recent call last): - File ""/usr/lib/python3.6/site-packages/patroni/postgresql/cancellable.py"", line 30, in _start_process - self._process = psutil.Popen(cmd, *args, **kwargs) - File ""/usr/lib64/python3.6/site-packages/psutil/__init__.py"", line 1316, in __init__ - self.__subproc = subprocess.Popen(*args, **kwargs) - File ""/usr/lib64/python3.6/subprocess.py"", line 729, in __init__ - restore_signals, start_new_session) - File ""/usr/lib64/python3.6/subprocess.py"", line 1364, in _execute_child - raise child_exception_type(errno_num, err_msg, err_filename) -OSError: [Errno 8] Exec format error: '/etc/patroni/setup-data-path.sh' -``` - -This script sets some permissions on the postgresql data folder. Because it's not executed the permissions for the folder aren't correct, and postgresql refuses to start. Manually executing the script is working. - -#### Steps to reproduce - -1. Update operator to v1.3.2 -2. restart a cluster -3. The nodes fail to restart due to the broken permissions - -Note: downgrading to v1.3.1 resolves the issue. - -### Expected Behaviour - -The script can be run so that the permissions are set correctly. - -### Possible Solution - -The script is currently missing a shebang. It might be enough to specify it. Otherwise, call it explicitly with something like `sh /etc/patroni/setup-data-path.sh`. - -### Environment - -- StackGres version: 1.3.2 -- Kubernetes version: 1.24 -- Cloud provider or hardware configuration: Ceph CSI RBD - - -### Relevant logs and/or screenshots - -``` -2022-10-18 04:58:35,278 ERROR: Failed to execute ['/etc/patroni/setup-data-path.sh', 'on_start', 'replica', 'earthnet'] -Traceback (most recent call last): - File ""/usr/lib/python3.6/site-packages/patroni/postgresql/cancellable.py"", line 30, in _start_process - self._process = psutil.Popen(cmd, *args, **kwargs) - File ""/usr/lib64/python3.6/site-packages/psutil/__init__.py"", line 1316, in __init__ - self.__subproc = subprocess.Popen(*args, **kwargs) - File ""/usr/lib64/python3.6/subprocess.py"", line 729, in __init__ - restore_signals, start_new_session) - File ""/usr/lib64/python3.6/subprocess.py"", line 1364, in _execute_child - raise child_exception_type(errno_num, err_msg, err_filename) -OSError: [Errno 8] Exec format error: '/etc/patroni/setup-data-path.sh' -```",2 -116754140,2022-10-12 13:13:40.499,Cluster Crash-Loop if Kubernetes API is temporarily unavailable,"### Summary - -If the Kubernetes API is temporarily unavailable, the Cluster crashes and is unable to heal itself. -The only way to get the Cluster running again is to manually kill the primary pod. - -### Current Behaviour -Whenever our Kubernetes API is down for a short period of time (seconds), -the Pimary Pod crashes and remains in a crash recovery loop. -Even if the Kubernetes API is available again. - -The cluster is down until someone restarts the pod. 
- -#### Steps to reproduce -Timeout the Kubernetes-API Requests... -Reproduce depending a bit on the Cloud-Environment / Networking. - -### Expected Behaviour -When the Kubernetes API is back, the cluster should be able to heal itself. - -### Possible Solution -Auto-Kill the Pod via Livenessprobe. -A pod in this condition is definitely not live anymore... -Postmortem Logs: (WARNING: Postgresql is not running.) - -### Environment -- StackGres version: v1.2.1 -- Kubernetes version: v1.22.5 -- Cloud provider or hardware configuration: -A Managed Kubernetes hosted on Openstack. - -To be sure, the IP `10.240.16.1` is a Kubernetes Service with the Kubernetes API. - -```bash -$ kubectl get service kubernetes -o wide -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR -kubernetes NodePort 10.240.16.1 443:31902/TCP 487d -``` -### Postmortem logs - -Here are the relevant parts from our logs (postmortem) - -The ClusterName is `app-pgsql` with user `appUser` and the `app-db` in this example. -We run with the following 3 nodes: -- app-pgsql-1 -- app-pgsql-2 *Primary* -- app-pgsql-3 - -At the time of the crash, `app-pgsql-2` was the `primary`. -The path to a crash begins with a read timeout in the `Primary` Pod. - -**app-pgsql-2** (Primary) -``` -ERROR: Request to server https://10.240.16.1:443 failed: ReadTimeoutError(""HTTPSConnectionPool(host='10.240.16.1', port=443): Read timed out. (read timeout=4.9997613350860775)"",) -[203] LOG C-0xcfc0d0: app-db/appUser@127.0.0.1:44462 login attempt: db=app-db user=appUser tls=no -[203] LOG C-0xcfc760: app-db/appUser@127.0.0.1:44460 login attempt: db=app-db user=appUser tls=no -[203] LOG C-0xcfc530: app-db/appUser@127.0.0.1:44464 login attempt: db=app-db user=appUser tls=no -[203] LOG C-0xcfc530: app-db/appUser@127.0.0.1:44464 closing because: client close request (age=0s) -[203] LOG C-0xcfc760: app-db/appUser@127.0.0.1:44460 closing because: client close request (age=0s) -[203] LOG C-0xcfc0d0: app-db/appUser@127.0.0.1:44462 closing because: client close request (age=0s) -WARNING: Concurrent update of app-pgsql -INFO: no action. I am (app-pgsql-2), the leader with the lock -``` - -after a few seconds and more failed requests, the *primary* demotes itself as expected with a network partition. - -``` -ERROR [io.st.cl.co.ClusterControllerReconciliationCycle] (Cluster-ReconciliationCycle) 83517| Cluster reconciliation cycle failed sending event while retrieving reconciliation cycle contexts: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.240.16.1/api/v1/namespaces/development/events. Message: rpc error: code = Unavailable desc = error reading from server: read tcp 10.244.111.77:58466->10.244.109.41:2379: read: connection timed out. Received status: Status(apiVersion=v1, code=500, details=null, kind=Status, message=rpc error: code = Unavailable desc = error reading from server: read tcp 10.244.111.77:58466->10.244.109.41:2379: read: connection timed out, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=null, status=Failure, additionalProperties={}). -WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=0, status=None)) after connection broken by 'ReadTimeoutError(""HTTPSConnectionPool(host='10.240.16.1', port=443): Read timed out. 
(read timeout=4.999653317965567)"",)': /api/v1/namespaces/development/endpoints/app-pgsql - -ERROR: failed to update leader lock -INFO: Demoting self (immediate-nolock) -``` - -Here is the full demotion from the Primary Pod: - -``` -INFO: Demoting self (immediate-nolock) -UTC [231]: db=,user=,app=,client= LOG: received immediate shutdown request -[203] WARNING S-0xd09560: app-db/appUser@unix:5432 got packet 'N' from server when not linked -[203] WARNING S-0xd09790: app-db/appUser@unix:5432 got packet 'N' from server when not linked -[203] LOG S-0xd09560: app-db/appUser@unix:5432 closing because: server conn crashed? (age=412s) -[203] LOG S-0xd08ed0: app-db/appUser@unix:5432 closing because: server conn crashed? (age=255s) -[203] LOG C-0xcfb5e0: app-db/appUser@127.0.0.1:33194 closing because: server conn crashed? (age=255s) -[203] WARNING C-0xcfb5e0: app-db/appUser@127.0.0.1:33194 pooler error: server conn crashed? -[203] WARNING S-0xd09330: app-db/authenticator@unix:5432 got packet 'N' from server when not linked -[203] WARNING S-0xd0a4b0: app-db/authenticator@unix:5432 got packet 'N' from server when not linked -[203] LOG S-0xd09330: app-db/authenticator@unix:5432 closing because: server conn crashed? (age=3345s) -[203] LOG S-0xd0a4b0: app-db/authenticator@unix:5432 closing because: server conn crashed? (age=2165s) -[203] LOG S-0xd09790: app-db/appUser@unix:5432 closing because: server conn crashed? (age=2766s) -[203] LOG S-0xd0a280: app-db/appUser@unix:5432 closing because: server conn crashed? (age=2676s) -[203] LOG C-0xcfb810: app-db/appUser@127.0.0.1:45186 closing because: server conn crashed? (age=145s) -[203] WARNING C-0xcfb810: app-db/appUser@127.0.0.1:45186 pooler error: server conn crashed? -[231]: db=,user=,app=,client= LOG: database system is shut down -[203] WARNING sbuf_connect failed: No such file or directory -[203] LOG S-0xd0a280: app-db/authenticator@unix:5432 closing because: connect failed (age=0s) -[203] LOG C-0xcfb5e0: app-db/(nouser)@127.0.0.1:44198 closing because: pgbouncer cannot connect to server (age=0s) -[203] WARNING C-0xcfb5e0: app-db/(nouser)@127.0.0.1:44198 pooler error: pgbouncer cannot connect to server -[203] LOG C-0xcfb5e0: app-db/(nouser)@127.0.0.1:44200 closing because: pgbouncer cannot connect to server (age=0s) -[203] WARNING C-0xcfb5e0: app-db/(nouser)@127.0.0.1:44200 pooler error: pgbouncer cannot connect to server -[203] LOG C-0xcfb5e0: app-db/(nouser)@127.0.0.1:44338 closing because: pgbouncer cannot connect to server (age=0s) -[203] WARNING C-0xcfb5e0: app-db/(nouser)@127.0.0.1:44338 pooler error: pgbouncer cannot connect to server -[203] LOG C-0xcfc530: app-db/(nouser)@127.0.0.1:44336 closing because: pgbouncer cannot connect to server (age=0s) -[203] WARNING C-0xcfc530: app-db/(nouser)@127.0.0.1:44336 pooler error: pgbouncer cannot connect to server -[203] LOG C-0xcfc530: app-db/(nouser)@127.0.0.1:44340 closing because: pgbouncer cannot connect to server (age=0s) -[203] WARNING C-0xcfc530: app-db/(nouser)@127.0.0.1:44340 pooler error: pgbouncer cannot connect to server -INFO: demoted self because failed to update leader lock in DCS -WARNING: Loop time exceeded, rescheduling immediately. 
-INFO: closed patroni connection to the postgresql cluster -INFO: Lock owner: app-pgsql-2; I am app-pgsql-2 -INFO: updated leader lock during starting after demotion -INFO: postmaster pid=1177595 -[1177595]: db=,user=,app=,client= FATAL: data directory ""/var/lib/postgresql/data"" has invalid permissions -[1177595]: db=,user=,app=,client= DETAIL: Permissions should be u=rwx (0700) or u=rwx,g=rx (0750). -/var/run/postgresql:5432 - no response -[203] LOG C-0xcfc530: app-db/(nouser)@127.0.0.1:44360 closing because: pgbouncer cannot connect to server (age=0s) -[203] WARNING C-0xcfc530: app-db/(nouser)@127.0.0.1:44360 pooler error: pgbouncer cannot connect to server -ERROR: postmaster is not running -INFO [io.st.co.co.PersistentVolumeSizeReconciliator] (Cluster-ReconciliationCycle) Reconciling persistent volume claim sizes -INFO [io.st.co.po.PostgresBootstrapReconciliator] (Cluster-ReconciliationCycle) Cluster bootstrap completed -INFO [io.st.co.po.PostgresBootstrapReconciliator] (Cluster-ReconciliationCycle) Setting cluster arch x86_64 and os linux -INFO [io.st.co.ex.ExtensionReconciliator] (Cluster-ReconciliationCycle) Reconcile postgres extensions... -INFO [io.st.co.ex.ExtensionReconciliator] (Cluster-ReconciliationCycle) Reconciliation of postgres extensions completed -ERROR [io.st.cl.co.PgBouncerReconciliator] (Cluster-ReconciliationCycle) An error occurred while updating pgbouncer auth_file: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections. -... -Caused by: java.net.ConnectException: Connection refused -``` - -after that we get an infinite loop with the following message - -``` -/var/run/postgresql:5432 - no response -WARNING: Postgresql is not running. 
-INFO: Lock owner: app-pgsql-2; I am app-pgsql-2 -INFO: pg_controldata: - pg_control version number: 1300 - Catalog version number: 202107181 - Database system identifier: 7059687927088955601 - Database cluster state: in production - pg_control last modified: Mon Oct 10 21:30:26 2022 - Latest checkpoint location: 2/C815AD20 - Latest checkpoint's REDO location: 2/C815ACB8 - Latest checkpoint's REDO WAL file: 000000CB00000002000000C8 - Latest checkpoint's TimeLineID: 203 - Latest checkpoint's PrevTimeLineID: 203 - Latest checkpoint's full_page_writes: on - Latest checkpoint's NextXID: 0:1034458 - Latest checkpoint's NextOID: 90416 - Latest checkpoint's NextMultiXactId: 2720 - Latest checkpoint's NextMultiOffset: 5439 - Latest checkpoint's oldestXID: 727 - Latest checkpoint's oldestXID's DB: 1 - Latest checkpoint's oldestActiveXID: 1034458 - Latest checkpoint's oldestMultiXid: 1 - Latest checkpoint's oldestMulti's DB: 1 - Latest checkpoint's oldestCommitTsXid: 504930 - Latest checkpoint's newestCommitTsXid: 1034457 - Time of latest checkpoint: Mon Oct 10 21:30:22 2022 - Fake LSN counter for unlogged rels: 0/3E8 - Minimum recovery ending location: 0/0 - Min recovery ending loc's timeline: 0 - Backup start location: 0/0 - Backup end location: 0/0 - End-of-backup record required: no - wal_level setting: logical - wal_log_hints setting: on - max_connections setting: 100 - max_worker_processes setting: 8 - max_wal_senders setting: 20 - max_prepared_xacts setting: 32 - max_locks_per_xact setting: 128 - track_commit_timestamp setting: on - Maximum data alignment: 8 - Database block size: 8192 - Blocks per segment of large relation: 131072 - WAL block size: 8192 - Bytes per WAL segment: 16777216 - Maximum length of identifiers: 64 - Maximum columns in an index: 32 - Maximum size of a TOAST chunk: 1996 - Size of a large-object chunk: 2048 - Date/time type storage: 64-bit integers - Float8 argument passing: by value - Data page checksum version: 1 - Mock authentication nonce: 3862e35396b451e79a4fd0cf1af11a55f6c4b485aea57187864790b6e50db529 -INFO: doing crash recovery in a single user mode -ERROR: Crash recovery finished with code=1 -INFO: stdout= -INFO: stderr=2022-10-10 21:39:03 UTC [1177659]: db=,user=,app=,client= FATAL: data directory ""/var/lib/postgresql/data"" has invalid permissions -[1177659]: db=,user=,app=,client= DETAIL: Permissions should be u=rwx (0700) or u=rwx,g=rx (0750). -INFO [io.st.co.po.PostgresBootstrapReconciliator] (Cluster-ReconciliationCycle) Cluster bootstrap completed -INFO [io.st.co.po.PostgresBootstrapReconciliator] (Cluster-ReconciliationCycle) Setting cluster arch x86_64 and os linux -INFO [io.st.co.ex.ExtensionReconciliator] (Cluster-ReconciliationCycle) Reconcile postgres extensions... -INFO [io.st.co.ex.ExtensionReconciliator] (Cluster-ReconciliationCycle) Reconciliation of postgres extensions completed -ERROR [io.st.cl.co.PgBouncerReconciliator] (Cluster-ReconciliationCycle) An error occurred while updating pgbouncer auth_file: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections. -``` - -In the meantime, the other pods have also reported simliar problems with the API. 
- -**app-pgsql-0** (replica) -``` -ERROR [io.st.cl.co.ClusterControllerReconciliationCycle] (Cluster-ReconciliationCycle) 83535| Cluster reconciliation cycle failed reconciling development.app-pgsql: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://10.240.16.1/api/v1/namespaces/development/configmaps?labelSelector=stackgres.io%2Fcluster-name%3Dapp-pgsql%2Capp%3DStackGresCluster. Message: rpc error: code = Unavailable desc = error reading from server: read tcp 10.244.111.77:58512->10.244.109.41:2379: read: connection timed out. Received status: Status(apiVersion=v1, code=500, details=null, kind=Status, message=rpc error: code = Unavailable desc = error reading from server: read tcp 10.244.111.77:58512->10.244.109.41:2379: read: connection timed out, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=null, status=Failure, additionalProperties={}). -``` - - -**app-pgsql-1** (replica) -``` -ERROR [io.st.cl.co.PgBouncerReconciliator] (Cluster-ReconciliationCycle) An error occurred while updating pgbouncer auth_file: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [SGPoolingConfig] with name: [generated-from-default-1659596625556] in namespace: [development] failed. -... -Caused by: java.net.SocketTimeoutException: timeout -``` -Thank you and best regards, -Ricardo",2 -116575333,2022-10-10 11:50:00.456,Babelfish compass throws error on the web console,"### Summary - -When using the Babelfish Compass feature on the web console, the request made to the REST API is missing a trailing slash, which causes the request to return a `404 Not Found` code. - -#### Steps to reproduce - -- Enter the web console -- Select any namespace -- On the sidebar, click on `Applications > babelfish-compass` -- Enter any report name + file -- Click on SEND - -The request will return an error and a `404 not found` code can be seen on the browser's console. - - -### Expected Behaviour - -Requests made to babelfish-compass should point to the correct endpoint `/applications/com.ongres/babelfish-compass` - -### Possible Solution - -Include the trailing slash at the beginning of the endpoint request. - - - -### Environment - -- StackGres version: `1.3.1`",2 -116445244,2022-10-07 07:54:16.912,Containers others than patroni should only set requests resource requirements,"Implementation of #144 introduced a lot of problems due to default resource limits being enforced to be too low for the correct functioning of the StackGres ecosystem. In particular container where OOMKilled for SGBackups, SGDbOps and other sidecars. - -To make this change compatible we will also include boolean field `.spec.pods.resources.enableClusterLimitsRequirements` that will allow to enforce resource limits for containers other than patroni as specified in the reference SGInstanceProfile of the SGCluster. - -## Acceptance Criteria - -* [ ] Implement the change -* [ ] Test the change -* [ ] Documentation",2 -116443632,2022-10-07 07:37:48.983,"Old SGBackup, SGDbOps, SGScript and SGDistributedLogs throws exception about invalid version","### Summary - -Some StackGres custom resources throw an exception when the operator is upgraded and do not support old version any more. 
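For illustration, a minimal sketch of a resource in this state, assuming the unsupported version is carried by the `stackgres.io/operatorVersion` annotation (the same annotation visible on resources quoted in other reports here); the resource name and namespace are hypothetical:

```yaml
# Hypothetical SGBackup left over from an old operator install; the annotation
# value is the '1.0.0' string the upgraded operator fails to parse.
apiVersion: stackgres.io/v1
kind: SGBackup
metadata:
  name: test
  namespace: default
  annotations:
    stackgres.io/operatorVersion: '1.0.0'
spec:
  sgCluster: test
  managedLifecycle: true
```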
- -### Current Behaviour - -Whenever an SGBackup, SGDbOps, SGScript, or SGDistributedLogs is annotated with an unsupported StackGres version, the operator shows the following error stack trace: - -``` -2022-10-06 08:39:19,098 ERROR [io.st.op.conciliation] (SGBackup-ReconciliationLoop) Reconciliation of SGBackup default/test failed: java.lang.IllegalArgumentException: Invalid version 1.0.0 -at io.stackgres.common.StackGresVersion.lambda$ofVersion$1(StackGresVersion.java:90) -... -``` - -And the resource is not reconciled. - -#### Steps to reproduce - -1. Install StackGres Operator 1.0.0 -2. Install MinIO -3. Create an SGBackupConfig targeting MinIO -4. Create an SGCluster with backups referencing the created SGBackupConfig -5. Create an SGBackup -6. Upgrade to StackGres Operator 1.2.0 -7. Perform an SGDbOps security upgrade of the SGCluster -8. Upgrade to StackGres Operator 1.3.1 - -### Expected Behaviour - -The SGBackup reconciliation cycle should not throw any exception and reconcile it correctly. - -### Possible Solution - -Reconcile old versions of some StackGres custom resources. - -### Environment - -- StackGres version: 1.3.1 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ?",2 -116207115,2022-10-03 18:26:28.491,Support podAffinity and podAntiAffinity for SGCluster,"### Problem to solve - -Support [`podAffinity` and `podAntiAffinity`](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) in the `SGCluster` custom resource. - -### Proposal - -Proposed section to map to the `.spec.template.spec.affinity.podAffinity` and `.spec.template.spec.affinity.podAntiAffinity` sections of the generated `StatefulSet`: - -```yaml -apiVersion: stackgres.io/v1 -kind: SGCluster -spec: - pods: - scheduling: - podAffinity: [ ] # the same as https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#podaffinity-v1-core - podAntiAffinity: [ ] # the same as https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#podantiaffinity-v1-core -``` - -## Acceptance Criteria - -* [ ] Implement the change in the SGCluster CRD -* [ ] Implement the change in the REST API -* [ ] Tests -* [ ] Documentation - -### Links / references - -* https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#podaffinity-v1-core -* https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#podantiaffinity-v1-core",8 -116192284,2022-10-03 13:59:18.631,Increase memory limit for backup.create-backup container to 256Mi,We saw that the current minimum size of 64Mi for the backup.create-backup container memory limit is not enough in some real-life usage scenarios. We should increase the default memory limit to 256Mi for backup.create-backup in order to prevent OOM in more cases. 
This does not mean that users will never have to increase it in some edge cases.,2 -116191066,2022-10-03 13:42:47.576,Missing field .spec.pods.managementPolicy in REST API for sgcluster,"The field `.spec.pods.managementPolicy` is missing in the REST API endpoints related to sgcluster - -## Implementation plan - -Add the field to the DTO class `io.stackgres.apiweb.dto.cluster.ClusterPod` - -## Acceptance Criteria - -* [ ] Implement the change -* [ ] Test the change",4 -116057676,2022-09-30 08:15:28.766,Support babelfish 2.1.1,"The goal is to accept babelfish version 2.1.1 in the StackGres 1.4 release - - -**Acceptance criteria:** -- [ ] Generate the image/packages -- [ ] Adapt the UI -- [ ] Add StackGres support -- [ ] Test the new version of babelfish",2 -116055341,2022-09-30 07:38:03.683,Support PostgreSQL v15.0,"The goal of this issue is to support PostgreSQL v15.0 in StackGres - -Released on October 13: https://www.postgresql.org/about/news/postgresql-15-released-2526/ - -Release notes: https://www.postgresql.org/docs/15/release-15.html - - -**Acceptance criteria:** - -- [ ] Test and add the support for PG 15",2 -116036047,2022-09-29 20:40:54.852,The fields in section .spec.nonProductionOptions are reset by the Web Console,"### Summary - -After updating (without changing anything) an SGCluster custom resource, the fields in section `.spec.nonProductionOptions` other than `.spec.nonProductionOptions.disableClusterPodAntiAffinity` are removed by the Web Console - -### Current Behaviour - -If we have a `.spec.nonProductionOptions` like the following: - -```yaml -disableClusterPodAntiAffinity : true -disableClusterResourceRequirements: true -disablePatorniResourceRequirements: false -``` - -After an update without changing any field, the section is updated as follows: - -```yaml -disableClusterResourceRequirements: true -disablePatorniResourceRequirements: false -``` - -#### Steps to reproduce - -1. Create an SGCluster setting `.spec.nonProductionOptions.disableClusterResourceRequirements` to `true` -2. Update the SGCluster without changing any field from the Web Console - -### Expected Behaviour - -After an update without changing any field, the section `.spec.nonProductionOptions` is not changed. - -### Environment - -- StackGres version: 1.3.1 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ?",16 -115826714,2022-09-27 07:29:41.263,Increase memory limit for prometheus-postgres-exporter container to 256Mi,"We saw that the current minimum size of 64Mi for the prometheus-postgres-exporter container memory limit is not enough in some real-life usage scenarios. The amount of memory measured for this container reached a maximum of 107Mi. We should increase the default memory limit to 256Mi for prometheus-postgres-exporter in order to prevent OOM in more cases. This does not mean that users will never have to increase it in some edge cases. - -## Implementation plan - -Set the memory limit to 256Mi for the prometheus-postgres-exporter container in class `io.stackgres.common.StackGresContainer`. - -## Acceptance Criteria - -* [ ] Implement the solution -* [ ] Test the implementation",2 -115111640,2022-09-15 09:26:23.303,Make major version upgrade check avoid applying any change and revert the cluster back to previous version,"The check flag is not really useful since a user may want to know, before performing the actual major version upgrade, whether the preliminary checks will pass. 
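For reference, a check-only run of the operation would look roughly like the sketch below; the `check` flag is the one discussed in this issue, while the cluster name and target version are hypothetical:

```yaml
# Hypothetical SGDbOps meant to run only the preliminary checks of a major
# version upgrade and leave the cluster unchanged afterwards.
apiVersion: stackgres.io/v1
kind: SGDbOps
metadata:
  name: check-major-version-upgrade
spec:
  sgCluster: my-cluster        # hypothetical cluster name
  op: majorVersionUpgrade
  majorVersionUpgrade:
    postgresVersion: '14.4'    # hypothetical target version
    check: true                # run checks only; no change should be applied
```

Under the change requested here, letting such an operation complete would leave the cluster on its original Postgres version.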
- -Make the major version upgrade check avoid applying any change and revert the cluster back to the previous version after the operation completes. - -## Implementation plan - -Use the same rollback operation implemented in #1983 to always roll back the operation after the check is successful. - -## Acceptance Criteria - -* [ ] Implement the feature -* [ ] Tests",4 -114925741,2022-09-13 08:19:07.570,Improve major version upgrade copy missing files with cp,"The current mechanism to copy missing files from the old lib64 folder in order to make them available to the `pg_upgrade` command during the major version upgrade SGDbOps operation is slow since it uses a custom method with recursive shell functions. Try to replace it with a more performant method. - -## Implementation plan - -The command `cp -auv` can be used as a replacement for this operation. - -## Acceptance Criteria - -* [ ] Refactor the copy of missing files method -* [ ] Pass the current dbops-major-version-upgrade* E2E tests",2 -114876953,2022-09-12 15:38:57.873,Kubernetes Client throws UnrecognizedPropertyException after upgrading the operator when a new field is added to any CRD,"Kubernetes Client throws `UnrecognizedPropertyException` after upgrading the operator when a new field is added to any CRD. This happens whenever the field is filled in any CR, making the old version of the local pod controller fail. - -## Implementation plan - -Make the Kubernetes Client use the `ObjectMapper` configured by Quarkus - -## Acceptance Criteria - -* [ ] Add a specific check in the dbops-security-upgrade E2E test to make sure no `UnrecognizedPropertyException` errors appear after an upgrade -* [ ] Solve the issue in the code",4 -114753352,2022-09-09 16:51:49.680,Support Kubernetes 1.25,"Release info: -* https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/ - -Updated Kind 0.15.0 -* https://github.com/kubernetes-sigs/kind/releases/tag/v0.15.0 - -**Acceptance criteria:** -- [ ] Update Kind to 0.15.0 -- [ ] Update Helm chart -- [ ] Test K8s 1.25",1 -114727241,2022-09-09 10:25:33.455,Hide Parameters title when that section is empty on SGPostgresConfig details,"### Summary -SGPostgresConfig Details views have two sections to show parameters: `Parameters` and `Default Parameters`. SGPostgresConfigs generated by default only have default parameters, and on their Details view the section for custom parameters appears empty (only the title is visible). - -#### Steps to reproduce -1. Go to the Details view of any SGPostgresConfig generated by default. -2. The title `Parameters` appears but the section is empty. - -### Expected Behaviour -Titles for empty sections should not appear. - -### Environment -- StackGres version: 1\.3.0 - -### Relevant logs and/or screenshots -![Screenshot_2022-09-09_at_12.14.46](/uploads/dbf73a3587467cff968633119512d428/Screenshot_2022-09-09_at_12.14.46.png)",4 -114669719,2022-09-08 14:22:28.500,PITR time slots are not set properly on the Web Console,"### Summary -Cluster data initialization uses backups ordered alphabetically to set up PITR time slots, no matter when a backup was actually stored. - -#### Steps to reproduce -1. Create a new SGBackup and give it a name starting with `z`. -2. Wait for the first backup to complete and then create a new SGBackup and give it a name starting with `a`. -3. Go to Create new SGCluster. -4. 
On the Initialization tab, on the Backups section, note that the Backups are ordered alphabetically, which means the backup starting with `a` will appear first even though it was created later than backup starting with `z`. -5. Select the backup starting with `a`. -6. The PITR datepicker will allow you to select a time between the backup `a` and the **next** backup, no matter their chronological order. -7. Now select the backup starting with `z`. -8. Since that backup is the last of the list, there is no other backup after it, and the PITR select won't work. - -### Expected Behaviour -PITR datepicker should use the last backup based on time, not on alphabetical order. - -### Environment -- StackGres version:1\.3.0",8 -114520557,2022-09-06 15:40:37.207,Postgres Version not shown on SGPostgresConfig details,"### Summary -When browsing the details of a SGPostgresConfig, the Postgres Version is not shown. - -#### Steps to reproduce -1. Go to the Details view of any SGPostgresConfig. -2. The Postgres Version is not included on any table. - -### Environment - -- StackGres version:1\.3.0",4 -114519385,2022-09-06 15:21:27.761,Disable Delete option on SGInstanceProfiles and SGPostgresConfigs in use,"### Summary -The User is allowed to delete SGInstanceProfiles and SGPostgresConfigs even if those are already in use by a SGDistributedLogs. - -### Expected Behaviour -Delete option should be disabled on SGInstanceProfiles and SGPostgresConfigs that are being used. - -### Environment -- StackGres version: 1\.3.0",8 -114495412,2022-09-06 09:55:08.606,Cluster Config tab wont load because of Script without scriptSpec,"### Summary -Some scripts may not have `scriptSpec`. Cluster Config tab fails to load because it is expected that `scriptSpec` always exists. - -### Possible Solution -Validate whether Script has `scriptSpec` on Cluster Config tab. - -### Environment -- StackGres version: 1\.3.0",4 -114490272,2022-09-06 08:44:29.451,Event of missing SGBackupConfig is sent even if backup is working as expected,"### Summary - -Warning event that say `Missing SGBackupConfig for cluster .` is sent even if backup is working as expected. - -### Current Behaviour - -A warning event is sent when SGBackupConfig is not used. - -#### Steps to reproduce - -1. Create MinIO deployment -2. Create an SGObjectStorage targeting MinIO -3. Create an SGCluster with backups configuration targeting the created SGObjectStorage - -### Expected Behaviour - -No warning event about missing SGBackupConfig is sent. - -### Possible Solution - -Change the logic so that a warning event is sent about missing SGBackupConfig only if SGBackupConfig is in use or a warning event is sent about missing SGObjectStorage only if SGObjectStorage is in use. - -### Environment - -- StackGres version: 1.3.0 -- Kubernetes version: ? -- Cloud provider or hardware configuration: ?",2 -114459782,2022-09-05 16:44:34.149,"Cluster form title is ""CREATE CLUSTER"" even when editing","### Summary - -When editing a cluster from the web console, the form's title remains ""CREATE CLUSTER"", which might be misleading. 
- -![image](/uploads/d93dc427cc1b42935e9a9eb07d86aa12/image.png) - - -#### Steps to reproduce - -- Enter the web console -- Select any cluster -- Click on EDIT -- The form title will be ""CREATE CLUSTER"" - - -### Expected Behaviour - -When on edit mode, the form title should be ""EDIT CLUSTER"" - - -### Environment - -- StackGres version: `1.3.0`",4 -114384433,2022-09-03 15:24:44.765,UI backup scheduling inserting trailing zero in cron job minutes config," -### Summary - -When updating a backup schedule in the UI, an extra zero is added to the minutes component of the cron string causing it to be invalid - -### Current Behaviour - -Extra 0 added to minutes component, so trying to schedule for daily 15:59 produces ""590 15 * * *"" - -#### Steps to reproduce - -1. Update minutes in backup config schedule to >=10,<=59 -2. `kubectl get -o yaml ` -3. Observe the incorrect config - -### Expected Behaviour - -Schedule is added as entered - -### Possible Solution - -The field is not being properly synchronized and the default value is being appended to the entered value - -### Environment - -- StackGres version: 1.3.0 - -- Kubernetes version: 1.24.4 - -- Cloud provider or hardware configuration: - - -### Relevant logs and/or screenshots - -",8 -114384206,2022-09-03 15:16:46.585,Backups failing with new cluster/objectstorage," -### Summary - - -After upgrading to StackGres 1.3.0, I backed up and recreated my cluster because of other issues I encountered. Along with this, I created a new Object Storage configuration. This appears to be due to the backup job expecting a SGBackupConfig to exist when none does as I created this object storage config after upgrading to 1.3.0 - -### Current Behaviour - -Backup is stuck in pending - -#### Steps to reproduce - -1. Create a new ObjectStorage -2. Create a new cluster utilizing the new ObjectStorage -3. 
Create a new backup - -### Expected Behaviour - -Backup is successful - -### Possible Solution - - - -Update items still looking for legacy SGBackupConfig - -### Environment - -- StackGres version: 1.3.0 - -- Kubernetes version: -Client Version: v1.25.0 -Kustomize Version: v4.5.7 -Server Version: v1.24.4+k3s1 -- Cloud provider or hardware configuration: -Assorted x86 bare metal and VMs - -### Relevant logs and/or screenshots - -``` -➜ ~ kubectl describe sgbackup bk2022-09-03-15-1-7 -Name: bk2022-09-03-15-1-7 -Namespace: default -Labels: -Annotations: stackgres.io/operatorVersion: 1.3.0 -API Version: stackgres.io/v1 -Kind: SGBackup -Metadata: - Creation Timestamp: 2022-09-03T15:01:12Z - Generation: 2 - Managed Fields: - API Version: stackgres.io/v1 - Fields Type: FieldsV1 - fieldsV1: - f:spec: - .: - f:managedLifecycle: - f:sgCluster: - f:status: - .: - f:process: - .: - f:status: - Manager: okhttp - Operation: Update - Time: 2022-09-03T15:01:17Z - Resource Version: 203653145 - UID: e5dbe4ad-0a2a-468f-9163-d80d2cd7b336 -Spec: - Managed Lifecycle: true - Sg Cluster: postgresql -Status: - Process: - Status: Pending -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal BackupCreated 3m21s stackgres-operator Backup default.bk2022-09-03-15-1-7 created: +Job:bk2022-09-03-15-1-7-backup - Warning BackupConfigFailed 1s (x21 over 3m21s) stackgres-operator Missing SGBackupConfig for cluster default.postgresql - Normal BackupUpdated 1s (x20 over 3m11s) stackgres-operator Backup default.bk2022-09-03-15-1-7 updated: Job:bk2022-09-03-15-1-7-backup (+/spec/completionMode -> NonIndexed), Job:bk2022-09-03-15-1-7-backup (+/spec/suspend -> false) -``` - -```➜ ~ kubectl logs -f bk2022-09-03-15-1-7-backup-mbp7m -Lock acquired -Updating backup CR -/usr/local/bin/create-backup.sh: line 287: 2666 Killed kubectl get ""$BACKUP_CRD_NAME"" -n ""$CLUSTER_NAMESPACE"" ""$BACKUP_NAME"" -o yaml -error: must specify --patch or --patch-file containing the contents of the patch -Lock released -cat: /tmp/backup-push: No such file or directory -Backup failed -```",2 -114351077,2022-09-02 13:24:53.056,Backup EDIT button points to undefined resource,"### Summary - -When hitting the EDIT button on a Backup view, the web console redirects users to an _undefined_ resource adress. - -![image](/uploads/0c778a43ae220df05974fa23c8bbef5d/image.png) - - -#### Steps to reproduce - -- Enter the web console -- Click on any backup to view its details -- Click on the EDIT button on the top right corner - -### Expected Behaviour - -EDIT button should take users to the backup edition section - - -### Environment - -- StackGres version: `1.3.0`",4 -114209415,2022-08-31 09:25:16.362,"Failed to upgrade to 1.3.0 with error ""admission webhook sgscripts.stackgres.stackgres denied the request: SGScript has invalid properties.""","### Summary - -When upgrading from v1.2.1 to v1.3.0 using the helm chart, the stackgres-cr-updater job fails with the following error: `admission webhook sgscripts.stackgres.stackgres denied the request: SGScript has invalid properties`. - -I also attached the SGCluster resource below. 
- -### Environment - -- StackGres version: v1.3.0 (upgraded from v1.2.1; also tested from v1.1.0) -- Kubernetes version: v1.21.9 -- Cloud provider or hardware configuration: Kubernetes cluster installed with Rancher and using RancherOS - -### Relevant logs and/or screenshots - -stackgres-cr-updater job logs: - -``` -2022-08-31 09:09:07,021 INFO [io.quarkus] (main) stackgres-jobs 1.3.0 native (powered by Quarkus 2.11.2.Final) started in 0.077s. Listening on: http://0.0.0.0:8080 -2022-08-31 09:09:07,026 INFO [io.quarkus] (main) Profile prod activated. -2022-08-31 09:09:07,026 INFO [io.quarkus] (main) Installed features: [cdi, hibernate-validator, kubernetes-client, smallrye-context-propagation, vertx] -2022-08-31 09:09:07,235 INFO [io.st.jo.cr.CrUpdaterImpl] (main) Patching existing custom resources to apply defaults for CRD SGBackup -2022-08-31 09:09:07,411 INFO [io.st.jo.cr.CrUpdaterImpl] (main) Existing custom resources for CRD SGBackup. Patched -2022-08-31 09:09:07,411 INFO [io.st.jo.cr.CrUpdaterImpl] (main) Patching existing custom resources to apply defaults for CRD SGBackupConfig -2022-08-31 09:09:07,595 INFO [io.st.jo.cr.CrUpdaterImpl] (main) Existing custom resources for CRD SGBackupConfig. Patched -2022-08-31 09:09:07,595 INFO [io.st.jo.cr.CrUpdaterImpl] (main) Patching existing custom resources to apply defaults for CRD SGCluster -2022-08-31 09:09:07,750 ERROR [io.qu.ru.Application] (main) Failed to start application (with profile prod): io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PUT at: https://10.43.0.1/apis/stackgres.io/v1/namespaces/bar-304-k8sapps-stag-r1az-zbgais/sgclusters/bar-foo-bar-pg. Message: admission webhook ""sgcluster.stackgres.stackgres"" denied the request: Failure executing: POST at: https://10.43.0.1/apis/stackgres.io/v1/namespaces/bar-304-k8sapps-stag-r1az-zbgais/sgscripts. Message: admission webhook ""sgscripts.stackgres.stackgres"" denied the request: SGScript has invalid properties. secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required.. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.scripts[0].scriptFrom.configMapKeyRef, message=secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required., reason=javax.validation.constraints.AssertTrue, additionalProperties={}), StatusCause(field=spec.scripts[0].scriptFrom.secretKeyRef, message=secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required., reason=javax.validation.constraints.AssertTrue, additionalProperties={})], group=null, kind=null, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=admission webhook ""sgscripts.stackgres.stackgres"" denied the request: SGScript has invalid properties. secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required., metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=https://stackgres.io/doc/1.3/api/responses/error#constraint-violation, status=Failure, additionalProperties={}).. Received status: Status(apiVersion=v1, code=500, details=null, kind=Status, message=admission webhook ""sgcluster.stackgres.stackgres"" denied the request: Failure executing: POST at: https://10.43.0.1/apis/stackgres.io/v1/namespaces/bar-304-k8sapps-stag-r1az-zbgais/sgscripts. Message: admission webhook ""sgscripts.stackgres.stackgres"" denied the request: SGScript has invalid properties. 
secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required.. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.scripts[0].scriptFrom.configMapKeyRef, message=secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required., reason=javax.validation.constraints.AssertTrue, additionalProperties={}), StatusCause(field=spec.scripts[0].scriptFrom.secretKeyRef, message=secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required., reason=javax.validation.constraints.AssertTrue, additionalProperties={})], group=null, kind=null, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=admission webhook ""sgscripts.stackgres.stackgres"" denied the request: SGScript has invalid properties. secretKeyRef and configMapKeyRef are mutually exclusive and one of them is required., metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=https://stackgres.io/doc/1.3/api/responses/error#constraint-violation, status=Failure, additionalProperties={})., metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=null, status=Failure, additionalProperties={}). - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:684) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:664) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:615) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:558) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:521) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:345) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:325) - at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleUpdate(BaseOperation.java:649) - at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.lambda$replace$1(HasMetadataOperation.java:195) - at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:200) - at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:141) - at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:43) - at io.stackgres.jobs.crdupgrade.CrdLoaderImpl.lambda$updateExistingCustomResources$1(CrdLoaderImpl.java:93) - at io.stackgres.common.kubernetesclient.KubernetesClientUtil.lambda$retryOnConflict$0(KubernetesClientUtil.java:29) - at io.stackgres.common.kubernetesclient.KubernetesClientUtil.retryOnConflict(KubernetesClientUtil.java:41) - at io.stackgres.common.kubernetesclient.KubernetesClientUtil.retryOnConflict(KubernetesClientUtil.java:28) - at io.stackgres.jobs.crdupgrade.CrdLoaderImpl.lambda$updateExistingCustomResources$2(CrdLoaderImpl.java:83) - at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) - at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:762) - at io.stackgres.jobs.crdupgrade.CrdLoaderImpl.updateExistingCustomResources(CrdLoaderImpl.java:82) - at io.stackgres.jobs.crdupgrade.CrUpdaterImpl.lambda$updateExistingCustomResources$0(CrUpdaterImpl.java:28) - at 
java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:720) - at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:762) - at io.stackgres.jobs.crdupgrade.CrUpdaterImpl.updateExistingCustomResources(CrUpdaterImpl.java:25) - at io.stackgres.jobs.Main.run(Main.java:77) - at io.stackgres.jobs.Main_ClientProxy.run(Unknown Source) - at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:129) - at io.quarkus.runtime.Quarkus.run(Quarkus.java:67) - at io.quarkus.runtime.Quarkus.run(Quarkus.java:41) - at io.quarkus.runner.GeneratedMain.main(Unknown Source) - -2022-08-31 09:09:07,753 INFO [io.st.co.ku.KubernetesClientProducer] (main) Closing instance of StackGresKubernetesClient -2022-08-31 09:09:07,759 INFO [io.quarkus] (main) stackgres-jobs stopped in 0.008s -``` - -SGCluster resource (as retrieved after the failed upgrade) : - -``` -apiVersion: stackgres.io/v1 -kind: SGCluster -metadata: - annotations: - meta.helm.sh/release-name: bar - meta.helm.sh/release-namespace: bar-304-k8sapps-stag-r1az-zbgais - stackgres.io/operatorVersion: 1.1.0 - creationTimestamp: ""2022-03-10T12:37:59Z"" - generation: 3 - labels: - app.kubernetes.io/managed-by: Helm - name: bar-foo-bar-pg - namespace: bar-304-k8sapps-stag-r1az-zbgais - resourceVersion: ""102373195"" - uid: 898fa701-1c72-4494-8be1-f4e9f3fc8bae -spec: - configurations: - sgPoolingConfig: generated-from-default-1646915879796 - sgPostgresConfig: postgres-13-generated-from-default-1646915879718 - initialData: - scripts: - - name: create-users - scriptFrom: - secretKeyRef: - key: INIT_CREATE_USERS - name: bar-foo-bar-pg - - name: create-db - script: CREATE DATABASE app OWNER app - - name: grant-user-db - script: GRANT ALL PRIVILEGES ON DATABASE app TO graphql; - instances: 2 - pods: - persistentVolume: - size: 1Gi - storageClass: openebs-zfs - postgres: - extensions: [] - flavor: vanilla - ssl: - certificateSecretKeySelector: - key: tls.crt - name: bar-foo-bar-selfsigned-pg - enabled: true - privateKeySecretKeySelector: - key: tls.key - name: bar-foo-bar-selfsigned-pg - version: ""13.5"" - postgresServices: - primary: - enabled: true - type: ClusterIP - replicas: - enabled: true - type: ClusterIP - replication: - mode: async - role: ha-read - sgInstanceProfile: bar-foo-bar-pg - toInstallPostgresExtensions: [] -status: - arch: x86_64 - conditions: - - lastTransitionTime: ""2022-03-10T12:38:03.198048Z"" - reason: FalseFailed - status: ""False"" - type: Failed - - lastTransitionTime: ""2022-08-19T11:39:43.822677Z"" - reason: ClusterRequiresUpgrade - status: ""True"" - type: PendingUpgrade - - lastTransitionTime: ""2022-08-31T09:02:45.311942Z"" - reason: FalsePendingRestart - status: ""False"" - type: PendingRestart - labelPrefix: """" - os: linux - podStatuses: - - installedPostgresExtensions: [] - name: bar-foo-bar-pg-1 - pendingRestart: false - replicationGroup: 1 - - installedPostgresExtensions: [] - name: bar-foo-bar-pg-0 - pendingRestart: false - replicationGroup: 0 -```",4 -114152433,2022-08-30 12:51:08.395,Wrong message during major/minor upgrade in a cluster with missing security upgrade to apply,"### Summary - -During a StackGres version upgrade if I forgot to apply the security upgrade before apply a major version upgrade or minor version upgrade, the operator should show only the current postgres version supported for old StackGres version, instead of show all versions! 
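For context, the security upgrade that was skipped in this scenario is itself applied through an SGDbOps; a minimal sketch, assuming the usual `op` value and a hypothetical cluster name:

```yaml
# Hypothetical SGDbOps applying the pending security upgrade before attempting
# any major or minor Postgres version upgrade.
apiVersion: stackgres.io/v1
kind: SGDbOps
metadata:
  name: security-upgrade
spec:
  sgCluster: my-cluster   # hypothetical cluster name
  op: securityUpgrade
```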
- - -#### Steps to reproduce - -- Install StackGres 1.1.0 -- Create a simple SGcluster using pg `12.8` -- Upgrade StackGres to version `1.2.1` -- Apply a major version upgrade from pg `12.8` to `13.7` - -Error (All pg versions included for V_1_2_1 are displayed) -``` -Error from server (https://stackgres.io/doc/1.2/api/responses/error#postgres-major-version-mismatch): error when creating ""major.yml"": admission webhook ""sgdbops.stackgres-operator.stackgres"" denied the request: Unsupported postgres version 13.7. Supported postgres versions are: (V_1_0, [14.0, 13.4, 13.3, 13.2, 13.1, 13.0, 12.8, 12.7, 12.6, 12.5, 12.4, 12.3, 12.2, 12.1]), (V_1_1, [14.1, 14.0, 13.5, 13.4, 13.3, 13.2, 13.1, 13.0, 12.9, 12.8, 12.7, 12.6, 12.5, 12.4, 12.3, 12.2, 12.1]), (V_1_2, [14.4, 14.2, 14.1, 14.0, 13.7, 13.6, 13.5, 13.4, 13.3, 13.2, 13.1, 13.0, 12.11, 12.10, 12.9, 12.8, 12.7, 12.6, 12.5, 12.4, 12.3, 12.2, 12.1]) -``` - -### Expected Behaviour - -Only the versions from the current StackGres version(V_1_1_0) -``` -Error from server (https://stackgres.io/doc/1.2/api/responses/error#postgres-major-version-mismatch): error when creating ""major.yml"": admission webhook ""sgdbops.stackgres-operator.stackgres"" denied the request: Unsupported postgres version 13.7. Supported postgres versions are: 14.1, 14.0, 13.5, 13.4, 13.3, 13.2, 13.1, 13.0, 12.9, 12.8, 12.7, 12.6, 12.5, 12.4, 12.3, 12.2, 12.1 -``` - -### Possible Solution - -Filter the version by the current StackGres version (security upgrade not applied yet in this case)",2 -114036312,2022-08-29 15:21:04.197,Rollback to previous version (if possible) when major version upgrade fails,"### Problem to solve - -Whenever a major version upgrade operation fails the user is not able to rollback the operation. Currently the only workaround is to follow the procedure to [""Reconvery PGDATA from existing volume""](https://stackgres.io/doc/latest/runbooks/recover-pgdata-from-existing-volume/) documented in a StackGres runbook. - -### Further details - -When a major version upgrade operation fails, if the user did not used the link or clone options the old data is not lost but is still present in the Persistent Volume. The problem is that the cluster can not be used due to the following reasons: - -* The `.status.dbOps.majorVersionUpgrade` section is used by the operator to know that the major-version-upgrade init container has to be created. The container blocks the cluster from starting -* The `.spec.postgres.version` can not be changed directly by the user but have to be modified by the SGDbOps generated serviceaccount. - -### Proposal - -Make the SGDbOps major version upgrade operation to automatically rollback the operation in case of failure is the clone or link options are not used or only the check has been executed. - -## Testing - -Improve major version upgrade test in order to check the rollback is performed if the operation fails - -## Acceptance Criteria - -* [ ] Implementation -* [ ] Test -* [ ] Documentation - -### Links / references - -https://www.postgresql.org/docs/current/pgupgrade.html",8 -114013409,2022-08-29 10:31:26.554,PITR date and time picker not working in the Web console," -### Summary - -Trying to restore a backup using PITR from the Web console, the date picker is not working, after the automatic refresh the date selected is cleaned. - -![image](/uploads/faec6768713c5006ab2b1bc62e4b029b/image.png) - -Also, the date range used to initialize the datepicker is wrong. 
- -To set the final date on the range, the initialisation process uses the backup immediately next to the chosen one, even though the SGBackups list coming from the API is ordered alphabetically rather than by date. - -#### Steps to reproduce - -1. Select create a new cluster -2. Choose an existing backup -3. Click outside of the datepicker (so the base date is chosen automatically) -4. Wait until the ""refresh icon"" to the right of the topbar spins (new info is loaded from the API) - -### Expected Behaviour - -- Date picker keeps the selected date and time. -- Date range is initialised properly based on the last backup available - - -### Environment - -- StackGres version: 1.3.0 - -- Kubernetes version: 1.21 - -- Cloud provider or hardware configuration: DO",6 -113912525,2022-08-26 11:15:02.464,.spec.managedSql.scripts entry duplicates id when a new entry with id 0 is provided by the user,"### Summary - -`.spec.managedSql.scripts` entry duplicates id when a new entry with id 0 is provided by the user when creating or updating an `SGCluster` - -#### Steps to reproduce - -1. Create an sgcluster -2. Export the sgcluster as YAML and change the name -3. Create the modified copy of the original sgcluster - -### Expected Behaviour - -`.spec.managedSql.scripts` entry should not duplicate any id - -### Environment - -- StackGres version: 1.3.0 -- Kubernetes version: n.p. -- Cloud provider or hardware configuration: n.p.",2 -113878980,2022-08-25 21:01:56.266,Upgrade WAL-G to 2.0.1,"https://github.com/wal-g/wal-g/releases/tag/v2.0.1 - -According to the release notes, this version adds support for Postgres 15. We will need this for supporting this version.",2 -113866517,2022-08-25 15:36:19.111,Node Affinity match set when no inputs have been filled,"### Summary - -When creating a cluster from the web console, if either Fields or Expressions match has been set, the other is automatically added to the parent object, even when none of its inputs have been filled. - -#### Steps to reproduce - -- Enter the web console -- Enter the cluster creation form -- Set a cluster name -- Set values for any Node Affinity match, while leaving the inputs on the sibling match empty -- Click no the ""View Summary"" button -- The Node Affinity section will include the empty match set - -![image](/uploads/2ded4376c24611df360a2ceb2a9732c7/image.png) - - -### Expected Behaviour - -No empty matches should be included on a Node Affinity - - -### Environment - -- StackGres version: `1.3.0`",2 -113776308,2022-08-24 13:08:23.817,SGCluster edit screen and summary assumes backup performance specs always exist,"### Summary - -When loading the summary of a cluster creation/edition, the Summary component assumes that Backup Performance specs are always set, while this might not necessarily be the case, since this spec is not mandatory. 
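-
-For reference, this happens for clusters whose backups are configured without the optional `performance` section. A minimal sketch of such a configuration (resource names are illustrative) could look like:
-
-```yaml
-apiVersion: stackgres.io/v1
-kind: SGCluster
-metadata:
-  name: summary-demo
-spec:
-  instances: 1
-  postgres:
-    version: '14.4'
-  pods:
-    persistentVolume:
-      size: 5Gi
-  configurations:
-    backups:
-    - sgObjectStorage: object-storage-demo
-      cronSchedule: '0 3 * * *'
-      # no performance section set: it is optional, so the form and summary must tolerate its absence
-```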
- - -#### Steps to reproduce - -- Enter the web console -- Select any cluster which doesn't have at least one of the backup performance specs -- Enter the EDIT CLUSTER form -- Click on the ""View Summary"" button - -Nothing will happen and an error like the following will appear on the browser: - -![Captura_de_Pantalla_2022-08-24_a_la_s__15.02.46](/uploads/8ed764e763240f7a53d1336bd91a7384/Captura_de_Pantalla_2022-08-24_a_la_s__15.02.46.png) - - -### Expected Behaviour - -Summaries should always open when every mandatory input has been set - - -### Possible Solution - -Validate if the property exists before requesting its value on the summary - -### Environment - -- StackGres version: `1.3.0`",4 -113760427,2022-08-24 09:17:19.937,Implement Cascade replication with WAL shipping,"The objective of this issue is to enable cascading replication with WAL shipping. - -Additionally, we should implement a method for the promotion of the secondary cluster. - -Acceptance criteria: - -* [ ] Implement the cascading replication with WAL shipping -* [ ] Create tests for the feature -* [ ] Document the feature for the customers -* [ ] Document the interface for the UI team",32 -113760019,2022-08-24 09:09:28.159,Implement Cascade replication from an external instance,"The objective of this issue is to create cascade replication from an external PostgreSQL instance. - -Additionally, we should implement a method for the promotion of the cascade cluster. - -Acceptance criteria: - -* [ ] Implement the cascade replication from an external instance -* [ ] Create tests for the feature -* [ ] Document the feature for the customers -* [ ] Document the interface for the UI team",48 -113758063,2022-08-24 08:34:56.987,Implement the UI for cascade replication from a local sgcluster,"The objective of this issue is to create the necessary requirements for the UI to support cascade replication from a local sgcluster. - -Based on the development of the issue: https://gitlab.com/ongresinc/stackgres/-/issues/1960. - -Please consider having a button to promote the cascade replicated cluster. - -**Acceptance criteria:** -- [x] Implement the UI for cascade replication from a local sgcluster -- [x] Test the implementation by adding a new cascade cluster -- [x] Test the promotion of the cascade cluster",16 -113757721,2022-08-24 08:28:49.567,Implement Cascade replication from a local sgcluster,"The objective of this issue is to create cascade replication from an SGCluster in the same namespace. - -Additionally, we should implement a method for the promotion of the cascade cluster. 
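-
-A purely hypothetical sketch of how such a replica cluster could be declared is shown below; the `replicateFrom` field and its sub-fields are assumptions made for discussion, not an existing API:
-
-```yaml
-apiVersion: stackgres.io/v1
-kind: SGCluster
-metadata:
-  name: cascade-replica
-spec:
-  instances: 2
-  postgres:
-    version: '14.4'
-  pods:
-    persistentVolume:
-      size: 10Gi
-  # hypothetical field: replicate from an existing SGCluster in the same namespace
-  replicateFrom:
-    instance:
-      sgCluster: primary-cluster
-```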
- -Acceptance criteria: - -- [ ] Implement the cascade replication from an SGCluster in the same namespace -- [ ] Create tests for the feature -- [ ] Document the feature for the customers -- [ ] Document the interface for the UI team",16 -113547071,2022-08-19 21:24:40.526,Error editing SGCluster with SGStorageBackup configured,"## Problem Description - -- Using the CLI create an SGCluster using the CRDs below -kubectl apply -``` -cat << 'EOF' | kubectl create -f - -apiVersion: stackgres.io/v1beta1 -kind: SGObjectStorage -metadata: - name: object-storage-demo -spec: - type: s3Compatible - s3Compatible: - bucket: - region: - enablePathStyleAddressing: true - endpoint: - awsCredentials: - secretKeySelectors: - accessKeyId: - key: accessKeyId - name: - secretAccessKey: - key: secretAccessKey - name: ---- -apiVersion: stackgres.io/v1 -kind: SGCluster -metadata: - name: my-cluster-db - namespace: default -spec: - instances: 2 - nonProductionOptions: - disableClusterPodAntiAffinity: true - pods: - persistentVolume: - size: 10Gi - sgInstanceProfile: 'default' - configurations: - backups: - - cronSchedule: '*/5 * * * *' - sgObjectStorage: 'object-storage-demo' - postgres: - version: ""14.4"" - prometheusAutobind: true -EOF -``` -- Connect to Web Console -- Access the list of SGClusters -- Edit the SGCluster my-cluster-db - -ERROR: - -- The details page of the SGCluster will be blank -- Inspecting the page is going to be possible to see the error on the console: -``` -TypeError: Cannot read properties of undefined (reading 'maxNetworkBandwidth') - at s (app.0e699056.js:1:536537) - at Rt.e._render (vue.esm.js:2512:28) - at s.i (vue.esm.js:2947:27) - at e.get (vue.esm.js:4119:33) - at e.run (vue.esm.js:4195:30) - at yn (vue.esm.js:3193:17) - at Array. (vue.esm.js:3826:20) - at zn (vue.esm.js:3748:16) -``` - -## Possible Solution - -The error happens because the SGCluster object doesn't has the property `sgcluster.spec.configurations.backups.performance` fulfilled. This property is ot mandatory - -- The property `sgcluster.spec.configurations.backups.performance` should be ignored is it's null",1 -113492177,2022-08-18 20:42:02.246,DOC: wrong cluster selector info when connecting to the postgres-util container,"## Description - -- Using the doc version `1.3.0-RC1` or `1.2.1`, access the [demo quick start page](https://stackgres.io/doc/1.3/demo/quickstart) -- Install `StackGres 1.3.0-RC1` or any other version -- Following the instruction, in the section `ACCESSING POSTGRES(PSQL)` try to copy/paste and execute the command to connect to the `postgres-util` container - ``` - kubectl exec -ti ""$(kubectl get pod --selector app=StackGresCluster,cluster=true,role=master -o name)"" -c postgres-util -- psql - ``` - -**Problem:** - -``` -error: pod, type/name or --filename must be specified -``` - -**Also, this command is repeated in the section ""CLUSTER MANAGEMENT AND AUTOMATED FAILOVER"" twice!** - -## Probably Solution - -- Change the command to: - ``` - kubectl exec -ti ""$(kubectl get pod --selector app=StackGresCluster,stackgres.io/cluster=true,role=master -o name)"" -c postgres-util -- psql - ``` - -## Acceptance Criteria - -- [ ] As result, the purposed solution should log into postgres using psql -``` -kubectl exec -ti ""$(kubectl get pod --selector app=StackGresCluster,stackgres.io/cluster=true,role=master -o name)"" -c postgres-util -- psql -psql (14.5 OnGres Inc.) -Type ""help"" for help. 
- -postgres=# - -```",1 -113361573,2022-08-16 18:42:30.788,Bump StackGres version components,"## Description - -Bump all StackGres components to the latest versions - -## Acceptance Criteria - -- [ ] Envoy `at least 1.24.0` -- [ ] Fluentbit `at least 1.9.9` -- [ ] FluentD `at least 1.15.2` -- [ ] Quarkus `at least 2.13.3` -- [ ] GraalVM `22.2.0`",4 -113190825,2022-08-12 13:34:19.745,Adjust color of Open in new tab icons,"### Summary -Wrong color on `Open in new tab` icons on Extensions table on Create Cluster form. - -### Environment - -- StackGres version: 1\.3.0-SNAPSHOT - -### Relevant logs and/or screenshots -![Screenshot_2022-08-12_at_15.31.07](/uploads/31c8f78c18520be65f1808875bca89f8/Screenshot_2022-08-12_at_15.31.07.png)",4 -113190684,2022-08-12 13:30:32.644,Adjust layout on DbOps Overview when empty,"### Summary -On the Overview of the DbOps, when there are none, the background of the row with the link to create a new one is not adjusted properly (missing one column). - -Same happens on InstanceProfiles Overview. - -### Environment -- StackGres version: 1\.3.0-SNAPSHOT - -### Relevant logs and/or screenshots -![Screenshot_2022-08-12_at_15.12.37](/uploads/05adcfc1c5ac495d1f6b56ad0c95747e/Screenshot_2022-08-12_at_15.12.37.png)",4 -112946693,2022-08-08 13:53:29.003,"Support PostgreSQL 14.5, 13.8, 12.12","Upcoming minor versions will be released on August 11th, 2022: 14.5, 13.8, 12.12",4 -112690988,2022-08-02 23:46:20.846,update docs about blocklisted parameters,"Current doc [page about `SGPGConfig`](https://stackgres.io/doc/latest/reference/crd/sgpgconfig/) doesn't contain the up-to-date [list of blocked list parameters](https://gitlab.com/ongresinc/stackgres/blob/main/stackgres-k8s/src/operator/src/main/resources/postgresql-blocklist.properties). - -## Acceptance Criteria - -- [ ] Page is updated with the new parameters -- [ ] Page also update the description, saying that the blocked parameters are not allowed and during an update or the creation, the operation is canceled.",1 -112621670,2022-08-01 19:19:13.446,Doc: Review and update SGDBOps CRD References,"## Problem Description - -The documentation of [SGDBOps CRD reference](https://stackgres.io/doc/latest/reference/crd/sgdbops) is outdated and missing a lot of required fields for almost all SGDBOps jobs. - -## Possible Solution -- Review/Add all fields(required and not required) for each type of job - - [benchmark](https://stackgres.io/doc/latest/reference/crd/sgdbops/#benchmark) - - [vacuum](https://stackgres.io/doc/latest/reference/crd/sgdbops/#vacuum) - - [repack](https://stackgres.io/doc/latest/reference/crd/sgdbops/#repack) - - [majorversionupgrade](https://stackgres.io/doc/latest/reference/crd/sgdbops/#major-version-upgrade) - - [minorversionupgrade](https://stackgres.io/doc/latest/reference/crd/sgdbops/#minor-version-upgrade) - - [restart](https://stackgres.io/doc/latest/reference/crd/sgdbops/#restart) - - [securityupgrade](https://stackgres.io/doc/latest/reference/crd/sgdbops/#security-upgrade)",2 -112590707,2022-08-01 10:16:12.196,Highlight Storage Types fields when they cause an error on submit,"### Summary -On the Object Storages form, when no Storage Type has beed selected, there is no clear indication for the user to know what field is causing the error on submit. - -#### Steps to reproduce -1. Go to Create Object Storages form. -2. Set a name. -3. Try to create the configuration. -4. An error appears, but no fields are highlighted. 
-
-### Expected Behaviour
-Storage Types icons should be highlighted in red when they cause an error on submit.
-
-### Environment
-- StackGres version: 1\.3.0-SNAPSHOT",8
-112590100,2022-08-01 10:04:56.620,Benchmark duration fields remain red even if duration is set,"### Summary
-Once a Benchmark duration is invalid or not set, the duration fields remain red even if the User later sets a valid duration.
-
-#### Steps to reproduce
-1. Go to Create DbOps form and select `Benchmark`.
-2. Fill in all mandatory fields but the duration.
-3. Try to create the DbOp. The duration fields will turn red since they are required.
-4. Set a duration. The other duration fields remain red, but shouldn't.
-
-### Expected Behaviour
-When a duration is set, all duration fields should stop being red.
-
-### Environment
-- StackGres version: 1\.3.0-SNAPSHOT
-
-### Relevant logs and/or screenshots
-![Screenshot_2022-08-01_at_11.48.53](/uploads/40b370f3f68bce98564818a51db3002c/Screenshot_2022-08-01_at_11.48.53.png)",8
-112516580,2022-07-29 17:04:30.238,Update Quarkus to 2.11.x.Final,"* https://quarkus.io/blog/quarkus-2-11-1-final-released/
-
-Also, update other dependencies such as commons-configuration2 to fix a potential security issue.",2
-112376079,2022-07-28 16:58:06.097,Postgres Utils missing on Cluster Details,"### Summary
-`Postgres Utils` is missing on Cluster Details (Cluster Config tab).
-
-### Environment
-- StackGres version: 1.3.0-SNAPSHOT",2
-112148141,2022-07-25 12:13:32.635,password compromising in Downloading extensions metadata from log message,"Hello, StackGres Team,
-
-Using Helm we deployed StackGres and configured an internal proxy (--set-string extensions.repositoryUrls[0]=""..."").
-
-We use AD proxy authentication, and in stackgres-restapi we see messages like the following, which expose the AD account password:
-
-`2022-07-25 09:53:39,196 INFO [io.stackgres.common.extension.ExtensionMetadataManager] (executor-thread-106) Downloading extensions metadata from https://extensions.stackgres.io/postgres/repository?proxyUrl=http%3A%2F%2Fsvc_stackgreststprxy%3A%40proxy.domain.local%3A3128`
-
-1) Is it possible to mask the account password so that it does not appear in the log output?
-2) Could we deploy and point stackgres to our own internal repository for this purposes? Is there any instructions how to deploy it?",4 -111925282,2022-07-20 14:33:36.443,Adjust pagination color scheme on darkmode,"### Summary - -When showing pagination for tables on the web console, if darkmode is enabled, the color scheme makes text hard to read. - -![image](/uploads/90888402baea647be239b857354ec748/image.png) - - -#### Steps to reproduce - -- Enter the web console -- Enable darkmode by clicking on the color scheme icon on the topbar -- Load any component with enough records to show pagination -- The text on the pagination will be hard to read - - -### Expected Behaviour - -Text should be completely legible - - -### Environment - -- StackGres version: `1.3.0-SNAPSHOT`",2 -111905501,2022-07-20 09:10:30.555,Hide empty sections on Summaries,"### Summary -Titles of empty sections are shown on some Summaries. - -#### Steps to reproduce -1. Go to Create Object Storage. -2. Give it a name. -3. Open the Summary. -4. On the Summary, the `Type` section is shown but it's empty. - -### Expected Behaviour -Empty sections should not be shown. - -### Environment -- StackGres version: 1\.2.0 - -### Relevant logs and/or screenshots -![Screenshot_2022-07-20_at_11.08.44](/uploads/de41a594c7a39bd6ecbed4a5a0dcf66b/Screenshot_2022-07-20_at_11.08.44.png)",4 -111867143,2022-07-19 15:49:03.410,Fix help tooltips paths on Distributed Logs details,"### Summary -On the Distributed Logs details, some of the paths of the help tooltips are wrong. - -### Environment -- StackGres version: 1\.2.0",4 -111865767,2022-07-19 15:23:42.019,Remove Enable Primary Service toggle from Distributed Logs form,"### Summary -Although it is not possible to disable `Primary Service` on the Distributed Logs, we have a switch to do so on the Web Console. We have to remove that switch from the Distributed Logs form. - -### Environment -- StackGres version: 1\.2.0",2 -111643718,2022-07-14 13:55:56.951,Misplaced warning icons on cluster tabs,"### Summary -Some warning icons appear misplaced on cluster tabs. - -### Environment -- StackGres version: 1\.2.0 - -### Relevant logs and/or screenshots -![image__1_](/uploads/6697fd091a132b5f1d2189b28e249bc7/image__1_.png) -![image](/uploads/dd974525e391f4b247daee1e7cb745de/image.png)",2 -111521839,2022-07-12 15:45:33.119,fix the helm command adding the missing flag on this scenario,"Looking over your helm installation page, image attached, and wondering why when selecting the Customized parameters tab, the helm command does not include --create-namespace This would create the chosen custom namespace and not fail if it's already there. - - This adjustment would make the user experience a bit easier. - -![Screenshot_from_2022-07-12_17-41-48](/uploads/099ce6635bf9347b089b4a620241d721/Screenshot_from_2022-07-12_17-41-48.png) - -**Acceptance criteria:** -- [ ] add the `--create-namespace` in the helm command",1 -111313456,2022-07-07 12:31:56.723,Support SGInstanceProfile and SGPostgresConfig on Distributed Logs on the Web Console,"The web console should allow applying SGInstanceProfile and SGPostgresConfig to Distributed Logs. - -## Acceptance Criteria: -- [x] Support adding SGInstanceProfile and SGPostgresConfig to Distributed Logs. -- [x] List SGInstanceProfile and SGPostgresConfig on Distributed Logs Summary. -- [x] List SGInstanceProfile and SGPostgresConfig on Distributed Logs Details. 
-- [x] Test the implementation.",8 -111308428,2022-07-07 10:51:33.700,Toleration Seconds value not shown on Distributed Logs and Cluster Summaries,"### Summary -`Toleration Seconds` value is not shown on the DistributedLogs and Cluster Summaries. - -### Expected Behaviour -`Toleration Seconds` value should be shown. - -### Environment -- StackGres version: 1\.2.0",2 -111273479,2022-07-06 19:12:37.082,stackgres-operator-crd-upgrade pod errors out because of bad permissions on service account token," -### Summary - -stackgres-operator-crd-upgrade pod errors out because of bad permissions on service account token. - -### Current Behaviour - -The pod errors out because it is unable to read the service account token with a permission denied error: - -``` -2022-07-06 19:04:08,336 INFO [io.quarkus] (main) stackgres-jobs 1.2.0 native (powered by Quarkus 2.8.2.Final) started in 0.017s. Listening on: http://0.0.0.0:8080 -2022-07-06 19:04:08,337 INFO [io.quarkus] (main) Profile prod activated. -2022-07-06 19:04:08,337 INFO [io.quarkus] (main) Installed features: [cdi, hibernate-validator, kubernetes-client, smallrye-context-propagation, vertx] - - - -2022-07-06 19:04:08,338 WARN [io.fa.ku.cl.Config] (main) Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring. - - - -2022-07-06 19:04:08,423 ERROR [io.qu.ru.Application] (main) Failed to start application (with profile prod): io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.20.0.1/apis/apiextensions.k8s.io/v1/customresourcedefinitions/sgbackups.stackgres.io. Message: customresourcedefinitions.apiextensions.k8s.io ""sgbackups.stackgres.io"" is forbidden: User ""system:anonymous"" cannot get resource ""customresourcedefinitions"" in API group ""apiextensions.k8s.io"" at the cluster scope. Received status: Status(apiVersion=v1, code=403, details=StatusDetails(causes=[], group=apiextensions.k8s.io, kind=customresourcedefinitions, name=sgbackups.stackgres.io, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=customresourcedefinitions.apiextensions.k8s.io ""sgbackups.stackgres.io"" is forbidden: User ""system:anonymous"" cannot get resource ""customresourcedefinitions"" in API group ""apiextensions.k8s.io"" at the cluster scope, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Forbidden, status=Failure, additionalProperties={}). 
- at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:612) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:555) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:487) - at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:457) - at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:698) - at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:184) - at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:151) - at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:83) - at io.stackgres.jobs.crdupgrade.CustomResourceDefinitionFinder.findByName(CustomResourceDefinitionFinder.java:30) - at io.stackgres.jobs.crdupgrade.CrdInstallerImpl.installCrd(CrdInstallerImpl.java:50) - at io.stackgres.jobs.crdupgrade.CrdInstallerImpl.lambda$installCustomResourceDefinitions$0(CrdInstallerImpl.java:42) - at java.lang.Iterable.forEach(Iterable.java:75) - at io.stackgres.jobs.crdupgrade.CrdInstallerImpl.installCustomResourceDefinitions(CrdInstallerImpl.java:42) - at io.stackgres.jobs.Main.run(Main.java:61) - at io.stackgres.jobs.Main_ClientProxy.run(Unknown Source) - at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:124) - at io.quarkus.runtime.Quarkus.run(Quarkus.java:67) - at io.quarkus.runtime.Quarkus.run(Quarkus.java:41) - at io.quarkus.runner.GeneratedMain.main(Unknown Source) - -2022-07-06 19:04:08,424 INFO [io.st.co.ku.KubernetesClientProducer] (main) Closing instance of StackGresKubernetesClient -2022-07-06 19:04:08,426 INFO [io.quarkus] (main) stackgres-jobs stopped in 0.003s -``` - - - -#### Steps to reproduce - -Install helm chart via ArgoCD - -### Expected Behaviour - -This Job should be able to read the service account token - -### Possible Solution - -If you look at the pod definition you will see that securityContext is empty: - -``` -spec: - containers: - - env: - - name: OPERATOR_NAME - value: stackgres-operator - - name: OPERATOR_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: CRD_UPGRADE - value: ""true"" - - name: CONVERSION_WEBHOOKS - value: ""false"" - image: stackgres/jobs:1.2.0 - imagePullPolicy: IfNotPresent - name: stackgres-jobs - resources: {} - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount - name: kube-api-access-n8xsj - readOnly: true - dnsPolicy: ClusterFirst - enableServiceLinks: true - nodeName: ip-10-100-155-169.ec2.internal - preemptionPolicy: PreemptLowerPriority - priority: 0 - restartPolicy: OnFailure - schedulerName: default-scheduler - - - - securityContext: {} - - - - serviceAccount: stackgres-operator-crd-upgrade - serviceAccountName: stackgres-operator-crd-upgrade - terminationGracePeriodSeconds: 30 -``` - -Adding spec.securityContext.fsGroup=65534 appears to fix this issue. 
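-
-In terms of the pod spec shown above, the suggested change would look roughly like the following fragment (values taken from this report):
-
-```yaml
-spec:
-  # fsGroup makes the projected service account token group-readable by the non-root container user
-  securityContext:
-    fsGroup: 65534
-  serviceAccountName: stackgres-operator-crd-upgrade
-```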
-
-Before:
-
-```
-~ ❯ k exec -it job-test -- /bin/bash
-bash-4.4$ cat /var/run/secrets/kubernetes.io/serviceaccount/token
-cat: /var/run/secrets/kubernetes.io/serviceaccount/token: Permission denied
-bash-4.4$ exit
-```
-
-After:
-
-```
-~ ❯ k exec -it job-test -- /bin/bash
-bash-4.4$ cat /var/run/secrets/kubernetes.io/serviceaccount/token
-eyJhbGciOiJSUzI1NiIs...
-bash-4.4$
-```
-
-
-### Environment
-
-- StackGres version: 1.2.0
-
-- Kubernetes version: v1.21.12-eks-a64ea69
-
-- Cloud provider or hardware configuration:
-
-
-### Relevant logs and/or screenshots
-
-",8
-111216618,2022-07-05 23:01:35.991,Review web console interceptors to REST API responses,"### Summary
-
-On the web console, when processing REST API responses, there are several interceptors set with generic messages depending on the response code coming from the REST API. In some cases, this may cause the error notification to be shown several times.
-
-Such is the case when trying to edit/update a cluster while there is an SGDbOp running for it.
-
-![image](/uploads/afa147eb8af810b63d19b28af65e92eb/image.png)
-
-
-### Expected Behaviour
-
-Notification messages for REST API responses should only appear once. We should review such interceptors and the notifications engine in order to avoid repeated messages.
-
-### Environment
-
-- StackGres version: `1.2.0`",8
-111197080,2022-07-05 14:55:10.025,Managed backups specs not loading on SGCluster form,"### Summary
-
-When creating a cluster on the web console, the fields won't load because of a typo in the code.
-
-
-#### Steps to reproduce
-
-- Enter the web console
-- Go to the ""Create Cluster"" form
-- Click on the ""Backups"" step
-- Toggle the ""Managed Backups"" switch
-- Nothing happens and the following error appears on the browser's console
-
-![image](/uploads/b6ae7fbf7ae5674edf9c6c22667ecf4d/image.png)
-
-
-### Expected Behaviour
-
-Backup specs should load properly when enabled
-
-### Environment
-
-- StackGres version: `1.3.0-SNAPSHOT`",2
-111117871,2022-07-04 10:57:18.407,Proposed default names contain non-valid characters,"### Summary
-When trying to create a Backup or DbOp, a default name is proposed. The default name contains non-valid characters and the creation fails.
-
-#### Steps to reproduce
-1. Go to Backups form.
-2. The proposed default name contains non-valid characters (`:`).
-
-### Expected Behaviour
-Colons should be replaced by dashes.
-
-### Environment
-
-- StackGres version: 1\.2.0
-
-### Relevant logs and/or screenshots",4
-111024157,2022-07-01 12:57:42.241,"In the StackGres UI, when trying to create a cluster with a PITR restore time from the last backup, the PITR time field doesn't let me input anything","It lets me input any time for the later backups but not for the last one; I tried to recover a cluster from the latest backup this morning.
-
-There is a workaround: I can make a manual backup, and that lets me restore to the time I need",8
-111014756,2022-07-01 10:26:45.519,Improve web console homepage when there are no namespaces in use,"### Summary
-
-The homepage for the web console shows a set of cards associated with the namespaces that have at least one resource in them. If there are no resources created in any namespace, the homepage just shows a ""Used Namespaces"" title. 
- -![image](/uploads/6ef8af9e6b1e67a50cbc8f9193b2b21e/image.png) - - -#### Steps to reproduce - -- Enter the web console -- Make sure there are no resources in any namespace -- No cards are shown but the ""Used Namespaces"" still appears - - - -### Environment - -- StackGres version: `1.2.0`",4 -111012180,2022-07-01 09:34:22.782,"NodeSelector and NodeAffinity for Backups, DistributedLogs and SGDbOps","### Problem to solve - -Currently on StackGres is possible to use NodeSelect and NodeAffinity for Backups, DistributedLogs, and SGDbOps! - -### Proposal - -Please add `NodeSelector` and `NodeAffinity` capability for Backup, DistributedLogs and SGDbOps. - -DistributedLogs: -```yaml -scheduling: - nodeAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: - preference: - matchExpressions: - - key: - operator: - values: - matchFields: - - key: - operator: - values: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - matchExpressions: - - key: - operator: - values: - matchFields: - - key: - operator: - values: -``` - -SGDbOps: -```yaml -scheduling: - nodeSelector: - type: - nodeAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: - preference: - matchExpressions: - - key: - operator: - values: - matchFields: - - key: - operator: - values: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - matchExpressions: - - key: - operator: - values: - matchFields: - - key: - operator: - values: -``` - -Backup example(inside `SGCluster` CRD): - -```yaml -scheduling: - backup: - nodeSelector: - type: - nodeAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: - preference: - matchExpressions: - - key: - operator: - values: - matchFields: - - key: - operator: - values: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - matchExpressions: - - key: - operator: - values: - matchFields: - - key: - operator: - values: - -``` - -### Testing - -**Acceptance Criteria:** -- [ ] Able to add NodeSelector, NodeAffinity for SGDbOps, SGBackup and SGDistributedLogs -- [ ] Update documentation of the CRD -- [ ] Test the implementation",16 -110906433,2022-06-29 14:25:53.570,The info property of all sgcluster related endpoints is returning the deprecated -primary service,"### Summary - -The info property of all sgcluster related endpoints is returning the deprecated `-primary` service instead of the `` service. - -#### Steps to reproduce - -1. Create a cluster -2. Read the cluster using `/stackgres/sgclusters` or `/stackgres/namespaced//sgcluster/` - -### Expected Behaviour - -The info property of all sgcluster related endpoints should return the `` service - -### Environment - -- StackGres version: 1.2.1 -- Kubernetes version: ? 
-- Cloud provider or hardware configuration: ?",4 -110853656,2022-06-28 17:40:59.449,Improve Backup configuration layout/order on SGCluster form,"### Problem to solve - -When setting up backup specs on an SGCluster, the following layout is shown: - -![image](/uploads/4ccedafc86e7df06c18e5c10479397ca/image.png) - - -### Proposal - -This layout might be improved by making the following changes: - -- [ ] Include an ""Enable Managed Backups"" checkbox (since managed backups are optional and disabled by default -- [ ] Include a ""Create new"" option on the Object Storage dropdown -- [ ] Move the Object Storage dropdown from the bottom to the top of the fieldset - - - -## Acceptance Criteria -- [ ] Make the proposed changes -- [ ] Make sure everything works",4 -110850375,2022-06-28 16:36:55.059,"Clone CRD function not working for SGClusters, SGPostgresConfigs and SGPoolingConfigs","### Summary - -When trying to make a clone of SGClusters, SGPostgresConfigs or SGPoolingConfigs from the web console, the clone dialog box does not load when clicking on the corresponding button. - - -#### Steps to reproduce - -- Enter the web console -- Head to any namespace -- Click on the ""CLONE"" button for any of SGCluster, SGPoolConfig or SGPostgresConfig -- The dialog box to set the clone info does not open - -### Expected Behaviour - -- The CLONE dialog box should open and cloning any resource other than SGBackups and SGDbOps should work properly - - - -### Environment - -- StackGres version: `1.3.0-SNAPSHOT`",8 -110829558,2022-06-28 11:32:56.201,WebUI - Add loadBalancerIP to SGCluster and SGDistributedLogs,"### Problem to solve - -Currently, it is not possible to specify a custom load balancer IP for the Postgres services, the idea of this is to be able to set a custom load balancer IP for R/W and R/O connections, this also allows to keep the same IP in case if it's necessary to recreate the services. - -### Proposal - -In the SGCluster services section allows to add a custom load balancer IP (SGCluster and DistributedLogs) - -```yaml -postgresServices: - primary: - type: LoadBalancer - loadBalancerIP: 80.11.12.10 - replicas: - type: LoadBalancer - loadBalancerIP: 80.11.12.11 -``` - -And generate a service like: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-cluster -spec: - selector: - app: StackGresCluster - cluster-name: my-cluster - role: master - ports: - - name: pgport - protocol: TCP - port: 5432 - targetPort: pgport - loadBalancerIP: 80.11.12.10 -``` - -### Links / references - -**Acceptance criteria:** -- [ ] Implement the change on the WebUI - SGCluster -- [ ] Implement the change on the WebUI - DistributedLogs -- [ ] Create tests",8