diff --git "a/1304532.csv" "b/1304532.csv" deleted file mode 100644--- "a/1304532.csv" +++ /dev/null @@ -1,23312 +0,0 @@ -issuekey,created,title,description,storypoints -26249792,2019-10-23 20:15:59.531,Remove SSLMate verification records from DNS Terraform env config + state,"Noted https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1124, the import of records from DNS into terraform config obtained SSLMate verification records, which are maintained (via API) by SSLMate itself, and should not be in terraform. - -They need to be removed from the config, and from the terraform state (so we don't actually delete them from Route53)",1.0 -26230890,2019-10-23 12:32:17.812,RCA: 2019-10-23: Short outage of gitlab.com," - -Incident: gitlab-com/gl-infra/production#1272 - -## Summary - -We had 2 short outages of gitlab.com related to issues with our redis cluster being caused by elevated memory usage. - -- Service(s) affected : ~""Service:Web"" -- Team attribution : -- Minutes downtime or degradation : 14m - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - all services depending on Redis didn't work during the outage, which was affecting most services. -- Who was impacted by this incident? - - all users -- How did the incident impact customers? - - 500 errors on gitlab.com -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - pingdom alerts and immediate reports from users -- Did alarming work as expected? - - yes -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. 
(Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- annotations for feature flags https://gitlab.com/gitlab-com/gl-infra/delivery/issues/525 -- metrics for oom events https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8250 -- make sure we have proper memory saturation alerts for redis -- research why we didn't fail over to redis-02 -- fix sending redis logs to elastic: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8252 -- Upgrading to Redis 4 or higher would allow us to have used the `MEMORY PURGE` option to deallocate memory, but this is not available to us since we're still on Redis 3.2 https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3785 -- Add Redis memory fragmentation alerts https://gitlab.com/gitlab-com/runbooks/merge_requests/1547 -- Add specialised Redis memory saturation metric https://gitlab.com/gitlab-com/runbooks/merge_requests/1548 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -26226264,2019-10-23 10:13:46.540,Improve project restoration runbook,"While working on https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8239 I needed to spend a lot of time to research db restoration steps that are not covered in enough detail in the [runbook](https://gitlab.com/gitlab-com/runbooks/blob/master/howto/community-project-restore.md). - -We should add documentation for how to find the right base backup, which variables need to be set for the pipeline, necessary timeout settings to let the job complete, which gcp project the restore instance is running in etc. - -Thankfully, part of that already is done in this branch: https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/tree/T4cC0re-master-patch-46363. But it's not merged and the README.md instructions contain typos (it should be `IS_RESTORE` instead of `IS_RECOVERY` and `PSQL_BACKUP_TIME` instead of `PQSL_BACKUP_TIME`). - -/cc @T4cC0re",3.0 -26225375,2019-10-23 09:53:42.754,Add external access to Kubernetes Prometheus,"We need a simplified way to directly access Prometheus server console (`:9090`). - -1. Document/script proxy/tunneling access. -2. Create web service with auth to access.",3.0 -26222269,2019-10-23 08:48:58.604,Clean up recording rules in Kubernetes,"Recording rules fail in Kubernetes due to label differences in Chef vs K8s. - -1. Fix rules so they work in both. -2. Split or filter out K8s-incompatible rules. 
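For the recording-rules cleanup above, one possible shape for a rule that evaluates cleanly under both the Chef-provisioned and the Kubernetes Prometheus servers is to aggregate away the labels that only exist on one side. This is a sketch only — the metric and label names below are illustrative, not taken from our rule files:

```
# Sketch: hypothetical metric; aggregate by labels common to both the
# Chef and Kubernetes scrape configs so the same rule works in either.
sum by (environment, type, stage) (rate(http_requests_total[5m]))
```

Rules that cannot be expressed this way would fall under item 2 above and be split into environment-specific rule files.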
- -This is breaking out one of the items from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8186",4.0 -26170502,2019-10-21 19:16:32.644,Deploy redundant praefect nodes on gstg behind a load balancer,"Spawning from a discussion on the Praefect readiness review https://gitlab.com/gitlab-com/gl-infra/readiness/merge_requests/10: - -Let's try deploying redundant praefect nodes behind a GCP load balancer to avoid introducing a new SPOF in our infrastructure. - -- [x] Terraform changes: Terraform changes https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1189 -- [x] chef-repo MR: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/2167 - -/cc @gl-gitaly",5.0 -26160887,2019-10-21 13:54:25.196,DNS Record modification for domain gitlab.design,"UX would like to run research using UX prototypes POCs and we need a domain name to host that under. We already have `gitlab.design` domain that is used to redirect to the Pyjamas site. - -Can we have subdomain `research.gitlab.design` allocated for the prototypes? It would be great to add this as a DNS Zone in GCP, so that going forward, people can add second-level subdomains to this sub-domain and point their instances to them. - -PS: The redirect site does not have TLS, we should fix that too while we're at it.",1.0 -26121604,2019-10-19 19:14:29.528,Discuss: How to make SLO alerts for CI Runners less noisy and more actionable?,"The SLO alerts for CI Runners does not yet meet reasonable standards for an actionable alert. We all know this, but I wanted to jot down a few specific things that would make it better. - -- This alert still has an unreasonably low signal-to-noise ratio. - - The apdex score dropped below SLO [9 times in the last 24 hours](https://dashboards.gitlab.net/d/general-service/general-service-platform-metrics?orgId=1&from=1571420700000&to=1571507100000&var-PROMETHEUS_DS=Global&var-environment=gprd&var-type=ci-runners&var-stage=main&var-sigma=2&fullscreen&panelId=7), 5 of which lasted long enough for PagerDuty to alert (see [screenshot as of 17:45 UTC](/uploads/4bb87a9e4b688fe32461617bc310bcd1/Screenshot_from_2019-10-19_10-56-08.png)). -- This alert has no runbook yet. - - The PagerDuty alert refers to `troubleshooting/service-ci-runners.md`, but that file does not exist in the [runbooks repo](https://ops.gitlab.net/gitlab-com/runbooks/tree/master/troubleshooting). - - The PagerDuty alert itself gives a recommended response which amounts to: look at other metrics and logs for signs of abusive behavior, high request rate, or slow responses from dependent services. That's good generic advice for any SLO violation, but it's too vague to be actionable. -- What dashboards are useful for troubleshooting CI Runners? - - The [""CI Runners Service"" -> ""CI"" dashboard](https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m) shows workload-related metrics: - - queue sizes - - percentiles of durations of recently completed jobs - - job count by runner node - - The [""Service Platform Metrics"" dashboard](https://dashboards.gitlab.net/d/general-service/general-service-platform-metrics?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-type=ci-runners&var-stage=main&var-sigma=2&fullscreen&panelId=7) can show the latency apdex graph when `type` is set to `ci-runners`. -- Should the SLO should be reframed in a way that makes it more achievable? 
- - The following are just my initial thoughts, but I believe @andrewn is already advocating for improving this, and he probably already has much more specific ideas for improvements. - - The CI Runner workload is largely outside of our control (e.g. user-defined jobs may take any amount of time to complete, and they may depend on external resources that are outside of both our control and visibility. Example: Would SLO be violated by a sufficiently large number of jobs running `sleep 1000`? - - Reframing the SLO in terms that address *only the controllable aspects* of our platform's execution of jobs would make this goal more achievable. What working agreements do we already have for the CI Runner platform, describing to our end-users under what conditions a job execution may be aborted, such as job timeout deadline, usage quotas on machine resources (CPU, network bandwidth, max connection count, max disk IOPS, etc.)? - - We may benefit from comparing our set of quotas to those of other platform providers (e.g. GCP, AWS, Heroku, etc.). Other platform providers also share this problem of needing to set achievable SLOs that are resilient to abusive behavior and support auto-scaling, auto-termination, and auto-rate-limiting. -- Both auto-scaling and building automatic defensive mitigations for abuse requires having more specific goals than just a latency target for an unconstrained input workload. More prescriptively describing the service level would also be super helpful in making concrete proposals for automating tactics to achieve these goals while also improving cost efficiency and reducing toil. - -cc @gitlab-com/gl-infra",1.0 -26102795,2019-10-18 19:45:22.832,Add Slack Integration to Sentry,"**Problem:** Server 500 errors are being collected by Sentry and the appropriate teams are not learning about those errors on a timely basis. - -**Solution:** -I would like to configure Sentry to have richer Slack integration. This can be done following these steps: -https://forum.sentry.io/t/how-to-configure-slack-in-your-on-prem-sentry/3463 - -I have already created a Slack Application as per the configuration documents. - -The 3 pieces of data to complete the integration are - -``` -slack.client-id: <'client id'> -slack.client-secret: -slack.verification-token: -``` - -These values need to be added to the config.yml for Sentry via the chef recipe. - -The values for those variables are stored in 1Password Team Vault under the record 'Slack App for Secure Sentry'. - -Once those values are added to config.yml for Sentry and Sentry has been restarted, the ""+Add Workspace"" on this page should work, and allow us to configure the rest of the alerting rules. -https://sentry.gitlab.net/settings/gitlab/gitlabcom/integrations/slack/",1.0 -26063153,2019-10-18 03:08:51.377,Kubernetes ingress crashing for version.gitlab.com,"The new kubernetes ingress for GitLab Services which is managed by the GitLab Kubernetes integration keeps crashing. It's unclear whether this is a problem with the integration, with the app, or with the version of kubernetes that we are running. - -The migration from AWS to GKE has been rolled back until this is solved. 
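To help narrow down whether the crashes come from the integration, the chart, or the cluster itself, a few standard `kubectl` checks against the deployment are a reasonable first step (a sketch; the namespace and deployment name are taken from the console link below, and the pod name is a placeholder):

```
kubectl -n gitlab-managed-apps describe deployment ingress-nginx-ingress-controller
kubectl -n gitlab-managed-apps get events --sort-by=.lastTimestamp
# <ingress-pod-name> is a placeholder for whichever controller pod is failing
kubectl -n gitlab-managed-apps logs <ingress-pod-name> --previous
```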
- -The ingress pods show `0 of 1 updated replicas available - Pods have warnings` - -And the Pod Logs show: - -``` -I1018 02:53:08.287037 7 main.go:150] Received SIGTERM, shutting down -I1018 02:53:08.287102 7 nginx.go:321] shutting down controller queues -I1018 02:53:08.287131 7 status.go:115] updating status of Ingress rules (remove) -I1018 02:53:08.303618 7 status.go:134] removing address from ingress status ([35.243.233.112]) -I1018 02:53:08.308592 7 status.go:342] updating Ingress version-gitlab-com-6491770-production/production-auto-deploy status to [] -I1018 02:53:08.316522 7 nginx.go:329] stopping NGINX process... -``` - -The ingress in question is: https://console.cloud.google.com/kubernetes/deployment/us-east1/gs-production-gke/gitlab-managed-apps/ingress-nginx-ingress-controller?project=gs-production-efd5e8&authuser=1&cloudshell=false&organizationId=769164969568&tab=overview&deployment_overview_active_revisions_tablesize=50&duration=PT1H&pod_summary_list_tablesize=20&service_list_datatablesize=20 - - -//cc @jameslopez",2.0 -26061435,2019-10-18 00:57:53.905,Investigate intermittent response time spikes concurrently affecting multiple services,"Hunt for clues about intermittent slowness reported for GitLab.com. - -During at least the last 2 days (maybe more), several internal users have reported observing brief periods of slow performance on the web UI and git-push. These reports so far lack details, but reviewing the frontend metrics revealed several brief but large spikes in the 50th percentile response times recorded by HAProxy. Several (but not all) of HAProxy's backend service pools concurrently reported these spikes, which implies a shared dependency of these specific services may have been stressed. - -This issue aims to: -* Document the findings so far. -* Try to identify which internal services were stressed during a few of these spikes. -* If a pattern emerges, try to determine more about the nature of the bottleneck (e.g. higher overall request rate, higher rate of slow/costly requests, contention over a machine resource, lock contention, saturation of a resource pool, etc.).",2.0 -26060003,2019-10-17 23:24:15.783,Scale up `review-apps` runners for about-src.gitlab.com,"Right now we have 80+ jobs stuck waiting for the single `about-src.gitlab.com` runner to pick up the `review-apps` jobs: - -![image](/uploads/4e991349df60e5e7b18b010587e2207c/image.png) - -https://gitlab.com/gitlab-com/www-gitlab-com/-/jobs?page=2&scope=pending - -I think we should scale this up, although the bulk of the time may be downloading the artifacts. We may need more cores on that machine as well to handle this.",6.0 -26055944,2019-10-17 20:25:04.010,Investigate running Discourse on GKE,"We're looking to [migrate forum.gitlab.com to GKE](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/101). This issue is to track research on how we're going to be able to do that. - -Test project: https://gitlab.com/cody/discourse-k8s-sandbox-2 - -## Can we use AutoDevOps? - -I think we can probably use AutoDevOps for the build and deployment of Discourse. But I'm still investigating how that might work.",5.0 -26055145,2019-10-17 19:41:25.481,SSL certificate has expired for pre.gitlab.com,The SSL certificate has expired for https://pre.gitlab.com/.,3.0 -26038539,2019-10-17 12:30:52.307,Enable HTTP compression on customers.gitlab.com,"# Summary -https://customers.gitlab.com doesn't have HTTP compression enabled. 
See [this report](https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fcustomers.gitlab.com%2F) from PageSpeed. - -# Proposal -Enable gzip compression on nginx. This will greatly improve performance and shouldn't cause any issues.",1.0 -26034432,2019-10-17 10:34:50.873,Request access to ops.gitlab.net and restore project for OnGres team,"We need access to https://ops.gitlab.net and https://dashboards.gitlab.net/ for all the @gl-consultants-ongres team, in order to move forward with some issues. - -We already have access to the restore project (https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore). - -cc/ @glopezfernandez",2.0 -26008352,2019-10-16 15:37:34.682,RCA: Oct 16 / Customers.gitlab.com 500 outage," - -Incident: https://gitlab.com/gitlab-com/gl-infra/production/issues/1259 - -## Summary - -During a production change to customers.gitlab.com, chef was run via ```sudo``` and was unable to create files required to run rake commands to set up Yarn. This caused the application to be unable to serve requests properly. - -To get into this specifically, the commands to use yarn depend on a specific file being updated to trigger them. When chef was run via sudo, the version file was updated, and the specific commands to initialize yarn were not run due to a file permission issue. Subsequent runs of chef (as root or not) would not trigger the initialization commands since the specific file was not being updated again. - -https://gitlab.com/gitlab-com/gl-infra/production/issues/1257 - -Service(s) affected : customers.gitlab.com -Team attribution : -Minutes downtime or degradation : - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. 
While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -Customers.gitlab.com is returning 500 errors. - -1. Why? - The Yarn install was not run. -2. Why? - Chef attempted to run this, but it could not create files in a /home folder. -3. Why? - Chef-client run via sudo doesn't have permissions to the executing user's home folder. -4. Why? - The chef recipe assumes the rake commands will have access to the folder. -5. Why? - Chef is usually run as root (full login) or via the chef-client daemon. - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -26006259,2019-10-16 15:24:32.623,Multiple configuration issues with Prometheus in Kubernetes,"* [x] Failing rule evaluations are showing up in the logs quite often -* [x] No monitoring of Prometheuses running in our clusters (such as the ops environment monitoring all our existing prometheus servers) - * We are configured to have at least 2 Prometheus Pods running in `gprd` but only 1 is running currently: `up{cluster=""gprd-gitlab-gke"",env=""gprd"",job=""gitlab-monitoring-promethe-operator""}` and we are completely unaware of this missing redundancy -* [x] No external mechanism to reach the prometheus endpoint in our clusters - * We only expose the thanos endpoint. 
One must utilize `kube port-forward` to reach the prometheus endpoint: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/monitoring -* [x] Scraping data appears to be inconsistent - * Metric scrapes in Kubernetes appear to only be capturing one metric over the course of time, example query: `kube_pod_container_info{container=""registry"", cluster=""gprd-gitlab-gke""}[5m]` -* [x] Deduplication of mulitple prometheus instances is not occuring - * Example query: `count (kube_pod_container_info{container=""registry"", cluster=""gprd-gitlab-gke""}) without(cluster,container,container_id,endpoint,env,environment,image,image_id,instance,job,namespace,pod,prometheus,provider,region,service)` - * Returns multiple data points: -``` -✔{prometheus_replica=""prometheus-gitlab-monitoring-promethe-prometheus-1""} -✔{prometheus_replica=""prometheus-gitlab-monitoring-promethe-prometheus-0""} -``` - -All of the above is contributing to awkward metrics at times. Specifically, noting our Container Registry dashboard: https://dashboards.gitlab.net/d/8wlZHpTZz/registry-pod-info-copy?orgId=1&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd&var-cluster=gprd-gitlab-gke&var-namespace=monitoring&var-Node=All&var-Deployment=gitlab-registry, some metrics appear duplicated at times, which is impacting our ability to properly monitor our clusters with confidence. - - -/cc @ansdval",8.0 -25999110,2019-10-16 12:40:30.585,Extend certificate-updater to update expiring GCP LB certificates,"As was suggested in today's DNA meeting, it would be a good addition to [Certificate Updater](https://gitlab.com/gitlab-com/gl-infra/certificates-updater/) to be able to update GCP LB certificates.",2.0 -25931768,2019-10-15 01:57:08.028,Remove testbed env pubsub beat variable,"As noted at https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1083#note_35520 there is spurious pubsubbeat config in the testbed env. - -It should be removed",1.0 -25928449,2019-10-14 21:08:02.743,file-33 and file-34 need to be rebalanced,"The gitaly nodes file-33 and file-34 have crossed the 80% disk usage threshold and need to have projects moved to less full servers. For reference: https://gitlab.com/gitlab-com/runbooks/blob/master/howto/sharding.md - -@dawsmith I don't know if this is SRE-Oncall or part of normal workflows for one of the teams.",2.0 -25928397,2019-10-14 21:04:06.723,Move terraform GCS instance service account to project module,"While cleaning up terraform code for `gitlab-ENV-secrets` buckets duplicated between the `project` and `storage-buckets` modules, [this block](https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/storage-buckets/blob/master/main.tf#L73-81) was leftover creating the IAM binding granting access to the `terraform` service account used for all GCE VM instances. - -With the secrets bucket moved to the project module, it doesn't make sense to keep this IAM binding in the old module, but moving it implicates also managing the `terraform` service account with the project, and exporting the account email/ID for other terraform configurations to access via remote state lookup for the bindings that _are_ staying in the `storage-buckets` module. I inspected the terraform state files for `gstg` and `gprd` and it appears that this service account is not currently managed by terraform, and so can easily be imported into the `env-projects` configuration after the `project` module is updated. - -1. 
[ ] Add a `google_service_account` resource to the `project` module for the `terraform` account. -1. [ ] Remove the `google_storage_bucket_iam_binding` from the `storage-buckets` module -1. [ ] Add a `google_storage_bucket_iam_binding` to the project module -1. [ ] Import the service account to the terraform state for `env-projects` -1. [ ] Import the `google_storage_bucket_iam_binding` resource to the terraform state for `env-projects` -1. [ ] Bump module versions for `project` and `storage-buckets` across all environments",3.0 -25912154,2019-10-14 14:21:08.993,Rollout Thanos v0.8.x,"Thanos v0.8.0 includes a number of improvements. Including a new sidecar flag `--min-time` that allows us to more easily test changes to depending on Prometheus retention. - -See: https://github.com/thanos-io/thanos/releases/tag/v0.8.0",2.0 -25875261,2019-10-12 16:20:20.061,Setup logging for Praefect on gstg,"As part of its [readiness review](https://gitlab.com/gitlab-com/gl-infra/readiness/merge_requests/10), we must setup logging for Praefect. Since we already deployed it on gstg we can setup its logging infrastructure there so we can verify it and replicate it later on gstg. We'll need: - -- [x] A pubsub host: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1103 -- [x] A gitlab_fluentd recipe: https://gitlab.com/gitlab-cookbooks/gitlab_fluentd/merge_requests/92 -- [x] Add the recipe to our praefect hosts: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1983",3.0 -25845962,2019-10-11 10:24:08.514,Remove sidekiq mtail metrics,"Sidekiq now has internal metrics for jobs, so we can remove the mtail collection of this data.",2.0 -25813869,2019-10-10 14:19:55.222,Create third storage node in the staging environment,Create new third storage node in the staging environmnent in order to support synthetic system halt testing.,3.0 -25783498,2019-10-09 20:59:33.333,Services resource deprecated for google_project_services,"The `google_project_services` resource used in the project module is deprecated. - -``` -Warning: google_project_services is deprecated - many users reported issues with dependent services that were not resolvable. Please use google_project_service or the https://github.com/terraform-google-modules/terraform-google-project-factory/tree/master/modules/project_services module. This resource will be removed in version 3.0.0. -```",1.0 -25776897,2019-10-09 18:31:08.455,[RCA] forum.gitlab.com down,"Incident: gitlab-com/gl-infra/production#1215 - -## Summary - -forum.gitlab.com was down, found the container had stopped/failed on initial login during triage. - -Service(s) affected : forum.gitlab.com -Team attribution : @gitlab-com/gl-infra -Minutes downtime or degradation : 44 minutes - -## Timeline - -2019-10-01 - -* 17:43 UTC - EOC alerted that forum.gitlab.com was down -* 17:53 UTC - EOC acknowledged Pagerduty alert -* 18:10 UTC - EOC determines that the existing discourse container cannot be restarted -* 18:13 UTC - EOC initiates a rebuild of the discourse container to destroy the old, rebuild/restart the new, and initiate the bootstrap process -* 18:18 UTC - Rebuild process accidentally terminated -* 18:19 UTC - Rebuild restarted -* 18:27 UTC - Container rebuild completed, application begins serving requests again - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. 
external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. 
- -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -25776600,2019-10-09 18:19:08.948,Test effects of making kernel watchdog more patient than NFS timeout on api-XX nodes,"#### Goal: - -Determine if making the watchdog more patient can change the failure mode from a kernel panic and reboot to a failed synchronous write operation. - -Determine if a failed NFS write is more or less impactful to our Rails app on `api-XX` VMs than a kernel panic and automatic reboot. (Example: If the filesystem becomes read-only, then the panic+reboot is preferable because recovery requires no manual intervention.) - -#### Method outline: - -* Check that the NFS timeout really does follow the arithmetic outlined below (which is based on the nfs manpage, not based on empirical testing). The two relevant mount options are: `timeo=50` (50 centiseconds = 5 seconds) and `retrans=2`. The manpage says each retry linearly adds another `timeo` to the timeout, and 2 retries means 3 tries in total. So: 1st try waits 5 seconds, 2nd try waits 10 seconds, 3rd try waits 15 seconds, then fail after a total patience of 5+10+15=30 seconds. -* On a non-production VM, try to do a *synchronous write* to an NFS file while sabotaging that write in some way that the NFS client is unaware of. (Maybe a client-side iptables rule to drop outgoing or incoming NFS traffic? Disabling the NIC would let the kernel know to sever the TCP connection, so that might not fail in the intended way.) This test aims to synthetically confirm that we can reproduce the observed behavior. -* Adjust the kernel watchdog timeout (2 * `kernel.watchdog_thresh` = 20 seconds) to be more patient than the NFS timeout (30 seconds I think, see above). -* Repeat the synthetic test. Does the VM fail the write instead of panic and reboot? Is the NFS volume still mounted in read-write mode? -* Revert the sabotage. Does the VM's NFS mount become usable again? -* If the above tests were done on one of our `api-XX` VMs, is the Rails app still functional without rebooting the VM? - -#### Background: - -In #7872 we have been tracking GCP VM spontaneous reboots. A subset of those reboots are `soft lockup` kernel panics. In the weeks since we started tracking these unplanned reboots, this particular flavor has only occurred on `api-XX` and `web-XX` VMs. These VMs regularly use NFS volumes, and that might be relevant. - -We have ample evidence that the GCP platform sometimes drops network connectivity to some or all network peers, often lasting for tens of seconds before recovering. Some of these `soft lockup` events show signs of such a network connectivity disruption preceding the kernel panic. So far **all** of these `soft lockup` events share [nearly identical kernel stack traces](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7872#note_227714232), suggesting these events have a common cause. At the time of the panics, the hung process that was hogging a CPU (implicitly preventing that CPU's watchdog thread from running) was stalled waiting on a synchronous `write` system call. - -As a [working hypothesis](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7872#note_227712046), that hung process's `write` syscall may leave the process in uninterruptible sleep, preventing other processes (including watchdog) from being scheduled on that vCPU. We do not know what that `write` syscall's file descriptor refers to. 
It could point to a file on NFS, a file on ext4, or a network socket. Because this pathology has so far only affected hosts using NFS, that possibility may be more likely. - -If the hung process is stalled while writing to NFS, then if we adjust the watchdog timer to be more patient than the filesystem's timeout, then the outcome should change from a kernel panic to a failed write. - -A write failure should be logged by the kernel (i.e. reliably detectable and more clear to interpret than a kernel panic). But we need to know if the failed write will lead to longer lasting application problems, since it would not automatically reboot the VM. - -This issue is to test the effects of a failed NFS write on one of our `api-XX` VM, and determine if this outcome is: -* diagnostically useful -* preferable to a reboot from an availability standpoint - - -##### What is a `soft lockup`? - -A `soft lockup` kernel panic is triggered by the kernel detecting that at least one of its CPUs appears to be hung. The detection mechanism is that each processor (CPU core or vCPU) has a dedicated watchdog thread that resets a count-down timer each time it runs. If any processor fails to run that watchdog thread before the timer reaches zero, the kernel takes this as a sign of extreme duress and possibly a bug, since normally the scheduler would easily be able to issue a time slice to the watchdog thread even under heavy load. So the kernel panics, calling it a `soft lockup` event.",2.0 -25771098,2019-10-09 15:26:17.708,Synthetic system halt test in staging,"The `file-33-stor-gprd` project storage node is currently experiencing repeated unexplained system halts. - -In order to support migrations for re-balancing large projects off of `file-33-stor-gprd` and onto `file-40-stor-gprd`, it is important to collect some data on what to expect if a source system were to halt in the middle of a project migration. - -Planning the steps to undertake in orer to accomplish a test of this scenario, as well as the documenting of the consequences of the expected failures, will be the focus of this issue. - -## Plan - -1. [x] Create a new file storage node. -1. [ ] Enable the node to be used as storage using the GitLab admin console. -1. [ ] Create or move a handful (at least three or four, maybe more) projects -1. [ ] Begin migration of project from `file-04-stor-gstg` to one of the other nodes. -1. [ ] Meanwhile, in a shell session in the `file-04-stor-gstg` system, invoke the following commands: `sudo sysctl -w kernel.panic=10; sudo sysctl -w kernel.sysrq=1; echo c | sudo tee /proc/sysrq-trigger` -1. [ ] Now observe the serial port logs for `file-04-stor-gstg` at https://console.cloud.google.com/compute/instancesDetail/zones/us-east1-c/instances/file-04-stor-gstg/console?project=gitlab-staging-1&organizationId=769164969568#end -1. [ ] Analyze the migration logs and the system logs to determine the consequences of the system halt on the project migration.",5.0 -25734517,2019-10-08 18:04:26.654,Add metadata to terraform managed projects,"Some of our GCP projects are managed by the [Project](https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project) terraform module. Some are managed manually. There is currently not an easy way to tell which is which without asking someone or checking many different repositories. - -I propose altering the Project module to add some metadata to all projects indicating that it is managed by terraform and should not be manually changed. 
This could be as simple as adding a `terraform-managed` tag like we have on some other resources. - -/cc @Craig @cmcfarland",1.0 -25724069,2019-10-08 12:37:01.595,file-41 was not provisioned properly,"During this incident https://gitlab.com/gitlab-com/gl-infra/production/issues/1222 we created file-41 and ended up not using it, but kept it anyway as it would have to be created at some point. There were issues with initial provisioning. We've hit this https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6933 , manual rerun of `chef-client` was successful. Then, gitaly was complaining that the `/var/opt/gitlab/git-data/repositories` directory was missing and indeed it was not there. The directory was actually up one level higher, and `/var/opt/gitlab/git-data` was missing. It was manually created, but gitaly still complained until `/var/opt/gitlab/git-data/repositories` was also created. - -What remains to be done is: -- verify that file-41 is fully operational -- investigate why the directory was not created in the first place (find it in Chef code for example) -- investigate https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6933 - - -/cc @craig",3.0 -25716261,2019-10-08 09:28:03.708,Tracking: SPF record lookup timeout,Tracking issue for https://gitlab.com/gitlab-com/business-ops/Business-Operations/issues/93,4.0 -25695650,2019-10-07 17:37:07.306,Fix terraform CI permissions,"While attempting to apply https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1095, the job returned the following errors: - -```hcl -Error: Error creating Disk: googleapi: Error 403: Required 'compute.disks.create' permission for 'projects/gitlab-production/zones/us-east1-b/disks/api-21-sv-gprd-log' -More details: -Reason: forbidden, Message: Required 'compute.disks.create' permission for 'projects/gitlab-production/zones/us-east1-b/disks/api-21-sv-gprd-log' -Reason: forbidden, Message: Required 'compute.disks.setLabels' permission for 'projects/gitlab-production/zones/us-east1-b/disks/api-21-sv-gprd-log' - - - on .terraform/modules/api/instance.tf line 152, in resource ""google_compute_disk"" ""log_disk"": - 152: resource ""google_compute_disk"" ""log_disk"" { - - - -Error: Error creating Disk: googleapi: Error 403: Required 'compute.disks.create' permission for 'projects/gitlab-production/zones/us-east1-c/disks/api-22-sv-gprd-log' -More details: -Reason: forbidden, Message: Required 'compute.disks.create' permission for 'projects/gitlab-production/zones/us-east1-c/disks/api-22-sv-gprd-log' -Reason: forbidden, Message: Required 'compute.disks.setLabels' permission for 'projects/gitlab-production/zones/us-east1-c/disks/api-22-sv-gprd-log' - - - on .terraform/modules/api/instance.tf line 152, in resource ""google_compute_disk"" ""log_disk"": - 152: resource ""google_compute_disk"" ""log_disk"" { -``` - -While I manually applied this change to resolve the error in the near term, we need to update the permissions assigned to the `terraform-ci` service account, to assign the `compute.disks.create` permission as noted for the long-term fix. - -/cc @gitlab-com/gl-infra/secure-and-defend",1.0 -25607220,2019-10-03 22:14:16.044,Plan to synthesize a system halt on staging storage node `file-02-stor-gstg.c.gitlab-staging-1.internal` during a project migration,"The `file-33-stor-gprd` node usage has exceeded our SLO target. However, it continues to experience unexpected system halts on a nearly daily basis. 
- -In order to determine the level of severity of and to otherwise understand the consequences of a system halt while in the midst of the procedure to copy a file from one storage node to another, it is necessary to design the replication of such a scenario in our staging environment.",4.0 -25603289,2019-10-03 21:23:54.418,Investigate Exporting Stackdriver Metrics for kernel panics and reboots to Kibana or Prometheus,"## Problem Summary - -#7872 is currently being used as a comment collector for tracking reboots after manually parsing through the StackDriver logs. Rather than checking this daily, let's automate the collection of data points by either counting the logs in Kibana or using an exporter to aggregate in Prometheus. I have no strong preference for Kibana or Prometheus, but depending on the log format I expect one may prove to be easier to accomplish than the other. If that's true, we should choose the path of least effort. - -### Definition of Done - -This issue is primarily focused with determining how this log forwarding or metrics aggregation can be achieved, if at all. The work to develop the query and write a runbook pointing to the dashboard is in scope, but not required to close this issue. If writing the runbook and creating the dashboard is achievable with minimal effort, it can be tracked her. Otherwise, a new issue should be opened and linked here.",2.0 -25593292,2019-10-03 16:10:23.100,"Document Patroni, PGBouncer, and Consul Design and Troubleshooting in Runbooks","Discovered #8050 after opening this ticket. They're now linked. -
-Per https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5844#note_223351269, we need to update our [Runbooks](https://gitlab.com/gitlab-com/runbooks) to include more information about how Patroni relies on Consul for health checks and leader election, and specific steps to diagnose and troubleshoot issues. - -A good bit of that information is already captured in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7790. And will change after https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8050 is completed. - -Below are more ~Discussion and ~Investigation issues @msmiley has authored that may or may not be valuable to add to the `howto` or `troubleshooting` sections in the Runbooks project repository. - -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7440 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7735 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7813 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7551",3.0 -25591865,2019-10-03 15:33:41.830,Move stackdriver_exporter to Kubernetes,"We currently run the stackdriver exporter for each project from a VM. - -This makes sharding the exporter by stackdriver type prefix more difficult. - -Moving this to Kubernetes would greatly simplify the sharding, and reduce resource waste.",3.0 -25563008,2019-10-03 01:07:58.598,Reduce Patroni sensitivity to transient Consul SerfCheck failures,"**Goal:** - -Tune or replace the Consul SerfCheck, so that the Patroni leader does not lose its cluster_lock before its TTL expires. The SerfCheck currently detects when clients cannot reach a Patroni node. Disabling SerfCheck would mean Patroni would no longer know to failover if its clients cannot reach it. However, on an unreliable network, transient SerfCheck failures have caused unwanted Patroni failovers. - -The overall goal is to improve availability of the writable Postgres instance by avoiding unnecessary Patroni failovers during very brief network disruptions but still allowing Patroni failovers during network disruptions lasting more than a modest timeout (e.g. 30-60 seconds). Disabling SerfCheck is implicitly favoring one failover mode over another; SerfCheck works well on a reliable network, but it causes failovers a little too aggressively on an unreliable network. - -**Background:** - -In recent months, most of the Patroni failover events have been triggered by brief network connectivity disruptions. When the failover itself takes significantly longer to complete than the health check failure takes to resolve, then availability would have been higher if we had waited a little longer before failover. - -Root cause analysis has identified [3 known ways that Patroni failovers have been triggered by intermittent network disruptions](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7790#note_215232905). The current issue aims to mitigate Scenario C from the linked notes, copied below for convenience: - -> **Scenario C:** -> -> *Cause:* Serf-LAN health check messages (UDP port 8301) are dropped in one or both directions on the network path to Patroni leader from *any other Consul agent*. **And** the Consul agent on the Patroni leader is too slow in refuting that suspicion. -> -> *Effect:* The Consul agent on a non-Patroni host declares suspicion that Patroni leader has failed. Patroni leader has a limited window to refute this suspicion, which it can learn about via gossip with other Consul agents. 
If Patroni leader does not promptly refute this suspicion, the Consul server invalidates the Patroni leader's `cluster_lock` (even before its `ttl` expires), leaving the Patroni cluster leaderless. The Patroni replicas detect this and begin the Patroni failover procedure. -> -> **Remedies:** -> -> ... -> -> Tuning or replacing the Serf check as a dependency of the Patroni cluster_lock ([mentioned here](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7790#note_213248381) but not yet ticketed) would reduce or eliminate the chances of Scenario C. If the `serfCheck` is replaced, we must be careful in designing the new health check's failure modes. (For example, we must ensure that planned maintenance does not implicitly break the new health check, because doing so would trigger a Patroni failover -- the very thing we want to avoid when unnecessary.) - -**Caveat:** - -Since completing the above analysis, we have learned more about the GCP networking infrastructure upon which our service stack currently runs. We still believe Consul was correctly detecting brief network connectivity outages. - -We do not have long-term data, but it's worth noting that the frequency of network disruptions as detected by Consul SerfCheck appears to be lower in the last few weeks (see below) than it was [in early September](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7790#note_214470872). (Note that some of the SerfCheck failures shown below are from reboots, many of which were unplanned kernel panics, an unrelated but also interesting issue [being tracked here](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7872).) - -``` -msmiley@web-35-sv-gprd.c.gitlab-production.internal:~$ ( ls -1tr /var/log/syslog*gz | xargs -r sudo zcat ; sudo cat /var/log/syslog{.1,} ) | egrep 'consul.[0-9]*.:.*2019/' | grep 'EventMemberFailed' | tee /tmp/results.out | wc -l -20 -``` - -Consequently, Patroni has not triggered failovers recently. If that trend persists, then this issue for tuning or replacing SerfCheck is unnecessary. But we have no reason to believe that GCP network has become more reliable in the last few weeks.",5.0 -25551894,2019-10-02 16:36:53.386,Enable alerting from CI env,The ci environment doesn't get any alert rules deployed on prometheus nor is it wired up to an alertmanager. We should change that to be able to alert on cloud NAT errors in CI (https://gitlab.com/gitlab-com/runbooks/merge_requests/1498).,3.0 -25550100,2019-10-02 15:44:54.795,gitlab-iptables appends new rules after final drop rule,"When adding new sources to an existing firewall rule attribute in the form of - -``` -""firewall"": { - ""rules"": [ - { - ""Prometheus Server - Prometheus server internal metrics access"": { - ""port"": ""1234"", - ""protocol"": ""tcp"", - ""source"": ""1.2.3.4,5.6.7.8"" - } - }, - ... -``` - -then the `gitlab-iptables` cookbook is appending iptables rules after the existing rules including the `-j DROP` match-all target which is stopping the evaluation of newly appended rules. To fix it, the `/etc/iptables.d/` directory probably needs to be cleaned from existing rules before changes are applied.",3.0 -25512267,2019-10-01 16:00:37.304,Provision infrastructure for new chef server(s) in gitlab-ops project,"In preparation for the [migration to a new, upgraded Chef server](#6128), we need to build the infrastructure on which we will deploy Chef server. 
~~For resiliency purposes, we should implement a [HA architecture](https://docs.chef.io/install_server_ha.html)~~ - -In light of the fact that we are focusing all new services to run in Kubernetes (barring exceptional circumstances), and actively working to migrate existing services away from GCE VMs, expending effort to build/manage a more complex configuration likely isn't worthwhile. Especially considering that historically we have experienced few (if any) scaling issues with a single Chef server node, as it stands. - -1. [x] Create/deploy terraform modules and resources for chef infrastructure (load-balancer, ~~frontend instance group~~, backend instance group for self-healing configuration) -1. [ ] ~~Update monitoring for new Chef infrastructure (may make sense to split this into a separate issue)~~ -1. [ ] ~~Update documentation in handbook (network/architecture diagrams) and runbooks to reflect the changes~~",5.0 -25512062,2019-10-01 15:55:02.792,Identify updates required to chef-repo pipelines and related workflows,"While working through the [Chef server migration and upgrade](#6128), we need to consider the impact to existing pipelines and related workflows around `chef-repo`. Specifically, do we need to make any syntactic changes to the scripts/jobs for the cookbook uploader, cookbook version bumps, role/environment pushes, and/or how we manage data bags and encrypted vault items. In addition, we need to ensure that during the course of the migration, while we're testing the new server and updated infrastructure, we push net-new changes to _both_ Chef servers, so that they stay up to date until we perform the final migration and cutover. - -1. [ ] Review `chef-repo` pipeline for any syntax changes required by Chef server (latest as of issue creation is `13.0.17`) -1. [ ] Review runbooks and hoot documentation for references to workflows that may be impacted by the upgrade (knife commands, data bag/vault management scripts, etc.) -1. [ ] Update `chef-repo` pipelines to keep both chef servers updated in parallel while we test the new infrastructure",3.0 -25479123,2019-10-01 00:24:02.656,Remove pgio from staging patroni servers,"Exhibit 1: -* https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8026 -* https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7932 - -Exhibit 2: -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6242#note_152545100 - -We're not using it, and it is causing problems. It should be removed.",1.0 -25478058,2019-09-30 22:59:24.458,"Document, plan, and test migration process for chef server","As a part of #6128 we need to develop the process to migrate from our current chef server in Digital Ocean to a new, upgraded infrastructure in GCP. The initial assumption is that this will center around a backup and restore process, though this may need to be adjusted as we dig into the details.",3.0 -25437232,2019-09-30 08:19:26.853,"Look into removing non-needed, and re-add needed, public IPs in gprd","After https://gitlab.com/gitlab-com/gl-infra/production/issues/1167, only the blackbox exporter and the console have public IPs in gprd. - -Removing this IP from console would break kubernetes access, and removing it from the blackbox exporter broke monitoring for an unknown reason (possibly whitelisting). - -Apparently these problems don't appear in gstg, but it's worth checking this out in case we're simply not noticing them. 
- -If our own whitelisting, as opposed to customer whitelisting, is the reason we need these public IPs, then we should use static IPs and not exit via the Cloud NAT. This decouples Cloud NAT IP pool scaling from our infrastructure (isolates customer concnerns from our own). - -One of the prometheus instances scrapes the CI runner managers and is federated with CI's prometheus. Firewall IP whitelisting in CI broke when we removed prometheus' public IPs, and was fixed by whitelisting the NAT IPs in CI. We should use a static IP for the prometheus instances to decouple this whitelisting from scaling the Cloud NAT IP pool.",3.0 -25427390,2019-09-29 22:27:48.482,Chef not running on dev.gitlab.org,"Plausibly broken related to the chef 12 vs chef 14 upgrade: - -``` -Sep 29 22:05:17 dev.gitlab.org chef-client[20093]: [2019-09-29T22:05:17+00:00] ERROR: cannot load such file -- highline -Sep 29 22:05:17 dev.gitlab.org chef-client[20093]: [2019-09-29T22:05:17+00:00] ERROR: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1) -cmiskell@dev:~$ ^C -cmiskell@dev:~$ dpkg -l |grep chef -ii chef 12.22.5-1 amd64 The full stack of chef -``` - -@alejandro Any input into what the right way forward is? I'd hate to break it more than it already is.",1.0 -25386425,2019-09-27 14:40:30.493,Issue for investigation of gitlab_com_db_sync errors for data team,"This issue will be for investigation of data sync issues from our replica to snowflake for the data team for gitlab.com syncs. - -It is related to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7141 which covers the syncs for license, version, and customers. - -Facts: -- The data extracts run in Airflow from the gitlab-analysis project in GCP in a scheduled set of K8s jobs. -- https://gitlab.com/gitlab-data has the projects, docker files and config -- Connections are to: `postgresql://analytics:***@10.217.7.101:5432/gitlabhq_production` to connect to the analytics replica - which is GCP production instance postgres-dr-archive-01-db-gprd -- logs of errors are in the slack channel #analytics-pipelines -- We'll add example errors and date's times in comments. - - -cc @kathleentam @tlapiana @jjstark from the data team. I'm trying to separate threads of investigation. This issue will be for gitlab.com sync and we'll keep investigating the sync for customers, license, and version on #7141 . The backends / gcp projects are different and we can better manange priority/ solutions if we split investigation.",1.0 -25347064,2019-09-26 14:25:01.847,Rollout Prometheus 1.13,"Prometheus 1.13.0 comes with a major improvement to Thanos sidecar data streaming. This greatly reduces the overhead of remote-read API access. - -Current status: Release Candidate is available.",2.0 -25346668,2019-09-26 14:15:10.577,Validate the kubectl wrapper script is working as desired,During a recent maintenance event: https://gitlab.com/gitlab-com/gl-infra/production/issues/1192 the wrapper script failed to notify and ask for confirmation from the end user making changes in production. Utilize this issue to investigate and make the appropriate changes.,1.0 -25332462,2019-09-26 08:28:16.587,The chef secrets bucket is created in 2 places,"it looks like the secrets bucket is created in 2 places: - -1. https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project/blob/master/main.tf#L28, which is called from env-projects (in which there is an entry for CI) -1. 
https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/storage-buckets/blob/master/main.tf#L59 - -Since env-projects and each gitlab env (e.g. gprd) are different TF envs, with separate tfstate, they can both track this bucket. @craig, when you created env-projects, did you have to import this bucket? If so we should decide whether or not this bucket belongs to the meta-env (env-projects) or a gitlab-google-project-tf-env (e.g. gprd, gstg). - -Since the secrets bucket must be pre-populated for any chef'ed VMs provisioned with terraform to work, I vote env-projects. @craig feel free to reassign this, but I just wanted to make sure we are on the same page before I make any change. - -cc @hphilipps (who I was chatting about this with)",1.0 -25316711,2019-09-25 18:55:23.654,Plan and document roles for Vault cluster,"In the process of initial deployment for the Vault cluster, we need to consider how to organize the key/value store in Vault, and how to assign access across that organization, as well as the mechanics/process for approving, assigning, and managing access requests. This should be done in partnership with the security team, as they will likely have ownership over the access management process.",5.0 -25316624,2019-09-25 18:52:09.057,Plan and document required backends for Vault cluster,"Once we have deployed the production vault cluster, we will need to determine the list of backends to support; this should minimally include Okta, but likely also GSuite and GitLab. This issue is intended to conduct the initial research into available backends, document the overall architecture, and plan (if/as needed) for subsequent implantation issues.",5.0 -25316457,2019-09-25 18:44:59.068,Automate and document unsealing process for Vault,"Once we have a production Vault cluster we will need to provide documentation and automation around the unsealing process - -1. Document the basic process to seal/unseal the cluster -1. Add keys for new engineers -1. Remove keys after engineers change roles / leave the team -1. Rotate keys for existing engineers -1. Monitor/report on key expiration, last rotation, etc.",3.0 -25279862,2019-09-24 23:15:21.826,Fix thanos object store errors across envs,"ThanosCompactBucketOperationsFailed is firing regularly for thanos-store-0{1,2}-inf-gprd. - -Graphs show: -1. High consistent failure rates in testbed + pre environments (not alerting due to their environment) -1. Scattering of failures in ops + gstg + dr -1. Regular failures of the 'get' operation in gprd, leading to these alerts. - -This needs cleaning up for testbed + pre (to eliminate noise), and either fixing the errors, or increasing the threshold if they are normal and recoverable.",1.0 -25268576,2019-09-24 17:15:58.755,SLO for background processing jobs with a target to reach an upper-bound of 1 min,"Currently, for Sidekiq we have an SLA status. You can see a 30 day running average of the status available at https://dashboards.gitlab.net/d/general-slas/general-slas. - -![Screen_Shot_2019-09-24_at_12.08.10](/uploads/573f434d38a8d01e28788d4e067709f3/Screen_Shot_2019-09-24_at_12.08.10.png) - -@andrewn has done a considerable amount of work setting up alerts based on SLO violation. If we don't have them already setup for background processing jobs, we should. A considerable amount of work has been done to separate and split jobs, priorities, and queues appropriately, which hopefully allows us to place monitoring granularly for each specific category of job. 
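- -As a rough sketch of what a per-queue alert on such an SLO could look like (the metric and label names below are assumptions and would need to be checked against what our Sidekiq exporter actually exposes), a Prometheus rule fragment might be: - -``` -- alert: SidekiqQueueingSLOViolation -  # hypothetical per-queue SLO: p95 time spent waiting in the queue stays under 60s -  expr: histogram_quantile(0.95, sum(rate(sidekiq_jobs_queue_duration_seconds_bucket[5m])) by (le, queue)) > 60 -  for: 5m -  labels: -    severity: s4 -  annotations: -    title: Sidekiq queue {{ $labels.queue }} is exceeding its queueing-time SLO -```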
- -https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1 shows the metrics in detail. It's been raised that certain–maybe even all?–background processing jobs should adhere to a 1 min latency average. Though, at first glance, this may not be possible for all jobs–project exports take substantially longer than others.",2.0 -25251099,2019-09-24 15:29:52.257,first practice incident - rough agenda,"Basic summary - -This is meant to be a simple problem to solve and a table top scenario. -First, we are testing incident response and basic group of host interactions. - -Open questions: -1. Could/Should we use staging to do this? -1. Should we make a project/repo (like readiness) to make md files for practice scenarios? - -Scenario: -service stop haproxy on all LB - current status no haproxy running (front door is closed) - -Incident Start: -EOC - execute command(s) to stop LB in gstg - -Start of incident handling -1. EOC use `/incident declare` in `#incident management` - -Validate: -1. [x] EOC page -2. [x] IMOC page -3. [x] CMOC page -4. [x] Creation of incident gdoc -5. [x] Creation of incident issue? -6. [x] EOC, IMOC, CMOC join incident zoom - -Once manager and cmoc join -1. IMOC/CMOC - talk through any comments to understand the issue -1. cmoc log into status.io and talk through what they would do to create incident [Link to tests status page](https://app.status.io/statuspage/5bedc0c2a394fc04c9ccc974) -1. cmoc - talk through your update to status.io - -Resolution actions -1. EOC - talk through actions you would do to get load balancers restarted -1. EOC/ Manager - talk about how to escalate to engineer on call -1. Verify incident is resolved -1. cmoc - confirm resolution and talk through status.io update -1. Follow up with action items -1. Create Incident Review issue -1. How to escalate action items to infradev. - -cc @marin and @AnthonySandoval for feedback",5.0 -25241086,2019-09-24 13:19:11.731,Run rake task to update usage data entries on version.gitlab.com,"We need to execute the following rake task on the production instance of `version.gitlab.com`: -`rake usage_data:fix_usage_data_stats` - -The goal of this rake task is to change any `usage_data` record that has non integers in the `stats` `json` column to be compliant. i.e. having only integers for values in the `stats` column. - -This rake task can be ran 2 ways: -1. Against all `usage_data` records that have a non empty `stats` column at once (in batches of 1000): -* `rake usage_data:fix_usage_data_stats` -2. Against predefined `id` ranges (to lessen the individual execution times per batch): -* `rake usage_data:fix_usage_data_stats[1,1000]` - where `1` is the starting `id` in `usage_data` table, and `1000` is the ending `id` in the `usage_data` table. In this scenario, this would need ran in chunks up until the last `id` in the `usage_data` table, which is printed out in each run if arguments are passed as above. - -Decision on which one to run is up to the infrastructure team to decide.",1.0 -25220840,2019-09-24 04:20:03.645,Investigate making haproxy metrics more robust,"https://dashboards.gitlab.net/d/AkOdlrSmk/imported-haproxy-stats uses haproxy_backend_current_sessions and haproxy_frontend_current_sessions which are point-in-time sampled values. This leads to some anomalous and misleading graphs when the sampling misses active sessions, e.g. we've seen graphs drop to 0 for 429_rate_limit, when there are provably requests being rate-limited. 
The problem is probably worse for rate-limited sessions, which are dropped immediately, than for normal HTTP requests: even slightly longer-running requests are much more likely to be caught by the point-in-time sampling. - -The anomalous graphs have caused us to go down a rabbit hole of investigations at least once, and we need to do better. - -Options include other haproxy exporter metrics, existing mtail metrics, or at worst, enhancing mtail to capture more data.",2.0 -25122694,2019-09-23 15:41:01.506,Stabilize Cloud NAT in CI,"See https://gitlab.com/gitlab-org/gitlab/issues/32433#note_220111066 for original context. - -Cloud NAT was rolled out to private runner machines in CI on 2019-09-20. There were then several job failures attributable to NAT failures: connection failures coinciding with error bursts at the NAT. - -Since then several increases in NAT port:VM ratio have been rolled out in order to mitigate these failures (while ensuring there are still enough IPs to support the private runner fleet). There have been no reports of job failures attributable to NAT since that ratio was set to 256 (it had originally been 64, the Cloud NAT default). - -There have still been occasional periods of elevated errors, but it isn't clear whether or not these result in higher-level failures / slowdowns or are at an acceptable level to be fixed by higher-layer retries (TCP, or application layer).",4.0 -25119636,2019-09-23 14:39:40.622,Add smokeping_prober to gitlab-exporters,Add a recipe to [gitlab-exporters](https://gitlab.com/gitlab-cookbooks/gitlab-exporters) to deploy the [smokeping_prober](https://github.com/SuperQ/smokeping_prober).,2.0 -25071265,2019-09-21 04:09:11.390,dashboards.gitlab.com down,"I got an alert for dashboards.gitlab.com being down. Upon investigation I can't log into it. Checked the console and lo, `thanos` had OOMed. Below is the log. 
- -``` -[7387773.088506] Out of memory: Kill process 6189 (thanos) score 923 or sacrifice child -[7387773.096489] Killed process 6189 (thanos) total-vm:14551252kB, anon-rss:14190680kB, file-rss:1328kB, shmem-rss:0kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088171] thanos invoked oom-killer: gfp_mask=0x14280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088174] thanos cpuset=/ mems_allowed=0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088183] CPU: 2 PID: 6194 Comm: thanos Not tainted 4.15.0-1034-gcp #36~16.04.1-Ubuntu -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088184] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088185] Call Trace: -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088198] dump_stack+0x85/0xcb -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088203] dump_header+0x77/0x285 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088206] oom_kill_process+0x22e/0x450 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088208] out_of_memory+0x11d/0x4c0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088211] __alloc_pages_slowpath+0xda2/0xe90 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088214] __alloc_pages_nodemask+0x265/0x280 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088221] alloc_pages_vma+0x88/0x1e0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088226] __handle_mm_fault+0xe26/0x11b0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088228] handle_mm_fault+0xcc/0x1c0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088234] __do_page_fault+0x265/0x500 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088236] do_page_fault+0x2e/0xf0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088242] ? 
page_fault+0x2f/0x50 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088244] page_fault+0x45/0x50 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088248] RIP: 0033:0x45dab3 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088249] RSP: 002b:000000c000095f08 EFLAGS: 00010202 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088252] RAX: 0000000000000000 RBX: 00000000000aa000 RCX: 000000c21d158000 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088253] RDX: 0000000003214d20 RSI: 0000000000000010 RDI: 000000c21d16a000 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088254] RBP: 000000c000095f50 R08: 00007f3ed4d14638 R09: 000000000000005e -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088255] R10: 0000000000000909 R11: 000000000000005d R12: 0000000000001901 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088257] R13: 000000000000000a R14: 0000000000000009 R15: 000000000000000e -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088272] Mem-Info: -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088278] active_anon:3719380 inactive_anon:14957 isolated_anon:0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088278] active_file:1686 inactive_file:1810 isolated_file:12 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088278] unevictable:913 dirty:0 writeback:0 unstable:0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088278] slab_reclaimable:13754 slab_unreclaimable:19825 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088278] mapped:4934 shmem:37945 pagetables:9802 bounce:0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088278] free:32774 free_pcp:0 free_cma:0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088283] Node 0 active_anon:14877520kB inactive_anon:59828kB active_file:6744kB inactive_file:7240kB unevictable:3652kB isolated(anon):0kB isolated(file):48kB mapped:19736kB dirty:0kB writeback:0kB shmem:151780kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 12288kB writeback_tmp:0kB unstable:0kB all_unreclaimable? 
no -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088285] Node 0 DMA free:15908kB min:68kB low:84kB high:100kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088290] lowmem_reserve[]: 0 2980 14999 14999 14999 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088294] Node 0 DMA32 free:61196kB min:13412kB low:16764kB high:20116kB active_anon:2976080kB inactive_anon:0kB active_file:1784kB inactive_file:1036kB unevictable:0kB writepending:0kB present:3129332kB managed:3063764kB mlocked:0kB kernel_stack:48kB pagetables:4048kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088298] lowmem_reserve[]: 0 0 12018 12018 12018 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088308] Node 0 Normal free:53992kB min:54096kB low:67620kB high:81144kB active_anon:11900208kB inactive_anon:59828kB active_file:6276kB inactive_file:6744kB unevictable:3652kB writepending:0kB present:12582912kB managed:12313748kB mlocked:3652kB kernel_stack:6400kB pagetables:35160kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088312] lowmem_reserve[]: 0 0 0 0 0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088315] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15908kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088328] Node 0 DMA32: 184*4kB (UME) 158*8kB (UME) 179*16kB (UME) 251*32kB (UME) 120*64kB (UME) 133*128kB (UME) 63*256kB (UME) 9*512kB (U) 1*1024kB (U) 1*2048kB (H) 0*4096kB = 61408kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088342] Node 0 Normal: 603*4kB (UME) 532*8kB (UME) 906*16kB (UME) 295*32kB (ME) 246*64kB (ME) 57*128kB (M) 1*256kB (M) 0*512kB 1*1024kB (M) 0*2048kB 0*4096kB = 54924kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088356] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088357] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088358] 40487 total pagecache pages -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088360] 0 pages in swap cache -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088361] Swap cache stats: add 0, delete 0, find 0/0 -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088362] Free swap = 0kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088363] Total swap = 0kB -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088363] 3932059 pages RAM -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088364] 0 pages HighMem/MovableOnly -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088365] 83704 pages reserved -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088365] 0 pages cma reserved -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088366] 0 pages hwpoisoned -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088367] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088373] [ 447] 0 
447 25742 47 94208 0 0 lvmetad -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088375] [ 463] 0 463 3051 1090 61440 0 0 haveged -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088378] [ 465] 0 465 15257 4602 159744 0 0 systemd-journal -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088380] [ 475] 0 475 10775 525 118784 0 -1000 systemd-udevd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088382] [ 1212] 0 1212 4031 566 73728 0 0 dhclient -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088384] [ 1392] 0 1392 1305 29 61440 0 0 iscsid -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088386] [ 1393] 0 1393 1430 877 65536 0 -17 iscsid -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088388] [ 1403] 0 1403 6932 454 102400 0 0 cron -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088390] [ 1413] 0 1413 95627 312 114688 0 0 lxcfs -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088392] [ 1438] 0 1438 6511 362 94208 0 0 atd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088394] [ 1449] 0 1449 7154 293 98304 0 0 systemd-logind -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088396] [ 1457] 0 1457 69182 813 172032 0 0 accounts-daemon -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088398] [ 1464] 0 1464 1098 140 53248 0 0 runsvdir -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088400] [ 1472] 0 1472 1099 316 57344 0 0 acpid -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088402] [ 1480] 107 1480 10757 516 135168 0 -900 dbus-daemon -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088404] [ 1492] 0 1492 1060 271 53248 0 0 runsv -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088406] [ 1494] 0 1494 1060 69 53248 0 0 runsv -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088408] [ 1495] 0 1495 1060 235 53248 0 0 runsv -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088410] [ 1496] 0 1496 1060 217 49152 0 0 runsv -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088412] [ 1508] 0 1508 1096 29 53248 0 0 svlogd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088414] [ 1511] 0 1511 1096 17 57344 0 0 svlogd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088416] [ 1513] 999 1513 29142 6731 143360 0 0 node_exporter -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088418] [ 1521] 0 1521 1096 16 53248 0 0 svlogd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088420] [ 1526] 0 1526 1096 270 57344 0 0 svlogd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088422] [ 1759] 0 1759 43334 2459 233472 0 0 unattended-upgr -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088424] [ 1760] 0 1760 69272 269 184320 0 0 polkitd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088426] [ 1787] 0 1787 3343 84 69632 0 0 mdadm -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088428] [ 1895] 112 1895 10067 290 110592 0 0 ntpd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088430] [ 1906] 0 1906 3618 392 69632 0 0 agetty -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088432] [ 1909] 0 1909 3664 327 69632 0 0 agetty -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088434] [ 1968] 0 1968 8134 578 98304 0 0 nginx -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088436] [ 2048] 0 2048 16861 3964 180224 0 0 google_network_ -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: 
[7387773.088438] [ 2049] 0 2049 16808 3710 172032 0 0 google_clock_sk -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088441] [ 2167] 0 2167 16378 350 167936 0 -1000 sshd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088442] [ 2173] 0 2173 16352 156 110592 0 0 master -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088445] [ 2177] 113 2177 17308 777 126976 0 0 qmgr -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088447] [ 6399] 999 6399 337760 27793 2686976 0 0 trickster -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088449] [ 8803] 998 8803 45310 5678 249856 0 0 consul -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088451] [23096] 1002 23096 11322 204 118784 0 0 systemd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088453] [23097] 1002 23097 52180 500 163840 0 0 (sd-pam) -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088455] [23217] 1002 23217 6479 525 94208 0 0 screen -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088457] [23218] 1002 23218 4995 513 77824 0 0 bash -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088459] [23223] 0 23223 12855 527 147456 0 0 sudo -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088461] [23224] 0 23224 12751 494 143360 0 0 su -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088463] [23225] 0 23225 5361 920 86016 0 0 bash -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088465] [29388] 0 29388 43355 10443 364544 0 0 fluentd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088467] [29393] 0 29393 193329 38773 1748992 0 0 ruby -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088469] [ 6189] 999 6189 3637813 3548002 29052928 0 0 thanos -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088471] [10135] 0 10135 41314 12584 360448 0 0 chef-client -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088473] [ 831] 115 831 15848 119 106496 0 0 memcached -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088475] [ 8839] 116 8839 650365 16297 761856 0 0 grafana-server -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088477] [16643] 104 16643 64098 243 135168 0 0 rsyslogd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088479] [12654] 113 12654 16869 299 126976 0 0 pickup -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088482] [21777] 33 21777 8314 406 98304 0 0 nginx -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088484] [21778] 33 21778 8313 496 98304 0 0 nginx -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088485] [21779] 33 21779 8316 549 98304 0 0 nginx -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088487] [21780] 33 21780 8314 516 98304 0 0 nginx -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088490] [22344] 0 22344 1094 161 57344 0 0 sleep -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088492] [22348] 0 22348 1094 156 53248 0 0 sleep -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088494] [22350] 0 22350 12235 402 143360 0 0 cron -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088496] [22351] 0 22351 1126 174 53248 0 0 sh -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088498] [22352] 0 22352 35950 2728 184320 0 0 ruby -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088500] [22353] 0 22353 42667 14441 368640 0 0 chef-client -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088502] [22356] 0 22356 16378 
608 172032 0 0 sshd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088504] [22361] 110 22361 16378 433 163840 0 0 sshd -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.088506] Out of memory: Kill process 6189 (thanos) score 923 or sacrifice child -Sep 21 04:05:04 dashboards-com-01-inf-ops kernel: [7387773.096489] Killed process 6189 (thanos) total-vm:14551252kB, anon-rss:14190680kB, file-rss:1328kB, shmem-rss:0kB -Sep 21 04:05:05 dashboards-com-01-inf-ops kernel: [7387774.148113] oom_reaper: reaped process 6189 (thanos), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB -```",1.0 -25031726,2019-09-20 10:34:26.837,Tuning Puma settings on ops.gitlab.net,"There is a small background rate of Puma worker terminations with the default memory limit of 650MiB in combination with 4 worker threads. - -![puma-memory-max-2019-09-20](/uploads/2eed4c9667664bd5c5d14e54a126f534/puma-memory-max-2019-09-20.png) - -![puma-worker-terminations-2019-09-20](/uploads/0fcc69b87dd166cb09b566f5f3fe4c40/puma-worker-terminations-2019-09-20.png)",2.0 -24993969,2019-09-19 14:57:09.275,View stackdriver metrics from the CI project,"In order to implement https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7728, we need to be able to alert on stackdriver metrics in the CI project. This will involve: - -- [x] rolling out a sd-exporter there -- [x] Ensuring that exporter is scraped by prometheus -- [x] Being able to view those metrics in thanos -- [x] Being able to send alerts based on these metrics to Alertmanager. - -cc @hphilipps",3.0 -24938011,2019-09-18 11:54:31.020,Revert any manual work created from db user creation gstg-deploy to bypass statement_timeout,"Yesterday the staging deploy was failing due to a statement timeout. See the MR here: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1849 - -This is being reverted. See the MR for additional context. - -Any work that was done to create this dedicated user now needs to be removed as well. This issue can be closed when that user is removed from the system.",1.0 -24921423,2019-09-17 22:18:52.265,Track the cost of enabling serial port logging into Stackdriver,"Enabled in https://gitlab.com/gitlab-com/gl-infra/production/issues/1156, this will cause additional (duplicated) logging into stackdriver, and we would like to understand the cost implications of this. We can estimate it (and have, with an upper limit of around ~3GB/day in prod), but it will likely be easier to simply measure it after the fact to be really sure.",1.0 -24920914,2019-09-17 21:46:34.621,Move Docker Build stage from GitLab Services to ci-images repository,"There is a docker file which is built at the beginning of each pipeline in the GitLab Services project. This image rarely changes, and should be moved to our [CI Images Repository](https://gitlab.com/gitlab-com/gl-infra/ci-images/). It is currently slowing down work by adding a lot of time to every pipeline run (even no-op), for no good reason. - -The docker file is in `docker files/environments_client/Dockerfile` and is built in the [build_client_image](https://ops.gitlab.net/gitlab-com/services-base/blob/master/.gitlab-ci.yml#L19) stage. - -This image is based on `google/cloud-sdk:alpine` and includes: - -- `git` -- `openssh` -- `curl` -- `jq` -- `kubectl` -- `Helm` - -The version of the Docker image should reflect the version of `Helm` that is installed. 
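- -A minimal sketch of what the relocated Dockerfile could look like (the Helm version and exact package set below are assumptions; the current `docker files/environments_client/Dockerfile` remains the source of truth): - -``` -FROM google/cloud-sdk:alpine - -# Assumed Helm version; per the note above it would double as the image tag -ARG HELM_VERSION=2.14.3 - -RUN apk add --no-cache git openssh curl jq \ -    && gcloud components install kubectl --quiet \ -    && curl -fsSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz | tar -xz -C /tmp \ -    && mv /tmp/linux-amd64/helm /usr/local/bin/helm -``` - -The same HELM_VERSION build argument could then be reused by the ci-images pipeline to tag the image.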
- -Once the image is built correctly, the `.gitlab-ci.yml` file needs to be updated with the correct [DOCKER_IMAGE_TAG](https://ops.gitlab.net/gitlab-com/services-base/blob/master/.gitlab-ci.yml#L4)",2.0 -24897306,2019-09-17 13:34:45.763,Gather input from Engineers regarding the content of a Grafana 101 training video,"Objective: To identify content Engineers would like included in a Grafana 101 Training Video - -Goals: GitLab Engineers need a comprehensive tutorial video to empower them to use Grafana to ensure they have the background to use this robust analytical and visualization tool. The goal for this issue to identify the content that will be included in this video.",1.0 -24866551,2019-09-17 01:47:09.931,Terraform automation blog post,"It feels like we've hit a milestone in our Terraform configs, so it seems like a good time to do a write-up. - -Initial ideas/brainstorming - -- Using GitLab CI to run terraform - - Leverage GitLab Environments feature - - Clean up drift, upgrade to 0.12 -- Greenfield (ephemeral-environments, group-projects, services) vs legacy (gitlab-com-infrastructure) - - Are we using AutoDevOps for any of these? -- Mono-repo to versioned modules, tagging pipeline -- Cleanup/link to [design doc](https://about.gitlab.com/handbook/engineering/infrastructure/library/terraform-automation/) -- Next steps / future plans - - [GitLab Flow](https://about.gitlab.com/handbook/engineering/infrastructure/library/git-workflow/) - - [Incorporate Vault](https://about.gitlab.com/handbook/engineering/infrastructure/library/vault/) - -/cc @ansdval for prioritization / scheduling -/cc @devin @craigf @hphilipps @ggillies FYI",3.0 -24854308,2019-09-16 23:35:09.908,Remove Geo config in staging,"The staging environment is set up as a Geo primary. This is probably left over from previous work. It is causing unnecessary jobs, and likely also a lot of unnecessary database records. - -I'm going to go ahead and remove the primary config. Since there is no secondary, this shouldn't hurt anything. If someone needs it, they can set it up again. - -I'm creating this issue so that there is a record of what was done. - -![Screen_Shot_2019-09-16_at_1.33.59_PM](/uploads/6cf442f3dd08b1e292b1ce482ab5ed37/Screen_Shot_2019-09-16_at_1.33.59_PM.png)",1.0 -24846543,2019-09-16 17:55:45.472,Group Project Request: gitlab-qa-50k,"# Group Project Request - -- Project / Group Name (<17 characters): `gitlab-qa-50k` -- Project Administrator (email): gyoung@gitlab.com - -### Provide a brief overview of the reason for this project and why it is needed and for how long it will be used. - -This project will contain the environment used to test the 50k reference architecture which will be built and maintained by the Quality Department. - -This project will be used indefinitely - as long as the 50k reference environment stays relevant and supported. - -Quality's issue to create the 50k environment: https://gitlab.com/gitlab-org/quality/performance/issues/66 - -## Security - -### Provide a list of data and the corresponding classification that will be used in this project and how it will be accessed. - -The data used is a sterile copy of GitLab CE. There is no sensitive data. The data is used during the load tests to simulate an appropriately complex project. @grantyoung can you verify that what I've described here is valid and correct? - -## Group Project Access Checklist - -Make sure the following criteria is met and understood by the project administrator. 
- -- [-] If the gitlab.com database is copied, that data has been processed by the [pseudonymization script]( https://gitlab.com/gitlab-com/runbooks/blob/master/howto/pseudonymization-gitlab-db.md). -- [x] Regular security updates are applied to all nodes in the project. -- [x] Unused instances will be removed in a timely manner -- [x] The Project Administrator is responsible for any users or additional administrators that they add to the project -- [x] The Project Administrator is responsible for justifying any cloud spend within the project. -- [x] Group Projects are intended for development, test, or demo work. Everything in these projects is considered temporary. - -## Infrastructure Tasks - -- [x] Create file in https://ops.gitlab.net/gitlab-com/group-projects named `environments/(group name from above).tf` by copying an existing file and changing the Administrator and Group Name variables -- [x] Merge the change to master -- [x] Create a branch from master named `(group name from above)` and push -- [x] Verify that the pipeline completed successfully at https://ops.gitlab.net/gitlab-com/group-projects/pipelines",5.0 -24846514,2019-09-16 17:54:45.813,Group Project Request: gitlab-qa-25k,"# Group Project Request - -- Project / Group Name (<17 characters): `gitlab-qa-25k` -- Project Administrator (email): gyoung@gitlab.com - -### Provide a brief overview of the reason for this project and why it is needed and for how long it will be used. - -This project will contain the environment used to test the 25k reference architecture which will be built and maintained by the Quality Department. - -This project will be used indefinitely - as long as the 25k reference environment stays relevant and supported. - -Quality's issue to create the 25k environment: https://gitlab.com/gitlab-org/quality/performance/issues/57 - -## Security - -### Provide a list of data and the corresponding classification that will be used in this project and how it will be accessed. - -The data used is a sterile copy of GitLab CE. There is no sensitive data. The data is used during the load tests to simulate an appropriately complex project. @grantyoung can you verify that what I've described here is valid and correct? - -## Group Project Access Checklist - -Make sure the following criteria is met and understood by the project administrator. - -- [-] If the gitlab.com database is copied, that data has been processed by the [pseudonymization script]( https://gitlab.com/gitlab-com/runbooks/blob/master/howto/pseudonymization-gitlab-db.md). -- [x] Regular security updates are applied to all nodes in the project. -- [x] Unused instances will be removed in a timely manner -- [x] The Project Administrator is responsible for any users or additional administrators that they add to the project -- [x] The Project Administrator is responsible for justifying any cloud spend within the project. -- [x] Group Projects are intended for development, test, or demo work. Everything in these projects is considered temporary. 
- -## Infrastructure Tasks - -- [x] Create file in https://ops.gitlab.net/gitlab-com/group-projects named `environments/(group name from above).tf` by copying an existing file and changing the Administrator and Group Name variables -- [x] Merge the change to master -- [x] Create a branch from master named `(group name from above)` and push -- [x] Verify that the pipeline completed successfully at https://ops.gitlab.net/gitlab-com/group-projects/pipelines",5.0 -24814664,2019-09-16 07:19:59.410,Take a screenshot about the group hook under gitlab-org,"Following up from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7855#note_217393324 - -@cmiskell I just tried to manually trigger it, but I don't think it's working. Could you please take a screenshot on the hook editing page so I can see if it's properly configured? - -Test was done at: https://gitlab.com/gitlab-org/quality/triage-serverless/issues/11#note_217393880 - -Marking as confidential because it contains tokens.",1.0 -24797351,2019-09-15 12:10:02.762,Fix Thanos compaction,"Due to lack of monitoring, Thanos compaction has been broken for quite some time. This causes a number of problems. - -* Much larger indexing overhead, leading to slow queries. -* Storage overhead due to large number of indexes, un-compacted blocks. -* Missing downsample data. - -All of the above lead to higher query overhead and slower queries.",4.0 -24783977,2019-09-14 10:06:43.651,file-15-stor-gprd rebooted,"file-15-stor-gprd rebooted at 9/14 09:37 UTC. - -Opened this issue to track repos with zeroed files. Currently searching through all repos: - -``` -cd /var/opt/gitlab/git-data/repositories/@hashed -ionice -n 5 find . -regextype sed -regex "".*/objects/.*"" -size 0 > /var/tmp/zerofiles.txt -```",3.0 -24770817,2019-09-13 15:42:06.130,Consider the use of a Docker image to contain tooling for local workstation use of Kubernetes,Not all Engineers have their machines configured the same. We may also have a need to specify certain configurations to limit/prevent access. Could a centrally managed docker image assist us with this situation?,3.0 -24745540,2019-09-13 03:08:42.600,GCP VM spontaneous reboots,"https://portal.rackspace.com/1173105/tickets/details/190911-ord-0001152 - -Raised because we couldn't see any reason for the reboots in our logs or other infrastructure logs in Stackdriver, and are vaguely hopeful GCP might be able to provide some sort of explanation.",2.0 -24740495,2019-09-12 20:19:05.937,Investigate creating a kubectl wrapper script with a production warning,"Decide if it makes sense to create a kubectl wrapper script to include a check of the current context and the command line. - -- If the current context is production **AND** -- If the current command is not **get** -- Then confirm whether to continue modifying production (y/N) - -We will have to discuss how to make sure all users are using the wrapper script and not calling `kubectl` directly",3.0 -24740391,2019-09-12 20:12:38.042,Investigate the ability to utilize kubectl from a bastion or proxy node,At this moment we allow Engineers to access our production clusters from their local workstation. Consider blocking this access and forcing Engineers to utilize a proxy or bastion type of connection in order to perform operations against a production cluster.,3.0 -24739441,2019-09-12 19:35:32.751,Set Retention period for cloudtrail bucket,"We enabled AWS CloudTrail organization wide in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7565. 
This means that CloudTrail logs go to an S3 bucket in our primary org account, and that bucket will never clean itself up. - -How long do we want to keep CloudTrail logs for AWS? I was thinking that 1 year should be sufficient. Alternatively, we can keep them longer and just use a transition to cold storage for them.",1.0 -24738075,2019-09-12 18:36:23.891,Update the contributors.gitlab.com redirect link,"Can you update the contributors.gitlab.com redirect to https://gitlab.biterg.io/app/kibana#/dashboard/3e297c20-622c-11e9-8638-c11f0f1aa3fa? - -This is the ""permanent"" URL and will reflect any changes we make to the dashboard in the future.",1.0 -24736906,2019-09-12 17:53:41.962,RCA: Container Registry Deployment Deleted in the Production Kubernetes Cluster,"## Summary - -An engineer working locally was mistakenly connected to the production cluster. A command intended for the local development cluster was instead sent to the production cluster, which resulted in the deletion of the Kubernetes Deployment object for the Container Registry. This brought down the service endpoint as no Pods were available, and our HAProxy service started to send HTTP 503's, otherwise known as ""The server is currently unavailable"", for any request bound to the Container Registry. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? The Container Registry service was down. -- Who was impacted by this incident? Anything that would have requested an image to/from the Container Registry (customers, CI jobs, etc) -- How did the incident impact customers? This prevented customers from uploading images to and downloading images from the Container Registry -- How many attempts were made to access the impacted service/feature? As seen in the below chart, we sustained a request rate to the Container Registry above 150 requests per second -- How many customers were affected? TODO -- How many customers tried to access the impacted service/feature? TODO - -Include any additional metrics that are of relevance. - -![image](/uploads/c1c59190a24739854bbd406d638072c7/image.png) - -[Source](https://dashboards.gitlab.net/d/oWe9aYxmk/pod-metrics?orgId=1&from=1568307753787&to=1568309759687&var-Deployment=gitlab-registry&var-env=gprd&var-cluster=gprd-gitlab-gke&var-Node=All&var-namespace=monitoring) - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. 
- -![image](/uploads/d527063af64bca1baca2f88e77248327/image.png) - -![image](/uploads/902dc64d520096d8abc9e82478801383/image.png) - -[Source](https://dashboards.gitlab.net/d/AkOdlrSmk/imported-haproxy-stats?orgId=1&from=1568307581081&to=1568309405739) - -Logs from Stackdriver: [console.cloud.google.com/logs](https://console.cloud.google.com/logs/viewer?project=gitlab-production&minLogLevel=0&expandAll=false×tamp=2019-09-12T17%3A24%3A15.319215000Z&customFacets&limitCustomFacetWidth=true&dateRangeStart=2019-09-12T13%3A30%3A42.700Z&dateRangeEnd=2019-09-12T19%3A30%3A42.700Z&interval=PT6H&resource=k8s_cluster%2Flocation%2Fus-east1%2Fcluster_name%2Fgprd-gitlab-gke&scrollTimestamp=2019-09-12T17%3A24%3A15.180382000Z&filters=text%3Agitlab-registry&advancedFilter=resource.type%3D%22k8s_cluster%22%0Aresource.labels.location%3D%22us-east1%22%0Aresource.labels.cluster_name%3D%22gprd-gitlab-gke%22%0Atimestamp%3E%3D%222019-09-12T17%3A11%3A02.287936Z%22%0Atimestamp%3C%3D%222019-09-12T17%3A25%3A02.287936Z%22%0AprotoPayload.authenticationInfo.principalEmail%3D%22jskarbek%40gitlab.com%22%0AprotoPayload.resourceName%3D%22extensions%2Fv1beta1%2Fnamespaces%2Fgitlab%2Fdeployments%2Fgitlab-registry%22%20OR%20protoPayload.resourceName%3D%22apps%2Fv1beta2%2Fnamespaces%2Fgitlab%2Fdeployments%2Fgitlab-registry%22) - -## Detection & Response - -Start with the following: - -- How was the incident detected? Alerts -- Did alarming work as expected? Yes -- How long did it take from the start of the incident to its detection? 12 minutes -- How long did it take from detection to remediation? 1 minute -- Were there any issues with the response to the incident? The recreation of the deployment uses our default settings of the HPA with a minimum of 2 replicas. It took roughly 3 minutes until the container registry scaled up to it's original state prior to the incident. At this point end users may have seen performance degradation. - -## Timeline - -2019-09-12 - -- 17:11:51 UTC - `kubectl delete gitlab-registry` - was executed on the production cluster -- 17:17:XX UTC - alerts indicating our registry endpoint alerted the on-call Engineer -- 17:19:XX UTC - Incident Call Started -- 17:23:59 UTC - Engineer recreated the Container Registry deployment -- 17:27:XX UTC - Alert clears - -## Root Cause Analysis - -An Engineer whom was performing local testing on an item directly related to the naming schema associated with objects that would match that of the production cluster. Due to the nature of testing this removed a bit of context that would have immediately signaled to the Engineer that commands may have been sent to an undesired Kubernetes Cluster. The engineer performed the command `kubectl delete deploy gitlab-registry` which matches the name of the deployment on the production cluster. This deletion removes both the Deployment along with all Replicasets and Pods associated with it. The Service Object remained, but no longer had any running Pods in its place. HAProxy's healthcheck will now have signaled a failure and removed the `gke-registry` backend which left HAProxy to return HTTP 503's for all incoming requests. - -## What went well - -Start with the following: - -- We were able to diagnose the issue relatively quickly -- With the use of Kubernetes, we were able to utilize our own tooling to bring the Container Registry to a working state quickly - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. 
- * Engineers should not have unnecessary access to production clusters -- Is there anything that could have been done to improve the detection or time to detection? - * Clusters should not have the API accessible by anyone other than service accounts acting inside of CI/CD jobs - * Clusters should not allow API traffic from users from anywhere - * A warning that an Engineer was acting upon a production cluster would caused a second guess at the commands being run locally - * An indicator on the shell of the engineer may have warned the Engineer to which cluster they are connected too -- Is there anything that could have been done to improve the response or time to response? - * Consider shortening the alert time required to trigger - * An alert dedicated to the monitoring of the Pods/Replicaset/Deployment and whether it exists would've helped diagnose faster -- Is there an existing issue that would have either prevented this incident or reduced the impact? - * No -- Did we have any indication or beforehand knowledge that this incident might take place? - * No - - -## Corrective actions - -An Epic has been created to discuss various ways of limiting exposure to production services in order to prevent future accidents such as this from occurring: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/91 - - -## Guidelines - -* [Blameless Postmortems Guideline](https://about.gitlab.com/handbook/customer-success/professional-services-engineering/workflows/internal/root-cause-analysis.html) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -24727158,2019-09-12 12:34:42.161,file-33-stor-gprd rebooted,"file-33-stor-gprd rebooted at 9/11 15:19 and 11/11 15:42 UTC. - -Opened this issue to track repos with zeroed files. Searching through all repos: - -``` -cd /var/opt/gitlab/git-data/repositories/@hashed -ionice -n 5 find . -regextype sed -regex "".*/objects/.*"" -size 0 > /var/tmp/zerofiles.txt -``` - -Results below.",3.0 -24714776,2019-09-12 06:45:09.178,Help delete a group webhook under gitlab-org,"The context is at https://gitlab.com/gitlab-org/quality/triage-serverless/issues/11#note_215742570 - -We want to delete the webhook under https://gitlab.com/groups/gitlab-org/-/hooks - -![image](https://gitlab.com/gitlab-org/quality/triage-serverless/uploads/d0b62d599053d89a74f33cec57fb096b/image.png) - -The one for http://triage-customers.triage-serverless-12690061.triage.serverless.grzegorz.co - -However we cannot do so via the web UI because of https://gitlab.com/gitlab-org/gitlab-ee/issues/29215 - -Could someone help us delete it? Then @rspeicher can help us create another one for https://triage-serverless.gitlab.com/ - -I am not sure why we cannot edit it, but it seems we can only delete and create a new one.",1.0 -24701290,2019-09-11 18:04:45.307,Potential bug with our GKE module related to node pool instance counts,"Two recent events brought up an issue with our GKE module: https://gitlab.com/gitlab-com/gl-infra/terraform-modules/google/gke - -During testing, an engineer was unable to get a node to be created with nodes participating in the cluster. And another time when an Engineer was adding an additional node pool, both times the Engineers ran into the same issue. 
- -The most probable cause may be the removal of the `node_count` option: https://gitlab.com/gitlab-com/gl-infra/terraform-modules/google/gke/commit/8e49364128f22b9b4ef4de8b029e4562de62c154 - -Utilize this issue to investigate/test/fix whatever may be wrong.",3.0 -24701150,2019-09-11 18:01:03.063,Git Data Loss on Gitaly server restart,"When a user is pushing a git repository and the Gitaly server which is hosting that repo shuts down for any reason, that push fails, leaving 0-byte files on the disk. When the server comes back up, any subsequent pushes to those repositories fail until someone manually intervenes. - -The recent Infrastructure issue is: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7837 - -It's also being discussed in the EE tracker: https://gitlab.com/gitlab-org/gitlab-ee/issues/24267 - -Some suggestions are: - -- `core.fsyncObjectFiles=true` Git Option -- Set the `data=journal` mount option to prevent zeroing files on crashes. Now we have `data=ordered`. - -The ultimate solution will have to be Gitaly HA, but we will need to mitigate this problem before that is ready. Use this issue to discuss other ideas. - -/cc @gitlab-com/gl-infra",3.0 -24679182,2019-09-11 10:12:57.846,Change status.io account owner,"Current status.io account owner is still John Northrup. - -In order to transfer ownership, he needs to choose another team member _with a valid payment method on file_ as owner (https://kb.status.io/account/transfer-ownership/). - -@dawsmith Can you check if you (or somebody else) have payment info set up and can arrange with @northrup to take over ownership from him?",1.0 -24669320,2019-09-11 04:19:29.701,file-35-stor-gprd rebooted,"The `file-35-stor-gprd.c.gitlab-production.internal` server rebooted at 03:39 UTC. - -The following repositories have 0 byte files which may need to be deleted before pushes will succeed. 
-``` -/2d/a6/2da627fc34bd145119f7c4b28180de9b1b1ecb04cd50395d785d1c69206b35b7.git -/ee/0e/ee0e418f314258adb918e4c885f7e69d24393d6c089416490541195ccf9ec8b9.git -/ee/98/ee98bb4e4bccdb9985892d934e1019d8a83c8106fff666d12b61fa48b61f923f.git -/88/02/8802266a8fdb5df071ff72872f5026a6cac54123e19916215c28faa6cb9c48c3.git -/a5/96/a596fce6a7c5a270336cf6e4d93a2e68f0f48cf10f9dfe49fb9091b75eb8477b.git -/f4/46/f446c5ffe314c2cf078dfa09e27fdec032fadba351ab17c39132699db0e56821.git -/71/a3/71a39ee9d13571f24e71db9481c6c79fe16ae18d46ae3ea0f702a7f2ab81b9fd.git -/03/b0/03b051a9df00021abd386e862448aae39f02cac86d6705d757868abf0ae181ff.git -/70/a8/70a8b032d6fb8a191a4262bea0b7ad1edf89069c9916c0fba205df356dbd9147.git -/db/e7/dbe7ad164b13687f1f4cb406c010c3baa1cdef40e3bed488dc78bd6253172ad9.git -/86/9d/869dfce8970926048a3fc4af2dc1f6a90a777b655f9d98588a38770e60c9a6e7.git -/a9/d6/a9d6b8b752a339be792c71051237e96d792e64a4654077d79c58a64e641a60b6.git -/50/97/5097eec0504007ccff80e78fccecf3f7967688c86f6e7b2efacf5b003acc7159.git -/69/42/6942e0cdf2a521808561da1b5e0b5255404c14b56551d3c69e2e5b6523791990.git -/cf/ea/cfeae16b595d765939a14e1406ab2068a4f03c1cb835a12c3b09e4b93319d758.git -/4b/f2/4bf2aeadb0603ae6c62c755864d5cde291a48ff7ac759b529105aa498ea60c92.git -/72/f2/72f2fde2baebc34149ad159e3d24051bb70685e8512a48163b3476c4d370d185.git -/6d/be/6dbee4cf8657d93310ee25693acce504b9972c2fa162ac41c9d056e12efcb8db.git -/30/6b/306b9095b0213e79aca371b230061b110a54186e2ac80336f8b8925725702e9d.git -/9e/de/9ede26caa0c4023e99c905ffec40ce1d5aad20f69bce64fde781d51a7e2d4c30.git -/49/65/4965abd0081151b6f547f562e730f763cff5417d2352380be4c0bd64dbddfcb1.git -/49/87/498797e5e80b416ee9471d61df9782116fb5917280982006cb543da8f7a6d4f2.git -``` - -These files are located in `/var/opt/gitlab/git-data/repositories/@hashed`",2.0 -24660552,2019-09-10 20:53:08.044,Add new sitespeed graphite datasource to grafana,"To finalise a completely new setup for doing live monitoring of overall frontend performance through sitespeed I would need the setup of a new graphite datasource in our grafana instance. This would then finalise the issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/2106 - -I have added the credentials + server address to 1P under `Sitespeed Graphite New`. It would be great to have this as a new datasource `sitespeed new`. - -As a standard grafana member I can add new dashboards without problems or? (So far I tested them only on my local grafana instance)",1.0 -24643616,2019-09-10 14:22:30.203,"Thanos and prometheus hosts should be labelled with `type=""metrics""` label","Currently Thanos and Prometheus do not have a `type` label. - -This means (ironically) that much of our monitoring and alerting infrastructure does not work on these hosts. - -This should be fixed.",2.0 -24643608,2019-09-10 14:22:23.241,Discuss 3 oncall shifts,"Made a spreadsheet to visualize: https://docs.google.com/spreadsheets/d/1r2VOI9S0omtqUDFKMX5Csu1opN_UsA-U5oAjenjU-0g/edit#gid=0 - -Issue to discuss this - per Infra team retro.",1.0 -24641857,2019-09-10 13:48:14.990,Add Prometheus alert routing to issues,We can now use GitLab Prometheus integration to route alerts directly to issues.,2.0 -24633485,2019-09-10 10:01:04.252,Decrease statement_timeout in staging,"We have `statement_timeout` settings in gstg that are not consistent with production. We should align that, for example to allow issues with migrations to surface earlier before they're tried in production. 
- -While the global `statemnt_timeout` in gstg is 15s, individual per-user settings are different: - -``` -root@patroni-06-db-gstg.c.gitlab-staging-1.internal:/var/opt/gitlab/postgresql# gitlab-psql -U gitlab -Password for user gitlab: -psql (9.6.15) -SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) -Type ""help"" for help. - -gitlabhq_production=> show statement_timeout; - statement_timeout -------------------- - 5min -(1 row) -``` - -The `statement_timeout` has been set individually for `gitlab` user, see `pg_user` table: - -``` - gitlab | 16384 | f | f | f | f | ******** | | {statement_timeout=5min} -``` - -We should reset this to 15s to align with production. Since staging is a lot smaller than production, we might still not catch all issues - but that's another topic.",1.0 -24627527,2019-09-10 08:39:50.878,Temporary database testing instance for pshutsin,"Related to https://gitlab.com/gitlab-com/access-requests/issues/1887, we are provisioning a full database instance (from the restore pipeline) for @pshutsin to test a data migration with.",1.0 -24618359,2019-09-10 01:12:09.364,ops.gitlab.net cert renewal,"https://gitlab.pagerduty.com/incidents/P7HGAQM - -Expires in 6 days, already renewed by sslmate, just needs deployment.",1.0 -24574593,2019-09-08 20:50:11.307,Add detailed network monitoring for PostgreSQL/Patroni hosts,"As I already proposed before in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7543#note_204263742 : - -> We might want to consider having every node send/receive a continuous stream of low bandwidth traffic, and monitor it. Something like a permanently running `iptraf`. This cannot be 1m resolution time, as some already available metrics on Prometheus. With this, we might be able to see clearly if there were network hiccups. - -I would like to raise the topic again. Prometheus network monitoring is good except for two data points that are not available: -* Low resolution (1 second resolution is desired). -* Dependent on the traffic. - -I propose to setup a system to inject traffic between the hosts at a constant rate, and export metrics to Prometheus. This way, we can see what is the performance of the network vs the expected one. It may consist of the following: - -* Each host ""monitors/talks to"" two other hosts of the cluster, ideally one in the same zone and another one in another zone, in the same region. -* A small amount of traffic, but constant, is exchanged (think of something like `dd if=/dev/zero | pv --rate-limit | nc` etc). For example, 50KB/s. -* Export network performance metrics to Prometheus, ideally with a 1 second resolution. - -This mechanism would allow to detect and measure precisely network hiccups and disruptions. It would help clearly to diagnose the recent Patroni failovers, which are believed to be caused by coordinated network disruptions (see https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7790#note_213872680). - -cc @glopezfernandez",2.0 -24564067,2019-09-07 22:30:44.459,Connect GitLab services to project environment variables,"The environments built by [GitLab Services](https://ops.gitlab.net/gitlab-com/services-base) need the ability to associate environment variables with projects and groups within the GitLab application. - -Some service environments will not require variables to connect (master, review apps). Some will need to automatically set variables in an arbitrary project or group on GitLab.com. 
Some will need to set variables in another instance that is not GitLab.com (staging, ops, dev instances). - -The location of the project to connect to should be configurable using a Terraform variable in the files in the [/environments](https://ops.gitlab.net/gitlab-com/services-base/tree/master/environments) folder in the GitLab Services project. The absence of this variable should result in no attempt to set variables in any project or group. - -Initially the two variables that there is initial need for are: - -- [DATABASE_URL](https://docs.gitlab.com/ee/topics/autodevops/#postgresql-database-support) set from the [CloudSQL Module](https://ops.gitlab.net/gitlab-com/services-base/blob/master/terraform/main.tf#L75) -- [POSTGRES_ENABLED](https://docs.gitlab.com/ee/topics/autodevops/#database) and possibly other database variables. - -When setting these variables, the additional attributes also need to be configurable - -- Scope - to specify which environments the variable applies to -- Masked - to hide protected variables -- State - for completeness",2.0 -24563892,2019-09-07 21:53:05.400,Connect GitLab services GKE clusters to GitLab Projects,"The environments built by [GitLab Services](https://ops.gitlab.net/gitlab-com/services-base) need the ability to associate their GKE clusters with projects and groups within the GitLab application. - -Some service environments will not have an automatic Kubernetes integration connection (master, review apps). Some will need to automatically connect to an arbitrary project or group on GitLab.com. Some will need to connect to another instance that is not GitLab.com (staging, ops, dev instances). - -The location of the project to connect to should be configurable using a Terraform variable in the files in the [/environments](https://ops.gitlab.net/gitlab-com/services-base/tree/master/environments) folder in the GitLab Services project. The absence of this variable should result in no attempt to connect GKE to a project or group. - -The three variables which need to be connected are: - -- The CA Cert from the GKE cluster defined using the [GKE Module](https://ops.gitlab.net/gitlab-com/services-base/blob/master/terraform/main.tf#L39) -- The Endpoint IP of the GKE cluster defined above -- The Service token of the `gitlab-admin` account defined in the [gitlab-admin-service-account](https://ops.gitlab.net/gitlab-com/services-base/blob/master/kubernetes/gitlab-admin-service-account.yaml) - -Additional settings are: - -- `Environment Scope` - which should come from a configuration variable in the environment file and should default to `*` -- `Base Domain` - which can come from a configuration variable, and will eventually be sync'd with a DNS module -- `Cluster Name` - should default to the name of the branch or the `var.project` TF variable - -This work should be done in an ephemeral environment and should be functional before it is merged with `master`. Assign the MR to @devin for review.",3.0 -24539695,2019-09-06 14:29:13.181,Stop using numbers in our host naming convention,"Lately, I've observed considerable amounts of thought and attention given to replacing hosts and naming them to re-sequence the pool counting up from `01`. This pattern _always_ requires destruction or renaming, if even possible, of the host being replaced before the new host can be created. The extra amount of work this creates and the thought and attention given to such operations distracts us from the goal of destroying and replacing the broken VM. 
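- -As one possible illustration (a sketch only, not a proposal for a specific scheme), Terraform's random_pet resource can generate names with no numeric sequence at all, so a replacement host never has to wait for its predecessor's name to be freed: - -``` -resource ""random_pet"" ""gitaly"" { -  length = 2 -} - -resource ""google_compute_instance"" ""gitaly"" { -  # e.g. file-wanted-mongoose-stor-gprd; the surrounding prefix/suffix are illustrative -  name = ""file-${random_pet.gitaly.id}-stor-gprd"" -  # ... remaining instance arguments unchanged -} -```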
- -:cow: :cow: :cow: :cow: :cow: - - -Cc @craig - I'm not going to assign this to you, but I know you'd come across a way of doing this with randomization and pools of silly names.",3.0 -24529320,2019-09-06 09:05:06.779,Fix replication on dr-delayed and dr-archive db's,postgres-dr-delayed-01-db-gprd and postgres-dr-archive-01-db-gprd stopped replicating after the last [failover](https://gitlab.com/gitlab-com/gl-infra/production/issues/1119) because they are on the wrong timeline.,3.0 -22135693,2019-06-21 15:01:48.780,Create similar alertmanager configurations across all instances of running alertmanagers,"Today the alertmanager consists of three files: -* https://gitlab.com/gitlab-cookbooks/gitlab-alertmanager/blob/master/templates/default/alertmanager.yml.erb -* https://gitlab.com/gitlab-cookbooks/gitlab-alertmanager/tree/master/files/default/alertmanager/templates - -The `alertmanager.yml.erb` is put together by chef and contains secret data that we store inside of GKMS. We need to determine a way to recreate this workflow such that we can make the same modification to one set of files and these are then pulled in by both chef, and our desired mechanism for deploying the alertmanagers inside of Kubernetes. - -Reference Doc: https://docs.google.com/document/d/1oJcWY4U2lmPwBvY8qQPlP7msPc7ioVUlo3ZKkCp-i1g/edit - -Related Issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6927 - -/cc @gitlab-com/gl-infra",5.0 -22135116,2019-06-21 14:42:45.864,Thanos reconfiguration,"At the time of writing up this issue, the stable/prometheus-operator helm chart does not support the same version of thanos that we are running throughout the rest of our infrastructure. The use of thanos is preferred to ensure the longevity of data beyond the time for which prometheus is able to store data in our GKE clusters. Utilize this issue to discuss/investigate/create actionable issues to ensure that we get thanos enabled with the desired configuration in GKE, and that prometheus is configured in a preferred manner. - -* [x] Check-up on which version the latest version of the stable/promethues-operator chart supports such that we can hopefully get the same version of thanos deployed across both our VM and Kubernetes infrastructure -* [x] Create a separate service account for this - * Currently the same service account used to mangle user data is storing metric data, this is not best security practice. -* [x] Configure thanos to store metrics in the cloud -* [x] Turn down the data retention of prometheus from 8 weeks to a single day - -Reference doc: https://docs.google.com/document/d/1oJcWY4U2lmPwBvY8qQPlP7msPc7ioVUlo3ZKkCp-i1g/edit - - -/cc @gitlab-com/gl-infra",3.0 -22133544,2019-06-21 13:54:40.325,Fix syntax that will break in terraform 0.12,"I tried planning with 0.12.2 on gstg, and got a bunch of errors. Luckily they are all of the form: - -``` -Error: Invalid attribute name - - on .terraform/modules/alerts/instance.tf line 176, in resource ""google_compute_instance"" ""default"": - 176: ""CHEF_URL"" = ""${var.chef_provision.[""server_url""]}"" - -An attribute name is required after a dot. -``` - -this should be easy to fix and will hopefully smooth our path to terraform 0.12. - -Related to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6760",1.0 -22126041,2019-06-21 09:58:21.309,PoC: Database backups based on snapshots,"Postgres basebackups are currently being taken by wal-e every 24 hours. This process takes up to 16 hours and consumes a lot of IO on the primary. 
We cannot have wal-e take this basebackup from a replica because wal-e doesn't support that. - -During weekdays, the daily backup takes up to 16 hours to complete. It increases the load and IO wait on the primary extraordinarily when it runs during higher traffic times: - -![Screenshot_from_2019-08-08_15-01-25](/uploads/a0a99dfae791cbb9e7b958ee33ebc03f/Screenshot_from_2019-08-08_15-01-25.png) - -Additionally, from time to time, we run into issues with wal-e uploading to GCS. See for example https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7049. - -There was an effort to use wal-g over wal-e, which would support taking basebackups from replicas. However, currently we're still waiting on a wal-g release to properly support GCS. - -Even with wal-g in place, we would still have a high MTTR for restoring a full backup: -* A 3TB backup has to be fetched from GCS and extracted on the database host -* Worst case, we'd replay 24 hours worth of WAL on top of that basebackup to catch up with changes made after the basebackup started. - -MTTR is likely at >= 8 hours, if not worse (it depends on how much WAL needs to be applied). - -### Proposal - -In order to reduce the MTTR to below 1 hour, we want to use GCE disk snapshots to take basebackups. - -In more detail, we would -* Provision a dedicated replica which would consume WAL from archive and - when running - generally stay up to date with the production cluster. The replica does not participate in the HA cluster. - -At regular intervals (let's say every 2 hours, but this can be tuned) we would -* Stop postgres on the replica and flush disks to ensure snapshot consistency -* Coordinate with the GCE API to grab a snapshot of the data disk -* Start postgres again to let it catch up with the upstream cluster - -In order to restore a backup, we would create a new disk from the latest snapshot and instantiate a database instance from it. It would be configured as an archive replica and consume WAL from archive until the recovery point. Then, it's promoted to primary and can start serving connections. - -### Benefits - -* Greatly reduced MTTR: Disks can be created from snapshots in a few minutes max - no need to ship 3TB across the network -* Disk snapshots are incremental, hence cost is not a huge concern with high frequent backups -* New database instances can be created easily from the snapshots. This is also helpful if we had to recover the full cluster: We'd create all 6 instances from the same snapshot, let them catch up and then direct the replicas to talk to the chosen primary. -* Removed the nightly IO bottleneck by not taking backups from the primary any more - -### Proof of concept implementation - -(tbd) - -### Notes - -* It may be helpful if the data directory would live on its own disk and we would just snapshot that. With that, the data directory becomes a drop in we could easily mount at any database instance.",5.0 -22098581,2019-06-20 15:28:39.392,use pushgateway to indicate pg backup start and finish,"currently we capture wal-e basebackup message `Upload` https://gitlab.com/gitlab-cookbooks/gitlab-mtail/blob/master/files/default/mtail/wale.mtail#L23 and use it in pgbasebackup alert to indicate if a base backup happened. However, this alert would never notify us if the backup failed during the process. Because of it, we missed 6 base backups in the last 14 days but never get any alert https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7049. 
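-
-For illustration, a minimal sketch of what such signalling could look like (the Pushgateway address, job name and metric names below are made up), wrapping the backup job so that a missing or stale completion timestamp can be alerted on:
-
-```
-# hypothetical wrapper around the nightly basebackup cron entry
-PUSHGW=http://pushgateway.example.internal:9091
-curl -s --data-binary @- $PUSHGW/metrics/job/wale_basebackup <<EOF
-wale_basebackup_start_timestamp $(date +%s)
-EOF
-# ... run the actual wal-e backup-push command here ...
-curl -s --data-binary @- $PUSHGW/metrics/job/wale_basebackup <<EOF
-wale_basebackup_completion_timestamp $(date +%s)
-EOF
-```
-
-An alert on something like `time() - wale_basebackup_completion_timestamp` exceeding the expected backup interval would then catch both failed and hung backups.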
- - -We should use pushgateway before and after the backup to indicate the backup start and completion instead.",2.0 -22095890,2019-06-20 14:15:38.280,Convert the gitlab-com repo from helm template to helm local tiller,"As discussed in a recent meeting, notes captured here: https://docs.google.com/document/d/1jnh5Sf9wXmok0W5iyQJS0CDItDIl5E1hQKOBwIb4w2E/edit#heading=h.ooga5woic4af - -> Helm/Tiller - skarbek -> * Let’s utilize local tiller -> * Let’s convert the gitlab-com project over to utilizing local tiller -> * Create an issue - -We've decided it's safer to utilize the hooks provided by helm. To avoid the security implications by using tiller, we'll utilize a plugin that provides a local tiller. We are currently testing successfully this method with our [monitoring repo](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/monitoring), we need to convert our [gitlab-com repo](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com) to using the same system. - -/cc @jarv",2.0 -22094961,2019-06-20 13:41:59.397,Revoke Vendor Access to s3 Snowplow legacy bucket,FishTown has been writing data into our gitlab-com-snowplow-data s3 bucket. Remove that access.,1.0 -22075037,2019-06-19 20:38:52.630,Detecting5xxForRegistry is flappy,"`sum(backend_code:haproxy_server_http_responses_total:irate1m{backend=""registry"",code=""5xx"",tier=""lb""}) >0` - -is really flappy in the presence of sporadic errors, I believe mainly because of irate1m, irate being 'the last two samples', and over only 1m. I think we need to re-evaluate what this is doing; I'm thinking a new synthetic 'rate5m' might be more useful to smooth it out, so we get better stronger more consistent signal when things are going wrong. It probably needs a small number other than 0, but that's TBD; with a smoother signal, >0 may be sufficient. - -/cc @cmcfarland",1.0 -22072839,2019-06-19 18:42:05.072,deploy updated uptycs(osquery) to prod canary hosts,"The SecOps team would like to proceed with rolling out an updated version of uptycs (osquery) to the production canary hosts. - -This new version has some significant performance improvements around the embedded rocksdb. So far in staging, we haven't seen any significant performance impact like we did with the previous version. - -For more info/context - see [#199](https://gitlab.com/gitlab-com/gl-security/operations/issues/199)",2.0 -22071747,2019-06-19 17:36:23.170,Test restore of PackageCloud backups with new procedure,"PackageCloud is backing up its database via xbstream. It should be almost the same to restore a backup using these new backups, but it should be tested and the documentation should be updated.",4.0 -22071538,2019-06-19 17:25:23.446,Increase Retention Period on PackageCloud backups,We currently keep PackageCloud backups for 5 days. This was decided upon when we had very limited disk space due to the giant database. We should increase the retention period for backups up to 14 days.,1.0 -22071485,2019-06-19 17:23:33.979,Remove cron task to backup PackageCloud to S3,PackageCloud used to not be able to upload its backups to S3 successfully due to the very large database. It is now able to successfully upload the backups to S3 so we should remove the cron job that did that for us.,1.0 -22046171,2019-06-19 05:28:19.333,Register mailroom service on consul,Necessary for us to be able to do dynamic inventory for consul (e.g. 
for gitlab-org/release/framework#354).,3.0 -22042303,2019-06-19 01:11:52.749,Document in`runbooks` how to debug/maintain Geo DB replication,,2.0 -22040745,2019-06-18 22:43:02.107,Discuss/formalise recommendation for managing GKE (Google Kubernetes Engine) patching/upgrades,"Following on from #6448 and the some of the implications of #6880 , we should discuss, then write down (and link from relevant places) what gl-infra expects from people spinning up GKE clusters in GCP projects. - -Starting position for discussion: -1. If you're spinning up GKE clusters yourself in GCP projects (i.e. gl-infra are not directly involved beyond provisioning a project for you), then you are responsible for the maintenance of same. gl-infra can advise/assist when explicitly requested, but will not be pro-actively looking out for things. -2. The simplest way to avoid issues with this is to turn on automatic upgrades of the GKE cluster (master + nodes). https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades details how to do that. This also requires that you consider uptime requirements for the thing you're running on GKE, but if it's critical to the business, perhaps gl-infra should be looking after it anyway which requires explicit discussion + handover.",2.0 -22033597,2019-06-18 16:22:18.557,Update credentials for Salesforce used in customers.gitlab.com,"Hi team, the creds for Salesforce has been recently changed and out integration is broken. - -Can you please update the chef vault with the new credentials? Specifically we need to update just [two attributes](https://gitlab.com/gitlab-cookbooks/cookbook-customers-gitlab-com/blob/ca4bc07310aaf78629260425edb6d391580942ea/attributes/default.rb#L37-38). The values are stored on the `Subscription portal` shared vault from 1password, please let me know if you need access to it. - -Thanks! - -Closes gitlab-org/customers-gitlab-com#546",1.0 -22020540,2019-06-18 11:48:27.832,Automate Vault Restoration Testing via CI/CD,Implement regular backup and service restoration tests for Vault.,8.0 -22020396,2019-06-18 11:46:00.453,Rollout infrastructure secrets via vault,"* [ ] define a strategy for migrating secrets from GKMS / Chef vault to Hashicorp Vault - * moving secrets over piece by piece or maybe just adapting the shim layer for secrets-management in Chef? - * try to prevent having to live in both worlds for an extended time period -* [ ] test on one service -* [ ] rollout service group by service group - -We eventually should split this up into multiple issues when we clarified the migration strategy.",8.0 -22019719,2019-06-18 11:24:14.153,Create Runbook for Vault,"* [ ] Create runbook for Vault -* [ ] add Vault to service catalog",3.0 -22019610,2019-06-18 11:20:09.393,Configure Alerting for Vault,Configure thresholds and alerting for Vault via alertmanager.,3.0 -22018400,2019-06-18 10:35:46.577,"Be confident in our metrics, dashboards, and alerting related to ZFS git storage nodes","We want sufficient metrics to be able to diagnose and debug issues related to filesystems and storage on ZFS-backed git storage nodes, and our alert conditions to be satisfactory. - -There are 2 sides to this: - -1. ZFS-specific metrics. Check out https://github.com/ncabatoff/zfs-exporter -1. Existing metrics related to persistent disk I/O are up to scratch. Is there anything to alert on here? 
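-
-For ad hoc spot checks while we settle on the exporter and alert rules (the pool name and field selection below are illustrative), the stock ZFS tooling can be read by hand:
-
-```
-# per-pool/per-vdev I/O, refreshed every 5 seconds (pool name is illustrative)
-zpool iostat -v tank 5
-# report only pools that are degraded or erroring
-zpool status -x
-# ARC size and hit/miss counters straight from the kernel
-grep -E '^(size|c |hits|misses)' /proc/spl/kstat/zfs/arcstats
-```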
- -From a suggestion by @andrewn.",4.0 -22005564,2019-06-17 21:56:26.049,add forum.gitlab.com to service catalog,We should add forum.gitlab.com to the service catalog to be reminded that there is something we need to take care for (updates for example).,2.0 -22005472,2019-06-17 21:46:20.317,Register deploy service on consul,"In order for us to be able to do dynamic inventory for consul (e.g. for gitlab-org/release/framework#354) we need to register a service in the deploy nodes to be able to differentiate them - -/cc @dawsmith",1.0 -22002625,2019-06-17 21:24:30.237,Investigate automating issue creation for Discourse upgrade notifications,"We receive email notifications to the `ops-contact` account when a new version of Discourse is available for `forum.gitlab.com`, which are easy to ignore, lose track of, or fall victim to the [bystander effect](https://en.wikipedia.org/wiki/Bystander_effect). If possible, we should look into automating the creation of upgrade issues from these emails to more easily/consistently ensure that the notifications are received through our primary workflow (gitlab todos). - -/cc @ansdval @hphilipps @cmiskell @msmiley",1.0 -21980690,2019-06-17 15:00:28.242,Failure to access anything requiring the lb-bastion.dr.gitlab.com bastion,"# Bastion dr environment failures - -Examples of successful bastion access to a production redis host, and a failed bastion access to whatever environment `dr` refers to. - -## Success connecting to production bastion (lb-bastion.gprd.gitlab.com): - -```{sh} -$ ssh -v lb-bastion.gprd.gitlab.com -OpenSSH_7.9p1, LibreSSL 2.7.3 -debug1: Reading configuration data /Users/nelsnelson/.ssh/config -debug1: /Users/nelsnelson/.ssh/config line 2: Applying options for lb-bastion.gprd.gitlab.com -debug1: /Users/nelsnelson/.ssh/config line 71: Applying options for * -debug1: Reading configuration data /etc/ssh/ssh_config -debug1: /etc/ssh/ssh_config line 48: Applying options for * -debug1: Connecting to lb-bastion.gprd.gitlab.com port 22. -debug1: Connection established. 
-debug1: identity file /Users/nelsnelson/.ssh/id_rsa type 0 -debug1: identity file /Users/nelsnelson/.ssh/id_rsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_dsa type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_dsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ecdsa type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ecdsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ed25519 type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ed25519-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_xmss type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_xmss-cert type -1 -debug1: Local version string SSH-2.0-OpenSSH_7.9 -debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu2.8 -debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu2.8 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002 -debug1: Authenticating to lb-bastion.gprd.gitlab.com:22 as 'nelsnelson' -debug1: SSH2_MSG_KEXINIT sent -debug1: SSH2_MSG_KEXINIT received -debug1: kex: algorithm: curve25519-sha256@libssh.org -debug1: kex: host key algorithm: ecdsa-sha2-nistp256 -debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: compression: none -debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: compression: none -debug1: expecting SSH2_MSG_KEX_ECDH_REPLY -debug1: Server host key: ecdsa-sha2-nistp256 SHA256:YjrYlnAlbKv23MI+h4UJGaGU32SWHngXti2ahIEEVz0 -debug1: Host 'lb-bastion.gprd.gitlab.com' is known and matches the ECDSA host key. -debug1: Found key in /Users/nelsnelson/.ssh/known_hosts:435 -debug1: rekey after 134217728 blocks -debug1: SSH2_MSG_NEWKEYS sent -debug1: expecting SSH2_MSG_NEWKEYS -debug1: SSH2_MSG_NEWKEYS received -debug1: rekey after 134217728 blocks -debug1: Will attempt key: cardno:000610165365 RSA SHA256:1Nj+VCcO72FXfEeTmrzP+jVE2lS1sT2A3sOx5mwUhCU agent -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_rsa RSA SHA256:9Ev94jGJTsaPi/hiU57atmgVm8AB6PS/akZlxbYVqRw -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_dsa -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_ecdsa -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_ed25519 -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_xmss -debug1: SSH2_MSG_EXT_INFO received -debug1: kex_input_ext_info: server-sig-algs= -debug1: SSH2_MSG_SERVICE_ACCEPT received -debug1: Authentications that can continue: publickey -debug1: Next authentication method: publickey -debug1: Offering public key: cardno:000610165365 RSA SHA256:1Nj+VCcO72FXfEeTmrzP+jVE2lS1sT2A3sOx5mwUhCU agent -debug1: Server accepts key: cardno:000610165365 RSA SHA256:1Nj+VCcO72FXfEeTmrzP+jVE2lS1sT2A3sOx5mwUhCU agent -debug1: Authentication succeeded (publickey). -Authenticated to lb-bastion.gprd.gitlab.com ([35.196.168.43]:22). -debug1: channel 0: new [client-session] -debug1: Requesting no-more-sessions@openssh.com -debug1: Entering interactive session. -debug1: pledge: network -debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0 -debug1: Sending environment. -debug1: Sending env LANG = en_US.UTF-8 -Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-1009-gcp x86_64) - - * Documentation: https://help.ubuntu.com - * Management: https://landscape.canonical.com - * Support: https://ubuntu.com/advantage - - Get cloud support with Ubuntu Advantage Cloud Guest: - http://www.ubuntu.com/business/services/cloud - -128 packages can be updated. -0 updates are security updates. 
- -New release '18.04.2 LTS' available. -Run 'do-release-upgrade' to upgrade to it. - - -*** System restart required *** -nelsnelson@bastion-02-inf-gprd.c.gitlab-production.internal:~$ exitdebug1: client_input_channel_req: channel 0 rtype exit-status reply 0 -debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0 - -logout -debug1: channel 0: free: client-session, nchannels 1 -Connection to lb-bastion.gprd.gitlab.com closed. -Transferred: sent 3176, received 4152 bytes, in 3.6 seconds -Bytes per second: sent 876.0, received 1145.2 -debug1: Exit status 0 - -$ ssh -vvv redis-01-db-gprd.c.gitlab-production.internal -OpenSSH_7.9p1, LibreSSL 2.7.3 -debug1: Reading configuration data /Users/nelsnelson/.ssh/config -debug1: /Users/nelsnelson/.ssh/config line 7: Applying options for *.gitlab-production.internal -debug1: /Users/nelsnelson/.ssh/config line 71: Applying options for * -debug1: Reading configuration data /etc/ssh/ssh_config -debug1: /etc/ssh/ssh_config line 48: Applying options for * -debug1: Executing proxy command: exec ssh lb-bastion.gprd.gitlab.com -W redis-01-db-gprd.c.gitlab-production.internal:22 -debug1: identity file /Users/nelsnelson/.ssh/id_rsa type 0 -debug1: identity file /Users/nelsnelson/.ssh/id_rsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_dsa type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_dsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ecdsa type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ecdsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ed25519 type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ed25519-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_xmss type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_xmss-cert type -1 -debug1: Local version string SSH-2.0-OpenSSH_7.9 -debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu2.8 -debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu2.8 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002 -debug2: fd 5 setting O_NONBLOCK -debug2: fd 4 setting O_NONBLOCK -debug1: Authenticating to redis-01-db-gprd.c.gitlab-production.internal:22 as 'nelsnelson' -debug3: hostkeys_foreach: reading file ""/Users/nelsnelson/.ssh/known_hosts"" -debug3: record_hostkey: found key type ECDSA in file /Users/nelsnelson/.ssh/known_hosts:442 -debug3: load_hostkeys: loaded 1 keys from redis-01-db-gprd.c.gitlab-production.internal -debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521 -debug3: send packet: type 20 -debug1: SSH2_MSG_KEXINIT sent -debug3: receive packet: type 20 -debug1: SSH2_MSG_KEXINIT received -debug2: local client KEXINIT proposal -debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c -debug2: host key algorithms: 
ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa -debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com -debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com -debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 -debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 -debug2: compression ctos: none,zlib@openssh.com,zlib -debug2: compression stoc: none,zlib@openssh.com,zlib -debug2: languages ctos: -debug2: languages stoc: -debug2: first_kex_follows 0 -debug2: reserved 0 -debug2: peer server KEXINIT proposal -debug2: KEX algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 -debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 -debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com -debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com -debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 -debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 -debug2: compression ctos: none,zlib@openssh.com -debug2: compression stoc: none,zlib@openssh.com -debug2: languages ctos: -debug2: languages stoc: -debug2: first_kex_follows 0 -debug2: reserved 0 -debug1: kex: algorithm: curve25519-sha256@libssh.org -debug1: kex: host key algorithm: ecdsa-sha2-nistp256 -debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: compression: none -debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: compression: none -debug3: send packet: type 30 -debug1: expecting SSH2_MSG_KEX_ECDH_REPLY -debug3: receive packet: type 31 -debug1: Server host key: ecdsa-sha2-nistp256 SHA256:pbwMgpPOhyNzX9LCfxM8d6D2vOV9zBdQOxtBoBIm240 -debug3: hostkeys_foreach: reading file ""/Users/nelsnelson/.ssh/known_hosts"" -debug3: record_hostkey: found key type ECDSA in file /Users/nelsnelson/.ssh/known_hosts:442 -debug3: load_hostkeys: loaded 1 keys from redis-01-db-gprd.c.gitlab-production.internal -debug1: Host 'redis-01-db-gprd.c.gitlab-production.internal' is known and matches the ECDSA host key. 
-debug1: Found key in /Users/nelsnelson/.ssh/known_hosts:442 -debug3: send packet: type 21 -debug2: set_newkeys: mode 1 -debug1: rekey after 134217728 blocks -debug1: SSH2_MSG_NEWKEYS sent -debug1: expecting SSH2_MSG_NEWKEYS -debug3: receive packet: type 21 -debug1: SSH2_MSG_NEWKEYS received -debug2: set_newkeys: mode 0 -debug1: rekey after 134217728 blocks -debug1: Will attempt key: cardno:000610165365 RSA SHA256:1Nj+VCcO72FXfEeTmrzP+jVE2lS1sT2A3sOx5mwUhCU agent -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_rsa RSA SHA256:9Ev94jGJTsaPi/hiU57atmgVm8AB6PS/akZlxbYVqRw -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_dsa -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_ecdsa -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_ed25519 -debug1: Will attempt key: /Users/nelsnelson/.ssh/id_xmss -debug2: pubkey_prepare: done -debug3: send packet: type 5 -debug3: receive packet: type 7 -debug1: SSH2_MSG_EXT_INFO received -debug1: kex_input_ext_info: server-sig-algs= -debug3: receive packet: type 6 -debug2: service_accept: ssh-userauth -debug1: SSH2_MSG_SERVICE_ACCEPT received -debug3: send packet: type 50 -debug3: receive packet: type 51 -debug1: Authentications that can continue: publickey -debug3: start over, passed a different list publickey -debug3: preferred publickey -debug3: authmethod_lookup publickey -debug3: remaining preferred: -debug3: authmethod_is_enabled publickey -debug1: Next authentication method: publickey -debug1: Offering public key: cardno:000610165365 RSA SHA256:1Nj+VCcO72FXfEeTmrzP+jVE2lS1sT2A3sOx5mwUhCU agent -debug3: send packet: type 50 -debug2: we sent a publickey packet, wait for reply -debug3: receive packet: type 60 -debug1: Server accepts key: cardno:000610165365 RSA SHA256:1Nj+VCcO72FXfEeTmrzP+jVE2lS1sT2A3sOx5mwUhCU agent -debug3: sign_and_send_pubkey: RSA SHA256:1Nj+VCcO72FXfEeTmrzP+jVE2lS1sT2A3sOx5mwUhCU -debug3: sign_and_send_pubkey: signing using rsa-sha2-512 -debug3: send packet: type 50 -debug3: receive packet: type 52 -debug1: Authentication succeeded (publickey). -Authenticated to redis-01-db-gprd.c.gitlab-production.internal (via proxy). -debug1: channel 0: new [client-session] -debug3: ssh_session2_open: channel_new: 0 -debug2: channel 0: send open -debug3: send packet: type 90 -debug1: Requesting no-more-sessions@openssh.com -debug3: send packet: type 80 -debug1: Entering interactive session. -debug1: pledge: proc -debug3: receive packet: type 80 -debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0 -debug3: receive packet: type 91 -debug2: channel_input_open_confirmation: channel 0: callback start -debug2: client_session2_setup: id 0 -debug2: channel 0: request pty-req confirm 1 -debug3: send packet: type 98 -debug1: Sending environment. 
-debug3: Ignored env SHELL -debug3: Ignored env LSCOLORS -debug3: Ignored env LESS -debug3: Ignored env XPC_FLAGS -debug3: Ignored env TERM_PROGRAM_VERSION -debug3: Ignored env JAVA_HOME -debug3: Ignored env SSH_AUTH_SOCK -debug3: Ignored env SVN_EDITOR -debug3: Ignored env TERM_SESSION_ID -debug3: Ignored env RBENV_SHELL -debug3: Ignored env GPG_TTY -debug3: Ignored env JENV_SHELL -debug3: Ignored env PWD -debug3: Ignored env LOGNAME -debug3: Ignored env MANPATH -debug3: Ignored env VIRTUALENVWRAPPER_VIRTUALENV -debug3: Ignored env HOME -debug1: Sending env LANG = en_US.UTF-8 -debug2: channel 0: request env confirm 0 -debug3: send packet: type 98 -debug3: Ignored env SECURITYSESSIONID -debug3: Ignored env PYTHONSTARTUP -debug3: Ignored env TMPDIR -debug3: Ignored env CLICOLOR -debug3: Ignored env VIRTUALENVWRAPPER_VIRTUALENV_ARGS -debug3: Ignored env FIGNORE -debug3: Ignored env TERM -debug3: Ignored env USER -debug3: Ignored env SHLVL -debug3: Ignored env XPC_SERVICE_NAME -debug3: Ignored env Apple_PubSub_Socket_Render -debug3: Ignored env PATH -debug3: Ignored env JENV_LOADED -debug3: Ignored env GOPATH -debug3: Ignored env TERM_PROGRAM -debug3: Ignored env _ -debug3: Ignored env __CF_USER_TEXT_ENCODING -debug2: channel 0: request shell confirm 1 -debug3: send packet: type 98 -debug2: channel_input_open_confirmation: channel 0: callback done -debug2: channel 0: open confirm rwindow 0 rmax 32768 -debug3: receive packet: type 99 -debug2: channel_input_status_confirm: type 99 id 0 -debug2: PTY allocation request accepted on channel 0 -debug2: channel 0: rcvd adjust 2097152 -debug3: receive packet: type 99 -debug2: channel_input_status_confirm: type 99 id 0 -debug2: shell request accepted on channel 0 -Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-1009-gcp x86_64) - - * Documentation: https://help.ubuntu.com - * Management: https://landscape.canonical.com - * Support: https://ubuntu.com/advantage - - Get cloud support with Ubuntu Advantage Cloud Guest: - http://www.ubuntu.com/business/services/cloud - -126 packages can be updated. -0 updates are security updates. 
- - -*** System restart required *** - PRODUCTION REDIS_CHECKCMD_ERROR nelsnelson@redis-01-db-gprd.c.gitlab-production.internal:~$ exit -debug3: receive packet: type 98 -debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 -debug3: receive packet: type 98 -debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0 -debug2: channel 0: rcvd eow -debug2: channel 0: chan_shutdown_read (i0 o0 sock -1 wfd 6 efd 8 [write]) -debug2: channel 0: input open -> closed -debug3: receive packet: type 96 -debug2: channel 0: rcvd eof -debug2: channel 0: output open -> drain -debug3: receive packet: type 97 -debug2: channel 0: rcvd close -debug3: channel 0: will not send data after close -logout -debug3: channel 0: will not send data after close -debug2: channel 0: obuf empty -debug2: channel 0: chan_shutdown_write (i3 o1 sock -1 wfd 7 efd 8 [write]) -debug2: channel 0: output drain -> closed -debug2: channel 0: almost dead -debug2: channel 0: gc: notify user -debug2: channel 0: gc: user detached -debug2: channel 0: send close -debug3: send packet: type 97 -debug2: channel 0: is dead -debug2: channel 0: garbage collecting -debug1: channel 0: free: client-session, nchannels 1 -debug3: channel 0: status: The following connections are open: - #0 client-session (t4 r0 i3/0 o3/0 e[write]/0 fd -1/-1/8 sock -1 cc -1) - -debug3: send packet: type 1 -debug3: fd 1 is not O_NONBLOCK -Connection to redis-01-db-gprd.c.gitlab-production.internal closed. -Transferred: sent 3176, received 3608 bytes, in 2.9 seconds -Bytes per second: sent 1112.9, received 1264.3 -debug1: Exit status 0 -``` - -## Failure connecting to dr bastion (lb-bastion.dr.gitlab.com): - -```{sh} -$ ssh -vvv lb-bastion.dr.gitlab.com -OpenSSH_7.9p1, LibreSSL 2.7.3 -debug1: Reading configuration data /Users/nelsnelson/.ssh/config -debug1: /Users/nelsnelson/.ssh/config line 21: Applying options for lb-bastion.dr.gitlab.com -debug1: /Users/nelsnelson/.ssh/config line 72: Applying options for * -debug1: Reading configuration data /etc/ssh/ssh_config -debug1: /etc/ssh/ssh_config line 48: Applying options for * -debug1: Connecting to lb-bastion.dr.gitlab.com port 22. 
-^C - -$ ssh -vvv console-01-sv-dr.c.gitlab-dr.internal -OpenSSH_7.9p1, LibreSSL 2.7.3 -debug1: Reading configuration data /Users/nelsnelson/.ssh/config -debug1: /Users/nelsnelson/.ssh/config line 26: Applying options for *.gitlab-dr.internal -debug1: /Users/nelsnelson/.ssh/config line 72: Applying options for * -debug1: Reading configuration data /etc/ssh/ssh_config -debug1: /etc/ssh/ssh_config line 48: Applying options for * -debug1: Executing proxy command: exec ssh lb-bastion.dr.gitlab.com -W console-01-sv-dr.c.gitlab-dr.internal:22 -debug1: identity file /Users/nelsnelson/.ssh/id_rsa type 0 -debug1: identity file /Users/nelsnelson/.ssh/id_rsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_dsa type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_dsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ecdsa type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ecdsa-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ed25519 type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_ed25519-cert type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_xmss type -1 -debug1: identity file /Users/nelsnelson/.ssh/id_xmss-cert type -1 -debug1: Local version string SSH-2.0-OpenSSH_7.9 -^C -``` - -## Relevent ~/.ssh/config sections: - -```{sh} -# GCP production bastion host -Host lb-bastion.gprd.gitlab.com - PreferredAuthentications publickey - User nelsnelson - -# gprd boxes -Host *.gitlab-production.internal - PreferredAuthentications publickey - ProxyCommand ssh lb-bastion.gprd.gitlab.com -W %h:%p - -# GCP staging bastion host -Host lb-bastion.dr.gitlab.com - PreferredAuthentications publickey - User nelsnelson - -# dr boxes -Host *.gitlab-dr.internal - PreferredAuthentications publickey - ProxyCommand ssh lb-bastion.dr.gitlab.com -W %h:%p -``` - -All of the examples have been attempted with solely the four stanzas found in the above `~/.ssh/config` snippet.",5.0 -21934822,2019-06-15 08:31:57.440,Health checks failing on inactive web nodes,"Reported by @andrewn via Slack: https://log.gitlab.net/goto/b6cba9be8f89b80456fd17b614f363e9. - -We put the 8 new nodes in MAINT status yesterday so that HAProxy wouldn't do health checks. But it looks like we didn't consider the fact that GLB also has health checks and reports metrics. - -Went ahead and stopped the nodes (web-29 through web-36). Next week (Week of June 17th), we will be driving a couple of different efforts around separating API traffic and also scaling pgbouncer. Depending on the outcome, if we decide to scale up web fleet again I'd like to use these 8 nodes right away. If not, we can do proper terraform MR and remove these nodes from GCP.",1.0 -21934056,2019-06-15 06:21:37.734,Discussion: Notifications of Terraform module version bumps,"When someone makes a change to a terraform module, it is no longer just `gitlab-com-infrastructure/environments/*` that needs to be updated. We need a way to reliably notify all primary users of the module that it has been updated. - -Some initial ideas are: - -1. An MR template. - - Anyone with an interest can add their name to the /cc line -2. The CODEOWNERS file - - I'm not sure this will work in all cases -3. Update the readme with links to the places that need to be changed - - Who is responsible for changing it? The merger? Do they just open an issue? Or submit MR's everywhere? -4. 
Someone's better idea - -This probably effects myself, @skarbek and @Craig at first, but will eventually touch everyone in @gitlab-com/gl-infra, so the more input we can get the better. We may also want to think about doing something similar with chef.",1.0 -21927335,2019-06-14 19:13:00.937,Increase TTL on packages CloudFront,"Currently the default TTL for our PackageCloud CloudFront distribution is set to 86400. Since packages have unique names, we should increase this to days in order to get better cache performance.",2.0 -21926649,2019-06-14 18:28:46.689,Document CloudFront and PackageCloud,We need to document what packagecloud and cloudfront look like.,2.0 -21917102,2019-06-14 13:12:31.602,Configure Monitoring for Vault,Configure Vault monitoring via Prometheus.,5.0 -21916659,2019-06-14 12:56:56.517,Configure Vault Service,"Configure Vault Service. - -* unsealing keys -* backends -* roles -* ACLs -* ...",8.0 -21916317,2019-06-14 12:48:41.181,Setup automated Vault storage bucket backups,Develop automated script / plan for backing up (bucket versioning?),3.0 -21916186,2019-06-14 12:43:48.346,Deploy GCS Bucket for Vault Storage,Deploy the GCS Bucket for Vault Storage.,2.0 -21915928,2019-06-14 12:32:56.629,Create the K8s cluster for vault with terraform,"Create the K8s cluster for vault using terraform and deploy the K8s nodes. - -We probably can make use of https://github.com/sethvargo/vault-on-gke for that.",3.0 -21915790,2019-06-14 12:27:24.164,document vault gcp project service accounts,Document the service accounts of the vault gcp project created in #6973. Make sure the IAM permissions are scoped right and restricted to a minimum.,3.0 -21915593,2019-06-14 12:20:42.889,Create GCP project for Vault,"Create a new GCP project for vault. - -* Resolve IP Addressing and Routing via projects -* Look into cross-project DNS topology (beta GCP/GKE) - -Maybe we can use https://github.com/sethvargo/vault-on-gke to get the project setup with the right permissions etc.",3.0 -21894026,2019-06-13 22:05:06.893,Update Grafana sync key,"Per https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6952#note_180920686, we need to update the key used [by chef](https://gitlab.com/gitlab-cookbooks/gitlab-grafana/blob/master/recipes/export_dashboards.rb#L41) to use the updated key from 1Password in the `dashboards-sync Grafana API key` item within the `Team` vault.",1.0 -21889446,2019-06-13 19:02:07.673,List known potential gotchas with Elasticsearch indexing of GitLab Projects,"As an SRE, I want to understand the known or anticipated types of operational issue that may come with the upcoming addition of Elasticsearch indexing as a GitLab 12.1 feature, so that we can better interpret new observed behaviors or changes in workload for related subsystems. - -*Background:* - -While listening to today's *excellent* ""Elasticsearch Deep Dive"" presentation by @mdelaossa (see [video](https://drive.google.com/open?id=13wXpOL9GbZFUCw4DaRxoE_GmNjf1jDT5) and [slides](https://docs.google.com/presentation/d/1H-pCzI_LNrgrL5pJAIQgvLX8Ji0-jIKOg1QeJQzChug/edit?usp=sharing) and [notes and questions](https://docs.google.com/document/d/1cwo5n3XYaTDAJ48sMZJ8bHQVJ0RD5dlsdf28L96OZQw/edit#)), I realized a few areas where the introduction of ES indexing may significantly change the workload or scaling requirements of related subsystems (ES nodes themselves, Gitaly storage nodes, Sidekiq pools, Postgres, etc.). And of course as will any new technology adoption, there are new failure modes to be aware of (e.g. 
ES node down/dead, shard redundancy, ES quorum policy for writes/reads, impact of ES availability on upstream clients, etc.). - -This issue aims to collect a list of know topics for @gitlab-com/gl-infra to be aware of. It's just for discussion and brainstorming, not for resolving any of the topics raised. - -These topics may eventually be added to a runbook once they are actionable.",3.0 -21889206,2019-06-13 18:52:15.741,Need to update pg_hba.conf for version/license/customers restored backups,"Currently the data team is pulling data from restored backups of the version/license/customers DBs. We are going to start pulling from a different kube cluster and are getting an error around the pg_hba.conf. We need this to be updated so that we can start pulling from the new cluster. Just let us know what you need to know from us and we'll provide/help with whatever we can - -cc @tayloramurphy",2.0 -21883491,2019-06-13 15:11:56.661,Support different versions of terraform in our terraform CI/CD tests,"Currently, our tf_format test uses a global terraform version specified at the root of the repository for it's lint tests. For some earlier versions of ```terraform fmt```, this is ok. But the newest versions of terraform format in such a way that those files fail using the specified older global version. - -Since each environment can have it's own version of terraform specified, we should update our tests to use that specific version. This will allow us to use newer versions without having to drag the entire repo up into new versions of terraform.",2.0 -21863206,2019-06-13 08:07:04.084,Automate Grafana datasource provisioning for public dashboards,"Since Grafana 5.x, it supports provisioning via yaml files on disk. We should add support to auto-provision the correct datasources for the public dashboard server. - -See the upstream docs: https://grafana.com/docs/administration/provisioning/#datasources",3.0 -21857648,2019-06-13 03:47:53.339,Update `Query source` to `Global` in Grafana dashboards that are not pulling any metrics,"We have dashboards like this: https://dashboards.gitlab.net/d/_MKRXrSmk/pull-mirrors?orgId=1&refresh=30s&from=now-7d&to=now&fullscreen&edit&panelId=13 where the data source isn't populating anything. - -Per Slack discussion: https://gitlab.slack.com/archives/CB3LSMEJV/p1560355187114000, if we change the `Query` source to `Global` it works. - -See below Before & After comparison. - -BEFORE -![grafana_dashboard](/uploads/b819524fa0977840bd4fa42c4789695f/grafana_dashboard.png) - -AFTER -![grafana_dashboard_after](/uploads/4cc9fd7c03ec289ec7a1236a754af209/grafana_dashboard_after.png)",7.0 -21855579,2019-06-13 01:36:49.638,Register gitaly service on consul,"In order for us to be able to do dynamic inventory for consul (e.g. for https://gitlab.com/gitlab-org/release/framework/issues/354) we need to register the Gitaly service with a gRPC health check. - -/cc @dawsmith",2.0 -21851219,2019-06-12 19:05:17.710,Re-create public dashboards node,"While applying terraform changes the node `dashboards-com-01-inf-ops.c.GitLab-ops.internal` was left in an inconsistent state, and now console logs show filesystem errors related to the data & log volumes. 
For simplicity, the node should be removed/recreated, and a chef converge run to finish configuring all filesystems / application settings.",1.0 -21827850,2019-06-12 08:10:57.830,Postgres restore instances: OS login authentication broken,"The GCE instances created by https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd use OS Login to authenticate users via SSH. This is currently broken for @NikolayS and maybe others, though it works for @abrandl. We should fix that.",1.0 -21821667,2019-06-12 03:36:37.495,RCA: High Rails Error Rate on Front End,"Received alerts for elevated error rates: `Rails is returning 5xx errors at a high rate for git . Traffic is impacted and users are likely seeing 500 errors.` - -Service(s) affected : ~""Service:GitLab Rails"" ~""Service:Git"" ~""Service:API"" ~""Service:Web"" -Team attribution : @gitlab-com/gl-infra -Minutes downtime or degradation : 178 minutes - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? portions of the api, web, and git fleet were not registered with the load-balancers, resulting in increased 5xx error rates for those services -- Who was impacted by this incident? GitLab.com users / external customers -- How did the incident impact customers? 5xx errors attempting to make API calls, or perform git operations via HTTPS -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? on-call engineer received pagerduty alert(s) -- Did alarming work as expected? yes, though some alerts were ignored initially due to concurrence with ongoing deployment -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) 
- -## Timeline - -2019-06-11 - -* 22:55 UTC - deployment started for hotpatch https://ops.gitlab.net/gitlab-com/gl-infra/patcher/merge_requests/100 -* 23:03 UTC - deployment fails, engineer experiences issues adding/removing backend servers from haproxy -* 23:40 UTC - deployment restarted -* 23:44 UTC - first evidence in grafana of imbalanced distribution for connections to https_git servers - -2019-06-12 - -* 00:16 UTC - first instance of alert fires for increased error rate -* 01:06 UTC - on-call engineer alerted of increased error rates -* 01:09 UTC - alert acknowledged by on-call engineer -* 01:11 UTC - alert auto-resolves -* 01:19 UTC - alert recurs -* 01:24 UTC - alert auto-resolves -* 01:28 UTC - alert recurs -* 01:28 UTC - alert acknowledged by on-call engineer -* 01:31 UTC - investigation started -* 01:32 UTC - inconsistent load distribution identified as probable cause -* 01:35 UTC - IMOC notified via slack -* 01:45 UTC - IMOC paged via slack -* 01:57 UTC - impact to other backends identified -* 01:59 UTC - team begins working to register all affected backend servers with haproxy load-balancers -* 02:35 UTC - git service fully restored -* 02:42 UTC - api and web services restored - -## Root Cause Analysis - -### 5-whys -The front-end fleet experienced elevated error rates and disruption of service. - -1. Why? - portions of the api, web, and git fleet were not registered with the load-balancers -2. Why? - they were not re-registered properly during/after a deployment -3. Why? - the first hot-patch deployment pipeline was canceled -4. Why? - the pipeline appeared hung -5. Why? - the script incorrectly detected nodes' status - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -21813808,2019-06-11 20:16:19.291,Add generic Linux observability tools to our hosts,"As an SRE or DBRE, I want our Linux hosts to include tools for ad hoc observation, so that I can collect short-term metrics and investigate behaviors that are impractical or out of scope for our general purpose monitoring. - -Background: -Prometheus provides a variety of metrics collected periodically. This serves us well for most purposes. However, sometimes we need more granular or narrowly scoped instrumentation for ad hoc investigations. 
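-
-To make this concrete, the kind of one-off polling we have in mind looks roughly like the following (flags and interface name are illustrative; the candidate packages are listed below):
-
-```
-# per-device I/O latency, queue depth and utilization at 1-second resolution
-iostat -dx 1
-# TCP connection open rates and retransmits, sampled every second (sysstat's sar)
-sar -n TCP,ETCP 1
-# live view of which remote hosts are using the most bandwidth (interface name illustrative)
-iftop -i eth0
-```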
- -Examples: -* Polling disk I/O statistics at 1-second intervals is helpful when analyzing suspected bursts of I/O contention that get smoothed over at less frequent polling intervals. -* Measuring variation in memory access latency on a VM that runs an in-memory data store (e.g. Redis) can reveal an otherwise opaque root cause to transient query response time spikes. -* Measuring trends in TCP connection open/close events can help inform tuning the kernel's TCP stack to more gracefully handle traffic spikes. It can also lead to improving our general purpose diagnostic monitoring, to alert us when approaching saturation of certain finite resources (e.g. TCP connection table, pool of available client ports, etc.). - -In this Issue, let's build a wish list of tools we'd like to have available on our Linux hosts. To start us off, we have this list from the Slack discussion: -* iostat: Per block device I/O statistics, including queue depth, %busy, mean read/write latency, etc. -* sysstat: Provides ""sar"" utility, which gives wide variety of usage statistics, system-wide or for specific PIDs. -* [linux-tools](http://www.brendangregg.com/perf.html): Lightweight tracing facility, used via the perf-suite tools (perf, perf-top, perf-mem, perf-trace, perf-ftrace, etc.), with eBPF support for recent kernels. -* iftop: ""top"" for network flows, showing which remote IPs are currently using the most network throughput. -* ifstat: ""vmstat"" for network interfaces, polling network throughput on each interface.",2.0 -21809882,2019-06-11 18:30:36.667,Investigate git storage nodes using more disk space than expected,"Some of our Gitaly storage nodes have a significant gap between actual disk space used versus what our ""project_statistics"" table reports. - -Try to find what dominates this discrepancy. Time-box the discovery effort, as we are soon planning to significantly change our git storage architecture. This investigation aims to understand the nature and magnitude of the measurement error, but it mostly matters if it's caused by a systematic accounting error that can be improved upon. - -Example: -Host: file-25-stor-gprd (a.k.a. ""nfs-file25"") -Actual disk space used: 14 TB -Reported disk space used: 12 TB - -For more background on (re)discovering this discrepancy, see: -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6796#note_179818651",3.0 -21803369,2019-06-11 16:05:05.869,Timing for offline upgrade,We can do an offline upgrade using `pg_upgrade`. We should know how long this is going to take to figure out if the downtime would be acceptable or not.,2.0 -21801140,2019-06-11 15:01:42.277,Update build-runner s3 settings before they lose support in version 12.0 of GitLab runner.,"We need to updated the build runner cache settings to the new format before 12.0 is rolled out to the runners. Specifically someone needs to go in and change the vault secrets. - -The build runners for the omnibus packages are throwing deprecation warnings on startup about their use of the runner cache s3 parameters. The configuration they are using is deprecated and being removed in 12.0 - -An MR for updating the visible settings is here: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1208 - -But the access secrets are stored in vault and also need to be updated. 
-
-They need to be moved from where they are to be nested one level further down under the `s3` key.",1.0
-21768290,2019-06-10 19:04:30.344,Connect monitoring from Kubernetes to our existing Infrastructure,"Determine how to connect our existing monitoring infrastructure to the new kubernetes clusters' monitoring solution.
-
-Pick up where the conversation left off here: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6660#note_179679854",5.0
-21768260,2019-06-10 19:03:18.221,Deploy monitoring infrastructure inside of Kubernetes,"With new kubernetes clusters, we'll need a way to monitor them. Utilize this issue to deploy the necessary services/applications to monitor our clusters and the services that run on them.
-
-Pick up the conversation of this from the following location: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6660#note_177820136",5.0
-21768064,2019-06-10 18:55:30.657,Update consul to v1.5.1,"Our current version ([1.0.3](https://gitlab.com/gitlab-cookbooks/gitlab_consul/blob/d94d678ab95f41d2528f1a032e430410e43bd854/attributes/default.rb#L1)) doesn't allow for multi-tag querying, which we'll need for example to select a canary api node (which has tags `""cny""` and `""api""`). That feature was introduced in v1.3.0, but there's no impediment to getting on the latest version (some breaking changes, but none that affect us. See https://github.com/hashicorp/consul/blob/master/CHANGELOG.md).
-
-/cc @dawsmith",1.0
-21658381,2019-06-07 04:25:51.825,Cleanup Chef ACL Permissions,"In working to grant `read only` access to Chef resources it was discovered that our default permission model was incredibly wide, such that anyone who was a 'user' in the Chef server could do anything except add another user (delete nodes, update roles, update environments, etc).
-
-This needs to be cleaned up and locked down so that by default users have no access in the `gitlab` chef organization unless granted it.",6.0
-21653164,2019-06-06 20:39:48.568,Camo proxy: secure configuration,"From @msmiley on https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6836#note_178684472:
-We should discuss rate limiting, concurrency, and request timeouts for Camo's proxied requests. And we should decide whether to apply those limits at Camo's ingress load balancer or at each Camo instance's egress. I don't yet have a well-formed threat model, but off the cuff, we want to avoid making Camo attractive as an anonymizing proxy for attackers (abuse via us against a victim's domain) while also preventing attackers from too easily saturating our pool of Camo instances (DoS against us).",3.0
-21652190,2019-06-06 19:39:50.043,Packagecloud Upgrade REQUIRED,"**Our version of packagecloud, 2.0.6, will stop working on the 24th of June.**
-
-This is due to AWS deprecating support for the API which packagecloud is utilizing. An upgrade is available. We should upgrade before this due date.
-
-The email I received:
-
->>>
-AWS S3 is deprecating the AWS S3 SigV2 API on June 24, 2019 [1], which packagecloud:enterprise 2.0.6 and earlier currently use. New versions of packagecloud:enterprise (versions 2.0.7 and higher) use AWS CloudFront for serving package objects, instead of using AWS S3 directly. This upgrade significantly improves package download speeds and avoids the S3 API deprecation.
-
-To continue using packagecloud:enterprise after June 24th, 2019, you will need to upgrade to version 2.0.8, which includes support for CloudFront. 
On packagecloud:enterprise version 2.0.8, you can optionally enable or disable CloudFront. - -Upgrading to packagecloud:enterprise 2.0.8 will allow you to create a CloudFront distribution and test it before AWS deprecates the S3 API in a few weeks. - -Note that our next release of packagecloud:enterprise, version 3.0.0, will require CloudFront. - -Creating a new CloudFront distribution can take 30 minutes or longer. We strongly urge all customers to upgrade to 2.0.8 now so that they can create their CloudFront distributions while still serving packages from AWS S3 before the deprecation takes place. This upgrade will allow for a seamless switch-over with no downtime. - -Waiting to upgrade to packagecloud:enterprise until June 24, 2019, will almost certainly result in downtime as you will need to wait for the creation of the CloudFront distribution to enable downloading of package objects. - -Users should follow the upgrade instructions: https://packagecloud.atlassian.net/wiki/spaces/ENTERPRISE/pages/15269926/Upgrading - -Once the 2.0.8 upgrade is complete, users should follow the CloudFront setup instructions: -https://packagecloud.atlassian.net/wiki/spaces/ENTERPRISE/pages/501972993/AWS+CloudFront+Setup - -Contact support@packagecloud.io if you have any questions or problems. - -Happy packaging! - -[1]: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#UsingAWSSDK-sig2-deprecation ->>> - -/cc @gitlab-com/gl-infra -/cc @gitlab-org/distribution",4.0 -21647603,2019-06-06 16:14:28.537,Evaluate viability of using magnetic HDDs instead of SSDs,"On Slack, @glopezfernandez mentioned the possibility of saving money by using cheaper, slower traditional HDDs. We should evaluate whether or not this is possible with production load, relying on ARC (RAM and L2 on NVMe local SSDs).",2.0 -21647032,2019-06-06 15:51:53.309,Upgrade Ruby on license.gitlab.com,https://gitlab.com/gitlab-cookbooks/cookbook-license-gitlab-com/merge_requests/11#note_178580713,1.0 -21639495,2019-06-06 12:30:32.409,Remove CI infrastructure from DigitalOcean,"DO has served us well for first years of having autoscaled Shared Runners on GitLab.com. But with our transition to GCP, connected with transition of CI infrastructure also there, it became a backup environment. - -After migrating the CI infrastructure to GCP we've added several updates to increase the efficiency of that environment and to make the configuration simpler. So at this moment configuration of the CI infrastructure in DO is not the same as in GCP. - -During last year since the transition, DO environment was used twice - both times during some problems with GCP environment. And both times unsuccessful. The bigger load that we've got since migrating to GCP and the different configuration of the infrastructure made DO environment in current shape to be no more usable for GitLab.com CI infrastructure. At this moment we get a lot of alerting noise (e.g. related to cache server) from an infrastructure part that is totally not used. - -We still should have some backup strategy. But we should rather think about how we can quickly (re)create the CI infrastructure in another region in GCP (e.g. with the idea like #4813). - -With all of this said, I think it's time to send our CI infrastructure in DO for a well-deserved retirement. 
- ---- - -What should be done for DigitalOcean CI infrastructure termination: - -- [x] Cleanup Prometheus configuration: - - [x] `prometheus-server` role, `node` job, `public_hosts` list: - - [x] remove `runners-cache-5.gitlab.com` - - [x] `prometheus-server` role, `prometheus` job, `public_hosts` list: - - [x] remove `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - - [x] `gprd-infra-prometheus-server.json` role, `ci-node` job, `role_name` list: - - [x] remove `gitlab-runners-consul` - - [x] remove `runners-cache-server` - - [x] `gprd-infra-prometheus-server.json` role, `ci-node` job, `public_hosts` list: - - [x] remove `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `gprd-infra-prometheus-server.json` role, `blackbox` job, `static_configs->targets` list: - - [x] remove `http://runners-cache-5.gitlab.com/minio/login` - - [x] remove `http://runners-cache-5.gitlab.com:1443/v2` - - [x] remove `http://runners-cache-5.gitlab.com:5000/v2` - - [x] remove `http://runners-cache-5.gitlab.com:9000/minio/login` - - [x] remote `https://prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `gprd-infra-prometheus-server.json` role, `shared-runners` job, `static_configs->targets` list: - - [x] remove `shared-runners-manager-1.gitlab.com:9402` - - [x] remove `shared-runners-manager-2.gitlab.com:9402` - - [x] `gprd-infra-prometheus-server.json` role, `shared-runners-gitlab-org` job, `static_configs->targets` list: - - [x] remove `gitlab-shared-runners-manager-1.gitlab.com:9402` - - [x] remove `gitlab-shared-runners-manager-2.gitlab.com:9402` - - [x] `gprd-infra-prometheus-server.json` role, `private-runners` job, `static_configs->targets` list: - - [x] remove `private-runners-manager-1.gitlab.com:9402` - - [x] remove `private-runners-manager-2.gitlab.com:9402` - - [x] `gprd-infra-prometheus-server.json` role, `prometheus` job, `public_hosts` list: - - [x] remove `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `gprd-infra-prometheus-server.json` role, `ci-prometheus-fleet` job, `static_configs->targets` list: - - [x] remove `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `gprd-infra-prometheus-server.json` role - - [x] remove `digitalocean_gitlab_bv` job - - [x] remove `digitalocean_gitlab_ci` job - - [x] remove `digitalocean_hanging_droplets_cleaner` job - - [x] remove `digitalocean_droplet_zero_machines_cleaner` job - - [x] remove `runners-cache-registry` job - - [x] remove `runners-cache-server` job - - [x] remove `runners-cache-minio` job - - - [x] `ops-infra-prometheus-server` role, `node` job, `public_hosts` list: - - [x] remove `shared-runners-manager-1.gitlab.com` - - [x] remove `shared-runners-manager-2.gitlab.com` - - [x] remove `gitlab-shared-runners-manager-1.gitlab.com` - - [x] remove `gitlab-shared-runners-manager-2.gitlab.com` - - [x] remove `private-runners-manager-1.gitlab.com` - - [x] remove `private-runners-manager-2.gitlab.com` - - [x] remove `runners-cache-5.gitlab.com` - - [x] remove `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] remove `consul-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] remove `consul-02.nyc1.do.gitlab-runners.gitlab.net` - - [x] remove `consul-03.nyc1.do.gitlab-runners.gitlab.net` - - [x] `ops-infra-prometheus-server` role, `prometheus` job, `public_hosts` list: - - [x] remove `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `ops-infra-prometheus-server` role, `thanos` job, `public_hosts` list: - - [x] remove `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` -- [x] Chef cleanup: - - [x] remove nodes: - - [x] 
`shared-runners-manager-1.gitlab.com` - - [x] `shared-runners-manager-2.gitlab.com` - - [x] `gitlab-shared-runners-manager-1.gitlab.com` - - [x] `gitlab-shared-runners-manager-2.gitlab.com` - - [x] `private-runners-manager-1.gitlab.com` - - [x] `private-runners-manager-2.gitlab.com` - - [x] `runners-cache-5.gitlab.com` - - [x] `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `consul-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `consul-02.nyc1.do.gitlab-runners.gitlab.net` - - [x] `consul-03.nyc1.do.gitlab-runners.gitlab.net` - - [x] remove roles: - - [x] `gitlab-runner-srm-do` - - [x] `gitlab-runner-gsrm-do` - - [x] `gitlab-runner-prm-do` - - [x] `runners-cache-5-gitlab-com` - - [x] `runners-cache-server` - - [x] `prometheus-blackbox-runners-cache` - - [x] `gitlab-runners-prometheus-do-nyc1` - - [x] `gitlab-runners-consul` - - [x] `gitlab-runners-consul-firewall` - - [x] `gitlab-runners-consul-do-nyc1` - - [x] remove chef vaults: - - [x] `gitlab-runner-srm-do ci-prd` - - [x] `gitlab-runner-gsrm-do ci-prd` - - [x] `gitlab-runner-prm-do ci-prd` - - [x] `runners-cache-5-gitlab-com ci-prd` - - [x] `gitlab-runners-prometheus-do-nyc1 ci-prd` - - [x] `gitlab-runners-consul client` - - [x] `gitlab-runners-consul cluster` -- [x] unregister Runners: - - [x] dev.gitlab.org: - - [x] Shared Runners: - - [x] `private-runners-manager-1.gitlab.com` (ID: 75) - - [x] `private-runners-manager-2.gitlab.com` (ID: 96) - - [x] `gitlab-shared-runners-manager-1.gitlab.com` (ID: 110) - - [x] `gitlab-shared-runners-manager-2.gitlab.com` (ID: 111) - - [x] GitLab.com: - - [x] Shared Runners: - - [x] `shared-runners-manager-1.gitlab.com` (ID: 40786) - - [x] `shared-runners-manager-2.gitlab.com` (ID: 40788) - - [x] `gitlab-shared-runners-manager-1.gitlab.com` (ID: 37398) - - [x] `gitlab-shared-runners-manager-2.gitlab.com` (ID: 37397) - - [x] `gitlab-org` Group Runners: - - [x] `private-runners-manager-1.gitlab.com` (ID: 395501) - - [x] `private-runners-manager-2.gitlab.com` (ID: 395502) - - [x] `gitlab-com` Group Runners: - - [x] `private-runners-manager-1.gitlab.com` (ID: 395497) - - [x] `private-runners-manager-2.gitlab.com` (ID: 395498) - - [x] `charts` Group Runners: - - [x] `private-runners-manager-1.gitlab.com` (ID: 396095) - - [x] `private-runners-manager-2.gitlab.com` (ID: 396096) -- [x] DigitalOcean resources cleanup: - - [x] remove nodes: - - [x] from `GitLab Prod` team: - - [x] `shared-runners-manager-1.gitlab.com` - - [x] `shared-runners-manager-2.gitlab.com` - - [x] `gitlab-shared-runners-manager-1.gitlab.com` - - [x] `gitlab-shared-runners-manager-2.gitlab.com` - - [x] `private-runners-manager-1.gitlab.com` - - [x] `private-runners-manager-2.gitlab.com` - - [x] `runners-cache-5.gitlab.net` + volumes - - [x] `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` + volumes - - [x] `consul-01.nyc1.do.gitlab-runners.gitlab.net` - - [x] `consul-02.nyc1.do.gitlab-runners.gitlab.net` - - [x] `consul-03.nyc1.do.gitlab-runners.gitlab.net` - - [x] cleanup runner base images: - - [x] in `GitLab B.V.` team - - [x] in `GitLab CI` team -- [x] update our [base image building configuration](https://dev.gitlab.org/cookbooks/packer-runner-machines/) to stop using DO -- [x] other: - - [x] Grafana cleanup: - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m&fullscreen&panelId=62 - remove panel - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m&fullscreen&panelId=65 - remove panel - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m&fullscreen&panelId=68 - 
remove panel - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m&fullscreen&panelId=61 - remove panel - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m - remove whole `Cache servers` row - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m - remove whole `Hanging droplets cleaner` row - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m - remove whole `Droplet zero machines cleaner` row - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m - remove `Cache Server` variable - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m - remove `Hanging doplets cleaner` variable - - [x] https://dashboards.gitlab.net/d/000000159/ci?orgId=1&refresh=5m - remove `Droplet zero machines cleaner` variable - - [x] https://dashboards.gitlab.net/d/sXVh89Imk/ci-autoscaling-providers?orgId=1&refresh=5m - remove whole `DigitalOcean` row",5.0 -21614481,2019-06-05 22:32:59.839,Giving read-only access to chef,"We have ~~2~~ 3 access-requests open for read-only access to chef. - -1. https://gitlab.com/gitlab-com/access-requests/issues/958 -1. https://gitlab.com/gitlab-com/access-requests/issues/286 -1. gitlab-com/access-requests#1110 - -Gathering this into an issue. -Looks like we could do: -https://github.com/chef/knife-acl#create-read-only-group-with-read-only-access - -@northrup appears to have made a read-only group, but we need to do the other knife-acl commands to finish making up the group and test with @pharrison and @asaba",3.0 -21613757,2019-06-05 21:35:59.849,Allow users in staging group to ssh into staging nodes,`openssh.allow_groups` in some `gstg-*` roles is not including 'staging' to allow ssh access for users in the staging group (e.g. the `nessus-staging` user to allow authenticated scans).,2.0 -21610917,2019-06-05 19:09:36.409,Centralize/include terraform module pipeline configs,"I've created a new project https://ops.gitlab.net/gitlab-com/gl-infra/pipeline-templates with the initial intent to centralize common elements for pipelines/jobs running under https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules, though the same principle could easily be extended to other aspects of the code/infrastructure we manage. - -1. [x] Create base `.gitlab-ci.yml` template for terraform module pipelines -1. [x] Validate template CI configuration in a terraform module pipeline -1. [x] Document pattern in `https://ops.gitlab.net/gitlab-com/gl-infra/pipeline-templates/blob/master/README.md` -1. 
[x] Propagate changes to remaining modules - -## Ops MRs - -### Getting the pipeline template working - -- [pipeline-templatesdone!2](https://ops.gitlab.net/gitlab-com/gl-infra/pipeline-templates/merge_requests/2) -- [pipeline-templatesdone!3](https://ops.gitlab.net/gitlab-com/gl-infra/pipeline-templates/merge_requests/3) -- [pipeline-templatesdone!4](https://ops.gitlab.net/gitlab-com/gl-infra/pipeline-templates/merge_requests/4) -- [pipeline-templatesdone!5](https://ops.gitlab.net/gitlab-com/gl-infra/pipeline-templates/merge_requests/5) -- [pipeline-templatesdone!6](https://ops.gitlab.net/gitlab-com/gl-infra/pipeline-templates/merge_requests/6) - -### Rolling template usage out to terraform-modules projects - -- [x] [bootstrap!7] -- [x] [cloud-nat!8] -- [x] [cloud-sql!10] -- [x] [database-backup-bucket!8] -- [x] [generic-stor!15] -- [x] [generic-stor-redis!15] -- [x] [generic-stor-with-group!12] -- [x] [generic-sv-sidekiq!15] -- [x] [generic-sv-with-group!10] -- [x] [gke!23] -- [x] [https-lb!4] -- [x] [monitoring-lb!3] -- [x] [monitoring-with-count!12] -- [x] [project!20] -- [x] [pubsubbeat!11] -- [x] [stackdriver!3] -- [x] [static-objects-cache!2] -- [x] [storage-buckets!15] -- [x] [tcp-lb!3] -- [x] [vpc!8] -- [x] [web-iap!4] - -[bootstrap!7]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/bootstrap/merge_requests/7 -[cloud-nat!8]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/cloud-nat/merge_requests/8 -[cloud-sql]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/cloud-sql/merge_requests/10 -[cloud-sql!10]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/cloud-sql/merge_requests/10 -[database-backup-bucket!8]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/database-backup-bucket/merge_requests/8 -[generic-stor!15]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor/merge_requests/15 -[generic-stor-redis!15]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor-redis/merge_requests/15 -[generic-stor-with-group!12]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor-with-group/merge_requests/12 -[generic-sv-sidekiq!15]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-sv-sidekiq/merge_requests/15 -[generic-sv-with-group!10]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-sv-with-group/merge_requests/10 -[gke!23]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/gke/merge_requests/23 -[https-lb!4]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/https-lb/merge_requests/4 -[monitoring-lb!3]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/monitoring-lb/merge_requests/3 -[monitoring-with-count!12]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/monitoring-with-count/merge_requests/12 -[project!20]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project/merge_requests/20 -[pubsubbeat!11]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/pubsubbeat/merge_requests/11 -[stackdriver!3]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/stackdriver/merge_requests/3 -[static-objects-cache!2]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/static-objects-cache/merge_requests/2 -[storage-buckets!15]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/storage-buckets/merge_requests/15 
-[tcp-lb!3]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/tcp-lb/merge_requests/3 -[vpc!8]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/vpc/merge_requests/8 -[web-iap!4]: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/web-iap/merge_requests/4",2.0 -21609028,2019-06-05 18:04:35.503,New S3 bucket for Greenhouse data extract,"We're going to be using the BI connector for Greenhouse to get data from them. They do nightly CSV dumps to S3. - -I need a bucket with read/write access. `datateam-greenhouse-extract` as a name would be :thumbsup: - -I'll just need the Access Key and Secret Key - -@ahanselka could you help with this?",3.0 -21603010,2019-06-05 14:50:20.116,Update Secrets Management section of Production Architecture in the handbook,"As a result of standing up Vault, our production architecture will have changed. Please update all the relevant documentation and diagrams, including https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#secrets-management",1.0 -21602748,2019-06-05 14:45:00.795,Secrets Management - Vault Design Document,"Review and make any required changes to the design document on [Vault](https://about.gitlab.com/handbook/engineering/infrastructure/library/vault/) and add it to the [engineering/infrastructure/design/](https://about.gitlab.com/handbook/engineering/infrastructure/design/) section of the handbook. - -You may be required to ask questions about the document. That exploration and research should be considered part of this issue. Please close this task only once the design is agreed upon, the link is public, and work is ready to begin.",3.0 -21602281,2019-06-05 14:31:33.200,Benchmark syslog vs logging_collector,"Configs: - -``` -echo "" -log_destination = syslog -logging_collector = off -log_statement = none -log_min_duration_statement = 0 -"" > pg.syslog.conf - -echo "" -log_destination = stderr -logging_collector = off -log_statement = none -log_min_duration_statement = 0 -"" > pg.stderr.collector_off.conf - -echo "" -log_destination = stderr -logging_collector = on -log_statement = none -log_min_duration_statement = 0 -"" > pg.stderr.collector_on.conf - -echo "" -log_destination = csvlog -logging_collector = off -log_statement = none -log_min_duration_statement = 0 -"" > pg.csvlog.collector_off.conf - -echo "" -log_destination = csvlog -logging_collector = on -log_statement = none -log_min_duration_statement = 0 -"" > pg.csvlog.collector_on.conf - -echo "" -log_destination = stderr -log_statement = none -log_min_duration_statement = -1 -"" > pg.no_logs.conf -```",1.0 -21600891,2019-06-05 13:50:42.852,Allow users in ops group to ssh into ops-base-runner hosts,`openssh.allow_groups` in the `ops-base-runner` role is set to `[ci production]` but should be `ci ops production` to allow ssh access for users in the ops group (e.g. the `nessus-ops` user to allow authenticated scans).,3.0 -21592521,2019-06-05 09:34:24.925,DBRE sync meeting scheduling,"The DBRE sync was scheduled for Tuesdays, 2.30pm-3pm UTC. We have a conflict with the Secure&Defend team meeting which starts at 2.45pmUTC and should re-schedule the DBRE sync call. - -As pointed out, a later time would suit US timezones better. Let's see what time/day would work best. 
 - -cc @gl-database @dawsmith @ansdval",0.0 -21590640,2019-06-05 08:23:22.536,Benchmarks for relevant candidate configs,"Benchmark the following configurations: - -All: - - n1-standard-32 - - ubuntu xenial - - `4.15.0-1033-gcp` kernel - - ZoL 0.7.5 - -raidz1: - - 9 SSD zpool - - 2 local SSD L2ARC - -raidz1 HDD: - - 9 HDD zpool - - 2 local SSD L2ARC - -single: - - 1 SSD - - 2 local SSD L2ARC - -ext4: - - 1 SSD ext4",2.0 -21582719,2019-06-04 22:31:54.099,"Camo proxy monitoring, alerting, and runbooks","Camo proxy will need some basic monitoring beyond VM health. Starting points requiring further thought/expansion: -1. Service responsive via load-balancer - pageable -1. Service responsive on each node - alert, but don't page - -May require a pre-prepared encoded URL that is requested to ensure end-to-end functionality; would suggest something static on gitlab.com itself like a logo/image that won't change URL ever, and that we can expect to be up in the cases that we need camo proxy to be working (i.e. we won't be alerting because 'The Internet' is down, and if gitlab.com is completely down, we won't notice this alert specifically in the storm of alerts) - -Ideally we need some sort of graphing of the throughput/usage, although it's unclear at this stage how this will be implemented (mtail? custom exporter?) - -In addition to alert response runbooks, we'll need to include debugging steps for verifying correctness and determining where a request is failing; it seems likely that we'll end up responding to requests asking why a URL isn't responding.",2.0 -21578062,2019-06-04 18:57:58.360,Create GCP Projects for Groups,"There have been several requests recently to create GCP projects for various groups to do work in. Currently, these projects are being created ad-hoc with no standardization in how they are set up. - -The https://ops.gitlab.net/gitlab-com/group-projects project has been created to standardize this process and give each group an isolated place to work without having to go through the infra team for each request. - -Advantages of this include cost tracking, and greater accountability on each team. It also reduces the workload on the infrastructure team for non-production work. By empowering teams to manage their own isolated sandboxes in a safe way, everyone benefits. - -The following list will be updated as this issue evolves to include the projects which will be created initially. - -| Project | Admin | -|---|---| -| geo | Rachel Nienaber - rnienaber@gitlab.com | -| release | Darby Frey - dfrey@gitlab.com | -| verify | Elliot Ruston - erushton@gitlab.com | -| customer-success | Joel Krooswyk - jkrooswyk@gitlab.com | -| professional-services | Daniel Peric - dperic@gitlab.com |",5.0 -21576304,2019-06-04 17:29:12.837,Whitelist Atlassian IP address Space for API Calls,"Atlassian is getting `429` rate limiting from the front-end HA Proxy nodes for API access. - -This is to whitelist their requested infrastructure so that they can have an integration with `gitlab.com`. - -Atlassian provides a link for IP Addresses for Atlassian Cloud (https://ip-ranges.atlassian.com/?_ga=2.49521830.1829182039.1559668865-157908062.1559668865); we will use this to derive the content for our whitelist: - -``` -http https://ip-ranges.atlassian.com/?_ga=2.49521830.1829182039.1559668865-157908062.1559668865 | jq '.items[].cidr' -```",3.0 -21572815,2019-06-04 15:27:27.235,Alert when zpool is running low on disk space,"This issue is a placeholder and needs work. 
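For illustration only, the non-paging and predictive alerts described below might boil down to PromQL along these lines (this assumes the zpool's filesystems are visible to node_exporter with `fstype="zfs"`; thresholds and lookback windows are placeholders to be decided in this issue):

```
# Non-paging alert: a ZFS filesystem has less than 20% free space
node_filesystem_avail_bytes{fstype="zfs"} / node_filesystem_size_bytes{fstype="zfs"} < 0.20

# Predictive alert: free space extrapolated from the last 6h reaches zero within a week
predict_linear(node_filesystem_avail_bytes{fstype="zfs"}[6h], 7 * 24 * 3600) < 0
```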
 - -SREs should receive a non-paging alert when some threshold (to be determined) of zpool space is reached. Be mindful about how the reservation filesystem interacts with this. The threshold chosen should be low enough that we are unlikely to run out of disk space before the awake on-call picks up the non-paging alert. - -Consider adding a predictive alert (i.e. using prometheus' `predict_linear`) to send an alert when we are predicted to run out of zpool space in a week. - -Consider adding a paging alert at a higher, emergency threshold. Discuss this with the team. - -Make sure grafana dashboards are in-place and working to inspect zpool utilization.",2.0 -21568319,2019-06-04 13:05:10.174,Figure out how to deal with registry service memory issues on kubernetes,"Currently we have a job named registry-restarter that runs once a day on the VM fleet. - -I'm wondering how exactly this will work in kubernetes. I assume the service will crash and restart, but the reason we have this restarter job is to prevent errors. It does this by carefully draining each node from haproxy and restarting the service. - -* daily restart job https://ops.gitlab.net/gitlab-com/gl-infra/registry-restarter/pipelines -* registry memory https://dashboards.gitlab.net/d/bd2Kl8Imk/registry-host-stats?orgId=1&refresh=1m&fullscreen&panelId=7&from=now-7d&to=now - -![Screen_Shot_2019-06-04_at_3.04.29_PM](/uploads/76065557671869c1f56be91cce87873c/Screen_Shot_2019-06-04_at_3.04.29_PM.png)",1.0 -21565835,2019-06-04 11:52:16.523,Document the steps for going from 0 clusters to traffic being directed to the Container Registry,"Utilize this issue to track the work necessary for implementing a cluster, the application configurations, and moving traffic over to that cluster. This is necessary for future documentation, and is meant to be a place to consolidate all notes on how the infrastructure components mesh together.",1.0 -21565766,2019-06-04 11:48:54.423,Modify pre registry LB's to point to the new GKE registry,"Now that our new cluster is stood up and the container registry is running inside of it, let's point the LB nodes to the registry. - -## Steps -1. Configure haproxy -1. Enable it",1.0 -21565111,2019-06-04 11:20:34.607,"k8s-workloads uses a direct download from a third party, this should be built into an image","If the third party site is down for any reason, we may have a failed job unrelated to changes being tested in the pipeline. We should instead mirror/fork the project and create an image for us to utilize to prevent ourselves from being susceptible to such a failure. 
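Purely as an illustration of the direction (the tool name, version, registry path, and download URL below are all placeholders, not the actual dependency), the fix could be to bake the binary into an image we build and host ourselves, so a vendor outage only affects a scheduled image-rebuild job rather than every pipeline:

```
docker build -t registry.example.gitlab.net/gl-infra/some-tool:1.2.3 - <<'EOF'
FROM alpine:3.9
RUN apk add --no-cache curl ca-certificates \
 && curl -fsSL -o /usr/local/bin/some-tool https://example.com/some-tool/v1.2.3/some-tool-linux-amd64 \
 && chmod +x /usr/local/bin/some-tool
EOF
docker push registry.example.gitlab.net/gl-infra/some-tool:1.2.3
```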
- -Reference conversation: https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/merge_requests/1#note_26510",1.0 -21565004,2019-06-04 11:15:42.567,Replicas missing in staging patroni cluster,"Currently, there's only two instances in the patroni cluster in gstg: - -``` -root@patroni-02-db-gstg.c.gitlab-staging-1.internal:/var/log/gitlab/postgresql# gitlab-patronictl list -+---------------+------------------------------------------------+---------------+--------+---------+-----------+ -| Cluster | Member | Host | Role | State | Lag in MB | -+---------------+------------------------------------------------+---------------+--------+---------+-----------+ -| pg-ha-cluster | patroni-02-db-gstg.c.gitlab-staging-1.internal | 10.224.29.102 | Leader | running | 0 | -| pg-ha-cluster | patroni-04-db-gstg.c.gitlab-staging-1.internal | 10.224.29.104 | | running | | -+---------------+------------------------------------------------+---------------+--------+---------+-----------+ -``` - -On other nodes, postgres is not running currently. This may be related to the network issue https://gitlab.com/gitlab-com/gl-infra/production/issues/862.",1.0 -21564842,2019-06-04 11:07:52.645,Decide on initial ZFS configuration,"After https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6726 is complete, we need to propose an initial set of config options for ZFS on git storage nodes. - -Current questions we need to answer (please add to this list, it's by no means complete): - -1. Amount of memory to dedicate to ARC (we have 120GB total) -1. Size of reservation filesystem in the zpool -1. zfs recordsize -1. Use native compression? If so, what algorithm? Investigate the disk space savings using real git repositories. -1. Use native encryption? (requires the new ZoL 0.8.0, see https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6777) -1. Should we use quotas for the repository filesystem, e.g. 16TB? This might help us get a sense of when a node is getting full, exclusive of snapshot bulk. - -Please use the discussion threads below to keep comments digestable. - -~~The output of this issue doesn't need to be code, but instead can be a comment that we reference when it comes to writing infra code.~~ Since gitlab_disks already exists, let's send MRs to that as part of this issue. This configuration is what we intend to use on a single canary in production, and refine from there. - -See also https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6306",2.0 -21564603,2019-06-04 10:57:34.336,New machines can be bootstrapped with either ZFS or ext4 mounted filesystems,"Git storage nodes are just one of many stateful node classes that are instantiated using the terraform generic-stor module: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor. - -All of our machines, stateful or not, run a version of the bootstrap script found in https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/bootstrap. Among other things, this script detects GCP PDs, formats them with ext4, and mounts them at a configurable location. - -Our new git storage nodes will use ZFS instead of ext4, and will possibly have many PDs in a raidz configuration depending on the output of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6726. - -Write a chef cookbook containing a recipe that idempotently formats and mounts PDs in a configurable way: variables passed in by the terraform module should instruct the recipe as to which filesystem / disk config to use. 
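As a very rough sketch of what such a recipe would effectively wrap, depending on the injected config (the device path, pool name, mountpoint, and options below are placeholders, not a decided layout):

```
# ZFS config: create a pool mounted at the data path, only if it does not already exist
zpool list tank || zpool create -m /var/opt/gitlab tank /dev/disk/by-id/google-persistent-disk-1

# ext4 config: format and mount the same disk, much as the bootstrap script does today
blkid /dev/disk/by-id/google-persistent-disk-1 || mkfs.ext4 -m 0 /dev/disk/by-id/google-persistent-disk-1
mount /dev/disk/by-id/google-persistent-disk-1 /var/opt/gitlab
```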
A ZFS config and ext4 should be supported initially. With regards to ZFS, it's fine if we only support the vdev config we have already agreed on: i.e. only raidz1 or only single disk. - -Write a new iteration of the bootstrap script that does not format and mount persistent disks. - -Add this recipe to the beginning of each stateful node's chef run list. It must not interact pathologically with existing bootstrap scripts that **do** format and mount disks. - -After this issue is closed I should be able to dial up the count of the new storage nodes declared in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6727, and see a usable ZFS filesystem mounted at /var/opt/gitlab. - -The recipe, if no config is injected, should behave as it does today: format a single PD at /dev/disk/by-id/google-persistent-disk-1 with ext4 and mount it up. - -For an existing non-git stateful role, i.e. redis, I should be able to do each of the following steps sequentially without causing data loss or downtime: - -1. Add the new recipe to the role's runlist. -1. Bump the bootstrap script version to the new one that mounts nothing. -1. Roll out new nodes. - -Try these things in staging before production. - -The main benefits of moving this functionality to chef are improved testability and speed of iteration. The former is particularly important as we would roll this change out to all stateful nodes. - -Note that the bootstrap script also detects, formats, and mounts log disks. Moving this functionality to chef is not essential, but it may be worth moving it if everything mounting-and-formatting-related appears simpler in one place.",4.0 -21558990,2019-06-04 08:34:01.118,forum does not have a functioning healthcheck endpoint,"Despite forum being down today our blackbox scraper thought it was alive, this is because requests were returning `200` status codes. - -``` -$ curl -I https://forum.gitlab.com -HTTP/2 200 -server: nginx -``` - -We need a better healthcheck so we can detect problems sooner. - -cc @axil",2.0 -21542819,2019-06-03 23:48:59.221,Camo proxy gprd deploy,"Build nodes, enable in production, and (optional?) security test",3.0 -21542800,2019-06-03 23:47:13.443,Camo proxy gstg deploy,"Build nodes, enable in staging, and security test",5.0 -21542750,2019-06-03 23:40:16.797,Camo proxy chef implementation,"Assuming VM, implement chef recipes for deploying it. - -If k8s is ready and we go that way instead, disregard.",2.0 -21542693,2019-06-03 23:28:36.098,Camo proxy deployment - terraform,"Implement terraform for the VM nodes, with external load balancer",2.0 -21542678,2019-06-03 23:25:47.831,Camo proxy performance testing,"* Estimate expected throughput (req/s peak) from existing logs and data, if available -* Do some ad-hoc performance testing to find the limits of the chosen camo proxy implementation; looking at CPU + RAM mainly -* Determine production node scale requirements.",5.0 -21542664,2019-06-03 23:22:16.395,Camo proxy security review of design,"Hello @gitlab-com/gl-security/appsec - -Could you please review the design work on this epic, particularly: -#6834, #6836, #6907 - -#6839 might also be interesting background info. - -I *think* we've covered off all the big concerns with security, but would appreciate any input you might have. If it all looks ok, let us know so we can proceed with implementation.",1.0 -21542624,2019-06-03 23:13:24.302,Camo proxy whitelist configuration and control,"On the implementation issue, there is discussion about whitelisting object storage. 
- -Determine: -* What this exactly means and what values it might need -* If there's anything else that might need whitelisting -* How we should manage these values - -NB: this is not strictly camo proxy config; it's a GitLab application setting which whitelists domains from being passed through camo proxy",3.0 -21542595,2019-06-03 23:08:08.375,Camo proxy network design,"Decide where to deploy it precisely within GCP (assuming VMs), and any detail any network design implications. - -Important considerations include -* public IP presence (consider NAT gateway vs per-node public IPs) -* network controls (firewalls et al) to prevent proxying back into internal networks. -* how internal systems get to camo - ILB? Something else?",3.0 -21542558,2019-06-03 23:00:38.348,Camo proxy deployment platform,"VM or K8S - -Document choice and reasons.",1.0 -21542492,2019-06-03 22:51:56.641,Choose camo implementation,"There are three options (at least): - -1. Original (https://github.com/atmos/camo) -1. cactus/go-camo (https://github.com/cactus/go-camo) -1. arachnys/go-camo (https://github.com/arachnys/go-camo - fork of cactus with more features) - -Ensure there aren't any others that need to be on the list, then do a quick evaluation of each for such criteria as features, deployability, maintenance, etc. Does not require running or active testing; this can be considered a paper-based exercise.",1.0 -21542221,2019-06-03 22:32:07.538,Organize Project Folders in GCP,"Currently, there doesn't appear to be any obvious organization scheme behind the project folders in GCP. Some top level folders contain only one project, and some nested folders have a lot in them. There is no obvious schema for why things are where they are. I'll update this diagram as things become more clear. - -I'm especially interesting in any history behind any of these locations. - - -``` -gitlab.com -├── Analytics -│ ├── gitlab-analysis -│ └── karu-gitlab-analytics -├── Customer Success -│ └── (Empty) -├── Development -│ └── Frontend -│ └── (1 project) -├── Distribution -│ └── (4 projects) -├── Infrastructure -│ ├── Environments -│ │ └── (Primary Environments, prod, staging, pre, DR) -│ ├── Ephemeral -│ │ └── (Review apps for environments, temporary projects) -│ ├── Security Products -│ │ └── (2 Gemnasium Projects) -│ └── (10 projects, ops, POC's, tools, review apps) -├── IT Operations -│ └── (Empty) -├── Marketing -│ └── GitLab-public -├── Migration Testing -│ └── (2 Projects - obsolete?) -├── Monitoring -│ └── monitoring-development -├── Sandbox -│ └── (User sandboxes, demo's, labs) -├── Security -│ └── (2 projects) -├── system-gsuite -│ └── apps-script -│ └── (2 projects) -└── (5 projects with no location) -``` - -![Screen_Shot_2019-06-03_at_3.01.02_PM](/uploads/49e49f9cad555f08b1813d07d75f6975/Screen_Shot_2019-06-03_at_3.01.02_PM.png) - -cc/ @gitlab-com/gl-infra ",3.0 -21540178,2019-06-03 20:14:39.064,RCA for June 2nd GCP related GitLab.com incident,"@ahanselka and @ahmadsherif - would you be okay owning this RCA? I'll add questions on my side too. 
-cc @andrewn - -## Summary - -Google had a major networking outage in their US East regions that affected their entire infrastructure, including GitLab.com - -- Service(s) affected : All services for GitLab.com -- Team attribution : External -- Minutes downtime or degradation : 20 minutes of downtime, 190 total of degradation - -https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1&fullscreen&panelId=3&from=1559498453907&to=1559516462767 - -## Impact & Metrics - -Start with the following: - -- All GitLab.com services were completely down for about 30 minutes as Postgres failed over and services had to be restarted/HUPed. The application had degradation for about 3.5 hours after that. -- All users of GitLab.com were affected by this incident. -- During the first 30 minutes, GitLab.com was entirely inaccessible to users. For the remainder of the incident there was an elevated rate of 5xx errors. - - The main reason GitLab.com went completely down is due to Patroni cluster instability with failovers and the application being unable to follow the new primary. - - The longer period of elevated rate of 5xx errors was related to the network instability. -- Many of our alerts were flapping because the monitoring server was unable to reach the servers in question. -- There was a lot of flapping between redis-cache primary and secondaries. This did not seem to have an affect on the availability of the application. - -![patroni-network-drops](/uploads/623daf2bd0b5dd18bafed0a0c811855b/patroni-network-drops.png) - -![error-rates-and-resolution](/uploads/4a69fb6adf9c14137b0d0e0500d22e76/error-rates-and-resolution.png) - -## Detection & Response - -Start with the following: - -- The incident was detected via PagerDuty alerts for GitLab.com being down. -- Alarming worked as expected, however there were so many alerts it was overwhelming and made it hard to quickly determine anything. -- Because of the deluge of alerts, some important relevant alerts such as the alert indicating Postgres failed over were lost in the noise. -- It took about 2-3 minutes after the beginning of the downtime to get alerted and begin response. -- It took about 30 minutes for us to recover GitLab.com from the Postgres failover, however the site remained unstable due to the provider outage. -- Our dashboards were partially broken, which was a known issue earlier in the week (https://gitlab.com/gitlab-com/gl-infra/production/issues/849), making it much more difficult to get started with the response. - -## Timeline - -2019-06-02 - -* 18:50 UTC - Patroni failed over from -04 to -06 -* 19:05 UTC - Most of our Grafana dashboards are inconsistently working because of thanos issues: https://gitlab.com/gitlab-com/gl-infra/production/issues/849 -* 19:07 UTC - Pingdom returning errors -* 19:12 UTC - Diagnosis of Postgres failover -* 19:21 UTC - Services hup'd -* 19:26 UTC - GitLab.com operational. Pingdom reporting services as up. 
-* 19:41 UTC - Watching https://status.cloud.google.com/incident/compute/19003 -* 19:59 UTC - also watching https://status.cloud.google.com/incident/cloud-networking/19009 -* 20:39 UTC - Continuing to monitor google incidents -* 21:22 UTC - another failover from patroni-04 to -01 -* 21:54 UTC - postgres failed back over to -04 -* 22:00 UTC - Error rates returned to normal (that is, there were none) - -2019-06-03 - -* 09:40 UTC - Restoring tuple statistics by running cluster-wide `ANALYZE`, see https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5841#note_128321668 (done 10:10 UTC) - -2019-06-06 - -* Google posted [their RCA](https://status.cloud.google.com/incident/cloud-networking/19009) - - -## Root Cause Analysis - -GitLab.com went down for 30 minutes with instability over the course of 4 total hours. - -1. Why? - The application could no longer reach the database. -2. Why? - Postgres failed over unexpectedly. -3. Why? - There was networking instability which caused the cluster to try to fail over multiple times. -4. Why? - GCP had a major networking outage in the east region where we are located. - -## What went well - -Start with the following: - -- We found out very quickly that there was a problem -- Multiple people jumped in to help diagnose and repair the issue -- Delegation of duties and expectations was clear and effective - - i.e. ""You go update the status page"", ""I will go restart sidekiq"", etc. -- Patroni failover was successful. If we had still been on Repmgr it would have been true disaster. - -## What can be improved - -Start with the following: - -- We should better automate and tune our database failover process so that the application can gracefully handle failovers. - - We could try to execute more failovers in staging and eventually production to be more confident that a similar incident in the future would not cause a complete outage as it did in this case. -- We should also automate such that we don't have to re-run `ANALYZE` on the tables to re-populate statistics. -- We can try to prune and curate our alerts such that there isn't a massive deluge of alerts that obscure the problem and make it hard to see other relevant alerts. -- While this did not directly affect production, we didn't notice that staging had fallen apart also as a result of this. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6854. It wasn't important to fix ASAP on a Sunday evening, but we should create a follow-up issue immediately when there is an issue like this so someone can follow up on Monday. - - -## Corrective actions - -Some of these issues are not created as a reaction specifically to this incident but are the correct actions. 
- -- Automated failover testing (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5890) -- Clients still connect to old primary after failover (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5675) -- Graceful Patroni failovers (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5833) -- Production incident relating to Thanos problems (https://gitlab.com/gitlab-com/gl-infra/production/issues/849) -- Use a virtual IP for failovers (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7059) - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",4.0 -21535659,2019-06-03 17:32:01.742,Find another place to cache repository archives,"Andrew suggested this looking into https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4454, other option would be caching in an object storage.",4.0 -21535555,2019-06-03 17:26:41.961,Ensure live traces is working properly on production,Related to https://gitlab.com/gitlab-org/gitlab-ce/issues/51496,3.0 -21535292,2019-06-03 17:12:42.954,Allow temporary personal snippet uploads to be uploaded to object storage,This is a sort-of meta issue as the main work will be done in gitlab-ce. I'll keep this issue updated with the progress.,3.0 -21531642,2019-06-03 14:50:30.559,k8s-workloads IAM service account needs higher level permissions on clusters,"As noted in this job, https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/jobs/517681 the IAM service account k8s-workloads needs the ability to create Cluster Role bindings. We should create the necessary role/role binding for this user and restrict it to the namespace he'll operate on. Currently that is ONLY the `gitlab` namespace. - -/cc @gitlab-org/delivery -/cc @gitlab-com/gl-infra",1.0 -21489189,2019-05-31 22:56:10.546,Split terraform environments,"Our current repository structure for terraform leverages symlinks to reference shared code from multiple environments. This is great for keeping our code DRY, but there are frequently occasions when we need to deploy code to individual environments with follow-on deployments to other environments at some point in the future (testing new changes in staging, for example). With all environments referencing the same files, there is no way to make those changes in a single environment without requiring the same changes be applied to all other environments, holding up deployments to all other environments, requiring only targeted applies (which brings a host of other issues), or allowing configurations to drift over time. - -As an initial step towards resolving this and enabling further work toward automated deployments, we will duplicate the current `main.tf` file in each environment referencing the shared assets via symlink, then remove the symlinks. Likewise, any shared `variables.tf` content will be copied/merged into each environment's `variables.tf` file, and the symlinks removed/cleaned up.",1.0 -21489151,2019-05-31 22:50:31.327,Automate `tf plan` for all environments,"As a next step toward automated Terraform deployments, we need to update the `.gitlab-ci.yml` to automatically run `tf plan` for all environments. The jobs should be conditional based on changes to the relevant portions of the repository for that job's respective environment.",1.0 -21482307,2019-05-31 15:57:23.360,Own GCP Project for Geo Team,"The Geo Team would like to have their own GCP project to use instead of gitlab-internal. 
This will help us manage members and permissions and help us keep track of the resources that we are using. - -Would the Infrastructure team be able to create this for us?",1.0 -17457017,2019-01-18 08:18:34.189,Inventory Catalogue - File,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -File - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17457008,2019-01-18 08:18:10.385,Inventory Catalogue - ELK,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -ELK - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17456998,2019-01-18 08:17:37.524,Inventory Catalogue - Contributors,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Contributors - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17456983,2019-01-18 08:17:04.266,Inventory Catalogue - Consul,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Consul - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17456971,2019-01-18 08:16:34.563,Inventory Catalogue - Console,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Console - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17456959,2019-01-18 08:16:06.442,Inventory Catalogue - Blackbox,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. 
The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Blackbox - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17456938,2019-01-18 08:15:29.046,Inventory Catalogue - API,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -API - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17443838,2019-01-17 19:08:40.115,Add Gitter VPCs to terraform,The current Gitter VPCs were manually created and are not managed within Terraform; we should add those to terraform and import the current resources to the state file.,1.0 -17443767,2019-01-17 19:05:43.607,Add terraform state bucket and dynamodb table to Gitter AWS account,"As a step towards automating Gitter terraform deployments, we need to [setup](https://www.terraform.io/docs/backends/types/s3.html) an AWS S3 bucket for remote state and a DynamoDB table for state locking.",2.0 -17441944,2019-01-17 17:13:39.950,Chatbot not showing production on-call overrides,`/chatops run oncall prod` is only returning the Escalation Manager when the primary on-call is an override,1.0 -17439306,2019-01-17 15:29:40.570,Setup network-level access to database replica for ELT loads,"This is the second part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5847: Provide network-level access (VPC + firewall rules) to the archive replica in gprd for the ELT job runner. See the discussion over in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5847. - -Details: -* Network access to the archive replica `postgres-dr-archive-01-db-gprd.c.gitlab-production.internal` -* From the `gitlab-analysis` project's network -* Limit access to resources in `gitlab-production` to only said replica",2.0 -17431133,2019-01-17 10:21:45.216,Review Candidate Questionnaire for PS,Review completed: https://app2.greenhouse.io/people/14331312002?application_id=15824625002,1.0 -17410845,2019-01-17 00:44:54.286,SSLMate Action Required cert renewals - Jan 2019,"We have action required emails for: - -Check that we need the new certs and renew if so: - -* [x] ~~http://sentry.gitlap.com/ - Exp Jan 20~~ -* [x] ~~http://runners-cache-3.gitlab.com/ Jan 17~~ -* [x] ~~http://runners-cache-4.gitlab.com/ Jan 30~~ -* [x] http://forum.gitlab.com/ Feb 13 -* [x] http://ee.gitlab.com/ - Jan 26 -* [x] http://ce.gitlab.com/ - Jan 26 -* [x] http://jobs.gitlab.com/ - Jan 26 -* [x] ~~http://alerts.gitlab.com/ - Jan 21~~ -* [x] http://prometheus.gitlab.com/ - Jan 21",3.0 -17407249,2019-01-16 20:08:39.769,RackSpace Access for Kathy Wang,"@kathyw used to have access to review the billing under our GCP contract and would like the same access under RackSpace. 
- -She needs a RackSpace Portal account with Billing Overview privileges.",1.0 -17396320,2019-01-16 16:28:55.845,Many nodes in chef server which are not connecting,"There are many nodes in the chef server which are not connecting to the server and pulling data. - -These servers should either be removed from the list of known hosts, or should be made to connect and pull data. - -I am putting the list here so we can have some discussion first, in case any of them are in this state on purpose. - -Here is the list: - -``` -jjn@thor ~/Workspace $ knife status --hide-by-mins 600 -11964 hours ago, lfs1.single.gitlab.com, ubuntu 16.04. -10475 hours ago, sync-nfs-02.geo.gitlab.com, ubuntu 16.04. -10475 hours ago, sync-nfs-01.geo.gitlab.com, ubuntu 16.04. -10263 hours ago, redis-cache-01.db.prometheus-testbed.helm-charts-win, ubuntu 16.04. -10263 hours ago, redis-cache-02.db.prometheus-testbed.helm-charts-win, ubuntu 16.04. -7881 hours ago, omnibus-builder-runners-manager.gitlab.org, ubuntu 16.04. -7770 hours ago, sentry-infra.gitlap.com, ubuntu 16.04. -4449 hours ago, consul-03.inf.stg.gitlab.net, ubuntu 16.04. -4449 hours ago, consul-02.inf.stg.gitlab.net, ubuntu 16.04. -4448 hours ago, consul-01.inf.stg.gitlab.net, ubuntu 16.04. -4116 hours ago, deploy.gitlab.com, ubuntu 16.04. -2569 hours ago, customers.stg.gitlab.com, ubuntu 16.04. -2218 hours ago, contributors-01-sv-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -2217 hours ago, contributors-01-sv-gprd.c.gitlab-production.internal, ubuntu 16.04. -1715 hours ago, contributors.gitlab.com, ubuntu 14.04. -1014 hours ago, blackbox.gitlab.com, ubuntu 16.04. -jjn@thor ~/Workspace $ -```",1.0 -17374791,2019-01-15 23:28:49.204,about-src.gitlab.com not downloading latest cookbooks,"about-src.gitlab.com is not downloading the latest cookbooks pinned on the chef server. - -Since the node `about.gitlab.com` was renamed in DNS to `about-src.gitlab.com` for our CDN SSL configuration, I think the hostname and node name on the Chef server also need to be updated, following the process in https://gitlab.com/gitlab-com/runbooks/blob/master/howto/rename-nodes.md. - -/cc @northrup for a sanity check",1.0 -17373801,2019-01-15 22:04:23.440,Chef Service Stale on gprd Patroni Servers,"Test by running: -``` -$ knife status --hide-by-mins 60 -``` -Nodes effected are: - -``` -8 hours ago, patroni-02-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, pgbouncer-01-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, pgbouncer-03-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, patroni-04-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, patroni-01-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, patroni-05-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, pgbouncer-02-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, patroni-06-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -6 hours ago, patroni-03-db-gstg.c.gitlab-staging-1.internal, ubuntu 16.04. -5 hours ago, patroni-03-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -5 hours ago, patroni-02-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -5 hours ago, pgbouncer-03-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -5 hours ago, pgbouncer-01-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -5 hours ago, patroni-04-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -5 hours ago, patroni-01-db-gprd.c.gitlab-production.internal, ubuntu 16.04. 
-5 hours ago, patroni-05-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -5 hours ago, patroni-06-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -5 hours ago, pgbouncer-02-db-gprd.c.gitlab-production.internal, ubuntu 16.04. -4 hours ago, runner-release-01-inf-ops.c.gitlab-ops.internal, ubuntu 16.04. -```",1.0 -17370685,2019-01-15 18:44:44.214,Stale Chef Runs in Production,"Chef runs were stale on several production servers. - -``` -prometheus-01.nyc1.do.gitlab-runners.gitlab.net - -prometheus-01-inf-gprd.c.gitlab-production.internal -prometheus-02-inf-gprd.c.gitlab-production.internal - -deploy-01-sv-gprd.c.gitlab-production.internal - -file-04-stor-gprd.c.gitlab-production.internal -file-18-stor-gprd.c.gitlab-production.internal -```",1.0 -17347657,2019-01-15 00:08:58.210,Fix payment issues on services,"See notification emails of payment fail for link to fix for: - -1. [x] Dead man's snitch -2. [ ] scaleway -3. [x] Digital ocean",1.0 -17347202,2019-01-14 23:38:33.960,Share learnings about Terraform changes from Dec 21 RCA,"Part of the discussion on the RCA from the December 21st incident was to document ideas on how to make sure we don't have conflicts in Terraform that could force us to go to the GCP console to clean up things. -[RCA for Dec 21](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5813) - -https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/671 tracks proposed notes about process. Creating this issue to point discussion to that MR.",2.0 -17343226,2019-01-14 20:26:30.930,Review Gitter security group rules & assignments,"As a follow-up to #5492, we need to perform a detailed review of Gitter network security rules, both VPC ACLs and per-instance rules in EC2/VPC Security Groups. - -This issue is to track discussion around reviewing EC2/VPC security groups, rules, and assignment to EC2 instances for necessary ingress/egress filtering, and relating any MRs to implement changes.",2.0 -17343190,2019-01-14 20:24:06.547,Review Gitter VPC ACLs,"As a follow-up to #5492, we need to perform a detailed review of Gitter network security rules, both VPC ACLs and per-instance rules in EC2/VPC Security Groups. 
- -This issue is to track discussion around reviewing VPC ACLs for boundary ingress/egress filtering, and relating any MRs to implement changes.",2.0 -17328415,2019-01-14 15:25:03.802,Database Reviews," -Carried over from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5851 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/issues/51854#note_127186345 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8988#note_128739113 -* [ ] Fixtures https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8240#note_129069979 -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23464#note_126871390 -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/issues/54643#note_125483243 - -New: -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8949#note_131363655 -* [ ] https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2734#note_145286 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8949 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24325 -* [x] @abrandl https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2694 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24368/#cd94b16b88eadd9a4a80a04eefd02fa71e712d63 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24368 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24144#note_132337799 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23596#note_132697035 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9110#note_132657852 << :green\_apple: @abrandl ready for approval -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24198#note_132859150 << :green\_apple: @abrandl ready for approval -* [ ] Add a composite primary key for diff files https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24496 -* [x] @abrandl https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2734 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9281 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9182#note_134446447 << :green\_apple: @abrandl ready for approval -* [ ] int4→int8 https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24512 - -Forgotten? -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19740 – not yet approved",5.0 -17275391,2019-01-11 18:01:45.650,New GCP Project for GitLab QA GKE Clusters (and maybe more for teams),"We have multiple uses for creating Kubernetes clusters in GKE right now. Most people in the company (including our automated tests) are using the `gitlab-internal` project in GCP for doing this and as the company grows we are reaching our GCP limits quite regularly (most challenging to resolve is the IP range limit for multiple clusters https://gitlab.slack.com/archives/CB07X8AQ3/p1543741063017000?thread_ts=1543741035.016900&cid=CB07X8AQ3) - -We (the Configure team and others) use this for testing GitLab's Kubernetes integration locally and on gitlab.com. - -We also create these clusters during automated QA tests. This regularly results in failures due to exhausted resources (eg. https://gitlab.com/gitlab-org/quality/nightly/issues/50 ) and this can break in Master and requires us to ask around for people to clean up after themselves (eg. https://gitlab.slack.com/archives/C02PF508L/p1547228126465300 ) which is all a manual process. - -There may be other ways to solve these quota (or IP range exhaustion) problems. 
Perhaps increasing GCP quotas will help but from what I understand we may need to reconfigure something in GCP to allow more IP addresses to fit in the range (see https://gitlab.com/gl-retrospectives/configure/issues/4#note_127820216) . - -Another challenge with having everyone in the company using the same GCP instance is that it's difficult to encourage good habits about cleaning things up since not everybody realises that not cleaning things up could actually be creating blockers for other teams or even breaking our master build. If we had separate projects for QA then at least nobody could break the build but still the Configure team might end up blocked if we run out on `gitlab-internal`. Configure team relies heavily on this to do their work so quite possibly they will need their own GCP project as well.",1.0 -17268993,2019-01-11 13:44:36.526,version goes down like every weekend,"For the past couple of weekends, version.gitlab.com goes offline for a bit. We think we've reached enough installations that enough customers ping the app on the weekend to bring it down. - -Currently the server sits using nearly all available RAM. There's no swap available on this machine. This is a single instance full stack machine. The current unicorn worker count is set to 3. What can we do **SHORT TERM** to alleviate the problem? - -Sidekiq, unicorn, and chef are the top memory use offenders. I'm afraid if we try to bump the amount of workers, we'll run the system out of memory. - -References: -* https://gitlab.pagerduty.com/incidents/PC6S6FB -* https://gitlab.pagerduty.com/incidents/PP1FIQS -* https://gitlab.pagerduty.com/incidents/P2OQHHD -* https://gitlab.pagerduty.com/incidents/PFQPBXA",1.0 -17268522,2019-01-11 13:26:51.010,[Design Document] Service Production Inventory,Please fill with your ideas the following design doc: https://docs.google.com/document/d/17rgjc_2Kukw5atR3VxrcKqiGmv-jwO9u2mGKhk0wP8c/edit?usp=sharing,8.0 -17267568,2019-01-11 13:02:24.240,Create DB replica using ZFS," -Please create a zfs replica from staging and production NOT in the Patroni cluster. -First Staging, when stable we proceed to prod. - -We need to create a replica for production, to get datasets equal to production, to make our future test environments more similar, also we will scramble confidential data, but will be in another process.",3.0 -17267510,2019-01-11 12:59:30.496,[Design Document] PostgreSQL Bloat Maintenance,"Propose solution to reduce bloat for GitLab.com - -Further input: https://docs.google.com/document/d/1AJCORsmLmT2yC3axJkQ4uUZw-BBOXaKlyUsnwfyel0E/edit?usp=sharing",2.0 -17255905,2019-01-11 03:15:53.131,Discussion: Programming/Scripting languages we accept as a team,"## Objective -The objective of this issue is to derive a discussion around the programming/scripting languages that we accept (or would want to accept) for usage within our team. This came up from a feedback on a recent automation work which was done in Python and a proposal was suggested to do everything in Ruby going forward. Hence, I wanted to create this issue and gather more feedback to understand the reasoning behind this and whether 'Ruby/Go-only' is ultimately where we, as a team, go. 
- -## Datapoints -Starting with the most obvious: -- Our product is a Ruby-on-Rails product [1], [2] -- The SRE role job description does list ""Ruby and/or Go"" under a ""May be a fit"" [3] -- We already have ruby/go scripts that help with our infrastructure work - -## Discussion -Is the answer to this inquiry is as simple as: ""We should just go with Ruby/Go in all of our infrastructure related work""? While the above data points are already present, I know we do use bash scripts though it is not specifically called out. It is possible that it is an inherent requirement that we all should know and should be able to use. However, going further than this, what do we think about adopting/using other languages such as Python or even Java (not as a replacement of Ruby/Go but as an addition as long as they get our jobs done)? - -We might want to avoid getting into too much in-depth discussion on comparing one language to another (underlying implementation, learning-curve, syntax, performance, community support and adoption, functionalities and features...etc) but I think it might be better if we focus on what do we as a team want and why one language is a better choice over another based on our experience and infrastructure need. - -Some indicators on why I think we could possibly add Python to our list: -- We are on GCP and Google Cloud has an officially supported, Python library (just like Ruby and Go) [4] -- Kubernetes is one of the items on our radar and it has an officially supported Python library similar to Go. But, Ruby is listed on the community supported library. [5] -- If we end up using Ansible, it is also written in Python. -- Within GCP, if we want to write a Google Cloud Function - it currently only supports Node.JS and Python. - -Note, the above don't talk about one language's performance, syntax, configuration, implementation, features...etc. - -I am not sure what the degree of Ruby/Go experience is for the rest of the team. But as for transparency, my personal experience with Ruby/Go is much less than Python/Java. Therefore, a decision to stick with the former group of languages would mean I would have to ramp up on them as well. However, I will be happy to do so if we do end up sticking to Ruby/Go only. :) - -@gitlab\-com/gl\-infra - -## References -[1] https://gitlab.com/gitlab-org/gitlab-ce -[2] https://gitlab.com/gitlab-org/gitlab-ee -[3] https://about.gitlab.com/job-families/engineering/site-reliability-engineer/ -[4] https://cloud.google.com/apis/docs/cloud-client-libraries",3.0 -17246637,2019-01-10 16:39:11.752,Enable fast_destroy_uploads flag in production,"There is a new feature - async deletion of uploads (https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/20977), this is behind feature flag and disabled. 
To make sure this feature works properly, I'd like to do the following steps on production: - -* run `rake gitlab:cleanup:remote_upload_files` on prod to get the current number of orphaned files -* enable the `fast_destroy_uploads` feature and wait for 1-2 days -* run `rake gitlab:cleanup:remote_upload_files` and compare the number of orphaned files - if there is a significant increase of orphaned files, it might be a sign that uploads deletion doesn't work as expected - -Running this rake task should not cause dramatic IO load on the prod server (as most of the checking should be done in object store) and it's expected to run for probably a few hours - https://gitlab.slack.com/archives/C101F3796/p1547136320445300 - -I set the due date to the 21st because, based on the Slack discussion, that week would be preferred for doing this instead of doing it now - -/cc @skarbek @ahanselka",1.0 -17241002,2019-01-10 14:55:54.799,Grafana Dashboard git sync broken,"The [grafana-dashboards](https://gitlab.com/gitlab-org/grafana-dashboards) repo is synced by the [gitlab-grafana::export_dashboards](https://gitlab.com/gitlab-cookbooks/gitlab-grafana/blob/master/recipes/export_dashboards.rb) recipe. - -The account/secret config is currently broken, so the sync has been failing.",2.0 -17220699,2019-01-09 23:06:35.172,chat-ops access for .com support,"Hi there! The .com support team would love to be able to use chat-ops commands. - -The following people have active `ops` accounts and are ready to be added to the appropriate group to allow access. -Would it be possible to get them going? - -- [x] @tatkins -- [x] @tristan -- [x] @namhokim -- [x] @arihantar -- [x] @j.collins -- [x] @deandre -- [x] @amandarueda -- [x] @jeromeuy -- [x] @anazir -- [x] @cwainaina - -Thanks!",1.0 -17219445,2019-01-09 21:23:24.320,Consider additional alerting added for our storage nodes,"Recently our storage nodes surpassed 80% usage on the 4 new nodes that have been spun up. So we spun 4 new nodes up. Now we need alerting to prevent surprises in the future. - -Let's set up an overall alert that warns us if any file server breaches 80% full. These servers may be subject to our standard alerting policy for file storage, but the process of remediation is much slower and has the potential to muck with a lot of the community. We should consider a dedicated alert for these systems. - -https://dashboards.gitlab.net/d/W_Pbu9Smk/storage-stats?refresh=30m&orgId=1 - -This has the consequence of alerting us today until we rebalance the git repos on our nodes. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5360",1.0 -17219364,2019-01-09 21:17:19.624,Sentry DSN for ops GitLab instance points to incorrect FQDN,The Sentry DSN FQDN for the ops instance points to gitlap.com; it should be gitlab.com. Please remediate.,1.0 -17217220,2019-01-09 19:55:35.757,gitlab-restore-bot was missing from postgres restore project,"The restore pipeline seems to have started failing at 1/8 11:32pm EST -https://gitlab.com/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/pipelines -It's due to this line https://gitlab.com/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/blob/master/common.sh#L19 erroring out with `{""message"":""403 Forbidden""}` -I found out gitlab-restore-bot is no longer a member of the project, so I added it back as a developer with an expiration date in 2033 as a temporary fix. I'm not sure what happened to this account and whether my fix is correct.
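To double-check the temporary fix, the bot's membership can be verified via the API (a sketch; the token variable is a placeholder):

```
# Confirm gitlab-restore-bot shows up as a member of the restore project, with its access level and expiry
curl --silent --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "https://gitlab.com/api/v4/projects/gitlab-com%2Fgl-infra%2Fgitlab-restore%2Fpostgres-gprd/members/all" \
  | jq '.[] | select(.username == "gitlab-restore-bot") | {username, access_level, expires_at}'
```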
- -@abrandl please review and let me know",0.0 -17177255,2019-01-08 17:45:32.220,GitHub Import Starves PGBouncer Connections,"Sidekiq queues started growing, it was observed that the GitHub Import was directly correlated to DB locks and starvation of PGBouncer connection resources.",5.0 -17176496,2019-01-08 17:31:00.345,Bootstrap kernel update requires successful chef run,"Successful chef run requires successful bootstrap kernel update. - -When bootstrapping a new system, we upgrade the kernel to our desired version. - -During the bootstrap process, before the kernel upgrade, we do a chef run. - -If this chef run fails for any reason (including missing vault keys), the kernel update fails - then the system reboots, leaving the system locked in an unbootable state. - -This makes it difficult to determine why the chef run failed. Fixing the chef problem will not cause it to recover, since it will never try again. The machine needs to be destroyed and re-created. - -The kernel upgrade should not depend on the success of the chef run. It should be robust enough to succeed even if chef does not. It should end up powered on and repeatedly re-trying the chef run. That will leave it in a state where fixing the chef or credential problem will result in the next chef attempt succeeding. - -Here are the relevant lines from the bootstrap log: - -``` -Feb 12 00:01:10 geo-postgres-01-db-dr startup-script: INFO startup-script: + [[ 4.13.0-1007-gcp != *4.15.0-1015* ]] -Feb 12 00:01:10 geo-postgres-01-db-dr startup-script: INFO startup-script: + apt-get install -y linux-modules-4.15.0-1015-gcp linux-modules-extra-4.15.0-1015-gcp linux-image-4.15.0-1015-gcp linux-gcp-headers-4.15.0-1015 -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: Reading package lists... -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: Building dependency tree... -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: Reading state information... 
-Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Unable to locate package linux-modules-4.15.0-1015-gcp -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by glob 'linux-modules-4.15.0-1015-gcp' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by regex 'linux-modules-4.15.0-1015-gcp' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Unable to locate package linux-modules-extra-4.15.0-1015-gcp -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by glob 'linux-modules-extra-4.15.0-1015-gcp' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by regex 'linux-modules-extra-4.15.0-1015-gcp' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Unable to locate package linux-image-4.15.0-1015-gcp -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by glob 'linux-image-4.15.0-1015-gcp' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by regex 'linux-image-4.15.0-1015-gcp' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Unable to locate package linux-gcp-headers-4.15.0-1015 -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by glob 'linux-gcp-headers-4.15.0-1015' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: E: Couldn't find any package by regex 'linux-gcp-headers-4.15.0-1015' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: ++ dpkg-query -W '-f=${binary:Package}\n' 'linux-image*' 'linux-headers*' -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: ++ grep -v 4.15.0-1015 -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: + apt-get purge -y linux-headers linux-headers-3.0 linux-headers-4.13.0-1007-gcp linux-headers-gcp linux-image linux-image-4.13.0-1007-gcp linux-image-gcp -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: Reading package lists... -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: Building dependency tree... -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: Reading state information... -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: The following packages were automatically installed and are no longer required: -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: libfreetype6 linux-gcp-headers-4.13.0-1007 os-prober -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: Use 'apt autoremove' to remove them. 
-Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: The following packages will be REMOVED: -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: linux-gcp* linux-headers-4.13.0-1007-gcp* linux-headers-gcp* -Feb 12 00:01:11 geo-postgres-01-db-dr startup-script: INFO startup-script: linux-image-4.13.0-1007-gcp* linux-image-gcp* -Feb 12 00:01:13 geo-postgres-01-db-dr startup-script: INFO startup-script: 0 upgraded, 0 newly installed, 5 to remove and 0 not upgraded. -Feb 12 00:01:13 geo-postgres-01-db-dr startup-script: INFO startup-script: After this operation, 78.8 MB disk space will be freed. -Feb 12 00:01:13 geo-postgres-01-db-dr startup-script: INFO startup-script: (Reading database ... #015(Reading database ... 5%#015(Reading database ... 10%#015(Reading database ... 15%#015(Reading database ... 20%#015(Reading database ... 25%#015(Reading database ... 30%#015(Reading database ... 35%#015(Reading database ... 40%#015(Reading database ... 45%#015(Reading database ... 50%#015(Reading database ... 55%#015(Reading database ... 60%#015(Reading database ... 65%#015(Reading database ... 70%#015(Reading database ... 75%#015(Reading database ... 80%#015(Reading database ... 85%#015(Reading database ... 90%#015(Reading database ... 95%#015(Reading database ... 100%#015(Reading database ... 83018 files and directories currently installed.) -Feb 12 00:01:13 geo-postgres-01-db-dr startup-script: INFO startup-script: Removing linux-gcp (4.13.0.1007.9) ... -Feb 12 00:01:14 geo-postgres-01-db-dr startup-script: INFO startup-script: Removing linux-headers-gcp (4.13.0.1007.9) ... -Feb 12 00:01:14 geo-postgres-01-db-dr startup-script: INFO startup-script: Removing linux-headers-4.13.0-1007-gcp (4.13.0-1007.10) ... -Feb 12 00:01:18 geo-postgres-01-db-dr startup-script: INFO startup-script: Removing linux-image-gcp (4.13.0.1007.9) ... -Feb 12 00:01:18 geo-postgres-01-db-dr startup-script: INFO startup-script: Removing linux-image-4.13.0-1007-gcp (4.13.0-1007.10) ... -Feb 12 00:01:18 geo-postgres-01-db-dr startup-script: INFO startup-script: WARN: Proceeding with removing running kernel image. -Feb 12 00:01:18 geo-postgres-01-db-dr startup-script: INFO startup-script: Examining /etc/kernel/postrm.d . -Feb 12 00:01:18 geo-postgres-01-db-dr startup-script: INFO startup-script: run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-1007-gcp /boot/vmlinuz-4.13.0-1007-gcp -Feb 12 00:01:18 geo-postgres-01-db-dr startup-script: INFO startup-script: update-initramfs: Deleting /boot/initrd.img-4.13.0-1007-gcp -Feb 12 00:01:18 geo-postgres-01-db-dr startup-script: INFO startup-script: run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-1007-gcp /boot/vmlinuz-4.13.0-1007-gcp -Feb 12 00:01:19 geo-postgres-01-db-dr startup-script: INFO startup-script: Generating grub configuration file ... 
-Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: done -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: The link /vmlinuz is a damaged link -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: Removing symbolic link vmlinuz -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: you may need to re-run your boot loader[grub] -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: The link /initrd.img is a damaged link -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: Removing symbolic link initrd.img -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: you may need to re-run your boot loader[grub] -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: Purging configuration files for linux-image-4.13.0-1007-gcp (4.13.0-1007.10) ... -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: Examining /etc/kernel/postrm.d . -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-1007-gcp /boot/vmlinuz-4.13.0-1007-gcp -Feb 12 00:01:20 geo-postgres-01-db-dr startup-script: INFO startup-script: run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-1007-gcp /boot/vmlinuz-4.13.0-1007-gcp -Feb 12 00:01:22 geo-postgres-01-db-dr startup-script: INFO startup-script: + update-grub -Feb 12 00:01:22 geo-postgres-01-db-dr startup-script: INFO startup-script: Generating grub configuration file ... -Feb 12 00:01:22 geo-postgres-01-db-dr startup-script: INFO startup-script: done -```",2.0 -17175318,2019-01-08 16:42:48.201,Investigate Consul vs. Etcd for service discovery,"The Q1 OKR's include operationalizing service discovery for GitLab.com. - -Current documents all refer to Consul as the service discovery mechanism. In the Geo Group Conversation this morning, @sytses had some [good points](https://gitlab.com/gitlab-org/gitlab-ee/issues/3789#note_129839448) about Etcd starting to look like the industry standard. Since we are moving towards being cloud native eventually, we will eventually be using Etcd as part of Kubernetes, and at that point we will have two solutions to support. - -Consul is easier to use, and it is what we are currently using, but surely there is some value in supporting a single solution rather than two. - -The Geo team is currently researching this issue (https://gitlab.com/gitlab-org/gitlab-ee/issues/8932). The @gitlab\-com/gl\-infra team should also look into it before work starts, and either document why Consul will be the better solution long term, or lay out a path to convert existing consul use to etcd.",1.0 -17174259,2019-01-08 16:03:03.390,ChatOps SQL explain unable to connect to pgbouncer,"The ChatOps SQL explain does not work at the moment: https://gitlab.slack.com/archives/C101F3796/p1546962452337200 - -The error is: - -``` -/app/vendor/bundle/ruby/2.4.0/gems/pg-1.1.3/lib/pg.rb:56:in `initialize': ERROR: pgbouncer cannot connect to server (PG::ConnectionBad) -``` - -This can be reproduced with any `/chatops run explain SELECT...` query. - -As pointed out on Slack, the [CI variables](https://ops.gitlab.net/gitlab-com/chatops/settings/ci_cd) control which host to connect to. Currently, the CI job connects to `10.217.4.2` which is `pgbouncer-03-db-gprd.c.gitlab-production.internal`. 
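Before repointing the variable, connectivity from a CI job to the current (or proposed) host can be sanity-checked with psql (a sketch; the port, user, and database name are assumptions based on a typical pgbouncer setup, not values from this issue):

```
# Verify that the pgbouncer endpoint accepts connections and reaches a live backend
PGPASSWORD="${PGBOUNCER_PASSWORD}" psql \
  --host 10.217.4.2 --port 6432 \
  --username gitlab --dbname gitlabhq_production \
  --command 'SELECT 1;'
```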
- -We may want to change this setting to a more stable hostname, preferrably one that points to a replica.",1.0 -17368221,2019-01-08 10:49:19.670,Runners on Windows for GitLab Runner Team,"## Description - -Create a new windows environment for https://gitlab.com/gitlab-org/gitlab-runner/issues/3755 https://gitlab.com/gitlab-org/gitlab-runner/issues/3757 and have the latest version of GitLab runner installed. - -## Proposal - -We would need a **windows machine** with the following software installed; **Docker** & **gitlab-runner** running as shell executor with powershell so that we can start [building the containers](https://gitlab.com/gitlab-org/gitlab-runner/issues/3755), and [running tests](https://gitlab.com/gitlab-org/gitlab-runner/issues/3757). GCP [provides windows machine](https://cloud.google.com/compute/docs/instances/windows/) which has the following [pricing](https://cloud.google.com/compute/pricing#windows_server_pricing) which costs need to be calculated. This machine should not be provisioned manually as it would be second class from the beginning since all our linux environment is already automated. We can use a mixture of terraform and chef to provision and configure the machine. - -As pointed out in https://gitlab.com/gitlab-org/gitlab-runner/issues/3757#note_129070759 we need at least Windows server Core 2016, but we can go with [Windows Server, version 1803](https://cloud.google.com/compute/docs/instances/windows/#windows_server) since it's the latest version of windows. - -The following software needs to be installed on the machine - -- Openssh -- docker -- gitlab-runner -- Git https://github.com/git-for-windows/git - -## Links to related issues and merge requests / references - -- https://gitlab.com/gitlab-org/gitlab-runner/issues/3755 -- https://gitlab.com/gitlab-org/gitlab-runner/issues/3757",5.0 -17158979,2019-01-08 09:01:29.138,Replication lag dashboards for DR replicas don't show data,"We have separate dashboards regarding replication lag for the DR replica (archive/delayed): https://dashboards.gitlab.net/d/000000144/postgresql-overview?panelId=133&fullscreen&edit&orgId=1 - -All of these don't show data.",1.0 -17158754,2019-01-08 08:52:43.725,Workaround for GitHub importer bug,"Apply a database-level fix for affected projects for GitLab.com. This is the third time we apply a manual fix for this. - -Change issue: https://gitlab.com/gitlab-com/gl-infra/production/issues/643 - -Work is on the way though: https://gitlab.com/gitlab-org/gitlab-ce/issues/54270",1.0 -17158717,2019-01-08 08:51:35.441,Require that all scripts performing maintenance have a dry-run output that can be reviewed,"This has come up multiple times in incidents and I think its worth making it a bit more formal that if maintenance is automated there should be a dry-run mode that displays exactly what the script is doing, ssh commands, api calls, etc. so that it can be reviewed. 
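As an illustration of the requirement, a maintenance script can route every mutating action through a single helper gated by a dry-run flag (a minimal sketch, not an existing script; the host and command are examples only):

```
#!/usr/bin/env bash
set -euo pipefail

# Default to printing the plan instead of executing it; set DRY_RUN=false to apply
DRY_RUN="${DRY_RUN:-true}"

run() {
  if [[ "${DRY_RUN}" == "true" ]]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

# Every ssh command / API call goes through run(), so the full plan can be reviewed first
run ssh file-04-stor-gprd.c.gitlab-production.internal 'sudo chef-client'
```

The point of funnelling everything through one helper is that the dry-run output is guaranteed to match what would actually be executed.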
- -Corrective action for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5874",1.0 -17155221,2019-01-08 04:47:30.619,Unresolvable service discovery config spamming Sentry,"See https://sentry.gitlab.net/gitlab/staginggitlabcom/issues/612471: - -``` -IPAddr::InvalidAddressError: invalid address - from ipaddr.rb:649:in `in6_addr' - from ipaddr.rb:586:in `initialize' - from gitlab/database/load_balancing/service_discovery.rb:139:in `new' - from gitlab/database/load_balancing/service_discovery.rb:139:in `nameserver_ip' - from gitlab/database/load_balancing/service_discovery.rb:130:in `resolver' - from gitlab/database/load_balancing/service_discovery.rb:103:in `addresses_from_dns' - from gitlab/database/load_balancing/service_discovery.rb:68:in `refresh_if_necessary' - from gitlab/database/load_balancing/service_discovery.rb:41:in `block (2 levels) in start' - from gitlab/database/load_balancing/service_discovery.rb:38:in `loop' - from gitlab/database/load_balancing/service_discovery.rb:38:in `block in start' -Gitlab::Database::LoadBalancing::ServiceDiscovery::UnresolvableNameserverError: could not resolve localhost - from gitlab/database/load_balancing/service_discovery.rb:144:in `rescue in nameserver_ip' - from gitlab/database/load_balancing/service_discovery.rb:138:in `nameserver_ip' - from gitlab/database/load_balancing/service_discovery.rb:130:in `resolver' - from gitlab/database/load_balancing/service_discovery.rb:103:in `addresses_from_dns' - from gitlab/database/load_balancing/service_discovery.rb:68:in `refresh_if_necessary' - from gitlab/database/load_balancing/service_discovery.rb:41:in `block (2 levels) in start' - from gitlab/database/load_balancing/service_discovery.rb:38:in `loop' - from gitlab/database/load_balancing/service_discovery.rb:38:in `block in start' -``` - - -The config is: - -``` - load_balancing: {""hosts"":[],""discover"":{""record"":""replica.patroni.service.consul."",""nameserver"":""localhost"",""port"":8600}} -``` - -The error suggests this local nameserver isn't able to resolve that record.",1.0 -17152180,2019-01-07 23:37:03.287,Install and configure GitLab application for Geo,"Now that https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5467 is closed, we have a working environment. The next step is to get the application set up in such a way that it can be automatically updated along with production, so they don't diverge. - -Before we can enable Geo (#5468), this work needs to be done",4.0 -17152036,2019-01-07 23:26:47.884,Create @ groups to reduce gl-infra spam,"Currently, folks are using @gitlab\-com/gl\-infra to draw attention to all infrastructure issues. This is a very large group, and there are a lot of these alerts. Not everybody needs to see them. - -I propose splitting out these types of tags into smaller groups. It would be easier for everyone to maintain focus if they could only be tagged on issues that they have an interest in. - -We could start by splitting out the 3 new SRE groups, or we could create one for each functional area.",2.0 -17151849,2019-01-07 23:09:08.294,Chef automation breaks when deleting files,"When deleting a file from the chef repo, knife tries to push the deleted file to the chef server. This causes the whole pipeline to fail. - -If there are other changed files, they will not be uploaded to the repo, because just doing another commit won't work. Only changed files are pushed, so on the subsequent commit, that file will no longer be changed. 
- -We need to filter out deleted files from the list of changed files to upload to the chef server. - -This MR addresses the issue: -https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/503",1.0 -17149085,2019-01-07 19:43:20.478,Fluentd repos are broken on gitter hosts,"While updating the package cache on gitter hosts today, all returned the following error - -``` -Reading package lists... -W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.treasuredata.com trusty InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 901F9177AB97ACBE - -W: GPG error: https://deb.nodesource.com trusty InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1655A0AB68576280 -W: Failed to fetch http://packages.treasuredata.com/2/ubuntu/trusty/dists/trusty/InRelease - -W: Some index files failed to download. They have been ignored, or old ones used instead. -``` - -This is due to an update to the GPG key for fluentd package repositories. We need to download and import the new key (reference the manual process [here](https://www.fluentd.org/blog/update-gpg-key-for-td-agent)) - ---- - -Some discussion, https://gitlab.slack.com/archives/CB3LSMEJV/p1546890404268400",3.0 -17092334,2019-01-05 00:04:58.890,Ensure integrity of terraform changes before apply/merge,"[RCA](#5813) corrective action: - -> Use [Atlantis](https://runatlantis.io) for automating and queuing terraform deployments - -We should implement [Atlantis](https://runatlantis.io) for distributed locking/queuing across merge requests, and incorporate that into the work for &9. - -The description for this issue was originally `Setup Atlantis for Terraform deployments on ops instance`. More fundamentally, this issue is intended to ensure the integrity of all terraform changes before running apply and merging to master. Utilizing Atlantis (or Terraform Enterprise) would be one way of implementing a FIFO queue for merge requests, but we may also be able to leverage [merge trains](https://docs.gitlab.com/ee/ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.html) within the GitLab product, itself.",5.0 -17071059,2019-01-04 00:38:30.553,Clean up orphaned resources in Gitter environment,"The following DNS entries in the `prod.gitter` [zone](https://console.aws.amazon.com/route53/home?region=us-east-1#resource-record-sets:ZLV6X5P28OJXS) reference non-existent resources; we should ensure that we have any/all associated infrastructure removed from terraform and configuration cleaned up from ansible, as well as removing any corresponding DNS records/resources in the beta environment. - -``` -consul-01.prod.gitter -consul-02.prod.gitter -consul-03.prod.gitter -master-01.prod.gitter -master-02.prod.gitter -master-03.prod.gitter -minion-01.prod.gitter -minion-02.prod.gitter -packer.prod.gitter -redis-03.prod.gitter -vpn-01.prod.gitter -```",1.0 -17070459,2019-01-03 23:09:11.483,Remove unused EBS volumes in Gitter AWS account,"A recent review of EBS volumes showed the following 13 volumes as unused/unattached. If they are no longer needed, we should delete the volumes. 
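Before deleting anything, the list of unattached volumes can be re-verified from the CLI (a sketch; the region and credentials profile are assumptions):

```
# Volumes in the "available" state are not attached to any instance
aws ec2 describe-volumes \
  --profile gitter --region us-east-1 \
  --filters Name=status,Values=available \
  --query 'Volumes[].{Id:VolumeId,SizeGiB:Size,Created:CreateTime}' \
  --output table
```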
- -![Screen_Shot_2019-01-03_at_3.01.42_PM](/uploads/4d4017beb5d23bd8b0158aff30f37ece/Screen_Shot_2019-01-03_at_3.01.42_PM.png) - -/cc @MadLittleMods @andrewn",1.0 -17070164,2019-01-03 22:34:15.693,Setup NAT Gateways for all Gitter AZs,"As a follow-up to #5492 and gitlab-com/gl-infra/production#559, we introduced a NAT gateway for egress from internal hosts within the Gitter environment, instead of the legacy VPN which also acted as a NAT instance. - -The new gateway is currently serving traffic for all subnets, but is susceptible to disruption if Amazon suffers an outage of the Availability Zone where the NAT Gateway is provisioned, and/or traffic is disrupted between AZs. - -Now that the issue from gitlab-com/gl-infra/production#559 has been mitigated, we need to shore up the infrastructure, and provision additional NAT gateways in each availability zone, and associate the local private subnets in those AZs.",2.0 -17069985,2019-01-03 22:26:45.950,Setup autoscaling for Gitter bastion,"As a follow-up to #5492 and gitlab-com/gl-infra/production#559, we introduced a bastion host for access to the Gitter environment, instead of the legacy VPN. The new bastion is a single instance, with no redundancy for fault-tolerance or HA as a quick mitigation to gitlab-com/gl-infra/production#559, but now we need to shore up the infrastructure. - -1. [ ] Setup a Network Load Balancer for SSH traffic to the bastion node -1. [ ] Setup an Autoscaling group for bastion nodes -1. [ ] Setup a Launch Configuration / Launch Template for bastion nodes -1. [ ] Provision a deterministic host key on bootstrap for all bastion nodes (allows clients to rely on strict host key checking)",5.0 -17061569,2019-01-03 18:18:03.748,Create more file nodes,"As is customary, we need to create 4 more storage nodes. We usually do this at 60% capacity and we're WAY farther behind than we usual as they are at 80%+ at this point. Thus, I'd say this is pretty urgent. - -cc/ @dawsmith ",1.0 -17044047,2019-01-03 11:06:25.955,Fix statement timeout spikes related to GitHub importer jobs,This is the infra issue to track work on https://gitlab.com/gitlab-org/gitlab-ce/issues/54270.,3.0 -17025630,2019-01-02 14:45:20.640,Transition temporary indexes into release,"This is to track the work being done on https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23188/diffs to be able to remove temporary indexes we added in production. Once the CE MR is released, we can execute https://gitlab.com/gitlab-com/gl-infra/production/issues/524 and remove the temporary indexes.",2.0 -17018615,2019-01-02 09:53:49.409,Postgres backup restores fail,"`gitlab-restore/postgres-gprd` is not checking in with deadmanssnitch anymore (since at least Dec 27). 
- -See https://deadmanssnitch.com/snitches/178d5bf474 and notifications in Slack: https://gitlab.slack.com/archives/C3NBYFJ6N/p1546387218020400",3.0 -17018476,2019-01-02 09:47:35.423,Database Reviews,"* [x] @NikolayS -> @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8497#note_124026675 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8927#note_126726752 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8669#note_126158494 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19740#note_125938200 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23098#note_125777941 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24056#note_128145058 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24042#note_127617989 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8497#note_127501661 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/issues/51854#note_127186345 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8949#note_127338816 -* [x] @abrandl trigram https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23952#note_128130394 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/18425#note_126834567 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8961 -* [x] @abrandl Thu https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21499 -* [x] @abrandl Thu https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2694 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8988#note_128739113 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24119 -* [x] @abrandl binary join https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8977#note_128700496 -* [x] @abrandl trigram https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23952#note_128621830 -* [x] @abrandl LB https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8961 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21499 with https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7238 -* [x] @NikolayS -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24047#note_128914394 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8949#note_128914150 -* [x] @abrandl rather heavy migration https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8669 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/18425/diffs#note_129308895 -* [ ] Fixtures https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8240#note_129069979 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24144 -* [x] High risk https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2694#note_144679 -* [x] N+1 https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2480 -* [x] @abrandl DB load balancing https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9071#note_130145897 - -No or later milestone: - -* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23464#note_126871390 -* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/issues/54643#note_125483243",0.0 -17009034,2019-01-01 15:43:59.406,Set up DB replica for analytics pulls into EDW,"Organization: -1. In this issue, we want to setup access to the archive replica and configure it such that it can be used from the ELT loads -1. 
In a related issue https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5970, we are going to provide network-level access to said replica for the ELT runner jobs (with appropriate firewall rules). - -Discussion summary: -1. The ELT load will only need read-only permissions (hence a replica is fine). -1. No strict SLA guarantees regarding uptime/availability/performance. -1. Queries from the ELT load are expected to finish within 5 minutes. Statement timeouts will be enforced accordingly. - - - -------- - - -Our current method for getting updated production data into our warehouse is as follows: - -* Update pseudonymizer config if possible https://gitlab.com/gitlab-org/gitlab-ee/blob/master/config/pseudonymizer.yml -* Ping someone on production team to run the pseudonymizer (usually Stan) - * Hope that nothing fails or bombs out writing CSVs to GCS -* Import generated CSVs from object storage into Snowflake - -This presents a number of challenges, not the least of which is precious time from a member of the production team every time we want updated data. - -Reading through https://about.gitlab.com/handbook/engineering/infrastructure/database/disaster_recovery.html and https://gitlab.com/gitlab-restore/postgres-gprd it seems like we should be able to set up an isolated replica that is updated on a regular basis with production data. (We do this regularly for version, customer, and license dbs https://gitlab.com/gitlab-restore/version.gitlab.com#vpc-setup) We don't need up to the minute updates, delays of 1-2 weeks is fine. - -If we follow the same model where only our project's runner is whitelisted for access I think it'll simplify the security footprint and risk. We'd still have an explicit list of fields we'd be pulling that security would approve for updates and increased data pulls. - -My time request is that something be stood up by the end of January. We've got some OKRs related to modeling of the dotcom data. - -@stanhu does this seem reasonable? I know it'd save you some time. - -@dawsmith thoughts on this? Is this something @yguo could setup? - -@jeremy @tszuromi just FYI - I'm trying to close the time between updates for production data. - -cc @tlapiana",2.0 -17001284,2018-12-31 15:13:32.027,Investigate solutions to vacuuming our database prior to failover,"During a recent failover https://gitlab.com/gitlab-com/gl-infra/production/issues/637, patroni-04 was elected leader. Upon becoming the leader, we received an alert that we've exceeded our thresholds for Dead Tuples. - -![image](/uploads/70b247f384080c84eaa52d8630ad98ee/image.png) - -This node started off with high percentages of dead tuples. - -Use this issue to investigate and resolve the following questions: -1. Why do secondaries not have the same data or analyze table information for dead tuples as the primary? -1. Should this alert be considered high priority? -1. If this is a problem what can be done prior to failovers to ensure potentially elected secondaries do not have a high percentage of dead tuples?",2.0 -17001129,2018-12-31 14:54:38.648,Improve WAL-E alert for 'WALEBackupDelayed',"During our recent failover https://gitlab.com/gitlab-com/gl-infra/production/issues/637, the alert `WALEBackupDelayed` fired. This was technically a false alarm. Wal-E backups continued to work, the data that prometheus was looking at was correct, but fired when another server had begun processing uploads to S3. 
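One way to see which hosts were reporting the metric at the time (and hence why the alert flapped when another server took over the uploads) is to query it directly from the Prometheus API (a sketch; whether the series keeps per-instance labels depends on how the recording rule is defined):

```
# Show every series currently exported for the WAL-E backup age metric
curl --silent --get 'https://prometheus.gprd.gitlab.net/api/v1/query' \
  --data-urlencode 'query=gitlab_com:last_wale_backup_age_in_seconds' \
  | jq '.data.result[] | {labels: .metric, age_seconds: .value[1]}'
```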
- -![image](/uploads/bedcb6f07cc4ab04bf1e82e1f74a197b/image.png) - -https://prometheus.gprd.gitlab.net/graph?g0.range_input=2d&g0.expr=gitlab_com%3Alast_wale_backup_age_in_seconds%20&g0.tab=0 - -Improve this alert to reduce alert fatigue during incidents. This alert introduced noise into an already stressful situation.",1.0 -16960452,2018-12-28 21:11:59.389,Doing graceful DB failovers under the new Patroni setup,"Right now Patroni failovers can result in 50x errors during the short period of demoting a DB a promoting another. - -We can leverage the fact that all client connections go through pgbouncer and that pgbouncer can be [paused](https://pgbouncer.github.io/usage.html#pause-db) from accepting new connections. However, with the current setup we can't utilize the pausing feature. Consider this scenario: - -1. `patroni-01` is a master, we want to failover to `patroni-02` -1. We pause pgbouncer on `patroni-01`, existing client connections are completed and closed -1. Clients are trying to establish new connections, they are still forwarded to `patroni-01` by the ILB but pgbouncer isn't accepting any yet -1. We do a failover to `patroni-02` -1. We resume (un-pause) pgbouncer on `patroni-01` -1. Now `patroni-01` has connections that are trying to execute read/write queries, they will fail - -We need to pause connections at the level of the ILB before they are forwarded to a master, but this doesn't seem possible. I tried marking all of Patroni ILB backends as unhealthy to see if new connections will fail immediately or will wait for a backend to become healthy, but the ILB forwarded the connection to one of the replicas! (even when its backend is marked as unhealthy) - -So I think we may need to have a dedicated pgbouncer node in front of the ILB, so we can control the connection flow at a higher level. pgbouncer on such node will be configured with the ILB FQDN, so we don't run into all the issues of using Consul watchers to update the databases configuration. We can go one step further and drop the ILB altogether in favor of using Consul DNS record for the master node (assuming https://gitlab.com/gitlab-com/gl-infra/production/issues/633 has made it to production)",3.0 -16957200,2018-12-28 17:39:33.280,Large file in generic-sv-with-group terraform module,"There is a 42M file in the [generic-sv-with-group](https://gitlab.com/gitlab-com/gl-infra/terraform-modules/google/generic-sv-with-group) terraform module at: - -`.terraform/plugins/darwin_amd64/terraform-provider-google_v1.18.0_x4` - -On a fast connection, it doesn't make much difference - but with the slow internet provider problems that I have been having, it's been preventing me from running `tf-init`. I suspect that it is slowing down other people as well, and they're not noticing because it's not failing. - -I didn't delete it because I wasn't sure if it was there for a reason. - -Does this need to be there? And can we delete it from the repo? 
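Assuming it is not needed, a sketch of the cleanup would be to drop the plugin cache from tracking and ignore it going forward:

```
# Stop tracking the local terraform plugin cache and ignore it in future commits
git rm -r --cached .terraform/
echo ".terraform/" >> .gitignore
git add .gitignore
git commit -m "Remove .terraform plugin cache and ignore it"
```

Note that this only removes the binary from future revisions; it stays in history, so clones only get faster if history is rewritten or the module repo is re-created.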
- -As a side note, these repo's probably all need .gitignore and README.md files",1.0 -16954549,2018-12-28 14:59:31.903,prometheus gprd servers getting low on space,"https://gitlab.slack.com/archives/C12RCNXK5/p1546004197013500 - -Nodes 01 and 02 are using 91% and 86% space of the `data` disk mounted to `/opt/prometheus` -",1.0 -16940106,2018-12-27 22:43:12.861,Unable to run terraform against GCP when S3 is unavailable,"I had a strange issue today where I was working on the DR environment, running terraform commands to configure resources in GCP and for some reason I lost connectivity to AWS and S3 from my location. I could ping Amazon addresses, but could not connect to either S3 or the AWS console. All of my other connectivity to GCP and elsewhere was unaffected. - -This should not have been a problem, since I was only working in GCP, but since the `.tfstate` file lives in S3, I could not use terraform at all. - -I'd like to propose moving the `.tfstate` files to Google Storage. It would be nice to reduce the dependencies for critical components like this. If this happened during a high pressure incident, it would be extremely frustrating. - -Does anyone know of any reason why we shouldn't do this? - -cc/ @gitlab\-com/gl\-infra",1.0 -16932917,2018-12-27 13:35:03.678,Prometheus in the staging environment crashes,"On the night of December 27th, the early morning, prometheus kept crashing in the staging environment. We were alerted twice for both prometheus-01 and for prometheus-02 that we are restarting frequently. - -![image](/uploads/7aa266e29f6af94d4c93a8ead215666a/image.png) - -Source: https://prometheus.gstg.gitlab.net/graph?g0.range_input=12h&g0.expr=changes(process_start_time_seconds%7Bjob%3D~%22prometheus.*%22%7D%5B30m%5D)&g0.tab=0 - -Both servers appear to be suffering from a damaged wal file: -``` -2018-12-27_13:23:09.53663 level=warn ts=2018-12-27T13:23:09.536566544Z caller=wal.go:116 component=tsdb msg=""last page of the wal is torn, filling it with zeros"" segment=/opt/prometheus/prometheus/data/wal/00032880 -2018-12-27_13:25:36.46930 level=warn ts=2018-12-27T13:25:36.469178923Z caller=head.go:434 component=tsdb msg=""unknown series references"" count=536346 -``` - -Both servers have go stack traces in their log history: -``` -2018-12-27_13:23:21.56483 goroutine 3517 [select]: -2018-12-27_13:23:21.56484 net/http.(*persistConn).writeLoop(0xc16d30afc0) -2018-12-27_13:23:21.56484 /usr/local/go/src/net/http/transport.go:1885 +0x113 -2018-12-27_13:23:21.56484 created by net/http.(*Transport).dialConn -2018-12-27_13:23:21.56485 /usr/local/go/src/net/http/transport.go:1339 +0x966 -2018-12-27_13:23:21.56485 -2018-12-27_13:23:21.56485 goroutine 3391 [select]: -2018-12-27_13:23:21.56486 net/http.(*persistConn).writeLoop(0xc16d2a7c20) -2018-12-27_13:23:21.56486 /usr/local/go/src/net/http/transport.go:1885 +0x113 -2018-12-27_13:23:21.56486 created by net/http.(*Transport).dialConn -2018-12-27_13:23:21.56487 /usr/local/go/src/net/http/transport.go:1339 +0x966 -2018-12-27_13:23:21.56487 -2018-12-27_13:23:21.56487 goroutine 3496 [select]: -2018-12-27_13:23:21.56488 net/http.(*persistConn).writeLoop(0xc16b3af680) -2018-12-27_13:23:21.56488 /usr/local/go/src/net/http/transport.go:1885 +0x113 -2018-12-27_13:23:21.56488 created by net/http.(*Transport).dialConn -2018-12-27_13:23:21.56489 /usr/local/go/src/net/http/transport.go:1339 +0x966 -2018-12-27_13:23:21.56489 -2018-12-27_13:23:21.56489 goroutine 3515 [select]: -2018-12-27_13:23:21.56490 net/http.(*persistConn).writeLoop(0xc16d024360) 
-2018-12-27_13:23:21.56490 /usr/local/go/src/net/http/transport.go:1885 +0x113 -2018-12-27_13:23:21.56490 created by net/http.(*Transport).dialConn -2018-12-27_13:23:21.56490 /usr/local/go/src/net/http/transport.go:1339 +0x966 -2018-12-27_13:23:21.56491 -2018-12-27_13:23:21.56491 goroutine 3523 [select]: -2018-12-27_13:23:21.56491 net/http.(*persistConn).writeLoop(0xc16d024240) -2018-12-27_13:23:21.56491 /usr/local/go/src/net/http/transport.go:1885 +0x113 -2018-12-27_13:23:21.56492 created by net/http.(*Transport).dialConn -2018-12-27_13:23:21.56492 /usr/local/go/src/net/http/transport.go:1339 +0x966 - -2018-12-27_13:23:21.56493 goroutine 3488 [IO wait]: -2018-12-27_13:23:21.56493 internal/poll.runtime_pollWait(0x7f73558b98c0, 0x72, 0xc16b498a40) -2018-12-27_13:23:21.56493 /usr/local/go/src/runtime/netpoll.go:173 +0x66 -2018-12-27_13:23:21.56494 internal/poll.(*pollDesc).wait(0xc16d448c18, 0x72, 0xffffffffffffff00, 0x1e9d680, 0x2e045a8) -2018-12-27_13:23:21.56495 /usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9a -2018-12-27_13:23:21.56495 internal/poll.(*pollDesc).waitRead(0xc16d448c18, 0xc16d4ba000, 0x1000, 0x1000) -2018-12-27_13:23:21.56495 /usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d -2018-12-27_13:23:21.56496 internal/poll.(*FD).Read(0xc16d448c00, 0xc16d4ba000, 0x1000, 0x1000, 0x0, 0x0, 0x0) -2018-12-27_13:23:21.56497 /usr/local/go/src/internal/poll/fd_unix.go:169 +0x179 -2018-12-27_13:23:21.56497 net.(*netFD).Read(0xc16d448c00, 0xc16d4ba000, 0x1000, 0x1000, 0xc16d4ad200, 0x4, 0x0) -2018-12-27_13:23:21.56498 /usr/local/go/src/net/fd_unix.go:202 +0x4f -2018-12-27_13:23:21.56499 net.(*conn).Read(0xc169185c58, 0xc16d4ba000, 0x1000, 0x1000, 0x0, 0x0, 0x0) -2018-12-27_13:23:21.56500 /usr/local/go/src/net/net.go:177 +0x68 -2018-12-27_13:23:21.56500 net/http.(*persistConn).Read(0xc16d4ad200, 0xc16d4ba000, 0x1000, 0x1000, 0xc16b498c70, 0x404c85, 0xc16d49a360) -2018-12-27_13:23:21.56501 /usr/local/go/src/net/http/transport.go:1497 +0x75 -2018-12-27_13:23:21.56502 bufio.(*Reader).fill(0xc16cdee6c0) -2018-12-27_13:23:21.56502 /usr/local/go/src/bufio/bufio.go:100 +0x106 -2018-12-27_13:23:21.56503 bufio.(*Reader).Peek(0xc16cdee6c0, 0x1, 0x0, 0x0, 0x1, 0xc093b5ea80, 0x0) - -``` - -As of the time of this writing, https://prometheus.gstg.gitlab.com throws a 502, despite both servers having prometheus ready to receive web requests... - -Due to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5823, currently these pages for the staging environment will wake up the on-call person. - -/cc @bjk\-gitlab -/cc @gitlab\-com/gl\-infra",1.0 -16931209,2018-12-27 11:43:11.695,Create a wrapper command similar to `gitlab-ctl pgb-console` for the Patroni cluster,,2.0 -16912805,2018-12-26 13:29:37.616,packages.gitlab.com has breached 10% free of space,"The packages.gitlab.com is running low on the 16TB volume. - -Use this issue to find the problem and resolve it.",3.0 -16888035,2018-12-24 22:42:47.733,Allocate more storage on about-src.gitlab.com,"The `/home` filesystem on `about-src.gitlab.com` filled up over the weekend, preventing users from updating the website. I managed to free up ~600MB of old/unnecessary files, but will need further assistance from someone more familiar with this system on either a) which files under `/home/gitlab-runner` can be deleted to free up space, b) extending a volume in Azure, or c) both. 
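For option (a), a quick way to see what is actually consuming the space under `/home/gitlab-runner` (a sketch; run as root on about-src.gitlab.com):

```
# Overall usage of the filesystem, then the largest directories two levels deep
df -h /home
du -xh --max-depth=2 /home/gitlab-runner 2>/dev/null | sort -rh | head -20
```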
- -/cc @gitlab\-com/gl\-infra",3.0 -16866440,2018-12-22 22:57:59.958,license.gitlab.com restarting every 30 minutes,"This looks like Chef restarting it for some reason: - -``` -root@license:/var/log/upstart# grep ""master process ready"" license-gitlab-com.log -I, [2018-12-22T07:08:59.159445 #27739] INFO -- : master process ready -I, [2018-12-22T07:40:41.871684 #29583] INFO -- : master process ready -I, [2018-12-22T08:15:51.267262 #31442] INFO -- : master process ready -I, [2018-12-22T08:50:59.267311 #832] INFO -- : master process ready -I, [2018-12-22T09:21:37.651062 #2663] INFO -- : master process ready -I, [2018-12-22T09:54:35.334625 #4508] INFO -- : master process ready -I, [2018-12-22T10:27:24.003285 #6353] INFO -- : master process ready -I, [2018-12-22T10:58:08.655725 #8164] INFO -- : master process ready -I, [2018-12-22T11:30:36.299489 #10008] INFO -- : master process ready -I, [2018-12-22T12:05:06.211216 #11835] INFO -- : master process ready -I, [2018-12-22T12:40:30.717682 #13675] INFO -- : master process ready -I, [2018-12-22T13:14:45.559951 #15540] INFO -- : master process ready -I, [2018-12-22T13:47:28.384103 #17386] INFO -- : master process ready -I, [2018-12-22T14:20:29.356144 #19244] INFO -- : master process ready -I, [2018-12-22T14:51:56.199960 #21080] INFO -- : master process ready -I, [2018-12-22T15:22:52.262106 #22910] INFO -- : master process ready -I, [2018-12-22T15:56:28.324569 #24759] INFO -- : master process ready -I, [2018-12-22T16:30:37.367573 #26623] INFO -- : master process ready -I, [2018-12-22T17:05:39.551101 #28488] INFO -- : master process ready -I, [2018-12-22T17:36:18.146337 #30317] INFO -- : master process ready -I, [2018-12-22T18:10:18.243692 #32175] INFO -- : master process ready -I, [2018-12-22T18:45:39.000223 #1579] INFO -- : master process ready -I, [2018-12-22T19:18:47.696688 #3447] INFO -- : master process ready -I, [2018-12-22T19:53:59.259197 #5287] INFO -- : master process ready -I, [2018-12-22T20:26:07.063813 #7141] INFO -- : master process ready -I, [2018-12-22T20:58:10.156486 #9312] INFO -- : master process ready -I, [2018-12-22T21:32:11.070613 #11165] INFO -- : master process ready -I, [2018-12-22T22:04:36.984267 #13026] INFO -- : master process ready -I, [2018-12-22T22:36:17.337633 #15085] INFO -- : master process ready -``` - -/cc: @ctbarrett",2.0 -16861061,2018-12-22 12:06:57.897,Move terraform modules to the ops instance,"[RCA](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5813) corrective action: - -> failed due to inaccessible terraform modules not hosted in ops instance - -We should move all modules to the ops instance and make them the source of truth for push and pull. Like other repositories on ops.gitlab.net we can setup a push mirror so they are mirrored on gitlab.com.",2.0 -16860285,2018-12-22 10:49:00.706,"Enable deletion protection flag for gitaly, pages and share","https://www.terraform.io/docs/providers/google/r/compute_instance.html#deletion_protection - -these servers are either single points of failure or can cause disruption when deleted.",2.0 -16855627,2018-12-21 22:23:06.460,RCA for 2018-12-21 Gitaly Outage,"**Please note:** if the incident relates to sensitive data, or is security related consider -labeling this issue with ~security and mark it confidential. -*** - -## Summary - -A brief summary of what happened. Try to make it as executive-friendly as possible. - -1. Service(s) affected : Gitaly Storage Nodes -2. Team attribution : Infrastructure -3. 
Minutes downtime or degradation : 35 Minutes - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? All customer and internal requests to Git data nodes were unable to be serviced for 34 minutes. -- Who was impacted by this incident? External Customers, CI jobs -- How did the incident impact customers? See impact above -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? All -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? PagerDuty/Slack Alerts -- Did alarming work as expected? Yes -- How long did it take from the start of the incident to its detection? Approximately 10 minutes -- How long did it take from detection to remediation? 30 minutes -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) -Yes - terraform module repos that were only on .com delayed us in running terraform to stand the compute node back up until a local copy was used. - -## Timeline - -On the production issue. - -## Root Cause Analysis - -While working on our Disaster Recovery project and region, an SRE on our team was unable to use Terraform to properly remove some nodes in the the DR project and region. They chose to go to the GCP console to perform the deletes to get the Terraform state back to good. While doing this, they searched for some nodes and per the illustration below had their project switched from gitlab-dr to gitlab-production. It was not clear that the project switch had been made and they proceeded to remove the gitaly compute instances (file-[1-24]) in the gitlab-production project. At that point, monitoring started to alert us to the problem and the team started to restore the deleted compute nodes. - -### Illustration - -When attempting to search for something in the GCP search bar which has a partial match, one would expect that pressing enter here would execute a search for a partial IP address, which in this case returns no results. - -![Search](/uploads/e2f6e884da37a81f47cd38613abfffba/Search.png) - -Pressing enter results in this: - -![Changed](/uploads/48988c5767305f64a109f595f342a560/Changed.png) - -Which appears to have searched and found nothing. However, it did not search - the first line in the dropdown above was highlighted - so the project changed. At this point, going and deleting the DR file nodes resulted in the gitlab-production nodes being deleted rather than the gitlab-dr nodes. - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -1. Quickness of team to jump on a zoom and start to mitigate the issue. -1. We were able to restore the affected infrastructure with no data loss - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? 
-- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -1. Look at different node names for disaster recovery compute and storage node names vs production. -2. Mirror all Terraform repos (environments and modules) on ops.gitlab.net to prevent issues with access when GitLab.com is down. -3. Look at further enhancing procedures for any deletes in production requiring two sets of eyes and ways to prevent needing to do any interaction with the cloud console. Further automation to ask and double check before performing the delete. - - -## Corrective actions - -1. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5815 : Add deletion protection for gitaly servers -2. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5816 : Move terraform modules to the ops instance -3. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5868 : Start practicing incident response -4. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5867 : Create list of incident response scenarios -5. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5869 : Setup Atlantis for Terraform deployments on ops instance -6. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5945 : Change to TF process to prevent conflicts that sent us to Cloud console - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -16855222,2018-12-21 21:06:03.955,Put packages.gitlab.com behind a CDN,"Discussed using fastly here - https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4452#note_88966038 - -But maybe we should consider using cloudfront for now? In any case since we just disabled transfer acceleration this may be a semi-urgent change though it's not clear yet what effect this will have on download speed.",5.0 -16841326,2018-12-21 11:23:32.039,Beefier GitLab shared runners,"In https://gitlab.com/gitlab-com/www-gitlab-com/issues/2342#note_84878793, @rspeicher noted that if we increased the power of our runners for www-gitlab-com builds, we'd save a lot of time for those builds to happen. - -The machines have changed since that test, but mostly as part of the GCP switch rather than any dramatic change in specs; @SteveAzz and @tmaczukin confirmed (in https://gitlab.slack.com/archives/C0SFP840G/p1545226526151000?thread_ts=1545226152.150800&cid=C0SFP840G) that these are n1-standard-2. - -In https://gitlab.com/gitlab-com/www-gitlab-com/issues/2342#note_126680858, @northrup said: - -> We have a pool -that is in GCP for our use that we pay for, it makes no sense to have the -lowest form of box which takes more time because I think the combined -wasted time waiting on CI in employee pay is more than what we’d pay to -just spin up and down higher CPU count boxes. - -And I agree! Can we beef these up, please? 
Even having an eight-core machine just for the website's master pipeline would be great, but if we could extend that across all GitLab projects it would probably help a lot.",1.0 -16819225,2018-12-20 14:50:03.974,Update handbook db arch diagram with patroni info,I was looking at https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#database-architecture and see that we still have references to repmgr there - seems like we should update with patroni info,2.0 -16800610,2018-12-19 22:08:58.336,sentry.gitlab.net is slow (Sentry error reporting),"https://sentry.gitlab.net/ is being really slow and I can barely load an issue there - -I even got this error once, - -> Frig -> -> Something went horribly wrong rendering this page. We use a decent error reporting service so this will probably be fixed soon. Unless our error reporting service is also broken. That would be awkward. Anyway, we apologize for the inconvenience. -> -> `TypeError: Cannot read property 'id' of undefined` - -![](https://i.imgur.com/FaVBmr8.png) - ---- - -Are there dashboards for Sentry CPU, etc? - ---- - - - Sentry error reporting",1.0 -16795683,2018-12-19 16:55:14.253,Registry nodes scale down,"Recently we spun up extra nodes due to combat a memory leak, but all nodes are still leaking memory, so we've created a job to daily restart these services. - -Since this problem is systematic and not dependent on scale, let's rid of the extra nodes. - -References: -* https://ops.gitlab.net/gitlab-com/gl-infra/registry-restarter/pipeline_schedules -* https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/640/diffs",1.0 -16787443,2018-12-19 11:54:40.871,Shutdown postgres and pgbouncer nodes in gstg/gprd,"Now that we're using Patroni in both environments for over a week, there's no need to keep these machines up and running. Just removing their Terraform modules should be enough. We can also take a snapshot of the production disks just in case.",1.0 -16784839,2018-12-19 10:10:32.944,Elasticsearch disk watermark monitoring,We should observe the storage capacity usage on our elastic search cluster and (in addition to the email notifications we get from Elastic) add alerts when reaching the low-watermark on a node.,3.0 -16784424,2018-12-19 09:53:52.672,Add runbook instructions for recovering the Elastic Search cluster,Add runbook instructions for recovering the Elastic Search cluster in case it went out of storage.,2.0 -16784239,2018-12-19 09:46:37.044,Evaluate our log retention needs,"Aggregating logs at a cloud service provider like Elastic is bound to costs and eventual storage/bandwidth capacity limits. To keep costs at a reasonable amount we should do some research: - -* what is the bandwidth/storage limit of Elastic? -* What is the current rate of logs send to Elastic? What do we expect in the future? -* which types of logs do we need in Elastic (e.g. do we need debug logs)? -* How long do we need to keep each of the log types? -* Which log types are we storing currently? -* How can we reduce the amount of logs send to Elastic? -* Do we need to look for alternatives to Elastic cloud? - -(related to #5757)",3.0 -16783625,2018-12-19 09:20:08.897,Increase Elastic Cloud storage watermarks,"Storage is filling up on our elastic cloud. When reaching the storage low-watermark on a node, shards will be moved to another node but if all nodes have reached the low-watermark, the cluster will stop storing any data. 
As per suggestion from Elastic (https://gitlab.com/gitlab-com/gl-infra/production/issues/616#note_124839760) we should increase the low-watermark to just leave about 150gb free, so we can make better use of storage capacity.",2.0 -16782710,2018-12-19 08:44:18.666,Canary instances are reporting in prometheus as `stage=main`," -https://prometheus.gprd.gitlab.net/graph?g0.range_input=1h&g0.expr=count(%7Bstage%3D%22main%22%2C%20fqdn%3D~%22.*cny.*%22%7D)%20by%20(job%2C%20fqdn%2Cstage)&g0.tab=1 - -``` -{fqdn=""fe-cny-01-lb-gprd.c.gitlab-production.internal""} 3 -{fqdn=""fe-cny-02-lb-gprd.c.gitlab-production.internal""} 3 -{fqdn=""fe-cny-01-lb-gprd.c.gitlab-production.internal"",job=""node""} 471 -{fqdn=""fe-cny-01-lb-gprd.c.gitlab-production.internal"",job=""mtail""} 157 -{fqdn=""fe-cny-01-lb-gprd.c.gitlab-production.internal"",job=""fluentd""} 13 -{fqdn=""fe-cny-01-lb-gprd.c.gitlab-production.internal"",job=""process_exporter""} 84 -{fqdn=""fe-cny-02-lb-gprd.c.gitlab-production.internal"",job=""process_exporter""} 84 -{fqdn=""fe-cny-02-lb-gprd.c.gitlab-production.internal"",job=""node""} 471 -{fqdn=""fe-cny-02-lb-gprd.c.gitlab-production.internal"",job=""mtail""} 224 -{fqdn=""fe-cny-02-lb-gprd.c.gitlab-production.internal"",job=""fluentd""} -``` - -cc @T4cC0re @jarv",1.0 -16774215,2018-12-18 21:54:37.264,Terraform reports changes on all instances in `gstg`,"A recent unidentified change to the terraform repository has resulted in most/all instances in the fleet registering as tainted: - -``` -Plan: 104 to add, 91 to change, 102 to destroy. -```",1.0 -16772659,2018-12-18 20:11:21.694,Gitter OS Patching (beta),"## Summary -As an interim step towards updating Gitter AMI images (infrastructure#5755), follow-up to infrastructure#5492, and in support of production#620, we need to perform OS patching on all Gitter instances to establish timing and validate the steps listed in production#620. - -## Patch process -The patch process should at least be scripted, or possibly implemented via Ansible. The broad strokes will require running `apt-get update && apt-get upgrade`. On at least one instance, preferably one instance of each service/group, we will also need to script and validate the rollback process before proceeding to production. Finally, we need to consider service impact when orchestrating the change, by draining connections and detaching instances from load-balancer(s) wherever possible. - -/cc @gitlab\-com/gl\-infra @MadLittleMods",1.0 -16772426,2018-12-18 19:54:16.807,RCA: Pipelines slow & queued jobs,"## Summary - -Multiple users [reported](https://gitlab.slack.com/archives/C101F3796/p1545154906806500) slowness in launching pipeline jobs on private runners. 
- -Timing correlated with spikes in sidekiq queues -![sidekiq stats](https://dashboards.gitlab.net/render/d-solo/000000159/ci?refresh=5m&orgId=1&var-runner_type=All&var-runner_managers=All&var-cache_server=All&var-gl_monitor_fqdn=patroni-01-db-gprd.c.gitlab-production.internal&var-has_minutes=yes&var-hanging_droplets_cleaner=All&var-droplet_zero_machines_cleaner=All&var-runner_job_failure_reason=All&var-gitlab_env=gprd&var-jobs_running_for_project=0&panelId=85&from=1545143637349&to=1545158573520&width=1000&height=500&tz=America%2FLos_Angeles) - -Which also correlated with patch deployments to `cny` and later to `gprd` (yellow annotations) -![sidekiq queues with deployments](https://dashboards.gitlab.net/render/d-solo/RZmbBr7mk/gitlab-triage?refresh=30s&orgId=1&from=1545137784683&to=1545159384683&var-environment=gprd&var-prometheus=prometheus-01-inf-gprd&var-prometheus_app=prometheus-app-01-inf-gprd&var-backend=All&var-type=All&var-stage=main&panelId=5&width=1000&height=500&tz=America%2FLos_Angeles) - -Service(s) affected : ~""Service:CI Runners"" ~""Service:Sidekiq"" -Team attribution : -Minutes downtime or degradation : TBD - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Timeline - -2018-12-18 - -- 15:23 UTC - @jlenny [reported issues](https://gitlab.slack.com/archives/CB7P5CJS1/p1545146610059200) with shared runners [in slack](https://gitlab.slack.com/archives/C101F3796/p1545147179792500) -- 15:36 UTC - user reported [very slow slack notifications/emails and stalled pipelines](https://gitlab.zendesk.com/agent/tickets/110463) -- 15:46 UTC - user reported [hung pipelines and idle runners](https://gitlab.zendesk.com/agent/tickets/110464) -- 16:54 UTC - user reported [latency in pipeline jobs](https://gitlab.zendesk.com/agent/tickets/110472) -- 17:41 UTC - slowness [reported](https://gitlab.slack.com/archives/C101F3796/p1545154906806500) to production team via slack - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. 
While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -###Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys) - -",3.0 -16764686,2018-12-18 16:10:42.453,Fix customers.gitlab.com database backup and restore pipeline,,3.0 -16750974,2018-12-18 10:59:21.786,create periodic restart of registry with haproxy draining,"This is something we need to do for now to address the memory issue. -The restart will need to be coordinated carefully. My current though is to leverage our deploy tooling which already has haproxy draining logc. - -cc @hphilipps",2.0 -16736493,2018-12-17 23:10:37.023,Convert environments to associate branches with deployments instead of tags,"The next step for the environments project is to correct the design decision to use tags to track deployments. - -It makes much more sense to track a deployed environment with a branch, and use tags to track versions of the code. The original design was using tags, because they could overload meta information into the tag name, but there are more intuitive - and less cool - ways to get that information, so I’m going to convert it to the boring solution that’s easy to understand, and works the way the rest of the world works.",3.0 -16736470,2018-12-17 23:07:55.489,Change environments project to use single pipeline,"There is a design decision I don't like on the original project. They were working around a limitation in the CI pipelines by running multiple pipelines for each deploy. I can see why they did it, because you can only expand a variable once per pipeline. There are better ways to work around it though. 
- -I am converting it to use one pipeline per deploy, which means that we can do automated rollbacks.",3.0 -16735357,2018-12-17 22:41:57.704,Rotate secret_key for customers.gitlab.com,"Hi team, can you please change the [`secret_token`](https://gitlab.com/gitlab-cookbooks/cookbook-customers-gitlab-com/blob/master/attributes/default.rb#L11) that's stored in the chef vault? - -REF: https://gitlab.com/gitlab-org/customers-gitlab-com/issues/379#note_124194313",1.0 -16718673,2018-12-17 11:28:16.446,Database Reviews,"* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8637#note_124023965 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8497#note_124026675 -* [x] @NikolayS, @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23464#note_124871412 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8871#note_125248109 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8871 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23630#note_125506227 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8939 -* [ ] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8927#note_126726752 -* [ ] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8669#note_126158494 -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19740#note_125938200 -* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23098#note_125777941 -* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/issues/54643#note_125483243 - -Open reviews have been moved over to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5851.",3.0 -16671827,2018-12-14 16:33:05.830,Upgrade linux kernel in production environment,"This issue is to keep track of the planning, reviews and execution (via CR) to upgrade linux kernel from `4.10.0-1009` to `4.15.0-1015` in production environment. - -For past reference the following were completed for staging: -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5289 -- https://gitlab.com/gitlab-com/gl-infra/production/issues/607 - -Lessons learned from upgrading the kernel in staging: -- [ ] Revert the kernel on redis, sentinels, postgres, file-*, prometheus, alertmanager in `STAGING` -- [ ] Don't upgrade kernel for redis, sentinels, postgres, file-*, prometheus, alertmanager -- [ ] Include a dry-run functionality which would just print a list of hosts to be upgraded -- [ ] Link script in the production CR (once it is created) so it can be reviewed along the CR itself as well. ",8.0 -16646158,2018-12-13 23:18:20.769,Setup automated AMI builds and OS updates for Gitter,"Currently, we configure [unattended upgrades](https://gitlab.com/gitlab-com/gl-infra/gitter-infrastructure/blob/master/ansible/roles/base/tasks/packages.yml#L98-104) to install [security updates](https://gitlab.com/gitlab-com/gl-infra/gitter-infrastructure/blob/master/ansible/roles/base/files/etc/apt/apt.conf.d/50unattended-upgrades). This is great for a lower bar, but we also need a process to keep base images updated (packer pipelines) for minimal bootstrapping of new nodes in autoscaling groups, and we need to update to the latest LTS version of ubuntu across the fleet. - -1. [ ] Trigger packer builds for each role when Ansible code is updated -1. [ ] Trigger daily packer builds to bake in latest OS package updates (`apt-get update && apt-get upgrade`) -1. [ ] Implement scheduled scaling actions to cycle ASG nodes -1. 
[ ] Implement cleanup job/lambda for pruning old AMI images - -These are just some high-level ideas to prompt a more intensive review/design doc, followed by more specific implementation issues.",13.0 -16635745,2018-12-13 15:34:31.915,Enable Elasticsearch on ops.gitlab.net,"I suggest that we enable Elasticsearch on ops.gitlab.net for a number of reasons: - -1. We don't test/use Elasticsearch in a real environment. This provides an opportunity to do some basic testing. -2. Global search would be helpful in looking for content across cookbooks. -3. It will expose limitations with Geo support etc. - -@vsizov @mdelaossa @jarv Thoughts?",1.0 -16608848,2018-12-12 17:22:12.504,Fix links in Postgres Grafana Dashboards,"Since the Patroni migration, the Postgres Dashboards have undergone several adjustments, leading to several links pointing to non-existent Dashboards within Grafana. For example the links from the [Top Qeries Dashboard](https://dashboards.gitlab.net/d/000000242/postgresql-top-queries?orgId=1&var-environment=gprd&var-fqdn=patroni-01-db-gprd.c.gitlab-production.internal&var-datname=gitlabhq_production&var-user=gitlab&var-interval=6h&var-prometheus=prometheus-01-inf-gprd) to the [Query Drilldown Dashboard](https://dashboards.gitlab.net/d/000000237/postgresql-query-drill-down?orgId=1) for each of the top query IDs. They should be fixed.",2.0 -16602897,2018-12-12 14:14:25.120,chatops command for oncall does not show overrides,"It looks like when there are overrides for oncall they aren't shown. -![Screen_Shot_2018-12-12_at_3.13.57_PM](/uploads/ee6684e557b8069ee64013fdcdc5d165/Screen_Shot_2018-12-12_at_3.13.57_PM.png)",2.0 -16593367,2018-12-12 08:50:31.644,Make a pre check to verify ulimit are in the proper setup in all the services before launch in production,Make a pre-check to the maintenance template to verify ulimit are in the proper setup in all the services before launching in production,1.0 -16590390,2018-12-12 05:32:52.055,AWS Rate Limiting,"Our access to our AWS project was being rate limited by AWS because of excessive usage of the API / resource calls. - -It was to the point that it was even effecting the GUI console throwing ""Rate Exceeded"" errors. - -After investigation it was determined that some automation for `gitlab-review.app` was creating the excessive usage. There were also more than 10,000 DNS records in the `gitlab-review.app` domain. - -To remove the throttling without breaking _too_ many things I removed the role policy (`Review-app-ee-dns`) from the user (`review-app-ee-dns`) which removed access.",4.0 -18327550,2019-02-18 15:01:23.014,Create a blueprint for Process exporter to monitor PG IO,Create a blueprint for Process exporter to monitor PG IO,2.0 -18303242,2019-02-17 21:26:21.994,Consider moving gitlab-production to committed use discounting,"In https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6086, we investigated where we might be able to apply [committed use discounting](https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts) to our GCP compute spend. Two ideas emerged: - -1. [Apply CUDs to our CI runner fleet](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6238). -1. Move gitlab-production off [sustained use discounting](https://cloud.google.com/compute/docs/sustained-use-discounts) and on to committed use discounting (this issue). 
- -Since committed use discounts (CUDs) offer significantly deeper discounting than sustained use discounts (SUDs), this issue is to investigate the possible savings and action plan for this move. - -### Proposal - -* Identify the gitlab-production systems and compute we could move to CUDs, even if they're currently benefitting from SUDs. -* Estimate the additional discounting and possible savings. Evaluate whether or not it's worth pursuing based on the risk (we'd be committing to the spend). - -### Links/resources - -* [CloudPark.com](https://www.parkmycloud.com/blog/google-cloud-committed-use/) on Google CUDs",4.0 -18250633,2019-02-14 22:56:14.783,Chef does not complete due to broken td-agent,"`td-agent` is not starting on the DR database servers (and possibly other servers as well). This is causing chef to fail. - -The problem seems to be multiple versions of the `googleauth` gem. Deleting `0.8.0` allows `td-agent` to start, but the next chef run puts it back and it starts failing again. - -``` -/opt/td-agent/embedded/lib/ruby/site_ruby/2.4.0/rubygems/specification.rb:2280:in `check_version_conflict': can't activate googleauth-0.6.6, already activated googleauth-0.8.0 (Gem::LoadError) -``` - -This issue is related to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6224 - but not the same issue. They both result in chef not completing a run. The other one fails during bootstrapping. This one fails later.",3.0 -18247646,2019-02-14 20:02:58.050,In case of patroni demotion events we want a notification,"The patroni primary DB was demoted (https://gitlab.com/gitlab-com/gl-infra/production/issues/690, RCA: #6227) without being noticed. -We need to make sure we get notifications for patroni events like this.",3.0 -18246426,2019-02-14 18:44:41.503,Add welcome page to the public dashboards,"We get some security reports that https://dashboards.gitlab.com is publicly visible. It's also slightly confusing when you're dropped directly into the triage dashboard. - -We should add a welcome page to the public Grafana instance.",1.0 -18235635,2019-02-14 13:27:24.787,Bump `client-output-buffer-limit` for `redis` nodes,"From https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6228: - -> Set the hard limit to 4gb as having the same hard and soft limits defies the purpose of the latter. - -> Note that `chef-client` doesn't run `gitlab-ctl reconfigure` on `redis` nodes, and we can't run it ourselves as it will restart the service upon changing `redis.conf`. So updating the file manually and running `config set ` in a Redis console is our best approach. **Experiment in staging first!**",2.0 -18232490,2019-02-14 11:45:10.282,[RCA] Primary DB failover,"## Summary - -Postgres Primary restart or failover (https://gitlab.com/gitlab-com/gl-infra/production/issues/690). - -Service(s) affected : ~""Service:Postgres"" - -Team attribution : Infrastructure - -Minutes downtime or degradation : 2 - -## Impact & Metrics - -- What was the impact of the incident? - - increased error rates during failover, maybe slightly decreased db performance after incident because of missing table stats -- Who was impacted by this incident? - - external customers -- How did the incident impact customers? - - some customers might have seen error responses during failover -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. 
- -GitLab Triage dashboard during incident: -https://dashboards.gitlab.net/d/RZmbBr7mk/gitlab-triage?orgId=1&from=1550091600000&to=1550092800000 - - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - We got a short `IncreasedErrorRates` alert which wasn't associated with any DB problems. The incident then was mistaken as just missing table stats when we started to get ""TooManyDeadTuples"" alerts. A closer look into the DB logs on next day revealed that the primary DB did restart or failover. -- Did alarming work as expected? - - We got alerted for higher error rates and dead tuples, so we knew that something was going on but we didn't get a notification for a DB failover and thus missed detecting the real cause until investigation on next day. -- How long did it take from the start of the incident to its detection? - - We detected higher error rates immediately. Detecting the cause took 14h. -- How long did it take from detection to remediation? - - The DB cluster autonomously was up again within 2 minutes. -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team member wasn't page-able, ...) - -## Timeline - -2019-02-13 - -- 21:05 UTC - patroni04 fails to connect to consul -- 21:05 UTC - Postgres restarts on the patroni04 -- 21:06 UTC - Increased Error Rates Alert (https://gitlab.pagerduty.com/incidents/PO854MA) -- 21:11 UTC - Increased Error Rates Alert resolves -- 21:48 UTC - Alert received `PostgreSQL_TooManyDeadTuples` on `ci_builds` -- 22:28 UTC - alert received `PostgreSQL_TooManyDeadTuples` on `ci_job_artifacts` -- 22:53 UTC - alert received `PostgreSQL_TooManyDeadTuples` on `ci_pipelines` - -2019-02-14 - -- 01:33 UTC - alert received `PostgreSQL_TooManyDeadTuples` on `ci_stages` -- (more similar alerts) -- 10:24 UTC - noticed postgres restart has taken place at around 21.06pm UTC yesterday (spike in the memory graphs :trolleybus:) @abrandl -- 10:45 UTC - killed all ongoing `VACUUM ANALYZE` processes for two tables @abrandl -- 10:47 UTC - started `ANALYZE VERBOSE` on the full database, see https://gitlab.com/gitlab-com/gl-infra/production/issues/690#note_141280031 @abrandl -- 11:25 UTC - `ANALYZE VERBOSE` finished, statistics are back @abrandl -- 11:30 UTC - While our own investigation is going on, I asked OnGres to look as well (https://gitlab.slack.com/archives/CBCRJDSBY/p1550143723055000) @abrandl - - -## Root Cause Analysis - -- Patroni demoted the primary postgres DB. -- Because it couldn't connect to the DCS (Consul). 
-- Because it got exceptions: - - `HTTPConnectionPool(host='127.0.0.1', port=8500): Max retries exceeded with url: /v1/kv/service/pg-ha-cluster/?recurse=1` - - `2019-02-13_21:05:23 patroni-04-db-gprd patroni[2192]: 2019-02-13 21:05:23,548 ERROR: Error communicating with DCS` - - `2019-02-13 21:05:47,877 INFO: demoted self because DCS is not accessible and i was a leader` - - `2019-02-13 21:05:50,524 INFO: promoted self to leader by acquiring session lock` -- Because patroni04 had network issues (maybe related to dmesg log showing TCP related kernel stack traces) - -## What went well - -- patroni failover worked very well within seconds -- we got alerts for higher error rates and dead tuples -- @yguo and @cshobe worked together to restore table stats -- @abrandl noticed the demotion - -## What can be improved - -- alerting for patroni demotion events -- missing table stats should lead to the conclusion that a db failover could have happened -- db ANALYZE procedures in case of DB failover should be better known now or be automated (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5841) -- monitoring kernel stack dumps might help to detect issues with networking - -## Corrective actions - -* [ ] Automatically run `ANALYZE` on failover (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5841) - - assignee: TBD - - due: TBD -* [ ] Make sure we get notifications for Patroni restart/failover (even if the role does not change like here!) (#6232) - - assignee: TBD - - due: TBD -* [ ] Investigate cause of primary demotion (#6233) - - assignee: TBD - - due: TBD -* [ ] Consider monitoring of kernel stack trace messages in dmesg log - - assignee: TBD - - due: TBD",2.0 -18221698,2019-02-14 01:41:34.322,Chef Runs broken while bootstrapping,"Chef runs are broken on the DR database servers. Machines won't bootstrap. - -It appears to be because of this file missing `/opt/prometheus/node_exporter/metrics/chef-client.prom` - -The error is: - -``` -Feb 13 23:12:29 patroni-02-db-dr startup-script: INFO startup-script: [2019-02-13T23:12:29+00:00] ERROR: Running exception handlers -Feb 13 23:12:29 patroni-02-db-dr startup-script: INFO startup-script: [2019-02-13T23:12:29+00:00] ERROR: PrometheusHandler: # -Feb 13 23:12:29 patroni-02-db-dr startup-script: INFO startup-script: [2019-02-13T23:12:29+00:00] ERROR: Exception handlers complete -Feb 13 23:12:29 patroni-02-db-dr startup-script: INFO startup-script: [2019-02-13T23:12:29+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out -Feb 13 23:12:29 patroni-02-db-dr startup-script: INFO startup-script: [2019-02-13T23:12:29+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report -Feb 13 23:12:29 patroni-02-db-dr startup-script: INFO startup-script: [2019-02-13T23:12:29+00:00] ERROR: undefined method `missing_extensions?' for nil:NilClass -``` - -This has been very difficult to debug since it happens before the users and ssh keys are set up, so `/var/chef/cache/chef-stacktrace.out` is unavailable. Logging in via the GCP console is not working, and the serial port presents with a login prompt which is useless without credentials. - -This worked when the machine was initially set up, using the same chef-module versions. 
Now after re-creating the instance, I can't get any combination of versions to result in a successful initial chef run.",3.0 -18219976,2019-02-13 22:53:07.776,Gather rough estimation of non http traffic for cloudflare estimate,"Making an issue to make notes on collection of this estimate: - -Thank you for your response. I apologize if my email wasn't clear on the specific Spectrum (non-HTTP/S) questions from the PM. What he is looking to obtain: -1. How much of the 360TB/mo will be routed through non-standard HTTP/S ports? -2. How many concurrent SSH connections do you have today? - -Plan for 1: -Get an ""average"" from queries like: -increase(haproxy_frontend_bytes_out_total{fqdn=~"".*altssh.*.lb.*"", job=~""haproxy"", frontend!=""stats""}[24h] offset 1d) - -Plan for 2: -haproxy_backend_current_sessions{fqdn=~"".*altssh.*.lb.*"", job=""haproxy"", backend!~""stats.*""}",1.0 -18215745,2019-02-13 18:51:14.604,Create and version in source control the first Packer image template,"Determine the correct place for the repository, new or existing, and add [the template](https://www.packer.io/intro/getting-started/build-image.html#the-template) with configuration for the [Google Compute](https://www.packer.io/docs/builders/googlecompute.html) builder.",3.0 -18215555,2019-02-13 18:40:19.361,Design Document for the process of restoring Database from GCE disk snapshots,"Rather than wait for ZFS database snapshots, and because we're not relying on snapshot quality at the level necessary for disaster recovery or customer facing functionality, we're going to automate the extraction of GCE disk snapshots from the production environment for use in our testing environment(s). - -The design should outline the process with technical detail. It is not limited to, but must pay attention to ways we will: - -- [ ] protect customer data by treating it as production data -- [ ] scrub data and do not introduce any personally identifiable information (PII) or intellectual property (IP) into the testing environment -- [ ] ensure the proper safeguards are in place, potentially even moving the process of scrubbing data to it's own isolated environment (to avoid accidentally manipulating production data) -- [ ] scale down data for functional use",3.0 -18202713,2019-02-13 12:28:47.547,Monitor packagecloud DB backups,"packages.gitlab.com was running out of disk space again because db backups took all the space. -Some of the backups seem to fail and then leave their temporary data on disk (ca. 600 GB for each backup). - -We should monitor for backup failures and automatically cleanup data from failed backups. - -If that isn't enough we can further trim down the local retention interval (as done in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5820, via https://gitlab.com/gitlab-cookbooks/gitlab-packagecloud/merge_requests/10). - -Longterm solution would be to migrate from MySQL to RDS (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5222).",3.0 -18180568,2019-02-12 19:04:59.141,Configure tests for cookbook-license-gitlab-com,"As a pre-requisite step for #5817, we need to enable basic `chefspec` and `inspec` test configuration so that we can validate the changes before deploying the updated cookbook. - -1. [x] Configure `chefspec` (existing tests) -1. [x] Configure `test-kitchen` -1. 
[ ] Validate `kitchen converge` (existing/default `inspec` tests) - -/cc @sdval",3.0 -18154582,2019-02-12 00:43:24.614,CI/CD handoff - switch alerting to SRE rotation,"TBD based on feedback from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6203 - -The main action will be to change the alerting template for alerts related to ci-cd to not @-mention cicd-ops. -We'll still post to #ci-cd-alerts without the mention. - -Second, to follow our existing format - we're going to change the channel to #alerts-ci-cd as all of our other alerts channels are prefixed that way. - -cc @ahanselka for noting when we will do those changes.",2.0 -18154514,2019-02-12 00:36:16.310,"CD/D Readiness Review Part 4 - Performance, Monitoring and Alerting.","Part 4 of CI/CD to SRE. - -## Performance - -- [ ] Explain what validation was done following GitLab's [performance guidlines](https://docs.gitlab.com/ce/development/performance.html) please explain or link to the results below - * [Query Performer](https://docs.gitlab.com/ce/development/query_recorder.html) - * [Sherlock](https://docs.gitlab.com/ce/development/profiling.html#sherlock) - * [Request Profiling](https://docs.gitlab.com/ce/administration/monitoring/performance/request_profiling.html) -- [ ] Are there any potential performance impacts on the database when this feature is enabled at GitLab.com scale? -- [ ] Are there any throttling limits imposed by this feature? If so how are they managed? -- [ ] If there are throttling limits, what is the customer experience of hitting a limit? -- [ ] For all dependencies external and internal to the application, are there retry and back-off strategies for them? -- [ ] Does the feature account for brief spikes in traffic, at least 2x above the expected TPS? - - -## Monitoring and Alerts - -- [ ] Is the service logging in JSON format and are logs forwarded to logstash? -- [ ] Is the service reporting metrics to Prometheus? -- [ ] How is the end-to-end customer experience measured? -- [ ] Do we have a target SLA in place for this service? -- [ ] Do we know what the indicators (SLI) are that map to the target SLA? -- [ ] Do we have alerts that are triggered when the SLI's (and thus the SLA) are not met? -- [ ] Do we have troubleshooting runbooks linked to these alerts? -- [ ] What are the thresholds for tweeting or issuing an official customer notification for an outage related to this feature? - -## Responsibility - -- [ ] Which individuals are the subject matter experts and know the most about this feature? -- [ ] Which team or set of individuals will take responsibility for the reliability of the feature once it is in production? -- [ ] Is someone from the team who built the feature on call for the launch? If not, why not? - - -## Testing - -- [ ] Describe the load test plan used for this feature. What breaking points were validated? -- [ ] For the component failures that were theorized for this feature, were they tested? If so include the results of these failure tests. -- [ ] Give a brief overview of what tests are run automatically in GitLab's CI/CD pipeline for this feature? - -Existing docs to use as reference: -- Notes from recent summary meeting - https://docs.google.com/document/d/1diT_Dt8oE2kls09WZg6YU3twuQMM6u6NmseCsB-IlXg/edit -- Diagrams, etc - https://docs.google.com/document/d/1WYmN5oukY3DK2hPFLPkxwnuyfxES8nNPeDLMTN_KhVM/edit - -Acceptance Criteria: -- [ ] Perform the monitoring and alerting analysis above. 
Gather notes and link to the service catalog: https://gitlab.com/gitlab-com/runbooks/blob/master/services/service-catalog.yml -Child issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6199 -- [ ] In this case, make no changes to actual alerting infra, just prep up a Design MR for the new routing of alerts to be done once we have updated all of the SRE team with the proper information, links to runbooks, etc. - -Child issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6199",2.0 -18154479,2019-02-12 00:30:33.379,"CI/CD Readiness Review Part 3 - DB, backup & restore, Security","Part 3 of CI/CD to SRE - -## Database - -- [x] If we use a database, is the data structure verified and vetted by the database team? -- [x] Do we have an approximate growth rate of the stored data (for capacity planning)? -- [x] Can we age data and delete data of a certain age? - -## Security - -- [ ] Were the [gitlab security development guidelines](https://about.gitlab.com/security/#gitlab-development-guidelines) followed for this feature? -- [ ] If this feature requires new infrastructure, will it be updated regularly with OS updates? -- [ ] Has effort been made to obscure or elide sensitive customer data in logging? -- [ ] Is any potentially sensitive user-provided data persisted? If so is this data encrypted at rest? - -## Backup and Restore - -- [x] Outside of existing backups, are there any other customer data that needs to be backed up for this product feature? -- [x] Are backups monitored? -- [x] Was a restore from backup tested? - - -Existing docs to use as reference: -- Notes from recent summary meeting - https://docs.google.com/document/d/1diT_Dt8oE2kls09WZg6YU3twuQMM6u6NmseCsB-IlXg/edit -- Diagrams, etc - https://docs.google.com/document/d/1WYmN5oukY3DK2hPFLPkxwnuyfxES8nNPeDLMTN_KhVM/edit - -Acceptance Criteria: -- [ ] Perform the Summary and architecture analysis above. Gather notes and link to the service catalog: https://gitlab.com/gitlab-com/runbooks/blob/master/services/service-catalog.yml - -Child issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6199",2.0 -18154434,2019-02-12 00:24:49.011,CI/CD Readiness Review Part 2 - Operational Risk Assessment,"Part 2 of CI/CD SRE handoff - -## Operational Risk Assessment - -- [ ] What are the potential scalability or performance issues that may result with this change? -- [ ] List the external and internal dependencies to the application (ex: redis, postgres, etc) for this feature and how the it will be impacted by a failure of that dependency. -- [ ] Were there any features cut or compromises made to make the feature launch? -- [ ] List the top three operational risks when this feature goes live. -- [ ] What are a few operational concerns that will not be present at launch, but may be a concern later? -- [ ] Can the new product feature be safely rolled back once it is live, can it be disabled using a feature flag? -- [ ] Document every way the customer will interact with this new feature and how customers will be impacted by a failure of each interaction. -- [ ] As a thought experiment, think of worst-case failure scenarios for this product feature, how can the blast-radius of the failure be isolated? 
- -Existing docs to use as reference: -* Notes from recent summary meeting - https://docs.google.com/document/d/1diT_Dt8oE2kls09WZg6YU3twuQMM6u6NmseCsB-IlXg/edit -* Diagrams, etc - https://docs.google.com/document/d/1WYmN5oukY3DK2hPFLPkxwnuyfxES8nNPeDLMTN_KhVM/edit - -Acceptance Criteria: -- [ ] Perform the risk assessment above. Gather notes and add to the service catalog: https://gitlab.com/gitlab-com/runbooks/blob/master/services/service-catalog.yml - -Child issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6199",2.0 -18154389,2019-02-12 00:18:36.252,CI/CD Readiness Review Part 1 - Summary and Architecture,"Part 1 of the CI/CD Operational work. - -## Summary - -- [ ] Provide a high level summary of the CI/CD feature sets -- [ ] What metrics, including business metrics, should be monitored to ensure will this feature launch will be a success? - -## Architecture - -- [ ] Add architecture diagrams to this issue of feature components and how they interact with existing GitLab components. Include internal dependencies, ports, security policies, etc. -- [ ] For each component and dependency, what is the blast radius of failures? Is there anything in the feature design that will reduce this risk? -- [ ] Where applicable, explain how we scale and any potential single points of failure in the design. - -Existing docs to use as reference: -* Notes from recent summary meeting - https://docs.google.com/document/d/1diT_Dt8oE2kls09WZg6YU3twuQMM6u6NmseCsB-IlXg/edit -* Diagrams, etc - https://docs.google.com/document/d/1WYmN5oukY3DK2hPFLPkxwnuyfxES8nNPeDLMTN_KhVM/edit - -Acceptance Criteria: -- [ ] Perform the Summary and architecture analysis above. Gather notes and link to the service catalog: https://gitlab.com/gitlab-com/runbooks/blob/master/services/service-catalog.yml - -Child issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6199",6.0 -18139872,2019-02-11 14:41:07.880,Database Reviews,"* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24743 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24986#note_139344877 along with https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9472 -* [x] @abrandl https://dev.gitlab.org/gitlab/gitlab-ee/merge_requests/776#note_151375 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22743 -* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9445 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25299#note_142327479 -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/issues/57387#note_142682602 -* [x] @yguo https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25293#note_142316006 -* [ ] https://gitlab.com/gitlab-org/gitlab-ee/issues/5348#note_142649706 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25230#note_142474150 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25278#note_142134862 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/issues/57663#note_141096543 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21197#note_139607418 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25107#note_140670702 (Nik: checking it on a ""restore"" box) -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25182#note_143426559 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25432#note_143741434 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/issues/57284#note_143632158 -* [x] @NikolayS 
https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25241#note_143637255 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7361#note_143467417 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9615#note_143467165",2.0 -18123368,2019-02-11 07:38:40.088,Service Catalog - Dashboard,Tracking the work of building a dashboard for the service catalog via this issue.,4.0 -18083211,2019-02-08 20:00:25.927,Add a GKE cluster to gitlab-org/design.gitlab.com,"# Why/What - -We want to deploy `design.gitlab.com` using Auto DevOps (https://gitlab.com/gitlab-org/design.gitlab.com/issues/96). In order to do that we need a Kubernetes cluster associated with this project. - -Probably this GKE cluster should live in our production GCP project but it could just as well live in a separate GCP project if that's preferable. - -During the setup and probably for regular maintenance and troubleshooting we will need to provide some level of access to the GKE cluster for members of the Configure team (at least @DylanGriffith and @tauriedavis) as dogfooding Auto DevOps is one of our long term objectives. Also we assume that the production team will not want to be fully responsible for keeping this system up and running as it's not production critical infrastructure. - -# How - -1. Decide which GCP project to use -1. Somebody with permission to create GKE clusters in this GCP project will need to add a cluster to the https://gitlab-org/design.gitlab.com project (`Operations > Kubernetes > Add Kubernetes cluster > Sign in with Google`) -1. Name the cluster `design-system` -1. Leave the default settings for everything else and click create",5.0 -18056896,2019-02-07 23:22:42.843,tflint is not configured property for CI/CD,"Duplicated issue opened on ops instance [here](https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/issues/1) - -Currently on stage `tf_validate` we have: - -``` -/bin/sh -e -c 'for d in $1; do echo ""Checking $d for $0"" && /terraform validate $2 ""$d"" && tflint --error-with-issues $2; done' ""$env"" ""$dirs_to_check"" ""$tf_opts"" -``` - -`$2` is `""$tf_opts""`, which evaluates `-check-variables=false` (which is not a `tflint` option as far as I can tell). This translates then in practice to running `tflint --error-with-issues` from the repo's root directory. If you try that from your local copy, you'll see it passes. But if you try it from, say `environments/gstg` you'll see errors. We should adjust this, since it's not actually doing any checks now (it seems). - -/cc @alejandro",3.0 -18050312,2019-02-07 17:35:27.477,Slack access for a Core Team member,"While we work through the Slack access [issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6110) in general for all Core Team members, could we provided new Core Team member Ben Bodenmiller access to the following Slack channels? Ben has already signed the NDA. - -* core -* contribute2019 -* development -* gdk -* mr-coaching -* release-post -* security - -Please let me know what you need from me. I can send Ben's email address via DM on Slack. - -cc/ @Mowry",1.0 -18047015,2019-02-07 15:36:45.471,Execute maintenance on GitLab.com,"Execute repacking maintenance on GitLab.com for indexes and selected tables (see below). - -For selected tables, we also want to execute table maintenance. Those tables are the outliers based on the analysis in https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/18277. 
- -Those tables are: -* `services` -* `snippets` - -The maintenance should be rolled out iteratively (with increased risk/impact): - -1. Repack normal index -1. Repack primary key -1. Repack full table - -#### User creation - -* Create specific user for repacking -* Set `idle_in_transaction_timeout`",8.0 -18046840,2019-02-07 15:32:14.809,Implement automation for pg_repack,"Executing pg_repack requires some automation, e.g. to clean up in the event of a failures. We will wrap this into a small command line tool (ruby) which can later be moved to a GitLab rake task and shipped with the product.",5.0 -18046752,2019-02-07 15:30:42.621,Include pg_repack with omnibus,"In order to ship pg_repack with the product, we'll have to include it in omnibus. This is to track the change and have pg_repack available on standard installs.",1.0 -18046726,2019-02-07 15:29:57.418,Deploy pg_repack to GitLab.com,"This is about installing and deploying pg_repack to GitLab.com. Since this is currently not based on omnibus, we can just install this through chef.",2.0 -18046673,2019-02-07 15:28:49.988,Monitor bloat estimate,"We need to measure bloat in the database to remove the need to manually measure it. Bloat can only be measured accurately on a one-off database and is a offline and rather expensive operation. Hence, we aim to estimate bloat with standard methods and push this information to prometheus. - -Ideally, the bloat estimate should be available in prometheus for -* individual indexes, -* individual tables (heap + toast), -* overall index bloat (can be derived) and -* overall table bloat (can be derived). - -It's ok if the bloat estimate only gets updated a few times a day, if a high frequency of measurements is not feasible.",0.0 -18020114,2019-02-06 21:40:45.556,Set up Geo database replication for DR,"Working database replication is a prerequisite for enabling geo. The DR site does not use omnibus bundled Postgres, so we need to get the replication set up manually and using Chef. - -Currently, the DR nodes are running with unconfigured databases: -```shell -$ ssh devin-db@console-01-sv-dr.c.gitlab-dr.internal -Starting console, please wait ... -psql: FATAL: no pg_hba.conf entry for host ""10.251.16.2"", user ""gitlab"", database ""gitlabhq_production"", SSL on -FATAL: no pg_hba.conf entry for host ""10.251.16.2"", user ""gitlab"", database ""gitlabhq_production"", SSL off -Secondary check for db host patroni-03-db-dr.c.gitlab-dr.internal failed! -Connection to console-01-sv-dr.c.gitlab-dr.internal closed. -``` - -The DR database nodes are these: -```shell -$ knife node list | grep patroni | grep gitlab-dr -patroni-01-db-dr.c.gitlab-dr.internal -patroni-02-db-dr.c.gitlab-dr.internal -patroni-03-db-dr.c.gitlab-dr.internal -``` - -The next step is to load the data which was dumped from production into the DR cluster. We want to do this at first without actually making a connection to production. This will reduce the risk. Ideally, we should be able to load the database up to a certain time, and then switch on replication to catch up the difference, rather than turning on replication and having it pull the entire database from production directly. - -Currently the DR database is minimally configured by terraform and chef. 
If something goes wrong, it is trivial to delete the 3 nodes and re-create them from scratch.",3.0 -18017513,2019-02-06 20:05:11.015,Automate Ansible runs for Gitter environment,"Currently, we have to manually initiate Ansible runs in the Gitter environment to effect changes, such as [deploying SSH keys for newhires](gitlab-com/gl-infra/infrastructure#6171). We should at a minimum setup a service account to perform automated runs of `ansible-playbook` within a pipeline on gitlab-com/gl-infra/gitter-infrastructure>. - -Other options possibly worth (future) discussion could include scheduled ansible runs, scheduled auto-scaling to reap older nodes and (re-)provision new ones, or even migration to an architecture based on immutable infrastructure (docker/k8s) - -/cc @MadLittleMods @andrewn @sdval",2.0 -18015299,2019-02-06 18:22:38.564,Deploy SSH key for Michal and Cameron,Apply changes from gitlab-com/gl-infra/gitter-infrastructure!96 and gitlab-com/gl-infra/gitter-infrastructure!97,1.0 -18006307,2019-02-06 12:54:09.595,get familiar with omnibus setup for postgresql and repmgr,"Please get more familiar with the setup for postgresql and rep mgr, looking forward at the integration of Patroni on the product. - -Omnibus PG : https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/config/software/postgresql.rb - -Marin: Docs for HA https://docs.gitlab.com/ee/administration/high_availability/database.html#doc-nav",3.0 -17980799,2019-02-05 16:34:14.230,Transfer tanuki.cloud to GitLab,"`tanuki.cloud` is a domain I registered in my early days @ GitLab in 2017 as we had a need for a domain to point to the Solution Architect AWS account. I realized this is still on my personal account, and I'd rather transfer ownership to the company as the domain is GitLab's. - -Let me know what other information you need from me to get started.",2.0 -17979155,2019-02-05 15:40:12.748,Consider better alerting for HTTP502,"During: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6147 - -We only received alerts, not pages that something was wrong. Re-evaluate our current rule set to see what can be done to ensure if there is a problem we are being paged.",1.0 -17973844,2019-02-05 12:50:02.760,Increase in support requests due to `repository_read_only`,"There's been a recently influx of support requests for projects that are showing up as read only for users of GitLab. - -Use this issue to figure out the overall impact to GitLab.com and investigate a way to figure out what might have flipped these projects into a read only state. - -## Related -* https://gitlab.com/gitlab-com/support/dotcom/dotcom-escalations/issues/47 -* https://gitlab.com/gitlab-org/gitlab-ce/issues/57263 - -## Other notes -* Initial cause was pointed at work being done here: https://gitlab.com/gitlab-com/gl-infra/production/issues/664 - * However, this work did not encompass any project from the above mentioned issues - * I have logs for all projects I've touched during this work - * The above projects are not in the logs, and are located on server to which the above maintenance is NOT being performed on",1.0 -17949980,2019-02-04 21:40:06.607,Gemnasium service backup - restore test leaves db behind,"Following https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4218, we found that the clean up of restored DBs doesn't work. -(see also https://gitlab.slack.com/archives/C8S0HHM44/p1549315690766800). 
- -The initial clean (`destroy` function) doesn't look for remainings `production-restore-*` DBs, just for a DB with the current date (https://gitlab.com/gitlab-org/security-products/gemnasium/gcp-config/blob/cd7a218b7c951632152d40a5952dcdd9232d820f/bin/restore-test.sh#L127). -This script should be updated to list all `production-restore-*` DBs, and iterate on them to destroy each one. - -/cc @skarbek",2.0 -17942420,2019-02-04 15:53:03.364,RCA: 2019-02-01 HTTP502's,"## Summary - -A brief summary of what happened. Try to make it as executive-friendly as possible. - -* Service(s) affected: **GitLab.com** -* Team attribution: ~""team:infrastructure"" -* Minutes downtime or degradation: **3 hours 50 minutes** - -As part of a maintenance task https://gitlab.com/gitlab-com/gl-infra/production/issues/664, a script was running to slowly move older larger repos to a different file server. Jobs that were being scheduled were failing due to timeouts and then retrying. This led to excessive work being scheduled on the file server that was targeted at that time for maintenance and slowly ramped up the IO load on that file server. This high IO translated into Gitaly timeout for requests going to it, leading to many HTTP502's for customers. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? **slow performance and HTTP 502 responses from GitLab** -- Who was impacted by this incident? **Everyone** - -Include any additional metrics that are of relevance. - -https://gitlab.com/gitlab-com/gl-infra/production/issues/674 - -![image](/uploads/ea3fcf458e7ea632ff7a1bf985bd5749/image.png) -https://dashboards.gitlab.net/d/RZmbBr7mk/gitlab-triage?orgId=1&from=1549028551843&to=1549037954496 - -![image](/uploads/9b83d3d0a41ba69125455e107df650b1/image.png) -https://dashboards.gitlab.net/d/000000204/gitaly-metrics-per-host?orgId=1&var-fqdn=file-23-stor-gprd.c.gitlab-production.internal&var-job=gitaly&from=1549028637790&to=1549050237790 - -Another fun set of charts here: https://dashboards.gitlab.net/d/WOtyonOiz/general-triage-service?orgId=1&var-prometheus_ds=Global&var-environment=gprd&var-type=gitaly&var-sigma=2&var-component_availability=All&var-component_ops=All&var-component_apdex=All&var-component_errors=All&from=1549026945355&to=1549041345355 - -## Detection & Response - -- How was the incident detected? **Customer reports** and our beta monitoring - -- Did alarming work as expected? **Not entirely** - -While we did get alerts for high rates of 500's, this alerting didn't come through until it was too late and the root cause was able to strengthen it's leverage before anything was done. We did not receive any pages. - -- How long did it take from the start of the incident to its detection? **+1 Hour** -- How long did it take from detection to remediation? 
**+2 Hours** - -## Timeline - -2019-02-01 - -- 12:53 UTC - maintenance on fileserver 23 had begun - command executed: `gitlab-rails runner /tmp/storage_rebalance.rb --current-file-server nfs-file23 --target-file-server nfs-file27 --move-amount 2560 --dry-run false --wait 300 | tee nfs-file23.log` -- 15:02 UTC - first page regarding high 5xx counts - https://gitlab.slack.com/archives/C12RCNXK5/p1549033324163300 -- 15:03 UTC - first alert regarding Gitaly Error counts on fileserver 23 - https://gitlab.slack.com/archives/C12RCNXK5/p1549033383163500 -- 15:30 UTC - oncall engineer reaches out to the engineer performing the maintenance -- 15:31 UTC - maintenance operation was halted to proceed further -- 15:34 UTC - estimated 31 projects were in-flight during maintenance, any process related to this maintenance work was `renice`'d to lower the IO impact on the file server -- 15:38 UTC - fileserver 23 started to show signs of recovery as `git` processes were completing -- 15:58 UTC - fileserver 23 reached okay SLO since degradation -- 16:41 UTC - fileserver 23 remained above the SLO - -## Root Cause Analysis - -A seemingly routine maintenance item, which had worked for the past entire week without incident resulted in a failure. A script that would routinely query for projects to be moved was chugging along without taking into account the amount of active jobs nor failures and retries of existing jobs. These jobs were responsible for telling a file server to move data from one server to another. Some jobs took a long time and they were marked as a failure due to timeout, despite the underlying mechanisms still running on the file server. When the job was retried, it spun up another process on that file server. One project that should have been moved, would have resulted in 3 total operations on the same project. During the time in which maintenance stopped, it was discovered that 31 projects were scheduled to move off of this file server, and all of them were continuing to process data on the repository before completing the move. - -This process of moving data invokes the `git upload-pack` command which is very IO intensive. For large repositories this can take awhile. The script created for this maintenance was built only to schedule the work and allow sidekiq to schedule the work. It is not visible to the maintenance script nor the operator to the extent of progression or failure rate occurring specifically due to these repo migrations. - -## What went well - -* The maintenance operator was able to quickly gather a list of processes tied to project moves and `renice` them to help the server along. - * We chose not to kill any running processes in order to prevent data corruption -* The script was designed to prevent overflowing the queue and needing to do manual intervention of future work that would've been queued. - -## Questions -* Is there a potential network bottleneck? -* Can we develop a better understanding of how our Gitaly SLO's impact overall webserver performance? - -## What can be improved - -* No one was notified. This situation was not treated as an incident when it should have been. Our support team and our users were lacking any status information. 
-* SRE's need a better understanding of the impact for project migrations, this was a new failure scenario that wasn't run into previously -* Documentation for how project moves works doesn't discuss what mechanisms are utilized to perform moves, learning that `git upload-pack` was running multiple times on the file servers was new information -* Visibility into this process overall is highly limited - * This encompasses logging - * Viewing sidekiq metrics -* We need to discuss better ways of garnering the attention of the correct parties - * https://gitlab.slack.com/archives/C101F3796/p1549034830974100 - * https://gitlab.slack.com/archives/C101F3796/p1549042978985800 - * If members think there might be an incident, we need to ensure we mark the correct urgency on potential issues. - -## Corrective actions - -- Improvements should be made to the script that was performing these project repo migrations - https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6157 -- Better handling of failure scenarios inside of GitLab - https://gitlab.com/gitlab-org/gitlab-ee/issues/9563 -- Improve this processes visibility in GitLab - https://gitlab.com/gitlab-org/gitlab-ee/issues/9534 -- Improve Gitaly process monitoring/niceness - https://gitlab.com/gitlab-org/gitlab-ee/issues/9606 -- Alert Improvements - https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6158 -- Communication Improvements - https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6160 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys) -",3.0 -17907786,2019-02-02 16:18:04.351,Missing production_json logs from Kibana,"It seems we have a problem ingesting particular logs from production_json.log from api nodes. 
- -Here we have 4: - -![graphql-logs](/uploads/7f1f61fe9942ce1d2c9ba7a8f69e1650/graphql-logs.png) - -but locally we can see there are more: - -``` -$ knife ssh roles:gprd-base-fe-api ""sudo zcat /var/log/gitlab/gitlab-rails/production_json.log.2.gz | jq -c 'select(.status == 500) | {time: .time, c: .controller}'"" -api-01-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:46:52.933Z"",""c"":""GraphqlController""} -api-01-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:47:08.689Z"",""c"":""GraphqlController""} -api-01-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:48:57.082Z"",""c"":""GraphqlController""} -api-01-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:49:03.639Z"",""c"":""GraphqlController""} -api-18-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:46:59.982Z"",""c"":""GraphqlController""} -api-18-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:47:01.723Z"",""c"":""GraphqlController""} -api-18-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:47:10.095Z"",""c"":""GraphqlController""} -api-18-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:47:50.679Z"",""c"":""GraphqlController""} -api-13-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:48:01.023Z"",""c"":""GraphqlController""} -api-13-sv-gprd.c.gitlab-production.internal {""time"":""2019-02-02T13:48:30.589Z"",""c"":""GraphqlController""} -[snip] -$ knife ssh roles:gprd-base-fe-api ""sudo zcat /var/log/gitlab/gitlab-rails/production_json.log.2.gz | jq -c 'select(.status == 500) | {time: .time, c: .controller}'"" | wc -l -193 -``` -```",2.0 -17889787,2019-02-01 15:11:52.916,Repositories with inconsistent move sizes,"In this issue, we are moving data from nfs-file21 over to nfs-file25. Two projects have data size inconsistencies between the two servers. For both repos (just the data not the wiki's), the size on disk is larger on the location we are migrating too. I'm curious if we can figure out potential corrupted data, and how to potentially coalesce anything if necessary. At this moment in time, the database still points the storage of these repos to the old server `nfs-file21`. These repos are currently marked as read only. The goal for this issue, would be to remove what was moved to nfs-file25 (if safe), and mark the repositories as writable again. - -## Projects in question: -* [ ] 9413811 -* [ ] 9495429 - -## Reference Material -* https://gitlab.com/gitlab-com/gl-infra/production/issues/664#note_137118144 - -/cc @dawsmith",2.0 -17871540,2019-02-01 00:25:43.248,Most requests in staging time out,"Staging is throwing 502 errors on most web requests. 
- -Unicorn processes are being killed: - -![Screen_Shot_2019-01-31_at_2.12.39_PM](/uploads/d1ba8db3ae88c656633fd2ce618e32ea/Screen_Shot_2019-01-31_at_2.12.39_PM.png) - -Workhors is throwing 502 errors: - -![Screen_Shot_2019-01-31_at_2.13.49_PM](/uploads/2532778d4f74013ea1d3e64aa75e07ef/Screen_Shot_2019-01-31_at_2.13.49_PM.png) - -The problem seems to be database related: - -![Screen_Shot_2019-01-31_at_2.16.00_PM](/uploads/64b698d6b25a8918d5d9588aed56e3c6/Screen_Shot_2019-01-31_at_2.16.00_PM.png) - -Queries are timing out: - -![Screen_Shot_2019-01-31_at_2.21.06_PM](/uploads/46401efc3a6ff942f5bb29d19428e9d5/Screen_Shot_2019-01-31_at_2.21.06_PM.png) - -And transactions are very high on patroni-06: - -![Screen_Shot_2019-01-31_at_2.23.44_PM](/uploads/b4cfaf17fc9d83b3524c9045531332eb/Screen_Shot_2019-01-31_at_2.23.44_PM.png)",1.0 -17835293,2019-01-31 07:59:08.063,Increase Quota on `gitlab-restore` GCP Project,"This issue is to track the quota increase for CPU, Memory, and Disk resources for the `gitlab-restore` GCP project space.",1.0 -17822007,2019-01-30 20:04:28.169,[Design Document] Productionizing Consul,"Write the technical underpinnings that represent the guidance, design, and proposed implementation method for running a robust instance of Consul in production on GitLab.com. - -The working MR is: https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/20889",3.0 -17817891,2019-01-30 17:45:00.984,Blueprint for ZFS as file system for GitLab,"Write the technical underpinnings that represent the guidance, design, and proposed implementation method for bringing ZFS to GitLab.com.",4.0 -17814420,2019-01-30 15:31:30.266,[Design Document] Moving pgbouncer to a dedicated cluster,"We need a design doc for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5833, one is being propsed at https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/18844.",3.0 -17809742,2019-01-30 13:05:05.461,Investigate and Design Execution Plan for Migrating Chef server to GCP,"As outcome of #6066 we want to move the Chef server from Digital Ocean to GCP. -We should test restoring Chef from backups and switch from Ubuntu 14.04 to 18.04 and then consider if we also want to upgrade Chef to a newer version. - -Chef should run in a different region of the `gitlab-ops` project to be able to bootstrap GitLab.com in case other regions are down. - -1. [ ] Document, plan, and test backup & restore process to be used in production change for migration [#8028] -1. [ ] Identify/plan updates required to `chef-repo` pipelines and related workflow processes (cookbooks, roles, environments, data bags, vaults, ...others?) to support the new chef server [#8036] -1. [x] Provision infrastructure for new chef server(s) in `gitlab-ops` project [#8037] -1. 
[ ] Document and perform test-run(s) of the migration (no staging infrastructure available)",3.0 -17789492,2019-01-30 03:57:51.264,Blueprint for CI/CD handoff of oncall/operations to SRE,Placeholder issue for blueprint for CI/CD ownership in SRE team.,3.0 -17786862,2019-01-30 00:11:05.069,Start Blueprint for Vault for GitLab.com,Placeholder issue to attach the MR for the blueprint for defining our 1st plans for implementing hashicorp vault for GitLab.com secrets management.,4.0 -17779219,2019-01-29 17:54:30.555,Auto Fetch & Populate Certs,We should craft the needed glue to take advantage of the SSLMate API to fetch certs when they have been auto renewed and placed them in the appropriate vault.,3.0 -17775589,2019-01-29 16:32:17.501,Functional onboarding buddy,"Currently, a new hire is assigned an *onboarding buddy* but usually they're not in the same functional role as the new hire (on purpose). However, having somebody in the same functional role dedicated to your onboarding can help to speed up getting started. - -We have a lot of onboarding material and having a functional onboarding buddy is not meant to replace that. We want to stay async when onboarding. However, for certain topics, it does help to have somebody dedicated to point in the right direction or provide a short talk about a certain topic. At minimum, the functional onboarding buddy would aim to convey which topics are important to understand and provide starting points. - -This is an outcome from the DBRE sync meeting today, suggestion by @yguo and @cshobe .",1.0 -17769852,2019-01-29 15:22:01.126,gitlab.org certificate expired,Fix.,1.0 -17735529,2019-01-28 17:19:25.015,Update slack access for Core Team members,"Currently, the core team members have the same access to all GitLab Slack channels as employees. We want to restrict access to `#a_*` channels so that customer confidential information is not exposed to core team members. 
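If we go the script/Slack API route asked about below, a rough sketch of the core calls (the token, scopes, and core-team user IDs are placeholders; `conversations.list` and `conversations.kick` are the relevant Web API methods, and pagination plus error handling are omitted):

```shell
# Rough sketch only. Assumes SLACK_TOKEN is exported with the scopes needed for
# conversations.list and conversations.kick; user IDs below are placeholders.
CORE_TEAM_IDS="U00000001 U00000002"

# Collect the IDs of all channels whose names start with a_.
channel_ids=$(curl -s -H "Authorization: Bearer $SLACK_TOKEN" \
  'https://slack.com/api/conversations.list?types=public_channel,private_channel&limit=1000' \
  | jq -r '.channels[] | select(.name | startswith("a_")) | .id')

# Remove each core team member from each of those channels.
for channel in $channel_ids; do
  for user in $CORE_TEAM_IDS; do
    curl -s -X POST -H "Authorization: Bearer $SLACK_TOKEN" \
      -d "channel=$channel" -d "user=$user" \
      https://slack.com/api/conversations.kick
  done
done
```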
- -Could you implement this via script/Slack APIs?",3.0 -17727344,2019-01-28 14:49:48.569,Database Reviews,"* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9182 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8949 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24198 and https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9219#2cda0c6171fa7d04989507a1dd112e34c40df46d -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24377#note_135696943 ~""Community Contribution"" -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24458#note_135715085 ~""Community Contribution"" -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24599#note_134906250 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9283#note_136104503 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24743 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8988#note_136809963 (inherited from the previous weeks) -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9267 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24677#note_137415396 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24803#note_137093489 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9364#note_137035887 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24580#note_136674989 -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21197#note_136377797 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9283#note_136104503 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9332#note_136032973 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9334 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24881 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24677#note_137946482 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9398/diffs -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23596 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9240#6f917fe927dcf9c1ca3635688e5ef4dca229d9eb -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9420#note_138504910 together with https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24822 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24923#note_138262555 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9267 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24822 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22743#4df3c9d1a5b5d7ae199b2914f6b019ceefbd64ae",2.0 -17726319,2019-01-28 14:10:52.835,Use consistent naming for pgbouncer in Terraform,"There seems to be a bunch of inconsistent references to `pgbouncer` in the Terraform configs. - -* `pgbouncer` -* `pg-bouncer` -* `pgb` - -It would be good to consolidate this to just `pgbouncer` to make finding related code easier.",1.0 -17695289,2019-01-27 00:27:57.898,http://stats.gitlab.com shows no data,"http://stats.gitlab.com shows no data. 
If this is correct, we should decom this site.![Screen_Shot_2019-01-27_at_1.26.20_AM](/uploads/89fe454b891b5807bcbb42cbc5072075/Screen_Shot_2019-01-27_at_1.26.20_AM.png)",1.0 -17657121,2019-01-25 10:13:43.853,Increase web worker pool,"Based on peaks in our monitoring data, we're hitting the limit of how many web requests we can handle at any given time. - -![web-worker-saturation-2018-01-25](/uploads/30f738a8e46d89fb6cb238c568cac661/web-worker-saturation-2018-01-25.png) - -For reference we have peak-time CPU utilization of 65% and memory utilization of 45% - -![web-fleet-cpu-2018-01-25](/uploads/abc354f59459fc76bcbbf057cc49ae3b/web-fleet-cpu-2018-01-25.png) - -![web-fleet-memory-2018-01-25](/uploads/350f9e2afdacf3a7ab486a7f9aa3d341/web-fleet-memory-2018-01-25.png) - -We currently deploy 30 workers to 16 cpu nodes. I'm thinking a 10% increase to 36/node (20% more workers) would stay within acceptable CPU and memory utilization.",2.0 -17649529,2019-01-25 00:59:53.614,Centralize GCP bootstrap module,"As a first step to breaking out terraform modules into separate repos, we started using data sources to reference the bootstrap scripts via HTTPS to the files under https://gitlab.com/gitlab-com/gitlab-com-infrastructure/tree/master/scripts/google. This has two issues -- first, the primary source for the bootstrap files used to provision gitlab.com cannot be kept in gitlab.com, so they need to be hosted on ops.gitlab.net; second, due to permissions in the ops instance, they will no longer be available via anonymous HTTP(S) URLs. - -1. [x] Create a new bootstrap module that can be included in other modules and provide bootstrap/teardown script content by version -1. [x] Configure push mirroring from ops.gitlab.net -> gitlab.com for the new module -1. [x] Replace `data_source` resources in current modules with references to the new `bootstrap` submodule -1. [x] Clean up bootstrap scripts from `gitlab-com-infrastructure` repository",1.0 -17645996,2019-01-24 20:29:33.678,Terraform Runs are not clean in any environment,"(except DR) - -This is preventing us from being able to confidently run terraform. All members of the team should be able to run terraform for a given environment and only see the changes related to the issue they are working on. Right now there are a wide variety of changes, which leads to members of the team targeting only the work they are doing, and slowly we are building up a list of things that terraform wants to change. At some point, this is going to bring a bout of destruction that we should avoid. - -Utilize this issue to keep track of the work necessary to get all environments into a clean run.",5.0 -17632840,2019-01-24 13:28:29.407,Setup a Mailgun account for Meltano,"Related to https://gitlab.com/meltano/meltano/issues/1 - -We are in the process of adding authentication to Meltano and we need a SMTP service to send confirmation/recovery emails. - -/cc @jschatz1 @ahanselka",1.0 -17630848,2019-01-24 12:11:56.144,Create runbook for standing up chef server,"The chef server is needed to bootstrap GitLab.com. But if we loose chef (or all of GitLab), how can we stand it up from off-site backups (#6065) again? 
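At a high level, the restore path such a runbook would need to capture probably looks like the sketch below, assuming backups are produced with `chef-server-ctl backup` and copied off-site (the package version, bucket, and paths are placeholders):

```shell
# Very rough outline of standing up Chef on a fresh host from an off-site backup.
# 1. Install the same chef-server-core package version that produced the backup.
dpkg -i chef-server-core_12.17.33-1_amd64.deb
chef-server-ctl reconfigure

# 2. Fetch the most recent off-site backup tarball (bucket name is a placeholder).
gsutil cp gs://example-chef-backups/chef-backup-latest.tgz /tmp/

# 3. Restore the backup and reconfigure again so all services pick it up.
chef-server-ctl restore /tmp/chef-backup-latest.tgz
chef-server-ctl reconfigure

# 4. Sanity checks: services up, nodes and clients listed again.
chef-server-ctl status
knife node list   # from a workstation with a valid knife config
```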
We should write a runbook to have instructions on how to bootstrap chef from backups for disaster recovery or when moving to another cloud provider.",5.0 -17617820,2019-01-24 01:22:21.267,PrometheusUnreachable in DR environment,"Now that we can log into the prometheus servers in the DR environment, we should no longer be getting `PrometheusUnreachable` alerts from https://alerts.ops.gitlab.net/ - but that alert is still firing. - -We created a silence on it when the servers weren't running, but now that they're running, we should see why we're still seeing the alert.",1.0 -17617676,2019-01-24 01:04:53.000,Access error connecting to monitoring in DR environment,"We are getting errors when connecting to: - -- https://prometheus-app.dr.gitlab.net -- https://alerts.dr.gitlab.net -- https://prometheus.dr.gitlab.net - -![Captura_de_pantalla_2019-01-23_a_la_s__16.55.53](/uploads/f69a94ccb140e561c176ad108861a4b7/Captura_de_pantalla_2019-01-23_a_la_s__16.55.53.png)",1.0 -17612726,2019-01-23 19:36:51.880,Setup AWS service accounts for terraform CI,"# Overview - -While setting up the CI pipeline in gitlab-com/gl-infra/gitter-infrastructure!94 we started receiving permissions errors during `tf_plan` jobs. The pipeline had been previously setup to use the packer service account, and permissions were temporarily added to enable access to the remote state resources for planning. Going forward, we need to configure at least four other service accounts, or two accounts with two roles each: - -## Remote-state -1. [ ] Create terraform-remote-state IAM account -1. [ ] Add credentials to 1password -1. [ ] Add CI variables -1. [ ] Update `.gitlab-ci.yml` with credentials for `tf_*_remote_state` jobs -1. [ ] Attach terraform-remote-state IAM policy for `terraform plan` (Copy privileges under `DynamoDB`, `S3`, and `KMS` services from `Packer` policy) -1. [ ] Remove `DynamoDB`, `S3`, & `KMS` privileges from `Packer` IAM policy - -## Beta -1. [ ] Create terraform-beta IAM account -1. [ ] Add credentials to 1password -1. [ ] Add CI variables -1. [ ] Update `.gitlab-ci.yml` with `tf_*_beta` jobs and credentials -1. [ ] Create terraform-beta-ro [IAM role](https://www.terraform.io/docs/providers/aws/#assume-role) (read-only, for `terraform plan`), pass role ARN by variable -1. [ ] Create terraform-beta-priv [IAM role](https://www.terraform.io/docs/providers/aws/#assume-role) (admin privs, for `terraform apply`), pass role ARN by variable - -## Prod -1. [ ] Setup terraform-prod IAM account -1. [ ] Add credentials to 1password -1. [ ] Add CI variables -1. [ ] Update `.gitlab-ci.yml` with `tf_*_prod` jobs and credentials -1. [ ] Create terraform-prod-ro [IAM role](https://www.terraform.io/docs/providers/aws/#assume-role) (read-only, for `terraform plan`), pass role ARN by variable -1. 
[ ] Create terraform-prod-priv [IAM role](https://www.terraform.io/docs/providers/aws/#assume-role) (admin privs, for `terraform apply`), pass role ARN by variable",3.0 -17604839,2019-01-23 15:28:50.520,Database reviews: process documentation,"In order to ramp up the database review process, we are going to improve documentation and line out the current workflow for database reviews: - -* Document expectations towards database reviews -* Document workflow for database reviews",2.0 -17598570,2019-01-23 12:10:45.852,Sidekiq json logs not parsed by mtail.,Some dashboards went blank due to sidekiq-cluster logs in prod now in json format.,1.0 -17596910,2019-01-23 11:41:21.968,Consider moving chef server to GCP,"Chef server is currently running at Digital Ocean. We should consider if it would be better to move it to GCP instead. This issue is for discussing the pros and cons. A followup issue should be opened if decide to move Chef to GCP. - -### Summarizing discussion: - -#### Pro: -* Network and Permission setup might become easier -* one less one-off service to take care of -* We could upgrade Chef and Ubuntu while keeping the old Chef around as a fallback - -#### Con: -* If GCP goes down we need to first standup Chef before we can bootstrap GitLab.com. - * But installing in a different region and tested backups should mitigate the risk. - -## Conclusion - -We will move chef to GCP. Tracked via #6128. - -**Pre-Requirements are working backup and restore procedures for Chef (#6065, #6075).**",3.0 -17596223,2019-01-23 11:16:50.896,Regular backups of chef server,"We need to do regular backups of chef. While postgres is already backed up via wal-g (#5995) we still need to backup other data locally persisted by chef by using tools like `chef-server-ctl backup` or `knife ec backup`. -A runbook for restoring chef from backups also would be of help.",5.0 -17595873,2019-01-23 11:06:25.814,Monitor chef redis,Monitor redis on chef server.,2.0 -17595834,2019-01-23 11:04:47.549,Monitor chef rabbitmq,Monitor rabbitmq on chef server.,2.0 -17595118,2019-01-23 10:53:06.270,Monitor chef nginx,Monitor nginx on chef server.,2.0 -17595026,2019-01-23 10:49:26.945,Monitor Chef Erlang Components,"We need to find a way to monitor the Erlang-based components of Chef server. There probably is a way to interact with the beam vm to get metrics. We should alert if one of the components is down. - -* oc_bifrost -* opscode-chef-mover -* opscode-erchef -* opscode-expander -* opscode-pushy-server -* opscode-solr4",5.0 -17583384,2019-01-23 05:05:06.241,"When 1 server in a fleet of many goes down, multiple alerts fire","The name of this alert: -* IncreasedServerConnectionErrors -* IncreasedBackendConnectionErrors - - -**Problem description goes here** -* One server in a fleet of 8 barfed. Do we care why that ONE server went down? Sure. We totally should. Should I get woken up at midnight cuz of one server out of 8 barfed? No. We have over provisioned this service, and have 7 other server that have the ability to take up the slack during the time for which this server is down. - -This server was rebooted by google. The underlying host for which it was running had failed. 
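For the discussion on paging thresholds, one direction is a fleet-aware rule that only pages when a meaningful share of a service's backends is failing, rather than on any single node. A rough sketch in Prometheus alerting-rule form (the metric names and the 10% threshold are illustrative, not our current rules):

```yaml
# Illustrative only, not an existing rule: page when more than 10% of
# connections to a backend are erroring across the whole fleet for 10 minutes.
- alert: FleetWideBackendConnectionErrors
  expr: >
    sum by (backend) (rate(haproxy_backend_connection_errors_total[5m]))
    /
    sum by (backend) (rate(haproxy_backend_connections_total[5m]))
    > 0.10
  for: 10m
  labels:
    severity: critical
  annotations:
    description: 'More than 10% of connections to {{ $labels.backend }} are failing fleet-wide.'
```

For reference, the GCE activity log for the automatic restart of this node follows: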
- -```json -[ - { - ""insertId"": ""teusfpfndyiui"", - ""jsonPayload"": { - ""event_subtype"": ""compute.instances.automaticRestart"", - ""info"": [ - { - ""detail_message"": ""Instance automatically restarted by Compute Engine."", - ""code"": ""STATUS_MESSAGE"" - } - ], - ""version"": ""1.2"", - ""event_timestamp_us"": ""1548218996130755"", - ""actor"": { - ""user"": ""system"" - }, - ""resource"": { - ""zone"": ""us-east1-d"", - ""id"": ""8456396115316406636"", - ""name"": ""web-pages-02-sv-gprd"", - ""type"": ""instance"" - }, - ""event_type"": ""GCE_OPERATION_DONE"", - ""trace_id"": ""systemevent-1548218990085-58018d2f0450d-d19e7c56-5ebab86d"", - ""operation"": { - ""name"": ""systemevent-1548218990085-58018d2f0450d-d19e7c56-5ebab86d"", - ""type"": ""operation"", - ""zone"": ""us-east1-d"", - ""id"": ""6370736769153518747"" - } - }, - ""resource"": { - ""type"": ""gce_instance"", - ""labels"": { - ""zone"": ""us-east1-d"", - ""project_id"": ""gitlab-production"", - ""instance_id"": ""8456396115316406636"" - } - }, - ""timestamp"": ""2019-01-23T04:49:56.130755Z"", - ""severity"": ""INFO"", - ""labels"": { - ""compute.googleapis.com/resource_zone"": ""us-east1-d"", - ""compute.googleapis.com/resource_name"": ""web-pages-02-sv-gprd"", - ""compute.googleapis.com/resource_id"": ""8456396115316406636"", - ""compute.googleapis.com/resource_type"": ""instance"" - }, - ""logName"": ""projects/gitlab-production/logs/compute.googleapis.com%2Factivity_log"", - ""receiveTimestamp"": ""2019-01-23T04:49:56.197639846Z"" - }, - { - ""protoPayload"": { - ""@type"": ""type.googleapis.com/google.cloud.audit.AuditLog"", - ""serviceName"": ""compute.googleapis.com"", - ""methodName"": ""compute.instances.automaticRestart"" - }, - ""insertId"": ""xkoj2rdqd0q"", - ""resource"": { - ""type"": ""gce_instance"", - ""labels"": { - ""zone"": ""us-east1-d"", - ""project_id"": ""gitlab-production"", - ""instance_id"": ""8456396115316406636"" - } - }, - ""timestamp"": ""2019-01-23T04:49:56.026Z"", - ""severity"": ""INFO"", - ""logName"": ""projects/gitlab-production/logs/cloudaudit.googleapis.com%2Fsystem_event"", - ""operation"": { - ""id"": ""systemevent-1548218990085-58018d2f0450d-d19e7c56-5ebab86d"", - ""producer"": ""compute.instances.automaticRestart"", - ""first"": true, - ""last"": true - }, - ""receiveTimestamp"": ""2019-01-23T04:49:56.726006935Z"" - }, - { - ""insertId"": ""1jhx4mlf6fqw8x"", - ""jsonPayload"": { - ""version"": ""1.2"", - ""event_timestamp_us"": ""1548218979841940"", - ""actor"": { - ""user"": ""system"" - }, - ""resource"": { - ""name"": ""web-pages-02-sv-gprd"", - ""type"": ""instance"", - ""zone"": ""us-east1-d"", - ""id"": ""8456396115316406636"" - }, - ""event_type"": ""GCE_OPERATION_DONE"", - ""trace_id"": ""systemevent-1548218979548-58018d24f7bad-a4bf7a75-8c20aaef"", - ""operation"": { - ""type"": ""operation"", - ""zone"": ""us-east1-d"", - ""id"": ""2051909572614573196"", - ""name"": ""systemevent-1548218979548-58018d24f7bad-a4bf7a75-8c20aaef"" - }, - ""event_subtype"": ""compute.instances.hostError"", - ""info"": [ - { - ""code"": ""STATUS_MESSAGE"", - ""detail_message"": ""Instance terminated by Compute Engine."" - } - ] - }, - ""resource"": { - ""type"": ""gce_instance"", - ""labels"": { - ""zone"": ""us-east1-d"", - ""project_id"": ""gitlab-production"", - ""instance_id"": ""8456396115316406636"" - } - }, - ""timestamp"": ""2019-01-23T04:49:39.841940Z"", - ""severity"": ""INFO"", - ""labels"": { - ""compute.googleapis.com/resource_zone"": ""us-east1-d"", - 
""compute.googleapis.com/resource_name"": ""web-pages-02-sv-gprd"", - ""compute.googleapis.com/resource_id"": ""8456396115316406636"", - ""compute.googleapis.com/resource_type"": ""instance"" - }, - ""logName"": ""projects/gitlab-production/logs/compute.googleapis.com%2Factivity_log"", - ""receiveTimestamp"": ""2019-01-23T04:49:39.929195716Z"" - }, - { - ""protoPayload"": { - ""@type"": ""type.googleapis.com/google.cloud.audit.AuditLog"", - ""serviceName"": ""compute.googleapis.com"", - ""methodName"": ""compute.instances.hostError"" - }, - ""insertId"": ""-u8lpvcdrl98"", - ""resource"": { - ""type"": ""gce_instance"", - ""labels"": { - ""instance_id"": ""8456396115316406636"", - ""zone"": ""us-east1-d"", - ""project_id"": ""gitlab-production"" - } - }, - ""timestamp"": ""2019-01-23T04:49:39.743Z"", - ""severity"": ""INFO"", - ""logName"": ""projects/gitlab-production/logs/cloudaudit.googleapis.com%2Fsystem_event"", - ""operation"": { - ""id"": ""systemevent-1548218979548-58018d24f7bad-a4bf7a75-8c20aaef"", - ""producer"": ""compute.instances.hostError"", - ""first"": true, - ""last"": true - }, - ""receiveTimestamp"": ""2019-01-23T04:49:40.879207891Z"" - } -] -``` - -* Use this issue as a means of discussing what we can do to remove pages and woke engineers when it's not needed. The server came back online just fine. It's healthy, chef did it's job, haproxy knows it's back.",1.0 -17580128,2019-01-23 00:18:43.774,Ensure firewall rules are in place such that we can properly scrape for DR metrics,Part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5469,2.0 -17576637,2019-01-22 19:36:46.476,Performance of review apps is slow,"Over the last couple of weeks, using review apps has been extremely slow on gitlab.com. The deploy job finishes as expected, but the actual use of the app is slow to the point of being unusable. I have to leave the tab open for several minutes in order to navigate through the app. - -I have noticed this on multiple projects, such as https://gitlab.com/gitlab-com/www-gitlab-com and https://gitlab.com/gitlab-org/design.gitlab.com. - -Current example that is still up as of 11:30AM PST Jan 22: https://matej-validating-settings-redesign-blog-post.about-src.gitlab.com - -slack: https://gitlab.slack.com/archives/C0AR2KW4B/p1548183331596500",1.0 -17572112,2019-01-22 17:25:41.570,Chef client fails on gstg bastion,"Looks like `chef-client` fails on `bastion-01-inf-gstg.c.gitlab-staging-1.internal` with: - -``` -[2019-01-22T17:23:40+00:00] INFO: Client key /etc/chef/client.pem is not present - registering -[2019-01-22T17:23:40+00:00] WARN: Failed to read the private key /etc/chef/validation.pem: # - -================================================================================ -Chef encountered an error attempting to create the client ""bastion-01-inf-gstg.c.gitlab-staging-1.internal"" -================================================================================ - -Private Key Not Found: ----------------------- -Your private key could not be loaded. If the key file exists, ensure that it is -readable by chef-client. 
- -Relevant Config Settings: -------------------------- -validation_key ""/etc/chef/validation.pem"" - -Platform: ---------- -x86_64-linux -``` - -Realized this while figuring out why @cshobe is not yet able to access staging hosts.",1.0 -17568836,2019-01-22 15:32:51.510,Right-size sidekiq nodes,"Based on the last 7 days of [metrics](https://prometheus.gprd.gitlab.net/graph?g0.range_input=1w&g0.step_input=86400&g0.expr=avg%20by%20(priority)%20(quantile_over_time(0.95%2C%20instance%3Anode_cpu_utilization%3Aratio%7Btype%3D%22sidekiq%22%2Cpriority!%3D%22utility%22%7D%5B1d%5D))%20*%20100&g0.tab=0&g1.range_input=1w&g1.step_input=86400&g1.expr=avg%20by%20(priority)%20(1%20-%20quantile_over_time(0.01%2C%20instance%3Anode_memory_available%3Aratio%7Btype%3D%22sidekiq%22%2Cpriority!%3D%22utility%22%7D%5B1d%5D))%20*%20100&g1.tab=0) we're grossly over-provisioned for sidekiq server size. - -| Priority | Mem % | CPU % -|------------|--------|------ -| traces | 22.53% | 4.28% -| asap | 44.21% | 18.98% -| pages | 16.22% | 6.26% -| pipeline | 44.18% | 19.15% -| pullmirror | 35.54% | 16.65% -| besteffort | 50.07% | 40.79% -| import | 21.45% | 13.47% -| realtime | 18.92% | 9.02% - -We should consider reducing the instance size for these nodes, and possibly adding a few more instances for the queues that need them. We should be able to safely get to 70% utilization on our most active queues without a problem. - -This would save us quite a lot of production resources. The napkin math says 50% of an engineer's salary.",3.0 -17564335,2019-01-22 13:27:14.528,"Console session logs are missing a ""history.*"" tag","Extension of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6028, we now have the logs in Stackdriver but we can't easily filter them since they are missing a tag. - -I think it is easily fixed by including `tag ${tag}` in the [`record_transformer`](https://gitlab.com/gitlab-cookbooks/gitlab_fluentd/blob/master/templates/default/sessions.conf.erb#L28-30) filters.",2.0 -17559102,2019-01-22 10:22:30.076,Registry Fleet | Memory Starvation,"From oncall handover meeting on 1/22, we have decided to create an issue for the Registry servers getting rebooted due to memory consumption. (Between the week of 1/15 - 1/22, 14 out of the 19 incidents were due to this). - -The next thing we talked about possibly doing is doing profiling on the registry server(s) since the issues have been happening more and more frequently and see if we can identify the source of the memory leak.",2.0 -17553237,2019-01-22 08:59:37.620,Remove persona authentication from forum.gitlab.com,"Persona was an authentication provider by Mozilla, that was discontinued like [2+ years ago](https://developer.mozilla.org/en-US/docs/Archive/Mozilla/Persona)! - -I thought we got rid of it, but it seems we still use it on the forum. These are the plugins we should use https://gitlab.com/gitlab-com/runbooks/blob/master/howto/discourse-forum.md#plugins-we-use - -## Action - -1. [x] Follow steps in https://gitlab.com/gitlab-com/runbooks/blob/master/howto/discourse-forum.md#adding-or-removing-a-plugin and make sure the persona plugin is absent.",2.0 -17542428,2019-01-21 21:58:06.642,Move Chef Repo to Ops Instance,"The chef repository is currently mirrored from gitlab.com to the ops instance. This creates latency between every merge and when it can be applied. It also means that if GitLab.com is down, we cannot easily push any changes that might be needed to bring it back up. 
- -It is set up like this to optimize for anyone who might want to submit MR's to the repo without having access to the ops instance. We need to optimize this for the majority of requests. For the small number of outside requests that we receive for this repository, we can manually apply the patches. - -The steps are: -- [ ] Audit and adjust permissions on the chef-repo project in the ops instance -- [ ] Ensure that the repositories are sync'd -- [ ] Change the direction of the sync from the ops instance to the gitlab.com instance -- [ ] Change any CI jobs or other automation to point the chef server to the ops instance -- [ ] Notify all users to change the remotes in their local copies",3.0 -17520126,2019-01-21 11:47:03.707,[Design] OKR GitLab Implementation,Write design to implement OKRs using GitLab.,2.0 -17519947,2019-01-21 11:39:43.376,Develop a framework for automating infrastructure changes,"Take https://gitlab.com/gitlab-com/gl-infra/production/issues/633 for example, I want to have a script for it that looks roughly like this: - -```ruby -step do - stop_service 'chef-client', on_role: '{{env}}-base-db-patroni' -end - -step do - pause_patroni_cluster -end - -step do - merge_and_apply 'https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/474' -end - -patroni_replica = patroni_replicas.sample -step do - on_node patroni_replica do - converge_chef - restart_service 'consul' - end - - verify do - entries = query_consul_dns 'replica.patroni.service.consul.', on: patroni_replica - - should_include patroni_replica.private_ip, in: entries - end -end - -step do - (patroni_members - [patroni_replica]).each do |patroni_member| - on_node patroni_member do - converge_chef - restart_service 'consul' - end - - verify do - role = patroni_member.leader? ? 'master' : 'replica' - entries = query_consul_dns ""#{role}.patroni.service.consul."", on: patroni_member - - should_include patroni_member.private_ip, in: entries - end - end -end - -step do - merge_and_apply 'https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/473' -end - -step do - on_all_nodes do - converge_chef - end - - verify do - count = run_in_rails 'puts Gitlab::Database::LoadBalancing.proxy.load_balancer.instance_eval { @host_list }.hosts.map(&:host).count' - - should_equal count, patroni_replicas.count - end -end -``` - -Notes about the approach: - -* Using a DSL makes the script easy to read and review -* Each action is a Ruby class with separate methods for dry-run and full-run -* Dry-runs should allow checking prerequisites - * If a step is executing `knife ssh ...`, we can check if knife is properly configured and we can actually SSH into machines -* In full-runs you'll be prompted for step progression (skip, retry, halt, etc...) -* Environment is not mentioned explicitly in the script -* Over time we should grow an inventory of reusable actions -* Logs everywhere -* The framework should not have any gem dependencies - -I have a rough structure for the framework, will push the whole thing once we have something that works.",5.0 -17508483,2019-01-21 01:54:52.057,Inventory Catalogue - Version,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. 
- -## Service -Version - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508478,2019-01-21 01:54:03.416,Inventory Catalogue - Web,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Web - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508471,2019-01-21 01:53:39.245,Inventory Catalogue - SideKiq,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -SideKiq - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508469,2019-01-21 01:53:15.948,Inventory Catalogue - Share,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Share - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508463,2019-01-21 01:52:33.674,Inventory Catalogue - Runner,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Runner - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508400,2019-01-21 01:43:43.838,Inventory Catalogue - Registry,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Registry - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? 
-Reach out to @aamarsanaa",2.0 -17508225,2019-01-21 01:17:01.730,Inventory Catalogue - PSQL Timing,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -PSQL Timing - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508223,2019-01-21 01:16:34.994,Inventory Catalogue - Prometheus App,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Prometheus App - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508217,2019-01-21 01:15:58.370,Inventory Catalogue - Prometheus,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Prometheus - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508212,2019-01-21 01:15:28.947,Inventory Catalogue - Postgres DR Delayed,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Postgres DR Delayed - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508208,2019-01-21 01:14:59.656,Inventory Catalogue - Postgres DR Archive,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Postgres DR Archive - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508199,2019-01-21 01:14:09.945,Inventory Catalogue - Postgres,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. 
- -## Service -Postgres - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17508194,2019-01-21 01:13:34.649,Inventory Catalogue - Patroni,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Patroni - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",1.0 -17465567,2019-01-18 13:47:13.202,Fix postgres wal archiving on chef.gitlab.com before running out of disk space,"WAL logs are piling up in `/var/opt/opscode/postgresql/9.2/data/pg_xlog` since 2018-10-03 because the wal-e archive command is failing. It is taking 250GB on disk already (92% disk usage) so we need to fix it before running out of space. - -``` -2019-01-18_13:42:35.84557 LOG: archive command failed with exit code 1 -2019-01-18_13:42:35.84559 DETAIL: The failed archive command was: /usr/bin/envdir /etc/wal-e.d/env /opt/wal-e/bin/wal-e wal-push pg_xlog/00000001000000F3000000F8 -2019-01-18_13:42:37.01083 wal_e.main ERROR MSG: no storage prefix defined -2019-01-18_13:42:37.01085 HINT: Either set one of the --file-prefix, --gs-prefix, --s3-prefix or --wabs-prefix options or define one of the WALE_FILE_PREFIX, WALE_GS_PREFIX, WALE_S3_PREFIX, WALE_SWIFT_PREFIX or WALE_WABS_PREFIX, environment variables. -``` - -We need to monitor for successful db backups. -Chef monitoring is tracked in #5951.",2.0 -17465341,2019-01-18 13:37:18.079,"Problem statement and defining a goal for ""PG Repack"" epic","The epic for ""PG Repack"" is not well defined (there's no problem statement nor a defined goal). - -Let's start with defining the problem and setting a goal for the epic. Ultimately this will be the beginning of the ""design document"" and also become the description of the Epic. - -I would like the epic to be specific and measurable, such that we know what we're aiming for (see [SMART](https://en.wikipedia.org/wiki/SMART_criteria)).",3.0 -17457094,2019-01-18 08:22:04.930,Inventory Catalogue - Pages,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Pages - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17457081,2019-01-18 08:21:44.105,Inventory Catalogue - Mailroom,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. 
- -## Service -Mailroom - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17457059,2019-01-18 08:20:58.368,Inventory Catalogue - Load Balancer (FE),"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -Load Balancer (FE) - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -17457042,2019-01-18 08:20:01.698,Inventory Catalogue - InfluxDB,"## Task -Fill out the gSIC (GitLab Service Inventory Catalogue) template for the service. The task of filling it out is really up to us as we need to reach out to various team members in order to capture the information. - -## Service -InfluxDB - -## Sample Template Location -https://gitlab.com/gitlab-com/www-gitlab-com/blob/5f6a07f582be3c2a15914f4773b8de8e13a6d7e9/source/handbook/engineering/infrastructure/design/sample/201901_ServiceInventoryCatalogue/service-catalogue.yml - -## Questions or Comments about the template? -Reach out to @aamarsanaa",2.0 -32556319,2020-03-27 21:28:15.235,Create new gitaly storage shard node to replace `nfs-file46`,"Gitaly storage shard `nfs-file46` (`file-46-stor-gprd.c.gitlab-production.internal`) is currently at `62.27%` usage. - -A new gitaly node should be created, and added to the list of shards configured to be included in consideration for storing new project repositories. - -That way, `nfs-file46` can be removed from rotation without any concern that the node's removal from the configuration will put any additional burden on the remaining nodes. - -It is important to avoid acceleration of usage growth on the remaining nodes accepting new repositories.",3.0 -32556303,2020-03-27 21:27:07.447,Create new gitaly storage shard node to replace `nfs-file43`,"Gitaly storage shard `nfs-file43` (`file-43-stor-gprd.c.gitlab-production.internal`) is currently at `60.38%` usage. - -A new gitaly node should be created, and added to the list of shards configured to be included in consideration for storing new project repositories. - -That way, `nfs-file43` can be removed from rotation without any concern that the node's removal from the configuration will put any additional burden on the remaining nodes. - -It is important to avoid acceleration of usage growth on the remaining nodes accepting new repositories.",2.0 -32552241,2020-03-27 18:26:50.129,Resolve redis config discrepancy for redis-cache between gstg and gprd,"Apart from credentials, the only discrepancy between the production and staging config for redis-cache nodes is the disk persistence options. The redis-cache nodes in the staging environment enabled periodically writing an RDB dump, whereas production does not. Since this dataset is meant to be ephemeral and gets persisted to disk anyway on clean shutdown, this may be a reasonable setting. 
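As a minimal sketch of aligning the running staging nodes once the Chef-managed config drops the `save` lines, snapshotting can also be turned off at runtime without a restart (auth and socket/host flags omitted; run on each staging redis-cache node):

```shell
# Disable periodic RDB snapshots at runtime to match production.
# The lasting fix is the Chef/omnibus config change; this only updates the live config.
/opt/gitlab/embedded/bin/redis-cli config set save ''
/opt/gitlab/embedded/bin/redis-cli config get save   # should now return an empty value
```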
We can re-enable it later if we decide it's too expensive to repopulate this cache from empty, but anecdotally several people have mentioned that they think it's tolerable. For now, let's just get the staging config to match production. - -For reference, here's the discrepancy we're going to resolve: - -```shell -msmiley@saoirse:~$ ssh redis-cache-01-db-gprd.c.gitlab-production.internal -- sudo cat /var/opt/gitlab/redis/redis.conf | egrep -v '^ *($|#)' | perl -pe 's/(masterauth|requirepass) .*/$1 **REDACTED**/' > /tmp/redis.conf.gprd - -msmiley@saoirse:~$ ssh redis-cache-01-db-gstg.c.gitlab-staging-1.internal -- sudo cat /var/opt/gitlab/redis/redis.conf | egrep -v '^ *($|#)' | perl -pe 's/(masterauth|requirepass) .*/$1 **REDACTED**/' > /tmp/redis.conf.gstg - -msmiley@saoirse:~$ diff -U0 /tmp/redis.conf.{gstg,gprd} ---- /tmp/redis.conf.gstg 2020-03-27 10:53:49.830460831 -0700 -+++ /tmp/redis.conf.gprd 2020-03-27 10:53:26.821766688 -0700 -@@ -12,3 +11,0 @@ --save 900 1 --save 300 10 --save 60 10000 -```",1.0 -32538978,2020-03-27 13:25:00.939,Elastic: field expansion matches too many fields," - -Request: - -
- -``` -{ - ""version"": true, - ""size"": 500, - ""sort"": [ - { - ""json.time"": { - ""order"": ""desc"", - ""unmapped_type"": ""boolean"" - } - } - ], - ""_source"": { - ""excludes"": [] - }, - ""aggs"": { - ""2"": { - ""date_histogram"": { - ""field"": ""json.time"", - ""fixed_interval"": ""30s"", - ""time_zone"": ""UTC"", - ""min_doc_count"": 1 - } - } - }, - ""stored_fields"": [ - ""*"" - ], - ""script_fields"": { - ""controller_and_action"": { - ""script"": { - ""source"": ""doc['json.controller.keyword'] + \""#\"" + doc['json.action.keyword']"", - ""lang"": ""painless"" - } - } - }, - ""docvalue_fields"": [ - { - ""field"": ""@timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.expiry_from"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.expiry_to"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.bucket.start"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.bucket.stop"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.commits.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.created_after"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.created_before"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.due_date"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.head_commit.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.base.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.base.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.base.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.closed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.head.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.head.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.head.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.merged_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.raw_response.created_on"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.raw_response.updated_on"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.repository.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.bucket.start"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.bucket.stop"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.commits.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.head_commit.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.base.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.base.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.base.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": 
""json.extra.request_forgery_protection.pull_request.closed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.head.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.head.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.head.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.merged_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.repository.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.time"", - ""format"": ""date_time"" - }, - { - ""field"": ""publish_time"", - ""format"": ""date_time"" - } - ], - ""query"": { - ""bool"": { - ""must"": [], - ""filter"": [ - { - ""multi_match"": { - ""type"": ""best_fields"", - ""query"": ""ExternalDiffUploader"", - ""lenient"": true - } - }, - { - ""match_phrase"": { - ""json.controller"": { - ""query"": ""Projects::MergeRequests::DiffsController"" - } - } - }, - { - ""range"": { - ""json.time"": { - ""format"": ""strict_date_optional_time"", - ""gte"": ""2020-03-27T12:00:00.000Z"", - ""lte"": ""2020-03-27T12:30:00.000Z"" - } - } - } - ], - ""should"": [], - ""must_not"": [] - } - }, - ""highlight"": { - ""pre_tags"": [ - ""@kibana-highlighted-field@"" - ], - ""post_tags"": [ - ""@/kibana-highlighted-field@"" - ], - ""fields"": { - ""*"": {} - }, - ""fragment_size"": 2147483647 - } -} -``` -
 - -Response (two of the twelve shard failures shown): -
- -``` -{ - ""took"": 5158, - ""timed_out"": false, - ""_shards"": { - ""total"": 762, - ""successful"": 750, - ""skipped"": 750, - ""failed"": 12, - ""failures"": [ - { - ""shard"": 0, - ""index"": ""pubsub-rails-inf-gprd-001925"", - ""node"": ""jmnNQegZRWOO0aJBFjnZew"", - ""reason"": { - ""type"": ""query_shard_exception"", - ""reason"": ""failed to create query: {\n \""bool\"" : {\n \""filter\"" : [\n {\n \""multi_match\"" : {\n \""query\"" : \""ExternalDiffUploader\"",\n \""fields\"" : [ ],\n \""type\"" : \""best_fields\"",\n \""operator\"" : \""OR\"",\n \""slop\"" : 0,\n \""prefix_length\"" : 0,\n \""max_expansions\"" : 50,\n \""lenient\"" : true,\n \""zero_terms_query\"" : \""NONE\"",\n \""auto_generate_synonyms_phrase_query\"" : true,\n \""fuzzy_transpositions\"" : true,\n \""boost\"" : 1.0\n }\n },\n {\n \""match_phrase\"" : {\n \""json.controller\"" : {\n \""query\"" : \""Projects::MergeRequests::DiffsController\"",\n \""slop\"" : 0,\n \""zero_terms_query\"" : \""NONE\"",\n \""boost\"" : 1.0\n }\n }\n },\n {\n \""range\"" : {\n \""json.time\"" : {\n \""from\"" : \""2020-03-27T12:00:00.000Z\"",\n \""to\"" : \""2020-03-27T12:30:00.000Z\"",\n \""include_lower\"" : true,\n \""include_upper\"" : true,\n \""format\"" : \""strict_date_optional_time\"",\n \""boost\"" : 1.0\n }\n }\n }\n ],\n \""adjust_pure_negative\"" : true,\n \""boost\"" : 1.0\n }\n}"", - ""index_uuid"": ""HunDEJAFRKieC7kFcif7zw"", - ""index"": ""pubsub-rails-inf-gprd-001925"", - ""caused_by"": { - ""type"": ""illegal_argument_exception"", - ""reason"": ""field expansion matches too many fields, limit: 1024, got: 1470"" - } - } - }, - { - ""shard"": 0, - ""index"": ""pubsub-rails-inf-gprd-001926"", - ""node"": ""Nce627z_R7aRVIjH1JkAog"", - ""reason"": { - ""type"": ""query_shard_exception"", - ""reason"": ""failed to create query: {\n \""bool\"" : {\n \""filter\"" : [\n {\n \""multi_match\"" : {\n \""query\"" : \""ExternalDiffUploader\"",\n \""fields\"" : [ ],\n \""type\"" : \""best_fields\"",\n \""operator\"" : \""OR\"",\n \""slop\"" : 0,\n \""prefix_length\"" : 0,\n \""max_expansions\"" : 50,\n \""lenient\"" : true,\n \""zero_terms_query\"" : \""NONE\"",\n \""auto_generate_synonyms_phrase_query\"" : true,\n \""fuzzy_transpositions\"" : true,\n \""boost\"" : 1.0\n }\n },\n {\n \""match_phrase\"" : {\n \""json.controller\"" : {\n \""query\"" : \""Projects::MergeRequests::DiffsController\"",\n \""slop\"" : 0,\n \""zero_terms_query\"" : \""NONE\"",\n \""boost\"" : 1.0\n }\n }\n },\n {\n \""range\"" : {\n \""json.time\"" : {\n \""from\"" : \""2020-03-27T12:00:00.000Z\"",\n \""to\"" : \""2020-03-27T12:30:00.000Z\"",\n \""include_lower\"" : true,\n \""include_upper\"" : true,\n \""format\"" : \""strict_date_optional_time\"",\n \""boost\"" : 1.0\n }\n }\n }\n ],\n \""adjust_pure_negative\"" : true,\n \""boost\"" : 1.0\n }\n}"", - ""index_uuid"": ""URp08IJpRjKQ6kRnKFJQ8w"", - ""index"": ""pubsub-rails-inf-gprd-001926"", - ""caused_by"": { - ""type"": ""illegal_argument_exception"", - ""reason"": ""field expansion matches too many fields, limit: 1024, got: 1136"" - } - } - } - ] - }, - ""hits"": { - ""total"": 0, - ""max_score"": 0, - ""hits"": [] - } -} -``` - -
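-The failed shards above appear to come from the unqualified `multi_match` expanding to every mapped field in those indices. A minimal sketch of a fix, assuming the string only needs to be matched against a couple of known fields (the field list below is an assumption and should be adjusted to wherever `ExternalDiffUploader` actually appears in the rails logs):
-
-```
-{
-  ""multi_match"": {
-    ""type"": ""best_fields"",
-    ""query"": ""ExternalDiffUploader"",
-    ""lenient"": true,
-    ""fields"": [""json.message"", ""json.exception.class""]
-  }
-}
-```
-
-Scoping the filter this way keeps the rest of the query unchanged and avoids tripping the per-query field expansion limit on the wider indices.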
",3.0 -32503229,2020-03-26 17:51:12.192,"Configure WAL-G's ""wal-push"" and ""backup-push"" to test Postgres backups creation with WAL-G on production","In https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/8903, we started to test WAL-G for both the creation and use of backups on **staging**. - -After 3 weeks of successful testing (see `Daily staging restore using WAL-G, from WAL-G staging archive` in https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/pipeline_schedules), it is time to start testing on a production replica. - -1. [x] Decide how replica will be chosen and how it will be configured using Chef (note that patroni-01 is the master in gprd right now). Such replica is to be marked as unavailable for failover. -1. [x] In addition to WAL-E, install WAL-G 0.2.14 to all Postgres instances (how it can be done: https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/blob/master/bootstrap.sh#L84). It is okay to have both WAL-E and WAL-G installed. -1. [x] Configure WAL-G. Use some new GCS bucket, different from the existing ones. -1. [ ] on the chosen replica in `gprd`, put WAL-G's `wal-push` to `archive_command`, and set `archive_mode` to `'always'`. -1. [ ] Ensure that wal-push log does not have errors, but shows successful WAL-pushing and the new GCS bucket/folder is being filled with WALs. -1. [ ] Configure daily `backup-push` on the same replica (a cronjob similar to WAL-E's `backup-push` cronjob on the master), to have full backups daily. - -Additional TODOs: - -- [ ] https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitlab-wale-backups.yml#L24 -- [ ] adjust runbooks",8.0 -32496839,2020-03-26 16:04:08.939,Incorrect runbook link for alert `service_apdex_slo_out_of_bounds_lower_15m`,"Link for `runbook` (""`troubleshooting/service-ci-runners.md`"") on `service_apdex_slo_out_of_bounds_lower_15m` alert is incorrect. - -Correct link is: https://ops.gitlab.net/gitlab-com/runbooks/-/blob/master/docs/ci-runners/service-ci-runners.md",1.0 -32481109,2020-03-26 09:28:59.820,Terraform artifacts too large in CI,"CI jobs for terraform are failing with `413 Request Entity Too Large` errors. - -https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/jobs/1029505 - -``` - environments/gstg/.terraform: found 3365 matching files - ERROR: Uploading artifacts to coordinator... too large archive id=1029505 responseStatus=413 Request Entity Too Large status=413 Request Entity Too Large token=bjnHYt3n - FATAL: too large - ERROR: Job failed: exit code 1 -```",3.0 -32478381,2020-03-26 08:25:14.694,reactive_caching queue execution time SLO violations,"We are regularly getting alerts for violating the `reactive_caching` queue execution time SLO. -We should investigate the reason and either improve the execution times or adjust the SLO.",3.0 -32344201,2020-03-23 16:26:09.344,Experimentally consider reducing Gitaly storage shard node instance GCP Machine Type,"Experimentally consider and conduct and data-based evaluation of the consequences of reducing Gitaly storage shard node instance GCP Machine Type as part of a measure to reduce Operating Expenditures. - -Proposed by @jstava: https://gitlab.com/gitlab-org/gitlab/-/issues/211609",3.0 -32341678,2020-03-23 15:23:57.562,Execute PostgreSQL upgrade on staging,"Using the template: https://ops.gitlab.net/gitlab-com/gl-infra/db-migration/-/issues/10 - -We would like to execute the first test migration for the PostgreSQL upgrade. 
The date would be at 31 of March of 2020 12:30 UTC - -The maintenance time for staging should be from 30 to 40 minutes. - -Please upgrade 3 nodes, we will keep 3 secondaries hosts for the rollback.",2.0 -32249101,2020-03-20 20:47:00.064,Redis PCAP files in /tmp,"@gitlab-com/gl-infra - -There are many pcap files that are quite large on `redis-cache-01-db-gprd.c.gitlab-production.internal` in the `/tmp` directory. This has put the filespace free down to 6% or so on the root filesystem. - -If you need any of these files for posterity, please move them and delete them. And if you made some and don't need them, consider deleting them ASAP. - -Thank you!",1.0 -32239542,2020-03-20 15:41:05.524,create the rollback script for the postgresql upgrade,"We need to script the rollback of the Postgresql upgrade if we need to use it. - -We would consider using GCP snapshots for it. - -Also, we would keep 2 database nodes that are read-only that we will not execute the upgrade. With those nodes, we could restore the environment quickly. - -Please consider that we will be in a scenario after the upgrade script described in the file: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9277 - -I would like share with you the steps for the rollback script: - -At the begining of the rollback we will have: - -* A snapshot consistent from the database before the upgrade. -* Excluded instances group - 4 nodes not upgraded. If we are successful with the upgrade, we would not upgrade them, reducing the cluster number of nodes. - -At this point we are ready to run the rollback upgrade. - -- Run Chef `merge_and_apply` to rollback the `patroni.yml` file. -- Stop services on failed instances like `chef-client`, `pgbouncer`, `patroni`, and `postgresql` (check manually). -- On failed instances change config to false values and avoid start service accidentally. -- Stop `pgbouncer` on ""excluded instances"" to avoid connections. -- Start `chef-client` services on ""excluded instances"". -- Resume patroni service --wait -- Check if patroni elect a new leader. -- Start `pgbouncer` -- [continue...]",4.0 -32155027,2020-03-18 22:38:15.960,New storage setting for Terraform state,"In https://gitlab.com/groups/gitlab-org/-/epics/2673, the Configure:System group is creating a new feature to store Terraform state in GitLab. This involves adding a new storage setting, since Terraform state doesn't correlate to any of the other settings. - -https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26619 introduces an uploader that uses this new storage setting. It also adds default values for this setting. - -https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/3983 adds the storage setting to Omnibus. - -https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/1208 adds the storage setting to GitLab Helm Chart. 
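-
-For context, a rough sketch of the application-side settings this feature maps to, in Omnibus `gitlab.rb` form (on gstg/gprd we would set the equivalents through chef-repo role attributes). The key names follow the pattern used for other object storage types and the values are placeholders, so treat all of this as an assumption rather than the final interface:
-
-```ruby
-# Illustrative only: key names and values are assumptions
-gitlab_rails['terraform_state_enabled'] = true
-gitlab_rails['terraform_state_object_store_enabled'] = true
-gitlab_rails['terraform_state_object_store_remote_directory'] = 'some-terraform-state-bucket'
-gitlab_rails['terraform_state_object_store_connection'] = {
-  'provider' => 'Google',
-  'google_project' => 'example-gcp-project',
-  'google_json_key_location' => '/etc/gitlab/objectstorage.json'
-}
-```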
- -- [x] Add buckets for the feature in our module https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/storage-buckets/-/merge_requests/23 -- [x] Setup buckets on gstg https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/1608 -- [x] Enable feature on gstg https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3163 -- [x] Setup buckets on gprd https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/1610 -- [x] Enable feature on gprd https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3164 - -/cc @nicholasklick @mattkasa",2.0 -32151883,2020-03-18 20:27:11.005,DNS record for `*.asaba-sandbox.gitlap.com`,"Please add the following A record to DNS: - -``` - ""*.asaba-sandbox.gitlap.com."": { - ""records"": [ - ""35.208.230.243"" - ], - ""ttl"": 300 - } -``` - -Related to gitlab-com/gl-security/engineering#884. - -@dawsmith @AnthonySandoval for assignment.",1.0 -32145921,2020-03-18 18:06:27.682,Incident Review for intermittent CPU saturation on gitaly node file-45 on 2020-03-16," - -Incident being reviewed: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1774 - -Related incident with the same root cause: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1731 - -These incidents are confidential, to protect the privacy of the projects involved. - -## Summary - -- Service(s) affected : Gitaly -- Minutes downtime or degradation : Several 10-25 minute periods of slowness, summing to 195 minutes spread across 11 days - -One of the Gitaly shards (`file-45-stor-gprd`) reached CPU saturation, leading to an increased slowness and rate of errors (timeouts). Other Gitaly shards were not affected. - -The CPU saturation was triggered by many clients concurrently running `git fetch` or `git clone` on a repo that has a large graph of commit objects to traverse. To generate the inventory of objects that the client needs, Gitaly runs `git pack-objects`, which for this repo requires significant CPU time and memory. Enough of these helper processes were running that the host starved for CPU time, resulting in the observed increased latency and error rate (timeouts) from this particular Gitaly shard. - -For reference, this incident ties back to other similar regressions on this particular Gitaly shard starting on 2020-03-05. Each regression event caused roughly 10-25 minutes of degraded performance on 1 of the 52 Gitaly shards (`file-45`). Now that we understand the pathology, we can link it back to the following regression events: -- 2020-03-16: 3 events = 65 minutes -- 2020-03-13: 1 event = 20 minutes -- 2020-03-12: 3 events = 60 minutes -- 2020-03-09: 1 event, 10 minutes -- 2020-03-05: 2 events, 40 minutes - - -### Mitigations - -The owner of the repo whose distributed builds triggered this regression has kindly offered to adjust their build to reduce its impact. Reusing fresher git-clones on the build servers may help, and reducing concurrency would definitely help. - -On the GitLab server side, Gitaly needs to avoid saturation by limiting resource usage by these helper processes. Both CPU time and memory are relevant to this problem. - -One options is to limit the concurrency of the gRPC `PostUploadPack` calls, which spawn these `git pack-objects` helper processes. (For context, the `git` client's fetch, pull, clone, etc. subcommand sends an HTTP POST request to the `[repo-url]/git-upload-pack` endpoint, which causes Rails to send Gitaly a `PostUploadPack` gRPC call.) 
Using an instance-wide limit in combination with a smaller scoper limit per-namespace or per-project would allow the other projects to continue functioning while the busy peer is rate-limited. - -Another possibility is using cgroups to impose a hard limit on resource usage by particular types of helper processes. However, under certain workload patterns, that could result in starving all tenants of a Gitaly shard. This may be better suited as a second line of defense, with the application-based concurrency limits acting as the primary defense with more graceful degradation behavior. - - -## Impact & Metrics - -- What was the impact of the incident? Slowness and intermittent errors for operations involving git repos stored on Gitaly shard `file-45`. -- Who was impacted by this incident? All customers having git repos stored on Gitaly shard `file-45` (roughly 2% of the repos on GitLab.com). -- How did the incident impact customers? Git operations would have been slow enough to sometimes timeout. -- How many attempts were made to access the impacted service/feature? Routine traffic over a 20 minute timespan -- How many customers were affected? Approximately 2% of repos were slow to access (sometimes up to timing out). Unknown how many customers actively experienced this slowness. -- How many customers tried to access the impacted service/feature? See above. - - -### Dashboard ""gitaly: Overview"" - -Latency apdex for the Gitaly service as a whole (i.e. not just the degraded shard) shows a drop of at most 2% during the regressions on 2020-03-16. Corresponding spikes in Gitaly error rate (due to timeouts) jumped as high as 0.6%. A similar pattern was found on the earlier dates that we later backtracked this pathology to. - -#### Latency - -https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1&from=1584378000000&to=1584396000000&fullscreen&panelId=3&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-sigma=2 - -![Screenshot_from_2020-03-18_13-21-00](/uploads/9f029a264cdf89e7f9b6451632788943/Screenshot_from_2020-03-18_13-21-00.png) - -#### Error rate - -https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1&from=1584378000000&to=1584396000000&fullscreen&panelId=4&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-sigma=2 - -![Screenshot_from_2020-03-18_13-26-30](/uploads/ecaa2a365c57c1c85905a5be558866eb/Screenshot_from_2020-03-18_13-26-30.png) - -### Dashboard ""Host Stats"" - -The host-level resource usage metrics for the affected Gitaly node (`file-45`) clearly show the CPU usage saturation and memory usage increase. - -Network egress throughput spiked as a result of sending the clients the git objects they needed, but the network throughput was nowhere near the saturation point. Disk I/O (not shown) was not significantly affected. Disk read throughput moderately increased but most disk I/O was satisfied by the filesystem cache, with physical disk I/O only increasing as a result of the cache being eroded by the anonymous memory allocations by the numerous `git pack-objects` processes. 
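-
-For ad-hoc checks outside of the dashboard panels below, a PromQL sketch of the host CPU utilization for this node (the instance label matcher is an assumption about how the node is labelled in Prometheus):
-
-```
-# instance matcher is an assumption; adjust to the actual label value
-1 - avg(rate(node_cpu_seconds_total{mode=""idle"", instance=~""file-45-stor-gprd.*""}[5m]))
-```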
- -#### CPU usage - -https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats?orgId=1&var-environment=gprd&var-node=file-45-stor-gprd.c.gitlab-production.internal&var-promethus=prometheus-01-inf-gprd&from=1584378000000&to=1584396000000&fullscreen&panelId=8 - -![Screenshot_from_2020-03-18_13-29-29](/uploads/0399bf11617d5f941ed26ab0c3367744/Screenshot_from_2020-03-18_13-29-29.png) - -#### Memory usage - -https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats?orgId=1&var-environment=gprd&var-node=file-45-stor-gprd.c.gitlab-production.internal&var-promethus=prometheus-01-inf-gprd&from=1584378000000&to=1584396000000&fullscreen&panelId=39 - -![Screenshot_from_2020-03-18_13-31-17](/uploads/c7c94b0db43e90d19cecb7a1ed874b3c/Screenshot_from_2020-03-18_13-31-17.png) - -#### Network usage - -https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats?orgId=1&var-environment=gprd&var-node=file-45-stor-gprd.c.gitlab-production.internal&var-promethus=prometheus-01-inf-gprd&from=1584378000000&to=1584396000000&fullscreen&panelId=12 - -![Screenshot_from_2020-03-18_13-32-21](/uploads/9deb042d4ef9df1bd6adbd90831945b2/Screenshot_from_2020-03-18_13-32-21.png) - -### Command line list of processes - -`top` shows the CPU usage was predominantly from the `git pack-objects` processes spawned by Gitaly, rather than by threads within Gitaly itself. Note that each of these processes holds a significant amount of resident memory beyond its shared memory. - -Also, each of these `git pack-objects` processes has accumulated many seconds of CPU time (some have several minutes of CPU time). Follow-up analysis showed that these processes tended to have 40-60% of wall clock time spent on-CPU, at least during the regression; we suspect these processes are entirely CPU-bound and that their off-CPU time may have just been waiting for their next time slice, due to CPU starvation. - -These `git pack-objects` processes were mostly operating on the same git repo directory (not shown), which is one of the ways we traced the regression back to its triggering conditions. - -#### Top processes using CPU time - -```shell -top - 20:03:16 up 67 days, 23:14, 2 users, load average: 82.07, 71.93, 43.87 -Tasks: 972 total, 75 running, 700 sleeping, 0 stopped, 30 zombie -%Cpu(s): 94.9 us, 4.6 sy, 0.0 ni, 0.1 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st -KiB Mem : 12376827+total, 18716604 free, 71279952 used, 33771716 buff/cache -KiB Swap: 0 total, 0 free, 0 used. 
49609072 avail Mem - - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND -30400 git 20 0 12.944g 2.407g 18684 S 87.3 2.0 3104:23 /opt/gitlab/embedded/bin/gitaly /var/opt/gitlab/gitaly/config.toml -24383 git 20 0 3666708 0.985g 724520 R 55.9 0.8 0:11.10 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -18792 git 20 0 4010080 1.421g 910388 R 52.3 1.2 0:44.75 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -15964 git 20 0 4048068 1.616g 995520 R 52.0 1.4 1:06.52 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -17808 git 20 0 4014652 1.480g 945304 R 51.3 1.3 0:50.92 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -19394 git 20 0 3912712 1.389g 891808 R 50.3 1.2 0:41.27 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -10993 git 20 0 4409636 3.195g 2.279g R 50.0 2.7 1:36.05 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -16897 git 20 0 4531220 2.765g 1.732g R 47.4 2.3 3:41.96 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -13408 git 20 0 4305772 1.813g 0.988g R 47.1 1.5 1:24.43 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -14087 git 20 0 4264240 1.769g 0.983g R 47.1 1.5 1:19.99 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -14833 git 20 0 4264240 1.758g 0.979g R 46.7 1.5 1:18.16 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -14516 git 20 0 4531220 2.773g 1.739g R 46.1 2.3 3:51.42 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -19094 git 20 0 4010080 1.418g 909592 R 46.1 1.2 0:43.87 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset - 4721 git 20 0 4431444 3.173g 2.237g R 45.8 2.7 1:43.69 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -14081 git 20 0 4531220 2.923g 1.888g R 45.4 2.5 3:57.74 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -16557 git 20 0 4035388 1.556g 971636 R 45.1 1.3 0:59.52 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -21825 git 20 0 3852644 1.219g 824620 R 45.1 1.0 0:26.51 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -12431 git 20 0 4406380 3.212g 2.300g R 44.4 2.7 1:32.37 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -23211 git 20 0 3769068 1.121g 776424 R 44.4 0.9 0:18.12 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -16331 git 20 0 4036872 1.577g 980652 R 44.1 1.3 1:01.84 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -24308 git 20 0 4495184 2.531g 1.533g R 44.1 2.1 3:13.28 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -15668 git 20 0 4049636 1.626g 976.6m R 43.8 1.4 1:07.32 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -17750 git 20 0 4021540 1.504g 953560 R 43.8 1.3 0:52.94 /opt/gitlab/embedded/libexec/git-core/git 
pack-objects --revs --thin --stdout --delta-base-offset - 5924 root 20 0 775092 211420 12276 S 43.1 0.2 1190:39 /opt/td-agent/embedded/bin/ruby -Eascii-8bit:ascii-8bit /opt/td-agent/embedded/bin/fluentd --log /var/log/td-agent/td-agent.log --daem+ -12601 git 20 0 4531220 2.895g 1.860g R 42.8 2.5 3:58.59 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -18466 git 20 0 4011404 1.438g 919280 R 42.5 1.2 0:46.23 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -19942 git 20 0 3906556 1.356g 876976 R 42.5 1.1 0:37.52 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -17034 git 20 0 4035388 1.551g 969276 R 42.2 1.3 0:58.61 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -13404 git 20 0 4305772 1.803g 0.985g R 41.8 1.5 1:22.21 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -15915 git 20 0 4047368 1.610g 991744 R 41.5 1.4 1:05.33 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -19079 git 20 0 3912712 1.386g 890100 R 41.2 1.2 0:42.30 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset - 2105 git 20 0 3624628 3.441g 3600 R 39.2 2.9 2:59.31 /opt/gitlab/embedded/libexec/git-core/git unpack-objects --pack_header=2,33 --strict --max-input-size=10485760000 -18049 git 20 0 4018152 1.483g 943724 R 39.2 1.3 0:50.88 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -22826 git 20 0 4495184 2.525g 1.528g R 38.9 2.1 3:18.47 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -16607 git 20 0 4031684 1.536g 963384 R 38.6 1.3 0:57.23 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -21679 git 20 0 3854772 1.221g 823376 R 38.6 1.0 0:28.12 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -23295 git 20 0 4495184 2.544g 1.547g R 38.6 2.2 3:17.52 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -16630 git 20 0 4035388 1.569g 978100 R 38.2 1.3 1:00.71 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -19127 git 20 0 3913500 1.391g 890976 R 37.9 1.2 0:42.71 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -12430 git 20 0 4414044 3.219g 2.300g R 37.6 2.7 1:32.96 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -17812 git 20 0 4013020 1.475g 944612 R 37.6 1.2 0:50.06 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -17967 git 20 0 4016844 1.482g 943656 R 37.6 1.3 0:50.65 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -13384 git 20 0 4305772 1.812g 0.987g R 37.3 1.5 1:23.82 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -14866 git 20 0 4264240 1.716g 985.4m R 37.3 1.5 1:11.54 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -15103 git 20 0 4264240 1.722g 989.3m R 37.3 1.5 1:12.10 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset - 7408 git 20 0 2235824 870708 578048 
R 36.9 0.7 1:44.94 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --progress --delta-base-offset --include-tag - 7871 git 20 0 4409636 3.151g 2.235g R 36.9 2.7 1:37.31 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -22605 git 20 0 4495184 2.534g 1.536g R 36.9 2.1 3:21.44 /opt/gitlab/embedded/libexec/git-core/git pack-objects --revs --thin --stdout --delta-base-offset -... -``` - -### CPU profile of Gitaly host `file-45` during the regression - -A CPU profile of all on-CPU processes shows that `git pack-objects` dominated CPU time and that they collectively spent most of their time calculating the deltas between what git objects the client request says it already has versus what the server-side bare repo has available to offer. This suggests that the pathology is more likely to affect git repos with a large complex graph of object trees to compare. - -#### `perf` profile of on-CPU processes' stacks during the regression - -```shell -msmiley@file-45-stor-gprd.c.gitlab-production.internal:~$ sudo perf record --freq 99 -a -g -- sleep 10 -[ perf record: Woken up 1 times to write data ] -[ perf record: Captured and wrote 5.118 MB perf.data (31680 samples) ] - -msmiley@file-45-stor-gprd.c.gitlab-production.internal:~$ sudo perf script --header > perf-script.during_cpu_saturation_by_many_git_pack_objects_pids.out - -msmiley@saoirse:~/src/git/public/FlameGraph [master|✔] $ cat /tmp/perf-script.during_cpu_saturation_by_many_git_pack_objects_pids.out | ./stackcollapse-perf.pl | ./flamegraph.pl > /tmp/file-45.during_git_pack_objects_cpu_saturation.svg -``` - -Complete SVG of flame graph: [file-45.during_git_pack_objects_cpu_saturation.svg](/uploads/647be579053e398fe994aa73d94c319b/file-45.during_git_pack_objects_cpu_saturation.svg) - -![flame-graph.git-pack-objects-saturating-cpu-on-gitaly-node-file-45](/uploads/9e427542d6431045d5e257b911e0ee83/flame-graph.git-pack-objects-saturating-cpu-on-gitaly-node-file-45.png) - - -## Detection & Response - -Start with the following: - -- How was the incident detected? PagerDuty alerts such as [here](https://gitlab.pagerduty.com/incidents/PI1CE5Z) and [here](https://gitlab.pagerduty.com/incidents/PZESZJC): ""Gitaly error rate is too high"" -- Did alarming work as expected? Yes. -- How long did it take from the start of the incident to its detection? 7 minutes. -- How long did it take from detection to remediation? The regressions self-resolved roughly 10 minutes after the alert triggered. Analysis of the regression continued after incidents resolved, and once we discovered enough about the pathology and triggering conditions, we reached out to the owner of the triggering repo. We temporarily blocked the behavior for that repo to protect other customers from the side-effects, and worked with the customer to understand their use-case and mitigate the impact. -- Were there any issues with the response to the incident? No. We had all the necessary access to people and technology. - -## Root Cause Analysis - -A Gitaly node saturated its CPUs and significantly changed its memory usage profile. - -1. Why? - More CPU-bound memory-hungry processes were spawned than there are CPUs available on the host. -2. Why? - Gitaly received numerous `PostUploadPack` gRPC calls from clients running `git fetch` (or similar), and Gitaly processed too many of them concurrently. -3. Why? - Most repos are not prone to this pathology, so it has not yet been a focus of attention for Gitaly tuning. -4. Why? 
- To trigger the pathology, the repo being accessed must have a very large object graph to traverse when composing a response to the client's `git fetch` (more specifically the `git` client's HTTP POST to `[repo_path]/git-upload-pack`). Even for repos that meet this condition, to cause a regression on the Gitaly node, over 30 client requests to this repo must be concurrently running. -5. Why? - The Gitaly server does not have enough CPU and memory capacity to handle workloads beyond the above specifications. Once reaching the saturation point, all other Gitaly operations begin to slow down due to CPU starvation, with the degree of effect being proportional to the number of CPU-bound processes running. - -## What went well - -- Cross-team collaboration was fantastic! -- IMOC and CMOC did a wonderful job (as usual!) of facilitating the investigation, getting help where needed, doing regular pulse checks, and guiding the balance of what info was ok to release publicly (as we were unsure at first whether this was a precursor to a malicious attack or an accidental outcome of a legitimate use-case). -- Alerting worked appropriately. -- Gitaly's and Rails' event logging to Elasticsearch gave us excellent observability into the app behavior and client request profile. -- The CPU profiling tools (`perf` and BCC) gave us a clearer picture of what the CPU time was being spent on and which of the numerous git repos were associated with the bulk of the activity. - -## What can be improved - -- To prevent this from happening again, we need to prevent Gitaly from spawning enough `git` processes to saturate machine resources such as CPU and memory. Gitaly already supports some forms of concurrency limiting, and tuning those may be sufficient. Additional layers of protection (e.g. cgroups) could also be implemented, although we should consider the effects of saturation in each case and choose the most desirable failure mode for the health of the service as a whole when saturation is reached. (For example, we prefer: slowness over downtime; downtime over possible corruption; automatic over manual recovery; etc.) -- Each of the incidents in this 11-day series were low enough severity (`S3`) that we did not take the time to do root cause analysis until we noticed they were part of a recurring pattern. Our on-call engineers field enough alerts, questions, and interruptions that we rarely have time to analyze self-recovering regressions like this. This may be worth discussing as an iterative process improvement for incident triage. Or maybe it's an acceptable cost of our existing prioritization scheme. - -## Corrective actions - -- Prevent Gitaly from running too many `git pack-objects` processes. - - Issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9554 - - Estimated date of completion: TBD - - Owner: TBD -- For short-term mitigation, add support for removing user access to CI/CD without having to fully block the user. - - Issue: https://gitlab.com/gitlab-org/gitlab/-/issues/35346 - - Estimated date of completion: TBD - - Owner: TBD - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/engineering/root-cause-analysis/) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -32131630,2020-03-18 13:01:00.119,Execute database benchmarking on Postgresql 11,"The goal is to check our load on PostgreSQL 11 and verify that the performance is similar or better. 
We could find some queries that the performance could change with the new planner, so we would like to reduce these cases. - -We would execute the benchmark with the same concept that we have planned to test our primary database. - -The main steps will be: - -- List the queries from pg_stat_statements - -- Execute a JMeter from a different host to reproduce the queries against the database, to generate similar traffic as production.",4.0 -32012958,2020-03-15 22:10:14.218,Recurring CPU-bound git-fetch processes on file-praefect-02 gitaly node,"While looking into intermittent alerts about high Gitaly latency on one of the nodes behind Praefect (e.g. [here](https://gitlab.pagerduty.com/incidents/PJ6FJVO) and [here](https://gitlab.pagerduty.com/incidents/PFACGWK)), I noticed that this host is almost always running 2 or 3 `git fetch` processes. These `git fetch` processes are always CPU-bound. Because they tend to run for several minutes, it was easy to find one and do some profiling on it before it exited. Repeating this a few times on different `git fetch` processes consistently showed the same pattern -- these particular `git fetch` processes spent a lot of time reading directories via the `getdents` syscall. - -These slow CPU-intensive `git fetch` processes may or may not be why the Gitaly latency is high enough to alert -- it's plausible but not proven. - -I thought it was worth documenting, mainly because further investigation may lead to tuning the git config of the handful of GitLab-owned repos stored on this host. These repos (e.g. the repo for gitaly itself) tend to have many refs, which may or may not be related to why `git fetch` is doing so many expensive dentry fetches. - -### Observations - -#### perf_event profile - -This is one of several examples, each giving consistent results even though the `git fetch` was running in at least 3 different git repos. - -```shell -msmiley@file-praefect-02-stor-gprd.c.gitlab-production.internal:~$ pgrep -u git -f '/opt/gitlab/embedded/bin/git ' | xargs -r ps uwf -USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND -git 32544 97.1 0.1 224848 207100 ? R 21:18 0:51 /opt/gitlab/embedded/bin/git fetch tmp-f0c29603f769ef2a5e19823338fa - -msmiley@file-praefect-02-stor-gprd.c.gitlab-production.internal:~$ sudo perf record --freq 99 -g --pid 32544 -- sleep 120 -[ perf record: Woken up 3 times to write data ] -[ perf record: Captured and wrote 1.079 MB perf.data (8860 samples) ] -Terminated -``` - -Complete flamegraph in SVG format: [git-fetch.svg](/uploads/7a5463cb953f0f86279eb111e6d4b59a/git-fetch.svg) - -Screenshot without the tall thin tower: - -![Screenshot_from_2020-03-15_15-03-44](/uploads/789aac5e826cf7061a109039e41b2ab4/Screenshot_from_2020-03-15_15-03-44.png) - - -#### Size of dentry cache at the time - -The kernel's slab cache for dentries is roughly 600 MB as of time writing. 
- -```shell -msmiley@file-praefect-02-stor-gprd.c.gitlab-production.internal:~$ date -u ; sudo slabtop --once -Sun Mar 15 21:26:54 UTC 2020 - Active / Total Objects (% used) : 35725531 / 36164607 (98.8%) - Active / Total Slabs (% used) : 880109 / 880109 (100.0%) - Active / Total Caches (% used) : 78 / 127 (61.4%) - Active / Total Size (% used) : 5496363.91K / 5627374.90K (97.7%) - Minimum / Average / Maximum Object : 0.01K / 0.16K / 8.00K - - OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME -24567153 24533027 99% 0.10K 629927 39 2519708K buffer_head -3320415 3051784 91% 0.19K 79287 42 634296K dentry -1887918 1886804 99% 0.04K 18509 102 74036K ext4_extent_status -1250816 1237169 98% 0.06K 19544 64 78176K kmalloc-64 -1233603 1179390 95% 1.05K 42948 30 1374336K ext4_inode_cache -... - -msmiley@file-praefect-02-stor-gprd.c.gitlab-production.internal:~$ free -m - total used free shared buff/cache available -Mem: 120867 3323 13714 1160 103829 114721 -Swap: 0 0 0 -``` - -#### Confirm that `getdents` syscall is both frequent and time-consuming - -Profile the top syscalls by count and duration for another example `git fetch` process. - -```shell -msmiley@file-praefect-02-stor-gprd.c.gitlab-production.internal:~$ pgrep -u git -f '/opt/gitlab/embedded/bin/git ' | xargs -r ps uwf -USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND -git 11227 68.1 0.0 47364 12140 ? Sl 21:45 0:06 /opt/gitlab/embedded/bin/git fetch tmp-abdca8edcb52002d5ac34c56dbd7 -git 11346 99.4 0.0 42608 27176 ? R 21:46 0:04 /opt/gitlab/embedded/bin/git fetch tmp-da29fee686ab94a6c8a08bf13797 - -msmiley@file-praefect-02-stor-gprd.c.gitlab-production.internal:/usr/share/bcc/tools$ sudo ./syscount --pid 11346 --duration 10 --latency -Tracing syscalls, printing top 10... Ctrl+C to quit. -[21:47:00] -SYSCALL COUNT TIME (us) -getdents 66594 8425669.975 -open 22198 79449.929 -write 11099 79104.089 -lstat 22204 58918.045 -fstat 22198 30487.748 -close 22198 28841.996 -brk 2 16.327 -stat 1 14.969 - -Detaching... -```",2.0 -31980837,2020-03-14 02:15:01.253,Certificate expiring on prometheus.GitLab.com,The certificate on `prometheus.GitLab.com` is about to expire: https://gitlab.pagerduty.com/incidents/P0MVVYH,1.0 -31931941,2020-03-12 17:44:52.040,Update customers.gitlab.com cheat sheet to prevent outages,"Originating incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1763 -Incident review: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9488 - -Update the page in the dotcom documentation to reflect running the console as the `gitlab-customers` user and not as root. - -The location that needs updated: https://gitlab.com/gitlab-com/support/dotcom/dotcom-internal/-/wikis/Customers-Console-Rails-Cheat-Sheet -A source showing the proper commands: https://gitlab.com/gitlab-org/customers-gitlab-com/#accessing-production-as-an-admin-and-logs-and-console",1.0 -31915588,2020-03-12 12:36:14.579,change requests to improve checkpoint setup,"As mentioned in the issues : -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5189 - -and - -https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/2848 - -We would like to reconsider checkpoint-related PostgreSQL settings: - -- `max_wal_size`: `5GB` → `64GB` for less frequent ""forced"" checkpoints; -- `checkpoint_timeout`: `5min` → `10min` for less frequent ""planned"" checkpoints; -- `checkpoint_completion_target`: `0.7` → `0.9` for smoother checkpointer -behavior. 
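-
-For reference, a minimal sketch of the target values expressed as plain `ALTER SYSTEM` statements; the actual rollout goes through Chef/Patroni as described below, so this is illustrative only. All three settings are reloadable and do not require a restart:
-
-```sql
-ALTER SYSTEM SET checkpoint_timeout = '10min';
-ALTER SYSTEM SET checkpoint_completion_target = 0.9;
-ALTER SYSTEM SET max_wal_size = '64GB';
-SELECT pg_reload_conf();  -- none of these settings require a restart
-```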
- -And we would need to create the following changes, with 1-2 days of observations between them, on each step performing the change only on staging, and then on production: - -1. change `checkpoint_timeout` from `5min` to `10min` and change `checkpoint_completion_target` from `0.7` to `0.9`. - 1. apply to gstg - 1. apply to gprd -1. change `max_wal_size` from `5GB` to `8GB` - 1. apply to gstg - 1. apply to gprd -1. change `max_wal_size` from `8GB` to `16GB` - 1. apply to gstg - 1. apply to gprd -1. change `max_wal_size` from `16GB` to `32GB` - 1. apply to gstg - 1. apply to gprd -1. change `max_wal_size` from `32GB` to `64GB` - 1. apply to gstg - 1. apply to gprd - -For the 1st step and staging @NikolayS created: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/2871",2.0 -31906779,2020-03-12 08:35:24.913,Create ingress for GKE alertmanager,"To simplify adding silences, examining alerts without port forwarding. - -This should have restricted access, to whoever is currently allowed to access https://alerts.gprd.gitlab.net (at most). We could likely use the GCP IAP for this. - -@bjk-gitlab @mwasilewski-gitlab @AnthonySandoval",5.0 -31881534,2020-03-11 17:31:26.925,Implement elasticsearch operational logging for utility scripts like `storage_rebalance.rb` and `storage_revert.rb`,"Implement elasticsearch operational logging for utility scripts like `storage_rebalance.rb` and `storage_revert.rb`. - -Got the idea for this one from @cmiskell. - -> Just wondering if you're aware of SemanticLogger (https://github.com/rocketjob/semantic_logger)? Delivery use this by preference, and I did also in https://gitlab.com/gitlab-com/gl-infra/infra-vault/-/tree/master/backup_job when prompted. It does a lot of handy things, including logging to ElasticSearch if desired. Not critical, but might be nice. - -https://gitlab.com/gitlab-com/runbooks/-/merge_requests/1935#note_294926778",3.0 -31840173,2020-03-10 16:45:52.342,Implement actual proper opts parsing for bash scripts,"Implement actual proper opts parsing for bash scripts. - -At the very least for the storage management utilities.",2.0 -31838746,2020-03-10 16:14:08.742,"Re-factor the information runbook utility commands into a bash function, and support an `--include-wiki` cli argument flag","Re-factor the information commands into a bash function, and support an `--include-wiki` cli argument flag. - -This will mean that the `print_info` function will be invoked once for the main `.git` repo directory, and then again for the `.wiki.git` directory when the `--include-wiki` flag is given. - -This is dependent on implementation of proper options parsing, which is tracked by this issue: https://gitlab.com/gitlab-com/runbooks/-/issues/35",1.0 -31809361,2020-03-09 22:00:47.557,Update Disaster Recovery page in Handbook to reflect current state,"While [answering some customer questions](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9402), we noticed at least one piece of information is out of data on the [Handbook page describing our DR posture](https://about.gitlab.com/handbook/engineering/infrastructure/library/disaster-recovery/#repository-data). - -This task is to review that document and update it where needed with regard to current state of recoverability. 
- -Rendered: https://about.gitlab.com/handbook/engineering/infrastructure/library/disaster-recovery/ - -Source: https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/source/handbook/engineering/infrastructure/library/disaster-recovery/index.html.md",2.0 -31804085,2020-03-09 18:42:09.182,semantic-release _may_not be working for our terraform modules,"The `publish` step in our pipelines upon MR merge may not be working. Example pipeline of one that simply didn't do anything: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/pubsubbeat/-/jobs/972834 - -I, personally, was able to reproduce this precisely. Though another member of the team was not. From @alejandro this appeared to work as expected on this persons workstation: - -``` -alejandro@MacBook-Pro-4 ~/terraform-modules/google/pubsubbeat (master) $ GITLAB_URL=""https://ops.gitlab.net"" GL_TOKEN=""[REDACTED]"" semantic-release -[3:33:58 PM] [semantic-release] › ℹ Running semantic-release version 15.13.2 -[3:33:59 PM] [semantic-release] › ✔ Loaded plugin ""verifyConditions"" from ""@semantic-release/gitlab"" -[3:33:59 PM] [semantic-release] › ✔ Loaded plugin ""analyzeCommits"" from ""@semantic-release/commit-analyzer"" -[3:33:59 PM] [semantic-release] › ✔ Loaded plugin ""generateNotes"" from ""@semantic-release/release-notes-generator"" -[3:33:59 PM] [semantic-release] › ✔ Loaded plugin ""publish"" from ""@semantic-release/gitlab"" -[3:33:59 PM] [semantic-release] › ⚠ This run was not triggered in a known CI environment, running in dry-run mode. -[3:33:59 PM] [semantic-release] › ⚠ Run automated release from branch master in dry-run mode -[3:34:04 PM] [semantic-release] › ✔ Allowed to push to the Git repository -[3:34:04 PM] [semantic-release] › ℹ Start step ""verifyConditions"" of plugin ""@semantic-release/gitlab"" -[3:34:04 PM] [semantic-release] [@semantic-release/gitlab] › ℹ Verify GitLab authentication (https://ops.gitlab.net/api/v4) -[3:34:05 PM] [semantic-release] › ✔ Completed step ""verifyConditions"" of plugin ""@semantic-release/gitlab"" -[3:34:08 PM] [semantic-release] › ℹ Found git tag v5.3.0 associated with version 5.3.0 -[3:34:08 PM] [semantic-release] › ℹ Found 2 commits since last release -[3:34:08 PM] [semantic-release] › ℹ Start step ""analyzeCommits"" of plugin ""@semantic-release/commit-analyzer"" -[3:34:08 PM] [semantic-release] [@semantic-release/commit-analyzer] › ℹ Analyzing commit: Merge branch 'jarv/add-gke-sink' into 'master' -feat: add default sink for GKE with exclusions for application logs -See merge request gitlab-com/gl-infra/terraform-modules/google/pubsubbeat!19 -[3:34:08 PM] [semantic-release] [@semantic-release/commit-analyzer] › ℹ The commit should not trigger a release -[3:34:08 PM] [semantic-release] [@semantic-release/commit-analyzer] › ℹ Analyzing commit: feat: add default sink for GKE with exclusions for application logs -[3:34:08 PM] [semantic-release] [@semantic-release/commit-analyzer] › ℹ The release type for the commit is minor -[3:34:08 PM] [semantic-release] [@semantic-release/commit-analyzer] › ℹ Analysis of 2 commits complete: minor release -[3:34:08 PM] [semantic-release] › ✔ Completed step ""analyzeCommits"" of plugin ""@semantic-release/commit-analyzer"" -[3:34:08 PM] [semantic-release] › ℹ The next release version is 5.4.0 -[3:34:08 PM] [semantic-release] › ℹ Start step ""generateNotes"" of plugin ""@semantic-release/release-notes-generator"" -[3:34:08 PM] [semantic-release] › ✔ Completed step ""generateNotes"" of plugin 
""@semantic-release/release-notes-generator"" -[3:34:08 PM] [semantic-release] › ⚠ Skip v5.4.0 tag creation in dry-run mode -[3:34:08 PM] [semantic-release] › ⚠ Skip step ""publish"" of plugin ""@semantic-release/gitlab"" in dry-run mode -[3:34:08 PM] [semantic-release] › ✔ Published release 5.4.0 -[3:34:08 PM] [semantic-release] › ℹ Release note for version 5.4.0: -# 5.4.0 (https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/pubsubbeat/compare/v5.3.0...v5.4.0) (2020-03-09) -``` - -In order to unblock the work being performed, @alejandro pushed up this tag. This should be investigated to ensure other terraform modules are not also impacted.",2.0 -31802130,2020-03-09 17:34:04.399,Update/create runbooks for increase/reduce disks for the patroni cluster,"After increasing the disks from the patroni cluster we observed a restart from postgresql on the node Patroni-07. - -To increase the security from the platform, we would like to update/create our runbooks to stop the traffic on the node. - -I would suggest adding the tag on the node to drain the traffic: - -``` -noloadbalance: true -```",2.0 -31797992,2020-03-09 16:18:43.138,License DB Extraction for Data Warehouse,"The command is -`pg_dump -Fp --no-owner --no-acl license_gitlab_com_production | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > S{DUMPFILE}`",2.0 -31706514,2020-03-07 13:25:25.012,Create runbook to resync the delayed replicas,"I would like to describe the steps executed to restore the delayed replicas. - -I think would be positive to have this documented, and make it possible that more engineers could execute the procedure if needed.",2.0 -31704706,2020-03-07 11:30:36.098,resync dr-delayed replica,"at the moment we have a replication lag over 1 day. - -We need to be just 8 hours behind the primary cluster.",2.0 -31607481,2020-03-05 10:16:41.664,"Regular and predictable database latency spikes are leading to web latency slowdowns (or ""the {00,04,08,12,16,20}:05 spike"")","Spun out of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9248#note_294148937 - -Within ~1 minute of the following times, on a daily basis, we experience a slowdown on our web (and possibly other services) - -* `00:05` -* `04:05` -* `08:05` -* `12:05` -* `16:05` -* `20:05` - -# p99.99 `db` durations over 24 hours - -![image](/uploads/0b763fd835f60e788df09e3d3d5cc422/image.png) - -https://log.gprd.gitlab.net/goto/6f240764af89c33cbaf08ff91d92fe53 - -# p99.9 web duration over 3 minutes during a spike - -![image](/uploads/10bf79d5ea3db922bcf780c320e305fa/image.png) - -https://log.gprd.gitlab.net/goto/c414837b2116e33e0809ec871bafa946 - -cc @Finotto",4.0 -31599440,2020-03-05 09:01:35.199,Add pubsub key SLIs to metrics catalog,,5.0 -31599382,2020-03-05 09:00:52.318,Add stackdriver key SLIs to metrics catalog,,5.0 -31599310,2020-03-05 08:59:57.928,Add fluentd key SLIs to metrics catalog,,5.0 -31501130,2020-03-03 12:47:21.821,Postgresql Migration test on staging,"In this issue, I would like to plan and list the steps that we will execute to generate and restore the staging environment. - -Nowadays we have a cluster with Postgresql 9.6 - -We will need to execute the following steps : -- Install PostgreSQL 11. 
( chef ) -- Setup the new instance ( Chef ) -- Stop the traffic on PG 9.6 -- Execute the migration ( the migration plan that will include the scripted steps on ansible ) - take a backup/dump from the database cluster 9.6 *BEFORE THE UPGRADE* -- Extra tests with QA -- Stop the traffic on PG 11 -- After finish, the migration tests, remove PG11 and restore the database backup to the instance PG 9.6. -- restore the traffic to PG 9.6",4.0 -31500788,2020-03-03 12:38:39.025,Create process or feature to stop or control the impact of the background migrations on the database,"We have faced some background migrations that generated load on the database cluster. We would like to investigate the possibilities to manage better this situation in the future. - -The idea of this issue is to find out or improve/generate documentation about the background migrations and how to manage them. - -Would be a goal to have the following features : - -- Be possible to stop the migration. -- Setup the intervals when the migrations are executed. -- Change the chunks of data that will be affected. -- Pause the migration.",8.0 -31398755,2020-02-28 20:28:51.657,Fix terraform warnings regarding deprecations,"We've been updating our target terraform versions in several of our environments at https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/ for different reasons, and this has left behind a cumulus of warnings in the output of our terraform `plan`s. Some of them are quite simple to address, like removing quotes around data types in variable deffinitions (e.g. `""string""` => `string`), but others might require refactoring. - -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor/-/merge_requests/20 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor-redis/-/merge_requests/17 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor-with-group/-/merge_requests/16 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-sv-sidekiq/-/merge_requests/24 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-sv-with-group/-/merge_requests/14 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/gke/-/merge_requests/26 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/monitoring-with-count/-/merge_requests/20 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project/-/merge_requests/28 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/tcp-lb/-/merge_requests/8 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/pubsubbeat/-/merge_requests/20 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/cloudflare_workers/-/merge_requests/3 -- [x] https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/static-objects-cache/-/merge_requests/21 -- [x] https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/1510",3.0 -31370583,2020-02-28 08:07:10.263,evaluate using database labs for the delayed replica,"We would like to test the possibility of using database labs in the delayed replica scenario. Actually we have just a scenario of 12 hours delay applying logs - -Main advantages: - -- We could have snapshots each hour, and make the restore in time easier. I.e. 
-- The restore time of the images is fast.",4.0 -31333694,2020-02-27 10:10:54.242,change in the delayed replica from wal-e to wal-g,change the backup tool used on the delayed replica to use wal-g instead of wal-e,5.0 -31303472,2020-02-26 19:33:34.210,Bump ruby version for Chef cookbooks,"### Goal - -Update the pinned ruby version of our Chef cookbooks so that we can reliably run their `rspec` tests locally with `bundle exec rspec`. - -### Changes to make - -For each chef cookbook that has a pinned ruby version: -* Update the `.ruby-version` file to `2.6.3`. -* Update the `.gitlab-ci.yml` file to refer to `ruby` container image of the same major+minor version as in step 1. - -For reference, the `gitlab-server` cookbook is definitely affected, and the following additional cookbooks may be affected: - -```shell -cookbook-about-gitlab-com/.ruby-version:2.4.4 -cookbook-license-gitlab-com/.ruby-version:2.4.4 -cookbook-omnibus-gitlab/.ruby-version:2.5.3 -gitlab-exporters/.ruby-version:2.5.3 -gitlab-haproxy/.ruby-version:2.3.3 -gitlab_lsyncd/.ruby-version:2.4.4 -gitlab-nessus/.ruby-version:2.4.5 -gitlab-omnibus-prerequisites/.ruby-version:2.3.6 -gitlab-openssh/.ruby-version:2.3.6 -gitlab-prometheus/.ruby-version:2.5.3 -gitlab-server/.ruby-version:2.5.3 -gitlab-uptycs/.ruby-version:2.4.5 -``` - -### Background - -Yesterday I discovered that due to a known bug in RubyGems, I could not run `bundler install` on a cookbook (which was a prerequisite to my goal at the time: running the cookbook's rspec tests locally with `bundle exec rspec`). The bug and its work-arounds are described here: - -https://bundler.io/blog/2019/05/14/solutions-for-cant-find-gem-bundler-with-executable-bundle.html - -This bug requires your `bundler` ruby gem to *exactly* match the version of bundler that last wrote the Gemfile.lock (indicated by that file's `BUNDLED WITH` line). - -If you are using a version of ruby with the defective RubyGems that does not allow newer versions of bundler, then any bundler command will fail. 12 of our chef cookbooks currently pin the ruby version using a `.ruby-version` file (which is used by rbenv to switch the active ruby version). Updating the pinned ruby version in `.ruby-version` to be >= `2.6.3` would avoid this bug in RubyGems. Our chef-repo itself is currently pinned to ruby 2.6.3, so I'd like to propose updating all 12 pinned cookbooks to use that same ruby version. - -Note that our cookbooks' CI pipeline `test` stage does not seem to be affected by this bug, presumably because it always runs in a container starting with a pristine ruby environment. - -Questions for the team: -1. Can anyone think of a reason not to do this? -2. Apart from running each cookbook's rspec tests and letting the CI pipeline run its integration tests, what other testing would be appropriate? Some cookbooks lack rspec tests for some recipes. - -#### Example of the problem - -```shell -$ cat .ruby-version -2.5.3 - -$ rbenv install -... - -$ rbenv version -2.5.3 (set by /home/msmiley/src/git/gitlab/gitlab-cookbooks/gitlab-server/.ruby-version) - -$ gem install bundler -Fetching: bundler-2.1.4.gem (100%) -Successfully installed bundler-2.1.4 -Parsing documentation for bundler-2.1.4 -Installing ri documentation for bundler-2.1.4 -Done installing documentation for bundler after 3 seconds -1 gem installed - -$ bundle install -Traceback (most recent call last): - 2: from /home/msmiley/.rbenv/versions/2.5.3/bin/bundle:23:in `
' - 1: from /data/src/git/public/rbenv/versions/2.5.3/lib/ruby/2.5.0/rubygems.rb:308:in `activate_bin_path' -/data/src/git/public/rbenv/versions/2.5.3/lib/ruby/2.5.0/rubygems.rb:289:in `find_spec_for_exe': can't find gem bundler (>= 0.a) with executable bundle (Gem::GemNotFoundException) - -$ grep -A 1 'BUNDLED WITH' Gemfile.lock -BUNDLED WITH - 1.17.2 - -$ gem install bundler -v ""$(grep -A 1 ""BUNDLED WITH"" Gemfile.lock | tail -n 1)"" -... - -$ bundle install -Fetching gem metadata from https://rubygems.org/.......... -... -```",1.0 -31292481,2020-02-26 14:16:16.200,create database user for sidekiq,"create a new database user for sidekiq. - -It's a good practice and would make our life easier to identify the statements and logs in the database from this specific application. - - -We would like to know how much percent of the statements on the primary database are from sidekiq, and if we could redirect the read-only statements to the secondary nodes.",2.0 -31279527,2020-02-26 09:20:34.665,New dedicated `gitlab-qa-mirror-runner` runner manager,"As part of https://gitlab.com/gitlab-org/gitlab-qa/issues/261, we'd like to setup a dedicated `gitlab-qa-mirror-runner` runner manager for the https://gitlab.com/gitlab-org/gitlab-qa-mirror project. - -The set up / specifications would be similar to the `omnibus-gitlab-mirror-runner` runner manager. - -I originally opened a MR in the chef-repo, but then was pointed to Infrastructure as this would involve more steps than I anticipated: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/2490#note_43048 - -The runner manager should be under the `GitLab QA Projects` GCP folder and probably in a new dedicated `quality-runners` project? - -Any help is appreciated, thanks! :pray_tone2: - -/cc @gl-quality/eng-prod",1.0 -31239899,2020-02-25 10:32:51.190,Analyze queries from sidekiq,"The intention here is to recognize how much % are select statements coming from the read-write pgbouncer pools on the primary database. - -Having these numbers we could suggest a change in the application to use the read-only databases.",4.0 -31219344,2020-02-24 22:52:53.686,502 Bad Gateway error while testing purchase in staging customers portal,"Amanda Rueda encounters a `502 Bad Gateway` while trying to complete a simple purchase test in the staging customers portal. - -Occurs in all browsers. Clearing cache has no effect. - -Steps to reproduce: - -Staging gitlab page: https://customers.stg.gitlab.com/plans - -Then select the Bronze plan to purchase. - -Then when the nginx returns, the url is: https://customers.stg.gitlab.com/subscriptions/new?plan_id=2c92c0f95a24621b015a259a50307881&transaction=create_subscription (edited) - -Customer ID for testing in the customers portal is `4013`.",2.0 -31219311,2020-02-24 22:50:47.852,Migrate Gitlab's Production Project to `ops.gitlab.net`,"To further efforts to dogfood incident management product features–and to remove our dependency on the installation (GitLab.com) that we're supporting–we will export the `production` project from gitlab.com and import it onto ops.gitlab.net. - -A prerequisite for moving ops.gitlab.net (ops) will be to assess the production readiness of the installation. While we have no intent of publishing public issues or pages from ops.gitlab.net, the intent is for the ops installation to take over responsibility for more of our operational workflows (e.g. status page). 
- -Additionally, adjustments will require substantial changes to the handbook instructing the Engineer On-Call to open production incidents on ops. We'll begin a campaign to inform the company of the switch, encouraging engineers and business stakeholders to verify or request access to ops. This will be necessary to view and contribute to ongoing incidents. - -Lastly, we'll redirect Incident Management Automation workflows (&100) and all other automation to use ops as the API endpoint for interaction with incident issues. - -There will be a number of situations where we lose the convenience of cross referencing issues to other gitlab.com projects. We manage to handle this well enough with projects that primarily require participation from the infrastructure department alone. It could become more problematic when we need to increase visibility on issues that impact other business units. - -Cc @marin @dawsmith @brentnewton @glopezfernandez",5.0 -31217526,2020-02-24 21:33:31.513,Simplify and standardize path-based haproxy blocking,"Using file-based block lists simplifies the most common forms of ACL additions, making urgent changes safer and quicker to apply than writing new ACLs from scratch. - -Useful lists we could standardize include: -* regexp match on request path -* regexp match on beginning of request path (implemented today by ""blacklist-uris.lst"") -* substring match on request path (computationally cheaper but less flexible than the equivalent regexp matcher) -* CIDR match on client IP (after Cloudflare compatibility transformations) -* regexp match on request path rate-limited to configurable X requests per second per client IP (after Cloudflare compatibility transformations)",2.0 -31200227,2020-02-24 11:59:59.066,Document GCP Escalation Path in Runbooks,This escalation policy needs to be documented in our runbooks and accessible to the Incident Manager.,2.0 -31192506,2020-02-24 08:59:07.290,add extra box for the testing of pg upgrade,"we need to add an extra server to test the replication upgrade. - -The box should have the same configuration as the primary box that we are using for the upgrade from PostgreSQL. - -I should have the same cookbooks,recipes to install the same version of postgresql as the primary.",4.0 -31141840,2020-02-21 22:13:50.488,Verify SRE access to Azure,"The following SREs need to verify their access to Azure portal: - -- [x] aamarsanaa@gitlab.com @aamarsanaa -- [x] ahmad@gitlab.com @ahmadsherif -- [x] cbarrett@gitlab.com @craig -- [x] cfurman@gitlab.com @craigf -- [x] cmiskell@gitlab.com @cmiskell -- [x] dsylva@gitlab.com @devin -- [x] hmeyer@gitlab.com @T4cC0re -- [x] hphilipps@gitlab.com @hphilipps -- [x] msmiley@gitlab.com @msmiley -- [x] nnelson@gitlab.com @nnelson - -Please reachout to me if you do not have access to the portal. - -Cc @dawsmith",1.0 -31137645,2020-02-21 18:25:31.038,Configure Thanos to scrape Airflow,"The data team has a prometheus metrics endpoint ready to be scraped at https://airflow.gitlabdata.com/admin/metrics/ -Thanos needs to be configured to scrape this endpoint and store the metrics it gathers so the data team can start doing monitoring and alerting.",1.0 -31134970,2020-02-21 16:40:07.758,create config setup in chef of the new PostgreSQL 11.6 cluster,"We need to be able to install the package, create the folder structure, and configuration files of postgresql 11.6 on the hosts with chef. as we currently do for postgresql 9.6. - -Also, we need adapt monitoring, backups and the patroni setup. 
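For reference, the end state the new recipe needs to converge to can be sketched in plain shell. This is only an illustration of the manual steps already written down in the upgrade test notes further below, not the cookbook itself; the `data11` directory name and paths are assumptions:

```
# Target-version packages (server plus the pg_repack extension)
apt-get install postgresql-11 postgresql-11-repack

# Cluster directory owned by gitlab-psql, mode 0700
mkdir /var/opt/gitlab/postgresql/data11
chown gitlab-psql:gitlab-psql /var/opt/gitlab/postgresql/data11
chmod 700 /var/opt/gitlab/postgresql/data11

# Initialise the 11.6 cluster with the same locale/encoding as 9.6
su gitlab-psql -c '/usr/lib/postgresql/11/bin/initdb -D /var/opt/gitlab/postgresql/data11 --locale=C.UTF-8 --encoding=UTF8'
```

The recipe should also render `postgresql.conf` and `pg_hba.conf` into that directory from attributes, so Patroni can be pointed at the new data directory without hand edits.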
-At the moment we have a recipe to make the package available. - -We need to add to our chef to : - -- install the package of PG 11.6 -- create the setup for the cluster ( folder structure ) -- render the config file from chef ( the config from the chef to the proper patroni files ) -- change some monitoring functions from for Postgresql 11.6 ( there are some changes in the name of some functions ) -- Point the backups to the data folder from the Postgresql version 11.6 -- change the patroni setup to work with the data folder from pg 11.6",8.0 -31134677,2020-02-21 16:29:51.480,create chef recipe/cookbook to install extensions on postgresql 11.6,"We need to make available pg_repack for the new pg cluster - -we need to install the extensions pg_Repack and pg_Stat_Statements. - -We should add this config on the new cluster to start with if not would require a new restart in the future. - - -It should be added on the cookbook/recipes for pg 11.6.",4.0 -31134470,2020-02-21 16:24:07.238,create script for PostgreSQL upgrade,"we need to script all the steps the database migration ( could be in ansible): - - consider that all the steps will be executed one-by-one and we can cancel the total execution in case of any failure - - would execute them from a bastion in a shared session as screen - - the steps will be : - - -** Pre checks : ** -- Verify files ownership -- Check Version before the upgrade -- Verify check of pg_upgrade -- Delete and verify the monitoring functions are deleted. -- Verify the collation between versions. - -** Upgrade steps : ( migration script) ** -- stop all the connections and verify that -- stop the 9.6 cluster -- Take a snapshot from the database. -- Execute pg_upgrade ( all nodes ) -- Apply pg configuration changes ( parallel query parameters ( parallel ) and others… autovacuum maybe ) ( all the nodes ) ( MR to executed separated ) -- Start cluster -- Vacuumdb analyze ( primary ) -- Vacummdb freeze ( primary) - it will run for longer … we will start with QA tests and enabling traffic. -Changes on PG_exporter ( all nodes ) - - - -** Post checks : ( migration script) ** -- Check version after upgrade -- Create views and functions taht we need for monitoring and verify they are created properly -- Change some functions for monitoring in exporter",4.0 -31113051,2020-02-21 09:59:23.003,Add elasticsearch key SLIs to metrics catalog,We should add key SLIs for the elasticsearch logging cluster to the metrics catalog in order to define SLO alerts and dashboards based on them.,5.0 -31078638,2020-02-20 13:58:40.606,Rollout node_exporter 1.0,"The Prometheus node_exporter is now 1.0. - -* [x] Rollout release candidate to subset of nodes for testing. 
-* [x] Rollout final release - -Release candidate: https://github.com/prometheus/node_exporter/releases/tag/v1.0.0-rc.0",1.0 -31031271,2020-02-19 15:17:17.765,DNS and certificate update for Bouncer,"I'm currently in the process of moving [Bouncer](https://gitlab.com/gitlab-com/security-tools/janitor-rails), an internal antispam tool, over to GCP so that everyone who isn't me can actually access it: - -* I'd like to point the DNS entry for `bouncer.sec.gitlab.net` to 35.224.46.119 (the IP address of the GCP machine I've provisioned for this) - -* And I'd need an updated HTTPS certificate due to this move (and also, in a rare instance of serendipitous timing, the old cert just expired) - -Thanks, and let me know if there's anything else needed on my end!",1.0 -30994839,2020-02-18 21:09:11.461,2020-02-18: Canary web saturated during deploy," - -Incident: https://gitlab.com/gitlab-com/gl-infra/production/issues/1679 - -## Summary - -During a deploy to canary, sluggish behavior was reported in slack. Metrics show a very high spike in latency and saturation of the canary web fleet. Once the deploy had finished, everything recovered. - -- Service(s) affected : Web Canary -- Team attribution : -- Minutes downtime or degradation : ~40 minutes - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -* [Web Overview Dashboard of Canary at the time](https://dashboards.gitlab.net/d/web-main/web-overview?orgId=1&from=1582055400000&to=1582059600000&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=cny&var-sigma=2) - -* [Rails controller metrics](https://dashboards.gitlab.net/d/rPsQXrImk/rails-controller?orgId=1&from=1582055400000&to=1582059600000&var-env=gprd&var-type=web&var-stage=cny&var-controller=Projects::IssuesController&var-action=discussions.json) - -* Saturation -![Screen_Shot_2020-02-18_at_4.28.49_PM](/uploads/b0aac9d69e036813a51cca2e7e1af100/Screen_Shot_2020-02-18_at_4.28.49_PM.png) -https://dashboards.gitlab.net/d/web-main/web-overview?orgId=1&from=1582055400000&to=1582059600000&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=cny&var-sigma=2&fullscreen&panelId=6 - -* Requests per Second -![Screen_Shot_2020-02-18_at_4.31.43_PM](/uploads/ad8399278b9cf45f2c8c680833e5ff5d/Screen_Shot_2020-02-18_at_4.31.43_PM.png) -https://dashboards.gitlab.net/d/web-main/web-overview?orgId=1&from=1582055400000&to=1582059600000&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=cny&var-sigma=2&fullscreen&panelId=5 - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. 
bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/engineering/root-cause-analysis/) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -30945722,2020-02-17 21:28:53.097,Design database infrastructure for Praefect,"In order for Praefect to support replication coordination and failover it will rely on a PostgreSQL database. We must define a cost-effective and performant infrastructure to support this, with replication in mind. 
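If we go with a managed instance, a throwaway evaluation setup is cheap to stand up while we settle the Terraform module. A sketch, assuming CloudSQL; the instance names, tier, region and Postgres version below are placeholders, not a sizing recommendation:

```
# Primary with regional HA
gcloud sql instances create praefect-db-eval \
  --database-version=POSTGRES_9_6 \
  --tier=db-custom-2-7680 \
  --region=us-east1 \
  --availability-type=REGIONAL

# Read replica, to exercise replication behaviour
gcloud sql instances create praefect-db-eval-replica \
  --master-instance-name=praefect-db-eval \
  --tier=db-custom-2-7680
```

That would let us benchmark Praefect's queue and replication tables against a realistic failover story before committing to instance sizes in Terraform.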
- -More info on Praefect's data model: https://gitlab.com/gitlab-org/gitaly/issues/1495 - -- [x] https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/1468 -- [x] https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/2816 -- [ ] Opt-in to CloudSQL maintenance notifications (to be addressed on https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9462) -- [x] Configure preferred maintenance window day of the week and time https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/1480 - -/cc @gitlab-com/gl-infra for thoughts",6.0 -30917235,2020-02-17 11:03:33.647,create change requests to reduce the number of read only replicas,"As discussed in the issue https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8592 - -The goal is to reduce 2 replicas from the production cluster. - -We would like to have 6 nodes read-only receiving traffic and 2 nodes in standby without traffic. - - -Please create the change requests and proceed.",2.0 -30731697,2020-02-13 20:16:54.394,Clarify durations in deployer annotation text,"Clarify durations in annotation text. - -Old text printed the 2 durations adjacent to each other, resulting in -confusing phrasing such as ""an hour and 16 hours"": - -> deployer finished a deployer pipeline of ... on gprd which took -> an hour and 16 hours (wall time) - -This change aims to separate and more clearly define the two durations: - -> deployer finished a deployer pipeline of ... on gprd which had -> end-to-end wall clock duration of an hour (and sum of pipeline -> stage durations was 16 hours)",1.0 -30730945,2020-02-13 19:48:17.430,CI job stalled with no trace output for chef-repo apply_to_prod job,"Investigate why a CI job in our chef-repo's CI pipeline seems to have stalled indefinitely. - -### Details - -The following manually triggered job (`apply_to_prod`) in the normal chef-repo CI pipeline seems to have stalled and shows no job trace output via the web UI: - -https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/jobs/909330 - -The purpose of this job is to push the merge request's changes to chef server, so that subsequent chef-client runs can apply them. Chef server did get those changes, which initially suggested that this job may have run but lost its output. However, we discovered that this job had been run twice, with the first run succeeding and the second run stalling. - -https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/jobs/909204 - -So the successful first run would have pushed the changes to Chef Server. That puts us back to suspecting that the 2nd job run is stalled rather than just not reporting its output.",1.0 -30723483,2020-02-13 16:14:24.798,Postgres Replication lag is over 3 hours on archive recovery replica,"Postgres Replication lag is frequently over 3 hours on archive recovery replica, and was especially high on Tuesday, 2020-02-11. - -![Screen_Shot_2020-02-13_at_10.10.28_AM](/uploads/6d30a5240151878cfd71a8c549127981/Screen_Shot_2020-02-13_at_10.10.28_AM.png) - -https://prometheus-db.gprd.gitlab.net/graph?g0.range_input=1w&g0.expr=(pg_replication_lag%20%3E%201)%20and%20on(instance)%20(pg_replication_is_replica%7Btype%3D%22postgres-archive%22%7D%20%3D%3D%201)&g0.tab=0 - -Some initial diagnosis work has been undertaken by @gerardo.herzig with `#ongres-gitlab`. 
- -> ""From time to time, wal-e wal fetching is getting stuck"" - -Please see the linked incident issue for more details: https://gitlab.com/gitlab-com/gl-infra/production/issues/1653",3.0 -30722090,2020-02-13 15:31:39.439,RCA: 2020-02-12: The elastic_indexer Sidekiq queue (main stage) is not meeting its latency SLOs," - -Incident: gitlab-com/gl-infra/production#1661 - -## Summary - -Our default setting of sidekiq db connection pool size to be equal to `max_concurrency` was making it likely to have worker threads competing for db connections on nodes with a very low `max_concurrency`. As a result, we were seeing some jobs timing out on getting a db connection on those queues. - -- Service(s) affected : ~""Service::Sidekiq"" -- Team attribution : -- Minutes downtime or degradation : - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. 
- -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -From https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9198#note_287817712 - -* We have overridden the minimum connection pool size in `gitlab.rb` for the elasticsearch and export priorities and this has resolved the issue: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/2668 -* The database connection pool should always have a minimum number pool size greater than some value (perhaps 5?) cc @mkaeppler @ayufan https://gitlab.com/gitlab-com/gl-infra/scalability/issues/155 -* For higher concurrencies, it would be worth considering adding a few extra connections to the pool than the concurrency, as an insurance against deadlock cc @mkaeppler @ayufan https://gitlab.com/gitlab-com/gl-infra/scalability/issues/155 -* We should report, through prometheus, connection pool utilisation in a process (on an interval, eg every 10 seconds?) - as a gauge metric. We should also record maximum pool size, so that we can add connection pool saturation as a saturation metric cc @mkaeppler @ayufan (https://gitlab.com/gitlab-com/gl-infra/scalability/issues/153) -* Track down the cause of the exception rescues in the elastic indexer workers cc @DylanGriffith https://gitlab.com/gitlab-org/gitlab/issues/205640 -* Do not re-report old Sidekiq errors in future invocations https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25161 -* Do no mask errors behind `Sidekiq::JobRetry::Skip` exceptions -* Reindex lost ES index jobs cc @DylanGriffith - https://gitlab.com/gitlab-com/gl-infra/production/issues/1666 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -30682623,2020-02-12 16:24:09.182,It appears that the /dev/sda1 disk on api-06-sv-gprd.c.gitlab-production.internal will be full soon,"Alert: - -> The filesystem is predicted to be full in will be full in the next 24 hours. 
- -![Screen_Shot_2020-02-12_at_10.47.27_AM](/uploads/9f6d1ce6116312684c51fe845f61fd5e/Screen_Shot_2020-02-12_at_10.47.27_AM.png) - -```bash -$ bundle exec knife ssh role:gprd-base-fe-api 'df -h | grep ""/dev/sda1""' -api-15-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 12G 8.2G 58% / -api-25-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 11G 8.7G 56% / -api-04-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 18G 2.3G 89% / -api-21-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 15G 5.3G 74% / -api-07-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 11G 8.7G 56% / -api-13-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 6.8G 13G 35% / -api-19-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 12G 8.2G 58% / -api-09-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 14G 6.1G 69% / -api-03-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 13G 6.4G 68% / -api-16-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 6.5G 13G 34% / -api-10-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 15G 4.5G 77% / -api-24-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 12G 7.8G 60% / -api-11-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 14G 6.0G 70% / -api-01-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 13G 6.6G 67% / -api-08-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 13G 6.5G 67% / -api-18-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 13G 7.1G 64% / -api-17-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 12G 8.2G 58% / -api-20-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 16G 3.9G 80% / -api-22-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 13G 6.9G 65% / -api-14-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 13G 6.8G 65% / -api-06-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 18G 2.1G 90% / -api-05-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 16G 3.7G 81% / -api-23-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 7.1G 13G 37% / -api-26-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 15G 5.0G 75% / -api-12-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 13G 6.8G 65% / -api-02-sv-gprd.c.gitlab-production.internal /dev/sda1 20G 14G 5.5G 72% / -``` - -And also `api-04-sv-gprd.c.gitlab-production.internal`. - -![Screen_Shot_2020-02-12_at_10.46.16_AM](/uploads/dcec9a4a04e9842621cd1adc00a8e447/Screen_Shot_2020-02-12_at_10.46.16_AM.png)",3.0 -30681897,2020-02-12 16:04:36.285,It appears that the pgbouncer_async_pool component of the pgbouncer service consistently exceeds SLO,"This alert popped up in the #alerts-general slack channel today: - -> The `pgbouncer` service (`main` stage), `pgbouncer_async_pool` component has a saturation exceeding SLO and is close to its capacity limit. - -> This means that this resource is running close to capacity and is at risk of exceeding its current capacity limit. - -So, I clicked through to the dashboard and saw this: - -![Screen_Shot_2020-02-12_at_10.00.07_AM](/uploads/dda0e529a7c5caaa15a92061fd8c98e2/Screen_Shot_2020-02-12_at_10.00.07_AM.png) - -https://dashboards.gitlab.net/d/alerts-saturation_component/alerts-saturation-component-alert?orgId=1&from=now-7d&to=now&panelId=2&tz=UTC&var-environment=gprd&var-type=pgbouncer&var-stage=main&var-component=pgbouncer_async_pool - -That looks pretty consistently not good. - -I imagine the pool needs to be adjusted upward, to expand capacity? 
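Before resizing anything, it is worth checking how saturated the pools actually are from pgbouncer's own admin console. A quick sketch; the host, port and admin user here are assumptions about the local setup:

```
# cl_waiting > 0 or a non-zero maxwait means clients are queuing for a server connection
psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW POOLS;'

# Current sizing knobs (default_pool_size, max_client_conn, ...)
psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW CONFIG;'
```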
I do not yet have any knowledge about whether the existing system resources will support such an expansion or not.",4.0 -69129504,2020-02-12 15:12:27.864,check the performance of the bloat check,"We had this check offline - Get http://10.217.7.101:9168/database_bloat: context deadline exceeded - -https://prometheus-db.gprd.gitlab.net/targets#job-gitlab-monitor-database-bloat - -Seems we reached a timeout. - -We should investigate why, and find out, performance improvements if possible. - -The queries come from an external codebase recommendation: - -https://gitlab.com/gitlab-org/gitlab-exporter/-/blob/master/lib/gitlab_exporter/database/bloat_btree.sql - -https://gitlab.com/gitlab-org/gitlab-exporter/-/blob/master/lib/gitlab_exporter/database/bloat_table.sql",4.0 -30647134,2020-02-11 19:41:03.790,Set 301 redirect for www.remoteonly.org,"Followup from https://gitlab.com/gitlab-com/www-gitlab-com/issues/6153 - -/cc @dmurph @awinata",1.0 -30622920,2020-02-11 10:44:02.323,create test environment in staging for capacity planning,"considering the issue : https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9094 - -We need the necessary environment to proceed with the [GPT](https://gitlab.com/gitlab-org/quality/performance): - -- [x] database server with the setup similar with production -- [x] create a box to deploy the performance tool. -- [ ] Configure boxes for [pointing Gitlab instance to the database box](https://docs.gitlab.com/ee/administration/high_availability/gitlab.html).",2.0 -30617659,2020-02-11 09:42:11.343,RCA: 2020-02-11 High db insert rate caused by enabling issue templates on projects," - -Incident: https://gitlab.com/gitlab-com/gl-infra/production/issues/1651 - -## Summary - -Enabling instance templates for each project creation due to a bug, caused thousands of services to be created for each project creation (doing URL validation for each service, causing delays) which was causing a high db insert rate and latencies for web. - -Additionally, because of copying the same template to the projects, many slack notifications from other projects were sent to out to one single project, which is a security issue: https://gitlab.com/gitlab-com/gl-security/secops/operations/issues/650 - -- Service(s) affected : ~""Service::Web"" ~""Service::Postgres"" -- Team attribution : ~""group::ecosystem"" -- Minutes downtime or degradation : 153m (06:31 - 09:04) - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. 
- -DB inserts into `services` table: -![image](/uploads/1910528a373d585508c28378e8ec5501/image.png) -https://dashboards.gitlab.net/d/RZmbBr7mk/gitlab-triage?orgId=1&fullscreen&panelId=8&from=1581399974013&to=1581413232942 - -Web saturation: -![image](/uploads/4847c780a52743bee7ab3c36cc9c0ef6/image.png) -https://dashboards.gitlab.net/d/general-service/general-service-platform-metrics?orgId=1&fullscreen&panelId=10&from=1581400421276&to=1581413397765 - - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - EOC was paged because of db replication lag caused by high insert rate, marquee customer alerts also triggered -- Did alarming work as expected? - - We did not get SLO violation alerts, maybe the impact wasn't high enough - need to investigate -- How long did it take from the start of the incident to its detection? - - 45m until [db replication alert](https://gitlab.pagerduty.com/incidents/PBP8ZVR?utm_source=slack&utm_campaign=channel) fired (06:31 - 07:16) -- How long did it take from detection to remediation? - - 108m (07:16 - 09:04) -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team member wasn't page-able, ...) - -## Root Cause Analysis - -We experienced high web latencies - -1. Why? - Because we suddenly started to create many services for each project creation, doing url lookups each service in `lib/gitlab/url_blocker.rb:111`, which took a lot of time (during a db transaction inserting into the `services` table). -2. Why? - Because the instance creation attribute was set to `true` for each new project. -3. Why? - Because a [db migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/23595/diffs#diff-content-abca773d09b69a0771e11f06ab0d9e92ab632889) with a column rename created a trigger to set `instance = template`, which enabled the previously disabled attribute -4. Why? - Because we had two versions of the codebase running at the same time, one would set a `template` to false when creating a new service, and the other would set `instance` to false but leave `template` as true (and therefore repopulate `instance` as true based on the trigger) -5. Why? - Service code is old and does some things we probably would not accept now like not having unique constraints on project_id and service type. We have validations at application level, not at the database level. - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- Immediate corrective actions were to revert the change. 
- - https://gitlab.com/gitlab-org/gitlab/-/merge_requests/24857 - - https://gitlab.com/gitlab-org/gitlab/-/merge_requests/24885 -- We will add database constraints as suggested in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9176#note_286149490 -- Add suggestion to [renaming columns docs](https://docs.gitlab.com/ee/development/what_requires_downtime.html#renaming-columns) about splitting migration MRs - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)", -30600879,2020-02-10 19:53:50.467,Add an auto-scaling policy to the snowplow collectors in AWS,"There were a few troubles with the collectors in Snowplow this morning: https://gitlab.pagerduty.com/incidents/P0CIK95?utm_source=slack&utm_campaign=channel - -There is an auto scaling group defined for the collectors, but no policy to take advantage of it. We should add a policy.",1.0 -30593946,2020-02-10 16:42:21.184,Remove us-east-1b availability zone from snowplow collector auto scaling group,"The machine size specified, `t2.micro`, is not available in that availability zone. Possibly replace it with another AZ?",1.0 -30488450,2020-02-06 22:41:34.114,Regenerate docker-machine certs during gitlab-runner upgrades,"### Task - -Opportunistically regenerate the CA and client certificates used by `docker-machine` on our CI Runner Manager hosts to provision VMs. This avoids a race condition that can corrupt the certs if they expire naturally while the host is in service. - -We deploy the `gitlab-runner` service more often than the certs expire, and that planned downtime provides an opportunity when no other `docker-machine` processes should be running, hence avoiding the race condition. - -### Background - -Currently our CI Runner Manager hosts use `docker-machine` to provision VMs for running CI jobs in docker containers. The manager host acts as a CA and signs its own client cert as well as the per-VM server cert. The CA and client certs have a 3-year lifespan, and when they expire, the next invocation of `docker-machine create` will generate a fresh pair of certs. However, in a high-throughput environment like GitLab.com, it is very likely that multiple docker-machine processes will concurrently try to generate new certs, each one overwriting the other's work. This can lead to mismatched or corrupt CA and client certs. When that occurs (see https://gitlab.com/gitlab-com/gl-infra/production/issues/1609#note_283265339), the host running docker-machine can provision new VMs, but those VMs reject all attempts to authenticate to its `dockerd` daemon using the damaged or untrusted client cert. Consequently, the manager host fails to run every job it pulls.",1.0 -30481273,2020-02-06 18:09:37.145,Terraform error re-creating forwarding rule for load balancer,"When deleting a load balancer and then re-creating it and its forwarding rule, we get an error due to not specifying the subnet. - -``` -Error: Error creating ForwardingRule: googleapi: Error 400: Invalid value for field 'resource.subnetwork': ''. 
Network interface must specify a subnet if the network resource is in custom subnet mode., invalid - - on .terraform/modules/gcp-tcp-lb-internal/loadbalancing.tf line 102, in resource ""google_compute_forwarding_rule"" ""internal"": - 102: resource ""google_compute_forwarding_rule"" ""internal"" { -```",2.0 -30440115,2020-02-05 19:21:24.895,2020-02-05 assist with support for marquee customer,"Help with support call on 2020-02-04 and follow-up tasks. - -Customer is building a shared CI runner platform and but some metrics are missing from their Grafana dashboards. Help troubleshooting why some metrics are missing.",1.0 -30420924,2020-02-05 12:49:37.607,Evaluate options to enable checksums on the database,"We need to enable checksums in the database, the main reason is to avoid data corruption. - -We need to estimate the most adequated strategy to accomplish this task on our database. - -We could consider a switchover to an instance generated by logical replication. (It is needed a project to create this instance from the actual cluster using logical replication.) - -Or we could think about a longer maintenance window to enable checksums on the database, but our SLA agreements won't allow this approach.",12.0 -30285613,2020-02-03 08:29:38.100,Create chronogram for the PostgreSQL upgrade,"Create a chronogram to have better visibility or blockers for the Postgresql upgrade. - -This chronogram will have per task : - --the overview. - --the owner. - --the deadline dates. - --a field for possible blockers.",4.0 -30285198,2020-02-03 08:18:53.455,Defining the steps for the PostgreSQL upgrade with a restore of a backup from production using a cluster with a replica ( read-only ),"The idea of this task is to create a test scenario for the PostgreSQL upgrade( pg 9.6 and 11.6 ), with a database with a size like production. - -We should have a similar hardware setup as the production boxes in these boxes. - -We the idea here is to get estimative of time that we need to execute the task of the upgrade. - -The cluster could have 3 nodes. - - - -# Step by step for PG_UPGRADE(master and standby) - - -@adescoms @sergio.ostapowicz @Finotto -The following steps are for executing `pg_upgrade` for upgrade from PG 9.6 to 11.7, -this was tested on servers `patroni-migrate-01-db-gstg.c.gitlab-staging-1.internal` and `patroni-migrate-02-db-gstg.c.gitlab-staging-1.internal`. - -***preliminary version*** - -# Upgrade with pg_upgrade - -Before executing the `pg_upgrade` some issue have to be resolved and are related with: - - - Server 9.6 have the extension `pg_repack` installed (solution, install pg_repack for PG 11) - - Some views, functions, and queries from `postgres exporter` are incompatible with PG 11 (solution, like a workaround, rewrite the views and functions and add some wrapper functions for queries) - - Make a copy to all configs file - - -To execute `pg_upgrade` in gitlab server you must follow the next steps: - - 1. 
Remove the following view and functions from PG 9.6 - - * views : ""postgres_exporter"".""pg_stat_replication"", ""postgres_exporter"".""pg_stat_wal_receiver"" - * functions : postgres_exporter.f_select_pg_stat_replication(), public.f_pg_stat_wal_receiver() - - 1.1 Make sure all connections out , checkpoint in master and check replica status , - -Add to pg_hba.conf from PG 9.6 the line - ``` -host all all 0.0.0.0/0 reject - ``` - -Connect to postgres database - -``` - select pg_terminate_backend(pid) from pg_stat_activity where datname='gitlabhq_production'; --disconnecto all from gitlabhq_production database - checkpoint ; - select * from pg_stat_replication ; - -``` - - 2. Stop PG 9.6 if is running in all server - - 3. Install postgresql-11 and stop service, *ignore if is already installed and stopped* - * apt-get install postgresql-11 - * pg_ctlcluster 11 main stop - - 4. Create the path (*data11* or the name that you want) for the new PG cluster a give the correct permission - ``` - mkdir /var/opt/gitlab/postgresql/data11 - chown gitlab-psql:gitlab-psql /var/opt/gitlab/postgresql/data11 - chmod 700 /var/opt/gitlab/postgresql/data11/ -``` - 5. Create the new cluster database with user `gitlab-psql` in master - ``` - su gitlab-psql - /usr/lib/postgresql/11/bin/./initdb -D /var/opt/gitlab/postgresql/data11 --locale=C.UTF-8 --encoding=UTF8 - ``` - 6. Enable to connect locally ""trust"" on pg_hba(/var/opt/gitlab/postgresql/data/pg_hba.conf) by adding the following line to the beginning of the file - * local all all trust - 7. Install the pg_repack extension for PG 11 - ``` -apt-get install postgresql-11-repack - ---optional, compile from source code -wget http://api.pgxn.org/dist/pg_repack/1.4.5/pg_repack-1.4.5.zip -unzip pg_repack-1.4.5.zip -d ./pg_repack -cd pg_repack/pg_repack-1.4.5/ -make PG_CONFIG=/usr/lib/postgresql/11/bin/pg_config ---like a root -make install PG_CONFIG=/usr/lib/postgresql/11/bin/pg_config - -``` - - -8. Install the postgresql 11 server on standby server and pg_repack extension -``` -apt-get install postgresql-11-repack - ---optional, compile from source code - -wget http://api.pgxn.org/dist/pg_repack/1.4.5/pg_repack-1.4.5.zip -unzip pg_repack-1.4.5.zip -d ./pg_repack -cd pg_repack/pg_repack-1.4.5/ -make PG_CONFIG=/usr/lib/postgresql/11/bin/pg_config ---like a root -make install PG_CONFIG=/usr/lib/postgresql/11/bin/pg_config -``` - 9. Check if all fine before execute pg_upgrade in master - ``` - su gitlab-psql - /usr/lib/postgresql/11/bin/pg_upgrade --old-bindir /usr/lib/postgresql/9.6/bin --new-bindir /usr/lib/postgresql/11/bin --old-datadir /var/opt/gitlab/postgresql/data --new-datadir /var/opt/gitlab/postgresql/data11/ -o ""-c config_file=/var/opt/gitlab/postgresql/postgresql.conf"" -O ""-c config_file=/var/opt/gitlab/postgresql/data11/postgresql.conf"" --check --link - ``` - 10. If the result from above is *Clusters are compatible* then execute the upgrade -``` ---takes approximately 10 sec -su gitlab-psql - /usr/lib/postgresql/11/bin/pg_upgrade --old-bindir /usr/lib/postgresql/9.6/bin --new-bindir /usr/lib/postgresql/11/bin --old-datadir /var/opt/gitlab/postgresql/data --new-datadir /var/opt/gitlab/postgresql/data11/ -o ""-c config_file=/var/opt/gitlab/postgresql/postgresql.conf"" -O ""-c config_file=/var/opt/gitlab/postgresql/data11/postgresql.conf"" --link - ``` - 11. 
If the result from above is *Upgrade Complete* then test start PG 11 , make sure the config are correct or with the values that you need and test start/stop PG service - ``` -su gitlab-psql - cd /usr/lib/postgresql/11/bin/ -./postgres -D /var/opt/gitlab/postgresql/data11 -c ""config_file=/var/opt/gitlab/postgresql/data11/postgresql.conf"" - -stop server - ``` - -### *Upgrade standby replicas* -12. Create the structures for data 11 on replicas -``` -mkdir /var/opt/gitlab/postgresql/data11 -chown gitlab-psql:gitlab-psql /var/opt/gitlab/postgresql/data11 -chmod 700 /var/opt/gitlab/postgresql/data11/ -``` -13. Copy to standby server the data for PG 11 and delta from PG 9.6, configure the recovery.conf to connecto to master PG 11 -``` -cd /var/opt/gitlab/postgresql -rsync --archive --delete --hard-links --size-only --no-inc-recursive data data11 10.224.42.102:/var/opt/gitlab/postgresql -``` - - -14. Config the replica setting if is necessary -15. Confing standby , Start master and standby server -recovery.conf -``` -recovery_target_timeline = 'latest' -standby_mode = 'on' -primary_conninfo = 'user=gitlab-replicator password=password host=IP port=5432 application_name=your_name' -primary_slot_name = 'your_slot_name' -``` - -16. Create the replication slot in master -``` - select pg_create_physical_replication_slot('your_slot_name'); - ``` - - -17. Connect to postgres on Master server, and create the functions and views required by `postgres exporter` -``` -sudo gitlab-psql ---and execute the code -CREATE OR REPLACE FUNCTION postgres_exporter.f_select_pg_stat_replication() - RETURNS SETOF pg_catalog.pg_stat_replication - LANGUAGE sql - SECURITY DEFINER - AS $BODY$ - SELECT * from pg_catalog.pg_stat_replication; - $BODY$; - -CREATE VIEW ""postgres_exporter"".""pg_stat_replication"" AS - SELECT * - FROM ""postgres_exporter"".""f_select_pg_stat_replication""() ; - - CREATE OR REPLACE FUNCTION public.f_pg_stat_wal_receiver() - RETURNS SETOF pg_stat_wal_receiver - LANGUAGE sql - SECURITY DEFINER - AS $BODY$ - select * from pg_catalog.pg_stat_wal_receiver - $BODY$; - -CREATE VIEW ""postgres_exporter"".""pg_stat_wal_receiver"" AS - SELECT * - FROM ""public"".""f_pg_stat_wal_receiver""() ; - -GRANT ALL PRIVILEGES ON postgres_exporter.pg_stat_replication TO postgres_exporter; -GRANT ALL PRIVILEGES ON postgres_exporter.pg_stat_wal_receiver TO postgres_exporter; - -CREATE OR REPLACE FUNCTION public.pg_last_xlog_replay_location() - RETURNS pg_lsn AS - $BODY$ - SELECT pg_last_wal_replay_lsn(); - $BODY$ - LANGUAGE SQL STABLE; - -CREATE OR REPLACE FUNCTION public.pg_current_xlog_insert_location() - RETURNS pg_lsn AS - $BODY$ - SELECT pg_current_wal_insert_lsn(); - $BODY$ - LANGUAGE SQL STABLE; - -CREATE OR REPLACE FUNCTION public.pg_current_xlog_location() - RETURNS pg_lsn AS $$ - SELECT pg_current_wal_lsn(); - $$ LANGUAGE SQL STABLE; - -CREATE OR REPLACE FUNCTION public.pg_xlogfile_name(pg_lsn) - RETURNS text AS $$ - SELECT pg_walfile_name($1); - $$ LANGUAGE SQL STABLE; - -``` - - 18. 
To collect statistics - * Add the temporaly line to pg_hba.conf (/var/opt/gitlab/postgresql/data/pg_hba.conf) and reload conf with `select pg_reload_conf()` : host gitlabhq_production gitlab-superuser 127.0.0.1/0 trust - * Execute the vaccumdb binaries with the following parameters: - ``` - --takes approximately 4 min - /usr/lib/postgresql/11/bin/vacuumdb --analyze-only --jobs 64 -d gitlabhq_production -U gitlab-superuser -h 127.0.0.1 - ``` - - * Remove or comment the lines from pg_hba (/var/opt/gitlab/postgresql/data/pg_hba.conf) and reload conf with `select pg_reload_conf()` on master server: - * host all all 0.0.0.0/0 reject - * local all all trust - * host gitlabhq_production gitlab-superuser 127.0.0.1/0 trust - - - - - -19. *Integrate PG 11 with HA solution Patroni*",8.0 -30284876,2020-02-03 08:09:10.316,Create image for testing the PostgreSQL upgrade,"I would suggest, create an image before we execute the pg_upgrade. - -I would suggest stopping the PostgreSQL instances to keep the databases in a consistent state. - -The suggestion would be to have a cluster ( can be with 2 nodes only Read-Write and Read-Only). -With Postgresql 9.6 ( the actual version from Production ) and version 11.6 that is the target for this migration. - -Also, we should consider having the restoration of a backup in the PG 9.6. - -With this image we could restore the testing scenario easier after our test, to be able to iterate faster.",4.0 -30284481,2020-02-03 07:55:03.398,Create the template of the migration plan for PostgreSQL Upgrade,"Create a similar migration template plan as we did for the Patroni migration. - -We should add : -- The steps that we will execute. -- The owners of each task. -- The pre-checks and post-checks tasks for the migration, and what we will perform in case of failure or unexpected behaviors. -- The communication needed on each step.",12.0 -30284123,2020-02-03 07:42:09.670,Capacity Planning for the primary database (closed),"It has been a long time since our last capacity planning, and we would like to verify: -- How much more load we could add on the primary database without degradation in the performance. -- We are reaching 60k connections per second. How much could we raise safely? -- Should we increase disk or memory to optimize performance? Any suggestions in shared buffers setup? -- Estimate how much % of the load is related to the table merge_requests. -- Add more topics that you think could be the next bottleneck to scale the availability of our primary database. -- measure the hot set of data that we use at the moment in memory. How much are we hitting disk. (`pg_buffercache` and the LRU count)",12.0 -30283479,2020-02-03 07:18:05.345,Verify the optimized parameters for PG Upgrade,"Our goal is to upgrade our database cluster to PostgreSQL to version 11.6, and we need to verify the optimized setup of parameters of PostgreSQL and Patroni/Consul for our database cluster. - -Please consider the parallelism parameters that we could optimize. - -Add here your thoughts about new parameters or values that we could discuss better. - -Also, check the optimized parameters for the pg_upgrade command that we will execute. 
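As a starting point for that discussion, here is a hedged sketch of the invocation, reusing the paths from the test runbook above; the `--jobs` value is a placeholder that still needs to be benchmarked (it parallelises the per-database copy/link phase):

```
su gitlab-psql
/usr/lib/postgresql/11/bin/pg_upgrade \
  --old-bindir /usr/lib/postgresql/9.6/bin \
  --new-bindir /usr/lib/postgresql/11/bin \
  --old-datadir /var/opt/gitlab/postgresql/data \
  --new-datadir /var/opt/gitlab/postgresql/data11 \
  -o '-c config_file=/var/opt/gitlab/postgresql/postgresql.conf' \
  -O '-c config_file=/var/opt/gitlab/postgresql/data11/postgresql.conf' \
  --link --jobs 8 --check
```

The same applies to the post-upgrade statistics run: `vacuumdb --analyze-only --jobs N` is already in the runbook, and N should be tuned against the core count of the box rather than left at the example value.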
- -This task is a top priority since we need to execute this migration until April.",8.0 -30249778,2020-02-02 19:56:39.977,Write the design doc for the Postgresql upgrade,"In this document, we would describe all the steps from the upgrade: -the mr is: https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/41495",8.0 -30231907,2020-02-02 08:15:51.321,Create test box to simulate postgresql upgrade,"We need a staging box, with the following 2 different versions of postgresql installed : - -- PG 9.6 as we have in production -- PG 11.6 that is the target of our migration - -- Please create a box with the same specs as production. - -For this we need : - -- [ ] create the new cookbooks needed to complete the task of execute a Postgresql migration - -To execute the initial test : - -Restore a backup from staging or execute and restore a dump from the database. - -With this steps we will be able to execute the pg_upgrade.",4.0 -30169532,2020-01-31 21:07:28.506,Document the TF and Chef Provisioning and Bootstraping Process in the Runbooks,"## SOC Type I Control Remediation Step for VM Provisioning - -The Security Compliance team is preparing for a SOC external audit, see https://gitlab.com/gitlab-com/gl-security/compliance/compliance/issues/1381. And has asked for simple documentation on the build process–provisioning and bootstraping–of servers in our production environment. We perform this workflow with Terraform and Chef, but I'm unable to locate a simple explanation on the process. Perhaps I've overlooked it, but I couldn't find anything in the runbook directory in https://gitlab.com/gitlab-com/gitlab-com-infrastructure that outlines the workflow. - -## Definition of Done - -In a Markdown file in the runbooks directory: - -- [ ] identify which machine image gets used for VM creation, preferably by pointing to the code -- [ ] include a workflow diagram that outlines both human and machine interactions with the files and tools that provision and bootstrap the image",1.0 -30168629,2020-01-31 20:22:04.547,[blueprint] Postgresql Upgrade,Please create a blueprint for the PostgreSQL upgrade.,4.0 -30152120,2020-01-31 12:51:34.715,Prometheus config is not being reloaded on rules change,"From slack (while it lasts): https://gitlab.slack.com/archives/CK171RT0F/p1580460508118300 - ->>> -Some changes to the runbooks got merged to master last night but didn’t get picked up by Prometheus. On investigation, the changes has been rolled out to the `rules` directory, but it looks like chef didn’t tell Prometheus to reload its configuration. - -I hupped the process and all is now right in the world… ->>> - -We should ensure that chef performs a config reload on changes.",1.0 -30126521,2020-01-30 19:23:43.565,"PagerDuty Services, Escalation Policies, and Users in Terraform","PagerDuty recently published an [article](https://www.pagerduty.com/eng/how-why-terraform) advocating for IaC with PagerDuty objects in Terraform. This is a great idea! Or is it? - -This issue seeks to collect feedback on the pros and cons of doing so, before commiting to putting putting these resources into version control.",2.0 -30121574,2020-01-30 16:20:40.529,Configure replication in praefect,"For this we'll need to create a new storage node and add it to the existing virtual storage. 
All settings required are already available in omnibus - -- [x] terraform changes: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1353 -- [x] chef-repo changes: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/2576 - -/cc @johncai",3.0 -30120782,2020-01-30 15:54:34.575,Error check on querying the gitlab.com replica,"I'm trying to debug why our extract for certain tables is failing from the gitlab.com replica. We're trying to query the `resource_label_events` table and it seems like something is killing the job. I can't tell if it's a timeout on the postgres side or something on our side. - -The latest query was between these times UTC: - -``` -[2020-01-29 23:08:01,384] INFO - b'INFO:root:SELECT id , action , issue_id , merge_request_id , epic_id , label_id , user_id , created_at , cached_markdown_version , reference , reference_html FROM resource_label_events\n' -[2020-01-29 23:08:01,384] INFO - b' \n' -[2020-01-29 23:14:13,961] INFO - Event: gitlab-com-db-scd-5c81df55 had an event of type Failed -``` - -Maybe it's related to something like https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7769 ? - -@Finotto would you be able to provide some guidance / insight here? - -Happy to provide whatever information I need on my end.",2.0 -30027952,2020-01-28 14:39:44.473,Monitor and alert on Elastic logging cluster SLOs,"We should define SLOs for the key SLIs and implement monitoring and alerting: - -* Query latency -* Availability -* Query error rate -* Delay of log visibility -* Completeness of logs",8.0 -30027585,2020-01-28 14:30:31.129,Consider to backup Elastic visualizations and dashboards,"We are not keeping kibana visualizations or dashboards in version control, as they are mainly created via UI. -We should consider if it makes sense to export those visualization and dashboard configurations regularly to be able to restore them if needed.",5.0 -30025393,2020-01-28 13:39:26.517,Add alerts for fluentd,"We should be alerted if fluentd fails to forward messages. - -possible metrics to watch: -* `sum(fluentd_output_status_buffer_total_bytes{env=""gprd""}) by (fqdn,type)` - -related to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7939",3.0 -30004127,2020-01-28 05:41:08.946,Allow Rate Limiting Specific URL's,"Currently, there is not a straightforward way to rate limit access to specific URL's. We can either allow a URL or block it. It would be very useful to be able to set limits on how many times per minute a single client can request a URL, or how many times per minute a URL can be requested in total. A flexible way to specify this would add significantly to the site's resilience. - -## Wishlist - -- It would be nice to be able to specify a URL regex, for example to rate limit all `*.zip` files. -- It would be nice to be able to rate limit groups of users differently, including unauthenticated users -- It would be nice to be able to rate limit IP's and CIDR ranges separately from users. -- It would be nice to be able to limit on other criteria such as User-Agent - -## Things to consider - -- Some larger organizations will be behind NAT's and all users will appear to be from the same IP -- Cloudflare may appear to make a lot of requests from the same IP, so we'll need to be able to differentiate the proxy server from the end user -- A DDoS may be distributed, and may appear to come from many IP's. We should still have a way to set a high threshold for how many times a specific URL can be requested at all. 
This should be much higher than the single IP threshold -- We should be careful not to limit internal components such as sidekiq inadvertently - -## Related things - -- Here is an incident that this would have prevented: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9047 -- Here is some related work happening in the product: https://gitlab.com/gitlab-org/gitlab/issues/30829 -- Here is a related corrective action for API rate limiting: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6645",5.0 -29895201,2020-01-23 20:11:20.569,Replace staging certificate in Cloudflare,"The certificate for `staging.gitlab.com` has been updated in the GKMS vault and on the servers. The key was updated as well. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8965 - -There is a copy of this cert and key in Cloudflare which is still the old version. - -The new certificate is: -```console -$ openssl s_client -connect geo.staging.gitlab.com:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin -SHA1 Fingerprint=7A:E2:52:DB:CB:40:B5:D1:E8:2F:CF:C6:74:23:DF:90:41:76:91:3F -``` - -The cert and key are in the `gitlab-omnibus-secrets-geo gstg` GKMS vault. They can be retrieved like this: - -``` -./gkms-vault-show gitlab-omnibus-secrets-geo gstg | jq -r '.""omnibus-gitlab"".ssl.certificate' | gsed 's/\\\n/\n/g' -``` -and -``` -./gkms-vault-show gitlab-omnibus-secrets-geo gstg | jq -r '.""omnibus-gitlab"".ssl.private_key' | gsed 's/\\\n/\n/g' -``` - -The Cloudflare cert and key need to be updated to the new version of the cert and key.",1.0 -29863428,2020-01-22 20:12:55.052,Scale up praefect nodes for more CPU cores,"On https://gitlab.com/gitlab-org/gitaly/issues/2348#note_275054258 it was established that the periodical CPU spikes we're seeing in praefect nodes are actually caused by chef-client runs, and it was suggested that increasing the CPU core count could mitigate this problem. Let's change from `n1-standard-1` to `n1-standard-2`.",1.0 -29826208,2020-01-22 00:42:30.621,Split out secrets for geo secondary in staging,"The Geo Secondary node in staging currently uses the same `gitlab-omnibus-secrets` file as the rest of the environment. In some cases this is fine, but in cases like the redis server, we want to not configure secrets and use the local omnibus instance. This doesn't work with secrets configured because Chef assumes that if we have them, we must want to use them. 
- -The proposal (https://gitlab.com/gitlab-org/gitlab/issues/37926#note_274534850) is: - -- Change the chef configuration for that node to use a different secrets file here: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/master/roles/gstg-infra-geo-secondary.json#L36 -- Then delete secrets from that new file which we don't want applied to the node -- Cange any other secrets which shouldn't be the same (This is probably out of scope for this issue) - -/cc @craigf @dbalexandre",3.0 -29824453,2020-01-21 22:10:07.796,Write a runbook which documents qa-tunnel.gitlab.info,Write a runbook which documents qa-tunnel.gitlab.info.,3.0 -29772770,2020-01-20 16:18:32.504,Add fluentd and pubsub graphs to logging dashboard,On our [logging dashboard](https://dashboards.gitlab.net/d/USVj3qHmk/logging?orgId=1&refresh=30s&from=now-3h&to=now) we don't have graphs for fluentd and no bandwidth data for logs in stackdriver and pubsub.,1.0 -29756649,2020-01-20 09:57:37.151,Create a read replica of the license and customers database for analytics queries,"The data team needs a read replica of the license and customers database for the purposes of reporting within Snowflake. - -As this is done in CloudSQL it should be pretty quick to spin up a new one. The only thing we'd need is similar VPC peering like we have with the production replica so we can connect directly to it.",4.0 -29715262,2020-01-17 21:20:03.261,Configure External Access for geo.staging.gitlab.com,"In order to work with the geo secondary instance, we need to be able to get to the admin interface. https://gitlab.com/gitlab-org/gitlab/issues/37926 -I have tried a few different approaches to this. - -- Using our existing haproxy is dangerous, since it requires major changes to the template used by all instances (including production) in order for haproxy to proxy requests for geo.staging.gitlab.com to the geo-staging instance. -- Using a GCP load balancer is a little heavy handed since we only have one backend, but I tried it. I discovered that we would have to change the module type that we use to provision the geo-staging instance from generic-store to generic-sv-with-group - or write a whole new module. This is in addition to adding individual resources for health checks, backends, url_maps's and various other things. Writing a whole new module is the right way to do this, but that's too much work for a one off which will only be used for development work. -- Provisioning an external IP address and creating a firewall rule to allow access directly to this node. This is what I'm going with for the moment, since it is the simplest and there is no need for anything more complicated. This will get us unblocked and we can revisit it in the future. - -To provision the external address we need: - -- A static external IP associated with the node -- A Firewall rule allowing this access to the node -- A DNS entry -- A certificate or SAN for SSL connections to the interface",3.0 -14073141,2018-09-11 17:16:48.981,"""Number of PostgreSQL Databases in gprd has changed in the past minute"" alerts","The name of this alert: ""Number of PostgreSQL Databases in gprd has changed in the past minute"" - - -""Number of PostgreSQL Databases in gprd has changed in the past minute"" alert was received twice during Sep 10-11. 
- - - The first alert (sep 10): https://gitlab.slack.com/archives/C3NBYFJ6N/p1536587684000100 - - The second one (sep 11): https://gitlab.slack.com/archives/C3NBYFJ6N/p1536678976000100 - -@skarbek noticed that nothing appeared to change in the database configuration, so something might be wrong with the way the alerts are working. - -Todo: check the code for this alert and how reliable it is. - -/cc @gitlab\-com/gl\-infra",3.0 -14072755,2018-09-11 16:52:41.682,Add integrations(project services) log to Kibana,"We moved project services related logs to a different file `logs/integrations_json.log` - -It would be nice to add these logs to Kibana as described here: https://gitlab.com/gitlab-com/runbooks/blob/master/howto/logging.md#adding-a-new-logfile",1.0 -14060305,2018-09-11 14:00:26.165,Add GCS storage to Thanos service,"In order to reduce / offload local storage requirements for Prometheus, Thanos can upload TSDB data to GCS (also S3, or similar). - -TODO: - -* [x] Add a GCS bucket to the ""ops"" project for Thanos to read/write. -* [x] Configure the sidecars to upload data. -* [x] Add a [Store Gateway](https://github.com/improbable-eng/thanos/blob/master/docs/getting_started.md#store-gateway) to provide read access. -* [x] Reduce Prometheus local retention to 7 days. -* [x] Add a [Compactor](https://github.com/improbable-eng/thanos/blob/master/docs/getting_started.md#compactor) to add downsampled data.",3.0 -14029269,2018-09-10 11:38:43.944,Setup no-data alerts,"We need to come up with a strategy for creating no-data alerts; this was recently seen in https://gitlab.com/gitlab-com/production/issues/459 where an alert was defined but, due to a name refactoring, the alert was no longer valid. - -This is problematic in the current configuration because we have the same alerting rules for all environments (should we?). This will cause problems when we hardcode environment-specific labels.",3.0 -14011795,2018-09-09 13:47:28.440,postmortem for pages incident,"This issue is not the postmortem but the issue to track that it gets done in this milestone. - -/cc @dawsmith",3.0 -13996456,2018-09-07 21:59:35.584,Marvin access for Tristan and Jerome,"Could we please get Marvin access for @tristan and @jeromeuy - permissions should be the same as mine. - -Thanks!",1.0 -13995832,2018-09-07 20:47:31.201,"BigQuery sink for `logName=""projects/gitlab-production/logs/haproxy""","`haproxy` logs are no longer being forwarded to ElasticCloud due to size, but we continue to run into cases where basic queries would be helpful for investigating traffic. I'd like to explore using a log sink directly to BigQuery to support these queries. - -In particular, bulk queries for patterns of paths and user-agents have come up a few times in the last few months. While we now have user agent in more logs in ELK, we're still not sure of the storage and stability impact. - -My primary concern looking at the `haproxy` tagged logs in stackdriver is that there appear to be 3 different formats for `jsonPayload`, so I think just trying it and seeing results in BigQuery is the fastest way to iterate and determine feasibility. 
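For reference, a sink of this sort can typically be created with a single command; a rough sketch (project and dataset names are illustrative, and the sink's writer identity still needs write access on the BigQuery dataset):

```sh
# sketch: export the haproxy logs to a BigQuery dataset for ad-hoc querying
gcloud logging sinks create haproxy-bq-sink \
  bigquery.googleapis.com/projects/gitlab-staging/datasets/haproxy_logs \
  --log-filter='logName=""projects/gitlab-staging/logs/haproxy""' \
  --project=gitlab-staging
```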
- - * [x] Infra team: Set up a sink in staging for `haproxy` logs in staging and collect 2 days worth of logs - * [x] Security: examine result, determine if changes need to be made to filter or request other changes - * [x] Iterate - * [x] Infra: create filter and sink in production and collect for a few hours - * [x] Both teams: Evaluate efficacy and size - - -/cc @jurbanc @jarv @andrewn",5.0 -13994382,2018-09-07 19:12:04.574,Update Pagerduty api key to use v2 for cog,"Got an email from PagerDuty that the v1 api is being deprecated: - -In Pagerduty, there are 2 keys still using v1: - -|Key Name|Created|Last Used| -|--------|-------|---------| -|ZenDesk Integration|Jan 25, 2016 at 7:04 PM by Patricio Cano|Sep 5, 2016| -|GitLab Cog|Sep 13, 2016 at 7:59 AM by GitLab Admin|Sep 7, 2018 at 3:05 PM| - -We can probably check with support and remove the zendesk, but likely need to update the GitLab Cog key whereever it is used - ------------ email: ---------- - -PagerDuty’s v1 REST API will be decommissioned and no longer operational starting on October 19, 2018, at 22:00 PDT (UTC+7). - -Your PagerDuty account has had at least one v1 REST API key in use over the last 90 days. This message has only been sent to the administrators of your PagerDuty account. Please forward this message to your team and ask them to migrate to our v2 REST API. - -Click here to learn more about our v1 REST API decommissioning on this FAQ page. Please note that the PagerDuty v1 Events API is not being deprecated. Read the FAQ for steps to identify your active v2 REST API keys. - -Feel free to contact support@pagerduty.com if you have unanswered questions or need assistance migrating.",1.0 -13964150,2018-09-06 17:32:30.515,"Destroy ldap{01,02,03}.ath.gitlab.com","Reported via ~HackerOne. No impact, but per @northrup unused so machines should be deprovisioned.",1.0 -13953961,2018-09-06 10:58:58.754,Email user list for ending GitLab.com Early Adopter program,"We're ending the Early Adopter program for GitLab.com and are discussing in https://gitlab.com/gitlab-com/marketing/general/issues/3050. - -### Problem to solve - -We'd like to email all users/groups who will be impacted before making the switchover on October 1st, 2018. - -### What's needed - -CSV of all GitLab.com users and groups on the Early Adopter plan. Columns should be `name`, `email`, `state`, `username`, `created_at`.",1.0 -13949790,2018-09-06 09:08:52.522,Setup delayed postgres replica + archive replica,"For DR purposes, let's setup a postgres replica with a configurable replication lag and one that feeds from the WAL archive.",5.0 -13939955,2018-09-05 23:17:57.830,Rename postgres-01 to postgres-archive-replica-01 (Was: postgresql-01 production replica differs from other replicas),"The replication lag postgres-01 (replica) started to grow significantly today: https://prometheus.gprd.gitlab.net/graph?g0.range_input=1h&g0.expr=(pg_replication_lag%20%3E%2043200)%20and%20on(instance)%20(pg_replication_is_replica%7Bfqdn%3D%22postgres-01-db-gprd.c.gitlab-production.internal%22%7D%20%3D%3D%201)&g0.tab=0 - -@skarbek also pointed that repmgr isn't running on it. 
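As a quick sanity check while investigating, recovery status and replay lag can be confirmed on the node itself; a sketch, assuming the omnibus `gitlab-psql` wrapper is available there:

```sh
# on postgres-01: confirm the node is still in recovery and how far behind replay is
sudo gitlab-psql -c 'SELECT pg_is_in_recovery() AS in_recovery, now() - pg_last_xact_replay_timestamp() AS replay_lag;'
```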
-`recovery.conf` differs from the other replicas: streaming replication is not enabled, and it uses WAL shipping from S3 (perhaps it was left as-is since the Azure->GCP migration, when the postgres-01 node intentionally didn't use SR): - -``` -$ sudo cat /var/opt/gitlab/postgresql/data/recovery.conf -# recovery file for creating the standby server -# uses both restore_command to fetch wal chunks -# and pimary_conninfo to transition to secondary -# when possible - -# Specifies whether to start the PostgreSQL server as a standby. -# If this parameter is on, the server will not stop recovery when the end of archived WAL is reached, -# but will keep trying to continue recovery by fetching new WAL segments using restore_command and/or -# by connecting to the primary server as specified by the primary_conninfo setting. -standby_mode = 'on' - -# By default, recovery will recover to the end of the WAL log. -# So we don't need any recovery_* options - -# If any option is unspecified in this string, then the corresponding environment variable (see Section 32.14) is checked. -# https://www.postgresql.org/docs/9.6/static/libpq-envars.html -# TL;DR: export PGPASSWORD=XXX -#primary_conninfo = 'user=gitlab_repmgr host=''postgres-01.db.prd.gitlab.com'' password=XXX port=5432 fallback_application_name=repmgr sslmode=prefer sslcompression=1 application_name=''postgres-01.db.gprd.gitlab.com''' -#primary_slot_name = secondary_gprd - -# lastly, the restore command that will be run until we can switch -restore_command = '/usr/bin/envdir /etc/wal-e.d/env /opt/wal-e/bin/wal-e wal-fetch -p 32 ""%f"" ""%p""' -recovery_target_timeline='latest' -``` - -Questions: - -1) why is it still using WAL shipping instead of streaming replication, i.e. why do we not have a symmetric setup? -2) why is it lagging (higher lags are possible because WAL shipping from AWS is a slower and less reliable mechanism than SR, but this time the lag is too high) -3) do we really need 5 replicas? if yes, what are the reasons for that? - -/cc @abrandl @Finotto",5.0 -13937039,2018-09-05 19:32:56.879,Product link is resolving to pricing page,about.gitlab.com/products is resolving to about.gitlab.com/pricing.,1.0 -13935219,2018-09-05 17:11:45.516,Create IO dashboard for Postgres disks,"We'd like to gain more insight into IO behavior for postgres disks. - -Let's add a dashboard to monitor this in one place: - -* IO wait -* IOPS -* throughput read -* throughput write - -As @skarbek pointed out, those metrics are in `node_disk_*`. - -Some of these exist already, see https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats?refresh=1m&orgId=1&var-node=postgres-01-db-gprd.c.gitlab-production.internal and https://gitlab.slack.com/archives/C101F3796/p1536167065000100. -",1.0 -13913576,2018-09-05 08:25:24.013,Assets in canary don't work any more,"When using [canary](https://about.gitlab.com/handbook/engineering/#canary-testing), no assets work any more. - -This appears to be because we're using the production CDN, but canary has assets that production doesn't. - -From staging, I guessed that the CDN host was supposed to be gl-canary.global.ssl.fastly.net. However, https://gl-canary.global.ssl.fastly.net/assets/application-367ee28873f9e3f90b56617182f9eded9ad2dbd834f7279d5771980e1858b411.css doesn't work either. - -@andrewn checked, and this is using canary.gitlab.com as a backend, which no longer exists. Should we bring it back and fix the canary config? 
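A quick way to confirm the current behaviour, using the asset URL from above:

```sh
# check what the canary CDN host currently returns for the missing asset
curl -sI https://gl-canary.global.ssl.fastly.net/assets/application-367ee28873f9e3f90b56617182f9eded9ad2dbd834f7279d5771980e1858b411.css | head -n 5
```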
cc @toon as you wanted this back for other reasons :slight\_smile: - -(Posted without using canary.)",1.0 -13908410,2018-09-05 06:28:06.128,Create a better howto for how to troubleshoot rails alerts,more context in https://gitlab.slack.com/archives/CB3LSMEJV/p1536125598000100,2.0 -13907887,2018-09-05 05:52:33.393,Set ENV['CANARY'] again for the canary environment,"this was dropped when we recreated the canary and needs to be added again. - -https://gitlab.com/gitlab-org/gitlab-ce/blob/ba99dfcde262c91e33b5bf7f86ba7c0e3b6f7e52/lib/gitlab/favicon.rb#L6-14",1.0 -13903751,2018-09-04 23:18:59.019,Automation for infrastructure deployments,"In a bid to increase reliability, consistency, transparency, and asynchronous workflows, we should begin automating our current infrastructure deployment processes. We will start with simple scripted automation in our CI pipeline to start, and consider whether other changes and/or more advanced tools are necessary based on the results of that work. Since the [Atlantis project](https://www.runatlantis.io/) does not have sufficient security controls to be used on a public repository, we are unable to use it at this time. Terraform Enterprise also provides similar advanced features, though we will need to evaluate the cost after we have a better understanding of where the pain points are using GitLab CI, first. - -## Objectives - -- Master branch + current state are always consistent with live environment - - `terraform plan` on master should show zero changes, except while `terraform apply` is running during merge -- All changes to environment are made via code and automation - -- All changes in `gprd` have been applied/validated in `gstg` - -_**We will obviously need to define how we will monitor and report on these_ - -## Next steps - -1. Update CI pipeline to run `terraform plan` for each environment - #5113 - -1. Add automatic apply stage for `gstg` - #5114 - -1. Add optional manual apply stages for `ops` and `gprd` - #5114 - -1. Setup required/blocking manual apply stage for `ops` and `gprd` - #5115",3.0 -13902622,2018-09-04 22:09:06.280,"New users, new projects, and MAU for GitLab.com for August 2018","We track users and projects for GitLab.com for internal use and for reports to other interested parties. I'd like to obtain counts for August. - -For GitLab.com, may I have the output for: - -``` -user_count = User.where(created_at: Date.new(2018,8,1).beginning_of_day..Date.new(2018,9,1).beginning_of_day).count -project_count = Project.where(created_at: Date.new(2018,8,1).beginning_of_day..Date.new(2018,9,1).beginning_of_day).count -``` -and - -``` -SELECT COUNT(DISTINCT ""audit_events"".author_id) -FROM ""audit_events"" -WHERE (""audit_events"".""created_at"" BETWEEN '2018-08-01 00:00:00.000000' AND '2018-08-31') -``` - -Thanks a lot. :smile:",1.0 -13901770,2018-09-04 20:43:24.574,Enable support to be able to see stackdriver logs,"From Namho and Andrew working together - came up that it might be useful to have support have access to stackdriver to research logs that may have fallen out of elastic cloud. - -Needs review by security to decide if this is okay. 
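If security signs off, access could probably be granted to a Google group rather than to individual users; a sketch (the group address is a placeholder, gitlab-production is the project the logs above live in):

```sh
# grant a support group read-only access to Stackdriver logs in the production project
gcloud projects add-iam-policy-binding gitlab-production \
  --member='group:support-logs@gitlab.com' \
  --role='roles/logging.viewer'
```

Note that `roles/logging.viewer` does not cover data-access logs; `roles/logging.privateLogViewer` would be needed for those.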
-Determine what group to create and get list of users from Lyle.",1.0 -13900795,2018-09-04 19:40:59.230,Week 5+ - begin migration of all projects on gitlab.com to hashed storage,"Related: https://gitlab.com/gitlab-com/infrastructure/issues/4772 - -Migrate all projects to hashed storage.",5.0 -13900771,2018-09-04 19:39:02.856,Week 4 hashed storage - begin migration of gitlab-com & gitlab-org projects on gitlab.com,"Related: https://gitlab.com/gitlab-com/infrastructure/issues/4772 -Closes: https://gitlab.com/gitlab-com/infrastructure/issues/3955 - -Begin the migration process for `gitlab-com` and `gitlab-org` projects on gitlab.com - -This is going to be better off scripted -* [x] Gather list of project IDs to be migrated -* [x] Perform the migration -* [x] Validate all projects have completed their migration",5.0 -13900748,2018-09-04 19:36:59.712,Week 3 - enable hashed storage permanently for new projects on gitlab.com,"Related: https://gitlab.com/gitlab-com/infrastructure/issues/4772 - -In Week 2, we temporarily enabled hashed storage for new projects. No issues were found. In this issue we will be re-enabling hashed storage for new projects and leaving the feature enabled. We will continue to monitor for any problems. - -* [x] Enable hashed storage for new projects (with automatic migration when renaming/moving) - - this is one of our easier stories to be completed :joy: -* [x] Monitor for any issues with the renaming/moving -* [x] If anything pops up with the renaming/moving migration, flip the feature toggle back, but keep the ""Hashed Storage for new projects enabled""",1.0 -13900730,2018-09-04 19:35:40.093,Week 2 - temporarily enable hashed storage for new projects on GitLab.com,"Related: https://gitlab.com/gitlab-com/infrastructure/issues/4772 - -* [x] Flip the feature toggle to disable the migration when renaming/moving a project (this step is listed on the parent ticket) -* [x] Enable the Hashed Storage option for 2 hours and monitor. -* [x] After 2 hours disable the Hashed Storage option for new projects. -* [x] Flip the feature toggle back to the initial state (this step is listed on the parent ticket)",1.0 -13890067,2018-09-04 12:24:44.292,Sometimes GCS returns 5XX (Server error) status code,"GitLab-rails has an object storage integration feature. It's used in various kinds of areas, such as LFS, avatars, job artifacts, etc. - -It seems GCS sometimes returns a 5XX status code (server error) when GitLab-rails tries to access it. You can see a bunch of errors happening on our servers. - -- Production: https://sentry.gitlap.com/gitlab/gitlabcom/?query=Google%3A%3AApis%3A%3AServerError -- Staging: https://sentry.gitlap.com/gitlab/staginggitlabcom/?query=Google%3A%3AApis%3A%3AServerError -- dev.gitlab.org: https://sentry.gitlap.com/gitlab/devgitlaborg/?query=Google%3A%3AApis%3A%3AServerError - -From what I saw, this happens transiently. At the moment, we don't have any clues about the cause. - -We don't know whether this issue falls under Infrastructure or GitLab-CE. For now, we're creating it in the infrastructure project.",1.0 -13868516,2018-09-03 15:04:57.442,Setup a Prometheus Pushgateway,"Having a Prometheus Pushgateway would help us solve many observability issues: - -1. 
**Allow `ops.gitlab.net` CI jobs to post information related to CI runs** - - As an example: if we set up a CI job which routinely performs a restore of our postgres backup, at the end of the job we can post the time it takes to the push gateway - - We can then decide (or better yet, let exec/business decide) on an acceptable restore time for our Postgres backup - - This time limit can then be codified as an alert in AlertManager. If the CI job takes longer than X hours, we raise an alert - - Additionally, we have trend data to predict and plan ahead - - (cc @abrandl as this scenario came from discussions between us) - -1. **Configure takeoff to send metrics related to**: - 1. Deployment time (per host) as a latency histogram - 1. Deployment in progress (as a gauge): `gitlab_takeoff_deployment_in_progress{fqdn=""..."", gitlab-version=""...""} --> 1` - 1. Given these metrics, we would be able to annotate our grafana dashboards with details of deploys. - 1. We would also be able to compare error rates during and after deployments with much greater ease than we can at present. - -1. **Allow for easier integration with non-prometheus components**: - 1. For example, Redis Sentinel allows CLI commands to run when a failover occurs. We could use this to increment a counter on each failover via the push gateway and then set up more robust alerting than we have at present. - - -Before starting to build, estimate how much effort it is to do this with Chef vs K8s and update the team with the estimate. - - -cc @bjk\-gitlab @jarv",3.0 -13868201,2018-09-03 14:54:32.495,Use CICD for environment and role updates in chef-repo,"We should start using the ops instance for role and environment updates. At this point all role updates for gprd/gstg should be made on merge, automatically. We may want a pipeline for this: apply the changes for gstg first, check for alerts and then apply to gprd; alternatively, a manual step for the gprd apply would work as well.",2.0 -13867132,2018-09-03 14:03:32.105,Use CICD for uploading cookbooks to chef,Individual cookbooks should be uploaded to chef when they are merged to master. For this to work we should probably add a pipeline check to ensure that there is a version bump.,3.0 -13865657,2018-09-03 12:57:42.089,Design document for the canary environment,"Using backend server weights, we should start sending a very small percentage of production traffic to the canary web node.",2.0 -13749361,2018-08-27 19:09:38.878,Disable Transparent Huge Pages (THP) on Redis machines,"I don't think we've had to do this before, and I don't see warnings in the Redis logs, but Redis strongly recommends disabling THP to avoid latency: - -* https://redis.io/topics/latency -* https://github.com/antirez/redis/issues/3176 -* http://antirez.com/news/84 - -It can be disabled simply by: - -```sh -echo never > /sys/kernel/mm/transparent_hugepage/enabled -``` - -Right now it's set as the following: - -```sh -# cat /sys/kernel/mm/transparent_hugepage/enabled -always madvise [never] -``` - -/cc: @jarv, @andrewn",1.0 -13718658,2018-08-27 09:48:36.990,Arihant: Production access,As we have moved to GCP as stated [here](https://gitlab.com/gitlab-com/runbooks/blob/master/howto/access-gstg-gprd-hosts.md) creating this issue to get production access for .com.,1.0 -13701151,2018-08-26 09:20:40.158,Unmount NFS for Gitaly,"The tripswitch for path access from Rails/Sidekiq has been enabled for a while now[1], so the next steps could be taken to remove NFS from our architecture. 
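Before actually unmounting on a fleet, it is worth confirming nothing still holds the NFS mounts open; a rough sketch (the mount point is illustrative):

```sh
findmnt -t nfs,nfs4                      # list NFS mounts still present on the node
sudo fuser -vm /var/opt/gitlab/git-data  # any processes still using the mount? (path is an example)
```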
- -Now, dev.gitlab.org doesn't use NFS to the best of my knowledge, so we can do testing there, but I think we should move forward by unmounting the NFS drive for staging and do manual testing there. And afterwards move on to production. - -/cc @jramsay @jacobvosmaer\-gitlab - -[1]: https://gitlab.com/gitlab-org/gitaly/issues/1261",2.0 -13696150,2018-08-25 16:00:28.243,Configure Snowplow integration on production for GitLab.com,"We added our Snowplow configuration to staging in https://gitlab.com/gitlab-com/infrastructure/issues/4731, and nothing exploded. We also see events successfully populating in our Snowflake dw, so we should configure and turn on the integration in production as well. - -In admin settings, we need to enable Snowplow and set the following values: -* Collector URI: `snowplow.trx.gitlab.net` -* Site ID: `gitlab` -* Cookie domain: `.gitlab.com` - -![snowplow](https://gitlab.com/gitlab-org/gitlab-ee/uploads/2b0bd6866097a38f7f985500578c00fe/snowplow.png)",1.0 -13728932,2018-08-24 14:49:13.857,Environment running EE master refreshed nightly,"### Problem -- We currently have `dev.gitab.org` running CE master which is refreshed nightly. In addition to serving as a production environment for GitLab security-related merge requests, it's a great place to test features that are already in master in a live environment with real data. -- `staging.gitlab.com` has early RCs of EE with GitLab.com production data, but it is never running master. So if you wanted to test out features already on master, you cannot do so if those features are not yet in an RC, which oftentimes, means several weeks later. -- Testing out features already on master is super helpful for pretty much everyone at GitLab. For PMs, we can review features, help out on QA, and do FA. This is especially helpful to see a new feature integrated with the latest and greatest code, and not on a branch that might not have the newest functionality. -- Currently, the alternative is setting up GDK locally, which is not super easy to do for many GitLabbers. - -### Proposal -- Have an environment running EE master. -- Having it refreshed nightly would be great, or at least weekly. -- Having some type of real data would be great, and having that data refreshed would be awesome as well. But that is less crucial, and should be a second iteration. Just having the environment with master running means users can create their own test data manually.",1.0 -13653802,2018-08-23 17:03:58.410,Some users are having trouble performing a deploy,"``` -✖ Stopping chef-client on nfs, sidekiq, mailroom, web, api, git, registry, deploy-node (26s) -ssh_exchange_identification: Connection closed by remote host -ssh_exchange_identification: Connection closed by remote host -ssh_exchange_identification: Connection closed by remote host -ssh_exchange_identification: Connection closed by remote host -ssh_exchange_identification: Connection closed by remote host -ERROR: Errno::EPIPE: Broken pipe -/Users/jivanvl/gitlab/takeoff/lib/steps/base.rb:69:in `abort': Failed to execute command: bundle exec knife ssh -e -a fqdn 'roles:gprd-base-stor-nfs OR roles:gprd-base-be-sidekiq OR roles:gprd-base-be-mailroom OR roles:gprd-base-fe-web OR roles:gprd-base-fe-api OR roles:gprd-base-fe-git OR roles:gprd-base-fe-registry OR roles:gprd-base-deploy-node' 'sudo service chef-client stop' (ScriptError) - from /Users/jivanvl/gitlab/takeoff/lib/steps/base.rb:60:in `post_checks' - from /Users/jivanvl/gitlab/takeoff/lib/steps/base.rb:34:in `run!' 
 - from /Users/jivanvl/gitlab/takeoff/lib/step_runner.rb:44:in `block in run' - from /Users/jivanvl/gitlab/takeoff/lib/step_runner.rb:42:in `each' - from /Users/jivanvl/gitlab/takeoff/lib/step_runner.rb:42:in `run' - from bin/takeoff-deploy:130:in `block in <main>
' - from /Users/jivanvl/.rbenv/versions/2.4.4/lib/ruby/gems/2.4.0/gems/semantic_logger-4.3.0/lib/semantic_logger/base.rb:355:in `measure_internal' - from /Users/jivanvl/.rbenv/versions/2.4.4/lib/ruby/gems/2.4.0/gems/semantic_logger-4.3.0/lib/semantic_logger/base.rb:97:in `measure_info' - from bin/takeoff-deploy:128:in `<main>
' -``` - -At least two members have reported nearly the same error message.",2.0 -13631669,2018-08-22 19:26:03.442,pgbouncer authentication is incorrectly configured in vault,"pgbouncer has the incorrect credentials stored in our vault. The next time a `gitlab-ctl reconfigure` is run on the db servers, is the next time we'll cause an outage as pgbouncer will not be able to properly authenticate with the master postgres node. - -In order to close this issue: -1. Correct the password in our vault -1. Ensure chef has placed the proper credentials on it's next run -1. Ensure a `gitlab-ctl reconfigure` has completed on all database/pgbouncer servers -1. Validate pgbouncer is still successful at connecting to the master node -1. The above is done in both staging and production",5.0 -13631324,2018-08-22 18:53:46.501,forum.gitlab.com running low on disk space,"Disk space on this node is mostly taken up by the docker container running discourse. Soon, it won't have any available space. - -Reference: https://gitlab.com/gitlab-com/infrastructure/issues/3946 - -This node is a one-off. This node ought to be rebuilt.",1.0 -13613595,2018-08-22 09:02:54.322,Update the infrastructure section of the handbook - first iteration,"https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/ needs a refresh. -@andrewn has done a bit of work in https://gitlab.com/gitlab-com/infrastructure/issues/4675 which would serve as a great starting point to update the diagrams. I think we should probably keep the scaffolding and just do a big update to make it relevant for GCP. - -cc @gl\-infra @andrewn - - -* `## On this page` -* `## Other Related Pages` -* `## Current Architecture` - needs an update -* `## Proposed Cloud Native Architecture` - needs an update -* `### GitLab.com Production Environment` - consolidate with current architecture -* `### High Level Components View` - Borrow diagrams from the readiness review? -* `### Pods Definition` - This can probably stay as-is but maybe could use an update? -* `### Database Architecture` - Needs an update -* `### Monitoring Architecture` - Needs an update -* `### Logging Architecture` - Either reference or put link to the logging page -* `## Infrastructure ""Services"" and Their SLx's` - I think we should remove this until we have a solid plan, cc @andrew -* `## Host Naming Standards` - Update for gcp -* `### Hostnames` - Update for gcp -* `### Service Tiers` - This is pretty much the same -* `### Environments` - Needs update -* `### Locations` - update for GCP -* `### TLD Zones` needs update for internal and .gitlab.net addresses -* `### Examples`- needs update -* `## Internal Networking Scheme` - replace with pointer to terraform -* `### Production` - remove -* `### Canary` - remove -* `### Staging` - remove -* `### GitLabGeoPrd` - remove -* `### GitLabGeoStg` - remove -* `### GitLabOps` - remove -* `### Remote Access` - Replace with pointer to howtos for bastion access -* `## Azure` - remove -* `### Load Balancers` - Remove, covered in high level diagrams -* `### Service Nodes` - remove -* `## Digital Ocean` - needs update -* `## AWS` - needs update -* `## Google Cloud` - remove as we are on gcp now -* `## Monitoring` - keep, reference to monitoring section -* `## Technology at GitLab` - more or less the same.",1.0 -13575681,2018-08-21 08:56:54.911,Create blackbox scraper alert for https://dashboards.gitlab.com,"This is the public dashboard service. 
We may want to just hit the api health endpoint on port 3000 from the blackbox scraper on the internal ip, or we can hit the https endpoint since it is public.",1.0 -13502571,2018-08-17 16:28:44.429,Fix mailgun 2FA to not email everyone on login,"https://gitlab.slack.com/archives/C0SNC8F2N/p1534522465000100 - -Apparently logging into mailgun with 2FA enabled is emailing a mailing list with the 2FA code.",1.0 -13487380,2018-08-16 21:41:25.369,GCP permission cleanup - add users to groups and remove specific permissions,See issue https://gitlab.com/gitlab-com/security-accountability/issues/3,1.0 -13483938,2018-08-16 17:25:52.103,Week 1 - test limited repos on GitLab.com with hashed storage,"Week 1 issue for #4772 - - -Following: https://gitlab.com/gitlab-com/infrastructure/issues/4174 and as a step to https://gitlab.com/groups/gitlab-org/-/epics/75 I discussed some additional steps with @stanhu to have an initial rollout in gitlab dot com: - -- First week: Let's enable it for new projects for a short period of time (~2 hours) and monitor - -Coordinate with @stanhu and geo team for which repos and time window",1.0 -13483439,2018-08-16 16:03:22.609,Change GITLAB_CDN_HOST to the IPv6-enabled endpoint on staging,"**Reason:** -[IPv6 support on GitLab.com](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/645) is being prioritized. Static resources ought to also be reachable over IPv6. - -**Problem:** -Staging uses `gl-staging.global.ssl.fastly.net`/`gl-staging-canary.global.ssl.fastly.net` as the CDN endpoint, but that doesn't support IPv6. - -**Fix:** -Change the [`GITLAB_CDN_HOST`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/administration/environment_variables.md#supported-environment-variables) environment variable to `gl-staging.freetls.fastly.net`/`gl-staging-canary.freetls.fastly.net`. As an added benefit this will enable HTTP/2. See [Fastly's Free TLS IPv6 documentation](https://docs.fastly.com/guides/securing-communications/setting-up-free-tls#support-for-http2-ipv6-and-tls-12).",1.0 -13480437,2018-08-16 14:02:03.813,remoteonly.org ssl expiration,The certificate expired for this pages served application.,1.0 -14000505,2018-08-16 12:12:27.246,Define Database Reliability Engineer role,This is to track work on defining the DBRE role we currently discuss.,2.0 -13460797,2018-08-15 16:39:56.551,Investigate / Deploy eBPF metrics exporter,The [ebpf_exporter](https://github.com/cloudflare/ebpf_exporter) can give us latency histograms for disk IO. This would be useful for debugging and viewing the impact of slow / throttled storage.,2.0 -13460848,2018-08-15 16:34:02.240,Database Access For Dylan Griffith,"The configure team regularly finds ourselves needing to query the database for certain things like: - -- debug production incidents -- understanding how many rows we have in certain tables (how this might affect migrations) -- analytics on usage of our product -- query the DB to understand the impact of introducing new constraints in the DB - -We don't have anybody in our team that can execute these queries so we usually rely on others to do this for us but sometimes like in the case of analytics on product usage we need to do slightly exploratory querying to know what we're looking for. It's hard to do this without prod DB access since we do one `COUNT` query and this tells us what kinds of other queries we might want to do to understand the data better. 
- -Could you please provide access to @DylanGriffith so that the Configure team is better setup to manage these kinds of problems in future without external support. -",1.0 -13460332,2018-08-15 16:07:27.750,Hashed Storage rollout in GitLab dot Com,"Following: https://gitlab.com/gitlab-com/infrastructure/issues/4174 and as a step to https://gitlab.com/groups/gitlab-org/-/epics/75 I discussed some additional steps with @stanhu to have an initial rollout in gitlab dot com: - -* [x] First week: Start migrating some of our own projects (`gitlab-com` and/or `gitlab-org`) #4785 -* [x] Second week #4866: - - Flip the feature toggle to disable the migration when renaming/moving a project - - Enable Hashed Storage for new projects only, for a short period of time (~2 hours) - - Disable Hashed Storage for new projects - - Flip the feature toggle back to the initial state -* [x] Third week #4867: - - Enable Hashed Storage for new projects and the automatic migration when renaming/moving a project - - Monitor for any issues with the renaming/moving - - If anything pops up with the renaming/moving migration, Flip the feature toggle back, but keep the Hashed Storage for new projects enabled -* [x] Fourth week #4868: Migrate all our `gitlab-com` and `gitlab-org` projects -* [x] Fifth week and beyond #4869: Start migrating user's repository (batches of 1K? per day/or all repositories from the same storage, etc) -* [x] Resolve any failures #6001 -* [x] Resolve any failures from [#935](https://gitlab.com/gitlab-com/gl-infra/production/issues/935) - -Because we've introduced a feature to also migrate hashed storage when renaming/moving projects in https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19747 we may want to reduce the risk and introduce a feature toggle to disable the behavior first (this will be done in a separate issue, I will discuss this with @vsizov - https://gitlab.com/gitlab-org/gitlab-ce/issues/50345). - - ----- - -After finalizing all the migration in GitLab-dot-com, this is a list of what we learned and the corner cases we found: - -* When a precondition fail, we had no visibility of what failed nor why - - That required extra debugging and allowed us to improve the code. - - The failures were either due to corner cases or bugs in other parts of the codebase we were relying on. -* The rake task did not include all pagination params SRE wanted in order to throttle the execution - - Initially SRE did some custom scripts to trigger a throttled amount of projects to comply with GitLab-dot-com load - - We improve a few things on our rake task adding extra params it was lacking, but still most of the initial batch was triggered via some custom bash scripts calling our rake tasks and paginating -* Files in object storage are already hashed, but we were still listing them as candidates - - The code did not corrupt or messed up with anything on object storage, so we got only some noise as side-effect - - The queries were fixed and we removed them from the list and from the candidates that were being triggered - - We also added an extra bit that just ignored any attempt to schedule a migration as a precondition -* Initial goal was to build it to be fast (that means, be over optimistic of how the environment is, but stop to prevent data-loss whenever we have 100% confidence) - - This was necessary for the scale of GitLab dot com. 
The conservative approach helped us achieve no data-loss - - Because of the conservative approach, it required multiple iterations to find and fix corner cases -* We found out that due to some previous attempts we left an empty folder (with CarrierWave folder structure, but no files inside) - - We now consider that OK and we still try to migrate over - - For GitLab dot com, we found a few cases where, due to an early attempt, the tmp folder was not empty, so a few cases required a manual intervention to allow that to be retried -* We lacked instrumentation; logging was not good from the beginning - - We've improved the amount of information we log - - That allowed us to follow up the migration by watching the specific worker class in our logstash instance -* Because of the speed concerns, migration happened in sidekiq, and scheduling was also triggered by sidekiq. This is hard to debug. Ideally we should have a background/foreground mode - - I've created a proposal to build some internal framework with the lessons we learned here that could help us with future data migrations https://gitlab.com/gitlab-org/gitlab/issues/34427 -* Permission problems (incorrect permissions on disk, in a few of the last remaining projects) - - This was due to previous SRE support work, which had nothing to do with the hashed storage, but impacted the script - - Permissions had to be fixed manually -* SQL timeouts (during the first attempts we had enough database performance; that degraded as dot com grew and we reduced the timeout limit...) - - We had to add partial indexes to help get back the speed - - The indexes were only tracking remaining legacy storage projects, so after migrating them all the index will have zero cost -* Repository reference counters were inconsistent in some repositories (probably due to an earlier bug) - - We had to manually reset the counter for a few projects - - There was no code available to do that, so that was added in another MR -* We found projects in legacy storage that are `pending delete` but were never removed (created: https://gitlab.com/gitlab-org/gitlab/-/issues/210031) - - The solution, until we find a more permanent fix, is to re-schedule their removal manually - - There is documentation on how to do it here: https://gitlab.com/gitlab-org/gitlab/blob/a67ad6249dc784f328ce23d77bd7ae1e8ebe57b5/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md#L193-232",8.0 -13452769,2018-08-15 09:19:41.680,Alert for failing basebackup,"We have discussed a couple of ways to go about this: -- Alert on a failing restore pipeline EDIT: I would prefer to hold off until we have this running in the ops instance. -- Alert on the cronjob failing",2.0 -13435488,2018-08-14 17:15:25.641,Documentation for GKMS Secret Storage for GitLab Servers,Document on our architecture page our usage of GKMS and the methodology behind how we use GKMS in automated Chef runs for secrets management.,1.0 -13432006,2018-08-14 14:34:41.808,adrielsantiago needs an account on staging.gitlab.com,"@adrielsantiago joined recently and needs a staging account to help test changes made into RCs. - -Thanks! - -Created this issue as recommended in https://gitlab.slack.com/archives/C3JJET4Q6/p1534255102000100 ",1.0 -13429069,2018-08-14 12:20:08.060,page the oncall when redis fails over,"There should be a specific alert for every type of failover: redis, redis-cache, postgres. 
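One option, following the Pushgateway idea above, is to have Sentinel's notification/client-reconfig hook record each failover and alert on the resulting metric; a sketch (the Pushgateway address and metric name are illustrative):

```sh
#!/bin/sh
# called by sentinel when a failover/event fires; records it for alerting
cat <<EOF | curl --data-binary @- http://pushgateway.gitlab.example:9091/metrics/job/redis_failover/instance/$(hostname -f)
# TYPE gitlab_redis_failover_last_timestamp gauge
gitlab_redis_failover_last_timestamp $(date +%s)
EOF
```

An alert could then fire whenever `time() - gitlab_redis_failover_last_timestamp < 300`, which pages on every failover regardless of whether clients noticed.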
-Although failovers can happen without service interruption they usually can indicate other problems and and it will help to shorten the troubleshooting time if we are alerted to them immediately.",3.0 -13410376,2018-08-13 17:30:35.817,Reconfigure the Gitaly cgroup cookbook to match new file server nodes,"Gitaly currently uses the default cgroup settings (32GB of memory, 1024 CPU shares, etc) from the Gitaly cgroup cookbook. - -However, the machines Gitaly is now running on on GitLab.com are much bigger, with much more memory and we should ensure that we're using the optimal settings. - -``` -root@file-01-stor-gprd.c.gitlab-production.internal:/home/andrewn# free -g - total used free shared buff/cache available -Mem: 118 2 83 0 31 113 -Swap: 0 0 0 -``` - -113GB of unused memory on file-01 :disappointed: - -``` -root@file-01-stor-gprd.c.gitlab-production.internal:/home/andrewn# cat /sys/fs/cgroup/memory/system.slice/gitlab-runsvdir.service/memory.max_usage_in_bytes -34359738368 -``` - -Gitaly cgroup constrained to 32GB of memory, using a maximum of ~20% of available RAM on dedicated Gitaly machines: - -![image](/uploads/2026203e3d7668e210688d6511d540c3/image.png) - -https://dashboards.gitlab.net/d/Zy6xM95mk/incident-405-failing-api-health-checks?panelId=12&fullscreen&orgId=1&from=now-24h&to=now - -cc @tommy.morgan @jacobvosmaer\-gitlab",2.0 -13382075,2018-08-12 13:41:52.188,fewer requests are going to web-01 than other VMs in the load balancing pool,"It appears web-01 in gprd is responding much faster than the other instances as you can see in this graph: -https://dashboards.gitlab.net/d/HyOiXrSmz/rails?refresh=5m&panelId=2&fullscreen&orgId=1 - -The reason, however, is that fewer requests are being routed to web-01 resulting in the liveness check skewing the latency results. From this there are a couple open questions: - -- Should we be excluding liveness from our latency measurements? -- Why is web-01 receiving fewer requests relative to liveness? - -Out of the last 1000 requests on web-01 524 were liveness, on web-02 354 were liveness.",1.0 -13380775,2018-08-12 09:29:23.970,Move chatops runner to gitlab-ops,"In order for chatops to have network access to both gstg and gprd we should move it to the gitlab-ops gcp project and also register it with the ops.gitlab.net instance.,",4.0 -13363801,2018-08-10 14:13:57.651,Create a wildcard certificate for `*.gitlab-review.app`,"We currently use a self-signed certificate for `*.gitlab-review.app` (for EE review apps) but it seems to not work at all. - -I get the following when visiting https://gitlab-review-improve-re-ffvbep.gitlab-review.app/ in Firefox: - ->>> -The owner of gitlab-review-improve-re-ffvbep.gitlab-review.app has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website. - -This site uses HTTP Strict Transport Security (HSTS) to specify that Firefox may only connect to it securely. As a result, it is not possible to add an exception for this certificate. - -gitlab-review-improve-re-ffvbep.gitlab-review.app uses an invalid security certificate. - -The certificate is not trusted because it is self-signed. - -The certificate is only valid for ingress.local. - -Error code: MOZILLA_PKIX_ERROR_SELF_SIGNED_CERT ->>> - -We should have a proper wildcard certificate for `*.gitlab-review.app`. 
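For reference, what the ingress currently serves can be checked with (the hostname is the example from above):

```sh
echo | openssl s_client -connect gitlab-review-improve-re-ffvbep.gitlab-review.app:443 \
       -servername gitlab-review-improve-re-ffvbep.gitlab-review.app 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```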
- -/cc @marin",2.0 -13359158,2018-08-10 10:04:04.002,infrastructure in the ops environment needs monitoring,"We currently do not have an alert manager in the ops environment which means we have very poor optics for some of our important operational infrastructure. This includes: -* dashboards.gitlab.net -* nginx proxy -* public grafana - -We should deploy a dedicated prometheus and alertmanager there.",2.0 -13347037,2018-08-09 17:18:05.349,Create VM/Cloud inventory first iteration,"https://gitlab.slack.com/archives/CB3LSMEJV/p1533834709000494 - -After getting some quetions - we should have a tool that produces some kind of inventory of hosts and IPs to help us track down things among our cloud providers. This will also help new people find things. - -Before doing - come up with a plan - is there an existing tool we can use? -Do we just want output to go to a spreadsheet? -Get a 2nd set of eyes to review the plan and then go.",5.0 -13324527,2018-08-08 16:54:01.579,Decommission sidekiq-traces servers,"With the object storage migration complete, we can now decommission the `sidekiq-traces` servers and reintegrate their queues back into `besteffort`.",1.0 -13297246,2018-08-07 16:35:14.325,New Storage Servers,"We will need to create new storage servers soon. We are approaching the 60% mark on 17, 18, 19, and 20. This should perhaps obviously be done post migration. - -- [X] Build new storage servers (https://gitlab.com/gitlab-com/gitlab-com-infrastructure/merge_requests/497) -- [x] Add storage servers to chef-repo so that they are mounted by NFS -- [x] Ensure the servers are set up in `gitlab.rb` to be storage options -- [x] Attempt to migrate a test project to the new servers -- [x] Update the default repository locations in the GitLab admin interface.",1.0 -13226598,2018-08-04 14:13:27.235,Fix chef-client issues for 08-02 RC incident,We have a lot of chef-clients stopped/stale or erroring from 8/2. This issue is the tracking point to make sure they get back to a good state before aug 11.,1.0 -13217341,2018-08-03 16:44:51.457,create alert and dashboard for sidekiq exceptions,"We currently have very little visibility into this and it would be nice to add this to our sidekiq stats dashboard. - -* azure - https://performance.gitlab.net/d/000000154/sidekiq-stats?refresh=5m&orgId=1 -* gcp - https://dashboards.gitlab.net/d/000000154/sidekiq-stats?refresh=5m&orgId=1 - -During the last failover rehearsal we saw a large number of exceptions we should alert on. https://dashboards.gitlab.net/d/l8ifheiik/geo-status?panelId=33&fullscreen&orgId=1&from=1533223456362&to=1533250079043",3.0 -14000516,2018-07-30 09:47:06.529,WAL-e wal-push should not overwrite existing data in S3,"I'm wondering if the wal-e `archive_command` overwrites existing WAL segments in S3 - we should make sure this doesn't happen (as per [docs](https://www.postgresql.org/docs/9.6/static/continuous-archiving.html), `archive_command` should fail in this situation): - -> The archive command should generally be designed to refuse to overwrite any pre-existing archive file. This is an important safety > feature to preserve the integrity of your archive in case of administrator error (such as sending the output of two different servers to the same archive directory). - -/cc @NikolayS",1.0 -13305504,2018-07-30 09:34:07.926,"if any of the registry/api/web backends go unhealthy, the entire site goes down","We have three load balancers that serve gitlab.com in GCP, https, git and http. 
-These three load balancers are serving traffic to haproxy VMs that have the following health checks that validate the health of their corresponding backends: - -``` -frontend check_ssh - bind 0.0.0.0:8003 - mode http - option splice-auto - acl no_be_srvs_ssh nbsrv(ssh) lt 1 - monitor-uri /-/available-ssh - monitor fail if no_be_srvs_ssh - -frontend check_http - bind 0.0.0.0:8001 - mode http - option splice-auto - acl no_be_srvs_web nbsrv(web) lt 1 - monitor-uri /-/available-http - monitor fail if no_be_srvs_web - -frontend check_https - bind 0.0.0.0:8002 - mode http - option splice-auto - acl no_be_srvs_web nbsrv(web) lt 1 - acl no_be_srvs_api nbsrv(api) lt 1 - acl no_be_srvs_reg nbsrv(registry) lt 1 - monitor-uri /-/available-https - monitor fail if no_be_srvs_web || no_be_srvs_api || no_be_srvs_reg -``` - -the last check means that if either web, api, or registry has zero healthy backend hosts we will cease to serve https traffic. - - -* This configuration puts an unnecessary coupling between the registry and the rest of the site as since the registry is served from registry.gitlab.com we could have a dedicated load balancer for it. -* git https traffic is routed to the git vms, we should probably move git https traffic to a different fleet or use the web fleet.",8.0 -12876064,2018-07-16 23:26:31.396,(canary) Creating Kubernetes Cluster - cannot fetch Google project billing status,"### Summary - -When creating a new Kubernetes Cluster, and I select a project from the ""Google Cloud Platform project"" field, the request to Google is blocked by CSP. - -### Steps to reproduce - -* Turn on canary with by setting the `gitlab_canary` cookie -* Go to Operations, Kubernetes -* Select ""Create new Cluster on GKE"" -* select any project from the ""Google Cloud Platform project"" field - -### Example Project - -https://gitlab.com/tkuah/test_rails/clusters/new - -### What is the current *bug* behavior? - -The ""Google Cloud Platform project"" field is stuck on ""Validating project billing status"" and cannot be selected anymore. - -### What is the expected *correct* behavior? - -We successfully validate the project billing status, and the field shows the GCP project that was selected by the user. - -### Relevant logs and/or screenshots - -Console shows the following error : - -``` -Refused to frame 'https://content-cloudbilling.googleapis.com/' because it violates the following Content Security Policy directive: ""frame-src 'self' https://www.google.com/recaptcha/ https://content.googleapis.com https://content-cloudresourcemanager.googleapis.com"". 
-``` - -![Screen_Shot_2018-07-17_at_11.11.39_AM](/uploads/b9221ae2e5369f906115d9701d7c0188/Screen_Shot_2018-07-17_at_11.11.39_AM.png) - -With `gitlab_canary`, I receive the following header for https://gitlab.com/tkuah/test_rails/clusters/new : - -``` -Content-Security-Policy: object-src 'none'; worker-src https://assets.gitlab-static.net https://gl-canary.global.ssl.fastly.net https://gitlab.com blob:; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://assets.gitlab-static.net https://gl-canary.global.ssl.fastly.net https://www.google.com/recaptcha/ https://www.gstatic.com/recaptcha/ https://apis.google.com; style-src 'self' 'unsafe-inline' https://assets.gitlab-static.net https://gl-canary.global.ssl.fastly.net; img-src * data: blob:; frame-src 'self' https://www.google.com/recaptcha/ https://content.googleapis.com https://content-cloudresourcemanager.googleapis.com; frame-ancestors 'self'; connect-src 'self' https://assets.gitlab-static.net https://gl-canary.global.ssl.fastly.net wss://gitlab.com https://sentry.gitlap.com https://customers.gitlab.com; report-uri https://sentry-infra.gitlap.com/api/3/csp-report/?sentry_key=a664fdde83424b43a991f25fa7c78987 - -``` - -Without `gitlab_canary` : - -``` -Content-Security-Policy: object-src 'none'; worker-src https://assets.gitlab-static.net https://gitlab.com blob:; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://assets.gitlab-static.net https://www.google.com/recaptcha/ https://www.gstatic.com/recaptcha/ https://apis.google.com; style-src 'self' 'unsafe-inline' https://assets.gitlab-static.net; img-src * data: blob:; frame-src 'self' https://www.google.com/recaptcha/ https://content.googleapis.com https://content-compute.googleapis.com https://content-cloudbilling.googleapis.com https://content-cloudresourcemanager.googleapis.com; frame-ancestors 'self'; connect-src 'self' https://assets.gitlab-static.net wss://gitlab.com https://sentry.gitlap.com https://customers.gitlab.com; report-uri https://sentry-infra.gitlap.com/api/3/csp-report/?sentry_key=a664fdde83424b43a991f25fa7c78987 - -``` - -So somehow, the canary servers are missing `https://content-cloudbilling.googleapis.com` from the CSP header - -### Output of checks - -This bug happens on GitLab.com - -### Possible fixes - -Add the above src to the `frame-src` CSP directive.",1.0 -13305476,2018-07-08 17:05:05.148,Migrate registry S3 bucket to GCS,"As part of https://gitlab.com/gitlab-com/migration/issues/635, we should also standardize the registry bucket name. This is likely more work than https://gitlab.com/gitlab-com/migration/issues/635 and thus should be done migration~4038690 . However, the bucket itself will be added in https://gitlab.com/gitlab-com/gitlab-com-infrastructure/merge_requests/440.",4.0 -12598698,2018-07-06 15:11:29.331,Access request - AWS keys for billing info for Meltano,"In order to help Meltano with billing analysis we're looking to set up a billing AWS IAM profile: - -See: https://gitlab.com/meltano/meltano/issues/225 which in turn helps: https://gitlab.com/meltano/looker/issues/53 - -cc @jschatz1",1.0 -13304968,2018-07-06 11:14:16.916,Architecture diagrams to aid in systematic debugging,"Per 2018-07-05 failover - -The debugging of https://gitlab.com/gitlab-com/migration/issues/642 was made unnecessarily difficult due to us having to puzzle out the exact CDN setup. 
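Until such diagrams exist, the effective edge serving a given hostname can at least be spot-checked from the outside; for example:

```sh
dig +short gitlab.com
curl -sI https://gitlab.com/ | grep -iE '^(server|via|x-served-by|x-cache|cf-ray)'
```

(Which of these headers shows up depends on the CDN in front.)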
We should ensure up-to-date architecture diagrams exist for at least gprd, documenting the full route taken for all variations of all presented TCP services, so we can debug these things systematically when they come up. - -/cc @dawsmith @jarv",1.0 -12530971,2018-07-03 15:40:13.825,Chef access for andrewn,"Related to https://gitlab.com/gitlab-com/infrastructure/issues/4502 - -While I now have production access, I would ideally like chef access too. While I don't plan to make changes, having access allows me to run commands like - -``` -knife node show -```",1.0 -12511484,2018-07-02 21:06:22.369,Set up meltano domains for GitLab Pages,"We'd like to make use of the meltano domains list via GitLab Pages. The project and linked issue where we would set up pages in is: https://gitlab.com/meltano/meltano.com/issues/1 - -To set this up, we would need to set up DNS records and then SSL certificates.",1.0 -13304976,2018-06-29 08:55:21.311,Decide on what endpoint to use for the HAProxy health check,"* In azure we use `/` which goes all the way to rails and serves a 302. -* In GCP we use `/-/liveness` which also hits the database. - -There was some discussion in https://gitlab.com/gitlab-com/infrastructure/issues/4481 on whether this is a good idea and that maybe we should use a health check that does not depend on the database.",1.0 -13304997,2018-06-28 18:48:41.549,Re-enable archiving mode on postgres nodes in GCP,"They are being disabled in https://dev.gitlab.org/cookbooks/chef-repo/merge_requests/2207. - -Why? Quoting [Slack](https://gitlab.slack.com/archives/C9W4C89LY/p1530204627000477): - -> My concern is, if we used a single bucket for Azure and GCP, failover demos would push WAL segments with different timeline, and I think that would confuse WAL-E when we failback because it constantly looks for newer timelines to fetch segments for - -After we failover permanently, we should re-enable them.",1.0 -13305245,2018-06-25 15:17:58.234,Fix reverse DNS for GCP IPs,"Noted in the 2018-06-21 failover - -Various IPs on the GCP side are missing appropriate reverse DNS. - -For instance, `gprd.gitlab.com` resolves to `35.231.145.151`, which has reverse DNS of `151.145.231.35.bc.googleusercontent.com` - -Since post-failover, we want `gitlab.com` to point to this IP, we should ensure the reverse DNS is set to `gitlab.com` - -We don't send email via these IPs, so it's not super-important, but I do think it's desirable. - -Other IPs that matter will include the ones that `registry.gprd.gitlab.com`, `gprd.gitlab.io` and `altssh.gprd.gitlab.com` (doesn't exist yet) will resolve to. Perhaps I'm missing others as well. - -We should consider setting appropriate rDNS for IPs in gstg as well",1.0 -12301013,2018-06-25 00:47:52.776,Improve the download speed from packages.gitlab.com globally,"Currently it takes 2 hours in the morning or 5 hours in the afternoon to install GitLab CE/EE from Shanghai, China using the official instructions by connecting to packages.gitlab.com. Would need to get this improved significantly for better experiences for users and customers in China. - -/cc @malessio @jimt2061 @northrup",1.0 -14000523,2018-06-21 15:31:06.971,Alert if database backup cannot be restored,"We have full database backup restore automation. Let's add an alert to the process that lets us know if the restore failed for some reason. 
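One approach, assuming the Pushgateway discussed elsewhere is available to the restore pipeline: push a success timestamp as the job's final step and alert when it goes stale (the address and metric name are illustrative):

```sh
# last step of the restore CI job - only reached if the restore succeeded
echo gitlab_postgres_restore_last_success_timestamp $(date +%s) \
  | curl --data-binary @- http://pushgateway.gitlab.example:9091/metrics/job/postgres_restore
```

An alerting rule along the lines of `time() - gitlab_postgres_restore_last_success_timestamp > 2 * 86400` would then catch both failed and silently-skipped restores.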
- -https://gitlab.com/gitlab-restore/postgres-01.db.prd.gitlab.com - -/cc @ahanselka",3.0 -13305493,2018-06-20 10:34:18.476,repmgr should be added to centralized logging,"We currently do not have these included and they probably should be. - -Example: -``` -2018-04-26_02:31:41.69640 [2018-04-26 02:31:41] [INFO] connecting to database 'host=postgres-01.db.prd.gitlab.com port=5432 user=gitlab_repmgr dbname=gitlab_repmgr' -2018-04-26_02:31:41.69733 [2018-04-26 02:31:41] [ERROR] connection to database failed: could not connect to server: Connection refused -2018-04-26_02:31:41.69749 Is the server running on host ""postgres-01.db.prd.gitlab.com"" (127.0.0.2) and accepting -2018-04-26_02:31:41.69756 TCP/IP connections on port 5432? -2018-04-26_02:31:41.69764 -2018-04-26_02:31:42.70236 [2018-04-26 02:31:42] [INFO] connecting to database 'host=postgres-01.db.prd.gitlab.com port=5432 user=gitlab_repmgr dbname=gitlab_repmgr' -2018-04-26_02:31:42.70289 [2018-04-26 02:31:42] [ERROR] connection to database failed: could not connect to server: Connection refused -2018-04-26_02:31:42.70303 Is the server running on host ""postgres-01.db.prd.gitlab.com"" (127.0.0.2) and accepting -2018-04-26_02:31:42.70310 TCP/IP connections on port 5432? -2018-04-26_02:31:42.70316 -```",2.0 -13305489,2018-06-20 10:09:11.358,consul client and consul cluster should be added to centralized logging,"We currently do not have these included and they probably should be. I think we can simply add these to the existing postgres index. - -## Consul client - -Unfortunately these logs are unstructured: - -``` -/var/log/gitlab/consul/current - -example: -2018-06-20_00:07:26.61126 2018/06/20 00:07:26 [INFO] serf: EventMemberFailed: postgres-02.db.prd.gitlab.com 10.66.1.102 -2018-06-20_00:07:27.59971 2018/06/20 00:07:27 [ERR] agent: Failed to invoke watch handler '/var/opt/gitlab/consul/scripts/failover_pgbouncer': exit status 4 -2018-06-20_00:07:32.90036 2018/06/20 00:07:32 [INFO] serf: EventMemberJoin: postgres-02.db.prd.gitlab.com 10.66.1.102 -2018-06-20_00:09:22.54212 2018/06/20 00:09:22 [ERR] agent: Failed to invoke watch handler '/var/opt/gitlab/consul/scripts/failover_ - -``` - -## Consul server - -Logged using the systemd journal, we don't currently have a fluentd plugin for this so it would need to be added. -https://github.com/reevoo/fluent-plugin-systemd",2.0 -12112496,2018-06-18 09:46:40.473,Monitor consul members,"We've had instances of postgres secondaries missing from consul in https://gitlab.com/gitlab-com/infrastructure/issues/4415 and https://gitlab.com/gitlab-com/infrastructure/issues/4330. - -It would be useful to have prometheus check and alert on e.g. the member count for ""postgres"", so we can detect that situation automatically. - -There is a consul exporter available: https://github.com/prometheus/consul_exporter",1.0 -12069327,2018-06-16 06:42:39.960,Add centralized logging to prometheus and grafana hosts,"When troubleshooting https://gitlab.com/gitlab-com/infrastructure/issues/4410 it was difficult to determine what was going on because of the amount of logging which was rotated off the host quickly. - -We should add all of the Prometheus/Thanos, and Grafana hosts to centralized logging. We should also add some troubleshooting steps into the `runbooks` project repository. See https://gitlab.com/gitlab-com/runbooks/blob/master/howto/monitoring-overview.md. 
- -* [x] Prometheus -* [x] Thanos components (compact, store, query) -* [x] Grafana -* [x] Alertmanager -* [x] Pushgateway",5.0 -13498116,2018-06-14 16:47:08.679,Create public dashboard service,"To replace the old `monitor.gitlab.net`, we need a new `monitor.gprd.gitlab.net` that syncs dashboards from `dashboards.gitlab.net`.",8.0 -13305512,2018-06-13 09:11:46.882,Migrate the sidekiq-workers dashboard to Prometheus,"Currently the sidekiq-workers dashboard relies on InfluxDB. Since we are strategically moving from Influxdb to Prometheus, and our current plans are to leave Influxdb behind in Azure, we need to rebuild this dashboard using metrics from Prometheus. - -@smcgivern has pointed out that this is an important dashboard which is frequently used, so it's migration should be done migration~4038689 - -**References** - -* https://performance.gitlab.net/dashboard/db/sidekiq-workers -* [issues](https://gitlab.com/groups/gitlab-org/-/issues?search=https://performance.gitlab.net/dashboard/db/sidekiq-workers) -* https://gitlab.com/gitlab-com/migration/issues/291 -* https://gitlab.com/gitlab-com/infrastructure/issues/1962",2.0 -13305231,2018-06-13 09:10:08.280,Migrate the rails-controllers dashboard to Prometheus,"Currently the rails-controllers dashboard relies on InfluxDB. Since we are strategically moving from Influxdb to Prometheus, and our current plans are to leave Influxdb behind in Azure, we need to rebuild this dashboard using metrics from Prometheus. - -@smcgivern has pointed out that this is an important dashboard which is frequently used, so it's migration should be done migration~4038689 - -**References** - -* https://performance.gitlab.net/dashboard/db/rails-controllers -* [issues](https://gitlab.com/groups/gitlab-org/-/issues?search=https://performance.gitlab.net/dashboard/db/rails-controllers) -* https://gitlab.com/gitlab-com/migration/issues/291 -* https://gitlab.com/gitlab-com/infrastructure/issues/1962",2.0 -13305468,2018-06-13 09:09:20.823,Migrate the grape-endpoints dashboard to Prometheus,"Currently the Grape Endpoints dashboard relies on InfluxDB. Since we are strategically moving from Influxdb to Prometheus, and our current plans are to leave Influxdb behind in Azure, we need to rebuild this dashboard using metrics from Prometheus. - -@smcgivern has pointed out that this is an important dashboard which is frequently used, so it's migration should be done migration~4038689 - -**References** - -* https://performance.gitlab.net/dashboard/db/grape-endpoints -* [issues](https://gitlab.com/groups/gitlab-org/-/issues?search=https://performance.gitlab.net/dashboard/db/grape-endpoints) -* https://gitlab.com/gitlab-com/migration/issues/291 -* https://gitlab.com/gitlab-com/infrastructure/issues/1962",2.0 -11619150,2018-06-05 23:33:01.431,Make sure GCP sizing is updated with changes done in Azure week of 6/3,"We should go back through the items we had done in https://gitlab.com/gitlab-com/infrastructure/issues/4314 and make sure we scale up GCP hosts similar to what we had done in Azure as needed. - -* [ ] API nodes -* [ ] Sidekiq workers -* [ ] Worker chef roles for sidekiq queues -* [ ] Any further tuning of NFS nodes -* [ ] Pgbouncer changes if any -* [ ] Prometheus? 
- -cc @andrewn ",1.0 -11302970,2018-06-04 16:53:57.235,Bastion Access to gprd,"Reference: https://docs.google.com/spreadsheets/d/1O6BrllYss6lnKyguK5dTrjlqsaZyFT0oqOtI_9_e-hE/edit - -Reference: https://gitlab.com/gitlab-com/migration/issues/459 - -In order to conduct the ~""GCP Migration"" production and failover rehearsals, please grant the following GitLab team members **Bastion** access to `gprd` - -- [ ] @meks -- [ ] @rymai -- [ ] @bikebilly -- [ ] @DylanGriffith -- [ ] @fjsanpedro -- [ ] @felipe_artur -- [ ] @mkozono -- [ ] @jprovaznik - -How to access Bastions: https://gitlab.com/gitlab-com/runbooks/blob/master/howto/gprd-bastions.md",1.0 -11302840,2018-06-04 16:52:30.591,Bastion Access to gstg,"Reference: https://docs.google.com/spreadsheets/d/1O6BrllYss6lnKyguK5dTrjlqsaZyFT0oqOtI_9_e-hE/edit - -Reference: https://gitlab.com/gitlab-com/migration/issues/459 - -In order to conduct the ~""GCP Migration"" production and failover rehearsals, please grant the following GitLab team members **Bastion** access to `gstg` - -- [ ] @meks -- [ ] @rymai -- [ ] @bikebilly -- [ ] @DylanGriffith -- [ ] @fjsanpedro -- [ ] @felipe_artur -- [ ] @jprovaznik - -How to access Bastions: https://gitlab.com/gitlab-com/runbooks/blob/master/howto/gstg-bastions.md",1.0 -11302497,2018-06-04 16:44:42.157,"SSH, Shell Access in Production","Reference: https://docs.google.com/spreadsheets/d/1O6BrllYss6lnKyguK5dTrjlqsaZyFT0oqOtI_9_e-hE/edit - -Reference: https://gitlab.com/gitlab-com/migration/issues/459 - -In order to conduct the ~""GCP Migration"" production and failover rehearsals, please grant the following GitLab team members **SSH and Shell** access in Production - -- [ ] @digitalmoksha -- [ ] @nick.thomas",1.0 -11302385,2018-06-04 16:42:47.788,"SSH, Shell Access to Staging","Reference: https://docs.google.com/spreadsheets/d/1O6BrllYss6lnKyguK5dTrjlqsaZyFT0oqOtI_9_e-hE/edit - -Reference: https://gitlab.com/gitlab-com/migration/issues/459 - -In order to conduct the ~""GCP Migration"" production and failover rehearsals, please grant the following GitLab team members **SSH and Shell** access on Staging - -- [ ] @digitalmoksha",1.0 -11302358,2018-06-04 16:40:57.803,Rails Access to Production,"Reference: https://docs.google.com/spreadsheets/d/1O6BrllYss6lnKyguK5dTrjlqsaZyFT0oqOtI_9_e-hE/edit - -Reference: https://gitlab.com/gitlab-com/migration/issues/459 - -In order to conduct the ~""GCP Migration"" production and failover rehearsals, please grant the following GitLab team members **GitLab Rails** access on to Production - -- [ ] @digitalmoksha - - -Please note that this also requires https://gitlab.com/gitlab-com/infrastructure/issues/4316 to be completed. 
- -https://gitlab.com/gitlab-com/infrastructure/issues/4316",1.0 -11302305,2018-06-04 16:38:28.030,GitLab Admin Access in Production,"Reference: https://docs.google.com/spreadsheets/d/1O6BrllYss6lnKyguK5dTrjlqsaZyFT0oqOtI_9_e-hE/edit - -Reference: https://gitlab.com/gitlab-com/migration/issues/459 - -In order to conduct the ~""GCP Migration"" production and failover rehearsals, please grant the following GitLab team members **GitLab Admin** access on https://gitlab.com - -- [x] @andrewn -- [x] @digitalmoksha -- [ ] @meks",1.0 -11302265,2018-06-04 16:36:57.510,GitLab Admin Access on Staging,"Reference: https://docs.google.com/spreadsheets/d/1O6BrllYss6lnKyguK5dTrjlqsaZyFT0oqOtI_9_e-hE/edit - -Reference: https://gitlab.com/gitlab-com/migration/issues/459 - -In order to conduct the ~""GCP Migration"" production and failover rehearsals, please grant the following GitLab team members **GitLab Admin** access on https://staging.gitlab.com - -- [x] @andrewn -- [ ] @meks -- [x] @rymai -- [x] @dawsmith",1.0 -11283994,2018-06-04 11:04:34.669,"Allow rails-console, db-console and nfs users access to the Bastion hosts","Proposed fix in https://dev.gitlab.org/cookbooks/chef-repo/merge_requests/2054 .... - -Currently users with `rails-console`, `db-console` and `nfs` access are not allowed onto the bastion hosts, so are not able to access the hosts that they previously could. - -It would be good if their previous access was restored.",1.0 -13676071,2018-05-31 18:57:36.005,GCP Security Review Followup: Firewall Rules By Role and Subnet,"Review the communication between VPC and subnets (currently organized by role) to better catalog active connections in the production environment and recommend additional configuration, firewall rules, etc.",1.0 -13305352,2018-05-31 17:42:50.745,Update documentation about runner configuration for GCP,"# Summary - -The current documentation for the production runner configuration is out of date and should be updated to reflect additions and changes for the GCP environments. - -# Links - -* https://dev.gitlab.org/cookbooks/chef-repo/blob/master/doc/shared-and-specific-runners.md",1.0 -13305367,2018-05-29 17:45:20.137,Remove`ci.gitlab.com` from haproxy,"We no longer need ssl configuration for ci.gitlab.com, this can be removed from the haproxy cookbook. - - -Original description: - -According to @jarv, our HAproxy configuration for GitLab.com includes a section for the domain name `ci.gitlab.com` - -This has no counterpart in gprd and no entry in the DNS. - -Does it require any action for the migration~2977716 or can we assume it's obsolete and remove the HAProxy configuration currently on GitLab.com ? - -/cc @jarv",1.0 -13305360,2018-05-23 20:13:48.917,Set up IMAP Prometheus exporter for monitoring Service Desk,"Right now we are blind as to whether mail_room is doing anything for Service Desk or reply-by-email. - -A quick thing we could do to improve this is to run an IMAP Prometheus Exporter, such as https://github.com/camptocamp/imap-mailbox-exporter. -We'd need to fork it to export the `Unseen` messages in https://godoc.org/github.com/mxk/go-imap/imap#MailboxStatus, but that shouldn't be hard. 
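As a stopgap for watching the mailbox during the failover window, the same number can be pulled ad hoc with curl's IMAP support; a rough sketch (host and credentials are placeholders):

```
curl --silent --url 'imaps://IMAP_HOST/' \
     --user 'SERVICE_DESK_MAILBOX:APP_PASSWORD' \
     --request 'STATUS INBOX (UNSEEN)'
# the untagged reply looks something like: * STATUS INBOX (UNSEEN 3)
```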
- -At the minimum, I might just run a local exporter and watch it during the GCP failover.",1.0 -13362078,2018-05-21 20:33:09.828,Clean up deleted repositories,"Similar to https://gitlab.com/gitlab-com/migration/issues/322, we have a lot of +deleted repositories on disk that we should clean up after the migration due to issues such as https://gitlab.com/gitlab-org/gitlab-ee/issues/6091. - -We may want to build this into the product.",1.0 -10843587,2018-05-15 17:09:58.846,Gemnasium service backup - restore test,"In #4203 it was brought to our attention that there will be a Gemnasium service that will be utilizing the GCP SQL product and backups. It was determined this was an acceptable backup solution, but we need to test restores.",3.0 -13305319,2018-05-13 14:00:38.263,Investigate why postgresql service shutdown on `postgres-01-db-gprd` and did not restart,"About four hours ago, postgres-01 went down and did not come back up for some reason: - -``` -# sudo gitlab-ctl status -run: consul: (pid 1857) 15370s; run: log: (pid 1850) 15370s -run: logrotate: (pid 28186) 969s; run: log: (pid 1863) 15370s -down: postgresql: 15370s; run: log: (pid 1858) 15370s -run: remote-syslog: (pid 1826) 15370s; run: log: (pid 1825) 15370s -down: repmgrd: 1s, normally up, want up; run: log: (pid 1871) 15370s -``` - -Manually starting again seems to work. I didn't see anything in the log that suggested why it didn't come up automatically.",2.0 -10641750,2018-05-03 13:10:06.230,Sidekiq not running on staging.gitlab.com,"Noted by @toon (thanks!) - -![Screenshot_from_2018-05-03_14-09-50](/uploads/d00b8f946dcca7340eeb87d5cd1d325c/Screenshot_from_2018-05-03_14-09-50.png) - -This blocks any attempt at a planned failover.",2.0 -14963419,2018-10-15 16:04:17.252,Move Alliances project from GitLab.org to GitLab.com group,,1.0 -14961158,2018-10-15 14:49:16.464,Upgrade to haproxy v1.8.x,"We are running haproxy 1.6.3 which is goes allll the way back to 2015. -In order to support TLS1.3 and probably some other things we should consider upgrading. -Suggest we role this out on staging and then canary first. I don't believe there will be any problems but @northrup may know better. - -Putting in the next next milestone for scheduling consideration.",4.0 -14960184,2018-10-15 14:12:24.933,"AWS, DO access for @ssichak, @acarella and @estrike","Our two new Security Operations engineers need read-only access to the AWS and DO production accounts. The engineers' handles are: -* @ssichak -* @acarella -* @estrike ",1.0 -14959799,2018-10-15 13:57:04.183,Access to create a project on our Google API,"I'm currently working to deploy a tool that would link our SFDC instance with Google Sheets. This would make it much easier for our reps to mass update their opportunities. - -One of the steps of the installation/setup is to create a custom [Google API Project](https://appexchange.salesforce.com/servlet/servlet.FileDownload?file=00P3A00000XbZCaUAN) - Page 12 of the install instructions. - -I would need access to create a project and edit that project within our instance. I also may have to do this twice (once for an installation and once for a production installation) so I'm not sure if that affects the permission set I would need. I could always update the project once it's created though",1.0 -14957229,2018-10-15 12:41:34.748,"reboot api, git, sidekiq and web fleet","In order to pick up the latest kernel we should do a coordinated reboot of the fleet. 
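A sketch of the per-node drain/reboot/re-enable cycle (all hostnames, backend/server names and the admin socket path are placeholders):

```
# on each frontend LB, take the node out of rotation
echo 'set server web/web-01 state drain' | sudo socat stdio /run/haproxy/admin.sock
# wait for in-flight connections to bleed off, then reboot the node
ssh web-01.sv.gprd.gitlab.net 'sudo reboot'
# once it is back and healthy, put it back in rotation on each LB
echo 'set server web/web-01 state ready' | sudo socat stdio /run/haproxy/admin.sock
```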
This can be done in a rolling fashion, one or two nodes at a time making sure we drain from the lb first. - -This is a followup to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4668",8.0 -14947093,2018-10-15 07:29:14.488,Upgrade to Grafana v5.3,"There are some nice features, most interestingly built-in support for stackdriver. -http://docs.grafana.org/guides/whats-new-in-v5-3/ - -Putting in the next milestone for scheduling consideration. -cc @dawsmith",1.0 -14936975,2018-10-15 05:11:40.250,301 Redirects for Comparison Pages,"We'd like to move `/comparison` to `/devops-tools` - -This will require 217+ redirects in order to route existing pages to new urls. - -Here is a [spreadsheet of the required redirects with the source URL and the desired: https://docs.google.com/spreadsheets/d/17cU2VUlIIaw9LEU1VnPuNrA2uBos3UAD8CItZP9ESzo/edit#gid=0 - -### Blocking work - -Before we put redirects in place: - -https://gitlab.com/gitlab-com/www-gitlab-com/issues/3093 and https://gitlab.com/gitlab-com/www-gitlab-com/issues/3094 should be done first - -### WARNINGS - -NOTE: Not all URLs are a 1:1 swap of s/comparison/devops-tools. See the [redirect spreadsheet](https://docs.google.com/spreadsheets/d/17cU2VUlIIaw9LEU1VnPuNrA2uBos3UAD8CItZP9ESzo/edit#gid=0) for the correct urls. - -NOTE: Any additional comparison pages that are created between now and the 301s going live will be need to added to the 301 redirect spreadsheet. - -NOTE: This the timing of this merge will need to be coordinated with the corresponding www-gitlab-com merge https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15469. - -- If 301 redirects merge too early, then web visitors will be sent to 404 pages that don't exist yet -- If 301 redirects merge too late, then any existing links and google juice will go to 404 pages - -cc @sytses @kuthiala @dangordon ",1.0 -14907075,2018-10-13 00:03:05.748,[Design Document] Ephemeral Environments,"This document describes using Ephemeral Environments to provide a location to test infrastructure and application components, and make it easier to manage GitLab.com - -The document can be reviewed at: https://add-ephemeral-environments-design.about.gitlab.com/handbook/engineering/infrastructure/design/201810_environments.html - -The merge request is at https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15450",15.0 -14904892,2018-10-12 20:20:58.475,Enable CloudTrail for AWS,"We should ensure that we have global trails enabled within CloudTrail for AWS API audit logging - -This may require creating/configuring a new S3 bucket for the CloudTrail logs (if an appropriate bucket doesn't already exist), and [creating a default trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html#creating-a-trail-in-the-console); for this default trail, we want to ensure that it is applied to all regions (step 5).",1.0 -14893756,2018-10-12 12:16:15.232,HTTP targets missing TLS upgrade,"There are a few instances of HTTP endpoints not doing a redirect to https: - -* [x] `http://contributors.gitlab.com` -* [x] `http://dashboards.gitlab.com` -* [x] `http://dashboards.gitlab.net` -* [x] `http://gitlab-examples.gitlab.io` -* [x] `http://gitlabhosted.com` -* [x] `http://gitlabhq.com` -* [x] `http://log.gitlap.com` -* [x] `http://pages.gitlab.io` -* [x] `http://performance.gitlab.net`",5.0 -14884201,2018-10-12 01:28:25.220,[Design Document] Git Workflow for CI/CD,"This document describes a Git Workflow to facilitate using CI/CD to 
deploy infrastructure and application components, and make it easier to manage GitLab.com - -To summarize, the standard [GitLab flow](https://docs.gitlab.com/ee/workflow/gitlab_flow.html) workflow is not the desired model for managing GitLab.com and related resources. This document discusses how we can use a common git workflow with GitLab's CI/CD tool set to manage all GitLab.com resources (Terraform, chef, Kubernetes, Monitoring, Deployments, etc). - -The document can be reviewed at https://add-infra-git-workflow-design.about.gitlab.com/handbook/engineering/infrastructure/design/201810_git_workflow.html - -The initial merge request is here: https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15393/diffs - -It should should be considered in parallel with [Dogfooding CI/CD](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5231) #5231",5.0 -14865964,2018-10-11 16:05:42.647,Renewal of 4 TLS certificates,"On November 7th, 2018 we have 4 TLS certificates expiring: - -* `*.helm-charts.win` -* `*.k8s-ft.win` -* `*.cloud-native.win` -* `*.separate-containers.party` - -These certificates are used for GitLab Charts Review Apps as well as part of development envs for Distribution team. The current certificates are stored in the `Cloud Native` vault, would be great to replace them there. - -AC - see if we can get these managed by sslmate too for automating renewals -",1.0 -14860691,2018-10-11 14:52:31.093,TLS: certificate for h1.sec.gitlab.net,Please register a TLS certificate for `h1.sec.gitlab.net`. We're going to run a web application on that host we would like to secure.,1.0 -14857043,2018-10-11 13:35:19.546,forum.gitlab.com - Transition Backups to Cloud Storage,"### Plan of Action -The Discourse forum engine has the ability to use AWS S3 as a backup target instead of local disk for it's nightly backups. We should: - -- [x] Create an AWS S3 bucket for forum.gitlab.com backups -- [x] Configure forum.gitlab.com in the admin interface to use the S3 bucket -- [x] Verify that backups are being written to the bucket - ---- -### Original Issue Description -The forum service has it's own backup process, ownership of this service is lacking and the ability to restore at any given point is not documented. Backups are stored on the disk of the host itself. We'll want to probably ship those offsite for cold storage in the case of disaster recovery.",2.0 -14856898,2018-10-11 13:32:30.120,Runbooks and monitoring for forum.gitlab.com,"With #8202 underway, this issue gains specificity as an ultimate input to a corresponding production readiness review under gitlab-com/gl-infra/readiness> for migrating/launching the service in k8s on GKE; that work will incur corresponding changes and requirements to the specifics of how we monitor services in k8s, metrics available, norms established for managing logs, etc. - -This issue now becomes focused on authoring the initial set of runbooks and updating relevant architecture pages in the handbook to properly and comprehensively document the infrastructure for the forum service. - -Original description -``` -We do not have appropriate monitoring of the forum service. - -The only alerts we get are when someone notifies us about backups failing: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5214 - -Or when pingdom tells us it's down: https://gitlab.slack.com/archives/C101F3796/p1539262868000100 - -We don't have our standard set of prometheus metrics being collected on this box. 
We are unable to proactively handle potential situations that arise. - -Use this issue to discuss how we can better manage this box. -```",5.0 -14841496,2018-10-10 23:08:51.154,Review and setup first iteration of Firewall changes with Security team in Staging,"This comment has a proposed set of rules for review https://gitlab.com/gitlab-com/security-accountability/issues/11#note_107604322 - -After review, create a production issue with the rules to be changed and make the changes in staging.",3.0 -14834812,2018-10-10 18:00:09.979,Fastly potentially serving incorrect content to some users,"## Discovery - -An example URL provided to me for testing this: -https://about.gitlab.com/direction/secure/ - -A few slack conversations: -* https://gitlab.slack.com/archives/C101F3796/p1539191863000100 (unthreaded) -* https://gitlab.slack.com/conversation/C9WFMSDFF/p1539167836000100 (thread) - -Some team members are reporting different versions of the site multiple hours after a successful deploy. One user is reporting that he see's what's been deployed on their mobile, while an old version is still showing on their workstation :unamused: - -Using Fastly's Check Cache feature, I've discovered 2 endpoints that have differing versions of the same page using the example above. Those in question are: -* `cache-tyo19931` -* `cache-nrt6138` - -I've opened a support ticket with Fastly about this. I hesitate to purge cache in case they are able to perform some troubleshooting. - -As an FYI the backup time of the cache is configured to 1 hour. Being that our deploy completed over 3 hours prior, Fastly really ought to have the latest version of our site. - -## Support -Fastly Support ticket: http://fastly.zendesk.com/hc/requests/98067",1.0 -14828768,2018-10-10 15:49:46.173,Blog post about DR replicas,See https://gitlab.com/gitlab-com/www-gitlab-com/issues/3222. This issue only tracks work for the milestone.,5.0 -14816677,2018-10-10 09:04:29.463,Onboarding architectural program,"To improve onboarding for new SREs we should give https://about.gitlab.com/handbook/engineering/infrastructure/production-onboarding/ a facelift. There are some good pointers there already but there is a lot of detail lacking. - -This issue will be used to collect some thoughts, before a handbook MR. - -I would also like to expand onboarding to include some more general architecture overviews. This would include tracing different types of requests through the gitlab.com architecture. - -Since this page serves as a documentation index that is geared towards new-hires the sections could be fleshed out a bit. I don't see this as a replacement for the onboarding template, but a way to augment it with more information that can also be useful to non SREs so they can see what the typical day-to-day is for the team. - - -## oncall overview - -## Where to find things - -* fleet information -* logs, what are we storing and where to find them -* monitoring and alert rules -* dashboards -* rails and database access - -## terraform - -* environments and state files -* shared configuration overview, variables, etc. 
- -## chef - -* updating a cookbook, how much is automated, cookbook versions -* updating a node attribute -* updating a secret - -## Architecture overview of different types of requests - -* https get request to gitlab.com -* public api -* https git -* ssh git -* repository overview -* pages request -* registry -* download project zip file -* download project archive -* request raw file -* cicd job - -## releases - -Distilled version of how releases work and how deployments work with chatops?",4.0 -14816145,2018-10-10 08:42:44.122,evaluate stickiness settings on haproxy backends,"Currently the haproxy backend configuration is such that every pool of servers is round-robin with the exception of websockets, ssh and pages. - - -* `websockets` -``` - balance roundrobin - cookie _gitlab_session prefix nocache -``` -* `ssh` -``` - balance source - hash-type consistent -``` -* `pages_http` and `pages_https` -``` - balance source - hash-type consistent -``` -The other backends: - -* `api`: all of the `api-xx` nodes -* `https_git`: all of the `git-xx` nodes -* `web`: all of the `web-xx` nodes -* `canary_web`: all of the `web-cny-xx` nodes -* `registry`: all of the `registry-xx` nodes - -Use roundrobin load balancing. As now we are starting to put some production traffic on canary we should probably consider making requests sticky so that clients do not potentially see multiple versions of the applications on a single page load. - -For `web` and `canary_web` this would mean modifying our load balancing algorithm to use the session cookie: - -``` - balance roundrobin - cookie _gitlab_session prefix nocache -``` - -For the `api` this would mean modifying our load balancing algorithm to use the source ip: -``` - balance source - hash-type consistent -``` - -but I wonder how smart this will be since most of our traffic is coming from the CICD. Until we are able to separate that traffic we should probably leave it roundrobin. - -For the `registry` I'm not sure whether it matters much but we could also use source ip here as well: - -``` - balance source - hash-type consistent -```",2.0 -14792324,2018-10-09 10:57:24.581,Users not receiving emails,"We have received a couple of requests related to service desk not sending emails or users are not receiving emails. - -Ticket: https://gitlab.zendesk.com/agent/tickets/105220 - -https://gitlab.com/gitlab-com/support-forum/issues/3984 - -Few of the discussion: https://gitlab.slack.com/archives/C4XFU81LG/p1539078943000100 (internal) - -Can we please check and investigate this. - -cc// @ahmadsherif",1.0 -14788640,2018-10-09 08:29:41.399,Request to update forum.gitlab.com,"The forum needs manual update. SSH into the node and: - -```sh -cd /var/discourse -git pull origin master -./launcher rebuild app -``` - -I created a runbook at https://gitlab.com/gitlab-com/runbooks/merge_requests/776/diffs.",2.0 -14786753,2018-10-09 07:05:30.670,Enable Git v2 over SSH on GitLab.com,"Now that https://gitlab.com/gitlab-org/gitlab-ce/issues/46555 will land in production soon (in the next RC) it will work for Git HTTP clients that request v2, but not for Git over SSH. - -In order to enable v2 in production, we'll need to set the following in `sshd_config`: - -``` -AcceptEnv GIT_PROTOCOL -``` - -Now, this could have security considerations, such as https://serverfault.com/questions/427522/why-is-acceptenv-considered-insecure - -And from the man page of SSHD: - -``` -Be warned that some environment variables could be used to bypass restricted user environments. 
For this reason, care should be taken in the use of this directive. The default is not to accept any environment variables. -``` - -So I think we need a security evaluation before enabling this, although given that it's restricted to `GIT_PROTOCOL` I didn't see a big concern in this particular situation (pinging @kathyw so this can be evaluated by Security) - -Also, it could be that we could enable this directly in Omnibus (or a setting to be made) - @marin would this be possible? I've only found https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/docker/assets/sshd_config which is Docker related. - -A few facts about the Git protocol v2: - -- It's opt-in only, so clients (GitLab users) that support it (using Git `v2.18.0` onwards) need to explicitily pass a configuration to enable it. -- Not all Git commands are on v2, some will still execute v1 -- `GIT_PROTOCOL` is only evaluated for us if the string contains `version=2`, everything else is ignored and won't be passed to `git`, defaulting to `v1`. - -@jramsay this means that without us enabling this SSH config, people using `Git v2` wouldn't be able to use v2 over SSH (will use v1 instead), but could use v2 over HTTP on GitLab.com. In the docs we could point users on how to enable it for on-premises, unless we decide to configure this in Omnibus directly. - -Git Protocol v2: https://github.com/git/git/blob/master/Documentation/technical/protocol-v2.txt - -Man page for SSHD: https://linux.die.net/man/5/sshd_config - -cc @jacobvosmaer\-gitlab @DouweM",2.0 -14786669,2018-10-09 06:59:33.873,Setup Geo with ops.gitlab.net instance,"Right now we don't have a regular deployment of Geo until https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4741. Could we do something simpler and just set up a single Geo instance for the ops instance and deploy regularly to it? - -That way we can: - -1. Ensure database migrations work -2. Ensure existing functionality still works -3. Test new functionality",4.0 -14780650,2018-10-08 20:36:01.904,Rails Console Access for wvandenberg,"Hello Team, - -I need Rails console access to allow me to very ownership of gitlab pages and alike. Can anyone assist in granting me access? - -Thanks! - -~oncall ~""access request""",1.0 -14768974,2018-10-08 13:20:04.756,Unable to unarchive projects,"A user is unable to unarchive these projects - -``` -https://gitlab.com/blade-group/dev/clients/boxes/audio-switcher -https://gitlab.com/blade-group/dev/clients/boxes/electron -``` - -I believe this has something to do with our GCP migration. I was unable to reproduce this on my projects. - -https://gitlab.zendesk.com/agent/tickets/105222 - -cc// @jarv",1.0 -14760740,2018-10-08 08:00:08.905,Database Reviews,"New issue for the current milestone that contains pending reviews from the last. 
- -* [x] https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2498/diffs -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7341#note_106524072 -* [x] https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2528#note_136727 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21719#note_104392420 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22014 -* [x] Thursday https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7578#note_106070226 -* [x] Thursday https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7806?commit_id=5d6177e91a8159dded6c5f62b5bd40a4165fe687#note_108912846 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/issues/6070#note_106815144 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22143#note_106796618 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6947 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22013#note_106869076 -* [x] https://dev.gitlab.org/gitlab/gitlab-ee/merge_requests/678 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22226 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21021 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/issues/7955#note_109956059 together with https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7990#deca3eb05754841e761d010d8941933fd19f2aef_0_26 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22430",5.0 -14746994,2018-10-07 09:26:00.103,Create a critical PD alert for HAProxy connection errors,"corrective action for https://gitlab.com/gitlab-com/gl-infra/production/issues/496 -Alert on https://dashboards.gitlab.com/d/ZOOh_aNik/haproxy?panelId=57&fullscreen&orgId=1&from=now-7d&to=now&refresh=5m&var-host=fe-07-lb-gprd.c.gitlab-production.internal&var-port=9101&var-backend=All&var-frontend=All&var-server=All&var-code=All&var-interval=30s as it is currently our best way to determine a malfunctioning backend server that is not accepting ssh but is passing the rails healthcheck. - -Discussion about how to improve the healthcheck more generally in https://gitlab.com/gitlab-org/gitlab-shell/issues/166",1.0 -14715420,2018-10-05 11:54:22.838,IAM policy gitlab-internal: For Configure team,"Similar to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5233 I would like to request the same permissions for the following people: - -- [ ] mcabrera@gitlab.com -- [ ] tkuah@gitlab.com -- [ ] taurie@gitlab.com -- [ ] mgreiling@gitlab.com -- [ ] dgruesso@gitlab.com -- [ ] jerasmus@gitlab.com - -They require IAM permissions for `gitlab-internal-153318` project with role `roles/container.admin`.",1.0 -14710648,2018-10-05 08:11:24.433,canary.gitlab.com should use the corresponding backends for api/git traffic,"Our routing logic for canary is currently: - -``` -use_backend canary_web if is_canary or is_canary_host -``` - -Which means all requests to canary.gitlab.com or web requests that have the cookie set will go to the web fleet. -Ideally public api and git-ssh/git-https should use the new canary git and api servers.",8.0 -14688316,2018-10-04 11:46:31.950,Convert environment labels to server-side,"In order to improve metrics and alert routing and maintenance we should use Prometheus ""external labels"". - -External labels allow Prometheus to assign labels based on its view of the system. - -We're currently [populating several labels](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/master/roles/gprd-infra-prometheus-server.json#L6-9). 
-* `provider` (gcp/azure/etc) -* `region` (us-east, etc) -* `monitor` (default, app, etc) - -We're currently overloading `monitor` with things like `gprd-default`. - -I propose we change this to add a new `env` label that contains the environment for the Prometheus server. - -This will allow us to transition alert routing to the new labels.",3.0 -14668026,2018-10-04 01:44:44.578,Evaluate timed incremental rollout on hosting servers,"We're going to ship the delayed job feature https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21767 for AutoDevOps timed incremental rollout https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22023. - -We're going to evaluate it works on our hosting servers with AutoDevOps timed incremental rollout mode. - -### Evaluation plan - -**dev.gitlab.org** - -- Enabled term: 8th, Oct. ~ -- Evaluation date: 8th, Oct. -- [x] Wait for dev.gitlab.org daily sync -- [x] Create a sample project with new AutoDevOps deployment strategy to make sure it's fully functional. -- [x] Check health (metrics, logs and crash reports) - -**staging.gitlab.com** - -- Enabled term: 10th?, Oct. ~ (It depends on RM's plan) -- Evaluation date: 10th, Oct. -- [x] RC with the new code has been deployed -- [x] Create a sample project with new AutoDevOps deployment strategy to make sure it's fully functional. -- [x] Check health (metrics, logs and crash reports) - -**gitlab.com** - -- Enabled term: 10th?, Oct. ~ (It depends on RM's plan) -- Evaluation date: 10th, Oct. ~ 10th, Nov. -- [x] RC with the new code has been deployed -- [x] Create a sample project with new AutoDevOps deployment strategy to make sure it's fully functional. -- [x] Check health (metrics, logs and crash reports) - -NOTE: -- This feature is behind the feature flag `ci_enable_scheduled_build`, but it's enabled **by default** (See https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21767#note_106332776) -- This feature doesn't run unless users manually changed their deployment strategy in AutoDevOps. Presumably, after we published a release post of 11.4 at 22nd, Oct., users would try to use it, and the usage of the new sidekiq-workers will gradually increase so that the time we should keep eyes on server's health. - -### Check health (metrics, logs and crash reports) - -- Does the new worker `Ci::BuildScheduleWorker` run properly? This uses `pipeline_processing` namespace (priority: 5 (highest)). -- Does not the new worker `Ci::BuildScheduleWorker` pressurize other workers in the same namespace? -- Are there any stale delayed jobs? `Ci::Build.stale_schedule.count` should be zero. -- Are there any crash reports related to this feature on Sentry? -- StuckCIJobWorker should use Index Scan properly? (Ref: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21767#note_106811708) (cc @abrandl) - -### Feature flag - -The feature flag's name is `ci_enable_scheduled_build`. The new AutoDevOps deployment strategy - Timed incremental rollout is based on the [delayed job](https://gitlab.com/gitlab-org/gitlab-ce/issues/51352) feature. By disabling `ci_enable_scheduled_build`, we can effectively revert the timed incremental rollout to manual incremental rollout (Also, stops new creation for delayed jobs) - -``` -Feature.enabled?('ci_enable_scheduled_build') # Check if it's enabled -Feature.enable('ci_enable_scheduled_build') # Enable the feature -Feature.disable('ci_enable_scheduled_build') # Disable the feature -``` - -/cc @nolith @winh @jlenny @erushton",2.0 -14663527,2018-10-03 18:53:35.235,Out of disk space errors (inodes?) 
in artifacts tmp directory,"Sentry error: https://sentry.gitlab.net/gitlab/gitlabcom/issues/530892 - -All the API servers are complaining about similar errors: - -``` -Errno::ENOSPC: No space left on device @ dir_s_mkdir - /var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/cache/1538592524-29247-0008-8556 -``` - -The disk is only 23% full. I suspect we are out of inodes in that parent directory.",8.0 -14655002,2018-10-03 12:32:42.886,Move two projects from /jamedjo to /gitlab-com group,"## What - -Move https://gitlab.com/jamedjo/team-quiz and https://gitlab.com/jamedjo/icebreakers to the `gitlab-com` group - -## Why - -The [handbook says](https://about.gitlab.com/handbook/contracts/#approval-for-outside-projects) projects ""completed as part of the GitLab employment"" should be in a GitLab namespace, and I'd like to have the projects moved before announcing the gitlab-pages URLs - -## Why in infrastructure - -Changing group needs the Owner (project creator) to be a Maintainer in the target group. I thought I'd be able to get a maintainer to move it, but they wouldn't be the owner of the project. See https://gitlab.com/gitlab-org/gitlab-ce/issues/18423",1.0 -14653331,2018-10-03 11:00:20.862,Rollout the Sidekiq Reliable fetcher,"We have an issue for 11.4 https://gitlab.com/gitlab-org/gitlab-ee/issues/7279. It's already implemented and is expected to be reviewed and merged for 11.4. - -By default, it's disabled and we can enable it with a feature flag `gitlab_sidekiq_reliable_fetcher`. The state of this flag is only considered when we run the Sidekiq process. So we need to restart the Sidekiq process on some node to enable it gradually and to see how it goes. - -To monitor the state of the Sidekiq we should check the Grafana dashboards in https://dashboards.gitlab.net -and we can also see to the new ""Working"" tab in the admin area (Monitoring > Background jobs). - -If something goes wrong we have to disable the feature flag and restart the Sidekiq process. - -I'll also need some assistance from someone from the Production team",1.0 -14638385,2018-10-02 18:02:03.375,Request: staging.gitlab.com access for aciciu,"Hi guys, - -I've tried using my gitlab.com account but it's not working. Do you mind creating an account for me in the staging environment, please?",1.0 -14638204,2018-10-02 17:49:16.945,Request: staging.gitlab.com access for tpresa,"My GitLab.com account does not work to log into staging.gitlab.com, so I'd like to request access to it.",1.0 -14632494,2018-10-02 15:21:50.992,Database Reviews,"I keep track of pending database reviews as a list of MRs and check them off as I go. As discussed with @Finotto today, the idea is to create one issue ""Database Reviews"" per milestone and maintain a list of pending database reviews there. /cc @dawsmith - -The issue will be assigned to @NikolayS and @abrandl . The idea is to react to pings on MRs and link the MR in this list. The goal is to distribute review work between the two of us and also communicate the amount of reviews we're doing. - -I tend to put duplicates on the list if I'm getting pinged repeatedly. Every ping-cycle means a distraction and for me it was worth tracking this. 
- -This is my current list - let's use this issue to extend the list for this milestone and create a new issue next milestone: - -* [ ] ~Community https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21719#note_104392420 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21521#note_104568347 -* [x] ~""Unblocking others"" Kamil https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7582 -* [x] ~""Unblocking others"" https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21893#note_105140458 -* [x] ~""Unblocking others"" https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21767#note_105406114 -* [x] ~Urgent ~""Unblocking others"" https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2528#note_135915 - https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2528#note_136247 https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2528#note_136251 -* [x] gitlab-restore https://gitlab.com/gitlab-restore/postgres-gprd/merge_requests/19#note_105154860 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21893#note_105824379 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22055#note_105882003 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7738 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7386#note_105203938 -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22014 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7433/diffs -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7493 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7779 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21893 -* [ ] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7578#note_106070226 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7757#note_106315671 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22080#note_106119664 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21767#note_106084358 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22041#note_106203774 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22008#note_106238381 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22013#note_106303012 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7341#note_106332416 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/4213#note_106489662 -* [ ] https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2528#note_136247 -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22143#note_106796618 -* [ ] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6947",8.0 -14630448,2018-10-02 14:19:45.343,Create a new wildcard certificate and domain for `*.gitlab-ce-review.app`,"From the meeting we had today on addressing cleanup and scalability, the team would like to create a new cluster for CE in the review apps project. -* Meeting notes: https://docs.google.com/document/d/1oLc4s02U_bNxx2UCO-dHw5C3swhNI8QCYpX6taNnSi8/edit#heading=h.nzenrwnechk2 - -Currently, we are using the one created in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4735 for both CE and EE. - -This has proven to be a challenge with debugging and implementing cleanup/scale. - -We would like to use 2 clusters so we have some degree of separation. The load from CE is a lot more than EE. The benefit of having 2 clusters: -* Separation of stability, if the clean up of CE is unstable EE is not affected -* Ease of debugging during our first phases of implementation. 
-* Possibility of having a more aggressive clean up for CE (at least in the short term) - -/cc @marin @ibaum @rymai @ddavison - -@dawsmith if you can help us assign this to someone that would be much appreciated :pray: I set the weight to 2 similar to the request we had before (linked above)",1.0 -14621925,2018-10-02 08:17:22.354,Redis cache failover can lead to data corruption,"Currently `SAVE` is disabled on the Redis cache instance. - -This change has been in place since January 3rd 2018: https://dev.gitlab.org/cookbooks/chef-repo/commit/a60bdeded9fbd1cb295db501b4f4f18e27dabb54 - -When sentinel detects a reboot, it may decide to wait for the original master to come back online rather than failing over to a secondary. We experienced this during - Saturday's planned failover. It's mentioned in several Redis issues (see below) but the reasoning for this behaviour is not yet clear to me. - -When coupling this with our current setup, the following scenario may occur: - -1. Redis has a snapshot file in `dir` with an unspecified date. -1. We reboot a Redis master -1. Since we have `save """"`, Redis will shutdown without saving the snapshot (by design) -1. Redis will restart. On start it will begin loading the RDB snapshot file. This file might be weeks old. -1. Load complete, the slaves resync off the master, overwriting their ""current"" cache with an old, invalidated cache -1. The application requests a cache item and is returned previously invalidated data. This data is then written to a persistent store (eg postgres) leading to further corruption. - -See -* https://github.com/antirez/redis/issues/1297 -* https://github.com/antirez/redis/issues/1281 - -cc @jarv @dawsmith @Finotto",3.0 -14615910,2018-10-01 23:26:18.670,Tune threshold for sidekiq exception alerts,"We recently created a new alert for sidekiq exception count in #4651, but quickly noticed that we were seeing radically different volumes (~5x to ~15x higher) from one controller in particular, `RepositoryUpdateMirrorWorker`. After looking at https://sentry.gitlab.net/gitlab/gitlabcom/issues/217714/, it looks like the majority are for legitimate, albeit misconfigured jobs. - -For an easy fix to make this alert more useful and cut down on the noise, we should implement a separate threshold for the `RepositoryUpdateMirrorWorker`. Based on https://dashboards.gitlab.net/d/9GOIu9Siz/sidekiq-stats?panelId=66&fullscreen&orgId=1&from=now-30d&to=now, setting it to `10000` for `RepositoryUpdateMirrorWorker` and `2000` for the others should be more appropriate",3.0 -14610224,2018-10-01 18:26:12.250,Access to ops.gitlab.net,I don't seem to have permissions over in https://ops.gitlab.net/ (I don't see any projects/activity). Assuming I should see something - can I get those permissions please?,1.0 -14604190,2018-10-01 14:56:38.157,Ship new .log files to Elasticsearch,"In 11.3, we added new JSON files: - -* `/var/log/gitlab/gitlab-rails/integrations_json.log` (https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21316) -* `/var/log/gitlab/gitlab-rails/importer.log` - -We should ship these to Elasticsearch/Stackdriver etc.",1.0 -14599350,2018-10-01 12:14:24.470,Chef Server certificate expiring soon,SSLabs provided certificate is ready to go. Current certificate will expire in 5 days. 
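To confirm what is currently being served (hostname assumed), something along these lines works:

```
echo | openssl s_client -connect chef.gitlab.com:443 -servername chef.gitlab.com 2>/dev/null \
  | openssl x509 -noout -enddate
```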
Replace it on the server.,1.0 -14564889,2018-09-28 22:13:05.100,When canary is enabled some customers are reporting broken assets.,"From this thread: https://gitlab.slack.com/archives/C101F3796/p1538155584000100 - -Splitting a small percentage of production traffic to canary appeared to be at the root of some pages not loading css correctly.",2.0 -14530179,2018-09-27 17:54:57.178,Network issues caused connectivity problems for about 2 minutes,"# Summary - -We had a network blip today that caused some issues connecting to various services. - - -# Timeline of events - -Pagerduty alerts -* 7:17am gitlab.com issue is down -* 7:18am gitlab.com issue alert cleared -* 7:18am gitlab.com pages is down -* 7:18am gitlab.com new repo is down -* 7:19am gitlab.com new repo alert cleared -* 7:19am gitlab.com pages alert cleared - -During this time, we also had trouble logging in to servers. It cleared up on its own. - - -## Monitoring - - -This appears to be due to a network glitch - -![image](/uploads/d819a4ac6c716832086acf7297b2edce/image.png)", -14529363,2018-09-27 17:33:38.369,Redirect for app sec page,"Please redirect from https://about.gitlab.com/product/application-security to https://about.gitlab.com/solutions/dev-sec-ops. - -Thank you.",2.0 -14513053,2018-09-27 09:58:09.996,GitLab.com 5XX error pages should include a link to status.gitlab.com,"When a user is presented with a 5XX error page on GitLab.com, we should link to https://status.gitlab.com",1.0 -14511922,2018-09-27 09:09:48.870,Separate gitlab-pages from the rest of the web worker fleet,"Currently GitLab pages runs alongside unicorn on all web-workers on `web-*` nodes. - -The traffic profile for GitLab Pages, and it's growth, is not related to the profile of the rest of the application, but by having this service running alongside the web, we have coupled the scaling of the Rails web service to the Pages service. - -This has many disadvantages: - -* We have 14 instances of the pages service running alongside 14 web workers. This is vastly more than we need. -* Each pages daemon loads the index into memory and is competing with the unicorn fleet on memory consumption, even when traffic volumes (per instance) are low. -* Pages deployment and restart is notoriously slow. Having unnecessary additional workers exasperates this problem. -* Pages requires additional NFS mounts that the web workers would otherwise not require, so splitting this would allow us to drop those NFS mounts. - - -cc @glopezfernandez @dawsmith @Finotto @jarv",2.0 -14502779,2018-09-27 02:17:31.138,PullMirrorsOverdueQueueTooLarge from staging triggering pager duty,"The PullMirrorsOverdueQueueTooLarge from gstg was triggering pagerduty. - -This MR removed the filtering so that alert manager could decide whether to page. -https://gitlab.com/gitlab-com/runbooks/merge_requests/733 - -Since alert manager is making the wrong decision, I've added this MR to remediate the immediate problem, but we'll need a better solution. https://gitlab.com/gitlab-com/runbooks/merge_requests/759 - -The pagerduty alert looked like: -``` -Labels: - - alertname = PullMirrorsOverdueQueueTooLarge - - channel = backend - - monitor = gstg-app - - pager = pagerduty - - provider = gcp - - region = us-east - - replica = 01 - - severity = critical -Annotations: - - description = On average, there have been over 5000 overdue pull mirror jobs for the last 10 minutes. Check https://dashboards.gitlab.net/d/_MKRXrSmk/pull-mirrors. 
- - runbook = troubleshooting/large-pull-mirror-queue.md - - title = Large number of overdue pull mirror jobs: 13343 -Source: https://prometheus-app.gstg.gitlab.net/graph?g0.expr=quantile%280.5%2C+gitlab_database_rows%7Bquery_name%3D%22mirrors_ready_to_sync%22%7D%29+%3E+5000&g0.tab=1 -Labels: - - alertname = PullMirrorsOverdueQueueTooLarge - - channel = backend - - monitor = gstg-app - - pager = pagerduty - - provider = gcp - - region = us-east - - replica = 02 - - severity = critical -Annotations: - - description = On average, there have been over 5000 overdue pull mirror jobs for the last 10 minutes. Check https://dashboards.gitlab.net/d/_MKRXrSmk/pull-mirrors. - - runbook = troubleshooting/large-pull-mirror-queue.md - - title = Large number of overdue pull mirror jobs: 13342 -Source: https://prometheus-app.gstg.gitlab.net/graph?g0.expr=quantile%280.5%2C+gitlab_database_rows%7Bquery_name%3D%22mirrors_ready_to_sync%22%7D%29+%3E+5000&g0.tab=1 -```",1.0 -14502182,2018-09-27 01:22:27.040,DashboardsGitlabComDown Alert,"dashboards-com-01-inf-ops.c.gitlab-ops.internal had the grafana-server process crash. Running chef manually did not bring it back up. Rebooted the server. The process came back up with the server. The alert cleared a few minutes later. - -I wasn't sure how to determine why the process crashed. @jarv, can you help figure out why this happened?",1.0 -14500572,2018-09-26 23:58:51.538,Add SSL Version to HA Proxy Logging,"The SSL Version of the handshake is good information to track, especially as we look to continue to make improvements to our SSL handshake by removing weak and deprecated methods. - -### Things to Do -- [x] Add `%sslv` in https://gitlab.com/gitlab-cookbooks/gitlab-haproxy templates after `%ft` -- [x] Update FluentD config for added field in HA Proxy Logs -- [x] Ensure Stack Driver ingestion without issue.",5.0 -14498208,2018-09-26 20:24:23.918,Requesting specific permissions on GCP gitlab-demos project,"For my partner demonstration work I need to be able to test install GitLab from the Google MarketPlace on GCP. I'm requesting that my account (dgordon@gitlab.com) be given proper privileges so that I can do this. - -I'm doing this to a cluster I have created in the gitlab-demos project. When I try to install from MarketPlace it gives me the error that I need ""Kubernetes Engine Admin"" permissions. - -![image](/uploads/101584b0c38a8aafb06041490e2e6713/image.png) - - Thanks",1.0 -14491301,2018-09-26 14:52:22.654,Redirect a couple of whitepapers,"We're in the midst of moving some content behind email gates and I'd like to redirect the old URLs to the new landing pages. 
Here are the details: - -| old URL | redirect to URL | -| -------- | -------- | -| /pdfs/resources/gitlab-moving-to-git-whitepaper.pdf | /resources/whitepaper-moving-to-git/ | -| /pdfs/resources/gitlab-scaled-ci-cd-whitepaper.pdf | /resources/whitepaper-scaled-ci-cd/ | - -Thanks!",1.0 -14483845,2018-09-26 11:46:39.801,Plan for making ops.gitlab.net the source of truth for operations repos for chef and terraform automation,"Given that we are automating many of our infrastructure repos with CICD on the ops.gitlab.net instance I think it is probably time to consider switching to ops.gitlab.net as the source of truth for: - -* `https://gitlab.com/groups/gitlab-cookbooks/*` -* https://dev.gitlab.org/cookbooks/chef-repo -* https://gitlab.com/gitlab-com/gitlab-com-infrastructure - -### Reasoning - -* The deployment that we submit MRs to should be the same deployment that runs the CICD jobs, this is important for approving and reviewing. -* We want to encourage contributions to these repositories and ensure that there is very little friction for gitlab.com team members to contribute. -* We currently have https://ops.gitlab.net setup so that anyone with a gitlab.com email address can login and view protected repositories. -* All repositories that were on gitlab.com will be mirrored on gitlab.com -* The downside of this is that this plan does not have a way for the wider community to contribute, there won't be an easy way to enable this without enabling the wider community to create accounts on ops.gitlab.net which may be an option we later consider. - -This is a proposed high level plan on how to do this in a sane way: - -# Plan -## chef-repo -- [x] Prevent pushes to all branches in settings -- [x] Export / Import the project into ops.gitlab.net -- [x] Notify members to update their remotes -- [x] Setup the repository to push to gitlab.com/gitlab-cookbooks/chef-repo (private project) -- [x] Update description so that it is clear where the source is - -## gitlab-com-infrastructure -- [x] Prevent pushes to all branches in settings -- [x] Export / Import the project into ops.gitlab.net -- [x] Notify members to updates their remotes -- [x] Setup the repository to push to gitlab.com/gitlab-com/gitlab-com-infrastructure (public project) -- [x] Update description so that it is clear where the source is - -## gitlab-cookbooks/* -- [ ] Change group membership for gitlab-cookbooks so only ops-gitlab-net can push -- [ ] Update the repository settings on ops.gitlab.net for all gitlab-cookbooks repositories so that they are configured to push to gitlab.com using the ops-gitlab-net user -- [ ] Notify members to updates their remotes\ -- [ ] Update description so that it is clear where the source is - -cc @gitlab\-com/gl\-infra @andrewn @dawsmith @gerir @marin",3.0 -14483319,2018-09-26 11:24:17.583,[Design Document] First iteration of kubernetes migration,"It was brought up during our last on-call handover that we should be careful about the opportunity cost of spending time on efforts (such as internal network hardening) that may not be relevant as we start migrating services to kubernetes. - -Personally, I find this cost difficult to reason about given we do not have yet a roadmap or plan for a first iteration of kubernetes service migration for gitlab.com. This issue is to track just that, a design document for what we can accomplish as a first iterations for kubernetes and a horizon for completing the work against other priorities. 
- -Currently I believe the first likely candidate for k8 will be the registry service given that we have already set it up once and that it is a standalone service. The next service might be sidekiq. I propose we document a transition plan for registry and sidekiq unless there are other services that are a better fit. - -I will put this into the next milestone for consideration, unless we decide it is not something we want to start thinking about. - -cc @andrewn @dawsmith @gerir @Finotto @bjk\-gitlab @jurbanc",2.0 -14479585,2018-09-26 08:59:31.087,give Jose access to the gcp console,@Finotto should have access to the gcp console,1.0 -14479549,2018-09-26 08:58:41.703,the restore pipeline for disk snapshots is broken,"Taking snapshots is working but the restore pipeline is broken: - -https://gitlab.com/gitlab-restore/gitlab-production-snapshots/-/jobs/101608916 - -- [x] Fix the pipeline stage so that if it fails it fails the pipeline step -- [x] Use deadman snitch so that this pipeline is better monitored -- [x] Fix the restore pipeline - - -I don't think this is extremely urgent as the snapshot pipeline is working, just the restore test is broken. I will put it in the next milestone since this one is already at capacity. - -cc @Finotto @dawsmith",2.0 -14468768,2018-09-25 23:12:06.962,Investigate way to make tls 1.2 only endpoint for tls 1.0 deprecation,"We may want to come up with a way to give customers a tls 1.2+ only endpoint to test api calls. - - -AC for issue- come up with proposal and estimate here - share back with Dave, Gerir, Jose and we'll schedule based on how much work.",3.0 -14463636,2018-09-25 17:30:43.270,New bastion host in AWS for Registry analysis,"Spun out of https://gitlab.com/gitlab-org/gitlab-ce/issues/51702#note_104213310 - -> @andrewn Can you spin-up the bastion host (likely best on AWS) with 64GB of RAM and S3 read-only credentials that give GET and LIST permission? -> Once I start doing this ""manual scan"" I will start writing steps needed to do that. - -cc @ayufan - -cc @dawsmith @Finotto for scheduling",2.0 -14430468,2018-09-24 16:25:45.980,Sidekiq is not being restarted after chef deploy for customers.gitlab.com,"REF: https://gitlab.com/gitlab-org/gitlab-ee/issues/7518 - -I was trying to fix a bug related to one of our background jobs but realized that after running `sudo chef-deploy` the workers were still using an old version of the app, so I was required to do a manual restart in order to see the fix working. - -Can we fix it through the [cookbook](https://gitlab.com/gitlab-cookbooks/cookbook-customers-gitlab-com)? - -/cc @jarv @northrup",3.0 -14424092,2018-09-24 11:40:15.835,Create an internal api endpoint for canary,"In order for the canary deployment to use the api internally we will need an api internal load balancer with the canary api nodes. 
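Roughly, the haproxy side would look like the sketch below (names, address and port are placeholders; the real configuration belongs in the gitlab-haproxy cookbook):

```
frontend api_internal
    bind 10.0.0.10:443
    mode tcp
    default_backend canary_api

backend canary_api
    mode tcp
    balance roundrobin
    server api-cny-01 api-cny-01-sv-gprd.c.gitlab-production.internal:443 check
```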
- -- [ ] Create the canary haproxy vm and attach it to an internal loadbalancer for canary -- [ ] Configure the canary hosts to use the internal api endpoint.",4.0 -14424067,2018-09-24 11:38:12.344,"Setup web, api and git canary to receive a small percentage of web traffic","This task is part of the canary project, outlined in the design doc https://docs.google.com/document/d/15jzLb5O4ASPYInxw1CFI-ngLHv2K8w01AuFOkejfcrk/edit# - - -- [x] Create the infrastructure for web/api/git -- [x] Create utilitiesthat will adjust the weight of these servers so they can receive a small portion of gitlab traffic",4.0 -14423994,2018-09-24 11:34:39.101,Configuration updates to introduce optional backend weights to the haproxy configuration,"Described in the design doc - https://docs.google.com/document/d/15jzLb5O4ASPYInxw1CFI-ngLHv2K8w01AuFOkejfcrk/edit# - -We will want to have an option for all backends to introduce weights and also optionally include the canary hosts with a backend weight of zero.",2.0 -14423882,2018-09-24 11:28:00.061,Update elastic credentials pipeline,"The credentials for https://gitlab.com/gitlab-restore/esc-tools are out of date. - -Before we lose logging visibility we need to get updated credentials into the pipeline.",1.0 -14385584,2018-09-21 21:16:37.469,Add Redis alert when secondaries are connected but not sync,"Utilize the metric `master_link_up` to alert us when secondaries of redis fail to keep up their sync with the primary node - -* [x] If possible create a graph of this data -* [x] Ensure this is a high priority alert, this is PagerDuty worthy",1.0 -14385552,2018-09-21 21:13:19.478,redis binary on cache had been upgraded - needs restart to be running against correct binary,"It was discovered on Sept 21, the redis application on the cache servers was running against a deleted binary. -This issue is put in place to restart redis gracefully on the cache servers to resolve this concern.",5.0 -14373223,2018-09-21 15:12:26.366,[Design Document] GEO for DR,"Please fill with your ideas the following design doc: https://docs.google.com/document/d/1_fuskON5fbZgxoBktSp-vFvHUU8IauX-q_YWhnT4J5k/edit?usp=sharing - -AC - switch to doing an MR in the handbook rather than the google doc.",1.0 -14367226,2018-09-21 11:28:05.732,Add PagerDuty webhook to open a GitLab incident issue for all PagerDuty incidents,"Currently we have a fairly clumsy process where we get a page and either open an incident manually in the production tracker or in the case where GitLab is down we use the slack bot to initiate it. - -I don't think we need to change the latter but for the former I think we can automate this in a way that we don't have to open up issues manually. - -I think the best option for us is to probably use a google cloud function webhook to handle the api request to open an issue on an incident. What we should include is a way to avoid spamming the infrastructure tracker when incidents fire again for the same event. For this we can probably add an api query to see if an issue is already open before opening a new one. - -cc @andrewn @jurbanc - -Tasks (updated by @peterdam) -* [x] Understand what information GitLab production issues contain -* [X] Review recent PagerDuty incidents to understand what information is captured [Notes](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5121#note_119676468) -* [x] Understand the conditions for which a GitLab production issue must be created i.e. requirements. 
[Notes](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5121#note_119591691) -* [X] Figure out which GCP Project to use - **Preference is gitlab-ops** [Notes](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5121#note_120432576) -* [x] Figure out what is the team's preferred FaaS provider (GCP or AWS) - **Preference is GCF** [Notes](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5121#note_119628319) -* [x] Understand [alertmanager-slack-bridge](https://gitlab.com/gitlab-com/gl-infra/slackline) -* [x] Understand [security pager](https://gitlab.com/gitlab-com/security-tools/security-pager) - **Not relevant for this issue** [Notes](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5121#note_120454499) -* [ ] Determine service accounts to be used when connecting to GitLab, maybe we have a generic service account -* [x] Read up on GitLab REST API -* [X] Create flow diagram as proposal [Notes](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5121#note_120636590) -* [x] Enable GCF API for `gitlab-ops` Google Project -* [ ] Write function/code in Python3.7 -* [ ] Create PagerDuty Webhook which will connect to `https://us-east1-gitlab-ops.cloudfunctions.net/` -* [ ] Deploy -* [ ] Look into Zapier - PagerDuty/GitLab integration - Seems like the easy option -* [ ] ... more to come ... - -",2.0 -14363523,2018-09-21 08:45:55.928,Introduce suspension of changes and releases during important events,"During the livestream on 2018-09-20 we decided to suspend a [network configuration change](https://gitlab.com/gitlab-com/gl-infra/production/issues/478) and I think it might be worth discussing a way to formalize this process. - -What we need: - -1. something to call it -2. something that is accessible for the team and company to view -3. something we can query from CICD to stop our automated changes from rolling out - - -For (1) we could call it Production SOC (suspension of changes) though it is not a standard term. We should avoid other popular terms like ""blackout windows"" or ""blackdays"". I did some googling around and couldn't find anything that is standard and would love to hear suggestions. - -For (2) I would like to signal boost this but putting it on the shared meeting calendar, if that is a bit too much we could create a new calendar for it. - -For (3) would a google calendar work or should we consider other options? -the last internal company pipeline tool I used had a feature where you could force pipelines to pause during the certain time intervals, @ayufan is there anything like this on the roadmap for cicd? 
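In the meantime, point (3) doesn't have to wait for a fancy integration. A minimal sketch of a pre-flight check the chef-repo / gitlab-com-infrastructure pipelines could run before any apply stage; the `FREEZE_URL` endpoint is a placeholder for whatever source of truth we pick (calendar export, small service, flat file), not something that exists today:

```shell
# Hypothetical pre-flight step for automated change pipelines.
# FREEZE_URL is a placeholder; it only needs to answer "is a suspension active right now?".
FREEZE_URL="https://example.ops.gitlab.net/change-suspension/active"

if curl --silent --fail "$FREEZE_URL" | grep -q '"active": *true'; then
  echo "Change suspension in effect: refusing to roll out changes" >&2
  exit 1
fi
```

If we go with a Google calendar instead, the same idea applies: the job fetches the calendar and fails when the current time falls inside an event.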
- -## Name suggestions (feel free to vote or add more if you have them) - -* SOC (suspension of changes) :ghost: -* Change suspension :palm\_tree: -* office hours :roller\_coaster: -* ICL (Infrastructure Change Lock) :8ball: -* Change freeze :snowman: - -cc @gitlab\-com/gl\-infra @glopezfernandez @andrewn",3.0 -14349534,2018-09-20 23:04:37.139,CI/CD for terraform (require apply stages),"To close out the final two implementation points in #4872: after #5114 has been successfully implemented and validated, we need to update the pipeline in gitlab-com/gitlab-com-infrastructure> so that the apply stages have the setting `allow_failure: false` to ensure that they are [blocking manual actions](https://docs.gitlab.com/ee/ci/yaml/#when-manual), and are required before merging to master.",1.0 -14349499,2018-09-20 22:59:09.856,CI/CD for terraform (tf apply),"Per the high-level plan in #4872, we need to update the CI/CD pipeline for gitlab-com/gitlab-com-infrastructure> to [manually run](https://docs.gitlab.com/ee/ci/yaml/#when-manual) `tf apply` for each environment, starting with `gstg` - -Also see [this section](https://www.terraform.io/guides/running-terraform-in-automation.html#multi-environment-deployment) of the terraform documentation for more details about automating deployment to multiple environments. Since we are not (currently) using terraform workspaces, the MVC first pass should probably just have optional manual steps for each production environment after `gstg` (i.e. `ops` and `gprd`) using the build artifacts from #5113. Eventually we may need to consider re-organizing repositories along the lines of [this comment](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4872#note_100185346).",1.0 -14349436,2018-09-20 22:50:44.492,CI/CD for terraform (tf plan),"Per the high-level plan in #4872, we need to update the CI/CD pipeline for gitlab-com/gitlab-com-infrastructure> to run `tf plan` for each environment during a ""build"" stage, with a build artifact of the plan file and `.terraform` directory for each environment that can be re-used for automated `tf apply` stages later. - -See [this section](https://www.terraform.io/guides/running-terraform-in-automation.html#plan-and-apply-on-different-machines) of the terraform documentation for more details.",1.0 -14333375,2018-09-20 10:38:35.407,301 redirect for a page on the about website,"Please infra team, can you set up a 301 from https://about.gitlab.com/blog/categories/release/ to https://about.gitlab.com/blog/categories/releases/? - -The former is a 404 due to renaming a blog category from `release` to `releases`. cc/ @rebecca FYI for future changes and so that you can check if we need more redirects from old categories. - -It caused some backslash on HN as we linked that page from the docs. See https://gitlab.slack.com/archives/C16HYA2P5/p1537391564000100 for reference.",2.0 -14309457,2018-09-19 19:00:47.589,Apply receive_max_input_size setting on GitLab.com,"@rdavila's change to limit the max push size has been merged: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/20758 (thanks @rdavila :+1:!) - -See https://gitlab.com/gitlab-org/gitlab-ce/issues/26044 for further details. - -This was a production request, and an important control for improving the stability of GitLab.com. - -See should apply this control to GitLab.com. What is a reasonable maximum size? 
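Once we settle on a number, rolling it out shouldn't need a deploy. A rough sketch, assuming the setting is exposed through the application settings API as `receive_max_input_size` and that the value is given in bytes (both worth double-checking against the docs for the running version); the 50MB value is purely illustrative:

```shell
# Illustrative only: 52428800 bytes = 50MB. The actual limit still needs to be agreed on.
curl --request PUT --header "PRIVATE-TOKEN: $ADMIN_API_TOKEN" \
  "https://gitlab.com/api/v4/application/settings?receive_max_input_size=52428800"
```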
- -cc @jarv @dawsmith @Finotto",1.0 -14271485,2018-09-18 23:36:07.516,Chef Failing on Prometheus Alert Managers,"Chef is failing to run on `alerts-01-inf-gprd.c.gitlab-production.internal` and `alerts-02-inf-gprd.c.gitlab-production.internal` with the following error: - -```ruby - * template[/opt/prometheus/alertmanager/alertmanager.yml] action create[2018-09-18T23:32:58+00:00] INFO: Processing template[/opt/prometheus/alertmanager/alertmanager.yml] action create (gitlab-alertmanager::default line 70) - - - ================================================================================ - Error executing action `create` on resource 'template[/opt/prometheus/alertmanager/alertmanager.yml]' - ================================================================================ - - Chef::Mixin::Template::TemplateError - ------------------------------------ - undefined method `[]' for nil:NilClass - - Resource Declaration: - --------------------- - # In /var/chef/cache/cookbooks/gitlab-alertmanager/recipes/default.rb - - 70: template node[""alertmanager""][""flags""][""config.file""] do - 71: source ""alertmanager.yml.erb"" - 72: owner node[""prometheus""][""user""] - 73: group node[""prometheus""][""group""] - 74: mode ""0644"" - 75: variables(conf: alertmanager_conf) - 76: notifies :hup, ""runit_service[alertmanager]"" - 77: end - 78: - - Compiled Resource: - ------------------ - # Declared in /var/chef/cache/cookbooks/gitlab-alertmanager/recipes/default.rb:70:in `from_file' - - template(""/opt/prometheus/alertmanager/alertmanager.yml"") do - action [:create] - retries 0 - retry_delay 2 - default_guard_interpreter :default - source ""alertmanager.yml.erb"" - variables {:conf=>{""slack""=>{""channel""=>""#alerts"", ""api_url""=>""https://hooks.slack.com/services/XXXXX/XXXXX/XXXXX""}, ""pagerduty""=>{""service_key""=>""XXXXX"", ""low_prio_service_key""=>""XXXXX""}}} - declared_type :template - cookbook_name ""gitlab-alertmanager"" - recipe_name ""default"" - owner ""prometheus"" - group ""prometheus"" - mode ""0644"" - path ""/opt/prometheus/alertmanager/alertmanager.yml"" - verifications [] - end - - Template Context: - ----------------- - on line #225 - 223: - name: dead_mans_snitch - 224: webhook_configs: - 225: - url: ""https://nosnch.in/<%= @conf['snitch']['api_key'] %>"" - 226: send_resolved: false - - Platform: - --------- - x86_64-linux - -[2018-09-18T23:32:58+00:00] INFO: Running queued delayed notifications before re-raising exception - -Running handlers: -[2018-09-18T23:32:58+00:00] ERROR: Running exception handlers - - PrometheusHandler -Running handlers complete -[2018-09-18T23:32:58+00:00] ERROR: Exception handlers complete -Chef Client failed. 
14 resources updated in 31 seconds -[2018-09-18T23:32:58+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out -[2018-09-18T23:32:58+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report -[2018-09-18T23:32:58+00:00] ERROR: - -Chef::Mixin::Template::TemplateError (undefined method `[]' for nil:NilClass) on line #225: - -223: - name: dead_mans_snitch -224: webhook_configs: -225: - url: ""https://nosnch.in/<%= @conf['snitch']['api_key'] %>"" -226: send_resolved: false -```",2.0 -14268826,2018-09-18 19:41:52.427,Redirect /partners to /applications on about.gitlab.com,"Old URL: `https://about.gitlab.com/partners` -New URL: `https://about.gitlab.com/applications`",1.0 -14254636,2018-09-18 15:35:18.768,"Switch GitLab.com Early Adopters to Free on October 1st, 2018","We extended the Early Adopter program to Oct 1st (see https://gitlab.com/gitlab-com/marketing/general/issues/3050) and emailed impacted users around Sept 19th. - -On October 1st, we should switch all Early Adopter plans to Free plans on GitLab.com.",2.0 -14242970,2018-09-18 12:20:00.727,Collect Postgres logs with log_min_duration_statement = 0 for analysis and DB experiments,"These actions are part of issue #4921 - -As discussed with @abrandl and @dawsmith, I'm going to collect a sample (30-60 minutes) of all-queries log on Wednesday, Sep 19, starting at 12:00 UTC (the busiest time is ~13:00 UTC). - -The brief description of actions ---- -To do that, I'll setup `log_min_duration_statement` with manual `alter system set ..` command on the master, followed by `select pg_reload_conf();`. The plan (see details below) is to reach 0 iteratively, descending from the current value of 1 second to 0 by steps, with ~15 minutes apart, controlling IO and the log size. - -Projected IO impact ---- -The projected IO impact, based on `pg_stat_statements` view (it lacks query parameters, but this just adds some small error to the forecast – there are no huge Geo queries on the master node), based on last 10 hours of observation: - -```sql -gitlabhq_production=# select now(), :'TS_PGSS_RESET' as pg_stat_statements_last_reset, now() - :'TS_PGSS_RESET' since_reset, sum(calls * length(query)) as total_bytes, sum(calls * length(query)) / extract(epoch from now() - :'TS_PGSS_RESET') as bytes_per_sec from pg_stat_statements; --[ RECORD 1 ]-----------------+------------------------------ -now | 2018-09-18 07:51:16.052095+00 -pg_stat_statements_last_reset | 2018-09-17 21:19:59.557029+00 -since_reset | 10:31:16.495066 -total_bytes | 37407827046 -bytes_per_sec | 987626.41529573 -``` - -– this gives an estimate of ~1MB/s for writing Postgres log. This is based on not the busiest hours though, so it's subject to checking during Tuesday mid-day. This is an estimate for `log_min_duration_statement = 0`. For higher values, including the possible lowest positive `1ms`, the excepted IO will be much lower. 
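As a quick sanity check, the same figure falls out of the raw numbers above (37,407,827,046 bytes of statement text over the 10h31m16s since the reset):

```shell
# 10:31:16 since pg_stat_statements_reset = 37,876 seconds
echo $(( 37407827046 / (10*3600 + 31*60 + 16) ))   # ~987,639 bytes/s, i.e. ~1MB/s
```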
- -Additional observation: a lot of queries under 1ms ---- -If we take Top-15 (by `total_time`) queries among those which `max_time` is < 1mas, we'll see that those take >50% of time: -```sql -select - total_time, - sum(total_time) over () as total_time_registered_in_pss, - round(100::real * (sum(total_time) over (order by total_time desc)) / sum(total_time) over ())::text - || '%' as accum_percentage, - min_time, - round(mean_time::numeric, 2) as mean_time, - max_time, - query -from pg_stat_statements -where max_time < 1 -order by total_time desc -limit 15; -``` - -The detailed plan of actions ---- - -0) Tuesday, 2018-09-18 12:00 UTC: reset pg_stat_statements and double check the write IO estimate. -In psql on Postgres master: -```sql -select now() as ""TS_PGSS_RESET"" from pg_stat_statements_reset() \gset - -select - now(), - :'TS_PGSS_RESET' as pg_stat_statements_last_reset, - now() - :'TS_PGSS_RESET' since_reset, - sum(calls * length(query)) as total_bytes, - sum(calls * length(query)) / extract(epoch from now() - :'TS_PGSS_RESET') as bytes_per_sec -from pg_stat_statements; - -\watch 300 -``` - -1) Wednesday, 2018-09-19 11:xx Preparations. - -Set millisecond precision in log timestamps – use `%m` instead of `%t` (currently we have second-level precision, `%t`): -``` -alter system set log_line_prefix = '%m [%p]: [%l-1] db=%d,user=%u '; -select pg_reload_conf(); -``` - -2) Wednesday, 2018-09-19 12:00 UTC: start descending. - -In psql on Postgres master: -```sql -alter system set log_min_duration_statement = '100ms'; -select pg_reload_conf(); -``` - -3) Observe the current size of the log and avg speed of its growth: -```shell -ls -lah /var/log/gitlab/postgresql/current - -# this will print avg bytes/sec based on the full ""current"" log file -sudo head -n1 /var/log/gitlab/postgresql/current | head -c 22 | sed 's/_/T/g' | xargs date +""%s"" --date \ - | awk \ - -v date=""$(date +""%s"")"" \ - '{print (date - $1)}' \ - | awk \ - -v bytes=""$(sudo wc -c /var/log/gitlab/postgresql/current | awk '{print $1}')"" \ - '{print (bytes / $1)}' -``` - -And check IO with: - - `sudo iotop -o` - - monitoring - -4) Wednesday, 2018-09-19 12:15 UTC: continue descending, set to '10ms' - -In psql on Postgres master: -```sql -alter system set log_min_duration_statement = '10ms'; -select pg_reload_conf(); -``` - -+ continue observing as described in step 3 - -5) Wednesday, 2018-09-19 12:30 UTC: continue descending, set to '5ms' - -In psql on Postgres master: -```sql -alter system set log_min_duration_statement = '5ms'; -select pg_reload_conf(); -``` - -+ continue observing as described in step 3 - -+ check IO with `sudo iotop -o` and monitoring (https://dashboards.gitlab.net/d/pEfSMUhmz/postgresql-disk-io?orgId=1 + GCP graphs) - -5) Wednesday, 2018-09-19 12:45 UTC: continue descending, set to '1ms' - -In psql on Postgres master: -```sql -alter system set log_min_duration_statement = '1ms'; -select pg_reload_conf(); -``` - -+ continue observing as described in step 3 - -+ check IO with `sudo iotop -o` and monitoring (https://dashboards.gitlab.net/d/pEfSMUhmz/postgresql-disk-io?orgId=1 + GCP graphs) - -6) DECISION TO BE MADE BASED ON OBSERVED NUMBERS: If the current IO caused by logging is > 1MB/s, stop with `1ms`, and collect only partial logs, with this threshold. 
-If it's below, continue with last step of descending to collect all-queries log: - -```sql -alter system set log_min_duration_statement = 0; -select pg_reload_conf(); -``` - -7) Collect logs (minimum 15 minutes, max 30 minutes): - -```shell -sudo cp /var/log/gitlab/postgresql/current ~nikolays/postgres_0_20180919.log -``` - -8) Return to the initial state - in psql on the master node: - -```sql -alter system reset log_min_duration_statement; -alter system reset log_line_prefix; -select pg_reload_conf(); -\c -show log_min_duration_statement; -show log_line_prefix; -``` - -Notes on possible risks ---- -No risks to have any performance degradation are expected. In case of any unpredictable IO, the step 8 will be immediately applied. -It expected, that ~2-4GB of logs will be generated during collection phase (with projected generation speed ~1-2MB/s), which is an equivalent to ~10 days of logging with current `log_min_duration_statement` threshold (1s) -This might cause some delays in Kibana processing. - -TODO after these works are done ---- - -After this is complete, TODO (separate issues): - -- process, analyze logs with pgBadger -- PoC usage of logs with pgreplay on experimental DB nodes -- `set log_min_duration_statement to 100ms;` permanently -- set better `log_line_prefix` -- stop using ""one for all"" DB roles, use separate ones for different apps and for administration. - - - -/cc @gitlab\-com/gl\-infra",4.0 -14239602,2018-09-18 09:48:55.566,[Design Document] Terraform Automation,Design doc MR: https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15647,3.0 -14239562,2018-09-18 09:47:38.778,[Design Document] Chef Automation,Please fill with your ideas the following design doc: https://docs.google.com/document/d/17DhmLAUkMP6lusX7OWcPXoQeER2u7OuhJLWzOSriG64/edit?usp=sharing,4.0 -14239539,2018-09-18 09:46:53.231,[Design Document] Monitoring Review Storages,Please fill with your ideas the following design doc: https://docs.google.com/document/d/1fYkjKLPlpABxTfrF8KRK3KF66JtSfml1BiF7lpzLCMo/edit?usp=sharing,5.0 -14230666,2018-09-18 05:14:53.481,Chef not running on some clients,An inadvertent action deleted the clients from the `syslog_client / _default` vault which has stopped Chef processing on several nodes still in Azure.,1.0 -14219800,2018-09-17 17:30:13.768,design.gitlab.com is offline,threads in # production - https://gitlab.slack.com/archives/C101F3796/p1537187945000100,2.0 -14219511,2018-09-17 17:10:57.194,Error when deploying customers.gitlab.com,"I'm getting this error when trying to deploy the customers app through `sudo chef-client`: - -``` -================================================================================ -Recipe Compile Error in /var/chef/cache/cookbooks/gitlab-server/recipes/rsyslog_client.rb -================================================================================ - -ChefVault::Exceptions::SecretDecryption ---------------------------------------- -syslog_client/_default is not encrypted with your public key. Contact an administrator of the vault item to encrypt for you! 
- -Cookbook Trace: ---------------- - /var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb:17:in `get' - /var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb:87:in `get_secrets' - /var/chef/cache/cookbooks/gitlab-server/recipes/rsyslog_client.rb:2:in `from_file' - -Relevant File Content: ----------------------- -/var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb: - - 10: @node = node - 11: Chef::Log.info(""gitlab_secrets: BE: 'chef_vault', path: '#{@path}', key:'#{@key}'"") - 12: end - 13: - 14: def get - 15: require 'chef-vault' - 16: if ChefVault::Item.vault?(@path, @key) - 17>> Hash(ChefVault::Item.load(@path, @key)) - 18: else - 19: Chef::Log.warn(""This is not a vault, will try to load the #{@key} in the #{@path} databag."") - 20: Hash(Chef::DataBagItem.load(@path, @key)) - 21: end - 22: end - 23: end -``` - -@skarbek maybe it's related to some recent changes in the vault?",1.0 -14213721,2018-09-17 14:13:39.430,2018 - standups and planning updates,"Proposing making a slack channel for standups and planning to have a record / place for team communication on these topics. - -~~Channel name: infra-standup~~ - -Based on comments- hold off on making the channel and just use # sre-lounge - -### Standups: - -* [x] todo: setup the bot to remind - -they should cover: -What are you working on? -What are you blocked on? -Opportunities to pair up? -Things I worked on that can be handed off? - -### Milestone Planning: -Thinking about how best to make milestone and issue level planning async. -The current thought is to use the standup channel for posts to request feedback on issues that need to be planned in the current and next milestones. - -Since we are async, we should not have to depend on a 'planning meeting' and instead, work on planning/reviewing issues and designs as they occur. The managers of the team can serve as POs to make sure and remind the team we should be getting items planned early. Ideally, we are planning/reviewing items for the next milestone, though as we start, there may be some items in the current milestones needing review. - -cc @gitlab\-com/gl\-infra",1.0 -14163089,2018-09-14 17:16:22.741,Remove DNS record for `log.gitlap.com`,"This ELK cluster has been destroyed but the DNS record for it is still live. - - -/cc @dawsmith",1.0 -14162646,2018-09-14 16:39:33.266,Setup alerts for both custom and non-custom ssl endpoints for pages,"This comes out of the pages postmortem. The front-end for pages ip should never be released. We should have critical (pagerduty) alerts for both staging and production so we are notified immediately if there is an issue. By the time we are paged in production it will be too late so we need to do this for staging as well. - -For the alert I think the best thing to do is scrape an ssl custom domain, that is configured to use an A record. -",2.0 -14155269,2018-09-14 12:17:38.746,network peer the monitoring subnets in gprd/gstg/ops,"For alertmanager clustering it is desired that we network peer the monitoring subnets. This will cover both prometheus and alertmanager. - -Note that internal dns resolution will not work across projects so if we do configure cross project connections it will need to be by the private ip address. 
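For reference, the peering itself is one command per side. The sketch below uses gcloud with the network names as assumptions (the real names need to match what terraform manages, and ideally the peering should land in terraform rather than be created by hand); peering must be created from both projects before traffic flows, and firewall rules allowing the prometheus/alertmanager hosts to reach each other are still needed on top:

```shell
# Sketch only: the network names (gstg/gprd) are assumptions.
gcloud compute networks peerings create gstg-to-gprd \
  --project=gitlab-staging-1 --network=gstg \
  --peer-project=gitlab-production --peer-network=gprd

gcloud compute networks peerings create gprd-to-gstg \
  --project=gitlab-production --network=gprd \
  --peer-project=gitlab-staging-1 --peer-network=gstg
```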
- -For more context see https://dev.gitlab.org/cookbooks/chef-repo/merge_requests/2531#note_134273 - -- [x] Peer gstg and gprd (see how this was done for the ops network) -- [x] Create firewall rules that allow alertmanagers to communicate with each other",2.0 -14151997,2018-09-14 09:56:18.938,Enable object storage for maven packages on GitLab.com,In 11.3 we released maven packages support https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/2697/diffs. @ayufan suggested we enable object storage support for it as soon as possible to avoid migration in future,3.0 -14150172,2018-09-14 08:59:31.251,Whitelist metrics for public hosts,"The ops Prometheus instance needs to access metrics ports (9100-9300) on several public hosts. - -These hosts are on ec2: - -* license.gitlab.com -* packages.gitlab.com -* version.gitlab.com - -We can either tunnel this traffic to ec2 from the ops zone, or we can use the public IPs.",1.0 -14149366,2018-09-14 08:24:08.183,onboarding jfinotto ssh access to the environment,"Hello here my .pub -Username jfinotto - -ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNidMhz6K0KoKcnM7l08YWTp1T84bjP5tF0u6utfTlpjjqrIFI2mhYicga2nRxypb6xriaiOtR+w9uzVIWdJbePG3mu82vdIRUrk9C/tEQsMy8Y51jUJF/i0JsKMKfTQnrC9cfe0L4zOP/NcQwL0GUtIhc8Nw8T1B/2i7Djz39RYWDEqNlwYMl9NPtjW1067ns+YZw68bKbB5e+ndaheE1BtcmmdU9n9Cx4QB9366GO7UWwx7f9o3Ifd8/NwDsz81rx8ie2rwL4hpbUsDkxwY2DPaZ68MnTScLIdrnekxs/OGmoTBwLf/VZiQ7m3z9tDbQTZMUqBReXVD+354krPD7 jfinotto@Joses-MBP.fritz.box",1.0 -14149086,2018-09-14 08:11:47.612,Check for internal ip addresses when updating our restriction lists,"We recently had an incident where the blackbox ip address was flagged and put into a restricted list. - -The following two repos: -* https://ops.gitlab.net/gitlab-com/security-tools/front-end-security.git -* https://ops.gitlab.net/gitlab-com/security-tools/recaptcha.git - -Would benefit from a pipeline check to ensure that none of the IPs are internally reserved.",2.0 -14140069,2018-09-13 21:14:30.115,Empty Home Dashboard on the public metrics server `dashboards.gitlab.net`,"Navigating to `https://dashboards.gitlab.com` for the first time in a fresh browser shows an empty ""Home Dashboard"". - -It used to redirect to GitLab triage.",1.0 -14133730,2018-09-13 15:17:46.906,remove bootstrap=false from the base roles,This is necessary because it forces a reconfigure after package install. The right way is for the deployment orchestrator (takeoff) to do this. This issue is to track reverting https://dev.gitlab.org/cookbooks/chef-repo/merge_requests/2525 after takeoff changes are made.,1.0 -14118575,2018-09-13 08:59:29.970,Backup restore hangs/failed,"On 2018-09-12, a pipeline run for https://gitlab.com/gitlab-restore/postgres-01.db.prd.gitlab.com was triggered. This spins up a new GCE database instance and pulls the latest backup using `wal-e backup-fetch`. - -Timeline: - -* Sep 12 15:28 - Instance boot -* Sep 12 15:33 - wal-e starts `backup-fetch` -* Sep 12 15:33 - Encryption errors the may be caused by a permission error for `/var/opt/gitlab/postgresql/data/server.crt` (see log below) - may be harmless -* Sep 13 2:11 - wal-e downloads `part_00001959.tar.lzo` (this is the last partition - so may even have completed restoring the backup) - no more log messages after this. -* Sep 13 8:58 - the `backup-fetch` process still hangs (strace see below) - -The backup we were fetching was: `base_00000006000084260000008D_03825496`. - -strace of the stalled backup-fetch process: - -``` -gitlab-+ 21309 27.1 0.8 363136 65124 ? 
Sl Sep12 283:07 /opt/wal-e/bin/python3 /opt/wal-e/bin/wal-e backup-fetch /var/opt/gitlab/postgresql/data/ base_00000006000084260000008D_03825496 -root@restore-postgres-01-prd:/var/opt/gitlab/postgresql# strace -c -p 21309 -strace: Process 21309 attached -^Cstrace: Process 21309 detached -% time seconds usecs/call calls errors syscall ------- ----------- ----------- --------- --------- ---------------- - 60.60 0.063848 12 5504 clock_gettime - 29.87 0.031475 20 1570 epoll_wait - 9.52 0.010033 13 791 wait4 ------- ----------- ----------- --------- --------- ---------------- -100.00 0.105356 7865 total -``` - -In the beginning, we were seeing these errors (which may be harmless and caused by permission errors for the certificate): -``` -Sep 12 15:37:35 restore-postgres-01-prd startup-script: INFO startup-script: wal_e.worker.s3.s3_worker INFO MSG: beginning partition download -Sep 12 15:37:35 restore-postgres-01-prd startup-script: INFO startup-script: DETAIL: The partition being downloaded is part_00000000.tar.lzo. -Sep 12 15:37:35 restore-postgres-01-prd startup-script: INFO startup-script: HINT: The absolute S3 key is postgres-02/basebackups_005/base_00000006000084260000008D_03825496/tar_partitions/part_00000000.tar.lzo. -Sep 12 15:37:35 restore-postgres-01-prd startup-script: INFO startup-script: STRUCTURED: time=2018-09-12T15:37:35.702363-00 pid=21309 -Sep 12 15:37:35 restore-postgres-01-prd wal_e.worker.s3.s3_worker INFO MSG: beginning partition download#012 DETAIL: The partition being downloaded is part_00000000.tar.lzo.#012 HINT: The absolute S3 key is postgres-02/basebackups_005/base_00000006000084260000008D_03825496/tar_partitions/part_00000000.tar.lzo.#012 STRUCTURED: time=2018-09-12T15:37:35.702363-00 pid=21309 -Sep 12 15:37:38 restore-postgres-01-prd startup-script: INFO startup-script: gpg: block_filter 0x24e83c0: read error (size=15934,a->size=15934) -Sep 12 15:37:38 restore-postgres-01-prd startup-script: INFO startup-script: gpg: block_filter 0x24e8ea0: read error (size=16165,a->size=16165) -Sep 12 15:37:38 restore-postgres-01-prd startup-script: INFO startup-script: gpg: WARNING: encrypted message has been manipulated! -Sep 12 15:37:38 restore-postgres-01-prd startup-script: INFO startup-script: gpg: block_filter: pending bytes! -Sep 12 15:37:38 restore-postgres-01-prd startup-script: INFO startup-script: gpg: block_filter: pending bytes! 
-Sep 12 15:37:38 restore-postgres-01-prd startup-script: INFO startup-script: lzop: Inappropriate ioctl for device: -Sep 12 15:37:41 restore-postgres-01-prd wal_e.retries WARNING MSG: retrying after encountering exception#012 DETAIL: Exception information dump: #012 Traceback (most recent call last):#012 File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/retries.py"", line 87, in shim#012 return f(*args, **kwargs)#012 File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/worker/s3/s3_worker.py"", line 78, in fetch_partition#012 TarPartition.tarfile_extract(pl.stdout, self.local_root)#012 File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/tar_partition.py"", line 301, in tarfile_extract#012 tar.extract(member, path=dest_path)#012 File ""/opt/wal-e/lib/python3.5/tarfile.py"", line 2038, in extract#012 numeric_owner=numeric_owner)#012 File ""/opt/wal-e/lib/python3.5/tarfile.py"", line 2108, in _extract_member#012 self.makefile(tarinfo, targetpath)#012 File ""/opt/wal-e/lib/python3.5/tarfile.py"", line 2148, in makefile#012 with bltn_open(targetpath, ""wb"") as target:#012 PermissionError: [Errno 13] Permission denied: '/var/opt/gitlab/postgresql/data/server.crt'#012 #012 HINT: A better error message should be written to handle this exception. Please report this output and, if possible, the situation under which it arises.#012 STRUCTURED: time=2018-09-12T15:37:41.346707-00 pid=21309 -```",5.0 -14087853,2018-09-12 10:00:20.853,[Design Document] Canary Deployment Testing,Please fill with your ideas the following design doc : https://docs.google.com/document/d/15jzLb5O4ASPYInxw1CFI-ngLHv2K8w01AuFOkejfcrk/edit?usp=sharing,3.0 -14087686,2018-09-12 09:57:34.420,[Design Document] Configure properly Autovacuum for postgresql,"Please fill with your ideas the following design doc : -https://docs.google.com/document/d/1rQxVHrVb_LGGsG69G7r7-8vTVkN5Slr7eMWw9uxMTFA/edit?usp=sharing - -Deliverables: - -- [x] [New Document Version](https://docs.google.com/document/d/16uj2mK4k93xumNdU4qWVNmh8EI-yC4e8fw248S5LB-0/edit) -- [x] Change Request. cc/ @gerardo.herzig ",12.0 -14087353,2018-09-12 09:42:46.070,[Design Document] Postgresql Backup & Recovery,"Please fill with your ideas the following design doc : -https://docs.google.com/document/d/1X51du8kpxX4GAmipC2DeT61JDNqTz78sHPgTBkSxa5U/edit?usp=sharing - -Once ready, move the design to the Handbook and ping the team to review the MR before this is being merged. Once the MR is merged, the design is considered final. - -Handbook for designs: https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/source/handbook/engineering/infrastructure/design/index.html.md",0.0 -14078014,2018-09-11 23:40:56.708,Hurricane Florence Readiness,"Our region is us-east1 in Moncks Corner, SC - hurricane path is going to be near there based on NHC maps. - -This is a general issue to group together actions we will take to be ready should we see disruptions of service in our regions. - -Initial actions: - -* [ ] Look into options for alternate region backup of git / nfs data -* [x] Cloud storage is already multiregional - anything else to do? 
-* [x] Make sure about.gitlab.com and any other important assets are in an alternate region -* [ n/a too big ] Start work in Terraform to build out stack for Geo in us-west1 - -",3.0 -16586793,2018-12-11 22:57:31.591,Services running on the Patroni cluster need to have their limits matching omnibus,"gitlab-omnibus runs all of its services with [bumped limits](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/2fc14ae20a136ebb2662383b8d9c97d5f9ff0e1e/config/templates/runit/runsvdir-start.erb#L20-37). We are using custom cookbooks to install pgbouncer and PostgreSQL on the Patroni cluster, so we need to have a way to apply the same limits these services. - -Things to take into consideration: -* PostgreSQL is not managed by systemd or the likes, rather, it's spawned by Patroni so we may need to apply the limits to the Patroni service to be inherited by PostgreSQL. -* Bumping such limits will likely need the services to be restarted, which can cause brief disruption of service, so careful planning is needed for production roll-out. - -~""corrective action""",3.0 -16586751,2018-12-11 22:51:49.222,pgbouncer mtail doesn't catch some errors,"After the Patroni migration, we started seeing this error in the logs: ""ERROR accept() failed: Too many open files"". It wasn't caught by [mtail regexp](https://gitlab.com/gitlab-cookbooks/gitlab-mtail/blob/b0a57b7bd3b2d18c564d5fccc08081c7673a442d/files/default/mtail/pgbouncer.mtail#L33), so we need to correct it. - -~""corrective action""",1.0 -16583833,2018-12-11 20:21:11.756,Setup CI/CD for terraform modules,"With each (GCP) module now residing in its own repository, we need to include a few things to move beyond MVC - -1. [x] Add a CHANGELOG to each module repo -1. [x] Add a README to each module repo -1. [x] Add a license to each module repo -1. [x] Add a VERSION file to each module repo -1. [ ] Configure CI - 1. [x] Run `tf-lint` - 1. [x] Check `terraform fmt` - 1. [x] Run `terraform validate` - 1. [x] Ensure version gets bumped on merge to master - 1. [x] Automatically tag repo on merge to master",6.0 -16571102,2018-12-11 11:14:28.822,Add in the template of the Patroni migration that we need to use TMUX or SCREEN from the deploy hosts,add in the template of the Patroni migration that we need to use TMUX or SCREEN from the deploy hosts,1.0 -16563840,2018-12-11 05:48:56.418,Investigate why/how Pingdom checks got deleted,This is to investigate why/how 12 of our Pingdom checks got deleted sometime between 12/10 and 12/11.,2.0 -16561930,2018-12-11 02:35:46.275,New patcher should post to slack,"When running the new patcher CI/CD process, I find myself manually posting a message to slack every time a stage completes. - -The automation should do this for me. - -At the very least, there should be slack messages for: -- Starting patching staging -- Finished patching staging -- Finished patching canary -- Prompt for manual step to patch production -- Finished patching production",2.0 -16561761,2018-12-11 02:18:41.369,PostgreSQL_ExporterErrors Alerts In Staging,"Staging has been sending PostgreSQL_ExporterErrors alerts. I've been silencing them, but this should really be fixed. It's possible that this alert is no longer necessary and can be deleted, but I don't have enough information to make that call. 
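One way to get that information is to ask Prometheus for the metric behind the alert directly instead of waiting for the next page. A quick sketch, assuming the host running it can reach the staging Prometheus:

```shell
# pg_exporter_last_scrape_error == 1 means the exporter hit an error on its last scrape;
# an empty result or a 0 value suggests the alert may indeed be stale.
curl --silent --get 'https://prometheus.gstg.gitlab.net/api/v1/query' \
  --data-urlencode 'query=pg_exporter_last_scrape_error{environment="gstg"}'
```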
- -``` -Labels: - - alertname = PostgreSQL_ExporterErrors - - channel = database - - environment = gstg - - fqdn = postgres-01-db-gstg.c.gitlab-staging-1.internal - - instance = postgres-01-db-gstg.c.gitlab-staging-1.internal:9187 - - job = postgres - - monitor = gstg-default - - pager = pagerduty - - provider = gcp - - region = us-east - - replica = 01 - - severity = critical - - stage = main - - tier = db - - type = postgres -Annotations: - - description = This may indicate postgres_exporter is not running or a buggy query in query.yaml on postgres-01-db-gstg.c.gitlab-staging-1.internal - - title = Postgres exporter is showing errors for the last hour -Source: https://prometheus.gstg.gitlab.net/graph?g0.expr=pg_exporter_last_scrape_error%7Benvironment%3D~%22gprd%7Cgstg%22%7D+%3D%3D+1&g0.tab=1 -```",1.0 -16561374,2018-12-11 01:28:04.209,Disk Full on version.gitlab.com due to failing WAL archive - continuation- ensure backups are working,"It looks like we need to fix the destination for the archive storage. - -``` -wal_e.main ERROR MSG: no storage prefix defined - HINT: Either set one of the --file-prefix, --gs-prefix, --s3-prefix or --wabs-prefix options or define one of the WALE_FILE_PREFIX, WALE_GS_PREFIX, WALE_S3_PREFIX, WALE_SWIFT_PREFIX or WALE_WABS_PREFIX, environment variables. - STRUCTURED: time=2018-12-11T01:14:26.080378-00 pid=23088 -2018-12-11 01:14:26 GMT LOG: archive command failed with exit code 1 -2018-12-11 01:14:26 GMT DETAIL: The failed archive command was: /usr/bin/envdir /etc/wal-e.d/env /opt/wal-e/bin/wal-e wal-push pg_xlog/000000010000007B0000000D -2018-12-11 01:14:26 GMT WARNING: archiving transaction log file ""000000010000007B0000000D"" failed too many times, will try again later -``` - -Details are in https://gitlab.com/gitlab-com/version-gitlab-com/issues/132",1.0 -16559111,2018-12-10 21:54:53.236,Shared macOS Runners,"#### Create Minimal Private Cloud Build for MacOS Shared Runners - -Working issue for tracking our macstadium setup and first iteration: - -- [x] Sign up for MacStadium account and get information into 1password -- [x] Initiate Minimal Private Cloud Build (1 VPN Firewall, 1 Mac Pro, 1TB Storage, vCenter / vSphere, VMware ESX 6.7) -- [ ] Set up with ESX and first clean base image of mac os x mojave -- [ ] Communicate with CI/CD team an plan next steps for how to setup shared runner manager and shared runners with labels. 
-- [ ] Configure Mac runners for only tagged jobs, and look for jobs with `mac` tag - -This direction lets us make API calls from our platform to vCenter to manage the creation, destruction, and templating of images.",3.0 -16553315,2018-12-10 18:35:42.666,Add https/SSL to next.gitter.im (next feature toggles),"Add https/SSL to [`next.gitter.im`](http://next.gitter.im/) (next feature toggles) - -https://github.com/gitterHQ/next.gitter.im - ---- - -Follow-up from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4664#note_123807024 - -cc @skarbek",1.0 -16549271,2018-12-10 16:09:14.868,Use HTTP data source for bootstrap/teardown scripts,"Access to scripts via relative paths produces the following errors on plan: - -``` -Error: module.sentry.google_compute_instance.instance_with_attached_disk: 1 error(s) occurred: - -* module.sentry.google_compute_instance.instance_with_attached_disk: file: open /Users/craig/src/gitlab/gitlab-com/gitlab-com-infrastructure/environments/ops/.terraform/modules/5ddd9c31c4df35aeb0460a824e50024e/../../../scripts/google/teardown-v1.sh: no such file or directory in: - -${file(""${path.module}/../../../scripts/google/teardown-v1.sh"")} -``` - -This also prevents splitting out modules into standalone repositories, and all these relative `path.module` references should be updated to use the [http data source](https://www.terraform.io/docs/providers/http/data_source.html)",1.0 -16541755,2018-12-10 11:38:22.726,Failed deploy of 11.6.0 RC4,"## Summary - -Deploy of 11.6.0 RC4 to production failed and needed to be rolled back - -Service(s) affected : -Team attribution : -Minutes downtime or degradation : 10h03 - 10h37 = 34m - -* Incident doc: [work doc](https://docs.google.com/document/d/16pY8L3azTvSsOgc8so9FNssqMKJk_31EJ_Rc9lZZYpw/edit#heading=h.jnyksupah24j) -* Production issue: https://gitlab.com/gitlab-com/gl-infra/production/issues/608 - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - pagerduty alerts where triggered by pingdom and prometheus -- Did alarming work as expected? - - yes -- How long did it take from the start of the incident to its detection? - - Incident was detected immediately as we were watching the deployment. -- How long did it take from detection to remediation? - - Errors started at 10h03, site reported to be back again at 10h37 (34m downtime) -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) 
- -## Timeline - -2018-12-10 - -- 09h49 Deployment of 11.6.0-rc4 (https://ops.gitlab.net/gitlab-org/takeoff/pipelines/14941) -- 10h03 Pingdom alerts gitlab-ce down (https://gitlab.pagerduty.com/incidents/PJM0YL9) -- 10h06 High Web Error Rate (https://gitlab.pagerduty.com/incidents/PHERBYV) -- 10h14 Jose tweets: https://twitter.com/gitlabstatus/status/1072072022627373056 -- 10h14 Initiated rollback to 11.5.3 https://ops.gitlab.net/gitlab-org/takeoff/pipelines/14947 -- 10h15 Errors on GitLab.com (https://log.gitlab.net/goto/1bb0fbde4bbf4d43fb8ce0b16c6bdcbf) -- 10h33 Jose tweets we are rolling back: https://twitter.com/gitlabstatus/status/1072076713612468224 -- 10h37 Alerts resolved https://gitlab.slack.com/services/B12SVN24D -- 10h42 tweet: Everything back to normal https://twitter.com/gitlabstatus/status/1072079050401701888 - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -###Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. 
- - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -16524827,2018-12-09 13:59:37.949,execute VACUUM FREEZEE in all the tables,execute VACUUM FREEZEE in all the tables,1.0 -16421696,2018-12-08 14:38:15.607,"move the fixes to chef , that were found after the failover","move the fixes to chef , that were found after the failover",1.0 -16413870,2018-12-07 20:22:37.575,The mirror push for gitlab-com-infrastructure is not working,"Today we discovered that this repo is not syncing between .com and ops. To fix the immediate issue, a force push was completed to the .com repo. However, the next failure came about; the credentials are incorrect :unamused: - -I updated these with what's in the 1password vault, and still no dice. HALP",1.0 -16394204,2018-12-06 23:54:13.005,Add ARInsights SPF and DKIM records to update GitLab DMARC authentication policy,"# Summary -GitLab is a subscriber to ARInsights software, which is a tool used to monitor analyst relations, typically used by most of the large tech vendors (except MS) and more of the smaller ones. Specifically I use it to send a bimonthly newsletter to industry analysts. We use their SW because it satisfies GDPR requirements, and it also gives me tools to track who opens the newletter and what links they click through on. - -This month the newsletter bounced from 5 recipients with a blocked (550) 550 5.7.1 error. All 4 at Redmonk bounced, as well as one at Forrester. When I brought this to ARInsight's attention they wrote me the following: - -""Your email administrator needs to add our SPF and DKIM records to update your DMARC authentication policy. This will enable us to send emails from ARchitect that look like they are coming directly from your domain and will not get trapped in any spam filters. I'm going to have the DMARC records generated and will forward to you with some documentation. - -As soon as the records are generated I'll forward on to you, please don't hesitate to ask any questions as we move through the process."" - -I will send the files when I receive them. Please let me know what else I need to do.",2.0 -16381415,2018-12-06 12:12:33.385,Our hosted Elastic Search cluster is starting to run low on space,"We've been receiving alerts the past week that we start to run out of space, then our cleanup script will come by and clean up some indexes, and we'll receive an alert that we are good to go. That cleanup script is now reporting 85% used storage space after it completes the cleanup process. - -We just scaled up a number of nodes, so log volume has increased recently. We also have issues (not yet pulled) to start adding a few items into elastic search. - -Utilize this issue and bump up the amount of storage we have available to us such that we don't run out of space in our cluster. Currently each one of our nodes is setup if 2TB of space, we are using between 1.7TB and 1.8TB on all of them.",1.0 -16359537,2018-12-05 22:56:50.301,Can we redirect all www.about.gitlab.com URLs?,"Earlier today we came across an issue where some absolute links caused Google Bot to index and serve pages from www.about.gitlab.com in search results. - -The absolute links have been updated and will eventually move themselves out of Google's Index. To resolve the problem in the interim, can we setup a redirect for www.about.gitlab.com to about.gitlab.com? 
- -Thank you!",1.0 -16326762,2018-12-04 21:23:42.246,Separate repositories for continuous delivery target environments,"1. [ ] Create new repository for each environment we currently have new gitlab project -1. [ ] lock old repository to prevent merge-requests/changes while re-organizing repositories (set a push restriction in the gitlab project settings) -1. [ ] Enable/validate pipeline in new repository -1. [ ] Remove environment directory from old repository - -## Testing -1. [ ] Copy terraform files from old repository -1. [x] ~~Setup pipeline using image from [`terraform-ci` repo](infrastructure#5689) (n/a)~~ -1. [ ] Validate `tf-init` successfully configures remote state & downloads providers/modules -1. [ ] Validate `tf plan` success & output from local repository -1. [ ] Add manual `tf apply` stage -1. [ ] Enable pipeline and validate end-to-end with no-op change",5.0 -16326720,2018-12-04 21:18:52.552,Shared tooling and docker build pipeline for terraform-ci,"Create new `terraform-ci` repository for shared tooling/docker images used in each environments' pipeline `gitlab-ci.yml` - -This repository will provide a single place to build/manage the container used in all terraform CI/CD pipelines, with required packages and utilities included. The existing terraform wrapper scripts under `/bin` in the `gitlab-com-infrastructure` repository need to be migrated here.",2.0 -16326597,2018-12-04 21:07:09.708,Separate terraform modules repositories,"1. [x] Create new `terraform-modules/*` repositories -1. [x] Update all `source = ...` lines in current (sub-)modules to [reference new modules repo/paths](https://www.terraform.io/docs/modules/sources.html#generic-git-repository) -1. [x] Update all `source = ...` lines in current repository to [reference new modules repo/paths](https://www.terraform.io/docs/modules/sources.html#generic-git-repository) - -## Testing -From within `gitlab-com-infrastructure` -1. [x] Rename `modules` directory repository to ensure no relative path references remain -1. [x] Remove all `.terraform` directories under `./environments` -1. [x] Execute `./bin/tf-init` for each environment -1. [x] Execute `./bin/tf plan` for each environment -1. [x] Remove renamed `modules` directory",3.0 -16325790,2018-12-04 20:39:35.537,Service discovery for database load balancing,"We support service discovery for database load balancing but apparently we're not using it (I looked at a `database.yml` in production, it seems this is not configured). - -Service discovery was implemented with https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/5883/diffs by @yorickpeterse . - -(I'm not sure yet where this issue is going - but I'm inclined to think that we need service discovery to support flawless failovers.)",3.0 -16316122,2018-12-04 13:52:05.211,analyze database after failover,"add step : ""analyze database;"" after the promote on patroni.",1.0 -16312576,2018-12-04 11:56:43.804,[Design Doc] GCP Maintenance Automation,"We need to upgrade the kernel version of our GCE VMs. While the process is pretty straight-forward and we completed the task on a staging fleet (`api`) via a change request successfully, there were some areas where we could leverage automation because we are looking at nearly 100 VMs for Staging environment alone and more in Production. In addition, this is a type of activity we could do in the near future as well. - -Here is the design doc for this automation effort, which includes MRs for a proposed project as well. 
https://docs.google.com/document/d/1u-LtKdu0uSt16IBjsq4szCXq0bCA87Sf0or-Blu55SQ - -Specific asks from the team, by **12/7/18** (if possible): -- Please review the design doc -- Please review the MRs (listed in the design doc) -- Provide feedback - -Related issues: -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5289 -- https://gitlab.com/gitlab-com/gl-infra/production/issues/583",5.0 -16302679,2018-12-04 03:23:07.074,Transfer concurrentdevops.com under GitLab Ownership,@sytses has acquired the domain `concurrentdevops.com` and we will be transferring it to our domain management.,1.0 -16299071,2018-12-03 21:51:31.176,Add Gitter SSH to SRE offboarding,"Add Gitter SSH to SRE (Site-reliability engineer) offboarding, https://gitlab.com/gitlab-com/gl-infra/gitter-infrastructure#ssh-to-boxes - ---- - -`troupe` AWS accounts were addressed in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5669",1.0 -16298813,2018-12-03 21:29:45.463,Network Security Diagram for security/customer consumption,"On the [Infra production architecture page](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/), we have a current architecture diagram. To help with security questionnaires, a network security diagram would be good to produce. This would show VPCs (help answer how we separate production traffic from staging, etc) and then the other network separation we have in place. - -cc @MFarber",2.0 -16297405,2018-12-03 20:05:36.866,301 for /serverless,"Redirect `^/serverless(.*)` to `/product/serverless/` - -Page is being built, need this redirect in place for press going out next week: https://gitlab.com/gitlab-com/marketing/general/issues/3673",1.0 -16296712,2018-12-03 19:23:19.571,Add troupe (Gitter) AWS to offboarding,"We need to make sure to remove the user on the `troupe` (Gitter) AWS space when offboarding. We should also remove their SSH key https://gitlab.com/gitlab-com/gl-infra/gitter-infrastructure#ssh-to-boxes - -I don't have access to the project or it doesn't exist anymore, https://dev.gitlab.org/cookbooks/chef-repo/blob/master/doc/offboarding.md - -https://gitlab.com/gitlab-com/people-ops/employment/blob/b4b4f283bcf47d109a24c2427428b0c29d622375/.gitlab/issue_templates/offboarding.md#L122 -``` -1. [ ] For former Developers (those who had access to part of the infrastructure), and Production GitLabbers: copy offboarding process from [infrastructure](https://dev.gitlab.org/cookbooks/chef-repo/blob/master/doc/offboarding.md) for offboarding action. -``` - ---- - -Spawned from @skarbek finding an old user in the list, https://gitlab.slack.com/archives/C3W3PSR88/p1543863899004000",1.0 -16291768,2018-12-03 15:21:14.778,Various admin for Jim Thavisouk to help with access request process,"Please describe your problem below and the oncall engineer will pick it up. 
- -See - https://gitlab.com/gitlab-com/access-requests/issues/182",1.0 -16289344,2018-12-03 14:00:39.561,Rename Grafana folder for Postgres,The `PostgreSQL_New` Grafana folder should replace the old `PostgreSQL` and `PostgreSQL_Patroni` folders if there are no objections.,1.0 -16281372,2018-12-03 10:06:43.540,Database Reviews,"* [x] @NikolayS -> @abrandl Recursive CTE https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22308#note_115367888 :eyes: @NikolayS (checked CTE, passed to @abrandl) -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23268 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8603 -* [x] Douwe -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23353#note_120477127 -* [x] @NikolayS -> @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7884#note_120724759 :eyes: @NikolayS (tentatively approved; @abrandl see the comment there) -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23412#note_120700982 :eyes: @NikolayS (already reviewed and even merged; double-checked, looks ok) -* [ ] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23147#note_120549262 :eyes: @NikolayS (commented, waiting for feedback) -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8641#note_121316786 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23445 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23098 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8550 -* [x] @abrandl urgent - missed deliverable: https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6878/diffs#note_121418853 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8497#note_121843202 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8603#note_121851696 -* [x] @NikolayS -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23508#note_121871425 :+1: @NikolayS (ready to be approved @abrandl, one minor comment left) -* [x] @abrandl urgent - blocker https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23353#note_121895753 -* [x] Quick https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23512/diffs#note_121901715 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6878 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8603 -* [x] @NikolayS -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23564#note_122262530 :eyes: @NikolayS (checked, passed to @abrandl) -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23436#note_122432169 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8695#note_122348606 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7885#note_122342732 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23217 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8442#note_122477084 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8550#note_122275825 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23599 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23609",8.0 -16250247,2018-11-30 20:59:34.826,Add Grafana dashboard for docker registry,registry.gitlab.com should have a dedicated Grafana dashboard to observe it's operational status (related to #5659).,3.0 -16250107,2018-11-30 20:48:15.737,Structured logging for docker registry,"registry.gitlab.com logs (from 
/var/log/gitlab/registry/) are not sent as structured logs which makes it hard to find and parse them. -Adjust [registry.conf.erb](https://gitlab.com/gitlab-cookbooks/gitlab_fluentd/blob/master/templates/default/registry.conf.erb) to parse fields correctly. Make sure that multiline panic stack traces also are forwarded (see https://gitlab.com/gitlab-org/gitlab-ce/issues/54703 for an example log).",3.0 -16246365,2018-11-30 16:48:51.156,about-src.gitlab.com is being crawled by Google,"I noticed it when doing a search for the last summit: - -![image](/uploads/bb18f61903d9159b7b582a68f708fa0e/image.png) - -I imagine this is not something we want, right?",1.0 -16245371,2018-11-30 16:02:29.448,new changes on the migration script,"- add the query - -select * from pg_stat_activity where state != idle ; \ -- remove step that is not mandatory and keep as documentation for troubleshoot ( pg_switch_xlog) -- automate the test of ssh blocked and any other manual verification that would be easier to automate",2.0 -16243688,2018-11-30 14:48:35.768,Alert Investigation: 2018-11-30 13:49 UTC High CPU on git-01,"![image](/uploads/2e11ef1c0bedf6269762c8500b0f8fe1/image.png) - -During investigation there were 18 `sshd: git@notty` processes tying up the CPU. I performed an strace on a few and saw very similar output that didn't really point me into any direction as to what I should continue to look for: - -``` -clock_gettime(CLOCK_BOOTTIME, {4569259, 270293753}) = 0 -select(12, [3 5 9 11], [3], NULL, {30, 0}) = 2 (in [9], out [3], left {29, 999993}) -rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 -rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 -clock_gettime(CLOCK_BOOTTIME, {4569259, 270591957}) = 0 -read(9, ""\17_\267e$\370M\340>\277\371 \376\246\244(\203H\213@\v\313_\344\31 \214[\200uPl""..., 16384) = 16384 -write(3, ""_\275\356\314O\257$\\\\O\\\302\""\303\262\367\250x\205\337BR\364\357\347\n5\227\37\272\4\377""..., 16448) = 16448 -clock_gettime(CLOCK_BOOTTIME, {4569259, 367838203}) = 0 -select(12, [3 5 9 11], [3], NULL, {30, 0}) = 2 (in [9], out [3], left {29, 999993}) -rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 -rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 -clock_gettime(CLOCK_BOOTTIME, {4569259, 368819627}) = 0 -read(9, ""\17F\377a\276[\255O\34\267\21\177\234T\34qm*:\345h\264\363\372C[%+\312g^\233""..., 16384) = 16384 -write(3, ""\335\251R\2125\270\330$\303\272Z&2\342 yV\230W\371f\230\371\242\323\6\316\255)#,\375""..., 16448) = 16448 -clock_gettime(CLOCK_BOOTTIME, {4569259, 570915822}) = 0 -select(12, [3 5 9 11], [3], NULL, {30, 0}) = 2 (in [9], out [3], left {29, 999994}) -rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 -rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 -clock_gettime(CLOCK_BOOTTIME, {4569259, 571406231}) = 0 -read(9, ""\215#t\366\203u\244K\370\346\23\325\334\351\241\314\241\376\301\314\3011\366\205\361d\376\260\320D\344#""..., 16384) = 16384 -``` - -The host was able to process requests just fine. It was never removed from our load balancers and during this time, I proceeded to force a few git commands through this server, and despite being a tad slower than other git servers, the request worked just fine. There were no known ""stuck"" git processes, based on the command provided in our runbook. - -At 14:04 UTC I put this server in state DRAIN for all load balancers. The load immediately started to fall off, but the remaining git processes were apparently long running and left the cpu in a pegged state. - -At 14:26 UTC I decided to pull the plug on the 18 processes via `kill`. 
The CPU usage immediately dropped. I waited for other metrics (load) and the alert to clear before proceeding to put this server back into rotation - -At 14:32 UTC I killed any remaining git ssh process. There were roughly 20 of them, some of them as old as November 27th. - -At 14:34 UTC the server was put back into rotation. - ---- - -Due to lack of knowledge and visibility into what these processes are doing, I'm not sure what to look at for this type of situation. If they are long running git commands, it'd be nice if I could tie an ssh session to a process. - -If anyone else has better ideas as to how I should've approach this situation please let me know. I am not marking this as an incident due to not having enough information to prove abuse and for lifetime in which this situation was occurring, the server was still able to serve requests successfully. - -/cc @gitlab\-com/gl\-infra",1.0 -16241705,2018-11-30 13:23:17.213,Public Dashboard for GitLab.com is not loading graphs and data,"The [GitLab Triage Dashboard](https://dashboards.gitlab.com/d/RZmbBr7mk/gitlab-triage?refresh=30s&orgId=1) does not load the graphs and or data points. This has been confirmed my multiple people as not working: - -![image](/uploads/9d8508ab7bf9bfcf4917324b5f08ffb9/image.png)",1.0 -16239323,2018-11-30 11:35:00.652,Deprecate TLS 1.0 and TLS 1.1 on staging,"Like what we have set on canary we think we are ready to apply the TLS update on staging. - -## references - -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5152#note_108864376 -- https://gitlab.com/gitlab-org/gitlab-ee/issues/7794#note_120941430",2.0 -16236688,2018-11-30 09:42:37.310,Upgrade forum.gitlab.com,"The forum needs an upgrade. Info in https://gitlab.com/gitlab-com/runbooks/blob/master/howto/discourse-forum.md. - -cc @jarv",1.0 -16227960,2018-11-29 21:18:29.516,RCA: API fleet unavailable 20181129 20:10:00 - 20:36:00 UTC,"## Summary - -All of the API servers became unavailable due to someone walking all of the public projects. - -- Service(s) affected : API -- Team attribution : SAE -- Minutes downtime or degradation : 26 minutes - -## Impact & Metrics - -Start with the following: - -- The API was completely inaccessible -- All users trying to access the API were affected -- The incident prevented any API actions, including pushes and CI runners fetching jobs - -### Graphs and logs - -- [Kibana: Project listing requests](https://log.gitlab.net/goto/d1d21142d69d5ff6f30891a3cde44f4c) -- [Dashboards: DB host load and locks](https://dashboards.gitlab.net/d/000000142/postgresql-overview?orgId=1&from=1543521758415&to=1543524942335) -- [Dashboards: Workhorse Availability](https://dashboards.gitlab.net/d/OktWokpik/workhorse-overview?panelId=94&fullscreen&orgId=1&from=1543520954651&to=1543525648769) -- [Dashboards: HAProxy Dashboard during incident](https://dashboards.gitlab.net/d/ZOOh_aNik/haproxy?orgId=1&from=1543521006515&to=1543524880550&var-host=fe-01-lb-gprd.c.gitlab-production.internal&var-port=9101&var-backend=api&var-frontend=All&var-server=All&var-code=All&var-interval=30s) - -## Detection & Response - -- The incident was detected with a PagerDuty alert. The first alert resolved itself shortly after firing. The second one was the downtime. -- Remediation happened on its own. 
- -## Timeline - -2018-11-29 - -- 20:11 - First alert goes off `IncreasedBackendConnectionErrors` -- 20:16 - First alert clears -- 20:26 - Second alert fires `IncreasedBackendConnectionErrors` -- 20:29 - Users begin reporting issues -- 20:36 - The API servers recover and service is restored -- 20:41 - Second alert clears - -## Root Cause Analysis - -Why did the API servers become unavailable? - - Public projects were being iterated through the API - - API pagination issues can cause timeouts and overwhelm the servers (https://gitlab.com/gitlab-org/gitlab-ce/issues/42194) - -## What went well - -- We discovered the problem quickly and there was a quick response - -## What can be improved - -- This has happened multiple times (see https://gitlab.com/gitlab-com/gl-infra/production/issues/589) -- The runbooks for HAProxy problems could use some work -- We could further limit requests to this endpoint -- We can improve pagination in the application - -## Corrective actions - -- Dedicated fleet for internal API: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4708 -- Better handling of pagination for api requests: https://gitlab.com/gitlab-org/gitlab-ce/issues/42194 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -16224277,2018-11-29 18:15:33.047,empty the artifacts bucket in GCS,"We had a sync set up to sync artifacts from S3 to GCS nightly because we were hoping to migrate to GCS quickly at the time. In the meanwhile, we have encountered problems and the migration has been placed in the backlog (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4684#note_117887957 for more info). We need to empty out the artifacts bucket in GCS (`gitlab-gprd-artifacts`) and disable the nightly sync until we are closer to actually being able to execute the move. It is simply a massive waste of money to store all that data twice as well as bandwidth transfer costs with no migration time in sight. - -cc/ @dawsmith @gitlab\-com/gl\-infra - -EDIT: - -As per [the object storage dashboard](https://dashboards.gitlab.net/d/BtEcmxLik/object-storage?orgId=1&from=now-12h&to=now) we are storing 307TB in GCS for artifacts. This amounts to - -``` -314368 GB * $0.026/month = $8173.56/month -``` - -~~It isn't killing us, but it is also a waste.~~ - -EDIT2: On second thought, 8173.56/mo is a lot and could be an additional engineer.",1.0 -16223440,2018-11-29 17:35:11.461,Readiness review for the new post deployment patcher,"This issue is for an SRE review of the post deployment patcher. 
- -- [x] SRE sign-off on new patcher tool https://ops.gitlab.net/gitlab-com/gl-infra/patcher -- [x] review new documentation - https://gitlab.com/gitlab-org/release/docs/merge_requests/84 -- [x] deprecate old patcher howto - https://gitlab.com/gitlab-com/runbooks/merge_requests/840 -- [x] deprecation notice on gitlab-patcher - https://gitlab.com/gitlab-com/gl-infra/gitlab-patcher/merge_requests/7 -- [x] deprecation notice on post-deployment-patches - https://dev.gitlab.org/gitlab/post-deployment-patches/merge_requests/118 -- [x] Turn off issues and MRs on the gitlab-patcher and post-deployment-patches repos -- [x] Apply the README update to point to official docs - https://ops.gitlab.net/gitlab-com/gl-infra/patcher/merge_requests/12 -- [x] Deploy a noop patch (adds a comment to a source file) through the pipeline -- [x] Announce in `#backend` and to previous developers who have issued patches that the process has changed.",2.0 -16222047,2018-11-29 16:30:06.287,Promote the cluster of patroni in production for ongres tests,"Promote the cluster of patroni in prod environment for tests. -Just to remember the cluster do not have any traffic. -@ahmadsherif could you please let us know if the cluster is ready? -The tests will be a multiple sequence of failover ...",1.0 -16212197,2018-11-29 12:08:56.250,Users unable to create project,"Users are facing issues when they are trying to create a new project from web UI. They are getting: - -` -PG::QueryCanceled: ERROR: canceling statement due to statement timeout CONTEXT: while rechecking updated tuple (1,14) in relation ""site_statistics"" : UPDATE ""site_statistics"" SET ""repositories_count"" = ""repositories_count""+1 -` - -ZD: -* https://gitlab.zendesk.com/agent/tickets/109256 -* https://gitlab.zendesk.com/agent/tickets/109252",1.0 -16201607,2018-11-29 02:37:30.693,Create GCP Project for Disaster Recovery,"Create a GCP project under Infrastructure/Environments called gitlab-dr to use for setting up GEO replication based Disaster Recovery site. - -This project will need its quota expanded in the us-west1 region to match what we have in production. https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5466",1.0 -16196302,2018-11-28 20:02:47.266,The following servers should be added to blackbox monitoring for SSL certificate expiration,"* `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` -* `prometheus-01.us-east1-c.gce.gitlab-runners.gitlab.net` -* `prometheus-01.us-east1-d.gce.gitlab-runners.gitlab.net` - -Reference: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5610",1.0 -16188663,2018-11-28 15:34:41.726,updates on the monitoring script,"1 - change the queries to check the database activity -select count(*) from pg_Stat_activity where state !='idle'; - -select state, count(*) from pg_Stat_activity group by state; - -2 - create pre-flight checks to check the health status of the new cluster: --Pgbouncer --Patroni --Postgresql - -3 - Auto check results instead of human analysis.let's try to reduce. - -4 - tmux topic... can we remove? and add the logging solution ? ",8.0 -16188312,2018-11-28 15:20:51.543,problems with the Application LB,"Problems in the tests with the application LB. - -``` -NoMethodError (undefined method `load_balancer' for nil:NilClass): - ee/lib/gitlab/database/load_balancing/sticking.rb:81:in `load_balancer' - ee/lib/gitlab/database/load_balancing/sticking.rb:34:in `all_caught_up?' 
- ee/lib/gitlab/database/load_balancing/sticking.rb:42:in `unstick_or_continue_sticking' - ee/lib/gitlab/database/load_balancing/rack_middleware.rb:21:in `stick_or_unstick' - ee/lib/ee/api/helpers.rb:33:in `block in current_user' - lib/gitlab/utils/strong_memoize.rb:26:in `strong_memoize' - ee/lib/ee/api/helpers.rb:28:in `current_user' - app/helpers/sentry_helper.rb:9:in `sentry_context' - lib/api/helpers.rb:386:in `handle_api_exception' - lib/api/api.rb:83:in `block in ' - ee/lib/omni_auth/strategies/group_saml.rb:22:in `other_phase' - lib/gitlab/middleware/multipart.rb:101:in `call' - lib/gitlab/request_profiler/middleware.rb:14:in `call' - ee/lib/gitlab/jira/middleware.rb:15:in `call' - lib/gitlab/middleware/go.rb:17:in `call' - lib/gitlab/etag_caching/middleware.rb:11:in `call' - lib/gitlab/middleware/read_only/controller.rb:40:in `call' - lib/gitlab/middleware/read_only.rb:16:in `call' - lib/gitlab/middleware/basic_health_check.rb:25:in `call' - lib/gitlab/request_context.rb:20:in `call' - lib/gitlab/metrics/requests_rack_middleware.rb:27:in `call' - lib/gitlab/middleware/release_env.rb:10:in `call' -```",2.0 -16188286,2018-11-28 15:19:43.161,restart of cluster executed a failover,the restart of the cluster created a failover,1.0 -16188272,2018-11-28 15:18:57.401,pending restart on patroni cluster,pending restart on patroni cluster,3.0 -22677979,2018-11-27 17:05:18.314,Add statusio chatops docs to general incident documentation,"Relevant links -https://gitlab.com/gitlab-com/runbooks/blob/master/incidents/general_incidents.md -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5508",1.0 -16145353,2018-11-27 10:43:25.933,review runbooks for patroni,"We would like to cover the following examples : -https://gitlab.com/gitlab-com/runbooks/merge_requests/831 - -* start / stop instance / patroni -* failover -* add a node -* remove a node -* how to start a new cluster",2.0 -16145313,2018-11-27 10:41:45.715,changes in staging script for migration,"-staging - - stop all the postgresql clusters - - change step and stop repmgr first and after check the replication lag. - - parameters has to be equal",8.0 -16145299,2018-11-27 10:41:08.785,create production cluster for patroni / consul,,2.0 -16135243,2018-11-27 07:55:38.807,Disable Geo on staging,"I just noticed Geo is still enabled on staging. - -```ruby -GeoNode.secondary_nodes -=> #]> -``` - -This is a leftover from the GCP migration rehearsals and I don't see any reason why we wouldn't disable Geo completely on staging. - -#### Backstory - -During the rehearsals we did not disable Geo, but added a dummy node to the admin dashboard. This would maintain the Geo event log, which _might_ come in useful if we ever wanted to fallback.",2.0 -16112133,2018-11-26 12:24:07.302,Permission request: role Kubernetes Engine admin on gitlab-internal project,"Hi, - -I'm trying to create a kubernetes cluster in the gitlab-internal project for me to test with, but I get the following error: - -> clusterrolebindings.rbac.authorization.k8s.io is forbidden: User ""rpereira@gitlab.com"" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope: Required ""container.clusterRoleBindings.create"" permission. - -Can I be given the required permission/role (Kubernetes Engine Admin possibly)? 
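-
-A minimal sketch of what the grant could look like from the admin side, assuming the GCP project ID is literally `gitlab-internal` (a placeholder; adjust to the real project ID). `roles/container.admin` is the predefined Kubernetes Engine Admin role:
-
-```
-gcloud projects add-iam-policy-binding gitlab-internal \
-  --member=user:rpereira@gitlab.com \
-  --role=roles/container.admin
-```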
- -cc @northrup",1.0 -16109352,2018-11-26 10:27:31.270,Renew certificates for CI Prometheus servers fleet,"Certificates for CI Prometheus servers have expired at November 7th :( - -Please renew them (they should be managed by sslmate) and update proper vaults: - -- [x] `prometheus-01.nyc1.do.gitlab-runners.gitlab.net` (vault: `gitlab-runners-prometheus-do-nyc1 ci-prd`) -- [x] `prometheus-01.us-east1-c.gce.gitlab-runners.gitlab.net` (vault: `gitlab-runners-prometheus-gce-us-east1-c ci-prd`) -- [x] `prometheus-01.us-east1-d.gce.gitlab-runners.gitlab.net` (vault: `gitlab-runners-prometheus-gce-us-east1-d ci-prd`)",1.0 -16102989,2018-11-26 05:01:12.789,Runbook and alerting improvements for staging.gitlab.com,"As part of https://gitlab.com/gitlab-com/gl-infra/production/issues/586, we need to take the below corrective action items: - -- [x] Improve runbook with more troubleshooting tips -- [x] Create alerts on `node_netstat_TcpExt_ListenDrops` and/or `node_netstat_TcpExt_ListenOverflows`",1.0 -16071259,2018-11-23 15:52:02.947,Cleanup and unify postgres dashboards,"The Grafana dashboards for postgres are inconsistent in their use of variables and have hardcoded values for environment, type and prometheus instances in many places, which makes it hard to adapt them for new clusters or envs (like for patroni). - -We should update the dashboards to consistently use selectable variables for `environment` and add a new variable `type` to switch between ""patroni"", ""postgres"" and possibly other db host types in the future. That would make the dashboards re-usable when adding patroni to production.",1.0 -16064210,2018-11-23 10:04:26.017,301 Redirect for UX Research Panel,"Please redirect: https://about.gitlab.com/researchpanel/ to https://about.gitlab.com/community/gitlab-first-look/ - -Please note a redirect is already in place from https://about.gitlab.com/researchpanel/ to https://about.gitlab.com/community/researchpanel/ This should be replaced with the above. - -cc @williamchia",1.0 -16062249,2018-11-23 08:58:00.593,Use ElasticSearch Curator for recurring ElasticSearch maintenance tasks,"The [Curator tool](https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html) completes regular maintenance tasks such as trimming indices, purging data etc by using an easy to configure YAML definition file. Although the activities we currently do can (and are) simply completed with a bash script, the same activities can be done with Curator. The Curator also offers additional functionality of course. Once we upgrade to ES 6.5, the current bash maintenance script won't be necessary (AFAIK) as the functionality is included in ES 6.5. However, Curator may still offer functionality that would be relevant to maintenance of ES.",1.0 -16061970,2018-11-23 08:43:35.618,Upgrade of Elastic.co hosted ElasticSearch from 5.6.10 to 6.5.1,"ElasticSearch version 6.x has been available for a while and is supported by Elastic.co. GitLab should upgrade to take advantage of the latest features and speed improvements. It is also highly recommended by our Elastic.co technical contacts that we upgrade as soon as it is possible. - -The upgrade will be automatically handled by Elastic.co, and should be frictionless. However, we need to be aware of any potential changes to underlying attributes/fields. 
Breaking changes are described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes.html - -The upgrade will be a rolling upgrade as described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html. The upgrade will require a full cluster restart which would cause the cluster to be unavailable for a short period of time. - -To complete this tasks, I believe we will need to - -* [ ] Determine best approach for testing the upgraded version -* [ ] Create a new cluster based on latest version of ES -* [ ] Re-point some data ingestion points, potentially for staging, to new cluster -* [ ] Ensure no errors and data is available -* [ ] Remove re-point -* [ ] Upgrade production cluster -* [ ] Hope for the best - -The disadvantages of this approach is the full cluster restart and thereby losing logs for a short period of time, which may be acceptable. - -I don't know the configuration/environment well enough to determine if the above steps are sufficient or even doable.",3.0 -16057209,2018-11-23 00:54:31.082,"Marvin down, returns a bundler error","# Summary - -The marvin bot on slack is no longer responding to commands but is only showing the error below: - -``` -/usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:172:in `user_home': uninitialized constant #::Etc (NameError) - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:197:in`user_bundle_path' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler/settings.rb:377:in `global_config_file' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler/settings.rb:80:in`initialize' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:257:in `new' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:257:in`settings' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:84:in `configured_bundle_path' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:335:in`use_system_gems?' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:519:in `configure_gem_path' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:512:in`configure_gem_home_and_path' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:66:in `configure' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:134:in`definition' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler.rb:101:in `setup' - from /usr/lib/ruby/gems/2.5.0/gems/bundler-1.16.2/lib/bundler/setup.rb:20:in`' - from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:135:in `require' - from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:135:in`rescue in require' - from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:39:in `require' - from /home/bundle/cog-command:7:in`
' -exit status 1 -```",1.0 -16038360,2018-11-22 12:10:19.206,sentry-01-sv-ops.c.gitlab-ops.interal,"``` - ================================================================================ - Recipe Compile Error - ================================================================================ - - Chef::Exceptions::RecipeNotFound - -------------------------------- - could not find recipe server for cookbook postgresql - - Cookbook Trace: - --------------- - /var/chef/cache/cookbooks/gitlab-sentry/recipes/default.rb:9:in `from_file' - - Relevant File Content: - ---------------------- - /var/chef/cache/cookbooks/gitlab-sentry/recipes/default.rb: - - 2: # Cookbook Name:: gitlab-sentry - 3: # Recipe:: default - 4: # - 5: # Copyright 2016, GitLab Inc. - 6: # - 7: # All rights reserved - Do Not Redistribute - 8: # - 9>> include_recipe 'postgresql::server' - 10: - 11: # Fetch secrets - 12: sentry_secrets = get_secrets(node['gitlab-sentry']['secrets']['backend'], - 13: node['gitlab-sentry']['secrets']['path'], - 14: node['gitlab-sentry']['secrets']['key']) - 15: - 16: sentry_conf = Chef::Mixin::DeepMerge.deep_merge(sentry_secrets['gitlab-sentry'], node['gitlab-sentry'].to_hash) - 17: - 18: ## nginx -```",1.0 -16034098,2018-11-22 09:26:23.951,setup a cloudwatch exporter so we can get cloudwatch metrics into prometheus (for AWS),"AS long as we still have critical infrastructure in AWS we should be monitoring it, besides a few ec2 instances it would be nice to also have storage metrics for s3 for the buckets we have not yet migrated. - -https://github.com/prometheus/cloudwatch_exporter",2.0 -16021789,2018-11-21 19:54:35.017,Load Balancer Redundancy Threshold too high,"We get FeLoadBalancerLossOfRedundancy alerts to pagers any time there is ANY loss of redundancy in the load balancers. While we want to know if this happens, that should be more of an informational thing until we get below about 80% to 90% redundancy. What are the load balancers for, if not to automate managing our redundancy for us. - -We should be able to lose some of our redundancy in the load balancer without setting off pagers. To implement this, I've changed the threshold from 100% to 90% in https://gitlab.com/gitlab-com/runbooks/merge_requests/832 - -@gitlab\-com/gl\-infra - Please discuss if this is the appropriate threshold or what might be better.",1.0 -16017987,2018-11-21 16:15:36.060,Create an alert when we are missing haproxy logs,"It's happened too many times. Our logs have disappeared from stackdriver. Whether it be an incorrectly configured exclusion rule, or a misconfigured td-agent. Both of these have negatively impacted our visibility into our environment. Let's prevent this from happening by somehow alerting when we don't have logs available. - -Acceptance Criterion: -* Determine a method for which we can alert on this -* Determine the appropriate threshold (if necessary)",2.0 -16017584,2018-11-21 15:55:17.738,Adapt postgres dashboards for Patroni,"We need to adapt the postgres dashboards for the Patroni cluster. - -Duplicate existing dashboards and change them to point to the “patroni” tagged instances where necessary. - -cc @ahmadsherif @Finotto",1.0 -16017555,2018-11-21 15:54:25.415,Add preparation steps for backups to patroni migration plan,,1.0 -16016939,2018-11-21 15:27:32.925,Handbook update to include a service list with owners and responsible,"As part of the onboarding, we have identified a potential improvement which is a list of services that are currently known (and potentially managed by gl-infra). 
This update should be in the Handbook. - -Some of the information is being captured in the Google Doc https://docs.google.com/document/d/1xxE-dZIArF59MMH0pbrnjRz3KlrtoNUOKnDwbf7N4L0. Once completed, the essential information must be extracted and added to the handbook. - -Potentially a table with some of the following: -* Service URL -* Description -* Owner -* Responsible team -* Any potential documentation",1.0 -16014211,2018-11-21 14:06:17.264,Compile list of recommended resources for PostgreSQL (both engineering and infra),"Since this question pops up frequently, let's compile a list of recommended reads (like books and stuff) for PostgreSQL related topics for both engineering and infra. - -Let's maintain that list at https://about.gitlab.com/handbook/engineering/infrastructure/database (or somewhere on the handbook).",0.0 -16009745,2018-11-21 10:43:56.268,Streamline service access requests with gl-security Access Requests,"Streamlining the access requests to be provisioned by gl-infra with the gl-security access request model would ensure compliance while simplifying the process flow. The idea is to ensure all access requests are managed using https://gitlab.com/gitlab-com/access-requests/issues but any requests requiring gl-infra to provision are passed to gl-infra in a suitable and timely manner. The issue is about creating the proper process or technical flow to ensure this is achieved. - -We would like to avoid access requests being created directly in the gl-infra issue tracker. E.g. avoiding this [https://about.gitlab.com/handbook/engineering/#amazon-web-services-aws](https://about.gitlab.com/handbook/engineering/#amazon-web-services-aws) - -Tasks: -* [ ] Agree appropriate model with gl-security -* [ ] Update access request template -* [ ] Update Security owned application list with agreements -* [ ] Update handbook with appropriate links",1.0 -16009513,2018-11-21 10:33:58.795,Refactor common config in fluentd templates,"There is a lot of copy and paste in the fluentd templates that can be refactored using includes. -It was also noticed that the prometheus mixin is missing in the redis config https://gitlab.com/gitlab-cookbooks/gitlab_fluentd/merge_requests/55#note_119012949 - -This issue is to pull out as much of the common config as we can to make these templates a bit more manageable. - -https://gitlab.com/gitlab-cookbooks/gitlab_fluentd",2.0 -15991313,2018-11-20 15:56:02.420,Adjust disk capacity for postgres,"Currently, we provision 5 TB of SSD disk space per database instance in both `gstg` and `gprd`. - -Actual current usage is: -* `gstg`: 0.8 TB -* `gprd`: 2.3 TB - -Leaving room for WAL and still [maxing out IO performance](https://cloud.google.com/compute/docs/disks/performance#size_price_performance), the proposal here is to reduce the disk capacity to: - -* `gstg`: 1.5 TB - utilization approx. 54 % -* `gprd`: 4 TB - utilization approx. 58 % - -We have 8 instances per environment. The saving in `gstg` is 28 TB and in `gprd` 8 TB. - -This translates to a saving of roughly 6,300 USD per month. - -Performance-wise, this does *not* impact `gprd`. For `gstg`, IO performance this change reduces sustained read performance to roughly half of what we currently have. However, we also have smaller instance types in `gstg` anyways. 
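-
-Before resizing, it may be worth re-confirming actual usage per node. A rough sketch, assuming the data volume is mounted under /var/opt/gitlab and the omnibus `gitlab-psql` wrapper is available (both are assumptions; plain psql as the postgres user works just as well):
-
-```
-df -h /var/opt/gitlab
-gitlab-psql -c 'SELECT pg_size_pretty(sum(pg_database_size(datname))) FROM pg_database;'
-```
-
-Also note that GCE persistent disks can only be grown in place, so moving to the smaller sizes means provisioning new disks rather than shrinking the existing ones.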
- -Worth to note that increasing a disk is easy and does not require a reboot.",2.0 -15979239,2018-11-20 11:05:11.989,Logging of IP client addresses in nginx -> ElasticSearch,"For troubleshooting purposes, it would be useful to have the client IP address available in ElasticSearch/Kibana for troubleshooting and correlation of activities. At the moment, we only have the IP address of the proxy. This information is most likely already available on the nginx proxy but is not forwarded to ElasticSearch as far as I understand.",2.0 -15977306,2018-11-20 10:24:34.330,Configure wal-e archiving/backups for Patroni cluster,"After staging has failed over to the Patroni cluster, configure wal-e and make sure backups are working with Patroni: - -* [x] Setup wal-e on patroni instances -* [x] Review archiving related postgres config -",3.0 -15977260,2018-11-20 10:22:53.693,Review staging env / cookbooks for patroni,"Please consider and review the maintenance plan. - -Notes: - -* [ ] Disk size is 5TB, probably 4TB (in gprd) / 1.5TB (in gstg) is more than enough and we save $$$. Check performance implications. -> https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5566 -> https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/632 -* [x] Compile a postgres-config diff between gprd and gprd-patroni before it goes live and make sure we're aware of all changes. -* [ ] Do we still need to manage `shmall` and `shmmax` ourselves or is this a relict? AFAIK it was only necessary in older postgres versions. -* [x] Increase disk size for root volume `/` to 100 GB. -> https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/632",3.0 -15958497,2018-11-19 17:04:28.334,Clean up old S3 registry bucket,"With the successful migration of the registry to GCS (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4710), we need to look into cleaning out and removing the old bucket. - -The size of the registry bucket is: - -``` -Selection: 0 Objects, 1 Folders Total size: 41.8 TB Total objects: 2983000 -```",3.0 -15940463,2018-11-19 09:03:41.779,Database Reviews,"* [x] Sean https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8070 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22971#note_118167275 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23058 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23098#note_118544178 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8446#note_118367363 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23057#note_118367296 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23098#note_119437237 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23314/diffs#note_119525749 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23236#note_119675631 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19740#note_120436389 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22744 -* [x] Andreas https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/19740#note_120734824 -* [x] Fri https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6878 -* [x] Fri https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23412",8.0 -15920535,2018-11-17 17:54:06.407,Infra Team Hiring/Questionnaire grading for Nov 18 - Dec 1,"Team bucket to account for effort into hiring. -Questionnaires and interview time. - -Starting with 13 Questionnaires to do. 
- -The candidates to be assessed can be found here: https://app2.greenhouse.io/plans/4050457002/candidates?hiring_plan_id%5B%5D=4050457002&job_status=open&sort=last_activity+asc&stage_status_id%5B%5D=4000006002&type=all&in_stages%5B%5D=Assessment - - -List reviewed 20 of November : - -Pending tickets : -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5569 -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5570",13.0 -15913358,2018-11-16 23:40:45.287,RCA for 2018-10-24 Incident - back up of Merge Request Queue processing in Sidekiq,"**Please note:** if the incident relates to sensitive data, or is security related consider -labeling this issue with ~security and mark it confidential. -*** - -Copying information from: https://docs.google.com/document/d/1yWJFbf3z7PBgdxQAxrRPFg0B2fkUjPFZY39G6QmHXU8/edit#heading=h.si79cynhlbp7 - - -## Summary - -Working notes from the day(s) of the event: - -Starting from issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5215 - -Slack threads: -1st thread: https://gitlab.slack.com/archives/C101F3796/p1540390946000100 -2st thread: https://gitlab.slack.com/archives/C101F3796/p1540404258000100 -3st thread: https://gitlab.slack.com/archives/C101F3796/p1540405525000100 -Root cause thread: https://gitlab.slack.com/archives/C101F3796/p1540409583000100 -https://gitlab.slack.com/archives/C101F3796/p1540412033000100 - - -Contributors: Stan, Valery, Northrup, Mike Kozono, Dave, Jose - - -Service(s) affected : -Team attribution : -Minutes downtime or degradation : - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) - -Actions that normally finish in a few seconds, but which are delayed over several minutes, appear to be broken until they finish. Some actions were run multiple times. - -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -All GitLab.com users -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -Actions that normally finish in a few seconds, but which are delayed over several minutes (IDK what the peak delay was), appear to be broken until they finish. E.g.: -Merge requests diffs not being updated -Merge request widgets not updating -Repo mirrors not working - -Some actions were run multiple times, e.g.: -Many system notes were duplicated - - -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) 
- -{{The first alert - - -Mirror updates overdue - - -Sidekiq availability and service operation rates - - -UpdateMergeRequestsWorker running jobs - - -UpdateMergeRequestsWorker errors - - -Postgres locks - - -Sidekiq queues -https://dashboards.gitlab.net/d/9GOIu9Siz/sidekiq-stats?orgId=1&panelId=3&fullscreen&from=1540375200000&to=1540418400000 - - - - - -Detection & Response - -How was the incident detected? -PullMirrorsOverdueQueueTooLarge alert https://gitlab.slack.com/archives/C101F3796/p1540398268000100 and reports to #production, and issues being created -Did alarming work as expected? -Yes and no -Yes - PullMirrorsOverdueQueueTooLarge worked -No - Further alarming around the system operating outside of normal - pressure on Redis and other areas only was going to #alerts-general - SRE thinks we are ready to push some of those up to alerting via PagerDuty -How long did it take from the start of the incident to its detection? -The alarm was the first warning - -How long did it take from detection to remediation? -About 42 minutes until we shut off the feature and recycled the workers, but the queues took another 3 hours to drain due to duplicate jobs locking rows and being killed and retried. - -Were there any issues with the response to the incident? -No, it just took time to diagnose and mitigate, and at the moment (6 hrs after detection) we still don’t have a root cause. -}} - - -## Timeline - -2018-10-24 - -14:29 UTC - Feature flag for gitlab_sidekiq_reliable_fetcher was enabled in production and staging. - -16:11 UTC - John Northrup has noticed that “pipeline_processing:update_head_pipeline_for_merge_request” queue is growing. -16:24 UTC - We got the first alert “PullMirrorsOverdueQueueTooLarge”. -16:59 UTC - Feature flag for gitlab_sidekiq_reliable_fetcher was disabled in production and staging. - -17:24 UTC - Alert manager has reported that “PullMirrorsOverdueQueueTooLarge” is resolved -17:29 UTC - We’ve got a report from the GitLab team member that it takes too much time for MR to be updated -18:04 UTC - It still takes several minutes for MR to be updated -18:23 UTC - We see lots of UpdateMergeRequestsWorker jobs that fails due to “PG::QueryCanceled: ERROR: canceling statement due to statement timeout” -18:52 UTC - We tweet the message to GitLab.com status https://twitter.com/gitlabstatus/status/1055170090839097344 -20:31 UTC - We tweet that we’re back to normal -21:36 UTC - We noticed that there were duplicated jobs https://gitlab.slack.com/archives/C101F3796/p1540417012000100?thread_ts=1540409583.000100&cid=C101F3796 since multiple Sidekiq workers were processing the same job JID at the same time - - - -## Root Cause Analysis - -Current theory: A bug in the reliable fetch gem https://gitlab.com/gitlab-org/sidekiq-reliable-fetch/issues/5 caused jobs to be duplicated each time a Sidekiq worker was started up. Certain kinds of duplicated jobs (UpdateMergeRequestsWorker in particular) caused too many SQL update queries that were targeted to the same record. It caused lots of locks in the database that in response caused a number of Sidekiq jobs to grow rapidly. - -### What went well?: -* We were alarmed about the problems and immediately started fixing it -* We were alarmed before we got any user report -* The situation has been stabilized pretty quickly - - -### What went wrong?: -* We don’t know the root of the problem yet. 
Probably because of a not sufficient monitoring and logging -* Regarding the new queue for update_head_pipeline_for_merge_request - did we miss something in communication with SRE / Engineering? -* We didn’t declare an incident early - with no CMOC we didn’t tweet every 15 minutes to give more frequent updates to our customers -* We didn’t have any rollback option when we noticed the issue.This data migrations should be back compatible to a previous state. -* The sidekiq reliable fetch gem, which we forked off an inactive project, does not seem to have been run in an environment with more than one Sidekiq worker. We did not have any proof that it had been proven in a production environment, so we should have treated it with more caution. More testing and comprehensive staging tests may not have caught this bug, but it would have increased our chances. - - -### Where did we get lucky? -We got alarm notification from least expected place from - - -Open questions: -We have identified the most likely cause of duplicate jobs, but it is not confirmed yet for sure https://gitlab.com/gitlab-org/sidekiq-reliable-fetch/issues/5 - - -## Corrective actions - - -* We need to find out why some MRs have too many diffs -https://gitlab.com/gitlab-org/gitlab-ce/issues/53153 -* We have to find out why there were duplicates -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5215 -* We need to reproduce the ReliableFetcher#requeue_on_startup doesn't work with multiple Sidekiq processes issue to confirm that it was the root cause. Then fix it. -https://gitlab.com/gitlab-org/sidekiq-reliable-fetch/issues/5 - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -15911015,2018-11-16 19:43:08.635,[OKR:KR] Deliver 20 significant updates to Infrastructure Handbook,Deliver 20 significant updates to Infrastructure Handbook => 9/20 (45%),19.0 -15910080,2018-11-16 19:24:07.781,[OKR:O] Maximize user-visible services (particularly GitLab.com) MTBF,,20.0 -15910068,2018-11-16 19:22:32.514,[OKR:O] Minimize user-visible services (particularly GitLab.com) MTTR,,20.0 -15909903,2018-11-16 19:18:19.225,[OKR:O] Make all user-visible services (particularly GitLab.com) ready for mission-critical workloads.,,20.0 -15904200,2018-11-16 15:17:52.116,Evaluate our pingdom request timeouts,"To reduce our page volume we increased our pingdom timeouts from 5second to 20seconds with https://gitlab.com/gitlab-com/runbooks/merge_requests/827 . This is very high and we should take some time to evaluate whether these long request times are isolated, what is causing them, and whether they really need to be this high.",1.0 -15897517,2018-11-16 11:15:21.780,Incident Slack Tooling,"**------------------**
-**Update 2019-12-06** - -We created this epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/100 to keep track of the overall incident management automation effort which originated from this issue. Keeping track of the effort under an epic made more sense since the subtasks would enumerate pretty quickly. Therefore, closing this issue.
-**------------------** - -In order to streamline incident handling, we must **reduce cognitive load** on the Incident Team. - -One way to do so is to ensure procedures are as simple as possible so that the Incident Team, and, in particular, the `EOC` and the `IMOC`, can focus on the technical resolution of the incident instead of the necessary procedural calls that are intended to provide visibility into and awareness of the incident. - -Another way is to improve automation to support said procedures. In particular: - -* **incident declaration**: - * opening `production` issue, - * creating tracking document, and - * creating `infrastructure` root-cause analysis issue - * optionally, severity assignment (and possibly `IMOC` escalation) and Security escalation (in cases of abuse) to notify the Security team -* **incident management**: - * editing incident severity (and possibly `IMOC` escalation) - * Security escalation -* **incident data management**: - * ensure incident issues have severity labels - * ensure incident issues have service labels - * ensure incident issues have attribution labels - * ensure incident issues have timely RCAs - -## Samples - -As I am not entirely familiar with Slack's custom command capabilities, the following examples are provided as a CLI implementation: - -#### Incident Declaration and Resolution - -``` -> incident declare [--abuse] [@user] -``` - -* Creates a tracking document in Google docs from a template (ideally filling out some data) -* Creates an **incident** issue in the `production` queue and assigns it to *\@user*; if user is not specified, it is assumed to be the `EOC` -* Creates a corresponding issues in the `infrastructure` queue and links it to both the incident issue and the tracking document; assigned to EOC. -* When the `--abuse` flag is used, Security on-call is paged through PagerDuty - -This command should return some way (an ``) to refer to the incident from Slack. In commands that would normally use the ``, when omitted, it is assumed to be the currently opened incident. - -``` -> incident resolve [] -``` - -Resolves the incident by closing `production` incident issue. - -#### Incident Management - -``` -> incident list -``` - -Lists open, on-going incidents, if any. - -``` -> incident status -``` - -Provides status on current/last incident (open, closed) - -``` -> incident severity -``` - -Changes incident severity to ``; when severity is `S1` or `S2`, escalate to `IMOC`. - -#### Incident Data Management - -A bot that ensures incident issues have proper labels.",2.0 -15892832,2018-11-16 08:35:27.553,Move all pingdom checks over to runbook configuration,"Currently, a subset of pingdom checks are managed as YAML. See https://gitlab.com/gitlab-com/runbooks/blob/master/pingdom/pingdom.yml - -Now that we're confident in this approach, we should bring them all in so that they're all managed through the YAML configuration. - -Related slack thread: https://gitlab.slack.com/archives/C101F3796/p1542354588701400 - -cc @jarv",2.0 -15887120,2018-11-16 00:57:50.145,RCA for pingdom check failure on 2018-11-15 for https://gitlab.com/gitlab-com/gitlab-ce/merge_requests/,"**Please note:** if the incident relates to sensitive data, or is security related consider -labeling this issue with ~security and mark it confidential. -*** - -## Summary - -Follow-up to gitlab-com/gl-infra/production#572; Pingdom reported failing checks while accessing https://gitlab.com/gitlab-com/gitlab-ce/merge_requests/. 
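-
-For whoever fills this in, a quick way to reproduce the failing check by hand and capture timing (only a sketch; Pingdom measures from its own probe locations, so local numbers are indicative at best):
-
-```
-curl -s -o /dev/null -w 'code=%{http_code} total=%{time_total}s\n' \
-  https://gitlab.com/gitlab-com/gitlab-ce/merge_requests/
-```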
- -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Timeline - -YYYY-MM-DD - -- 00:00 UTC - something happened -- 00:01 UTC - something else happened -- ... - -YYYY-MM-DD+1 - -- 00:00 UTC - and then this happened -- 00:01 UTC - and more happened -- ... - - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the post mortem. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -###Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. 
- - Incldue the named individual who owns the delivery of the corrective action. - - -## Guidelines - -* [Blameless Postmortems Guideline](https://about.gitlab.com/handbook/infrastructure/#postmortems) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -15887023,2018-11-16 00:44:32.629,RCA for about.gitlab.com 503s for certificate expired,"**Please note:** if the incident relates to sensitive data, or is security related consider -labeling this issue with ~security and mark it confidential. -*** - -## Summary - -For approximately 20 minutes on 2018-11-16 unauthenticated requests to https://gitlab.com and all requests to https://about.gitlab.com were returning 503 errors because of an expired certificate. - - -## Impact & Metrics - -The following services were impacted: - -* https://about.gitlab.com -* unauthenticated requests to https://gitlab.com (because of the redirect to about.gitlab.com) - - -The following services remained unaffected during the outage: - -* authenticated logins to https://gitlab.com -* pages, registry, api, all gitlab.com services - - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Timeline - -2018-11-16 - -- 00:03 UTC - we started seeing expired certificate errors for about.gitlab.com -- 00:06 UTC - curl for `about.gitlab.com` cert reveals that it's good -- 00:08 UTC - curl for `about-src.gitlab.com` cert reveals that it's expired at 00:00 2018-11-16 -- 00:10 UTC - started purchase process for cert renewal of `about-src.gitlab.com` -- 00:15 UTC - update chef vault for `about-gitlab-com _default` with new certs -- 00:18 UTC - manually run chef-client on `about-src.gitlab.com` to roll in new certificate -- 00:21 UTC - `about-src.gitlab.com` serving new cert now -- 00:25 UTC - modify Fastly configuration for Origin validation to accept `about-src.gitlab.com` as TLS name -- 00:27 UTC - Roll out Fastly configuration for Origin SSL TLS name change -- 00:28 UTC - about.gitlab.com is operating normally. - - -## Root Cause Analysis - -The certificate on the origin server for `about.gitlab.com` had expired causing the CDN provider to throw errors. We didn't realize the cert was expiring or about to expire because when we moved the production site `about.gitlab.com` to be fully CDN resolved we never re-branded the origin server, in stead leaving the old certificate on the server while the CDN provider generated a new one. This then was further compounded by the fact that our certificate monitoring in prometheus for advanced alerting was never reconfigured either, so it was still looking at `about.gitlab.com` rather than the origin server of `about-src.gitlab.com` for cert expiration alerting. - - -## What went well - -- Alerting triggered immediately in both Slack ad PagerDuty that there was an issue - -## What can be improved - -- We need a more automated means to iterate over the domains that we have certs on and monitor when they're going to expire. 
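-
-Until that exists, a rough sketch of the kind of loop we could run on a schedule (the domain list below is illustrative only; it would need to come from wherever we track origin hostnames):
-
-```
-for d in about-src.gitlab.com gitlab.com dev.gitlab.org; do
-  printf '%s: ' $d
-  echo | openssl s_client -servername $d -connect $d:443 2>/dev/null | openssl x509 -noout -enddate
-done
-```
-
-Swapping `-enddate` for `openssl x509 -checkend <seconds>` gives a non-zero exit code when a cert is inside the warning window, which is easier to hook into alerting.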
- - -## Corrective actions - - -## Guidelines - -* [Blameless Postmortems Guideline](https://about.gitlab.com/handbook/infrastructure/#postmortems) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -15882989,2018-11-15 19:01:20.874,template for Patroni failover - postgresql steps,"Please review the MR : -https://gitlab.com/gitlab-com/migration/merge_requests/202 - -The repo is : git@gitlab.com:gitlab-com/migration.git - -Create the steps for the maintenance for the Patroni failover. -Postgresql / Repmgr / Patroni steps to setupthe new cluster to production. - -Please everything automated. Won't be allowed any direct interaction on hosts. All actions from the bastions. - -Consider also to update/clean up the monitoring to do not consider REPMGR the master cluster. - -@ahmadsherif / Ongres @3manuek are working on the postgresql. - -@glopezfernandez @andrewn @dawsmith @3manuek @abel3 @ahmadsherif fyi",8.0 -15882948,2018-11-15 18:57:54.950,template for Failover to Patroni,"Please review the MR : -https://gitlab.com/gitlab-com/migration/merge_requests/202 - -The repo is : git@gitlab.com:gitlab-com/migration.git - -Create the steps for the maintenance for the Patroni failover. -Close/Open traffic. -Redirect applications for the correct Load balancers/ databases. - -Please everything automated. Won't be allowed any direct interaction on hosts. All actions from the bastions. - -Ahmad / Ongres are working on the postgresql. - -@glopezfernandez @andrewn @dawsmith @3manuek @abel3 @ahmadsherif fyi",2.0 -15870615,2018-11-15 11:51:48.529,create pagerduty slack commands for /imoc /cmoc,"The idea will be for oncall engineer type /imoc and page ( by pagerduty the on-call engineer ) and for cmoc type /cmoc and page ( by pagerduty the on-call cmoc ) at slack. - -see the current /pd command for reference.",1.0 -15870563,2018-11-15 11:49:00.282,Review and Clean-up PagerDuty Alerts,"Check and review the source events that are generating alerts. - -@dsylva could you please add your report here? - -Design Doc : https://docs.google.com/document/d/1dgpACz_clUNLdRRipxUqwlAAg3grG7A6wad0miHyezw - -Number of incidents: https://docs.google.com/spreadsheets/d/1m3Qp-9fLCKSw38o3t48tlUbSwlW6F6Ymq-5Zxg8EPBw",5.0 -15867797,2018-11-15 10:17:20.113,Stackdriver logs are missing HAProxy logs,"Currently, stackdriver appears to be missing large volumes of haproxy logs. 
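-
-A quick way to rule out the shipping agent on a given haproxy node (assuming the standard td-agent service name and log location; adjust if we ship from a different path):
-
-```
-sudo systemctl status td-agent
-sudo tail -n 100 /var/log/td-agent/td-agent.log | grep -i -E 'error|retry'
-```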
- -Related slack thread: https://gitlab.slack.com/archives/CB3LSMEJV/p1542275675780800 - -This may be related to last week's haproxy slowdown: https://gitlab.com/gitlab-com/gl-infra/production/issues/553 - -Example: https://console.cloud.google.com/logs/viewer?authuser=0&project=gitlab-production&minLogLevel=0&expandAll=false&customFacets&limitCustomFacetWidth=true&dateRangeStart=2018-11-14T09%3A53%3A20.960Z&dateRangeEnd=2018-11-15T09%3A53%3A20.960Z&interval=P1D&resource=gce_instance&scrollTimestamp=2018-11-14T14%3A58%3A19.013464014Z&filters=text%3Ahaproxy&advancedFilter=resource.type%3D%22gce_instance%22%0Alabels.tag%3D%22haproxy%22%0AjsonPayload.frontend_name!%3D%22pages_https%22%0AjsonPayload.backend!%3D%22registry%22%0AjsonPayload.frontend_name!%3D%22pages_http%22%0AjsonPayload.frontend_name!%3D%22altssh%22%0AjsonPayload.environment%3D%22gprd%22%0AjsonPayload.server!%3D%22web-cny-01-sv-gprd%22%0AjsonPayload.server!%3D%22web-cny-02-sv-gprd%22%0AjsonPayload.server!%3D%22git-cny-01-sv-gprd%22%0AjsonPayload.server!%3D%22git-cny-02-sv-gprd%22",2.0 -15859519,2018-11-15 02:26:11.092,Block search crawlers for all related domains,"@plafoucriere discovered assets.gitlab-static.net URLs are indexed by Google. [original message](https://gitlab.slack.com/archives/C0259241C/p1542242346577000) - -He also pointed out results for gprd.gitlab.com show up in Google as well. - -[Results with assets.gitlab-static.net](https://www.google.com/search?source=hp&ei=4tjsW4qcLYXn_QaN_72wBA&q=site%3Aassets.gitlab-static.net&btnK=Google+Search&oq=site%3Aassets.gitlab-static.net&gs_l=psy-ab.3...3962.12004..12186...2.0..0.186.1815.30j2......0....1..gws-wiz.....0..0j0i131j0i3j0i10.LdbggfhUnlk) -[Results with gprd.gitlab.com](https://www.google.com/search?ei=8djsW67yG867ggeegruQDg&q=site%3Agprd.gitlab.com&oq=site%3Agprd.gitlab.com&gs_l=psy-ab.3...34064.39612..39985...0.0..0.186.987.17j1......0....1..gws-wiz.rUoOGCRUsE4) - -What are our best options to manage the crawl status for these domains and any subdomains? I'm hoping we can make a simple robots.txt update to fix both of these. 😄 - -Do we have any documentation with domains we use across GitLab? I'd love to go through this list and find any other outlying index issues. - -cc @sytses @lbanks",1.0 -15850458,2018-11-14 16:10:44.052,Add nginx config test before restart nginx for about.gitlab.com,"In https://gitlab.com/gitlab-com/gl-infra/production/issues/560, about.gitlab.com went down because of bad redirects. This can be immediately remedied by forcing a config test BEFORE restarting nginx. Currently the about.gitlab.com cookbook just restarts nginx if the config changes. - -- [cookbook-about-gitlab-com](https://gitlab.com/gitlab-cookbooks/cookbook-about-gitlab-com) -- [relevant recipe](https://gitlab.com/gitlab-cookbooks/cookbook-about-gitlab-com/blob/master/recipes/nginx.rb#L13-17)",5.0 -15849777,2018-11-14 15:45:00.729,Admin access to staging.gitlab.com for Tony Carella,"* Grant admin access to Tony Carella (acarella@gitlab.com) on staging.gitlab.com -* Supports the security team, as they are taking some responsibilities for IT Operations at this time -* No one on the security team has admin access to staging at this time. This will help us cater to access requests coming in for staging. 
-* access-requests -",1.0 -15836792,2018-11-14 09:24:59.405,Evaluate Ubuntu Advantage for livepatching support,"Ubuntu offers the [Ubuntu Advantage programme](https://gitlab.slack.com/archives/CB3LSMEJV/p1542155941686600) (at a cost) which offers many benefits, a particular benefit for us is livepatching. With a large fleet of instances and major concerns about what can be rebooted, and if rebooted, in which order, live patching of the e.g. kernel becomes very relevant. Additionally, it could help mitigate attacks such as Spectre and Meltdown, which require urgent attention for everything, particularly public cloud hosted instances. The latest example of [speculative execution attacks](https://arstechnica.com/gadgets/2018/11/spectre-meltdown-researchers-unveil-7-more-speculative-execution-attacks/) just enforces the need for such focus and feature. - -The cost of Ubuntu Advantage may seem high ($250 per instance per year) but I believe the benefits outweigh the cost. Listed below are some of the benefits, I can think of: - -* (Official Ubuntu benefit) Landscape, the Ubuntu server and desktop management tool -* (Official Ubuntu benefit) 24/7 telephone and online support portal -* (Official Ubuntu benefit) Canonical Livepatch Service -* (Official Ubuntu benefit) Ubuntu 12.04 Extended Security Maintenance -* (Personal opinion) Reduce cost by not having individual(s) tasked with preparing and executing reboots -* (Personal opinion) Reduce potential financial and reputational cost of failed reboots and failed reboot chains -* (Personal opinion) Reduce security risk and improve security posture by mitigating 0-day and similar attacks -* (Personal opinion) Provide overview of fleet and state of the operating system of the fleet - -Disadvantages: -* Introducing a tool which makes automated changes to our environment which may impact service stability -* Cost -* ... probably more I can't think of right now ... - -I do not currently have access to data about the number of instances but to reduce cost, it could be considered to use Ubuntu Advantage for only a select few instances that are troublesome to reboot or instances that are public facing. My preference would be to cover the entire fleet but that may be financially unvailable. However, my impression is that GitLab infrastructure has a number of less used instances, which could be shuttered to reduce the overall cost of the infrastructure. Maybe this could even offset some of the Ubuntu Advantage cost. - -Any thoughts on the above are very welcome. - -Thanks, -Peter",1.0 -21464080,2019-05-31 04:30:19.835,Mailroom locking/arbitration problem,"Just raised https://github.com/tpitale/mail_room/issues/84 and noting it here for any local discussion we might like to have. - -Noticed because we've currently got >30 messages in this state (of various ages up to 4-5 months). - -I wonder if https://gitlab.com/gitlab-org/gitlab-ce/issues/2870#note_4184125 might be a way forward, but I'm not yet competent enough in Redis client use to comment well. - -/cc @DouweM - you might be interested as the author of the original patch for mail_room.",1.0 -21458137,2019-05-30 20:29:28.426,S3 credentials for new Snowplow bucket for Data Team,"As part of https://gitlab.com/groups/gitlab-com/-/epics/41 the data team will need read only credentials to be able to pull events in the data warehouse. 
- -All we need is the bucket name, access key id, and the secret access key.",1.0 -21457788,2019-05-30 20:08:55.967,Windows machines for GitLab Runner CI tests,"For GitLab Runner CI we need two more Windows machines: - -- one with Windows 1803 -- one with Windows 1809 - -On existing ones we're executing basic tests, building images for Windows Docker Containers and we will in near future start using `docker-windows` for running tests there. - -The two new will be used to test support Docker for Windows with Linux containers. For this we need to ensure that HyperV is installed on the machines. - -It will be enough if we will get the machines with proper Windows installed on them and with HyperV for Docker (some guidance can be found at https://blogs.msdn.microsoft.com/azuredev/2018/09/09/docker-in-azure-vm/). ~Verify can next handle installation and registration of Runner, installation of Docker and coupling it all together.",1.0 -21457021,2019-05-30 19:16:39.402,RCA: Pages service interruptions,"## Summary - -At approximately 1552UTC on May 29th and again around 1845UTC on May 30th, multiple alerts were received for backend connection errors related to the Pages service - -Service(s) affected : ~""Service:Pages"" -Team attribution : -Minutes downtime or degradation : TBD - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? Pages service was down -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Timeline - -YYYY-MM-DD - -- 00:00 UTC - something happened -- 00:01 UTC - something else happened -- ... - -YYYY-MM-DD+1 - -- 00:00 UTC - and then this happened -- 00:01 UTC - and more happened -- ... - - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -###Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? 
- The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",5.0 -21455616,2019-05-30 17:58:05.375,Git Storage Node Repository metrics,"I'd be interested in knowing the following about our Git storage nodes: - -* number of repositories per node (absolute value) -* number of repositories per user (min, max, average, distribution) -- per node and across the fleet -* size of repositories (min, max, average, distribution) -- per node and across the fleet -* number of repositories based on activity (distribution: last hour, last 6 hours, last 12 hours, last 24 hours, last 3 days, last 7 days, last 2 weeks, last month, last 3 months, last 6 months, last 9 months, last 12 months, and everything else) - -We should keep track of these metrics on an ongoing basis. - -This request is intended to be able to help us guide decisions relevant to storage architecture.",4.0 -21454594,2019-05-30 17:13:16.108,Add PathFactory to the CNAME for branded URL,"We (Mktg) are bringing on a new tool that will handled content distribution and syndication. -Can we please get a branded URL set up for this tool. - -[PathFactory Instructions](https://lookbookhq.force.com/nook/s/article/how-to-set-up-custom-subdomain) - -### Desired CNAME: learn.gitlab.com - -MktgOps main point of contact: @nlarue - -Please let me know if there are questions. - -/cc @nlarue @northrup",2.0 -21422105,2019-05-29 21:44:12.393,Bump bootstrap script to version 8,"The latest version of the bootstrap script (v8) includes a check to ensure that the requested kernel version is available before removing the current running kernel and attempting the upgrade (since it will naturally fail). However, we are not (yet) using that bootstrap script as the default everywhere, yet. - -This issue will track the effort to update all version references for module source lines to ensure that version [v1.0.2](https://gitlab.com/gitlab-com/gl-infra/terraform-modules/google/bootstrap/tree/v1.0.2) of the module is loaded and all environments are referencing `v8` as the default version of the script. 
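For reference, the guard added in v8 is roughly of this shape (a sketch only, not the actual bootstrap script; the exact package naming is an assumption):

```
# abort the kernel swap early if the requested version cannot be installed
if ! apt-cache show "linux-image-${GL_KERNEL_VERSION}" > /dev/null 2>&1; then
  echo "Requested kernel ${GL_KERNEL_VERSION} is not available; skipping kernel upgrade" >&2
  exit 1
fi
```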
- -In order to prevent terraform from unnecessarily rebooting running instances, we will also have to use the `gcloud` CLI to update the `startup_script` metadata key on running instances.",3.0 -21366725,2019-05-28 11:59:36.054,Personal staging credential gone - Access request,"My login for `staging.gitlab.com` is no longer present in the DB and hence cannot access staging anymore. This issue is to keep track of the request to get access again. - -Not sure how my account got wiped from DB, yet.",1.0 -21357863,2019-05-28 08:27:26.287,Send a slack notification to the production channel and infra lounge when the oncall report is generated,This would be a nice addition to the oncall report,2.0 -21350342,2019-05-28 03:07:23.507,PackageCloud backups 'failing' according to DMS,"Worked on May 4 and 13th, but haven't completed with exit code 0 since. - -The error from the logs is -```ERROR: Upload of '/var/opt/packagecloud/backups/packagecloud-streamed-database-backup.1551991511.xbstream' part 11 failed. Aborting multipart upload.``` -but the s3 bucket looks like it is correct, at least in list of files and byte sizes. - -Something weird is going on",1.0 -21335800,2019-05-27 14:28:34.161,Break down postgres upgrade OKR,"We have this OKR: `Upgrade Postgres to version 11 with log centralization and reporting.` captured in -* milestone: https://gitlab.com/groups/gitlab-com/gl-infra/-/milestones/22. -* epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/67 - -I propose to split this apart into multiple epics: - -1. Postgres major upgrade -1. Log centralization -1. Reporting - -Those are largely independent chunks of work we can parallelize or address independently across teams. - -cc @ansdval",1.0 -21331275,2019-05-27 12:24:57.302,Evaluate usage of more recent ZoL versions on -gcp kernels,"ZoL 0.8.0 has been released, where as Ubuntu Bionic ships with 0.7.x. - -[ZoL 0.8.0 has lots of improvements, that we want to use if possible.](https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.0) - -We will have to compile and package it our-own. This also gives us the ability to apply patches as needed. (Including patches that violate the GPL, as long as we do not redistribute those and the binaries, see https://www.gnu.org/licenses/old-licenses/gpl-2.0-faq.html#GPLRequireSourcePostedPublic) - -I have developed a prototype CI pipeline to compile the kernel modules here: https://gitlab.com/gitlab-com/gl-infra/gitlab-zfs/ - -Tl;Dr; of the comments: It works. Bionic & xenial. But we'll not be running it in prod (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6883#note_180038845)",4.0 -21330923,2019-05-27 12:12:08.347,Increase thanos compact disk size,"The thanos-compact component needs a bit more disk space for performing compactions. Currently it's 100GB, we should increase this to 200GB.",1.0 -21328399,2019-05-27 11:09:32.173,Upgrade to PgBouncer 1.12.0,"Pgbouncer 1.12.0 has been released. - - -We should look into upgrading PgBouncer accordingly. - -Description of the change : https://gitlab.com/gitlab-com/gl-infra/production/issues/1325 - -This maintenance would be done to a critical component from the platform so we should schedule to a low peak time as a c1. - -For the read only nodes: - -* Stop the traffic for the node, and remove the node that could become a primary database. -* execute a chef cookbook to uninstall and install the newer version of pgbouncer (or upgrade), keep the same configurations. -Restore the traffic and make the node again available to become a primary database. 
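After each read-only node is reinstalled, a quick sanity check against the PgBouncer admin console confirms the expected version is serving connections (the port and admin user below are assumptions for illustration):

```
pgbouncer --version
psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW VERSION;'
psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW POOLS;'
```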
- -For the pgbouncer nodes ( that we will have 2 fleets with 3 nodes), execute in all the nodes : - -* take a node out from the rotation on the ILB. -* Update the config from the other 2 nodes from pgbouncer to add temporarily the traffic from the node that is out. -* execute a chef cookbook to uninstall and install the newer version of pgbouncer (or upgrade), keep the same configurations. -* restore the config from the 2 nodes from pgbouncer. -* Add the node back to the rotation on the ILB.",20.0 -21278361,2019-05-24 19:49:23.722,Codify update rules for GitLab Status page,"Our customers—internal and external—rely on status.gitlab.com to determine whether an issue they're experiencing is known by our support and reliability teams. The status page is also the primary means for broadcasting public information about incidents. To ensure this is always the case, we will produce a set of rules in the on-call checklist. We should update our tools to communicate around the status updates, thus ensuring the status page is not a secondary medium for communication.",2.0 -21276815,2019-05-24 18:26:24.958,"SREs should all have Zendesk ""Light Agent"" access","As outlined in https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/23124, all SREs when they're onboarded should setup Zendesk ""Light Agent"" account access. Because this addition to the list was added after everyone's start date, please follow the instructions provided by [internal-support](https://about.gitlab.com/handbook/support/internal-support/#light-agent-zendesk-accounts-available-for-all-gitlab-staff) to setup your access. - -Please confirm: - -- [x] @abrandl -- [x] @ahanselka -- [x] @ahmadsherif -- [x] @alejandro -- [x] @cmcfarland -- [x] @craig -- [x] @craigf -- [x] @cmiskell -- [x] @cshobe -- [x] @dawsmith -- [x] @devin -- [x] @Finotto -- [ ] @glopezfernandez -- [x] @hphilipps -- [x] @mwasilewski-gitlab -- [x] @msmiley -- [x] @nnelson -- [x] @yguo - -Optional: - -- [ ] @andrewn -- [ ] @jarv -- [ ] @marin -- [x] @skarbek",1.0 -21257979,2019-05-24 09:15:43.501,improve incident management tooling,"There are several open issues to improve incident management which need to get finished: - -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5543 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6424 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5359 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5508 (was this ever used?) -- https://gitlab.com/gitlab-com/runbooks/issues/19 - -We also should try to reduce the number of bots that need to be used and make them easy and consistent to use and test. - -During last incident the `/start-incident` slack command failed to create an incident issue. 
We should regularly test our tooling and think about regular incident trainings.",0.0 -21257530,2019-05-24 08:59:25.243,consolidate incident management documentation,"We have several issues with our incident management documentation: - -- we have documentation in several places with different/outdated content: - - https://about.gitlab.com/handbook/engineering/infrastructure/incident-management/ - - https://gitlab.com/gitlab-com/runbooks/blob/master/howto/manage-production-incidents.md - - https://gitlab.com/gitlab-com/runbooks/blob/master/incidents/general_incidents.md - - https://gitlab.com/gitlab-com/runbooks/blob/master/incidents/database.md -- The handbook page is a very long read which is fine to learn all about the processes if you have time but does not work if you have an incident and need to figure what to do ASAP. -- We have no documentation on - - how to find the EOC/IMOC/CMOC - - how to page the EOC/IMOC/CMOC - - chatbots and their usage - -We need to consolidate this documentation and make it easy to find and useful for everybody during an incident.",5.0 -21239178,2019-05-23 21:44:52.627,Write runbook on CodeSandbox bucket,In https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6709 we set up a custom bucket so we could host codesandbox ourselves. We should write a runbook on how it was set up and is deployed to.,2.0 -21230737,2019-05-23 17:31:45.760,Upgrade to terraform 0.12,"Terraform 0.12 [has been officially released](https://www.hashicorp.com/blog/announcing-terraform-0-12), and contains many improvements based on the corresponding updates to HCL that we can leverage in our infrastructure code. This issue is to track efforts for working through the [upgrade guide](https://www.terraform.io/upgrade-guides/0-12.html) on all of our environments in gitlab-com/gitlab-com-infrastructure> - -* [x] update all envs in gitlab-com-infrastructure to 0.11.14 -* [x] update terraform modules to 0.11.14 -* [x] update all environments to use new versions of modules -* [x] update terraform modules to 0.12 -* [x] update all environments to 0.12, I tested it on gstg and it was clean, @craigf already did some work here: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7067 - -/cc @ansdval @dawsmith @Finotto FYI for prioritization",13.0 -21226494,2019-05-23 15:15:10.001,Complete the container registry deployment for a kubernetes cluster in pre,"We've built a [PoC] in the past, let's move from PoC to a [real environment]. - -This is the realization of decisions made in this issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5337 - -[PoC]: https://gitlab.com/gitlab-com/gl-infra/kubernetes-poc -[real environment]: https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com",5.0 -21193998,2019-05-22 16:27:44.139,RCA: SSO enforcement feature breaking pipelines,"## Summary - -Enabling the feature `enforced_sso_requires_session` was making groups inaccessible for members and ci pipelines fail for customers using SAML: https://gitlab.com/gitlab-org/gitlab-ee/issues/11704 - -Service(s) affected : ~""Service:CI Runners"" -Team attribution : Backend Manage -Minutes downtime or degradation : 266 - -## Impact & Metrics - -### What was the impact of the incident? -Customers using SAML/SSO for authentication could not access groups, even though they had proper permissions. Both the users attempting to authorize views in the UI and the ci-runners owned by those encountered failures. - -### Who was impacted by this incident? 
- -Any customer using SAML/SSO authentication. - -### How did the incident impact customers? - -Customers browsing the UI received 404s to obfuscate the existence of group paths. And customers' ci-runners failed with 403 errors. While the feature flag was set, it was only after customers re-authenticated with their SSO provider did their projects become visible again. - -### How many attempts were made to access the impacted service/feature? - -There was only a moderate raise of 404 errors on the `GroupsController` between 9 and 15. Compared to the total amount of 404s that's indicating that not many users have been affected. - -![image](/uploads/1597048cd1296ae69416d37566eb392d/image.png) - -### How many customers were affected? - -4 customers reported the issue—one of them had 90 users affected. At the peak of the issue we encountered nearly 400 errors during a 30 minute period. Given our natural error rate, we estimate that < 10% of these were legitimate errors. - -![image](/uploads/e3f66f99e0c67472bfdb8cf7f52c4714/image.png) - -## Detection & Response - -### How was the incident detected? - -Customers began reporting issues to support via Zendesk. -- https://gitlab.zendesk.com/agent/tickets/122122 -- https://gitlab.zendesk.com/agent/tickets/122134 -- https://gitlab.zendesk.com/agent/tickets/122145 - -### Did alarming work as expected? - -We received no alerts for this issue, and the error rate (404s, 403s) was too low to be visible on any dashboards or sentry. - -### How long did it take from the start of the incident to its detection? - -After 23m the first customer reported via Zendesk. - -### How long did it take from detection to remediation? - -243 minutes. - -### Were there any issues with the response to the incident? - -Yes. It took 2h33m from first customer report to response, which resulted in gitlab-org/gitlab-ee/issues/11704. Dialogue in that issue did not include much feedback from customers, because the initial conversations took place in Zendesk. - -Additionally, it proved difficult for @markpundsack to open an incident in the Infrastructure departments production tracker. Which unnecessarily delayed the page to the Reliability Engineer on call. Mark cited confusion in the handbook's language for [Incident Management](https://about.gitlab.com/handbook/engineering/infrastructure/incident-management/) as the source of his confusion. Finally, once he did find instructions to use the `/start-incident` command in the `#incident-management` Slack channel, the `imoc-bot` received a 404 error and failed to create an incident issue (https://gitlab.slack.com/archives/CB7P5CJS1/p1558534349190400) in the production tracker for the SRE on call. - -## Timeline - -See incident ticket: https://gitlab.com/gitlab-com/gl-infra/production/issues/840#timeline. - -## Root Cause Analysis - -1. Chatbot permitted a feature flag to flip during an ongoing production deployment. -1. The line of communication for escalation from support did not include Reliability Engineering. -1. Documentation for engaging Reliability Engineering was difficult to interpret and follow. -1. Metrics and monitoring did detect the issue—it was an incredibly low number relative to all 40X errors—though it was anomalous. - -## What went well - -- Support did a great job at creating the gitlab-ee issue and pointing all reporting customers to it so information could be centralized. -- After escalating the issue to an incident the cause was found within 1 minute by @stanhu and mitigated by @ahanselka within 5 minutes. 
- -## What can be improved - -- We will improve the the feature flag change process for better observability and awareness. -- We will be more responsive in the the gitlab-ee issue so customers know we are working on it—before an incident issue has been created. -- Creating and escalating incidents should be as easy as possible for everybody and well documented. - - We should broaden incident management training outside of Reliability Engineering for exposure to other organizations at GitLab. - -## Corrective actions - -- https://gitlab.com/gitlab-org/release/framework/issues/335 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6765 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6766 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6770 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6773 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6772 -- https://gitlab.com/gitlab-org/gitlab-ee/issues/11757 - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -21192445,2019-05-22 15:49:37.879,Broken mirroring for terraform modules (ops -> .com),"It appears that the credentials changed or were not validated when we relocated the terraform modules to ops, and as such mirroring from ops->.com is currently broken for most/all module repositories. We need to update the mirroring configurations to use the updated/correct credentials for `ops-gitlab-net`",1.0 -21165208,2019-05-21 21:01:58.622,Update the terraform gke module to provide the ability to separate how node pools are created,"Our current GKE module relies on the use of the default node pool. This is a great default, but limits our ability to customize a cluster in the future in the cases where we need beefier nodes for X reason, or if we need to enable or disable preemptible instances for some other reason. - -Disable the default node pool and provides the ability to configure many node pools.",3.0 -21165169,2019-05-21 20:59:45.400,Update the terraform gke module to provide the ability to create regional clusters,Currently GKE clusters default to creating a single master node in the default region of choice. What if that region goes down for X reason? Let's enhance our gke module to include the ability to enable regional clusters optionally. This will provide us with redundant kube api servers.,3.0 -21164859,2019-05-21 20:44:33.878,Update the terraform gke module with the ability to lock down what networks have access to the kube api server,"By default GKE clusters have open access from all of the Internet to access the API server. We should limit this, or at LEAST provide a mechanism to update this list in the future. - -https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks - -https://www.terraform.io/docs/providers/google/r/container_cluster.html#master_authorized_networks_config",1.0 -21164819,2019-05-21 20:42:50.944,Update the terraform gke module to provide the option to enable network policies feature of GKE,"[GKE Network Policy](https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#restrict_with_network_policy) provides a method of locking down traffic inside of the Kubernetes cluster. Update our module to provide a method of enabling this optionally with some form of configuration. 
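For context, the equivalent toggle on an existing cluster via the CLI looks roughly like this (cluster and zone names are placeholders; the real change belongs in the Terraform module):

```
gcloud container clusters update CLUSTER_NAME --zone ZONE --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update CLUSTER_NAME --zone ZONE --enable-network-policy
```

Enabling enforcement recreates the node pools, so the module option should default to disabled and be opt-in per cluster.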
- -https://www.terraform.io/docs/providers/google/r/container_cluster.html#network_policy",1.0 -21164731,2019-05-21 20:39:03.330,Update the terraform gke module with the ability to create private clusters,"In general, there really isn't a reason why our clusters are public. These nodes all have public ip addresses and all egress traffic comes from whatever that nodes IP address is. Let's fix this. This will improve our security stature by limiting what IP addresses we expose to the outside world, this will make it easier for the community as we'll have a set IP address where our traffic comes from, and it just feels like a cleaner more sane configuration. - -This will require the use of a NAT of some sort, otherwise egress traffic won't know how to reach the Internet.",3.0 -21164677,2019-05-21 20:36:22.887,Update the terraform gke module to allow oauth scopes to be configurable,Right now our gke module doesn't allow this to be customized. This will be important if we choose to utilize specific google services. We can also limit what we currently deploy to our clusters as it is quite generous.,1.0 -21164661,2019-05-21 20:35:23.929,Update the gke terraform module configuration for disable-legacy-endpoint,"The terraform module currently allows this to be configured. This is bad practice and regardless, future clusters after 1.12 will forcibly disable this option. Let's proceed to either remove or set a default to ensure this is disabled by default on all k8s clusters we create.",1.0 -21163552,2019-05-21 19:57:11.230,Create a consul server fleet for pre,"For the purpose of implementing https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6132 a consul server cluster (probably just 2 to start) on the `pre` environment would be useful to have a low-impact place to test the necessary changes - -/cc @jarv @skarbek",2.0 -21162278,2019-05-21 18:59:45.144,Register services necessary for using consul for host inventory,"As part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6153 and based on the notes of https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/20889, have each node register its services with consul for the purpose of node inventory. - -- chef-repo MR: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1199 - -/cc @dawsmith @devin @jarv",5.0 -21152880,2019-05-21 15:12:31.685,Help debug SSO Enforcement by running a few lines in rails console in staging,"## Why - -I'm testing SSO Enforcement on staging and it is working for groups but not a particular project. - -## What - -On staging, run the following from a rails console: - -```ruby -Gitlab::Auth::GroupSaml::SsoEnforcer.define_method(:active_session?){ false } -policy = ProjectPolicy.new(User.find_by(username: 'jamedjo'), Project.find_by_full_path('jej-group-saml-test/test-subproject-within-enforced-ssso')) -policy.debug(:read_project) -policy.debug(:guest_access) -policy.lookup_access_level! -policy.needs_new_sso_session? -```",1.0 -21151972,2019-05-21 14:46:30.261,[Design Doc] Drop share-01 in favor of cloud storage,,3.0 -21103659,2019-05-20 18:13:51.671,Can terraform git storage nodes that use ZFS,"After https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6726 is complete, implement terraform code that will allow us to provision ""new"" file servers with the decided hardware configuration. These new file servers should be a separate TF resource declaration with their own `count`. - -The MR should not actually cause any new nodes to be provisioned (i.e. 
set count on the new resource declaration to 0). Old storage nodes should not be deleted(!).",4.0 -21103616,2019-05-20 18:10:34.541,Decide on disk layout for storage nodes,"Following the results from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6306, decide on a storage layout for vdevs. The 2 main contenders at the moment are: - -* RAIDZ1 with 9 disks (8 usable, 1 parity) -* Single disk - -Single disk is often called out as bad practice for ZFS, but [GCP PDs](https://cloud.google.com/persistent-disk/) are advertised as having high availability, data redundancy, and error correction. We need to evaluate this and weigh up whether it's worth adding redundancy at the ZFS level. - -The throughput of the configurations must also be a factor. We don't want reduced throughput compared to today. - -The total usable filesystem space per node must be at least 16TB (overlaps with discussion with quotas in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6850), the same as we have today. Note that this will be less than the zpool space due to a reservation filesystem (see https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6850), and this zpool space may be less than the total PD disk space provisioned due to RAIDZ redundancy (if we choose raidz). - -We intend to keep the same shard size as today. At the time of writing this is 32 x 16TB in prod. Changing this would involve changing too many things at once. - -Another thing to take into consideration is that the zpool space may need to be substantially larger than what we expect due to snapshot [bloat after a git repack operation](https://christian.amsuess.com/idea-incubator/space-efficient-git). A ZFS snapshot taken before the repack will reference disjoint blocks to one taken after. Therefore the zpool disk usage for the repo will be doubled until the old snapshot passes out of the retention window. If repacking occurred simultaneously for every repo on a GitLab installation (which is unlikely), our filesystem usage would spike to double. Therefore in the worst case we would need at least twice the usable filesystem space as the data we intend to store on the node. However, since repacking of different repos should be spread out in time in realistic scenarios, we could use a lower multiple. - -Also, decide on a storage layout for L2ARC. There is not much debate about this at the time of writing. The initially proposed configuration is 2 x 375GB ephemeral [local SSDs](https://cloud.google.com/compute/docs/disks/local-ssd) (they are fixed size) connected via NVMe. - -We have already decided to keep using n1-standard-32 VMs for the storage nodes, so memory and CPU will not change. - -The deliverable of this issue is not necessarily code, as we will need to tackle this very early in the project, but a comment that we can use for reference when we come to write some terraform. 
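For concreteness, the two contending vdev layouts look roughly like this on a node (pool and device names are illustrative only):

```
# Candidate A: one raidz1 vdev of 9 PDs (8 data + 1 parity)
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj

# Candidate B: a single large PD, relying on GCP's own redundancy
zpool create tank /dev/sdb

# In either case the two 375GB local NVMe SSDs are added as L2ARC
zpool add tank cache /dev/nvme0n1 /dev/nvme0n2
```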
- -Performance characteristics of the candidate configurations are here: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6883 - -Finally, satisfy yourself that we don't make life too hard for ourselves when it comes to restoring an old backup from GCE snapshots if we choose raidz.",1.0 -21079679,2019-05-20 09:09:34.719,"Grant db-geo access also to Toon, Douglas & Ash","My apologies, according to the [process](https://gitlab.com/gitlab-com/runbooks/blob/master/howto/granting-rails-or-db-access.md#process) the requests at https://gitlab.com/gitlab-com/access-requests/issues/804 and https://gitlab.com/gitlab-com/access-requests/issues/803 should have made here, but thanks for granting the access anyway! - -But additionally, I'd also like to request access to the Geo tracking database on DR. I tried to connect, but failed. - -``` -❯ ssh to1ne-db-geo@dr-console -to1ne-db-geo@console-01-sv-dr.c.gitlab-dr.internal: Permission denied (publickey). -``` - -Unix usernames: `to1ne` and `dbalexandre`",3.0 -21070907,2019-05-19 22:53:33.561,Extend source port range of network ACL to fix #6656,"## What are we going to do? -Change the port range in Inbound ACL rule 700 of production-external-network-acl Network ACL to ""1024-65535"" - -## Why are we doing it? -Right now, roughly 10% of outbound TCP connections fail, when the randomly chosen source port is between 1024 and 6665 (the current lower limit of the port range); the reply (e.g. SYN-ACK) packets are dropped by the Network ACL we want to change. See #6656 for the report + diagnosis of the issue. - -## When are we going to do it? - -* Start time: 2019-05-21 08:00 UTC -* Duration: 1 minute -* Estimated end time: 2019-05-21 08:01 UTC - -## How are we going to do it? -Manual edit of the ACL rules using the AWS Web Console - -## How are we preparing for it? -Co-ordinating with Gitter engineers and GitLab SRE's - -## What can we check before starting? -Nothing - -## What can we check afterwards to ensure that it's working? -Use the debug one-liner from #6656 (`for i in $(seq 1 20); do curl -vi api.ipify.org; done`). We're expecting that all these calls succeed, quickly. Any failures indicate the fix has not had the desired effect; more than 10% failures indicate we've inexplicably made the situation worse. - -## Impact - -* Type of impact: none expected -* What will happen: Nothing negative -* Do we expect downtime? (set the override in pagerduty): None - -## How are we communicating this to our customers? -* Nothing required - -## What is the rollback plan? -Change the lower bound of the source port back to 6665",1.0 -20971792,2019-05-16 17:36:19.844,Update *.gstg.gitlab.net certificate,"We are seeing some odd issues with the internal load balancer and we noticed a failing certificate. We should update this. - - -- pre-conditions for execution of the step -1. The newly renewed certificate has been downloaded with sslmate. ```sslmate download *.gstg.gitlab.net``` -2. The proper fields to be updated in the gkms vaults are identified and the old cert is verified to match the new one. In the chef-repo, run these commands: -* ```./bin/gkms-vault-show frontend-loadbalancer gstg``` -* ```./bin/gkms-vault-show gitlab-omnibus-secrets gstg``` -3. A backup copy of the old certificate field is stored locally. -4. A properly formatted version of the new cert is already made and formatted for JSON. -```awk 'NF {sub(/\r/, """"); printf ""%s\\n"",$0;}' *.gstg.gitlab.net.chained.crt > json.*.gstg.gitlab.net.cert``` - -- execution commands for the step -1. 
Stop chef on the service fleet that serves the cert in question. ```knife ssh ""role:gstg-base-lb-fe"" “sudo service chef-client stop“``` -2. Edit the vaults that contain the cert using a command like this: ```./bin/gkms-vault-edit frontend-loadbalancer gstg``` -3. Find and replace the cert field I identified earlier in the JSON and save the changes. The following fields will be updated: ```gitlab-haprox -> ssl -> internal_crt``` -4. There is a load balancer certificate in GCP that needs to be updated also. This [certificate](https://console.cloud.google.com/net-services/loadbalancing/advanced/sslCertificates/list?project=gitlab-staging-1&authuser=0&organizationId=769164969568&orgonly=true&sslCertificateTablesize=50) will be deleted and replaced with the new certificate. -5. Force a chef-run on a server like fe-01-lb-gstg.c.gitlab-staging-1.internal and verify it is using the new certificate. -6. Cert verification ```echo | openssl s_client -connect 10.224.14.18:443 2>/dev/null | openssl x509 -noout -dates``` -7. Restart chef on the nodes from the first step. ```knife ssh “role:gstg-base-lb-fe” “sudo service chef-client start”``` - -- Rollback -1. Edit the altered vault using the gkms-vault-edit command. -2. Replace the changed cert with the old one that was copied into the .crt file. -3. Delete and replace the GCP certificate with a backup of the old certificate recorded in the pre-steps. -4. Save the changes and force a chef run on the test system and verify it is fixed (or back to normal).",1.0 -20964347,2019-05-16 13:30:32.768,GitLab Hosted version of Codesandbox Sandpack,"### Overview - -As part of gitlab-org/gitlab-ce#58548 we need to setup a new S3 bucket to serve the static sandpack script required for a GitLab hosted Codesandbox. - -### Proposal - -Serve the javascript from an S3 bucket on a custom domain with SSL enabled. Requirements: - - Domain (needs to be purchased): gitlab-sandbox.com - - S3 Bucket Configured - - SSL Enabled - -The entirety of the linked the feature is targeted for the 12.1 Release (2019-07-22) and this is a dependency to that. - -### Links / References - -https://docs.gitlab.com/ee/user/project/web_ide/index.html#enabling-client-side-evaluation -https://gitlab.com/gitlab-org/gitlab-ce/issues/58548#note_170759249",3.0 -20943994,2019-05-15 21:08:17.997,Setup salesforce Omniauth with GitLab.com,"With https://gitlab.com/gitlab-org/gitlab-ce/issues/57077#note_169752576, we are being asked to setup Omniauth for GitLab.com with Salesforce. There is a promotion where we are looking to go live by May 29 so we have a clear understanding of expected dates. For infra, we should look to have an initial setup running next week (week of May 20). - -Documentation: -https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/27834/diffs?commit_id=97fa2bf90da6489e84b143f4d7b00395b634ffc7 - -### Todo - - - [ ] Disable Salesforce oauth sign-in source on staging, for consistency with other providers",2.0 -20786207,2019-05-11 17:37:58.330,Clean up terraform runs on gstg/gprd,"Right now our `tf plan` runs from master return several changes to apply. This has caused some issues recently (e.g. https://gitlab.com/gitlab-com/gl-infra/production/issues/816) because it forces us to do only targeted runs. It will also be troublesome moving forward with our k8s plans and efforts. - -Following are the current changes in the output of `tf plan` that we need to address - -- [x] Update `metadata.GL_KERNEL_VERSION` on all `google_compute_instance`s. 
This was caused by the work to update to Bionic https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6539. It should be harmless to apply all these changes since the only place that metadata value is used is by the bootstrap script. Only new instances care about these value, existing ones will not attempt to update the kernel version. -- [x] Non-changes like -``` - ~ module.postgres-zfs.google_compute_instance.instance_with_attached_disk - metadata.block-project-ssh-keys: ""true"" => ""TRUE"" -``` -... which should be fine to apply as well -- [ ] Shrink prometheus disks, which forces new resources. E.g.: -``` --/+ module.prometheus-app.google_compute_disk.default[0] (new resource required) - id: ""prometheus-app-01-inf-gprd-data"" => (forces new resource) - creation_timestamp: ""2018-07-05T04:37:48.972-07:00"" => - disk_encryption_key_sha256: """" => - label_fingerprint: ""LCTWFBEFgPA="" => - labels.%: ""3"" => ""3"" - labels.do_snapshots: ""true"" => ""true"" - labels.environment: ""gprd"" => ""gprd"" - labels.pet_name: ""prometheus-app"" => ""prometheus-app"" - last_attach_timestamp: ""2018-07-05T04:38:02.436-07:00"" => - last_detach_timestamp: """" => - name: ""prometheus-app-01-inf-gprd-data"" => ""prometheus-app-01-inf-gprd-data"" - project: ""gitlab-production"" => ""gitlab-production"" - self_link: ""https://www.googleapis.com/compute/v1/projects/gitlab-production/zones/us-east1-c/disks/prometheus-app-01-inf-gprd-data"" => - size: ""4000"" => ""100"" (forces new resource) - source_image_id: """" => - source_snapshot_id: """" => - type: ""pd-standard"" => ""pd-ssd"" (forces new resource) - users.#: ""1"" => - zone: ""us-east1-c"" => ""us-east1-c -``` -- [x] There's a lot of shuffling around ports on prometheus instances for some reason. 
E.g.: -``` - ~ module.prometheus-app.google_compute_instance_group.default[0] - named_port.0.name: ""http"" => ""prometheus-app"" - named_port.0.port: ""80"" => ""9090"" - named_port.1.name: ""https"" => ""http"" - named_port.1.port: ""443"" => ""80"" - named_port.2.name: ""prometheus-app"" => ""https"" - named_port.2.port: ""9090"" => ""443"" -``` -These should be fine as well, since the behavior is the same -- [ ] Another set of apparent non-changes for attached disks from prometheus instances, e.g.: -``` - ~ module.prometheus-db.google_compute_instance.default[1] - attached_disk.0.source: ""https://www.googleapis.com/compute/v1/projects/gitlab-production/zones/us-east1-d/disks/prometheus-db-02-inf-gprd-data"" => ""${google_compute_disk.default.*.self_link[count.index]}"" -``` -- [x] Pubsubbeat instance and topic to be recreated: - -``` --/+ module.pubsubbeat.google_pubsub_topic.mytopic[7] (new resource required) - id: ""projects/gitlab-production/topics/pubsub-geo-inf-gprd"" => (forces new resource) - name: ""pubsub-geo-inf-gprd"" => ""pubsub-rspec-inf-gprd"" (forces new resource) - project: ""gitlab-production"" => --/+ module.pubsubbeat.google_compute_instance.default[7] (new resource required) - id: ""pubsub-geo-inf-gprd"" => (forces new resource) -``` -- [x] `metadata.CHEF_VERSION` on pubsubbeat instances (""12.19.36"" => ""12.22.5"") -- [x] `allow_stopping_for_update` ""false"" => ""true"", and `machine_type` changes on `module.sidekiq.google_compute_instance`s -- [x] `module.postgres-zfs.google_compute_firewall.public` to be deleted - -/cc @gitlab-com/gl-infra for whoever has more specific info about each of those changes.",2.0 -20761748,2019-05-10 14:22:47.633,Snowplow: Logging and log shipping,"Without any specific need, local logging or shipping to a local S3 bucket are probably the simplest way to keep logs from the system in AWS. - -Are there any requirements for logging and integrating logs into log.gitlab.com?",1.0 -20709802,2019-05-09 02:42:21.992,Update Salesforce's sandbox credentials for customers.stg.gitlab.com,"We need to update the Sandbox [credentials](https://gitlab.com/gitlab-cookbooks/cookbook-customers-gitlab-com/blob/5c4ce40bdec1e9752be6650213471caca4647b3a/templates/default/secrets.yml.erb#L48-52) for Salesforce, new credentials ara available in the `Subscription Portal` shared vault from 1password.",1.0 -20703353,2019-05-08 18:56:38.707,Snowplow: Docker vs VM,"In order to move quickly, we will be deploying the Snowplow collectors in AWS. - -**Outstanding question:** -Should we use EC2 or ECS, or something else in AWS to host the components of the Snowplow pipeline? -",1.0 -20648598,2019-05-07 11:41:20.529,Missing haproxy logs since the 1.8 upgrade,"We are currently not receiving or logging anything for haproxy to `/var/log/haproxy.log`, the last log message I see is on `May 2 21:29:07` - - -This roughly corresponds to the haproxy upgrade: - -``` -Commandline: apt-get -q -y install haproxy=1.8.8-1ubuntu0.4 -Requested-By: alejandro (1007) -Install: haproxy:amd64 (1.8.8-1ubuntu0.4) -End-Date: 2019-05-02 21:43:12 -``` - -cc @alejandro",2.0 -20615587,2019-05-06 11:11:47.295,ChatOps: Match statement timeout with production setting,"ChatOps can currently only get plan for queries that return within 5s. That is because we have a lower `statement_timeout` set than the default production setting (which is 15s). - -This means that we cannot use ChatOps for rather expensive queries. 
Oftentimes, those are the queries we actually care about and want to improve and hence we need plans for them. - -Currently, the procedure is to reach out to somebody with database access to get plans for those queries (example: https://gitlab.slack.com/archives/C3NBYFJ6N/p1557140206109800). - -The proposal here is to match the production `statement_timeout` setting for ChatOps, such that we can get plans through chatops for queries up to 15s.",1.0 -20615133,2019-05-06 10:49:57.860,Improve high-frequency database query,"This is to track the gitlab-ee code change to improve the high-frequency database query found in https://gitlab.com/gitlab-org/gitlab-ce/issues/60524. - -The query takes more than 50% of total database time on the primary and is expected to generate significant load on the database.",5.0 -20575927,2019-05-04 00:36:22.062,version.gitlab.com SSL certificate expires soon,"Expires: Friday, May 10, 2019 at 7:59:59 PM Eastern Daylight Time",1.0 -20535738,2019-05-02 16:50:19.715,Make https://hub.gitlab.com redirect to lab.github.com,"https://gitlab.slack.com/archives/C101F3796/p1556815592121200 - -https://news.ycombinator.com/item?id=19806284",1.0 -20533888,2019-05-02 15:29:28.607,Set lifecycle policy to change the storage class of some objects in the log-archive bucket,"The log archive bucket is 484TB and represents a significant cost when it comes to object storage. -This bucket alone is 10k/month, the same amount of data in coldline storage is about 10% of this cost. -This issue is to discuss a policy for changing the storage class of objects to coldline storage. -Before we can work on this issue we need to decide on the lifecycle policy. - -I suggest that we start with anything older than 90days. - - -## References - -- https://cloud.google.com/storage/docs/lifecycle -- https://cloud.google.com/storage/docs/storage-classes",2.0 -20420544,2019-04-29 01:55:42.270,githost.io - SSL certificate expiring soon,Tracking the work for page: githost.io - SSL certificate expiring soon,2.0 -20353680,2019-04-25 20:38:50.486,Update cert for registry.gitlab.com,"# Production Change - Criticality 2 ~""C2"" - -| Change Objective | Prevent registry.gitlab.com certificates from expiring.| -|:---|:---| -| Change Type | ~C2 | -| Services Impacted | GitLab.com | -| Change Team Members | @cmcfarland | -| Change Severity | ~S2 | -| Buddy check or tested in staging | I don't think we can test this in staging. Will pair with a colleague for coverage. | -| Schedule of the change | **See Comments** | -| Duration of the change | **See Comments** | -| Detailed steps for the change. Each step must include: | See below | - -* Verify certificate is valid locally. -* Re-format the certificate for vault entry ( https://gitlab.com/gitlab-com/runbooks/blob/master/howto/ssl_cert.md ) -* Use gkms-vault-edit to replace the SSL certificate. -* Find some way to verify the change is working in production.",1.0 -20353636,2019-04-25 20:37:51.612,Update cert for gitlab.com,"# Production Change - Criticality 2 ~""C2"" - -| Change Objective | Prevent gitlab.com certificates from expiring.| -|:---|:---| -| Change Type | ~C2 | -| Services Impacted | GitLab.com | -| Change Team Members | @cmcfarland | -| Change Severity | ~S2 | -| Buddy check or tested in staging | I don't think we can test this in staging. Will pair with a colleague for coverage. | -| Schedule of the change | **See Comments** | -| Duration of the change | **See Comments** | -| Detailed steps for the change. 
Each step must include: | See below | - -* Verify certificate is valid locally. -* Re-format the certificate for vault entry ( https://gitlab.com/gitlab-com/runbooks/blob/master/howto/ssl_cert.md ) -* Use gkms-vault-edit to replace the SSL certificate. -* Find some way to verify the change is working in production.",1.0 -20349077,2019-04-25 17:07:42.124,Update cert for staging.gitlab.com,"# Production Change - Criticality 2 ~""C2"" - -| Change Objective | Prevent staging.gitlab.com certificates from expiring. | -|:---|:---| -| Change Type | ~C3 | -| Services Impacted | Staging.GitLab.com | -| Change Team Members | @cmcfarland @devin | -| Change Severity | ~S2 | -| Buddy check or tested in staging | I don't think we can test this in staging. Will pair with a colleague for coverage. | -| Schedule of the change | **See Comments** | -| Duration of the change | **See Comments** | -| Detailed steps for the change. Each step must include: | See below | - -* Verify certificate is valid locally. -* Re-format the certificate for vault entry ( https://gitlab.com/gitlab-com/runbooks/blob/master/howto/ssl_cert.md ) -* Use gkms-vault-edit to replace the SSL certificate. -* Find some way to verify the change is working in production.",1.0 -20347703,2019-04-25 15:58:31.764,Standard database console access to archive replica for analytics,"Follow-up from https://gitlab.com/gitlab-com/access-requests/issues/835#note_164280462 - -To my knowledge, a standard (read-only) database console would give access to a production replica (ie one that participates in the Patroni HA cluster) only. - -I would suggest we implement access for the archive replica for cases like https://gitlab.com/gitlab-com/access-requests/issues/835. This would allow people to run adhoc queries with much more freedom (timeouts) and no concerns about affecting the production HA cluster for GitLab.com (but with the same data).",2.0 -20326363,2019-04-25 00:37:46.436,RCA For 2019-04-24 Google Load Balancer Anomalies,"## Summary - -There was a problem with Google ILB health checks which was causing the database load balancer to send traffic to read-only secondary nodes instead of the primary. - -The Incident Issue is: https://gitlab.com/gitlab-com/gl-infra/production/issues/802 - -The problem started out slowly in the afternoon, at first mostly manifesting in the Pull Mirrors failing, but over the course of several hours it started compounding as other sidekiq jobs started retrying and web nodes started being directed to the read only databases. - - -Service(s) affected : Database and therefore everything in the us-east1-d zone - -Team attribution : - -Minutes downtime or degradation : - -## Impact & Metrics - - -- What was the impact of the incident? -Increased errors, especially in sidekiq jobs. Pull mirrors didn't work - -- Who was impacted by this incident? -All gitlab.com users, especially those using pull mirrors - -- How did the incident impact customers? -Mirrors were unable to run and sidekiq jobs failed. Web hooks failed. - -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - - -- How was the incident detected? -The first alert was for pull mirrors overdue. 
There was very little actual evidence in the first hour or so, only seemingly unconnected suspicious behavior. The incident issue was opened based on instinct and suspicion rather than data. - -- Did alarming work as expected? -The existing alerts worked based on the number of errors observed, but could have been more helpful with identifying the source of the problem. - -- How long did it take from the start of the incident to its detection? -25 minutes - -- How long did it take from detection to remediation? -14 hours to full remediation - -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -The Rackspace support layer adds a lot of overhead between us and google: https://gitlab.com/gitlab-com/gl-infra/production/issues/802#note_163652594 - - - -## Timeline - -2019-04-24 - -- 00:00 UTC - Pull Mirror Jobs stopped processing -- 00:25 UTC - First alert to pager -- 00:30 UTC - Troubleshooting.... -- 01:42 UTC - Ran last resort commands from runbook, changed, but not fixed -- 03:00 UTC - [Incident created](https://gitlab.slack.com/archives/CB7P5CJS1/p1556074810015200)

-troubleshooting continued...
-- 06:25 UTC - Incident call started

-- 06:40 UTC - Tweet sent out and updated status.io -- 06:50 UTC - Rackspace ticket opened: https://portal.rackspace.com/1173105/tickets/details/190424-ord-0000235 -- 07:00 UTC - First attempt to call Rackspace -- 07:01 UTC - Paged DBRE for assessment -- 07:40 UTC - Got a hold off Rackspace team and hopped on a Zoom to troubleshoot the issue on our end -- 07:52 UTC - GCP responded confirming small amount of traffic was sent to unintentional nodes -- 08:46 UTC - Confirmed our theories that something was off with the LB -- 08:51 UTC - GCP responded with some more updates about narrowing down the issue -- 10:49 UTC - Removed all read-only patroni nodes from their respective instance groups except for the primary -- 11:18 UTC - Reset redis key set and triggered the workers to pick up jobs -- 11:36 UTC - Queued jobs to 0, alerts cleared up - tweeted and updated status.io -- 11:52 UTC - GCP still investigating -- 12:30 UTC - GCP resolved the inconsistency -- 15:33 UTC - Started sidekiq-cluster on the few sidekiq-* nodes in 1-d zone - -## Root Cause Analysis - -An update from Google: - -> What exactly was happening is that health status of ILB backends was failing to propagate to instances in us-east1-d, and thus requests from them were forwarded to all backends configured, not just to the healthy one (That is the reason why you observed traffic going to -b as well). ILB and as well as this issue is not happening close to the server VM side, but close to the client VM. - -Last update from Google: - -> After further investigation, we believe that the temporary inconsistency of ILB backend health observed by VM Instances in us-east1-d was caused by a rollout of one of our control plane components in that zone. This issue affected only small amount of Virtual Private Networks and is likely triggered only together with VPC Peering. -> -> The engineering team resolved the inconsistency and we believe there should be no further impact. - - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -1. Why? - Jobs are intermittently failing -2. Why? - They can't write to the database sometimes -3. Why? - The load balancer sends some traffic to the wrong databases -4. Why? - The load balancer is not interpreting health checks correctly -5. Why? - The rollout of a Google control plane component in us-east1-d caused health checks to be reported incorrectly to the load balancers - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. - -The team piled on this very nicely. Everyone contributed what they could and spread the workload out. - -Even though we didn't have alerting that covered this case, the existing alerts let us know that something was 'just not right' - -- Any additional call-outs for what went particularly well. - -The team did an AMAZING job with some very difficult troubleshooting to narrow down an issue that didn't have much conclusive evidence pointing to the cause.

- -Rackspace support was able to help us troubleshoot the issue to make sure we had all of our bases covered. Stackdriver - Monitoring Metrics was very helpful in checking the LB egress traffic, and this was a good proof point that traffic was sent to unintended nodes/zones. The graphs showed the impact and the recovery very clearly.

- -Even in the midst of the chaotic incident, we took the time to create and execute peer-reviewed change request to remove instances from instance groups instead of taking actions on the fly. - -## What can be improved - - -- Whether this happens again is up to Google. We can put pressure on them to be more careful, and we can build some more resilience into some of our jobs - especially sidekiq jobs which aren't idempotent. -- The incident started in the late afternoon Hawaii time. The US had left for the day and the EU had not yet woken up, so there was not a lot of help available for a few hours. -- The runbook for Pull Mirrors just says to post and ask for help in the `#backend` Slack channel. When this was posted, it was in the EU early morning hours, so it took a long time for anyone to see it. -- Our database alerts and monitoring don't cover problems like this. -- There was no indication beforehand that this might happen, no existing issue related to it, and no reason to think it was something we should plan for. -- We should brainstorm, plan and test scenarios like this to see how our system reacts. -- We should practice making changes in our production infrastructure more often and for more scenarios so that we always feel comfortable doing similar changes during incident. During this incident, removing instances from instance groups VS removing instance groups from LB took a little bit of time because of uncertainty around the setup and how the system would react. We were able to collectively talk through, wrote down steps and executed - but this is something we should be able to practice even during non-incident time. - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Include the named individual who owns the delivery of the corrective action. - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -20325677,2019-04-24 23:30:35.907,Update fe-lb hosts to Ubuntu Bionic,"Part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6539 - -Unblocks https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5293 - -### pre - -- chef-repo MR: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1050 -- gitlab-com-infrastructure MR: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/752 - -### gstg - -- chef-repo MR: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1062 -- gitlab-com-infrastructure MR: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/753 - -### gprd - -- chef-repo MR: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1065 -- gitlab-com-infrastructure MR: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/767 -- Production change issue: https://gitlab.com/gitlab-com/gl-infra/production/issues/815",4.0 -20325450,2019-04-24 23:00:43.453,work_mem should be reconsidered on all Postgres nodes,"# Current picture: amount of temporary files generated - -Currently, `work_mem` is set to `'16MB'` on all Postgres GitLab.com nodes. 
- -Its is insufficient, as we can see from temporary files creation statistics (see [the full postgres-checkup data](https://gitlab.com/gitlab-com/gl-infra/production/snippets/1848486#postgres-checkup_A004)): - -master (`patroni-04`): - - Indicator | Value ------------|------- -Stats Since | 2019-02-13 21:05:54+00 -Stats Age | 62 days 11:02:22 -Temp Files: total size | 5647 GB -Temp Files: total number of files | 483381 -Temp Files: total number of files per day | 7796 -Temp Files: avg file size | 12 MB - -– 7482 * 12 MiB = ~87.7 GiB, this is how much temporary files is written every day in average, according last 64 days stats. - -A replica (`patroni-01`): - - Indicator | Value ------------|------- -Stats Since | 2019-03-21 01:44:39+00 -Stats Age | 27 days 06:25:29 -Temp Files: total size | 11 TB -Temp Files: total number of files | 225048 -Temp Files: total number of files per day | 8335 -Temp Files: avg file size | 52 MB - -– 7895 * 56 = ~431 GiB (!) of tmp files data daily. - -This looks like huge inefficiency, which is relatively easy to fix. Here is how much memory we have and how it is used now: - -# Current picture: use of memory - -The master (`patroni-04`): - -``` -MemTotal: 429299392 kB -MemFree: 26501664 kB -MemAvailable: 269576376 kB -Buffers: 116700 kB -Cached: 357627264 kB -SwapCached: 0 kB -Active: 327929036 kB -Inactive: 36116408 kB -Active(anon): 126797136 kB -Inactive(anon): 1784692 kB -Active(file): 201131900 kB -Inactive(file): 34331716 kB -``` - -A replica (`patroni-01`): - -``` -MemTotal: 429299392 kB -MemFree: 38309544 kB -MemAvailable: 289822468 kB -Buffers: 111740 kB -Cached: 366551768 kB -SwapCached: 0 kB -Active: 339501400 kB -Inactive: 29624252 kB -Active(anon): 122687420 kB -Inactive(anon): 2000436 kB -Active(file): 216813980 kB -``` - -As we can see, from all ~400GiB of RAM available, right now ~250 GiB is used for OS file cache. - -On the master, average tmp file size is just 12 MiB. On replicas, it's more, 52 MiB. - -# Considerations on work_mem increase - -`work_mem` is not allocated fully for every session – a session might use a fraction of it. On the other hand, a single session might use multiple `work_mem` portions, if the statement being executed has multiple operations that need memory. So rough calculations of worst case scenario, when all `max_connection` (which is 300 now) sessions are consuming `work_mem` fully two times, is 300 * 16 MiB * 2 = 9600 MiB. This is a very small fraction of total RAM available. - -If we raise `work_mem` to `'64MB'` – 4 times more than the current value – we'll have 300 * 64 MiB * 2 = 38400 MiB. It's still not a problem, we will just take ~38 GiB from those 250 GiB of OS file cache. And this is the *worst* case, which actually will not happen. But `work_mem = '64MB'` will allow us to get rid of most temporary files. - -Additionally, we know from [postgres-checkup's K001 report](https://gitlab.com/postgres-ai-team/hc-results-tmp/blob/master/g/md_reports/20190419002_2019_04_19T07_13_42_-0700/0_Full_report.md#postgres-checkup_K001) that reading from OS file cache is very rare -- just hundreds blocks per second. This means, that the most of working data set is present in the buffer pool. This again tells us that we can easily increase `work_mem`, even to values like '100MB' and more. 
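-
-One way to verify the effect of a `work_mem` change afterwards (just a sketch – the exact psql invocation on the patroni nodes may differ) is to sample the cumulative temporary-file counters Postgres keeps per database before and after the reload and compare the deltas:
-
-```bash
-# pg_stat_database.temp_files / temp_bytes are cumulative since stats_reset,
-# so take one sample before the change and one a day later, then diff them.
-sudo gitlab-psql -c "
-  SELECT datname,
-         temp_files,
-         pg_size_pretty(temp_bytes) AS temp_size,
-         stats_reset
-  FROM pg_stat_database
-  WHERE datname = 'gitlabhq_production';"
-```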
- -# Log analysis - -Let's see average and stddev for temporary file sizes for one given day (April 23): -```bash -$ sudo journalctl --since '2019-04-23' --until '2019-04-24' | grep ""temporary file"" | awk '{print $18}' | awk '{for(i=1;i<=NF;i++) {sum[i] += $i; sumsq[i] += ($i)^2}} - END {for (i=1;i<=NF;i++) { - printf ""%f %f \n"", sum[i]/NR, sqrt((sumsq[i]-sum[i]^2/NR)/NR)} - }' -46059369.520593 10821203.571495 -``` --- average size was 46 MB, standard deviation ~11 MB. - -This means, that raising to `'64MB'` might in insufficient if we want to decrease number of files generated per day to dozens – additional increase to values like 70-80, up to 100 MB might be needed. - -# Proposal - -1. Raise `work_mem` to `'100MB'` on all Postgres nodes. Restart is not required, only config reloading. -1. After a few days, analyze tmp files generation and memory consumption again. If needed, reconsider `work_mem` again.",3.0 -20320734,2019-04-24 17:48:59.427,Chef failing in gprd across the board,"Chef runs are failing everywhere in gprd for the last 8 hours: - -``` -[2019-04-24T17:43:18+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out -[2019-04-24T17:43:18+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report -[2019-04-24T17:43:18+00:00] ERROR: Cookbook 'seven_zip' version '3.1.0' depends on chef version ["">= 13.0""], but the running chef version is 12.22.5 -[2019-04-24T17:43:18+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1) -``` - -stacktrace.out looks like: - -``` -Generated at 2019-04-24 17:43:18 +0000 -Chef::Exceptions::CookbookChefVersionMismatch: Cookbook 'seven_zip' version '3.1.0' depends on chef version ["">= 13.0""], but the running chef version is 12.22.5 -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/cookbook/metadata.rb:721:in `validate_chef_version!' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/cookbook/cookbook_collection.rb:54:in `block in validate!' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/cookbook/cookbook_collection.rb:53:in `each' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/cookbook/cookbook_collection.rb:53:in `validate!' 
-/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/policy_builder/expand_node_object.rb:85:in `setup_run_context' -/opt/chef/embedded/lib/ruby/2.3.0/forwardable.rb:204:in `setup_run_context' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/client.rb:513:in `setup_run_context' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/client.rb:281:in `run' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application.rb:295:in `block in fork_chef_client' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application.rb:283:in `fork' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application.rb:283:in `fork_chef_client' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application.rb:248:in `block in run_chef_client' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/local_mode.rb:44:in `with_server_connectivity' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application.rb:236:in `run_chef_client' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application/client.rb:464:in `sleep_then_run_chef_client' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application/client.rb:451:in `block in interval_run_chef_client' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application/client.rb:450:in `loop' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application/client.rb:450:in `interval_run_chef_client' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application/client.rb:434:in `run_application' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/lib/chef/application.rb:59:in `run' -/opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.22.5/bin/chef-client:26:in `' -/usr/bin/chef-client:57:in `load' -/usr/bin/chef-client:57:in `
' -``` - -We have had similar problems with the seven_zip version before: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6216",1.0 -20305152,2019-04-24 13:14:41.711,Update cert for *.gprd.gitlab.net in production,"# Production Change - Criticality 2 ~""C2"" - -| Change Objective | Prevent *.gprd.gitlab.net certificates from expiring.| -|:---|:---| -| Change Type | ~C2 | -| Services Impacted | GitLab.com | -| Change Team Members | @cmcfarland | -| Change Severity | ~S2 | -| Buddy check or tested in staging | I don't think we can test this in staging. Will pair with a colleague for coverage. | -| Schedule of the change | **See Comments** | -| Duration of the change | **See Comments** | -| Detailed steps for the change. Each step must include: | See below | - -* Verify certificate is valid locally. -* Re-format the certificate for vault entry ( https://gitlab.com/gitlab-com/runbooks/blob/master/howto/ssl_cert.md ) -* Use gkms-vault-edit to replace the SSL certificate. -* Find some way to verify the change is working in production.",1.0 -20287213,2019-04-24 02:16:39.607,Pull Mirrors not running in production,"PullMirrorsOverdueQueueTooLarge Alert is Firing and Pull Mirror jobs aren't being processed. - -![Screen_Shot_2019-04-23_at_4.11.57_PM](/uploads/38c18ed9789e9b528e5f7afa4b2ca6fc/Screen_Shot_2019-04-23_at_4.11.57_PM.png) -![Screen_Shot_2019-04-23_at_4.12.56_PM](/uploads/0624dc0892167f22b24d965bd127e635/Screen_Shot_2019-04-23_at_4.12.56_PM.png) - -Following the runbook caused mirror jobs to run again for a moment, but they stopped again soon afterward",2.0 -20272407,2019-04-23 14:27:47.902,Rebalance Git Nodes - File 26 and File 28 are above 80%,"We should look at File-26 and File-28 to rebalance and figure out growth on File-28. -https://dashboards.gitlab.net/d/W_Pbu9Smk/storage-stats?refresh=30m&orgId=1 - -file-28 is at 87% and had been steadily growing over the last month.",1.0 -20263041,2019-04-23 09:28:27.892,ChatOps access request for @lmcandrew on ops.gitlab.net,"Similar request as https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6454 - -## What - -I'd like to be given chatops access so I can enable feature flags. I think this entails being added to a group on ops.gitlab.net - -## Why - -In the Manage team we are using per-group feature flags for the first time, and I'd like to test both the features themselves and the process of enabling feature flags.",1.0 -20253280,2019-04-22 21:54:27.769,DR gitaly jobs show as not up in prometheus,"Using this query `up{tier=""stor""}` on https://prometheus.dr.gitlab.net should show `1` on the lines for `job=""gitaly""`, just like it is with the others. - -The services are running on the server: -``` -file-01-stor-dr.c.gitlab-dr.internal:~$ sudo gitlab-ctl status -run: gitaly: (pid 17958) 337468s; run: log: (pid 2383) 862976s -run: logrotate: (pid 19323) 2565s; run: log: (pid 2381) 862976s -```",1.0 -20247897,2019-04-22 18:52:06.250,"Important SSL certificates expiring in May, 2019","Opening up this issue so we have extra attention on the ssl certificates that are expiring in may, some over contribute: - -* gitlab.com *May 11 23:59:59 2019 GMT* -* int.gprd.gitlab.net *May 11 23:59:59 2019 GMT* -* registry.gitlab.com *May 22 23:59:59 2019 GMT*",1.0 -20243671,2019-04-22 16:16:25.418,Add runner version var to about-gitlab-com cookbook,In `cookbook-about-gitlab-com::runner` the gitlab-runner package version is locked to prevent unwanted upgrades. 
We should add a version role variable to make the intention clear and automate upgrading via chef instead of having to do it manually.,2.0 -20240684,2019-04-22 14:02:16.865,Upgrade runner on about.gitlab.com,gitlab-runner on about-src.gitlab.com is still on the very old version version 10.4 but with the recently introduced feature `ci_use_merge_request_ref` runners need to have version 11.9+. This is blocking review app deploys. We need to upgrade the runner.,2.0 -20235184,2019-04-22 08:28:20.340,Clear out S3 artifacts bucket,"As of last week, we are officially moved to GCS for artifacts (https://gitlab.com/gitlab-com/gl-infra/production/issues/783). We should delete the artifacts from S3 and remove the bucket.",1.0 -20183369,2019-04-18 20:41:45.303,Omnibus version incorrect in chef,"The omnibus version value is not correct in some environments. After deploying `11.10.0-rc8.ee.0`, the roles look like this: - -Production: -``` -chef_type: role -default_attributes: - omnibus-gitlab: - package: - key: ccc8474ea719d51d16542f1a69117ec7d7465bd18b19a832 - name: gitlab-ee - repo: gitlab/pre-release - use_key: true - version: 11.10.0-rc8.ee.0 -description: THIS IS A PLACEHOLDER FOR TAKEOFF - VERSION IS NOT VALID, DO NOT UPLOAD TO CHEF -env_run_lists: -json_class: Chef::Role -name: gprd-omnibus-version -override_attributes: -run_list: -``` - -Staging also appears correct: -``` -chef_type: role -default_attributes: - omnibus-gitlab: - package: - key: ccc8474ea719d51d16542f1a69117ec7d7465bd18b19a832 - name: gitlab-ee - repo: gitlab/pre-release - use_key: true - version: 11.10.0-rc6.ee.0 -description: THIS IS A PLACEHOLDER FOR TAKEOFF - VERSION IS NOT VALID, DO NOT UPLOAD TO CHEF -env_run_lists: -json_class: Chef::Role -name: gstg-omnibus-version -override_attributes: -run_list: -``` - -DR (Note the lack of version): -``` -chef_type: role -default_attributes: - omnibus-gitlab: - package: - key: ccc8474ea719d51d16542f1a69117ec7d7465bd18b19a832 - name: gitlab-ee - repo: gitlab/pre-release - use_key: true -description: THIS IS A PLACEHOLDER FOR TAKEOFF VERSION IS NOT VALID, DO NOT UPLOAD TO CHEF -env_run_lists: -json_class: Chef::Role -name: dr-omnibus-version -override_attributes: -run_list: -``` - -Pre was updated before production: -``` -chef_type: role -default_attributes: - omnibus-gitlab: - package: - key: ccc8474ea719d51d16542f1a69117ec7d7465bd18b19a832 - name: gitlab-ee - repo: gitlab/pre-release - use_key: true - version: 11.9.8-ee.0 -description: THIS IS A PLACEHOLDER FOR TAKEOFF - VERSION IS NOT VALID, DO NOT UPLOAD TO CHEF -env_run_lists: -json_class: Chef::Role -name: pre-omnibus-version -override_attributes: -run_list: -```",3.0 -20175680,2019-04-18 16:54:26.390,ChatOps access request for @nfriend on ops.gitlab.com,"### What -I'd like to be given ChatOps access in order to enable and disable feature flags on staging.gitlab.com. - -### Why -Being able to enable and disable feature flags on staging.gitlab.com will allow me to test features hidden behind feature flags as part of our QA process. - -### Background - -I'm following the example of this issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6454",1.0 -20173929,2019-04-18 15:24:52.517,Automate postgres checkup report,"Let's automate running https://gitlab.com/postgres-ai-team/postgres-checkup and produce a new checkup report every week (?). The result can be pasted into an issue in the infrastructure queue and reviewed from there. 
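-
-As a rough sketch of what the weekly scheduled job could run (everything below is a placeholder – `run_checkup.sh` stands in for the actual postgres-checkup invocation, and `GITLAB_API_TOKEN` / `CHECKUP_PROJECT_ID` would be CI variables of the scheduled pipeline):
-
-```bash
-# run postgres-checkup against the target cluster and capture the report
-./run_checkup.sh > report.md
-# file the report as a new issue in the target project via the GitLab API
-curl --silent \
-  --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
-  --data-urlencode "title=postgres-checkup report $(date +%F)" \
-  --data-urlencode "description@report.md" \
-  "https://gitlab.com/api/v4/projects/${CHECKUP_PROJECT_ID}/issues"
-```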
- -[An example report](https://gitlab.com/gitlab-com/gl-infra/production/snippets/1845548#postgres-checkup_K003) led to lots of great insights, e.g. in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6590 et al.",2.0 -20150946,2019-04-17 20:41:04.330,access to dr prometheus,"My account is not able to access https://prometheus.dr.gitlab.net/ -![Screen_Shot_2019-04-17_at_4.16.49_PM](/uploads/5d0c545175c8df282072a84758783afe/Screen_Shot_2019-04-17_at_4.16.49_PM.png)",1.0 -20141253,2019-04-17 13:44:47.044,Update some credentials for https://customers.stg.gitlab.com,"We need to update the [following fields](https://gitlab.com/gitlab-cookbooks/cookbook-customers-gitlab-com/blob/master/attributes/default.rb#L18-19) on the Chef vault: - -`zuora_api_user` and `zuora_api_password` - -The values are stored on the `Subscription Portal` vault that has been shared with the Infrastructure team.",1.0 -20137208,2019-04-17 11:06:06.278,Switch to hourly rotation for some Elastic indices,"Some of our elastic indices are very big - up to a 1TB in size. - -This may be causing some of the problems we're seeing with our Elastic cluster. - -In order to fix the cluster, @ahmadsherif @jarv @mwasilewski\-gitlab and I discussed solutions, and switching over to hourly logging seems to be fairly straight-forward approach which may help. - -We aim to do this for the following logs: - -* `rails` -* `gitaly` -* `workhorse` - -The `nginx` logs are also quite large, but we intend to stop sending these to ElasticSearch: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6542 (and only storing them in GCS) - - -**Note** changes may need to be made to the retention tools in https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/esc-tools/",3.0 -20131255,2019-04-17 08:05:36.206,Rebuild invalid index: projects_mirror_user_id_idx,"Index `projects_mirror_user_id_idx` on the `projects` table is invalid: -``` - ""projects_mirror_user_id_idx"" btree (mirror_user_id) INVALID -``` -as detected by recent checkup: - -It was not so a week ago (the report made 6 days ago [shows only one invalid index](https://gitlab.com/gitlab-com/gl-infra/production/snippets/1845548#postgres-checkup_H001), another one, being expected since at that time pg_repack was operating -- it uses names `index_**` when rebuilding indexes). - -An open question is what caused it. - -Action: rebuild it. Perhaps pg_repack is a good option for this. - -/cc @abrandl",1.0 -20081851,2019-04-15 19:14:20.376,open port 9091 for ops prometheus server,"I was trying to setup pushgateway on chef.gitlab.com, license.gitlab.com, version.gitlab.com, and customers.gitlab.com -Ops prometheus server (prometheus-01-inf-ops.c.gitlab-ops.internal) will be scraping them. Currently, ops prometheus server is not able to connect to the 9091 port of those servers. - -Anyone can help open up the port for ops Prometheus server?",2.0 -20075787,2019-04-15 15:16:33.996,fix test kitchen for patroni clusters,please create/fix the test kitchen for patroni clusters on chef,2.0 -20015768,2019-04-12 20:36:35.722,Apply GCP sizing recommendations/downsize nodes in DR,"In looking at GCP sizing recommendations in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6239, it appears that prod may be difficult, but we do have a similar potential for savings in DR and in the case of the file-nn-stor nodes, we have already moved the persistent disks to standard so the cpus should be able to scale down. 
- -Recommendation is to scale down file-nn nodes to n1-standard-16 from 32 and see how things perform as we turn on Geo. We may be able to downsize further after Geo has been running and we see more recommendations. This will initially save us 32 * 32 = 1024 vcpu per month. - -cc @glopezfernandez @andrewn @devin. Assigning devin given that he has been working DR and Geo items.",2.0 -19972071,2019-04-11 15:52:56.551,Turn of SSL compression on PostgreSQL ZFS replication slot,"The #database channel is getting alerted to a replication slot that is currently using SSL compression: - -```sql -gitlabhq_production=# select * from pg_stat_ssl where compression = 't'; --[ RECORD 1 ]---------------------------- -pid | 4391 -ssl | t -version | TLSv1.2 -cipher | ECDHE-RSA-AES256-GCM-SHA384 -bits | 256 -compression | t -clientdn | - -gitlabhq_production=# select * from pg_replication_slots; --[ RECORD 1 ]-------+---------------- -slot_name | postgres_zfs_01 -plugin | -slot_type | physical -datoid | -database | -active | t -active_pid | 4391 -xmin | 3997015837 -catalog_xmin | -restart_lsn | D869/D000000 -confirmed_flush_lsn | -```",1.0 -19972057,2019-04-11 15:52:15.947,PoC: Create test environment with staging database data,"Context: https://gitlab.com/abrandl/poc-test-envs - -Any comments appreciated",1.0 -19960991,2019-04-11 11:07:01.254,osqueryd logrotation,"`/var/log/osqueryd/{watcher,worker}.log` are not rotated. Especially worker log can grow very fast.",2.0 -19937588,2019-04-10 19:02:25.188,Make cron to cleanup temp files on share after a certain age,"As part of researching https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6546 - we should go ahead an add a cron that cleans out share after a set time. - -This will be a bandaid until https://gitlab.com/gitlab-org/gitlab-ce/issues/56712 is in place.",3.0 -19923459,2019-04-10 11:23:49.716,Daily runner usage,"Extracted from : https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6360#note_157441581 - -if we could get another report into CI, which gives us daily runner usage (aggregated on total CI runner minutes per day) by the following categories? - -1. `gitlab`: GitLab and GitLab forks (this could be as simple as checking for ""gitlab-ce""/""gitlab-ee"" in the project path) -1. `free-public`: shared runners for non-paid public projects -1. `free-nonpublic`: shared runners for non-paid non-public projects -1. `paid`: shared runners for non-free projects -1. `private-runners`: non-shared runner projects",2.0 -19920571,2019-04-10 10:10:34.595,Databases reviews,"* [x] @abrandl https://dev.gitlab.org/gitlab/gitlab-ee/merge_requests/866 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10676 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26490 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10756",1.0 -19913546,2019-04-10 07:29:55.566,Consider increasing Runner timeouts for gitlab-shared-runners-manager-X.gitlab.com runners to 90 minutes,"Currently it seems that the runners from `gitlab-shared-runners-manager-X.gitlab.com` are using a 60 minutes timeout which can lead to rspec jobs to timeout even if all the tests actually passed (e.g. https://gitlab.com/gitlab-org/gitlab-ee/-/jobs/193827374, https://gitlab.com/gitlab-org/gitlab-ee/-/jobs/193827346). We should consider increasing this timeout to something like 90 minutes. - -@tmaczukin What do you think? 
- -/cc @meks @godfat",1.0 -19849793,2019-04-08 16:18:58.866,setup an Elastic Cloud cluster for indexing in staging.gitlab.com,"as per: https://gitlab.com/groups/gitlab-org/-/epics/853 we need to set up an ELK cluster to enable Elasticsearch integration on `staging.gitlab.com` for `gitlab-org` group - -~~* [ ] extend our subscription in Elastic Cloud~~ turns out the subscription limit is per deployment so I could create the cluster -* [x] create a cluster for staging -* [x] connect it to `staging.gitlab.com` (limit to `gitlab-org`, do not enable searching for now, let indexing run first) - * [x] check indexing is done, after ~12h: -```bash -$ sudo gitlab-rake gitlab:elastic:index_repositories_status -Indexing is 26.30% complete (811186/3084890 projects) -``` -* [x] indexing is done -* [x] if needed, manually trigger indexing: https://docs.gitlab.com/ee/integration/elasticsearch.html#indexing-large-instances -* [x] enable searching with ELK on `staging.gitlab.com` -* [x] test resizing of the cluster",16.0 -19796443,2019-04-06 01:16:09.828,Investigate 10TB Directory on share-01,"In https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6467 we began to investigate why the `share-01` server filled up so much and so quickly. - -It was determined that the directory `/var/opt/gitlab/gitlab-rails/uploads/-/system/tmp` has 10.1TB of data in it and we need to figure out how it got there and decide if it can be deleted. Given that it is `tmp`, I would imagine we can just delete it, but we need to verify its purpose and why it got to this point before doing so.",3.0 -19752181,2019-04-04 20:49:06.635,Jump Links in Slack lead to old archived channel,"When clicking `Jump` on an alert recovery, acknowledgement or un-acknowledgement in the production channel, it opens the archived `#infrastructure` channel. - -I would expect it to lead to the original alert message in the #production channel",1.0 -19751852,2019-04-04 20:27:33.995,Add alerts for osqueryd metrics,We should add alerts for process_exporter metrics for CPU and disk usage.,3.0 -19751762,2019-04-04 20:22:07.566,Define a production-readiness process for services,"Define and document criteria for a service to be declared ready for production. -This should lead to a process ensuring production-readiness for new and existing services.",3.0 -19748639,2019-04-04 18:41:03.897,[RCA] osqueryd consuming too many resources in production,"## Summary - -Service(s) affected : N/A -Team attribution : infrastructure/security -Minutes downtime or degradation : N/A - -## Impact & Metrics - -- What was the impact of the incident? - - CPU load and disk IO on most machines went up, consuming an equivalent of ca. 60 cores overall in the production fleet. -- Who was impacted by this incident? - - mostly security and infrastructure teams having to spend time finding and fixing the root cause and preventing further impact -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) - - As the cpu usage of osqueryd was isolated to 1 core on each server there was no measurable slowdown of any service but it might be that it was contributing to slower response during times of higher load on the site. -- How many attempts were made to access the impacted service/feature? - - N/A -- How many customers were affected? - - N/A -- How many customers tried to access the impacted service/feature? 
- - N/A - -Overall osqueryd CPU consumption by env (we are missing the week before as we switched to thanos longterm storage last week): -![overall CPU consumption by env](/uploads/317c884da20af07381af518fae1db26e/image.png) - - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - a few hours after prod deployment, SRE oncall was paged for root disk filling up on git-servers (because of growing osqueryd data dir). Investigation showed issues with the embedded rocksdb and [high CPU usage](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6434#note_152859032). -- Did alarming work as expected? - - The existing alarming worked to notify us about systems coming into critical state, but we didn't have specific alerts or monitoring for osqueryd misbehaving. Also, FilesystemFullSoon alerts are not going to pagerduty, but disks filled so fast, that we were at risk to have them filled up before SRE oncall was taking notice. -- How long did it take from the start of the incident to its detection? - - prod deployment started 1am UTC, filesystem-full alerts coming in 5:49am (in slack channel, not via pagerduty), SRE oncall starting to take action 12:15pm -- How long did it take from detection to remediation? - - immediate remediation for FileSystemFull alerts at 11:15am. Trimming down the osqueryd profile took place over several days with finally stopping it in production after 7 days because of rocketdb corruption issues not stopping. -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team member wasn't page-able, ...) - - security team having no insight to metrics or logs of osqueryd was slowing down the process of analysing and fixing the issue - -## Timeline - -2019-03-21 -* 01:00 UTC - starting to deploy osqueryd to production -* 05:49 UTC - FilesystemFullSoon alerts starting for git-\* nodes -* 12:20 UTC - SRE On-call stopping osqueryd on git nodes and cleaning up data dirs -* 15:00 UTC - SRE On-call noticing that uptycs cookbook is re-installing and restarting osqueryd on each chef-client run (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6440) -* 17:30 UTC - Security moved git hosts into new profile, collecting less data -* 18:34 UTC - SRE On-call deploying MR to fix uptycs cookbook reinstall issue (https://gitlab.com/gitlab-cookbooks/gitlab-uptycs/merge_requests/9) - -2019-02-22 -* 07:10 UTC - SRE On-call noticing that not all git nodes got a new profile (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6434#note_153156697) -* 08:21 UTC - security applying the new profile to missing git nodes -* 11:04 UTC - SRE On-call providing list with other hosts having osqueryd issues (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6434#note_153231389) -* 21:01 UTC - new profile without local caching applied to all hosts - -2019-03-25 - -* 11:22 UTC - SRE-oncall still seeing hosts with issues, pinging security through slack - -2019-03-28 - -* 15:50 UTC - process monitoring for osquery deployed, showing a lot CPU usage over the whole fleet (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6434#note_155174540) - -2019-03-29 - -* 13:15 UTC - osqueryd being stopped in production to stop waste of CPU resources until solution is found. 
- -2019-04-03 - -* 02:15 UTC - deploying Uptycs 3.2.6.46 (wich contains a fix for rocksdb) in staging solves the CPU utilization issues (https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/945) - - - - -## Root Cause Analysis - -osqueryd showed anomalous behavior in production. That was caused by a profile that was collecting too many data with too high frequency and a bug specific to RocksDB in Uptycs 3.2.6.40 that was leading to db corruption and spinning with 100% cpu usage on 1 core. We did collect too many different metrics because it was assumed that the default uptycs profile would have no negative impact on our hosts and would not do local caching. That assumption wasn't proofed wrong in tests beforehand - presumably, because we didn't test on hosts with the same workload like in production and because we didn't have enough visibility into the behavior of osqueryd. We missed visibility because we didn't implement monitoring for the behavior of the osqueryd process before going into production and maybe also because we didn't examine the osqueryd logs enough in tests to see what it is collecting. Solving the issue in production took a while because the security team didn't have direct access to the staging and production systems, so they couldn't observe logs or system resource usage while the infra team didn't have insight into the osqueryd configuration profile and also didn't have expertise with the product. - -## What went well - -- Our existing monitoring and alerts made as aware of the issues before they could cause issues in production. -- First response of infra and security teams was fast and we worked closely together to find the root cause and fix it. -- Osqueryd was configured with resource usage limits, so it never exhausted a whole node, but only 1 core per node at maximum - -## What can be improved - -- We should do proper testing before bringing a new service into production. -- SRE and security to work closer together in planning and developing a new service. -- SREs should take a stronger lead on ensuring production readiness for a new service before going live (monitoring, alerting, runbook, ...). -- We should have solid cookbooks with proper tests and ways to enable or disable the service. Initially, the cookbook was always re-installing osqueryd on each chef-run and there was no way to control the service. -- We should only collect the necessary metrics and avoid collecting metrics that are already available through our existing monitoring systems. -- We could have stopped osqueryd in production earlier instead of trying to fix it there and discovering more and more issues in the process. - -## Corrective actions - -- formalize and document production readiness process: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6523 -- add alerts for osqueryd cpu and disk usage: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6524 -- review the profile of osqueryd, what metrics it is collecting how often, and what we really need from it: https://gitlab.com/gitlab-com/gl-security/operations/issues/180 -- deploy fixed Uptycs version 3.2.6.46",2.0 -19745425,2019-04-04 16:08:53.552,Change PatroniIsDown backing metrics,"Right now we use a metric that's exported by mtail to determine if Patroni is up, which has been proven to be unreliable at times. 
Instead we should use process-exporter for this.",2.0 -19717209,2019-04-04 10:05:38.810,Blueprint for MachineResourceManager,"We want to be able to upgrade OS and/or Kernel on our compute machines whenever we want and downgrade if we find issues. The traditional way of doing `apt-get` directly on our hosts is succeeded with a process like `unattended-upgrades`. However, we are taking it to the next level and going with base image creation process. The idea is that we will have a fully automated process that can build a new image with newer OS/kernel and can test it to make sure basic functionalities work without any issues. There is already work being done for this. - -What is needed then? We still need to come up with a process that will take the base image, do all the orchestration work of configuring it the way we want for a given service and replace an existing node with the new one and it should be able to do it for all of our hosts. - -This issue is to keep track of the blueprint for this initiative. It is called: ""MachineResourceManager"". In the work field, ""Human Resource"" is a function that deals with hiring, firing, providing career growth/development, promotion...etc for employees. When demand increases, HR hires more people temporarily. HR lets go off employees and onboards new employees. These are very similar to what we are trying to achieve with the work - the only difference is that it is for machines. We want to be able to replace machines, add some and remove some when needed.",8.0 -19702775,2019-04-03 20:32:00.035,Access Request: MR Access to Ops gitlab-cookbooks,Request for myself (@pharrison on ops.gitlab.net) to be given access to submit MR's to the gitlab-cookbooks project. This will be used to simplify bumping version numbers and adding cookbooks to areas like the ops/gprd/gstg environments json.,1.0 -19683326,2019-04-03 10:29:08.470,Setup omnibus in testbed env,Setup omnibus in the testbed env.,5.0 -19683283,2019-04-03 10:27:25.511,Setup monitoring for testbed env,"* [ ] Setup prometheus server in testbed env -* [ ] Network peering with ops -* [ ] Configure Prometheus, Alertmanager, ...",5.0 -19683151,2019-04-03 10:22:45.702,create testbed bastion host,Stand up bastion host for the testbed. Add ssh config.,3.0 -19682974,2019-04-03 10:15:26.139,Create Chef env for testbed,Create Chef environment for the testbed environment.,3.0 -19680849,2019-04-03 09:52:45.000,Create terraform env for testbed,Create a new terraform env for the testbed environment. Make it work with the gitlab-testbed GCP project (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6510). Store terraform env credentials in 1Password.,5.0 -19680731,2019-04-03 09:47:35.985,Create new gitlab-testbed GCP project,"Create new GCP project for setting up the testbed environment. - -Enable GCP API, KMS API. Setup KMS key, credentials.",2.0 -19668388,2019-04-02 23:17:22.374,LoggingVisibilityDiminished Alerts,"``` -Firing 2 - LoggingVisibilityDiminished -PubSub messages are queuing up. Unacked messages older than 60 seconds exist in the queue -for the last 5 minutes. This will lead to a loss of log data. - - -:desktop_computer: *Services:* - - ** - PubSub queuing high - - PubSub messages are queuing up. Unacked messages older than 60 seconds exist in the queue -for the last 5 minutes. This will lead to a loss of log data. - - - - ** - PubSub queuing high - - PubSub messages are queuing up. Unacked messages older than 60 seconds exist in the queue -for the last 5 minutes. 
This will lead to a loss of log data. - - -:label: *Labels*: - - *Alertname*: LoggingVisibilityDiminished - *Channel*: production - *Env*: gprd - *Monitor*: default - *Provider*: gcp - *Region*: us-east - *Severity*: warn - *Subscription_id*: pubsub-workhorse-inf-gprd-sub -``` - -https://prometheus.gprd.gitlab.net/graph?g0.range_input=1h&g0.expr=sum%20by(subscription_id)%20(stackdriver_pubsub_subscription_pubsub_googleapis_com_subscription_oldest_unacked_message_age)%20%3E%2060&g0.tab=0 - -![Screen_Shot_2019-04-02_at_1.15.52_PM](/uploads/002e9829d311cb471a3fcda6fd4a2e72/Screen_Shot_2019-04-02_at_1.15.52_PM.png)",1.0 -19657128,2019-04-02 16:44:41.705,Cleanup or fix obsolete alerts,"The expressions backing the following alerts doesn't return any data, so either we need to fix them or remove them: - -* [x] HighGitCatFileCount -* [x] BlackBoxGitPullHttps -* [x] BlackBoxGitPullSsh -* [x] BlackBoxGitPushHttps -* [x] BlackBoxGitPushSsh -* [ ] CICDTooManyRunningJobsPerNamespaceOnSharedRunners -* [ ] CICDRunnerMachineCreationRateHigh -* [x] CICDRunnersCacheDown -* [x] GitLabComLatencyWeb -* [x] GitLabComLatencyWebCritical -* [x] GitLabComLatencyAPI -* [x] GitLabComLatencyAPICritical -* [x] GitLabComLatencyGit -* [x] GitLabComLatencyGitCritical -* [x] GitlabComDown -* [x] WWWGitlabComDown -* [x] MonitorGitlabNetPrometheusDown -* [x] MonitorGitlabNetNotAccessible -* [x] OtherPrometheusDown -* [x] PostgreSQL_ReplicationStopped -* [x] PostgreSQL_ReplicaStaleXmin -* [x] PostgreSQL_DiskUtilizationMaxedOut -* [x] PostgreSQL_PGBouncer_maxclient_conn -* [x] SnitchHeartBeat -* [x] StagingGitlabComDown -* [x] StagingPostgresIO_High -* [x] FrontEndWorkerDown",3.0 -19646033,2019-04-02 10:42:58.437,come up with a way to configure external links on about.staging.gitlab.com,,4.0 -19645992,2019-04-02 10:41:37.082,investigate the difference between redirect behaviour in nginx and Fastly,"redirects were broken in three cases, they were fixed in the following MRs: [self-managed](https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/20865), [categories](https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/20867), [codefresh](https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/20868) - -investigate why they behaved differently from nginx redirects",2.0 -19625912,2019-04-01 18:04:04.387,Move Zoom sync script out of gitlab-server,Currently the zoom sync script is housed in the [gitlab-server cookbook](https://gitlab.com/gitlab-cookbooks/gitlab-server/blob/master/files/default/zoom_sync.rb). We need to create a repo for it and move it out of the cookbook and make it public. Ultimately we want to run it via CI/CD or similar and decommission the current `cron` server that runs it.,1.0 -19572973,2019-03-29 19:22:37.150,Upload Nessus package to aptly.gitlab.com,"Per: https://gitlab.com/gitlab-com/runbooks/blob/master/howto/aptly.md - -Requesting to have the Nessus package added to our aptly repo. - -File is available here: https://drive.google.com/file/d/1gRJ3MbYIrHbxemvqKlnlU44eCzGPt1Kk/view?usp=sharing",1.0 -19550066,2019-03-28 21:43:01.968,Upgrade Packagecloud to 2.0.6,"packagecloud 2.0.6 has been released which should resolve an issue we are having with backups uploading to S3 by implementing multi-part upload. - -Time to upgrade again! 
- -This should be basically no big deal, certainly in comparison to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6317 :smile:",1.0 -19544925,2019-03-28 16:57:05.026,Analyze checkpoint frequency,"~~We currently peak at 15 checkpoints per minute (per [Grafana](https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&fullscreen&panelId=194&from=now-7d&to=now)).~~ - -~~The peak up to 45 opm was exceptional:~~ - -This was based on a wrong reading of the graph. The *primary* actually has low checkpoint frequency (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6477#note_159060272). - -![Screenshot_from_2019-03-28_17-54-07](/uploads/ea74601587d7e55114551a503f73a121/Screenshot_from_2019-03-28_17-54-07.png) - -This strikes as too frequent checkpoints. We should increase `max_wal_size` in order to reduce the frequency. - -Concerns to watch out for: -* Decreasing checkpoint frequency means an increase in recovery time. What do we deem tolerable here? We need to measure this too. -* Would be interesting to measure the IO impact a decrease in frequency has. - -cc @NikolayS @cshobe @yguo",1.0 -19543692,2019-03-28 16:21:24.667,create osqueryd runbook,Write runbook for dealing with osqueryd.,1.0 -19511512,2019-03-27 21:45:36.513,Deprovision `monkey.gitlab.net`,"Per a slack discussion with @northrup, this machine is unmaintained and not in service, so can be deprovisioned. - -It is the single machine running in `security-monkey` GCP project. - -cc @dawsmith for scheduling.",1.0 -19447805,2019-03-26 07:50:42.273,Analysis on HAProxy alert tuning impact,"As part of ~""Alert Fatigue"", [we fine-tuned 3 HAProxy alerts](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6397) based on some data analysis. This issue is to keep track of how this tuning effort helped the oncall alert fatigue.",1.0 -19434373,2019-03-25 17:32:14.666,Lower the repository size limit on GitLab.com to 10GB,"As the final step on https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5160, let's: - -* [x] Mail affected users of repositories above 10GB: https://gitlab.com/gitlab-com/marketing/digital-marketing-programs/issues/440 -* [x] Adjust the limit to 10GB",2.0 -19434064,2019-03-25 17:19:06.398,Many staging alerts still paging production,"Staging alerts are paging production, even though staging is silenced. This is happening because many alerts are tagged incorrectly. - -This MR fixed things for one alert: https://gitlab.com/gitlab-com/runbooks/merge_requests/1010 - -The same thing needs to be done for all others. - -We may need to come up with a better way to make sure that all alerts are tagged by default, now that they all get sent to the same alert manager.",2.0 -19433426,2019-03-25 16:50:14.698,Monitor osqueryd,"Depending on the osquery profile and the amount of server activity, osqueryd can consume a significant amount of system resources. When caching too many events locally, it even breaks the local db which leads to filling up the disk because old data files will not be cleaned up anymore (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6434). - -We need to monitor and alert on data dir size and CPU usage.",5.0 -19433078,2019-03-25 16:36:10.586,Create repacking dashboard,"In order to execute repacking, we want to have good insight into the lock behavior. This will help to decide when to kill a repacking process. - -Let's add useful lock monitoring to Grafana, e.g. 
to https://dashboards.gitlab.net/d/000000224/postgresql-bloat?orgId=1.",1.0 -19426091,2019-03-25 12:51:59.987,Database Reviews,"* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26490 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26057#note_149890203 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9861 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10411 with https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26496 -* [x] @yguo https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10331#note_152692635 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9815#note_152275676 -* [ ] @fjsanpedro https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9634#note_154356948 -* [x] @yguo https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26582#note_154264004 -* [x] @fjsanpedro -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26540 -* [x] @yguo, @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10057#note_154636936 along with https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25975#note_154636914 -* [x] @fjsanpredro -> @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10446#note_154605666 -* [ ] @fjsanpedro -> @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9634#note_154536921 -* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25973#note_153842418 ~""Community Contribution"" -* [x] @NikolayS -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25432#note_153437855 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26146#note_153144338 -* [x] @yguo -> @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10353#note_154967144 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26675 -* [x] @abrandl https://dev.gitlab.org/gitlab/gitlab-ee/merge_requests/866#note_158632 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25973 -* [x] @fjsanpedro -> @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10437#note_15637594510437 -* [x] @abrandl https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2997#note_158700 -* [x] @fjsanpedro -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26554 -* [x] @NikolayS -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25299#note_155593259 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10552#2cda0c6171fa7d04989507a1dd112e34c40df46d -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26823#3c92371994fe8eb4b866e721926eab2c2be2f44d -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10446#note_157598053 -* [ ] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10609 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26212 -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24512",5.0 -19422644,2019-03-25 11:29:42.727,Fix logrotation on gitlab-01-inf-ops.c.gitlab-ops.internal,Logs on `gitlab-01-inf-ops.c.gitlab-ops.internal` in `/var/log/gitlab/gitlab-rails/` where last rotated 2018-07-24. We need to enable logrotation to stop them from filling the disk.,2.0 -19413917,2019-03-25 07:08:16.218,Add field validation for service-catalog.yml,"It looks like we have some variations across the field values in the `service-catalog.yml`. This is causing the service-catalog-app to throw error when trying to parse the file. 
- -This issue is to keep track of the work to add field validations for the .yml so that: -1) Existing variations can be identified and corrected -2) Future variations are prevented",2.0 -19411173,2019-03-25 05:07:03.148,ChatOps access request for @jedwardsjones on ops.gitlab.net,"## What - -I'd like to be given chatops access so I can enable a feature flag on staging. I think this entails being added to a group on ops.gitlab.net - -## Why - -In the manage team we are using per-group feature flags for the first time, and I'd like to test both the features themselves and the process of enabling feature flags. In particular we've added the ability for chatops to enable features per group instead of per project, but I've been unable to verify that this actually works because I don't have access. - -## From slack - -https://gitlab.slack.com/archives/C101F3796/p1552617972681000 ->>> -How do I get access to chatops? When I try `/chatops run feature list --match=saml --staging` I just get the error ""Whoops! This action is not allowed. This incident will be reported."" - -A slack search (https://gitlab.slack.com/archives/C3JJET4Q6/p1547028248032500) suggests I should ask here and include my ops.gitlab.net username, `@jedwardsjones`. ->>>",1.0 -19410413,2019-03-25 03:52:33.890,Update service-catalog-app with latest changes in service-catalog.tml,Deploy latest service-catalog.yml to the app.,1.0 -19349042,2019-03-21 17:22:56.089,Do not install osqueryd on each chef-client run,The `gitlab-uptyks` cookbook is installing the osqueryd .deb package unconditionally from a local file which leads to a re-installation on each chef-client run. We need to make this resource idempotent.,2.0 -19337301,2019-03-21 11:40:04.828,osquery is filling up the root fs,"The osquery is saving data in `/var/osquery/osquery.db/` which is growing very fast (several GB/day), filling up the root fs. 
-This needs to be fixed soon, as some git hosts will run out of space within a few hours.",13.0 -19330465,2019-03-21 09:10:11.781,setup staging environment for about.gitlab.com,"this came up as part of discussion in: https://gitlab.com/gitlab-com/www-gitlab-com/issues/3952 - -* [x] manually create a VM in Azure -* [x] manually create a DNS entry for the VM -* [x] register the VM with Chef -* [x] create a separate role called `about-staging-gitlab-com.json`, similar to `about-gitlab-com.json` and apply it - * [ ] edit the `cookbook-about-gitlab-com` cookbook so that nginx config is generated from a template and the domain can be specified in the role, created a separate issue for this: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6503 - * [x] configure a CI job to deploy site from master -* [x] register the runner -* [x] create a service in Fastly -* [x] configure TLS certs, enforce TLS -* [x] create DNS entries (IPv4, IPv6) for Fastly about.staging.gitlab.com -* [x] configure `staging.gitlab.com` to use `about.staging.gitlab.com` for its redirects",15.0 -19330376,2019-03-21 09:06:39.780,Enable 301 Redirects on about.gitlab.com for Marketing (infrastructure project),"moving the discussion from: https://gitlab.com/gitlab-com/www-gitlab-com/issues/3952 to infrastructure project - -* [x] create a yaml file with redirects definitions: `www-gitlab-com/data/redirects.yml` , start with the following [content](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/master/roles/about-gitlab-com.json#L21) and use this format: -``` -- sources: /some-old-path/ - target: /some-new-path/ -- sources: - - /another-old-path/ - - /another-old-path-as-well/ - target: /some-other-new-path/ -``` -* [x] add a CI job in the build stage that validates the yml file (@alejandro): - * [x] no source path appears twice in the collection - * [x] no target path appears twice in the collection - * [x] no target path appears as a source path - * [x] no redirect loops -* [x] create manually within Fastly (@mwasilewski\-gitlab): - * [x] An edge dictionary named `redirects` - * [x] link dictionary with a version of config - * [x] create redirects with a condition which uses a table, follow [docs](https://docs.fastly.com/guides/performance-tuning/generating-http-redirects-at-the-edge) (the redirects are actually so simple that there shouldn't be a need for VCL snippets) - * [x] update a 301 redirect using an API call -* [x] write a Ruby script `www-gitlab-com/bin/redirects` that will: - * [x] read yml file, parse it into 3 parts: exact matches, simple regexes (literal matches), regexes - * [x] exact matches (@alejandro): - * [x] get items from fastly edge dictionary `curl -H ""$FASTLY_API_TOKEN_STAGING"" https://api.fastly.com/service/$FASTLY_SERVICE_ID_STAGING/dictionary/$FASTLY_DICTIONARY_ID_STAGING/items` - * [x] compare with exact matches from yml: - * [x] if items exist in fastly, but not in yml, delete them from fastly using batch update, e.g. json: - - ```json - { - ""items"": [ - { - ""op"": ""delete"", - ""item_key"": ""/src/path1"", - ""item_value"": """" - }, - { - ""op"": ""delete"", - ""item_key"": ""/src/path2"", - ""item_value"": """" - } - ] - } - ``` - - * [x] for all else use batch upsert api call, e.g. 
json: - - ```json - { - ""items"": [ - { - ""op"": ""upsert"", - ""item_key"": ""/src/path1"", - ""item_value"": ""/dst/path1"" - }, - { - ""op"": ""upsert"", - ""item_key"": ""/src/path2"", - ""item_value"": ""/dst/path2"" - } - ] - } - ``` - - * [x] literal matches and regexes: - * [x] generate and upload one recv dynamic VCL snippet and one error dynamic VCL snippet. They need to include all rules inside of them. e.g. recv: -``` -curl -X PUT -s https://api.fastly.com/service/$FASTLY_SERVICE_ID_STAGING/snippet/$FASTLY_VCL_SNIPPET_STAGING -H ""$FASTLY_API_TOKEN_STAGING"" -H 'Content-Type: application/x-www-form-urlencoded' --data $'content=if ( req.url ~ ""^/gitlab-ee"" ) {\n error 805 ""Permanent Redirect"";\n}\nif ( req.url ~ ""^/development"" ) {\n error 806 ""Permanent Redirect"";\n}\n'; -``` -e.g. error: -``` -curl -X POST -s https://api.fastly.com/service/$FASTLY_SERVICE_ID_STAGING/version/8/snippet -H ""$FASTLY_API_TOKEN_STAGING"" -H 'Content-Type: application/x-www-form-urlencoded' --data $'content=if ( obj.status == 805 ) {\n set obj.status = 301;\n set obj.response = ""Moved Permanently"";\n set obj.http.Location = ""/pricing/"";\n synthetic {""""};\n return (deliver);\n}\nif ( obj.status == 806 ) {\n set obj.status = 301;\n set obj.response = ""Moved Permanently"";\n set obj.http.Location = ""/sales"";\n synthetic {""""};\n return (deliver);\n}\n'; -``` -* [x] add a protected env var with an API key to Fastly API tied only to the `about.gitlab.com` Fastly Service. An env var with a key was already present in `www-gitlab-com` ci/cd config. I turned it into a protected one and added a key for staging (@mwasilewski\-gitlab ) -* [ ] add a CI job in the build stage that validates the script -* [x] add a CI job in the deploy stage that triggers the script -* [ ] after all this work is done, confirm redirects in Fastly are fully operational -* [ ] remove redirects from Chef and chef-repo - -batch update curl: `curl -X PATCH -H 'Content-Type: application/json' -H ""$FASTLY_API_TOKEN_STAGING"" -d @batch.json ""https://api.fastly.com/service/$FASTLY_SERVICE_ID_STAGING/dictionary/$FASTLY_DICTIONARY_ID_STAGING/items""` - -[batch updating](https://docs.fastly.com/guides/edge-dictionaries/working-with-dictionary-items-using-the-api#batch-updating-dictionary-items) and [upserts](https://docs.fastly.com/guides/edge-dictionaries/working-with-dictionary-items-using-the-api#upserting-dictionary-items)",5.0 -19316608,2019-03-20 19:28:50.654,Set up initial sync of artifacts S3->GCS,"As part of migrating the artifacts from S3 to GCS (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4684), we need to set up an initial sync. The set up of this initial sync is a quick process that will get us on the path to migrating.",4.0 -19315275,2019-03-20 18:39:31.626,PullMirrorsOverdueQueueTooLarge in Staging,"We are getting these errors going to PagerDuty. They seem to be correct alerts this time, and they are going to the right place. 
- -It is not going to production this time like it was in: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5172 - -And it doesn't seem to be in error as it was in this: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5401 - -The threshold is 5K overdue updates, and the graph looks like this: - -![Screen_Shot_2019-03-20_at_8.34.37_AM](/uploads/fbaf2649780c677b8f06a1724cd839cc/Screen_Shot_2019-03-20_at_8.34.37_AM.png) - -[Dashboard link](https://dashboards.gitlab.net/d/_MKRXrSmk/pull-mirrors?refresh=30s&orgId=1&var-environment=gstg&var-prometheus=prometheus-01-inf-gstg&var-prometheus_app=prometheus-app-01-inf-gstg&from=1552847993728&to=1553107193728)",1.0 -19299442,2019-03-20 14:23:15.874,Add Thanos front-end,"Similar to `prometheus.*.gitlab.net`, we need an oauth proxy to give us access to the thanos query front-end on `dashboards-01-inf-ops.c.gitlab-ops.internal:10902`.",3.0 -19269541,2019-03-19 20:33:18.867,restore project generated too many one time ssh keys,"the pipeline failed with this error `ERROR: (gcloud.compute.scp) INVALID_ARGUMENT: Login profile size exceeds 32 KiB. Delete profile values to make additional space.` https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/-/jobs/373652 - -seems like there are lots of ssh key that were used once in the pipeline still exists. we can either clean up all the obsoleted sshkeys with -```for i in $(gcloud compute os-login ssh-keys list | grep -v FINGERPRINT); do echo $i; gcloud compute os-login ssh-keys remove --key $i; done``` - -also, I think we should consider publish the scripts to a GCS bucket, and let the instance download the script from GCS bucket and run as startup script instead of `scp` and `ssh` into the instance to execute the script.",2.0 -19256881,2019-03-19 13:48:43.118,IAM policy gitlab-internal for tread@gitlab.com,Requesting IAM permissions for `gitlab-internal-153318` project with role `roles/container.admin` for `tread@gitlab.com` per instructions at https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/auto_devops.md,1.0 -19234335,2019-03-18 22:16:45.730,Add Outreach to the CNAME for a branded URL,"To help with deliverability of emails sent by Outreach, creating a branded URL has been suggested by our CSM - -Instructions to set up are here: [Outreach Support Article](https://support.outreach.io/hc/en-us/articles/115001092234-Setting-up-Branded-URLs) - -And **Step 1** is below to point the CNAME at: -![Screen_Shot_2019-03-18_at_3.05.30_PM](/uploads/ef28569a3aed9b9b84a15a9936f2de1c/Screen_Shot_2019-03-18_at_3.05.30_PM.png) - - -Branded URL is going to be `enable.gitlab.com` - - -Please let me know if there are questions.",1.0 -19231125,2019-03-18 19:04:06.740,NFS High Load and Sidekiq `ArchiveTraceWorker` Jobs,"**Please note:** if the incident relates to sensitive data, or is security related consider -labeling this issue with ~security and mark it confidential. -*** - -## Summary - -A brief summary of what happened. Try to make it as executive-friendly as possible. - -Service(s) affected : -Team attribution : -Minutes downtime or degradation : - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) 
-- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Timeline - -YYYY-MM-DD - -- 00:00 UTC - something happened -- 00:01 UTC - something else happened -- ... - -YYYY-MM-DD+1 - -- 00:00 UTC - and then this happened -- 00:01 UTC - and more happened -- ... - - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -###Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -19227312,2019-03-18 16:45:38.962,Improve PostgreSQL configuration files,"Presently the Chef recipes result in two files being placed in /var/opt/gitlab/postgresql: -1. 
postgresql.base.conf - this appears to be more or less a default configuration file, from the version 9.2 (or earlier) distribution, as it contains commented parameters that were removed in later releases. It also lacks many of the newer settings, and is full of comment noise and is frankly a mess that's difficult to read. -2. postgresql.conf - this is Patroni's overrides to the base configuration, for variables that Patroni needs to be able to control. It includes the previous file and then overrides/adds to that configuration. - -There are also some effective settings that don't match either of these files or the default settings, which is confusing. - -I would like the configuration files to be clear, correct for the current PostgreSQL version, have all settings explicitly defined, and be written to match the order in the documentation. Chef templating should be able to handle this fairly easily. Ideally our Chef recipes would not be hardcoded to use a specific version either, but instead set a default major version that can be overridden.",3.0 -19205575,2019-03-18 10:32:45.876,HAProxy alerts should fire based on error-rate rather than static value,"From: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6397, we would like to alert based on error-rate rather than per-second average rate of increase of total errors. Alerts in the scope of this work: - -1. IncreasedBackendConnectionErrors
-`rate(haproxy_backend_connection_errors_total[1m]) > .1`. What this means is that 0.1 * 60 = 6 errors per minute are enough to trigger this alert, regardless of the total number of connections (which might also increase over time). -1. IncreasedServerResponseErrors
-`rate(haproxy_server_response_errors_total[1m]) > .5`. Here it means 0.5 * 60 = 30 errors per minute are enough to trigger this alert. -1. IncreasedServerConnectionErrors
-`rate(haproxy_server_connection_errors_total[1m]) > .1`. Here, it is also 6 errors. - -The proposal here is to calculate the above based on error-rate where we calculate it as: `error count` / `total count`. For example, for the `IncreasedServerResponseErrors` and `api_rate_limit` backend we could do: - -`sum(rate(haproxy_server_response_errors_total{backend=""api_rate_limit""}[1m]) * 60) / sum(rate(haproxy_server_http_responses_total{backend=""api_rate_limit""}[1m]) * 60)` - -We will then have to set a threshold of error rate. The highest number of requests `api_rate_limit` processed in the last 2 weeks was around 1600 requests per min (ref: https://prometheus.gprd.gitlab.net/graph?g0.range_input=1w&g0.expr=sum(rate(haproxy_server_http_responses_total%7Bbackend%3D%22api_rate_limit%22%7D%5B1m%5D))%20by%20(backend)&g0.tab=0). The current threshold of `> .1` would mean that 6 errors out of 1600 requests would be enough to trigger the alert and page us. This is 0.003%.",4.0 -19203347,2019-03-18 10:00:34.370,Adjust HAProxy alert thresholds (for connection and response errors),"From https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6374, this issue will keep track of the work for: - -1. IncreasedBackendConnectionErrors -2. IncreasedServerConnectionErrors -3. IncreasedServerResponseErrors - -Setting the threshold from 10seconds to 2mins. For reasons behind this proposal look at: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6374#note_151471405",1.0 -19162671,2019-03-15 21:10:45.471,use disk snapshot to restore instances for testing database reviews,"when we need to test some queries for database reviews, we have to restore the production wale base backup with the pipeline https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/pipelines which normally takes a few hours. - -I propose we add a step to take snapshot of the boot disk and data disk in nightly restore schedule before deleting the restored instance. The snapshots can therefore be used to restore instances quickly to test queries for database reviews",2.0 -19161532,2019-03-15 19:34:09.343,Rebuild DR File nodes with cheaper storage,"File nodes are currently using a little over 500Tb of SSD storage. Due to cost concerned, we need to move this to spinning disks. - -Steps involved: - -- [x] Tearing down and deleting the file nodes and their disks -- [x] Requesting a quota increase on spinning disks (and optionally a decrease on SSD’s) -- [x] Spinning up new instances with the disk type changed -- [ ] I will have to get some help from the Geo team on a clean way to reset the replication state, since replication has already started and is currently paused",2.0 -19158458,2019-03-15 16:56:19.571,Lower the repository size limit on GitLab.com to 20GB,"As part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5160 to reach the 10gb limit, let's: - -* [x] Adjust the limit to 20GB -* [x] Mail affected users of repositories above 20GB: https://gitlab.com/gitlab-com/marketing/digital-marketing-programs/issues/418",2.0 -19154037,2019-03-15 14:07:59.157,New training GCP Project,"We are planning a couple of workshops (https://gitlab.com/gitlabcontribute/new-orleans/issues/51 and https://gitlab.com/gitlabcontribute/new-orleans/issues/56) for Contribute 2019 and would like to use a dedicated GCP project where attendees can create/delete clusters. - -We could name it `gitlab-training`. 
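-
-If it helps, a rough sketch of the gcloud steps this could boil down to (the org ID, billing account and attendee address below are placeholders, not real values):
-
-```
-# Create the project and attach billing (ORG_ID / BILLING_ID are placeholders).
-gcloud projects create gitlab-training --organization ORG_ID
-gcloud beta billing projects link gitlab-training --billing-account BILLING_ID
-# Allow a workshop attendee to create/delete GKE clusters in the project.
-gcloud projects add-iam-policy-binding gitlab-training --member user:attendee@example.com --role roles/container.admin
-```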
- -Thanks",1.0 -19137570,2019-03-15 00:13:47.795,Requesting wildcard cert for *.eks.helm-charts.win,"In https://gitlab.com/charts/gitlab/issues/1132 the Distribution team are adding an EKS CI cluster for the charts, and would like a wildcard cert for `*.eks.helm-charts.win` - -This should be copied into the `Cloud Native` vault alongside the existing wildcard certs for our cloud native CI domains.",2.0 -19113981,2019-03-14 13:38:33.951,cookbook publishing for haproxy is broken,"It looks like this has been failing for some time now - -https://ops.gitlab.net/gitlab-cookbooks/gitlab-haproxy/-/jobs",2.0 -19112999,2019-03-14 13:07:33.772,Return corresponding error codes from haproxy,"We should have error pages that report the appropriate error to the end-user instead of the same one for most 5xx codes: - -``` - errorfile 400 /etc/haproxy/errors/400.http - errorfile 403 /etc/haproxy/errors/429.http - errorfile 408 /etc/haproxy/errors/400.http - errorfile 429 /etc/haproxy/errors/429.http - errorfile 500 /etc/haproxy/errors/500.http - errorfile 502 /etc/haproxy/errors/500.http - errorfile 503 /etc/haproxy/errors/503.http - errorfile 504 /etc/haproxy/errors/500.http -```",1.0 -19066903,2019-03-13 14:28:01.503,Problem with status.gitlab.com in Google search index,"Someone pointed out `status.gitlab.com` does not appear in search results. Using the site operator I found only one page in Google's index. - -Are there any settings for this sub-domain we can toggle to expand our coverage in search? - -cc @lbanks @jjcordz",1.0 -19028137,2019-03-12 13:44:50.712,Deploy process-exporter for postgres io stats in Production,,2.0 -18995094,2019-03-11 18:18:43.279,Configure Sentry in DR,Plan: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5469,1.0 -18982972,2019-03-11 12:33:04.885,Database Reviews,"* [x] @abrandl Rails 5.1 https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9386 -* [x] @abrandl Int4Int8 https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/24512 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26009 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9939#note_150188904 -* [x] @fjsanpedro -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25881#note_150153031 -* [ ] @yguo -> @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26057#note_149890203 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25321#note_149349275 -* [x] @abrandl https://dev.gitlab.org/gitlab/gitlabhq/merge_requests/2922/diffs#note_156988 -* [x] DB office hours https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9861#note_152718524 -* [x] DB office hours https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25973#note_152584625 -* [ ] @yguo https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10331#note_152692635 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/26146#note_152736732 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10234#note_152481170 -* [ ] @cshobe https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9815#note_152275676 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10161#note_151035300",5.0 -18923188,2019-03-08 20:32:58.306,CI/D Readiness Review Addendum Runbook Responses,"In preparation for taking over the day to day of CI/CD issues, the runbooks should cover alerts that frequently have occurred. - -- [ ] CPU use percent is extremely high on shared-runners-manager-4.gitlab.com for the past 2 hours. 
-- [ ] No disk space left on /opt/prometheus/prometheus/data on prometheus-01.nyc1.do.gitlab-runners.gitlab.net: 641.8m% -- [ ] No disk space left on /opt/gitlab on runners-cache-5.gitlab.com: 997.3m% -- [ ] Runners manager is down on shared-runners-manager-4.gitlab.com:9402 -- [ ] [CICDTooManyPendingJobsPerNamespace](https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitlab-com-ci.yml#L19-29) -- [ ] [CICDTooManyRunningJobsPerNamespaceOnSharedRunnersGitLabOrg](https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitlab-com-ci.yml#L43-53) -- [ ] [CICDNamespaceWithConstantNumberOfLongRunningRepeatedJobs](https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitlab-com-ci.yml#L210-223) -- [ ] [CICDJobQueueDurationUnderperformant](https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitlab-com-ci.yml#L225-244) -- [ ] [CICDTooManyPendingBuildsOnSharedRunnerProject](https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitlab-com-ci.yml#L4-17) -- [ ] [CICDTooManyArchivingTraceFailures](https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitlab-com-ci.yml#L263-275)",1.0 -18921739,2019-03-08 19:40:22.079,No logs for Gitaly in DR reaching Kibana,"It appears this is because the logs is unstructured whereas fluentd is expecting json logs. - -Must be solved for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6330",1.0 -18920687,2019-03-08 18:30:59.807,Raise package upload size limit on packagecloud,"Our nightly packages are now above 600MB which is the current `client_max_body_size`. We need to raise this higher to enable package uploads. However, we must wait until gitlab-com/gl-infra/infrastructure#6337 is complete as we will be required to restart packagecloud to make the change. - -- [job failure](https://dev.gitlab.org/gitlab/omnibus-gitlab/pipelines/106534) -- [Slack thread](https://gitlab.slack.com/archives/C1FCTU4BE/p1552062978274100)",1.0 -18918858,2019-03-08 17:14:20.486,Run OPTIMIZE TABLE on packagecloud databases,"Now that we are finally able to clean out all these extra rows in the metadata tables, we will need to run `OPTIMIZE TABLE` on `rpm_files` and `deb_files` in order to reclaim the disk space. This will lock the tables during its run, which means we will not be able to upload packages during that time, so should coordinate to ensure that it will not block package releases. It will most likely take around 10 minutes to complete this as our tables are now substantially smaller than they were when we did the upgrade.",1.0 -18918317,2019-03-08 16:56:39.767,Update packagecloud backups to use xbstream,"With the upgrade to 2.0.5, we are now able to use `xbstream` to backup the MySQL database. This will improve the speed and reliability of our backups and eventually assist us in moving to use RDS for the backend. 
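-
-For illustration, a minimal sketch of what a streamed backup and restore look like with xbstream (the real invocation is driven by the cookbook configuration; the paths here are placeholders):
-
-```
-# Stream a full backup with Percona XtraBackup into a single xbstream file.
-xtrabackup --backup --stream=xbstream --target-dir=/tmp > /backups/packagecloud.xbstream
-# Unpack the stream again when restoring.
-xbstream -x -C /restore/datadir < /backups/packagecloud.xbstream
-```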
- -https://gitlab.com/gitlab-cookbooks/gitlab-packagecloud/merge_requests/13 will enable this in the config, but we need to wait for gitlab-com/gl-infra/infrastructure#6337 to complete first.",1.0 -18914779,2019-03-08 14:38:16.619,Monitor the rate of PG temporary file creation,"Quoting https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6350: - -> * No alerts/warnings were fired for the database -> * In this situation we could've had alerts/warnings for too many temporary files are being written to disk",2.0 -18914710,2019-03-08 14:35:07.051,Add useful Postgres graphs to the triage dashboard,"Quoting https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6350: - -> * The triage dashboard doesn't have useful Postgres graphs, the lack of which makes Postgres the last item to check -> * Compare that to Gitaly which has useful graphs that can indicate if Gitaly is causing trouble",2.0 -18888085,2019-03-07 16:37:21.315,Transfer gitlab.com.cn to our control,https://gitlab.slack.com/archives/CB3LSMEJV/p1551976538278600,1.0 -18847097,2019-03-06 18:03:25.771,API Reboot Investigation / Troubleshooting,"API nodes are rebooting with increasing frequency, looking to figure out what's wrong and track solutions.",5.0 -18843826,2019-03-06 16:27:14.065,Cleanup file metadata for packagecloud,"## Summary - -In packagecloud 2.0.5, they've added the ability to permanently disable the repo file list metadata, resulting in far fewer rows being added to the files tables, and kicking off a background job to cleanup the existing files rows for the repo you turn it on for. See https://packagecloud.atlassian.net/wiki/spaces/ENTERPRISE/pages/465076277/RPM+and+Debian+file+list+metadata for details on the new feature. - -## Background - -Packagecloud by default keeps track of every file within a package is a new role in one of it's files tables in the database, and uses this information to generate repo file list metadata files. Due to number of files in our omnibus package, the size of these tables is large, and is impacting our ability to backup the database. In prior versions of packagecloud we have turned off the metadata file generation, but this does not change the number of rows added to the database. - - -## Additional Info - -This is some related information we recently got from our contact at packagecloud regarding turning off these files: - -> Once you enable this setting, it will kick off a job that will delete all the unnecessary files rows in the database (this job will likely take a few days to run, but nothing will be locked in the process). Note: As the name suggests, this is a *permanent* setting, once filelists are disabled permanently, they can't be re-enabled. - -> (Optional) After doing the above and ensuring all of the DeleteFilesJob jobs are finished (you can see job progress in the indexer status section of the administrator dashboard) you can now run 'OPTIMIZE TABLE deb_files;' and 'OPTIMIZE TABLE rpm_files;' to significantly reduce disk space used and the size of your database backups. 
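-
-For reference, one way this could be run from a shell once all DeleteFilesJob runs have finished (host, user and database name below are placeholders; the tables stay locked while this runs, so uploads must be paused):
-
-```
-# Reclaim disk space from both metadata tables in one session.
-mysql -h DB_HOST -u DB_USER -p PACKAGECLOUD_DB -e 'OPTIMIZE TABLE deb_files; OPTIMIZE TABLE rpm_files;'
-```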
- - -## Actions - -- [x] Permanent disable metadata for `gitlab/pre-release` (~ 10000 packages) - * [x] DeleteFilesJob complete -- [x] Permanent disable metadata for `gitlab/nightly-builds` (~ 30000 packages) - * [x] DeleteFilesJob complete -- [x] Permanent disable metadata for `gitlab/raspberry-pi2` (~ 1000 packages) - * [x] DeleteFilesJob complete -- [x] Permanent disable metadata for `gitlab/unstable` (~ 6000 packages) - * [x] DeleteFilesJob complete -- [x] Permanent disable metadata for `gitlab/gitlab-ee` (~ 5000 packages) - * [x] DeleteFilesJob complete -- [x] Permanent disable metadata for `gitlab/gitlab-ce` (~ 5000 packages) - * [x] DeleteFilesJob complete -- [x] Run `OPTIMIZE TABLE deb_files;` (note table will be locked, uploads will be blocked) -- [x] Run `OPTIMIZE TABLE rpm_files;` (note table will be locked, uploads will be blocked)",2.0 -18826757,2019-03-06 05:18:15.512,Look into set of old gsrm runners in gitlab-ci gcp project,"There are a batch of runners still running since Feb 6 in the gitlab-ci project. - -https://console.cloud.google.com/compute/instances?project=gitlab-ci-155816&instancessize=50&instancessort=creationTimestamp%252Cname%252CmachineType&instancesquery=%255B%257B_22k_22_3A_22_22_2C_22t_22_3A10_2C_22v_22_3A_22_5C_22gsrm_5C_22_22%257D%255D - -We should look into if they are still running jobs, what happened and make sure they are properly destroyed. - -cc @northrup as we were looking at usage on runners and this is an anomaly maybe? - -8vcpu * 86400s = 691,000s per day",2.0 -18825958,2019-03-06 04:01:04.515,DR Site Codebase out of sync with primary production site,"The Geo nodes in DR are running `v11.6.3` - the production nodes are running `v11.8.1` - -We need to get these running the same version before activating the DR site. - -This should just be a matter of triggering the pipeline and making sure it succeeds. If I can get some quick instruction on the proper way to do this, I'll make sure it gets into the DR runbooks.",1.0 -18825664,2019-03-06 03:34:01.121,DR Database has stopped replicating,"Information added to the primary database is not showing up in the DR database after an hour. - -Is there any monitoring for this anywhere? 
- -The query I am using is: `SELECT * FROM geo_nodes;` - -Once it is fixed, can we get at least a minimal runbook on how to troubleshoot this?",1.0 -18824447,2019-03-06 01:58:41.788,chef-client failing in dr pubsub because of ChecksumMismatch,"Below the error: - -``` -remote_file[/opt/pubsubbeat/pubsubbeat] action create[2019-03-06T01:40:46+00:00] INFO: Processing remote_file[/opt/pubsubbeat/pubsubbeat] action create (gitlab-elk::pubsubbeat line 20) - - - ================================================================================ - Error executing action `create` on resource 'remote_file[/opt/pubsubbeat/pubsubbeat]' - ================================================================================ - - Chef::Exceptions::ChecksumMismatch - ---------------------------------- - Checksum on resource (b3b911) does not match checksum on content (2bb63f) - - Resource Declaration: - --------------------- - # In /var/chef/cache/cookbooks/gitlab-elk/recipes/pubsubbeat.rb - - 20: remote_file node[""gitlab-elk""][""pubsubbeat""][""bin""] do - 21: source node[""gitlab-elk""][""pubsubbeat""][""url""] - 22: checksum node[""gitlab-elk""][""pubsubbeat""][""sha256sum""] - 23: owner node[""gitlab-elk""][""pubsubbeat""][""user""] - 24: group node[""gitlab-elk""][""pubsubbeat""][""group""] - 25: mode ""0755"" - 26: force_unlink true - 27: notifies :restart, ""runit_service[pubsubbeat]"", :delayed - 28: end - 29: - - Compiled Resource: - ------------------ - # Declared in /var/chef/cache/cookbooks/gitlab-elk/recipes/pubsubbeat.rb:20:in `from_file' - - remote_file(""/opt/pubsubbeat/pubsubbeat"") do - provider Chef::Provider::RemoteFile - action [:create] - retries 0 - retry_delay 2 - default_guard_interpreter :default - source [""https://ops.gitlab.net/gl-infra/pubsubbeat/-/jobs/377/artifacts/raw/build/pubsubbeat""] - use_etag true - use_last_modified true - declared_type :remote_file - cookbook_name ""gitlab-elk"" - recipe_name ""pubsubbeat"" - checksum ""b3b911b5802da256a379d1ad9ee65f3711c7b06e36181124697f824041cfaee1"" - owner ""root"" - group ""root"" - mode ""0755"" - force_unlink true - path ""/opt/pubsubbeat/pubsubbeat"" - verifications [] - end - - System Info: - ------------ - chef_version=12.22.5 - platform=ubuntu - platform_version=16.04 - ruby=ruby 2.3.6p384 (2017-12-14 revision 61254) [x86_64-linux] - program_name=chef-client worker: ppid=15468;start=01:40:11; - executable=/opt/chef/bin/chef-client -``` - -This should be fixed to ensure logging in Geo is working. See https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6330",2.0 -18823475,2019-03-06 00:05:51.337,Ensure that logging for Geo is in place,Plan: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5469,4.0 -18815808,2019-03-05 17:17:15.630,Move license-gitlab-com and version-gitlab-com from -com to -org,"### Overview - -We'd like to move the `license-gitlab-com` and `version-gitlab-com` project under the `gitlab-org` group. - -The Fulfillment team works on these two apps, and issues/boards aren't visible across non-nested groups. As a result, there is no single source of truth that gives an accurate assessment of everything that Fulfillment is working on in a single release. - -Deliverables may slip for this reason, simply because we need to remember to check issues in several different places. 
- -* [x] Move `gitlab-com/version-gitlab-com` into the `gitlab-org` group namespace -* [x] Move `gitlab-com/license-gitlab-com` into the `gitlab-org` group namespace",2.0 -18803913,2019-03-05 10:58:29.467,Upgrade kernels on API fleet only,Most of our production alerts are caused by random API server reboots. The current theory is that it is caused by NFS issues leading to a kernel panic and can be fixed by a kernel upgrade. As sequentially rebooting API servers can be done easily without customer impact (after draining from the LB) we should make this a priority before working on a framework to safely upgrade the kernels of the _whole_ fleet.,3.0 -18791653,2019-03-04 21:49:29.857,Create an Ansible playbook for database service discovery migration,"For this production change: https://gitlab.com/gitlab-com/gl-infra/production/issues/633. - -WIP MR: https://gitlab.com/gitlab-com/gl-infra/ansible-migrations/merge_requests/1",3.0 -18786952,2019-03-04 17:53:51.529,Make rackspace users for infra team and other items,"We only have rackspace users for part of the infra team so this issue will track: - -1. [x] Making the users - list below -2. [x] Updating SRE entitlements access template -3. [x] Make sure onboarding template verifies access. -4. [x] Double check the runbook is up to date: https://gitlab.com/gitlab-com/runbooks/blob/master/howto/GCP-rackspace-support.md -5. [x] Broadcast to the team about creds and support tickets. - -* [x] Amar -* [x] Ahmad -* [x] Jarv -* [x] Michal -* [x] Hendrik -* [x] Henri -* [x] Andreas -* [x] Jose -* [x] Skarbek -* [x] Yun -* [x] Cameron -* [x] Alejandro -* [x] Alex -* [x] Casey -* [x] Anthony -* [x] Craig -* [x] Devin",2.0 -18783109,2019-03-04 16:12:32.413,Make the High4xxRateForRegistry alert less sensible in staging,"The `High4xxRateForRegistry` is triggered by when 75% of the responses return a `4xx`: - -```sum(backend_code:haproxy_server_http_responses_total:irate1m{backend=""registry"",code=""4xx"",tier=""lb""}) / sum(backend_code:haproxy_server_http_responses_total:irate1m{backend=""registry"",tier=""lb""}) > 0.75``` - -On staging we nearly have no requests at all leading to an alert as soon as only a few 4xx responses are triggered. This happened on 2019-03-04 13:16 UTC caused by a slow scan for non-existing endpoints, causing alerts for several hours. - -https://gitlab.slack.com/archives/C101F3796/p1551702532086700 - -We should maybe add a minimum threshold of `x` requests/s before we trigger alerts besed on error rates on low-traffic servers.",2.0 -18759420,2019-03-03 21:13:08.601,Implement Epic-to-Issue Links in go-gitlab,[`go-gitlab`](https://github.com/xanzy/go-gitlab) does not implement the Epic-to-Issue Links API. Implement it so that it can be used by [`glork`](https://gitlab.com/glopezfernandez/glokr/),2.0 -18759404,2019-03-03 21:11:41.441,Implement Epic Links in go-gitlab,[`go-gitlab`](https://github.com/xanzy/go-gitlab) does not implement the Epic Links API. Implement it so that it can be used by [`glork`](https://gitlab.com/glopezfernandez/glokr/),2.0 -18729497,2019-03-01 19:06:13.950,Configure Meltano.com Name Servers,"We want to host the site through SiteGround. I believe all we need to do is point the nameservers, but let me know if there are additional steps I'm not aware of. 
- -- ns1.aore1.siteground.us -- ns2.aore1.siteground.us - -Thank you!",1.0 -18728253,2019-03-01 17:51:40.296,Upgrade PackageCloud,"We have run into an issue where the RPM file table of the PackageCloud database is using signed 32bit integers for the `id` column and we've run into the limit. We've worked around the issue by keeping the rails app from updating that table, however without this table, it is possible that some RPM based package managers will have issues installing packages. - -There is a new version of PackageCloud that will resolve this issue. This upgrade requires downtime. We have a call with PackageCloud to discuss the details of the upgrade. - -- [upgrade docs](https://packagecloud.atlassian.net/wiki/spaces/ENTERPRISE/pages/465141825/Upgrading+from+2.0.4+to+2.05) -- [disable metadata table docs](https://packagecloud.atlassian.net/wiki/spaces/ENTERPRISE/pages/465076277/RPM+and+Debian+file+list+metadata) - - -cc/ @twk3 @marin @rspeicher @dawsmith",8.0 -18726709,2019-03-01 16:48:22.713,Transfer meltano.com ownership,"The meltano team needs to have access to manage the domain name settings in order to reduce requests to the GitLab Infrastructure team. - -If you need additional information, please reach out to @bencodezen - -cc @dmor @jschatz1 ",1.0 -18719563,2019-03-01 14:07:31.361,Fixing gstg alerts finding their way to PD,All alerts with `pager=pagerduty` from all envs now pager the on-call SRE. This is a result of consolidating ops/gstg/dr AlertManagers into one.,3.0 -18693446,2019-02-28 19:40:04.002,Cannot connect to port 80 on new AWS instance,"> Hi, do you have any idea on why I can't access 80 port of i-02ad02527d4758cb3 instance on AWS. I checked security groups of the instance, Network ACL for subnet and for VPC. I also checked route table. 22 port works great - -``` -$ netstat -plnt -tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN - -``` - -and - -``` -Chain INPUT (policy ACCEPT) -target prot opt source destination - -Chain FORWARD (policy ACCEPT) -target prot opt source destination - -Chain OUTPUT (policy ACCEPT) -target prot opt source destination -``` - -![Screen_Shot_2019-02-28_at_20.30.57](/uploads/202c4b9683caadb56c26b4523f03ebdf/Screen_Shot_2019-02-28_at_20.30.57.png)",1.0 -18693120,2019-02-28 19:28:48.919,Clean up jobs.gitter.im Route53 record,"Follow-up item per notes in gitlab-com/gl-infra/infrastructure#5516; we need to remove the `jobs.gitter.im` Route53 record from the [gitlab-com](https://gitlab-com.signin.aws.amazon.com/console) account, and do a quick review of the Ansible & Terraform codebase in gitlab-com/gl-infra/gitter-infrastructure> to verify that there are no additional impacts/changes required. - -/cc @MadLittleMods in case I missed any details/context",1.0 -18692812,2019-02-28 19:06:41.059,ZFS: Research Best Disk Layout & L2ARC Approach,"Spin up Ubuntu 18.04 nodes w/ ZFS and do IOPS testing. Determine the best number of disks, size of disks, ZIL and L2ARC placement for DB Nodes and for Storage Nodes. Document test methodology and results in issue.",8.0 -18655131,2019-02-27 18:07:22.286,301 redirect codefresh page,"Add a 301 redirect from `/devops-tools/codefresh/` to `/devops-tools/codefresh-vs-gitlab.html` - -We get an inbound like from this [Codefresh blog post](https://codefresh.io/continuous-integration/codefresh-versus-gitlabci/) but it goes to a helper page we use to generate the real comparison page. - -Logging an issue here since we don't have https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4894 yet. 
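-
-Once the redirect is live (wherever it ends up being configured), a quick sanity check from a shell could look like this:
-
-```
-# Expect a 301 with a Location header pointing at the comparison page.
-curl -sI https://about.gitlab.com/devops-tools/codefresh/ | grep -iE 'http/|^location'
-```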
- -cc @brendan @shanerice ",1.0 -18652339,2019-02-27 16:19:55.663,New GCP Project for GitLab PaaS POC/testing/alpha features enablement,"The Configure group is currently working towards a POC for [GitLab PaaS](https://gitlab.com/groups/gitlab-org/-/epics/111). We'd like to create a dedicated project on GCP so that we can: - -1. Enable GCP alpha features, namely [GKE Sandbox](https://cloud.google.com/blog/products/containers-kubernetes/digging-into-kubernetes-1-12) -1. Ensure we have IP space available (currently all other projects run out of IP space often, this would block us) -1. Analyze cost easily without having to filter resources out of existing projects - -We only require that the project have billing enabled and have a recognizable name, such as `gitlab-paas`. - -Also, if possible, it would be great to limit access to this project to members of the configure team (perhaps just for resource creation). - -``` -tkuah -dgriffith -jfargher -twatson -dgruesso -tdavis -mcabrera -ddavison -jcunha -jerasmus -gbizon -mgreiling -``` - -Thank you",2.0 -18635326,2019-02-27 08:29:36.485,No Registry logs in Kibana,"I wonder if this is because the timestamp is preceding the JSON logs? - -``` -2019-02-27_08:28:16.46583 {""go.version"":""go1.10.3"",""http.request.host"":""registry.gitlab.com"",""http.request.id"":""09b57461-9cd7-4053-a7fa-c920e7e54a11"",... -```",2.0 -18625609,2019-02-26 21:33:08.761,Updated Email Setting for Greenhouse,"There's an additional DNS entry for the Greenhouse email configuration. - -Type Hostname Required Value -CNAME email.gh-mail.gitlab.com mailgun.org - -Also, I don't believe we've whitelisted their IP addresses, but if that would help reduce instances where Greenhouse emails are going to spam, I'll provide them here upon request. - -Thank you",1.0 -18612534,2019-02-26 13:20:07.245,Staging: delayed replica unable to follow timeline,"The delayed replica in staging is unable to recover further: - -``` -2019-02-26_13:17:54.62610 2019-02-26 13:17:54 GMT [19449]: [192257-1] LOG: new timeline 71 forked off current database system timeline 69 before current recovery point 36A6/D0000000 -``` - -Additionally, we're getting paged on the production pagerduty schedule. I don't know if this is related or not, but it's not intended since this is about staging. 
- -Example alert: https://gitlab.pagerduty.com/incidents/PGURCPE -Slack: https://gitlab.slack.com/archives/CB3LSMEJV/p1551186785243800 - -cc @yguo @cshobe",2.0 -18605002,2019-02-26 09:51:11.985,Database Reviews,"* [x] https://gitlab.com/gitlab-org/gitlab-ee/issues/5348#note_142649706 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25230#note_142474150 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25278#note_142134862 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21197#note_139607418 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25107#note_140670702 (Nik: checking it on a ""restore"" box) -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25182#note_143426559 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25432#note_143741434 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/issues/57284#note_143632158 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7361#note_143467417 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25417#note_145035983 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9283 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9468#note_145050418 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/issues/57873 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25474#note_145490484 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9625 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9663 -* [x] @yguo https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9593#note_144222817 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7361 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9544#note_144868675 -* [x] @yguo https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9572#note_145028420 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9744 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9468#note_145290166 -* [x] @NikolayS https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25639 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25532#a46973a5817634d9469930bacdf06d22045ac89b -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_request/9807#2cda0c6171fa7d04989507a1dd112e34c40df46d_220_219 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25532 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9833 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9841 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25533#note_146516657 -* [x] @yguo https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25437#note_146925022 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25034 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9858#a45f971bfca57680d3204229f43425fe162dc9c3 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25181 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9786#note_148357464 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25806 -* [x] @abrandl https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25533#2d53eb4595665098858367d6ea2b62c548d55d0a -* [ ] @abrandl Rails 5.1 https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9386",8.0 -18599765,2019-02-26 06:41:57.481,ZFS: 
Establish QA Harness for Packer Image Build,"QA Harness for Packer Builds Should: - -1. [x] Validate that the image is bootable -1. [ ] Validate that the image can be bootstrapped into Chef -1. [ ] Validate that the entire GitLab Suite Functions and passes existing QA harness -1. [ ] Promote the image into selected train (i.e. current or next) - -Stretch Goal: -- [ ] Validate that the minimum convergence time between first spin to functional machine is < 5m",8.0 -18599724,2019-02-26 06:38:08.748,ZFS: Establish Packer Build,"Establish a CI process that meets the following criteria: - -- [x] Starts with Google Ubuntu 18.04 LTS [Shielded VM](https://cloud.google.com/security/shielded-cloud/shielded-vm) Image -- [x] Leverages all of the cookbooks in the `gprd-base` role via a `chef zero` run process -- [x] Installs and configures ZFS kernel modules and supporting tools via `chef zero` run process -- [x] Produces a GCP Disk Image as the end product",20.0 -18597525,2019-02-26 03:49:32.800,Staging alerts mis-labeled and going to production pager duty,"The following alerts have been going to the production, high priority channel of pager duty all weekend: - -``` -Postgres Replication lag is over 9 hours on delayed replica (normal is 8 hours) -PostgreSQL_ReplicationLagTooLarge_ArchiveReplica -PostgreSQL_UnusedReplicationSlot -PatroniIsDown -``` - -All of these have hostnames in staging. - -Additionally, this ops alert is going to production, high priority channel but is clearly not high priority. - -``` -Deadman switch of `test_alert` (db/postgres) has expired -```",2.0 -18595305,2019-02-26 00:49:16.533,DR Redis servers have excessive load,"``` -load average: 30.21, 36.86, 29.86 -``` - -On: - -``` -redis-01-db-dr.c.gitlab-dr.internal -redis-02-db-dr.c.gitlab-dr.internal -redis-03-db-dr.c.gitlab-dr.internal -``` - -Connecting: - -``` -$ REDIS_MASTER_AUTH=$(sudo grep ^masterauth /var/opt/gitlab/redis/redis.conf|cut -d\"" -f2) -$ /opt/gitlab/embedded/bin/redis-cli -a $REDIS_MASTER_AUTH -127.0.0.1:6379> info replication -# Replication -role:master -connected_slaves:1 -slave0:ip=10.251.5.102,port=6379,state=send_bulk,offset=0,lag=0 -master_repl_offset:1300481 -repl_backlog_active:1 -repl_backlog_size:1048576 -repl_backlog_first_byte_offset:251906 -repl_backlog_histlen:1048576 -```",1.0 -18578561,2019-02-25 14:23:59.076,adjust log retention on about.gitlab.com,"Logrotate on about.gitlab.com is keeping a year of nginx logs, but the root disk is running full after a few months already. We should keep less of them and forward them to stackdriver eventually.",2.0 -18576898,2019-02-25 14:17:41.625,Fix alerting for azure nodes,We are not getting any alerts from the alertmanager on prometheus.gitlab.com (Azure) and the alerts also seem not to be routed to other alertmanagers. So we didn't get notified for full disks on about.gitlab.com (https://gitlab.com/gitlab-com/gl-infra/production/issues/699) for instance.,2.0 -18524638,2019-02-22 19:59:13.249,Reference bootstrap/teardown scripts by module version,"Currently, our [new](#6084) GCP [bootstrap module](https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/bootstrap) is an iterative improvement over versioned bootstrap script files in out legacy terraform monorepo, that still has versioned files referenced by passing an attribute to the module. - -Ideally, the versioned filenames should be deprecated, and going forward, subsequent versions should be referenced by changing the module version. 
We need to test to make sure this is viable, and verify whether we have any cases of using multiple bootstrap versions in parallel. If so, we will have to validate using multiple instances of the bootstrap module at the same time, namespaced by version. - -1. [ ] Add a copy of the latest bootstrap script to the module, _without_ a version in its filename -1. [ ] Make the `bootstrap_version` and `teardown_version` attributes optional, for referencing older versions of the bootstrap script -1. [ ] Update calling references for the module to deprecate those attributes, and switch to using the module version - -/cc @T4cC0re",2.0 -18524248,2019-02-22 19:38:11.650,Deploy Uptycs to Production,"After the successful deploy to staging (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5784) next step is to roll it to production. - -To do: - -1. [x] Setup gitlab-uptycs cookbook mirroring to ops.gitlab.com. Issue: https://gitlab.com/gitlab-cookbooks/gitlab-uptycs/issues/1 -1. [x] Add CI/CD pipeline for gitlab-uptycs cookbook. Issue: https://gitlab.com/gitlab-cookbooks/gitlab-uptycs/issues/2 -1. [x] Determine rollout plan (# roles/hosts per phase, # phases) Issue: https://gitlab.com/gitlab-com/gl-infra/production/issues/709 -1. [x] Deploy! Issue: https://gitlab.com/gitlab-com/gl-infra/production/issues/709",5.0 -18523598,2019-02-22 18:56:59.189,"Check if DB related metrics, alerts and dashboards are still working","After scraping DB metrics from the new prometheus-db instance instaed of from prometheus-gprd, we need to make sure that we have complete metrics as before, all alerts are working as before (especially general alerts) and grafana dashboards are working as before.",2.0 -18523175,2019-02-22 18:34:05.741,[RCA] Loss of db metrics visibility by switching to dedicated prometheus instance for db metrics,"## Summary - -By starting to scrape db metrics from a new dedicated prometheus instance our grafana db dashboards stopped working. - -Service(s) affected : ~""Service:Postgres"" - -Team attribution : Infrastructure - -Minutes downtime or degradation : 21h - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - We lost visibility into db metrics for a while (although metrics were still scraped, but by another instance, which wasn't noticed) -- Who was impacted by this incident? - - SRE oncall, release team - having to delay the 11.8 release by an hour. -- How did the incident impact customers? - - no customer impact - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - we got alerts for missing metrics -- Did alarming work as expected? - - we got some alerts for missing metrics data, but not specific enough to immediately be aware of what was going on. -- How long did it take from the start of the incident to its detection? - - 12h -- How long did it take from detection to remediation? - - 9h -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) 
- - we should have noticed earlier from the alerts that db monitoring wasn't working anymore - - paging for DBRE support didn't work as DBRE schedules were not communicated clear enough and escalation was still going to OnGres (which doesn't support us anymore) - -## Timeline - -2019-02-21 - -- 17:35 UTC - general-alerts: `Operation rate data for the ""service"" component of the ""patroni"" service is missing` - -2019-02-22 - -- 05:29 UTC - @andrewn raising concern about above alert -- 12:34 UTC - incident issue created (https://gitlab.com/gitlab-com/gl-infra/production/issues/697) -- 12:58 UTC - DBRE oncall was paged, team starts fixing Grafana dashboards -- 13:24 UTC - release team wants to deploy 11.8, SRE oncall is holding them off to first get db metrics visibility again -- 14:29 UTC - SRE oncall gives green light for 11.8 deploy - - -## Root Cause Analysis - -general db alerts didn't get data anymore. -Because db dashboards in grafana didn't work anymore. -Because prometheus-gprd wasn't scraping db data anymore. -Because a change was made to scrape them from a new, dedicated instance, prometheus-db. -Because db metrics were making up most of the load on prometheus-gprd. - -## What went well - -- general-alerts warning for missing db metrics data -- @andrewn noticing the alerts and taking action, making team aware of the consequences and creating an incident -- @ahmadsherif and @yguo jumping in to adjust the dashboards - -## What can be improved - -- Better communication to make everybody aware of changes in monitoring -- Making sure that dashboards and alerts are still working when changing prometheus setup -- Better response to alerts that are not fully understood (as the missing metrics alert) - -## Corrective actions - -- Fix Grafana Dashboards https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6266 -- Check if we still get all the db metrics and alerts via prometheus-db: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6271 -- Communicate the new prometheus-db instance",2.0 -18519468,2019-02-22 15:39:50.589,Properly secure gitlab.dev domain name,"We have purchased the `gitlab.dev` domain, but it is still not being managed by the appropriate internal group. - -See relevant Slack threads: -https://gitlab.slack.com/archives/C3MAZRM8W/p1550339924057600 -https://gitlab.slack.com/archives/C101F3796/p1550817531108200",2.0 -18516038,2019-02-22 13:49:33.524,Fix DB Dashboards for new db prometheus server,We need to adjust all Postgres Grafana dashboards to work with the new prometheus instance prometheus-db.gprd.gitlab.net.,2.0 -18487590,2019-02-21 20:00:20.551,registry.staging.gitlab.com Certificate expired,![image](/uploads/38d3d97c92493146b46f260b00d54fea/image.png),1.0 -18482930,2019-02-21 16:16:25.507,"Raise GCP limits for the ""restore"" projects","For https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore current limits are: - -- 40TiB for disks -- 96 for CPUs - -For database-related tasks (such as benchmarking, testing new code, background migrations, and so on) we need production clones. When for small tasks (as new index idea verification) a smaller instance is usually enough, it is not so if we need to test something bigger (as background migration processing ~100M-1B rows). In such cases we need a 32-core machine, since it gives better disk performance (see https://cloud.google.com/compute/docs/disks/performance). - -The same project is also constantly being used for automated backup verification. 
- -Additionally, I just raised the default disk space for 'grpd' type of instances there from 3000Gib to 3300GiB, since GitLab.com is growing and the old size was not enough. - -Please raise quotas. If possible, double them. - -@Finotto - -cc @abrandl @northrup @yguo",1.0 -18460105,2019-02-21 01:55:29.473,"Make automatic terraform module versioning mandatory, or remove it","So this has tripped me up several times now, and I know it’s tripped up a few others as well. The new automatic versioning for the terraform modules requires `fix` or `feat` in the MR name or it breaks and you have to manually tag it. This is just too easy to mess up, since this is not required across the board, and many of these changes are so small and infrequent that I foresee it being a long time before I get used to doing it right. That means a lot more messed up merges ahead. - -I propose either making it impossible to merge an MR with no prefix, or removing the automated pipeline and making manual tagging the standard. - -If the merge button didn't appear unless the pipeline found a prefix in the title, that would be ideal. Then we could use `something:` as the prefix if we didn't want a version bump.",1.0 -18459505,2019-02-21 00:59:28.982,Reduce used space on `file-21`,"`file-21` is at 83% usage and the last one left over 80%. We need to rebalance to try to get this under control. - -Change issue https://gitlab.com/gitlab-com/gl-infra/production/issues/691",2.0 -18403832,2019-02-20 09:54:26.387,create a blueprint for the kernel patch process,please try to keep it abstract to be reused.,2.0 -18381918,2019-02-19 19:16:59.742,Alerts Mis-labeled for GitalyVersionMismatch,"Alerts in staging are going to production PagerDuty. In alert manager, they are labeled as `gstg-default` but in PagerDuty they are labeled as `GitLab Production` - -https://gitlab.com/gitlab-com/runbooks/blob/master/rules/gitaly.yml#L173 - -``` - - alert: GitalyVersionMismatch - expr: > - count( - sum by (environment, version) ( - gitlab_build_info{tier=""stor"",type=""gitaly""} - ) > 0 - ) == 2 - for: 30m - labels: - channel: gitaly - pager: pagerduty - severity: critical - annotations: - description: During a deployment, two distinct versions of Gitaly may be running - alongside one another, but this should not be the case for more than 30m. - Visit https://dashboards.gitlab.net/dashboard/db/gitaly-version-tracker?orgId=1 - for details of versions deployed across the fleet. - runbook: troubleshooting/gitaly-version-mismatch.md - title: 'Gitaly: two versions of Gitaly have been running alongside one another - in production for more than 30 minutes' -```",1.0 -18346260,2019-02-19 01:13:36.920,Enable Geo Tracking Database,"The next step to enabling Geo for GitLab.com is to enable the tracking database. This is not as straightforward as the documentation suggests, so I am creating this issue to track. - -The replicated main databases are set up. There is a separate instance configured for the tracking database at the secondary site. We are at the point of running the migrations to set up the schema. - -This issue can be closed when this section of the documentation is successfully completed: https://docs.gitlab.com/ee/administration/geo/replication/external_database.html#configure-the-tracking-database - -cc/ @ashmckenzie",3.0 -43413000,2020-06-15 07:21:35.040,Configure Flipper HTTP adapter on dev.gitlab.org,"# Details - -Now, we [can smoothly connect to GitLab Feature Flag from `Feature` class](https://gitlab.com/gitlab-org/gitlab/-/issues/222273). 
We should start dogfooding on a pre-production server dev.gitlab.org in order to evaluate the new architecture if it works properly - -## TODO - -- [ ] Announce when we reconfigure the instance. Engineers cannot update feature flags during the maintaince period. -- [ ] [Reconfigure dev.gitlab.org to use HTTP adapter](https://gitlab.com/gitlab-org/gitlab/-/issues/222266). The URL points to a GitLab Feature Flag Server (Project) in ops.gitlab.net. -- [ ] [Migrate existing flag data from local postgres to GitLab Feature Flag](https://gitlab.com/gitlab-org/gitlab/-/issues/222273). -- [ ] Announce when the migration is done. Engineers can update feature flags via chatops again. - -Probably, it'd better we introduce a maintenance mode in chatops for feature flags that prevents anyone to update feature flags during the term. We're already doing the same during a production incident. - -## Feature Flag Servers - -- dev.gitlab.org ... https://ops.gitlab.net/gitlab-org/feature_flags/dev -- staging.gitlab.com ... https://ops.gitlab.net/gitlab-org/feature_flags/staging -- gitlab.com ... https://ops.gitlab.net/gitlab-org/feature_flags/production - -## NOTES - -- Developers can change flags via Chatops or rails console (i.e. Feature.enable). You can see the strategy vs gate mapping below. -- Developers cannot change flags in GitLab's Feature Flag UI (i.e. readonly). This is because chatops has some extended features e.g. freeze all flags during a production incident. We'll likely follow this up. -- This is [a part of ~Dogfooding issues](https://gitlab.com/groups/gitlab-org/-/epics/3367).",4.0 -38020114,2020-06-09 14:21:07.760,Put osquery Grafana dashboard under version control,"We want to [cleanup manually managed Grafana dashboards](https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2345). As the [osquery dashboard](https://dashboards.gitlab.net/d/fjSLYzRWz/osquery) still is needed (for monitoring the uptycs deployment, which is about to be [rolled-out with a new version](https://gitlab.com/gitlab-com/gl-security/secops/detection/uptycs/-/issues/13) by security soon), we should add it to the runbooks/dashboards folder.",3.0 -36062339,2020-06-08 15:23:24.093,Add CNAME to DNS - Sigstr,"## Goal - -We are onboarding a new tool in marketing operations and require a new CNAME in our DNS. - -The name of the record should be `signatures.gitlab.com` and the value should point it to `gitlab.sigstr.net`.",1.0 -35424143,2020-06-04 10:14:32.733,Ignore min_cpu_platform in all instances and envs in terraform,"We sometimes get random changes of `min_cpu_platform` for instances in our plan. Presumably, when gcp internally decides to run a VM on a different CPU family (maybe triggered by a reboot) - that's at least my theory. This is unfortunate, because it causes an unclean plan and applying the plan needs a reboot of the instance. 
- -``` -- min_cpu_platform = ""Intel Skylake"" -> null -``` - -As we are not interested in `min_cpu_platform` so far (we only select `machine_type`), we should make terraform ignore this attribute for all instances and envs instances.",3.0 -35386491,2020-06-03 13:39:56.529,Planning for Observability Team W25 Milestone/Sprint,"[Milestone](https://gitlab.com/groups/gitlab-com/gl-infra/-/milestones/81) - -## Calendar - -| | M 15 Jun | T 16 Jun | W 17 Jun | Th 18 Jun | F 19 Jun -|-------------|----------|----------|----------|----------- |---------- -| @bjk-gitlab | - | - | - | - | - -| @cindy | - | - | - | - | - -| @craigf | :pager: | :wrench: | :wrench: | :wrench: | :wrench: -| @igorwwwwwwwwwwwwwwwwwwww | - | - | - | - | - -| @msmiley | - | - | - | - | - -| @mwasilewski-gitlab | - | - | - | - | 🧘🏻‍♂️ - -W25 working days: 24 - -| | M 22 Jun | T 23 Jun | W 24 Jun | Th 25 Jun | F 26 Jun -|-------------|----------|----------|----------|----------- |---------- -| @bjk-gitlab | - | - | - | - | - -| @cindy | - | - | - | - | - -| @craigf | :sun_with_face: | :palm_tree: | - | - | - -| @igorwwwwwwwwwwwwwwwwwwww | - | - | - | - | - -| @msmiley | :palm_tree: | :palm_tree: | :palm_tree: | :palm_tree: | :palm_tree: -| @mwasilewski-gitlab | - | - | - | - | - - -W26 working days: 24 - -
- -**Total working days: 48**",2.0 -35206403,2020-05-30 02:46:35.305,Update about.gitlab.com review apps prometheus probe,"We have [a probe configured](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/0ce027722dac3bd7c11b8de87b3a1e8dd1d0ca1c/roles/gprd-infra-prometheus-server.json#L215) for about-src.gitlab.com. We used to serve review apps for https://gitlab.com/gitlab-com/www-gitlab-com/ from that domain, but now we do it from about.gitlab-review.app. We should update that probe.",1.0 -35205922,2020-05-30 01:40:52.677,Move legacy redirects out of about-src to fastly,"We got a [PagerDuty alert](https://gitlab.pagerduty.com/incidents/PRU4JCP) about a cert about to expire for about-src.gitlab.com. We first thought this was for about.gitlab.com but it seems it's actually for the legacy redirects we're still serving through that host (see https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/81e814df0e352d43f3c6a6e8576f4a3625346d6d/roles/about-gitlab-com.json#L7-23). We should move those to our [terraform redirects environment](https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/tree/master/environments/redirects) which will put them on Fastly and manage SSL certificates through it. - -- [x] https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/1784 -- [x] https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3606 -- [x] `knife vault edit about-gitlab-com _default` and remove the old SSL certs - -/cc @ahmadsherif @ggillies FYI in case you saw that alert.",1.0 -35178244,2020-05-29 14:24:54.871,[Runbook] update the runbook to create or restore a delayed/archived replica,"Review the existent runbooks, and update the process on the PostgreSQL version 11.7. - -Also update the reference and commands for the usage of wal-g to apply the wals. - -In these environments, PostgreSQL is installed by omnibus. - -Please consider the case when the replica is out of sync and how to troubleshoot and fix the instance.",3.0 -35086129,2020-05-27 18:35:37.618,Completely decommission uptycs from our fleet,"Security wants to rollout a new version of uptycs. To prevent any glitches, uptycs is recommending to completely remove the previous version from the fleet first: https://gitlab.com/gitlab-com/gl-security/secops/detection/uptycs/-/issues/13#note_349679029 - -We need to update the `gitlab-uptycs` cookbook to support de-installing the package and deleting the data dir and then execute the decommissioning of the old uptycs version.",5.0 -35055230,2020-05-27 10:03:30.347,Sidekiq logs and metrics are missing for ops,The sidekiq dashboards for ops don't show data (prometheus metrics missing) and the sidekiq logs in elastic seem to only show client logs but not server logs: https://nonprod-log.gitlab.net/goto/92baf0950243d109800bd67aacb427a3.,3.0 -35021099,2020-05-26 14:38:52.367,postgres checkup does not include information from database primary,"The latest postgres-checkup report does not seem to include information about the primary database instance: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10326 - -I suspect this is only a configuration issue with the pipeline (but don't have access). It would be helpful to have the primary included, particularly to understand the read/write workload better. - -The project to look at is: https://ops.gitlab.net/gitlab-com/gl-infra/postgres-checkup I recall we've had a list of database instances to connect to in the CI config, but I'm not sure if that's still true. 
- -cc @albertoramos",3.0 -35011535,2020-05-26 11:05:42.939,Integrate db ops with chat-ops,The db ops automation tooling in https://ops.gitlab.net/gitlab-com/gl-infra/db-ops should be integrated with chat-ops.,8.0 -35011423,2020-05-26 11:02:03.148,Setup CI pipelines for db ops automation,"With new database ops automation code living in https://ops.gitlab.net/gitlab-com/gl-infra/db-ops we want to have CI pipelines setup to run different database maintenance tasks automated via CI. This should also help us to integrate with chat-ops in the future. - -First iteration: -* Run [rolling postgres restarts on replicas](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10333) via CI",5.0 -35011090,2020-05-26 10:51:46.364,Create Ansible playbooks for postgres restart and failover,"For the [planned DB failover](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10276) we want to have Ansible playbooks in order to make this task automated and repeatable. - -The playbooks should live in the new db-ops repository: https://ops.gitlab.net/gitlab-com/gl-infra/db-ops - -First iteration: - -* automate rolling postgres restart on (selected) replicas -* automate primary failover",8.0 -34935310,2020-05-24 00:34:51.125,SSL Certificate for dashboards.GitLab.net expiring,"The certificate for dashboards.GitLab.net expires today. - -https://gitlab.pagerduty.com/incidents/P7PS70T",2.0 -34881743,2020-05-22 08:29:39.286,Redis Cache Sentinel hosts should not be labelled as `type=redis` in Prometheus,"Currently, the Redis Cache Hosts are labelled in redis as part of the `redis` service. They should be part of the `redis-cache` service. - -https://thanos-query.ops.gitlab.net/graph?g0.range_input=1h&g0.max_source_resolution=0s&g0.expr=count(up%7Btype%3D%22redis%22%2C%20env%3D%22gprd%22%7D)%20by%20(fqdn)&g0.tab=1",1.0 -34815865,2020-05-20 17:24:46.757,rebuild patroni-02,"Patroni-02 failed and was taken out of the cluster. In order to align the patroni cluster nodes properly by number again to be able to [decommission 4 other nodes](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1741), we need to re-build patroni-02 from scratch (forcing it to be instantiated on a different gcp hardware node hopefully). - -We need to rebuild patroni-02 before we do the [planned primary switchover](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10276).",3.0 -34812779,2020-05-20 16:04:54.207,Remove overly broad page rule from Cloudflare configuration,"Incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2145 - -[This page rule](https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/blob/master/environments/gprd/main.tf#L2463-2475) should be removed from our configuration. - -The global security settings should be considered our final fall through page rule. Global security and caching settings should be made at the zone level and not with a page rule.",1.0 -34812470,2020-05-20 15:55:50.760,Create (or modify) a runbook to describe how to identify authenticated vs unauthenticated API calls,"Incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2145 - -As an on-call SRE, I should be able to identify and characterize API traffic, including if that traffic is authenticated or unauthenticated. 
And, if possible, programmatically isolate this traffic to better craft page rules and defenses in Cloudflare (or Haproxy) to protect our site from abuse.",2.0 -34812201,2020-05-20 15:47:22.145,Create (or update) Cloudflare runbook to better address abuse and attack events,"Incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2145 - -As an on-call SRE, I should have a runbook that helps describe how to identify an abuse or attack in Cloudflare, and describe the tools we can use in Cloudflare to mitigate such an event. - -This should probably include sections on: -1. Creating specific page rules to over-ride our security level for a URI -1. Adding IP addresses to a block list -1. Changing the zone-wide security level -1. Explain how the global `Im under attack` toggle will affect the site",2.0 -34809692,2020-05-20 14:52:11.290,Plan patroni failover to shrink the cluster (and more),"We want to execute some changes to our production patroni cluster which require a short downtime for a failover: - -* shrink the cluster size: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1741 -* increase max_connections from 300 to 500 -* Reconfigure our Terraform template -* Anything else? - -We need to plan when and how to execute this change, evaluating the risk and impact of a failover and which actions are needed to cleanup after the failover.",1.0 -34790558,2020-05-20 08:18:02.951,Deploy Thanos 0.13.0,Thanos 0.13.0 fixes https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9293,1.0 -34790154,2020-05-20 08:09:08.025,Export version database for loading into warehouse,Runbook https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/cloudsql-data-export.md,2.0 -34769941,2020-05-19 18:16:57.351,Fix creation of wal-g gcs.json on grpd postgres replicas,"During execution of https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2161, chef created an empty `/etc/wal-g.d/gcs.json` file instead of filling it with data. -I copied over the file from wal-e to fix this for now (and confirmed it is not overwritten by chef), but we need to get this fixed in chef.",3.0 -34768232,2020-05-19 17:29:42.871,Geo.staging.gitlab.com server not found,"https://geo.staging.gitlab.com was accessible until recently (today or yesterday maybe?), even though DB replication is down.",1.0 -34730270,2020-05-19 10:53:17.780,"Adjust node disk IO quota metrics to be per node, not per device","As mentioned in the observability meeting, it turns out that GCP disk IOP quota is per node, sum by the total HDD and SSD disk size per node. Not IOPs per disk per node. - -We need to adjust the automatic calculations for these metrics to calculate per disk. - -We also need to have info metrics to identify devices by HDD and SSD so that we can calculate per-node saturation.",1.0 -34726385,2020-05-19 09:08:14.933,Alert when jobs are not being processed by sidekiq,"Corrective action for: -* incident https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2154 -* RCA https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2158 - - - -To avoid the situation where we are unaware of sidekiq queues that are not being processed at all we should implement alerting on low RPS that will be useful so that we can page the oncall for similar problems as in the linked incident. As a first iteration it would be good to have a very generous low threshold. - -@andrewn suggested in slack: - -> We alert on any job that maintains a minimum 0.1 rps over the course of the day. 
If we don’t see it for 6 hours, we alert. -> Obviously this will we a bit noisy when queues are decommissioned",3.0 -34712939,2020-05-18 23:31:29.097,Clean up unused deployments in gs-staging cluster,"The `gs-staging` cluster has a bunch of failed deployments, and deployments related to projects that no longer exist. - -``` -version-gitlab-com-6491770-dast-default -version-gitlab-com-fork-12446120-review-sync-upstr-vthqx5 -version-gitlab-com-fork-12446120-staging -license-gitlab-com-fork-12446131-review-enable-aut-a96d5t -license-gitlab-com-fork-12446131-dast-default -license-gitlab-com-fork-12446131-staging -```",1.0 -34684452,2020-05-18 12:10:27.498,Node provisioning broken due to gcloud gem install failure,"The ruby version bundled with td-agent is 2.4, now when we install the `gcloud` gem it is complaining that that `google-protobuf` requires ruby 2.5. - - -``` -* Mixlib::ShellOut::ShellCommandFailed occurred in chef run: execute[install gcloud gem] (gitlab_fluentd::default line 170) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1' ----- Begin output of /usr/sbin/td-agent-gem install gcloud --no-document -v 0.24.0 ---- -STDOUT: -STDERR: ERROR: Error installing gcloud: - google-protobuf requires Ruby version < 2.8.dev, >= 2.5. ----- End output of /usr/sbin/td-agent-gem install gcloud --no-document -v 0.24.0 ---- -Ran /usr/sbin/td-agent-gem install gcloud --no-document -v 0.24.0 returned 1 - -``` - - -One a working node: - -``` -google-protobuf (3.11.3 x86_64-linux, 3.9.0 x86_64-linux, 3.6.1 x86_64-linux) -``` - -Installing `google-protobuf` manually seems to work around this - -``` -/usr/sbin/td-agent-gem install google-protobuf --no-document -v 3.11.3 - -``` - - - -``` -# /usr/sbin/td-agent-gem install google-protobuf --no-document -v 3.11.3 -Fetching: google-protobuf-3.11.3-x86_64-linux.gem (100%) -Successfully installed google-protobuf-3.11.3-x86_64-linux -1 gem installed - -```",2.0 -34684174,2020-05-18 12:03:47.526,gitlab-uptycs cookbook is not stopping osqueryd service,"When setting `""enabled"": false` in the [chef role](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3454) for gitlab-uptycs, the osqueryd service on all affected nodes should be stopped and disabled, but that isn't working. - -This MR also didn't fix it: https://gitlab.com/gitlab-cookbooks/gitlab-uptycs/-/merge_requests/22",3.0 -34614717,2020-05-15 16:33:07.457,Decomission vfiles module and revert any bootstrap migration changes,"This [EPIC](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/206) outlined a series of steps to alter how our bootstrap process worked for chef in GCP. That work is on hold due to another option being put in place and future terraform changes breaking this new approach. - -We should remove the vfiles module from all environments except ci-org.",1.0 -34592926,2020-05-15 08:07:54.982,Warning Alert Hackathon - 2020-06-04,"* Gather a list of warnings that have fired recently/frequently. -* Fix trivial issues. -* File issues for non-trivial issues.",1.0 -34558537,2020-05-14 13:53:24.866,Cleanup PrometheusRuleEvalFailures,"There are some persistent issues with Prometheus rule evaluations. We need to investigate and fix these. 
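As a starting point for the investigation, a query along these lines should surface which rule groups are currently failing (`prometheus_rule_evaluation_failures_total` is the standard Prometheus self-metric; scoping it further by our environment labels is an assumption):

```promql
sum by (env, fqdn, rule_group) (
  rate(prometheus_rule_evaluation_failures_total[5m])
) > 0
```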
- -https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10089",2.0 -34519880,2020-05-13 17:05:42.330,Fix patroni installation in Chef,"When building new patroni nodes, after Chef has run, `gitlab-patronictl` is missing dependencies: - -``` -root@patroni-08-db-gprd.c.gitlab-production.internal:~# gitlab-patronictl list -Traceback (most recent call last): - File ""/opt/patroni/bin/patronictl"", line 7, in - from patroni.ctl import ctl - File ""/opt/patroni/lib/python3.5/site-packages/patroni/ctl.py"", line 29, in - from patroni.config import Config - File ""/opt/patroni/lib/python3.5/site-packages/patroni/config.py"", line 12, in - from patroni.postgresql.config import ConfigHandler - File ""/opt/patroni/lib/python3.5/site-packages/patroni/postgresql/__init__.py"", line 3, in - import psycopg2 -ImportError: No module named 'psycopg2' -``` - -We need to fix that in Chef.",1.0 -34515750,2020-05-13 15:17:25.536,Rebuild and add patroni-08 back to cluster,"As a replacement for the failing patroni-02 we need to rebuild and bring back in sync patroni-08. - -See https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2101 and https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2119.",3.0 -34503932,2020-05-13 11:22:37.135,Decommission left-over patroni nodes,"During the postgres 11 upgrade we left patroni-{08,09,10,12} out of the cluster and on version 9.6 to be able to roll back. -As the cluster seems to be running fine without those nodes and we constantly need to silence alerts for those nodes and they also are quite expensive, we should commission them ASAP. - -The current primary is patroni-11, which makes decommissioning more complicated, as we only can savely remove nodes in terraform starting with the highest number. This means, we probably need to rebuild and add patroni-08 to the cluster, failover to one of the lower-numbered nodes, remove patroni-11 from the cluster and then decommission patroni-[09-12].",5.0 -34467351,2020-05-12 18:23:54.923,Allow and set specific concurrency limits for the canary file node,The current limit on concurrent PostUploadPack processes is '80'. We did not have that many during the canary slowdown event. Should we revise this number down just for the canary gitaly node to help promote a better experience when the node is under load? What should the concurrency be?,1.0 -34461510,2020-05-12 15:58:55.860,Observability Team - W23 2020 | Sprint Planning Issues,"# Planning - W23 2020 - -We'll use this issue as a means of discussing the priorities for the [Observability Team - W23 2020](https://gitlab.com/groups/gitlab-com/gl-infra/-/milestones/80) milestone. - -![Team_Focus](/uploads/655795e3b83206641ca3a4b2f5e7b167/Team_Focus.jpeg) - -## References - -- [~""team::Observability"" Epics Roadmap](https://gitlab.com/groups/gitlab-com/gl-infra/-/roadmap?label_name%5B%5D=team::Observability&scope=all&sort=end_date_asc&state=opened&utf8=%E2%9C%93&layout=QUARTERS)",3.0 -34438422,2020-05-12 09:39:20.820,The `testbed` environment appears to be routing alerts to production destinations,"Alerts in the `testbed` environment are being send to production slack channels, leading to false alerts. 
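The likely fix is an explicit route for the `testbed` environment placed ahead of the production Slack routes in the Alertmanager routing tree — a sketch only, the receiver name is a placeholder and not part of the real config:

```yaml
route:
  routes:
    - match:
        env: testbed
      receiver: testbed_blackhole   # or a dedicated non-production channel
      continue: false               # don't fall through to the production receivers
```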
- -Example: https://gitlab.slack.com/archives/CD6HFD1L0/p1589276190114400 - - - -cc @T4cC0re oncall @AnthonySandoval",1.0 -34398667,2020-05-11 12:24:26.429,Make archive and delayed replica work after postgres 11 upgrade,"After upgrading the patroni cluster to postgres 11, we need to make the archive and delayed replicas work again.",5.0 -34367388,2020-05-10 20:13:13.794,Update the deleted-project-restore runbook,"While doing a project restore for https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10092 I encountered many issues that need to be updated in the according runbook. Basically, in most cases project restores will not work anymore as described in the runbook for multiple reasons. Need to add my notes to the [runbook](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/deleted-project-restore.md).",3.0 -34289578,2020-05-08 03:44:33.672,Can't use a symbolic link to bin/tf,"Having establish a symbolic link to `bin/tf` from within a directory already in ones local env `PATH` in order to support easy inclusion of `bin/tf` in one's `PATH` without having to explicitly add this project's path to one's `PATH`, the `bin/tf` script fails, since its code references the actual directory of the link, not the link target, during execution.",1.0 -34274020,2020-05-07 16:52:01.986,Change low-urgency-cpu-bound machine type,"As the `low-urgency-cpu-bound` shard is receiving more traffic now, we should change the machine type from `n1-standard-2` to the standard `n1-standard-4`. (See https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10098#note_338354418).",3.0 -34268356,2020-05-07 14:21:46.968,Add low-urgency-cpu-bound nodes to accommodate to CPU saturation,"We get `low-urgency-cpu-bound` CPU saturation alerts (probably related to spikes of `reactive_caching` jobs). -We should add more nodes to keep up with it. - -![image](/uploads/14396e4d41efeee6f74689c55112a905/image.png)",3.0 -34234798,2020-05-06 20:10:01.963,Add Zendesk as an external service to Status.io,"Our testing environment on Status.io currently has Zendesk set up as an [external service](https://kb.status.io/monitoring/external-service/). For the sake of parity, I think we should add this to the production status page as well in the event that there's a Zendesk (and therefore, support) outage we need to report on. - -Looks like admin privileges are required to add this so I'm unable to. Would you mind taking care of this, @dawsmith?",1.0 -34234548,2020-05-06 20:00:18.764,Request for new subdomain - `advisories.gitlab.com`," - -This issue https://gitlab.com/gitlab-org/secure/vulnerability-research/advisories/landing-page/-/issues/1 requires a new subdomain, `advisories.gitlab.com`, to be created for the advisory landing pages. The project at `gitlab.com/advisories/advisories.gitlab.io` will then use the new subdomain as its custom domain for its pages. - -**Details** - - Point of contact for this request: [+ @d0c-s4vage +] - - If a call is needed, what is the proposed date and time of the call: [+ Date and Time +] - - Additional call details (format, type of call): [+ additional details +] - -**SRE Support Needed** - -[+ Support Request Details +] - - - - - -",2.0 -34163906,2020-05-05 16:30:45.573,Generate alert manager config from runbook pipeline,"Currently, we have three different source of truth for our alert manager routing configuration. - -These are: -1. https://gitlab.com/gitlab-com/runbooks/-/blob/master/alertmanager/alertmanager.yml.erb -1. 
https://gitlab.com/gitlab-cookbooks/gitlab-alertmanager/blob/master/templates/default/alertmanager.yml.erb -1. An encrypted file, manually stored in GCS, and not regularly updated. See https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/monitoring/-/blob/master/bin/k-ctl#L57 for details - -The first two are now relatively in check with one another, but effort needs to be made to unify all three routing configs. - -## Proposal - -Add a pipeline to the runbooks repo to generate the config and store it encrypted in GCS. - -Then adapt the chef repo to use this GCS file, rather than the ERB directly. - -(Placeholder, more to follow) - ------------------------------------------------------- - -Related: - -* https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/214 -* https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7071 -* https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9572 -* https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles/-/merge_requests/16 -* https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10045 - -cc @ggillies @skarbek @jarv @marin @craigf",3.0 -34162364,2020-05-05 15:49:58.980,Provision new HDD-based Gitaly node in production,"**Details** - - Point of contact for this request: [+ @glopezfernandez +] - - If a call is needed, what is the proposed date and time of the call: no call necessary - - Additional call details (format, type of call): n/a - -**SRE Support Needed** - -As part of migrating archived repositories to HDD-based storage, we need to provision a new node in production backed by HDDs. This is simiar to #9846, but in order to safeguard production, let's provision the same class of machine and adjust as we observe the behavior of the system. - -",2.0 -34153471,2020-05-05 12:21:46.454,Export version database for loading into warehouse,Runbook https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/cloudsql-data-export.md,2.0 -34140342,2020-05-05 07:30:19.373,staging console permissions broken,"People with `rails-console` access in their data bags are not able to login as `-rails@console-01-sv-gstg.c.gitlab-staging-1.internal` anymore since last week. - -It seems a typo in the `gstg-base-console-node` chef role removed all ssh `AllowGroups` from that node.",1.0 -34119354,2020-05-04 16:43:52.616,investigate charts on the monitoring dashboard missing data,"follow up on: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9877#note_334672627 - -some charts on this dashboard: https://dashboards.gitlab.net/d/monitoring-main/monitoring-overview?orgId=1&from=now-6h%2Fm&to=now%2Fm&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-sigma=2 show no data errors",2.0 -34001861,2020-04-30 14:08:10.469,`ElasticCloud Watcher: gitaly_abuse_1` triggered with empty project names,"The alert is triggered and posts to Slack, but is no longer including a project name: - -``` -Project: -File Server: file-13-stor-gprd.c.gitlab-production.internal -Average Gitaly Wall time: 2376ms/second -Average rate: invocations per second 170ops/sec -``` - -Example slack message (GitLab internal only): https://gitlab.slack.com/archives/CCR9GMMR7/p1588255198064400 - -@craigf (currently the SRE on-call) for help directing this to right place. 
- -cc @gitlab-com/gl-security/abuse-team",1.0 -33961346,2020-04-29 16:22:57.866,Setup Grafana Image Renderer service in k8s,"Grafana recommends moving away from PhantomJS and adopting the [Grafana Image Renderer](https://grafana.com/docs/grafana/latest/administration/image_rendering/) plugin. - -## Definition of Done - -- [ ] The [image renderer](https://hub.docker.com/r/grafana/grafana-image-renderer) is deployed to our GKE instance. -- [ ] Slackline is verified as working with the new image rendering.",5.0 -33958578,2020-04-29 15:35:24.298,document in runbooks in /docs/gitaly the use of housekeeping button for a lot of upload-packs processes,related to: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9487#note_332230619,1.0 -33955237,2020-04-29 14:19:29.880,document process of import pgbouncer logs and analysis with counts,"@gerardo.herzig I do like your approach on the incident yesterday and I would like to take notes on your steps how you proceed to : - -- collect the logs from pgbouncer. -- what table structure you created -- how did you import the data -- and what queries we could execute for analysis. - -Please let me know what else can we add to the process. -Perhaps enable connection logging on Pgbouncer? - -I would like to implement this and in future automate to extract and support us on the analysis of metrics from pgbouncer. - -Also, we could break down the like that summarize the number of connections and the status of the pools : - -``` -2020-04-29_14:03:05 2020-04-29 14:03:05.540 UTC [1136] LOG stats: 3229 xacts/s, 3879 queries/s, in 2071807 B/s, out 6585743 B/s, xact 7366 us, query 1867 us, wait 921749 us -2020-04-29_14:04:05 2020-04-29 14:04:05.540 UTC [1136] LOG stats: 3321 xacts/s, 3961 queries/s, in 2122757 B/s, out 6471056 B/s, xact 8924 us, query 3311 us, wait 6245925 us -2020-04-29_14:05:05 2020-04-29 14:05:05.541 UTC [1136] LOG stats: 3264 xacts/s, 3899 queries/s, in 2050998 B/s, out 7198025 B/s, xact 6805 us, query 2118 us, wait 2080644 us -2020-04-29_14:06:05 2020-04-29 14:06:05.541 UTC [1136] LOG stats: 4712 xacts/s, 6235 queries/s, in 3057949 B/s, out 9155166 B/s, xact 14221 us, query 7623 us, wait 38534949 us -2020-04-29_14:07:05 2020-04-29 14:07:05.541 UTC [1136] LOG stats: 4162 xacts/s, 4907 queries/s, in 2601754 B/s, out 7390354 B/s, xact 8209 us, query 3593 us, wait 9232004 us -2020-04-29_14:08:05 2020-04-29 14:08:05.541 UTC [1136] LOG stats: 4016 xacts/s, 5537 queries/s, in 2496438 B/s, out 9656314 B/s, xact 14397 us, query 7119 us, wait 32114727 us -2020-04-29_14:09:05 2020-04-29 14:09:05.541 UTC [1136] LOG stats: 3612 xacts/s, 4460 queries/s, in 2243004 B/s, out 7149377 B/s, xact 10388 us, query 4600 us, wait 12898834 us - -``` - -And what retention time we could have in this ""database""? -We should partition the tables.",4.0 -33945354,2020-04-29 10:51:51.721,Use `shard` labels to distinguish `pgbouncer-sidekiq` from other `pgbouncer` nodes.,"Currently, it's not possible to distinguish `pgbouncer` main pool metrics from `pgbouncer` `sidekiq` pool nodes, without hardcoding database connection names or using regular expression matching on host names. - -We should use the `shard` label to distinguish these metrics. Possibly `shard=""sidekiq""` for the `pgbouncer-sidekiq-*` nodes. 
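For illustration, using the same `up` metric as the Thanos query below, the `shard` label would let us replace host-name matching with a plain label selector (exact label values are an assumption until the change lands):

```promql
# today: select the sidekiq pool nodes by host-name regex
count(up{env="gprd", type="pgbouncer", fqdn=~"pgbouncer-sidekiq.*"}) by (fqdn)

# with the shard label applied
count(up{env="gprd", type="pgbouncer", shard="sidekiq"}) by (fqdn)
```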
- -![image](/uploads/130d2f6799f97453b23fe75b349d14bd/image.png) - -https://thanos-query.ops.gitlab.net/graph?g0.range_input=1h&g0.max_source_resolution=0s&g0.expr=count(up%7Bfqdn%3D~%22pgbouncer.*%22%2C%20env%3D%22gprd%22%7D)%20by%20(fqdn%2C%20shard%2C%20stage%2C%20environment%2C%20tier)&g0.tab=1 - -Minor corrective action for https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9966 - -cc @ahmadsherif @bjk-gitlab @Finotto",1.0 -33886329,2020-04-28 06:52:05.411,Configure chef-client in gitlab-server cookbook by default,"We currently configure chef-client in our roles files. This leads to copy-n-paste errors[0]. - -* [x] Add a configuration recipe to gitlab-server. -* [x] Rollout new recipe to `gitlab-server::default`. - -[0]: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3245",1.0 -33871579,2020-04-27 19:21:02.789,Geo replication is broken in staging after Postgres update attempt,"After our Postgres update attempt on 2020-04-24, [the geo team reported replication was broken](https://ops.gitlab.net/gitlab-com/gl-infra/db-migration/-/issues/12#note_53253) on staging. Although this doesn't affect the prospect of a production attempt (since geo isn't enabled in production) we should fix it to unblock the geo team. - -/cc @Finotto @dbalexandre",2.0 -33871203,2020-04-27 19:11:18.684,Delete stale branches on ops chef-repo,"There are thousands of stale branches. Should we clean these up? - -`https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/branches/stale`",1.0 -33837283,2020-04-27 08:01:25.498,Reenable the usage ping in gitlab admin interface,"## Overview - -Reenable the usage ping in gitlab admin interface - -- We had disabled the usage ping as part of the greater fixing, https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9385 . We reenable it agains - - -## Next Steps - -* [x] Go to GitLab.com **Admin area > Settings > Metrics and profiling > Usage statistics** and Check usage ping. -* [x] Please post a screenshot of the usage ping settings once this once this is done.",1.0 -33763049,2020-04-24 15:45:57.800,License Database Extract,"Hi @Finotto & @gerardo.herzig - -Created this issue to request a license DB extract as from the [hand book](https://about.gitlab.com/handbook/business-ops/data-team/data-infrastructure/#license-db) - -Command required below: - -`pg_dump -Fp --no-owner --no-acl license_gitlab_com_production | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > S{DUMPFILE}` - -Can share the files with me on Slack this time - -Thanks!",1.0 -33762548,2020-04-24 15:42:18.531,Export version database for loading into warehouse,Runbook https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/cloudsql-data-export.md,2.0 -33753070,2020-04-24 13:01:23.021,add to runbooks a note about removing haproxy machines from GCP LB,follow up on: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1999#note_330938313,1.0 -33692496,2020-04-23 12:53:53.483,Set GITLAB_QA_FORMLESS_LOGIN_TOKEN variable on /etc/gitlab/gitlab.rb on live environments,"We're getting close to having https://gitlab.com/gitlab-org/gitlab/-/merge_requests/27788 merged. With that, to allow for running tests using the formless login mechanism in live environments such as staging, pre-prod, canary, and production, we will need to set the `GITLAB_QA_FORMLESS_LOGIN_TOKEN` environment variable on `/etc/gitlab/gitlab.rb` on those environments. - -You can find the variable's value at the Team 1Password under GitLab QA - Access tokens. 
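For the omnibus-managed nodes this would presumably go through `gitlab_rails['env']` — a sketch only, with the token value elided:

```ruby
# /etc/gitlab/gitlab.rb (managed via Chef in our case, so the change belongs in the role/cookbook)
gitlab_rails['env'] = {
  # value from the Team 1Password entry "GitLab QA - Access tokens"
  'GITLAB_QA_FORMLESS_LOGIN_TOKEN' => '<token>'
}
```

followed by a `gitlab-ctl reconfigure` on the affected nodes.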
- -Could someone help with this, please? - -Cc @gitlab-com/gl-infra/managers.",1.0 -33676220,2020-04-23 09:58:13.080,Research and prototype logical replication in PostgreSQL on version 11.7,"This issue is to capture ideas and proposals for the initiative of implementing logical replication with logical decoding in PostgreSQL. - -With logical replication we would be able of: - --Improve our upgrade between main releases. - --Enable checksums on the database.",20.0 -33657695,2020-04-22 23:37:59.194,Reduce disk space contention on CI runner VMs created by gsrm,"As a corrective action for https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1976, reduce the number of jobs serially handled by each runner VM from 30 to 10. - -Each time a runner VM runs a job, it accumulates residual disk space used by that docker container's volumes. Sometimes an unlucky VM runs a combination of jobs that fills its disk. This has started happening fairly often, and it is starting to affect the efficiency of development work. - -For more background, see summary notes here: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1976#note_329467200 - -This is the cheapest of the easy options. If it turns out to be insufficient, we can revisit the other options.",1.0 -33650500,2020-04-22 19:44:44.140,Upgrade Patroni in gitlab.com,"The goal is to upgrade Patroni to 2.0.2 without any downtime or failover. - -Release notes: https://github.com/zalando/patroni/blob/master/docs/releases.rst#version-201 - -The initial steps that we are planning are : - -- ~~stop chef-client in all the database nodes - to do not execute any failover~~ -- ~~Pause the Patroni Cluster~~ -- ~~ensure that all nodes ack the pause~~ -- ~~install the new version ( chef MR I guess)~~ -- ~~Restart the Patroni processes~~ -- ~~check that the new process is launched and operative~~ -- ~~when the check is positive we resume the cluster~~ -- ~~verify logs and the database and patroni are running~~ - -New steps (from [this comment](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9885#note_480398240)) [as of `2020-01-11`]: - -* Disable chef in all the nodes. -* Update the patroni python package ( `pip` ) -* Pause the patroni cluster in maintenance mode `gitlab-patronictl pause` -* Restart the patroni cluster in maintenance mode `gitlab-patronictl restart` -* Resume the maintenance mode Patroni cluster `gitlab-patronictl resume` -* Merge in chef the new version of Patroni. -* restore chef in each node. - - -**Acceptance criteria:** - -- [x] Create a runbook to document all the processes executed. -- [x] Rollout in staging the new Patroni version. -- [ ] Execute several failovers and verify that all the integrations are working properly. E.g.: ( traffic routing). -- [ ] Enable checksums in staging using the pause strategy. -- [x] Rollout the new Patroni version in Production.",8.0 -33629473,2020-04-22 14:07:22.723,Enable postgres_queue_enabled for Praefect in gstg/gprd,"This new setting allows replication jobs to be stored on Postgresql. It isn't expected to have any impact on the normal operation of Praefect outside of replication. 
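Assuming the linked omnibus MR exposes the setting under the usual `praefect[...]` attribute (the key name here is taken from the issue title, not verified against the MR), the change on the Praefect nodes would be roughly:

```ruby
# /etc/gitlab/gitlab.rb on the Praefect nodes
praefect['postgres_queue_enabled'] = true
```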
- -See https://gitlab.com/gitlab-org/gitaly/-/merge_requests/1989 and https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4096 for more details.",1.0 -33616634,2020-04-22 09:30:29.886,[Incident Review] 2020-04-22 Lack of Observability,"Incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1978 - - -## Summary - -Prometheus and alertmanager were still operational, Thanos was completely unavailable and because of that also Grafana - -- no alerts were triggered: - - we could use Elastic watcher [http input](https://www.elastic.co/guide/en/elasticsearch/reference/current/input-http.html) to perform a simple check if Prometheus, Alertmanager, Thanos, Grafana are operational. @bjk-gitlab do we already have any alerting for the ""monitoring stack"" external to it? -- what steps were taken to troubleshoot it? (how can we improve time to detection) - - we have this dashboard for the monitoring components: https://dashboards.gitlab.net/d/monitoring-main/monitoring-overview?orgId=1&from=now-6h%2Fm&to=now%2Fm&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-sigma=2 . However, a lot of the charts are empty. Besides, these should be monitored by an external system - - We have Thanos sending tracing info to Elastic APM - - The only logs available in Kibana are from Prometheus: https://log.gprd.gitlab.net/goto/db7eca8ec4cbd80b2ae1187715e62984 -- what would have prevented it from happening? - - the change was reviewed, we simply missed the fact that it resulted in a circular dependency - -1. Service(s) affected : ~""Service::Thanos"" -1. Team attribution : ~""team::Observability"" -1. Minutes downtime or degradation : ~600 - -## Customer Impact - -1. Who was impacted by this incident? **All employees attempting to use dashboards.gitlab.net and the infrastructure department's monitoring systems.** -2. What was the customer experience during the incident? **None** -3. How many customers were affected? **None** -4. If a precise customer impact number is unknown, what is the estimated potential impact? **n/a** - -## Incident Response Analysis - -1. How was the event detected? **Grafana dashboards were timing out.** -2. How could detection time be improved? **Monitoring of the Grafana system or latency for Thanos queriies could have alerted us.** -3. How did we reach the point where we knew how to mitigate the impact? -4. How could time to mitigation be improved? - - -## Post Incident Analysis - -1. How was the root cause diagnosed? -2. How could time to diagnosis be improved? -3. Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident? **Yes, https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/159.** -4. Was this incident triggered by a change (deployment of code or change to infrastructure. _if yes, have you linked the issue which represents the change?_)? **Yes, https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2135.** - - - -## Timeline - -All times UTC. 
- -2020-04-21 -- 19:00:00 https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2135 change is made to rules which results in a circular dependency in Thanos -- 23:40:00 transaction durations start to go up - -2020-04-22 - -- 02:15:00 request duration goes through the roof, we start to hit the 5min timeout -- 08:59:00 https://gitlab.com/gitlab-cookbooks/gitlab-prometheus/-/merge_requests/521 adds a monitor label to the rule server -- 09:02:00 https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2140/diffs adds a filter to record rules to skip the rule server -- 10:29:00 https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2142 improving routing - - -## 5 Whys - - -## Lessons Learned - - -## Corrective Actions - -- investigate and fix monitoring related metrics which are unavailable in Grafana - - issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10027 -- add an Alertmanager alert for Thanos latency (might be a matter of defining SLO threshold) - - there's already an epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/159 -- add Alertmnager alerts for other components of the monitoring stack - - same as above -- add Elastic watches for alerting when the monitoring stack is down - - same as above -- start using Jaeger/Elastic APM for tracing Prometheus/Alertmanager/Grafana - - support for Jaeger in Prometheus is coming in next release -- send logs from Thanos, alertmanager, Grafana to Kibana - - they simply don't log a lot -- set up a staging environment for our monitoring stack so that we can test changes before they go to production (I think that this would be an overkill, we should focus on better monitoring and alerting instead) - - we should do it as part of the migration to Kubernetes, creation of the staging env will be easier when not done with Chef and will help the migration itself as well - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/customer-success/professional-services-engineering/workflows/internal/root-cause-analysis.html#meeting-purpose)",1.0 -33612884,2020-04-22 08:37:32.692,Re-enable indexing on GKE logs with reduced schema,"detailed error: -``` -{""type"":""illegal_argument_exception"",""reason"":""field expansion matches too many fields, limit: 1024, got: 1629""}}}]},""status"":400} -``` - -see: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-settings.html for more background - -this config option cannot be adjusted through the API: -``` -""persistent setting [indices.query.bool.max_clause_count], not dynamically updateable"" -``` - -we need to bring down the number of fields in the indices - ---- - -Here is an example request/response: - -
- -``` -{ - ""version"": true, - ""size"": 500, - ""sort"": [ - { - ""json.time"": { - ""order"": ""desc"", - ""unmapped_type"": ""boolean"" - } - } - ], - ""_source"": { - ""excludes"": [] - }, - ""aggs"": { - ""2"": { - ""date_histogram"": { - ""field"": ""json.time"", - ""fixed_interval"": ""30s"", - ""time_zone"": ""UTC"", - ""min_doc_count"": 1 - } - } - }, - ""stored_fields"": [ - ""*"" - ], - ""script_fields"": { - ""controller_and_action"": { - ""script"": { - ""source"": ""doc['json.controller.keyword'] + \""#\"" + doc['json.action.keyword']"", - ""lang"": ""painless"" - } - } - }, - ""docvalue_fields"": [ - { - ""field"": ""@timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.expiry_from"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.expiry_to"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.bucket.start"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.bucket.stop"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.commits.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.created_after"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.created_before"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.due_date"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.head_commit.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.base.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.base.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.base.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.closed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.head.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.head.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.head.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.merged_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.pull_request.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.raw_response.created_on"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.raw_response.updated_on"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.repository.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.bucket.start"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.bucket.stop"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.commits.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.head_commit.timestamp"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.base.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.base.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.base.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": 
""json.extra.request_forgery_protection.pull_request.closed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.head.repo.created_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.head.repo.pushed_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.head.repo.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.merged_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.pull_request.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.extra.request_forgery_protection.repository.updated_at"", - ""format"": ""date_time"" - }, - { - ""field"": ""json.time"", - ""format"": ""date_time"" - }, - { - ""field"": ""publish_time"", - ""format"": ""date_time"" - } - ], - ""query"": { - ""bool"": { - ""must"": [], - ""filter"": [ - { - ""multi_match"": { - ""type"": ""best_fields"", - ""query"": ""ExternalDiffUploader"", - ""lenient"": true - } - }, - { - ""match_phrase"": { - ""json.controller"": { - ""query"": ""Projects::MergeRequests::DiffsController"" - } - } - }, - { - ""range"": { - ""json.time"": { - ""format"": ""strict_date_optional_time"", - ""gte"": ""2020-03-27T12:00:00.000Z"", - ""lte"": ""2020-03-27T12:30:00.000Z"" - } - } - } - ], - ""should"": [], - ""must_not"": [] - } - }, - ""highlight"": { - ""pre_tags"": [ - ""@kibana-highlighted-field@"" - ], - ""post_tags"": [ - ""@/kibana-highlighted-field@"" - ], - ""fields"": { - ""*"": {} - }, - ""fragment_size"": 2147483647 - } -} -``` -
- -Response: -
- -``` -{ - ""took"": 5158, - ""timed_out"": false, - ""_shards"": { - ""total"": 762, - ""successful"": 750, - ""skipped"": 750, - ""failed"": 12, - ""failures"": [ - { - ""shard"": 0, - ""index"": ""pubsub-rails-inf-gprd-001925"", - ""node"": ""jmnNQegZRWOO0aJBFjnZew"", - ""reason"": { - ""type"": ""query_shard_exception"", - ""reason"": ""failed to create query: {\n \""bool\"" : {\n \""filter\"" : [\n {\n \""multi_match\"" : {\n \""query\"" : \""ExternalDiffUploader\"",\n \""fields\"" : [ ],\n \""type\"" : \""best_fields\"",\n \""operator\"" : \""OR\"",\n \""slop\"" : 0,\n \""prefix_length\"" : 0,\n \""max_expansions\"" : 50,\n \""lenient\"" : true,\n \""zero_terms_query\"" : \""NONE\"",\n \""auto_generate_synonyms_phrase_query\"" : true,\n \""fuzzy_transpositions\"" : true,\n \""boost\"" : 1.0\n }\n },\n {\n \""match_phrase\"" : {\n \""json.controller\"" : {\n \""query\"" : \""Projects::MergeRequests::DiffsController\"",\n \""slop\"" : 0,\n \""zero_terms_query\"" : \""NONE\"",\n \""boost\"" : 1.0\n }\n }\n },\n {\n \""range\"" : {\n \""json.time\"" : {\n \""from\"" : \""2020-03-27T12:00:00.000Z\"",\n \""to\"" : \""2020-03-27T12:30:00.000Z\"",\n \""include_lower\"" : true,\n \""include_upper\"" : true,\n \""format\"" : \""strict_date_optional_time\"",\n \""boost\"" : 1.0\n }\n }\n }\n ],\n \""adjust_pure_negative\"" : true,\n \""boost\"" : 1.0\n }\n}"", - ""index_uuid"": ""HunDEJAFRKieC7kFcif7zw"", - ""index"": ""pubsub-rails-inf-gprd-001925"", - ""caused_by"": { - ""type"": ""illegal_argument_exception"", - ""reason"": ""field expansion matches too many fields, limit: 1024, got: 1470"" - } - } - }, - { - ""shard"": 0, - ""index"": ""pubsub-rails-inf-gprd-001926"", - ""node"": ""Nce627z_R7aRVIjH1JkAog"", - ""reason"": { - ""type"": ""query_shard_exception"", - ""reason"": ""failed to create query: {\n \""bool\"" : {\n \""filter\"" : [\n {\n \""multi_match\"" : {\n \""query\"" : \""ExternalDiffUploader\"",\n \""fields\"" : [ ],\n \""type\"" : \""best_fields\"",\n \""operator\"" : \""OR\"",\n \""slop\"" : 0,\n \""prefix_length\"" : 0,\n \""max_expansions\"" : 50,\n \""lenient\"" : true,\n \""zero_terms_query\"" : \""NONE\"",\n \""auto_generate_synonyms_phrase_query\"" : true,\n \""fuzzy_transpositions\"" : true,\n \""boost\"" : 1.0\n }\n },\n {\n \""match_phrase\"" : {\n \""json.controller\"" : {\n \""query\"" : \""Projects::MergeRequests::DiffsController\"",\n \""slop\"" : 0,\n \""zero_terms_query\"" : \""NONE\"",\n \""boost\"" : 1.0\n }\n }\n },\n {\n \""range\"" : {\n \""json.time\"" : {\n \""from\"" : \""2020-03-27T12:00:00.000Z\"",\n \""to\"" : \""2020-03-27T12:30:00.000Z\"",\n \""include_lower\"" : true,\n \""include_upper\"" : true,\n \""format\"" : \""strict_date_optional_time\"",\n \""boost\"" : 1.0\n }\n }\n }\n ],\n \""adjust_pure_negative\"" : true,\n \""boost\"" : 1.0\n }\n}"", - ""index_uuid"": ""URp08IJpRjKQ6kRnKFJQ8w"", - ""index"": ""pubsub-rails-inf-gprd-001926"", - ""caused_by"": { - ""type"": ""illegal_argument_exception"", - ""reason"": ""field expansion matches too many fields, limit: 1024, got: 1136"" - } - } - } - ] - }, - ""hits"": { - ""total"": 0, - ""max_score"": 0, - ""hits"": [] - } -} -``` - -
",3.0 -33565099,2020-04-21 09:57:43.421,Thanos storage is enabled in Testbed,"The `testbed` environment has thanos storage enabled in the sidecars, but it's not setup for other thanos components. This is cusing Prometheus to send data to GCS that isn't useful.",1.0 -33654844,2020-04-21 09:17:24.241,a small number of rails logs is being rejected by the ES logging cluster,"## Summary -a small number of rails logs is being rejected by the ES logging cluster - -## Timeline - -All times UTC. - -2020-04-22 - -- 09:17 - Incident declared from Slack - -## Details -This is a continuation of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/8931 - -Two alerts are firing for this: -- number of pubsubbeat warnings -- ES write rejections - -These alerts have been firing for a few weeks now. We either need to fix the underlying problem or adjust the thresholds - -## Source -Incident declared by mwasilewski in Slack via `/incident declare` command. - -## Resources -1. If the **Situation Zoom room** was utilised, recording will be automatically uploaded to [Incident room Google Drive folder](https://drive.google.com/drive/folders/1wtGTU10-sybbCv1LiHIj2AFEbxizlcks) (private)", -33532275,2020-04-20 20:00:08.815,Thanos has few alerting rules configured,"https://github.com/thanos-io/thanos/blob/master/examples/alerts/alerts.md#Ruler has 43 suggested alerts for monitoring a Thanos setup. - -At present, we have 4 alerts configured for our cluster. - -We should consider adding the others. - -For example, [one of the suggested alerts](https://thanos-query.ops.gitlab.net/graph?g0.range_input=1d&g0.max_source_resolution=0s&g0.expr=clamp_max(%0A%20%20%20%20time()%20-%20%20max%20by%20(job%2C%20rule_group)%20(prometheus_rule_group_last_evaluation_timestamp_seconds%7Bjob%3D~%22thanos.*%22%7D)%0A%20%20%20%20%3E%0A%20%20%20%2010%20*%20max%20by%20(job%2C%20rule_group)%20(prometheus_rule_group_interval_seconds%7Bjob%3D~%22thanos.*%22%7D)%2C%201)&g0.tab=0&g1.range_input=1h&g1.max_source_resolution=0s&g1.expr=topk(5%2C%20prometheus_rule_group_last_evaluation_timestamp_seconds%7Bjob%3D~%22thanos.*%22%7D)&g1.tab=1) would point to frequent rule group failures on Thanos Ruler nodes. - -cc @bjk-gitlab @AnthonySandoval",2.0 -33529835,2020-04-20 18:22:46.417,"Investigate dips in redis-cache latency apdex, part 2","We previously investigated apdex spikiness as part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9420. - -During that investigation we found apdex dips to correlate with CPU saturation. The CPU saturation has been addressed in two ways: - -* Upgraded to C2 instance type https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1871 -* Applied rate-limit for user producing traffic bursts https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1906 - -This dropped the baseline CPU utilization from 80% to 35% and it is now much smoother. That class of apdex dips no longer occurs. - -However, it appears that we have a new class of latency apdex dips that does not correlate with any CPU saturation. - -From the [redis-cache overview dashboard](https://dashboards.gitlab.net/d/redis-cache-main/redis-cache-overview): - -![Screenshot_2020-04-20_at_20.06.21](/uploads/e55b9c4fb1dedd80dc5c19fd5c777ab2/Screenshot_2020-04-20_at_20.06.21.png) - -These events are less frequent than they were previously, but the dips have a higher amplitude, which gets amplified when looking at apdex over a longer period of time. 
- -Here is 30 days: - -![Screenshot_2020-04-20_at_20.07.19](/uploads/778fb99e275a67b82f9dbfcf70222092/Screenshot_2020-04-20_at_20.07.19.png) - -![Screenshot_2020-04-20_at_20.07.30](/uploads/d49af10a1432d8546ae78696c3736c8e/Screenshot_2020-04-20_at_20.07.30.png) - -Please note that this dashboard uses a `min_over_time` aggregation that amplifies lower dips -- it makes things look worse than they actually are when looking at a longer time frame. - -The change in behaviour appears to correlate with the exact date that we upgraded to C2 instances, March 31 (2020-03-31). - -This change included: - -* Change in underlying instance type from `n1-highmem-16` to `c2-standard-30` -* Change in number of CPUs from `16` to `30` -* Change in kernel version from `4.15.0-1036-gcp` to `4.15.0-1058-gcp` - -It is not yet proven that this is a server-side issue. Since apdex is measured on the client-side (`gitlab_cache_operation_duration_seconds_bucket`), it is also possible that something is happening on the client. - -Some possible next steps: - -* Gather a CPU profile of the redis process with perf to validate the claim that there is no CPU burst during the event. -* Gather data on other processes running on the redis host during the time of the event, to validate that no other process is contributing. -* Undo some of the variables that changed, starting with the instance type: change one of the redis hosts back to a `c2-standard-30`.",1.0 -33518307,2020-04-20 14:03:20.149,Provision new HDD-based Gitaly node in staging,"As part of migrating archived repositories to HDD-based storage, we need to test this process in staging and need to have a HDD-based Gitaly node provisioned.",3.0 -33517848,2020-04-20 13:52:00.603,Rename the `alerts` Slack channel to `feed_alerts`,"as per: https://gitlab.com/gitlab-com/support/support-team-meta/-/issues/2244 , the alerts channel needs to be renamed - -this can be done by adjusting the alertmanager config, the actual names are configured in the default attributes for the alertmanager cookbook: https://gitlab.com/gitlab-cookbooks/gitlab-alertmanager/-/blob/master/attributes/default.rb#L26-33 and are partially overwritten in the chef role: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/blob/master/roles/gprd-alertmanager.json#L22-28 - -- [ ] try using channel IDs instead of names in AM config",1.0 -33514849,2020-04-20 12:46:58.296,certificate for log.gitlab.net expires on 11.05.2020,"SSLMate Certificate for log.gitlab.net Expiring in 27 Days - -(see related email for more details) - -Recently expired: - -* [x] `next.staging.gitlab.com` (2020-02-22) -* [x] `monkey.gitlab.net` (2020-02-27) -* [x] `*.pre.gitlab.net` (2020-03-05) - -Upcoming expiries: - -* [x] `staging.gitlab.com` (2020-05-01) -* [x] ~~`*.githost.io` (2020-05-03)~~ <- set not to auto-renew -* [x] `*.testbed.gitlab.net` (2020-05-03) -* [x] `log.gitlab.net` (2020-05-11) -* [x] `*.gprd.gitlab.net` (2020-05-14) -* [x] `*.gstg.gitlab.net` (2020-05-14) -* [x] `performance-lb.gitlab.net` (2020-05-17) -* [x] `dashboards.gitlab.net` (2020-05-24) - -cc @AnthonySandoval @igorwwwwwwwwwwwwwwwwwwww",2.0 -33509653,2020-04-20 10:39:30.693,Rollout Thanos 0.12,"Thanos 0.12 has been released with a number of memory and other performance improvements. 
- -https://github.com/thanos-io/thanos/releases/tag/v0.12.0",2.0 -33509628,2020-04-20 10:38:47.339,Points to implement on the postgresql upgrade ansible playbook,"We need support from SRE to implement in the ansible-playbook from the upgrade: - -- Execution of the MRs to apply the changes in chef. -- Make a snapshot from the database that could be used in a rollback scenario.",4.0 -33498871,2020-04-20 06:14:41.119,Install postgres debug symbols on hosts running postgres,"To aid in profiling postgres stacks, add the debug symbol package along with the main postgres package. This should apply to our patroni-managed hosts as well as other hosts running the `postgresql-` package from the postgres apt repo. The postgres binary is stripped of most symbols; adding the `postgresql--dbg` package will make profilers like `perf` more useful. - -```shell -$ apt-cache search 'postgresql-9.6.*dbg' -postgresql-9.6-dbg - debug symbols for postgresql-9.6 -... -```",1.0 -33430869,2020-04-17 14:52:03.097,Create new gitaly storage shard node to replace `nfs-file45`,"Gitaly storage shard `nfs-file45` (`file-45-stor-gprd.c.gitlab-production.internal`) is at `66.42%` usage as of `2020-04-17`. - -At `78.22%` as of `2020-05-07`. - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file45`. - -There are currently 7 gitaly shard nodes configured to accept new projects. Maintaining at least this level of availability is important to avoid any shards filling up too quickly. - -To remove a single node from the new projects storage rotation cluster, and also prevent usage acceleration, a new gitaly node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. - -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",4.0 -33430844,2020-04-17 14:51:26.935,Create new gitaly storage shard node to replace `nfs-file42`,"Gitaly storage shard `nfs-file42` (`file-42-stor-gprd.c.gitlab-production.internal`) is at `65.16%` usage as of `2020-04-17`. - -At `77.22%` as of `2020-05-07`. - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file42`. - -There are currently 7 gitaly shard nodes configured to accept new projects. Maintaining at least this level of availability is important to avoid any shards filling up too quickly. - -To remove a single node from the new projects storage rotation cluster, and also prevent usage acceleration, a new gitaly node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. - -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",4.0 -33430820,2020-04-17 14:50:42.426,Create new gitaly storage shard node to replace `nfs-file41`,"Gitaly storage shard `nfs-file41` (`file-41-stor-gprd.c.gitlab-production.internal`) is at `67.30%` usage as of 2020-04-17 and `73.51%` as of 2020-04-28. 
- -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file41`. - -There are currently 7 gitaly shard nodes configured to accept new projects. Maintaining at least this level of availability is important to avoid any shards filling up too quickly. - -To remove a single node from the new projects storage rotation cluster, and also prevent usage acceleration, a new gitaly node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. - -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",4.0 -33420813,2020-04-17 09:55:57.784,Increase login session duration for Sentry,"Sentry seems to be logging out after around 15 minutes, which feels a bit excessive. - -Can we get this increased to something more acceptable?",2.0 -33419602,2020-04-17 09:21:15.200,Review of deadmans snitch infrastructure,"Looking at our deadmans snitch infrastructure, as part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9694#note_326029699, and wondering whether it’s working as expected? - -* The dashboard referenced from the runbook seems pretty vacant: https://dashboards.gitlab.com/d/_FOpntlmz/deadman-switches?orgId=1&var-interval=15m&var-interval=30m&var-interval=6h&var-interval=1d&var-interval=1d6h&var-environment=gprd - -* Deadmans snitch only seems to be integrated with Slack, not Pagerduty (this differs from the runbook) - -* Most of the alerts configured in Alertmanager do not receive any data: https://gitlab.com/gitlab-com/runbooks/blob/master/rules/deadman-switch-alerts.yml#L149, although the [`SnitchHeartBeat`](https://thanos-query.ops.gitlab.net/graph?g0.range_input=12h&g0.max_source_resolution=0s&g0.expr=ALERTS%7Balertname%3D%22SnitchHeartBeat%22%7D&g0.tab=0) does appear to be firing. - * Should we have one snitch per alertmanager, rather than one per environment? - -* `gprd alertmanager` has not been pinged in 9 months (`ops alertmanager` is active however) - -My biggest concern is that, if alertmanager stops sending alerts, at present it seems we will get a slack message to #alerts and not much else - certainly no pagerduty alerts afaics. Am I missing something here? - -![image](/uploads/2d425e236ebe21d3bb51214260113caa/image.png) - - -## Expectations - -* [x] If snitches are not getting pinged, we should be alerted through pagerduty, within a few minutes. -* [x] Are the alerts in `deadman-switch-alerts.yml` redundant or actually useful? -* [x] Is https://gitlab.com/gitlab-com/runbooks/blob/master/docs/uncategorized/deadman-switches.md up to date. - -cc @AnthonySandoval",2.0 -33401295,2020-04-16 21:07:44.375,[logging] Shard failures trying to search `json.tag: rails.application` in `pubsub-rails-inf-gprd`,"The content of that tag is not structured, so a text search is necessary. - -There are currently a lot of failures on queries like https://log.gprd.gitlab.net/goto/df7102d612189db2fb95d894becdcdd4. 
- -``` -Type - query_shard_exception -Reason - failed to create query: { ""bool"" : { ""filter"" : [ { ""multi_match"" : { ""query"" : ""Successful Login"", ""fields"" : [ ], ""type"" : ""phrase"", ""operator"" : ""OR"", ""slop"" : 0, ""prefix_length"" : 0, ""max_expansions"" : 50, ""lenient"" : true, ""zero_terms_query"" : ""NONE"", ""auto_generate_synonyms_phrase_query"" : true, ""fuzzy_transpositions"" : true, ""boost"" : 1.0 } }, { ""match_phrase"" : { ""json.tag.keyword"" : { ""query"" : ""rails.application"", ""slop"" : 0, ""zero_terms_query"" : ""NONE"", ""boost"" : 1.0 } } }, { ""range"" : { ""json.time"" : { ""from"" : null, ""to"" : null, ""include_lower"" : true, ""include_upper"" : true, ""boost"" : 1.0 } } } ], ""adjust_pure_negative"" : true, ""boost"" : 1.0 } } -Index uuid - kiWqJuLQRG-a6Rn9iBQMpQ -Index - pubsub-rails-inf-gprd-002303 -Caused by type - illegal_argument_exception -Caused by reason - field expansion matches too many fields, limit: 1024, got: 1579 -``` - -This is for a full 24 hours, but it seems reasonable to be able search in more than a few hours worth of data at a time. - -Can the limit be raised or some other adjustment made to improve this?",5.0 -33308198,2020-04-15 08:28:57.162,Execute testing of PostgreSQL upgrade using a data volume similar to production,"We are executing tests of the PostgreSQL upgrade with the dataset from the staging database. - -It is required to execute the test with the data set from production, to evaluate the time consumed for the process. - -Please consider the following steps to execute a consistent GCP snapshot in a read-only replica : - - - connect to PostgreSQL and execute the command `select pg_start_backup();` - - - Execute the GCP snapshot. - - - In PostgreSQL execute the `select pg_stop_backup();` - -We need to attach the snapshot in the following hosts : - - * patroni-migrate-01-db-gstg.c.gitlab-staging-1.internal - - * patroni-migrate-02-db-gstg.c.gitlab-staging-1.internal - -After Ongres checks and executes the setup of the environment, we should test the playbook to execute the PostgreSQL upgrade.",2.0 -33287406,2020-04-14 20:49:43.098,Incident Practice for Support CMOC - EMEA,"Basic summary - -This is meant to be a simple problem to solve and a table top scenario. -First, we are testing incident response and basic group of host interactions. - -Practice environment location: Staging? - -Practice - [test env for status.io](https://app.status.io/statuspage/5bedc0c2a394fc04c9ccc974) - -Scenario: -service stop haproxy on all LB - current status no haproxy running (front door is closed) - -Incident Start: -EOC - execute command(s) to stop LB in gstg - -Start of incident handling -1. EOC use `/incident declare` in `#incident management` -2. Skipping checking pages - we'll verify those through other tests when CMOC rotation is set. - -Validate: -1. [ ] ~~EOC page~~ -2. [ ] ~~IMOC page~~ -3. [ ] ~~CMOC page~~ -4. [x] Creation of incident gdoc -5. [x] Creation of incident issue -6. [x] CMOC / IMOC can find incident issue - -Once manager and cmoc join -1. IMOC/CMOC - talk through any comments to understand the issue -1. cmoc log into status.io and talk through what they would do to create incident [Link to tests status page](https://app.status.io/statuspage/5bedc0c2a394fc04c9ccc974) -1. cmoc - talk through your update to status.io - -Resolution actions -1. EOC - talk through actions you would do to get load balancers restarted -1. EOC/ Manager - talk about how to escalate to engineer on call -1. 
Verify incident is resolved -1. cmoc - confirm resolution and talk through status.io update -1. Follow up with action items -1. Create Incident Review issue -1. How to escalate action items to infradev.",5.0 -33261764,2020-04-14 11:28:14.780,Increase prometheus query.max-samples,"Owing to https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1780, may queries need to be run directly against Prometheus. Running against Thanos will result in counter reset bugs, leading to incorrect results. - -However, when querying Prometheus directly, we will frequently hit the `query.max-samples` limit, which is set at 10m samples. - -This means we can neither run these queries in Prometheus or Thanos. - -![image](/uploads/2bc931db288a4da8febcfd72f136248d/image.png) - - -## Proposal: Increase `query.max-samples` back to the default value of 50m - -Currently we limit queries to roughly 150MB of sample data (assuming 16 bytes per sample). - -The Prometheus default is 50m items or 760MB per query. - -Our prometheus fleet uses about 50% of the available memory, so this seems like a reasonable change to make, at least until the Thanos reset bug is addressed - -![image](/uploads/a8e0284bc3a367147250c6b594781a94/image.png) - -https://dashboards.gitlab.net/d/monitoring-main/monitoring-overview?orgId=1&from=now-12h&to=now&fullscreen&panelId=53 - -cc @bjk-gitlab @AnthonySandoval",1.0 -33260605,2020-04-14 10:52:36.410,PoC Redis Cluster as a potential infra side improvement of scalability of Redis,"part of: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/80 - -GDK connected to redis-cluster running in minikube: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/redis-cluster-poc",3.0 -33138978,2020-04-10 12:06:19.974,Improve stackdriver exporter metrics,"We currently collect a lot of stackdriver metrics. - -| header | header | -| ------ | ------ | -| ci-prd | 19959 | -| gprd | 97653 | -| gstg | 28759 | -| ops | 10732 | -| pre | 6274 | -| testbed | 92 | - -This leads to a number of problems -* Large storage needs on the metrics systems. -* Slow scrape times, timeouts. -* Lost scrapes. -* Lots of stackdriver API traffic. -* Duplication of metric data between stackdriver and other monitoring components. - -For example, we pull the entire `compute.googleapis.com` metrics subsystem. This is 40% of the metrics and has a lot of overlap with `the node_exporter` data. - -Proposed todo: -* [x] Filter out metrics we're not using by using more specific metric filters. (ie `compute.googleapis.com/nat`) -* [ ] Split scrape job into more granular instances.",3.0 -33106785,2020-04-09 15:32:18.578,Export version database for loading into warehouse,Runbook https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/cloudsql-data-export.md,2.0 -33100615,2020-04-09 13:43:29.865,Create new gitaly storage shard node to replace `nfs-file44`,"Gitaly storage shard `nfs-file44` (`file-44-stor-gprd.c.gitlab-production.internal`) is currently at `66.06%` usage (`68.26%` as of 2020-04-14). - -A new gitaly node should be created, and added to the list of shards configured to be included in consideration for storing new project repositories. - -That way, `nfs-file44` can be removed from rotation without any concern that the node's removal from the configuration will put any additional burden on the remaining nodes. 
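-
-For illustration only (not the change procedure itself), the shards currently eligible for new projects can be listed via the admin application settings API; the token is a placeholder and the jq filter assumes the repository_storages field:
-
-```shell
-# List the repository storages configured to receive new projects.
-curl -s --header 'PRIVATE-TOKEN: <admin-token>' \
-  'https://gitlab.com/api/v4/application/settings' | jq '.repository_storages'
-```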
- -It is important to avoid acceleration of usage growth on the remaining nodes accepting new repositories.",3.0 -33098053,2020-04-09 12:51:52.811,chef broken on postgres-dr-delayed-01-db-gprd,"``` -# -```",3.0 -33088089,2020-04-09 08:44:13.902,Enable JavaScript source fetching for Sentry projects customersgitlabcom and customersstggitlabcom,"This issue is to re-enable that setting, as we can't see the source for how the errors are being triggered. - -We need this setting to be enabled in both projects: - -* https://sentry.gitlab.net/settings/gitlab/projects/customersstggitlabcom/ -* https://sentry.gitlab.net/settings/gitlab/projects/customersgitlabcom/",1.0 -32963394,2020-04-06 22:53:23.646,Add chef-client-is-enabled script to the chef-client-disabler,"Per [discussion here](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1898#note_318576770), in addition to intercepting calls to chef-client when it's disabled, we would also like to be able to easily check whether or not it is disabled. - -Add a `chef-client-is-enabled` script to the `gitlab-server::chef-client-disabler` recipe. Expected behavior: -* If chef-client is enabled, this script will say so and exit 0. -* If chef-client is disabled, this script will exit non-zero and show when, by whom, and for what reason (i.e. same output as though chef-client had been run and intercepted).",1.0 -32693292,2020-03-31 17:36:30.969,Fix permissions on patroni.yml since it contains credentials,"The patroni.yml file is world-readable, but it contains postgres superuser credentials. - -```shell -msmiley@patroni-01-db-gprd.c.gitlab-production.internal:~$ ls -l /var/opt/gitlab/patroni/patroni.yml --rw-r--r-- 1 gitlab-psql gitlab-psql 3752 Mar 30 06:48 /var/opt/gitlab/patroni/patroni.yml -``` - -Presumably only the patroni daemon needs to read this file, and the file owner is the same Unix account who runs the patroni daemon: - -```shell -msmiley@patroni-01-db-gprd.c.gitlab-production.internal:~$ pgrep 'patroni' | xargs -r ps -o uid,user,args - UID USER COMMAND - 1071 gitlab-+ /opt/patroni/bin/python /opt/patroni/bin/patroni /var/opt/gitlab/patroni/patroni.yml - -msmiley@patroni-01-db-gprd.c.gitlab-production.internal:~$ id 1071 -uid=1071(gitlab-psql) gid=1071(gitlab-psql) groups=1071(gitlab-psql) -``` - -Therefore, it should be safe to change this file to be read/write only for its owner (i.e. chmod from `0644` to `0600`).",1.0 -32637793,2020-03-30 15:20:30.916,Chef broken because download of origin-pull-ca.pem from cloudflare is broken,"Chef-client is broken on 46 gprd nodes because https://support.cloudflare.com/hc/en_us/article_attachments/360044928032/origin-pull-ca.pem can't be downloaded by chef anymore - it seems to be captcha protected, which must be new. - -https://thanos-query.ops.gitlab.net/graph?g0.range_input=2h&g0.max_source_resolution=0s&g0.expr=sum(round(avg_over_time(chef_client_error%7Benv%3D%22gprd%22%7D%5B5m%5D)))&g0.tab=0",3.0 -32632884,2020-03-30 13:38:11.645,jobs.gitlab.com cert expired unnoticed on 2020-03-28,"The jobs.gitlab.com cert expired on 2020-03-28 without us noticing. - -We got informed by @joernchen at 2020-03-30 11:42 UTC. (https://gitlab.slack.com/archives/CB3LSMEJV/p1585568555313400) - -We replaced the cert with a new one at 12:45 UTC by updating the chef vault. - -We got an expiry warning email from SSLMate on 2020-01-28. - -Questions: -- Why didn't we notice? 
What's the process?",3.0 -32579198,2020-03-28 21:00:08.450,Mark all certificate resources as `sensitive: true`,"Mark all certificate resources as `sensitive: true`. - -- https://docs.chef.io/resources/#common-functionality-properties -- https://docs.chef.io/custom_resources/#sensitive",3.0 -29678381,2020-01-16 16:15:59.096,setup secondary database with the flags in patroni STAGING,"In staging add to one of the secondary databases to do not become primary, also, add the flag to do not receive traffic, this is a config in the patroni nodes. - -``` -tags: - nofailover: true - noloadbalance: true -```",2.0 -29670681,2020-01-16 13:23:51.663,Get elastic cluster operation metrics into prometheus,"As Elastic Cloud seems to not provide a way to get time series data for internal cluster metrics (we only can get the current state for things like used storage from it’s API) and the limited possibilities of the Elastic monitoring cluster to get a view at the operation of the production cluster aren't really satisfying, we should consider to setup something like https://github.com/justwatchcom/elasticsearch_exporter to get Elastic cluster metrics into Prometheus. - -This also would make alerting easier and more standard compared to creating watches within Elastic and we could have dashboards in Grafana.",8.0 -29611372,2020-01-14 23:39:14.101,Add TXT record for domain verification - Drift,"We are setting up Drift for SSO in Okta, and to enable this we need to perform domain verification. - -Can we add the following TXT record to gitlab.com domain - - -drift-domain-verification=fa583cfff88c496bcc62651057550656a98ab3e689c314255a1a6ae848e3e56d - -Further information about this request is available in https://gitlab.com/gitlab-com/marketing/marketing-operations/issues/1193",1.0 -29544814,2020-01-13 14:12:59.803,Upgrade DR haproxy to 1.8,"staging, prod and preprod are using haproxy 1.8 and Ubuntu 18.04.2 LTS for the front-end lbs. It would be nice if DR did the same so we could unify configuration, for example using the `hard-stop-after` option in the config. - -This means we have an exception in https://gitlab.com/gitlab-com/runbooks/merge_requests/1776 for the DR environment, which isn't ideal.",3.0 -29543252,2020-01-13 13:26:56.271,Research our log volume and rate,In order to calculate the proper size and costs of our elastic logging clusters we need to research our current log volume and rate. We should have data per index and environment to also make a decision on which indexes we might want to exclude from elastic.,5.0 -29496485,2020-01-10 21:32:23.292,Thanos processes cannot successfully restart while another thanos is uploading to bucket storage,"This issue documents findings regarding one form of thanos crash loop. Action items: -* [ ] Add something like the following notes to the [troubleshooting/prometheus-is-down.md runbook](https://gitlab.com/gitlab-com/runbooks/blob/master/troubleshooting/prometheus-is-down.md), which is referenced by the alert that detects frequent thanos restarts. -* [ ] File a bug report with the thanos community. Potentially also work on a patch, if time allows. - ----- - -### Problem statement - -Thanos appears to have a race condition, such that whenever a thanos instance is uploading a new block directory to bucket storage, if any other thanos instance restarts for any reason, that instance will fail to startup until the upload completes. - -This race condition is a thanos bug. Automatic restart of the thanos service is the best mitigation we have until this bug is fixed. 
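-
-When this crash loop is suspected, the block named in the error can be inspected directly in the bucket; a rough sketch (the bucket name is a placeholder, the block ID is the one from the example log excerpt below):
-
-```shell
-# A healthy, fully uploaded block contains chunks/, index and meta.json.
-# A directory without meta.json while an upload is still in flight matches the race described here.
-gsutil ls 'gs://<thanos-metrics-bucket>/01DY7VTGX0PWPHWWXFNZB6NA17/'
-```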
- -Special case: If a thanos upload to the bucket aborts uncleanly, it could leave an incomplete directory in the bucket, which would cause this crash-loop to continue indefinitely. Fixing that would require manually deleting the incomplete block's directory from the object storage bucket. - -### Background - -Part of Thanos' job is to use an object storage bucket as the primary place to store metrics data (and use the local filesystem as a persistent cache for that data). Recently we have noticed the `thanos store` process temporarily entering a crash-loop due to an unhandled exception when scanning that bucket for new `meta.json` files. - -Example log excerpt: - -``` -{ - ""ts"":""2020-01-10T17:35:29.319245979Z"", - ""msg"":""syncing blocks failed"", - ""err"":""iter: load meta: download meta.json: get file 01DY7VTGX0PWPHWWXFNZB6NA17/meta.json: storage: object doesn't exist"" -} -``` - -This log message indicates that the object storage bucket has a ""block directory"" named `01DY7VTGX0PWPHWWXFNZB6NA17` that did not (yet) contain a `meta.json` file. In this example, that missing `meta.json` file was created 30 minutes later, when `thanos compact` finished uploading data to that block's `chunks` subdirectory. - -In general, when a new ""block"" (directory) is created in the object storage bucket, that dir's `meta.json` file is created as the last step after uploading all the data files and index file for that block. - -Our current version of thanos (`v0.9.0`, the latest release) treats the absence of a `meta.json` file from a block directory as an error. If thanos is starting up, it's a fatal error (hence the crash loop). In contrast, if thanos is just doing one of its periodic refreshes of its catalog of bucket contents, then the error is treated as a non-fatal warning. - -It can take many minutes between creating a new directory and creating its `meta.json` file, depending on the amount of data to upload and the available network bandwidth. This is the main window of opportunity for thanos to hit this race condition and enter a crash loop. - -### Potential solution - -Thanos calls `SyncBlocks` on startup and again periodically. Only the first call (via the `InitialSync` method) treats the absence of a `meta.json` file as fatal. In contrast, the periodic calls to `SyncBlocks` just logs a warning and stops processing the bucket's remaining dirs. - -If `SyncBlocks` skipped dirs that exist but do not yet have a `meta.json` file, this would: -* Prevent `InitialSync` from failing and causing the thanos process to exit shortly after startup (i.e. avoids the crash loop). -* Allow the periodic `SyncBlocks` calls to scan all viable bucket dirs, instead of aborting prematurely. This should result in fresh data being more reliably available even during slow compaction runs.",2.0 -29484939,2020-01-10 14:00:18.483,Followup: Remove `gcs-` prefix from review apps,"After https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8203, remove the transitional `gcs-` prefix from review apps. - -After doing so, existing MRs with review apps still being server from about-src's file storage will have to be rebased from the latest www-gitlab-com master. 
- -- [x] cookbook change: https://gitlab.com/gitlab-cookbooks/cookbook-about-gitlab-com/merge_requests/78 -- [x] chef-repo cookbook bump: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/2443 -- [x] www-gitlab-com change: https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/38161",2.0 -29437383,2020-01-09 10:22:44.385,Update database diagrams,"Update the rolled out changes to pgbouncer to the database diagrams. -Focus on the changes on the pgbouncers-RW.",2.0 -29383410,2020-01-07 19:56:34.421,Create a new file module for nodes served via praefect,"This will allow us to differentiate between file storages accessed directly by rails and those accessed via praefect. - -- [x] gitlab-com-infrastructure changes: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1279 -- [x] chef-repo changes: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/2421",2.0 -29285464,2020-01-03 21:56:40.815,Grant SELECT access to the `analytics` user in gitlab_dotcom db,"I'm adding a newly created table to our data team ETL and I'm looking for somebody to grant select access to the `analytics` user for all public tables in the gitlab_dotcom db. -* For context, this is my MR where I'm blocked: https://gitlab.com/gitlab-data/analytics/merge_requests/2098 -* MR for last time this happened: https://gitlab.com/gitlab-data/analytics/merge_requests/1522#note_212269236 - -* Other relevant issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7769",1.0 -29277728,2020-01-03 14:53:08.647,Incident Review: 2019-12-27 Spammers causing large mailers Sidekiq queue," - -Incidents: -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1483 (related RCA: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8694) -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1491 -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1493 -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1494 -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1532 - -## Summary - -We experienced several incidents where spam attacks caused large `mailers` sidekiq queues. - - - -- Service(s) affected : ~""Service::Sidekiq"" ~""queue::mailers"" -- Team attribution : -- Minutes downtime or degradation : - -We do not have a set SLA for mail queue - for purposes here: using greater than 10 minute latency per job from: -https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1&from=1577415600000&to=1577448000000&fullscreen&panelId=14 - -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1483 - 02:25-04:45 - 140 minutes -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1491 - 17:00-23:19 - 379 minutes -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1493 - 11:00-12:07 - 67 minutes -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1494 - 04:00-06:36 - 156 minutes -* https://gitlab.com/gitlab-com/gl-infra/production/issues/1532 - 03:35-07:13 - 218 minutes - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. 
preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -Spam campaigns - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -- We should make it harder to create new accounts for spam campaigns. -- We should have stricter limits for issue/notes creation. -- While rate limits are a very good thing to have, we need to be careful with the collateral effects for existing use-cases when enabling them. -- We should have better tooling for cleaning up the sidekiq queue that makes it easier and safe to execute (e.g. preventing to overload Redis). - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. 
- -* [ ] make it harder to create bogus accounts for abusive operations -* [ ] rate limit issue creation (https://gitlab.com/gitlab-org/gitlab/issues/55241) -* [ ] make it easier to identify (and kill) bad sidekiq jobs (https://gitlab.com/gitlab-com/gl-infra/scalability/issues/9 for identifying) -* [ ] add troubleshooting runbook for dealing with issue spam (disabling mail sending, cleaning up queue/issues, blocking spammers) -* [ ] prevent overloading redis master (affecting other queues) when cleaning up the `mailers` queue -* [ ] make it easier to stop sending out mails - * [ ] consider moving the `mailers` queue into a separate cluster (https://gitlab.com/gitlab-com/gl-infra/delivery/issues/611 for testing this idea) -* [ ] improve protection against spam campaigns: (discussion: https://gitlab.com/gitlab-org/gitlab/issues/103325) - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -29240913,2020-01-02 15:28:16.739,Fix sidekiq log index mapping,Since 12/30 we get many Pubsubbeat warning alerts on pubsub-duplicate-sidekiq-inf-gprd.c.gitlab-production.internal: https://prometheus.gprd.gitlab.net/graph?g0.range_input=1w&g0.expr=rate(pubsubbeat_warnings_total%5B1m%5D)%20%3E%201&g0.tab=0 - maybe related to a deployment on that day. We should fix the index mapping.,3.0 -29022082,2019-12-24 03:39:56.210,Gitaly error-rate exceeds SLO in the Canary stage,"The name of this alert: `service_cny_error_ratio_slo_out_of_bounds_upper_5m` - -PagerDuty event: https://gitlab.pagerduty.com/incidents/P01BI1F - -This alert self-resolved after a few minutes, but the [dashboard shows](https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1&from=1577133346141&to=1577154946141&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=cny&var-sigma=2&fullscreen&panelId=4) the canary-stage has briefly spiked above SLO several times today. - -![Screenshot_from_2019-12-23_19-38-18](/uploads/d911065abd6c5684cf972b1addd7ec64/Screenshot_from_2019-12-23_19-38-18.png)",1.0 -28918188,2019-12-19 13:49:33.183,add detailed monitoring to raise alerts in case of network traffic for patroni fleet is lower than average,"Create alerts when the network traffic reaches 70% than the average. - -Evaluate if 70% is the correct value to detect metrics. - -This alert should raise a severity 2.",2.0 -28866342,2019-12-18 09:44:22.479,usage of consul.checks for patroni,"Investigate the usage consul.checks:\[\] at patroni config to be more resilient to short network glitches. - -With this patroni will ensure use the values from ttl and retry_timeouts and loop_wait instead of the serfcheck. - -We need to proceed with some tests in staging.",4.0 -28862059,2019-12-18 08:03:33.712,Gitter beta DNS change,"Gitter beta environment has recently changed. We created a new ELB and ASG for our webapp servers https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8687. - -We'd like to point the DNS A record for - -`beta.gitter.im` to `beta-webapp-elb-1739286740.us-east-1.elb.amazonaws.com`. - -(I'm not sure how that is usually done outside of one AWS account). - -### Additional notes: -The `beta-webapp-elb` ELB is in Gitter's AWS account. 
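-
-Whoever applies the change can verify it with a plain lookup; a sketch using dig, with both names taken from this issue:
-
-```shell
-# What beta.gitter.im resolves to today; after the change it should follow the ELB.
-dig +short beta.gitter.im
-
-# The addresses currently behind the new ELB, for comparison.
-dig +short beta-webapp-elb-1739286740.us-east-1.elb.amazonaws.com
-```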
- -Right now, the `beta.gitter.im` DNS record points directly to one of our EC2 instances [gitter-beta-01](https://console.aws.amazon.com/ec2/home?region=us-east-1#Instances:search=i-007a3fd34e0535967;sort=tag:Name)`54.226.100.160`",1.0 -28838617,2019-12-17 19:14:15.780,Remove residual firewall rule definitions that would re-apply on next reboot,"Recently some iptables DROP rules were unintentionally applied to Patroni hosts, causing connectivity loss to the database servers ([incident](https://gitlab.com/gitlab-com/gl-infra/production/issues/1421) and [review](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8528)). The immediate recovery steps included unassigning Chef role `gitlab-iptables` and manually remove iptables rules from the affected hosts (especially the Patroni hosts). - -That left some residual unmanaged configuration on those hosts. As part of an incident follow-up corrective action, that residual config was [discovered and found to probably be dangerous](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8600#note_261924370). - -The ""gitlab-iptables"" cookbook's ""default"" recipe uses cookbook ""iptables-ng"" to define iptables rules. That ""iptables-ng"" cookbook installs deb package ""iptables-persistent"" (and implicitly also the ""netfilter-persistent"" deb package), whose job is to persist those iptables rules across reboots. - -The ""iptables-ng"" cookbook stores its version of the rules in `/etc/iptables.d/[table]/[chain]/[named_rule]`, but as far as I know, those are only used by chef-client to support that cookbook. So those files by themselves may be harmless. - -In contrast, the contents of `/etc/iptables/rules.v4` and `rules.v6` are actively loaded during reboot by the ""netfilter-persistent"" systemd service. - -For all hosts that unintentionally had the ""gitlab-iptables"" cookbook applied, if that cookbook has been removed and not reapplied, we should apply some clean up steps that at least include one of the following: - -* Option A: Disable systemd unit ""netfilter-persistent.service"" -* Option B: Remove files /etc/iptables/rules.* -* Option C: Remove deb package ""iptables-persistent"" and optionally also ""netfilter-persistent""",1.0 -28838043,2019-12-17 18:45:39.584,Many `PagesDomainSslRenewalWorker` sidekiq exceptions,"**Queue: `pages_domain_ssl_renewal`** - -Today I noticed that there have been many instances of `PagesDomainSslRenewalWorker` sidekiq exceptions lately. - -Here is the grafana chart for the past 36 hours: - -![Screen_Shot_2019-12-17_at_12.15.10_PM](/uploads/2c8c58290c31acb6146fa41a9bc1b118/Screen_Shot_2019-12-17_at_12.15.10_PM.png) - -Here are the logs for the past 7 days, with 320,716 hits, about 45,000 errors per day. - -![Screen_Shot_2019-12-17_at_12.42.37_PM](/uploads/65c261c0ddbcefa65d914e44dae94056/Screen_Shot_2019-12-17_at_12.42.37_PM.png) - -The errors in the logs tend to be one of the following: - -``` -Acme::Client::Error::RateLimited: Rate limit for '/acme' reached - -Acme::Client::Error::Timeout - -Faraday::ConnectionFailed: Net::OpenTimeout - -Faraday::ConnectionFailed: Connection reset by peer - SSL_connect - -Acme::Client::Error::BadNonce: JWS has no anti-replay nonce - -Acme::Client::Error::UnsupportedOperation: Directory at https://acme-v02.api.letsencrypt.org/directory does not include `new_account` -```",3.0 -28833806,2019-12-17 16:14:08.259,check replication on patroni-10,"please check the status the streaming replication on patroni-10. 
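-
-For example (a sketch; the patronictl path assumes the same /opt/patroni virtualenv and config file used by the daemon, while the psql invocation is schematic and assumes superuser access on the current leader):
-
-```shell
-# Cluster overview, including per-member replication lag.
-/opt/patroni/bin/patronictl -c /var/opt/gitlab/patroni/patroni.yml list
-
-# On the leader: streaming state and replay lag per replica (look for the patroni-10 row),
-# and the state of the replication slots.
-sudo -u gitlab-psql psql -d postgres -c \
-  'SELECT application_name, state, sync_state, pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS lag_bytes FROM pg_stat_replication;'
-sudo -u gitlab-psql psql -d postgres -c \
-  'SELECT slot_name, active, restart_lsn FROM pg_replication_slots;'
-```
-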
-we need to have this replica up to date and without receiving traffic and not being able to become a primary of the cluster. - -Please evaluate if we need to create alerts for nodes that are not replicating properly. - -Please check the replication slot of this node.",2.0 -28823296,2019-12-17 11:24:42.100,Evaluate Elastic APM as distributed tracing solution,"Elastic APM has an OpenTracing bridge and thus could be an easy plugin solution to send tracing data to (https://gitlab.com/gitlab-org/labkit/merge_requests/40). We should evaluate if we can use it for distributed tracing of GitLab.com. - -* technical requirements -* are there missing features (e.g. which features of OpenTracing are supported/missing)? -* correlation with logs? -* estimate data volume -* cost factor",5.0 -28779954,2019-12-16 17:27:20.723,cookbook-license-gitlab-com needs the same fixes for deploy that cookbook-customers-gitlab-com needed,Review the license cookbook and incorporate customers changes that broke when ruby was updated.,1.0 -28770014,2019-12-16 14:03:00.970,Help I.W. as an onboarding buddy,"As an onboarding buddy for I. W., I want to make the process for him as smooth as possible. - -* [x] Introductory Coffee chat -* [x] Make sure he has access to his onboarding issue (https://gitlab.com/gitlab-com/people-group/employment/issues/1678). - * [x] Clarify if new account still needs to be created (https://gitlab.com/gitlab-com/people-group/employment/issues/1678#note_260789516). -* [x] Provide resources - * [x] relevant slack channels -* [x] Schedule follow-up meeting in first week of new year. -* [x] Make sure there is an SRE onboarding issue. -* [x] Help with SRE onboarding -* [ ] See if we can meet in person on a co-working day. -* [ ] Improve onboarding issues / handbook where necessary.",5.0 -28643682,2019-12-13 21:26:51.862,Remove deprecated Digital Ocean instance of design.gitlab.com,"Since `design.gitlab.com` has been moved to a Kubernetes cluster, managed by [the project's Auto Devops configuration](https://gitlab.com/gitlab-org/gitlab-services/design-gitlab-com/), it is time to remove the previous Digital Ocean instance of the application. - -This Droplet is in the `GitLab Production` Project in the Digital Ocean console. - -Once the instance is deleted, the chef configuration should be removed from the chef server.",3.0 -28643658,2019-12-13 21:25:36.350,Remove deprecated AWS instance of version.gitlab.com,"Since `version.gitlab.com` has been moved to a Kubernetes cluster, managed by [the project's Auto Devops configuration](https://gitlab.com/gitlab-services/version-gitlab-com/), it is time to remove the previous AWS instance of the application. - -Once the instance is deleted, the chef configuration should be removed from the chef server. - -The instance is in `us-east-1` and it is named `redash.gitlab.com`",2.0 -28634530,2019-12-13 16:27:40.926,Setup logging for Praefect on gprd,"Most of the work was done when setting logging on gstg (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8142) we just have to create the pubsub host. 
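-
-As a quick check once the Terraform change lands, the topic and subscription the new pubsub host will consume can be confirmed to exist (a sketch; the project ID and the name filter are assumptions):
-
-```shell
-# Look for Praefect-related Pub/Sub topics and subscriptions in the production project.
-gcloud pubsub topics list --project gitlab-production --filter='name:praefect'
-gcloud pubsub subscriptions list --project gitlab-production --filter='name:praefect'
-```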
- -- [x] gitlab-com-infrastructure MR: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1237/",2.0 -28499784,2019-12-13 08:50:29.229,Cloudflare: Fix HAProxy ACLs to also check Cloudflare headers,"From @asaba in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8409#note_259451396: -> The `require-recaptcha.lst` is used here: https://ops.gitlab.net/gitlab-cookbooks/gitlab-haproxy/blob/master/templates/default/haproxy-frontend.cfg.erb#L141. When configured like that, do you know if haproxy will check against additional IP address associated with the request (e.g. the extra X-Forwarded-For from cloudflare)? - -We need to fix all ACLs using `src` to also look at the Cloudflare headers when `src` is a Cloudflare IP.",5.0 -28419151,2019-12-12 16:25:54.796,investigate RCA from production/issues/1476,"## RCA Deep Dive -From https://gitlab.com/gitlab-com/gl-infra/production/issues/1476 - - -### What happened -A network glitch causes quorum loss and trigger a failover routine. As nothing was really wrong with the current leader (`patroni-06`), there was no leader change. - -Patroni behaves as expected, triggering a failover (thus never completing it, since `patroni-06` was detected healthy) - - - -### Evidence - -[Prometheus graph](https://prometheus.gprd.gitlab.net/graph?g0.range_input=1h&g0.end_input=2019-12-12%2008%3A15&g0.expr=rate(node_network_receive_bytes_total%7Btype%3D%22patroni%22%2Cdevice%3D%22ens4%22%7D%5B1m%5D)&g0.tab=0) showing network interruption: -![image](/uploads/c521e16497dbc7d8f1ebf0c66a0be432/image.png) - -At `Dec 12 07:31:24` consul report missing contact with the current leader, `patroni-06`, followed by a quick re-join -``` -Dec 12 07:31:24 pgbouncer-02-db-gprd consul[26816]: 2019/12/12 07:31:24 [INFO] serf: EventMemberFailed: patroni-06-db-gprd 10.220.16.106 -Dec 12 07:31:24 pgbouncer-02-db-gprd consul[26816]: serf: EventMemberFailed: patroni-06-db-gprd 10.220.16.106 -Dec 12 07:31:25 pgbouncer-02-db-gprd consul[26816]: 2019/12/12 07:31:25 [INFO] serf: EventMemberJoin: patroni-06-db-gprd 10.220.16.106 -Dec 12 07:31:25 pgbouncer-02-db-gprd consul[26816]: serf: EventMemberJoin: patroni-06-db-gprd 10.220.16.106 -``` - -At `2019-12-12 07:31:26.317 GMT` the same network failure causes an error during walfile upload: -``` -2019-12-12 07:31:26.317 GMT [45589, 0]: [3-1] user=gitlab-replicator,db=[unknown],app=patroni-10-db-gprd.c.gitlab-production.internal,client=10.220.16.110 LOG: disconnection: session time: 461:46:01.162 user=gitlab-replicator database= host=10.220.16.110 port=49464 -wal_e.worker.upload INFO MSG: begin archiving a file - DETAIL: Uploading ""pg_xlog/0000001F0001862000000009"" to ""gs://gitlab-gprd-postgres-backup/pitr-wale-v1/wal_005/0000001F0001862000000009.lzo"". 
- STRUCTURED: time=2019-12-12T07:31:26.981242-00 pid=62869 action=push-wal key=gs://gitlab-gprd-postgres-backup/pitr-wale-v1/wal_005/0000001F0001862000000009.lzo prefix=pitr-wale-v1/ seg=0000001F0001862000000009 state=begin -Traceback (most recent call last): - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1158, in upload_from_file - client, file_obj, content_type, size, num_retries, predefined_acl - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1068, in _do_upload - client, stream, content_type, size, num_retries, predefined_acl - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1015, in _do_resumable_upload - response = upload.transmit_next_chunk(transport) - File ""/opt/wal-e/lib/python3.5/site-packages/google/resumable_media/requests/upload.py"", line 396, in transmit_next_chunk - self._process_response(result, len(payload)) - File ""/opt/wal-e/lib/python3.5/site-packages/google/resumable_media/_upload.py"", line 574, in _process_response - self._get_status_code, callback=self._make_invalid) - File ""/opt/wal-e/lib/python3.5/site-packages/google/resumable_media/_helpers.py"", line 93, in require_status_code - status_code, u'Expected one of', *status_codes) -google.resumable_media.common.InvalidResponse: ('Request failed with status code', 410, 'Expected one of', , 308) - -During handling of the above exception, another exception occurred: - -Traceback (most recent call last): - File ""src/gevent/greenlet.py"", line 766, in gevent._greenlet.Greenlet.run - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/worker/upload.py"", line 59, in __call__ - self.gpg_key_id) - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/worker/worker_util.py"", line 40, in do_lzop_put - k = blobstore.uri_put_file(creds, url, tf) - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/blobstore/gs/utils.py"", line 38, in uri_put_file - blob.upload_from_file(fp, size=size, content_type=content_type) - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1162, in upload_from_file - _raise_from_invalid_response(exc) - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 2038, in _raise_from_invalid_response - raise exceptions.from_http_status(response.status_code, message, response=response) -google.api_core.exceptions.GoogleAPICallError: 410 PUT https://www.googleapis.com/upload/storage/v1/b/gitlab-gprd-postgres-backup/o?uploadType=resumable&upload_id=AEnB2UomUBipRkns50Aj6FpTMxL-d_WXjfMyLkcAj-9bSnwLLeBl47ZHEC2olCg8340SDRyTC2qbYq_bntvY_9gipuG_v-9PBA: ('Request failed with status code', 410, 'Expected one of', , 308) -2019-12-12T07:31:29Z ( failed with GoogleAPICallError - -wal_e.main CRITICAL MSG: An unprocessed exception has avoided all error handling - DETAIL: Traceback (most recent call last): - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1158, in upload_from_file - client, file_obj, content_type, size, num_retries, predefined_acl - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1068, in _do_upload - client, stream, content_type, size, num_retries, predefined_acl - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1015, in _do_resumable_upload - response = upload.transmit_next_chunk(transport) - File ""/opt/wal-e/lib/python3.5/site-packages/google/resumable_media/requests/upload.py"", line 396, in transmit_next_chunk - 
self._process_response(result, len(payload)) - File ""/opt/wal-e/lib/python3.5/site-packages/google/resumable_media/_upload.py"", line 574, in _process_response - self._get_status_code, callback=self._make_invalid) - File ""/opt/wal-e/lib/python3.5/site-packages/google/resumable_media/_helpers.py"", line 93, in require_status_code - status_code, u'Expected one of', *status_codes) - google.resumable_media.common.InvalidResponse: ('Request failed with status code', 410, 'Expected one of', , 308) - - During handling of the above exception, another exception occurred: - - Traceback (most recent call last): - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/cmd.py"", line 666, in main - concurrency=args.pool_size) - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/operator/backup.py"", line 283, in wal_archive - group.join() - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/worker/pg/wal_transfer.py"", line 144, in join - raise val - File ""src/gevent/greenlet.py"", line 766, in gevent._greenlet.Greenlet.run - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/worker/upload.py"", line 59, in __call__ - self.gpg_key_id) - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/worker/worker_util.py"", line 40, in do_lzop_put - k = blobstore.uri_put_file(creds, url, tf) - File ""/opt/wal-e/lib/python3.5/site-packages/wal_e/blobstore/gs/utils.py"", line 38, in uri_put_file - blob.upload_from_file(fp, size=size, content_type=content_type) - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 1162, in upload_from_file - _raise_from_invalid_response(exc) - File ""/opt/wal-e/lib/python3.5/site-packages/google/cloud/storage/blob.py"", line 2038, in _raise_from_invalid_response - raise exceptions.from_http_status(response.status_code, message, response=response) - google.api_core.exceptions.GoogleAPICallError: 410 PUT https://www.googleapis.com/upload/storage/v1/b/gitlab-gprd-postgres-backup/o?uploadType=resumable&upload_id=AEnB2UomUBipRkns50Aj6FpTMxL-d_WXjfMyLkcAj-9bSnwLLeBl47ZHEC2olCg8340SDRyTC2qbYq_bntvY_9gipuG_v-9PBA: ('Request failed with status code', 410, 'Expected one of', , 308) - - STRUCTURED: time=2019-12-12T07:31:29.645893-00 pid=62869 -``` -### Starting the leader election -Next, every replica with at least one peer with the wal position _ahead_ of themselves, downvotes himself: -``` -patroni-01-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:25 patroni-01-db-gprd patroni[58612]: 2019-12-12 07:31:25,249 INFO: Wal position of patroni-09-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-02-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:25 patroni-02-db-gprd patroni[92454]: 2019-12-12 07:31:25,232 INFO: Wal position of patroni-08-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-03-db-gprd.c.gitlab-production.internal/patroni.log.1:2019-12-12_07:31:26 patroni-03-db-gprd patroni[97589]: 2019-12-12 07:31:26,121 INFO: Wal position of patroni-05-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-04-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:26 patroni-04-db-gprd patroni[38752]: 2019-12-12 07:31:26,162 INFO: Wal position of patroni-02-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-05-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:25 patroni-05-db-gprd patroni[24338]: 2019-12-12 07:31:25,228 INFO: Wal position of patroni-07-db-gprd.c.gitlab-production.internal is ahead of my wal position - 
-patroni-07-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:25 patroni-07-db-gprd patroni[2378]: 2019-12-12 07:31:25,276 INFO: Wal position of patroni-03-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-09-db-gprd.c.gitlab-production.internal/patroni.log.1:2019-12-12_07:31:26 patroni-09-db-gprd patroni[59084]: 2019-12-12 07:31:26,153 INFO: Wal position of patroni-03-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-10-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:25 patroni-10-db-gprd patroni[1712]: 2019-12-12 07:31:25,263 INFO: Wal position of patroni-01-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-11-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:25 patroni-11-db-gprd patroni[54181]: 2019-12-12 07:31:25,281 INFO: Wal position of patroni-02-db-gprd.c.gitlab-production.internal is ahead of my wal position - -patroni-12-db-gprd.c.gitlab-production.internal/syslog.1:Dec 12 07:31:26 patroni-12-db-gprd patroni[59289]: 2019-12-12 07:31:26,117 INFO: Wal position of patroni-01-db-gprd.c.gitlab-production.internal is ahead of my wal position - - -``` -Leaving `patroni-08` as the best candidate for new ledear. Logs from `patroni-08`: -``` -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,081 INFO: Got response from patroni-05-db-gprd.c.gitlab-production.internal http://10.220.16.105:8009/patroni: b'{""database_system_identifier"": ""6343687859876602183"", ""role"": ""replica"", ""cluster_unlocked"": true, ""timeline"": 31, ""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}, ""postmaster_start_time"": ""2019-10-29 09:21:50.461 GMT"", ""xlog"": {""replayed_timestamp"": ""2019-12-12 07:31:25.054 GMT"", ""received_location"": 428947133460896, ""replayed_location"": 428947133460896, ""paused"": false}, ""state"": ""running"", ""server_version"": 90614}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,081 INFO: Got response from patroni-06-db-gprd.c.gitlab-production.internal http://10.220.16.106:8009/patroni: b'{""state"": ""running"", ""database_system_identifier"": ""6343687859876602183"", ""timeline"": 31, ""role"": ""master"", ""replication"": [{""state"": ""streaming"", ""client_addr"": ""10.220.16.107"", ""application_name"": ""patroni-07-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.109"", ""application_name"": ""patroni-09-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.111"", ""application_name"": ""patroni-11-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.112"", ""application_name"": ""patroni-12-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.108"", ""application_name"": ""patroni-08-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.103"", ""application_name"": ""patroni-03-db-gprd.c.gitlab-production.internal"", ""usename"": 
""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.104"", ""application_name"": ""patroni-04-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.102"", ""application_name"": ""patroni-02-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.105"", ""application_name"": ""patroni-05-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.110"", ""application_name"": ""patroni-10-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}, {""state"": ""streaming"", ""client_addr"": ""10.220.16.101"", ""application_name"": ""patroni-01-db-gprd.c.gitlab-production.internal"", ""usename"": ""gitlab-replicator"", ""sync_priority"": 0, ""sync_state"": ""async""}], ""server_version"": 90614, ""postmaster_start_time"": ""2019-10-28 16:07:16.645 GMT"", ""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}, ""xlog"": {""location"": 428947133581640}, ""cluster_unlocked"": false}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,084 INFO: Got response from patroni-10-db-gprd.c.gitlab-production.internal http://10.220.16.110:8009/patroni: b'{""postmaster_start_time"": ""2019-11-23 01:39:04.569 GMT"", ""cluster_unlocked"": true, ""timeline"": 31, ""xlog"": {""replayed_timestamp"": ""2019-12-12 07:31:25.063 GMT"", ""replayed_location"": 428947133566072, ""paused"": false, ""received_location"": 428947133566072}, ""state"": ""running"", ""server_version"": 90615, ""role"": ""replica"", ""database_system_identifier"": ""6343687859876602183"", ""patroni"": {""version"": ""1.6.0"", ""scope"": ""pg-ha-cluster""}}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,088 INFO: Got response from patroni-07-db-gprd.c.gitlab-production.internal http://10.220.16.107:8009/patroni: b'{""patroni"": {""version"": ""1.6.0"", ""scope"": ""pg-ha-cluster""}, ""server_version"": 90615, ""database_system_identifier"": ""6343687859876602183"", ""state"": ""running"", ""role"": ""replica"", ""cluster_unlocked"": true, ""timeline"": 31, ""postmaster_start_time"": ""2019-10-29 17:48:07.243 GMT"", ""xlog"": {""received_location"": 428947133581640, ""paused"": false, ""replayed_location"": 428947133581640, ""replayed_timestamp"": ""2019-12-12 07:31:25.070 GMT""}}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,092 INFO: Got response from patroni-01-db-gprd.c.gitlab-production.internal http://10.220.16.101:8009/patroni: b'{""state"": ""running"", ""cluster_unlocked"": true, ""xlog"": {""replayed_location"": 428947133581640, ""replayed_timestamp"": ""2019-12-12 07:31:25.070 GMT"", ""received_location"": 428947133582952, ""paused"": false}, ""database_system_identifier"": ""6343687859876602183"", ""server_version"": 90615, ""role"": ""replica"", ""postmaster_start_time"": ""2019-11-27 23:12:39.985 GMT"", ""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}, ""timeline"": 31}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,105 
INFO: Got response from patroni-09-db-gprd.c.gitlab-production.internal http://10.220.16.109:8009/patroni: b'{""database_system_identifier"": ""6343687859876602183"", ""server_version"": 90615, ""timeline"": 31, ""state"": ""running"", ""cluster_unlocked"": true, ""xlog"": {""paused"": false, ""replayed_location"": 428947133581640, ""received_location"": 428947133581640, ""replayed_timestamp"": ""2019-12-12 07:31:25.070 GMT""}, ""postmaster_start_time"": ""2019-10-29 09:21:52.573 GMT"", ""role"": ""replica"", ""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,120 INFO: Got response from patroni-02-db-gprd.c.gitlab-production.internal http://10.220.16.102:8009/patroni: b'{""cluster_unlocked"": true, ""timeline"": 31, ""state"": ""running"", ""postmaster_start_time"": ""2019-10-29 09:21:50.651 GMT"", ""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}, ""xlog"": {""received_location"": 428947133600792, ""replayed_timestamp"": ""2019-12-12 07:31:25.079 GMT"", ""paused"": false, ""replayed_location"": 428947133600792}, ""database_system_identifier"": ""6343687859876602183"", ""server_version"": 90614, ""role"": ""replica""}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,187 INFO: Got response from patroni-04-db-gprd.c.gitlab-production.internal http://10.220.16.104:8009/patroni: b'{""role"": ""replica"", ""server_version"": 90614, ""database_system_identifier"": ""6343687859876602183"", ""cluster_unlocked"": true, ""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}, ""postmaster_start_time"": ""2019-10-29 09:21:51.098 GMT"", ""timeline"": 31, ""state"": ""running"", ""xlog"": {""replayed_timestamp"": ""2019-12-12 07:31:25.067 GMT"", ""replayed_location"": 428947133580968, ""paused"": false, ""received_location"": 428947133580968}}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,190 INFO: Got response from patroni-03-db-gprd.c.gitlab-production.internal http://10.220.16.103:8009/patroni: b'{""patroni"": {""version"": ""1.6.0"", ""scope"": ""pg-ha-cluster""}, ""state"": ""running"", ""timeline"": 31, ""cluster_unlocked"": true, ""server_version"": 90614, ""role"": ""replica"", ""database_system_identifier"": ""6343687859876602183"", ""xlog"": {""replayed_location"": 428947133721400, ""replayed_timestamp"": ""2019-12-12 07:31:25.123 GMT"", ""received_location"": 428947133721400, ""paused"": false}, ""postmaster_start_time"": ""2019-10-29 09:21:50.671 GMT""}' -2019-12-12_07:31:25 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:25,202 INFO: Got response from patroni-11-db-gprd.c.gitlab-production.internal http://10.220.16.111:8009/patroni: b'{""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}, ""postmaster_start_time"": ""2019-10-29 09:21:50.985 GMT"", ""cluster_unlocked"": true, ""timeline"": 31, ""xlog"": {""replayed_location"": 428947134134856, ""replayed_timestamp"": ""2019-12-12 07:31:25.173 GMT"", ""paused"": false, ""received_location"": 428947134134856}, ""role"": ""replica"", ""database_system_identifier"": ""6343687859876602183"", ""server_version"": 90615, ""state"": ""running""}' -2019-12-12_07:31:26 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:26,088 INFO: Got response from patroni-12-db-gprd.c.gitlab-production.internal http://10.220.16.112:8009/patroni: b'{""role"": ""replica"", ""state"": ""running"", ""database_system_identifier"": ""6343687859876602183"", 
""postmaster_start_time"": ""2019-10-29 09:21:52.046 GMT"", ""xlog"": {""replayed_timestamp"": ""2019-12-12 07:31:26.067 GMT"", ""replayed_location"": 428947138866080, ""received_location"": 428947138866080, ""paused"": false}, ""timeline"": 31, ""server_version"": 90615, ""cluster_unlocked"": true, ""patroni"": {""scope"": ""pg-ha-cluster"", ""version"": ""1.6.0""}}' -2019-12-12_07:31:26 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:26,153 WARNING: Master (patroni-06-db-gprd.c.gitlab-production.internal) is still alive -2019-12-12_07:31:26 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:26,173 INFO: following a different leader because i am not the healthiest node -2019-12-12_07:31:34 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:34,905 INFO: Lock owner: patroni-06-db-gprd.c.gitlab-production.internal; I am patroni-08-db-gprd.c.gitlab-production.internal -2019-12-12_07:31:34 patroni-08-db-gprd patroni[29808]: 2019-12-12 07:31:34,917 INFO: changing primary_conninfo and restarting in progress - -``` -So, as nothing really were wrong with `patroni-06` and was still online, leader role was preserved, and timeline is still 31: - -``` -+---------------+-------------------------------------------------+---------------+--------+---------+----+-----------+ -| Cluster | Member | Host | Role | State | TL | Lag in MB | -+---------------+-------------------------------------------------+---------------+--------+---------+----+-----------+ -| pg-ha-cluster | patroni-01-db-gprd.c.gitlab-production.internal | 10.220.16.101 | | running | 31 | 39 | -| pg-ha-cluster | patroni-02-db-gprd.c.gitlab-production.internal | 10.220.16.102 | | running | 31 | 43 | -| pg-ha-cluster | patroni-03-db-gprd.c.gitlab-production.internal | 10.220.16.103 | | running | 31 | 32 | -| pg-ha-cluster | patroni-04-db-gprd.c.gitlab-production.internal | 10.220.16.104 | | running | 31 | 39 | -| pg-ha-cluster | patroni-05-db-gprd.c.gitlab-production.internal | 10.220.16.105 | | running | 31 | | -| pg-ha-cluster | patroni-06-db-gprd.c.gitlab-production.internal | 10.220.16.106 | Leader | running | 31 | 0 | -| pg-ha-cluster | patroni-07-db-gprd.c.gitlab-production.internal | 10.220.16.107 | | running | 31 | 51 | -| pg-ha-cluster | patroni-08-db-gprd.c.gitlab-production.internal | 10.220.16.108 | | running | 31 | 46 | -| pg-ha-cluster | patroni-09-db-gprd.c.gitlab-production.internal | 10.220.16.109 | | running | 31 | 42 | -| pg-ha-cluster | patroni-10-db-gprd.c.gitlab-production.internal | 10.220.16.110 | | running | 31 | 31 | -| pg-ha-cluster | patroni-11-db-gprd.c.gitlab-production.internal | 10.220.16.111 | | running | 31 | 50 | -| pg-ha-cluster | patroni-12-db-gprd.c.gitlab-production.internal | 10.220.16.112 | | running | 31 | 34 | -+---------------+-------------------------------------------------+---------------+--------+---------+----+-----------+ - - -``` - - -## Consecuences - -- ~ 1 minute downtime in the replicas ( because of the restarting) - - - -## Corrective actions - -- Investigate the usage consul.checks:[] at patroni config to be more resilient ti short network glitches.",2.0 -28331221,2019-12-11 20:54:44.906,Hosted discourse forum PoC testing / readiness review,"Once we have a PoC environment provisioned and accessible under &139, with a recently [restored backup](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8677) and [authentication configured](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8678), we need to perform thorough validation. 
This should include working through a [readiness review](https://gitlab.com/gitlab-com/gl-infra/readiness).",1.0 -28318751,2019-12-11 16:15:04.740,Transparency into API requests by IP and project ID,"As an SRE on call, I need to be able to quickly identify problem users and projects in unicorn and workhorse, specifically with respect to API calls.",1.0 -28314510,2019-12-11 14:18:02.977,Learn about Elastic Search 7.x,"Gain knowledge about Elastic Search v 7.x to be able to setup and support it as log search solution. - -* [ ] Find learning material, share findings -* [ ] Evaluate possible training options / certifications -* [ ] Read stuff, try things -* [ ] improve docs",8.0 -28310903,2019-12-11 13:01:39.208,Verification step of postgres-gprd is failing,"Related to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8643 - -The actual verification script is failing to return successfully. - -Example: https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/-/jobs/785011",2.0 -28242485,2019-12-10 13:01:05.448,Turn down Azure/DO blackbox prober,"We don't get alerts from Azure anymore, and we're trying to clean out this infra. - -Remove the blackbox exporter server and jobs from Azure/DO production.",1.0 -28082432,2019-12-08 16:30:44.191,"A ""5% disk space left"" alert gets fired, but resolves almost immediately","A ""5% disk space left"" alert gets fired, and I get paged through PagerDuty, but then when I go to take a look at the disks on the system, everything appears normal. - -``` -nelsnelson@sidekiq-export-04-sv-gprd.c.gitlab-production.internal:~$ df -h -Filesystem Size Used Avail Use% Mounted on -udev 26G 0 26G 0% /dev -tmpfs 5.2G 550M 4.6G 11% /run -/dev/sda1 20G 11G 9.1G 54% / -tmpfs 26G 4.0K 26G 1% /dev/shm -tmpfs 5.0M 0 5.0M 0% /run/lock -tmpfs 26G 0 26G 0% /sys/fs/cgroup -/dev/sdb 50G 151M 49G 1% /var/log -share-01-stor-gprd.c.gitlab-production.internal:/var/opt/gitlab/gitlab-rails/shared/artifacts 20T 11T 9.4T 52% /var/opt/gitlab/gitlab-rails/shared/artifacts -share-01-stor-gprd.c.gitlab-production.internal:/var/opt/gitlab/gitlab-ci/builds 20T 11T 9.4T 52% /var/opt/gitlab/gitlab-ci/builds -share-01-stor-gprd.c.gitlab-production.internal:/var/opt/gitlab/gitlab-rails/shared/lfs-objects 20T 11T 9.4T 52% /var/opt/gitlab/gitlab-rails/shared/lfs-objects -pages-01-stor-gprd.c.gitlab-production.internal:/var/opt/gitlab/gitlab-rails/shared/pages 16T 5.0T 11T 32% /var/opt/gitlab/gitlab-rails/shared/pages -share-01-stor-gprd.c.gitlab-production.internal:/var/opt/gitlab/gitlab-rails/uploads 20T 11T 9.4T 52% /var/opt/gitlab/gitlab-rails/uploads -tmpfs 5.2G 0 5.2G 0% /run/user/1017 -``` - -I suppose this is because the disk capacity dip is so brief, and I did not run the `df` check quickly enough. - -![Screen_Shot_2019-12-08_at_10.28.35_AM](/uploads/eccbfa2bcd3905705dc14990017f9a0a/Screen_Shot_2019-12-08_at_10.28.35_AM.png)",1.0 -28070585,2019-12-07 23:09:56.131,Failed to connect to 10.226.1.4 port 9091: Operation timed out,"It seems that the deadman's switch for a few hosts may not be operational or reachable on their normal port 9091. - -> Can anyone help understand what is 10.226.1.4? 
-> It's hardcoded in gitlab-restore as a destination for the final signal about successful restoration https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/blob/master/common.sh#L45 – looks like it's not available anymore -```* Failed to connect to 10.226.1.4 port 9091: Operation timed out``` -> So, the verification for staging backups is always failing https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd/-/jobs/779052. -> Any recent changes with that address and should it be changed? -> 10.226.1.3 is also unavailable. And 10.250.8.9 - --- https://gitlab.slack.com/archives/CB3LSMEJV/p1575757987020800",1.0 -28027466,2019-12-06 16:20:27.496,Weird wobbling of web/haproxy latency+error ratios,"Noticed this happening today. The onset of the wobbling seems to correspond with the natural increase of traffic as Europe and Americas comes online. - -![Screen_Shot_2019-12-06_at_10.16.46_AM](/uploads/8c2977c15a540300e6247dc48f656de5/Screen_Shot_2019-12-06_at_10.16.46_AM.png) - -![Screen_Shot_2019-12-06_at_10.14.32_AM](/uploads/4a9a3cd8ae8de9fcac7d0c02ef35af51/Screen_Shot_2019-12-06_at_10.14.32_AM.png) - -![Screen_Shot_2019-12-06_at_10.13.36_AM](/uploads/3bb1a6f192f62ddf90056bbc35ae479b/Screen_Shot_2019-12-06_at_10.13.36_AM.png) - -![Screen_Shot_2019-12-06_at_10.13.35_AM](/uploads/09976c0a5c513bf1934226b922f9ba14/Screen_Shot_2019-12-06_at_10.13.35_AM.png)",1.0 -27996584,2019-12-05 21:46:22.257,Upgrade to terraform GCP provider 3.x,"The most recent major version upgrade of the Google Cloud Terraform provider has been released, and includes [several changes](https://github.com/terraform-providers/terraform-provider-google/blob/master/CHANGELOG.md) we will want/need in the near future. Note that this impacts how we manage GCP projects, however (see #8122) - -Note that this will require a fair amount of review and testing to ensure we don't have any issues in any modules and/or when rolling out deployments across environments. This issue should be updated according to the [upgrade guide](https://www.terraform.io/docs/providers/google/guides/version_3_upgrade.html) or split into further tasks as required. - -/cc @gitlab-com/gl-infra",5.0 -27995476,2019-12-05 20:35:59.631,Match existing chef server config in gitlab-chef-server cookbook,"Once the base cookbook to [automate chef-server installation](#8601) has been completed, we need to document and replicate the existing chef server configuration in the cookbook attributes, and validate via chefspec/inspec.",2.0 -27995407,2019-12-05 20:31:58.951,Automate chef-server install,"Follow-up / breakout from #8028 - while the terraform code is in place for an instance in us-central, the bootstrap needs to be improved. Our current chef server was built manually, and has little to no automation for the installation and configuration of the chef server itself. There is a [chef-server cookbook](https://github.com/chef-cookbooks/chef-server) that was added to a [new role](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/master/roles/chef-gitlab-net.json) based on the prior one. 
- -With this configuration, however, we do not have any ability to test independently of deploying changes to the role onto the chef server; this issue is to track the creation of a container build/test/deployment pipeline (if possible), or a ""standard"" wrapper cookbook around `chef-server`, and the corresponding test-kitchen/chefspec/inspec config to validate, otherwise.",2.0 -27994920,2019-12-05 20:05:27.534,Investigate application start failures when all known read-only replicas are not responsive.,"## Summary - -In the Incident Review meeting earlier today (05 December), corrective action item 13 in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8528 was skipped in favor of a more in-depth discussion. - -> We need to investigate why losing just 2 Patroni nodes affected us - -The meeting was nearly over, so I stated that a follow-up meeting should be scheduled to discuss the concern. Most of the individuals in the call reassembled shortly after for a separate call, and we ended up discussing the matter then. It quickly became evident that more information needs to be uncovered before a fruitful conversation can take place. - -## Definition of Done - -- [x] Outline the technical conditions under which the failure occurred. - - [x] Provide steps to reproduce the behavior. -- [ ] Document this scenario in the appropriate section of our runbook. -- [x] @ansdval will determine a priority level and bring it to the attention of the development teams in the Performance & Availability meeting using the `~infradev` and ~gitlab.com label combination.",2.0 -27988400,2019-12-05 17:09:50.339,evaluate migration to PostgreSQL 12,"We need to plan a migration of PostgreSQL to a newer version. - -For this task, we need to evaluate the options for the migration.",20.0 -27986142,2019-12-05 16:04:17.620,Create metrics catalog for Praefect,"To establish Apdex and error rate metrics and alerting, and Grafana dashboards - -/cc @johncai @nnelson",2.0 -27981854,2019-12-05 14:00:43.120,Use custom instance type for Patroni nodes,"We currently use `n1-highmem-96` to get 600GB memory PostgreSQL server instances. This leads to very under-utilized CPU. This is currently wasting 20% of our GCP CPU quota. Not to mention the cost of reserving 1000 cores we don't use. - -We can easily cut this down by using a custom node type. - -![image](/uploads/648dbc4c6d5168258331866c1f164784/image.png) - -Based on this ""worst case"" CPU utilization, we peak at about 30 cores in use. We can safely cut the allocated cores by 50% to 48 CPUs. 
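A quick sanity check of the sizing arithmetic behind the proposal below (a hypothetical helper, not part of the issue; it assumes GCE's `custom-<vCPUs>-<memoryMB>` naming convention and does not validate per-vCPU memory limits):

```python
# Sketch: derive the custom machine type string for a 48 vCPU / 600 GB node.
# Assumption: GCE custom machine types are named 'custom-<vCPUs>-<memory MB>'
# and memory must be a multiple of 256 MB; per-vCPU limits are not checked here.

def custom_machine_type(vcpus: int, memory_gb: int) -> str:
    memory_mb = memory_gb * 1024
    if memory_mb % 256 != 0:
        raise ValueError('memory must be a multiple of 256 MB')
    return f'custom-{vcpus}-{memory_mb}'

print(custom_machine_type(48, 600))  # -> custom-48-614400
```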
- -Proposal: Switch allocation to `custom-48-614400`.",2.0 -27947632,2019-12-04 17:43:12.601,Problems with license db extraction,"## We're unable to read from license_db - -``` -2019-12-04 08:10:54,086] INFO - [2019-12-04 08:10:54,086] INFO - b'psycopg2.OperationalError: could not connect to server: Connection timed out\n' -[2019-12-04 08:10:54,086] INFO - [2019-12-04 08:10:54,086] INFO - b'\tIs the server running on host ""10.138.16.11"" and accepting\n' -[2019-12-04 08:10:54,087] INFO - [2019-12-04 08:10:54,087] INFO - b'\tTCP/IP connections on port 5432?\n' -``` - ---- - -http://35.190.127.73/log?dag_id=license_db_extract&task_id=license-db-incremental&execution_date=2019-12-04T00%3A00%3A00%2B00%3A00 - -Full log: - -``` -*** Reading local file: /usr/local/airflow/logs/license_db_extract/license-db-incremental/2019-12-04T00:00:00+00:00/2.log -[2019-12-04 08:06:17,093] INFO - Dependencies all met for -[2019-12-04 08:06:17,102] INFO - Dependencies all met for -[2019-12-04 08:06:17,102] INFO - --------------------------------------------------------------------------------- -[2019-12-04 08:06:17,102] INFO - Starting attempt 2 of 2 -[2019-12-04 08:06:17,103] INFO - --------------------------------------------------------------------------------- -[2019-12-04 08:06:17,129] INFO - Executing on 2019-12-04T00:00:00+00:00 -[2019-12-04 08:06:17,130] INFO - Running: ['airflow', 'run', 'license_db_extract', 'license-db-incremental', '2019-12-04T00:00:00+00:00', '--job_id', '22469', '--raw', '-sd', 'DAGS_FOLDER/extract/gitlab_dbs.py', '--cfg_path', '/tmp/tmpy57gwtvp'] -[2019-12-04 08:06:17,774] INFO - Job 22469: Subtask license-db-incremental [2019-12-04 08:06:17,774] INFO - settings.configure_orm(): Using pool settings. pool_size=6, pool_recycle=1800, pid=1892444 -[2019-12-04 08:06:17,932] INFO - Job 22469: Subtask license-db-incremental [2019-12-04 08:06:17,932] INFO - Using executor LocalExecutor -[2019-12-04 08:06:18,227] INFO - Job 22469: Subtask license-db-incremental [2019-12-04 08:06:18,226] INFO - Filling up the DagBag from /usr/local/airflow/analytics/dags/extract/gitlab_dbs.py -[2019-12-04 08:06:18,618] INFO - Job 22469: Subtask license-db-incremental [2019-12-04 08:06:18,618] INFO - Running on host airflow-deployment-b666dd78d-dzz4r -[2019-12-04 08:06:18,890] INFO - [2019-12-04 08:06:18,890] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:19,897] INFO - [2019-12-04 08:06:19,897] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:20,904] INFO - [2019-12-04 08:06:20,904] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:21,912] INFO - [2019-12-04 08:06:21,912] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:22,920] INFO - [2019-12-04 08:06:22,920] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:23,927] INFO - [2019-12-04 08:06:23,927] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:24,933] INFO - [2019-12-04 08:06:24,933] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:25,940] INFO - [2019-12-04 08:06:25,940] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:26,948] INFO - [2019-12-04 08:06:26,948] INFO - Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:27,957] INFO - [2019-12-04 08:06:27,956] INFO 
- Event: license-db-incremental-df1efe5e had an event of type Pending -[2019-12-04 08:06:28,964] INFO - [2019-12-04 08:06:28,964] INFO - Event: license-db-incremental-df1efe5e had an event of type Running -[2019-12-04 08:06:29,009] INFO - [2019-12-04 08:06:29,009] INFO - b""Cloning into 'analytics'...\n"" -[2019-12-04 08:06:30,579] INFO - [2019-12-04 08:06:30,579] INFO - b'INFO:root:Reading manifest at location: ../manifests/license_db_manifest.yaml\n' -[2019-12-04 08:06:30,589] INFO - [2019-12-04 08:06:30,589] INFO - b'INFO:root:Creating database engines...\n' -[2019-12-04 08:06:30,616] INFO - [2019-12-04 08:06:30,616] INFO - b'INFO:root:Engine(postgresql://postgres:***@10.138.16.11:5432/license_gitlab_com_production)\n' -[2019-12-04 08:06:30,718] INFO - [2019-12-04 08:06:30,718] INFO - b'INFO:root:Engine(snowflake://airflow:***@gitlab/RAW/tap_postgres?role=LOADER&warehouse=LOADING)\n' -[2019-12-04 08:06:30,718] INFO - [2019-12-04 08:06:30,718] INFO - b'INFO:root:Processing Table: add_ons\n' -[2019-12-04 08:06:30,723] INFO - [2019-12-04 08:06:30,723] INFO - b'INFO:botocore.vendored.requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): gitlab.snowflakecomputing.com\n' -[2019-12-04 08:06:30,911] INFO - [2019-12-04 08:06:30,911] INFO - b'INFO:botocore.vendored.requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ocsp.snowflakecomputing.com\n' -[2019-12-04 08:10:54,065] INFO - [2019-12-04 08:10:54,065] INFO - b'Traceback (most recent call last):\n' -[2019-12-04 08:10:54,066] INFO - [2019-12-04 08:10:54,066] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py"", line 2158, in _wrap_pool_connect\n' -[2019-12-04 08:10:54,067] INFO - [2019-12-04 08:10:54,067] INFO - b' return fn()\n' -[2019-12-04 08:10:54,068] INFO - [2019-12-04 08:10:54,067] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 400, in connect\n' -[2019-12-04 08:10:54,068] INFO - [2019-12-04 08:10:54,068] INFO - b' return _ConnectionFairy._checkout(self)\n' -[2019-12-04 08:10:54,069] INFO - [2019-12-04 08:10:54,069] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 788, in _checkout\n' -[2019-12-04 08:10:54,070] INFO - [2019-12-04 08:10:54,070] INFO - b' fairy = _ConnectionRecord.checkout(pool)\n' -[2019-12-04 08:10:54,071] INFO - [2019-12-04 08:10:54,071] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 529, in checkout\n' -[2019-12-04 08:10:54,071] INFO - [2019-12-04 08:10:54,071] INFO - b' rec = pool._do_get()\n' -[2019-12-04 08:10:54,072] INFO - [2019-12-04 08:10:54,072] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 1193, in _do_get\n' -[2019-12-04 08:10:54,072] INFO - [2019-12-04 08:10:54,072] INFO - b' self._dec_overflow()\n' -[2019-12-04 08:10:54,073] INFO - [2019-12-04 08:10:54,073] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py"", line 66, in __exit__\n' -[2019-12-04 08:10:54,074] INFO - [2019-12-04 08:10:54,074] INFO - b' compat.reraise(exc_type, exc_value, exc_tb)\n' -[2019-12-04 08:10:54,075] INFO - [2019-12-04 08:10:54,075] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py"", line 249, in reraise\n' -[2019-12-04 08:10:54,075] INFO - [2019-12-04 08:10:54,075] INFO - b' raise value\n' -[2019-12-04 08:10:54,076] INFO - [2019-12-04 08:10:54,076] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 1190, in _do_get\n' 
-[2019-12-04 08:10:54,076] INFO - [2019-12-04 08:10:54,076] INFO - b' return self._create_connection()\n' -[2019-12-04 08:10:54,077] INFO - [2019-12-04 08:10:54,077] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 347, in _create_connection\n' -[2019-12-04 08:10:54,078] INFO - [2019-12-04 08:10:54,078] INFO - b' return _ConnectionRecord(self)\n' -[2019-12-04 08:10:54,079] INFO - [2019-12-04 08:10:54,079] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 474, in __init__\n' -[2019-12-04 08:10:54,079] INFO - [2019-12-04 08:10:54,079] INFO - b' self.__connect(first_connect_check=True)\n' -[2019-12-04 08:10:54,080] INFO - [2019-12-04 08:10:54,080] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 671, in __connect\n' -[2019-12-04 08:10:54,080] INFO - [2019-12-04 08:10:54,080] INFO - b' connection = pool._invoke_creator(self)\n' -[2019-12-04 08:10:54,081] INFO - [2019-12-04 08:10:54,081] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/strategies.py"", line 106, in connect\n' -[2019-12-04 08:10:54,082] INFO - [2019-12-04 08:10:54,082] INFO - b' return dialect.connect(*cargs, **cparams)\n' -[2019-12-04 08:10:54,083] INFO - [2019-12-04 08:10:54,083] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py"", line 412, in connect\n' -[2019-12-04 08:10:54,083] INFO - [2019-12-04 08:10:54,083] INFO - b' return self.dbapi.connect(*cargs, **cparams)\n' -[2019-12-04 08:10:54,084] INFO - [2019-12-04 08:10:54,084] INFO - b' File ""/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py"", line 130, in connect\n' -[2019-12-04 08:10:54,085] INFO - [2019-12-04 08:10:54,085] INFO - b' conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\n' -[2019-12-04 08:10:54,086] INFO - [2019-12-04 08:10:54,086] INFO - b'psycopg2.OperationalError: could not connect to server: Connection timed out\n' -[2019-12-04 08:10:54,086] INFO - [2019-12-04 08:10:54,086] INFO - b'\tIs the server running on host ""10.138.16.11"" and accepting\n' -[2019-12-04 08:10:54,087] INFO - [2019-12-04 08:10:54,087] INFO - b'\tTCP/IP connections on port 5432?\n' -[2019-12-04 08:10:54,087] INFO - [2019-12-04 08:10:54,087] INFO - b'\n' -[2019-12-04 08:10:54,087] INFO - [2019-12-04 08:10:54,087] INFO - b'\n' -[2019-12-04 08:10:54,088] INFO - [2019-12-04 08:10:54,088] INFO - b'The above exception was the direct cause of the following exception:\n' -[2019-12-04 08:10:54,088] INFO - [2019-12-04 08:10:54,088] INFO - b'\n' -[2019-12-04 08:10:54,088] INFO - [2019-12-04 08:10:54,088] INFO - b'Traceback (most recent call last):\n' -[2019-12-04 08:10:54,089] INFO - [2019-12-04 08:10:54,089] INFO - b' File ""main.py"", line 341, in \n' -[2019-12-04 08:10:54,089] INFO - [2019-12-04 08:10:54,089] INFO - b' Fire({""tap"": main})\n' -[2019-12-04 08:10:54,090] INFO - [2019-12-04 08:10:54,090] INFO - b' File ""/usr/local/lib/python3.7/site-packages/fire/core.py"", line 127, in Fire\n' -[2019-12-04 08:10:54,091] INFO - [2019-12-04 08:10:54,091] INFO - b' component_trace = _Fire(component, args, context, name)\n' -[2019-12-04 08:10:54,092] INFO - [2019-12-04 08:10:54,092] INFO - b' File ""/usr/local/lib/python3.7/site-packages/fire/core.py"", line 366, in _Fire\n' -[2019-12-04 08:10:54,092] INFO - [2019-12-04 08:10:54,092] INFO - b' component, remaining_args)\n' -[2019-12-04 08:10:54,093] INFO - [2019-12-04 08:10:54,093] INFO - b' File ""/usr/local/lib/python3.7/site-packages/fire/core.py"", 
line 542, in _CallCallable\n' -[2019-12-04 08:10:54,093] INFO - [2019-12-04 08:10:54,093] INFO - b' result = fn(*varargs, **kwargs)\n' -[2019-12-04 08:10:54,094] INFO - [2019-12-04 08:10:54,094] INFO - b' File ""main.py"", line 319, in main\n' -[2019-12-04 08:10:54,094] INFO - [2019-12-04 08:10:54,094] INFO - b' table_name,\n' -[2019-12-04 08:10:54,095] INFO - [2019-12-04 08:10:54,095] INFO - b' File ""/analytics/extract/postgres_pipeline/postgres_pipeline/utils.py"", line 260, in check_if_schema_changed\n' -[2019-12-04 08:10:54,096] INFO - [2019-12-04 08:10:54,096] INFO - b' con=source_engine,\n' -[2019-12-04 08:10:54,097] INFO - [2019-12-04 08:10:54,097] INFO - b' File ""/usr/local/lib/python3.7/site-packages/pandas/io/sql.py"", line 397, in read_sql\n' -[2019-12-04 08:10:54,097] INFO - [2019-12-04 08:10:54,097] INFO - b' chunksize=chunksize)\n' -[2019-12-04 08:10:54,098] INFO - [2019-12-04 08:10:54,098] INFO - b' File ""/usr/local/lib/python3.7/site-packages/pandas/io/sql.py"", line 1099, in read_query\n' -[2019-12-04 08:10:54,099] INFO - [2019-12-04 08:10:54,099] INFO - b' result = self.execute(*args)\n' -[2019-12-04 08:10:54,099] INFO - [2019-12-04 08:10:54,099] INFO - b' File ""/usr/local/lib/python3.7/site-packages/pandas/io/sql.py"", line 990, in execute\n' -[2019-12-04 08:10:54,100] INFO - [2019-12-04 08:10:54,100] INFO - b' return self.connectable.execute(*args, **kwargs)\n' -[2019-12-04 08:10:54,101] INFO - [2019-12-04 08:10:54,101] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py"", line 2074, in execute\n' -[2019-12-04 08:10:54,102] INFO - [2019-12-04 08:10:54,102] INFO - b' connection = self.contextual_connect(close_with_result=True)\n' -[2019-12-04 08:10:54,103] INFO - [2019-12-04 08:10:54,103] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py"", line 2123, in contextual_connect\n' -[2019-12-04 08:10:54,103] INFO - [2019-12-04 08:10:54,103] INFO - b' self._wrap_pool_connect(self.pool.connect, None),\n' -[2019-12-04 08:10:54,104] INFO - [2019-12-04 08:10:54,104] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py"", line 2162, in _wrap_pool_connect\n' -[2019-12-04 08:10:54,105] INFO - [2019-12-04 08:10:54,105] INFO - b' e, dialect, self)\n' -[2019-12-04 08:10:54,106] INFO - [2019-12-04 08:10:54,106] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py"", line 1476, in _handle_dbapi_exception_noconnection\n' -[2019-12-04 08:10:54,106] INFO - [2019-12-04 08:10:54,106] INFO - b' exc_info\n' -[2019-12-04 08:10:54,107] INFO - [2019-12-04 08:10:54,107] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py"", line 265, in raise_from_cause\n' -[2019-12-04 08:10:54,108] INFO - [2019-12-04 08:10:54,108] INFO - b' reraise(type(exception), exception, tb=exc_tb, cause=cause)\n' -[2019-12-04 08:10:54,109] INFO - [2019-12-04 08:10:54,109] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py"", line 248, in reraise\n' -[2019-12-04 08:10:54,109] INFO - [2019-12-04 08:10:54,109] INFO - b' raise value.with_traceback(tb)\n' -[2019-12-04 08:10:54,111] INFO - [2019-12-04 08:10:54,111] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py"", line 2158, in _wrap_pool_connect\n' -[2019-12-04 08:10:54,111] INFO - [2019-12-04 08:10:54,111] INFO - b' return fn()\n' -[2019-12-04 08:10:54,113] INFO - [2019-12-04 08:10:54,112] INFO - b' File 
""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 400, in connect\n' -[2019-12-04 08:10:54,113] INFO - [2019-12-04 08:10:54,113] INFO - b' return _ConnectionFairy._checkout(self)\n' -[2019-12-04 08:10:54,114] INFO - [2019-12-04 08:10:54,114] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 788, in _checkout\n' -[2019-12-04 08:10:54,114] INFO - [2019-12-04 08:10:54,114] INFO - b' fairy = _ConnectionRecord.checkout(pool)\n' -[2019-12-04 08:10:54,115] INFO - [2019-12-04 08:10:54,115] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 529, in checkout\n' -[2019-12-04 08:10:54,116] INFO - [2019-12-04 08:10:54,116] INFO - b' rec = pool._do_get()\n' -[2019-12-04 08:10:54,117] INFO - [2019-12-04 08:10:54,117] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 1193, in _do_get\n' -[2019-12-04 08:10:54,117] INFO - [2019-12-04 08:10:54,117] INFO - b' self._dec_overflow()\n' -[2019-12-04 08:10:54,118] INFO - [2019-12-04 08:10:54,118] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py"", line 66, in __exit__\n' -[2019-12-04 08:10:54,119] INFO - [2019-12-04 08:10:54,119] INFO - b' compat.reraise(exc_type, exc_value, exc_tb)\n' -[2019-12-04 08:10:54,120] INFO - [2019-12-04 08:10:54,120] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py"", line 249, in reraise\n' -[2019-12-04 08:10:54,120] INFO - [2019-12-04 08:10:54,120] INFO - b' raise value\n' -[2019-12-04 08:10:54,121] INFO - [2019-12-04 08:10:54,121] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 1190, in _do_get\n' -[2019-12-04 08:10:54,122] INFO - [2019-12-04 08:10:54,122] INFO - b' return self._create_connection()\n' -[2019-12-04 08:10:54,123] INFO - [2019-12-04 08:10:54,123] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 347, in _create_connection\n' -[2019-12-04 08:10:54,123] INFO - [2019-12-04 08:10:54,123] INFO - b' return _ConnectionRecord(self)\n' -[2019-12-04 08:10:54,124] INFO - [2019-12-04 08:10:54,124] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 474, in __init__\n' -[2019-12-04 08:10:54,125] INFO - [2019-12-04 08:10:54,125] INFO - b' self.__connect(first_connect_check=True)\n' -[2019-12-04 08:10:54,126] INFO - [2019-12-04 08:10:54,126] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/pool.py"", line 671, in __connect\n' -[2019-12-04 08:10:54,126] INFO - [2019-12-04 08:10:54,126] INFO - b' connection = pool._invoke_creator(self)\n' -[2019-12-04 08:10:54,127] INFO - [2019-12-04 08:10:54,127] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/strategies.py"", line 106, in connect\n' -[2019-12-04 08:10:54,128] INFO - [2019-12-04 08:10:54,128] INFO - b' return dialect.connect(*cargs, **cparams)\n' -[2019-12-04 08:10:54,129] INFO - [2019-12-04 08:10:54,129] INFO - b' File ""/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py"", line 412, in connect\n' -[2019-12-04 08:10:54,130] INFO - [2019-12-04 08:10:54,129] INFO - b' return self.dbapi.connect(*cargs, **cparams)\n' -[2019-12-04 08:10:54,131] INFO - [2019-12-04 08:10:54,131] INFO - b' File ""/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py"", line 130, in connect\n' -[2019-12-04 08:10:54,132] INFO - [2019-12-04 08:10:54,132] INFO - b' conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\n' -[2019-12-04 08:10:54,133] 
INFO - [2019-12-04 08:10:54,133] INFO - b'sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out\n' -[2019-12-04 08:10:54,133] INFO - [2019-12-04 08:10:54,133] INFO - b'\tIs the server running on host ""10.138.16.11"" and accepting\n' -[2019-12-04 08:10:54,134] INFO - [2019-12-04 08:10:54,134] INFO - b'\tTCP/IP connections on port 5432?\n' -[2019-12-04 08:10:54,134] INFO - [2019-12-04 08:10:54,134] INFO - b' (Background on this error at: http://sqlalche.me/e/e3q8)\n' -[2019-12-04 08:10:59,152] INFO - [2019-12-04 08:10:59,152] INFO - b'ERROR:snowflake.connector.network:could not find io module state (interpreter shutdown?)\n' -[2019-12-04 08:10:59,152] INFO - [2019-12-04 08:10:59,152] INFO - b'Traceback (most recent call last):\n' -[2019-12-04 08:10:59,153] INFO - [2019-12-04 08:10:59,153] INFO - b' File ""/usr/local/lib/python3.7/site-packages/snowflake/connector/network.py"", line 781, in _request_exec\n' -[2019-12-04 08:10:59,154] INFO - [2019-12-04 08:10:59,154] INFO - b' auth=SnowflakeAuth(token),\n' -[2019-12-04 08:10:59,155] INFO - [2019-12-04 08:10:59,155] INFO - b' File ""/usr/local/lib/python3.7/site-packages/botocore/vendored/requests/sessions.py"", line 451, in request\n' -[2019-12-04 08:10:59,155] INFO - [2019-12-04 08:10:59,155] INFO - b' prep = self.prepare_request(req)\n' -[2019-12-04 08:10:59,156] INFO - [2019-12-04 08:10:59,156] INFO - b' File ""/usr/local/lib/python3.7/site-packages/botocore/vendored/requests/sessions.py"", line 382, in prepare_request\n' -[2019-12-04 08:10:59,157] INFO - [2019-12-04 08:10:59,157] INFO - b' hooks=merge_hooks(request.hooks, self.hooks),\n' -[2019-12-04 08:10:59,158] INFO - [2019-12-04 08:10:59,158] INFO - b' File ""/usr/local/lib/python3.7/site-packages/botocore/vendored/requests/models.py"", line 307, in prepare\n' -[2019-12-04 08:10:59,158] INFO - [2019-12-04 08:10:59,158] INFO - b' self.prepare_body(data, files, json)\n' -[2019-12-04 08:10:59,160] INFO - [2019-12-04 08:10:59,159] INFO - b' File ""/usr/local/lib/python3.7/site-packages/botocore/vendored/requests/models.py"", line 436, in prepare_body\n' -[2019-12-04 08:10:59,160] INFO - [2019-12-04 08:10:59,160] INFO - b' length = super_len(data)\n' -[2019-12-04 08:10:59,161] INFO - [2019-12-04 08:10:59,161] INFO - b' File ""/usr/local/lib/python3.7/site-packages/botocore/vendored/requests/utils.py"", line 59, in super_len\n' -[2019-12-04 08:10:59,161] INFO - [2019-12-04 08:10:59,161] INFO - b' fileno = o.fileno()\n' -[2019-12-04 08:10:59,162] INFO - [2019-12-04 08:10:59,162] INFO - b'RuntimeError: could not find io module state (interpreter shutdown?)\n' -[2019-12-04 08:10:59,162] INFO - [2019-12-04 08:10:59,162] INFO - b'\n' -[2019-12-04 08:10:59,163] INFO - [2019-12-04 08:10:59,163] INFO - b'During handling of the above exception, another exception occurred:\n' -[2019-12-04 08:10:59,163] INFO - [2019-12-04 08:10:59,163] INFO - b'\n' -[2019-12-04 08:10:59,163] INFO - [2019-12-04 08:10:59,163] INFO - b'Traceback (most recent call last):\n' -[2019-12-04 08:10:59,164] INFO - [2019-12-04 08:10:59,164] INFO - b' File ""/usr/local/lib/python3.7/site-packages/snowflake/connector/network.py"", line 648, in _request_exec_wrapper\n' -[2019-12-04 08:10:59,164] INFO - [2019-12-04 08:10:59,164] INFO - b' **kwargs)\n' -[2019-12-04 08:10:59,165] INFO - [2019-12-04 08:10:59,165] INFO - b' File ""/usr/local/lib/python3.7/site-packages/snowflake/connector/network.py"", line 871, in _request_exec\n' -[2019-12-04 08:10:59,166] INFO - [2019-12-04 
08:10:59,166] INFO - b' raise RetryRequest(err)\n' -[2019-12-04 08:10:59,167] INFO - [2019-12-04 08:10:59,167] INFO - b'snowflake.connector.network.RetryRequest: could not find io module state (interpreter shutdown?)\n' -[2019-12-04 08:11:04,486] INFO - [2019-12-04 08:11:04,486] INFO - Event: license-db-incremental-df1efe5e had an event of type Failed -[2019-12-04 08:11:04,487] INFO - [2019-12-04 08:11:04,487] INFO - Event with job id license-db-incremental-df1efe5e Failed -[2019-12-04 08:11:04,493] INFO - [2019-12-04 08:11:04,493] INFO - Event: license-db-incremental-df1efe5e had an event of type Failed -[2019-12-04 08:11:04,493] INFO - [2019-12-04 08:11:04,493] INFO - Event with job id license-db-incremental-df1efe5e Failed -[2019-12-04 08:11:04,564] ERROR - Pod Launching failed: Pod returned a failure: failed -Traceback (most recent call last): - File ""/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py"", line 142, in execute - 'Pod returned a failure: {state}'.format(state=final_state) -airflow.exceptions.AirflowException: Pod returned a failure: failed - -During handling of the above exception, another exception occurred: - -Traceback (most recent call last): - File ""/usr/local/lib/python3.6/site-packages/airflow/models/__init__.py"", line 1441, in _run_raw_task - result = task_copy.execute(context=context) - File ""/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py"", line 147, in execute - raise AirflowException('Pod Launching failed: {error}'.format(error=ex)) -airflow.exceptions.AirflowException: Pod Launching failed: Pod returned a failure: failed -[2019-12-04 08:11:04,565] INFO - All retries failed; marking task as FAILED -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental Traceback (most recent call last): -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental File ""/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py"", line 142, in execute -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental 'Pod returned a failure: {state}'.format(state=final_state) -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental airflow.exceptions.AirflowException: Pod returned a failure: failed -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental During handling of the above exception, another exception occurred: -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental Traceback (most recent call last): -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental File ""/usr/local/bin/airflow"", line 32, in -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental args.func(args) -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental File ""/usr/local/lib/python3.6/site-packages/airflow/utils/cli.py"", line 74, in wrapper -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental return f(*args, **kwargs) -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental File ""/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py"", line 523, in run -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental _run(args, dag, ti) -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask 
license-db-incremental File ""/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py"", line 442, in _run -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental pool=args.pool, -[2019-12-04 08:11:04,894] INFO - Job 22469: Subtask license-db-incremental File ""/usr/local/lib/python3.6/site-packages/airflow/utils/db.py"", line 73, in wrapper -[2019-12-04 08:11:04,895] INFO - Job 22469: Subtask license-db-incremental return func(*args, **kwargs) -[2019-12-04 08:11:04,895] INFO - Job 22469: Subtask license-db-incremental File ""/usr/local/lib/python3.6/site-packages/airflow/models/__init__.py"", line 1441, in _run_raw_task -[2019-12-04 08:11:04,895] INFO - Job 22469: Subtask license-db-incremental result = task_copy.execute(context=context) -[2019-12-04 08:11:04,895] INFO - Job 22469: Subtask license-db-incremental File ""/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py"", line 147, in execute -[2019-12-04 08:11:04,895] INFO - Job 22469: Subtask license-db-incremental raise AirflowException('Pod Launching failed: {error}'.format(error=ex)) -[2019-12-04 08:11:04,895] INFO - Job 22469: Subtask license-db-incremental airflow.exceptions.AirflowException: Pod Launching failed: Pod returned a failure: failed -[2019-12-04 08:11:07,868] INFO - [2019-12-04 08:11:07,867] INFO - Task exited with return code 1 -```",1.0 -27939523,2019-12-04 14:53:12.490,"gitlab-restore: implement cleanup procedure to avoid ""out of quota"" events","It is now quite common for the gitlab-restore project to hit quotas because some instances are stalled. If an error occurs during restoration, in some cases it does not lead to a hard failure and auto-cleanup. - -We need a job that cleans up instances periodically, based on a certain instance-name mask and rules like ""If the backup verification started 3 days ago and hasn't finished yet, and if the instance is not protected from deletion, it's time to consider backup verification as failed, send all the signals about it, and destroy the instance"". - -To discuss: how should this be organized? A cron job, or CI/CD pipelines with a special task? Something else?",4.0 -27914122,2019-12-04 02:40:49.217,Adjust praefect storage node config in gstg to new format,After the changes in https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/3699 and https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/3754 we need to adjust our gstg configuration,1.0 -27912638,2019-12-04 00:19:00.074,Setup sentry for praefect in gstg,"As part of the readiness [review for praefect](https://gitlab.com/gitlab-com/gl-infra/readiness/merge_requests/10), let's set up Sentry reporting, initially on gstg, but later it will have to be replicated on gprd",1.0 -27906899,2019-12-03 19:54:28.173,"The ""View in AlertManager"" link in a PagerDuty incident is not alerts.gprd.gitlab.net, but instead ops.gitlab.net/gitlab-com/runbooks","I'd expect to go to https://alerts.gprd.gitlab.net/#/alerts or something. - -But instead, for example, the ""View in GitLab Alertmanager"" link in https://gitlab.pagerduty.com/incidents/P5VDQCW goes to https://ops.gitlab.net/gitlab-com/runbooks/blob/master/troubleshooting/chef.md. This is not what I would expect.",1.0 -27904748,2019-12-03 19:00:45.089,document troubleshooting of restore backups failure,"Please document the steps to troubleshoot database backup and restore failures. - -We would like to involve SRE to troubleshoot. 
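One concrete check the requested documentation could capture (a sketch only; the Prometheus address and the metric name are placeholders, not values taken from this issue) is how long ago the restore pipeline last reported success:

```python
# Sketch: flag a stale backup-verification signal via the Prometheus HTTP API.
# PROMETHEUS and METRIC are illustrative placeholders, not real endpoints/metrics.
import time
import requests

PROMETHEUS = 'https://prometheus.example.internal'
METRIC = 'gitlab_restore_last_success_timestamp_seconds'  # hypothetical metric name

def hours_since_last_success() -> float:
    resp = requests.get(f'{PROMETHEUS}/api/v1/query', params={'query': METRIC}, timeout=10)
    resp.raise_for_status()
    results = resp.json()['data']['result']
    if not results:
        raise RuntimeError('metric not found - the restore may never have reported in')
    newest = max(float(sample['value'][1]) for sample in results)
    return (time.time() - newest) / 3600

if __name__ == '__main__':
    print(f'last successful restore reported {hours_since_last_success():.1f}h ago')
```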
- -Please write a runbook to explain all the steps that you usually execute to fix those alerts.",2.0 -27892387,2019-12-03 14:15:02.965,Nodes with incomplete Chef runs,"There seem to be a lot of nodes with missing chef-client metric data in gprd/gstg/etc. - -https://prometheus.gprd.gitlab.net/graph?g0.range_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D%20unless%20chef_client_error&g0.tab=1 - -Some of them have chef-client disabled. Some never complete the first client run.",2.0 -27752364,2019-11-29 14:05:12.619,Replace datasources for data analytics team,"We would like to propose changing the datasources used by the data analytics team (data warehouse); the current mechanism of restoring a database and applying the WALs can be optimised. - -We would like to provide extra replicas of the following databases that will stay updated by streaming replication: --[ ] license --[ ] version --[ ] customers --[ ] gitlab.com",4.0 -27752138,2019-11-29 13:59:28.286,Onboard datawarehouse databases,"We need to onboard the data warehouse databases: - -- license -- version -- customers - -For this we need to execute the following steps: -- [ ] Check the hardware available for those databases. Recommend if an upgrade is needed. -- [ ] Check the number of replicas. Create a replica from the primary if one does not exist. -- [ ] Create a dashboard and add metrics for all the databases. Suggestion: create an overview dashboard for all of them, and a specific one for each. -- [ ] Verify the backups, create a restore project, and add it to the deadman's snitch (i.e. as we have for gitlab.com). -- [ ] Verify if we have alerts in place.",8.0 -27701789,2019-11-28 15:27:35.633,grant access to ongres to ops.gitlab.net,"The Ongres team does not have access to ops.gitlab.net, the chef-repo, and dashboards.gitlab.net. - -We would like to check the possibility of granting Ongres access to ops.gitlab.net using SAML. - -Could we configure multiple omniauth providers?",3.0 -27561703,2019-11-26 05:18:25.619,Improve the on-call handover and labels,"Looking for feedback on the following board and labels: https://gitlab.com/groups/gitlab-com/gl-infra/-/boards/1424188?&label_name[]=SRE%3AOn-call - -The premise is that all incidents will get labelled ~""SRE:On-call"" by the incident template, and automatically go on this board. Any other work which is triaged as on-call work will get tagged that way as well. The EOC will triage those items and either remove ~""SRE:On-call"" and re-label, or move it to their region. At the end of the shift, items can be handed off to the next region, or not, as the EOC sees fit. - -This will work best if the ~""SRE:On-call"" label is not used for ongoing work, unless that ongoing work needs to be handed off. That means there should not be anything labelled both ~""corrective action"" and ~""SRE:On-call"". We could consider using scoped tags to ensure this. - -It might be better to use another tag for all of this - for example `SRE::Handoff` - and possibly `SRE::Corrective Action`. - -Opening it up for discussion, comments, and opinions.",1.0 -27479514,2019-11-22 19:36:38.728,"Monitor ""autovacuum queue"" on Postgres primary","The query https://gitlab.com/snippets/1889668 can be used to monitor the ""autovacuum queue"": the list of tables that need to be processed by autovacuum but are not yet being processed, plus the size of this list. - -Goals: -1. 
be able to see if some tables that need processing don't get it and ""wait"" (we could even use some metric similar to ""load average"" and have some alerts: if we have N workers and 2*N tables are ""waiting"" to be processed, it is time to trigger an alert). Here we need to know only the size of the ""queue"". @emanuel_ongres do you think it is easy to add it to Prometheus? -2. in case troubleshooting is needed, it would be great to see details in logs. I usually achieve this with a cron job running a PL/pgSQL form of that query every N minutes and logging the details. What is the best option to do this in the case of the GitLab.com infrastructure? - -I think we should split achieving these two goals. Having item 1 is much more important. However, having item 2 can be extremely helpful as well, for troubleshooting autovacuum behavior. - -Running this query (https://gitlab.com/snippets/1889668) makes sense only on the Postgres master. So we can wrap it in a `pg_is_in_recovery()` check, so that it is not run on replicas but can be deployed everywhere. - -cc @Finotto @gerardo.herzig @ahachete",2.0 -27423355,2019-11-22 06:14:47.767,Test Postgres database restoration from GCP snapshots on staging,"I'm going to test how GCP snapshots can be used for restoration of the Postgres database: -- first, manually on staging (an instance with the prefix `nik-` will be manually created) -- next, I'll find a way to use snapshots from staging to restore to an instance in ""gitlab-restore"" - -Additionally, an open question is: do we need pg_start_backup()/pg_stop_backup() when creating a snapshot? - -Cc @Finotto",8.0 -27419624,2019-11-22 00:57:29.820,Add alert and monitoring to ops.gitlab.net,"While working on the steps to migrate ops, I noticed that ops does not have any type of monitoring or alerting, with the exception of a [blackbox rule checking the sign-on endpoint](https://gitlab.com/gitlab-com/runbooks/blob/master/rules/ops-gitlab-nets.yml) running in [gprd prometheus](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/68f0e9d101ea2271a7d5a62f7e432e6d6a023d61/roles/gprd-infra-prometheus-server.json#L178). We should enable the node and gitlab exporters and add some alerts.",3.0 -27398783,2019-11-21 12:02:46.082,Terraform env gprd (and possibly others) will not plan in CI,"See https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/jobs/751731 for an example. - -Local planning works, so this appears to be a service account permissions issue, but as the error message says, the API in question really doesn't appear to be enabled: https://console.developers.google.com/apis/library/container.googleapis.com?project=gitlab-production",2.0 -27342916,2019-11-20 23:08:55.041,Missing permissions on gitlab-ops terraform service account,"While applying the changes for [gitlab-com-infrastructure!1195](https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1195), the [apply job](https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/jobs/750625) returned a permissions error and failed to apply an update to the ops GKE cluster; we need to add cluster admin privileges to the environments where we're utilizing GKE.",1.0 -27271403,2019-11-20 18:45:33.002,Cloudflare: Prototype log shipping,"As discussed in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8377. - -Create a prototype that uses Cloudflare Logpull and pushes the logs to a filebeat (see the sketch below). 
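A rough sketch of such a fetcher (the zone ID, API token, field list, and output path are placeholders; it assumes the Cloudflare Logpull `logs/received` endpoint, which returns newline-delimited JSON that filebeat can tail):

```python
# Sketch: pull one minute of Cloudflare Logpull data and append the NDJSON
# lines to a file for filebeat to ship. All identifiers below are placeholders.
import datetime as dt
import requests

ZONE_ID = '0123456789abcdef'                       # placeholder zone ID
TOKEN = 'CF_API_TOKEN'                             # placeholder API token
FIELDS = 'ClientIP,ClientRequestHost,EdgeResponseStatus,EdgeStartTimestamp'
OUT = '/var/log/cloudflare/http_requests.ndjson'   # path a filebeat input would tail

def pull_window(start: dt.datetime, end: dt.datetime) -> str:
    url = f'https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logs/received'
    resp = requests.get(
        url,
        headers={'Authorization': f'Bearer {TOKEN}'},
        params={'start': start.isoformat() + 'Z', 'end': end.isoformat() + 'Z', 'fields': FIELDS},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text  # already newline-delimited JSON

if __name__ == '__main__':
    # Logpull only serves slightly delayed data, so query a window ending a few minutes ago.
    end = dt.datetime.utcnow().replace(second=0, microsecond=0) - dt.timedelta(minutes=5)
    start = end - dt.timedelta(minutes=1)
    with open(OUT, 'a') as fh:
        fh.write(pull_window(start, end))
```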
-Bonus points for metric extraction.",3.0 -27176545,2019-11-18 15:07:13.973,[staging] add extra pgbouncer node for the primary database and rebalance connection pool,"add an extra pgbouncer node to the pool of the primary database on the staging cluster - -Setup the applications to use 2 nodes for the web-api, and 1 node for the sideqik.",1.0 -27152618,2019-11-18 09:05:29.635,RCA: 2019-11-18 daily 08:07 UTC latency spike," - -Incident: https://gitlab.com/gitlab-com/gl-infra/production/issues/1372 - -## Summary - -We see a daily spike in latencies every morning around 08:07 UTC. This issue will serve to collect evidence for investigating and mitigating the root cause. - -**Update**: This is a duplicate of https://gitlab.com/gitlab-com/gl-infra/production/issues/1316 - -- Service(s) affected : ~""Service:Web"" -- Team attribution : ~Infrastructure -- Minutes downtime or degradation : - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. 
-- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys) -*",3.0 -27173904,2019-11-14 11:20:03.453,create change to separate pools for pgbouncer production read write,"As we plan to execute in all the read-only databases we have to split the traffic between the pools at the pgbouncer level from sideqik and web. -Please test in staging, and create the change request to apply in production. - -We will need an extra node to organize this change.",2.0 -27015529,2019-11-13 13:36:42.083,Deploy Thanos sidecar --min-time flag,"After rolling out Thanos v0.8.1, we now have a `--min-time` flag[0]. This limits the sidecar lookback into Prometheus for Thanos query frontend. This allows us to reduce load on Prometheus, while still maintaining longer history in Prometheus itself.",1.0 -27013533,2019-11-13 12:36:33.054,RCA: 2019-11-12: Latency Apdex score degradation because of pgbouncer saturation," - -Incident: gitlab-com/gl-infra/production#1357 - -## Summary - -A failover of pgbouncer nodes led to imbalanced connection distribution to the 2 active pgbouncer instances, which in turn led to saturated pgbouncer connections and caused Apdex degradations. - -- Service(s) affected : ~""Service:Web"" -- Team attribution : ~Infrastructure -- Minutes downtime or degradation : 19,5h - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -![image](/uploads/b579c5383a885b92af6407a1243cd86d/image.png) - -![image](/uploads/b4f0dc2322b0995f41682009b3c622fe/image.png) - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? 
-- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. - -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys) -*",1.0 -27002656,2019-11-13 07:44:59.103,Large number of failing blackbox probes,"Currently 16.5% (23/139) blackbox probes are failing continuously. - -* [x] Update blackbox_exporter to 0.16.0. (Fix for https host header in redirects) -* [x] Fix all broken blackbox probes. -* [x] Add alert for blackbox probes failing.",2.0 -26986385,2019-11-12 15:36:55.826,Cleanup stage/tier/type labels,"There are a lot of Prometheus monitoring targets that are missing `stage, `tier`, and `type` labels. 
These are necessary for correct identification in the generic service alerts/rules and correct routing to alertmanager.",2.0 -26981123,2019-11-12 13:10:10.814,Fix sidekiq error ratio metrics,The [sidekiq error ration panel](https://dashboards.gitlab.net/d/general-service/general-service-platform-metrics?orgId=1&from=1573553266211&to=1573564066211&panelId=8&tz=UTC&fullscreen&var-PROMETHEUS_DS=Global&var-environment=gprd&var-type=sidekiq&var-stage=main&var-sigma=2) in Grafana is showing error ratios higher than 100% which can't be true and is causing alerts.,1.0 -26974433,2019-11-12 11:06:25.806,Sync folders to public Grafana dashboards,The current Grafana sync script doesn't support updating the folders. This causes all dashboards to be squashed into a flat namespace.,3.0 -26958032,2019-11-12 03:51:29.171,Handle DNS entries for the gitlab-review.app hosted zone in terraform,This might come into play at https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8203 but we should do it regardless,2.0 -26947132,2019-11-11 17:18:08.372,RCA: 2019-11-11 GCP Service Disruption," - -Incident: https://gitlab.com/gitlab-com/gl-infra/production/issues/1349 , https://gitlab.com/gitlab-com/gl-infra/production/issues/1348 - -## Summary - -A brief summary of what happened. Try to make it as executive-friendly as possible. - -- Service(s) affected : -- Team attribution : -- Minutes downtime or degradation : - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? -- Did alarming work as expected? -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**, the way of writing has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep questions get deeper in finding the actual root cause. - -Keep in min that from one ""why?"" there may come more than one answer, consider following the different branches. 
- -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys) -*",1.0 -26909720,2019-11-11 03:50:37.653,Corrupt WAL in GKE prometheus,"One of the prometheus servers in the gitlab-production GKE cluster (https://console.cloud.google.com/kubernetes/pod/us-east1/gprd-gitlab-gke/monitoring/prometheus-gitlab-monitoring-promethe-prometheus-0?project=gitlab-production) has a corrupt WAL: -`""reload blocks: head truncate failed: create checkpoint: read segments: corruption in segment /prometheus/wal/00002079 at 81103202: unexpected full record""` - -AFAIK the correct way to fix that is to kick prometheus over so it can do a repair.",1.0 -26909449,2019-11-11 03:20:10.542,prometheus-app-02-inf-gprd failing to upload to object store,"```Firing 1 - Thanos compaction has not run in 24 hours.``` - -```Thanos compact prometheus-app-02-inf-gprd.c.gitlab-production.internal:10902 has not uploaded any blocks in 24 hours.```",1.0 -26908797,2019-11-11 02:10:18.178,postgres-01-inf-dr:/opt/prometheus is full,"Not sure how long, just that it is. - -Currently 50GB; will expand to 100GB",1.0 -26833795,2019-11-08 00:28:40.511,Fix path to PPA in gitlab-sentry cookbook,"The gitlab-sentry cookbook adds a PPA for redis, but it has a `not-if` pointing to an old `trusty` path. This leads to add-apt-repository running every chef run, and each time it adds an additional commented out deb-src line, leading to multi-thousand line files and chef-client timeouts when it runs at busy times (add-apt-repository rewrites the file repeatedly, for some reason I don't care to investigate).",1.0 -26824818,2019-11-07 18:28:15.142,Provision AWS IAM account for customer cross-account role access,"For the request in gitlab-com/access-requests#2606, we need to provision an account for use by `gitlab.com` to [assume customer-owned cross-account IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) with permissions to provision EKS clusters into their account. 
At least to start, I think the `aws-account` terraform configuration in gitlab-com/gitlab-com-infrastructure> is the most likely place for this resource, though we can move/adjust if necessary at a later date. - -The account should be assigned an IAM policy which allows `sts:AssumeRole` and a condition(https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) that prevents the user from assuming a role in our own account (for now?). If we need to enable EKS provisioning, we will need to stipulate that they can not be hosted in our current production account, or make other policy adjustments to ensure that the provisioning account cannot assume other roles that grant unwanted elevated permissions. - -As an example of how to negate access to roles in our production account, something like the following will be needed (change ACCOUNT_ID accordingly). (See [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notresource.html#notresource-element-combinations) for additional context on why this is actually desirable in our case) - -```json -{ - ""Version"": ""2012-10-17"", - ""Statement"": { - ""Effect"": ""Allow"", - ""Action"": ""sts:AssumeRole"", - ""NotResource"": [ - ""arn:aws:iam::ACCOUNT_ID:role/*"" - ] - } -} -``` - -Further, since we require an external ID for all cross-account roles being accessed, we _should_ be able to limit the account to only assume roles when an external id is provided, like this using the [Null](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html#Conditions_Null) condition and the [`sts:ExternalId`](https://docs.aws.amazon.com/IAM/latest/UserGuide/list_awssecuritytokenservice.html#awssecuritytokenservice-policy-keys) condition key to ensure that the permissions are only granted when that condition key is present. - -```json -{ - ""Version"": ""2012-10-17"", - ""Statement"": { - ""Effect"": ""Allow"", - ""Action"": ""sts:AssumeRole"", - ""NotResource"": [ - ""arn:aws:iam::ACCOUNT_ID:role/*"" - ], - ""Condition"":{""Null"":{""sts:ExternalId"":""false""}} - } -} -```",1.0 -26820785,2019-11-07 16:58:20.488,Add about.gitlab.com redirect rule for index.html,"## Problem - -Crawlers are reporting duplicate content for some pages on about.gitlab.com ending with a URL `index.html`. - -Example: -- https://about.gitlab.com/analysts/forrester-vsm/ -- https://about.gitlab.com/analysts/forrester-vsm/index.html - -## Solution - -Add a redirect rule on about.gitlab.com for any URL ending with `index.html` back to `/`. - -## Concerns - -@brandon_lyon @laurenbarker do you know of any templates that require their URL to end with index.html? - -@gitlab-com/gl-infra/managers tagged for prioritization",1.0 -26784630,2019-11-06 20:03:17.428,Tweak alert for disk use of /var/log on patroni servers to alert sooner,"The current alert for /var/log filling up comes when 1% of the space is remaining. When /var/log is filled up, it can have negative impacts on WAL-E shipping and the node performance as a whole. We should alert a little more aggressively on the /var/log disk filling up. Perhaps at 10% remaining space.",1.0 -26776728,2019-11-06 15:49:29.957,Update runbooks to provide good documentation on how to associate sidekiq jobs with projects (or users).,"Update runbooks to describe how we can find sidekiq queue jobs per project (or user), and review sidekiq queues to look for problematic projects (or users). 
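-
-A minimal sketch of what such a runbook entry could look like (assuming console access to a node where `gitlab-rails runner` or an equivalent Rails entry point is available; grouping by the first job argument is only a heuristic, since not every worker takes a project ID as its first argument):
-
-```bash
-# Tally queued Sidekiq jobs by worker class and first argument (often a
-# project or user ID) to spot a single project dominating a queue.
-sudo gitlab-rails runner '
-  Sidekiq::Queue.all.each do |queue|
-    tally = Hash.new(0)
-    queue.each { |job| tally[[job.klass, job.args.first]] += 1 }
-    tally.sort_by { |_key, count| -count }.first(10).each do |key, count|
-      puts [queue.name, key, count].inspect
-    end
-  end
-'
-```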
- -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8368 - -@ansdval FYI",1.0 -26756847,2019-11-06 10:49:06.190,Make cloudflare DNS terraformable,"Since we are keeping seperate zones in cloudflare for gitlab.com and staging.gitlab.com for cloudflare configuration purposes we are going to keep the split as it is right now. This means we can extend the `dns` environment in terraform to allow managing the `staging.gitlab.com.` zone via the cloudflare provider without impacting production. - -This also resolves diversion from production, because even though the underlying DNS is different, there is no difference in managing it, then. - -We should be able to keep the same structure for configuring that via the json files as we do for route53.",5.0 -26739422,2019-11-05 21:54:53.033,Rails Consoles for Auto DevOps Deployments,"Currently most of our rails consoles are running on VM's, and the code that they run is deployed along with all of the other nodes. - -In non-core applications running from Auto DevOps, we have no way to get a rails console at the moment. - -When connecting directly to an application pod, this is the result -``` -$ kubectl exec -it production-5fdb8f947-5rpxq -- /bin/bash -root@production-5fdb8f947-5rpxq:/# cd /app/bin -root@production-5fdb8f947-5rpxq:/app/bin# ./rails console -Traceback (most recent call last): - 4: from ./rails:3:in `
' - 3: from ./rails:3:in `require_relative' - 2: from /app/config/boot.rb:3:in `' - 1: from /usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require' -/usr/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require': cannot load such file -- bundler/setup (LoadError) -root@production-5fdb8f947-5rpxq:/app/bin# rails console -``` -Also: -``` -kubectl exec -it production-5fdb8f947-5rpxq -- /bin/bash -root@production-5fdb8f947-5rpxq:/# bundle exec rails c -bash: bundle: command not found -root@production-5fdb8f947-5rpxq:/# cd /app/bin/ -root@production-5fdb8f947-5rpxq:/app/bin# ./bundle exec rails c -Traceback (most recent call last): - 2: from ./bundle:3:in '
' - 1: from /usr/lib/ruby/2.5.0/rubygems.rb:263:in 'bin_path' -/usr/lib/ruby/2.5.0/rubygems.rb:289:in 'find_spec_for_exe': Could not find 'bundler' (1.17.3) required by your /app/Gemfile.lock. (Gem::GemNotFoundException) -To update to the lastest version installed on your system, run `bundle update --bundler`. -To install the missing version, run 'gem install bundler:1.17.3' -``` - -If we simply deploy a VM as we do now, we would have to manually set it up and there is no mechanism in Auto DevOps to deploy code updates to that VM. - -If we deploy to a pod, then we have to give full access to the cluster via `kubectl` to anyone who needs to run `kubectl exec` to get a rails console. - -Neither of these are ideal. Lets discuss options.",1.0 -26683973,2019-11-04 20:30:53.170,Redis traffic analysis - gitlab-com/www-gitlab-com#5708,,5.0 -26670540,2019-11-04 15:24:57.400,Deploy a prototype of dynamic Image rescaling to CloudFlare workers on staging,"Context: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8209#note_238007871 - -Something like this should work - -```javascript -addEventListener('fetch', event => { - event.respondWith(handleRequest(event.request)) -}) - -async function handleRequest(request) { - let url = new URL(request.url); - let fetchParams = { - cf: { - image: { - fit: 'scale-down' - } - }; - - let width = parseInt(url.searchParams.get(""width""), 10); - if (isNaN(width) || !width) { - return fetch(request); - } else { - fetchParams.cf.image.width = width; - } - - // When we do not need the `x-with` header anymore: - // return fetch(request, fetchParams); - - let response = await fetch(request, fetchParams); - response = new Response(response.body, response) - response.headers.set('x-with', 'cloudflare worker') - return response -} - -``` - -According to @timzallmann These are the required routes: - -- User Avatar URL `https://assets.gitlab-static.net/uploads/-/system/user/avatar/1149402/avatar.png?width=24` -- Project Avatar URL `https://assets.gitlab-static.net/uploads/-/system/project/avatar/7764/about_logo.png?width=40` -- Group Avatar URL `https://gitlab.com/uploads/-/system/group/avatar/9970/logo-extra-whitespace.png` - -so the `/uploads/-/system/**` pattern should be good.",2.0 -26670291,2019-11-04 15:19:16.783,Update runbooks with documentation on how to use and manage one-off commands.,"As an SRE, I need to know how to create, install, and trigger ansible based commands to help manage actions on large numbers of nodes. - -The runbooks should be updated to include sections on how to accomplish each of these tasks. - -This is the original issue with details on what was to be made: https://gitlab.com/gitlab-com/gl-infra/delivery/issues/75",1.0 -26670201,2019-11-04 15:17:37.451,Make CloudFlare workers terraformable,"In order to deploy a prototype for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8209#note_238224109, we need to have a way to deploy cloudflare worker scrips and routes vie terraform. - -This issue's goal is to have a way to deploy worker scripts (https://www.terraform.io/docs/providers/cloudflare/r/worker_script.html) and worker routes (https://www.terraform.io/docs/providers/cloudflare/r/worker_route.html) to staging.gitlab.com.",3.0 -26669985,2019-11-04 15:09:45.537,Use `web_exporter/metrics` for staging and production.,"The `/-/metrics` is a very bad citizen now when we run Puma. 
It does effectively block the worker process for the time required to render all metrics, thus increase the latency and processing time across the fleet substantially: https://dashboards.gitlab.net/d/thYzurImk/rails-controllers?orgId=1&var-action=MetricsController%23index&var-database=influxdb-01-inf-gprd. We do run around 1k of `/-/metrics` requests per-minute, the mean time is 400ms, p95 1s. Anything that escapes to native, is not interruptible by default due to Ruby GVL and effectively blocks the requests processing for the given period of time. - -Looking at actual implementation, it is actually very inefficient: https://gitlab.com/gitlab-org/prometheus-client-mmap/blob/master/lib%2Fprometheus%2Fclient%2Fformats%2Ftext.rb#L34. We get a list of all files, and escape to native to present them. - -We do have `/metrics` on implemented already on separate endpoint, it is not yet fully tested, but I would assume that we might prefer to switch to that: https://gitlab.com/gitlab-org/gitlab/issues/30037. - -We should validate and switch it on staging/production to use `/metrics` endpoint.",3.0 -26572236,2019-11-01 22:20:55.483,Environment name for new ops environment,"When provisioning the new Chef server infrastructure, we received the following error on bootstrap: `chef-01-sv-ops-us-central startup-script: INFO startup-script: Cannot load environment ops-us-central#033[0m` because we've overloaded that variable/term. - -At first glance, I was inclined to set this value to `ops` in `environments/ops-us-central/variables.tf`, thinking that our intention is to lift-and-shift the current ops environment to the new region, destroy the old infrastructure, then update the terraform config directory name and state file to `ops` and let it be the new primary. However - currently there are a few areas where `ops-us-central` is hard-coded that might still be problematic, and prevent this approach; most notably [the name for the GCP network](https://gitlab.com/gitlab-com/gitlab-com-infrastructure/blob/d2f51e5d709943cad25a305c73a6eb4f69581b90/environments/ops-us-central/main.tf#L52), and subsequent references under downstream dependent resources. - -In order to facilitate the migration, I anticipate that we will be limited in our options for names we can use while still effectively running both environments in parallel, but `ops-us-central` is likely not the best choice, in terms of length, and region-specificity to start; I'm sure that we'll encounter other issues as we continue, as well. `ops-too` was used for the first aborted migration, but that received pushback due to being somewhat flippant. - -Since we're restarting the efforts to move ops away from the `us-east1` region, I suspect now is the time to figure this out, while we can reasonably easily destroy/recreate infrastructure without impacting running services. If we _do_ have to use an entirely new name, that means duplication of our chef configs, at least; what else would be impacted? - -TL;DR - what shall we call the new ops environment / how do we address the naming issues? - -/cc @ahanselka @cmcfarland @cindy",1.0 -26542980,2019-10-31 23:45:57.991,Add AWS GovCloud Account to Okta,"This issue is primarily a placeholder, so we have something to track effort towards https://gitlab.com/gitlab-com/gl-security/zero-trust/okta/issues/112 in our milestone planning on the infrastructure issue board. 
All notes/updates/follow-up should be logged on that issue, directly.",5.0 -26542132,2019-10-31 22:25:09.299,Thanos compact disk filling up,"The `/opt/prometheus` disk on `thanos-compact-01-inf-gprd.c.gitlab-production.internal` is almost full. - -This is similar to this issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6776 - -The alert was cleared by following the runbook here: https://gitlab.com/gitlab-com/runbooks/blob/master/troubleshooting/thanos-compact.md - -The following commands were executed: - -``` -df -h -sudo systemctl stop thanos-compact -sudo systemctl start thanos-compact -df -h -``` - -/cc @bjk-gitlab",1.0 -26534975,2019-10-31 17:51:23.314,Validate scraping of Praefect on gstg,"Staging has Praefect enabled, though I couldn't find the metrics in Prometheus. One metric I looked for specifically is `gitaly_praefect_replication_latency`. That one is always created, regardless if it has input. Given it was missing I think we're not scraping the target, or we're not exposing metrics on the target. - -/cc @alejandro",2.0 -26511361,2019-10-31 07:46:49.701,Intermittent HTTP failures to Ingress controllers,"We have been seeing intermittent HTTP failures when trying to connect to ingress controllers. - -The first instance was the `design.gitlab.com` site in this issue: https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/issues/432 - -That was resolved by upgrading the K8s cluster and provisioning a new ingress, then re-deploying the app to it. We tried provisioning a new ingress alone, and it didn't resolve it. - -The second instance is some suspicious failures on the `version.gitlab.com` app. We got a Pingdom check failure, and `httping` is returning some intermittent timeouts. - -These may not be related, but this issue can be a place to capture information and explore the possibility. - -``` -$ httping -c 15 -t 2 http://version.gitlab.com -PING version.gitlab.com:80 (/): -connected to 104.196.17.203:80 (208 bytes), seq=0 time=294.46 ms -connected to 104.196.17.203:80 (208 bytes), seq=1 time=293.93 ms -connected to 104.196.17.203:80 (208 bytes), seq=2 time=292.10 ms -connected to 104.196.17.203:80 (208 bytes), seq=3 time=296.31 ms -connected to 104.196.17.203:80 (208 bytes), seq=4 time=293.10 ms -connected to 104.196.17.203:80 (208 bytes), seq=5 time=294.27 ms -timeout while receiving reply-headers from host -timeout while receiving reply-headers from host -connected to 104.196.17.203:80 (208 bytes), seq=8 time=293.68 ms -connected to 104.196.17.203:80 (208 bytes), seq=9 time=294.75 ms -connected to 104.196.17.203:80 (208 bytes), seq=10 time=293.74 ms -connected to 104.196.17.203:80 (208 bytes), seq=11 time=295.00 ms -connected to 104.196.17.203:80 (208 bytes), seq=12 time=361.12 ms -connected to 104.196.17.203:80 (208 bytes), seq=13 time=301.54 ms ---- http://version.gitlab.com/ ping statistics --- -15 connects, 12 ok, 13.33% failed, time 22342ms -round-trip min/avg/max = 292.1/300.3/361.1 ms -```",3.0 -26503320,2019-10-30 22:11:37.637,Documentation for new Chef Infra Server,"Broken out from #8037; the infrastructure has been provisioned, and in conjunction with / once #8028 is completed, we need to document the changes to the infrastructure, and conduct a [readiness review](https://gitlab.com/gitlab-com/gl-infra/readiness/-/issues/11) before proceeding with a production change for the migration. 
This issue should include discussion centered around documentation and lead into the production readiness review.",3.0 -26503275,2019-10-30 22:08:26.263,Implement/update monitoring for Chef server in GCP,"Broken out from #8037; the infrastructure has been provisioned, and in conjunction with / once #8028 is completed, we need to implement [monitoring](https://docs.chef.io/server_monitor.html) for the Chef Infra Server application. - -Definition of done: -* up check on port 443 -* hook up the node exporter -* make a basic jsonnet def in the dashboards",2.0 -26482037,2019-10-30 12:07:06.187,Update cookbook-gitlab-runner to use new Docker packages,"[cookbook-gitlab-runner](https://gitlab.com/gitlab-org/cookbook-gitlab-runner) is still using the obsolete `docker-engine` package repo. This should be updated to use the new [docker-ce package](https://docs.docker.com/install/linux/docker-ce/ubuntu/) repo. - -CC: @jarv @tmaczukin",3.0 -26390744,2019-10-28 20:43:31.065,Quality tests failing due to praefect in staging,"See https://gitlab.slack.com/archives/C3ER3TQBT/p1572293689252600 - -We're seeing 500 errors on staging.gitlab.com with - -``` -GRPC::InvalidArgument -3:only messages for praefect are allowed -```",1.0 -26338837,2019-10-26 15:20:41.119,Add runbook for design.gitlab.com,"design.gitlab.com seems down. I can't find a runbook or other instructions for how to diagnose or repair it. -See https://gitlab.com/gitlab-com/gl-infra/production/issues/1279 - -/cc @devin",3.0 -26338071,2019-10-26 14:13:19.353,blackbox-exporter not closing connections,"blackbox-exporter isn't closing connections to targets that don't respond. That is leading to exhaustion of file descriptors after 6h with the current limit of 1024 FDs. Also, blackbox_exporter doesn't seem to alert for those targets. See incident https://gitlab.com/gitlab-com/gl-infra/production/issues/1279.",3.0 -26329956,2019-10-25 21:20:22.416,Cloudflare: Accept and forward original Client IP,"With the current configuration, IPs from Cloudflare (https://www.cloudflare.com/ips/) are being reported by the application as the IP for the session and in the logs.",3.0 -26310715,2019-10-25 10:50:37.814,"Audit kubernetes workloads and ensure all that can be scraped, is scraped",Split from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8186,2.0 -26283490,2019-10-24 15:35:24.552,RCA: 2019-10-24: Elevated CI job queue durations," - -Incident: gitlab-com/gl-infra/production#1275 - -## Summary - -Due to a project import bug, some jobs had an options attribute with a wrong data type, which lead to failures assigning jobs so they were put back into the queue in an endless loop. The fair usage job scheduling algorithm was preferring them over most other jobs as they were belonging to a short pipeline and so only a few other jobs got the chance to also run on shared runners. This increased the overall queue time and the number of pending jobs was rising. - -- Service(s) affected : ~""Service:CI Runners"" -- Team attribution : -- Minutes downtime or degradation : 10:00 UTC - 20:20 UTC = 10h 20m - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - many jobs pending for a long time -- Who was impacted by this incident? - - all users running jobs -- How did the incident impact customers? 
- - customers needed to wait a long time to get their jobs scheduled -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Job duration [for 50th percentile](https://thanos-query.ops.gitlab.net/graph?g0.range_input=12h&g0.expr=histogram_quantile(0.5%2C%20sum(rate(job_queue_duration_seconds_bucket%7Benvironment%3D~%22gprd%22%2C%20jobs_running_for_project%3D~%220%22%7D%5B5m%5D))%20by%20(shared_runner%2C%20jobs_running_for_project%2C%20le))&g0.tab=0): - -![image](/uploads/a2ddc504fa56bad09669250c1afa2470/image.png) - -Job duration [for 90th percentile](https://thanos-query.ops.gitlab.net/graph?g0.range_input=12h&g0.expr=histogram_quantile(0.9%2C%20sum(rate(job_queue_duration_seconds_bucket%7Benvironment%3D~%22gprd%22%2C%20jobs_running_for_project%3D~%220%22%7D%5B5m%5D))%20by%20(shared_runner%2C%20jobs_running_for_project%2C%20le))&g0.tab=0): - -![image](/uploads/0484cbc10faba0b0b9fe2e07c6b67fe0/image.png) - -![image](/uploads/ca7f96dd91a80f981cf45dabcdea96bb/image.png) - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - reports from customer support about users seeing pending jobs -- Did alarming work as expected? - - we got no alerts, as the SLO APDEX for ci runner latency was defined to alert on the 50th percentile but in the first hours only the 70th percentile was severely affected -- How long did it take from the start of the incident to its detection? - - 42m (support [reported](https://gitlab.slack.com/archives/C101F3796/p1571913724286000) customer issues at 10:42) -- How long did it take from detection to remediation? - - 10:42 - 20:20 = 9h 38m -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team member wasn't page-able, ...) - - It took a long time to identify the root cause and to understand the impact. - -## Root Cause Analysis - -Jobs were getting stuck in pending state. - -1. Why? - They only had a low chance of getting assigned to a shared runners. -2. Why? - The shared runners were mostly occupied with a few jobs with corrupt options being retried indefinitely. -3. Why? - The `Ci::Build#options` attribute was a string instead of a hash. -4. Why? - The jobs came from imported projects and the importer has a bug in 12.4.0 (12.3.5 works). -5. Why? - -## What went well - -- customer support escalating to infra team -- dev and infra working together to debug a tricky issue - -## What can be improved - -- Alerting for pending jobs and rising job queues. -- Better understanding of job scheduling and the impact of elevated job queue times. - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". 
- - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - -* [x] Prevent corrupt job options: https://gitlab.com/gitlab-org/gitlab/merge_requests/19122, https://gitlab.com/gitlab-org/gitlab/merge_requests/19124 -* [x] Prevent jobs from being rescheduled indefinitely: https://gitlab.com/gitlab-org/gitlab/issues/34897 -* [ ] Improve alerting for elevated job queue times -* [ ] Make it easier to identify which job is picked from which project by which runner: https://gitlab.com/gitlab-org/gitlab/issues/34889 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -26256776,2019-10-24 03:17:02.383,Mirror or Move infra-vault to ops,Pending the outcome of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10126 we will need to either mirror https://gitlab.com/gitlab-com/gl-infra/infra-vault/ to ops.gitlab.net or move it there entirely (and mirror it back to gitlab.com),1.0 -75706089,2020-12-07 12:57:25.486,Facilitate 2nd Failover Test of Staging Using Geo With Learnings from 1st Test Applied.,"As part of the deliverables of the [GitLab.com Disaster Recovery Working Group](https://about.gitlab.com/company/team/structure/working-groups/disaster-recovery/) we would like to conduct a 2nd single-node test of a Geo-enabled failover for staging.gitlab.com. - -This test would be a planned failover of staging at a scheduled time, which: -1. Fails over staging.gitlab.com to a Geo-based secondary -1. Tests that staging.gitlab.com works on this secondary -1. Fails back over to the original staging.gitlab.com infrastructure. - -What needs to happen to facilitate this: -- [ ] Apply changes and learnings from the first single-node test. -- [ ] Build a change management issue to track the tactical pieces to this failover. -- [ ] Coordinate with various consumers of staging to help define a testing schedule and what we will need to have in place in the event we are without staging for an extended period of time",8.0 -75565488,2020-12-03 17:23:20.613,Modify Fastly headers to accommodate same-origin content in frames," - -**Details** - - - Point of contact for this request: @djensen - - If a call is needed, what is the proposed date and time of the call: n/a - - Additional call details (format, type of call): n/a - -**SRE Support Needed** - -Please modify Fastly headers for about.gitlab.com to allow for `` tags to be used with same-origin content (to allow [clickable SVGs](https://gitlab.com/gitlab-com/www-gitlab-com/-/issues/10052#proposal)): - -1. Remove the `X-Frame-Options` header. -1. Add the CSP `frame-ancestors` directive, and whitelist `about.gitlab.com`. - -This proposal was already discussed here: https://gitlab.com/gitlab-com/www-gitlab-com/-/issues/10052#note_458792328 - -",1.0 -75498455,2020-12-02 16:32:16.027,Installing Python 3.7.9 fails on bastion-02-inf-gprd.c.gitlab-production.internal,"The reason this fails is because `can't decompress data; zlib not available`. - -I cannot install zlib, since I don't have root access on bastion systems. - -```bash -nelsnelson@bastion-02-inf-gprd.c.gitlab-production.internal:~/workspace/db-ops$ asdf install python -Downloading python-build... -Cloning into '/home/nelsnelson/.asdf/plugins/python/pyenv'... -remote: Enumerating objects: 18376, done. 
-remote: Total 18376 (delta 0), reused 0 (delta 0), pack-reused 18376 -Receiving objects: 100% (18376/18376), 3.67 MiB | 0 bytes/s, done. -Resolving deltas: 100% (12514/12514), done. -Checking connectivity... done. -python-build 3.7.9 /home/nelsnelson/.asdf/installs/python/3.7.9 -Downloading Python-3.7.9.tar.xz... --> https://www.python.org/ftp/python/3.7.9/Python-3.7.9.tar.xz -Installing Python-3.7.9... - -BUILD FAILED (Ubuntu 16.04 using python-build 1.2.21-1-g943015e) - -Inspect or clean up the working tree at /tmp/python-build.20201202162150.30331 -Results logged to /tmp/python-build.20201202162150.30331.log - -Last 10 log lines: - runpy.run_module(""pip"", run_name=""__main__"", alter_sys=True) - File ""/tmp/python-build.20201202162150.30331/Python-3.7.9/Lib/runpy.py"", line 201, in run_module - mod_name, mod_spec, code = _get_module_details(mod_name) - File ""/tmp/python-build.20201202162150.30331/Python-3.7.9/Lib/runpy.py"", line 142, in _get_module_details - return _get_module_details(pkg_main_name, error) - File ""/tmp/python-build.20201202162150.30331/Python-3.7.9/Lib/runpy.py"", line 109, in _get_module_details - __import__(pkg_name) -zipimport.ZipImportError: can't decompress data; zlib not available -Makefile:1141: recipe for target 'install' failed -make: *** [install] Error 1 -``` - -Manually running `make` and `make install` yields a similar error: - -```bash -Traceback (most recent call last): - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/runpy.py"", line 193, in _run_module_as_main - ""__main__"", mod_spec) - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/runpy.py"", line 85, in _run_code - exec(code, run_globals) - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/ensurepip/__main__.py"", line 5, in - sys.exit(ensurepip._main()) - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/ensurepip/__init__.py"", line 214, in _main - default_pip=args.default_pip, - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/ensurepip/__init__.py"", line 127, in _bootstrap - return _run_pip(args + [p[0] for p in _PROJECTS], additional_paths) - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/ensurepip/__init__.py"", line 32, in _run_pip - runpy.run_module(""pip"", run_name=""__main__"", alter_sys=True) - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/runpy.py"", line 201, in run_module - mod_name, mod_spec, code = _get_module_details(mod_name) - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/runpy.py"", line 142, in _get_module_details - return _get_module_details(pkg_main_name, error) - File ""/home/nelsnelson/tmp/Python-3.7.9/Lib/runpy.py"", line 109, in _get_module_details - __import__(pkg_name) -zipimport.ZipImportError: can't decompress data; zlib not available -Makefile:1141: recipe for target 'install' failed -make: *** [install] Error 1 -```",2.0 -75423510,2020-12-01 12:28:50.866,update alerts and runbooks for using wal-g,Make sure that our alerting for wal-g wal archiving and backups is working as intended and that alerts are pointing to up-to-date runbooks for troubleshooting.,5.0 -75104668,2020-11-27 12:39:01.467,PoC: rails-console read-only sandbox,"Based on https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11730#note_455197827 - -> I have a rough idea of how we can sandbox the read-only console pretty well. -> -> We can force SSH to not start a shell, but rather a transient systemd-unit that is forwarded via SSH. That we can lock down as tight as we need to (filesystem ro, partially ro, even network access can be sandboxed to specific hosts). 
By forcing every user to access the rails-console via that, we can make sure, that whatever we configure is confined in the sandbox and the appropriate config files would be to. -> -> We should probably have one designated write-only target though if someone needs to export something. But if the user is unprivileged, they would need the interaction of another person to get the exports. That way we can also eliminate possible scenarios where someone might have gone rouge (which hopefully never happens!) -That way we can also prevent users from shelling out of rails into the OS. We can either restrict it, so they can't shell out period, or they can fiddle around in their sandbox. -> -> The read-write console access might be unsandboxed I think. - -Scope: -- Enforce full read-only access to *all* resources outside the sandboxed instance. -- Check feasibility of running sandboxed rails-consoles. - - What infra does rails need, that also requires read-only instances? - - Can we force rails to treat those (except Postgres - where we most likely *will* rely on a replica) as read-only (/force not triggering writes)? - - Does rails care if the state it writes to disk is not carried forward to the next startup? -- Consider how files exported by rails-console could be extracted from the machine and ensure those ways are known and regulated. -- All changes and how they are derived *must* be documented in this issue. -- Ensure output of the sandbox is captured in the systemd journal so it can be easily logged. -- Codeify changes in chef if possible. - -Not scope: -- User access and group mappings. This PoC assumes allowed users are in the group `rails-console-read-only` and can authenticate to the ssh daemon.",20.0 -75059477,2020-11-26 16:22:07.729,Disable wal-e in gprd,"After running wal-g successfully in parallel to wal-e since a while in gprd, and gstg fully being [switched over to wal-g](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3077) already, we can disable wal-e in gprd after making sure we [switched over all wal-e backup consumers in gprd](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11983). - -* [x] [switch all wal-e backup consumers to wal-g](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11983) -* [x] add alert silence for wal-e -* [x] switch archive_command on primary to use wal-g, disable `wal-g wal-push` on secondary - * [ ] merge MR for disabling wal-e - * [ ] check archive_command on primary - * [ ] switch archive_mode from `always` to `on` on patroni-08 (needs postgres restart) - * [ ] disable archive_lock cronjobs on all nodes -* [x] disable wal-e backup-push cronjobs on all nodes -* [ ] cleanup chef roles -* [ ] update runbooks - -For execution we can make a copy of the CR issue for gstg, which worked well: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3077 - -## Acceptance criteria: - -* [x] all backup and wal file consumers are configured to read from the wal-g archive (https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11983) -* [x] wal-e is disabled in gprd",3.0 -74934845,2020-11-24 20:36:50.320,switch all gprd db replicas to restore from wal-g archive,"We now have working and tested daily db backups done by wal-g in gprd and gstg has wal-e completely disabled already. To be able to disable wal-e in gprd, we need to find all current consumers of the wal-e archive and switch them over to use the wal-g archive. - -For https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11852. 
- -* [x] archive replica -* [x] delayed replica -* [x] database-lab/joe -* [x] any active gitlab-restore instances -* [x] make wal-g archive the default path for new restore instances -* [x] anything else? - -## Acceptance criteria - -* [x] All backup consumers use wal-g archive by default",5.0 -74835216,2020-11-23 10:36:11.052,disable wal-e in gstg,"After running wal-g successfully in parallel to wal-e since a while in gstg and gprd, we can disable wal-e in gstg after making sure we [disabled all wal-e backup consumers in gstg](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11883). - -* [x] gstg: [disable wal-e backup consumers](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11883) -* [x] gstg: add alert silence for wal-e -* [x] gstg: switch archive_command on primary to use wal-g, disable `wal-g wal-push` on secondary - * [x] merge MR - * [x] check archive_command on primary - * [x] switch archive_mode from `always` to `on` on patroni-05 (needs postgres restart) - * [x] disable archive_lock cronjobs on all nodes -* [x] disable wal-e backup-push cronjobs on all nodes",3.0 -74767597,2020-11-20 21:47:17.611,Investigate Failed builds for `gitlab-com/www-gitlab-com`,"**NOTE: Much of the initial discussion and links to failed jobs are contained in https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3104. TODO: Any relevant/necessary info should be moved to this issue, then that one can be closed as a duplicate.** - -@cwoolley-gitlab Reported to us a couple of failed jobs when cloning or fetching objects for the `www-gitlab-com` repository. - -- https://gitlab.com/gitlab-com/www-gitlab-com/-/jobs/863088257 -- https://gitlab.com/gitlab-com/www-gitlab-com/-/jobs/862981337 - -I'm pretty sure that I found the error for the first job. - -``` -Job failed (system failure): Error response from daemon: error setting label on mount source '/var/lib/docker/volumes/runner-d5ae8d25-project-278964-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70/_data': bad message (docker.go:817:4s) -``` - -Link to document: - -https://log.gprd.gitlab.net/app/kibana#/discover/doc/AWgzayS3ENm-ja4G1a8d/pubsub-runner-inf-gprd-000339?id=dZoV53UBQEXcWSi0HOqw - -Its duration is `1,377,525,329,393` which I presume is nanoseconds, and this translates to `22.958755489883334` minutes, which roughly correlates with the error message from the job: `ERROR: Job failed: execution took longer than 20m0s seconds`. - -There may be some relationship to a couple on-going issues being tracked separately: - -* https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/1358#note_451945972 -* https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3068 - -This issue is blocked by [another issue to gather the correct metrics](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12213)",2.0 -74358395,2020-11-14 00:54:30.931,Help redirect hub.gitlab.com,Meta issue - tracking on https://gitlab.com/gitlab-com/marketing/growth-marketing/growth/-/issues/649#note_446580709,2.0 -74279059,2020-11-12 19:54:32.689,Identify deliverables for beta launch of macOS Shared Runners on .com,"The macOS Build Cloud (runners on .com) [closed beta](https://gitlab.com/groups/gitlab-org/-/epics/3922) is currently in progress. The goal over the next few milestones is to wrap up the work on the autoscaler to transition the solution to [open beta](https://gitlab.com/groups/gitlab-org/-/epics/3926). The target for the open beta launch is milestone 13.10, March 2021. 
- -In addition to the [autoscaling](https://gitlab.com/groups/gitlab-org/-/epics/3936) development, as we learned with the Windows Shared Runners rollout, there is significant work required by the infrastructure team that is a pre-requisite to the open beta launch. This issue is to start the discussion and planning needed to get the infrastructure components in. place.",3.0 -74253429,2020-11-12 11:41:58.953,switch all gstg db replicas to restore from wal-g archive,"We now have working and tested daily db backups done by wal-g in gstg. To be able to disable wal-e in gstg, we need to find all current consumers of the wal-e archive and switch them over to use the wal-g archive. - -For https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11852. - -* [x] archive replica -* [x] delayed replica -* [x] geo secondary db -* [x] database-lab/joe (?) -* [x] any active gitlab-restore instances?",5.0 -74171455,2020-11-10 22:29:08.394,Update cirepom to leverage new gitlab.com repository_storage_moves API,"**Update 2020-11-13**: This issue is being closed, since it does not appear that this will be a viable option, given the ostensibly short term of the impending application-based dog-fooded solution. - -The more I look into modifying the `cirepom` project, the more I become concerned that the status quo is not aligned with the state of the art from the GitLab project repository storage move api. - -I think I might need to develop a better understanding of how the code base operates, and how it integrates with gitlab. - -Currently, it appears that `cirepom` initiates a repository migration in this manner: - -```ruby -def set_project_repository_storage(id,target) - exec :put, ""projects/#{grok id}"", ""repository_storage=#{target}"" -end -``` - -Source: https://gitlab.com/gitlab-com/gl-infra/cirepom/-/blob/master/lib/cirepom/store/gitlab.rb#L31 - -And then subsequently monitors and tracks state, quarantining projects whose migrations exhibited errors. - -This is problematic for two reasons: - -1. The gitlab application now tracks migration state itself and exposes an API to monitor the state of a given migration. -2. The gitlab application project repository storage move API has significantly reduced error rates for migrations and increased stability to the point that the quarantine features here may not be necessary, since it is my understanding that they are designed to prevent repeated attempts to migrate repositories whose migrations had already failed. - -New `gitlab.com` API is at `projects//repository_storage_moves`. - -It doesn't even appear that the defacto GitLab client ruby gem has support for this API, either. As of 2020-11-10, there are no results for: https://github.com/NARKOZ/gitlab/search?q=repository_storage_moves But maybe this is a naive query. - -It is unclear to me whether the Firestore datastore features surrounding migration state management and quarantining are of much use after such an update.",4.0 -74091608,2020-11-09 14:44:39.590,Chef recipe for installing custom wal-g binary,"As there is no release yet for the wal-g version that we are using, we need a chef recipe to install a custom binary: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11619",3.0 -74049552,2020-11-09 02:00:05.292,Renew fuzzit.dev,"Renew fuzzit.dev domain for 4 years -We completed the acquisition of FuzzIt this past year and transferred the domain fuzzit.dev to GitLab. 
- -The website will be deprecated but we want to add 4 years to the registration to prevent anyone from registering the domain and benefitting from the brand equity. - -Please add 4 years to the fuzzit.dev domain.",1.0 -73810238,2020-11-03 15:23:15.345,Enable the new patroni-gitlab-pgchecksums chef recipe to run for patroni nodes,Enable the new `patroni-gitlab-pgchecksums` chef recipe for installing `gitlab-pgchecksums` package for patroni nodes.,2.0 -73762705,2020-11-02 22:09:51.368,Include statements in postgresql elastic logs for better debugging,"Currently, for analysing slow queries, we need to log into a DB host and search through the local logs, because we redacted query statements when sending logs to elastic, for security reasons. - -Instead we should consider to include the statement in the logs and just redact values between `''`. This should allows us to aggregate by class of query and make it easier to debug DB performance issues because of slow queries. - -~""corrective action"" for https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885.",3.0 -73762129,2020-11-02 21:42:58.238,Alert if a small set of queries is dominating postgresql,"Normally, most of the DB workload is spread out over many different queries. If the postgres workload is dominated by only a few slow queries (because of missing indices or statistics leading to bad query plans), this can have a severe impact on overall DB performance - up to a full downtime of GitLab.com as seen in [this incident](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885) - but is sometimes is hard to detect, as we will get many symptomatic alerts but no alerts directly pointing at slow queries. - -We should alert if only a small set of queries is dominating the total query time, as suggested [here](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885#note_436200513). - -~""corrective action"" for https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885.",3.0 -73759743,2020-11-02 20:30:59.436,Upgrade `thanos-query-03-inf-ops`,"A simple apt-upgrade on this box, followed by a restart. See https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11791.",1.0 -73750484,2020-11-02 16:56:31.310,Define SLOs and alerts for GCS storage,"In case of increased GCS latencies or error rates we only alert on symptoms which can make it difficult to find the real cause. We should define SLOs and alerts for GCS requests. - -~""corrective action"" for https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2887",5.0 -73746129,2020-11-02 15:39:42.838,Research converting runbook for database credential rotation to an ansible play,Figure out if it is possible to convert the [credential rotation runbook](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/postgresql-role-credential-rotation.md) into a `db-ops` play.,4.0 -73719229,2020-11-02 08:23:32.491,Automatically enable GCP DocerkHub mirror for DinD builds for shared runners,"## Problem - -With the new [Docker RateLimits](https://www.docker.com/increase-rate-limits) users might start reaching the rate limits of pulling docker images. As discussed in https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11113 we aren't going to be affected by this because we are using the [GCP mirror](https://cloud.google.com/container-registry/docs/pulling-cached-images). 
However this mirror is only configured when pulling images for the job, it is **not** configured for the docker daemons that start [docker in docker](https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-workflow-with-docker-executor). - -If users are using [docker in docker](https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-workflow-with-docker-executor) to build their image, it is going to pull the base images it needs to build the image. Since the docker daemon that is started by dind is not configured to use the mirror it might reach some rate limits. - -## Proposal - -Follow https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker-executor-inside-gitlab-runner-configuration by having our Shared runner fleet automatically mount `/etc/docker/daemon.json` that is configured to use [GCP DockerHub mirror](https://cloud.google.com/container-registry/docs/pulling-cached-images) so even our docker in docker builds will use the mirror. - -What we would need to do: - -1. Update the base VM image that is used in CI to include a `daemon.json` somewhere in the runner manager to have the config below. -
- daemon.json - - ```json - { - ""registry-mirrors"": [ - ""https://registry-mirror.example.com"" - ] - } - ``` - -
- -1. Update the shared runner fleet configuration like below. - -
- config.toml - - ```toml - [[runners]] - ... - executor = ""docker"" - [runners.docker] - image = ""alpine:3.12"" - privileged = true - volumes = [""/opt/docker/daemon.json:/etc/docker/daemon.json:ro""] - ``` -
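-
-Once both pieces are in place, one way to verify from a job that the dind daemon actually registered the mirror (a sketch; the expected URL assumes the GCP mirror, `https://mirror.gcr.io/`, referenced above):
-
-```bash
-# Run inside a CI job that uses the docker:dind service; docker info talks
-# to the dind daemon, so the mirror should show up if the mounted
-# daemon.json was honoured.
-docker info | grep -A1 'Registry Mirrors'
-# Expected output:
-#  Registry Mirrors:
-#   https://mirror.gcr.io/
-```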
- -## Possible problems - -### When users have are configuring the mirror through `command` - -**This will be a breaking change for certain users** - -When a user has the following `command` defined in their `.gitlab-ci.yml` to specify a mirror (they can do this already) and we mount the `/etc/docker/daemon.json` the service is going to fail with the error below. - -User updated docker `dind` service to specify a mirror themselves -```yaml -services: - - name: docker:19.03.13-dind - command: [""--registry-mirror"", ""https://registry-mirror.example.com""] # Specify the registry mirror to use. -``` - -GitLab CI failure -```shell -2020-11-02T08:18:33.103369077Z unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: registry-mirrors: (from flag: [https://mirror.gcr.io/], from file: [https://registry-mirror.example.com]) -``` - -### Users don't expect the `/etc/docker/daemon.json` to be present - -There might be some jobs out that that don't expect the `/etc/docker/daemon.json` to be present which might also break their jobs if we start mounting this file.",13.0 -73589795,2020-10-29 16:44:24.430,Create terraform repo for transient-import project,"We get requests often to make VMs for customer project imports (ex: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11695). As of now, these are manually provisioned by someone in infra and handed off to the support or professional services engineer performing the import. I think we could be much more efficient if we made a terraform repo and allowed engineers to mostly self-service the creation of servers. - -I propose that we re-use the [GCP module in dev-resources](https://gitlab.com/gitlab-com/dev-resources/-/tree/master/modules/gcp) to make this possible. This module was written for support to create VMs in a similar way for interviews or other testing purposes. The main thing we would need to modify is to remove the chef provisioning part and add a firewall rule to the module to allow the support engineer to access via SSH. - -Since much of the terraform is done, this shouldn't take too long to do and would make both infra and support more efficient in these requests.",5.0 -73533796,2020-10-28 17:22:55.269,Install the `pg_checksums` debian package on patroni nodes,"To support postgres [checksums enablement](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11518#note_436803777), install the `pg_checksums` debian package on patroni nodes from the gitlab Aptly package host. - -Package is: `gitlab-pg-checksums_1.0_amd64.deb` - -It should already be uploaded to aptly, but I'll attach it here, just in case. - -[gitlab-pg-checksums_1.0_amd64.deb](/uploads/30539087472549bdbbe9143249a97c53/gitlab-pg-checksums_1.0_amd64.deb) - -The goal of this issue is implement how the package that is already in Aptly will be rolled out using chef in the database nodes.",1.0 -73257732,2020-10-23 16:16:10.359,Help create temporary access for GCP bucket for Package team working on Registry,"We have https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/7318#note_435195251 open to help the team working on registry better understand current development work. - -From the AR: -Justification for this access: Hayley and I are both maintainers of the GitLab Container Registry. 
As part of the upcoming registry upgrade (https://gitlab.com/gitlab-org/container-registry/-/issues/191), we will need to create an inventory of the repositories that exist in the current registry buckets. We may also need to examine these during the upgrade. For this reason, we need to be able to list/scan the buckets using the GCS API. This requires a service account with read access (`Viewer` role, I believe). Having this access is also useful to debug customer issues in general. - -Decide if we want to make a service account or just give viewer role to the two developers temporarily.",2.0 -73108670,2020-10-21 15:14:03.261,Readiness Review for Jaeger,"This checklist is sourced from https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/.gitlab/issue_templates/production_readiness.md. - -Link to the Jaeger runbook, which covers majority of the points listed below: https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/jaeger/README.md - -The scope of this review is: -- deploying Jaeger to the production environment -- Elasticsearch instance deployed to Elastic Cloud (used as storage backend for the Jaeger service) -- Labkit instrumentation - -## Summary - -* [x] Short overview mentioning purpose of the service, dependencies and owners -* [x] Explain the scope of this review and what is explicitly out of scope. - -## Architecture - -* [x] Runbook README.md contains an architecture overview (provide link) - * [x] Runbook README.md contains a logical architecture diagram - * [x] Runbook README.md contains a physical architecture diagram (optional) - * [x] Runbook README.md provides enough information for a reviewer to get an - understanding of the service and it's components, dependencies and - interactions - -## Documentation - -* [x] is there a blue print/design doc? (provide link) -* [x] do we have runbooks? (provide links) -* [x] are runbooks up-to-date? -* [x] where else is documentation for this service located? -* [ ] is there a service catalog entry? (provide link) - * [x] is service catalog listing all dependencies? - * [ ] has service catalog links to all existing documentation? - * [ ] is service catalog linking to readiness review? - -## Performance - -* [x] is there a runbook section with performance characteristics? (it should - cover following considerations, provide link) - * [x] current requests/s (min, max, average), latency characteristics, - saturation, ... - * [x] throtteling/limits - * [x] bottlenecks (cpu-bound, memory-bound, ...) - * [x] is there documentation on how/why we set certain config options that are - affecting performance? - -## Scalability - -* [x] is there a runbook section with scalability information? (it should cover - following considerations, provide link) - * [x] expected load in the future - * [x] how can we scale to the expected load? - * [x] can it be scaled across availability zones or regions? - * [x] are there scalability limitations? - * [x] are we doing performance tests? - -## Availability - -* [x] is there a runbook section covering availability considerations? (it - should cover following topics, provide link) - * [x] failure modes of this service, blast radius, how long does it take to - recover? - * [x] what happens on outage of services we are depending on? - * [x] Availability Zone (AZ) outage - * [x] split brain between AZs - * [x] region outage - * [x] other external dependencies that could affect availability - * [x] what other services are affected by an outage of this service? 
- * [x] is there an existing Recovery Time Objective (RTO) documented? How do we - plan to achieve it? - * [x] do we have an error budget? - * [x] are we doing disaster recovery tests? - * [x] is there a failover procedure? Do we have runbook instructions? - -## Durability - -* [x] is there a runbook section covering durability considerations? (it should - cover following topics, provide link) -* [x] possible failure modes and how to recover from them - * [x] deletion by accident - * [x] disk failure - * [x] data corruption - * [x] GCP outage - * ... - * [x] is there an existing Recovery Point Objective (RPO) documented? How do - we plan to achieve it? - * Backups - * [x] are we testing backup replay? - * [x] are we monitoring backups? - * [x] what is the backup retention policy? - * [x] are backups in a different logical and physical environment? - -## Security/Compliance - -* [ ] is there a runbook section covering security considerations? (it should - cover following topics, provide link) - * [x] list of access roles - * [x] Who has which role? - * [x] How do we protect access? - * [x] Auditability of access - * [x] Which entrypoints need protection? - * [x] How are we applying security updates? (OS and service) - * [ ] Regulations/Policies applying? (PII, SOX, ...) - * [x] how do we protect customer data? - * [x] encryption at rest? - * [x] could customer data leak in logs? - * [x] how long do we keep logs? -* [ ] is someone from security included for the readiness review? - -## Monitoring - -* [x] is there a runbook section covering monitoring? (it should - cover following topics, provide link) - * [x] list key SLIs. Are we monitoring them? - * [x] list SLOs. Are we monitoring/alerting on them? - * [x] list of relevant alerts - * [x] are alerts actionable and linking to a runbook? - * [x] do we have a metrics catalog entry for the service? (provide link) - * [x] list of relevant dashboards - * [x] list of relevant logs",5.0 -72672050,2020-10-13 20:22:23.773,Estimate how quickly the 16TB disks allocated for new project repository gitaly shards are filled up with new projects,"Estimate how quickly the 16TB disks allocated for new project repository gitaly shards are filled up with new projects. - -This will be slightly tricky, since we do not have a constant number of shards participating in this round robin arrangement.",1.0 -72668091,2020-10-13 18:45:56.565,Move sidekiq: Shard Detail dashboard to use recording rules,The shard detail dashboard can be quite slow to load and won't produce data with specific time frames. We can move the queries used in these dashboards to recording rules to pre-compute the queries.,1.0 -72590774,2020-10-12 15:47:22.409,Create new gitaly storage shard node to replace `nfs-file51`,"Gitaly storage shard `nfs-file51` (`file-51-stor-gprd.c.gitlab-production.internal`) is at `79.12%` usage as of `2020-10-12`. - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file51`. - -There are currently 7 gitaly shard nodes configured to accept new projects (`nfs-file51-57`). Maintaining at least 7 or 8 shards with sufficient capacity for new user repository creation is about the target level of availability we prefer and is important because it helps us avoid a scenario in which shards fill up too quickly. 
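As a quick reference, the current utilization of a shard can be checked with a Thanos query along these lines (a sketch only - the mountpoint and label names are assumptions, not verified against our recording rules):

```shell
# Sketch: query current disk utilization of one shard via the Thanos HTTP API.
# The fqdn/mountpoint labels are assumptions.
curl -sG 'https://thanos-query.ops.gitlab.net/api/v1/query' \
  --data-urlencode 'query=1 - (
      node_filesystem_avail_bytes{fqdn="file-51-stor-gprd.c.gitlab-production.internal",mountpoint="/var/opt/gitlab"}
    / node_filesystem_size_bytes{fqdn="file-51-stor-gprd.c.gitlab-production.internal",mountpoint="/var/opt/gitlab"})'
```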
- 

To remove a single node from the new projects storage rotation cluster, and also prevent the acceleration of capacity consumption, a new gitaly shard node should be created and added to the list of shards configured in the GitLab Application to store new project repositories.

[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",1.0
-72269394,2020-10-06 16:06:45.285,DNS Audit- Please review these IPs,"

**Details**
 - Point of contact for this request: @mmaneval20
 - If a call is needed, what is the proposed date and time of the call: [+ Date and Time +]
 - Additional call details (format, type of call): [+ additional details +]

**SRE Support Needed**
We are working with a third party security scanner, Bit Sight. They have identified the following IPs as associated with GitLab via DNS records. We have found that a few of them were given up, but perhaps the DNS record wasn't updated.

My ask is that you review [this list](https://docs.google.com/spreadsheets/d/1O6FbyB-uGpVbWjZtEAwVxeM53Db71YMcb2fLUvBUfqI/edit?usp=sharing_). If there are any IPs that are no longer ours, please indicate that on the Google Doc. Then, please validate that the DNS record was properly removed/updated.

If they are ours, could you please indicate whether each is: Production, Pre-production, or User Managed.

I will then get any IPs removed from the Bit Sight Portal.

Please note that at this time, I'm not saying there is necessarily anything vulnerable or wrong with these IPs. However, we NEED to validate these IPs so we can remove the misattributed IPs that could be negatively impacting our score.

If it's easier to get on a sync call, let me know.

",2.0
-72259382,2020-10-06 13:53:48.640,Automate k8s-workloads sidekiq configuration updates,"Instead of creating an MR in the `k8s-workload/gitlab-com` project with an empty yaml file and then reverting the MR in order to execute a noop change to trigger a pipeline run, or clicking a button on a pipeline web page, switch to using [the programmatic method for triggering a pipeline build](https://gitlab.com/gitlab-com/gl-infra/production/-/blob/c92bba1c33cee8d215d9822e5fa0c4a62beef054/.gitlab/issue_templates/storage_shard_creation.md#run-k8s-workload-master-branch-pipeline) and include its execution as part of a pipeline stage when changes are merged to the main branch of our chef repo project.",1.0
-72241340,2020-10-06 09:01:21.262,Container registry - defining PostgreSQL setup for prod,"We would like to consider in this issue what the production setup for the container registry database will be.


We had some chats on the issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11294

I would like to summarize here the current idea of the PostgreSQL setup for production:

Hardware spec: `n-standard-32` with 64 GB memory.


Architecture:

Using the production consul cluster, have an entry for our Patroni cluster.

We will have 3 nodes initially on the cluster:
1-Primary for read-write traffic.
1-Secondary for read-only traffic.
1-Secondary that will not receive traffic, to take snapshots and backups.

Have 2 pgbouncer nodes in front of the primary database node for the read-write traffic.
Have 2 pgbouncer nodes in front of the read-only servers. 
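As a rough illustration of the two pools described above (hostnames, database name and pool sizes are assumptions, not the final configuration):

```shell
# Illustration only: one pgbouncer pool pointing at the primary (read-write)
# and one pointing at the read-only replicas. All names and sizes are placeholders.
cat > /etc/pgbouncer/databases.ini <<EOF
[databases]
registry    = host=registry-db-primary.gprd.example port=5432 dbname=registry pool_size=50
registry-ro = host=registry-db-replica.gprd.example port=5432 dbname=registry pool_size=50
EOF
```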
- - -Let's enable data checksums in the database from the initial setup. - - -Postgresql Setup:",8.0 -72063955,2020-10-02 08:19:45.331,Work with Delivery team for the initial Container Registry (new architecture) rollout process in Staging,"Work with the Delivery team on any changes needed to deployment tooling - -(e.g. work with the team to check the connectivity to the database, and monitor how they will execute the database migrations)",2.0 -71893025,2020-09-29 19:41:55.652,Create new gitaly storage shard node to replace `nfs-file50`,"Gitaly storage shard `nfs-file50` (`file-50-stor-gprd.c.gitlab-production.internal`) is at `82.50%` usage as of `2020-09-29`. - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file50`. - -There are currently 6 gitaly shard nodes configured to accept new projects (`nfs-file51-56`). Maintaining at least 7 or 8 shards with sufficient capacity for new user repository creation is about the target level of availability we prefer and is important because it helps us avoid a scenario in which shards fill up too quickly. - -Note: I have already removed `nfs-file50` from the configuration to receive new repositories, because it has become well past the 80% threshold by which it should have been removed already. - -To remove a single node from the new projects storage rotation cluster, and also prevent the acceleration of capacity consumption, a new gitaly shard node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. - -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",1.0 -71885097,2020-09-29 16:13:53.647,"2020-09-29: Silence alert for: The authorized_projects queue, main stage, has a queue latency outside of SLO","Silencing alert for: The authorized_projects queue, main stage, has a queue latency outside of SLO (`sidekiq_background_job_queue_apdex_ratio_burn_rate_slo_out_of_bounds`) - -Duration: `30 days` - -Request for silence: https://gitlab.slack.com/archives/CB3LSMEJV/p1601395587142400 - -Feature category: `authentication_and_authorization`",1.0 -71712100,2020-09-25 22:07:45.818,Add zone checks to firewall rules for Cloudflare Audit Log,"#### Problem - -The Cloudflare interface does not make it easy to get a quick overview of our current rules applied to each zone and compare them for consistency. There is also variation due to configuration required for production that may not be necessary for staging. - -#### Proposal - -Cloudflare firewall rules described by issues in gitlab-com/gl-infra/cloudflare-firewall> should be labelled with the zones they are applicable to and checked by Cloudflare Audit log. 
- -The current proposed labels from https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2783/diffs#5a3efee5111997f9405246307d030ade2f15d696_148_152 - -```markdown - - `zone:gitlab-com` - - `zone:staging-gitlab-com` - - `zone:gitlab-net` -``` - -#### Potential Tests - -- [x] Add the necessary labels to existing open tickets -- [ ] Update Cloudflare Audit to check for the rules in each zone -- [ ] Update Cloudflare Audit to check if the matching rules in each zone they are in are identical -- [ ] For the rules that exist in multiple zones, verify that they are in the same order. - -cc @T4cC0re @cmcfarland @dawsmith for feedback and discussion.",3.0 -71707213,2020-09-25 19:41:08.134,Create new gitaly storage shard node to replace `nfs-file49`,"Gitaly storage shard `nfs-file49` (`file-49-stor-gprd.c.gitlab-production.internal`) is at `85.07%` usage as of `2020-09-25`. - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file49`. - -There are currently 6 gitaly shard nodes configured to accept new projects (`nfs-file49-55`). Maintaining at least 7 or 8 shards with sufficient capacity for new user repository creation is about the target level of availability we prefer and is important because it helps us avoid a scenario in which shards fill up too quickly. - -Note: I have already removed `nfs-file49` from the configuration to receive new repositories, because it has become well past the 80% threshold by which it should have been removed already. - -To remove a single node from the new projects storage rotation cluster, and also prevent the acceleration of capacity consumption, a new gitaly shard node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. - -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",3.0 -71575839,2020-09-23 15:28:59.425,"Container Registry: Set up backups, delayed replication, and snapshots in production","We should define for the registry database cluster in staging: - -* backup pipelines -* restore pipelines -* delayed replica -* disk snapshots from one of the replicas. We could consider having a replica without traffic for all this activities.",8.0 -71575623,2020-09-23 15:27:04.579,"Set up backups, delayed replication, and snapshots in staging","We should define for the registry database cluster in staging: - -**backup pipelines** - - - We execute a daily backup and copy the wal files daily. This is triggered by a shell cron-job that we have in chef. We need to adapt to be generic on the database name. - - We have a restore pipeline to verify if the backup can restore properly. - - Enable our deadmansnitch alerts for the restore of this new database backup restore. - - review our alerts on the backup generation and alerts. - -**delayed replica** - - - As we have in GitLab.com, we need a delayed replica, where we apply the WAL files with a delay of 8 hours. The main intention of this database is to restore data in case of accidents, e.g.: if we had an accidental `DELETE` statement, or some wrong `WHERE` clause in a `DELETE` statement in production, we would have an environment to recover the deleted data quickly. - - This host is not part of the Patroni cluster. 
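A minimal sketch of how the delayed replica might be configured (PostgreSQL 12 style, since this host sits outside Patroni; the data directory path and connection details are assumptions):

```shell
# Sketch only: run on the delayed replica after restoring a basebackup.
# Data directory and conninfo are assumptions, not our actual values.
touch /var/opt/gitlab/postgresql/data/standby.signal
cat >> /var/opt/gitlab/postgresql/data/postgresql.conf <<EOF
primary_conninfo = 'host=<registry-primary> user=gitlab_replicator'  # placeholder
recovery_min_apply_delay = '8h'  # apply WAL with an 8 hour delay
EOF
```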
- 

**Disk snapshots**

- We want to have a dedicated replica that will not receive traffic and cannot be promoted to be the leader of the Patroni cluster for the container registry database cluster.
- As we do for GitLab.com, this snapshot has to follow these steps to guarantee it is consistent:
   * Execute a pg_start_backup before the snapshot is initiated.
   * Execute a GCP disk snapshot.
   * Execute a pg_stop_backup after the snapshot is finished.

All the retention policies should be the same as for GitLab.com.


Acceptance criteria:
- [ ] Fulfill all the requirements from the issue",13.0
-71575296,2020-09-23 15:24:58.714,Work with Delivery team for the initial rollout process,"Work with the Delivery team on any changes needed to deployment tooling

(e.g. work with the team to check the connectivity, and monitor how they will execute the database migrations)",2.0
-71574618,2020-09-23 15:16:59.862,Create the (new) monitoring for the Container Registry DB in production,"We should review our monitoring setup in a generic way so that we can deploy and monitor several clusters without impacting each other.

The metrics and alerts could be customized per cluster since the business requirements can be different.

Also, we should review our tools (scripts, ansible-playbooks, ...) that we could test and refactor if needed, to be used in different clusters.",8.0
-71456812,2020-09-21 15:14:15.941,Connect Smartling to Brand and Digital Design team Repository,"

**Details**
 - Point of contact for this request: [+ @sdaily +]
 - If a call is needed, what is the proposed date and time of the call: [+ Date and Time +]
 - Additional call details (format, type of call): [+ additional details +]

**Documentation**

Verify the following system prerequisites are installed:

- Repo user created for Repo Connector
- The application should be hosted on a server that is continuously available and/or publicly addressable.
- Java version 8 or higher.
- Disk space requirements: It should have enough space to clone your Git repositories, as well as 50MB for the installation of the Repository Connector.
- Links to relevant repos (if public)
- Example resource files (e.g. GetText .PO file, JSON file, etc)
- Help Center articles on the Repo Connector: https://help.smartling.com/hc/en-us/sections/360001685234-Repository-Connector

**SRE Support Needed**

*dsmith - from call notes, we just need to:

1. Create a Smartling GCP Project under the Marketing Folder
1. Set up a VM in the new project - n1-standard-2 should do, ubuntu 18.04 or 20.04 - set up auto-patching with a 100GB disk to start.
1. Do basic hardening
1. Set up a way to maintain ssh keys for the marketing team to access the config files
1. Install the connector per the docs above.

Open question: should we set up a simple ansible repo to do the hardening/key management (or Okta ASA)/connector install? 
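For reference, step 2 above might look roughly like this (the project name and zone are assumptions):

```shell
# Hypothetical project/zone - sized per the notes above (n1-standard-2, Ubuntu 20.04, 100GB disk).
gcloud compute instances create smartling-connector-01 \
  --project=smartling-marketing \
  --zone=us-east1-c \
  --machine-type=n1-standard-2 \
  --image-family=ubuntu-2004-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=100GB
```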
- -## Related MRs - -- https://gitlab.com/gitlab-com/marketing/corporate_marketing/corporate-marketing/-/merge_requests/114 -- https://gitlab.com/gitlab-com/marketing/corporate_marketing/corporate-marketing/-/merge_requests/117 -- https://gitlab.com/gitlab-com/marketing/corporate_marketing/corporate-marketing/-/merge_requests/119 -- https://gitlab.com/gitlab-com/marketing/corporate_marketing/corporate-marketing/-/merge_requests/120 -- https://gitlab.com/gitlab-com/marketing/corporate_marketing/corporate-marketing/-/merge_requests/123 -- https://gitlab.com/gitlab-com/marketing/corporate_marketing/corporate-marketing/-/merge_requests/124 -- https://gitlab.com/gitlab-com/marketing/corporate_marketing/corporate-marketing/-/merge_requests/128 - -/cc @mpreuss22 @bmatturro @brandon_lyon @laurenbarker",8.0 -71319088,2020-09-17 15:23:18.778,Investigate to turn off Cloudflare E-Mail Obfuscation for performance reasons,"Current on gitlab.com an JS is loaded automatically through Cloudflare which is responsible for a feature called E-Mail Obfuscation, which is there to protect e-mail adresses from being scraped. It seems that this was not deliberately turned on but rather is a default. Would be great to investigate and/or turn the feature off to gain some performance through that. - -JS that is injected on each page through Cloudflare automatically: https://gitlab.com/cdn-cgi/scripts/5c5dd728/cloudflare-static/email-decode.min.js - -Example page where it is loaded: https://gitlab.com/gitlab-org/gitlab - -Cloudflare info: https://support.cloudflare.com/hc/en-us/articles/200170016-What-is-Email-Address-Obfuscation-",2.0 -71178779,2020-09-15 06:37:53.251,"Create a test plan with Container Registry team to test the new cluster (Q/A + devs), in Staging",Work in progress - see the parent EPIC.,2.0 -71177367,2020-09-15 05:47:36.130,New Readiness Review for Container Registry + new PG DB cluster,"The new Infra Readiness Review process and template are being created and validated [here](https://gitlab.com/gitlab-com/gl-infra/readiness/-/merge_requests/43). 
We'll need to follow these two processes/templates: -- [Operational readiness](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/13c0b59a3e61af6993084be786a06c74199bab51/.gitlab/issue_templates/service_readiness.md) -- [Production Readiness](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/13c0b59a3e61af6993084be786a06c74199bab51/.gitlab/issue_templates/production_readiness.md) - - -Critical points to fully cover in this readiness review are: -- [ ] Complete our DB Runbooks - taking into account our new DB clusters (https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11321) -- [ ] Update Monitoring (https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11326) -- [ ] A robust Backup setting - -(This issue is still work in progress)",8.0 -71163919,2020-09-14 23:03:29.517,Cloudflare WAF Rules for `/api/graphql` and `/-/wikis`," - -**Details** - - Point of contact for this request: [+ @asaba +] - - If a call is needed, what is the proposed date and time of the call: [+ Date and Time +] - - Additional call details (format, type of call): [+ additional details +] - -**SRE Support Needed** -[+ Support Request Details +] - -The following paths should be excluded from the WAF rules: - -* POSTs to `/api/graphql`, exact match -* POST to paths containing `/-/wikis/` -* POST to paths ending with `/-/wikis` - -Based on the most recent traffic analysis, these are the last exceptions I think we should add before enabling the WAF in block mode. Details here: https://gitlab.com/gitlab-com/gl-security/appsec/appsec-team/-/issues/50#note_412543041 - -",3.0 -71022724,2020-09-11 11:29:12.597,gitlab-restore backup restores failing (excessive duration): Reduce the backup size or enable WAL compression.,We get [deadman snitch alerts](https://deadmanssnitch.com/snitches/178d5bf474) for missing backup restores.,3.0 -70938936,2020-09-09 15:36:27.618,New DB for Container Registry: Create the DB monitoring in Staging,"**Metrics, dashboards and alerts** -Our goal is to collect all the metrics to have the following dashboards for the container registry: - -- Patroni overview: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1 -- Postgresql overview: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1 -- Pgbouncer overview: https://dashboards.gitlab.net/d/PwlB97Jmk/pgbouncer-overview?orgId=1 -- Host stats: https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats-old-prometheus?orgId=1&var-environment=gprd&var-node=patroni-01-db-gprd.c.gitlab-production.internal&var-promethus=prometheus-01-inf-gprd - -Initially, the alerts should be equal to GitLab.com. If we need we will adapt them. - -**Logs** - -We would like to ensure that we have all the logs from the database ecosystem(PostgreSQL, Patroni, PgBouncer and Consul) available in: - -* ELK - Kibana - -* Sentry",13.0 -70924882,2020-09-09 11:27:37.216,Review all the runbooks for the database ecosystem to attend different databases,"we need to review all the database related runbooks to attend the possibility of being used in different databases. - -Some points to review, hardcoded values, that could be different in other databases. - -Also always consider a certain level of abstraction since we will have more clusters in the future.",8.0 -70885114,2020-09-08 15:30:44.688,Provision a new database cluster in production for the container register service,"We need to provision a PostgreSQL cluster in production, after our staging tests are completed. 
This issue will address the change design and preparation (Design/validation sessions, Change testing, create documentation), with the below requirements. - -Requirements: - -* Install PostgreSQL version 12. -* The hardware specs have to be evaluated with the traffic that we are expected. -* Create a cluster with 1 primary and 2 nodes read-only. -* Configure a new Patroni cluster. -* Create a new cluster in consul (that Patroni will use) -* Provision 2 pgbouncers one to receive the traffic Read-write and other one to receive traffic read-only.",8.0 -70815983,2020-09-07 09:43:23.463,Memory troubleshooting guide,"(Coming from this incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2581) - -Diagnosing memory issues is notoriously hard. There are several methods that can be used, and they are largely dependent on the type of process being profiled. - -It would be good to have a guide that goes into how to diagnose high memory usage. - -It may make sense to have separate guides per process type. - -* Go: We have pprof and potentially even continuous profiling. -* Ruby: Stackprof, heap dump via objspace. GC stats. jemalloc stats. -* Generic: pmap, smaps, core dump, heap dump via gdb. -* Novel: [heaptrack](https://github.com/KDE/heaptrack), [tcmalloc](https://github.com/google/tcmalloc), [poireau](https://github.com/backtrace-labs/poireau). - -Some of these need some more work on the tooling side on our end. But at the very least pprof and continuous profiling would be good to document. - -refs https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/84",1.0 -70807465,2020-09-07 07:08:44.673,Provision new Patroni cluster in staging for the Container Registry service,"We need to provide a PostgreSQL database cluster in staging. - -### Status - -* **MRs** - * **DBTB** - * Chef: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/33 - * TF: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/2581 - * **Staging** COMPLETED - * Chef: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/75 - * TF: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/2615 - * CR: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4796 - -Requirements: -- [x] Install PostgreSQL version 12. -- [x] Hardware config `n1-standard-8`, or we could migrate to `n1-standard-16` -- [x] Create a cluster with 1 primary and 2 nodes read-only. One of them dedicated to snapshots. -- [x] Configure a new Patroni cluster. -- [x] Create a new cluster in consul ( using the same cluster we use for GitLab.com ). -- [x] Provision 2 pgbouncer pools, one to receive the traffic Read-write and the second to receive traffic read-only. -- [ ] We want to offer by default redundancy on pgbouncer. We will have at the moment 2 instances to balance the load on the pgbouncer level since pgbouncer is single-threaded, and in case of a failure we would redirect the load on the second instance. -- [ ] If we need to expand we would use 2 CPU-cores from each node from the pgbouncer layer that are in front of the primary database host, and would not impact performance on the database layer. Imagine from a hardware perspective form the primary with 32 cores having 4 cores dedicated for pooling, is over 10% that I consider high. And we would be able to scale up this layer. 
-
- [ ] If we want to use functionality such as pgbouncer pause in the future, it is ideal to have the pgbouncer nodes on a separate tier, principally considering a hardware problem where the pgbouncer could be impacted together with the database node.
- [x] We need to offer 2 pgbouncer pools, read-write and read-only, to enable good practices from our developers to redirect the read-only traffic to the secondaries.
- [x] Let's enable data checksums in the database from the initial setup.",21.0
-70643594,2020-09-02 16:31:16.518,Add restore_command to recovery.conf in patroni cluster,"Adding `wal-g wal-fetch` as a `restore_command` to the recovery.conf in our patroni cluster would help with catching up from a big replication delay by loading WAL files from GCS until we are close enough to switch over to streaming replication (which will be done transparently by postgres).

This would reduce the load on the primary and also enable recovery when WAL files are already missing on the primary.

We already have wal-g binaries installed on all patroni nodes in gstg and gprd. While gstg will work out of the box because we already use wal-g for wal-push there, we need to point the wal-g configuration in gprd to the wal-e GCS bucket, as we did not enable wal-g wal-push there yet.

This is a corrective action for https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2570.",5.0
-70642475,2020-09-02 15:59:56.683,Improve replication lag runbook instructions,"The runbook section pointed to by the replication lag alert doesn't cover related issues with WAL files not being cleaned up while a replica is lagging behind, and should get instructions on dealing with unused replication slots.

This is a corrective action for https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2570",1.0
-70636194,2020-09-02 13:47:51.383,disable-chef-client isn't preserved over reboots,"In https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2570#note_401672637 we saw that disabling chef-client with `disable-chef-client` wasn't preserved over a reboot - which unexpectedly brought the DB back into rotation with a high replication delay.

We need to figure out why it isn't preserved over reboots (the node was in an unhealthy state, so maybe it just failed to write changes to disk in this one case and works in general - need to test) and fix it.",3.0
-70624806,2020-09-02 10:02:37.731,Adjust last backup alert threshold to a meaningful value,"We currently alert if the last successful basebackup was more than 48h ago. This is problematic as it will randomly alert either around 14h after a backup failed (if the next backup took slightly longer than the last successful one) or not alert at all (if the next backup took less time than the last successful one). We should set it to something like 30h to give a 4h tolerance in case the next backup unexpectedly takes longer than the previous one for some reason (e.g. the Sunday backup might finish slightly faster than the Monday backup because of less traffic). 
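Something along these lines should work, assuming we keep the existing recording rule used by the Thanos link below (the alert name, duration and labels are placeholders):

```yaml
# Sketch only - not the final rule; name, "for" duration and labels are assumptions.
- alert: WALEBaseBackupDelayed
  expr: gitlab_com:last_wale_successful_basebackup_age_in_hours >= 30
  for: 30m
  labels:
    severity: s3
  annotations:
    title: Last successful basebackup is older than 30 hours
```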
- -Looking at [thanos](https://thanos-query.ops.gitlab.net/graph?g0.range_input=2w&g0.max_source_resolution=0s&g0.expr=gitlab_com%3Alast_wale_successful_basebackup_age_in_hours&g0.tab=0), the variance in backup time is less than 2h.",1.0 -70422575,2020-08-28 13:36:15.815,Patroni not failing over when data disk is full,"We had an [incident in staging](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2583) where the data disk on the primary was running full, making the DB unavailable, but Patroni did not fail over to another node. - -The patroni logs contained these exceptions all the time: - -``` -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: 2020-08-27 11:50:01,650 ERROR: get_postgresql_status -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: Traceback (most recent call last): -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: File ""/opt/patroni/lib/python3.5/site-packages/patroni/api.py"", line 505, in query -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: with self.patroni.postgresql.connection().cursor() as cursor: -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: File ""/opt/patroni/lib/python3.5/site-packages/patroni/postgresql/__init__.py"", line 222, in connection -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: return self._connection.get() -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: File ""/opt/patroni/lib/python3.5/site-packages/patroni/postgresql/connection.py"", line 23, in get -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: self._connection = psycopg2.connect(**self._conn_kwargs) -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: File ""/opt/patroni/lib/python3.5/site-packages/psycopg2/__init__.py"", line 130, in connect -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: psycopg2.OperationalError: could not connect to server: Connection refused -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: #011Is the server running on host ""localhost"" (127.0.0.1) and accepting -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: #011TCP/IP connections on port 5432? 
-2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: During handling of the above exception, another exception occurred: -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: Traceback (most recent call last): -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: File ""/opt/patroni/lib/python3.5/site-packages/patroni/api.py"", line 452, in get_postgresql_status -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: self.server.patroni.postgresql.lsn_name), retry=retry)[0] -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: File ""/opt/patroni/lib/python3.5/site-packages/patroni/api.py"", line 424, in query -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: return self.server.query(sql, *params) -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: File ""/opt/patroni/lib/python3.5/site-packages/patroni/api.py"", line 511, in query -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: raise PostgresConnectionException('connection problems') -2020-08-27_11:50:01 patroni-01-db-gstg patroni[24887]: patroni.exceptions.PostgresConnectionException: 'connection problems' -``` - -We need to make sure that patroni is able to fail over when the primary becomes unavailable because of no space left on the data disk.",5.0 -70420669,2020-08-28 13:01:13.383,Prevent DB WAL files to fill up the data disk,"Failing `archive_command`, unused replication slots or [failing logical replication](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2583#note_403884088) can prevent WAL files to be cleaned up. We need to take measures to prevent them to fill up the data partition as this can lead serious problems and data loss.",3.0 -70413585,2020-08-28 10:20:09.667,Alert on reboots,"While alerting on symptoms should be preferable, for some classes of nodes we can be certain that a reboot will cause noticeable issues. It would be helpful to get an alert for a reboot in this case, because often from looking at the symptoms it is not always easy to deduce a reboot as cause and sometimes the symptoms might still be below thresholds so that we would miss frequent reboots. - -We should alert (but probably not page) for reboots of Gitaly and Patroni nodes. - -We should also alert on high reboot frequencies for all nodes. - -There is a reboots dashboard already: https://dashboards.gitlab.net/d/yzukVGtZz/reboots?orgId=1",3.0 -70226196,2020-08-24 15:13:17.537,GitLab connection troubleshooting instructions for customers,"When customers are reporting problems connecting to GitLab.com it is often hard to debug the cause because of missing information. We should provide troubleshooting instructions for customers so they can help us debug the problems and make sure support has documentation at hand to point customers to. - -A basic first aid kit would be something like - -* `traceroute gitlab.com` -* `curl http://gitlab.com/cdn-cgi/trace` -* `curl https://gitlab.com/cdn-cgi/trace` -* `curl -svo /dev/null https://gitlab.com` - -For the future, we should work on enabling [NEL](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10754).",1.0 -69812143,2020-08-13 17:00:21.799,[Runbook] Update the permanent maintenance mode section of the patroni management runbook,"During the ""simulation"" demo, it was noticed that there was an area or two that could be improved. 
- -Update the permanent maintenance mode section of the patroni management runbook so that `Step 3` includes an explanation of why, not just what: https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/patroni-management.md#permanently-marking-a-replica-as-in-maintenance",2.0 -69679310,2020-08-11 06:52:19.855,Improve postgres runbook,"I recently received a page for increasing numbers of dead tuples on `patroni-01`, and ran into a dead end while [following the runbook to investigate](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/postgres.md#tables-with-a-large-amount-of-dead-tuples). As a follow-up to the investigation from https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2506, we should work with Ongres to ensure that the runbook leads to better/actionable steps for resolution. - -/cc @albertoramos @Finotto",1.0 -69552071,2020-08-07 16:34:39.514,"Add a pagerless, quieter, symlinkable, pipeable version of the existing gkms show utility","It would be nice to have a pagerless, quieter, symlinkable, pipeable version of the existing gkms show utility in the [`chef-repo`](https://ops.gitlab.net/gitlab-cookbooks/chef-repo).",1.0 -69510779,2020-08-06 14:22:17.943,Install `pg_activity` observability tool on the patroni fleet,"**Current Situation** - - -Currently in order to inspect activity in the database one must open a `psql` session using `sudo gitlab-psql` on a particular node and run a query: `SELECT * FROM pg_stat_activity;` and then page through results until one finds what one is looking for. - -**Desired Outcome** - - -This change proposes that we install the `pg_activity` tool: https://wiki.postgresql.org/wiki/Monitoring#pg_activity - - -Project: https://pypi.org/project/pg-activity/ - -Source homepage: https://github.com/dalibo/pg_activity - -This will support an easier method for examining database activity with a wider range of options for sorting and filtering that are only a keystroke away, instead of crafting or copypasta queries into a database `psql` session. - -**Acceptance Criteria** - -- [x] The json chef role for the staging patroni fleet will have to be modified to include: - 1. [x] The installation of the `pg-activity` pypi package. - 1. [x] The invocation script to automatically use credentials from `.pgpass`. -- [ ] Repeat the above steps for both the `gstg` and the `gprd` environment-specific roles.",2.0 -69494139,2020-08-06 06:31:33.481,Praefect cloud_sql Database in staging is almost constantly near 100% CPU,"For example: https://gitlab.slack.com/archives/C017L2ZV4KE/p1596693091048000 - -![image](/uploads/601fab9e3b845e900939f96c547107be/image.png) - -https://dashboards.gitlab.net/d/alerts-sat_cloudsql_cpu/alerts-cloudsql_cpu-saturation-detail?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gstg&var-type=monitoring&var-stage=main&from=now-24h&to=now - -Note: the service will change after https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2607, so the chart variables in the link above will need to change too. - -cc @ahmadsherif @alejandro",3.0 -69421064,2020-08-04 17:44:43.419,revamp storage dashboard,redoing https://app.periscopedata.com/app/gitlab/690541/WIP:-GitLab.com-Storage-Cost-Stats to include all stats over time,1.0 -69383079,2020-08-03 19:39:04.776,Add support for remote file management through chef roles,"Add support for remote file management through chef roles. 
- -See: https://ops.gitlab.net/gitlab-cookbooks/gitlab-server/-/merge_requests/2",2.0 -69382125,2020-08-03 18:55:04.728,Change system-wide default branch name to `main` on GitLab.com,"Change system-wide default branch name to `main` on GitLab.com. - -See discussion at: https://gitlab.com/gitlab-org/gitlab/-/issues/221013 - -/cc @danielgruesso",4.0 -69370978,2020-08-03 15:20:57.779,Add the database user management tooling scripts to the patroni fleet,"Add the database user management tooling scripts to the patroni fleet. - -- https://gitlab.com/gitlab-com/runbooks/-/blob/nnelson/add-postgresql-database-user-role-utility-scripts-gl-infra-production-1847/scripts/database-gitlab-superuser-session-connection-terminate.sh -- https://gitlab.com/gitlab-com/runbooks/-/blob/nnelson/add-postgresql-database-user-role-utility-scripts-gl-infra-production-1847/scripts/database-gitlab-superuser-user-role-create.sh -- https://gitlab.com/gitlab-com/runbooks/-/blob/nnelson/add-postgresql-database-user-role-utility-scripts-gl-infra-production-1847/scripts/database-gitlab-superuser-user-role-password-update.sh",2.0 -69370790,2020-08-03 15:15:05.060,Create new gitaly storage shard node to replace `nfs-file49`,"Gitaly storage shard `nfs-file49` (`file-49-stor-gprd.c.gitlab-production.internal`) is at `60.02%` usage as of `2020-08-03`. - -Update: `64.16%` as of `2020-08-12 1940 utc` - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file49`. - -There are currently 7 gitaly shard nodes configured to accept new projects (`nfs-file48-54`). Maintaining at least this level of availability is important to avoid any shards filling up too quickly. - -To remove a single node from the new projects storage rotation cluster, and also prevent the acceleration of capacity consumption, a new gitaly shard node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. - -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",2.0 -69370738,2020-08-03 15:13:39.595,Create new gitaly storage shard node to replace `nfs-file48`,"Gitaly storage shard `nfs-file48` (`file-48-stor-gprd.c.gitlab-production.internal`) is at `62.63%` usage as of `2020-08-03`. - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file48`. - -There are currently 7 gitaly shard nodes configured to accept new projects (`nfs-file48-54`). Maintaining at least this level of availability is important to avoid any shards filling up too quickly. - -To remove a single node from the new projects storage rotation cluster, and also prevent the acceleration of capacity consumption, a new gitaly shard node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. 
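For illustration, once the new node exists, adding it to the set of storages that receive new projects is roughly an application-settings change like the following (the storage name and weight are placeholders; the change issue template below is the source of truth for the exact steps):

```shell
# Placeholder storage name/weight - assumes the weighted repository storages setting
# exposed via the application settings API; follow the template below for the real change.
curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
  "https://gitlab.com/api/v4/application/settings" \
  --data "repository_storages_weighted[nfs-file58]=100"
```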
- -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",2.0 -69308903,2020-07-31 18:59:04.776,Proxy *.about.gitlab-review.app via Fastly,"Currently about.gitlab.com is proxied via Fastly but its review apps, hosted on about.gitlab-review.app, are served directly from the `about-src` server. This means that Fastly-dependent features, like edge redirects, can't be tested currently on the review apps domain, requiring us to use a separate domain (about.staging.gitlab.com). Proxying the review apps domain would allow us to simplify our www-gitlab-com infrastructure.",1.0 -69304900,2020-07-31 16:04:23.411,Delete provisioned VM for gl-infra-10957,"A VM was provisioned for customer migration. - -https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10957 - -Delete it in a month. - -https://console.cloud.google.com/compute/instancesDetail/zones/us-east1-c/instances/ps-congregate-10957?project=transient-imports&organizationId=769164969568",1.0 -69299269,2020-07-31 13:27:49.163,Fix wal-g gcs uploads,wal-g gcs uploads are not reliable. See https://github.com/wal-g/wal-g/issues/266 and https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9627#note_377259758.,8.0 -69299009,2020-07-31 13:19:50.256,Fix wal-g backup-list sorting,"wal-g is sorting backups by modification timestamp instead of creation timestamp - this can lead to taking the wrong backup when using `backup-fetch LATEST`. We should sort by creation timestamp instead. - -See https://github.com/wal-g/wal-g/issues/694.",5.0 -57456373,2020-07-14 17:21:22.035,RepoStor :: Migrations :: Repos of blocked users : HDD,"After conversations with @jramsay I learned that repos of users that have been blocked are not removed, so these would be good targets to move off to HDD.",3.0 -57053756,2020-07-14 11:20:48.609,Use `general-public-splashscreen` as the default start page on dashboards.GitLab.com,"I'd like to switch the default dashboard on our public grafana instance over to this: - -https://dashboards.gitlab.com/d/general-public-splashscreen/general-gitlab-dashboards?orgId=1 - -![image](/uploads/812ecc3ff84854d3222f4827f496da81/image.png) - -Why: - -1. Key ARES (apdex, request-per-second, error rate and saturation) metrics for our key public-facing services. -1. Service status descriptions for each service (healthy, warning, degraded). -1. Maintained in Git - -@bjk-gitlab wdyt?",1.0 -55804436,2020-07-13 18:35:31.167,Demonstrate runbook to add PgBouncer instances,"Demonstrate https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/pgbouncer/pgbouncer-add-instance.md - -- Creating a PgBouncer Read-Write node: - - https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/merge_requests/1925 - - https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3882 -- Creating a PgBouncer Read-Only instance: - - https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3883",2.0 -55752141,2020-07-13 17:40:00.049,Create #elasticsearch-lab Slack bot for the Elasticsearch API,"The idea is to replicate the success of the `#database-lab` Slack channel but instead for the Elasticsearch index instead of the database. 
- -### What would it look like - -There would be a new app in Slack that has read-only access to our Elasticsearch index (`grpd`) and would profile/validate the query sent to it, redacting any matching documents. - -The bot would then issue the query, and upload the redacted response to the channel as a file upload. Some metadata (like the **query time**, the **result count**, etc…) could be shown directly in the bot's response. - -### Why would this be useful - -As the usage of the Advanced Search ramps up, we will have to diagnose more slow queries and test improvements, such as https://gitlab.com/gitlab-org/gitlab/-/issues/225998 - -### Implementation - - - [ ] Do a security review of the Elasticsearch Profile API to ensure the data safety - - [ ] Create a Slack application to manage the Elasticsearch calls - - [ ] Add the application to our Slack workplace - -### References - - - Elasticsearch Validate API: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-validate.html - - Elasticsearch Explain API: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-explain.html - - Elasticsearch Profile API: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-profile.html - - Slack's File API: https://api.slack.com/messaging/files/setup)",1.0 -54313901,2020-07-10 19:56:06.484,Revert MR - Remove postgresql-9.6-repack extension package from the production patroni fleet,"[This MR to `remove postgresql-9.6-repack extension package from the production patroni fleet`](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3866) should be reverted, that way, the `remove_package` directive which has become useless post-removal, will no longer be unnecessarily codified going forward.",1.0 -54312183,2020-07-10 19:13:23.344,Revert MR - Remove postgresql-9.6-repack extension package on the staging patroni fleet,"[This MR to `remove postgresql-9.6-repack extension package from the staging patroni fleet`](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/3869) should be reverted, that way, the `remove_package` directive which has become useless post-removal, will no longer be unnecessarily codified going forward.",1.0 -54163469,2020-07-09 13:23:04.437,Set up a GCS bucket for serving `gitlab-docs` Review Apps,"Similarly to https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/8203#note_243150421, we'd like to host documentations Review Apps in a GCS bucket served through Cloudflare. - -The setup should almost the same as for `www-gitlab-com` Review Apps. - -- The domain would be `*.docs.gitlab-review.app` and managed in https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/blob/master/environments/dns/gitlab_review_app.auto.tfvars.json#L59-66. -- The GCS bucket should probably be under **Marketing > Documentation** (similarly to https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/8203#note_246351579). -- The CI config for https://gitlab.com/gitlab-org/gitlab-docs should be updated similarly to https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/35869/diffs. - -Related to https://gitlab.com/gitlab-org/gitlab-docs/-/issues/735. 
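For reference, the bucket side of this is likely just a few gsutil commands (the bucket and project names below are made up):

```shell
# Hypothetical names - sketch of creating a public, website-enabled bucket for the review apps.
gsutil mb -p gitlab-docs -l us-east1 gs://docs-gitlab-review-app
gsutil iam ch allUsers:objectViewer gs://docs-gitlab-review-app
gsutil web set -m index.html -e 404.html gs://docs-gitlab-review-app
```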
- -/cc @kwiebers @axil - -/cc @alejandro since you worked on https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/8203 and https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/35869.",2.0 -54077923,2020-07-08 04:52:57.981,Monitor CloudSQL performance,"As https://gitlab.com/gitlab-org/gitlab/-/issues/227215 (see also https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2383) showed, we probably need to monitor CloudSQL performance, at least for the praefect DB but possibly others as well. Extended CPU usage above some quite high limit (90%?) is likely a sign that Something Is Wrong and needs attention, and we would have saved a lot of investigation time and delays if we'd had this as a metric that was alerting earlier. - -/cc @alejandro this might be something you want to take a look at?",3.0 -53868182,2020-07-03 08:01:54.389,Runbooks need to trigger alertmanager k8s updates,"The runbooks repo updates the alertmanager config via pushing a file to GCS. This also needs to trigger an update process to update the config in GKE. - -Some options include: - -* Trigger a deploy/reload of some kind. -* Directly push a config map change.",1.0 -50152574,2020-06-27 05:14:11.182,Create new gitaly storage shard node `file-54-stor-gprd` to replace `file-47-stor-gprd` in the configured rotation for storing new projects,"Gitaly storage shard `nfs-file47` (`file-47-stor-gprd.c.gitlab-production.internal`) is at `89.53%` usage as of `2020-07-10`. - -Our usage targets specify that we try to maintain usage between 65-79%. New project creation would quickly cause more usage than that on `nfs-file47`. - -There are currently 7 gitaly shard nodes configured to accept new projects. Maintaining at least this level of availability is important to avoid any shards filling up too quickly. - -To remove a single node from the new projects storage rotation cluster, and also prevent usage acceleration, a new gitaly node should be created and added to the list of shards configured in the GitLab Application to store new project repositories. - -[Create a production change issue for this, using the `storage_shard_creation` template: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/new?issuable_template=storage_shard_creation)",1.0 -50122720,2020-06-26 11:53:27.465,Implement locking mechanism for database backups,"We want to [run WAL-G backup push only from one of the Patroni replicas](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9627). But manually designating a node for running backups is problematic: - -* It needs to be manually adjusted again in case of a failover -* Designating a ""special"" node isn't elegant -* Backups would stop working if the ""special"" node goes down - -Proposal: - -The replicas regularly try to acquire a lock (with a TTL) in consul and only the lock holder is running backup-push. -This lock also is used to run wal-push from a replica in the transition period from wal-e to wal-g. 
- -* a cron-job on each replica tries to acquire the lock every minute -* the lock holder creates a local lock file (so other processes, like `archive_command`, don't need to query consul several times per second) -* the lock cronjob and backup-push script also check if they run on the primary and release the lock/lock-file in that case -* there is a 1 minute time window for race conditions in case of failover, where wal-push could run from 2 nodes in parallel for a minute, but WAL-G should have no issue with that (should test though)",8.0 -50113426,2020-06-26 09:05:03.165,Decommission `ops-gitlab-net` ES cluster,"~~The ops-gitlab-net cluster is the only one which is still on 6.x . 6.x is EOL so we should upgrade it to 7.x~~ - -Search indexing on ops.gitlab.net is not used. We've disabled the feature and can safely reomve the `ops-gitlab-net` deployment from our Elastic Cloud account.",2.0 -50082769,2020-06-25 14:48:15.078,Investigate initial DNS settings on nodes in GCP and eliminate gaps in resolv.conf settings.,"[Reference Incident](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2327) - -One hypothesis from the incident was that pgbouncer will time-out cached zone records, but it won't re-read the DNS information on what upstream provider to use for resolution. This would mean that if a GCP nodes starts and the resolv.conf specifies (for example) the default Google DNS information from DHCP until our dnsmasq information in in place, pgbouncer might keep trying to query the Google DNS instead of the local dnsmasq service. - -Acceptance Criterea: -1. Is this hypothesis correct? -1. If so, can we prevent the DNS gap from occuring?",2.0 -50031592,2020-06-24 13:58:56.513,WAL-G backup failing in ops,"WalG backups have been failing since May 21. - -See [Thanos data](https://thanos-query.ops.gitlab.net/graph?g0.range_input=5w&g0.end_input=2020-06-24%2014%3A00&g0.max_source_resolution=auto&g0.expr=gitlab_com%3Alast_walg_successful_basebackup_age_in_hours%20%3E%3D%2048&g0.tab=0)",5.0 -49918227,2020-06-22 17:29:46.263,Dial down our `global_search` sidekiq fleet concurrency,"A request has been made to dial down our `global_search` sidekiq fleet concurrency because our elasticsearch (advanced global search) cluster is overloaded. - -https://gitlab.slack.com/archives/C101F3796/p1592846811423200 - -![Screenshot_2020-06-22_at_20.28.03](/uploads/7ba9ce9c14849ca95d5a23d6430c51e6/Screenshot_2020-06-22_at_20.28.03.png)",2.0 -49866678,2020-06-21 02:01:28.995,Deploys are broken in customers.gitlab.com,"It appears that the last few MRs that merged to `master` in https://gitlab.com/gitlab-org/customers-gitlab-com didn't get deployed to customers.gitlab.com. The last commit that was deployed was [this one](https://gitlab.com/gitlab-org/customers-gitlab-com/-/commit/63d3fee5ea8fc9f365776c56cf081f12feddad5d). It appears that this stopped working late on Friday, June 19th. - -I don't know much about the specifics for how the deploys work but I think it has to do with a problem with the chef-client. 
There is a lot of chef related warnings and errors in `/var/log/syslog`, but maybe this snippet is helpful: -``` -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: ================================================================================#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[31mRecipe Compile Error in /var/chef/cache/cookbooks/gitlab_walg/recipes/default.rb#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: ================================================================================#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mMixlib::ShellOut::ShellCommandFailed#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: ------------------------------------#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: Expected process to exit with [0], but received '1' -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m---- Begin output of gsutil cp gs://gitlab-ops-secrets/gitlab-walg/ops.enc - | gcloud kms decrypt --keyring=gitlab-secrets --key=ops --location=global --plaintext-file=- --ciphertext-file=- ---- -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mSTDOUT: -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mSTDERR: ServiceException: 401 Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object. -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mERROR: (gcloud.kms.decrypt) The required property [project] is not currently set. -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mYou may set it for your current workspace by running: -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m $ gcloud config set project VALUE -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mor it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT] -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m---- End output of gsutil cp gs://gitlab-ops-secrets/gitlab-walg/ops.enc - | gcloud kms decrypt --keyring=gitlab-secrets --key=ops --location=global --plaintext-file=- --ciphertext-file=- ---- -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mRan gsutil cp gs://gitlab-ops-secrets/gitlab-walg/ops.enc - | gcloud kms decrypt --keyring=gitlab-secrets --key=ops --location=global --plaintext-file=- --ciphertext-file=- returned 1#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0mCookbook Trace:#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: ---------------#033[0m -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: /var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb:69:in `get' -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m /var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb:87:in `get_secrets' -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m /var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb:98:in `block in merge_secrets' -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m /var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb:94:in `each' -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m /var/chef/cache/cookbooks/gitlab_secrets/libraries/secrets.rb:94:in `merge_secrets' -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m 
/var/chef/cache/cookbooks/gitlab_walg/recipes/walg_env.rb:7:in `from_file' -Jun 20 08:27:07 customers.gitlab.com chef-client[30504]: #033[0m /var/chef/cache/cookbooks/gitlab_walg/recipes/default.rb:42:in `from_file'#033[0m -```",1.0 -49856734,2020-06-20 14:14:48.799,CA for Redis cluster gitlab is missing instances in ops environment,"This issue tracks work to make corrective actions for ""`Redis cluster gitlab is missing instances in ops environment`"" alert. - -It was recommended to remove the alert entirely, since there is already an alert definition for an existing instance going down. (https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2405#note_364993343) - -Incident issue: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2293 - -MR to remove alert rule: https://gitlab.com/gitlab-com/runbooks/-/merge_requests/2405",1.0 -49310256,2020-06-19 13:23:10.331,Write runbook for Project Export,"We should have a runbook for manual project exports. - -While working on https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10598 we found several ways how exports can fail, which also should be documented together with possible solutions.",3.0 -47057779,2020-06-17 21:57:01.387,Enable access logging for gitlab-runner-custom-fargate-downloads S3 bucket,"The Runner group would love to be able to gauge how much interest there is in our new AWS Fargate driver. The simplest way to do so would be to track downloads from the S3 bucket at https://gitlab-runner-custom-fargate-downloads.s3.amazonaws.com/master/index.html - -This issue is to request enabling access request logging so that we can then parse the data to see how many downloads of the driver there are. - -FYI @gitlab-com/gl-infra/managers - -CC @DarrenEastman",5.0 -46811018,2020-06-17 17:41:39.375,License Database Extract,"Hi @Finotto & @gerardo.herzig - -Created this issue to request a license DB extract as from the [hand book](https://about.gitlab.com/handbook/business-ops/data-team/data-infrastructure/#license-db) - -Command required below: - -`pg_dump -Fp --no-owner --no-acl license_gitlab_com_production | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > S{DUMPFILE}` - -Thanks so much again :thumbsup:",1.0 -46808291,2020-06-17 17:39:20.658,Export version database for loading into warehouse,Runbook https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/cloudsql-data-export.md,2.0 -44840332,2020-06-16 07:14:15.511,"Review the latest switchover experience, consider possible improvements for future","@Finotto and @hphilipps mentioned that switchover and restart of all replicas performed (successfully) last weekend (started at 23:12 UTC 2020-06-13) caused increased load during ~30 minutes. 
- -https://dashboards.gitlab.net/d/patroni-main/patroni-overview?panelId=28&fullscreen&orgId=1&from=1592087756529&to=1592093870453&var-PROMETHEUS_DS=Global&var-environment=gprd&var-sigma=2 - -![Screen_Shot_2020-06-16_at_00.08.22](/uploads/0e44980eb93d329ba083e58e563579a7/Screen_Shot_2020-06-16_at_00.08.22.png) - -https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&from=1592087756529&to=1592093870453&var-PROMETHEUS_DS=Global&var-environment=gprd&var-sigma=2&panelId=31&fullscreen - -![Screen_Shot_2020-06-16_at_00.08.38](/uploads/4a9cdf94750ec46dc708d5958f4d5659/Screen_Shot_2020-06-16_at_00.08.38.png) - - -Error rates significantly increased during that high-CPU period: - -https://dashboards.gitlab.net/d/000000144/postgresql-overview?panelId=3&fullscreen&orgId=1&from=1592089200000&to=1592092799000 - -![Screen_Shot_2020-06-16_at_00.12.52](/uploads/d6eb2083f674f1d0d2bb79236915b7c3/Screen_Shot_2020-06-16_at_00.12.52.png) - - -From what I see in monitoring, it looked like significant stress to the system, so I think the additional analysis is needed, resulting to some improvements to the procedure in the future.",3.0 -43848773,2020-06-15 14:49:20.375,Use correct timestamps in elastic search postgres logs,The timestamps in kibana logs for postgres are reflecting the time of ingestion but not the actual time of the log event itself. This makes them to be off by a minute sometimes and being in the wrong order. We should use the first field in the postgres.csv log as a timestamp.,3.0 -43416308,2020-06-15 07:24:44.487,Configure Flipper HTTP adapter on gitlab.com,"## Summary - -In terms of ~Dogfooding, we should use GitLab's [feature flag system](https://docs.gitlab.com/ee/user/project/operations/feature_flags.html) in our [development](https://docs.gitlab.com/ee/development/feature_flags/). - -Epic is https://gitlab.com/groups/gitlab-org/-/epics/3367 - -## TODO - -TBD",4.0 -43414649,2020-06-15 07:23:02.356,Configure Flipper HTTP adapter on staging.gitlab.com,"## Summary - -In terms of ~Dogfooding, we should use GitLab's [feature flag system](https://docs.gitlab.com/ee/user/project/operations/feature_flags.html) in our [development](https://docs.gitlab.com/ee/development/feature_flags/). - -Epic is https://gitlab.com/groups/gitlab-org/-/epics/3367 - -## TODO - -TBD",4.0 -24515650,2019-09-06 00:12:38.730,Upgrade Patroni to 1.5.6,"The below notes were copied from [here](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7790#note_213228185): - -The [Patroni release notes for 1.5.6](https://patroni.readthedocs.io/en/latest/releases.html#version-1-5-6) cite a bug fix that allows health checks to Patroni to not be blocked by Patroni's potentially slow call to `get_cluster` (which is what most often stalls when Consul agent cannot promptly talk to Consul server). - -> Reduce lock time taken by dcs.get_cluster method (Alexander) -> -> Due to the lock being held DCS slowness was affecting the REST API health checks causing false positives. - -This [bug fix](https://github.com/zalando/patroni/commit/680444ae13154aca6f03556cdd1280e296b549ca) alone should help avoid *some* of the cases of unnecessary Patroni failover. The Patroni agent was sometimes slow to respond to incoming health check requests because it was holding an internal lock while running a REST call to Consul agent (which can be slow). This bug fix avoids that contention most of the time by waiting to take that lock until after the REST call to Consul finishes (and also by caching the results for up to TTL seconds). 
- -This does not address all of our concerns (e.g. slow calls to Consul can still lead to the Patroni cluster lock expiring), so we should probably still increase Patroni's DCS `ttl` setting in addition to this upgrade. - ---- - -Be sure to review all of the release notes between our current Patroni release (1.5.0) and the target release (1.5.6). - -Confirm that Patroni cluster can run safely in mixed mode, with some members using 1.5.0 and others using 1.5.6. - -Run through functional testing. - -Plan to deploy in `gstg` and then `gprd`.",5.0 -24515597,2019-09-06 00:04:26.437,Increase Patroni's patience when talking with Consul,"Increase Patroni's DCS settings for `retry_timeout` (currently 10 seconds) and `ttl` (currently 30 seconds). - -The rationale is described here: -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7790#note_213228185 - -And remember that `ttl` is silently halved by Patroni before being sent to Consul. - -The `ttl` should probably be several times as large as the `retry_timeout`. - -Note: The `loop_wait` setting (currently 10 seconds) may be ok to leave as-is, even though we are increasing the other settings. `loop_wait` is roughly how long Patroni waits passively for notification from Consul of a cluster state change before it starts another round of pushing its own status updates to Consul and renewing its Consul session TTL. The configured `ttl` must be significantly larger than both `loop_wait` and `retry_timeout`.",3.0 -24515528,2019-09-05 23:56:40.841,Stop aborting pg_rewind during Patroni failover,The `statement_timeout` setting should not be applied to the Postgres user account used by `pg_rewind`. This consistently aborts the conversion of the old primary db into a replica after failover. Making that one configuration change could avoid a significant amount of toil (i.e. replacing or rebuilding the old primary node as a fresh replica).,1.0 -24505303,2019-09-05 17:28:32.755,Why are patroni failovers occurring so often,"**Problem statement:** - -Patroni failover events are expensive and are occurring much more frequently than expected. While a failover is in progress, the read-only replica databases remain available, but the writable primary database is unavailable. This causes all upstream clients to fail any task that requires any interaction with the primary database. For most purposes, GitLab.com is effectively unavailable during this time. - -Patroni's failover mechanism is crucial for maintaining high availability of our writable Postgres database, providing efficient and reliable return to service when the writable instance fails or becomes unreachable by its many clients. However, *unnecessary* failover events harm availability (typically cause 1-3 minutes of downtime) and require hours of manual clean-up and analysis. - -**Goal:** - -Reduce the rate of unnecessary failover events, to improve availability and avoid toil. - -Discover what triggers the recent Patroni failover events, and propose options to avoid them without sacrificing too much ability to detect and respond to events that really do necessitate failover. - -**Non-goals:** - -Reducing the amount of toil associated with failover events is a separate and also desirable goal, but will not be addressed here, except for one point: -* The `statement_timeout` setting should not be applied to the Postgres user account used by `pg_rewind`. This consistently aborts the conversion of the old primary into a replica after failover. 
Making that one configuration change could avoid a significant amount of toil (i.e. replacing or rebuilding the old primary node as a fresh replica). - -Reducing the duration of downtime during a failover event is a separate and also desirable goal, but tuning that is not expected to yield significant improvement. The downtime duration consists of 3 phases: -* *Failure detection time:* Time between an actual failure and its detection is mainly affected by the health checks' frequency, timeouts, and scope. Tuning failure detection to be more aggressive can sometimes lead to higher false-positive rate. That appears to be the case currently, so reducing our currently high false-positive rate may require increasing the time to detect actual failures. To the best of my knowledge, currently Patroni's failure detection time is at most 40 seconds (`loop_delay` + `ttl`). -* *Leader election:* Patroni's leader election process includes a mandatory delay to let the replicas apply as much of the old primary's transactions as possible from the WAL stream. Then the freshest healthy replica is elected to become the new primary. The Postgres timeline is forked, and all other replicas are asked to switch to the new timeline and start consuming new transactions from the new primary. -* *Reconvergence:* Clients must reconnect to the new postgres primary db. This time is already quite small because all clients connect to the writable primary postgres instance via a proxy (`pgbouncer`). Only those handful of `pgbouncer` instances must actually reconnect to Patroni's new primary Postgres db. - -**Background:** - -In the last couple months Patroni has several times initiated failover of the writable primary Postgres node. Most of those failovers appear to have been unnecessary, at least judging from the availability metrics for GitLab.com prior to failover. - -**Prior work:** - -For reference, here are some (but not necessarily all) of the Patroni failover events we investigated: -* [2019-07-17 failover from patroni-01 to patroni-04](https://gitlab.com/gitlab-com/gl-infra/production/issues/968) and its [RCA issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7275) -* [2019-08-14 failover from patroni-01 to patroni-04](https://gitlab.com/gitlab-com/gl-infra/production/issues/1054) and its [RCA issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7573) -* [2019-08-27 failover from patroni-07 to patroni-10](https://gitlab.com/gitlab-com/gl-infra/production/issues/1094) and its [RCA issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7737) -* [2019-09-03 failover from patroni-10 to patroni-11](https://gitlab.com/gitlab-com/gl-infra/production/issues/1119) and its [RCA issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7767) - -Several people have independently observed that Patroni failovers are triggered by timeouts during Patroni agent's call to its local Consul agent. Those timeouts are most often observed in Patroni's `get_cluster` method, which makes the Patroni loop's 1st of 4 REST calls to the local Consul agent. 
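One way to spot-check that path by hand from a patroni node is to time the same local HTTP hops ourselves. A rough sketch, assuming the upstream default ports (Consul agent HTTP on 8500, Patroni REST API on 8008), which may differ from our configuration:

```bash
# Rough sketch: time the local Consul agent API (the hop get_cluster depends on)
# and Patroni's own health endpoint, to see whether either is intermittently slow.
time curl -s -o /dev/null http://127.0.0.1:8500/v1/status/leader
time curl -s -o /dev/null http://127.0.0.1:8008/patroni
```

Running this in a loop during a suspect period would help show whether the slowness sits in the Consul agent hop, in Patroni itself, or neither.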
What causes those timeouts is not yet clear, although several ideas have been proposed, including (but not limited to): -* ephemeral network packet loss, either in general or along the path of consul agent's connection to a consul server -* kernel memory pressure delaying TCP receives -* consul servers undergoing leader election - -At least one failover (on [2019-09-03](https://gitlab.com/gitlab-com/gl-infra/production/issues/1119)) showed that around the time of the failover, the current Patroni lost its cluster lock -- its consul ""session"" (mutex) was invalidated. This, too, can be a side-effect of brief interruption between consul agent and consul server, since the ""consul session"" (which implements the Patroni cluster lock) automatically expires (unlocks the cluster) if not renewed every 15 seconds. Patroni only attempts to renew this session every 10 seconds, so a 5 second delay leads to lock expiry.",5.0 -24505297,2019-09-05 17:27:59.936,We've lost a few custom metric rules since the GKE migration for the Container Registry,"Since the migration to GKE a few metrics no longer work as the recording rule is invalid. - -Impacted metrics: -* `gitlab_component_availability:ratio{type=""registry""}` -* `slo:min:gitlab_service_apdex:ratio` -* `gitlab_service_apdex:ratio` - -This situation has caused a loss of visibility into some panels on this dashboard: https://dashboards.gitlab.net/d/general-service/general-service-platform-metrics?orgId=1&var-type=registry And we've triggered a few alerts that look at this data. - -Utilize this issue to find new recording rules for these metrics and ensure all panels on the aforementioned dashboard are working.",5.0 -24497368,2019-09-05 13:44:32.827,Create database for GKE Grafana service,"Currently we store Grafana dashboards and other configuration (`dashboards.gitlab.net`) locally in a sqlite database. - -* [x] Create a cloud hosted database. -* [ ] Sync data from old Grafana to new.",2.0 -24497209,2019-09-05 13:41:56.015,Move Grafana Dashboard service to Kubernetes,Top level tracking of moving Grafana dashboards to GKE.,1.0 -24486926,2019-09-05 08:59:51.150,-------------------- Cut Line --------------------,"This issue is a functional hack in this board: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/boards/1240891 - -See https://gitlab.com/gitlab-com/www-gitlab-com/issues/4075 for the feature request for real cut lines.",1.0 -24476239,2019-09-05 00:58:10.160,Proposal: use imaging for provisioning,"There has been some discussions on the topic, and I would like to bring it more formally and take a decision. I would like to propose to *not* provision the (at least database) nodes with chef directly, but rather from pre-created images. - -The main driver is *reproducibility*. Current dynamic provisioning may lead to different versions of any or potentially many software components, like kernel (this has happened already: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7630) but could also be intermediate libraries, or main components. In any case, not having a consistent, reproducible image leads to some serious problems, like: -* Uneven performance. -* Difficult to impossible debugging if caused by different versioning. -* Bugs caused on one component version not happening on another one. -* Possible security issues or data-affecting issues (PostgreSQL, for example, is prone to index corruption issue if replicating between instances running on different versions of the glibc if they have altered the collations). 
- -Reproducible builds and reproducible images are generally considered a good pattern. Obviously this doesn't mean chef or the current provisioning system cannot be used: it would simply start from a base image, install the software, and generate the final image. Actually, images may be layered, such as in: -* Start from base Ubuntu LTS layer. -* Add basic common utility packages -> image -* Add specific GitLab packages / software -> image -* Add PostgreSQL or pgbouncer or whatever software -> image - -While many options exist, [Packer](https://www.packer.io/) may be used as a tool to generate the images. - -Other than reproducible builds, this technique offers other advantages: -* Make the provisioning stage less error prone (while provisioning, some external software repository may fail, causing the provisioning to fail). -* Faster provisioning (normally not important, but may help when, for instance, a new replica needs to be brought up asap). -* Better security. Since images are immutable, they can be statically analyzed for security issues. Dynamic provisioning may install potentially different (typically newer) versions of some packages which may not have been analyzed from a security perspective. - -It also comes with some drawbacks: -* Image management. Image combinations may at some point become kind of exponential. With proper scripting, using a layered approach, this should not be a big deal (only higher storage costs for the images, but this should be a very minor cost factor). -* Software packages may be updated less often. But this can be fixed by creating periodic issues to review and check which software packages can be updated. - -cc: @Finotto",5.0 -24475821,2019-09-05 00:16:25.600,Evaluate Odyssey as a replacement for pgbouncer(s),"In light of the recent issues related to pgbouncer's single-threaded core saturation (like in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7632), some measures have been taken to mitigate or prevent excessive load: -* Creating more replicas (and hence distributing the load across more servers and, consequently, pgbouncers). See: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7674 -* Adding more than one pgbouncer per PostgreSQL host and exposing them as separate services, so as to have effectively multi-process load balancing via DNS. See: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7651 - -Obviously, none are ideal, even if we bought some time. Ideally, we should use a multi-threaded load balancer. Enter [Odyssey](https://github.com/yandex/odyssey). It is heavily used at high-profile sites like Yandex, and while at version `1.0rc1`, reports from the main author attest to its stability. We should test it thoroughly and evaluate if it may become a replacement for the multiple pgbouncers per host. - -Proposed plan: - -* [x] Wait until GA release, right now the latest version is an RC => https://github.com/yandex/odyssey/releases/latest -* [ ] Determine a relevant workload and mechanism to benchmark GitLab. This might become a separate issue on its own. We really need to have a way to measure performance, reproducibly, that resembles production workload (unless anything like this already exists). -* [ ] Measure Odyssey performance and compare with saturated pgbouncer (see the benchmark sketch below). -* [ ] Stress-test Odyssey and look for potential crashes and/or memory leaks. -* [ ] Create a final report summarizing the findings.
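For the measurement step referenced above, a rough sketch of what a comparison run could look like: drive an identical pgbench workload once through pgbouncer and once through Odyssey against the same database, then compare throughput and latency. Host names, port, user and database below are placeholders, not our actual configuration:

```bash
# Placeholder sketch: identical read-mostly pgbench run against each pooler.
# Hosts, port, user and database are hypothetical.
pgbench -h pgbouncer-host -p 6432 -U gitlab -c 200 -j 8 -T 300 -S --progress=30 gitlabhq_production
pgbench -h odyssey-host   -p 6432 -U gitlab -c 200 -j 8 -T 300 -S --progress=30 gitlabhq_production
```

Repeating the runs at increasing client counts should show where pgbouncer's single core saturates and whether Odyssey's worker threads push that ceiling further out.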
- -cc @Finotto",16.0 -24475284,2019-09-04 23:24:27.846,Unset statement_timeout for gitlab-superuser,"PostgreSQL configuration parameter [statement_timeout](https://postgresqlco.nf/en/doc/param/statement_timeout?version=9.6) is currently set to `15s`. While there are reasons for this setting on a global scale (mostly to avoid possibly idling connections, taking into account that there is no PostgreSQL parameter similar to a `session_timeout`), it causes some trouble for some administrative commands. For instance, it needs to be disabled before running some long running operations like `VACUUM` or `ANALYZE`. Most importantly, it seems to be the cause of recent `pg_rewind` failures (when triggered by Patroni). See: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7543 - -While `pg_rewind` should be patched (by PostgreSQL), as IMHO it shouldn't be subject to the `statement_timeout`, we need to work around it. I already proposed some workarounds: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7543#note_204263742 Among those, I propose to implement the following one: - -> `alter user ""gitlab-superuser"" set statement_timeout=0` or equivalent command, to disable the statement timeout for the `gitlab-superuser` user --which would not be a bad thing anyway, unless any application connects with this user, but hopefully that is not the case. - -The main reason is that it will not only prevent `pg_rewind` from failing, but will also avoid having to unset `statement_timeout` for several other operations. - -This needs to be added to the DDL code. Required actions: -* [ ] Check that no application code is connecting to the database as `gitlab-superuser` (should not be the case, but needs to be checked). -* [ ] Add the above DDL statement to the DDL creation process. -* [ ] Prepare DDL change script to apply it to production. - -If there's any other proposal on how to act, please comment. - -@Finotto",1.0 -24475245,2019-09-04 23:19:39.884,Make sure meltano cert upgrade happens,"Your SSLMate certificate for www.meltano.com will expire on September 7, 2019. - -The renewal of this certificate is awaiting approval by the administrator(s) of the following domains: - -www.meltano.com: Please visit https://sslmate.com/console/orders/www.meltano.com -meltano.com: Please visit https://sslmate.com/console/orders/www.meltano.com - -For more information about how to approve this certificate, visit https://sslmate.com/console/orders/www.meltano.com - -cc @gitlab-com/gl-infra/cicd-and-enablement",1.0 -24470177,2019-09-04 18:17:31.508,Update gitlab-exporter (former gitlab-monitor) and re-enable CI metric scrapes,"We disabled scraping the CI metrics because the queries are really heavy. The top issue for this is https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7724. - -Now https://gitlab.com/gitlab-org/gitlab-exporter/merge_requests/101 improved the queries by adding a date range filter. This requires a new index which is going to get shipped with https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/32546. - -Once the index has found its way to production, we should -* Update `gitlab-exporter` to 5.0.1, at least on the archive DR replica -* Revert https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1711 to re-enable scraping.",2.0 -24459604,2019-09-04 12:27:22.596,alert toil: HPA Unable to Scale,"This particular alert fires quite often. I think it simply needs to be tuned.
https://gitlab.com/gitlab-com/runbooks/blob/master/rules/kubernetes-hpa.yml#L25-33 The alert is relatively basic, and when a scale down operation completes successfully, we'll have hit the minimum number of Pods, which is not a bad thing. The last time I saw this alert fire in our staging environment, it was legitimate to fire based on the current alert rule, but undesired as it doesn't mean anything to us: - -```json -{ - ""type"": ""ScalingLimited"", - ""status"": ""True"", - ""lastTransitionTime"": ""2019-09-04T11:47:26Z"", - ""reason"": ""TooFewReplicas"", - ""message"":""the desired replica count is increasing faster than the maximum scale rate"" -} -``` - -Utilize this issue to figure out how we can tune the alert such that it does not fire in situations where we scaled down to the minimum number of Pods. In this situation, it's completely fine as the HPA wanted to scale down further, but we don't allow it to. We want more than 1 Pod running for the sake of redundancy.",1.0 -24458199,2019-09-04 11:39:40.680,Read permissions for analytics for new tables,"When creating new tables, we don't assign read permissions for the `analytics` user. This has led to https://gitlab.com/gitlab-data/analytics/merge_requests/1522#note_212269236. - -An alternative to updating the permissions manually is to add the `analytics` user to the `gitlab` group. `gitlab` is the user owning the tables and has full read/write access. Write is not needed for analytics, but we're only using this on the archive replica (read-only) anyways. - -We may want to make sure the `analytics` user cannot be used on the patroni instances (through pg_hba, for example). - -cc @Finotto",1.0 -24457541,2019-09-04 11:18:04.540,cleanup unused postgres 11 resources,As long as the postgres 11 epic (https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/71) isn't worked on we should remove unused postgres 11 resources and clean up unapplied terraform configs.,3.0 -24436488,2019-09-03 18:23:43.045,Create shared CloudSQL module for terraform,"Since the Google provided CloudSQL module doesn't suit our use case, we'll need to implement our own module which will extend https://github.com/terraform-google-modules/terraform-google-sql-db - -The primary issues are: - -1. The module does not implement private IP support. We will be connecting to our instances via a private IP from either gitlab instances or apps running on GKE clusters. Adding the private network functionality to the [VPC Module](https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/vpc) makes cloudsql work, but is currently causing problems with API permissions. -1. Google's module does not support `count` - which means that it can't be enabled and disabled with a CI variable. When building a project with a Kubernetes cluster to attach to a project or group for Auto DevOps, we want to be able to use CloudSQL for production instances, but disable it for staging and review apps. The module will not do this and Terraform does not support conditionally including modules (https://github.com/hashicorp/terraform/issues/12906). - -The new module is here: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/cloud-sql - -/cc @ahanselka @craig",2.0 -24417128,2019-09-03 10:04:46.260,Improve alerting for jobs getting stuck in sidekiq queues,We often do not notice soon enough when jobs get stuck in sidekiq queues because long running jobs are blocking them (see https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7438).
We should have better and more prompt alerting on that.,5.0 -24416205,2019-09-03 09:44:55.757,Add runbook for analyzing Gitaly pprof data,In case of incidents or for performance profiling it is very helpful to analyze Go pprof data. We should have runbook instructions for that.,2.0 -24411696,2019-09-03 08:21:41.753,Declare gitlab-ci GCP firewall rules in terraform,"Currently, the [gitlab-ci firewall rules](https://console.cloud.google.com/networking/firewalls/list?project=gitlab-ci-155816&firewallTablesize=50) are manually managed. - -We should declare these in terraform so that we can manage them in a reviewable, version controlled way. - -Somewhat relatedly, some chef roles declare iptables rules, e.g. https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/master/roles/gitlab-runners-prometheus.json#L4-28. We could consider removing these as they are redundant in GCP due to the GCP firewall. Careful checking needs to be done first to ensure that any removed iptables rule has a GCP equivalent that's enforced. - -cc @jarv for fact-checking",2.0 -24398833,2019-09-02 21:31:03.703,Fix Terraform CI failure in `aws-snowplow` environment,"While working through fixing the errors in !7744 (MR: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/973) `tflint` started working properly again, and surfaced the following error in the `aws-snowplow` terraform configuration: - -``` -$ /bin/tflint -Error: Failed to load configurations: main.tf:152,3-21: Attribute redefined; The argument ""assume_role_policy"" was already set at main.tf:149,3-21. Each argument may be set only once., and 6 other diagnostic(s) -``` - -Several `lifecycle: ignore_changes` blocks were generating errors related to attribute names (these seem like false positives): -``` -$ tflint -Error: Failed to load configurations: main.tf:671,34-35: Attribute name required; Dot must be followed by attribute name., and 5 other diagnostic(s) -``` - -Once all the lifecycle blocks were commented out to let the `tflint` run finish, this was the final output: - -``` - $ tflint -main.tf - ERROR:138 name must be 1 characters or higher (aws_iam_role_invalid_name) - ERROR:138 name does not match valid pattern ^[\w+=,.@-]+$ (aws_iam_role_invalid_name) - ERROR:148 name must be 1 characters or higher (aws_iam_role_invalid_name) - ERROR:148 name does not match valid pattern ^[\w+=,.@-]+$ (aws_iam_role_invalid_name) - -Result: 4 issues (4 errors , 0 warnings , 0 notices) -```",1.0 -24393429,2019-09-02 15:47:54.083,Performance Insights -Query Review- :: Week 36,"- Query # 1 : See #7804 -- Query # 2 : See #7805 -- Query # 3 : See #7806",1.0 -24334739,2019-08-30 17:50:50.551,Why does postgres replication lag sometimes grow significantly,"The `dr` environment's postgres replica (on host `postgres-dr-archive-01-db-gprd.c.gitlab-production.internal`) recently lagged by over 1 hour, triggering a PagerDuty alert. The Prometheus metric `replication_lag` suggests this may be a chronic problem potentially affecting all replicas. - -Why does replication fall behind? What can we do about it?",2.0 -24320338,2019-08-30 10:07:39.769,Alert when running out of NAT ports,"After rolling out the changes described in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5071, machines will usually not have public IPs and will access the internet via Google Cloud NAT.
- -To determine how many IPs we'll need, we can use the following formula - -`nat_ip_count = M * P / 64,512` - -Where: - -M = number of machines in the region (multiply by some generous number to account for future growth) -P = Minimum NAT ports per VM (defaults to 64) - -There are 64512 TCP and UDP (each) ports available per NAT IP. (https://cloud.google.com/nat/docs/overview#number_of_nat_ports_and_connections for context) - -Introducing Cloud NAT will have created another resource that can be saturated. Assuming constant min NAT ports per VM, we risk running out of NAT ports as the number of machines per router grows. We should alert on this so that SREs can provision extra NAT IPs in advance.",2.0 -24312315,2019-08-30 06:37:15.312,gitlab-monitor scrapes cause replication lag on archive replica,"Background: -* We disabled `gitlab-monitor` on the patroni hosts as we suspected the queries were amplifying load on the database -* I moved the heavy queries around the `/ci_builds` endpoint to the archive replica with https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1701 - -We now observe [unsustainable replication lag](https://prometheus-db.gprd.gitlab.net/graph?g0.range_input=2d&g0.expr=pg_replication_lag%7Benvironment%3D%22gprd%22%2Cfqdn%3D%22postgres-dr-archive-01-db-gprd.c.gitlab-production.internal%22%2Cinstance%3D%22postgres-dr-archive-01-db-gprd.c.gitlab-production.internal%3A9187%22%2Cjob%3D%22postgres%22%2Cstage%3D%22main%22%2Ctier%3D%22db%22%2Ctype%3D%22postgres-archive%22%7D&g0.tab=0) on the archive replica: - -![Screenshot_from_2019-08-30_08-27-55](/uploads/89e0f102ca1fce00dfaf2af7c0f95787/Screenshot_from_2019-08-30_08-27-55.png) - -A single scrape takes about 30s and this query is the top offender (27s): -* Query: https://gitlab.com/gitlab-org/gitlab-exporter/blob/master/lib/gitlab_exporter/database/ci_builds.rb#L10 -* Plan: https://explain.depesz.com/s/d1A1 - -There's no direct way of speeding up the query as it is expected to scan a lot of data (about 15GB of buffers per query). - -This issue is to track the infra changes.",2.0 -24304846,2019-08-29 20:15:18.561,Create a new DR Unix group to allow SSH / sudo access,"We'd like a new DR Unix group to be created which is required to allow SSH / sudo access to DR specific nodes. - -Related: - -* https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/12 -* https://gitlab.com/groups/gitlab-org/-/epics/575 - -cc @devin, @dawsmith, @fzimmer, @rnienaber - -This issue was moved from: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/issues/1",2.0 -24195544,2019-08-27 09:15:39.215,create a diagram of postgresql+patroni+pgbouncer+consul on prod,"create a diagram of postgresql+patroni+pgbouncer+consul on prod and setup from ports for visibility of the team. - -Also a table with all the hosts.",2.0 -24186594,2019-08-27 03:58:57.578,customers.gitlab.com 500 Error,Reported in slack (#questions) https://gitlab.slack.com/archives/C0AR2KW4B/p1566877301475900,1.0 -24176931,2019-08-26 19:27:10.632,Add env-zero to environments page in handbook,"We should include information about `env-zero` and how we bootstrap infrastructure projects to https://about.gitlab.com/handbook/engineering/infrastructure/environments/ - -/cc @devin",1.0 -24138265,2019-08-26 01:50:04.795,Dashboards not syncing to dashboards.gitlab.com,"``` -/usr/lib/ruby/2.3.0/net/http/response.rb:120:in `error!': 401 ""Unauthorized"" (Net::HTTPServerException) - from /usr/lib/ruby/2.3.0/net/http/response.rb:129:in `value' - from ./sync_grafana_dashboards:68:in `
' -``` -This is fetching the dashboard list from the public server (itself). Probably broken when the public dashboard server got rebuilt; I suspect we didn't create or update the API key.",1.0 -24105643,2019-08-23 19:01:37.124,Look into rails-console issues for dr environment,"Testing ssh to the rails console on DR, I see something like: - -Creating this when I started to test access for Ash in this access request: -https://gitlab.com/gitlab-com/access-requests/issues/1202 - -Hoping to get the Geo team better access to just look at issues with the DR environment. - -``` -Davids-MBP:users dsmith$ ssh dsmith-rails@console-01-sv-dr.c.gitlab-dr.internal -Starting console, please wait ... --------------------------------------------------------------------------------- - GitLab: 12.1.0-rc23-ee (213728f63b4) - GitLab Shell: 9.3.0 -Traceback (most recent call last): - 66: from bin/rails:4:in `
' - 65: from bin/rails:4:in `require' - 64: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/commands.rb:18:in `' - 63: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/command.rb:46:in `invoke' - 62: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/command/base.rb:65:in `perform' - 61: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/thor-0.19.4/lib/thor.rb:369:in `dispatch' - 60: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/thor-0.19.4/lib/thor/invocation.rb:126:in `invoke_command' - 59: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/thor-0.19.4/lib/thor/command.rb:27:in `run' - 58: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/commands/console/console_command.rb:95:in `perform' - 57: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/command/actions.rb:15:in `require_application_and_environment!' - 56: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/command/actions.rb:28:in `require_environment!' - 55: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/application.rb:337:in `require_environment!' - 54: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `require' - 53: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:257:in `load_dependency' - 52: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `block in require' - 51: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `require' - 50: from /opt/gitlab/embedded/service/gitlab-rails/config/environment.rb:6:in `' - 49: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/application.rb:361:in `initialize!' 
- 48: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/initializable.rb:60:in `run_initializers' - 47: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:205:in `tsort_each' - 46: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:226:in `tsort_each' - 45: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:347:in `each_strongly_connected_component' - 44: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:347:in `call' - 43: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:347:in `each' - 42: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:349:in `block in each_strongly_connected_component' - 41: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:415:in `each_strongly_connected_component_from' - 40: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:415:in `call' - 39: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/initializable.rb:50:in `tsort_each_child' - 38: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/initializable.rb:50:in `each' - 37: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:421:in `block in each_strongly_connected_component_from' - 36: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:431:in `each_strongly_connected_component_from' - 35: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:422:in `block (2 levels) in each_strongly_connected_component_from' - 34: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:350:in `block (2 levels) in each_strongly_connected_component' - 33: from /opt/gitlab/embedded/lib/ruby/2.6.0/tsort.rb:228:in `block in tsort_each' - 32: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/initializable.rb:61:in `block in run_initializers' - 31: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/initializable.rb:32:in `run' - 30: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/initializable.rb:32:in `instance_exec' - 29: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/engine.rb:613:in `block in ' - 28: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/engine.rb:613:in `each' - 27: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/engine.rb:614:in `block (2 levels) in ' - 26: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/engine.rb:656:in `load_config_initializer' - 25: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/notifications.rb:170:in `instrument' - 24: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/railties-5.2.3/lib/rails/engine.rb:657:in `block in load_config_initializer' - 23: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:285:in `load' - 22: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:257:in `load_dependency' - 21: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:285:in `block in load' - 20: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:285:in `load' - 19: from /opt/gitlab/embedded/service/gitlab-rails/config/initializers/console_message.rb:9:in `' - 18: from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/database.rb:79:in `version' - 17: from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/database.rb:259:in `database_version' - 16: from 
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/database.rb:246:in `connection' - 15: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_handling.rb:90:in `connection' - 14: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_handling.rb:118:in `retrieve_connection' - 13: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:1014:in `retrieve_connection' - 12: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:382:in `connection' - 11: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:523:in `checkout' - 10: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:795:in `acquire_connection' - 9: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:834:in `try_to_checkout_new_connection' - 8: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:855:in `checkout_new_connection' - 7: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:811:in `new_connection' - 6: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:48:in `postgresql_connection' - 5: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:48:in `new' - 4: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:223:in `initialize' - 3: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:692:in `connect' - 2: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/pg-1.1.4/lib/pg.rb:56:in `connect' - 1: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/pg-1.1.4/lib/pg.rb:56:in `new' -/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/pg-1.1.4/lib/pg.rb:56:in `initialize': ERROR: pgbouncer cannot connect to server (PG::ConnectionBad) -```",1.0 -24080762,2019-08-23 07:54:28.165,Use LetsEncrypt for docs.gitlab.com,"Similarly to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7654, I think it's time we switched the docs site (https://gitlab.com/gitlab-org/gitlab-docs) to using Let's Encrypt. - -I'm really hoping this will work correctly, as https://docs.gitlab.com has a lot of visitors and is very crucial to working properly :)",1.0 -24071878,2019-08-22 23:25:02.158,Use LetsEncrypt for www.remoteonly.org,"https://gitlab.com/gitlab-com/www-remoteonly-org/ is hosted on GitLab Pages; per https://gitlab.com/gitlab-com/www-remoteonly-org/issues/50 the cert expired a few days ago and we didn't notice. It required manual effort to find and update the cert that sslmate had autorenewed. 
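For reference, a quick way to check the expiry of whatever certificate Pages is currently serving for the domain (plain openssl, nothing SSLMate-specific):

```bash
# Print the notAfter date of the certificate currently served for the Pages site
echo | openssl s_client -servername www.remoteonly.org -connect www.remoteonly.org:443 2>/dev/null \
  | openssl x509 -noout -enddate
```

Switching to Let's Encrypt (or alerting on the output of a check like this) would remove the need to remember manual renewals at all.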
- -Given we have LetsEncrypt (LE) support built into Pages now (beta, but presumably generally working), we should move to that as a dog-fooding exercise, and to reduce irregular (thus cognitively expensive) toil work that can easily be missed.",1.0 -24053089,2019-08-22 14:55:23.763,Some public dashboards are broken,"For example https://dashboards.gitlab.com/d/2zgM_rImz/imported-github-importer?orgId=1 - -Most panels do not render, with the error message ""query timed out in expression evaluation"". Continuously refreshing the page occasionally yields 1 or 2 panels that load. - -**Update**: Work that still needs to be done: - -Remove iptables firewall rules from the runner prometheus instance so that it can be scraped by gprd infra prometheus on its public network interface. The gprd prometheus IPs should be whitelisted. [This MR](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1740) got the ball rolling but I dropped it on the floor a while ago. It's imperative that the GCP firewall is carefully checked to ensure we're not opening up a security hole by removing the on-box firewall. Add rules using terraform if need be. Note that this interacts with https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8018 which will cause the IP of the gprd prometheus to change. Currently they're behind Cloud NAT.",2.0 -24009928,2019-08-21 13:41:36.679,GKE in GPRD is evicting our registry Pods,"Occasional checks into the cluster as traffic in canary has shifted has resulted in some evicted Pods over the course of time. We do not yet have alerts set for this type of situation. To be handled here: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7604 - -Utilize this issue to track what we can do to prevent Evicted Pods. The reasoning has currently been diagnosed as memory overuse compared to the configured memory requests: - -``` -Message: The node was low on resource: memory. Container registry was using 3309728Ki, which exceeds its request of 32Mi. -``` - -This particular Pod in the above example was using over ~413MB of RAM, but we only request ~32MB. The requests configuration: - -```yaml - registry: - Image: registry:2.7.1 - Port: - Host Port: - Requests: - cpu: 50m - memory: 32Mi -``` - -Utilize this issue to track how best to handle this situation. It may be that we need to figure out an appropriate baseline and manage our requests appropriately such that Kubernetes will scale the nodes as needed. - -Reference: -* https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ -* https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/ - -/cc @gitlab-org/delivery",3.0 -24007535,2019-08-21 12:36:31.005,Use helm-diff for dry-run,"@mwasilewski-gitlab mentioned in a coffeebreak that he has used helm-diff before and thought it could be something useful for us. - -https://github.com/databus23/helm-diff - - -Would be nice addition to our existing dry-run pipeline for branches as right now it doesn't show much",1.0 -23976412,2019-08-20 14:57:47.335,ops.gitlab.net slow-down due to the database and VM being in different regions,"the WIP issue https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6675 has left the ops in a state where it is very slow. we are also seeing some timeout errors from chatops: - -![Screen_Shot_2019-08-20_at_4.54.33_PM](/uploads/32172e4bf04d337df1c501b8b8dec7e0/Screen_Shot_2019-08-20_at_4.54.33_PM.png) - - -To wrap this up we can do the following: - -1. Delete ops-too -2. 
Create a new instance with a new name -3. Mount nfs, setup to use memorystore, connect to cloudsql (without repos at this point) -4. Shut down both boxes, rsync data from the old to the new server -5. change the dns - -Or if we are worried about introducing too many changes, keep files on local disk.",2.0 -23972334,2019-08-20 13:22:19.724,Run ClearSharedRunnersMinutesWorker on production,"In https://gitlab.com/gitlab-org/gitlab-ce/issues/65540 we fixed an issue where the pipeline minutes usage was showing incorrect values. -The problem was in [`ClearSharedRunnersMinutesWorker`](https://gitlab.com/gitlab-org/gitlab-ee/blob/0c1c17abba98ffabbb59e854672cef60c8803e39/ee/app/workers/clear_shared_runners_minutes_worker.rb) that was timing out when trying to reset numbers for projects. - -Given that the fix is out now, we would need to run it on production to reset the pipeline minutes quota so that users can see correct statistics. [We have decided to do so](https://gitlab.com/gitlab-org/gitlab-ce/issues/65540) now despite giving free minutes to users. - -The same worker should run automatically on cron schedule on the 1st of every month. - -### The ask - -is it possible to run the following on production? - -```ruby -ClearSharedRunnersMinutesWorker.new.perform -``` - -and report the status of the operation in this issue. This should not take long and the result would be that project and namespace statistics will be in sync for the month of August. - -/cc @drewcimino @ahanselka",1.0 -23957253,2019-08-20 06:37:03.333,Route SLO alerts to pagerduty,"https://gitlab.com/gitlab-com/runbooks/merge_requests/1344 is routing latency SLO alerts to pagerduty as they seem to be very accurate in indicating real production issues. - -We also should send other SLO alerts in https://gitlab.com/gitlab-com/runbooks/blob/master/rules/general-service-alerts.yml to pagerduty after we trimmed down the false alert rate (https://gitlab.com/gitlab-org/gitlab-ce/issues/66166): - -* [ ] error ratios alerts -* [ ] saturation alerts -* [ ] service availability alerts - -Operation rate alerts probably shouldn't go to pagerduty, as they will manifest in higher latencies or error ratios when they affect production.",3.0 -23942446,2019-08-19 21:35:53.113,Shut down cron.gitlab.com,"This machine was only used for the Zoom sync (https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1659), and this has now been replaced with a schedule pipeline (https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/28319). - -- [x] chef-repo changes: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/2174 -- [x] gitlab-com-infrastructure changes: https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/1191 - -/cc: @ahanselka",2.0 -23928490,2019-08-19 15:57:44.977,Bolster Alerting for GKE clusters and components,"We are lacking in alerting/paging for issues with our GKE Clusters and components installed inside of them. Let's see what we can do to improve our stature on this. - -Consider the following: -* [x] The `stable/prometheus-operator` helm chart has many alerts that we've chosen to not implement due to our own configuration for alerting. 
Consider porting these over if they seem to suit our needs -* [x] Alert for any number of evicted Pods that remain around - https://gitlab.com/gitlab-com/runbooks/merge_requests/1357 -* [x] Alert if we've been unable to scrape metrics for an extended period of time `kube_hpa_status_condition` https://gitlab.com/gitlab-com/runbooks/merge_requests/1376 -* [x] Custom rules for `gitlab_component_availability` need to capture our running Pods - https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7575 -* [x] Validate rules are in place for when we've reached the limits of our HPA scaling configurations https://gitlab.com/gitlab-com/runbooks/merge_requests/1376 -* [x] Can we alert when we are bumping against the maximum number of allowed nodes in a node pool? https://gitlab.com/gitlab-com/runbooks/merge_requests/1377 -* [x] node specific metrics, disk, memory, cpu usage -* [x] container throttling - https://gitlab.com/gitlab-com/runbooks/merge_requests/1407",5.0 -23927256,2019-08-19 15:52:01.476,Change all Kubernetes monitoring Services to use json log output,Components installed via `monitoring` are using standard text log format. Let's see if we can swap this out to json log formatting to make it easier to search inside of Kibana.,1.0 -23916516,2019-08-19 13:44:24.168,Stackgres blueprint,,2.0 -23846368,2019-08-16 15:39:44.339,Consolidate bastion `howto` pages in runbooks,"We currently have the following in gitlab-com/runbooks: - -``` -howto/gprd-bastions.md -howto/gstg-bastions.md -howto/ops-bastions.md -howto/dr-bastions.md -howto/pre-bastions.md -``` - -At a minimum, these should be consolidated. I prefer we instead make this a file in this repository's onboarding directory. We should update it there regularly.",1.0 -23845288,2019-08-16 14:54:07.365,"When a deployment to Kubernetes fails, the master branch is now inconsistent with what has been deployed","Our deployment procedure for Kubernetes assumes that deployments will always succeed. When they do not, helm will automatically roll back the desired change. This presents an issue as the master branch is no longer an accurate representation of what is running on the cluster. Utilize this issue to track what we can do to ensure that master is a reflection of what exists on a cluster. - -## Thoughts - -* We would want to set up a failure job that performs work, potentially reverting the change -* Issues with linked MRs would need some form of communication to know that a deploy was unsuccessful; anything that may have been auto-closed may need to be reopened - -/cc @gitlab-org/delivery",1.0 -23845203,2019-08-16 14:49:54.746,GPRD Kubernetes Cluster was inadvertently created with preemptible instances,The cluster created with this node type was a mistake. Utilize this issue to switch the node pool to non-preemptible nodes.,1.0 -23845090,2019-08-16 14:43:49.369,GPRD Kubernetes Cluster is generating many errors,"Utilize this issue to track these errors and learn/discuss/address. - -https://log.gitlab.net/goto/5770bdf4c1f9aa4707de3a60fcd6e37b#/?_g=h@44136fa - -| Log | Filter? 
| Ignorable | Info | -| --- | ------- | --------- | ---- | -| `daemonsets/fluentd-gcp-v3.2.0 was not changed` | possible | Yes | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_204637830 | -| `rm: can't remove '/etc/ssl/certs/*': No such file or directory` | possible | Yes | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_204643312 | -| `WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping` | possible | Yes | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_204643312 | -| `unable to fetch pod metrics for pod /: no metrics known for pod` | Not advised | No | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_204658792 | -| `...watch of \*v1.Endpoints ended with: too old resource version...` | possible | Possibly | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_204660050 | -| `caller=shipper.go:350 msg=""upload new block""...` | Not advised | Yes | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_204660110 | -| prometheus info level messages | Not advised | Yes | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_204813763 | -| `time="""" level=info msg=""debug server listening :5001""` | Not advised | Yes | https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_205157205 | - -### Actionable Items - -https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7577#note_205227095",3.0 -23812123,2019-08-15 15:41:57.460,Enable json logs on registry,"Currently we are not using structured logs on the registry in GKE; we should enable them.",3.0 -23794489,2019-08-15 02:25:48.952,Change the onprem-testbed env from using GCP filestore to NFSv4 on the gitaly node,Requested in the thread at https://gitlab.com/gitlab-org/gitaly/issues/1708#note_200047676,2.0 -23789919,2019-08-14 19:28:39.258,Investigate saturation of read-only replica PGbouncer (and lack of load distribution),"During https://gitlab.com/gitlab-com/gl-infra/production/issues/1054, we observed the following behavior: - -* At 8:30a UTC `patroni01` experienced a network issue that caused Patroni to fail over to `patroni04` - * `patroni01` ended up in a corrupted state and was offline - * `patroni04` became the new master - * `1` master, `5` replicas - -* `patroni06` immediately became overloaded: - * server connections maxed out at 100 (as configured) - * active clients dropped - * waiting connections increased - -![Screen_Shot_2019-08-14_at_9.22.28_PM](/uploads/99de8c23fc6a81c02ae14e4c23b9b29a/Screen_Shot_2019-08-14_at_9.22.28_PM.png) - -* however, all other replicas, while registering the failover blip, managed to stabilize nearly immediately: - -![Screen_Shot_2019-08-14_at_9.27.34_PM](/uploads/af8e0f6aef112449bb205b431b4f225b/Screen_Shot_2019-08-14_at_9.27.34_PM.png) - -The behavior of `06` wasn't expected. As an experiment, we took `06` out of the cluster temporarily. What we observed was `03` cratering under the excess load: - -![Screen_Shot_2019-08-15_at_12.10.40_AM](/uploads/454b1843f3d5de33ffd2ac6a484cbd55/Screen_Shot_2019-08-15_at_12.10.40_AM.png) - -When `01` was finally restored and added to the cluster as a read-only replica, we attempted the experiment of pulling 06 out of the rotation.
As 03 did, it cratered: - -![Screen_Shot_2019-08-15_at_12.12.25_AM](/uploads/9cd6b5a9f969ebbbbb6444ffbbd7d7dd/Screen_Shot_2019-08-15_at_12.12.25_AM.png) - -@stanhu checked the internal list of hosts (no screenshot or data saved) and it simply did not seem to have enough entropy. From memory, `05` was the first database replica in the list. After we added `01` to the cluster, `05` was still first in the list and `01` was last. With a sample of 1 observation, the combination we saw post rejoin is possible, but it seems unlikely. We expected not to see `05` in the same spot as before, and we didn't expect to see `01` last. - -It is worth noting that once we were back to 5 replicas, we started seeing a slow recovery on `06`: - -![Screen_Shot_2019-08-15_at_12.23.27_AM](/uploads/6584dfc0630308f546742333691422e6/Screen_Shot_2019-08-15_at_12.23.27_AM.png) - -Thus, there is clearly a capacity component to this riddle. As a precaution, we added another database replica, `07`, in hopes that this addition will buy us some runway in case of having another replica fail.",3.0 -23788247,2019-08-14 17:51:26.218,Communication during incidents,"Communication during https://gitlab.com/gitlab-com/gl-infra/production/issues/1054 was lacking in effectiveness. - -* we neglected to update status.io for some time -* we did not notify Support -* there was a fair amount of *what we know* and *what we've tried* as people joined the incident call",2.0 -23787912,2019-08-14 17:27:28.050,Client-side read-replica list observability,"During https://gitlab.com/gitlab-com/gl-infra/production/issues/1054, we observed unexpected behavior from the application read-replica load balancing. When the `01` node failed over to `04`, the failover caused significant, uneven load to shift to `06`. When we removed `06` from the cluster, the load shifted to `03`. When we undid that, the load came back to `03`. When we added the restored `01` and tried to remove `06`, `01` became overloaded. It was only when we had all replicas working and let them have time to stabilize that we saw the platform go back to nominal operation. - -One of the questions we asked during the incident was ""what does the list look like across the fleet"". It turns out we can't easily answer that. @stanhu did some magic and was able to watch this on a few nodes: - -![Screen_Shot_2019-08-14_at_3.15.21_PM](/uploads/e02f5543900cac729f2acb5b3445b74a/Screen_Shot_2019-08-14_at_3.15.21_PM.png) - -It would be useful to expose this data and be able to see it site-wide. Is it really as randomized as we think it should be? Exposing this in logs might be a first step, but it would require on-the-fly processing. A possible visual approach is showing, for each replica, how many clients have it in each position in the array. We may cap that at 4 positions to start, regardless of the number of replicas. In any event, being able to see this during an incident like today's is imperative.",3.0 -23787703,2019-08-14 17:14:21.572,Patroni documentation and training,"Post https://gitlab.com/gitlab-com/gl-infra/production/issues/1054, we realized that we need to do a review of Patroni documentation and runbooks and do training on how the database cluster is configured, how Patroni works, and how we perform common operations (add a node, remove a node, check on status, build a node, etc). - -Everyone on-call (EOCs and IMOCs) must become familiar with this.
DNA seems like a good setting to run such training.",3.0 -23787506,2019-08-14 17:03:34.745,Patroni runbooks and tooling => 20%,"During https://gitlab.com/gitlab-com/gl-infra/production/issues/1054, it was apparent some of our Patroni runbooks lacked details, didn't work, or were out of date. For instance, creating a new replica ran into a bunch of issues, and there were questions about the validity of the Chef/Terraform state: disk sizing, Patroni startup, etc. - -Also, this still requires deep knowledge of flags, and such. We should think about abstracting some of these operations in a tool that hides this complexity. I want to be able to type something like 'foo create replica --production' or 'foo disable replica --production ', where *foo* is likely `gitlab-ctl patroni`. - -The way we do this today is too risky, as we craft commands on the fly and execute them on hosts in production.",3.0 -23787489,2019-08-14 17:02:49.062,Patroni runbooks: add enable statistics,"During https://gitlab.com/gitlab-com/gl-infra/production/issues/1054, @craigf mentioned that statistics had to be enabled on a new master after failover, but that this was not covered in runbooks.",1.0 -23787387,2019-08-14 16:57:21.830,Runbook/Handbook update: how to page Ongres,"During https://gitlab.com/gitlab-com/gl-infra/production/issues/1054, it wasn't clear to the EOC how to page Ongres. Add to Runbook/Handbook, and make changes to `chatops` as necessary to reduce friction.",1.0 -23756624,2019-08-13 22:17:02.706,Set 301 redirect catchall on HAProxy for allremote.info domain,"Per the request in gitlab-com/marketing/corporate-marketing#789 we need to do the following: - -Also do the work for remoteonly.org in https://gitlab.com/gitlab-com/www-gitlab-com/issues/6153",1.0 -23753683,2019-08-13 19:54:06.375,Build automation for calculation and storage of MTBF for user facing services,"Sub issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7431. - -Starting point to discuss the question on how we track MTBF. - -Currently we have a [spreadsheet here](https://docs.google.com/spreadsheets/d/1bCEUQbWMccVm1dfBrOFtXjyDVCflHu4rB6htOaC_0yI/edit#gid=1150439504), but it is manual. - -Questions for discussion here: -1. Where are we going to get the data? Currently production issues. -1. What are the MTBF metrics we want to track? - - MTBF between Sev 1/2 incident issues - - MTBF between any incident issue - - Further breakdown by tags - e.g. MTBF by service? -1. Where do we want to store the end result - prometheus, periscope? - -adding @alejandro @cmcfarland @devin @ahanselka @nnelson @ggillies to start discussion on these",1.0 -23722603,2019-08-13 04:11:07.867,Cert-manager versions older than 0.8.0 need to be upgraded by Nov 1st,"Letsencrypt says: -``` -We've been working with Jetstack, the authors of cert-manager, on a -series of fixes to the client. Cert-manager sometimes falls into a -traffic pattern where it sends really excessive traffic to Let's -Encrypt's servers, continuously. To mitigate this, we plan to start -blocking all traffic from cert-manager versions less than 0.8.0 (the -current semver minor release), as of November 1, 2019. Please upgrade -all of your cert-manager instances before then. -``` - -Some of our older Kubernetes clusters fall into this category - probably because of autodevops setups which run cert-manager pods. - -/cc @skarbek",2.0 -23717031,2019-08-12 21:16:07.025,RCA: Consul SSL Issue,"Incident: gitlab-com/gl-infra/production#1037 - -## Summary - -A brief summary of what happened. 
Try to make it as executive-friendly as possible. - -We discovered expired, self signed certificates on our consul servers. These certificates could not be renewed in the usual way because the signing key for the Certificate Authority was no longer available. Existing TLS connections to the service were still up and passing traffic, but any change or network interruption would cause them to disconnect and not be able to reconnect. This is a problem because our database high availability setup uses those connections for service location. If any of the existing connections from web nodes or api nodes were interrupted, they wouldn't be able to find the database. If any of the connections from the database nodes were interrupted, the database would fail over and not be able to decide which is primary. Each of these situations would be very bad, and both of them together would render the entire site unusable until it was fixed. - -The problem in this case was that going to each machine and addressing the problem one at a time would not work. Even rolling out or pushing a change via Chef would leave us with each individual node non-functional for 1 to 30 minutes. All changes needed to be made (exactly) simultaneously, without allowing the database to fail over. - -This left a lot of individual risks, and a lot of unknowns to test and validate. There were several possible solutions to work through, and after walking through them we decided that turning off validation of the certificates would both remove all of the risk, and allow time to come up with a proper solution for certificate management. All of the other options required a similar amount of effort and more importantly the same risk and process for simultaneously restarting the service everywhere. Other solutions explored were: - -- Replacing the certificates and CA with another self signed cert -- Switching from a single custom CA to the system CA store and using sslmate -- Switching to a letsencrypt cert - -### Metadata - -- Service(s) affected : Consul, Database, PGBouncer, Patroni, Web/API -- Team attribution : SRE -- Minutes downtime or degradation : 0 (10 seconds for consul, 1 minute for Patroni) - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? (i.e. service outage, sub-service brown-out, exposure of sensitive data, ...) - -The impact was the elevated risk in that any interruption in any established TCP connection would cause either a partial or total outage of GitLab.com, depending on the node(s) involved. - -- Who was impacted by this incident? (i.e. external customers, internal customers, specific teams, ...) - -Everyone using GitLab.com could have been impacted. In the end, nobody was impacted and nobody noticed who was not involved in the activity. - -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) - -There was no impact to customers - - -## Detection & Response - -Start with the following: - -- How was the incident detected? - -This was detected by a restart of one of the database servers in staging. It could not re-connect to consul. - -- Did alarming work as expected? - -No. We had no alert for the expiration of this certificate, since it was never intended to go into production. - -- How long did it take from the start of the incident to its detection? - -3 days from certificate expiration to noticing it - -- How long did it take from detection to remediation? 
- -About 2 days of troubleshooting, planning, and a maintenance window to remediate - -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team member wasn't page-able, ...) - - - sshguard on consul servers was locking out the bastion hosts. - - Behavior when testing in staging did not match behavior when testing in DR - -## Root Cause Analysis - -- The SSL certificates were expired on the consul hosts -- Self signed certificates were in use and the CA key no longer existed -- No production readiness review was done -- These servers were originally a proof of concept and were later promoted to production -- Moving too fast due to the rush to switch the database high availability technology to Patroni - - -## What went well - -Start with the following: - -- The process of restarting consul on all servers without causing an outage went exactly as planned. -- The team did an amazing job of covering all of the possible risks and planning around them. -- The handover between time zones was extremely helpful. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. - -Our method of managing certificates is not optimal. Certificates should automatically renew in all cases. - -- Is there anything that could have been done to improve the detection or time to detection? - -All certificates should be monitored, especially in cases where they do not auto-renew - but even when they do -A production readiness review should have caught this usage of a self signed certificate and its associated CA - -- Is there anything that could have been done to improve the response or time to response? - -We could have handed over the planning and response from the APAC shift to the Europe shift after the troubleshooting was finished. We decided instead to set up an emergency procedure and have the people who did the troubleshooting and testing be the ones to plan and execute the response. In retrospect this was the right decision - but if the situation had been more urgent, we could have reduced the time. - -- Is there an existing issue that would have either prevented this incident or reduced the impact? - -Yes: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/1574 - -- Did we have any indication or beforehand knowledge that this incident might take place? - -Since there were no alerts, we had no indication. - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. - - - https://gitlab.com/gitlab-com/gl-infra/production/issues/1042 - - https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7484 - sshguard blocking consul servers - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -23683431,2019-08-12 07:18:53.744,Deep Dive on production#1039,@aamarsanaa @hphilipps please drive a deep dive on the incident https://gitlab.com/gitlab-com/gl-infra/production/issues/1039. We'll perform a walkthrough of the incident during the DNA meeting on 14 August. Please re-familiarize yourselves with the incident and RCA. 
This issue has much more to do with process than troubleshooting.,1.0 -23679500,2019-08-12 01:37:44.838,Replace SSL cert for dashboards.gitlab.com,"``` -https://dashboards.gitlab.com - SSL certificate for https://dashboards.gitlab.com expires in 4d 23h 24m 58s -``` ------ -``` -$ echo """" |openssl s_client -connect dashboards.gitlab.com:443 -showcerts 2>&1|openssl x509 -noout -enddate -notAfter=Aug 16 23:59:59 2019 GMT -``` ------ -``` -sslmate list |grep dashboards.gitlab.com -dashboards.gitlab.com DV Active 2020-08-17 No key file -``` ------ -Certificates stored/managed in gcloud, so we'll have to upload an entire new one, which means we need to rekey in sslmate.",1.0 -23678289,2019-08-11 22:01:21.387,[Project] Repository Migration => 0%,Epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/65,4.0 -23674684,2019-08-11 16:20:25.116,-------------------- Cut Line --------------------,"This issue is a functional hack in this board: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/boards/1240891 - -See https://gitlab.com/gitlab-com/www-gitlab-com/issues/4075 for the feature request for real cut lines.",1.0 -23674652,2019-08-11 16:18:22.458,-------------------- Cut Line --------------------,"This issue is a functional hack in this board: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/boards/1240891 - -See https://gitlab.com/gitlab-com/www-gitlab-com/issues/4075 for the feature request for real cut lines.",1.0 -23674367,2019-08-11 16:15:10.085,-------------------- Cut Line --------------------,"This issue is a functional hack in this board: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/boards/1240891 - -See https://gitlab.com/gitlab-com/www-gitlab-com/issues/4075 for the feature request for real cut lines.",1.0 -23661453,2019-08-10 21:20:39.569,"Operations Analyst, Infrastructure role to Data Analyst, Infrastructure role",,1.0 -23661438,2019-08-10 21:18:06.406,MTTP for Infrastructure,"Measure our ***mean time to production** for Infrastructure change (i.e., the elapsed time from commit on master to having the change applied to production): Chef, Terraform, etc.",3.0 -23646435,2019-08-09 17:32:07.474,Clean up of Production Board,"The Production Board (https://gitlab.com/gitlab-com/gl-infra/production/-/boards/1204483) has a number of items in the `Open` state but do not seem like incidents, changes or deltas. We need to clean those up, either by moving them to Infrastructure project or closing them out. - -Additionally, there are a bunch of changes which aren't really changes or were changes at some point which were unfinished. As a general rule, a change has clearly defined start and end states. Let's clean that up as well, either by finishing them or moving them to be Infra issues.",2.0 -23632177,2019-08-09 08:42:59.010,Certificate runbooks =>0%,"Ensure we have certificate renewal and installation runbooks for all certificates in use. As we do not have good certificate inventory capabilities, as good place to start gathering the inventory might be Chef. - -Epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/87",4.0 -23631223,2019-08-09 08:17:00.785,Certificate monitoring,"Ensure we have expiration monitoring for all certificates in use. As we do not have good certificate inventory capabilities, as good place to start gathering the inventory might be Chef.",5.0 -23621412,2019-08-08 22:11:38.619,sshguard Blocking access to consul servers,"`sshguard` is blocking ssh access to consul servers. 
It is getting requests from the bastions which it interprets as hostile and adds an iptables rule to block the bastions on the ssh port. This has the effect of locking everyone out. We should either set a timeout that allows traffic again, or use a different method of determining hostile traffic. - -``` -sudo iptables -nvL --line -``` -Results in lines like the following - sometimes 2 or 3 bastion servers are listed - -``` -Chain sshguard (1 references) -num pkts bytes target prot opt in out source destination -1 14214 935K DROP all -- * * 10.216.4.4 0.0.0.0/0 -``` - -The line can be removed with - -``` -sudo iptables -D sshguard 1 -``` - -We should also determine exactly which traffic is triggering this behavior. If it is not being caused by someone with access who has their ssh setup misconfigured, then it is being caused by someone malicious who has gotten access to the bastions.",1.0 -23618000,2019-08-08 19:33:07.855,RCA Merge requests getting closed inadvertently," - -Incident: gitlab-com/gl-infra/production#NNN - -## Summary - -Because of the issue noted in https://gitlab.com/gitlab-com/gl-infra/production/issues/1039, some MRs had their source branches reported missing and the refresh service closed all MRs that it found with a reported missing source branch. - -- Service(s) affected : ~""Service:Sidekiq"" - -- Team attribution : - -- Minutes downtime or degradation : - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at apdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - a small number of MRs got closed inadvertently -- Who was impacted by this incident? - - a small number of users (internal and external) -- How did the incident impact customers? - - MRs were closed which should have stayed open. -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - reports from users -- Did alarming work as expected? - - there is no alarming for this kind of event -- How long did it take from the start of the incident to its detection? -- How long did it take from detection to remediation? -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team member wasn't page-able, ...) - -## Root Cause Analysis - -Merge requests got closed unexpectedly. - -1. Why? - MergeRequests::RefreshService closed some MRs. -2. Why? - Because their branches seemed missing. -3. Why? - The branch cache was not invalidated. -4. Why? - We killed `post_receive` jobs during incident https://gitlab.com/gitlab-com/gl-infra/production/issues/1039 which seems to have prevented branch invalidation in a few cases. -5. Why? - Because there is a possible race condition with branch invalidation (https://gitlab.com/gitlab-org/gitlab-ce/issues/65803) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. 
- -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -* [x] fix for MergeRequestWorker race condition https://gitlab.com/gitlab-org/gitlab-ce/issues/65803 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -23612242,2019-08-08 15:30:03.111,Work out exactly how many merge requests were closed in https://gitlab.com/gitlab-com/gl-infra/production/issues/1040,"In https://gitlab.com/gitlab-com/gl-infra/production/issues/1040 a race-condition between the PostReceiveWorker and the Merge Request Refresh Worker (?) related to delayed cache invalidation led to merge requests getting closed by a bug in the application. - -At present we do not know how many were closed. - -The decision has been made not to automatically reopen the MRs as this may lead to further confusion for users, however it's important from a messaging point-of-view that we understand roughly how many users were affected by this bug. - -@abrandl has done some initial analysis so assigning to him. - -cc @smcgivern for any insight into how we can determine whether an MR was closed by a missing branch or a user action.",0.0 -23611966,2019-08-08 15:19:47.342,RCA: Gitaly n+1 calls causing bad latency and sidekiq queues to grow," - -Incident: gitlab-com/gl-infra/production#1039 - -Rapid Action Issue: https://gitlab.com/gitlab-com/www-gitlab-com/issues/4997 - -## Summary - -Some commits with a massive amount of tags caused jobs to make many Gitaly calls, leading to higher Gitaly latency and growing sidekiq queues. - -For timeline see the incident issue: https://gitlab.com/gitlab-com/gl-infra/production/issues/1039#timeline - -- Service(s) affected : ~""Service:Gitaly"" ~""Service:Sidekiq"" ~""Service:Web"" - -- Team attribution : - -- Minutes downtime or degradation : 05:10 - 14:55 = 9h45m = 585m - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -![image](/uploads/76ce8e67855fdc9ef9667caac01c5b94/image.png) - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - higher Gitaly latencies, Sidekiq queues growing, higher web latencies -- Who was impacted by this incident? - - all users waiting for jobs to be triggered or finish or web hooks -- How did the incident impact customers? -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -## Detection & Response - -Start with the following: - -- How was the incident detected? 
- - 05:29 UTC [gitaly latency APDEX alert](https://gitlab.slack.com/archives/CD6HFD1L0/p1565242198350300) in #alerts-general which was noticed by EOC at 6:30 -- Did alarming work as expected? - - Alarming on queue size should fire earlier and Gitaly latency alerts should go to pagerduty -- How long did it take from the start of the incident to its detection? - - queues started to grow at 05:10, got detected by EOC at 06:30 = 80m -- How long did it take from detection to remediation? - - 06:30 - 14:55 = 8h25m = 505m -- Were there any issues with the response to the incident? - -## Root Cause Analysis - -Sidekiq jobs were piling up over hours. - -1. Why? - Jobs took longer to process. -2. Why? - Gitaly latency was getting worse. -3. Why? - There were more gitaly calls made by some jobs. -4. Why? - Some jobs were processing massive amounts of tags which cause n+1 problems for Gitaly -5. Why? - Commits of a user contained too many tags. - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -- we should add limits for things like the amount of tags -- improve performance for handling many tags -- improve sidekiq architecture -- alerts for growing queue size should fire earlier -- Gitaly latency APDEX alerts should go to pagerduty -- paging CMOC via `/pd-mgr` slack command doesn't seem to work? -- change the severity label on the incident ticket in time to reflect our current rating of the incident severity -- make status.io updates more meaningful for customers - -Start with the following: - -## Corrective actions - -* [x] add chef config for the sidekiq changes we did manual: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1619 -* [x] create `find_tag` RPC https://gitlab.com/gitlab-org/gitaly/issues/1848 -* [ ] implement `find_tag` RPC https://gitlab.com/gitlab-org/gitlab-ce/issues/65795 -* [x] PostReceive should have bounds on how many changes it processes. 
https://gitlab.com/gitlab-org/gitlab-ce/issues/65804 -* [x] add timeouts to gitaly calls from sidekiq https://gitlab.com/gitlab-com/www-gitlab-com/issues/4997 -* [x] make it possible to kill running sidekiq jobs https://gitlab.com/gitlab-org/gitlab-ce/issues/51096 -* [ ] re-architect queue implementation https://gitlab.com/gitlab-com/www-gitlab-com/issues/4951 -* [x] page for gitaly SLO alerts https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7391 -* [x] identify limits to prevent platform incidents https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7481 -* [x] add runbook for analyzing Gitaly pprof data https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7751 -* [x] document marquee_account_alerts and infra-escalation channels in oncall runbook - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/customer-success/professional-services-engineering/workflows/internal/root-cause-analysis.html) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -23606406,2019-08-08 12:35:07.926,Terraform GKE Module failure to apply network policy,"When the network policy is disabled as desired, we still attempt to configure the provider, which fails: - -``` -1 error occurred: - * module.gke-skarbek.google_container_cluster.cluster: 1 error occurred: - * google_container_cluster.cluster: googleapi: Error 400: The network policy addon must be enabled before updating the nodes., badRequest -```",3.0 -23606383,2019-08-08 12:34:07.585,Terraform GKE module unable to upgrade Kubernetes version,"When attempting to upgrade the cluster using terraform, we fail: - -``` -1 error occurred: - * module.gitlab-gke.google_container_cluster.cluster: 1 error occurred: - * google_container_cluster.cluster: node_version was updated but default-pool was not found. To update the version for a non-default pool, use the version attribute on that pool. -```",3.0 -23580291,2019-08-07 15:33:07.220,service observability review,"With team-level ownership of services, one of the first and most urgent tasks we need to perform is an in-depth review of service observability (as highlighted by the recent consul certificate expiration incident). - -Please create issues for each service for said review and execute with the **highest priority**.",2.0 -23573661,2019-08-07 12:19:55.691,Create Production GKE Clusters,"Utilize this issue to: -* discuss how we want to spec our production GKE cluster(s) -* decide if we should have 1 cluster for `production` and cluster for `cny`, or if 1 cluster can handle both -* After decisions are made, either spin up a new issue, or reuse this one to complete the work - -Our design documents did not go into the desired detail to complete this implementation. - -/cc @gitlab-org/delivery -/cc @gitlab-com/gl-infra",3.0 -23564824,2019-08-07 08:03:04.210,Provision short-lived consoles with terraform + chef,"Chef doesn't converge on special-case console instances with a name other than ""console"" as seen in https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/merge_requests/894. I'll investigate why.",1.0 -23553317,2019-08-06 20:31:30.681,Error deploying license.gitlab.com,"`sudo chef-client`: - -``` -[2019-08-06T20:30:17+00:00] WARN: Resource cron_access from the client is overriding the resource from a cookbook. Please upgrade your cookbook or remove the cookbook from your run_list. -[2019-08-06T20:30:17+00:00] WARN: Resource cron_access from the client is overriding the resource from a cookbook. 
Please upgrade your cookbook or remove the cookbook from your run_list. -[2019-08-06T20:30:17+00:00] WARN: Resource cron_manage from the client is overriding the resource from a cookbook. Please upgrade your cookbook or remove the cookbook from your run_list. -[2019-08-06T20:30:17+00:00] WARN: Resource cron_d from the client is overriding the resource from a cookbook. Please upgrade your cookbook or remove the cookbook from your run_list. -[2019-08-06T20:30:17+00:00] WARN: Resource chef_handler from the client is overriding the resource from a cookbook. Please upgrade your cookbook or remove the cookbook from your run_list. -[2019-08-06T20:30:17+00:00] WARN: Resource zypper_repo from the client is overriding the resource from a cookbook. Please upgrade your cookbook or remove the cookbook from your run_list. -Recipe: gitlab-exporters::chef_client - * chef_gem[prometheus-client] action install (up to date) - * directory[/var/chef/handlers] action create (up to date) - * cookbook_file[/var/chef/handlers/prometheus_handler.rb] action create (up to date) - * chef_handler[PrometheusHandler] action enable (up to date) - - ================================================================================ - Recipe Compile Error in /var/chef/cache/cookbooks/cookbook-license-gitlab-com/recipes/default.rb - ================================================================================ - - NoMethodError - ------------- - undefined method `supports' for Chef::Resource::User::LinuxUser - - Cookbook Trace: - --------------- - /var/chef/cache/cookbooks/cookbook-license-gitlab-com/recipes/user.rb:12:in `block in from_file' - /var/chef/cache/cookbooks/cookbook-license-gitlab-com/recipes/user.rb:9:in `from_file' - /var/chef/cache/cookbooks/cookbook-license-gitlab-com/recipes/default.rb:9:in `from_file' - - Relevant File Content: - ---------------------- - /var/chef/cache/cookbooks/cookbook-license-gitlab-com/recipes/user.rb: - - 5: # - 6: # Copyright 2016, GitLab Inc. - 7: # - 8: - 9: user 'gitlab-license' do - 10: shell '/bin/false' - 11: system true - 12>> supports :manage_home => true - 13: end - 14: - 15: directory '/home/gitlab-license' do - 16: recursive true - 17: owner 'gitlab-license' - 18: group 'gitlab-license' - 19: end - 20: - - System Info: - ------------ - chef_version=14.13.11 - platform=ubuntu - platform_version=14.04 - ruby=ruby 2.5.5p157 (2019-03-15 revision 67260) [x86_64-linux] - program_name=/usr/bin/chef-client - executable=/opt/chef/bin/chef-client - - - Running handlers: -[2019-08-06T20:30:21+00:00] ERROR: Running exception handlers - - PrometheusHandler - Running handlers complete -[2019-08-06T20:30:21+00:00] ERROR: Exception handlers complete - Chef Client failed. 0 resources updated in 19 seconds -[2019-08-06T20:30:21+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out -[2019-08-06T20:30:21+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report -[2019-08-06T20:30:21+00:00] FATAL: NoMethodError: undefined method `supports' for Chef::Resource::User::LinuxUser -```",2.0 -23551782,2019-08-06 19:01:06.054,Improve error-handling in Postgres graceful-failover script,"**Goals:** - -Improve the error-handling of our [`graceful-failover` script](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/master/bin/graceful-failover): -* The script appears to have killed itself during error-handling, before calling its `resume_pgbouncer` function. 
-* The deadlines of 10 and 5 seconds appears to not always be long enough to complete all of the PgBouncer PAUSE operations. Consider tuning those deadlines. - -**Background:** - -The [`graceful-failover` script](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/blob/master/bin/graceful-failover) automates the steps of switching which postgres instance is currently the primary (writable) node: -* Identify the current primary member of the given environment's Patroni cluster. -* Issue the `PAUSE` command to the PgBouncer instances used by db clients that need write access. -* Wait for up to 10 seconds for the `PAUSE` to complete on all PgBouncer instances. -* If still waiting, kill all active or idle-in-transaction db sessions on the primary Postgres node. -* Wait for up to 5 more seconds for the `PAUSE` to complete on all PgBouncer instances. -* If still waiting, try to abort: Kill the `knife` command that was trying to issue the `PAUSE` commands, issue a `RESUME` command to all PgBouncers, and then exit the script. -* At this point in the script, assume the `PAUSE` succeeded. -* Tell Patroni to run the switchover immediately, preferring the specified node as the new primary. -* Unpause all PgBouncer instances. - -During today's production maintenance, the script had 2 problems: -1. The PAUSE command was slow enough to reach the script's 1st and 2nd deadline. That is abnormal, and we should look for ways to reduce the chances of that happening. Note that the PgBouncer [PAUSE command](https://pgbouncer.github.io/usage.html#pause-db) is a blocking operation, so the duration is partially dependent on client behavior. -2. The error-handling routine failed to reach the line that would un-pause the PgBouncer instances. That left PgBouncers in a paused state, causing extended downtime. - -Details of today's automation failure are logged in [this production change issue](https://gitlab.com/gitlab-com/gl-infra/production/issues/952#note_200807981).",3.0 -23551529,2019-08-06 18:45:22.074,Automate terraform for env-projects,"Update gitlab-ci.yml in gitlab-com/gitlab-com-infrastructure> to automatically plan/apply configurations in `environments/env-projects` as well as the other application-centric environments. This environment may need to be handled a little differently because of the inherent circular dependencies. There are multiple approaches to handle bootstrapping the remote state for the genesis project the first time, ([here](https://www.monterail.com/blog/chicken-or-egg-terraforms-remote-backend) is an example [with sample code](https://github.com/monterail/terraform-bootstrap-example))",2.0 -23551395,2019-08-06 18:38:16.005,Add non-core GCP projects to env-projects,Add `infra-vault` (and other?) project(s) to the same configuration as our core projects under `environments/env-projects` in gitlab-com/gitlab-com-infrastructure> to make everything (more?) consistent,1.0 -23551276,2019-08-06 18:31:15.444,Use one consistent method to manage secrets buckets,"We have added the `env-projects` environment to gitlab-com/gitlab-com-infrastructure> in order to start managing GCP projects and the service accounts used to provision resources into those projects via infrastructure-as-code (terraform) using the https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project module. 
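One way to see that drift in practice - a rough sketch that is not part of the original issue; the project IDs and the `secret` name filter are illustrative assumptions, not the real layout - is to dump the metadata of the candidate secrets buckets side by side and compare their attributes:

```bash
# Illustrative only: print full bucket metadata (location, storage class,
# ACLs, versioning, labels) for anything that looks like a secrets bucket,
# so the differing attribute permutations can be compared. The project IDs
# and the 'secret' substring match are assumptions for this sketch.
for project in gitlab-production gitlab-staging-1; do
  echo "### ${project}"
  gsutil ls -p "${project}" | grep -i 'secret' | while read -r bucket; do
    gsutil ls -L -b "${bucket}"
  done
done
```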
As a follow-on step, we need to sort out how we handle [the secrets bucket](https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project/blob/8846b4b6ba45d1917ceaa1b33cc465b085a9d35c/main.tf#L29) - we have conflicting buckets created in multiple different modules, and the `project` module likely isn't the best place for them. - -Some are created using the `project` module, while others are created using the `gitlab-storage` module, and we have various permutations of attributes in either case.",2.0 -23551198,2019-08-06 18:27:15.229,Document bootstrapping GCP projects with terraform,"We have added the `env-projects` environment to gitlab-com/gitlab-com-infrastructure> in order to start managing GCP projects and the service accounts used to provision resources into those projects via infrastructure-as-code (terraform) using the https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project module. As a follow-on step, we need to add/update documentation for project bootstrap and management process (especially how to handle circular dependency to manage `env-zero` in terraform (if possible), as well)",1.0 -23551045,2019-08-06 18:24:57.184,Add IAM permissions for terraform service accounts,"We have added the `env-projects` environment to gitlab-com/gitlab-com-infrastructure> in order to start managing GCP projects and the service accounts used to provision resources into those projects via infrastructure-as-code (terraform) using the https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project module. As a follow-on step, we need to add/import/update the relevant IAM permissions for those service accounts.",1.0 -23549558,2019-08-06 17:12:02.055,Understand why Postgres failover caused a brief outage,"When failing over the master node, postgres caused a brief outage. - -https://gitlab.com/gitlab-com/gl-infra/production/issues/1035 - -This issue is to understand why. - -/cc @abrandl @Finotto",1.0 -23549530,2019-08-06 17:10:27.257,Manage the registry application grafana dashboards with libsonnet,"* https://dashboards.gitlab.net/d/CoBSgj8iz/application-info?orgId=1 -* https://dashboards.gitlab.net/d/oWe9aYxmk/pod-metrics?orgId=1 - -These need to be converted over from the current manual creation method into libsonnet. - - -/cc @gitlab-com/gl-infra",5.0 -23521299,2019-08-05 21:01:56.116,Why do the PgBouncer hosts have a very uneven distribution of client connections?,"**Problem:** - -A large majority of database client connections are being handled by a single PgBouncer instance, despite the fact that 2 PgBouncer instances are active backends in our Google Internal TCP Load Balancer. This imbalance defeats our attempt to alleviate our bottleneck of PgBouncer's CPU saturation. - -**Goal:** - -Learn why traffic is not being roughly equally distributed among the active members of the Google Internal TCP Load Balancer (ILB). This is a research task that should result in a recommendation. - -**Alternatives:** - -If we cannot fix this imbalance, we could instead use HAProxy as a load balancer in front of several PgBouncer instances. This would optionally let us run multiple PgBouncer instances per host. However, unlike the Google ILB, HAProxy would be an additional TCP endpoint, adding more network latency and another component to manage. - -**Background:** - - - Recently (~4-8 weeks ago) in response to a performance problem we changed how database clients connect to the primary Postgres instance in our Patroni cluster. 
- - Previously, all clients needing write-access to Postgres would connect to a single PgBouncer instance running on the same host as the primary Postgres instance. - - PgBouncer is a single-threaded process, so it can use at most 1 CPU's worth of compute cycles. - - That PgBouncer instance's workload sometimes saturated its 1 CPU ceiling under our peak daily workload. - - To spread that workload among more CPUs, we needed to add more PgBouncer instances and treat them as a pool. - - We reconfigured the DB Clients connect to the primary Postgres database via a Google Internal TCP Load Balancer (ILB) that points to a pool of currently 2 active PgBouncer instances. - - The ILB is not a proxy; it is a set of network routing rules that lets clients talk directly to the backend service instances (PgBouncer hosts). - - The ILB's backend-service actually contains 2 active hosts and 1 inactive host. Only the 2 active hosts matter for this analysis. - - Surprisingly, the workload is *not* evenly distributed among the PgBouncer hosts. - - The number of established connections from clients to PgBouncer is very unevenly distributed among the 2 active PgBouncer hosts. - - This unevenness defeats our goal of spreading the workload among multiple PgBouncer instances. - - Clients whose hostnames start with ""web"" and ""git"" seem to be the most strongly affected by this bias for preferring to connect to host ""pgbouncer-03"" rather than ""pgbouncer-02"".",5.0 -23520099,2019-08-05 19:49:35.326,RCA: Note Creation on commit via API Calls Halted `new_note` Sidekiq Queue," - -Incident: gitlab-com/gl-infra/production#1028 - -## Summary - -A user generated many notes on a single commit via API calls which slowed down the `new_note` sidekiq queue and blocked sending notifications for issue and MR comments for all customers. - -- Service(s) affected : ~""Service:Sidekiq"" -- Team attribution : -- Minutes downtime or degradation : 220 - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - blocking to send out pending issue- and MR comment notifications -- Who was impacted by this incident? - - all users supposed to get notifications -- How did the incident impact customers? - - notifications arrived with a delay of up to 220m -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - - - Up to 14,000 notifications queued up - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -![image](/uploads/7127adb9cf0f94cac69c30480b2fa1c1/image.png) - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - [pagerduty alert](https://gitlab.pagerduty.com/incidents/PPF0U8C) for `new_note` queue size -- Did alarming work as expected? - - Alarming for queue size worked, but the alert threshold was reached over an hour after the incident started -- How long did it take from the start of the incident to its detection? - - 68m -- How long did it take from detection to remediation? - - 152m -- Were there any issues with the response to the incident? 
- - it wasn't easy to get hold of a backend engineer at the beginning - -## Root Cause Analysis - -`new_note` notifications have been delayed. - -1. Why? - They piled up in the `new_note` sidekiq queue. -2. Why? - The queue was processing jobs considerably slower. -3. Why? - Many long-running jobs have been added to the queue. -4. Why? - A commit with a huge amount of comments got thousands of new comments and processing a commit with many comments is very slow. -5. Why? - Apparently an automation of a user was creating those comments via API inadvertently. - -## What went well - -Our monitoring detected the issue and paged the EOC. Backend engineers did a great job of finding and mitigating the root cause. Thanks to @andrew for jumping on the incident call quickly. - -## What can be improved - -- We should get alerted sooner when notes are stuck in the queue. E.g. not only alerting by queue size but also by job duration. -- Limit the amount of notes a user can create per time and on one commit. -- Make `new_note` processing more efficient on commits with many notes. -- We had the same kind of incident [a year ago](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4230) and should have improved alerting and protections against too many notes since then. - - -## Corrective actions - -* [ ] https://gitlab.com/gitlab-org/gitlab-ce/issues/46676 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/issues/60857 -* [ ] improve alerting https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7752 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -23517805,2019-08-05 18:14:29.121,Update DBRE section of On-Call Handbook section,"The [DBRE section](https://about.gitlab.com/handbook/on-call/#dbre) of the On-Call Handbook reads: - -> For database-related issues the DBRE on-call can be paged. -> Responding to pages is on a best-effort basis and there is no expected response time. -> Alerts are not triggered via automation. All escalations to the DBREs are initiated by a human, either the SRE or manager on-call. - -This needs to be updated, given that we do have an expected response time. Also, I believe we do have automated alerts.",2.0 -23509114,2019-08-05 13:50:49.228,FY20-Q3 Infrastructure IACV OKR: Raise GitLab.com Availability from 96.64% to a consistent 99.95% => 0%,"* [ ] Key result: MTTD, MTTR, MTBF, and MTTP PIs to be at OK health and a minimum of level 2 maturity `=> 0%, comment` -* [ ] Key result: 100% of services have fully defined and observable service levels, error budgets, and capacity planning `=> 0%, comment` -* [ ] Key result: 50% of services running on Kubernetes `=> 0%, comment`",8.0 -23508275,2019-08-05 13:27:08.059,Adjust number of DBRE reqs,,1.0 -23508038,2019-08-05 13:20:26.323,FY20-Q3 CI/CD & Enablement: Drive all user-visible services’ MTBF from 4 days to 6 days => 0%,"Current MTBF calculations spreadsheet: https://docs.google.com/spreadsheets/d/1bCEUQbWMccVm1dfBrOFtXjyDVCflHu4rB6htOaC_0yI/edit#gid=1150439504 - -4 days to 6 days is based on calculation of MTBF for S1/S2 incidents",1.0 -23501053,2019-08-05 11:03:06.948,FY20-Q3 OKR: Drive all user-visible services’ MTTD from Unknown to 5 min => 0%,"* [ ] Key result: 70% (7/10) DevOps lifecycle stages have SLI dashboards for development to monitor. 
`=> 0%, comment` -* [ ] Key result: 100% of team-owned services have fully defined and observable service levels, error budgets, and capacity planning `=> 0%, comment` -* [ ] Key result: 50% of team-owned services running on Kubernetes `=> 0%, comment` -* [ ] Key result: Perform weekly load-testing in staging on 2 team-owned services `=> 0%, comment`",8.0 -23499956,2019-08-05 10:47:30.748,FY20-Q3 Dev & Ops OKR: Drive all user-visible services’ MTTR from 4.66 to 2 (days) => 30%,"* [ ] Key result: 100% of incidents have severities, RCAs, and error budgets accounted for `=> 0%, comment` -* [ ] Key result: 100% of team-owned services have fully defined and observable service levels, error budgets, and capacity planning `=> 0%, comment` -* [ ] Key result: Migrate Postgresql and his ecosystem to use stackgres( kubernetes) in staging. `=> 0%, comment`",8.0 -23451215,2019-08-02 20:10:26.524,Error when deploying customers.gitlab.com,"``` - - gitlab-mitigate-sackpanic (0.1.3) - - gitlab_users (0.1.49) - - chef-client (11.2.0) - - gitlab-prometheus (1.2.0) - - systemd (3.2.4) - - windows (6.0.0) - - gitlab-server (1.1.0) - - gitlab-exporters (1.3.1) - -Running handlers: -[2019-08-02T20:07:32+00:00] ERROR: Running exception handlers -Running handlers complete -[2019-08-02T20:07:32+00:00] ERROR: Exception handlers complete -Chef Client failed. 0 resources updated in 11 seconds -[2019-08-02T20:07:32+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out -[2019-08-02T20:07:32+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report -[2019-08-02T20:07:32+00:00] FATAL: Chef::Exceptions::CookbookChefVersionMismatch: Cookbook 'gitlab_application_ruby' version '4.2.3' depends on chef version ["">= 12.1"", ""< 14""], but the running chef version is 14.13.11 -```",1.0 -23439311,2019-08-02 13:57:40.644,We need a way to test kubernetes application configuration changes locally,"Right now testing application changes must happen on an environment, whether that be `gstg` or `pre`. This is not the safest way to test things if multiple persons are testing. It would be nice to utilize Docker's Kubernetes integration locally, or minikube, or another tool to test application configuration changes as necessary. - -@jarv recently demo'd the use of docker-desktop - -@skarbek has toyed with minikube - -We need to ensure that whatever tool we utilize, it's backward compatible with linux and osx. - -Use this issue to track progress on making this reality.",5.0 -23439198,2019-08-02 13:53:23.683,Migration method for GKE registry; configure our get/set weight scripts,Our current scripts used to get/set the weights for haproxy backends currently does not work properly for the registry frontends. Make the necessary modifications such that we can set the weight of the Kubernetes backed Container Registry backend to allow us to transition between VM's and Kuberenetes nice and easy,1.0 -124902511,2023-03-07 19:17:14.873,Create three new gitaly storage shard nodes file-{90..92}-stor-gprd for storing new projects,"# `Production` Change - -### Change Summary - -Create three new gitaly storage shard nodes `file-{90..92}-stor-gprd` for storing new projects. Why? Increase capacity for new project repository storage. 
- -More details: https://gitlab.com/gitlab-com/gl-infra/capacity-planning/-/issues/507 - -- [ ] [Detailed steps for the change](#detailed-steps-for-the-change) -- [ ] [Build the new VM instance](#build-the-new-vm-instance) -- [ ] [Ensure the creation of the storage directory](#ensure-the-creation-of-the-storage-directory) -- [ ] [Configure the GitLab application so that it is aware of the new node](#configure-the-GitLab-application-so-that-it-is-aware-of-the-new-node) -- [ ] [Add the new Gitaly node to all our Kubernetes container configuration](#add-the-new-gitaly-node-to-all-our-kubernetes-container-configuration) -- [ ] [Test the new node](#test-the-new-node) -- [ ] [Enable the new node in Gitlab](#enable-the-new-node-in-gitlab) - - -### Change Details - -1. **Services Impacted** - ~""Service::Gitaly"" -1. **Change Technician** - @nnelson -1. **Change Reviewer** - @cmcfarland -1. **Time tracking** - `2 hours` -1. **Downtime Component** - `zero downtime` - -## Meta - -- [x] Replace all occurrences of ""`XX`"" with the new gitaly shard node number when executing commands. -- [x] Set the title of this production change issue to: Create new gitaly storage shard node `file-90-stor-gprd` for storing new projects - - -### Detailed steps for the change - -The following are the detailed steps for the change. - - -### Build the new VM instance - -- **pre-conditions for execution of the step** - - [x] [Create a new MR](https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/new). - * The commit should increment the `""node_count"" -> ""default"" -> ""multizone-stor""` variable setting by the number of new gitlay shards that are being added [around line `533` of the file `environments/gprd/variables.tf`](https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/blob/master/environments/gprd/variables.tf#L533) - * Here is [an example title and description to use for this MR](https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/1605). - - [x] Using the new value of the `multizone-stor` field, change the MR title to: Increment multi-zone storage nodes by [Number of new gitaly shards] to [the new total] - - [x] Link: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5150 - - [x] Have the MR reviewed by a colleague. -- **execution commands for the step** - - [x] Optionally, check quotas before applying the terraform changes. You can check with: - ```bash - gcloud --project='gitlab-production' compute regions describe us-east1 --format=json | jq -c '.quotas[] | select(.limit > 0) | select(.usage / .limit > 0.5) | { metric, limit, usage }' - ``` - - [x] Merge the MR. - - [x] Click the `apply gprd` pipeline stage `play` button. -- **post-execution validation for the step** - - [x] Examine the `gprd apply` pipeline stage output and confirm the absence of relevant errors. -- **rollback of the step** - - [ ] Revert the MR. - - -### Ensure the creation of the storage directory - -Once the gitaly node is created, it will take a few minutes for chef to run on the system, so it may not be immediately available. - -- **pre-conditions for execution of the step** - - [x] Make sure `chef-client` runs without any errors. - ```bash - export node='file-90-stor-gprd.c.gitlab-production.internal' - bundle exec knife ssh ""fqdn:$node"" ""sudo grep 'Chef Client finished' /var/log/syslog | tail -n 1"" - ``` -- **execution commands for the step** - - [ ] **If** chef does not converge after 10 minutes or so, then invoke it manually. 
If chef refuses to run, then something is wrong, and this procedure should be rolled-back. - ```bash - bundle exec knife ssh ""fqdn:$node"" ""sudo chef-client"" - ``` - - [x] Confirm storage directory `/var/opt/gitlab/git-data/repositories` exists on the file system of the new node. - ```bash - bundle exec knife ssh ""fqdn:$node"" ""sudo df -hT /var/opt/gitlab/git-data/repositories && sudo ls -la /var/opt/gitlab/git-data/ && sudo ls -la /var/opt/gitlab/git-data/repositories | head"" - ``` -- **post-execution validation for the step** - - [x] Confirm that the gitaly service is running - ```bash - bundle exec knife ssh ""fqdn:$node"" ""sudo gitlab-ctl status gitaly"" - ``` - - [x] Confirm that there are no relevant errors in the logs. - ```bash - bundle exec knife ssh ""fqdn:$node"" ""sudo grep -i 'error' /var/log/gitlab/gitaly/current | tail"" - ``` -- **rollback of the step** - - No rollback procedure for this step is necessary. - - This step only confirms and verifies steps taken so far. - - -### Configure the GitLab application so that it is aware of the new node - -Configure the GitLab application to include the new node. Note: The GitLab application will consider the new node to be disabled by default. - -- **pre-conditions for execution of the step** - - [x] Create [a new MR in the `chef-repo` project](https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/new). - * Here is [an example title and description to use for this MR](https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/1242). - * The commit should consist of the following changes: - - [x] Update the `override_attributes.omnibus-gitlab.gitaly.storage` list items of file [`roles/gprd-base-stor-gitaly-common.json`](https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/blob/master/roles/gprd-base-stor-gitaly-common.json#L410), add item(s) similar to: - ```json - { - ""name"": ""nfs-file90"", - ""path"": ""/var/opt/gitlab/git-data/repositories"" - }, - ``` - - [x] Update the `default_attributes.omnibus-gitlab.gitlab_rb.git_data_dirs` map entry of file [`roles/gprd-base.json`](https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/blob/master/roles/gprd-base.json#L494), add an entry similar to: - ```json - ""nfs-file90"": { - ""path"": ""/var/opt/gitlab/git-data-file90"", - ""gitaly_address"": ""tcp://file-90-stor-gprd.c.gitlab-production.internal:9999"" - }, - ``` - - [x] Link: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/3017 - - [x] Have the MR reviewed by a colleague. -- **execution commands for the step** - - [x] Notify the Engineer On-call about the planned change. - - [x] Create a silence for `GitalyServiceGoserverTrafficAbsentSingleNode` alert, which will get raised if new Gitaly server(s) do not receive enough traffic for 30 minutes. [Reference of alert raised in the past](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16269#note_1100938329). - - [x] Merge the MR. - - [x] Examine the pipeline stage output for `apply_to_prod` job on ops.gitlab.net pipeline to verify that change was applied successfully and there were no errors. -- **post-execution validation for the step** - - [x] Verify chef role to check for the change: - ```bash - $ bundle exec knife role show gprd-base-stor-gitaly-common | grep -A1 'nfs-file90' - name: nfs-file90 - path: /var/opt/gitlab/git-data/repositories - ``` - - [x] Wait 30-35 minutes for the nodes to converge naturally. In the normal circumstances, chef-client periodically runs every 30 (plus upto 5) minutes. 
Verify by checking node status (ignore patroni/postgres servers in the list): - ```bash - bundle exec knife status ""roles:gprd-base-stor-gitaly-common"" --run-list - ``` - - [ ] Optionally, in case you are running out of patience and thinking explicit run, force `chef-client` to run on the relevant nodes (It will take excruciatingly long time though, so better to wait for natural convergence): - ```bash - bundle exec knife ssh -C 3 ""roles:gprd-base-stor-gitaly-common"" ""sudo chef-client"" - ``` - -- **rollback of the step** - - [ ] Revert the MR. - - [ ] Check the `apply_to_prod` ops.gitlab.net pipeline to see if the change successfully applied. - - [ ] Re-run the commands in the post-execution validation for the step - - -### Add the new Gitaly node to all our Kubernetes container configuration - -- **pre-conditions for execution of the step** - - [x] Create [a new MR in the `gl-infra/k8s-workloads/gitlab-com` project](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/blob/master/releases/gitlab/values/gprd.yaml.gotmpl). - * Here is [an example title and description to use for this MR](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1779/diffs). - - [x] In the MR you want to update the file `releases/gitlab/values/${environment}.yaml.gotmpl` and add the new node to the `global.gitaly.external` yaml list - Typically the data looks like - ``` - - hostname: gitaly-01-sv-pre.c.gitlab-pre.internal - name: default - port: ""9999"" - tlsEnabled: false - ``` - - [x] Link: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/2599 - - [x] Have the MR reviewed by a colleague in **Delivery** -- **execution commands for the step** - - [x] Notify the Engineer On-call about the planned change and seek approval, to ensure that no other deployment (From #announcements) is ongoing at the time. - - [x] Merge the MR. - - [x] Examine the pipeline stage output to verify that there were no errors. - -- **rollback of the step** - - [ ] Revert the MR. - - [ ] Re-run the execution step for a roll-back. - - -### Test the new node - -Confirm that the new storage node is operational. - -- **pre-conditions for execution of the step** - - [ ] Export your `gitlab.com` user auth token as an environment variable in your shell session. - ```bash - export GITLAB_COM_API_PRIVATE_TOKEN='CHANGEME' - ``` - - [ ] Also export your `gitlab.com` admin user auth token as an environment variable in your shell session. - ```bash - export GITLAB_GPRD_ADMIN_API_PRIVATE_TOKEN='CHANGEME' - ``` -- **execution commands for the step** - - [x] Create a new project: - ```bash - export project_name='nfs-file90-test' - rm -f ""/tmp/project-${project_name}.json"" - curl --silent --show-error --request POST ""https://gitlab.com/api/v4/projects?name=${project_name}&default_branch=main"" --header ""Private-Token: ${GITLAB_COM_API_PRIVATE_TOKEN}"" > ""/tmp/project-${project_name}.json"" - export project_id=$(cat ""/tmp/project-${project_name}.json"" | jq -r '.id') - export ssh_url_to_repo=$(cat ""/tmp/project-${project_name}.json"" | jq -r '.ssh_url_to_repo') - ``` - - [x] Clone the project. - ```bash - git clone ""${ssh_url_to_repo}"" ""/tmp/${project_name}"" - ``` - - [x] Add, commit, and push a `README` file to the project repository. 
- ```bash - echo ""# ${project_name}"" > ""/tmp/${project_name}/README.md"" - pushd ""/tmp/${project_name}"" && git add ""/tmp/${project_name}/README.md"" && git commit -am ""Add README"" && git push origin main && popd - ``` - - [x] Use the API to [move it to a new storage server](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/gitaly/storage-rebalancing.md#manual-method): - ```bash - export destination_storage_name='nfs-file90' - export move_id=$(curl --silent --show-error --request POST ""https://gitlab.com/api/v4/projects/${project_id}/repository_storage_moves"" --data ""{\""destination_storage_name\"": \""${destination_storage_name}\""}"" --header ""Private-Token: ${GITLAB_GPRD_ADMIN_API_PRIVATE_TOKEN}"" --header 'Content-Type: application/json' | jq -r '.id') - ``` - - [x] Optionally poll the api to monitor the state of the move: - ```bash - curl --silent --show-error ""https://gitlab.com/api/v4/projects/${project_id}/repository_storage_moves/${move_id}"" --header ""Private-Token: ${GITLAB_GPRD_ADMIN_API_PRIVATE_TOKEN}"" | jq -r '.state' - ``` - - [x] Optionally confirm the new location: - ```bash - curl --silent --show-error ""https://gitlab.com/api/v4/projects/${project_id}"" --header ""Private-Token: ${GITLAB_GPRD_ADMIN_API_PRIVATE_TOKEN}"" | jq -r '.repository_storage' - ``` - - [x] Once the project has finished being moved to the new shard, proceed to add, commit, and push an update to the `README`: - ```bash - echo -e ""\n\ntest"" >> ""/tmp/${project_name}/README.md"" - pushd ""/tmp/${project_name}"" && git add ""/tmp/${project_name}/README.md"" && git commit -am ""Update README to test nfs-file90"" && git push origin main && popd - ``` - - [x] Verify that the changes were persisted as expected: - ```bash - rm -rf ""/tmp/${project_name}"" - git clone ""${ssh_url_to_repo}"" ""/tmp/${project_name}"" - grep 'test' ""/tmp/${project_name}/README.md"" - ``` - -### Enable the new node in Gitlab - -Enabling new nodes in the GitLab admin console requires using an admin account to change where new projects are stored. In [`Admin Area`](https://gitlab.com/admin) > [`Settings`](https://gitlab.com/admin/application_settings/general) > [`Repository`](https://gitlab.com/admin/application_settings/repository) > `Repository storage` > `Expand`, you will see a list of storage nodes. The ones that are checked are the ones that will receive new projects. For more information see [gitlab docs](https://docs.gitlab.com/ee/administration/repository_storage_paths.html#choose-where-new-repositories-will-be-stored). - -- **execution commands for the step** - - [x] Open a private browser window or tab and navigate to: https://gitlab.com/admin/application_settings/repository - - [x] Click the `Expand` button next to `Repository storage`. - - [x] Click the `Save changes` button. (I know you didn't do any changes, just trust the process and click the button) - - [x] Click play on the [Production gitaly-shard-weights-assigner job](https://ops.gitlab.net/gitlab-com/gl-infra/gitaly-shard-weights-assigner/-/pipeline_schedules) to assign a weight. - -- **post-execution validation for the step** - - [x] Take a count of how many projects are being created on the new shard: - ```bash - export node='file-90-stor-gprd.c.gitlab-production.internal' - bundle exec knife ssh ""fqdn:$node"" ""sudo find /var/opt/gitlab/git-data/repositories/@hashed -mindepth 2 -maxdepth 3 -name *.git | wc -l"" - ``` - - [x] Observe that this number goes up over time. 
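If you would rather not re-run the count by hand, a small watch loop around the same `knife` command is one convenient (purely optional) way to confirm the number keeps growing, for example:

```bash
# Re-run the repository count every 10 minutes with a timestamp,
# using the same knife/find command as the step above.
export node='file-90-stor-gprd.c.gitlab-production.internal'
while true; do
  echo -n "$(date -u '+%Y-%m-%dT%H:%M:%SZ') "
  bundle exec knife ssh "fqdn:$node" \
    "sudo find /var/opt/gitlab/git-data/repositories/@hashed -mindepth 2 -maxdepth 3 -name *.git | wc -l"
  sleep 600
done
```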
- -- **post-execution validation for the step** - - [ ] Take a count of how many projects are being created on the old shard: - ```bash - export node='file-YY-stor-gprd.c.gitlab-production.internal' - bundle exec knife ssh ""fqdn:$node"" ""sudo find /var/opt/gitlab/git-data/repositories/@hashed -mindepth 2 -maxdepth 3 -name *.git | wc -l"" - ``` - - [ ] Observe that this number never goes up over time. (Either goes down or does not change.) - - [x] Delete silence created for `GitalyServiceGoserverTrafficAbsentSingleNode` alert in steps [above](#configure-the-gitlab-application-so-that-it-is-aware-of-the-new-node).",2.0 -123233985,2023-02-07 16:44:01.402,The dbconsole-praefect.sh script must specify secrets for database client connections," - -**Details** - - Point of contact for this request: @nnelson - - If a call is needed, what is the proposed date and time of the call: [+ Date and Time +] - - Additional call details (format, type of call): [+ additional details +] - -**SRE Support Needed** - -The `dbconsole-praefect.sh` script is now required to specify the client certificates to be used for the connection. - -This may be verified with: - -```sh -ssh console-01-sv-gstg.c.gitlab-staging-1.internal -PGSSLCOMPRESSION=0 PGPASSWORD=$(sudo /bin/cat /etc/gitlab/gitlab.rb | perl -ne 'm/praefect.*database_password.*?""(.*)""/ && do { print $1 }') sudo --preserve-env /opt/gitlab/embedded/bin/psql ""sslmode=verify-ca sslrootcert=/etc/gitlab/ssl/praefect-database-server-ca.pem sslcert=/etc/gitlab/ssl/praefect-database-client-cert.pem sslkey=/etc/gitlab/ssl/praefect-database-client-key.pem hostaddr=10.94.0.2 user=praefect dbname=praefect_production"" -``` - -The template and recipe in the unfortunately scoped `gitlab_users` cookbook must be updated accordingly. - - -",4.0 -118261655,2022-11-07 12:02:24.586,Container Registry: Review and adjust SLI for manifest routes,"## Context - -Related to https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/476+. - -## Task - -Now that Phase 2 of the [GitLab.com registry upgrade/migration](https://gitlab.com/groups/gitlab-org/-/epics/5523) is complete, the existing SLIs are likely too loose, as operations are now executed on the metadata database (faster), not the storage backend (slower). - -~""group::container registry"" will self-serve this issue.",1.0 -116351647,2022-10-05 18:42:56.939,Establish a method for monitoring the health/success of disk backup snapshot creation in GCP,"## Summary - - - -Establish a method for monitoring the health/success of disk backup snapshot creation in GCP. - -There is presently a now known risk that disk backup snapshots could begin to fail without anyone being aware of the situation until one goes to try to use a snapshot which has failed for something. - -Failed disk backup snapshots could result in the extensive loss of customer data were any of the virtual server disks to fail for any reason during a period of failing snapshot creation. - - -## Related Incident(s) - - - -Originating issue(s): https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7833 - - -## Desired Outcome/Acceptance Criteria - - - -An alert will be generated, and ultimately the SRE On-call will be paged, when a certain number (any?) of disk backup snapshots fail to be created within a certain period of time from their scheduled initialization. 
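As a rough illustration of what such a check could be built on: failed snapshots are queryable through the GCP API. The command below is only a sketch (the project id is an example, and the exact filter, threshold, and time window would still need to be decided), not a proposed implementation of the alerting itself.

```bash
# Sketch: list disk snapshots that ended up in a FAILED state for one project.
# A real check would iterate over the relevant projects and feed the count
# into Prometheus/alerting with an agreed threshold and time window.
gcloud compute snapshots list \
  --project=gitlab-production \
  --filter="status=FAILED" \
  --format="table(name,status,creationTimestamp)"
```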
- - -## Associated Services - - - -~""Service::GCP"" - -## Corrective Action Issue Checklist - -* [x] Link the incident(s) this corrective action arose out of -* [x] Give context for what problem this corrective action is trying to prevent from re-occurring -* [x] Assign a severity label (this is the highest sev of related incidents, defaults to 'severity::4') -* [x] Assign a priority (this will default to 'Reliability::P4')",3.0 -113483221,2022-08-18 16:49:48.653,Demo: /chatops run pager pause,"# Demo: `/chatops run pager pause` - -## Scenario - -> For site-wide outages, EOC gets a nearly continuous stream of pages that needs to be acknowledged, which is very distracting when trying to focus on mitigating said outage. For production, we have 3 PagerDuty services that could send a page to the EOC: `GitLab Production`, `SLO Alerts gprd main stage`, and `SLO alerts gprd cny stage`. Manually creating a maintenance window for all of them is a waste of time during an incident. - -See this ~""corrective action"" for details: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15550 - -Moderator note: Since Steve Azzopardi's excellent work creating relationships and hierarchies for service alerting, this has since become much less of a burdensome problem during severe incidents. Nevertheless, this feature may have some value, and because it directly effects the way that an on-call engineer is paged or not, it is important to socialize this operational information. - -See runbook documentation here: https://gitlab.com/gitlab-com/runbooks/-/blob/master/on-call/checklists/eoc.md#creating-temporary-pagerduty-maintenance-windows - -## Meeting Format - -Demo presentation. - -- [x] Moderator: @nnelson -- [ ] Note Taker: @shimrangeorge / `tbd` - - -## Acceptance Criteria - - - -- [ ] Google Doc created: `todo` -- [ ] Meeting scheduled; Agenda should include - - Google Doc - - Link to Scenario -- [ ] Meeting must be recorded -- [ ] Recording is uploaded to YouTube; apply the video to the following - playlists: - - Infrastructure Fire Drills - - Infrastructure Group -- [ ] Mark the video as private if any [Yellow and above classified data is - shared](https://about.gitlab.com/handbook/engineering/security/data-classification-standard.html) -- [ ] Review the Google Doc and/or the Video for any potential follow up issues - that need to be resolved",1.0 -112951247,2022-08-08 15:00:45.090,"For project authorizations refresh jobs, use workers based on the applicable urgency context","Based on the discussion at https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16080#note_1054295096 - -> granting access (inserting into `project_authorizations`) can have lower priority. According to https://docs.gitlab.com/ee/development/sidekiq/worker_attributes.html#job-urgency, this would mean a delay of maximum 1 minute before the users can actually access the project they've gained access to. But when revoking access, this might be to block a bad actor from something, so we might want it more immediate. - -This means that we have an opportunity to have different types of workers that do the same thing, but differ in urgency. 
- -And based on its context (eg, addition of a member to a project vs removal of a member from a project), we can use the worker based on the urgency of the user action.",3.0 -112182744,2022-07-25 20:50:41.739,Decide tagging pattern for premium machine types,"Premium machine types (such as n2-standard-4 to pick a totally arbitrary example) need some sort of standard tagging format to be determined ahead of time. - -This small detail is important as it's very painful and annoying to users if we change it after the fact. - -Premium machine type runner managers will be set to `run-untagged=false` so that the only way to use them is to run with the tags. - -Blocks #16091 - -## PM proposal for configuration and tagging pattern (2022-07-29) - -### Tagging naming convention for GitLab SaaS runners - -saas-{os}-{instance size}-{arch}-{additional capability} - -- SaaS: denotes that this is a SaaS Runner -- OS: denotes the operating system. -- instance size: Using AWS's [naming convention](https://www.archerimagine.com/articles/aws/aws-ec2-instance-type-tutorial.html#instance-sizing) as prior art, this denotes the T-shirt size representation of the instance (CPU, Memory) -- additional capability: used to represent additional capability. For example - GPU enabled VM's. - -| Description| Cloud Provider |Machine Type|Run Untagged Jobs|Tags| -| ------ | ------ | ------ | ------ | ------ | -| Linux + Docker Builds - [general purpose CI] | Google Compute |n1-standard-1|Yes|saas-linux-small-amd64| -| Linux + Docker Builds - [general purpose CI] | Google Compute |~~n1-standard-2~~
n2d-standard-2|No|saas-linux-medium-amd64| -| Linux + Docker Builds - [general purpose CI] | Google Compute |~~n1-standard-4~~
n2d-standard-4|No|saas-linux-large-amd64| -| Linux + Docker Builds - [AI/MLOPS - GPU Enabled] | Google Compute |n1-standard-4 + GPU|No|saas-linux-large-amd64-gpu|",0.0 -112170477,2022-07-25 17:41:19.540,Increase capacity on larger runner machine offerings,"After testing that the new runner managers work with their different machine types, we need to update chef and configure them to have the production level capacity. - -With that configuration will be ready for the official start. - -The last steps will be then to: - -- announce the new runners -- unpause them in GitLab, so that they become available for users.",1.0 -112170462,2022-07-25 17:40:42.933,Create terraform plans for large machine runner managers,"Add new runner managers into terraform repository. - -## Work checklist - -- [x] merge https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/4139 -- [x] request quota limit upgrade of C2 CPUs to 2000 -- [x] apply changes from https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/4139 to create the runner manager nodes -- [x] apply fix from https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/4155 -- [x] manually verify configuration -- [x] finalize preparation steps (create Docker Machine VM on each of the nodes manually - once per each node - to make sure that the configuration works and to generate Docker Machine TLS Auth certificates)",1.0 -112170426,2022-07-25 17:40:03.400,Create chef roles for large runner machine types,"For new runner managers we need to define chef roles that they will use for configuration. - -## Plan checklist - -- [x] Merge https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2234+",1.0 -112170398,2022-07-25 17:39:23.622,Register large machine runner managers,"We need to register the new machines and store the tokens generated in our 1password vault. - -To truly be done this, we need to know the tagging format to be used to distinguish the machine types. Tags can be updated after registration though so we can register, store the tokens, then update the tag formats. 
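For illustration only, a registration using the proposed tagging pattern would look roughly like the sketch below; the description and tag values are examples, and in practice registration is driven by our chef/terraform tooling rather than run by hand.

```bash
# Sketch of registering one manager with the proposed tag and
# run-untagged settings (values are illustrative).
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "$REGISTRATION_TOKEN" \
  --description "blue-1.saas-linux-medium-amd64.runners-manager.gitlab.com" \
  --executor "docker+machine" \
  --tag-list "saas-linux-medium-amd64" \
  --run-untagged="false"
```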
- -**This issue requires gitlab.com admin access** - -## Plan checklist - -- Register following runner managers and store tokens in chef vault - - `saas-linux-medium` :point_right: `./bin/gkms-vault-create runners-manager-saas-linux-medium-amd64 ci` - - [x] `blue-1.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `blue-2.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `blue-3.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `blue-4.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `blue-5.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `green-1.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `green-2.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `green-3.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `green-4.saas-linux-medium-amd4.runners-manager.gitlab.com` - - [x] `green-5.saas-linux-medium-amd4.runners-manager.gitlab.com` - - `saas-linux-large` :point_right: `./bin/gkms-vault-create runners-manager-saas-linux-large-amd64 ci` - - [x] `blue-1.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `blue-2.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `blue-3.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `blue-4.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `blue-5.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `green-1.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `green-2.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `green-3.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `green-4.saas-linux-large-amd4.runners-manager.gitlab.com` - - [x] `green-5.saas-linux-large-amd4.runners-manager.gitlab.com` -- Add collected tokens to chef vault for the dedicated roles",0.0 -112170361,2022-07-25 17:38:26.804,Request increase in required GCP project quotas for large runner machine projects,"For each GCP project we create in the [create projects issue](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16089) we also need to request quota increases for the ""usual things"". - -Basing on the [recent work](https://gitlab.com/gitlab-com/gl-infra/capacity-planning/-/issues/30) done for `private` runners and assuming that we will handle the same maximum capacity per the GCP project as in `shared` and `private` runners case, we will definitely need to request upgrades for: - -- **Read requests per minute** - 4500 (from the default 1500) -- **Heavy-weight read requests per minute** - 2250 (from the default 750) - -We will also need to increase the `CPUs` quota. The default limit is 2400 and with bigger machine types we will use definitely more. Assuming that the maximum number of VMs we will handle in each GCP project will be 1500, we will need to request: - -- 3000 for the `saas-linux-medium` shard (using the `n1-standard-2` machine type which has 2 vCPUs) -- 6000 for the `saas-linux-large` shard (using the `n1-standard-4` machine type which has 4 vCPUs) - -There was a discussion whether we should use `n2d` machines instead of `n1`. Currently the decision seem to be to stay with `n1`. - -But if we will decide to use `n2d` we will also need to increase the `N2D CPUs` - the value would be same as for the `CPUs` one (as id depends on the estimated maximum number of hosted VMs x number of CPUs per VM). 
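For clarity, the CPU figures above are simply the assumed maximum of 1500 concurrent VMs per project multiplied by the vCPU count of the machine type:

```bash
# Back-of-the-envelope quota calculation used above.
echo "saas-linux-medium (n1-standard-2): $((1500 * 2)) vCPUs"   # -> 3000
echo "saas-linux-large  (n1-standard-4): $((1500 * 4)) vCPUs"   # -> 6000
```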
- -## Plan checklist - -- [x] Request quota limits increases - - [x] `gitlab-r-saas-l-m-amd64-1` - - ``` - Thank you for submitting Case # (ID:12bee97039bb48b28a06c087d467547e) to Google Cloud Platform support for the following quotas: - Change CPUs - us-east1 from 2,400 to 3,000 - Change N2D CPUs - us-east1 from 500 to 3,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-m-amd64-2` - - ``` - Thank you for submitting Case # (ID:842fbbcf476147dda093a1df763c4da5) to Google Cloud Platform support for the following quotas: - Change CPUs - us-east1 from 2,400 to 3,000 - Change N2D CPUs - us-east1 from 500 to 3,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-m-amd64-3` - - ``` - Thank you for submitting Case # (ID:35c99197a5a143ec883c26aceaab15b3) to Google Cloud Platform support for the following quotas: - Change CPUs - us-east1 from 2,400 to 3,000 - Change N2D CPUs - us-east1 from 500 to 3,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-m-amd64-4` - - ``` - Thank you for submitting Case # (ID:bf7d29f5201c49b4b6d1140650e953d5) to Google Cloud Platform support for the following quotas: - Change CPUs - us-east1 from 2,400 to 3,000 - Change N2D CPUs - us-east1 from 500 to 3,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-m-amd64-5` - - ``` - Thank you for submitting Case # (ID:606d548097904ce3bcb8d7a525b057ac) to Google Cloud Platform support for the following quotas: - Change CPUs - us-east1 from 2,400 to 3,000 - Change N2D CPUs - us-east1 from 500 to 3,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-l-amd64-1` - - ``` - Thank you for submitting Case # (ID:3de48d3fd05e415abca185efa3fa5742) to Google Cloud Platform support for the following quotas: - Change CPUs - us-east1 from 2,400 to 6,000 - Change N2D CPUs - us-east1 from 500 to 6,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-l-amd64-2` - - ``` - Thank you for submitting Case # (ID:81e8af60637f490dae4661f9c80fa5cf) to Google Cloud Platform support for the following quotas: - Change N2D CPUs - us-east1 from 500 to 6,000 - Change CPUs - us-east1 from 2,400 to 6,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-l-amd64-3` - - ``` - Thank you for submitting Case # (ID:abbf0b50a4ce40fb8c8969beb7c963b2) to Google Cloud Platform support for the following quotas: - Change N2D CPUs - us-east1 from 500 to 6,000 - Change CPUs - us-east1 from 2,400 to 6,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-l-amd64-4` - - ``` - Thank you for submitting Case # (ID:35032f9e334f4a36836daa1ce54ac4e1) to Google Cloud Platform support for the following quotas: - Change N2D CPUs - us-east1 from 500 to 6,000 - Change CPUs - us-east1 from 2,400 to 6,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change 
Read requests per minute from 1,500 to 4,500 - ``` - - - [x] `gitlab-r-saas-l-l-amd64-5` - - ``` - Thank you for submitting Case # (ID:06a1b9e00bf8410aa373693f06c0d117) to Google Cloud Platform support for the following quotas: - Change CPUs - us-east1 from 2,400 to 6,000 - Change N2D CPUs - us-east1 from 500 to 6,000 - Change Heavy-weight read requests per minute from 750 to 2,250 - Change Read requests per minute from 1,500 to 4,500 - ``` - -- [x] Confirm that quota limits were increased - - [x] `gitlab-r-saas-l-m-amd64-1` - - [x] `gitlab-r-saas-l-m-amd64-2` - - [x] `gitlab-r-saas-l-m-amd64-3` - - [x] `gitlab-r-saas-l-m-amd64-4` - - [x] `gitlab-r-saas-l-m-amd64-5` - - [x] `gitlab-r-saas-l-l-amd64-1` - - [x] `gitlab-r-saas-l-l-amd64-2` - - [x] `gitlab-r-saas-l-l-amd64-3` - - [x] `gitlab-r-saas-l-l-amd64-4` - - [x] `gitlab-r-saas-l-l-amd64-5`",0.0 -112170229,2022-07-25 17:35:11.444,Create new GCP Projects for new Runner managers,"As per https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/768#note_1066322541 we will define two new shards of our SaaS Linux runners: `saas-linux-medium` and `saas-linux-large`. - -For each of them we will have 5 working runners, which means 5 GCP projects per shard. We need to register unique CIDRs for the `ephemera-runners/ephemeral-runners` subnetworks that will be used by the ephemeral VMs. Later, basing on the created projects, we will create runner managers connected with them. - -## Plan checklist - -- [x] Register unique CIDRs for ephemeral runner projects in our documentation :point_right: https://gitlab.com/gitlab-com/runbooks/-/merge_requests/4915/diffs -- [x] Define new projects through terraform :point_right: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/4100 -- [x] Create new projects manually :point_right: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7652 -- [x] ~~Merge new environments definition :point_right: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/4121~~ -- [x] Configure the new projects through terraform :point_right: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/4101 (please rebase after merging 4121!)",4.0 -112057532,2022-07-22 17:58:40.957,Adjust SLO for `urgent-authorized-projects` sidekiq queue,"## Summary - - - -Newly common alert: [`The shard_urgent_authorized_projects SLI of the sidekiq service (main stage) has an apdex violating SLO`](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7491) - -Because of: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15955#note_1010193110 - -Requires: SLO adjustment - -### Should this be moved to `throttled` - -Looking at our [slos](https://gitlab.com/gitlab-com/runbooks/-/blob/c1f88c0a606e61aab56006f49359eaef6fe154e3/metrics-catalog/services/lib/sidekiq-helpers.libsonnet#L47-61) we have one called `throttled` where it doesn't look at `queue` time but only at `execution` time. - -The `urgent-authorized-projects` is set to `urgent=high` so it also looks at queue time, which is affecting our apdex. - - -#### Execution time - -Looking at 1 day it seems like most jobs complete under 2 seconds. 
- -![Screenshot_2022-08-02_at_09.36.32](/uploads/b5aea12f775b268af10c357ff2ad8a32/Screenshot_2022-08-02_at_09.36.32.png) - -[Source](https://thanos.gitlab.net/graph?g0.expr=sum(sidekiq_jobs_execution_time%3A1m%7Benvironment%3D%22gprd%22%2C%20shard%3D~%22urgent-authorized-projects%22%7D)%20by%20(shard)%0A&g0.tab=0&g0.stacked=0&g0.range_input=1d&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D) - -Zooming out for 1 week we see the worst case scenario is `70s` - -![Screenshot_2022-08-02_at_09.41.51](/uploads/8607f81b3522baf28cefab886b6ec392/Screenshot_2022-08-02_at_09.41.51.png) - -[Source](https://thanos.gitlab.net/graph?g0.expr=sum(sidekiq_jobs_execution_time%3A1m%7Benvironment%3D%22gprd%22%2C%20shard%3D~%22urgent-authorized-projects%22%7D)%20by%20(shard)&g0.tab=0&g0.stacked=0&g0.range_input=1w&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D) - -#### Queue time - -Looking at the queue time the p95 is around 10s, which is right around the [`urgent=high` slo](https://gitlab.com/gitlab-com/runbooks/-/blob/c1f88c0a606e61aab56006f49359eaef6fe154e3/metrics-catalog/services/lib/sidekiq-helpers.libsonnet#L49) - -![Screenshot_2022-08-02_at_09.45.07](/uploads/605135208f1f992ca74b929f053c41e8/Screenshot_2022-08-02_at_09.45.07.png) - -[Source](https://thanos.gitlab.net/graph?g0.expr=histogram_quantile(0.95%2C%20sum(sli_aggregations%3Asidekiq_jobs_queue_duration_seconds_bucket_rate5m%7Benvironment%3D%22gprd%22%2C%20shard%3D~%22urgent-authorized-projects%22%7D)%20by%20(le%2C%20shard))&g0.tab=0&g0.stacked=0&g0.range_input=1d&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D) - -Zooming out for 1 week we sometimes see it got to 60s - -![Screenshot_2022-08-02_at_09.56.43](/uploads/12a1c80a24e85669f19e0dc917f56954/Screenshot_2022-08-02_at_09.56.43.png) - -[Source](https://thanos.gitlab.net/graph?g0.expr=histogram_quantile(0.95%2C%20sum(sli_aggregations%3Asidekiq_jobs_queue_duration_seconds_bucket_rate5m%7Benvironment%3D%22gprd%22%2C%20shard%3D~%22urgent-authorized-projects%22%7D)%20by%20(le%2C%20shard))&g0.tab=0&g0.stacked=0&g0.range_input=1w&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D) - -### Conclusion - -The `urgent-authorized-projects` was created to throttle the number of jobs we execute concurrently from saturating the DB. It's unfortunate that we added `urgent` in the shard name because it's not `urgent` but it's throttled. - - - - - -## Related Incident(s) - - - -Originating issue(s): gitlab-com/gl-infra/production#7491 - - -## Desired Outcome/Acceptance Criteria - - - -This `urgent` classified activity is no longer completed within the expectations for `urgent` work, because it is being deliberately throttled with a lower concurrency level and less replicas. It is not desired to reclassify this workload, but instead simply adjust the SLO for this particular queue so that it no longer alerts EOCs. - -What we should do is: - -- [x] Change the queue to `throttled` inside of the runbook :point_right: https://gitlab.com/gitlab-com/runbooks/-/merge_requests/4860 -- [x] Update the label from `urgent=high` to `urgent=throttled` so the `throttled` SLO is taken into consideration. 
- - [x] `gstg`: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles/-/merge_requests/942 - - [x] `gprd`: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-helmfiles/-/merge_requests/944 -- [x] Create follow up issue to clean this up with the shard itself (since the shard will be deleted once we fix https://gitlab.com/groups/gitlab-org/-/epics/8200) :point_right: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16080#note_1065636788 - -## Associated Services - - - -## Corrective Action Issue Checklist - -* [x] Link the incident(s) this corrective action arose out of -* [x] Give context for what problem this corrective action is trying to prevent from re-occurring -* [x] Assign a severity label (this is the highest sev of related incidents, defaults to 'severity::4') -* [x] Assign a priority (this will default to 'priority::4')",1.0 -112047375,2022-07-22 14:22:30.884,Update home page url in staging from https://about.staging.gitlab.com to https://staging.gitlab.com/users/sign_in," - -**Details** - - Point of contact for this request: @nnelson - - If a call is needed, what is the proposed date and time of the call: `2022-07-22 1500 utc` - - Additional call details (format, type of call): `No call required` - -**SRE Support Needed** - -Update home page url in staging from `https://about.staging.gitlab.com` to `https://staging.gitlab.com/users/sign_in`. - -Navigate to [`https://staging.gitlab.com/admin/application_settings/general`](https://staging.gitlab.com/admin/application_settings/general) `> ` `Sign-in restrictions` - -Change this field: - -![Screen_Shot_2022-07-22_at_9.20.34_AM](/uploads/c3be969bc84070782f54b984a79139de/Screen_Shot_2022-07-22_at_9.20.34_AM.png) - -To the new value: https://staging.gitlab.com/users/sign_in - -Click `Save changes`. - -",1.0 -102941271,2022-02-23 16:29:21.885,Bring node_count from 2 to 3 for camoproxy service cluster in staging,"Bring node_count from 2 to 3 for camoproxy service cluster in staging. - -Part of: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/15277 - -For more details see: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/15277#note_852825341",1.0 -102882061,2022-02-22 18:22:57.419,Increase machine type for staging camoproxy,"Increase machine type for `staging` `camoproxy`. - -So, a configuration for `osqueryd` appears to be causing leading to performance issues on small instances. - -This issue tracks the work for increasting the machine type for the ~""Service::Camoproxy"" instances in the `staging` environment. - -Part of: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/15110",1.0 -94803334,2021-10-04 13:49:07.953,Metrics and dashboards improvements,"## Context - -This is part of the work to upgrade and migrate the GitLab.com container registry to a new version backed by a metadata database and online garbage collection (https://gitlab.com/groups/gitlab-org/-/epics/5523). This will be achieved following the gradual migration plan detailed in https://gitlab.com/gitlab-org/container-registry/-/issues/374. - -## Task - -Go through the list of Prometheus metrics and Grafana dashboards for the registry and ensure that everything is working as expected and displayed accurately. 
This is a good opportunity not only for fixes but also for generic improvements.",3.0 -93417473,2021-09-09 19:05:06.044,Clean up Consul node snapshots after OS upgrade,"After upgrading the console nodes https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5481 and https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5606, there will be some snapshots which we want to keep for at least several days, but probably not more than a few weeks. We want to be able to close out the upgrade CR when it is finished, and not keep it open pending deleting these snapshots. - -This issue is a reminder to clean up the snapshots after a reasonable amount of time has passed.",2.0 -92826617,2021-08-30 20:41:55.701,Increase the `shared_buffers` of `postgres-dr-delayed-01-db-gprd.c.gitlab-production.internal` to `20GB`,"Increase the `shared_buffers` of `postgres-dr-delayed-01-db-gprd.c.gitlab-production.internal` to `20GB`. - -See: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5428#note_663616379",1.0 -92222481,2021-08-18 18:28:04.138,Upgrade GitLab services to terraform v1.0.X," -The latest major version of terraform (v1.0) has hit GA and we should start planning to upgrade to avoid unnecessary tech debt. - -We will have to push the changes to the following Auto-Devops environments: - -- [x] services-staging -- [x] services-prod -- [x] license-stg -- [x] license-prd -- [x] gs-staging -- [x] gs-production - - -Related Issues: -- We can use #12287 (closed) as a procedure reference. -- [gitlab-com-infrastructure Terraform v0.14 to v1.0.4 Upgrade](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/13908)",2.0 -89844435,2021-07-06 14:44:54.102,Remove un-needed keys from the staging omnibus GKMS vault,"https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5050 - -Per this incident issue, we should take a short term corrective action to fix the immediate problem of the Omnibus GKMS vault file being too large to encrypt. There are other on-going efforts to compress these JSON files before encryption. - -Per @alejandro, we should be able to remove some keys for praefect TLS to help save enough space to get things working in the short term. https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5050#note_617653560 - -Specifically, removing these keys: -- `omnibus-gitlab`: - - `ssl`: - - `praefect_certificate` - - `praefect_private_key` - - `trusted_certs`: - - `praefect.crt`",2.0 -89669363,2021-07-02 12:14:55.125,Validate an existing node can switch to the v10 boot script,"Because of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/13477 we can update the boot script and just require a reboot for it to stick. - -The v10 of this script introduced in https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/13566 forces a node to use a `-generic` kernel to receive Ubuntu Advantage Kernel Livepatches. - -We should validate that we can switch a node to v10 without a rebuild, by just rebooting. - -AC: - -- [x] In gstg create a new instance of `generic-stor` for each boot script version prior to v10 and each Ubuntu 16.04, 18.04, and 20.04 - - start version is debatable, check in TF what is the lowest version in use. 
- [x] Make sure everything runs as usual (chef-client works successfully)
- [x] bulk update the script version to v10 in TF
- [x] reboot the machines
- [x] Check the machines are now running a `-generic` kernel
- [x] Make sure everything runs as usual (chef-client works successfully)

Checklist / notes whether it worked: (Replace `OK` with notes if required)

| Script version/OS | xenial / 16.04 | bionic / 18.04 | focal / 20.04 |
| ------ | ------ | ------ | ------ |
| 6 | [x] OK | [x] OK | [x] OK |
| 7 | [x] OK | [x] OK | [x] OK |
| 8 | [x] OK | [x] OK | [x] OK |
| 9 | [x] OK | [x] OK | [x] OK |
| 10 itself (tested in dev) | [x] OK | [x] OK | [x] OK |
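For the `-generic` kernel check above, a quick per-instance spot check could look like the sketch below; the node name is a placeholder for whichever test instances were created for this validation.

```bash
# Example only: the node name is a placeholder; every test instance
# should report a kernel version ending in -generic.
export node='generic-stor-01-gstg.c.gitlab-staging-1.internal'
bundle exec knife ssh "fqdn:$node" "uname -r"
```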
- -(Any failures in regards to the epbf-exporter can be ignored, as long as chef-client still finishes. This is a known issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/13575)",8.0 -89033504,2021-06-21 13:19:38.911,Determine gitaly deduplication efficiency,"To determine whether we might use the project migration API we need to know the worst-case size of a project. - - -AC: -- [x] Build a script, that can get the size of the project itself, as well as referenced objects in the `@pool` structure. -- [x] Run the script on multiple shards and gather the output",5.0 -88756621,2021-06-15 15:04:00.151,Ubuntu Advantage aware bootstrap script,"This issue is to split our scope-creep from https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10039 regarding https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/bootstrap/-/merge_requests/14 - -As per https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10039#note_577158689: - -Rework kernel management - * Install the latest kernel at node bootstrap time, no more hardcoding of versions. (make sure apt holding works though, just in case we are forced to rollback a kernel) - * Use livepatch to keep security updates - * Use livepatch status to see whether all security updates could be applied or if there is a reboot required. - -The MR linked above adds it. This issue is to document the rollout of v10 of the bootstrap script.",8.0 -87529364,2021-05-21 21:02:13.266,Teleport approvals not working from remote command lines,"The `tctl` command is not working from remote sessions. It works fine from the server, but from an approver's laptop, it's not working. - -The error looks like: - -``` -[CLIENT] Cannot connect to the auth server: failed direct dial to auth server: Get ""https://teleport.cluster.local/v2/configuration/name"": x509: certificate is valid for teleport.gprd.gitlab.net, not 73746167696e672d74656c65706f72742d636c7573746572.teleport.cluster.local - Get ""https://teleport.cluster.local/v2/configuration/name"": x509: certificate is valid for teleport.gprd.gitlab.net, not 73746167696e672d74656c65706f72742d636c7573746572.teleport.cluster.local, failed dial to auth server through reverse tunnel: Get ""https://teleport.cluster.local/v2/configuration/name"": dial tcp 34.74.121.190:3024: connect: operation timed out - Get ""https://teleport.cluster.local/v2/configuration/name"": dial tcp 34.74.121.190:3024: connect: operation timed out. -Is the auth server running on ""teleport.gprd.gitlab.net:3080""? -```",2.0 -87295112,2021-05-18 16:16:12.998,Container Registry: Define custom SLI for blob upload routes,"## Context -As part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/476, we're breaking the single Container Registry SLI into multiple SLIs per API route. - -## Proposal - -Separate the blob upload routes SLI. 3 SLIs is what we should go for this route: - -* `server_route_blob_upload_uuid_writes`: `PATCH`, `PUT` -* `server_route_blob_upload_uuid_deletes`: `DELETE` -* `server_route_blob_upload_uuid_reads`: `GET` (with a comment that `GET` is unused?) 
- - -See https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/476#note_578464890 for additional details.",2.0 -87036899,2021-05-13 14:05:50.617,Investigate/setup restart policy for teleport,"Discovered on staging today (2021-05-13) that teleport on staging had been dead for a while: - -``` -dsmith@teleport-01-inf-gstg.c.gitlab-staging-1.internal:~$ systemctl status teleport -● teleport.service - Gravitational Teleport - Access Control Server - Loaded: loaded (/etc/systemd/system/teleport.service; enabled; vendor preset: enabled) - Active: inactive (dead) since Fri 2021-04-30 21:15:51 UTC; 1 weeks 5 days ago - Docs: https://goteleport.com/teleport/docs/ - Main PID: 19722 (code=exited, status=0/SUCCESS) -``` - -Oddly the code is exited / success so I wonder what happened. - -Looks like the systemd unit does have `Restart = on-failure`, but that didn't work in this case with success?",2.0 -86945122,2021-05-12 14:15:31.306,"Disk space usage on nfs-file-{48,49,50,51} is between `86.41%` and `89.45%`","We might need to do some repository shuffling away from `nfs-file-{48,49,50,51}` which are between `86.41%` and `89.45%`: https://dashboards.gitlab.net/d/W_Pbu9Smk/storage-stats?orgId=1&refresh=30m&var-env=gprd&var-environment=gprd&var-node=All&from=1620742126000&to=1620828526000&viewPanel=160 - -Unless there is some trigger which automatically re-homes git repositories after a specific threshold is crossed. - -This is a scenario which this epic (https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/357) is intended to address, and is implementation complete (https://gitlab.com/gitlab-com/gl-infra/balancer), fully tested in `staging`, but not yet tested in production. - -Obviously, we can do this manually very easily now. - -I think it is worth discussing if now would be a good time to proceed with production testing of the automation.",1.0 -85776055,2021-04-22 21:41:58.190,Setup ansible user across the fleet,"In order to support testing Ansible against nodes in our GCP projects, we will need to have a target user setup for connections from the control node (CI runner). - -To establish access to existing hosts, we will create a data bag for the user in Chef, and add a pre-generated SSH key (that will be saved in 1Password). - -For long-term access that will remain once we move further with the migration and no longer have Chef, or no longer perform user provisioning with Chef, we will need to update the bootstrap module to make sure the user exists when a compute instance is initially provisioned by terraform.",3.0 -84947722,2021-04-09 17:32:36.685,Developer Evangelism Group Project,"# Group Project Request - -- Project / Group Name (<17 characters and start with `group-`): group-community -- Project Administrator (email): jcoghlan@gitlab.com - -### Provide a brief overview of the reason for this project and why it is needed and for how long it will be used. - -Will primarily be used by the Developer Evangelism team in testing, demos, and content creation. - -## Security - -### Provide a list of data and the corresponding classification that will be used in this project and how it will be accessed. - - -## Group Project Access Checklist - -Make sure the following criteria is met and understood by the project administrator. - -- [ ] If the gitlab.com database is copied, that data has been processed by the [pseudonymization script]( https://gitlab.com/gitlab-com/runbooks/blob/master/howto/pseudonymization-gitlab-db.md). -- [ ] Regular security updates are applied to all nodes in the project. 
-- [ ] Unused instances will be removed in a timely manner -- [ ] The Project Administrator is responsible for any users or additional administrators that they add to the project -- [ ] The Project Administrator is responsible for justifying any cloud spend within the project. -- [ ] Group Projects are intended for development, test, or demo work. Everything in these projects is considered temporary. - -## Infrastructure Tasks - -- [x] Create a new branch that is **not** the same as the group name and is less than 25 characters long. For example, `add-telemetry-group`. -- [x] Create file in https://ops.gitlab.net/gitlab-com/group-projects named `environments/(group name from above).tfvars` by copying an existing file and changing the Administrator and Group Name variables -- [x] Once the pipeline succeeds, review the changes are correct and stop the review by activating the `stop_review` job -- [x] Merge the change to master -- [x] Create a branch from master named `(group name from above)` and push -- [x] Verify that the pipeline completed successfully at https://ops.gitlab.net/gitlab-com/group-projects/pipelines -- [ ] (Optional) If the group does not start with `group-*` or `gitlab-qa-*k`, add the newly created branch as a protected branch.",2.0 -82056775,2021-03-31 23:23:53.646,PoC migrating GET to collections format,"As noted in https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12787#note_542492346, shifting the ansible code within the [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit) and publishing the package to Galaxy would enable us to use that as an upstream dependency upon which we would build our gitlab.com-specific code. Ideally, when we begin shifting functionality from our existing Chef cookbooks, this would enable us to focus general updates and improvements to GET for all of its downstream users to benefit, and things which are highly specific to the gitlab.com deployment within GCP to a `gitlab.gitlab-com` collection. - -This idea is not fully fleshed out, yet, and may not be feasible to incorporate with our current processes/requirements for managing the various roles/fleets within our infrastructure.",5.0 -81962137,2021-03-30 22:57:39.066,Update CSP directives for embedded iframes,"Many teams are creating curated learning paths/partner content in the EdCast platform (GitLab Learn) and ImPartner using SSOT content from the GitLab docs site and handbook. But if a user clicks the Web IDE button at the bottom of a handbooks page from within the iframe in either, the iframe goes blank. - -We need to support embedded content from -``` -edcast.gitlab.com -gitlab-learn.leapest.com -partners.gitlab.com -``` - -The definition of done here is to ensure that the 3rd party sites listed above where we're embedding content from the handbook are allowed in the CSP `frame-ancestors` directive within Fastly and the same for gitlab.com and Cloudflare (for the WebIDE). Ideally, we will also be able to test against staging before changing the production config. - -For additional context, this was highlighted while [updating CSP directives for the handbook](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12062#note_541359829) to enable clickable SVG objects, and the [updated CSP inadvertently blocked embedded iframes](https://gitlab.com/gitlab-com/www-gitlab-com/-/issues/10052#note_541406634) from external 3rd party sites GitLab teams are using to provide educational content, partner resources, etc. 
This [has been an issue](https://gitlab.com/gitlab-com/www-gitlab-com/-/issues/7638#note_341249056) in the past, as well.",3.0 -81448025,2021-03-22 19:32:48.893,SRE Onboarding template assumes the engineer has SSH access for Chef steps,"As an onboarding engineer, the steps to configure Chef (and knife) require that an SSH user is configured via the Chef data bags to allow SSH to the chef server. The order of these steps should be updated to reflect this dependency and a caveat should be added to the Chef steps for creating the Chef user so that it is understood that SSH steps will need to be completed first.",1.0 -81337239,2021-03-20 21:16:37.564,Increase frequency of `ANALYZE` operation on `namespaces` table to every `30 minutes`,"Increase frequency of `ANALYZE` operation on `namespaces` table to every `30 minutes`. - -Alvaro Hernandex writes: - -> Current incident started ~ 40mins after the last cron-ed ANALYZE. So probably 1h is not frequent enough. I'd suggest to run it more frequently, every 30 mins or even 15 minutes. It is a lightweight operation, which causes a bit of I/O and lasts, under weekend non-incident profile, just 18 seconds (probably under peak load on a week day more, but still quite lightweight). - -@brentnewton writes: - -> based on this, let’s move to every 30 mins. - -~""corrective action"" for production incident https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4024",1.0 -80939112,2021-03-15 14:30:07.565,Replace CustomerDot Zuora integration User to Remediate SOX Gap,"The CustomerDot Integration User is currently tied to a specific engineer's email address. The following Issue contains additinoal context: https://gitlab.com/gitlab-com/business-ops/enterprise-apps/intake/-/issues/91. @jameslopez [has provided advice on what email should be associated with the CustomerDot integration user](https://gitlab.com/gitlab-com/business-ops/enterprise-apps/intake/-/issues/91#note_520357501): `fulfillment-be+zuoraprod@gitlab.com` - -The ask here is to work with the Infrastructure team to complete the switch of the CustomerDot integration user from `ruben_APIproduction@gitlab.com` to `fulfillment-be+zuoraprod@gitlab.com`",3.0 -80821930,2021-03-12 17:57:15.168,Disable Cloudflare `Onion Routing` setting,"**Current Situation** - -The `Onion Routing` setting is enabled on GitLab.com. This causes Cloudflare to issue an `alt-svc` header to tor browsers. -It is presumed, that the behaviour causes issues with Cloudflare Spectrum to trigger https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10860. This makes the site *unusable through tor* without disabling that header manually client-side. - -![Screenshot_from_2021-03-12_18-40-01](/uploads/8daccc432b27ca23241f5caf267b0230/Screenshot_from_2021-03-12_18-40-01.png) - -**Desired Outcome** - -Disable the `Onion Routing` setting. That will cause TOR traffic to run through the usual tor relays and not be short-circuited to Cloudflare via a hidden service. Customers might see the Challenge page, but afterward, it should work normally, as we don't hit the issue in #10860. - -This fix is easy to apply and at least will make the site usable, albeit with some inconvenience to the user. 
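As a sanity check before and after the change, the current value can be read back from the Cloudflare API. This is only a sketch: it assumes an API token and zone id are available in the environment, and that Onion Routing is exposed as the `opportunistic_onion` zone setting.

```bash
# Sketch: read the current Onion Routing value for the zone.
# CF_API_TOKEN and CF_ZONE_ID are assumed to be exported beforehand.
curl --silent --show-error \
  --header "Authorization: Bearer ${CF_API_TOKEN}" \
  "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/settings/opportunistic_onion" \
  | jq -r '.result.value'
```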
- -https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/blob/cc7bfef7e45642aebaff8c19a8db936da21d361e/environments/gstg/main.tf#L2058 must be set to `off` -https://ops.gitlab.net/gitlab-com/gitlab-com-infrastructure/-/blob/cc7bfef7e45642aebaff8c19a8db936da21d361e/environments/gprd/main.tf#L2208 must be set to `off` - ---- -This might also be an upstream but in Cloudflare, as we use Cloudflare Spectrum for our HTTPS ingress. We should raise a case with them to see if that may contribute to the problem, as their hidden services might interfere with that/affect this behavior. ---- - -**Acceptance Criteria** - -- [ ] `Onion Routing` is turned off on gstg -- [ ] `Onion Routing` is turned off on gprd",1.0 -80675908,2021-03-10 23:06:18.581,Hook onto GCP instance maintenance events,"During https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3891 we discovered, that we can react to imminent maintenance events by [hooking into a GCP API](https://cloud.google.com/compute/docs/storing-retrieving-metadata#maintenanceevents). - -I wrote a little script that registers itself and listens for those events. - -If everything is okay, we run all executable files in `/etc/gcp_watchdog/hooks/up`, if there is a problem/maintenance we run them in `/etc/gcp_watchdog/hooks/maint`. -Every script will get the event (or `UP` for okay events) as argv1. Those can also be symlinks - -`/dev/shm/gcp_watchdog.state` will contain the state/last event or `UP`. - -If the metadata API is unavailable, a maintenance will be assumed and `metadata_unavailable_without_state` will be passed as the event. If there is a state saved in the current execution, that will be used instead. - -Only a change in state will cause hooks to run. - -We should seek to deploy this everywhere. We can then add some cheap scripts to see if it triggers as expected before hooking into production services. - -```bash -#!/usr/bin/env bash - -set -uf - -STATE=/dev/shm/gcp_watchdog.state -HOOK_DIR=/etc/gcp_watchdog/hooks - -# Clean the state. Others might consume this, but we want to always be in the know of what happens. -touch ""${STATE}""; -truncate -s0 ""${STATE}"" - -handle_event () { - local event=""$1"" - local subdir=""$2"" - - if does_state_match ""${event}""; then - # We are already there. - return 0; - fi - - echo ""Got new event: ${event}"" - - # Find all executable hooks and execute 16 at a time, passing the event as argv1. OR-ing true to prevent xargs from qutting on error. - find ""${HOOK_DIR}/${subdir}"" -executable -print0 | xargs -0rn1 -P16 -I% -- sh -c ""% ${event} || true""; - - set_state ""${event}"" -} - -set_state () { - local event=""$1"" - - echo ""${event}"" > ""${STATE}"" -} - -does_state_match () { - if [ ""$(cat ""${STATE}"" 2>/dev/null)"" == ""$1"" ]; then - return 0; - else - return 1; - fi -} - -response=""$(curl -s http://metadata.google.internal/computeMetadata/v1/instance/maintenance-event -H 'Metadata-Flavor: Google')"" -while true; do - if [ -z ""${response}"" ]; then - # request failed - echo request failed - if [ -z ""$(cat ""${STATE}"" 2>/dev/null)"" ]; then - echo ""no state registered. 
Assuming maintenance mode"" - handle_event ""metadata_unavailable_without_state"" ""maint"" - fi - # Do not wait for changes, but fetch value ASAP - response=""$(curl -s http://metadata.google.internal/computeMetadata/v1/instance/maintenance-event -H 'Metadata-Flavor: Google')"" - sleep 1; - continue; - elif [ ""NONE"" == ""${response}"" ]; then - # all clear, poll again - handle_event ""UP"" ""up"" - else - handle_event ""${response}"" ""maint""; - fi - sleep 1; - # For subsequent requests: Poll - response=""$(curl -s http://metadata.google.internal/computeMetadata/v1/instance/maintenance-event?wait_for_change=true -H 'Metadata-Flavor: Google')"" -done -```",2.0 -80645953,2021-03-10 14:27:41.470,Expose the container registry database-related metrics in the Grafana dashboards,"Related to https://gitlab.com/groups/gitlab-org/-/epics/5392. - -As we approach the deployment of the new registry with a metadata database and online garbage collection to pre-production, we'll need to extend the existing Grafana dashboards for the registry with database-related metrics (application metrics, not the database cluster metrics). - -The container registry, when the metadata database is enabled, emits the following sets of Prometheus metrics: - -- Connection pool statistics: Count of open, idle, in use, etc. connections ([`sql.Stats`](https://golang.org/pkg/database/sql/#DBStats)); -- Statements duration: The duration of every single statement executed against the database is recorded using a histogram; -- Online GC statistics: Statistics about online GC, such as the number of processed tasks, recovered space, etc. - -The ~""group::package"" team will try to self-serve these changes but we may need some guidance.",3.0 -80188941,2021-03-03 16:41:23.161,Get Chef working for Staging Subscription GCP Proof of Concept,,5.0 -80044697,2021-03-02 00:05:44.503,Rebuilt org-ci to increase IP address space,"In the effort to roll out https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3648, we discovered there isn't enough IP space for doing an upgrade. We need to change the CIDR and thus rebuild the cluster. - -The suggestion is to update `gitlab-gke-pod-cidr` to be `10.3.0.0/16`. - -I estimate this won't take much time as there is tolerance for downtime with no customer impact as well as it being a pretty simple change. We will also need to be sure that the helm configs are pushed up to the new cluster. - -We also need to update the [runbook](https://ops.gitlab.net/gitlab-com/runbooks/-/tree/master/docs/ci-runners#gitlab-org-ci-project) with new ranges.",4.0 -80016011,2021-03-01 15:35:27.755,Verifying gitlab domain for One trust," - -**Details** - - Point of contact for this request: @Karuna16 - - If a call is needed, what is the proposed date and time of the call: [+ Date and Time +] - - Additional call details (format, type of call): [+ additional details +] - -**SRE Support Needed** -[+ Support Request Details +] - -New txt record on `gitlab.com` -Verifying domain name for gitlab.com for One Trust SSO setup. Here's the [link](https://my.onetrust.com/s/article/UUID-3ec2fcd6-37f9-a0e3-20a1-acb6c64f99c3#UUID-3ec2fcd6-37f9-a0e3-20a1-acb6c64f99c3_bridgehead-idm458761989491843162723680452) on how to verify domain. Attaching the screenshot in case link doesn't work.![Screenshot_2021-03-01_at_9.04.10_PM](/uploads/3b7cf7fa46101c34e13c48ac61cdbf43/Screenshot_2021-03-01_at_9.04.10_PM.png) - -The TXT token from One trust: onetrust-domain-verification=af5b5fda116e45a9b4c4abcd9e571923 -Alias can @ or blank. 
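Once the record has been added, propagation can be spot-checked from any workstation; a minimal sketch:

```bash
# Confirm the OneTrust verification TXT record is visible in public DNS.
dig +short TXT gitlab.com | grep onetrust-domain-verification
```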
- -![Screenshot_2021-03-01_at_9.38.48_PM](/uploads/18a6e3bdc131df47aa2c7c386675dad7/Screenshot_2021-03-01_at_9.38.48_PM.png)",1.0 -79744643,2021-02-24 20:56:33.255,"Transfer meltano.{com,io,net,org} domains to Meltano's AWS account","As a separate business unit in the company, the Meltano team mostly manages its own assets (for example its website, accounts for Google Workspace, Slack, Zendesk, and SendGrid, and domain names meltanodata.com and singerhub.io), that are billed to a Meltano-specific corporate credit card and budgeted to the ""R&D - Meltano"" department. - -The one exception are the `meltano.{com,io,net,org}` domains that are registered under GitLab's AWS account. -To give the Meltano team full control over the assets it's responsible for and make sure expenses are taken out of the correct budgets, -I think it would be appropriate for these domains to be [transferred](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-transfer-from-route-53.html) to Meltano's AWS account with ID 292927715491. - -Since this was requested and rejected two years ago in https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/6316, the Meltano team has become more independent and now manages more of its own infra assets, with the ownership of these domains a remnant of its early days as ""just"" a team in GitLab.",2.0 -79703641,2021-02-24 11:13:26.320,Add deployment version info to K8s dropdown menu for each service in Grafana,"Use case: We want to get information about the deployed version of a service in Grafana. - -Right now there is a `pod-info` dashboard for each service where you can see the deployment versions e.g. [this](https://dashboards.gitlab.net/d/sidekiq-pod/sidekiq-pod-info?orgId=1) for sidekiq. But it would be easier to find, if we - -1. would add this to the ""Kubernetes Detail"" dropdown menu for each service overview dashboard or -2. add the deployment version information directly to the ""Kube Deployment Detail"" dashboard, which already is linked in the ""Kubernetes Detail"" dropdown menu - -I would prefer option 2.",3.0 -79643924,2021-02-23 15:14:44.119,Review and update License/Version/Customers Runbook Documentation,"Per this incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3398 - -As an SRE on-call, I should be able to open the Runbooks and find basic information on how to find and change the Customers, License, and Version services.",1.0 -79643555,2021-02-23 15:09:15.086,Review and update the Services-Base README,"Per this incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3398 - -The [README.md](https://ops.gitlab.net/gitlab-com/services-base/-/blob/master/README.md) for the Services-Base project should be reviewed and updated to better reflect the intended workflow of making a change, promoting the change through the Staging and Production environments, and cleaning up any remaining environments.",3.0 -79434974,2021-02-19 19:21:00.593,TLS renewal or certificate for airflow.gitlabdata.com,"This was originally set up with https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9205 and was implemented as described in [our handbook](https://about.gitlab.com/handbook/business-ops/data-team/platform/infrastructure/#tls) - -Looks like we either need access to Route 53 to do this or the renewed certificate. 
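For reference, the expiry date of the certificate currently being served can be confirmed with a quick check such as:

```bash
# Print the notAfter date of the certificate presented by the host.
echo | openssl s_client -servername airflow.gitlabdata.com -connect airflow.gitlabdata.com:443 2>/dev/null \
  | openssl x509 -noout -enddate
```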
- -The data team work is tracked in https://gitlab.com/gitlab-data/analytics/-/issues/7994",2.0 -78997103,2021-02-12 20:01:00.964,Grow production API fleet by 6 nodes (or more),"[![Screen_Shot_2021-02-12_at_2.52.48_PM](/uploads/97561837174e5c2969e4a676824ce8ec/Screen_Shot_2021-02-12_at_2.52.48_PM.png)](https://dashboards.gitlab.net/d/api-main/api-overview?viewPanel=66&orgId=1&from=1612986771345&to=1613159571346&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-sigma=2) - -[![Screen_Shot_2021-02-12_at_2.56.10_PM](/uploads/62cd2c700ef8577f275fd664a6b67421/Screen_Shot_2021-02-12_at_2.56.10_PM.png)](https://dashboards.gitlab.net/d/api-main/api-overview?viewPanel=1217942947&orgId=1&from=1612986960558&to=1613159760558&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-sigma=2) - -The API VM fleet is starting to show loads that place the VMs load/core value over 1 during peak use times. This coupled with a deploy can lead to poor apdex. Until API is moved into Kubernetes, we should increase the fleet size to minimize customer facing performance issues. - -The outcome of this work should be that deploys during peak times should not spike single node puma worker component to 100%.",3.0 -78727798,2021-02-09 15:16:15.273,Update packagecloud to latest version," - -**Details** - - Point of contact for this request: @twk3 @joshlambert @mendeni - - If a call is needed, what is the proposed date and time of the call: (likely not needed) - - Additional call details (format, type of call): - -**SRE Support Needed** - -We would like to have packages.gitlab.com upgraded to the latest version of packagecloud 3.0.3 as it brings support for additional package versions the Distribution team would like to provide to customers. - -We would like for the upgrade to be available for us to use with the GitLab 13.10 release. - -Omnibus issue: https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5979 - -cc\ @ahanselka @dawsmith - - - - -",5.0 -78681464,2021-02-09 01:09:26.346,Deploy Ephemeral Sessions on Read Only console servers,"Using the Virtual Machines provisioned in https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12562 - -Create and push out the chef configuration that will ephemeral systemd sessions in the staging and production environments.",8.0 -78681460,2021-02-09 01:09:09.912,Deploy Teleport on Read Only console servers,"Using the Virtual Machines provisioned in https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12562 - -Create and push out the chef configuration that will install Teleport in the staging and production environments.",8.0 -78668132,2021-02-08 20:22:20.305,Provision read only console servers,"Write Terraform to provision read only console servers including Teleport and ephemeral sessions from PoC - -Once this is complete we can: - -- Deploy Teleport on Read Only console servers: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12565 -- Deploy Ephemeral Sessions on Read Only console servers: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12566",8.0 -78447341,2021-02-04 17:31:50.624,Unable to expand Cloud NAT IP routing any further,"In recent incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3448 - -We needed to add additional IP addresses to our Cloud NAT device. Luckily we were able to easily add two ip addresses that were next in line with our IP reservations. Should we need to expand any further, our terraform module may not support this. 
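- -For reference while working the checklist below, attaching additional reserved addresses to an existing Cloud NAT looks roughly like this (sketch only; names and region are placeholders, and any manual change would still need to be reconciled back into the terraform module): - -``` -# Reserve an extra external address (placeholder names/region) -gcloud compute addresses create nat-extra-1 --region=us-east1 -# The pool flag sets the full list, so include the existing addresses as well as the new one -gcloud compute routers nats update cloud-nat --router=nat-router --region=us-east1 --nat-external-ip-pool=nat-1,nat-2,nat-extra-1 -```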
- -Utilize this issue to accomplish the following: - -1. [x] Learn how IP allocation works with respect to Cloud NAT devices - during the incident, we appeared to use IP addresses that are not technically allowed to be used in the CIDR range -1. [ ] Determine how we can expand the set of IPs to the Cloud NAT device - currently we apply a CIDR range and a count of IP addresses - the current allocated CIDR has no free IP addresses -1. [ ] Documentation updated to reflect how these are configured and how to expand in the future -1. [ ] Consider expanding our current Cloud NAT IP Allocation - -Marking this as high priority because the next time we run out of ports, we currently do not have an identified mitigation strategy.",13.0 -78440319,2021-02-04 15:43:03.444,"Fluentd pod restarts should not cause many old, already indexed log lines to be sent to elasticsearch"," - -## Problem - -It's been observed (e.g. https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3435#note_498652734) that restarting fluentd (by reprovisioning the daemonset pod) causes log files to be read in full from the beginning, entering many old and duplicated lines into elasticsearch. This skews logs-based aggregations, and puts strain on the elasticsearch nodes. - -## Desired outcome - -Fluentd makes a best-effort attempt to pick up where it left off. The pos_file feature of the tail input plugin (https://docs.fluentd.org/input/tail#read_from_head), which we do configure, is meant to allow fluentd to resume from its previous position. The pos files are kept on `/var/log`, which in the daemonset pod is the same mountpoint as in the host's mount namespace, and so files written there persist across pod restarts. - -## Acceptance criteria - -- [ ] As few duplicate lines as possible are entered into elastic in most fluentd pod restart scenarios",2.0 -78415198,2021-02-04 09:51:56.597,"ILB for thanos-query-frontend, to bypass IAP for grafana"," - -## Problem - -We need to access thanos-query-frontend in kubernetes internally, inside and outside GKE (because we have not migrated grafana to k8s yet). - -## Desired outcome - -A DNS entry that resolves to a private IP, that allows unauthenticated access to thanos QFE from inside the VPC. - -## Acceptance criteria - -- [x] A DNS entry that resolves to a private IP, that allows unauthenticated access to thanos QFE from inside the VPC - ---- - -FYI @bjk-gitlab",1.0 -78093399,2021-01-29 19:48:26.156,CI Cleaner script seems to have stopped working,"For some reason, the CI cleaner script has stopped working but doesn't error out. I'm not quite sure why as there is no error. Nonetheless, this should be fixed. - -My initial suspicion is that something changed about our naming scheme or the GCP API, but I have done minimal investigation thus far. - -[link to pipelines](https://ops.gitlab.net/gitlab-com/gl-infra/ci-project-cleaner/-/pipelines)",8.0 -78091731,2021-01-29 19:02:02.366,Delete extra server entry from Okta ASA,"During the course of fixing https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12441, an additional server entry for Okta ASA was created. This is unnecessary and should be deleted, but I don't have access to do so. The only person I know of that has access to delete it is @cmiskell, but there may be others.
- -[this is the server that needs deleting](https://app.scaleft.com/t/gitlab-poc/project/windows_ci/servers/28409349-186f-4079-a036-9faf2da9a92a)",1.0 -78079775,2021-01-29 15:37:28.442,Revise Chef server LEGO cron for certificate renewal,"The Chef server is not reloading nginx after a new certificate is installed. The LEGO command needs the following added: `--renew-hook='/usr/bin/chef-server-ctl hup nginx` to make sure this happens on a renewal. - -We should also re-evaluate the LEGO schedule so that it will run once a day.",2.0 -78011323,2021-01-28 15:57:08.416,Support pg_ctl_timeout attribute in the patroni service configuration,"Support `pg_ctl_timeout` attribute in the patroni service configuration. - -## Why - -The present situation in the production patroni hosts is as follows: - -The command `sudo systemctl stop patroni` when invoked in production does not wait long enough for the postgresql PostgreSQL database server to terminate, and sends it a `sigkill` signal, which forces the process to halt in an untidy way. - -This would also happen if a patroni host was instructed to shutdown or reboot -- the `systemd` service manager would attempt to stop all its unit daemons, including the `patroni` service. - -## Expectations - -This is an experiment with no bad consequences. The experiment is to put in place a configuration attribute which allows us to adjust the amount of time which patroni waits for the PostgreSQL database server to cleanly shutdown before patroni gives up and sends postgresql a `sigkill`. - -It is NOT KNOWN how long the PostgreSQL database server is required to shut down cleanly in production. We have not yet conducted any tests to determine what this interval actually is. - -## Possible outcomes - -The worst possible outcome is that patroni takes longer than the default timeout of `30` seconds for the PostgreSQL database server to shut down cleanly. If the PostgreSQL database server still has not shut down by the end of the new timeout of `120` seconds, then we are no worse off than we were before. - -But now we will have in place a mechanism to control this timeout interval configuration and adjust it upwards as we encounter new situational information around host system and patroni service shutdown commands, whereas presently we have no method by which to control this mechanism using our configuration management tooling. - -## Material references - -Discussions around discovery of problem which this issue intends to iteratively solve. - -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12066#note_464281824 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12066#note_463307997 -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12066#note_463768785 - -## Alternative proposals - -This is not strictly an alternative option, as it can be combined with the current approach to reduce anxiety around system and service stop instructions. It is possible to also disable the patroni systemd unit configuration to use the `sigkill` method at all. - -- https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12066#note_465064619",1.0 -77826560,2021-01-25 21:46:07.270,Core Infra team vision discussion,"@dawsmith and I were discussing ideas for clarifying, expanding, and updating our [team vision](https://about.gitlab.com/handbook/engineering/infrastructure/team/reliability/core-infra/#vision), and I wanted to kickstart a discussion amongst the team to make sure we incorporate all perspectives with any changes we make. 
- -Here's one section from my notes: -> I’ve struggled for a long time trying to figure out what our team should be, what our vision is, and where our north star lies. This statement: “Continue delivering tangible value for both GitLab.com, but indirectly our self-managed customers who run at a larger scale. It has been mentioned too many times, but every improvement we make at scale for .com is a challenge our self-managed customers won't have to deal with.” resonated strongly with me. The areas that core infra owns include many areas we take for granted, a bit. Rarely do we have clear ways to incorporate a link back to our self-hosted customers, outside of the occasional sales call where we answer questions about how we manage .com. Incorporating more focus on reusability for self-hosted GitLab admins would significantly help us to answer those questions, clarify priorities, and provide impetus to improve the quality of our work products. -> - Chef (and next, Ansible), with all the related cookbooks/playbooks, pipelines, testing, and deployment -> - Standardize around publicly available/distributed terraform modules, helper scripts, pipeline definitions, etc. -> - Frontend traffic management, allow/blocklists, WAF, etc. -> - Kubernetes cluster monitoring and management - -In addition to ensuring we focus more on repeatability and reusability in our code artifacts, one common theme I keep coming back to in my 1:1 discussions with @dawsmith centers around shifting our mindset to think about services and platforms instead of (just) tools and technologies. So instead of attacking the problem of managing infrastructure by thinking about terraform code updates and custom scripts, I'd like to think through some standard use-cases and abstract those implementation details a bit. We can still leverage the best technology for a given purpose on the backend, but we can more effectively enable self-service for other teams within GitLab by codifying what it means to deploy infrastructure _in our environment_ with a narrower, more focused abstractions. - -The idea is to enable scalability and SaaS platform maturity by distilling down complex codebases and diverse technologies into simpler, more consumable interfaces. - -One example might be to define a ""frontend service"" that comprises managing allow/blocklists, path-based routing & ACLs, DNS, and SSL certificates; another could be dealing with the mountain of options with raw GCP/GKE APIs, console, and CLI options, and presenting a service/API/CLI of our own that allows a user to ""just"" create a VM, project, etc. while ensuring that the results of that action are properly tagged, follow naming conventions, have appropriate security measures in place, and are subject to lifecycle policies to reduce bloat in idle infrastructure. - -For a GitLab engineer looking to engage with our infrastructure, it is much less intimidating to be presented with a curated selection of common use-cases via a web frontend, internal API, or custom CLI utility, than to be told ""Oh yeah - just check out our thousands of lines of terraform/chef/ansible/go code, that'll get you what you need."" The easier and safer we make it for more people to productively contribute to the growth and management of gitlab.com, the better all our lives will be. As things stand, we often operate as a bit of a ""service team"", dealing with many fractured ad-hoc requests from many different directions, and that simply won't scale effectively. 
- -These are just a couple points from my perspective. What does everyone else think? Since we could brainstorm a thousand ideas that may or may not be feasible/important/urgent, we should try to focus on the things that are most critical, alleviate the most pain, provide the greatest value, etc. As a concrete exercise, one way of framing this is to consider the end goal for a particular timeframe, then work backwards from there filling in the broad strokes of what it'll take to get there. - -Ex: -_It is February 2022; in the past year we have_ -- _implemented a core webservice/API and associated CLI that we use to holistically manage frontend services for gitlab.com. So far we can manage allow/blocklists in Cloudflare, page rules, WAF rules, and path-based routing_ -- _while we have emergency-use out-of-band SSH credentials for SREs, we have successfully migrated all other infrastructure authn and authz to Okta or OAuth integrations_ -- _we have a service that allows us to create and manage VM base images, with reports on utilization throughout the environment for visibility as we facilitate updates, OS upgrades, and removal of outdated images from the catalog_ -- _we have established a baseline definition of the standards required for our GKE cluster, and have a plan to develop a CLI for creating, updating, and validating foundational cluster configuration throughout all our environments_",8.0 -77690708,2021-01-22 14:52:46.656,replace `module.gcp-tcp-lb-spectrum` with `module.gcp-tcp-lb` in gprd,"As part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12313 we cleaned up `gprd` terraform to have a clean resource for the main spectrum app. To reach full alignment with `gstg`, we need to create a new `module.gcp-tcp-lb` with identical configuration and migrate the spectrum app over to that.",13.0 -77621595,2021-01-21 16:09:00.126,Cleanup residue from email.customer.gitlab.com Cloudflare move,"Cleanup the terraform diffs caused by https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12411 - -This will involve touching the dns_record module to allow for orange-clouded domains, as well as importing the new page rule and fixing the order in terraform.",8.0 -77589454,2021-01-21 07:45:42.738,Woodhouse is @-ing unrelated people in incident issues,"Example: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3381#note_490373672 - -It appears as thought woodhouse is sometimes @-ing the incorrect GitLab handle for incident openers in the issue description, leading to this situation in which @brentnewton opened an issue which tagged `@bnewton`. We can see this was the case, from this subsequent description edit: - -![Screenshot_2021-01-21_at_07.43.03](/uploads/d633389241bfe4a3d9aa8e85ba19f32e/Screenshot_2021-01-21_at_07.43.03.png) - -This may be due to incorrectly assuming some relationship between the user's Slack handle and their GitLab handle.",1.0 -77573036,2021-01-20 21:39:07.484,Woodhouse mentions the wrong GitLab user,"I noticed after I created production#3338, that the initial description populated by Woodhouse at-mentioned `cbarrett`, but my gitlab.com username is @craig. I'm not sure where this identity is managed, since my slack handle is also @craig, so it's not pulling from there. My email is either cbarrett or craig at gitlab.com, depending on how/where you look.",3.0 -77561597,2021-01-20 17:00:48.192,Manage repository residency audit tooling,"- [ ] Package the runbooks gitaly shard repository residency audit tooling script as a ruby gem. 
-- [ ] Host it somewhere, like rubygems. (Or aptly.gitlab.com?) -- [ ] Roll it out as part of a chef recipe included by a gitaly-specific role.",5.0 -77490807,2021-01-19 17:22:14.017,Web-Pages HAProxy logs need more visibility,"As an EOC, I should be able to examine access requests coming into the web-pages service to help troubleshoot incidents involving web-pages. - -Inspiring incident: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3317",2.0 -77485561,2021-01-19 15:46:12.503,Bump Cloud Native GitLab (CNG) to Redis 6.0.10,"This is a sister issue to https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12346, for bumping the Redis version in CNG.",1.0 -77439656,2021-01-19 00:01:15.768,Upgrade all GKE clusters to 1.18,"We need to upgrade all our GKE clusters to Kubernetes 1.18 (minimum release `v1.18.6-gke.6300`) as it was highlighted at https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2440#note_486117933 that we have incorrect sysctl settings on our node, potentially leading to issues. - -Looking at the upgrades notes at https://v1-18.docs.kubernetes.io/docs/setup/release/notes/#urgent-upgrade-notes I'll highlight the important things of note that I think affect us - -> kube-apiserver: -> the following deprecated APIs can no longer be served: -> All resources under apps/v1beta1 and apps/v1beta2 - use apps/v1 instead -> daemonsets, deployments, replicasets resources under extensions/v1beta1 - use apps/v1 instead -> networkpolicies resources under extensions/v1beta1 - use networking.k8s.io/v1 instead -> podsecuritypolicies resources under extensions/v1beta1 - use policy/v1beta1 instead (#85903, @liggitt) [SIG API Machinery, Apps, Cluster Lifecycle, Instrumentation and Testing] - -We need to audit everything in the Gitlab chart (and all other services we deploy) to make sure we aren't using any deprecated interfaces - -> resource metrics endpoint /metrics/resource/v1alpha1 as well as all metrics under this endpoint have been deprecated. Please convert to the following metrics emitted by endpoint /metrics/resource: -> - scrape_error --> scrape_error -> - node_cpu_usage_seconds_total --> node_cpu_usage_seconds -> - node_memory_working_set_bytes --> node_memory_working_set_bytes -> - container_cpu_usage_seconds_total --> container_cpu_usage_seconds -> - container_memory_working_set_bytes --> container_memory_working_set_bytes -> - scrape_error --> scrape_error -> (#86282, @RainbowMango) [SIG Node] - -We need to confirm we don't rely on any of these - -> Ingress: -> spec.ingressClassName replaces the deprecated kubernetes.io/ingress.class annotation, and allows associating an Ingress object with a particular controller. -> path definitions added a pathType field to allow indicating how the specified path should be matched against incoming requests. Valid values are Exact, Prefix, and ImplementationSpecific (#88587, @cmluciano) [SIG Apps, Cluster Lifecycle and Network] - -We should check all ingress objects to make sure this is ok. I think this should be fine for backwards compatibility, but a bit of confirmation and investigation is worthwhile. 
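- -One low-effort way to audit the ingress objects across a cluster (illustrative one-liner, assuming kubectl access; the dots in the annotation key are escaped so custom-columns treats it as a single field): - -``` -# List every ingress with its legacy ingress.class annotation so we can spot anything still relying on it -kubectl get ingress --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.metadata.annotations.kubernetes\.io/ingress\.class' -```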
- -Metrics changes also documented at https://v1-18.docs.kubernetes.io/docs/setup/release/notes/#metrics which we should review - -# Checklist - -## Pre-upgrade checks for 1.18 -* [x] Upgrade kubectl client for CI - https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12367#note_488814977 -* [x] Confirm all resources in `k8s-workloads/gitlab-com` use non-deprecated apis -* [x] Confirm all resources in `k8s-workloads/gitlab-helmfiles` use non-deprecated apis -* [x] Confirm all resources in `k8s-workloads/tanka-deployments` use non-deprecated apis -* [x] Confirm that we do not use resource metrics endpoint `/metrics/resource/v1alpha1` and if so, migrate to new endpoint -* [x] Confirm that all ingress resources do not rely on the `kubernetes.io/ingress.class` annotation -* [x] Confirm that the metric changes documented at https://v1-18.docs.kubernetes.io/docs/setup/release/notes/#metrics do not affect us - -## Upgrade -* [x] `ops` master(s) upgraded -* [x] `ops` nodes upgraded -* [x] `pre` master(s) upgraded -* [x] `pre` nodes upgraded -* [x] `gstg` master(s) upgraded -* [x] `gstg` nodes upgraded -* [x] `gprd` master(s) upgraded -* [x] `gprd` nodes upgraded -* [x] `org-ci` master(s) upgraded -* [x] `org-ci` nodes upgraded",3.0 -77506531,2021-01-18 16:41:38.641,Update our WAL-G info in our Runbooks,"This Runbook information is outdated and we should update it as a result of our successful implementation of WAL-G in production (retiring WAL-E) - -https://ops.gitlab.net/gitlab-com/runbooks/-/blob/master/docs/patroni/postgresql-backups-wale-walg.md - -We may want to iterate on this in 2-3 different phases, to get results asap (clean the wrong info first), given the size and the amount of info of this Runbook. - -Acceptance Criteria: -- [ ] All the mentions to WAL-E are removed, or state clearly the ones that are not valid anymore. -- [ ] Update the mentions to WAL-G in the runbook, so they represent what we have in production at the moment. -- [ ] Clean the sections that are not needed, and the pointers to external links.",5.0 -77227259,2021-01-14 14:13:44.163,Bump omnibus to Redis 6.0.10,"We'll also want to make sure to allow the new threaded-io settings to be configured: - -From [annotated `redis.conf`](https://raw.githubusercontent.com/redis/redis/6.0/redis.conf): - -``` -################################ THREADED I/O ################################# - -# Redis is mostly single threaded, however there are certain threaded -# operations such as UNLINK, slow I/O accesses and other things that are -# performed on side threads. -# -# Now it is also possible to handle Redis clients socket reads and writes -# in different I/O threads. Since especially writing is so slow, normally -# Redis users use pipelining in order to speed up the Redis performances per -# core, and spawn multiple instances in order to scale more. Using I/O -# threads it is possible to easily speedup two times Redis without resorting -# to pipelining nor sharding of the instance. -# -# By default threading is disabled, we suggest enabling it only in machines -# that have at least 4 or more cores, leaving at least one spare core. -# Using more than 8 threads is unlikely to help much. We also recommend using -# threaded I/O only if you actually have performance problems, with Redis -# instances being able to use a quite big percentage of CPU time, otherwise -# there is no point in using this feature. 
-# -# So for instance if you have a four cores boxes, try to use 2 or 3 I/O -# threads, if you have a 8 cores, try to use 6 threads. In order to -# enable I/O threads use the following configuration directive: -# -# io-threads 4 -# -# Setting io-threads to 1 will just use the main thread as usual. -# When I/O threads are enabled, we only use threads for writes, that is -# to thread the write(2) syscall and transfer the client buffers to the -# socket. However it is also possible to enable threading of reads and -# protocol parsing using the following configuration directive, by setting -# it to yes: -# -# io-threads-do-reads no -# -# Usually threading reads doesn't help much. -# -# NOTE 1: This configuration directive cannot be changed at runtime via -# CONFIG SET. Aso this feature currently does not work when SSL is -# enabled. -# -# NOTE 2: If you want to test the Redis speedup using redis-benchmark, make -# sure you also run the benchmark itself in threaded mode, using the -# --threads option to match the number of Redis threads, otherwise you'll not -# be able to notice the improvements. -``` - -We'll need to review the [CHANGELOG](https://raw.githubusercontent.com/antirez/redis/6.0/00-RELEASENOTES) for potential BC breaks.",2.0 -77160393,2021-01-13 13:43:04.674,Create lifecycle for docker images and containers on runner-01-inf-ops,"The `runner-01-inf-ops` runner has a lot of disk space that is consumed by docker images/containers/volumes/etc. - -We should add some cron jobs to perform routine clean-ups on these resources to keep the node operating with plenty of disk space for new images, etc. - -Minimum DOD: -* A weekly cron job to clean up dangling images `docker image prune` - -Nice to have DOD: -* A more elaborate cron job that also cleans up images not attached to containers, old containers, old volumes, and old networks. -* Consider moving `/var/lib/docker` to it's own filesystem to prevent this disk usage from endangering the node's root filesystem.",2.0 -77084222,2021-01-12 12:02:21.061,Align altssh on gstg with gprd & cleanup terraform residue,"**Current Situation** - -Currently on gprd we redirect the altssh traffic to our regular SSH LBs in Cloudflare Spectrum. We don't do this for gstg. - -**Desired Outcome** - -gprd and gstg both re-map `altssh.gitlab.com:443` to `$gitlab_com_origin_ip:22` in Cloudflare spectrum. - -**Acceptance Criteria** -These are intertwined, thus not separate issues! - -- [x] Copy the terraform definition for altssh from gprd to gstg -- [x] Apply the changes -- [x] Clean up the old terraform code",13.0 -76948932,2021-01-08 23:59:33.703,Wrap up GSM epic and milestone,"The [project to launch Google Secrets Manager](&343) and initial set of related processes is effectively done, and needs to be wrapped up. The [epic](&343) and [milestone](https://gitlab.com/groups/gitlab-com/gl-infra/-/milestones/113) both need to be updated and closed, with references to leftover follow-up tasks to be addressed individually or as part of other, subsequent efforts (e.g. fixing instance-level service accounts during &231)",1.0 -76890801,2021-01-07 16:40:44.714,Create new gitaly storage shard nodes to replace `nfs-file52` and `nfs-file53`,"Gitaly storage shard `nfs-file52` (`file-52-stor-gprd.c.gitlab-production.internal`) is at `75.46%` usage as of `2021-01-07`. Gitaly storage shard `nfs-file53` (`file-53-stor-gprd.c.gitlab-production.internal`) is at `72.68%` usage as of `2021-01-07`. - -Our usage targets specify that we try to maintain usage between 65-79%. 
New project creation would quickly cause more usage than that on both `nfs-file52` and `nfs-file53`. The automatic weights assignment tool will soon eliminate `nfs-file52` and also `nfs-file53` as a candidate shard receiver for new project residencies. - -Note: This is partially a corrective action for incident [`2020-12-21: Gitaly nodes being removed from weight pool too soon`](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3225): https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3225 - -It is no longer the case that nodes are being removed too soon, and so the initial five additional nodes were reduced to only two, instead.",1.0 -76869540,2021-01-07 11:47:39.717,keep wal-g backup-push logs for 7 days,We should keep the wal-g backup-push logs for 7 days instead of simply overwriting the last log file with each backup.,2.0 -76838440,2021-01-06 21:25:51.277,Evaluate infracost for inclusion in CI for terraform repositories,"The [infracost](https://www.infracost.io) utility seems like a very useful addition to our standard terraform workflows, and [this template](https://gitlab.com/infracost/infracost-gitlab-ci) may provide a useful starting place for incorporating the addition into our terraform project workflows (gitlab-com-infrastructure, gitlab-services, environments)",3.0 -76837913,2021-01-06 21:01:39.498,High-level planning and discussion for OS upgrades (16.04 to 20.04),"The following is an initial _**rough draft**_ of pros and cons for each method. I'm sure there are many more pros and cons, but I'd like to open this up for input while I continue to consider the options. - -This list will eventually be used to determine how to proceed. We should mainly consider patroni first, but I included others as I thought of them. - -# Pros and Cons of Upgrade Methods - -I am only considering pet services such as patroni and gitaly. Other nodes such as git and web are much more ephemeral and are likely going to be easiest to just rebuild. -Patroni is in a high availability configuration and thus we could do the upgrades with only one bit of downtime during the failover of the primary. Gitaly nodes on the other -hand are single points of failure and are unlikely to be able to be upgraded without downtime. - -## Delete and rebuild server with 20.04 - -I would expect this process to be done via terraform by updating the base image for a given module and applying it in a targeted, one by one, fashion. -The disk would not be deleted while applying the terraform and would be reattached as-is to the newly built server. - -This is my preferred method for redis, patroni, and ideally gitaly. 
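- -To make the targeted, one-node-at-a-time flow concrete, each iteration would look roughly like this (sketch only; the resource address is a placeholder, not our real state path): - -``` -# After bumping the base image for the module, replace a single instance at a time -terraform plan -target='module.patroni.google_compute_instance.instance[3]' -terraform apply -target='module.patroni.google_compute_instance.instance[3]' -# Confirm the node has rejoined the cluster and is healthy before moving on to the next index -```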
- -### Pros - -- We will have a completely fresh server -- Unlikely to end up in a state where a server doesn't come back from boot -- Will be able to prove that this is a viable process for stateful servers in the future -- By rebuilding these we could also implement the required changes for GSM (https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/343) -- An excellent time to upgrade machine types as well (https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12484#note_505538821, https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12294) - -### Cons - -- A rebuild might take some time to bootstrap and become available -- Gitaly is a single point of failure, so doing this there might take more downtime than other methods -- Terraform would be ""dirty"" for however long it takes to go through and rebuild each server - - There are around 60 gitaly servers so this would probably be quite a while -- It is always unnerving to delete servers (not something we should base a decision on, simply wanted to express :slight_smile:) - -## In place update to 20.04 - -With this path, we would run an in place upgrade in which the only time the node would be down would be for the reboot. -We could set ignore_changes in the storage terraform module to allow new servers to be built with new images while not trying to force rebuilds on the old ones. - -This is my preferred method for CI nodes s. - -### Pros - -- Doesn't require rebuild of server -- Shorter downtime for a service like Gitaly as the downtime would only be a reboot -- Would avoid rebuilding CI nodes since they are all fairly manual setup - -### Cons - -- Potential for strange behavior if the upgrade goes awry for some reason -- If upgrade does go awry, recovering from that may be difficult and would likely be faster to fallback to the rebuild plan -- We could update the machine type at the same time, but it would be a bit more difficult than rebuilding as it would involve terraform changes and applies unrelated to the upgrade. -sd -# Gitaly specific options - -## Subscribe to Ubuntu Advantage - -This option would have us subscribe to [Advantage](https://ubuntu.com/advantage/) which would give us software and security updates until up to 2024. There would be a yearly fee associated with this option. A subscription for 80 VMs would be 6k a year with no support. If we wanted support it would be 20k or 40k per year based on whether we wanted 5 or 7 day access to support respectively. This would give us software and security updates until 2024, which ideally will be plenty of time. This in conjunction with ignoring boot disk and image changes in the storage module would allow us to build new servers using a new image but the same process. - -This is my preferred method for Gitaly. - -### Pros - -- We can have quite a bit of runway if necessary to try to figure out the best way to update these, potentially with zero downtime. -- We could thus focus on all of the other nodes we need to upgrade this quarter -- The cost isn't really that expensive. - -### Cons - -- It costs. It is reasonable, but it does still cost. -- We would be further kicking the can down the road (which is really only a minor downside given the benefit of extra time). -- We're wanting to stop building GitLab packages for 16.04 so this would be a problem. 
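- -For scoping, attaching a node to the subscription and enabling extended security maintenance would look roughly like this (sketch only; assumes the ubuntu-advantage-tools client and a valid token, and the exact steps would come from the subscription docs): - -``` -# Attach the node to the UA subscription and turn on ESM updates (token is a placeholder) -sudo ua attach TOKEN -sudo ua enable esm-infra -sudo ua status -```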
- -## Upgrade by switching over to a newly built server - -This method would involve building an additional server running 20.04, configuring it as desired, and then switching the data disk between the old and new nodes. -These servers would likely come in the form of a new terraform module definition. - -I do not think this a particularly great idea for any of our cases. I believe the only reasonable use of this method would be for Gitaly. - -### Pros - -- We could likely do the gitaly updates with the shortest amount of downtime this way, if we were able to solve the problems below. - -### Cons - -- We would have to have some method to switch traffic from the old node to the new node - - Currently all servers are referenced by DNS name which would be different with new servers - - The DNS name cannot be changed once the server is created - - Rolling out config changes to update the location of the gitaly shards is going to be a long and painful process, with high risk - - If we wanted to try to mitigate risk, we could create some sort of haproxy system in order to be able to swap traffic between the nodes -- This method would likely be a substantial amount of extra work in validating everything as well as planning and executing all the switchovers - -## Build new servers and migrate all the repos from one node to the new node - -This method would involve using the repo migration api to move all repos from one gitaly shard to a new one. I do not recommend this method. - -### Pros - -- We could upgrade the gitaly nodes with no downtime - -### Cons - -- The migration API is good, but there would undoubtedly be a decent number of failures, each of which would require investigation, validation, and repair -- This would take ages. The migration of repos between shards is a slow process and we have about 60 shards to go through",8.0 -76836574,2021-01-06 20:15:58.989,Discussion and high-level plan for Chef to Ansible transition,"With Chef Server now EOL, limited options for community alternatives, and an overall pessimistic outlook on the future of the product/community after the sale of Chef Software, we have decided to begin the transition to Ansible next quarter. This issue will be used to kickstart the discussion around gathering some initial requirements, and generate the high-level plan for the effort that will be fed back into &392. - -## Basic plan - -Here is a roll-up of the (very) high-level plan with additional/more detailed notes below. This is still a work-in-progress, and will likely continue to evolve even as we proceed to implementation and begin iterating on rolling through specific portions of the infrastructure. - -1. Reorganize gitlab-com/gitlab-com-infrastructure> and update README -1. Extend pipeline configs in gitlab-com/gitlab-com-infrastructure> to support ansible / [GET](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit) -1. Implement [tagging standards](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11312) with additions as needed for this project -1. Leverage tags/filters for [GCP inventory](https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_compute_inventory.html) - 1. This should not necessarily be the same for [deployment](https://gitlab.com/gitlab-com/gl-infra/deploy-tooling), consul/service discovery may be a better option to distinguish between new, unconfigured nodes and those which have been successfully bootstrapped -1. 
Develop bootstrap process, plan/implement corresponding terraform changes to support both Chef & Ansible -1. Design equivalent top-level playbook structure for existing Chef roles -1. Begin migrating common/base configuration and roles -1. Design, document, and implement GKMS/GSM equivalents to Chef Vault items -1. Rollout sequence (WIP) - 1. Leave roles that are/will be migrating to k8s for last - 1. Bastions - 1. Gitaly - 1. Praefect - 1. Patroni - 1. Redis - 1. Console - 1. Consul -1. Develop plays for current configuration in chef cookbooks, by role -1. Cleanup unused Chef ""stuff"" - 1. Roles - 1. Cookbooks - 1. Data bags - 1. Archive projects in gitlab - 1. Decommission chef.gitlab.net - 1. Remove monitoring/backups - 1. Audit/cleanup documentation - -## Notes -### Inventory - -Early on, I had considered using Consul as an inventory source, but later realized that we would encounter a circular dependency on bootstrap, where nodes not already registered in Consul would not be visible, and as such Ansible could not target them to run a playbook that would... register... them... in... consul. While expanding our use of consul service discovery may be useful in other areas, it is not appropriate for this case. Using consul as a pivot point for a successfully bootstrapped node _may_ make that a more appropriate inventory source for deployments, however. - -Which brings us to the [Google Cloud Compute Engine inventory source](https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_compute_inventory.html). This plugin will allow us to dynamically build inventories based on filters for matching resources within the GCP environment. We currently rely on Chef as our inventory source for deployments, so both the deployment tooling and the new Ansible infrastructure will need a new source of truth for their inventories. It clearly makes sense to shift to a GCP filter/tag-based approach for bootstrapping, as those values are assigned by terraform and provide a clean method for handoff between terraform and ansible. We already [have plans](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/11312) for implementing formal tagging standards, which should also be leveraged here, as well. - -For deployments, however, leveraging Consul as the next ""higher-level"" inventory source seems appropriate, as a node would be registered in the service catalog and be subject to health/liveness/readiness checks indicating that a deployment to that specific node is appropriate, whereas a newly bootstrapped node may not have all pieces fully configured. Subsequent ansible runs could bridge the gap between the two should a new node be in the process of being bootstrapped _during_ a deployment, or we can incorporate functionality to prevent such situations into our deployment workflow. - -Implementing the tagging standards to facilitate updating/instantiating the inventory using a new source of truth (GCP) should be one of the first things to be done. The current Chef inventory can be kept for the duration of the project (ie minimal bootstrap, empty run-lists), but ultimately the new inventory will need to be well understood and fully integrated by the end of this project, with an ability to easily disable and remove chef once we have everything shifted to ansible. 
Similarly, developing plays to register services in consul and develop corresponding health checks should follow immediately after, so that we have that framework in place to adopt as we shift from Chef and Chef inventory to Ansible. - -### Pull vs push model - -From an architecture standpoint, one of the most fundamental differences between Chef and Ansible is the communications paradigm. While Ansible [can support a pull model](https://docs.ansible.com/ansible/latest/cli/ansible-pull.html) and we should certainly consider that if we encounter scaling challenges, the default starting assumption for most of the documentation, examples, tutorials, and small-scale examples rely on a push model via SSH. This is how our deployments to gitlab.com are conducted, and for simple alignment with the overwhelming mass of documentation and examples, is probably where we should start. - -In addition, while not intrinsically related to this effort, we would benefit in several areas from leveraging Ansible in a build pipeline for ""golden"" images used in dynamically scaled infrastructure; we should keep that in mind as we work through this project. Because of its agentless nature, Ansible lends itself well to this approach, and that is a likely next project after the initial switch. - -### Management and orchestration - -As I started reading about Ansible, one of the first questions I had was whether we need/want to leverage AWX (or Tower). After skimming through the main product page, I don't see a very compelling need for this in our case, especially based on the success we already have had in using GitLab for most of the primary features listed at https://www.ansible.com/products/tower for [deployments](https://gitlab.com/gitlab-com/gl-infra/deploy-tooling) and more recently for [codifying/automating database operations tasks](https://gitlab.com/gitlab-com/gl-infra/db-ops). - -| Feature | GitLab equivalent | -| --- | --- | -| Ansible Dashboard | Metrics exported to Prometheus and Grafana dashboards | -| Real-time job status updates | GitLab CI Jobs | -| Ansible Tower Workflows | GitLab CI Pipelines | -| Activity streams (auditing) | GitLab CI Pipelines / repository history | -| Ansible Tower clusters | N/A | -| Integrations | GitLab CI Pipelines with webhooks/integrations | -| Ansible Tower Smart Inventories | N/A - at most, dynamic inventory will suffice for our needs | -| Ad-hoc jobs | GitLab CI Jobs | -| Remote command execution | GitLab CI Jobs (or Teleport) | -| REST API | GitLab API | - -### Secrets - -With the recent completion of initial setup for Google Secrets Manager, we can begin coordinating the shift to begin using that service. I have not dug into this in sufficient detail, yet, but [some initial work](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12275#note_480745661) to update the instance service accounts will need to coordinated with this effort, as well as with the OS upgrades. In addition to updating instance service accounts, we need to functionally replace all existing tooling for managing secrets in GKMS and applying via Chef Vault. I have not really explored this in-depth, yet, so this is definitely a case of ""here there be dragons"" due to being so inherently vague. - -### Project structure - -As we have seen over time, breaking out similar codebases and modular assets into multiple repositories enables flexibility when you have many consumers of the modular code, but this inherently brings a certain amount of added complexity and management pain. 
I have two thoughts to address this for this project. - -First is the recently announced [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit), developed by our QA team to effectively provision GitLab environments on-demand for testing. We need to evaluate the included Ansible code from that project due to the clear, overwhelming overlap between the two efforts. Absent any insurmountable complications, we should try to avoid forking to make the most effective use of a shared resource, but could also consider forking and maintaining a descendent project if absolutely necessary. Alternatively, we could attempt to cherry-pick the portions of Ansible code that apply to both situations, and work on a separate project from which both GET and our infrastructure tooling could inherit. In either case, I am hopeful that a closer examination shows this as an effective means of sharing infrastructure provisioning code both within GitLab and with the broader community. - -Second, and possibly independent of the first, I'm thinking that our Ansible tooling need not be completely separate from our current Terraform code, and that consolidating both projects under gitlab-com-infrastructure could bring multiple benefits. It would provide the reduced complexity and ease of management of a monorepo, easier coordination of changes between Ansible and Terraform code for bootstrapping instances / managing instance groups, and provide a single place where most core infrastructure code and tooling can be stored/managed/documented. Consolidation of our Terraform code back into a monorepo with strict(er) adherence to feature-flagging code changes would necessarily be included in this approach to get the most benefit. Improved consistency across and within our infrastructure code is certainly desirable, but is not implicit from either approach. In the best case, the consolidation would make it easier to find portions of our codebase and therefore make it easier to keep things consistent (this is definitely a reach, however). This idea is not _completely_ fleshed out, and there are implications with things like our ephemeral and gitlab-services environments, but those too could conceivably be consolidated for the same reasons (though with a fair bit more difficulty, due to differing deployment patterns). Additionally, this centralizes a significant amount of privilege and therefore security risk into a single project, so maintaining good code hygiene and proper access controls would be critical. - -While I am in favor of the second point, it is the most significant decision I have yet to (fully) make, is not fully fleshed out (as noted), and would benefit the most from feedback from the broader team. If we decide that a shared infrastructure repo is not the right approach, I am still strongly in favor of keeping our Ansible code and tooling in a monorepo, regardless. If we have a clear need to distribute code in consumable format outside the team and/or outside of GitLab, we can revisit this choice later, but for our own team's immediate usage, I think the benefits far outweigh any drawbacks. - -### Disaster Recovery - -When I first approached conceptualizing this, I was thinking far more within the centralized model of Chef, and leaning towards implementation of AWX/Tower. Once I worked through the table above, it was easy to see that we shouldn't need that model. 
With that in mind, we will need to structure this project and locate the corresponding CI pipelines/jobs in much the same way that we do for Chef or Terraform today (e.g. on the ops instance, with a mirror to .com). Thus we would need to look to the DR capabilities from the underlying GitLab instance hosting the project and its CI pipelines. - -### Other Outstanding Questions/Topics - -There are a few aspects that I can still think of, which I haven't touched on too much, yet. - -#### ""Service-ness"" - -So far, this is very squarely in the realm of implementing a tool, and a corresponding technology stack. As there is no central server, and it doesn't really have a well-defined ""interface"" for interacting with the tool beyond directly accessing the code and automations triggered/run via CI, I hesitate to push this through a standard production readiness review, but at the same time we should reference our template to ensure that we check all the boxes for the relevant bits that still apply (documentation, architecture diagrams, monitoring/logging/alerting, etc). - -#### Observability - -This is an area where we have interesting challenges, and not much to compare against in our current system. With Ansible being an agent less system, much of the monitoring will likely need to be baked into the CI jobs, and handled via hooks to external services at runtime. There is no central server to collect and emit logs to a traditional logging infrastructure, and likewise the same with metrics.",8.0 -76834477,2021-01-06 19:27:47.914,Verify that customers.gitlab.com has a working database backup,It is unclear if customers.gitlab.com has a functional backup of the database.,1.0 -76829474,2021-01-06 16:51:53.035,Improve wal-g backup job alerting,"The wal-g backup job metrics caused bogus alerts, because the backup is running from a random node each time, which made the metrics look like there was no backup for several days on a single node, while we are only interested on one successful backup job per env each day. - -This was tracked within this incident issue: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3263 - -The fixes: - -* https://gitlab.com/gitlab-com/runbooks/-/merge_requests/3088 -* https://ops.gitlab.net/gitlab-cookbooks/gitlab-walg/-/merge_requests/39 -* https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/4813 -* https://gitlab.com/gitlab-com/runbooks/-/merge_requests/3093 - -Adding this issue for tracking in our milestone. - -## Acceptance criteria - -* [x] Do not cause ""wal-g backup delayed"" false alerts for single patroni nodes -* [x] runbook doc for how to use central pushgateways for job metrics with environment focus",8.0 -76824777,2021-01-06 15:03:41.275,Migrate to the new Subscriptions Management Application production fleet,"This issue will most likely spawn a production change issue (or two). - - -- Migrate data (database?) 
-- Switch DNS",8.0 -76824770,2021-01-06 15:03:32.222,Create a new subscriptions-production Terraform environment,,2.0 -76824761,2021-01-06 15:03:17.877,Create a new Chef role for the new production fleet `prd-subscriptions`,,1.0 -76824754,2021-01-06 15:03:07.116,Create a new Google Cloud Project for production `gitlab-subscriptions-production`,,2.0 -76824713,2021-01-06 15:02:03.630,Create a new deploy process in Ansible and Chef," - - A new recipe in the customer cookbook that sets up the right credentials for Ansible - - A new project that mirrors the customers code base and performs staging deploys and production deploys from master",5.0 -76824698,2021-01-06 15:01:50.560,Create a new subscriptions-staging Terraform environment," - TCP load balancer with health checks and a static IP - - Start with a single node to begin with - - Bastion host - - Restrict outgoing as well as incoming traffic",3.0 -76824677,2021-01-06 15:00:54.506,Create a new Chef role for the new staging fleet `stg-subscriptions`,"- Start this role as basic as possible -- No customers cookbook to start with",1.0 -76824670,2021-01-06 15:00:37.347,Create a new Google Cloud Project for staging `gitlab-subscriptions-staging`,"A new project for the subscriptions management app should be created. The intent of having this in its own project is to build a stronger security barrier between other environments and the Customers infrastructure. - -Consider using this module to manage the new project: https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/project",2.0 -76795255,2021-01-06 01:23:42.187,Set up slack integration for access request workflow,"Using the instructions from Gravitational, we need to install the Teleport Slack application: https://goteleport.com/teleport/docs/enterprise/workflow/ssh-approval-slack/ - -This app will relay access requests into the `#production-access` channel in Slack, and authorized users will be able to review and approve them. - -Currently we are at the proof of concept stage, so this is still considered development, but there should be minimal changes to get this into production when we are ready.",2.0 -76531461,2020-12-28 17:14:33.692,current Gitlab org Dockerhub plan expires Jan 31 2021," - -From Docker: - -[Docker Team](https://email.docker.com/Ao6m0tFJ30IT3Po00K0x14L) plans start at $7 per user per month and provide access to advanced features and capabilities to help your team automate development workflows and increase productivity, including: -Unlimited private and public repositories -Unlimited authenticated image pulls -3 Parallel auto-builds -Role-based access controls -Unlimited teams -Unlimited Hub image vulnerability scans -And [much more](https://email.docker.com/Ao6m0tFJ30IT3Po00K0x14L) - -You are receiving this email because your legacy organizational repository plan expires on your January 2021 billing cycle date. [Read the FAQ.](https://email.docker.com/y3oF4oPI0T0KLJy1tn00630) - - -cc @marin @erushton as I think this is legacy things we are still using dockerhub for. - -I also see @ddavison, @tmaczukin and @twk3, @dzaporozhets as owners so cc in case they have further input on what we should do with this organization. - - -",1.0 -76476214,2020-12-25 22:45:38.749,Adjust SLO calculation for apples to apples time window comparisons,"Because of https://gitlab.com/gitlab-com/gl-infra/production/-/issues/3238 I'm proposing adjusting the SLO calculations to compare similar time windows for errors and requests. 
Otherwise, we wind up seeing alerts for underlying omni-present errors when RPS is reduced.",2.0 -76196529,2020-12-17 16:03:15.996,GitLab Hosted version of Codesandbox Sandpack Fork,"### Overview - -As part of https://gitlab.com/groups/gitlab-org/-/epics/3138#note_465738194 we needed to make some upstream contributions to the [Codesandbox Sandpack package `smooshpack`](https://www.npmjs.com/package/smooshpack). Unfortunately, it might take a while for these changes to land in the upstream package due to upstream availability. - -After [receiving approval to fork this package](https://gitlab.com/gitlab-com/legal-and-compliance/-/issues/308), we've applied these changes to a [GitLab project](https://gitlab.com/pslaughter/gitlab-codesandbox-client) and [published a new npm package `gitlab-smooshpack`](https://www.npmjs.com/package/gitlab-smooshpack). - -For us to [use this behind a feature flag](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/50259#note_468978603), we need to have our GitLab forked package assets hosted in a separate bucket. Currently the Codesandbox package assets are hosted at `https://sandbox-prod.gitlab-static.net/`. - -### Proposal - -Similar to the [steps used for the original package](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/6709#note_259020854), here's how to get the specific assets we need to host: - -``` -# Download the package tarball -wget $(npm view gitlab-smooshpack@0.0.66-1 dist.tarball) - -# Extract just the `sandpack` folder which contains the pre-built assets -tar -xzf gitlab-smooshpack-0.0.66-1.tgz package/sandpack - -# The assets to host will be in `./package/sandpack` -``` - -Then drop all the files in `./package/sandpack` in a asset hosting bucket. Requirements: - -* Suggested domain: `https://gl-sandbox-prod.gitlab-static.net/` -* SSL Enabled - -### Links / References - -- https://gitlab.com/groups/gitlab-org/-/epics/3138 -- Legal issue for forking https://gitlab.com/gitlab-com/legal-and-compliance/-/issues/308",3.0 -76158340,2020-12-16 22:07:39.290,Export Version and License DB for Import into Data Warehouse,"runbook: https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/cloudsql-data-export.md - -this is hopefully one of the very last times",1.0 -76148299,2020-12-16 16:29:29.168,Geo secondary in staging is not sending out emails,After promoting the geo secondary node in staging we ran some tests that should have triggered sending out some emails but no emails have been received. We should check the email configuration of the geo node and fix and document the necessary settings.,5.0 -76148097,2020-12-16 16:25:50.589,Check google auth config for geo in staging,"During the staging failover test, after promotion of the secondary, Skarbek wasn't able to log into the geo node, upon using Google auth: `Error 400: redirect_uri_mismatch`. - -We need to check the google auth settings and make sure that Google auth is working after a failover.",3.0 -76147615,2020-12-16 16:13:04.740,Make sure pg_hba.conf is working for geo failover,"During staging geo failover test, the last step of Promotion failed (geo:set_secondary_as_primary) attempting to authenticate with postgres: - -``` -2020-11-27_13:18:12.31820 Connection matched pg_hba.conf line 77: ""host all all 0.0.0.0/0 md5"" -2020-11-27_13:18:12.57502 2020-11-27 13:18:12 GMT [32557]: [1-1] FATAL: password authentication failed for user ""gitlab"" -2020-11-27_13:18:12.57504 2020-11-27 13:18:12 GMT [32557]: [2-1] DETAIL: Password does not match for user ""gitlab"". 
-``` - -It was temporarily fixed by adding - -``` -host all all 0.0.0.0/0 trust -``` - -above line 77 (which is mentioned in the error above) to `/var/opt/gitlab/postgresql/data/pg_hba.conf` and doing a `gitlab-ctl restart postgresql`, but this is getting overridden every time when doing a `gitlab-ctl reconfigure`. - -We need to fix pg_hba.conf to work on both primary and secondary site for a failover or maybe need to change the DB password for the gitlab role.",4.0 -76099203,2020-12-15 17:53:15.372,Ensure Accurate Domain Contacts at Gandi,I've been working with some domains in Gandi recently and noticed that some of our contacts are old (people who are no longer here for instance). We should go through and ensure they're all accurate.,1.0 -75967749,2020-12-11 16:14:55.941,License app data export credentials configuration,"(this issue description is based on [another similar issue's](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/10670)) - -Request related to this issue: https://gitlab.com/gitlab-org/license-gitlab-com/-/issues/191 - -For automating export of the several tables for the `license-gitlab-com` project, we have a ""pipeline Schedule"" which needs several configuration parameters to be able to run properly. - -- Configuration screen: https://gitlab.com/gitlab-org/license-gitlab-com/-/pipeline_schedules/85641/edit - -## Credentials - -| Variable | Value | -| ------ | ------ | -| GCLOUD_SERVICE_KEY | GCP [service account](https://console.cloud.google.com/iam-admin/serviceaccounts) auth JSON file name | -| GOOGLE_PROJECT_ID | GCP project id ( API [sqladmin.googleapis.com] not enabled on project) ) | -| GOOGLE_COMPUTE_ZONE | The region and zone for the GCP compute | -| BUCKET | The GCS bucket where we shall upload the SQL export | -| INSTANCE | Cloud SQL instance name | -| DATABASE | Cloud SQL database name |",1.0 -75826476,2020-12-09 12:18:30.814,Fix these multiple wal-g backup-push failures,"Tonight we had GCS failures very often which showed multiple problems with wal-g backup-push (3 gprd backup runs failed tonight). Reminder: We have `GCS_CONTEXT_TIMEOUT` set to 600s currently, to make wal-push not block for too long on errors. - -### backup fails when missing to cleanup composite parts - -Apparently GCS failed just at the moment when uploading and composing part 3179 was done, and wal-g tried to delete the composite parts. - -``` -<13>Dec 9 00:00:01 backup.sh: INFO: 2020/12/09 02:04:56.856506 Finished writing part 3179. -<13>Dec 9 00:00:01 backup.sh: INFO: 2020/12/09 02:04:56.856548 Starting part 3187 ... -<13>Dec 9 00:00:01 backup.sh: INFO: 2020/12/09 02:04:57.665167 Finished writing part 3180. -<13>Dec 9 00:00:01 backup.sh: INFO: 2020/12/09 02:04:58.793366 Starting part 3188 ... -<13>Dec 9 00:00:01 backup.sh: INFO: 2020/12/09 02:05:03.142654 Finished writing part 3181. -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:09.417098 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 0 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:09.431631 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 1 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:09.612694 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 2 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:09.999934 Failed to run a retriable func. 
Err: storage: object doesn't exist, retrying attempt 3 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:10.667517 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 4 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:12.002879 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 5 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:15.451436 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 6 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:20.455462 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 7 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:30.321180 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 8 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:05:52.629282 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 9 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:06:44.111167 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 10 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:08:46.191842 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 11 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:12:35.924007 Failed to run a retriable func. Err: storage: object doesn't exist, retrying attempt 12 -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:14:29.377187 GCS error : Unable to delete temporary chunks: GCS error : Unable to delete a temporary chunk: context deadline exceeded -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:14:29.377220 upload: could not upload 'base_000000040003836F000000A5/tar_partitions/part_3179.tar.br' -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:14:29.377225 GCS error : Unable to delete temporary chunks: GCS error : Unable to delete a temporary chunk: context deadline exceeded -<13>Dec 9 00:00:01 backup.sh: ERROR: 2020/12/09 02:14:29.377288 Unable to continue the backup process because of the loss of a part 3179. -``` - -I think wal-g managed to delete chunks 0-4, then GCS maybe returned a failure for deleting chunk 5 but chunk 5 actually was deleted, and then wal-g re-tried to delete chunk 5 - always getting `object doesn't exist` - until the deadline kicked in. And this made the whole backup fail. 
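If the chunk cleanup treated an already-missing object as success, this retry loop could not escalate into a fatal backup error. A minimal sketch of that idea, assuming the standard Go GCS client (illustrative only, not wal-g's actual code):

```golang
package gcscleanup

import (
	"context"

	"cloud.google.com/go/storage"
)

// deleteChunk removes a temporary chunk and treats an already-deleted object
// as success, so a spurious 404 from GCS cannot fail the whole backup.
func deleteChunk(ctx context.Context, bkt *storage.BucketHandle, name string) error {
	err := bkt.Object(name).Delete(ctx)
	if err == storage.ErrObjectNotExist {
		return nil // the chunk is already gone, which is what cleanup wanted
	}
	return err
}
```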
- -I checked and part 3179 actually is there and also the chunks that couldn't be deleted: - -``` -Henris-MBP:~ hphilipps$ gsutil ls -l gs://gitlab-gprd-postgres-backup/pitr-walg-pg11/basebackups_005/base_000000040003836F000000A5/tar_partitions/part_3179.tar.br - 436441173 2020-12-09T02:04:57Z gs://gitlab-gprd-postgres-backup/pitr-walg-pg11/basebackups_005/base_000000040003836F000000A5/tar_partitions/part_3179.tar.br -TOTAL: 1 objects, 436441173 bytes (416.22 MiB) -Henris-MBP:~ hphilipps$ gsutil ls -l gs://gitlab-gprd-postgres-backup/pitr-walg-pg11/basebackups_005/base_000000040003836F000000A5/tar_partitions/part_3179.tar.br_chunks/ - 52428800 2020-12-09T02:04:52Z gs://gitlab-gprd-postgres-backup/pitr-walg-pg11/basebackups_005/base_000000040003836F000000A5/tar_partitions/part_3179.tar.br_chunks/chunk6 - 52428800 2020-12-09T02:04:56Z gs://gitlab-gprd-postgres-backup/pitr-walg-pg11/basebackups_005/base_000000040003836F000000A5/tar_partitions/part_3179.tar.br_chunks/chunk7 - 17010773 2020-12-09T02:04:57Z gs://gitlab-gprd-postgres-backup/pitr-walg-pg11/basebackups_005/base_000000040003836F000000A5/tar_partitions/part_3179.tar.br_chunks/chunk8 -TOTAL: 3 objects, 121868373 bytes (116.22 MiB) -``` - -**Suggestion:** - -* When cleaning up, we should not fail on `object doesn't exist` errors - just print a warning -* Failing to cleaning up chunks shouldn't be a fatal error stopping the whole backup - -### Backup fails because of serious GCS outage and short GCS context timeout - -The backup was restarted by the EOC, but then failed on multiple parts at the same time - GCS must have had serious problems. Maybe with more than 10m context timeout we would have been able to retry long enough to survive this. So this isn't a fault of wal-g in this case: - -``` -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:32:43.855286 Starting part 2828 ... -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:34:42.924392 Unable to copy an object chunk base_000000040003838C000000F8/tar_partitions/part_2826.tar.br_chunks/chunk1, part 1, err: -googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:34:42.924426 Unable to close object writer base_000000040003838C000000F8/tar_partitions/part_2826.tar.br_chunks/chunk1, part 1, err: g -oogleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:34:42.924432 Failed to run a retriable func. Err: googleapi: got HTTP response code 503 with body: , retrying attempt 0 -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:35:56.501148 Finished writing part 2820. -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:35:56.501253 Starting part 2829 ... -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:35:59.029180 Finished writing part 2822. -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:35:59.029211 Starting part 2830 ... -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:37:00.796524 Finished writing part 2824. -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:37:00.796608 Starting part 2831 ... -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:37:04.137069 Finished writing part 2823. -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:37:04.137107 Starting part 2832 ... 
-<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:39:54.072958 Unable to copy an object chunk base_000000040003838C000000F8/tar_partitions/part_2828.tar.br_chunks/chunk3, part 3, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:39:54.073007 Unable to close object writer base_000000040003838C000000F8/tar_partitions/part_2828.tar.br_chunks/chunk3, part 3, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:39:54.073010 Failed to run a retriable func. Err: googleapi: got HTTP response code 503 with body: , retrying attempt 0 -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:40:44.952144 Unable to copy an object chunk base_000000040003838C000000F8/tar_partitions/part_2830.tar.br_chunks/chunk2, part 2, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:40:44.952176 Unable to close object writer base_000000040003838C000000F8/tar_partitions/part_2830.tar.br_chunks/chunk2, part 2, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:40:44.952185 Failed to run a retriable func. Err: googleapi: got HTTP response code 503 with body: , retrying attempt 0 -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:40:54.395088 Unable to copy an object chunk base_000000040003838C000000F8/tar_partitions/part_2829.tar.br_chunks/chunk2, part 2, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:40:54.395122 Unable to close object writer base_000000040003838C000000F8/tar_partitions/part_2829.tar.br_chunks/chunk2, part 2, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:40:54.395128 Failed to run a retriable func. Err: googleapi: got HTTP response code 503 with body: , retrying attempt 0 -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:41:14.960228 Unable to copy an object chunk base_000000040003838C000000F8/tar_partitions/part_2832.tar.br_chunks/chunk1, part 1, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:41:14.960264 Unable to close object writer base_000000040003838C000000F8/tar_partitions/part_2832.tar.br_chunks/chunk1, part 1, err: googleapi: got HTTP response code 503 with body: -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:41:14.960268 Failed to run a retriable func. Err: googleapi: got HTTP response code 503 with body: , retrying attempt 0 -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:42:12.277519 Finished writing part 2825. -<13>Dec 9 02:39:12 backup.sh: INFO: 2020/12/09 04:42:12.277555 Starting part 2833 ... -<13>Dec 9 02:39:12 backup.sh: WARNING: 2020/12/09 04:42:37.292358 Unable to close object writer base_000000040003838C000000F8/tar_partitions/part_2825.tar.br_chunks/chunk6, part 6, err: Post https://storage.googleapis.com/upload/storage/v1/b/gitlab-gprd-postgres-backup/o?alt=json&name=pitr-walg-pg11%2Fbasebackups_005%2Fbase_000000040003838C000000F8%2Ftar_partitions%2Fpart_2825.tar.br_chunks%2Fchunk6&prettyPrint=false&projection=full&uploadType=multipart: context deadline exceeded -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:42:37.293047 Failed to run a retriable func. 
Err: context deadline exceeded, retrying attempt 0 -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:42:37.322158 GCS error : Unable to compose object: context deadline exceeded -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:42:37.322187 upload: could not upload 'base_000000040003838C000000F8/tar_partitions/part_2825.tar.br' -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:42:37.322191 GCS error : Unable to compose object: context deadline exceeded -<13>Dec 9 02:39:12 backup.sh: ERROR: 2020/12/09 04:42:37.322199 Unable to continue the backup process because of the loss of a part 2825. -``` - -### Backup fails when closing writer of a chunk fails? - -This one isn't fully clear to me, as the logs don't contain information which retriable func (upload, compose, cleanup) failed. But my assumption is this: - -Closing the writer for chunk 6 of part 6628 failed - but we don't seem to retry when closing the writer failed on upload, thus chunk 6 does not exist and when trying to compose, we get 404 errors until we timeout: - -``` -13>Dec 9 05:43:15 backup.sh: INFO: 2020/12/09 10:19:03.258079 Finished writing part 6634. -<13>Dec 9 05:43:15 backup.sh: WARNING: 2020/12/09 10:19:27.668491 Unable to close object writer base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6, part 6, err: g -oogleapi: got HTTP response code 503 with body: -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:27.792494 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 0 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:27.938009 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 1 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:28.128904 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 2 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:28.508147 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 3 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:29.531231 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 4 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:31.349780 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 5 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:35.075894 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 6 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:42.193005 Failed to run a retriable func. 
Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 7 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:19:51.924669 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 8 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:20:15.367526 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 9 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:21:17.551962 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 10 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:23:07.873625 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 11 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:25:30.905058 Failed to run a retriable func. Err: googleapi: Error 404: Object pitr-walg-pg11/basebackups_005/base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br_chunks/chunk6 (generation: 0) not found., notFound, retrying attempt 12 -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:28:03.582246 GCS error : Unable to compose object: context deadline exceeded -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:28:03.582279 upload: could not upload 'base_00000004000383BC0000006B/tar_partitions/part_6628.tar.br' -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:28:03.582284 GCS error : Unable to compose object: context deadline exceeded -<13>Dec 9 05:43:15 backup.sh: ERROR: 2020/12/09 10:28:03.582290 Unable to continue the backup process because of the loss of a part 6628. -``` - -**Suggestion:** - -* Make sure we retry when failing to close the writer on uploads. Maybe overriding the `err` return var in the defer func doesn't work as expected? https://github.com/wal-g/storages/blob/master/gcs/uploader.go#L75 - -/cc @NikolayS",8.0 -75788120,2020-12-08 20:56:31.730,Fuzzit.dev redirect,"The Secure sub-dept is completing the acquisition of fuzzit.dev and the last step is to redirect fuzzit's website to GitLab. My question is, does Infrastructure handle the redirection, if say, we want to redirect to a specific page? Please provide guidance on how to move forward. 
- -cc/ @dawsmith @david @tstadelhofer",1.0 -13305380,2018-05-01 21:04:38.048,pgbouncer in GPRD has an invalid user in SHOW POOLS,"While testing the pgbouncer exporter against GPRD, I noticed it was failing hard here: - -```golang -column name is user, data is ˇˇˇˇC -panic: label value ""\xff\xff\xff\xffC"" is not valid UTF-8 - -goroutine 19 [running]: -github.com/prometheus/client_golang/prometheus.MustNewConstMetric(0xc420147500, 0x2, 0x0, 0xc420180340, 0x2, 0x2, 0x0, 0x0) - /home/stanhu/go/src/github.com/prometheus/client_golang/prometheus/value.go:98 +0xb0 -main.queryNamespaceMapping(0xc42012c600, 0xc420110c80, 0x8f7bd9, 0x5, 0xc42011eed0, 0x0, 0x0, 0x0, 0x0, 0x0) - /home/stanhu/go/src/github.com/larseen/pgbouncer_exporter/collector.go:146 +0xa51 -main.queryNamespaceMappings(0xc42012c600, 0xc420110c80, 0xc42011eba0, 0xc420116e60) - /home/stanhu/go/src/github.com/larseen/pgbouncer_exporter/collector.go:229 +0x213 -main.(*Exporter).scrape(0xc42015c900, 0xc42012c600) - /home/stanhu/go/src/github.com/larseen/pgbouncer_exporter/collector.go:295 +0x18c -main.(*Exporter).Collect(0xc42015c900, 0xc42012c600) - /home/stanhu/go/src/github.com/larseen/pgbouncer_exporter/collector.go:276 +0x3c -main.(*Exporter).Describe(0xc42015c900, 0xc42012c5a0) - /home/stanhu/go/src/github.com/larseen/pgbouncer_exporter/collector.go:269 +0xb0 -github.com/prometheus/client_golang/prometheus.(*Registry).Register.func1(0xb09b80, 0xc42015c900, 0xc42012c5a0) - /home/stanhu/go/src/github.com/prometheus/client_golang/prometheus/registry.go:250 +0x3b -created by github.com/prometheus/client_golang/prometheus.(*Registry).Register -``` - -If you look at `SHOW POOLS`, you see the `C` under `gitlabhq_production_sidekiq`. I haven't checked production yet, but I wonder where this garbage comes from: - -```sql -root@pgbouncer-01-db-gprd.c.gitlab-production.internal:/tmp# sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /tmp -p 6432 -U pgbouncer -d pgbouncer -psql (9.6.5, server 1.7.2/bouncer) -Type ""help"" for help. - -pgbouncer=# show pools; - database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | pool_mode ------------------------------+----------------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------- - gitlabhq_production | chatops | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | transaction - gitlabhq_production | gitlab | 274 | 0 | 0 | 4 | 1 | 0 | 0 | 0 | transaction - gitlabhq_production | gitlab_geo_fdw | 21 | 0 | 7 | 6 | 4 | 0 | 0 | 0 | transaction - gitlabhq_production | pgbouncer | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | transaction - gitlabhq_production_sidekiq | gitlab | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | transaction - gitlabhq_production_sidekiq | pgbouncer | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | transaction - gitlabhq_production_sidekiq | C | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | transaction - pgbouncer | pgbouncer | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | statement -(8 rows) -```",1.0 -13305236,2018-04-27 16:11:37.403,Migrate Artifacts from S3 to GCS,"We need to migrate artifact object storage from S3 to GCS. This has the added benefit of allowing `direct_upload` of artifacts. Much of the work of this has already been done. We sync the GCS bucket with S3 daily so everything is kept up do date. 
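For reference, the daily sync (and the final catch-up pass listed below) is essentially a bucket-to-bucket rsync along these lines (a sketch; the bucket names are placeholders, and it assumes gsutil can read S3 credentials from its boto config):

```
gsutil -m rsync -r s3://example-artifacts-bucket gs://example-artifacts-bucket
```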
The process for completing the migration should be as follows: - -- [ ] Disable `direct_upload` of artifacts -- [ ] Run a final sync between S3 and GCS of the artifacts bucket -- [ ] Change storage location in `gitlab.rb` to point to GCS -- [ ] Verify artifacts still work -- [ ] Enable `direct_upload` of artifacts once more -- [ ] migrate the artifacts that landed on disk to object storage - -cc/ @ayufan @jarv @dosuken123",3.0 -13304901,2018-04-27 11:41:14.532,Migrate Sentry to GCP,"Once the primary failover is complete, migrate Sentry from Azure to GCP.",3.0 -13305333,2018-04-25 07:52:10.853,Create a canary in gprd/gstg,"We don't have a canary environment in gprd yet and we probably should. I think it would be nice to get this done pre failover if possible but for now it will be put in the post-failover milestone. - -Here is a summary of the current proposal: - -- Do not use a new chef environment, use the existing gprd/gstg environments -- Have the option to create a gstg cny so we can test configuration changes on gstg before gprd -- The cny names will be `{web,api,git}-cny-##-sv-{gprd,gstg}` -- First iteration will just have a single web node, this means we will have the following two new vms - - web-cny-01-sv-gstg.c.gitlab-staging-1.internal - - web-cny-01-sv-gprd.c.gitlab-production.internal -- We should run migrations for the canary deploy -- To identify them in prometheus we will use a new tag stage=canary -- Will need to make some takeoff adjustments since canary is no longer its own environment -- Once we have the canary accepting traffic with a cookie set in the browser, we will add weights to the haproxy configuration so that a small percentage of production traffic will go to the canary - -For the work here is what will need to be done: - -- [x] Create new shared terraform config for canary -- [x] Create new chef configuration for the canary roles (gstg, then gprd) -- [x] Modify takeoff so that we can deploy to canary -- [x] Create canary specific dashboards -- [ ] Create canary specific alerts Will track this separately, when we start moving some prod traffic to the VM. -- [ ] Update the gitlab-haproxy config so that some production traffic is directed to the canary (this doesn't need to be in the first iteration) We will track this separately. - - - -",4.0 -13305386,2018-04-24 16:43:37.664,monitoring and metric aggregation for the CI/CD infrastructure in GCP,"There might be some additional work to bring in the prometheus servers that the CI/CD team uses so that we are at parity to what was working in Azure. - -In general, it would be better if we could put prometheus servers behind IAP like the current ones and also add them to the prd chef environment. - - -/cc @tmaczukin",4.0 -13305349,2018-04-20 08:59:09.208,Update general information about gitlab.com configuration,"This page https://docs.gitlab.com/ee/user/gitlab_com/ appears to be the place where we document general information about gitlab.com settings. - -- [ ] Update gitlab.com settings -- [ ] Update environments documentation on about.gitlab.com",1.0 -13305238,2018-04-12 10:22:36.550,Add config for a sentry node in gstg,gprd doesn't seem to have such config.,2.0 -13305171,2018-04-09 18:26:44.837,Drop InfluxDB entirely,"This issue is to capture the effort to remove InfluxDB from our infrastructure once our Prometheus metrics are good enough, and obviously post GCP-migration. - -More conversation is here: https://gitlab.com/gitlab-com/migration/issues/145#note_55954264 - -Check the tickbox if you're satisfied.... 
- -- [ ] **CI Runners**: https://dashboards.gitlab.net/dashboards/f/bpcbFeIiz/ci-runners-service cc @tmaczukin -- [ ] **GCP Migration Project** https://dashboards.gitlab.net/dashboards/f/vtC8ceSmk/gcp-migration-project cc @andrewn -- [ ] **Gitaly** https://dashboards.gitlab.net/dashboards/f/SRXyrrSmk/gitaly-service cc @tommy.morgan -- [ ] **GitLab Rails** https://dashboards.gitlab.net/dashboards/f/KuFJt6Iiz/gitlab-rails-service cc @smcgivern -- [ ] **HA Proxy** https://dashboards.gitlab.net/dashboards/f/N9YSt6Siz/haproxy-service cc @northrup -- [ ] **Operations** https://dashboards.gitlab.net/dashboards/f/KN7OgCSiz/operations cc @dawsmith -- [x] **Pages** https://dashboards.gitlab.net/dashboards/f/v2ZhpeSik/pages-service cc @nick.thomas -- [x] **Postgres** https://dashboards.gitlab.net/dashboards/f/u3xKRjIiz/postgres-service cc @yorickpeterse -- [ ] **Prometheus** https://dashboards.gitlab.net/dashboards/f/5dsvpeImz/prometheus-service cc @bjk-gitlab -- [ ] **Redis** https://dashboards.gitlab.net/dashboards/f/D5R0peIik/redis-service cc @jarv -- [x] **Workhorse** https://dashboards.gitlab.net/dashboards/f/_OKcteIiz/workhorse-service cc @jacobvosmaer-gitlab - -FYI, several other folders exist: - -- **Broken** https://dashboards.gitlab.net/dashboards/f/cYFN9rSmz/broken Dashboards that rely on Influxdb and are not currently working - -- **Once-off** https://dashboards.gitlab.net/dashboards/f/kGgx9rIiz/once-off Once off dashboards - -- ~~**Archives** https://dashboards.gitlab.net/dashboards/f/phfSe3Iiz/archived @northrup and @toon's manually migrated dashboards (first attempt)~~ (cleaned and deleted) - - -/cc @sytses @jarv @andrewn @jtevnan @northrup @ilyaf @ahanselka",4.0 -13305343,2018-04-04 16:58:58.147,Lower the size of the besteffort nodes,The current `besteffort` sidekiq nodes on GPRD are absolutely massive. They can and should be scaled down pretty substantially.,1.0 -13305399,2018-03-24 21:20:24.442,fluentd parse error on nfs-09 for gitaly logs,"Maybe on other servers too, wi'll look into it on monday. 
- -``` -2018-03-24 21:16:54 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error=""parse failed Empty input at line 1, column 1 [parse.c:926] in 'SCHED 214332569ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=220 runqueue=0 [0 0 0 0 0 0 0 0]"" location=""/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/filter_parser.rb:107:in `rescue in filter_with_time'"" tag=""gitaly"" time=# record={""ident""=>""gitaly"", ""message""=>""SCHED 214332569ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=220 runqueue=0 [0 0 0 0 0 0 0 0]""} -2018-03-24 21:16:55 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error=""parse failed Empty input at line 1, column 1 [parse.c:926] in 'SCHED 214333575ms: gomaxprocs=8 idleprocs=7 threads=236 spinningthreads=0 idlethreads=220 runqueue=0 [0 0 0 0 0 0 0 0]"" location=""/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/filter_parser.rb:107:in `rescue in filter_with_time'"" tag=""gitaly"" time=# record={""ident""=>""gitaly"", ""message""=>""SCHED 214333575ms: gomaxprocs=8 idleprocs=7 threads=236 spinningthreads=0 idlethreads=220 runqueue=0 [0 0 0 0 0 0 0 0]""} -2018-03-24 21:16:56 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error=""parse failed Empty input at line 1, column 1 [parse.c:926] in 'SCHED 214334576ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=222 runqueue=0 [0 0 0 0 0 0 0 0]"" location=""/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/filter_parser.rb:107:in `rescue in filter_with_time'"" tag=""gitaly"" time=# record={""ident""=>""gitaly"", ""message""=>""SCHED 214334576ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=222 runqueue=0 [0 0 0 0 0 0 0 0]""} -2018-03-24 21:16:57 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error=""parse failed Empty input at line 1, column 1 [parse.c:926] in 'SCHED 214335582ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=220 runqueue=0 [0 0 0 0 0 0 0 0]"" location=""/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/filter_parser.rb:107:in `rescue in filter_with_time'"" tag=""gitaly"" time=# record={""ident""=>""gitaly"", ""message""=>""SCHED 214335582ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=220 runqueue=0 [0 0 0 0 0 0 0 0]""} -2018-03-24 21:16:58 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error=""parse failed Empty input at line 1, column 1 [parse.c:926] in 'SCHED 214336583ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=222 runqueue=0 [0 0 0 0 0 0 0 0]"" location=""/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/filter_parser.rb:107:in `rescue in filter_with_time'"" tag=""gitaly"" time=# record={""ident""=>""gitaly"", ""message""=>""SCHED 214336583ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=222 runqueue=0 [0 0 0 0 0 0 0 0]""} -2018-03-24 21:16:59 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error=""parse failed Empty input at line 1, column 1 [parse.c:926] in 'SCHED 214337584ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=222 runqueue=0 [0 0 0 0 0 0 0 0]"" 
location=""/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.0.2/lib/fluent/plugin/filter_parser.rb:107:in `rescue in filter_with_time'"" tag=""gitaly"" time=# record={""ident""=>""gitaly"", ""message""=>""SCHED 214337584ms: gomaxprocs=8 idleprocs=8 threads=236 spinningthreads=0 idlethreads=222 runqueue=0 [0 0 0 0 0 0 0 0]""} -``` - -/cc @gl-infra - -For now, I'll just clean up the tf-agent logs. Longterm, we need visibility and alerting into fluentd state which we never have time to do :disappointed: - -``` -$> be knife ssh 'roles:gitlab-base-stor-nfs' 'sudo du -sh /var/log/td-agent' | sort -nfs-file-01.stor.gitlab.com 39M /var/log/td-agent -nfs-file-02.stor.gitlab.com 19M /var/log/td-agent -nfs-file-03.stor.gitlab.com 36M /var/log/td-agent -nfs-file-04.stor.gitlab.com 20M /var/log/td-agent -nfs-file-05.stor.gitlab.com 41M /var/log/td-agent -nfs-file-06.stor.gitlab.com 49M /var/log/td-agent -nfs-file-07.stor.gitlab.com 48M /var/log/td-agent -nfs-file-08.stor.gitlab.com 59M /var/log/td-agent -nfs-file-09.stor.gitlab.com 4.8G /var/log/td-agent -nfs-file-10.stor.gitlab.com 3.9G /var/log/td-agent -nfs-file-11.stor.gitlab.com 210M /var/log/td-agent -nfs-file-12.stor.gitlab.com 198M /var/log/td-agent -nfs-file-13.stor.gitlab.com 56M /var/log/td-agent -nfs-file-14.stor.gitlab.com 125M /var/log/td-agent -nfs-file-15.stor.gitlab.com 48M /var/log/td-agent -nfs-file-16.stor.gitlab.com 317M /var/log/td-agent -```",4.0 -9846487,2018-03-21 14:08:39.691,Should we serve about.gitlab.com from an object storage?,@jarv raised this concern in https://gitlab.com/gitlab-com/infrastructure/issues/3873#note_64028076 since we don't have a redundancy for the www-gitlab-com node.,5.0 -13305409,2018-03-03 01:23:54.569,create prometheus exporters for elastic cloud,"currently, the only metrics we have is 6 graphs in the admin panel, and whatever one can query directly. - -We should setup two dedicated (for HA) nodes with prometheus elasticsearch exporters which point to elastic cloud, and scrape them from our monitoring. - -This is potentially big issue to be split in several.",1.0 -13305417,2018-03-03 01:19:50.814,Create pipelines for backing up and restoring kibana settings,"Currently, we only have password based administrative access to kibana and ES cloud cluster. We need to create backup/restore procedures for kibana configuration before we add a lot to it and lose it due to some mistake.",1.0 -13305422,2018-03-03 01:17:43.202,Write documentation and runbooks for elastic cloud setup.,"Currently, apart from `gitlab_fluentd` cookbook there's no documentation about elastic cloud setup. This issue is to track this.",1.0 -13305433,2018-03-03 01:16:40.983,Migrate existing elastalerts to new elastic cloud cluster,"As we move from our current setup to elastic cloud, we should move existing integrations to new infrastructure.",2.0 -13305425,2018-03-03 01:15:39.833,Stop current logstash pipeline and tear down existing elastic cluster,"This issue is to track the cleanup progress. 
- - [ ] stop current pipeline - - [ ] tear down current infrastructure",2.0 -13305222,2018-02-09 16:17:16.012,Migrating Non-GitLab.com Production Servers,"We have a handful of servers, notably: - -* about.gitlab.com -* version.gitlab.com -* customers.gitlab.com -* license.gitlab.com - -that are required for the production usage of GitLab but we have not addressed moving them or standing up replicas in GCP yet.",4.0 -13305164,2018-02-09 10:37:44.045,Create a runbook and test the manual process for updating the staging database from production,"We will need to periodically update the staging database with pseudoanonymization of customer data. - -Once this complete, the next step will be to automate the process.",4.0 -13304990,2018-02-09 10:35:18.002,Copy backup configuration from production,All of our backups for customer data in production should be duplicated in staging.,1.0 -14000565,2017-12-21 16:31:17.789,[META] Structured (JSON) logging for PostgreSQL,"Structured logging would make it easier to ingest and search through logs. In particular this could be useful for searching for slow queries that may spread across multiple lines. We should investigate the options here and if deemed possible/useful enough implement it. - -There are several steps to investigating and deploying this change: - -* [x] Build the jsonlog module and manually install it on staging -* [x] Decide on the destination path for the logs -* [x] Evaluate the log output and fix any deficencies - * [x] https://github.com/michaelpq/pg_plugins/pull/18 - * [x] https://github.com/michaelpq/pg_plugins/pull/19 - * [x] https://github.com/michaelpq/pg_plugins/pull/20 - * [x] https://github.com/michaelpq/pg_plugins/pull/21 -* [x] Configure ELK stack to ingest these logs - * [x] https://gitlab.com/gitlab-cookbooks/gitlab_fluentd/merge_requests/5#note_61734780 - * [ ] Ensure these logs are rotated and cleaned up after some retention period - * View -* [ ] Integrate jsonlog into omnibus - * https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/2332#note_62951392 -* [ ] Wait for omnibus with jsonlog to be deployed to production -* [ ] - - -Unanswered questions: -* [ ] Can we still use mtail or is there an alternate way to get metrics from logs?",4.0 -7367111,2017-10-27 00:14:52.213,[meta] New Geo Testbed,"## Updated plan with smaller issues - -Azure - geo1.gitlab.com: - -- [x] ~1d Create terraform configuration for Azure - https://gitlab.com/gitlab-com/infrastructure/issues/3090 -- [x] ~2d Create new chef environment for Geo testbed - https://gitlab.com/gitlab-com/infrastructure/issues/3091 -- [x] ~2d Setup NFS storage for the geo testbed in Azure (copy of nfs shards) - https://gitlab.com/gitlab-com/infrastructure/issues/3092 -- [x] ~1d Setup Uploads/Attachments for the geo testbed in Azure - https://gitlab.com/gitlab-com/infrastructure/issues/3093 -- [x] ~1d Setup Database that is pseudo-anonimized and only contains projects for the nfs shards -https://gitlab.com/gitlab-com/infrastructure/issues/3094 -- [x] ~1d Setup monitoring and alarming for the Azure testbed - https://gitlab.com/gitlab-com/infrastructure/issues/3095 -- [x] ~1d Setup automatic deploys for the Azure testbed - https://gitlab.com/gitlab-com/infrastructure/issues/3096 - -Azure - geo2.gitlab.com: - -- [x] ~1d Create terraform configuration for Azure - #3203 -- [x] ~2d Create new chef environment for Geo testbed - #3204 -- [x] ~2d Setup NFS storage - #3205 -- [x] ~1d Setup Uploads/Attachments - #3206 -- [x] ~1d Setup Database with disk config as geo1 - #3207 -- 
[x] ~1d Setup monitoring and alarming - #3208 -- [x] ~1d Setup automatic deploys - #3209 - - -## Original issue - -Per discussions a few days ago, we'd like to have a new Geo testbed (or upgrade the existing one) so that it has GitLab running in HA mode on both the primary and secondary, with at least two web workers on each side, and load balancer in front, to make sure that none of the Geo code / functionality bugs out in an HA setup. -",5.0 -6618867,2017-08-30 17:08:35.800,Set up alert on HTTP Queue Timing,"Carry over from https://gitlab.com/gitlab-com/infrastructure/issues/2379 - -While we anticipate better ways of measuring web request response timing with more integration of Prometheus, we should not wait on that to set up an alert on HTTP Queue timings. Once those timings go above an SLO that I am about to propose in this issue, the runbook that ties to the alert should describe how to add more unicorn hosts, as a first step. Obviously there can be other reasons for high HTTP Queue Timings, but this is a start. - -- Proposed SLO: HTTP Queue Timing p99 < 15 ms. -- Proposed alert: Alert when p99 has been > 15 ms for more than 30 minutes. - -https://performance.gitlab.net/dashboard/db/transaction-overview?panelId=13&fullscreen&orgId=1",5.0 -6091522,2017-07-14 12:24:59.497,Sync `jws_private_key` between all app servers,"Zendesk: https://gitlab.zendesk.com/agent/tickets/79645 - -The person that contributed the OpenID Connect feature reported that GitLab.com is returning a different key for each request, presumably because the app servers don't have the private key synced. We should sync `jws_private_key` in `gitlab-secrets.json` across all nodes. - -See https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/1222/diffs for where this was implemented.",2.0 -5664260,2017-06-09 16:09:20.371,Create a Terraform gym,"As a follow-up of #1715 it'd be great if we can create a Terraform gym to fiddle with the tool in an isolated environment. This to allow team members to learn and experiment in complete freedom on the same platform as production without the risk of hurting GitLab.com. - -To do this we should create a separate Azure account (https://gitlab.com/gitlab-com/infrastructure/issues/1906) and decide if we want to limit the exercises to pure infrastructure or if we can spend some time and have a Chef server set up for this. The latter would allow full coverage of the way we provision servers at the moment. Discussion about this from [this comment](https://gitlab.com/gitlab-com/infrastructure/issues/1715#note_31536847). Let's continue it here. - -/cc @pcarranza @ernstvn @northrup @jtevnan @stanhu",5.0 -5424965,2017-05-17 18:12:26.398,Automate the staging environment creation,"Following a call with @ernstvn and @andrewn we agreed that we need a way to create staging environments for testing new features. This is meant to be an intermediate step towards #1504 which remains the ultimate long term goal. - -The way we can achieve this in the short to mid term is to leverage on Terraform for the creation of the nodes. This is already part of an existing effort to better align staging to the production environment and the configuration [is already coded](https://gitlab.com/gitlab-com/gitlab-com-infrastructure/tree/master/staging). - -To make it scalable to a number of different staging environments we will need to define a schema for the chef roles, like `staging--*`. 
- -@andrewn was also asking about how we would deal with the monitoring, since it's a fundamental component to gather data about the changes. Would we use a global influxdb/prometheus server? Should we use one for all staging environments? Should we create one for each? - -Last but not least, the database. We'll probably need to use a copy of the staging data set. If we use the same staging database then the schema will get stale real quick. Also, this could be a good opportunity to test migrations in a controlled way. @northrup has a plan for that. - -/cc @gl-infra @bjk-gitlab",5.0 -4737052,2017-03-16 14:38:01.545,Custom robots recipe applies multiple lines,"The [gitlab-server::custom-robots](https://gitlab.com/gitlab-cookbooks/gitlab-server/merge_requests/46/diffs) recipe seems to apply the `Disallow: /snippets/*` line multiple times to `robots.txt` on production. - -**Symptoms** - -``` -# tail /opt/gitlab/embedded/service/gitlab-rails/public/robots.txt -Disallow: /snippets/* -Disallow: /snippets/* -Disallow: /snippets/* -Disallow: /snippets/* -Disallow: /snippets/* -Disallow: /snippets/* -Disallow: /snippets/* -```` - -__________________ - -The issue is probably due to the regex - -``` -robots.insert_line_if_no_match(""/Disallow: \/snippets\/\*/"", ""Disallow: /snippets/*"") -```",1.0 -3313168,2016-10-02 01:34:40.896,Enable IPv6 for GitLab.com (redux),https://gitlab.com/gitlab-com/operations/issues/43 was closed last year because Azure did not offer IPv6 support; now it does. Please revisit the issue.,21.0 -2472686,2016-04-07 14:11:35.784,Redirects on blue-moon and status.gitlab.com,"We need to create a new node which will be chef controlled and which will hold all the redirects to the domain. -Old domains still point to the blue-moon IP address and this redirects to the current website. -Blue-moon also holds status site. - -To discontinue blue-moon, we need to: - -- [ ] Move status to a separate node -- [ ] Provision a new node to host all the redirects -- [ ] update the IP for all domains listed in `/etc/nginx/sites-available/www.gitlab.com` -- [ ] Make sure that the redirects in `blog_rewrite_rules.rules` and `org_to_com_rewrite_rules.rules` are still working - -cc @jnijhof @pcarranza ",5.0 -2472662,2016-03-17 13:44:48.939,Send operations notifications email only on failure,"(Migrated from chef-repo/issues/366): - -For a long time I have been reading Cron emails etc. from the operations notifications email and creating issues about things that looked interesting/important to me. Because I am moving out of ops I decided that as of now I am stopping that; I will not read any ops-notifications email anymore.",3.0 -15802836,2018-11-13 17:47:10.921,Gitter SSH proxy bastion fails to connect to many boxes when running Ansible command,"Gitter SSH proxy bastion fails to connect to many boxes when running Ansible command. It especially happens with the `prod` inventory that has lots of boxes to connect to. 
- -Example Ansible command where we add SSH keys: https://gitlab.com/gitlab-com/gl-infra/gitter-infrastructure#ssh-to-boxes - -Example: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5496 - -Another example when I was trying to add SSH keys (look for `unreachable=1`), -``` -PLAY RECAP ************************************************************************************************************************** -apps-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -apps-02.prod.gitter : ok=2 changed=1 unreachable=1 failed=0 -bastion-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -cube-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -cube-02.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -cube-arbiter.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -es-01.prod.gitter : ok=1 changed=0 unreachable=1 failed=0 -es-02.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -logging-01.prod.gitter : ok=2 changed=1 unreachable=1 failed=0 -logging-02.prod.gitter : ok=3 changed=1 unreachable=1 failed=0 -master-01.prod.gitter : ok=0 changed=0 unreachable=1 failed=0 -master-02.prod.gitter : ok=0 changed=0 unreachable=1 failed=0 -master-03.prod.gitter : ok=0 changed=0 unreachable=1 failed=0 -mongo-replica-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -mongo-replica-02.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -mongo-replica-03.prod.gitter : ok=2 changed=1 unreachable=1 failed=0 -mongo-replica-arbiter.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -neo4j-001.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -redis-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -redis-02.prod.gitter : ok=3 changed=1 unreachable=1 failed=0 -sentinel-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -sentinel-02.prod.gitter : ok=1 changed=0 unreachable=1 failed=0 -sentinel-03.prod.gitter : ok=1 changed=0 unreachable=1 failed=0 -typeahead-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -typeahead-02.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -webapp-01.prod.gitter : ok=3 changed=1 unreachable=1 failed=0 -webapp-02.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -webapp-03.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -webapp-04.prod.gitter : ok=1 changed=0 unreachable=1 failed=0 -webapp-05.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -webapp-06.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -webapp-07.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -webapp-08.prod.gitter : ok=3 changed=1 unreachable=1 failed=0 -ws-01.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -ws-02.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -ws-03.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -ws-04.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -ws-05.prod.gitter : ok=3 changed=1 unreachable=1 failed=0 -ws-06.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -ws-07.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -ws-08.prod.gitter : ok=4 changed=1 unreachable=0 failed=0 -``` - ---- - -cc @andrewn",1.0 -15802770,2018-11-13 17:42:32.645,Add more runner capacity for ops.gitlab.net,"We currently have three runners -* one dedicated for release -* one dedicated for chatops -* one for everything else. - - -The one for everything else I think would be nice if we could switch to kubernetes. -We should probably create a dedicated runner for traffic generation -The runner cookbook isn't designed for how we are using it now with multiple runners so we may want to make some minor updates to it. 
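For illustration, each dedicated runner is ultimately just one more registration with its own token and tags, roughly like this (a sketch; the tag and token are placeholders, and in practice the cookbook would drive this rather than a manual call):

```
gitlab-runner register \
  --non-interactive \
  --url https://ops.gitlab.net/ \
  --registration-token REDACTED \
  --executor shell \
  --tag-list traffic-generation
```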
Ideally we can scale each runner type out horizontally with different tokens.",2.0 -15797526,2018-11-13 14:52:45.454,Rake task rake update_vault_admins does not appear to work,"In the setup of new SRE's we need to execute the rake task `rake update_vault_admins` to update the vaults such that the user has their new access to our vaults. This does not appear to be working... - -The vault is not updated with the new users. A very quick look doesn't really indicate to me why this task would not work.",1.0 -15795660,2018-11-13 14:19:31.153,META: Move backup CICD jobs needed for operations to ops.gitlab.net and switch to pushgateway for monitoring,"We will probably want to split this out into the following smaller tasks: - -- [x] Move https://gitlab.com/gitlab-restore to `gitlab-com/gl-infra/` first. I'm not sure why they are under this `gitlab-restore` group but we should keep everything under `gitlab-com` so it was a mistake for sure. -- [x] Mirror the repositories in `gitlab-restore` over to ops.gitlab.net with a push rule on the projects under `esc-tools`. -- [x] Move the CICD configuration from gitlab.com to ops.gitlab.net. This means we will use gitlab.com as the source for code but CICD will run on ops. -- [ ] Replace deadmansnitch with pushgateway for notifications and alerts -> https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5167",2.0 -15784725,2018-11-13 09:07:08.945,Update the status page from chatops,"It was noticed on https://gitlab.com/gitlab-com/gl-infra/production/issues/564 that we did not update our status page. One of the reasons for this is that it is a job currently done manually by the MOC and is sometimes overlooked. - -This needs to be easier, all of our status updates that are posted via twitter should go through the status page first as it is the most likely place for the wider community to check for what is going on.",5.0 -15765309,2018-11-12 16:18:38.297,Assign privileges to admin account hphilipps_admin,"According to my infrastructure onboarding issue #5447, please provide the needed privileges for my gitlab.com admin account (https://gitlab.com/hphilipps_admin).",1.0 -15759359,2018-11-12 13:39:30.035,Access To Sidekiq Rails Logs For Staging,"I am trying to read `Rails.logger.error` logs from staging sidekiq worker jobs. I have an example log in https://gitlab.com/gitlab-org/gitlab-ce/blob/master/app/services/clusters/applications/install_service.rb#L16 which I'm sure is being triggered on staging but no such log is appearing in Kibana. I've tried searching `""Kubernetes error""` in `pubsub-sidekiq-inf-gstg*` but nothing shows up for the last 7 days and I've triggered it many times. - -If we aren't forwarding these logs to kibana then I need access to some other system that has these logs as they are necessary for me to troubleshoot issues in staging. - -I have yet to be able to determine if these logs are available in kibana for production either as we haven't deployed that code to production yet but assuming they aren't available in production kibana then I'll also need access to the production logs so I can read them there too.",2.0 -15755845,2018-11-12 13:00:30.827,Rollout Prometheus 2.5,"Prometheus [2.5.0 is out](https://github.com/prometheus/prometheus/releases/tag/v2.5.0). Now includes a `--query.max-samples` flag to limit how much memory a query can use. - -* [x] Decide on a max-samples limit. -* [x] Rollout to staging. -* [x] Rollout to ops. 
-* [x] Rollout as default.",2.0 -15752769,2018-11-12 11:32:47.497,Log DDL statements with txid,"In order to recover from a `DROP TABLE` or otherwise DDL-related incident without data-loss, we employ PITR to replay until right before the incident happened. Knowing the txid of the incidental transaction enables us to precisely replay until the last transaction before the incident. Without knowing the txid, one would have to replay until a certain point in time which is loss-inducing (if too early) or rendering the restore unusable (if too late). - -So the proposal here is to: - -* log DDL statements (`log_statements = 'ddl'`) -* make sure the log prefix contains the transaction id",1.0 -15711421,2018-11-09 20:52:37.330,Change label title,"Because we strive not to use violent analogies in our culture and language, we need to update the label ""outage:postmortem"" as per https://gitlab.com/gitlab-com/www-gitlab-com/issues/2821. I'd recommend we change it to ""**outage:rootcause**"" or ""**outage:analysis**"" for the sake of brevity - -I found the label [here](https://gitlab.com/gitlab-com/gl-infra/infrastructure/labels?utf8=%E2%9C%93&subscribed=&search=postmortem), but I don't believe I have the permissions necessary to edit it.",1.0 -15707301,2018-11-09 16:46:41.614,Add SSH key for Amar and Skarbek to Gitter hosts,"@aamarsanaa and @skarbek added their SSH keys in gitlab-com/gl-infra/gitter-infrastructure!71 & gitlab-com/gl-infra/gitter-infrastructure!72, and need a current admin to run ansible to distribute across the fleet.",3.0 -15745749,2018-11-09 16:14:40.284,Ingest structured audit logs into ELK,"With https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22471 we now have structured audit logs coming from rails :tada: (thanks @stanhu!) - -These logs are even flowing into stackdriver. - -We should also ingest them in ELK. - -Once we have them, we'll be able to use them for a whole lot of abuse detection activities in the way we do for Gitaly at present. - -cc @jarv - -cc @dawsmith and @Finotto for scheduling",1.0 -15646862,2018-11-08 00:36:09.336,Slack's chatops run explain access for groulot,"* Access for @groulot to explain in gitlab.com database -* ref: https://gitlab.com/gitlab-com/access-requests/issues/46",1.0 -15634801,2018-11-07 14:58:36.847,Nov SRE hiring questionnaire reviews Nov 7 - 17,"Bucket issue for the milestone to track questionnaire reviews. - -* Nov 7 - we have 33 questionnaires to be reviewed. -* Nov 17 - we have 13 questionnaires to be reviewed. - -",13.0 -15613204,2018-11-06 17:48:44.796,Add alert for increase in active sessions,"as a ~""corrective action"" for https://gitlab.com/gitlab-com/gl-infra/production/issues/553 we should add an alert for a large increase in active sessions. - - -https://dashboards.gitlab.net/d/wyBYDjnmk/gitlab-workhorse?panelId=8&fullscreen&orgId=1&from=1541446106294&to=1541526372156 - -![Screen_Shot_2018-11-06_at_6.46.53_PM](/uploads/7e141f35d147f77bbbcd780aa7b7025e/Screen_Shot_2018-11-06_at_6.46.53_PM.png)",2.0 -15612110,2018-11-06 17:39:47.852,503 errors when using the Web Editor on staging,"This could be related to 11.5.0 RC1, but we're not sure yet. To reproduce: - -1. Find a repo on staging.gitlab.com -2. Click on the README.md or some file in the repository view -3. Click ""Edit File"" - -I get 503s all the time. 
- -The `production_json.log` shows: - -```json -{""method"":""PUT"",""path"":""/stanhu/rouge-test/update/master/LICENSE"",""format"":""html"",""controller"":""Projects::BlobController"",""action"":""update"",""status"":503,""duration"":55312.19,""view"":17.46,""db"":17.38,""time"":""2018-11-06T17:32:36.571Z"",""params"":[{""key"":""utf8"",""value"":""‚úì""},{""key"":""_method"",""value"":""put""},{""key"":""authenticity_token"",""value"":""[FILTERED]""},{""key"":""file_path"",""value"":""LICENSE""},{""key"":""encoding"",""value"":""text""},{""key"":""commit_message"",""value"":""Update LICENSE""},{""key"":""branch_name"",""value"":""master""},{""key"":""original_branch"",""value"":""master""},{""key"":""last_commit_sha"",""value"":""88d77e88631d11244bc105714eb2d15e4fd20a8b""},{""key"":""content"",""value"":""[FILTERED]""},{""key"":""from_merge_request_iid"",""value"":""""},{""key"":""namespace_id"",""value"":""stanhu""},{""key"":""project_id"",""value"":""rouge-test""},{""key"":""id"",""value"":""master/LICENSE""}],""remote_ip"":""64.71.20.74"",""user_id"":64248,""username"":""stanhu"",""ua"":""Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36"",""gitaly_calls"":7} -``` - -Workhorse log shows: - -```json -{""correlation-id"":""lxcwkZTK0V7"",""duration"":55.355355981,""host"":""staging.gitlab.com"",""level"":""info"",""method"":""POST"",""msg"":""access"",""proto"":""HTTP/1.1"",""referer"":""https://staging.gitlab.com/stanhu/rouge-test/edit/master/LICENSE"",""remoteAddr"":""@"",""status"":503,""system"":""http"",""time"":""2018-11-06T17:33:31Z"",""uri"":""/stanhu/rouge-test/update/master/LICENSE"",""userAgent"":""Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36"",""written"":2930} -```",1.0 -15612015,2018-11-06 17:34:44.907,Test plan for testing Patroni replication environment,"Test with QA the Patroni environment with QA.Infrastructure team has already done sync/replication testing. Quality team to help with functional tests. - -* See that a Project with CI/CD is working -* MR is working -* Diff is displaying correctly -* Performance tests, we have some already in-flight automation by @at.ramya -* Load testing, use https://gitlab.com/gitlab-com/large-staging-collider that we used in GCP migration. - -Follow-ups after the meeting - -* @ahmadsherif will update the issue with env details. -* Quality team will work on turning this issue into a test plan. - -**Recording:** https://drive.google.com/drive/folders/1KIf4mJsL6TABOizYDlyx_gfG84kblN8x - -## Plan - -* Run existing automated tests against the new environment -* Run our new performance test automation -* Run load tests using https://gitlab.com/gitlab-com/large-staging-collider",5.0 -15611809,2018-11-06 17:32:47.081,Review patroni setup and cookbooks,Review totally the cookbooks and environment (setup and variables) for patroni test environment created by gitlab,4.0 -15608505,2018-11-06 15:17:07.029,VACUUM FULL to get another bloat statistic,"So this is about running a `VACUUM FULL` on production data using a restore from a backup. The goal is to get another figure how much bloat we currently have in the database. 
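One way the per-table snapshot could be captured on the restored instance, once before and once after the `VACUUM FULL` (a sketch, assuming the default omnibus socket path):

```
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production -A -F, -t <<SQL > table_sizes_before.csv
SELECT n.nspname, c.relname, pg_total_relation_size(c.oid) AS total_bytes
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
ORDER BY total_bytes DESC;
SQL
```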
- -I'll export table-level size statistics before and after the fact, so we can analyze better.",2.0 -15605562,2018-11-06 13:36:50.625,Plan Monitoring/Visibility for Geo,"Create a list of issues to validate that we are properly monitoring and alerting on the Geo infrastructure - -Consider the following -- [ ] Ensure that our existing monitoring solution is sufficient: -- [ ] Ensure firewall rules are in place such that we can properly scrape for metrics: -- [x] Ensure that logging is in place: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6330 -- [ ] Ensure that alerting is routed properly for Geo: -- [x] Ensure Sentry is correctly configured for Geo: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6365 - -This issue can be closed when the above issues have been created.",1.0 -15605056,2018-11-06 13:20:53.357,Plan to enable Geo,"Plan what is required to enable Geo for GitLab.com. - -Consider the following: -* Should we do a phased sync approach? Briefly discussed here (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4741#note_111090672) - -This issue can be closed after creation the necessary stories that enable the Geo feature is complete.",1.0 -15604893,2018-11-06 13:15:54.049,Build Geo Environment,"Utilize what we've learned from discovery https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5465 and proceed to build our the necessary infrastructure in our chosen region. - -This issue can be closed when all of the components for Geo are up. This issue does _not_ include enabling Geo.",5.0 -15604805,2018-11-06 13:12:42.381,Request Quota Increase for X region,"After we've decided what region to build Geo, proceed to contact Google Support to increase the quotas as required from our discovery: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5465",1.0 -15604761,2018-11-06 13:10:26.198,Discover recommended installation parameters for Geo,"Gather a list of the following: -* ~~Where to build Geo (conversation started here: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4741 & https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5466#note_120537201)~~ -* Instance sizing requirements -* Disk type requirements - -This issue is only for discussion and discovery only. - -Keep in mind the following when working on this issue: -* We only plan to sync repo and database data -* The region that we are placing this stuff in will not see customer traffic -* Consider places where we can reduce cost by using less expensive infrastructure components",1.0 -15604577,2018-11-06 13:03:04.032,Discover current Geo implementation,"Despite Geo currently being disabled for GitLab.com, we still have at least one server up and running and some configuration related to it. - -Figure out what servers are around for Geo _currently_. Compile a list of action items to tear those down. Anything left right now would have been for the purpose of migrating to GCP, and since, they've been neglected. - -This issue can be closed when issues from the above have been created.",1.0 -15600024,2018-11-06 10:08:56.178,PostgreSQL Europe Conference summary,"This is a quick recap of some good talks I visited at [pgconfeu 2018](https://www.postgresql.eu/events/pgconfeu2018) in Lisbon in October this year. I went to more than these talks obviously, but that's all the notes I have. - -The [conference schedule](https://www.postgresql.eu/events/pgconfeu2018/schedule/) also contains links to slides. 
- -My TOP3 talks are about HyperLogLog algorithm for counting, various index types in PG and spatial analysis with PostGIS. - -### Training: Dig the WAL - -[Slides](https://www.slideshare.net/loxodata/dig-thewal-pgconfeu2018) [Description](https://www.postgresql.eu/events/pgconfeu2018/sessions/session/2069-dig-the-wal/) - -In the half-day training, we were looking into WAL details and how to use this to recover certain catastrophic situations. The speakers had automated examples of recovering clusters using PITR, understanding conflicts with logical replication and using pg_rewind to recover split brain situations. - -Apart from the rather high-level discussion, my take aways were - -* Logging DDL statements along with txids is tremendously helpful https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5502 -* Group commit can considerably reduce amount of fsyncs required https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5503 -* `\gset` is really useful for scripting with `psql` - -### TOP3 Talk: Location - the universal foreign key. Past, present and future of spatial PostgreSQL - -[Slides](https://docs.google.com/presentation/d/1xyXA4-0wmNX7WfiLeH9h10bIkZxrej278-mMaClagys/edit?usp=sharing) [Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2020-location-the-universal-foreign-key-past-present-and-future-of-spatial-data-in-postgresql/) - -The talk was given by Paul Ramsey (himself!) and he beautifully managed to build a bridge for database folks into GIS land. The spatial analysis examples were awesome, for example he correlated starbucks locations with income data and more. If you never worked with spatial data before: - -* How to get spatial data: ogr2ogr -* How to visualize it: gqis -* How to load, how to analyze: postgis - -Personally, it was a good reminiscence of my bachelor thesis covering different spatial databases. - -### TOP3 Talk: Cleaning out crocodiles teeth with PostgreSQL indexes - a story on all the index types in PG - -[Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2119-cleaning-out-crocodiles-teeth-with-postgresql-indexes-a-story-on-all-the-index-types-in-pg/) - -Great talk by Louise Grandjonc (Citus Data): Covered implementation details of b-tree, GIN, GiST, SP-GiST, BRIN and Hash indexes and how they compare. - -### TOP3 Talk: The HyperLogLog Algorithm: How it works and why you will love it - -[Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2210-the-hyperloglog-algorithm-how-it-works-and-why-you-will-love-it/) - -Theory behind HyperLogLog algorithm to implement efficient `COUNT(DISTINCT)` queries in PostgreSQL. - -If you always wondered how HLL works - this is the talk to read: [Slides](https://docs.google.com/presentation/d/1Bn_LOUwaZiKAcat_LtyyeqY-NxRohFqo6uJYrMvpnM8/edit) - -### Talk: PostgreSQL worst practices - -[Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2169-postgresql-worst-practices/) - -Countless worst practices presented by a database consultant. Excerpts for bingo play: - -* Disallow access to the production database to developers. Not even read-only. 
-* Always use a ORM -* Count everything precisely (this is like all over the place in GitLab, counting is really hard at scale and we have a few anti patterns implemented that do not scale, see https://gitlab.com/gitlab-org/gitlab-ce/issues/52096 and related issues/discussions) -* In-memory joins -* Be in trend, be schemaless, use JSONB -* Be agile, use EAV -* (and many more) - -### Talk: Advanced logical replication - -[Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2173-advanced-logical-replication/) - -This talk detailed how to implement a transactional, at-least-once message queue using logical decoding in PostgreSQL (based on `pg_logical_emit_message`). This seems relevant to our needs regarding [Geo design](https://gitlab.com/gitlab-org/gitlab-ee/issues/7420) - it would be a great way of introducing a transactional queue without introducing another dependency (to like Kafka or others). - -The talk also covered online upgrades ([relevant to us](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4937)) and bi-directional replication. - -### Talk: Around the World With Extensions - -[Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2000-around-the-world-with-extensions/) - -Great overview of available extensions including cool stuff like [HyperLogLog counting](https://github.com/citusdata/postgresql-hll), [TOP-N](https://github.com/citusdata/postgresql-topn), [pg_partman](https://github.com/pgpartman/pg_partman) or sharding with citus. - -### Talk: High Performance pgBackRest - -[Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2013-high-performance-pgbackrest/) - -Details of pgBackRest and how it implements parallelism for backup and restore. Claims to be able to push 20 TB worth of WAL per hour (async parallel push) - if network keeps up with it. - -This is relevant to us discussing replacing wal-e for various reasons ([design pending](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5022)). - -### Talk: Advanced PostgreSQL backup and recovery methods - -[Description](https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2098-advanced-postgresql-backup-and-recovery-methods/) - -Great overview of different backup/recover tools available for PostgreSQL and how they compare.",1.0 -15583051,2018-11-05 18:26:28.553,Introducing a small amount of traffic on staging cripples the environment,"A small amount of load to the staging environment -* web: 22 req/sec -* api: ~100 req/sec -* git: ~100 req/sec - -Starts to add enough load to cause problems. 
- -https://dashboards.gitlab.net/d/RZmbBr7mk/gitlab-triage?orgId=1&var-environment=gstg&var-prometheus=prometheus-01-inf-gstg&var-prometheus_app=prometheus-app-01-inf-gstg&var-backend=All&var-type=All&var-stage=main&from=1541440657228&to=1541441618961 - - -api/git/gitaly in particular: - -![Screen_Shot_2018-11-05_at_7.25.15_PM](/uploads/4d8bc9dfef0b6558f824395b8c769aa8/Screen_Shot_2018-11-05_at_7.25.15_PM.png) -![Screen_Shot_2018-11-05_at_7.25.11_PM](/uploads/049aa7e4cd42e9d67ab73efe6b93e1d0/Screen_Shot_2018-11-05_at_7.25.11_PM.png) -![Screen_Shot_2018-11-05_at_7.25.05_PM](/uploads/f99a4e73b6005724f6c3844d2d0c4298/Screen_Shot_2018-11-05_at_7.25.05_PM.png) -![Screen_Shot_2018-11-05_at_7.24.59_PM](/uploads/2c956ffc0ba31156d11577ff94eb4b7b/Screen_Shot_2018-11-05_at_7.24.59_PM.png)",2.0 -15582965,2018-11-05 18:21:10.435,Database Reviews,"Last milestone: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5335# - -* [x] https://gitlab.com/gitlab-org/gitlab-ce/issues/52271#note_109523119 along with https://gitlab.com/gitlab-org/gitlab-ce/issues/49651#note_111247684 -* [x] Sean https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7903 -* [x] Douwe https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22799 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8231 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22808 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22433 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8107 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22750/diffs#note_114883642 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6947#note_114740347 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22522#note_115135349 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22308/diffs#note_115063771 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22799 -* [ ] Recursive CTE https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22308#note_115367888 -* [x] Sean https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21492#note_115661498 https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7222#note_115909424 -* [x] Douwe https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8120#note_116819401 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23020/diffs -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22694 and https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8232 -* [ ] Sean https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8070 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8120",5.0 -15582728,2018-11-05 18:07:43.168,Error when deploying customers.gitlab.com,"``` - * cookbook_file[/var/chef/handlers/prometheus_handler.rb] action create[2018-11-05T18:03:43+00:00] INFO: Processing cookbook_file[/var/chef/handlers/prometheus_handler.rb] action create (gitlab-exporters::chef_client line 20) - (up to date) - * chef_handler[PrometheusHandler] action enable[2018-11-05T18:03:43+00:00] INFO: Processing chef_handler[PrometheusHandler] action enable (gitlab-exporters::chef_client line 27) -[2018-11-05T18:03:43+00:00] INFO: Enabling PrometheusHandler as a report handler. -[2018-11-05T18:03:43+00:00] INFO: Enabling PrometheusHandler as a exception handler. - (up to date) -[2018-11-05T18:03:43+00:00] WARN: This cookbook is being re-written to use resources, not recipes and will only be Chef 13.8+ compatible. Please version pin to 6.1.1 to prevent the breaking changes from taking effect. 
See https://github.com/sous-chefs/postgresql/issues/512 for details -[2018-11-05T18:03:43+00:00] WARN: This cookbook is being re-written to use resources, not recipes and will only be Chef 13.8+ compatible. Please version pin to 6.1.1 to prevent the breaking changes from taking effect. See https://github.com/sous-chefs/postgresql/issues/512 for details -[2018-11-05T18:03:43+00:00] WARN: This cookbook is being re-written to use resources, not recipes and will only be Chef 13.8+ compatible. Please version pin to 6.1.1 to prevent the breaking changes from taking effect. See https://github.com/sous-chefs/postgresql/issues/512 for details - - ================================================================================ - Recipe Compile Error in /var/chef/cache/cookbooks/cookbook-customers-gitlab-com/recipes/default.rb - ================================================================================ - - NoMethodError - ------------- - undefined method `[]' for nil:NilClass - - Cookbook Trace: - --------------- - /var/chef/cache/cookbooks/postgresql/recipes/apt_pgdg_postgresql.rb:6:in `block in from_file' - /var/chef/cache/cookbooks/postgresql/recipes/apt_pgdg_postgresql.rb:4:in `from_file' - /var/chef/cache/cookbooks/postgresql/recipes/client.rb:10:in `from_file' - /var/chef/cache/cookbooks/postgresql/recipes/server.rb:23:in `from_file' - /var/chef/cache/cookbooks/cookbook-customers-gitlab-com/recipes/database.rb:9:in `from_file' - /var/chef/cache/cookbooks/cookbook-customers-gitlab-com/recipes/default.rb:11:in `from_file' - - Relevant File Content: - ---------------------- - /var/chef/cache/cookbooks/postgresql/recipes/apt_pgdg_postgresql.rb: - - 1: Chef::Log.warn 'This cookbook is being re-written to use resources, not recipes and will only be Chef 13.8+ compatible. Please version pin to 6.1.1 to prevent the breaking changes from taking effect. See https://github.com/sous-chefs/postgresql/issues/512 for details' - 2: - 3: # frozen_string_literal: true - 4: apt_repository 'apt.postgresql.org' do - 5: uri 'http://apt.postgresql.org/pub/repos/apt' - 6>> distribution ""#{node['postgresql']['pgdg']['release_apt_codename']}-pgdg"" - 7: components ['main', node['postgresql']['version']] - 8: key 'https://www.postgresql.org/media/keys/ACCC4CF8.asc' - 9: action :add - 10: end - 11: - - Platform: - --------- - x86_64-linux - - - Running handlers: -[2018-11-05T18:03:43+00:00] ERROR: Running exception handlers - - PrometheusHandler - Running handlers complete -[2018-11-05T18:03:43+00:00] ERROR: Exception handlers complete - Chef Client failed. 0 resources updated in 20 seconds -[2018-11-05T18:03:43+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out -[2018-11-05T18:03:43+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report -[2018-11-05T18:03:43+00:00] ERROR: undefined method `[]' for nil:NilClass -[2018-11-05T18:03:43+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1) -```",1.0 -15578262,2018-11-05 16:06:15.342,Research Improvements to the Docker Registry Monitoring/Alerting,"Logging from our registry is coming into elastic search unstructured which increases the time it takes to sift through logs. - -We also don't have runbooks on the registry. If something were to happen we don't have any details of where to look, troubleshoot, and diagnose issues. 
- -Utilize this issue as research into areas where we can improve the overall stature of how we monitor and alert on issues related to the docker registry bits of infrastructure. Create issues upon completion of the research that will address each of the potential areas for improvement with appropriate acceptance criterion as you see fit.",2.0 -15572063,2018-11-05 14:47:29.588,Iterate our snapshot timing down to 6 hours,"Continuing off https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5412, let's bring down the time between snapshots down to 6 hours instead of the current 12.",1.0 -15571885,2018-11-05 14:44:03.349,Request: Admin access to staging.gitlab.com for Reuben Pereira,"Per https://gitlab.com/gitlab-com/access-requests/issues/45 - - -/cc @rpereira2 @sengelhard - -Edit: Tagging proper person for provisioning",1.0 -15567470,2018-11-05 13:06:42.903,Rake task for updating our vaults misconfigured,"Example run: -``` -rake 'edit_role_secrets[dev-gitlab-org]' --> bundle exec knife vault edit dev-gitlab-org _default --mode client --> bundle exec knife vault update dev-gitlab-org _default --clean -S(roles:dev-gitlab-org OR role:dev-gitlab-org) AND chef_environment:_default -Aabrandl,ahanselka,ahmadsherif,alejandro,jarv,jjn,stanhu,yorickpeterse,dsmith,devin,skarbek,craig --mode client -WARN: No clients were returned from search, you may not have got what you expected!! -✓ Switching to master (0.11 sec) -✓ Pulling from git@dev.gitlab.org:cookbooks/chef-repo.git (7.71 sec) --> bundle exec knife download data_bags/dev-gitlab-org -Updated data_bags/dev-gitlab-org/_default_keys.json -Updated data_bags/dev-gitlab-org/_default.json --> git add -A data_bags/dev-gitlab-org --> git commit -v data_bags/dev-gitlab-org -Aborting commit due to empty commit message. -Failed: git commit -v data_bags/dev-gitlab-org -``` - -1. It's pulling from `dev` when it should be pulling from `ops` -1. It didn't properly search `roles:dev-gitlab-org` - * The dev node is clearly part of this role as shown by `knife node show dev.gitlab.org` - -Either fix the rake task appropriately, or update the documentation and remove this rake task. - -Reference: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5439#",1.0 -15563093,2018-11-05 10:29:44.657,[Design Doc] Implementing Patroni,The doc with the detailed info : https://docs.google.com/document/d/1mWKvujpdzk35SeXPU_2pOfUAg9ZAo9yx8qYQLlrm29Y/edit,5.0 -15543857,2018-11-04 08:41:08.732,[PagerDuty] SSLCert expiring soon: dev.gitlab.org,"## Summary - -### Issue -We got paged for the following incident - Certification for dev.gitlab.org is expiring on 11/10/18 23:59:59PM GMT - -### Investigation -According to the [runbook](https://gitlab.com/gitlab-com/runbooks/blob/master/troubleshooting/ssl_cert.md), ran the command to look at the certification detail. 
- -``` -echo | openssl s_client -showcerts -servername dev.gitlab.org -connect dev.gitlab.org:443 2>/dev/null | openssl x509 -inform pem -noout -text - -Certificate: - Data: - Version: 3 (0x2) - Serial Number: - 42:75:ea:82:ef:98:ba:ac:46:e2:a7:dc:66:ba:ca:c7 - Signature Algorithm: sha256WithRSAEncryption - Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Domain Validation Secure Server CA - Validity - Not Before: Nov 9 00:00:00 2017 GMT - Not After : Nov 10 23:59:59 2018 GMT -``` - -Tried to renew it but got an error: - -``` -sslmate renew dev.gitlab.org -If you don't have an account yet, visit https://sslmate.com/signup -Enter your SSLMate username: gitlabops -Enter your SSLMate password: ******************************** -Authenticating... Done. -Tip: if you don't want to have to type your password every time, you can run 'sslmate link' to link this system with your account. - -Error: the certificate for dev.gitlab.org is not about to expire. -Tip: to reissue this certificate, run 'sslmate reissue dev.gitlab.org'. -Tip: use --force to override the above error. -``` - -So not sure if we need to force it or wait. Will check with the team on (the practice) since we still have time. - -### Root Cause -This is not really an issue. A pre-cautionary alert. - -### Action Items -[ ] Renew the certificate",1.0 -15520640,2018-11-02 17:44:25.018,Access to production database,"Hello, - -I would like a read only access to the secondary production database to run explains on some tricky queries for Issue https://gitlab.com/gitlab-org/gitlab-ee/issues/7851 - -Can you set up an access just like you did in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4355 ? -The hostname in the response there doesn't exists anymore so I guess my access didn't survive the move to GCP. - -Here is my SSH public key: -``` -ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArTflMdTi+WGL0qptp5bJWnRBx54nOMmvYx8G66gigpjnjKqKV2c7NVf/wIo8G6pWzdj4XAOwnjUSERran5cJbpj/SqD0ah9Fr2xYZ7HcvmqasXejnrizRGdfIjCJPLTP8C77wrZ5a13+H4Rg3MN+B8E9i5/wsMvgwgkz6jagkIk8RFyQbs/8iULVdhYnNiosNYkFDA8c9AzoThS1EwOT0namQBU/T1t1IMbqxXJSv1cY9k01hMW6sdytl9XmQu9mk6FDlxQKwYTpbKzxo/cRE2ez3nhSKo5cAL3NAciQbDLQKgTDnnNMpT1oZbHOdNFsdnAwrmrPl4kwLCEeK+2UjQ== groulot@gitlab.com -```",1.0 -15517988,2018-11-02 15:00:27.572,Root device on postgres primary runs out of space,"The device for `/` is only 20GB in size and may fill up completely. Since `/tmp` is on that disk, it may cause problems with postgres backups (I have a weak suspicion in the direction of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5434). - -Can we increase the size to 100GB for all postgres hosts ? Is it possible to do an online resize? - -gprd primary: https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats?refresh=1m&panelId=14&fullscreen&orgId=1&from=now-7d&to=now&var-environment=gprd&var-node=postgres-03-db-gprd.c.gitlab-production.internal&var-promethus=prometheus-01-inf-gprd - -Even though that does not indicate it ever went to 0, it doesn't mean it didn't: It takes only a few seconds to write out 20 GBs worth of data and hit the bottom, the process would fail, clean up and we would never notice in prometheus. - -/cc @gitlab\-com/gl\-infra",2.0 -15517511,2018-11-02 14:42:10.859,Create env in staging equal to production for Patroni,"Create the clusters for Postgresql, Pgbouncer, consul agent and server and ILB. -Use the cookbooks and terraform and see possible changes and review. 
-Ongres will review the environment during our progress.",1.0 -15513612,2018-11-02 12:46:48.579,wal-e full backups occasionally fail,"This surfaced after moving the backups to GCS but apparently this was also the case with S3: - -https://log.gitlab.net/goto/eba78912bf25fdc72d798344727698df - -It looks like wal-e crashes for some reason in the middle of a backup and does not recover from it.",2.0 -15508112,2018-11-02 09:54:19.772,Redirect developer pages to backend engineer pages,"After https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15669 and https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/16096, we need these redirects: - -* `/job-families/engineering/developer/` -> `/job-families/engineering/backend-engineer/` -* `/handbook/engineering/career-development/junior-developers/` -> `/handbook/engineering/career-development/junior-engineers/`",1.0 -15502473,2018-11-02 04:12:09.218,*Triggered #3777:* Firing 1 - CPU use percent is extremely high on postgres-dr-archive-01-db-gprd.c.gitlab-production.internal for the past 2 hours.,"## Summary - -### Issue -CPU utilization alert triggered for: postgres-dr-archive-01-db-gprd.c.gitlab-production.internal - -### Investigation -Bunch of wal-e processes appear to be taking up most of the CPU but CPU has been utilized > 90% for quite some time (though it spiked down below 90% at times as well). In postgresql log, we see errors indicating absolute URIs not being able to be loaded - which could be causing the CPU jump. A PD incident# 3776 was also fired earlier for postgres-dr-delayed node for a delayed replica and the node was also consuming high CPU. (Could be related and we decided to wait until POC in EMEA looks at it). For detailed investigation, see comments below. - -### Proximal Root Cause -The URIs that could not be loaded might be causing retries and/or subsequent tasks/steps not to be completed properly causing loops and consequently causing high CPU utilization. Since this is not yet a definite root cause, hence the name: ""proximal"". - -### Mitigation -The alert cleared on its own for now. But we might have to suppress the alarm for the same reason as PD incident#3776. - -### Action Items -- Monitor the CPU and if the alarm goes off again and we see the same symptom (logs), then suppress the alert in AlertManager -- Inform POC in EMEA (@abrandl) about the incident to see if both #3776 and #3777 are things he can help us look into and root cause -",1.0 -15501330,2018-11-02 02:08:28.862,PostgreSQL_ReplicationLagTooLarge_DelayedReplica Alert,"We are getting an alert for PostgreSQL_ReplicationLagTooLarge_DelayedReplica in production. It appears that the delayed replica is still trying to use s3 for the wal-e replication. - -Slack thread is at: - -https://gitlab.slack.com/archives/C101F3796/p1541120049182300",1.0 -15494176,2018-11-01 18:41:31.442,High4xxRateForRegistry in staging,"High 4xx Error Rate on Docker Registry - - We are seeing an increase of 4xx errors on the load balancing across all backends, more than 60% of http requests for at least 5 minutes. Check the registry nodes since they are the ones processing live traffic at the moment.",1.0 -15493622,2018-11-01 18:08:05.126,Complete Phase 2 of TLS1.0/1.1 End of Support,"https://gitlab.com/gitlab-com/gl-security/engineering/issues/202#2018-phase-2-complete-this-phase-by-end-of-november-15-2018 - -Notes are in the issue above. 
To be completed Nov 15, but putting the 12th for due date to remind us earlier.",1.0 -15491965,2018-11-01 16:31:45.359,301 Redirects,"As a follow up to the [Website IA](https://gitlab.com/gitlab-com/marketing/general/issues/3078) we need several 301 redirects put in place (including the team page) - -The list of redirects is in this Google sheet: https://docs.google.com/spreadsheets/d/1yReGphpeGjqy3hoUYl8jaudqNS1fHXDKlbGhHxrsfwA/edit#gid=0 - -cc @northrup @williamchia - -Although we have JS redirects in place (which ensures people can get to the page after a small delay), Google is not picking up on the route change. For example, pages are still showing up at `/features/` even though they've been moved to `/solutions/`. A 301 redirect is the only reliable way to communicate change to search engines. - -as one example https://about.gitlab.com/features/github/ should be https://about.gitlab.com/solutions/github/ but it shows up incorrectly in SERPs: - -![image](/uploads/2ac346f6b0f8309e63c71a06d8cf1753/image.png)",2.0 -15491641,2018-11-01 16:15:09.485,Add gitlab-bot triage to the infrastructure and production issue queue,"Here are some rules that I think we should implement to start: - -* gitlab-bot https://gitlab.com/gitlab-bot -* https://gitlab.com/gitlab-org/gitlab-triage - -## Rules - -- Incident issue without severity label -- Incident issue without attribution label `Service.*` -- Infrastructure issue without weight -- Comment and maybe close issues with xxx days of innactivity -",2.0 -15466888,2018-10-31 22:14:21.362,Error when deploying customers.gitlab.com,"``` -================================================================================ - Recipe Compile Error - ================================================================================ - - Chef::Exceptions::RecipeNotFound - -------------------------------- - could not find recipe server for cookbook postgresql - - Cookbook Trace: - --------------- - /var/chef/cache/cookbooks/cookbook-customers-gitlab-com/recipes/database.rb:9:in `from_file' - /var/chef/cache/cookbooks/cookbook-customers-gitlab-com/recipes/default.rb:11:in `from_file' - - Relevant File Content: - ---------------------- - /var/chef/cache/cookbooks/cookbook-customers-gitlab-com/recipes/database.rb: - - 2: # Cookbook Name:: cookbook-customers-gitlab-com - 3: # Recipe:: database - 4: # License:: MIT - 5: # - 6: # Copyright 2016, GitLab Inc. - 7: # - 8: include_recipe 'gitlab-vault' - 9>> include_recipe 'postgresql::server' - 10: - 11: customers_gitlab_conf = GitLab::Vault.get(node, 'cookbook-customers-gitlab-com') - 12: - 13: package 'libpq-dev' - 14: - 15: bash 'create database user' do - 16: user 'postgres' - 17: code ""psql -c \""CREATE USER \\\""#{customers_gitlab_conf['database_user']}\\\"" WITH PASSWORD '#{customers_gitlab_conf['database_password']}'\"""" - 18: not_if ""sudo su - postgres -c \""psql -c '\\du' | grep #{customers_gitlab_conf['database_user']}\"""" - - Platform: - --------- - x86_64-linux - - - Running handlers: -[2018-10-31T22:11:52+00:00] ERROR: Running exception handlers - - PrometheusHandler - Running handlers complete -[2018-10-31T22:11:52+00:00] ERROR: Exception handlers complete -``` - -@skarbek",1.0 -15453153,2018-10-31 19:35:15.658,Unable to publish new version of gitlab-postgresql,"When trying to publish the `gitlab-postgresql` cookbook, it fails. The chef-repo has this pinned to `0.1.0` in the Berks file. - In the process of updating the Berksfile for our chef-repo, I'm unable to resolve a dependency conflict. 
Berks doesn't actually tell me where the conflict lies, and ends up resulting in it's inability to satisfy a long list of requirements. - -Reference: https://ops.gitlab.net/gitlab-cookbooks/gitlab-postgresql/pipelines/8228",1.0 -15450766,2018-10-31 17:01:59.591,Missing `traces` sidekiq from dashboard,"The dashboard [Sidekiq Stats](https://dashboards.gitlab.net/d/9GOIu9Siz/sidekiq-stats?orgId=1) is missing panels that are specific to the `traces` servers. - -Reference: https://gitlab.com/gitlab-com/gl-infra/production/issues/532",1.0 -15440789,2018-10-31 14:12:45.543,High CPU on the web fleet after 11.4.2 release,"The load on the web fleet increased on the 26th at ~09:00. This is around when we deployed *v11.4.2-ee.0* to gprd. - -![Screen_Shot_2018-10-31_at_3.10.43_PM](/uploads/7082bba7508d0fb8a915f1954a626d7f/Screen_Shot_2018-10-31_at_3.10.43_PM.png) - -https://dashboards.gitlab.net/d/RZmbBr7mk/gitlab-triage?refresh=30s&panelId=1176&fullscreen&orgId=1&from=now-30d&to=now - -web-04 is in a current state of alarm so i used it as an example below: - - -![Screen_Shot_2018-10-31_at_3.05.17_PM](/uploads/7209aa9f3c1d17d7aa99295a93be0752/Screen_Shot_2018-10-31_at_3.05.17_PM.png) - -https://dashboards.gitlab.net/d/bd2Kl9Imk/host-stats?orgId=1&from=now-14d&to=now&refresh=1m&var-environment=gprd&var-node=web-04-sv-gprd.c.gitlab-production.internal&var-promethus=prometheus-01-inf-gprd - - -![Screen_Shot_2018-10-31_at_3.03.18_PM](/uploads/f3c05dc6363f78d82b76414f673d1a99/Screen_Shot_2018-10-31_at_3.03.18_PM.png) - - -![Screen_Shot_2018-10-31_at_2.51.35_PM](/uploads/73ffe7598f7bb054bd90998aae3f4dd5/Screen_Shot_2018-10-31_at_2.51.35_PM.png) - -For a ruby process that is using a full cpu: - -``` -# perf record -F 99 -p 15962 -g -- sleep 60 -[ perf record: Woken up 1 times to write data ] -[ perf record: Captured and wrote 0.918 MB perf.data (3585 samples) ] -# perf script > out.perf -# ./stackcollapse-perf.pl out.perf > out.folded -# less out.folded -# ./flamegraph.pl out.folded > kernel.svg -``` - - -![kernel.svg](/uploads/7607bed3215a8682a791a5fd46e68448/kernel.svg)",2.0 -15421785,2018-10-31 09:35:55.091,Daily DB Restore job failing,"``` -ERROR: (gcloud.compute.disks.create) Could not fetch resource: - - Quota 'SSD_TOTAL_GB' exceeded. Limit: 10000.0 in region us-west1. -``` - -https://gitlab.com/gitlab-restore/postgres-gprd/-/jobs/114695420",1.0 -15420977,2018-10-31 09:19:05.358,Change backups for the gitaly fleet,"Actually, we execute backups each 24h. -https://gitlab.com/gitlab-restore/gitlab-production-snapshots - -The idea is to execute every 12 hours Initially, in further iterations we could go to 6 hours and 3. - -In case of a disaster, we will reduce the data lost, and the effort seems to be smaller as spoken with Jarv. - -@jarv @glopezfernandez @dawsmith @andrewn - Any thoughts on that?",1.0 -15413433,2018-10-31 00:10:13.715,301 redirect ci page,"Redirect `/features/gitlab-ci-cd/` to `/product/continuous-integration/` - -There's a JS redirect in place today, but this is a key page that needs a 301 asap.",1.0 -15412354,2018-10-30 22:07:03.029,GitalyLatencyOutlier Alert in Staging,"The following alert is coming up in staging repeatedly. It stays on for about 5 minutes, then resolves itself. If this is a problem, we should fix it. If it's not, we should fix the alert. 
- - -``` -Gitaly: Latency on the Gitaly ListBranchNamesContainingCommit is unusually high compared with a 24 hour average - - The error rate on the ListBranchNamesContainingCommit endpoint is outside normal values over a 12 hour period (95% confidence). Check https://dashboards.gitlab.net/dashboard/db/gitaly-feature-status?var-method=ListBranchNamesContainingCommit&var-tier=stor&var-type=gitaly&var-environment=gstg&refresh=5m - - - ** - Gitaly: Latency on the Gitaly ListBranchNamesContainingCommit is unusually high compared with a 24 hour average - - The error rate on the ListBranchNamesContainingCommit endpoint is outside normal values over a 12 hour period (95% confidence). Check https://dashboards.gitlab.net/dashboard/db/gitaly-feature-status?var-method=ListBranchNamesContainingCommit&var-tier=stor&var-type=gitaly&var-environment=gstg&refresh=5m - -:label: *Labels*: - - *Alertname*: GitalyLatencyOutlier - *Channel*: gitaly - *Environment*: gstg - *Grpc_method*: ListBranchNamesContainingCommit - *Monitor*: gstg-default - *Provider*: gcp - *Region*: us-east - *Severity*: warn -```",1.0 -15412296,2018-10-30 22:00:58.831,Add version constraints on cookbook publisher,"Per discussion in [this comment](https://gitlab.com/gitlab-cookbooks/gitlab-postgresql/merge_requests/1#note_113276171) our chef cookbook publisher script is currently inserted into the CI pipelines via a simple git clone, with no version locking or other mechanisms in place to ensure that changes to the tooling don't inadvertently break downstream cookbook pipelines. That publisher script should be repackaged as a gem and included in the Gemfile with appropriate version pinning (or implement an equivalent control, e.g. tagged references on git clone, etc.)",3.0 -15410731,2018-10-30 20:08:48.425,301 Redirect: /company/culture/remote-only/ to /company/culture/all-remote/,"I've opened [a merge request](https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/16038/diffs) that renames the directory /company/culture/remote-only/ as /company/culture/all-remote/. Before we merge (preferably), I'd like to setup a redirect for each of the two affected URLs - -1. https://about.gitlab.com/company/culture/remote-only should redirect to https://about.gitlab.com/company/culture/all-remote -1. https://about.gitlab.com/company/culture/remote-only/building-all-remote should redirect to https://about.gitlab.com/company/culture/all-remote/building-all-remote - -Thanks in advance for your help! - -Reference https://gitlab.com/gitlab-com/www-gitlab-com/issues/2745",1.0 -15410708,2018-10-30 20:06:16.140,PostgresSQL_XIDConsumptionTooLow in staging,"Alert in staging -``` -*postgres-06-db-gstg.c.gitlab-staging-1.internal* - Postgres seems to be consuming transaction IDs very slowly - - TXID/s is 0.2 on postgres-06-db-gstg.c.gitlab-staging-1.internal:9187 which is unusually low. Perhaps the application is unable to connect -``` - -Possibly related to: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5402",1.0 -15410692,2018-10-30 20:04:22.872,WALEBackupDelayed in staging,"Alert for WALEBackupDelayed on postgres-06-db-gstg.c.gitlab-staging-1.internal - -Possibly related to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5402",1.0 -15410319,2018-10-30 19:50:16.891,Disk Full on postgres-06-db-gstg,"The / disk was full on postgres-06-db-gstg. - -There were a bunch of very old tmp files in /tmp which were each over a gig. I deleted a few, oldest first, until the usage was under 90%. 
- -There may still be some leftover effects of this. I'll keep an eye on it.",1.0 -15407040,2018-10-30 18:34:32.880,PullMirrorsOverdueQueueTooLarge in staging,"https://ops.gitlab.net/gitlab-com/runbooks/blob/master/troubleshooting/large-pull-mirror-queue.md states to check the mirror dashboard, but there is no data for staging. - -UpdateAllMirrorsWorker had 116 entries when I checked. - -Since this is staging, those can't be customer jobs - so I cleared the set. - -states = ProjectImportState.where(project_id: projects).order(:last_update_started_at).map(&:last_error) Returned the following: - -``` -=> [""Import timed out. Import took longer than 54000 seconds"", ""Import timed out. Import took longer than 54000 seconds"", nil, nil, ""error: cannot lock ref 'refs/remotes/upstream/old_version': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ithkuil/ithkuil.git/./refs/remotes/upstream/old_version.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/site': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ithkuil/ithkuil.git/./refs/remotes/upstream/site.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/site-python2': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ithkuil/ithkuil.git/./refs/remotes/upstream/site-python2.lock': No space left on device\n"", ""error: cannot lock ref 'refs/remotes/upstream/master': Unable to create '/var/opt/gitlab/git-data-file03/repositories/RoliSoft/dotfiles.git/./refs/remotes/upstream/master.lock': No space left on device\n"", nil, ""error: cannot lock ref 'refs/remotes/upstream/form2': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/form2.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/fs2851': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/fs2851.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/large-screens': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/large-screens.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/lessnew': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/lessnew.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/localdraft': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/localdraft.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/mail': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/mail.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/mail_headers': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/mail_headers.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/master': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/master.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/mediarefactor': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/mediarefactor.lock': No space left on device\nerror: cannot lock ref 
'refs/remotes/upstream/old-stable': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/old-stable.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/pagetools': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/pagetools.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/php7comptib': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/php7comptib.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/phpunit-fix': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/phpunit-fix.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/retrytests': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/retrytests.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/scrutinizer-patch-1': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/scrutinizer-patch-1.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/stable': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/stable.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/tpl_action_get2': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/tpl_action_get2.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/travis': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/travis.lock': No space left on device\nerror: cannot lock ref 'refs/remotes/upstream/versionfixtool': Unable to create '/var/opt/gitlab/git-data-file03/repositories/ViktorBodrogi/dokuwiki.git/./refs/remotes/upstream/versionfixtool.lock': No space left on device\n"", ""error: cannot lock ref 'refs/remotes/upstream/master': Unable to create '/var/opt/gitlab/git-data-file03/repositories/boost/boostdep.git/./refs/remotes/upstream/master.lock': No space left on device\n"", nil, nil, nil, nil, nil, nil, nil, nil, ""no repository for such path"", nil, ""Import timed out. Import took longer than 54000 seconds"", nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, ""Import timed out. 
Import took longer than 54000 seconds"", nil, ""fatal: could not read Username for 'https://github.com': terminal prompts disabled\n"", nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, ""no repository for such path"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""no repository for such path"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass"", ""undefined method `commit_id' for nil:NilClass""] -``` - -This seems to be indicating a full disk on file-03 - but there are only 01 and 02 file nodes in staging. - -I've cleared the set as specified in the runbook: https://ops.gitlab.net/gitlab-com/runbooks/blob/master/troubleshooting/large-pull-mirror-queue.md - the alert has still not cleared.",2.0 -15405079,2018-10-30 17:37:17.448,Ethan Strike access to staging.gitlab.com,Per https://gitlab.com/gitlab-com/access-requests/issues/18,1.0 -15404915,2018-10-30 17:27:38.738,Transfer troupe.co. domain to GitLab,"`gitter.im` was transfered soon after the purchase, but troupe.co was never transferred. - -This should be done as many of the emails reference the troupe.co domain. - -cc @MadLittleMods - -Reference https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/1594",1.0 -15402187,2018-10-30 15:36:00.054,301 Redirect: handbook/product/pricing/ to /handbook/ceo/pricing,"We've moved the page previously hosted at /handbook/product/pricing/ to /handbook/ceo/pricing/, and I'd appreciate some help setting up a redirect to catch any traffic pointed at the old URL. See https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15799/diffs for reference. - -This may be a redundant request as I thought I'd opened this issue already, but I didn't see anything when I went back and searched through open and closed issues. Apologies in advance if that's the case. - -Thanks!",1.0 -15381864,2018-10-29 21:32:36.140,Get Ephemeral Environments ready for Postgres work,"The Environments project is nearly ready for use in deploying arbitrary temporary environments. This task is to make the minimal changes necessary to make this project usable for deploying arbitrary environments. - -The intention is to use these temporary workspaces to do work on Chef recipes for provisioning Postgres with patroni clustering. In the near term it will also be used for migrating terraform code to versioned modules. 
- -This implements a first iteration of design document https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5285",8.0 -15369359,2018-10-29 14:40:07.382,301 redirect for /2016/11/30/setting-up-gitlab-ci-for-android-projects/,"Last week we published an updated and expanded version of an old blog post and need to redirect the link from the old post to the new post. - -| 404 link | redirect URI | -| ------ | ------ | -| https://about.gitlab.com/2016/11/30/setting-up-gitlab-ci-for-android-projects/ | https://about.gitlab.com/2018/10/24/setting-up-gitlab-ci-for-android-projects/ | - -Thanks!",1.0 -15364868,2018-10-29 12:59:01.524,Stackdriver pubsub metrics stopped reporting on the 19th,![Screen_Shot_2018-10-29_at_1.58.05_PM](/uploads/29f33061425a63a83574943d3334937c/Screen_Shot_2018-10-29_at_1.58.05_PM.png),1.0 -15290981,2018-10-26 13:24:13.786,Import big projects for customers,"## TL;DR - -Customers and potential ones may want to import a large project into GitLab.com. This isssue is to provide a workaround in a timely manner for them. - -## Background - -In the past, for big imports such as the K8S import, we had a dedicated instance that helped us achieve this. - -Large imports, will either timeout (we have a timeout of a few hours) or will get killed by the Sidekiq Memory Killer. This is becoming a bit more frequent as the app grows larger, and the Sidekiq process eats more memory. - -We improved the Import mechanism so it doesn't use as much memory at the cost of getting a bit slower (executing less transactions in a single commit, keeping less objects in memory). This can be tweaked, but either way, we'll either hit a memory issue or a timeout problem. The next big step would be to separate this into different independent workers in order to save memory, but it's not a small refactor. - -## Why? - -This will help Support/Sales - -## Current workarounds - -For a self-hosted instance, this is easy to workaround by either increasing the Sidekiq RSS memory allowed and/or disabling the worker that kills the imports when a certain timeout is reached. - -For GitLab.com, we can't tweak that. So we normally get rid of `pipelines` or heavy objects in the exported project in order to free a bit more memory. But we could still encounter the problem. - -## Example - -I managed to import a ~900MB export in my DO instance, averaging an RSS of 850MB, peak at 950MB (note that practically half of this is just loading the Rails app). The default max RSS at GitLab.com is ~~1GB~~ (2GB now). But this wouldn't work at GitLab.com - What's the difference? The thread where the import ran in my DO instance wasn't shared by any other thread, as that was the only thing the process was doing (the other threads were free). - -## Proposal - -For prospects or customers, use the deploy node to run a script that does this for us, without the memory limits we would normally hit. - -We'll need to: - -1. (James) Provide a script that calls the I/E logic provided an export archive and a target (easy). -1. (James) Document somewhere how this works. We may need to use tmux/screen. -1. (Infra) Confirm this is OK. I would suggest every time we have to do this we ping the oncall. -1. (Support) Confirm the identity of the customer to verify access to the target namespace. -1. (Support) Ping the oncall when this is required from a customer. - -## Alternative - -Maybe we can automate this hooking it into ChatOps and use a runner with enough memory :thinking: But haven't put much thought about it. 
- -## Related: - -https://gitlab.com/gitlab-com/support/dotcom/dotcom-escalations/issues/2 - -https://gitlab.com/gitlab-org/gitlab-ce/issues/52956 - -cc @cynthia @dawsmith @lmcandrew",1.0 -15275488,2018-10-26 01:22:12.117,Ephemeral Environments Demo Video,"Rather than trying to schedule a bunch of zoom calls to show how CI/CD works for [Ephemeral Environments](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5285), I've created this video so that everyone can watch it when it's convenient. I'm using this issue as a place to gather feedback. Please comment here, rather than youtube. - -[![Environments-Video](/uploads/e1e29bf2015ac24b268fb2f233bb7393/Environments-Video.png)](https://www.youtube.com/watch?v=XCIz75ffhlY&t=6s) - -A few things to note: - -- This initial iteration is in the context of building a [POC environment for patroni](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5342), but it can be useful as an easy way to experiment with [Kubernetes](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5158) and other things. - -- The Worflow in this demo doesn't yet take into account the [Desired git workflow](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5276) of using branches to create test environments. Issue #5276 and it's merge requests are the appropriate place for that conversation. - -- The code that the video skims over is visible at: https://ops.gitlab.net/gitlab-com/environments and if you are curious what is actually happening in the pipelines you can see the output at https://ops.gitlab.net/gitlab-com/environments/pipelines - -- We could use a lot more discussion on how this should all work. The design doc issue is #5285 and it's merge requests are still pending comments. - -- This project probably needs to move from my workspace to the ops instance - -/cc @gitlab\-com/gl\-infra - please click a reaction so I can keep track of who has watched this.",3.0 -15273047,2018-10-25 20:33:08.149,AWS admin access to staging for new SREs,"Per comments in #5322, some new SREs do not have access to the `staging` AWS account, with account ID `4745-2183-0347`. - -Please create IAM user accounts and assign the necessary role(s)/group(s) so that @ctbarrett, @skarbek, @dsylva, and @aamarsanaa are able to manage this account - -/cc @gitlab\-com/gl\-infra",1.0 -15270446,2018-10-25 17:56:08.625,postgres-* in staging is running out of disk space on /dev/sda,"The /tmp keeps filling up with deleted files, as show via -`lsof -n | grep deleted` - -Killing the lingering process removes them, but sometimes we need to restart postgres which is not the greatest of idea. - -Are we ignoring any alerts?",2.0 -15237984,2018-10-25 09:24:56.907,Remove workaround to not send params to elasticsearch for the api_json.log,"we currently strip the `params` field because of indexing issues on elasticsearch. -This issue is to remove that workaround when the fix makes it into a release, once https://gitlab.com/gitlab-org/gitlab-ce/issues/53155 . - -Putting this in the next milestone but we may need to push it out further depending on how long it takes for the fix to land.",1.0 -15193724,2018-10-24 04:54:42.002,AWS Access for @aamarsanaa,"I don't think I have access to our AWS account so creating this issue to keep track of it. This will also help me pick up AWS access request issues going forward. 
- -Assigning to on-call for help.",1.0 -15181465,2018-10-23 15:12:46.430,Consolidate PagerDuty Schedules,"Right now we have multiple schedules, one for each geographic region, which should be consolidated into a single schedule with multiple layers for the regions. We should clean this up and consolidate for reduced overhead and ease of understanding.",1.0 -15181087,2018-10-23 14:57:55.515,Clean up moved repos,"When we use the API call to migrate a project from one storage node to another, the project on the old storage node gets renamed to something like `projectname+unixtime+moved.git`. The call does NOT clean up the old repos and as such we probably have many old `+moved` repos lying around. - -We should audit for these repos and remove them.",3.0 -15180368,2018-10-23 14:42:17.652,rebalance repositories on storage shards,"Currently we have a few storage shards that are much more full than others. I think we should consider rebalancing repos across shards to be more evenly spread out. Ideally everything will be less than 70%. - -https://dashboards.gitlab.com/d/W_Pbu9Smk/storage-stats - -cc/ @gitlab\-com/gl\-infra for your consideration",5.0 -15179022,2018-10-23 13:54:56.968,create alert for failing consul service check,"We should always have one postgresql service check passing in consul for our postgres HA topology - -https://dashboards.gitlab.net/d/a988f2tmz/consul?orgId=1&from=now-30m&to=now&var-environment=gprd&var-prometheus=prometheus-01-inf-gprd&var-pg_service_nodes=All&var-pg_node_status=All&var-consul_server=consul-01-inf-gprd.c.gitlab-production.internal - -If we don't, this should definitely page the oncall as it means we will be unable to failover properly after a failure.",2.0 -15178924,2018-10-23 13:50:53.185,postgresql service check is failing on production,"It looks like this has been failing for some time: - -https://dashboards.gitlab.net/d/a988f2tmz/consul?orgId=1&from=1535089028546&to=1540313847444&var-environment=gprd&var-prometheus=prometheus-01-inf-gprd&var-pg_service_nodes=All&var-pg_node_status=All&var-consul_server=consul-01-inf-gprd.c.gitlab-production.internal - -What this means is that we are probably never going to succeed with a failover which puts our database HA topology at risk. - -The consul definition for the postgresql service: - -``` -""service"":{""name"":""postgresql"",""address"":"""",""port"":5432,""checks"":[{""script"":""/opt/gitlab/bin/gitlab-ctl repmgr-check-master"",""interval"":""10s""}]},""watches"":[{""type"":""keyprefix"",""prefix"":""gitlab/ha/postgresql/failed_masters/"",""handler"":""/opt/gitlab/bin/gitlab-ctl consul watchers handle-failed-master""}]} -``` - -on postgres-01 in production: - -``` -gitlab-+ 41999 2353 0 14:43 ? 00:00:00 /bin/sh -c /opt/gitlab/bin/gitlab-ctl repmgr-check-master -gitlab-+ 42000 41999 0 14:43 ? 00:00:00 /bin/bash /opt/gitlab/bin/gitlab-ctl repmgr-check-master -gitlab-+ 42001 42000 0 14:43 ? 00:00:00 /opt/gitlab/embedded/bin/ruby /opt/gitlab/embedded/bin/omnibus-ctl gitlab /opt/gitlab/embedded/service/omnibus-ctl* repmgr-check-master -gitlab-+ 42083 2353 0 14:44 ? 00:00:00 /bin/sh -c /opt/gitlab/bin/gitlab-ctl repmgr-check-master -gitlab-+ 42084 42083 0 14:44 ? 00:00:00 /bin/bash /opt/gitlab/bin/gitlab-ctl repmgr-check-master -gitlab-+ 42085 42084 1 14:44 ? 00:00:00 /opt/gitlab/embedded/bin/ruby /opt/gitlab/embedded/bin/omnibus-ctl gitlab /opt/gitlab/embedded/service/omnibus-ctl* repmgr-check-master -gitlab-+ 42251 2353 0 14:45 ? 
00:00:00 /bin/sh -c /opt/gitlab/bin/gitlab-ctl repmgr-check-master -gitlab-+ 42252 42251 0 14:45 ? 00:00:00 /bin/bash /opt/gitlab/bin/gitlab-ctl repmgr-check-master -gitlab-+ 42253 42252 2 14:45 ? 00:00:00 /opt/gitlab/embedded/bin/ruby /opt/gitlab/embedded/bin/omnibus-ctl gitlab /opt/gitlab/embedded/service/omnibus-ctl* repmgr-check-master -root 42357 42192 0 14:45 pts/2 00:00:00 grep repmgr-check-master -``` - -It appears this command is just hanging on production. - -Credentials are correct so the problem is with the command: - -``` -jarv@postgres-01-db-gprd.c.gitlab-production.internal:~$ sudo -u gitlab-consul /opt/gitlab/embedded/bin/psql -d gitlab_repmgr -h /var/opt/gitlab/postgresql -p 5432 -U gitlab-consul -could not change directory to ""/home/jarv"": Permission denied -psql (9.6.8) -Type ""help"" for help. - -gitlab_repmgr=> -``` -",2.0 -15159893,2018-10-23 04:24:41.880,Refactor PagerDuty Cog Functions into GitLab ChatOps,"There's little sense in putting effort into rewriting the Cog PagerDuty function that we all use as that's what it would require given the changes in gems, API, and the abandonment of Cog. Rather than refactor the function into something we need to dispose of, it needs to be written in format commutable with our CI driven ChatOps. - -Current Requirements for matched functionality are: -- [x] It should accept a bare request for 'on call' and return who is on call for all services. -- [x] It should accept a service name or partial name match and return who is on call for the service.",4.0 -15132778,2018-10-22 09:09:40.487,Investigation for new haproxy draining deployment,"The new haproxy draining logic was used for the first time and it appears to be successful as we did not see a normal spike in errors during deployment. - -There are a few other anomolies that we should investigate to follow-up. These traffic changes may be related to healthchecks which due account for a large amount of workhorse and backend traffic. The spikes in api traffic are still a bit of a mystery. - - - -* Traffic spikes on the api at 07:47:45 -![Screen_Shot_2018-10-22_at_10.54.25_AM](/uploads/0a7c91af005d81b9271c085c22be1305/Screen_Shot_2018-10-22_at_10.54.25_AM.png) - - -* Dip in workhorse web traffic, web 2xx and web 3xx at 07:40:25 and ~07:53 and ~07:55 -![Screen_Shot_2018-10-22_at_10.54.02_AM](/uploads/b88f0005340f1ee1418f3eab337428ce/Screen_Shot_2018-10-22_at_10.54.02_AM.png) -![Screen_Shot_2018-10-22_at_10.53.39_AM](/uploads/d4046e00b2d44b6e3a53cb9868accacc/Screen_Shot_2018-10-22_at_10.53.39_AM.png) -![Screen_Shot_2018-10-22_at_10.53.11_AM](/uploads/01147c37dc4ea0a7fb823a46b99c3026/Screen_Shot_2018-10-22_at_10.53.11_AM.png)",1.0 -15116196,2018-10-20 23:58:05.229,301 Redirect for partners,"Previously, we redirected `/partners/` to `/applications/` but `/partners/` should be the path. `/integrations/` should `/partners/integrate/` https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15698 fixes the paths. We need some 301s put in place and the deployment coordinated. 
- -## Create chef MR -- [x] remove 301 for `/partners` to `/applications` -- [x] create 301 for `/applications` to `/partners` -- [x] create 301 for `/integrations` to `/partners/integrate` - -## Deploy list -- [x] Stop chef client -- [x] Merge chef MR -- [x] Merge website MR https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15698 -- [x] When website MR is close to deployed, start chef client to deploy the 301s",2.0 -15116140,2018-10-20 23:38:49.883,Update comparison 301,"https://about.gitlab.com/comparison is 404 - -The rule for `/comparison/` should be updated to `/comparison`",1.0 -15098617,2018-10-19 17:10:19.359,Improve logging content of GitLab Pages,"from production incident - https://gitlab.com/gitlab-com/gl-infra/production/issues/526. It was very difficult to look at request logs for Pages for this incident. - -Get hostname, path and remote IP for requests into the logs we ship.",3.0 -15098019,2018-10-19 16:26:34.041,Create terraform setup for load balancers Install,"* The load balancers will be one for RW ( read write) traffic and another one for RO ( read only ) traffic. -* The technology suggested in GCP is ILB. -* With backend(query to know what the IP is RW traffic and the RO traffic ) and health configuration.",2.0 -15097962,2018-10-19 16:22:31.796,Create chef cookbook for Patroni Install and setup,"* Install the package -* Setup each node - * Add all the actual parameters from production cluster - * Setup all the monitoring metrics for the new postgresql instances ( move / duplicate the actual ones ) - * Define rules for autofailover ( patroni.yml) - * Create pgbouncer users -* Create /check mechanism to restart in case of the service dies ( runit / initd/ systemd) -* Create monitoring for the service, each node and health status. -* Create process to setup the first node getting data from Prod ( cookbook ) - * Could be using pg_basebackup - * Could be using e-wally -* Create process to create patroni nodes ( cookbook ) -Pgbouncer : -* Add all the actual parameters from production cluster -* Create /check mechanism to restart in case of the service dies ( runit / initd) -* Setup all the monitoring metrics for the new pgbouncer instances ( move / duplicate the actual ones ) -* Create/check cronjob to updage the access of new users to connect by pgbouncer to avoid direct connections or authquery. -* Update Watcher script to consul to update the pgbouncer entry points for pgobuncers to master database only. - -The ongres document how to install is here : -https://docs.google.com/document/d/1NTqnRzT-elPEmPyq_sNC80sULFsCUXX7F10EVOFyZK0/edit?usp=sharing - -",4.0 -15097912,2018-10-19 16:18:55.707,Create chef cookbook for consul agent Install and setup,"* Install a standard version of Consul agent. -* Create /check mechanism to restart in case of the service dies ( runit / initd) -* Add monitoring metrics of each consul agent and his health. -* Generate config to connect the agent to the master if needed.",1.0 -15097887,2018-10-19 16:16:39.691,Create chef cookbook for consul server Install,"* Install standard version of Consul server. -* Create /check mechanism to restart in case of the service dies ( runit / initd) -* Initially setup the cluster without autospawn of instances ( create 5 servers considering 3 for consensus for consul). -* Add monitoring metrics of each consul server and his health. -* Usage of vips meaning Ip’s reserved. In gcp we should use Static Internal Ip’s. - * Seems we need to setup this in the startup of the server. 
- * The first node of consul - the bootstrap - is a good practice to have a fixed IP. -* Explore if Instance group makes is vital for the first iteration. - - -- Can we use be terraform consul module? -https://registry.terraform.io/browse?provider=google -https://registry.terraform.io/modules/hashicorp/consul/google/0.3.0",1.0 -15097847,2018-10-19 16:13:46.167,Create chef cookbook for pgbouncer Install,* Install the standard package of pgbouncer from the community and same version from production. Will be installed in all the boxes that we have postgresql.,1.0 -15097793,2018-10-19 16:09:37.688,Create chef cookbook for Postgresql Install,"* Install a standard package of postgresql from the community and same version from production - -* Create Postgres user in the OS - -Could we use images? ",3.0 -15088152,2018-10-19 10:52:48.150,Workaround for GitHub importer bug,"Apply a database-level fix for affected projects for GitLab.com. - -Context: https://gitlab.com/gitlab-org/gitlab-ce/issues/51817#note_109112277",2.0 -15087137,2018-10-19 10:29:00.120,Database Reviews,"* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22430#note_109954169 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7806#note_109966838 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8000#note_109977095 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/issues/45522#note_110188925 - -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22380#note_109456537 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22404 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8012 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/7990 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22482 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22563 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22433#note_110337738 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22307 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22143 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22307 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22292#note_111772401 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22292#note_111766801 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22143 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22713 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22606 and https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8105 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8016 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22482#note_113802530 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22013#note_110236879 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6947#note_111236178 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22694 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22433/diffs#note_113259877 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22522 / https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8144/diffs#note_113443391 -* [x] https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/6878#note_113587368 -* [x] Monday/Kamil https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22380 -* [x] Monday/Kamil https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22433 -* [x] https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/22795",8.0 -15065301,2018-10-18 15:22:08.310,Document setup of 
Patroni,"Considering that we have a postgresql cluster -We need to create a chef cookbook to setup Patroni in the environment. -Please list the steps here.",2.0 -15065336,2018-10-18 15:19:44.377,create a project to create a minimun cluster for Patroni POC,"Please create a project GCP to allow Ongres to generate a POC of Patroni. -Please minimize the usage of resoureces as possible.",1.0 -15063833,2018-10-18 14:31:44.730,Recover from accidential label deletion,This is the infra issue to track weight for https://gitlab.com/gitlab-com/gl-infra/production/issues/509#note_109823129., -15042790,2018-10-18 01:23:30.832,301 GitHub URL,"https://about.gitlab.com/comparison/github-com-vs-gitlab.html redirects to 404. - -A mistake I made on [line 161](https://docs.google.com/spreadsheets/d/17cU2VUlIIaw9LEU1VnPuNrA2uBos3UAD8CItZP9ESzo/edit#gid=0) - -/comparison/github-com-vs-gitlab.html /devops-tools/github-com-vs-gitlab.html - -Should be - -/comparison/github-com-vs-gitlab.html /devops-tools/github-vs-gitlab.html - -@ahanselka can you fix?",2.0 -15033485,2018-10-17 15:04:43.006,Get packaged Omnibus downloads counts for September '18,"Please obtain the packaged Omnibus downloads count for the month of September, 2018. - -- Instructions are in [usage statistics sheet](https://docs.google.com/spreadsheets/d/1ujEmxvQQXjqFHHylZUwJw8RdvLdGBnKFzZLwtAB7dYo/edit#gid=463675429) shows (among others). Goal is cell AU3. - -![image](/uploads/b73b0fe569450ac1182d6f2ffa42b94c/image.png) - -Thanks a lot!",1.0 -15012329,2018-10-17 03:46:21.212,Add alerting for repositories mirrored on github,"Assigning to @nolith since he solved https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/3703. @erushton, please reassign in case there is someone more -appropriate. - -CC @marin to investigate if this is related to a recent release - -CC @sytses @markpundsack @eliran.mesika @bjung @dzaporozhets @axil @dhavens @pritianka ",4.0 -14999517,2018-10-16 15:32:21.320,Dynamically Populate HA Proxy Backends,"We currently have Consul on all of our production nodes, as a first iteration I am proposing that we: - -1) publish services on the web fleet -2) dynamically update HA Proxy backends based upon service availability",2.0 -14998558,2018-10-16 15:05:53.128,Gitter's SSL Certificates need renewal (2018),"Gitter uses a wildcard certificate. This is due to expire 15 November 2018. - -![image](/uploads/23c15d6f1e34b23c04a8479d642dae4e/image.png) - -cc @dawsmith @Finotto",3.0 -14971954,2018-10-15 21:46:53.801,Set up a redirect for /handbook/people-operations/group-conversations,"We've renamed Functional Group Updates (FGUs) to Group Conversations. - -In [this merge request](https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/15506) you'll see that I've updated the URL for the People Operations page on Group Conversations and attempted to fix any links broken as a result, but I think we should set up a redirect just in case other links point to the page that I can't see. - -The page /handbook/people-operations/functional-group-updates/ should redirect **to** /handbook/people-operations/group-conversations/",1.0 -14971594,2018-10-15 21:13:29.882,Admin access on forum.gitlab.com,Please enable admin access for my account `ctbarrett` on [forum.gitlab.com](https://forum.gitlab.com),1.0 -23422847,2019-08-02 08:50:33.140,Delivery: MTTP Metric,"Define, develop and track `Mean-Time-to-Production` (MTTP) metric as a KPI for both Infrastructure as a department and Scalability as a team. Iterate as necessary. 
- -Update URL, Health, Maturity and Next Steps on the Handbook's [Infrastructure Performance Indicators](https://about.gitlab.com/handbook/engineering/infrastructure/performance-indicators/#mean-time-to-production- mttp) page as necessary.",2.0 -23409359,2019-08-01 17:08:09.216,Update to chef 14.x,"* Update chef-client to v14.x on the `pre` environment: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1558 -* Identify and fix all issues on `pre`: No issue identified -* Deploy all fixes to `pre`: No fixes required -* Verify `pre` works: :white_check_mark: -* Deploy across all our environments all non-breaking fixes: No fixes required -* Update chef-client to v14.x and breaking fixes across all our environments: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1559 - -First step of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7406",3.0 -23406508,2019-08-01 15:58:00.921,RCA Gitaly latency below SLO," - -Incident: gitlab-com/gl-infra/production#1013 - -## Summary - -A file descriptor leak in gitaly on file-23 caused high resource consumption and thus very high latencies for all gitaly operations on this host. -This was causing high latencies for all projects on this host while all other projects stayed unaffected. -Some projects of severely blocked customers have been moved to another node during the investigation. - -- Service(s) affected : ~""Service:Gitaly"" - -- Team attribution : Gitaly - -- Minutes downtime or degradation : 270 - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - Very high Gitaly latencies for projects hosted on file-23 -- Who was impacted by this incident? - - all customers with projects on file-23 (some big customers reported issues) -- How did the incident impact customers? - - all gitaly based operations, like merge requests, have been very slow -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -Gitaly P99 latency -![image](/uploads/3535cbc05aa5452b6172c5be0458a4bc/image.png) - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - Gitaly latency SLO alert -- Did alarming work as expected? - - the Gitaly latency SLO alert did not go to pagerduty, so it wasn't seen by the EOC immediately - - if we had alarming on gitaly file descriptor count, we could have seen issues days before (since 7/18) -- How long did it take from the start of the incident to its detection? - - customers reported issues since 7/18 (the start of the fd leakage), so it took 13 days -- How long did it take from detection to remediation? - - 258m -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) - - while 503 issues on file-23 were [identified on 7/18 already](https://gitlab.com/gitlab-com/customer-success/account-triage/issues/119#note_194548970) we didn't take action - -## Root Cause Analysis - -Some projects suffered from high latencies for Gitaly-related operations (like MRs). - -1. Why? 
- Because Gitaly had higher latencies for some projects. -2. Why? - Because Gitaly on file-23 had issues, which affected only projects hosted on that node. -3. Why? - Because the Gitaly process was using a lot CPU resources. -4. Why? - Because Gitaly had 10,000 hanging cat-file processes. -5. Why? - Because Gitaly was leaking file descriptors. - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -- Alerting on gitaly file descriptors -- Make Gitaly SLO alerts go to pagerduty -- We should have followed up after we identified 503 errors on file-23 days ago -- We should have a clear escalation path from support to SRE - - - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Incldue the named individual who owns the delivery of the corrective action. - -* [x] prevent cat-file leak https://gitlab.com/gitlab-org/gitaly/merge_requests/1390 -* [x] alert on high gitaly file descriptor count https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7390 -* [x] page for gitaly SLO alerts https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7391 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",3.0 -23396890,2019-08-01 14:03:17.302,RCA Job queue pipeline_processing:pipeline_process is growing," - -Incident: gitlab-com/gl-infra/production#1014 - -## Summary - -CI jobs took very long to complete because jobs in the pipeline_processing:pipeline_process sidekiq queue piled up. -2 pipelines caused a high amount of sidekiq jobs, sidekiq pipeline nodes were maxing out their CPU, pipeline_processing jobs are causing many SQL calls and the pgbouncer pool for sidekiq was becoming saturated. - -RCA doc: https://docs.google.com/document/d/15UPwfmUFVmx6jtghlUoOod3JAGVa4BRycNqFA1OSrjs/edit# - -- Service(s) affected : ~""Service:Sidekiq"" - -- Team attribution : - -- Minutes downtime or degradation : 240 - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - delay of CI jobs -- Who was impacted by this incident? - - all customers CI pipelines -- How did the incident impact customers? - - preventing them from running CI tests/deploys -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -![image](/uploads/4a4c80198cb01859443acdb8be55209d/image.png) - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - support reporting customer issues with CI pipelines -- Did alarming work as expected? - - we got [Sidekiq single_node_cpu alerts](https://gitlab.slack.com/archives/CD6HFD1L0/p1564581102209800) and [pgbouncer connection_pool saturation alerts](https://gitlab.slack.com/archives/CD6HFD1L0/p1564585485211100) but no pages. We did __not__ get an alert for the queue size which would have been a clear indication of the issue. -- How long did it take from the start of the incident to its detection? 
- - 80m from queue starting to rise till first alert for sidekiq CPU -- How long did it take from detection to remediation? - - 240m -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team member wasn't page-able, ...) - - EOC became aware of the incident by reports from customer support and not from being paged for alerts. - - It was hard to find someone to help with that issue. - -## Root Cause Analysis - -The purpose of this document is to understand the reasons that caused an incident, and to create mechanisms to prevent it from recurring in the future. A root cause can **never be a person**; the write-up has to refer to the system and the context rather than the specific actors. - -Follow the ""**5 whys**"" in a **blameless** manner as the core of the root-cause analysis. - -For this it is necessary to start with the incident, and question why it happened. Keep iterating asking ""why?"" 5 times. While it's not a hard rule that it has to be 5 times, it helps to keep the questions digging deeper toward the actual root cause. - -Keep in mind that one ""why?"" may yield more than one answer; consider following the different branches. - -### Example of the usage of ""5 whys"" -The vehicle will not start. (the problem) - -1. Why? - The battery is dead. -2. Why? - The alternator is not functioning. -3. Why? - The alternator belt has broken. -4. Why? - The alternator belt was well beyond its useful service life and not replaced. -5. Why? - The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause) - -## What went well - -Start with the following: - -- Identify the things that worked well or as expected. -- Any additional call-outs for what went particularly well. - -## What can be improved - -Start with the following: - -- Using the root cause analysis, explain what can be improved to prevent this from happening again. -- Is there anything that could have been done to improve the detection or time to detection? -- Is there anything that could have been done to improve the response or time to response? -- Is there an existing issue that would have either prevented this incident or reduced the impact? -- Did we have any indication or beforehand knowledge that this incident might take place? - -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Include the named individual who owns the delivery of the corrective action. 
- -* [x] increase CPU for sidekiq nodes https://gitlab.com/gitlab-com/gl-infra/production/issues/997 -* [ ] review pgbouncer pool config https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7403 -* [x] optimize PipelineProcessWorker https://gitlab.com/gitlab-org/gitlab-ce/issues/65414 -* [ ] deduplicate sidekiq jobs https://gitlab.com/gitlab-org/gitlab/-/issues/30585 -* [x] define sidekiq SLOs https://gitlab.com/gitlab-org/gitlab/-/issues/30174 -* [ ] simplify sidekiq setup https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7219 -* [ ] improve sidekiq observability -* [ ] prevent customers from causing platform issues by adding per-client limits in all places - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",5.0 -23366845,2019-08-01 03:47:50.567,rsyslog not creating system log files at bootstrap,"Noticed while building sidekiq nodes for https://gitlab.com/gitlab-com/gl-infra/production/issues/963: after initial bootstrap, /var/log/syslog, /var/log/auth.log and so on did not exist, even though rsyslog was running. It required a further restart of rsyslog (or a reboot) for them to be created and logged to. This means system logs are neither logged locally, nor shipped to ElasticSearch, until manual activity occurs. - -I have done no further investigation yet, no idea how simple or complex this will be to fix.",3.0 -23362896,2019-07-31 21:05:46.279,chef-client on dev.gitlab.org not working due to runit,"@alejandro Does this have something to do with https://gitlab.com/gitlab-cookbooks/gitlab-monitor/merge_requests/27? - -``` -# sudo chef-client -[2019-07-31T21:01:51+00:00] INFO: Forking chef instance to converge... -Starting Chef Client, version 12.19.36 -[2019-07-31T21:01:52+00:00] INFO: *** Chef 12.19.36 *** -[2019-07-31T21:01:52+00:00] INFO: Platform: x86_64-linux -[2019-07-31T21:01:52+00:00] INFO: Chef-client pid: 70984 -[2019-07-31T21:01:57+00:00] INFO: Run List is [role[base-debian], role[dev-gitlab-org], role[allow-ssh-git], recipe[gitlab-exporters::node_exporter], role[gitlab-wale]] -[2019-07-31T21:01:57+00:00] INFO: Run List expands to [gitlab-server::ohai-plugin-path, gitlab-server::packages, gitlab-server::timezone-utc, gitlab-server::disable_history, gitlab-server::cron-check-authorized_keys2, gitlab-server::aws-get-public-ip, gitlab-server::get-public-ip, apt::unattended-upgrades, gitlab-server::locale-en-utf8, gitlab-server::ntp-client, gitlab-server::screenrc, gitlab-server::updatedb, gitlab_users::default, gitlab_sudo::default, gitlab-openssh, chef_client_updater, chef-client, gitlab-exporters::node_exporter, gitlab-server::rsyslog_client, postfix::_common, postfix::aliases, gitlab-server::debian-editor-vim, gitlab-server::dpkg-defaults, gitlab-iptables, gitlab-security::rkhunter, gitlab-security::auditd, omnibus-gitlab::default, gitlab-openssh::default, gitlab-server::systemd-logind, gitlab-server::package-auto-upgrade, gitlab-mtail::gitlab-shell, gitlab-mtail::rails, gitlab-mtail::unicorn, gitlab-exporters::gitlab_version_exporter, gitlab-server::hack_google_creds, gitlab_fluentd::default, gitlab_fluentd::gitaly, gitlab_fluentd::nginx, gitlab_fluentd::pages, gitlab_fluentd::postgres, gitlab_fluentd::rails, gitlab_fluentd::shell, gitlab_fluentd::sidekiq, gitlab_fluentd::unicorn, gitlab_fluentd::workhorse, gitlab-server::ssh-users, gitlab_wale::default] -[2019-07-31T21:01:57+00:00] INFO: Starting Chef Run for dev.gitlab.org -[2019-07-31T21:01:57+00:00] 
INFO: Running start handlers -[2019-07-31T21:01:57+00:00] INFO: Start handlers complete. -[2019-07-31T21:01:58+00:00] INFO: HTTP Request Returned 404 Not Found: -[2019-07-31T21:01:58+00:00] INFO: HTTP Request Returned 404 Not Found: -[2019-07-31T21:01:58+00:00] INFO: Error while reporting run start to Data Collector. URL: https://chef.gitlab.com/organizations/gitlab/data-collector Exception: 404 -- 404 ""Not Found"" (This is normal if you do not have Chef Automate) -resolving cookbooks for run list: [""gitlab-server::ohai-plugin-path"", ""gitlab-server::packages"", ""gitlab-server::timezone-utc"", ""gitlab-server::disable_history"", ""gitlab-server::cron-check-authorized_keys2"", ""gitlab-server::aws-get-public-ip"", ""gitlab-server::get-public-ip"", ""apt::unattended-upgrades"", ""gitlab-server::locale-en-utf8"", ""gitlab-server::ntp-client"", ""gitlab-server::screenrc"", ""gitlab-server::updatedb"", ""gitlab_users::default"", ""gitlab_sudo::default"", ""gitlab-openssh"", ""chef_client_updater"", ""chef-client"", ""gitlab-exporters::node_exporter"", ""gitlab-server::rsyslog_client"", ""postfix::_common"", ""postfix::aliases"", ""gitlab-server::debian-editor-vim"", ""gitlab-server::dpkg-defaults"", ""gitlab-iptables"", ""gitlab-security::rkhunter"", ""gitlab-security::auditd"", ""omnibus-gitlab::default"", ""gitlab-openssh::default"", ""gitlab-server::systemd-logind"", ""gitlab-server::package-auto-upgrade"", ""gitlab-mtail::gitlab-shell"", ""gitlab-mtail::rails"", ""gitlab-mtail::unicorn"", ""gitlab-exporters::gitlab_version_exporter"", ""gitlab-server::hack_google_creds"", ""gitlab_fluentd::default"", ""gitlab_fluentd::gitaly"", ""gitlab_fluentd::nginx"", ""gitlab_fluentd::pages"", ""gitlab_fluentd::postgres"", ""gitlab_fluentd::rails"", ""gitlab_fluentd::shell"", ""gitlab_fluentd::sidekiq"", ""gitlab_fluentd::unicorn"", ""gitlab_fluentd::workhorse"", ""gitlab-server::ssh-users"", ""gitlab_wale::default""] -[2019-07-31T21:01:59+00:00] INFO: HTTP Request Returned 412 Precondition Failed: {""message""=>""Unable to satisfy constraints on package runit due to solution constraint (gitlab-mtail >= 0.0.0). Solution constraints that may result in a constraint on runit: [(gitlab-exporters = 1.3.0) -> (runit ~> 4.3.0)], [(gitlab-mtail = 0.1.18) -> (runit ~> 3.0.0)]"", ""unsatisfiable_run_list_item""=>""(gitlab-mtail >= 0.0.0)"", ""non_existent_cookbooks""=>[], ""most_constrained_cookbooks""=>[""runit = 1.7.8 -> [(packagecloud >= 0.0.0)]""]} - -================================================================================ -Error Resolving Cookbooks for Run List: -================================================================================ - -Cookbook dependency resolution error: -------------------------------------- -Error message: Unable to satisfy constraints on package runit due to solution constraint (gitlab-mtail >= 0.0.0). Solution constraints that may result in a constraint on runit: [(gitlab-exporters = 1.3.0) -> (runit ~> 4.3.0)], [(gitlab-mtail = 0.1.18) -> (runit ~> 3.0.0)] -You might be able to resolve this issue with: - 1-) Removing cookbook versions that depend on deleted cookbooks. - 2-) Removing unused cookbook versions. - 3-) Pinning exact cookbook versions using environments. 
- -Expanded Run List: ------------------- -* gitlab-server::ohai-plugin-path -* gitlab-server::packages -* gitlab-server::timezone-utc -* gitlab-server::disable_history -* gitlab-server::cron-check-authorized_keys2 -* gitlab-server::aws-get-public-ip -* gitlab-server::get-public-ip -* apt::unattended-upgrades -* gitlab-server::locale-en-utf8 -* gitlab-server::ntp-client -* gitlab-server::screenrc -* gitlab-server::updatedb -* gitlab_users::default -* gitlab_sudo::default -* gitlab-openssh -* chef_client_updater -* chef-client -* gitlab-exporters::node_exporter -* gitlab-server::rsyslog_client -* postfix::_common -* postfix::aliases -* gitlab-server::debian-editor-vim -* gitlab-server::dpkg-defaults -* gitlab-iptables -* gitlab-security::rkhunter -* gitlab-security::auditd -* omnibus-gitlab::default -* gitlab-openssh::default -* gitlab-server::systemd-logind -* gitlab-server::package-auto-upgrade -* gitlab-mtail::gitlab-shell -* gitlab-mtail::rails -* gitlab-mtail::unicorn -* gitlab-exporters::gitlab_version_exporter -* gitlab-server::hack_google_creds -* gitlab_fluentd::default -* gitlab_fluentd::gitaly -* gitlab_fluentd::nginx -* gitlab_fluentd::pages -* gitlab_fluentd::postgres -* gitlab_fluentd::rails -* gitlab_fluentd::shell -* gitlab_fluentd::sidekiq -* gitlab_fluentd::unicorn -* gitlab_fluentd::workhorse -* gitlab-server::ssh-users -* gitlab_wale::default - -Platform: ---------- -x86_64-linux - - -Running handlers: -[2019-07-31T21:01:59+00:00] ERROR: Running exception handlers -Running handlers complete -[2019-07-31T21:01:59+00:00] ERROR: Exception handlers complete -Chef Client failed. 0 resources updated in 07 seconds -[2019-07-31T21:01:59+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out -[2019-07-31T21:01:59+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report -[2019-07-31T21:01:59+00:00] ERROR: 412 ""Precondition Failed"" -[2019-07-31T21:01:59+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1) -```",1.0 -23397689,2019-07-31 20:23:45.403,Review PgBouncer Pool configuration and architecture,"Related issue [1014](https://gitlab.com/gitlab-com/gl-infra/production/issues/1014), from July 31st. - -It is being observed that the current Database Pool configuration may be exhausting current Datanode CPU and memory capacity. That is, the settings of the number of connections allowed from each node and pool can be potentially harmful to the leader's performance. - -Right now, `max_connections` in the cluster is set at 300, a conservative number yet very above what is the best theoretical connection/throughput relation peak. The way that we calculate the _theoretical connection capacity_ is: `(cores / % effective usage) * scale_factor`. Effective usage is the client busy percentage, which can be calculated around 95% as this is a queue processing; scale_factor is a coefficient between 2 and 4. Within this, the best suitable amount of maximum _active_ connections would be around 130 and 200. - -But, in the PgBouncer side, we currently have two nodes, each with 3 pools (2 intensive: production and sidekiq). Production pool is 50 and sidekiq is 75 on each node, meaning that there are potentially 250 active connections. - -From OnGres we need to: - -- Define the pool size on sidekiq and production for reducing the performance degradation occurrences that happened during the last days. 
-- Splitting pools can offer better resilience when one of the pools is generating waits that affect other queues in the node. -- Revisit other PgBouncer configuration. For instance: - - Current: `min_pool_size = 0`, recommended `min_pool_size = 20`. This opens a minimum amount of persistent connections, decreasing any possible startup time when issuing new connections. -- If the split is necessary, establish the necessary amount of nodes per each pool and its configuration. - - -Related graphs: - -https://prometheus.gprd.gitlab.net/graph?g0.range_input=2w&g0.expr=sum(sidekiq_queue_size)%20by%20(fqdn)&g0.tab=0 -https://dashboards.gitlab.com/d/9GOIu9Siz/sidekiq-stats?orgId=1&fullscreen&panelId=71&from=now-10d&to=now - - -IOWait on Pgbouncer https://dashboards.gitlab.com/d/PwlB97Jmk/pgbouncer-overview?from=1564563814425&to=1564571034282&fullscreen&panelId=6",0.0 -23350105,2019-07-31 13:01:20.419,Gitaly SLO alerts should go to pagerduty,Gitaly latency SLO alerts are only going to Slack (https://gitlab.slack.com/archives/CD6HFD1L0/p1564559043204400) but not to pagerduty. We should be paged for those alerts.,3.0 -23349885,2019-07-31 12:55:40.953,Alert on gitaly file descriptors,"Incident https://gitlab.com/gitlab-com/gl-infra/production/issues/1013 showed that alerting on file descriptors would help to detect incidents earlier. - -![image](/uploads/24356df96aff1adb137913083789a4fa/image.png)",3.0 -23328620,2019-07-30 23:20:30.333,Setup/update Terraform Admin Project,"Following a slight modification of the pattern [here](https://cloud.google.com/community/tutorials/managing-gcp-projects-with-terraform#add_organizationfolder-level_permissions), I propose utilizing the `env-zero` project as our central place to manage project permissions and service accounts for automation under #6810. @devin is already doing the same for https://ops.gitlab.net/gitlab-com/group-projects. - -As for the modification, instead of using a single admin-level service account to provision resources in _all_ projects, we should create a service account for each project (environment), with permissions scoped to allow access to resources only within that project. Once a project is bootstrapped, the relevant credentials are then added to the environment-specific CI variables for automated terraform runs within that project. However, I'm still not sure if it is possible to have the project-specific service accounts in the terraform admin project, or within the projects they will be used to manage. - -This issue is to track research/discovery and effort to implement this type of setup, including documentation and/or [a bootstrap script](https://github.com/monterail/terraform-bootstrap-example) to work around circular dependencies so that this project can also be managed in terraform, along with (possibly) relocating and consolidating the state files for all managed child projects (same questions/rationale as the point above about where to scope the project-specific service account). 
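- -To make the per-environment scoping concrete, here is a minimal sketch of what one environment's runner service account could look like in Terraform (illustrative only: the project IDs, account names, and the roles/editor binding are assumptions, not a settled design): - -``` -# Hypothetical per-environment service account, defined from the admin project -resource ""google_service_account"" ""gstg_terraform"" { -  project      = ""terraform-admin""  # assumed admin project id -  account_id   = ""terraform-gstg"" -  display_name = ""Terraform runner for gstg"" -} - -# Grant it access only within the environment project it will manage -resource ""google_project_iam_member"" ""gstg_terraform"" { -  project = ""gitlab-staging-1""  # assumed target project id -  role    = ""roles/editor"" -  member  = ""serviceAccount:${google_service_account.gstg_terraform.email}"" -} -``` - -Whether these resources live in `env-zero` itself or inside each target project is exactly the open question above.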
- -/cc @gitlab-com/gl-infra FYI for discussion",2.0 -23326253,2019-07-30 20:18:59.618,RCA Deep Dive #7280,"@craig to perform a walk through of the incident during the DNA meeting on Wednesday, 31 July.",1.0 -23321292,2019-07-30 17:15:26.976,Postgres-related services nearing capacity in CaPlan dashboard,"Postgres-related services are *nearing capacity* on the [Caplan dashboard](https://dashboards.gitlab.net/d/TeJU3AIWz/capacity-planning?orgId=1): - -![Screen_Shot_2019-07-30_at_7.47.34_PM](/uploads/f4075aab00becd7c66ba705f1055d97b/Screen_Shot_2019-07-30_at_7.47.34_PM.png) - -Please investigate, and either fix the dashboard or create issues to address specific caplan concerns. Ongres should be aware of how the caplan metric is calculated so they can help. - -Link said issues to this issue and tag with `Observability` and `caplan`.",2.0 -23312022,2019-07-30 15:21:01.575,Move gstg back to wal-e,"We decided to discontinue testing wal-g in staging because it hinders our ability to test changes to wal-e before we release them to production. - -Background: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7359 - -> From a high level perspective, we transitioned over to use wal-g in staging in the beginning of 2019. We hoped to be able to use it in production too, to alleviate the pressure on the primary (wal-g is capable of taking basebackups off of a secondary, wal-e is not). However, we kept running into wal-g issues (with GCS in particular) and kept waiting on wal-g releases and so on. wal-g never made it to production because of that. - -The task here is to actually move gstg back to use wal-e: - -* [x] Reconfigure gstg for wal-e -* [x] Deploy change (daily backup, WAL push) -* [x] ~~Resync DR replicas (they will not like this, we'll just resync with a (base)backup and use wal-e going forward).~~ Not needed as the transition was gapless. Letting them catch up now. -* [x] Have backup testing project use wal-e for staging, too",2.0 -23292414,2019-07-30 08:30:12.183,Use native instrumentation for camoproxy,"go-camo now has native Prometheus support. - -* [x] Enable monitoring in Prometheus scrape config. -* [x] Cleanup textfile exporter. -* [x] Remove obsolete `--stats` flag from gitlab-camoproxy cookbook. -* [x] Add metrics to upstream go-camo to replace mtail metrics. -* [x] Remove mtail watcher. -* [x] Create Grafana dashboard.",3.0 -23284858,2019-07-30 03:58:04.685,Right-size redis-persistent and redis-sidekiq nodes,"The redis-sidekiq nodes were sized the same as the previous redis-persistent nodes, as n1-standard-8, for safety during the initial migration. This gives them 8 CPU and 30GB of RAM. - -All of our redis nodes generally need no more than 2 CPU (1 for redis, 1 for other things); the sizing is for RAM. - -Redis on the redis-sidekiq nodes is currently using <1GB RES + VIRT, and another 7GB of general cache. 8GB RAM total would be more than enough. And n1-standard-2 should be sufficient. - -On the old redis-persistent nodes, the running system is using about 9GB with 12GB of general cache. We might be able to get away with 15GB RAM for this, but I'm less confident, and would like to see some more usage before deciding that.",2.0 -23271205,2019-07-29 15:40:10.195,GitLab.com Performance Metric,"Per https://about.gitlab.com/handbook/engineering/infrastructure/performance-indicators/#gitlab-com-performance, we need to produce a performance metric that _reflects the performance of GitLab as experienced by users_. 
Something along these lines was outlined in the [Service Levels and Error Budgets](https://about.gitlab.com/handbook/engineering/infrastructure/blueprint/service-levels-error-budgets/) blueprint. - -Let's do a first (if possibly rough) iteration, whether with Pingdom or Thousand Eyes. In particular, we can start using the [SLIs outlined in the blueprint](https://about.gitlab.com/handbook/engineering/infrastructure/blueprint/service-levels-error-budgets/#slis). - -Please update the Infrastructure Performance Indicators handbook page to reflect the health, maturity and URL for the data.",3.0 -23174740,2019-07-26 00:24:50.159,postgres-dr-delayed-01-db-gstg replication is broken,"``` -Alertname: PostgreSQL_ReplicationLagTooLarge_DelayedReplica - Channel: database - Env: gstg - Environment: gstg - Fqdn: postgres-dr-delayed-01-db-gstg.c.gitlab-staging-1.internal -``` - -```note: Replication lag on server postgres-dr-archive-01-db-gstg.c.gitlab-staging-1.internal:9187 is currently 9d 2h 7m 11s``` - -From the logs: -``` -2019-07-26_00:18:07.60831 ERROR: 2019/07/26 00:18:07.608213 Archive '0000006200003C7500000073' does not exist. -2019-07-26_00:18:07.93544 ERROR: 2019/07/26 00:18:07.935350 Archive '00000063.history' does not exist. -```",1.0 -23167854,2019-07-25 18:19:12.056,Create Runbook documentation for all types of Access Request,The existing How To guide is severely lacking. We need to better the documentation in our [runbook](https://gitlab.com/gitlab-com/runbooks/blob/master/howto/access-requests.md).,3.0 -23166908,2019-07-25 16:57:39.499,Enable external merge request diff storage,"On GitLab.com, `merge_request_diff_files` is the biggest table at about 1.1TB currently and it grows at a rate of about 60 GB per month. - -![Screenshot_from_2019-07-25_18-59-00](/uploads/cb49c0e72b8b4ca62dbdee6a27ffc5a3/Screenshot_from_2019-07-25_18-59-00.png) - -We should look into enabling external merge request diff storage as pointed out below: https://docs.gitlab.com/ee/administration/merge_request_diffs.html - -### Update (2020-02-26): - -The current size of the relevant table `merge_request_diff_files` is 1,659 GB out of 5,400 GB total database size (about 30% of the total database size is this single table). Once the migration is done, all of this data should live outside the database and reduce the database size by 30%.",5.0 -23755565,2019-07-25 14:21:14.452,"Upgrade and verify all `fe-*` hosts are running the same kernel, OS, and HAProxy hosts.","After reading through production#956, I would like to see all `fe-*` boxes configured with the latest version of https://gitlab.com/gitlab-cookbooks/gitlab-haproxy and are running the same underlying OS and kernel.",3.0 -23136657,2019-07-25 03:11:56.536,Delete performance.gitlab.net cert in sslmate,"It just auto renewed: -``` -From: SSLMate -To: ops-notifications@gitlab.com -Subject: SSLMate Certificates for performance.gitlab.net - - -Your certificates for performance.gitlab.net are ready! -``` -but we don't use it anymore (replaced by dashboards.gitlab.net). We need to stop auto-renewing it and let it expire gracefully",1.0 -23136455,2019-07-25 02:57:43.571,imap_mailbox_exporter not working,"While working on gitlab-cookbooks/gitlab-exporters!97 I encountered the `gitlab-exporters::imap_mailbox_exporter` recipe failing on my kitchen setup. 
Unable to figure out why, I went to a production mailroom box to encounter it had the same issue: - -``` -2019-02-15_13:37:37.07491 time=""2019-02-15T13:37:37Z"" level=info msg=""Exporter listening on 0.0.0.0:9117"" source=""imap-mailbox-exporter.go:194"" -2019-02-15_13:37:37.07501 time=""2019-02-15T13:37:37Z"" level=fatal msg=""listen tcp 0.0.0.0:9117: bind: address already in use"" source=""imap-mailbox-exporter.go:196"" -``` - -The production log at `/var/log/prometheus/imap_mailbox_exporter/current` on mailroom-02-sv-gprd.c.gitlab-production.internal also has some other errors, the point being that the exporter is not serving data (`curl localhost:9117` just hangs) - -/cc @ahmadsherif since I see your name in that recipe's code",2.0 -23135310,2019-07-25 01:18:15.298,Anomaly-based detection of sidekiq queue size,"Following from https://gitlab.com/gitlab-com/gl-infra/production/issues/992 we only alert on some specific sidekiq_queue_size labels, or generically if a given queue exceeds 50000. - -It seems to me that we could use some of the anomaly detection techniques already established (recording rules in `rules/service_ops_rate.yml`, alerts in `rules/general-service-alerts.yml`) to be more flexible about noticing unusual volumes of queuing (such as we had in the incident), rather than trying to manage hard-coded limits.",3.0 -23133138,2019-07-24 21:29:04.929,Update runit cookbook to v5.x,"One of the errors we encountered [while testing chef-client 15](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7258) was with the `runit` cookbook. We should update to v5.x of it. Notice that according to their CHANGELOG versions >= 5.0 require at least chef 14 https://github.com/chef-cookbooks/runit/blob/master/CHANGELOG.md#500-2019-02-04. This means that the cookbook updates for these updates will need to held for deployment until we roll out chef 15. - -According to chef-repo's Berksfile.lock, the following cookbooks are pinned to runit ~> 4.3 and will need update: - -- gitlab-alertmanager: https://gitlab.com/gitlab-cookbooks/gitlab-alertmanager/merge_requests/40 -- gitlab-camoproxy: https://gitlab.com/gitlab-cookbooks/gitlab-camoproxy/merge_requests/10 -- gitlab-elk: https://gitlab.com/gitlab-cookbooks/gitlab-elk/merge_requests/111 -- gitlab-exporters: https://gitlab.com/gitlab-cookbooks/gitlab-exporters/merge_requests/97 -- gitlab-monitor: https://gitlab.com/gitlab-cookbooks/gitlab-monitor/merge_requests/27 -- gitlab-mtail: https://gitlab.com/gitlab-cookbooks/gitlab-mtail/merge_requests/45 -- gitlab-prometheus: https://gitlab.com/gitlab-cookbooks/gitlab-prometheus/merge_requests/458 - -Ideally in those MRs we add rspec examples if missing, and include the recipes that use `runit_service` in kitchen to make sure the update didn't break them. - -Part of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6942 - -/cc @ahanselka @nnelson @devin @cmcfarland @dawsmith",4.0 -23130461,2019-07-24 19:27:00.624,Plan In lieu of Automating Access Request issues,"Access Requests are a source of ~toil for the SRE on-call and SRE managers. Issues are manually ingested and approved blindly. After a requestor's manager approves the issue, SRE managers have no way of knowing whether the requestor should or should not be granted access–nor should we. If these issues are used as an audit trail, I'm no longer comfortable adding my approval. And if we should continue processing without challenge, then I propose we remove the below language altogether. 
- -> INFRASTRUCTURE MANAGER: For requests involving access to critical Infrastructure systems, an additional layer of approval is required. Review the requests and, if approved, copy and paste `/label ~""InfrastructureApproved""` - -Longterm, I expect we will roll out role-based access using OKTA and IAM. But, regardless of the outcome of the aforementioned approval process, an SRE's focus should not be hampered with access requests. Short of fully automating the workflow, I advocate for one of the following options. - -1. Process requests in batch by type at the start of each week. -_or_ -2. Document the process and instruct requestors to submit merge requests to the cookbook. - - -@gitlab-com/gl-infra/managers - -Cc @glopezfernandez",2.0 -23118017,2019-07-24 14:04:30.410,The latest version of the prometheus-operator breaks during upgrade,"Due to not locking down the version of our component: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7110 - -In the `pre` environment, we run into the following (this was from testing locally): - -``` -UPGRADE FAILED -Error: Deployment.apps ""gitlab-monitoring-kube-state-metrics"" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{""app.kubernetes.io/name"":""kube-state-metrics""}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable -Error: UPGRADE FAILED: Deployment.apps ""gitlab-monitoring-kube-state-metrics"" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{""app.kubernetes.io/name"":""kube-state-metrics""}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable -Stopping Tiller... -Error: plugin ""tiller"" exited with error -25s monitoring master● % helm search stable/prometheus-operator -NAME CHART VERSION APP VERSION DESCRIPTION -stable/prometheus-operator 6.1.0 0.31.1 Provides easy monitoring definitions for Kubernetes servi... -56s monitoring master● % -``` - -This is an upgrade from version 5.15.0. (currently running on gstg) - -Use this issue to figure out what is going on and what we need to accomplish to cleanly upgrade the `gstg` without downtime. This is to be utilized as a practice run to determine how we can investigate and perform clean upgrades between versions of a component that we do not manage. - -/cc @gitlab-org/delivery",1.0 -23108892,2019-07-24 09:30:08.338,Cleanup up Postgres runbooks,Some Postgres runbooks still have references to REPMGR. Let's review the runbooks and bring them up to date.,2.0 -23108666,2019-07-24 09:21:55.425,Google network failure detection,"We seem to have had a number of failures on Google's network recently. During the RCA Deep Dive for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7314, @andrewn brought up the idea of investigating how to determine the health of the network, perhaps by looking at an aggregate of our network traffic and looking for significant deviations. IF nothing else, this will help us better troubleshoot and track root causes.",2.0 -23081589,2019-07-23 12:00:28.574,RCA Deep Dive #7275,@ahmadsherif and @craigf we'll perform a walkthrough of the incident during the DNA meeting on 24 Jul. Please re-familiarize yourselves with the incident and RCA. This issue has much more to do with process than troubleshooting. 
It's a great example of SRE's executing tasks that normally would be executed by a DBRE.,1.0 -23065560,2019-07-22 23:58:22.249,Make GKE node pools Production Ready,"I’ve noticed a few things we’re going to have to think about with using terraform to manage K8s. Mainly that we’re probably going to have to use multiple node pools, if not multiple clusters for production instances. - -Any change to the nodes (disk space, etc.) results in the entire node pool being deleted and re-created at once, including all running pods. It doesn’t do it in a rolling way like you’d expect. - -This issue is to track investigating whether there is a way to make it roll updates, or whether we need to find a workaround so that we can run these changes with no downtime. - -/cc @skarbek",5.0 -23055856,2019-07-22 16:13:51.162,Review of Postgres failovers over the last 4 weeks,"We seem to have had more failover activity over the last few weeks. - -Let's get Ongres to review relevant failover data and report on why this has been the case.",2.0 -23055597,2019-07-22 16:02:35.433,Rename scoped Team Labels,"To keep work on the appropriate managers' boards, we'll need to rename the scoped labels. I'm proposing we use -- `team::observability` -- `team::availability` -- `team::reliability` - -@Finotto @dawsmith",1.0 -23050307,2019-07-22 13:10:24.243,Availability: MTTR Metric => 90%,"Define, develop and track `Mean-Time-to-Recover` (MTTR) metric as a KPI for Infrastructure as a department. Iterate as necessary. - -Update URL, Health, Maturity and Next Steps on the Handbook's [Infrastructure Performance Indicators](https://about.gitlab.com/handbook/engineering/infrastructure/performance-indicators/#mean-time-to-recovery-mttr) page as necessary. - -Epic : https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/88",2.0 -23050298,2019-07-22 13:09:57.039,Observability: MTTD Metric,"Define, develop and track `Mean-Time-to-Detect` (MTTD) metric as a KPI for Infrastructure as a department. Iterate as necessary. - -Update URL, Health, Maturity and Next Steps on the Handbook's [Infrastructure Performance Indicators](https://about.gitlab.com/handbook/engineering/infrastructure/performance-indicators/#mean-time-to-detection-mttd) page as necessary.",2.0 -23050278,2019-07-22 13:09:17.543,Reliability: MTBF Metric => 90%,"Define, develop and track `Mean-Time-between-Failures` (MTBF) metric as a KPI for both Infrastructure as a department and Site Reliability Engineering as a team. Iterate as necessary. - -Update URL, Health, Maturity and Next Steps on the Handbook's [Infrastructure Performance Indicators](https://about.gitlab.com/handbook/engineering/infrastructure/performance-indicators/#mean-time-between-failures-mtbf) page as necessary.",2.0 -23045213,2019-07-22 10:28:37.645,Rebuild patroni-06 instance,"After https://gitlab.com/gitlab-com/gl-infra/production/issues/948 has closed out, we're left with a shut down and otherwise unused patroni-06 instance. - -We should clean up and do one of the following: - -1. Rebuild patroni-06 and remove patroni-07 from the cluster -1. Just terminate/delete patroni-06, leaving back a ""gap"" in the instance name sequence (does TF cope with that well?)",2.0 -23033031,2019-07-22 00:14:59.610,More CPU for sidekiq-pipeline nodes?,"updated: -There have been some intermittent reports lately of pipelines not running in a timely fashion. I believe this is due to CPU contention on the pipeline sidekiq nodes at peaks, and we should consider giving those nodes more CPU. 
- -Original: -There have been some intermittent reports lately of pipelines not running in a timely fashion. I believe this is due to increased load on the pipeline sidekiq nodes, and we may have to expand this group from 3 to 4, or more.",2.0 -23010576,2019-07-19 21:31:04.743,DR Site Patroni won't stay replicated,"The runbook to resynchronize the DR site database from the master using WAL replication works. It takes a day or so, but it ends up replicated. https://gitlab.com/gitlab-com/runbooks/blob/master/howto/geo-patroni-cluster.md - -However it has never stayed synchronized. We have to manually resync it every week or two. This will not work when we go live. It needs to stay replicated on its own. - -This time, it looks like something going on with Patroni: - -``` -+---------------+---------------------------------------+--------------+------+----------+----+-----------+ -| Cluster | Member | Host | Role | State | TL | Lag in MB | -+---------------+---------------------------------------+--------------+------+----------+----+-----------+ -| pg-ha-cluster | patroni-01-db-dr.c.gitlab-dr.internal | 10.251.9.101 | | starting | | unknown | -| pg-ha-cluster | patroni-02-db-dr.c.gitlab-dr.internal | 10.251.9.102 | | starting | | unknown | -| pg-ha-cluster | patroni-03-db-dr.c.gitlab-dr.internal | 10.251.9.103 | | starting | | unknown | -+---------------+---------------------------------------+--------------+------+----------+----+-----------+ -``` - -@Finotto @dawsmith @abrandl - can we get Ongres or someone more familiar with our database set up to take a look at this and make some recommendations?",4.0 -23005476,2019-07-19 16:38:14.226,Cleanup DNS Information in Runbooks,"We have some stale runbooks that reference PowerDNS, which is apparently no longer in use (discovered in gitlab-com/gl-infra/infrastructure#2332). - -We need to move https://gitlab.com/gitlab-com/runbooks/blob/master/howto/internal_dns.md to simply `dns.md` and include information about external DNS, too (see @dawsmith's comment in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/3004#note_193464998).",1.0 -23004685,2019-07-19 15:50:01.811,RCA Deep Dive of gitlab-com/gl-infra/infrastructure#7090,"@abrandl and @cmcfarland we'll perform a walk through of the incident during the DNA meeting on 24 Jul. Please re-familiarize yourselves with the incident and RCA. This issue has much more to do with process than troubleshooting. It's a great example, illustrative of successful collaboration between Development and Infrastructure, overlapping on-calls for DBRE and SRE, and well executed corrective actions. Also, with the timezone split, this issue can be covered in both the EMEA and Americas timezones!",1.0 -23004148,2019-07-19 15:28:45.363,Proposed Label Cleanup,"I've cleaned house and deleted some really old labels with < 10 issues, most or all of which were closed with updates longer than one year ago. - -My list of remaining issues I'm less sure about are in the table below. - -|Label Name | Total Number of Issues | Remove |Reason| -|-----------|-----------------------:|-------:|------| -|Bronze | 6| Y|| -|moved 1 | 203| Y|| -|moved 2 | 74| Y|| -|moved 3 | 33| Y|| -|moved 4 | 17| Y|| -|critical | 118| Y|Captured by priority labels|",2.0 -23002661,2019-07-19 15:08:22.265,Version.gitlab.com needs an elastic IP,"The AWS instance hosting version.gitlab.com will change IPs when restarted or stopped and started. 
It should be using an elastic IP address to prevent this from causing a long outage as the DNS is manually updated. - -This issue is a case were this problem occurred: https://gitlab.com/gitlab-com/gl-infra/production/issues/978",1.0 -23002291,2019-07-19 14:52:49.486,"Investigate: Recent demo showed young Pods, without any known reason","In a recent demo of our progress of implementing Kubernetes for the Container Registry, we realized the pods running for the registry were much younger than they should have been. Utilize this issue to track why. Create necessary documentation for troubleshooting purposes.",1.0 -23002140,2019-07-19 14:46:24.440,Kubernetes Deploys perform a set-it and forget-it style so we don't see failures,"When testing an upgrade of a Kubernetes Application, the deployment went through just fine, and helm thought the deployment was successful, however, the new replicaset that was created was not coming up properly. The pod that was starting was stuck in a crashloop. With this use case the deployment was technically not successful and required a roll back. It would be wise to add something to our deployment pipeline to detect this failure and fail the pipeline in instances such as this.",3.0 -23002061,2019-07-19 14:43:19.303,Version of components not exposed in prometheus when deployed into Kubernetes,"Currently we don't have visibility via our metrics to which version of a component is running in Kubernetes. We are running version 2.7.1 of the container registry, but I cannot seem to find this anywhere. This would be useful in finding metrics specific to version changes, watching upgrades and roll backs run on clusters, and increase the visibility into what is running at any given time.",3.0 -23000711,2019-07-19 13:54:49.007,On a failed registry upgrade/downgrade we should alert if one of the pods fails to come up,"Downgrading the registry to version 2.7.0: - -can we alert on kube_pod_status_ready? - -``` -$ k get pods -n gitlab -NAME READY STATUS RESTARTS AGE -gitlab-certmanager-788c6859c6-zk25p 1/1 Running 0 5d23h -gitlab-issuer.4-lx655 0/1 Completed 0 2m45s -gitlab-nginx-ingress-controller-78fb4c686b-d8s5t 1/1 Running 0 14m -gitlab-nginx-ingress-controller-78fb4c686b-nd5lg 1/1 Running 0 7d -gitlab-nginx-ingress-controller-78fb4c686b-zk6r6 1/1 Running 0 14m -gitlab-nginx-ingress-default-backend-7f87d67c8-blzzq 1/1 Running 0 6d15h -gitlab-nginx-ingress-default-backend-7f87d67c8-gmt7l 1/1 Running 0 14m -gitlab-registry-68cbc8c489-7ftcj 1/1 Running 0 14m -gitlab-registry-68cbc8c489-8n2wm 1/1 Running 0 14m -gitlab-registry-7f64787dd6-htttg 0/1 CrashLoopBackOff 4 2m45s -```",3.0 -22974838,2019-07-18 17:39:00.563,RCA: Elevated git error rate on 2019-07-18,"## Summary - -Increased error rates for ~""Service:Gitaly"" due to large file uploads by a single user - -- Service(s) affected : ~""Service:Gitaly"" -- Team attribution : -- Minutes downtime or degradation : 23 (1642-1704UTC) - -For calculating duration of event, use the [Platform Metrics Dashboard](https://dashboards.gitlab.net/d/ZUei7TkWz/platform-metrics?orgId=1) to look at appdex and SLO violations. - -![image](/uploads/685ee0366e1cdcf876fa4c8b84ba485c/image.png) -![image](/uploads/1046c5ce826f5ab008f4150a1bb87e42/image.png) - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? partial service degradation -- Who was impacted by this incident? external customers -- How did the incident impact customers? (i.e. preventing them from doing X, incorrect display of Y, ...) 
-- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - - -## Detection & Response - -Start with the following: - -- How was the incident detected? alertmanager/pagerduty notification -- Did alarming work as expected? yes -- How long did it take from the start of the incident to its detection? Approx. 5 minutes -- How long did it take from detection to remediation? Approx. 15 minutes -- Were there any issues with the response to the incident? No - -## Timeline - -2019-07-18 - -- 16:46 UTC - received alerts of increased error rates for Gitaly on `file-23-stor-gprd` -- 16:56 UTC - confirmed that the errors were due to operations from the user identified in production#972 -- 17:01 UTC - EOC initiated a user block -- 17:01 UTC - Load average on `file-23-stor-gprd` begins dropping -- 17:01 UTC - Alert `Gitaly error rate is too high` [cleared](https://gitlab.pagerduty.com/incidents/PD8YZ3Q) -- 17:06 UTC - Node-level alert `Gitaly error rate is too high` for `file-23-stor-gprd` [cleared](https://gitlab.pagerduty.com/incidents/PZZNYR0) - -~~At first glance, this is likely~~ This was a duplicate/recurrence of production#972 - -## Root Cause Analysis - -Gitaly service on `file-23-stor-gprd.c.gitlab-production.internal` was unresponsive - -1. Why? - Gitaly was consuming 100% CPU on `file-23-stor-gprd` -1. Why? - Gitaly `SSHReceivePack` and `SSHUploadPack` processes servicing requests were [consuming all CPU resources](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7281#note_193666706) -1. Why? - A user attempted to upload very large file(s) (>=10GB) via SSH - -## What went well - -1. Alerting allowed us to identify the root problem quickly -1. On-call engineers worked well across rotations to handle/respond to multiple incidents -1. Support was engaged to interface with the affected customer and request that they configure `git-lfs` for better handling of large files - -## What can be improved - -1. We need to better understand why operations from a single user were able to consume all of the resources for a single Gitaly shard - - -## Corrective actions - -1. Do we already have / can we implement a facility for Gitaly to abort git operations that take longer than some time threshold - #7388 -1. Investigate using resource constraint mechanism (like cgroups) to limit the resources that any one git operation can consume - #7387 - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",2.0 -22917311,2019-07-17 10:21:41.586,Schedule RCA Deep Dives,"Please schedule RCA deep dives for DNA meetings. To do so: - -* create an issue for a deep dive, -* assign to a team member -* assign a due date to match a DNA meeting -* link the relevant production incident and incident (RCA) issues -* link the deep dive issue to the following Epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/83",1.0 -22860599,2019-07-16 21:26:25.117,Discussion: What practices should we adopt to help train new on-callers?,"**Goal:** - -This is a pitch for adopting a structured learning path for new GitLab team members (SRE or non-SRE) to incrementally accumulate the knowledge necessary to successfully fielding most on-call issues. 
- -In this context, ""structured learning path"" means gradually building a working knowledge of the system and how it is maintained, beginning with immediately applicable knowledge and iteratively expanding in breadth and depth, with specific learning objectives at each stage. Students should be able to use what they've learned to answer questions about how components interact or next steps to diagnose scenario X or how to get help with investigating or fixing component Y. - -Lots of folks in our organization have experience being on-call (either here or elsewhere), witnessing what worked well for themselves and others. Personally I feel strongly that catering to several different learning styles is essential to being kind and inclusive to our current and future teammates, so I'd love for folks to contribute their ideas, experiences, and opinions on what practices we could adopt that would be most helpful for supporting new on-callers (and ideally also promote continuous learning that benefits tenured on-callers too). - -**Starter material:** - -There are several ways we could organize both existing and new training materials. Whatever method we choose, there should be a clear progression from introductory to advanced topics, each of which should have explicitly stated learning outcomes that tie into the overarching theme of supporting on-call duties. - -Some examples: - -*Intro level:* - -* Learning objective: List all major components of GitLab.com's service stack, and briefly describe the role of each component. -* Learning objective: Know basic usage of these 5 observability tools. Understand what kinds of questions each can answer. Describe a scenario in which this tool would provide useful information. -* Learning objective: Know how to find help for troubleshooting a problem with a component or scenario you are not familiar with. Where can you find a list of subject matter experts? - -*Intermediate level:* - -* Learning objective: Know how to use the rails console, specifically with the GitLab object models. -* Learning objective: Walk through 3 of the curated historical postmortems to gain familiarity with common patterns of troubleshooting the GitLab.com stack. Include at least 1 application regression and 1 infrastructure failure. Become acquainted with the tools and tactics used for diagnosing and fixing these regressions, as well as the styles of collaboration and engagement with peers during the troubleshooting and remediation phases. -* Learning objective: What does ""normal"" look like at a system- and component-level? View this from each of the vantage points you would use during troubleshooting: Grafana dashboards, host-level command-line tools (perf, netstat, etc.), rails console, psql, redis-cli, etc. - -*Advanced level:* - -* Learning objective: What does ""abnormal"" look like? Limit the scope of this to just your chosen areas of focus. Know what are the most common or most critical known failure modes for that service or component. What are the symptoms, side-effects, and remedies for those failure scenarios? -* Learning objective: Which components of the architecture are currently the most fragile? Which are singletons? Which services lack graceful failover? How does this affect our SLO, and is our recovery time reliably low? Where can you quickly find the documented recovery procedures for these critical components? -* Learning objective: Know how to scale out a service tier with Terraform. 
Know how to infer from effects whether this was helpful or harmful, and if harmful how to identify the critical downstream bottleneck.",3.0 -22849556,2019-07-16 18:32:14.503,"When starting from an empty cluster, k-ctl does checks for objects in a non-existent namespace","One of the checks that `k-ctl` validates is that we've got our secrets already loaded into an environment. https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/blob/master/bin/k-ctl#L47 - -This doesn't work out so well if we are attempting to install something for the first time on a completely clean cluster. One, the namespace does not yet exist, and we fail to bail out of the script at this point. Secondly, in order to install, we need the certificates. However, the namespace is installed by helm. - -To work around this we need to skip the validation part. And/or we can create the namespace, drop in the secrets, then perform the install. - -This is not documented, and is a bit confusing. Utilize this issue to come up with a better way to handle this cyclic dependency. - -/cc @gitlab-org/delivery",1.0 -22820679,2019-07-16 03:53:02.164,Logrotate on ops.gitlab.net is broken,"I got a page today for `ops.gitlab.net` being at 1% disk space. Upon investigation some of the logs had data back from last year. - -We don't seem to be sending these logs anywhere so I didn't want to just delete them all. Because `production.log` is also in `production_json.log`, I cleared out `production.log`. - -``` -root@gitlab-01-inf-ops.c.gitlab-ops.internal:/var/log/gitlab/gitlab-rails# >production.log -``` - - - We need to revisit log retention and fix logrotate.",3.0 -22812532,2019-07-15 17:57:58.390,Add subdomain for 10k instance under testbed.gitlab.net,"We need a subdomain to be created for the 10k instance: - -``` -10k.testbed.gitlab.net -``` - -The IP can be found in [dev-resources](https://gitlab.com/gitlab-com/dev-resources/-/jobs/251853108) job output under `drew-10k-lb` - -See https://gitlab.com/gitlab-org/quality/performance/issues/38",1.0 -22770473,2019-07-14 14:49:33.430,Setup network and DNS for Infra-Vault,"* [x] choose a free subnet to avoid IP collisions with other envs -* [x] setup VPC peering with our envs for accessing vault -* [x] setup DNS -* [x] disable the external IP for vault",5.0 -22745634,2019-07-12 19:03:58.956,Proposal to adopt RabbitMQ,"# Proposal to adopt RabbitMQ - -I would like to propose a solution for the longer term. That is, just one solution option that _could_ be implemented in the timeframe for which the nearer term, interim, solution options are intended to buy us a longer runway, such as [application-scope sharding](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7199). - -This proposal's subject is the current status quo application use of the sidekiq framework for asynchronous task processing with redis cache as a FIFO queue/locking mechanic. - -As a software developer for many years, I have experienced the consequences of inter-system and inter-service bottlenecks which are often introduced by misuse of software patterns and frameworks. Of course intentions are always good and it is almost always the case that some software pattern implementations test well and work well in the nearer term, at small scales, or with simplistic/naive test scenarios. 
However, as we systems engineers know all too well, as an application's usage profile grows, or adherence to application software development best practices drifts in only one or two code modules, feature releases, or commits, things become more complicated. Sometimes software frameworks and patterns are employed with default configurations that are seldom if ever adjusted to suit the application usage. Sometimes such solutions are simply not designed to be used feasibly at certain scales. - -## RabbitMQ or sidekiq? - -Approaches to clustering Redis include single node multi-process distributed request proxy load balancing, multi-node single-process distributed request load balancing, and also multi-node multi-process-per-node distributed request load balancing. As far as I know, we are using none of these, except maybe multi-node single-process distributed request load balancing, but even then, the additional nodes are not being used as clustered worker nodes to perform load-sharing, but as failover nodes with replicated data. Furthermore, it seems unlikely that all read requests are getting properly distributed exclusively to the failover nodes, but instead are being equally distributed to all three nodes, which overburdens the master Redis node that should be solely responsible for write requests. (https://redis.io/topics/faq, https://redis.io/topics/partitioning, https://redislabs.com/redis-enterprise/technology/redis-enterprise-cluster-architecture/) - -On the other hand, adopting a write-only usage for the master node could cause failures with a variety of application usages such as any update-on-miss (`SETNX`) operations. (https://redis.io/commands/setnx, https://redis.io/commands/setnx#handling-deadlocks) - -Such concerns are further complicated by the best practice patterns for using the sidekiq framework itself. Configuring task prioritization with sidekiq can be troublesome. For instance, Phil Sturgeon points out that, ""defining multiple queues in your Sidekiq config does not distribute work evenly between them."" [Nov 16 2016](https://phil.tech/2016/11/16/tips-on-sidekiq-queues/) - -With RabbitMQ working as a broker for sending parametric info for tasks with appropriate background-processing/asynchronous work profiles, and distributing messages to categorical pools of asynchronous workers, cpu load is automatically balanced between all cores by design. RabbitMQ's multi-threaded Erlang platform is built for services requiring concurrency and makes horizontal scaling very simple from a configuration and deployment perspective. - -This is not the case with Redis, which is built to be a single-threaded single process application by design. This design is adequate for its commonly marketed use cases, but can make scaling it horizontally somewhat complicated, as explained above. - -## Support for microservice architecture - -Message broker technologies like RabbitMQ and Kafka can also support an architectural transition to microservices, which is where I understand GitLab may be headed towards in the future. Martin Fowler had plenty to say about [Microservice Trade-Offs](https://martinfowler.com/articles/microservice-trade-offs.html) in 2015. Another good resource for reading more about distributed services communication is here over at [microservices.io](https://microservices.io/patterns/communication-style/messaging.html) which also has coverage on other conventional communication techniques like [RPI](https://microservices.io/patterns/communication-style/rpi.html). 
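- -To make the messaging style above a bit more concrete, the basic broker round-trip can be sketched with the `rabbitmqadmin` CLI that ships with RabbitMQ's management plugin (the queue name and payload here are purely illustrative, not a proposed schema): - -``` -# Declare a durable queue for one category of background work -rabbitmqadmin declare queue name=project_imports durable=true -# The producer (e.g. the Rails app) publishes only the parameters the worker needs -rabbitmqadmin publish exchange=amq.default routing_key=project_imports payload='{""project_id"": 42}' -# A worker in the matching pool consumes and acknowledges the message -rabbitmqadmin get queue=project_imports -```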
- -The take-away here is that microservice architectural patterns are commonly accompanied by message broker technologies. Technologies like sidekiq certainly have their place, but distributed inter-service communication orchestration might not be that place. So, if a more distributed architecture is in the stars for GitLab, then perhaps we can take this opportunity to begin to lay the groundwork for such a transition now. - -My intention here is to cultivate discussion around the singular problem presented by sidekiq which appears to be that horizontal scaling with sidekiq and redis is more challenging compared to other options like RabbitMQ. - - -## RabbitMQ and Sneakers - -Sneakers is a ruby background job processing framework that uses RabbitMQ. - -> ""Nor could I allow the single-point-of-failure that being Redis, which isn't really suitable for a highly-available background processing framework - I couldn't lose messages (it's worth mentioning that Redis is often my go-to swiss-army knife and you'll have to pry it from my dying corpse)."" - --- [Dotan J. Nahum](https://github.com/jondot/sneakers/wiki/Why-i-built-it#cruby-performance-and-high-availability), Jan 10, 2014, https://github.com/jondot/sneakers - -Additional advantages of using Sneakers: - -> Compared to Sidekiq, per my use case, I needed - -> A great-performing framework limited only by broker speed - at least 1000req/s acknowledged and persisted on EC2-Large (Sneakers does more than that), and -> *That would use all cores* -> A highly available processing framework (here we have same guarantees as RabbitMQ offers, which is great) -> A familiar DSL/API that also supports advanced messaging semantics such as reject, requeue, acknowledge, etc, and -> That would not expose the whole guts of AMQP at me, but just-enough from it. - -> And irrelevant of the comparison to Sidekiq or any other background processing framework I needed - -> It should use a ruby that doesn't care about content of gems and can run C-extensions. MRI. -> A production-ready package that holds all of these together allowing me to be as lazy as possible -> Metrics and logging baked in -> Convenient deployment, maintenance and supervision story - --- [Dotan J. Nahum](https://github.com/jondot/sneakers/wiki/Why-i-built-it#sneakers) - -Additional reading: - -* [Messages on Rails Part 3: RabbitMQ](https://karolgalanciak.com/blog/2019/06/23/messages-on-rails-part-3-rabbitmq/) -* [RabbitMQ Scheduling Messages with Sneakers](https://medium.com/@twobuckchuck/rabbitmq-scheduling-messages-with-sneakers-18089e8aa7d2) - -## Additional sharding and prioritization using queues - -Solutions like RabbitMQ provide support for application features like programmatic sharding for job criticality levels and domains by using RabbitMQ first-class queues and channels, instead of redis locking patterns which are contentious and susceptible to inconsistency errors during fail-overs. - -> ""Another core concept of job framework is queues. A typical app would have a dozen queues (critical, default, webhooks, low, imports, payments etc) and the developer would have to choose one for their job. 
As you can see, the set of queues has a mix of priority based queues (critical, default, low) and domain-specific queues (webhooks, imports, payments)."" - --- [Kir Shatrov](https://kirshatrov.com/2019/01/03/state-of-background-jobs/), 03 Jan 2019 - -Additional reading: [The State of Background Jobs in 2019](https://kirshatrov.com/2019/01/03/state-of-background-jobs/) - -## Summary - -To summarize, my limited research and experience informs me that it can be a complicated design problem to scale redis when relying on lock acquisition mechanics for asynchronous job queueing. It requires a sophisticated coordination between good architectural deployment patterns and programmatic adherence to certain application software patterns in order to ensure both reliability, but also horizontal scalability. - -Personally, I suspect that GitLab has outgrown redis-backed sidekiq, to an extent. I recommend incorporating RabbitMQ and one of the many available Ruby libraries and frameworks, such as Sneakers or Bunny, into the GitLab Rails deployment configuration and make the necessary changes to the codebase in order to interface with appropriately considered queues for invoking categorical asynchronous tasks, and reserve redis use for the purposes of caching simplistic session token and frequently read/infrequently modified application data. - -## Falsifiability - -My suspicions could certainly be mistaken. I have not yet grokked the numbers involved in the profiles of our redis usage by our application. It is currently difficult to do such a thing because our present usage is somewhat functionally overloading one of the redis clusters. This will probably be remedied by [this issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7199), and we'll soon have a clearer picture of usage which is dominating the bandwidth of the existing redis system resources. It may be that once splitting redis sidekiq from redis persistent has completed, and obvious mal-patterns illuminated and eliminated, there will be plenty of room to grow, and the use-case for a message broker service obviated. - -Related issues: https://gitlab.com/gitlab-com/gl-infra/production/issues/928 - -Tagging team members for visibility and feedback solicitation: @glopezfernandez, @stanhu, @andrewn, @ahanselka, @msmiley, @craig, @craigf, @cmiskell, @devin, @skarbek, @jarv, @ansdval, @Finotto",1.0 -22710228,2019-07-11 17:36:10.618,Shutdown postgres11 instances in staging,"We are going to pause the postgres 11 upgrade effort just for a little while. - -We should stop the postgres11 instances in staging because they're not needed (saves about 1,200 $/month). - -``` - postgres11-01-db-gstg - postgres11-02-db-gstg - postgres11-03-db-gstg - postgres11-04-db-gstg - postgres11-05-db-gstg - postgres11-06-db-gstg -```",1.0 -22701083,2019-07-11 12:12:36.886,PostgreSQL: Minor upgrades from 9.6.12 to 9.6.14,"We are 3 minor versions behind the latest release of PostgreSQL 9.6 and we should perform a minor upgrade. - -Minor upgrades are generally safe and can be done without incurring long downtime since it just needs a restart of the postgres process. That means, we would be doing a rolling upgrade and restart across the cluster. - -Minor upgrades usually fix bugs and make other non breaking improvements. 
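- -For reference, the per-node step could look roughly like the following (a sketch only: the package name, patroni config path, and cluster name are assumptions rather than our actual configuration, and replicas should be restarted one at a time before the leader): - -``` -# Pull in the new 9.6 minor binaries on one node (assumes PGDG-style packaging) -sudo apt-get install --only-upgrade postgresql-9.6 -# Restart just this member so it picks up the new binaries -sudo patronictl -c /etc/patroni/patroni.yml restart pg-ha-cluster $(hostname -f) -# Confirm the member rejoined the cluster and is streaming again -sudo patronictl -c /etc/patroni/patroni.yml list -```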
- -Changelogs: -* https://www.postgresql.org/docs/9.6/release-9-6-14.html -* https://www.postgresql.org/docs/9.6/release-9-6-13.html -* https://www.postgresql.org/docs/9.6/release-9-6-12.html",2.0 -22674953,2019-07-10 16:23:20.331,Proposal to simplify sidekiq worker pools,"**Requires** https://gitlab.com/gitlab-org/gitlab-ce/issues/64692 - ------------------------ - -Spawned from https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7177 - -Currently we have a number of different sidekiq priority queues. - -Its unclear to me what the key differentiator between the different queues is. I had assumed it was based on throughput - for example `realtime` for high priority, short jobs, and `besteffort` for low priority, long running jobs, but this doesn't appear to be the case: for example there are some tasks which take upwards of 2.5 hours which run on the `realtime` queues. - -Once a job is assigned to a priority queue, it will be processed by a fleet of sidekiq workers dedicated to that queue. For example, we have sidekiq fleets for `realtime`, `besteffort` etc. - -If we look at things on a machine level, each node is running a set of sidekiq worker processes and each worker has a set of threads handling jobs. - -At this point there are some more surprises: - -1. Each process has a different number of worker threads (between 3 and 12 per process) -1. Each **process will handle a different set of jobs from the queue** - - -``` -git 8693 2523 0 12:37 ? 00:00:00 ruby /opt/gitlab/embedded/service/gitlab-rails/ee/bin/sidekiq-cluster -e production -r /opt/gitlab/embedded/service/gitlab-rails post_receive,merge,update_merge_requests,gitlab_shell,email_receiver,repository_fork,reactive_caching,project_update_repository_storage,ldap_group_sync,new_issue,new_merge_request update_merge_requests,post_receive process_commit,process_commit,process_commit process_commit,process_commit,process_commit authorized_projects,authorized_projects new_note,new_note merge,merge,update_merge_requests merge,merge,update_merge_requests update_merge_requests,post_receive -git 8700 8693 55 12:37 ? 00:13:32 sidekiq 5.2.7 queues: post_receive, merge, update_merge_requests, gitlab_shell, email_receiver, repository_fork, reactive_caching, project_update_repository_storage, ldap_group_sync, new_issue, new_merge_request [3 of 12 busy] -git 8702 8693 24 12:37 ? 00:05:55 sidekiq 5.2.7 queues: update_merge_requests, post_receive [0 of 3 busy] -git 8704 8693 9 12:37 ? 00:02:18 sidekiq 5.2.7 queues: process_commit (3) [0 of 4 busy] -git 8706 8693 9 12:37 ? 00:02:21 sidekiq 5.2.7 queues: process_commit (3) [0 of 4 busy] -git 8708 8693 7 12:37 ? 00:01:45 sidekiq 5.2.7 queues: authorized_projects (2) [0 of 3 busy] -git 8710 8693 8 12:37 ? 00:02:03 sidekiq 5.2.7 queues: new_note (2) [0 of 3 busy] -git 8712 8693 13 12:37 ? 00:03:12 sidekiq 5.2.7 queues: merge (2), update_merge_requests [1 of 4 busy] -git 8714 8693 13 12:37 ? 00:03:13 sidekiq 5.2.7 queues: merge (2), update_merge_requests [0 of 4 busy] -git 8716 8693 29 12:37 ? 00:07:13 sidekiq 5.2.7 queues: update_merge_requests, post_receive [1 of 3 busy] -``` - -This means that some jobs could be saturated by busy workers while other worker processes _in the same fleet_ sit idle. - -It also means that we need to be able to _manually_ monitor the fleet and make constant _manual_ adjustments. - -Unfortunately, as far as I can tell, we don't have metrics to alert us when all the workers for a certain subset of the fleet are busy. 
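- -As an aside, the only visibility we currently have into this is the `[N of M busy]` fragment of each worker's process title, which has to be eyeballed per node, e.g.: - -``` -# One line per sidekiq worker process, including its queue list and busy count -ps -eo args | grep '^sidekiq ' -# Or just the busy counters themselves -ps -eo args | grep '^sidekiq ' | grep -oE '\[[0-9]+ of [0-9]+ busy\]' -```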
- -Instead we will reactively respond when worker queues lengths start climbing. - ---------------------------------------------------------------------------------------------- - -# Proposal - -I propose a simpler approach, which should be easier to manage. - -1. Priority queues are strictly based on throughput requirements and job latency. -1. Each priority queue has strict SLO requirements for latency. If the apdex for a particular job consistently does not meet the required SLO, development teams will be notified and the job will be de-prioritised to a high-latency queue. -1. Each priority queue will have its own fleet (same as present) -1. Each worker process will process all jobs for a given priority queue, not a subset -1. Each worker will have the same number of threads - -This approach will be easier to manage and will not require manual adjustment. If the `realtime` queue is not keeping up with jobs, it can be scaled up to process more. If saturation of worker threads across a fleet drops below a threshold for a certain period, the fleet can be scaled back. - -This will also be much simpler to deal with in a k8s world (@skarbek what strategy are we using here?)",5.0 -22647225,2019-07-09 20:47:42.107,Add thanos-query from GKE to the rest of our thanos infra,"Currently our GKE clusters thanos-queries are it's own datasource in grafana. We should connect these to the ops thanos query instance such that we get metric data from all of our infrastructure. - -1. [x] Write the ability to add additional query targets -1. [x] Wire it up - -/cc @gitlab-com/gl-infra",5.0 -22641309,2019-07-09 17:14:14.410,DR vs GPRD file server count differences,"Currently we have 32 file servers in `dr` while `gprd` has 36 file servers. This would seem to break the Geo feature. Use this issue to discuss why we are missing 4 servers in the `dr` environment, and bolster our documentation to ensure that when we scale up file servers, the same occurs in `dr`. We should also consider alerting on a situation like this as this is one of our ways to recover data. - -/cc @gitlab-com/gl-infra",1.0 -22632250,2019-07-09 12:48:00.868,Ensure that the zpool is cleanly reimported when instances are destroyed and recreated,"If I GCP-destroy a ZFS-backed storage node and use terraform to reprovision it, we are confident that the correct disks will be reattached, but will the zpool be reimported after chef runs? If not, fix it! - -We already know the zpool is imported and the relevant filesystem(s) mounted when the instance is rebooted.",1.0 -22617197,2019-07-09 03:31:25.039,Register prometheus service on consul,Necessary for us to be able to do dynamic inventory for consul (e.g. for gitlab-org/release/framework#354).,1.0 -22616232,2019-07-09 01:57:44.633,Increase in Postgres Dead Tuple alerts,"We have seen an increase in unactionable Postgres alerts over the last on-call shift. This always recovers on its own, but it is a change in behavior, so we should understand what is causing it. 
- -The alerts are: -- PostgreSQL_TooManyDeadTuples -- PostgreSQL_ReplicaStaleXmin - -Dead Tuples Percentage over the last 7 days: - -![Screen_Shot_2019-07-08_at_3.42.29_PM](/uploads/65f55700269e60bed022be613169ee5d/Screen_Shot_2019-07-08_at_3.42.29_PM.png) - -Dead Tuple Rates over the last 7 days: - -![Screen_Shot_2019-07-08_at_3.43.36_PM](/uploads/74e0659763f5c2e7d3068d5f7eb0df19/Screen_Shot_2019-07-08_at_3.43.36_PM.png) - -Total Dead Tuples over the last 7 days: - -![Screen_Shot_2019-07-08_at_3.56.54_PM](/uploads/d65634813fcb836d0edb5e273855d447/Screen_Shot_2019-07-08_at_3.56.54_PM.png) - -Autovacuum per table over the last 7 days (note the last day or so): - -![Screen_Shot_2019-07-08_at_3.44.15_PM](/uploads/92830f308f32e07553904aab39d3ea32/Screen_Shot_2019-07-08_at_3.44.15_PM.png) - -All of these metrics are here: - -https://dashboards.gitlab.net/d/000000167/postgresql-tuple-statistics?orgId=1&refresh=1m&var-environment=gprd&var-prometheus=Global&var-instance=patroni-04-db-gprd.c.gitlab-production.internal&from=1562031895380&to=1562636695380 - -/cc @dawsmith @Finotto",2.0 -22615000,2019-07-08 23:06:00.733,Renew SSL cert for status.gitlab.com,"Hello, - -This is an automated message to inform you that the SSL certificate for your status page (status.gitlab.com) is expiring soon. - -Please login to the Status.io Dashboard and upload a new certificate in the Settings / SSL tab.",1.0 -22594425,2019-07-08 10:22:35.327,Do we still need to provision git storage nodes to be overweighted towards one zone in a region?,"Our git storage fleet is provisioned by the generic-stor terraform module, which provisions node_count nodes in the configured zone, and multizone_node_count nodes in the configured region, allocated across zones in that region by round robin. In this way file store nodes are ""overweight"" in the configured zone. - -https://ops.gitlab.net/gitlab-com/gl-infra/terraform-modules/google/generic-stor/merge_requests/7 proposes breaking this feature of generic-stor (for reasons discussed in that MR), and implementing any required zone overweightness by provisioning 2 file store pools, 1 which specifies a zone, and 1 which specifies a region for round robin node allocation. - -This issue aims to address whether we even need to do this: can we simply provision regional pools with round robin allocation, and not be overweight in any one zone?",1.0 -22557161,2019-07-05 20:55:03.290,Create a consul server fleet for ops,For all ops exclusive services (e.g. alertmanager. Necessary for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7181),2.0 -22556971,2019-07-05 20:36:03.241,Register alertmanager service on consul,Necessary for us to be able to do dynamic inventory for consul (e.g. for gitlab-org/release/framework#354).,2.0 -22552517,2019-07-05 15:34:23.527,Import prometheus rules into our GKE prometheus-operator configuration,"We have a lot of rules https://gitlab.com/gitlab-com/runbooks/tree/master/rules that now need to be imported into the prometheus-operator running in GKE. - -/cc @bjk-gitlab -/cc @gitlab-org/delivery",5.0 -22550222,2019-07-05 14:12:50.571,Consider the removal of the nginx ingress for the GKE Container Registry,"With the recent implementation of the Container Registry into GKE, we went the quick route and utilized the nginx ingress provided by our helm chart in order to quickly utilize this as a PoC. This adds a few things: -1. 
Complexity with the way our haproxy must transfer data to and from the GKE cluster - * https://gitlab.com/gitlab-cookbooks/gitlab-haproxy/commit/29e5dab8aab05d1118df46a0f84f1e47f706dd6c - * https://gitlab.com/gitlab-cookbooks/gitlab-haproxy/commit/b7c806f4b77ffa37e30bf429ddc295c388589568 -1. Adds an extra network hop that we don't REALLY need: - * Current: GLB -> haproxy -> registry nodes - * Proposed: GLB -> haproxy -> nginx ingress -> pods - * Nginx is not doing anything out of the ordinary other than forwarding traffic -1. With this configuration we are using Let's Encrypt on this new external endpoint `registry.gke..gitlab.`. This adds a layer of configuration that we could potentially eliminate -1. This makes it hard to slowly roll traffic into either our VMs or GKE. Instead our solution is an on/off switch - * https://gitlab.com/gitlab-cookbooks/gitlab-haproxy/merge_requests/159/diffs#20875b27e096b4a4356a90b6ae97d03a1dbf877a_114_117 - -## Proposal -1. Configure the container registry without an ingress -1. Expose the container registry service with an internal static IP that we can feed to haproxy - -/cc @gitlab-org/delivery",5.0 -22499039,2019-07-03 22:38:07.524,Dead Tuples and Stale Replica,"These two errors are intermittently flapping. - -Additionally, the runbook has a lot of information which is misleading or not up to date with our current environment. - -``` -*patroni-04-db-gprd.c.gitlab-production.internal* - PostgreSQL dead tuples is too large - - The dead tuple ratio of import_export_uploads is greater than 5% -``` - -``` - *patroni-04-db-gprd.c.gitlab-production.internal* - PostgreSQL replication slot patroni_01_db_gprd_c_gitlab_production_internal on patroni-04-db-gprd.c.gitlab-production.internal is -falling behind. - - - The replication slot patroni_01_db_gprd_c_gitlab_production_internal on patroni-04-db-gprd.c.gitlab-production.internal is using -a minimum transaction ID that is 861.3k transactions old. -This can cause an increase in dead tuples on the primary. This can be -caused by long-running transactions on the master or any standby, or unused replication. -```",1.0 -22498589,2019-07-03 21:53:30.191,Register haproxy service on consul,Necessary for us to be able to do dynamic inventory for consul (e.g. for gitlab-org/release/framework#354).,2.0 -22498216,2019-07-03 21:21:21.611,git-over-SSH errors debugging,"Following on from https://gitlab.com/gitlab-com/gl-infra/production/issues/844 there are still some underlying issues to resolve. In particular the regularity of issues reported in https://gitlab.com/gitlab-com/gl-infra/production/issues/844#note_187833688 suggests we could track that instance down specifically and hopefully capture enough data to find a source of at least one cause. - -Current status: -Problem alleviated; some followup still to occur regarding unicorn queuing on git front-end servers",5.0 -22487647,2019-07-03 15:13:33.696,GKE Prometheus does not store data persistently,"Our prometheus instance in GKE is not storing data persistently when the pod is removed. Despite configuring data to be stored for 4 weeks per our configurations, once the prometheus Pod is removed, any data older than the new pod is lost. The helm chart defaults to using an EmptyDir instead of a disk. We can solve this in 2 ways: - -1. Add data persistence to the prometheus-operator -1. 
Finish this issue: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7068 - -I think we should pursue option 1 as there exists the potential that a pod will disappear before thanos has had a chance to grab the data and push the data into cloud storage. **This is not a replacement for issue #7068** - -/cc @gitlab-org/delivery -/cc @bjk-gitlab",1.0 -22483996,2019-07-03 13:48:37.790,DRY up terraform module code,"With the upgrade to terraform 0.12 complete, and availability of HCL 2.0, we can now take advantage of the new language features to reduce duplication across our terraform codebase, in particular within our modules. - -1. [ ] Identify near-identical modules for refactor (e.g. `generic-sv-with-group`, `generic-stor`, `generic-stor-with-group`) -1. [ ] Create/relate issues for refactor work on specific modules -1. [ ] Create/relate issues to update version pinning / deploy all changes via `gitlab-com-infrastructure` pipelines",8.0 -22453338,2019-07-02 16:30:54.793,RCA: Degraded performance because of Redis-cache overload.,"**Please note:** if the incident relates to sensitive data, or is security related consider -labeling this issue with ~security and mark it confidential. -*** - -## Summary - -Since July 1st, 8:00 UTC we were seeing degraded performance and elevated 500 errors for Web, API and delayed CI jobs. The imminent root cause turned out to be maxing out the CPU on the redis-cache primary by many expensive calls to redis-cache from the application. - -Service(s) affected : ~""Service:Web"" - -Team attribution : - -Minutes downtime or degradation : 540m based on web below 95% latency APDEX - -## Impact & Metrics - -Start with the following: - -- What was the impact of the incident? - - degraded performance and elevated error rate on Web and API component, delayed CI jobs. -- Who was impacted by this incident? - - All users of GitLab.com, mostly during EMEA business times -- How did the incident impact customers? - - slow loading pages, 500 errors, delayed CI jobs and pull mirrors -- How many attempts were made to access the impacted service/feature? -- How many customers were affected? -- How many customers tried to access the impacted service/feature? - -Include any additional metrics that are of relevance. - -Provide any relevant graphs that could help understand the impact of the incident and its dynamics. - -![image](/uploads/417ce8afae1b29255173602a3fe98872/image.png) - -## Detection & Response - -Start with the following: - -- How was the incident detected? - - [Pagerduty alert](https://gitlab.pagerduty.com/incidents/PETCR1R) on `GitLabComLatencyWebCritical` -- Did alarming work as expected? - - yes -- How long did it take from the start of the incident to its detection? - - 5 minutes -- How long did it take from detection to remediation? - - 27h until a patch eliminated heavy app config requests to redis -- Were there any issues with the response to the incident? (i.e. bastion host used to access the service was not available, relevant team memeber wasn't page-able, ...) 
- - we should have been detecting Redis-cache slowly becoming saturated earlier - -## Timeline - -2019-07-01 - -- 07:56 UTC - connections queueing up at unicorn workers, latencies rise for web and api -- 08:01 UTC - [Pagerduty alert](https://gitlab.pagerduty.com/incidents/PETCR1R) on `GitLabComLatencyWebCritical` -- 08:05 UTC - Alert acknowledged by SRE on call -- 08:15 UTC - Job queue durations rise -- 08:54 UTC - Incident issue [928](https://gitlab.com/gitlab-com/gl-infra/production/issues/928) opened -- 09:06 UTC - status.io [incident](https://app.status.io/dashboard/5b36dc6502d06804c08349f7/incident/5d19cd0d0d4f23274506e6f4/edit) opened -- 09:56 UTC - status.io update: ""We are adding more workers..."" -- 10:30 UTC - 4 new api and 4 web workers added to LBs https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1338 -- 11:24 UTC - support reports stuck CI jobs for customers (https://gitlab.zendesk.com/agent/tickets/125409) -- 12:09 UTC - new incident issue [929](https://gitlab.com/gitlab-com/gl-infra/production/issues/929) opened for reports of delayed CI runners -- 12:11 UTC - [tweet](https://twitter.com/gitlabstatus/status/1145666158382604289) ""jobs on shared runners being picked up at a low rate or appear being stuck..."" -- 13:14 UTC - status.io update acknowledging CI pipeline delays -- 13:15 UTC - incident issue [929](https://gitlab.com/gitlab-com/gl-infra/production/issues/929) closed again as it is related to [928](https://gitlab.com/gitlab-com/gl-infra/production/issues/928) -- 13:51 UTC - status.io update: ""continue to investigate..."", announcing incident issue URL -- 14:20 UTC - additional workers removed again to reduce connections to redis-cache -- 16:51 UTC - status.io update: status changed to ""monitoring"", ""CI jobs are catching up..."" -- 18:11 UTC - status.io update: ""back to normal levels..."" -- 19:40 UTC - status.io incident resolved - -2019-07-02 - -- 09:45 UTC - kernel update and reboot of redis-cache-03 -- 10:06 UTC - unexpected failover to redis-cache-01 -- 10:50 UTC - redis-cache-02 kernel upgrade and reboot -- 11:22 UTC - unexpected failover to redis-cache-02 -- 11:20 UTC - patch eliminating application config requests to redis-cache deployed: https://ops.gitlab.net/gitlab-com/gl-infra/patcher/merge_requests/113 (https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/14500) -- 11:20 UTC - CPU usage drops to 85%, network from 300Mb/s to under 100Mb/s, all metrics improve -- 12:15 UTC - redis-cache-01 kernel update and reboot - - - -## Root Cause Analysis - -The web component had slower response times. - -1. Why? - Redis-cache had slower response times. -2. Why? - Redis-cache was saturating it's CPU. -3. Why? - Too many and too heavy requests to Redis from the application. -4. Why? - Missing awareness and testing for how many and how expensive Redis-cache requests would be generated from the application. - - -## What went well - -- Alerting worked for getting aware of web performance issues immediately. -- A lot support from all over engineering to find the root cause and working on several remediations. - -## What can be improved - -* detection of Redis performance issues (or generally: detecting saturation of a service/system) -* trend analysis, capacity planning -* finding the root cause of performance degradations - we sometimes don't followup on degradations if they resolved from self and we didn't see a direct root cause at first sight, but they might be an indication of a deeper issue or trend. 
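- -In the meantime, a low-tech way to spot-check the first point above is to ask the Redis instance directly (a rough sketch; host and auth flags are omitted, and our instances do require a password): - -``` -# Cumulative CPU consumed by the single redis process -redis-cli INFO cpu -# Which command families dominate the load -redis-cli INFO commandstats -# Sample the keyspace for oversized cache entries -redis-cli --bigkeys -```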
- -## Corrective actions - -- List issues that have been created as corrective actions from this incident. -- For each issue, include the following: - - - Issue labeled as ~""corrective action"". - - Include an estimated date of completion of the corrective action. - - Include the named individual who owns the delivery of the corrective action. - -per @andrewn: -1. [x] Start monitoring on various saturation metrics: https://gitlab.com/gitlab-com/runbooks/merge_requests/1188, add per-service SLOs -1. [x] Distributed tracing instrumentation of Rails caching: https://gitlab.com/gitlab-org/labkit-ruby/merge_requests/12 -1. [x] Distributed tracing instrumentation of Redis calls: https://gitlab.com/gitlab-org/labkit-ruby/issues/2 -1. [ ] Discuss adding `n+1` style limits on Redis calls, in development and testing environments (no issue yet) -1. [ ] Discuss adding size limits on Redis keys stored in the cache (no issue yet) -1. [x] Stop caching junit files in Redis: https://gitlab.com/gitlab-org/gitlab-ce/issues/64035 -1. [ ] Monitor cache misuse of Redis by application teams `redis-cli --bigkeys` -1. [ ] Add `redis_duration_ms` field to our Rails+API structured logs (no issue yet) -1. [x] Add documentation on how to monitor redis instances: https://gitlab.com/gitlab-com/runbooks/merge_requests/1187 -1. [x] Consider breaking our Redis instances down further than the current persistent/cache pair - for example, CI-cache, MergeRequest-cache, etc -1. [ ] Discuss the possibility of moving over to Redis-cluster or managed Redis instances (eg Redis Labs) (no issue yet) -1. [x] Use cached markdown fields for calculating participants https://gitlab.com/gitlab-org/gitlab-ce/issues/63967 -1. [x] Bandaid: Disable junit reports: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/30254 - -per @stanhu: -1. [x] Move Flipper caching away from Redis to in-memory cache: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/30276 -1. [x] Move Geo checks away from Redis to in-memory cache: https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/14513 -1. [x] Add Redis details to Peek performance bar: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/30191 - -per @rymai: -1. [ ] compress `Rails.cache` payloads that are bigger than a certain threshold - -per @bjk-gitlab: - -1. [x] cleanup/improve the redis cache metrics to be more useful: https://gitlab.com/gitlab-org/gitlab-ce/issues/64064 - - -## Guidelines - -* [Blameless RCA Guideline](https://about.gitlab.com/handbook/infrastructure/#rca) -* [5 whys](https://en.wikipedia.org/wiki/5_Whys)",1.0 -22441524,2019-07-02 11:03:37.172,Make redis-cache instances reboot-safe,"When rebooting redis-cache instances for a kernel upgrade, the redis process wasn't starting on its own. -We need to make the redis process reboot-safe.",1.0 -22438562,2019-07-02 09:31:28.448,"turn off junit config, uploading junit artifacts","As part of the effort to alleviate saturation on Redis nodes related to https://gitlab.com/gitlab-com/gl-infra/production/issues/928, we're going to turn off junit config and the uploading of junit artifacts.",2.0 -22438505,2019-07-02 09:29:11.163,Redis nodes kernel upgrades,"As part of the effort to alleviate saturation on Redis nodes related to https://gitlab.com/gitlab-com/gl-infra/production/issues/928, we're going to upgrade kernels on Redis nodes. @bjk-gitlab noted that the nodes are spending about 50% of their time on system, and we expect the kernel upgrades to reduce this. 
- -@bjk-gitlab and @jarv are working on this.", -22425562,2019-07-01 21:32:13.475,clean up production severity labels,"Some **Incident** issues in the production queue use the `Sn` label and others use the `severity::n` label. Some use both and some use none. We need to clean this up: - -* use one label or the other -* ensure all incidents have severity labels",1.0 -22400153,2019-07-01 09:39:47.610,Chef runs fail because user databag is not formatted,"The `check` [job](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/jobs/547453) failed with: - -``` -$ find [a-z]* -type f -name \*json | xargs -I{} -r -n 1 bash -c 'jq . {} > /dev/null || ( echo ""bad JSON {}"" && exit 1 )' -$ find [a-z]* nodes -type f -name '*.json' | xargs -I{} -r -n 1 bash -c 'diff <(cat {}) <(jq . {}) > /dev/null || ( echo ""You must format this file {}"" && exit 1 )' -You must format this file data_bags/users/cshobe.json -You must format this file data_bags/users/yguo.json -ERROR: Job failed: exit code 1 -```",1.0 -22357954,2019-06-28 18:52:40.630,Apply vault terraform config via CI/CD,"For consistency, testing and keeping vault deployment secrets in one place we should apply terraform changes for the vault GKE cluster only via CI/CD pipeline. - -Other jobs that should be run via CI: -- re-keying (adding a new pgp key to the repo should trigger a re-keying and pgp encryption of all unseal keys) -- changes of the basic vault config (accounts/roles/policies) -- ...",8.0 -22354600,2019-06-28 16:27:23.104,Metrics from GKE do not have an environment label applied to them,Unlike the majority of our infrastructure we are not applying the `env` label to any metrics that are being captured inside of GKE. This will pose a problem as we heavily rely on this label for alerting and greatly w/i our dashboards. Utilize this issue to track adding that label to our metrics.,3.0 -22354510,2019-06-28 16:23:51.090,Add GCP provider (or alternative) to dev-resources and deprecate DO,"Recently, we've seen an increase in random errors in [dev-resources](https://gitlab.com/gitlab-com/dev-resources/) in the past couple of months. - -It has been a source of frustration for the support team when creating instances for reproducing issues or handling interviews. -* https://gitlab.slack.com/archives/C4Y5DRKLK/p1561730479035000 -* https://gitlab.slack.com/archives/C4Y5DRKLK/p1561711484479900 -* https://gitlab.slack.com/archives/C4Y5DRKLK/p1561564095249700 -* https://gitlab.slack.com/archives/C4Y5DRKLK/p1559761492127800 -* https://gitlab.slack.com/archives/C4Y5DRKLK/p1559759365125100 - -I don't think DO is scaling well. Some of the errors appear to be responses from DO's API (500s). We should consider [adding a GCP provider](https://cloud.google.com/community/tutorials/managing-gcp-projects-with-terraform) and start migrating off DO and maybe eventually deprecating it. - -Related to https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6526",6.0 -22334879,2019-06-28 01:27:54.707,Camoproxy URL blacklist tooling,To implement the URL blacklisting proposed in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6836#note_182241021,2.0 -22325891,2019-06-27 16:00:54.758,gitlab-com repo pulling master branch of our charts from pages,"The gitlab-com repo pulls a version specified by an ENV variable, defaulting to `master` to determine which version of the chart we want. 
Prior to switching to helm tiller we were relying on a version specified in a file: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/blob/master/GITLAB_CHARTS_VERSION - -https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/blob/master/bin/.gitlab.bash#L9 - -Set up this repo such that we are pulling a specific version again as defined in this provided file. Ensure that the file is updated to match that of master which is what we've been using to deploy lately. - -Consider the fact that our helm charts are also running on GitLab pages. Should pages be down for any reason, we need a secondary place to grab our charts. - -/cc @gitlab-org/delivery -/cc @gitlab-com/gl-infra",3.0 -22325710,2019-06-27 15:54:44.977,"Our CI Images pull master for the helm components, consider locking these down","Both the installation of helm and the helm tiller plugin are using the latest and/or master during its build process: https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/common/blob/master/Dockerfile-ci#L5-9 - -Consider pinning the versions to specific releases of these to prevent issues or breakages.",1.0 -22273197,2019-06-26 14:10:05.735,Build the staging Kubernetes Cluster for the Container Registry,"Utilize this issue to track progress in creating the container registry for the staging environment. - -Keep https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6862 updated to ensure we get ourselves fully documented as we proceed. - -The end goal of this issue is to have all registry traffic destined for the staging environment sent inside of the GKE platform",3.0 -22261446,2019-06-26 10:06:23.383,Upgrade gitlab packages on postgres DR hosts,"This is a proposed ~""corrective action"" for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7090. - -We employ two DR replicas which run on the standard omnibus package. However, we don't upgrade the package with a regular deploy and the current version is `11.2.0-rc2.ee.0`. This is because we rarely use the codebase on these hosts and only cared about postgres so far. - -However, especially in the DR context, it is beneficial to have the latest codebase installed. This allows us to easily run a Rails console which talks to the delayed replica, for example. This is helpful when we recover an accidentally deleted project. It is currently not possible because the Rails console with the old codebase doesn't even start anymore on that instance. - -The two DR replicas in question are (gprd here, similar in gstg): - -* `postgres-dr-delayed-01-db-gprd` -* `postgres-dr-archive-01-db-gprd`",2.0 -22261178,2019-06-26 10:02:27.756,Start rails console with read-only database session by default,"This is a proposed ~""corrective action"" for https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7090. - -We employ a ""console host"" on `console-01-sv-gprd` which people can use to start a Rails console (via SSH, using either the console user that starts the console automatically or by executing `gitlab-rails c` on the host). - -### Proposal - -By default, the Rails console should have a read-only database session. This allows for safe(r) use of the Rails console without risk of altering any database data accidentally. When read-write access is necessary, the session can be promoted read-write from the console. - -### Implementation - -In postgres, a database session can be set [read-only](https://www.postgresql.org/docs/9.6/sql-set-transaction.html) with `SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY`. 
When we start the Rails console, we would have to SET this attribute. This may be possible by configuring `database.yml` accordingly or by using a postgres user dedicated to console access which would have this as a default. - -In order to promote a session read-write, we'd simply set it back with `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE`. This can be shipped as part of the codebase in `Gitlab::Database` for convenience.",1.0 -22260511,2019-06-26 09:50:47.905,Validate/Establish backup strategy,"It's possible that slight ""time alignment"" / consistency differences between disk snapshots in a raidz array will result in an unmountable zfs dataset. - -If we can't restore a raidz array from GCP snapshots, then we will need to propose an alternative, i.e. by periodically shipping ZFS snapshots elsewhere from day 1 of git storage nodes on ZFS so that we do not lose our ability to recover data from several hours/days ago. - -cc @T4cC0re",8.0 -22249328,2019-06-25 23:42:37.321,Replace uses of monitoring-lb with https-lb,"As soon as https-lb (a slightly more generic version of monitoring-lb) has been proven to work for camoproxy (should be fine, just prove it), circle back and replace uses of monitoring-lb with https-lb, then remove/archive monitoring-lb.",2.0 -22246545,2019-06-25 20:27:45.861,"Make it possible to whitelist an IP for all of GitLab.com, not just the API","Currently we are able to whitelist IPs to get around our rate limiting for the API (`/api`) but we have no way to do it for the site as a whole. In order to facilitate an internal request for whitelisting, we need to implement a way to whitelist an IP from the rate limit for the entire site. - -cc/ @cmcfarland @jsalazar-gitlab",4.0 -22240869,2019-06-25 17:11:33.889,Registry 5xx alerts are numerous with no obvious problem to resolve.,"The on-call SREs receive alerts that indicate a high number of 5xx errors from the registry service. But there are no obvious service problems (registry is down, serving requests slowly, etc.) that can be repaired. The issue often recovers before any action is taken. - -Either our alerting for 5xx errors in registry is too stringent, or there is a legitimate issue with registry that needs to be fixed. - -One of these should be the criteria to close this issue: -* Do we have an SLO that we must keep registry 5xx errors below a threshold? If so, are our current alerts enforcing that level, or a level higher? Create an issue(s) or merge request(s) to address the changes to our alerting. -* If the alerting is correct, what is the underlying issue? Create an issue (or issues) to resolve that problem.",1.0 -22217251,2019-06-25 09:05:31.797,Create environment for migration testing,"Create an easily re-creatable sandbox environment to test gitaly node migration from ones that are backed by ext4 to ones that are backed by ZFS. - -Ideally, the deliverable will be a repository including: - -* Terraform for a GitLab environment where everything is minimal except gitaly shards, where we will need a few (3?) for testing. Initially the Gitaly shards will use ext4. -* Integration with chef. Consider whether it's worth using another chef organisation or not. -* Scripts to fill the Gitaly nodes with dummy data. -* A readme explaining how everything is set up, how to use it, and how to tear it down. 
- -Use as much resource isolation as possible, in a separate GCP project.",4.0 -22213445,2019-06-25 07:03:03.575,Blueprint for Encrypting (TLS) internal Gitaly traffic,"During the work on https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/6484, it became evident that we should write a blueprint and a design doc so that: -1. Our work is well thought out in advance -2. The implementation is documented so that our customers can refer to it -3. Provides us pinning points to iterate in the future - -Will work on an MR for it.",6.0 -22211442,2019-06-25 04:53:39.956,Service inventory catalog to product,"### Background -In 2019Q1, we delivered Service Inventory Catalog ([Design Doc](https://about.gitlab.com/handbook/engineering/infrastructure/design/service-inventory-catalog/) | [Production App](https://us-central1-gitlab-infra-automation-stg.cloudfunctions.net/ui/services)) in an effort to help centralize tribal knowledge of our production services, reduce time to search for information and reduce onboarding time among other benefits. - -Given that the Service Inventory Catalog (SIC) takes a step in the direction of addressing problems that are common across teams, organizations and companies, not just GitLab, there is an interest of exploring options to take this solution to product so that our customers can run something similar for their need. - -In all, we believe that any individual or a team building service(s) (requirements, designs, ownerships, dependencies, documentations, configurations, runbooks, playbooks, security aspects) should document and maintain them. SIC aims to find a home for all of these information. Just like how it is a good practice for every version controlled project to have a README.md file, can a service-inventory-catalog become a good practice, too? - -### Objective -The objective of this issue is to keep track of the work to: -* Investigate whether it is feasible to deliver SIC to product -* Explore option(s) of how we want to deliver it (thinking from the perspective on how our customers would/might use it) -* Scope the effort and identify dependencies/resources -* Implement -* Announce - -### References -- [Service Inventory Catalog design doc](https://about.gitlab.com/handbook/engineering/infrastructure/design/service-inventory-catalog) -- [Service Catalog YML file](https://gitlab.com/gitlab-com/runbooks/blob/master/services/service-catalog.yml) -- [Team YML file](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/team.yml) -- [Deployed version](https://us-central1-gitlab-infra-automation.cloudfunctions.net/ui/) -- [Source code](https://gitlab.com/gitlab-com/gl-infra/service-catalog-app) - -cc @Finotto",8.0 -22194004,2019-06-24 14:13:09.566,Use common CI image for k8s-workloads/gitlab-com and turn on registry metrics,"Now that we have a common project https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/common for the CI image and some CI config we can utilize it for `gitlab-com`. This issue tracks the following: - -- [x] Use the common CI image for running pipelines in ops -- [x] Adapt pipelines so we have similar job names for applying updates -- [x] Adjust configuration so we can start scraping metrics",2.0 -22192027,2019-06-24 13:15:53.824,Alert if redis process is down,We don't get any alerts if the redis process on a slave is down: https://gitlab.com/gitlab-com/gl-infra/production/issues/914,2.0 -22183983,2019-06-24 09:00:20.176,Move gitlab-cookbooks group under gl-infra,"# What? 
- -Transfer the gitlab-cookbooks group to be a subgroup of gitlab-com/gl-infra on both .com and ops. - -# Why? - -Permissions inheritance from gl-infra, and organisation. - -# How? - -1. Prepare an MR to chef-repo updating all of the Berkshelf references to use the new subgroup path. -1. Move the group in .com. -1. Move the group in ops. -1. Merge that chef-repo MR. -1. Notify the infrastructure team that this has been done so that people can update their remotes. This can be done using this issue. - -@gitlab-com/gl-infra what do you think of the idea? IIRC this has been discussed before. Have I missed any considerations in the ""how"" section?",3.0 -22171958,2019-06-23 18:13:38.750,Fix or remove Redis role portion of shell prompt,"I just noticed that our custom shell prompt is broken on the Redis servers. The prompt always shows the string `REDIS_CHECKCMD_ERROR`. - -That string is set by `/usr/local/bin/custom_ps1.sh`, which on a Redis host tries to determine the current role (primary or secondary) of the local Redis instance by connecting to the instance via Unix socket: -``` - check_cmd=""$(/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket role 2>/dev/null)"" - if [[ $? -ne 0 ]]; then - echo -n ""REDIS_CHECKCMD_ERROR"" - return - fi - if [[ ""${check_cmd}"" == *""master""* ]]; then - echo -n ""PRIMARY-REDIS"" - else - echo -n ""secondary-redis"" - fi -``` - -However, our Redis instance isn't currently configured to accept Unix socket connections. In `/var/opt/gitlab/redis/redis.conf` we set `unixsocketperms` but not `unixsocket` itself. Also, Redis is configured to require authentication (`requirepass`), which `custom_ps1.sh` does not do. - -Ask the SRE/DBRE team whether we should fix or remove this portion of the shell prompt. - -If we decide to fix it, should we make a helper script in /usr/local/bin to successfully open a redis-cli session? That would follow the pattern we use for psql access on Postgres hosts. We'd also need to decide whether to re-enable Unix socket listener or use the TCP listener. TCP is a simpler fix, since it would not require bouncing Redis.",2.0 -22146023,2019-06-21 22:32:39.424,Investigate why API whitelist isn't working,"We added a whitelist entry for @jsalazar-gitlab in [chef-repo!1280](https://ops.gitlab.net/gitlab-cookbooks/chef-repo/merge_requests/1280), but when applied and reloaded it did not seem to work. - -@cmcfarland, @alejandro, and I have spent a while investigating this today and thus I decided to make an issue about it. - -As of now we are in the process of trying to restart haproxy to see if that resolves the issue. - -Otherwise, I'm pretty stumped.",4.0