TECHNOLOGY | QUESTION | SOLUTION
Azure Synapse
The query references an object that is not supported in distributed processing mode
Some objects, like system views and functions, can't be used while you query data stored in Azure Data Lake or Azure Cosmos DB analytical storage. Avoid queries that join external data with system views, load external data into a temp table, or use security or metadata functions to filter external data.
Azure Synapse
Query returning NULL values instead of partitioning columns or can't find the partition columns
Troubleshooting steps: 1. If you use tables to query a partitioned dataset, be aware that tables don't support partitioning; replace the table with partitioned views. 2. If you use partitioned views with OPENROWSET to query partitioned files by using the FILEPATH() function, make sure you correctly specified the wildcard pattern in the location and used the proper index for referencing the wildcard. 3. If you're querying the files directly in the partitioned folder, be aware that the partitioning columns aren't part of the file columns. The partitioning values are placed in the folder paths, not in the files, so the files themselves don't contain the partitioning values.
Azure Synapse
Missing column when using automatic schema inference
You can easily query files without knowing or specifying the schema by omitting the WITH clause; in that case, column names and data types are inferred from the files. Keep in mind that if you are reading a number of files at once, the schema is inferred from the first file the service gets from storage. This can mean that some of the expected columns are omitted, because the file used by the service to define the schema did not contain those columns. To explicitly specify the schema, use the OPENROWSET WITH clause. If you specify a schema (by using an external table or the OPENROWSET WITH clause), the default lax path mode is used. That means that columns that don't exist in some files are returned as NULLs (for rows from those files). To understand how path mode is used, check the documentation and sample.
Azure Synapse
Failed to execute query. Error: CREATE EXTERNAL TABLE/DATA SOURCE/DATABASE SCOPED CREDENTIAL/FILE FORMAT is not supported in master database.
1. Create a user database: CREATE DATABASE <DATABASE_NAME> 2. Execute a CREATE statement in the context of <DATABASE_NAME>, which failed earlier for the master database. Here's an example of the creation of an external file format: USE <DATABASE_NAME> CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat] WITH ( FORMAT_TYPE = PARQUET)
Azure Synapse
Getting an error while trying to create a new Azure AD login or user in a database
Check the login you used to connect to your database. The login that's trying to create a new Azure AD user must have permission to access the Azure AD domain and check whether the user exists. Be aware that SQL logins don't have this permission, so you'll always get this error if you use SQL authentication. If you use an Azure AD login to create new logins, check whether you have permission to access the Azure AD domain.
Azure Synapse
Resolving Azure Cosmos DB path has failed with error 'This request is not authorized to perform this operation'.
Check whether you used private endpoints in Azure Cosmos DB. To allow a serverless SQL pool to access an analytical store with private endpoints, you must configure private endpoints for the Azure Cosmos DB analytical store.
Azure Synapse
Delta table created in Spark is not shown in serverless pool
If you created a Delta table in Spark and it is not shown in the serverless SQL pool, check the following: 1. Wait some time (usually 30 seconds) because Spark tables are synchronized with a delay. 2. If the table still doesn't appear in the serverless SQL pool after some time, check the schema of the Spark Delta table. Spark tables with complex types, or with types that are not supported in serverless, are not available. Try to create a Spark Parquet table with the same schema in a lake database and check whether that table appears in the serverless SQL pool. 3. Check that the workspace Managed Identity can access the Delta Lake folder that is referenced by the table. The serverless SQL pool uses the workspace Managed Identity to get the table column information from storage to create the table.
GCP Cloud Storage - Web App
Failed to fetch metadata from the registry, with reason: generic::permission_denied
To resolve this issue, grant the Storage Admin role to the service account: To see which account you used, run the gcloud auth list command. To learn why assigning only the App Engine Deployer (roles/appengine.deployer) role might not be sufficient in some cases, see App Engine roles.
GCP Cloud Storage - Web App
Error: The App Engine appspot and App Engine flexible environment service accounts must have permissions on the image IMAGE_NAME
This error occurs for one of the following reasons: 1. The default App Engine service account does not have the Storage Object Viewer (roles/storage.objectViewer) role. To resolve this issue, grant the Storage Object Viewer role to the service account. 2. Your project has a VPC Service Perimeter which limits access to the Cloud Storage API using access levels. To resolve this issue, add the service account you use to deploy your app to the corresponding VPC Service Perimeter accessPolicies.
GCP Cloud Storage - Web App
Failed to create cloud build: Permission denied
This error occurs if you use the gcloud app deploy command from an account that does not have the Cloud Build Editor (roles/cloudbuild.builds.editor) role. To resolve this issue, grant the Cloud Build Editor role to the service account that you are using to deploy your app. To see which account you used, run the gcloud auth list command.
GCP Cloud Storage - Web App
Timed out waiting for the app infrastructure to become healthy
To resolve this issue, rule out the following potential causes: 1. Verify that you have granted the Editor (roles/editor) role to your default App Engine service account. 2. Verify that you have granted the following roles to the service account that you use to run your application (usually the default service account, [email protected]): Storage Object Viewer (roles/storage.objectViewer) and Logs Writer (roles/logging.logWriter). 3. Grant the roles if the service account does not have them. 4. If you are deploying in a Shared VPC setup and passing instance_tag in app.yaml, refer to this section to fix the issue.
GCP Cloud Storage - Web App
Invalid value error when deploying in a Shared VPC setup
To resolve the issue, remove the instance_tag field from app.yaml and redeploy.
GCP Cloud Run
Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
To resolve this issue, rule out the following potential causes: Verify that you can run your container image locally. If your container image cannot run locally, you need to diagnose and fix the issue locally first. Check if your container is listening for requests on the expected port as documented in the container runtime contract. Your container must listen for incoming requests on the port that is defined by Cloud Run and provided in the PORT environment variable. See Configuring containers for instructions on how to specify the port. Check if your container is listening on all network interfaces, commonly denoted as 0.0.0.0. Verify that your container image is compiled for 64-bit Linux as required by the container runtime contract. Note: If you build your container image on an ARM-based machine, then it might not work as expected when used with Cloud Run. To solve this issue, build your image using Cloud Build. Use Cloud Logging to look for application errors in stdout or stderr logs. You can also look for crashes captured in Error Reporting. You might need to update your code or your revision settings to fix errors or crashes. You can also troubleshoot your service locally.
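As a concrete illustration of the port requirement, here is a minimal Python sketch (standard library only; the handler and response body are hypothetical) that listens on all interfaces on the port Cloud Run provides in the PORT environment variable:

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a plain 200 so health checks and requests succeed.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK\n")

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT environment variable.
    port = int(os.environ.get("PORT", "8080"))
    # Bind to 0.0.0.0 so the container accepts traffic on all network interfaces.
    HTTPServer(("0.0.0.0", port), HelloHandler).serve_forever()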
GCP Cloud Run
The server has encountered an internal error. Please try again later. Resource readiness deadline exceeded.
This issue might occur when the Cloud Run service agent does not exist, or when it does not have the Cloud Run Service Agent (roles/run.serviceAgent) role. To verify that the Cloud Run service agent exists in your Google Cloud project and has the necessary role, perform the following steps: Open the Google Cloud console: Go to the Permissions page In the upper-right corner of the Permissions page, select the Include Google-provided role grants checkbox. In the Principals list, locate the ID of the Cloud Run service agent, which uses the ID service-PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com. Verify that the service agent has the Cloud Run Service Agent role. If the service agent does not have the role, grant it.
GCP Cloud Run
Can I run Cloud Run applications on a private IP?
"Currently no. Cloud Run applications always have a *.run.app public hostname and they cannot be placed inside a VPC (Virtual Private Cloud) network. If any other private service (e.g. GCE VMs, GKE) needs to call your Cloud Run application, they need to use this public hostname. With ingress settings on Cloud Run, you can allow your app to be accesible only from the VPC (e.g. VMs or clusters) or VPC+Cloud Load Balancer –but it still does not give you a private IP. You can still combine this with IAM to restrict the outside world but still authenticate and authorize other apps running the VPC network."
GCP Cloud Run
The service has encountered an error during container import. Please try again later. Resource readiness deadline exceeded.
To resolve this issue, rule out the following potential causes: 1. Ensure container's file system does not contain non-utf8 characters. 2. Some Windows based Docker images make use of foreign layers. Although Container Registry doesn't throw an error when foreign layers are present, Cloud Run's control plane does not support them. To resolve, you may try setting the --allow-nondistributable-artifacts flag in the Docker daemon.
GCP Cloud Run
The request was not authorized to invoke this service
To resolve this issue: 1. If invoked by a service account, the audience claim (aud) of the Google-signed ID token must be set to the following: i. The Cloud Run URL of the receiving service, using the form https://service-xyz.run.app. The Cloud Run service must require authentication. The Cloud Run service can be invoked by the Cloud Run URL or through a load balancer URL. ii.The Client ID of an OAuth 2.0 Client ID with type Web application, using the form nnn-xyz.apps.googleusercontent.com. The Cloud Run service can be invoked through an HTTPS load balancer secured by IAP. This is great for a GCLB backed by multiple Cloud Run services in different regions. iii. A configured custom audience using the exact values provided. For example, if custom audience is service.example.com, the audience claim (aud) value must also be service.example.com. If custom audience is https://service.example.com, the audience claim value must also be https://service.example.com. 2. The jwt.io tool is good for checking claims on a JWT. 3. If the auth token is of an invalid format a 401 error occurs. If the token is of a valid format and the IAM member used to generate the token is missing the run.routes.invoke permission, a 403 error occurs.
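For case (i), a hedged Python sketch of invoking a Cloud Run service from a service account is shown below; it assumes the google-auth library, and the service URL is a placeholder. The audience passed to fetch_id_token becomes the aud claim of the Google-signed ID token:

import urllib.request

import google.auth.transport.requests
from google.oauth2 import id_token

# Placeholder: the receiving Cloud Run service's URL; it must match the token's aud claim.
AUDIENCE = "https://service-xyz.run.app"

# Mint a Google-signed ID token for that audience using the ambient credentials
# (for example, the calling service account's Application Default Credentials).
auth_request = google.auth.transport.requests.Request()
token = id_token.fetch_id_token(auth_request, AUDIENCE)

# Call the service with the token as the bearer value of the Authorization header.
req = urllib.request.Request(AUDIENCE, headers={"Authorization": f"Bearer {token}"})
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])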
GCP Cloud Run
The request was not authenticated. Either allow unauthenticated invocations or set the proper Authorization header
To resolve this issue: 1. If the service is meant to be invocable by anyone, update its IAM settings to make the service public. 2. If the service is meant to be invocable only by certain identities, make sure that you invoke it with the proper authorization token. i. If invoked by a developer or invoked by an end user: Ensure that the developer or user has the run.routes.invoke permission, which you can provide through the Cloud Run Admin (roles/run.admin) and Cloud Run Invoker (roles/run.invoker) roles. ii. If invoked by a service account: Ensure that the service account is a member of the Cloud Run service and that it has the Cloud Run Invoker (roles/run.invoker) role. iii.Calls missing an auth token or with an auth token that is of valid format, but the IAM member used to generate the token is missing the run.routes.invoke permission, result in this 403 error.
GCP Cloud Run
HTTP 429 The request was aborted because there was no available instance. The Cloud Run service might have reached its maximum container instance limit or the service was otherwise not able to scale to incoming requests. This might be caused by a sudden increase in traffic, a long container startup time or a long request processing time.
To resolve this issue, check the "Container instance count" metric for your service and consider increasing this limit if your usage is nearing the maximum. See "max instance" settings, and if you need more instances, request a quota increase.
GCP Cloud Run
This might be caused by a sudden increase in traffic, a long container startup time, or a long request processing time.
To resolve this issue, address the previously listed issues. In addition to fixing these issues, as a workaround you can implement exponential backoff and retries for requests that the client must not drop. Note that a short and sudden increase in traffic or request processing time might only be visible in Cloud Monitoring if you zoom in to 10-second resolution. When the root cause of the issue is a period of heightened transient errors attributable solely to Cloud Run, you can contact Support.
GCP Cloud Run
HTTP 500 / HTTP 503: Container instances are exceeding memory limits
To resolve this issue: 1. Determine if your container instances are exceeding the available memory. Look for related errors in the varlog/system logs. 2. If the instances are exceeding the available memory, consider increasing the memory limit. Note that in Cloud Run, files written to the local filesystem count towards the available memory. This also includes any log files that are written to locations other than /var/log/* and /dev/log.
GCP Cloud Run
HTTP 503: Unable to process some requests due to high concurrency setting
To resolve this issue, try one or more of the following: 1. Increase the maximum number of container instances for your service. 2. Lower the service's concurrency. Refer to setting concurrency for more detailed instructions.
GCP Cloud Run
HTTP 504 The request has been terminated because it has reached the maximum request timeout.
To troubleshoot this issue, try one or more of the following: 1. Instrument logging and tracing to understand where your app is spending time before exceeding your configured request timeout. 2. Outbound connections are reset occasionally, due to infrastructure updates. If your application reuses long-lived connections, then we recommend that you configure your application to re-establish connections to avoid the reuse of a dead connection. i. Depending on your app's logic or error handling, a 504 error might be a signal that your application is trying to reuse a dead connection and the request blocks until your configured request timeout. ii. You can use a liveness probe to help terminate an instance that returns persistent errors. 3. Out of memory errors that happen inside the app's code, for example, java.lang.OutOfMemoryError, do not necessarily terminate a container instance. If memory usage does not exceed the container memory limit, then the instance will not be terminated. Depending on how the app handles app-level out of memory errors, requests might hang until they exceed your configured request timeout. i. If you want the container instance to terminate in the above scenario, then configure your app-level memory limit to be greater than your container memory limit. ii. You can use a liveness probe to help terminate an instance that returns persistent errors.
GCP Cloud Run
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
To resolve this issue: 1. If you are trying to perform background work with CPU throttling, try using the "CPU is always allocated" CPU allocation setting. 2. Ensure that you are within the outbound requests timeouts. If your application maintains any connection in an idle state beyond these thresholds, the gateway needs to reap the connection. 3. By default, the TCP socket option keepalive is disabled for Cloud Run. There is no direct way to configure the keepalive option in Cloud Run at the service level, but you can enable the keepalive option for each socket connection by providing the correct socket options when opening a new TCP socket connection, depending on the client library that you are using for this connection in your application (see the sketch below). 4. Occasionally outbound connections will be reset due to infrastructure updates. If your application reuses long-lived connections, then we recommend that you configure your application to re-establish connections to avoid the reuse of a dead connection.
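For step 3, here is a hedged sketch of enabling TCP keepalive on a socket in Python. The host, port, and timing values are made up, the TCP_KEEP* constants are Linux-specific, and how you hand such a socket to your database client depends on the client library you use:

import socket

def enable_keepalive(sock: socket.socket,
                     idle_secs: int = 60,
                     interval_secs: int = 60,
                     probes: int = 5) -> None:
    # Turn on keepalive probes so idle connections are not silently reaped.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The options below are Linux-specific names; other platforms differ.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_secs)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_secs)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

# Example: open a TCP connection to a hypothetical database host with keepalive enabled
# before handing it to your client library (if the library accepts a pre-built socket).
conn = socket.create_connection(("10.0.0.5", 5432))
enable_keepalive(conn)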
GCP Cloud Run
assertion failed: Expected hostname or IPv6 IP enclosed in [] but got <IPv6 ADDRESS>
To resolve this issue, set ENV SPARK_LOCAL_IP="127.0.0.1" in your Dockerfile. In Cloud Run, if the variable SPARK_LOCAL_IP is not set, it defaults to its IPv6 counterpart instead of localhost. Note that a value set with RUN export SPARK_LOCAL_IP="127.0.0.1" will not be available at runtime, and Spark will act as if it was not set.
GCP Cloud Run
mount.nfs: access denied by server while mounting IP_ADDRESS:/FILESHARE
If access was denied by the server, check to make sure the file share name is correct.
GCP Cloud Run
mount.nfs: Connection timed out
If the connection times out, make sure you are providing the correct IP address of the filestore instance.
GCP Cloud Run
How can I specify Google credentials in Cloud Run applications?
For applications running on Cloud Run, you don't need to deliver JSON keys for IAM service accounts or set the GOOGLE_APPLICATION_CREDENTIALS environment variable. Just specify the service account (--service-account) you want your application to use when you deploy the app. See configuring service identity.
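A minimal sketch of what this looks like in Python: on Cloud Run, Application Default Credentials resolve to the deploy-time service account without any key file or GOOGLE_APPLICATION_CREDENTIALS variable (listing Cloud Storage buckets is just an illustrative call):

import google.auth
from google.cloud import storage

# On Cloud Run, this picks up the service account set with --service-account
# at deploy time; no JSON key file is involved.
credentials, project_id = google.auth.default()

# Any client built on these credentials acts as that service account.
client = storage.Client(project=project_id, credentials=credentials)
for bucket in client.list_buckets():
    print(bucket.name)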
GCP Cloud Run
How to do canary or blue/green deployments on Cloud Run?
If you updated your Cloud Run service, you probably realized it creates a new revision for every new configuration of your service. Cloud Run allows you to split traffic between multiple revisions, so you can do gradual rollouts such as canary deployments or blue/green deployments.
GCP Cloud Run
How to configure secrets for Cloud Run applications?
You can use Secret Manager with Cloud Run. Read how to write code and set permissions to access the secrets from your Cloud Run app in the documentation. Alternatively, if you'd like to store secrets in Cloud Storage (GCS) using Cloud KMS envelope encryption, check out the Berglas tool and library (Berglas also has support for Secret Manager).
GCP Cloud Run
How to connect IPs in a VPC network from Cloud Run?
Cloud Run now has support for "Serverless VPC Access". This feature allows Cloud Run applications to connect to private IPs in the VPC (but not the other way around). This way your Cloud Run applications can connect to private VPC IP addresses of: GCE VMs, Cloud SQL instances, Cloud Memorystore instances, Kubernetes Pods/Services (on GKE public or private clusters), and Internal Load Balancers.
GCP Cloud Run
How can I serve responses larger than 32MB with Cloud Run?
Cloud Run can stream responses that are larger than 32MB using HTTP chunked encoding. Add the HTTP header Transfer-Encoding: chunked to your response if you know it will be larger than 32MB.
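A hedged sketch with Flask (assumed here; any framework that can stream responses works): returning a generator causes the response to be sent with chunked transfer encoding, so the 32MB limit does not apply.

from flask import Flask, Response

app = Flask(__name__)

@app.route("/big")
def big_response():
    def generate():
        # Stream the payload in pieces instead of buffering it all in memory.
        for _ in range(100):
            yield b"x" * (1024 * 1024)  # 1 MiB per chunk, ~100 MiB in total

    # A generator body is sent with Transfer-Encoding: chunked,
    # which lets Cloud Run stream well past 32MB.
    return Response(generate(), mimetype="application/octet-stream")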
GCP Security IAM
How can I use Multi Factor Authentication (MFA) with IAM?
When individual users use MFA, the methods they authenticate with will be honored. This means that your own identity system needs to support MFA. For Google Workspace accounts, this needs to be enabled by the user themselves. For Google Workspace-managed credentials, MFA can be enabled with Google Workspace tools.
GCP Security IAM
How do I control who can create a service account in my project?
Owner and editor roles have permissions to create service accounts in a project. If you wish to grant a user the permission to create a service account, grant them the owner or the editor role.
GCP Security IAM
How do I grant permissions to resources in my project to someone who is not part of my organization?
Using Google groups, you can add a user outside of your organization to a group and bind that group to the role. Note that Google groups don't have login credentials, and you cannot use Google groups to establish identity to make a request to access a resource. You can also directly add the user to the allow policy even if they are not a part of your organization. However, check with your administrator if this is compliant with your company's requirements.
GCP Security IAM
How can I manage who can access my instances?
To manage who has access to your instances, use Google groups to grant roles to principals. Granting a role creates a role binding in an allow policy; you can grant the role on the project where the instances will be launched, or on individual instances. If a user (identified by their Google Account, for example, [email protected]) is not a member of the group that is bound to a role, they will not have access to the resource where the allow policy is applied.
GCP Security IAM
How do I list the roles associated with a gcp service account?
To see roles per service account in the console: 1. Copy the email of your service account (from IAM & Admin -> Service Accounts - Details); 2. Go to: IAM & Admin -> Policy Analyzer -> Custom Query; 3. Set Parameter 1 to Principal. Paste the email into Principal field; 4. Click Continue, then click Run Query. You'll get the list of roles of the given service account.
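If you prefer to script the same check instead of clicking through the console, here is a hedged sketch using the google-cloud-resource-manager client. It only inspects the project-level allow policy (not roles inherited from folders or the organization), and the project ID and service account email are placeholders:

from google.cloud import resourcemanager_v3

PROJECT_ID = "my-project"  # placeholder
MEMBER = "serviceAccount:my-sa@my-project.iam.gserviceaccount.com"  # placeholder

client = resourcemanager_v3.ProjectsClient()
policy = client.get_iam_policy(resource=f"projects/{PROJECT_ID}")

# Collect every role whose members list includes the service account.
roles = [binding.role for binding in policy.bindings if MEMBER in binding.members]
print(roles)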
GCP Security IAM
GCP Cloud Build fails with permissions error even though correct role is granted
You need to add the cloudfunctions.developer and iam.serviceAccountUser roles to the [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com account, and (I believe) the aforementioned Cloud Build service account also needs to be added as a member of the service account that has permissions to deploy your Cloud Function.
GCP Security IAM
How to set Google Cloud application credentials for a Service Account
gcloud auth application-default login uses the active (or specified) user account to create a local JSON file that behaves like a service account. The alternative is to use gcloud auth activate-service-account, but you will need to have the service account's credentials, as these will be used instead of the credentials created by application-default login.
GCP Security IAM
Is there a way to list all permissions from a user in GCP?
In Google Cloud Platform there is no single command that can do this. Permissions via roles are assigned to resources. Organizations, Folders, Projects, Databases, Storage Objects, KMS keys, etc can have IAM permissions assigned to them. You must scan (check IAM permissions for) every resource to determine the total set of permissions that an IAM member account has.
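There is no single command, but the Cloud Asset Inventory API can do most of that scanning for you. Below is a hedged Python sketch; the scope and member are placeholders, and the caller needs permission to search IAM policies in that scope:

from google.cloud import asset_v1

SCOPE = "organizations/123456789"   # placeholder: can also be a folder or project scope
MEMBER = "user:alice@example.com"   # placeholder principal

client = asset_v1.AssetServiceClient()
results = client.search_all_iam_policies(
    request={"scope": SCOPE, "query": f"policy:{MEMBER}"}
)

# Each result names a resource plus the IAM bindings that mention the member there.
for result in results:
    roles = [binding.role for binding in result.policy.bindings]
    print(result.resource, roles)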
GCP Security IAM
Can't delete a Google Cloud Project
1. List your project's liens (retentions): gcloud alpha resource-manager liens list 2. If there are any liens, delete them: gcloud alpha resource-manager liens delete "name" 3. Delete your project: gcloud projects delete "project"
GCP Security IAM
How to read from a Storage bucket from a GCE VM with no External IP?
You simply have to: 1. Go to Console -> VPC network 2. Choose the subnet of your VM instance (for example default -> us-central1) 3. Edit and select Private Google access -> On. Then save. Also make sure that your VM has access to the Cloud Storage API.
GCP Security IAM
I'm getting the error: cannot use role (type string) as type "cloud.google.com/go/iam".RoleName in argument to policy.HasRole
You can use a type conversion: return policy.HasRole(serviceAccount, iam.RoleName(role)) Or, more simply, declare role as iam.RoleName in the signature: func checkRole(key, serviceAccount string, role iam.RoleName) bool { ... return policy.HasRole(serviceAccount, role) }
GCP Security IAM
Can I get a list of all resources for which a user has been added to a role?
Roles are not assigned directly to users. This is why there is no single command that you can use. IAM members (users, service accounts, groups, etc.) are added to resources with roles attached. A user can have permissions to a project and also have permissions at an individual resource (Compute Engine Instance A, Storage Bucket A/Object B). A user can also have no permissions to a project but have permissions at individual resources in the project. You will need to run a command against resources (Org, Folder, Project and items like Compute, Storage, KMS, etc). To further complicate this, there are granted roles and also inherited roles.
GCP Security IAM
Is there a way to prevent deletion of a google spanner database even though developers have been granted broad (i.e. owner) access to the project?
A few approaches: 1. If you're worried about a Spanner database getting dropped, you can use the --enable-drop-protection flag when creating the DB to ensure it cannot be accidentally deleted. 2. You can create negative permissions through IAM deny policies in Google Cloud to expressly prevent someone, like a developer group or service account, from taking a specific action.
GCP Security IAM
How to grant access to all service account in organization?
You can use a Google group, which is a collection of user and/or service accounts. Create the group, add the service accounts to it, and then assign the necessary IAM roles to the Google group.
GCP Security IAM
How to restrict BigQuery's dataset access for everyone having (Project level Viewer) role
The solution here is to have Terraform (or something else) manage the resources for you. You can develop a module that creates the appropriate things for a user e.g. a dataset, a bucket, some perms, a service account etc. That way all you need to do is add another user to your list and re-deploy. The other additional benefit here is that you can use the repo where the TF is stored as a source of truth.
GCP Security IAM
How do I create a custom role for inserting into a specific BigQuery dataset?
You can drop the bigquery.datasets.get permission from the custom IAM role so that they can’t list all the datasets, and then in the dataset's permissions give the READER role instead of WRITER to the user for that specific dataset.
GCP Security IAM
Service account does not have permission to access Firestore
Creating a service account by itself grants no permissions. The Permissions tab in IAM & Admin > Service Accounts shows a list of "Principals with access to this account" - this is not the inheritance of permissions, it's simply which accounts, aka principals, can make use of the permissions granted to this service account. The "Grant Access" button on this page is about granting other principals access to this service account, not granting access to resources for this service account. For Firestore access specifically - go to IAM & Admin > IAM, and you'll be on the permissions tab. Click "Add" at the top of the page. Type in your newly created service account under "New Principals", and for roles, select "Cloud Datastore Owner".
GCP Security IAM
How to connect to Cloud SQL from Azure Data Studio using an IAM user
We can connect using IAM database authentication via the Cloud SQL Auth proxy. The only step left to do from the GUI DB tool (mine is Azure Data Studio) is to connect to the IP the Cloud SQL Auth proxy listens on (127.0.0.1 in my case, which is the default) after starting the Cloud SQL Auth proxy using: ./cloud_sql_proxy -instances=<GCPproject:Region:DBname>=tcp:127.0.0.1:5432
GCP Security IAM
What is the correct GCP user role that I should assign to my external website developer?
You should grant the minimum role level needed to execute the work. If your developer only needs access to the Translation API, you can grant their account the Cloud Translation API Editor role. If you want them to have full access to the Cloud Translation resources, you can grant them Cloud Translation API Admin. If you have more than one developer and they all need the same permissions, you can create a group, add the developers' emails to the group, and assign the necessary roles to it.
GCP Security IAM
How to restrict access to triggering HTTP CLoud Function via trigger URL?
The problem is your access method. You are using your own user account (which has the Cloud Functions Invoker role) but with your browser, and your browser request has no authentication header. If you want to call your cloud function, you have to add an Authorization header with an identity token as the bearer value. This command works: curl -H "Authorization: bearer $(gcloud auth print-identity-token)" <cloud function URL> Note that you need an identity token, not an authorization token.
GCP Security IAM
What roles do my Cloud Build service account need to deploy an http triggered unauthenticated Cloud Function?
The solution is to replace the Cloud Functions Developer role with the Cloud Functions Admin role. Use of the --allow-unauthenticated flag modifies IAM permissions. To ensure that unauthorized developers cannot modify function permissions, the user or service that is deploying the function must have the cloudfunctions.functions.setIamPolicy permission. This permission is included in both the Owner and Cloud Functions Admin roles.
GCP Big Query
Getting error as billingNotEnabled
Enable billing for the project in the Google Cloud console.
GCP Big Query
How to create temporary table in Google BigQuery
To create a temporary table, use the TEMP or TEMPORARY keyword in the CREATE TABLE statement. Use of CREATE TEMPORARY TABLE requires a script, so it's best to wrap it in a BEGIN...END block: BEGIN CREATE TEMP TABLE <table_name> AS SELECT * FROM <source_table> WHERE <condition>; END;
GCP Big Query
How to download all data in a Google BigQuery dataset?
Detailed step-by-step to download large query output: 1. Enable billing. You have to give your credit card number to Google to export the output, and you might have to pay. But the free quota (1 TB of processed data) should suffice for many hobby projects. 2. Create a project. 3. Associate billing with the project. 4. Do your query. 5. Create a new dataset. 6. Click "Show options" and enable "Allow Large Results" if the output is very large. 7. Export the query result to a table in the dataset. 8. Create a bucket on Cloud Storage. 9. Export the table to the created bucket on Cloud Storage. Make sure to select GZIP compression and use a name like <bucket>/prefix.gz. If the output is very large, the file name must have an asterisk * and the output will be split into multiple files. 10. Download the table from Cloud Storage to your computer. It does not seem possible to download multiple files from the web interface if the large file got split up, but you could install gsutil and run: gsutil -m cp -r 'gs://<bucket>/prefix_*' . See also: Download files and folders from Google Storage bucket to a local folder. There is a gsutil package in Ubuntu 16.04, but it is an unrelated package; you must install and set up gsutil as documented at https://cloud.google.com/storage/docs/gsutil 11. Unzip locally: for f in *.gz; do gunzip "$f"; done
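The export-and-download part of these steps can also be scripted. Here is a hedged Python sketch using the google-cloud-bigquery and google-cloud-storage clients; the project, table, bucket, and prefix names are placeholders:

from google.cloud import bigquery, storage

PROJECT = "my-project"                    # placeholder
TABLE = "my-project.my_dataset.my_table"  # placeholder: the table holding the query result
BUCKET = "my-bucket"                      # placeholder
PREFIX = "export/prefix"                  # placeholder

bq = bigquery.Client(project=PROJECT)

# Export the table to Cloud Storage as gzipped files; the wildcard lets BigQuery
# shard a large output into multiple files.
extract_config = bigquery.ExtractJobConfig(compression="GZIP")
bq.extract_table(TABLE, f"gs://{BUCKET}/{PREFIX}_*.gz", job_config=extract_config).result()

# Download every exported shard locally.
gcs = storage.Client(project=PROJECT)
for blob in gcs.bucket(BUCKET).list_blobs(prefix=PREFIX):
    blob.download_to_filename(blob.name.replace("/", "_"))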
GCP Big Query
How to generate date series to occupy absent dates in google BiqQuery?
Generating a list of dates and then joining whatever table you need on top seems the easiest. I used GENERATE_DATE_ARRAY + UNNEST and it looks quite clean. To generate a list of days (one day per row): SELECT * FROM UNNEST(GENERATE_DATE_ARRAY('2018-10-01', '2020-09-30', INTERVAL 1 DAY)) AS example
GCP Big Query
How many Google Analytics views can I export to BigQuery?
You can only export one view per Google Analytics property. When selecting which view to export, it is important to consider which views have been customized with various changes to the View Settings (traffic filters, content groupings, channel settings, etc.), or which views have the most historical data. The view that you choose to push to BigQuery will depend on use cases for your data. We recommend selecting the view with the most data, universal customization, and essential filters that have cleaned your data (such as bot filters).
GCP Big Query
How to choose the latest partition in BigQuery table?
You can use a WITH statement to select the last few partitions and filter the result. This is a better approach because: you are not limited to a fixed partition date (like today - 1 day); it will always take the latest partition from the given range; and it will only scan the last few partitions, not the whole table. Example with a scan of the last 3 partitions: WITH last_three_partitions as (select *, _PARTITIONTIME as PARTITIONTIME FROM dataset.partitioned_table WHERE _PARTITIONTIME > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 3 DAY)) SELECT col1, PARTITIONTIME from last_three_partitions WHERE PARTITIONTIME = (SELECT max(PARTITIONTIME) from last_three_partitions)
GCP Big Query
How can I change the project in BigQuery
You have two ways to do it: 1. Specify --project_id global flag in bq. Example: bq ls -j --project_id <PROJECT> 2. Change default project by issuing gcloud config set project <PROJECT>
GCP Big Query
How to catch a failed CAST statement in BigQuery SQL?
You can use the SAFE_CAST function, which returns NULL if the input is not a valid value when interpreted as the desired type. In your case, you would just use SAFE_CAST(UPDT_DT_TM AS DATETIME). It is in the Functions & Operators documentation.
GCP Big Query
JSON formatting Error when loading into Google Big Query
Yes, BigQuery only accepts newline-delimited JSON, which means one complete JSON object per line. Before you merge the object onto one line, BigQuery reads "{", which is the start of an object, and expects to read a key, but the line ends, so you see the error message "expected key". For multiple JSON objects, just put one per line. Don't enclose them inside an array; BigQuery expects each line to start with an object, "{". If you put "[" as the first character, you will see the second error message, which means BigQuery read an array and not an object.
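For example, a short Python sketch that writes a list of objects as newline-delimited JSON that BigQuery will accept (the records and file name are made up):

import json

records = [
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": "beta"},
]

# Newline-delimited JSON: one complete object per line, no enclosing array.
with open("data.ndjson", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")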
GCP Big Query
I am trying to run the query "select * from tablename ". But it throws error like "Error: Response too large to return".
Set allowLargeResults to true in your job configuration. You must also specify a destination table with the allowLargeResults flag. If querying via API: "configuration": { "query": { "allowLargeResults": true, "query": "select uid from [project:dataset.table]", "destinationTable": [project:dataset.table] } } If using the bq command line tool: $ bq query --allow_large_results --destination_table "dataset.table" "select uid from [project:dataset.table]" If using the browser tool, click 'Enable Options' and select 'Allow Large Results'.
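If you use the Python client instead, the equivalent is a QueryJobConfig with allow_large_results and a destination table. This is a hedged sketch; the project, dataset, and table names are placeholders, and allow_large_results applies to legacy SQL, matching the bracketed table syntax above:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

destination = bigquery.TableReference.from_string("my-project.my_dataset.destination")
job_config = bigquery.QueryJobConfig(
    allow_large_results=True,   # permit arbitrarily large result sets
    destination=destination,    # a destination table is required with allow_large_results
    use_legacy_sql=True,        # allow_large_results is a legacy SQL option
)

query_job = client.query("select uid from [my-project:my_dataset.table]", job_config=job_config)
rows = query_job.result()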
GCP Big Query
How can I refresh datasets/resources in the new Google BigQuery Web UI?
If you click the search box in the project/dataset "Explorer" sidebar and then press Enter, it will refresh the list.
GCP Big Query
Failed to save view. Bad table reference "myDataset.myTable"; table references in standard SQL views require explicit project IDs
Your view has a reference to myDataset.myTable, which is OK when you just run it as a query (for example in the Web UI). But to save it as a view, you must fully qualify that reference, as in myProject.myDataset.myTable. So, just add the project to that reference.
GCP Big Query
Bigquery Error: UPDATE/MERGE must match at most one source row for each target row
It occurs because the target BigQuery table contains duplicated rows (with respect to the columns you are joining on). If a row in the table to be updated joins with more than one row from the FROM clause, BigQuery returns this error. Solution: 1. Remove the duplicated rows from the target table and perform the UPDATE/MERGE operation. 2. Define a primary key on the BigQuery target table to avoid data redundancy.
GCP Big Query
Create a BigQuery table from pandas dataframe, WITHOUT specifying schema explicitly
Here's a code snippet to load a DataFrame to BQ:

import pandas as pd
from google.cloud import bigquery

# Example data
df = pd.DataFrame({'a': [1, 2, 4], 'b': ['123', '456', '000']})

# Load client
client = bigquery.Client(project='your-project-id')

# Define table name, in format dataset.table_name
table = 'your-dataset.your-table'

# Load data to BQ
job = client.load_table_from_dataframe(df, table)

If you want to specify only a subset of the schema and still import all the columns, you can replace the last line with:

# Define a job config object, with a subset of the schema
job_config = bigquery.LoadJobConfig(schema=[bigquery.SchemaField('b', 'STRING')])

# Load data to BQ
job = client.load_table_from_dataframe(df, table, job_config=job_config)
GCP Big Query
Table name missing dataset while no default dataset is set in the request
Depending on which API you are using, you can specify the defaultDataset parameter when running your BigQuery job. More information for the jobs.query API can be found here: https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query. For example, using the Node.js API for createQueryJob (https://googleapis.dev/nodejs/bigquery/latest/BigQuery.html#createQueryJob), you can do something similar to this:

const options = {
  keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS,
  projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID,
  defaultDataset: {
    datasetId: process.env.BIGQUERY_DATASET_ID,
    projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID
  },
  query: `select * from my_table;`
};

const [job] = await bigquery.createQueryJob(options);
let [rows] = await job.getQueryResults();
GCP Big Query
Is there an easy way to convert rows in BigQuery to JSON?
If you want to glue together all of the rows quickly into a JSON block, you can do something like: SELECT CONCAT("[", STRING_AGG(TO_JSON_STRING(t), ","), "]") FROM `project.dataset.table` t This will produce a table with 1 row that contains a complete JSON blob summarizing the entire table.
GCP Big Query
How do I list tables in Google BigQuery that match a certain name?
You can do something like below in BigQuery Legacy SQL SELECT * FROM publicdata:samples.__TABLES__ WHERE table_id CONTAINS 'github' Or with BigQuery Standard SQL SELECT * FROM publicdata.samples.__TABLES__ WHERE starts_with(table_id, 'github')
GCP Big Query
BigQuery fails to save view that uses functions
BigQuery now supports permanent registration of UDFs. In order to use your UDF in a view, you'll need to first create it. CREATE OR REPLACE FUNCTION `ACCOUNT-NAME11111.test.STR_TO_TIMESTAMP` (str STRING) RETURNS TIMESTAMP AS (PARSE_TIMESTAMP('%Y-%m-%dT%H:%M:%E*SZ', str)); i. Note that you must use backticks around the function's name. ii. There's no TEMPORARY in the statement, as the function will be globally registered and persisted. iii. Due to the way BigQuery handles namespaces, you must include both the project name and the dataset name (test) in the function's name. Once it's created and working successfully, you can use it in a view. create view test.test_view as select `ACCOUNT-NAME11111.test.STR_TO_TIMESTAMP`('2015-02-10T13:00:00Z') as ts You can then query your view directly without explicitly specifying the UDF anywhere. select * from test.test_view
GCP Big Query
Query Failed Error: Resources exceeded during query execution: The query could not be executed in the allotted memory
The only way for this query to work is by removing the ordering applied at the end: SELECT fullVisitorId, CONCAT(CAST(fullVisitorId AS string),CAST(visitId AS string)) AS session, date, visitStartTime, hits.time, hits.page.pagepath FROM `XXXXXXXXXX.ga_sessions_*`, UNNEST(hits) AS hits WHERE _TABLE_SUFFIX BETWEEN "20160801" AND "20170331" The ORDER BY operation is quite expensive and cannot be processed in parallel, so try to avoid it (or try applying it to a limited result set).
GCP Big Query
How to convert results returned from bigquery to Json format using Python?
There is no current method for automatic conversion, but there is a pretty simple manual way to convert to JSON: records = [dict(row) for row in query_job] json_obj = json.dumps(records, default=str) Another option is to convert using pandas: df = query_job.to_dataframe() json_obj = df.to_json(orient='records')
GCP VM
Getting an error when connecting to VM using the SSH-in-browser from the Google Cloud console
To resolve this issue, have a Google Workspace admin do the following: 1. Confirm that Google Cloud is enabled for the organization. If Google Cloud is disabled, enable it and retry the connection. 2. Confirm that services that aren't controlled individually are enabled. If these services are disabled, enable them and retry the connection. If the problem persists after enabling Google Cloud settings in Google Workspace, do the following: 1. Capture the network traffic in an HTTP Archive Format (HAR) file starting from when you start the SSH-in-Browser SSH connection. 2. Create a Cloud Customer Care case and attach the HAR file.
GCP VM
The following error is occuring when I start an SSH session: Could not connect, retrying …
To resolve this issue, do the following: 1. After the VM has finished booting, retry the connection. If the connection is not successful, verify that the VM did not boot in emergency mode by running the following command: gcloud compute instances get-serial-port-output VM_NAME \ | grep "emergency mode" If the VM boots in emergency mode, troubleshoot the VM startup process to identify where the boot process is failing. 2. Verify that the google-guest-agent.service service is running by running the following command in the serial console: systemctl status google-guest-agent.service If the service is disabled, enable and start the service by running the following commands: systemctl enable google-guest-agent.service systemctl start google-guest-agent.service 3. Verify that the Linux Google Agent scripts are installed and running. For more information, see Determining Google Agent Status. If the Linux Google Agent is not installed, re-install it. 4. Verify that you have the required roles to connect to the VM. If your VM uses OS Login, see Assign OS Login IAM role. If the VM doesn't use OS Login, you need the Compute Instance Admin role or the Service Account User role (if the VM is set up to run as a service account). The roles are needed to update the instance or project SSH keys metadata. 5. Verify that there is a firewall rule that allows SSH access by running the following command: gcloud compute firewall-rules list | grep "tcp:22" 6. Verify that there is a default route to the Internet (or to the bastion host). For more information, see Checking routes. 7. Make sure that the root volume is not out of disk space. For more information, see Troubleshooting full disks and disk resizing. 8. Make sure the VM has not run out of memory by running the following command: gcloud compute instances get-serial-port-output instance-name \ | grep -e "Out of memory: Kill process" -e "Kill process" -e "Memory cgroup out of memory" -e "oom" If the VM is out of memory, connect to the serial console to troubleshoot.
GCP VM
The SSH connection failed after upgrading the VM's kernel.
To resolve this issue, do the following: 1. Mount the disk to another VM. 2. Update the grub.cfg file to use the previous version of the kernel. 3. Attach the disk to the unresponsive VM. 4. Verify that the status of the VM is RUNNING by using the gcloud compute instances describe command. 5. Reinstall the kernel. 6. Restart the VM. Alternatively, if you created a snapshot of the boot disk before upgrading the VM, use the snapshot to create a VM.
GCP VM
Connection via Cloud Identity-Aware Proxy Failed
To resolve this issue, create a firewall rule on port 22 that allows ingress traffic from Identity-Aware Proxy.
GCP VM
ERROR: (gcloud.compute.ssh) Could not SSH into the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.
This error can occur for several reasons. The following are some of the most common causes of the errors: 1. You tried to connect to a Windows VM that doesn't have SSH installed. To resolve this issue, follow the instructions to Enable SSH for Windows on a running VM. 2. The OpenSSH Server (sshd) isn't running or isn't configured properly. The sshd provides secure remote access to the system via SSH protocol. If it's misconfigured or not running, you can't connect to your VM via SSH. To resolve this issue, review OpenSSH Server configuration for Windows Server and Windows to ensure that sshd is set up correctly.
GCP VM
ERROR: (gcloud.compute.ssh) FAILED_PRECONDITION: The specified username or UID is not unique within given system ID.
This error occurs when OS Login attempts to generate a username that already exists within an organization. This is common when a user account is deleted and a new user with the same email address is created shortly after. After a user account is deleted, it takes up to 48 hours to remove the user's POSIX information. To resolve this issue, do one of the following: 1. Restore the deleted account. 2. Remove the account's POSIX information before deleting the account.
GCP VM
Error message: "code": "RESOURCE_OPERATION_RATE_EXCEEDED", "message": "Operation rate exceeded for resource 'projects/project-id/zones/zone-id/disks/disk-name'. Too frequent operations from the source resource."
Resolution: To create multiple disks from a snapshot, use the snapshot to create an image then create your disks from the image: Create an image from the snapshot. Create persistent disks from the image. In the Google Cloud console, select Image as the disk Source type. With the gcloud CLI, use the image flag. In the API, use the sourceImage parameter.
GCP VM
Error message: The resource 'projects/PROJECT_NAME/zones/ZONE/RESOURCE_TYPE/RESOURCE_NAME' already exists"
Resolution: Retry your creation request with a unique resource name.
GCP VM
Error message: Could not fetch resource: - The selected machine type (MACHINE_TYPE) has a required CPU platform of REQUIRED_CPU_PLATFORM. The minimum CPU platform must match this, but was SPECIFIED_CPU_PLATFORM.
Resolution: 1. To learn about which CPU platform your machine type supports, review CPU platforms. 2. Retry your request with a supported CPU platform.
GCP VM
Error Message: Invalid value for field 'resource.sourceMachineImage': Updating 'sourceMachineImage' is not supported
Resolution: 1. Make sure that your VM supports the processor of the new machine type. For more information about the processors supported by different machine types, see Machine family comparison. 2. Try to change the machine type by using the Google Cloud CLI.
GCP VM
ERROR: Registration failed: Registering system to registration proxy https://smt-gce.susecloud.net command '/usr/bin/zypper --non-interactive refs Python_3_Module_x86_64' failed Error: zypper returned 4 with 'Problem retrieving the repository index file for service 'Python_3_Module_x86_64': Timeout exceeded when accessing 'https://smt-gce.susecloud.net/services/2045/repo/repoindex.xml?credentials=Python_3_Module_x86_64'.
To resolve this issue, review the Cloud NAT configuration to verify that the minimum ports per VM instance parameter is set to at least 160.
GCP VM
ERROR: (gcloud.compute.instances.set-machine-type) Could not fetch resource: Invalid resource usage: 'Requested boot disk architecture (X86_64) is not compatible with machine type architecture (ARM64).'
Resolution: Make sure that your VM supports the processor of the new machine type. For more information about the processors supported by different machine types, see Machine family comparison. Try to change the machine type by using the Google Cloud CLI. If you switch from an x86 machine type to an Arm T2A machine type, you might receive an 'INVALID_RESOURCE_USAGE' error indicating that your disk type is not compatible with an Arm machine type. Create a new T2A Arm instance using a compatible Arm OS and disk.
GCP VM
The notification states that an unapproved resource is being used: "Machine type architecture (ARM64) is not compatible with requested boot disk architecture (X86_64)."
To resolve this issue, try one of the following: 1. If you are using a zonal MIG, use a regional MIG instead. 2. Create multiple MIGs and split your workload across them—for example by adjusting your load balancing configuration. 3. If you still need a bigger group, contact support to make a request.
GCP VM
Can't move a VM to a sole-tenant node.
Solution: 1. A VM instance with a specified minimum CPU platform can't be moved to a sole-tenant node by updating VM tenancy. To move a VM to a sole-tenant node, remove the minimum CPU platform specification by setting it to automatic. 2. Because each sole-tenant node uses a specific CPU platform, no VM running on the node can specify a minimum CPU platform. Before you can move a VM to a sole-tenant node by updating its tenancy, you must set the VM's --min-cpu-platform flag to AUTOMATIC.
GCP VM
Error Message: No feasible nodes found for the instance given its node affinities and other constraints.
Specify values for the minimum number of CPUs for each VM so that the total for all VMs does not exceed the number of CPUs specified by the sole-tenant node type.
GCP Fire Store
ABORTED ERROR: Too much contention on these datastore entities. Please try again.
To resolve this issue: 1. For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease. 2. Hot-spots limit the ability of Firestore to scale up, review designing for scale to identify hot-spots. 3. Review data contention in transactions and your usage of transactions. 4. Reduce the write rate to individual documents.
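Alongside those steps, clients commonly retry ABORTED errors with exponential backoff. Here is a hedged Python sketch; the document path, payload, and delay values are made up, and google.api_core.exceptions.Aborted is the exception the Python client raises for this error:

import random
import time

from google.api_core import exceptions
from google.cloud import firestore

db = firestore.Client()

def write_with_backoff(doc_path: str, data: dict, max_attempts: int = 5) -> None:
    # Retry a contended write with exponential backoff plus jitter (illustrative values).
    for attempt in range(max_attempts):
        try:
            db.document(doc_path).set(data)
            return
        except exceptions.Aborted:
            if attempt == max_attempts - 1:
                raise
            time.sleep((2 ** attempt) + random.random())

write_with_backoff("counters/page-views", {"count": firestore.Increment(1)})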
GCP Fire Store
RESOURCE_EXHAUSTED Error: Some resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space.
To resolve this issue: Wait for the daily reset of your free tier quota or enable billing for your project.
GCP Fire Store
INVALID_ARGUMENT: The value of property field_name is longer than 1048487 bytes
To resolve this issue: 1. For indexed field values, split the field into multiple fields. If possible, create an un-indexed field and move data that doesn't need to be indexed into the un-indexed field. 2. For un-indexed field values, split the field into multiple fields or implement compression for the field value.
GCP Fire Store
Firestore : “Error: 9 FAILED_PRECONDITION: The Cloud Firestore API is not available for Cloud Datastore projects” [duplicate]
Three possible solutions: 1. Firestore is not set as your datastore. Go to https://console.cloud.google.com/firestore/; you'll notice a popup saying you need to initialize Firestore in Native mode. Once that's done, Firestore is initialized. 2. You are logged into the wrong account in the gcloud SDK (you're on localhost). In your terminal, switch accounts or create a new configuration that points to the correct account and project; run gcloud init on the machine you are using the service account on. 3. The Firestore database has not yet been created. Open https://console.firebase.google.com/, add/create your GCP project, choose a billing plan, and create the database.
GCP Fire Store
I am trying to create a Vue Composable that uploads a file to Firebase Storage. To do this I am using the modular Firebase 9 version. But my current code does not upload anything, and instead returns this error: FirebaseError: Firebase Storage: An unknown error occurred, please check the error payload for server response. (storage/unknown)
To fix that, take these steps: 1. Go to https://console.cloud.google.com 2. Select your project in the top blue bar (you will probably need to switch to the "all" tab to see your Firebase projects). 3. Scroll down the left menu and select "Cloud Storage". 4. Select all your buckets, then click "Show info panel" in the top right-hand corner. 5. Click "ADD PRINCIPAL". 6. Add "[email protected]" to the New principals box, give it the role of "Storage Admin", and save.
GCP Fire Store
How can I fix Firebase/firestore error in React native?
The issue was fixed by downgrading Firebase to version 6.0.2; cleaning the project's cache was the solution. Cleaning instructions: in the /android folder, run ./gradlew clean. Also use the https://www.npmjs.com/package/react-native-clean-project package.
GCP Fire Store
Firestore error : Stream closed with status : PERMISSION_DENIED
Replace your rules with this and try: rules_version = '2'; service cloud.firestore { match /databases/{database}/documents { match /{multiSegment=**} { allow read, write; } } }
GCP Fire Store
How can I fix my firestore database setup error?
Most likely snapshot.docChanges() is an empty array, so snapshot.docChanges()[0].doc.data() then fails. You'll want to check for an empty result set before accessing a member by its index like that.
GCP Fire Store
how do I fix my flutter app not building with cloud firestore?
I had the same issue and noticed that my firebase_core dependency in pubspec.yaml was not updated. Use firebase_core: ^1.20.0 and it works. Do not forget to run flutter clean.
GCP Fire Store
How do I fix "Could not reach Cloud Firestore Backend" error?
If you are using Android Studio: go to AVD Manager > Your Virtual Devices, open the drop-down on the right-hand side of the device, select Wipe Data, then Cold Boot. This should fix your issue.
GCP Fire Store
How to solve FirebaseError: Expected first argument to collection() to be a CollectionReference, a DocumentReference or FirebaseFirestore problem?
You need to use either 'firebase/firestore' OR 'firebase/firestore/lite' in your imports, not both in the same project. In your case, the firebase.ts file is using: import { getFirestore } from 'firebase/firestore/lite' And in your hook: import { doc, onSnapshot, Unsubscribe } from 'firebase/firestore' So you're initialising the lite version but using the full version afterwards. Keep in mind that both have their benefits, but I would suggest you pick one and just use it. Then the error will be gone.
GCP Fire Store
I am getting error while uploading date data to firestore in flutter
Firebase uses the ISO 8601 format to save dates. Say your birthday is 08-11-2004; then your code would be: final date = DateTime(2004, 11, 8).toIso8601String(); Now you can upload the date variable to Firestore in Date format.