TECHNOLOGY | QUESTION | SOLUTION |
---|---|---|
Azure - AML | Azure Machine Learning - running code in curated environment
gives ModuleNotFoundError: No module named 'azure.ai' | You can try to upgrade pip and then install the azure-ai-ml package using these commands:
pip install --upgrade pip
pip install azure-ai-ml |
Azure - AML | How to solve an error in model profiling where the profile attribute provided by
the model library is not recognized? | Here are a few steps to resolve this error:
1. Check the library documentation: Make sure that the library you are using to profile the model has a profile attribute and that you are using it correctly.
2. Verify that you have imported the correct library: Check if you have imported the correct library and that the Model class you are using is the one from the library you intended to use.
3. Rename your custom class: If you have a custom Model class with the same name as the one from the library, consider renaming your custom class to avoid any name collisions. |
Azure - AML | How to access the data used during the azure automl pipeline
training? | You can access the data that was used during the training of an Azure AutoML
model by using the TrainingData property of the Model object in the Azure Machine Learning SDK. |
Azure - AML | Can I run multiple jobs/experiments on a single node using a
Compute Cluster? | Use Azure Batch as the compute target in AzureML. With Azure Batch, you can
create a pool of compute nodes and run multiple jobs/experiments concurrently on those nodes. Azure Batch automatically manages the allocation of resources to each job/experiment, so you don't need to worry about dividing your tasks into mini batches. |
Azure - AML | How To Connect To Managed Instance from Machine Learning Studio | To connect to an Azure SQL Database from Azure Machine Learning studio, you need to follow these steps:
1. Create an Azure SQL Database and make sure that it is accessible from your Azure Machine Learning workspace.
2. In Azure Machine Learning studio, go to the Data tab and click on the +New button.
3. Select the SQL Database option and provide the necessary details, such as the server name, database name, and authentication method.
4. Click on the Connect button to establish a connection to the Azure SQL Database.
5. Once the connection is established, you can use the SQL Database as a data source for your machine learning models in Azure Machine Learning studio. |
GCP Cloud Storage | I tried to create a bucket but received the following error:
409 Conflict. Sorry, that name is not available. Please try a different one. | The bucket name you tried to use (e.g. gs://cats or gs://dogs) is
already taken. Cloud Storage has a global namespace so you may not name a bucket with the same name as an existing bucket. Choose a name that is not being used. |
GCP Cloud Storage | How can I serve my content over HTTPS without using a load balancer | You can serve static content through HTTPS using direct URIs such as https://storage.googleapis.com/my-bucket/my-object. For other options to serve your content through a custom domain over SSL, you can:
1. Use a third-party Content Delivery Network with Cloud Storage.
2. Serve your static website content from Firebase Hosting instead of Cloud Storage. |
GCP Cloud Storage | I get an Access denied error message for a web page served by my
website | Check that the object is shared publicly.
If you previously uploaded and shared an object, but then upload a new version of it, then you must reshare the object publicly. This is because the public permission is replaced with the new upload. |
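A minimal sketch of resharing the object publicly with gsutil, assuming placeholder bucket and object names:
# make a single object publicly readable again after re-uploading it
gsutil acl ch -u AllUsers:R gs://your-bucket/your-object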
GCP Cloud Storage | I get an error when I attempt to make my data public | Make sure that you have the setIamPolicy permission for your object or bucket. This permission is granted, for example, in the Storage Admin role. If you have the setIamPolicy permission and you still get an error, your bucket might be subject to public access prevention, which does not allow access to allUsers or allAuthenticatedUsers. Public access prevention might be set on the bucket directly, or it might be enforced through an organization policy that is set at a higher level.
|
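As a hedged illustration (the bucket name is a placeholder), you can check whether public access prevention is enforced and, if allowed, grant public read access at the bucket level with gsutil:
gsutil pap get gs://your-bucket                         # shows enforced or inherited
gsutil iam ch allUsers:objectViewer gs://your-bucket    # grants public read if permitted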
GCP Cloud Storage | I am prompted to download my page's content, instead of being able to
view it in my browser. | If you specify a MainPageSuffix as an object that does not have a web
content type, then instead of serving the page, site visitors are prompted to download the content. To resolve this issue, update the content-type metadata entry to a suitable value, such as text/html. |
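A minimal example of fixing the content type with gsutil (the object path is a placeholder):
gsutil setmeta -h "Content-Type:text/html" gs://your-bucket/index.html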
GCP Cloud Storage | I'm seeing increased latency when uploading or downloading | Use the gsutil perfdiag command to run performance diagnostics from the affected environment. Consider the following common causes of upload and download latency:
CPU or memory constraints: The affected environment's operating system should have tooling to measure local resource consumption such as CPU usage and memory usage.
Disk IO constraints: As part of the gsutil perfdiag command, use the rthru_file and wthru_file tests to gauge the performance impact caused by local disk IO.
Geographical distance: Performance can be impacted by the physical separation of your Cloud Storage bucket and affected environment, particularly in cross-continental cases. Testing with a bucket located in the same region as your affected environment can identify the extent to which geographic separation is contributing to your latency.
If applicable, the affected environment's DNS resolver should use the EDNS(0) protocol so that requests from the environment are routed through an appropriate Google Front End. |
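For illustration, a perfdiag run that includes the disk-bound tests mentioned above (the bucket name is a placeholder):
gsutil perfdiag -t rthru_file,wthru_file gs://your-bucket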
GCP Cloud Storage | I'm seeing increased latency when accessing Cloud Storage with gcloud
storage, gsutil, or one of the client libraries. | The CLIs and the client libraries automatically retry requests when it's
useful to do so, and this behavior can effectively increase latency as seen from the end user. Use the Cloud Monitoring metric storage.googleapis.com/api/request_count to see if Cloud Storage is consistently serving a retryable response code, such as 429 or 5xx. |
GCP Cloud Storage | Do I need to enable billing if I was granted access to someone else's
bucket? | No, in this case another individual has already set up a Google Cloud project and either granted you access to the entire project or to one of their buckets and the objects it contains. Once you authenticate, typically with your Google account, you can read or write data according to the access that you were granted.
|
GCP Cloud Storage | While performing a resumable upload, I received an error with the
message Failed to parse Content-Range header. | The value you used in your Content-Range header is invalid. For example, Content-Range: */* is invalid and instead should be specified as Content-Range: bytes */*. If you receive this error, your current resumable upload is no longer active, and you must start a new resumable upload.
|
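As a sketch, this is how a resumable upload status check sends a correctly formatted Content-Range header (SESSION_URI is a placeholder for the session URI returned when the upload was initiated):
curl -i -X PUT -H "Content-Length: 0" -H "Content-Range: bytes */*" "SESSION_URI"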
GCP Cloud Storage | Requests to a public bucket directly, or via Cloud CDN, are failing with a
HTTP 401: Unauthorized and an Authentication Required response. | Check that your client, or any intermediate proxy, is not adding an
Authorization header to requests to Cloud Storage. Any request with an Authorization header, even if empty, is validated as if it were an authentication attempt. |
GCP Cloud Storage | How to get data that is older than 6 weeks from GCP metrics explorer
API | By default, the Monitoring API stores data for up to 6 weeks. If you
need data for longer than 6 weeks, the data retention policy can be extended up to 24 months. There is no additional cost for this extended retention policy. |
GCP Cloud Storage | How can I maximize the availability of my data? | Consider storing your data in a multi-region or dual-region bucket location if high availability is a top requirement. All data is stored geo-redundantly in these locations, which means your data is stored in at least two geographically separated regions. In the unlikely event of a region-wide outage, such as one caused by a natural disaster, buckets in geo-redundant locations remain available, with no need to change storage paths. Also,
because object listing in a bucket is always strongly consistent, regardless of bucket location, there is a zero recovery time objective (RTO) in most circumstances for dual- and multi-regions. Note that to achieve uninterrupted service, other products, such as Compute Engine instances, must be set up to be geo-redundant as well. |
GCP Cloud Storage | How can I get a summary of space usage for a Cloud Storage bucket? | You can use Cloud Monitoring for daily monitoring of your bucket's byte
count, or you can use the gsutil du command to get the total bytes in your bucket at a given moment. For more information, see Getting a bucket's size. |
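A minimal example, assuming a placeholder bucket name:
gsutil du -s -h gs://your-bucket   # total bytes in the bucket, human-readable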
GCP Cloud Storage | I created a bucket, but don't remember which project I created it in. How can I find it? | For most common Cloud Storage operations, you only need to specify the relevant bucket's name, not the project associated with the bucket. In general, you only need to specify a project identifier when creating a bucket or listing buckets in a project. For more information, see When to specify a project.
To find which project contains a specific bucket:
If you are searching over a moderate number of projects and buckets, use the Google Cloud console, select each project, and view the buckets it contains.
Otherwise, go to the storage.buckets.get page in the API Explorer and enter the bucket's name in the bucket field. When you click Authorize and Execute, the associated project number appears as part of the response. To get the project name, use the project number in the following terminal command:
gcloud projects list | grep PROJECT_NUMBER |
GCP Cloud Storage | How do I prevent race conditions for my Cloud Storage resources? | The easiest way to avoid race conditions is to use a naming scheme that
avoids more than one mutation of the same object name. Often such a design is not feasible, in which case you can use preconditions in your request. Preconditions allow the request to proceed only if the actual state of the resource matches the criteria specified in the preconditions. |
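A hedged example of a precondition with gsutil, using the x-goog-if-generation-match request header (paths are placeholders); generation 0 means the write succeeds only if the object does not already exist:
gsutil -h "x-goog-if-generation-match:0" cp local-file gs://your-bucket/your-object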
GCP Cloud Storage | How do I Reset Google Cloud? | If you need to reset Google Cloud for any reason, you can do so by following the steps below.
1. Go to the Google Cloud Console (https://console.cloud.google.com/) and sign in with your Google Account.
2. From the console dashboard, select the project you want to reset.
3. Click the gear icon in the top-right corner to access the project settings.
4. Scroll down to the "Shut Down" section and click the "Shut Down" button.
5. Confirm that you want to close the project by typing the Project ID in the text field provided.
6. Click the "Shut Down" button again to confirm the action. |
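The same shutdown can also be done from the CLI; a minimal sketch, assuming a placeholder project ID:
gcloud projects delete YOUR_PROJECT_ID   # shuts down the project (scheduled for deletion after a recovery period)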
GCP Cloud Storage | Unable to view or edit a shared Google Drive item. | Ask the owner to grant you access to the item; once access is granted, the issue should be resolved. |
GCP Cloud Storage | Unable to access the latest version of Google Cloud. | Update your Google Cloud tools
to the latest version to resolve the issue. |
GCP Cloud Storage | Google Cloud is not able to perform print operations | Check for printer updates and install them
to fix the issue. |
GCP Cloud Storage | I should have permission to access a certain bucket or object, but when I attempt to do so, I get a 403 - Forbidden error with a message that is similar to: [email protected] does not have storage.objects.get access to the Google Cloud Storage object. | You are missing an IAM permission for the bucket or object that is required to complete the request. If you expect to be able to make the request but cannot, perform the following checks:
1. Is the grantee referenced in the error message the one you expected? If the error message refers to an unexpected email address or to "Anonymous caller", then your request is not using the credentials you intended. This could be because the tool you are using to make the request was set up with the credentials from another alias or entity, or it could be because the request is being made on your behalf by a service account.
2. Is the permission referenced in the error message one you thought you needed? If the permission is unexpected, it's likely because the tool you're using requires additional access in order to complete your request. For example, in order to bulk delete objects in a bucket, gcloud must first construct a list of objects in the bucket to delete. This portion of the bulk delete action requires the storage.objects.list permission, which might be surprising, given that the goal is object deletion, which normally requires only the storage.objects.delete permission. If this is the cause of your error message, make sure you're granted IAM roles that have the additional necessary permissions.
3. Are you granted the IAM role on the intended resource or parent resource? For example, if you're granted the Storage Object Viewer role for a project and you're trying to download an object, make sure the object is in a bucket that's in the project; you might inadvertently have the Storage Object Viewer permission for a different project. |
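A hedged sketch of the first two checks from the command line (the project ID and user email are placeholders):
gcloud auth list                                   # shows which credentials are active
gcloud projects get-iam-policy YOUR_PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:user:you@example.com" \
    --format="table(bindings.role)"                # roles granted to that identity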
GCP Cloud SQL | Lost connection to MySQL server during query when dumping table | The source may have become unavailable, or the dump contained packets too large.
Make sure the external primary is available to connect, or use mysqldump with the max_allowed_packet option. |
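A minimal sketch of a dump command with a larger packet limit (the host, user, and database names are placeholders):
mysqldump --host=SOURCE_IP --user=USER --password \
    --max-allowed-packet=1G --single-transaction --databases YOUR_DB > dump.sql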
GCP Cloud SQL | The initial data migration was successful, but no data is being replicated. | One possible root cause could be your source database has defined replication flags which result in some or all database changes not being replicated over.
Make sure the replication flags such as binlog-do-db, binlog-ignore-db, replicate-do-db or replicate-ignore-db are not set in a conflicting way.
Run the command show master status on the primary instance to see the current settings. |
GCP Cloud SQL | The initial data migration was successful but data replication stops
working after a while. | Things to try:
1. Check the replication metrics for your replica instance in the Cloud Monitoring section of the Google Cloud console.
2. The errors from the MySQL IO thread or SQL thread can be found in Cloud Logging in the mysql.err log files.
3. The error can also be found when connecting to the replica instance. Run the command SHOW SLAVE STATUS, and check for the following fields in the output:
Slave_IO_Running
Slave_SQL_Running
Last_IO_Error
Last_SQL_Error |
GCP Cloud SQL | I am getting an error as mysqld check failed: data disk is full. | The data disk of the replica instance is full.
Increase the disk size of the replica instance. You can either manually increase the disk size or enable auto storage increase. |
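A hedged example of both options with gcloud (the instance name and size are placeholders):
gcloud sql instances patch REPLICA_NAME --storage-size=100GB    # manual increase
gcloud sql instances patch REPLICA_NAME --storage-auto-increase # enable automatic growth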
GCP Cloud SQL | Error message: The slave is connecting ... master has purged binary logs
containing GTIDs that the slave requires. | The primary Cloud SQL instance has automatic backups and binary logs and point-in-time recovery is enabled, so it should have enough logs for the replica to be able to catch up. However, in this case although the binary logs exist, the replica doesn't know which row to start reading from.
Create a new dump file using the correct flag settings, and configure the external replica using that file
1. Connect to your mysql client through a Compute Engine instance.
2. Run mysqldump and use the --master-data=1 and --flush-privileges flags.
Important: Do not include the --set-gtid-purged=OFF flag.
Learn more.
3. Ensure that the dump file just created contains the SET @@GLOBAL.GTID_PURGED='...' line.
4. Upload the dump file to a Cloud Storage bucket and configure the replica using the dump file. |
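A sketch of step 2 with placeholder connection details; note that --set-gtid-purged is deliberately left at its default:
mysqldump --host=PRIMARY_IP --user=USER --password --databases YOUR_DB \
    --master-data=1 --flush-privileges --hex-blob --single-transaction \
  | gzip | gsutil cp - gs://your-bucket/dump.sql.gz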
GCP Cloud SQL | After enabling a flag the instance loops between panicking and crashing. | Contact customer support to request flag removal followed by a hard
drain. This forces the instance to restart on a different host with a fresh configuration without the undesired flag or setting. |
GCP Cloud SQL | Getting the error message Bad syntax for dict arg when trying to set a
flag. | Complex parameter values, such as comma-separated lists, require special treatment when used with gcloud commands. |
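A hedged example using gcloud's alternate-delimiter escaping (see gcloud topic escaping) so a comma inside a flag value isn't treated as a list separator; the instance name and flag values are placeholders:
# ^:^ makes ':' the list separator, so the comma inside sql_mode is preserved
gcloud sql instances patch MY_INSTANCE \
    --database-flags=^:^sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES:max_connections=250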
GCP Cloud SQL | HTTP Error 409: Operation failed because another operation was already
in progress. | There is already a pending operation for your instance. Only one
operation is allowed at a time. Try your request after the current operation is complete. |
GCP Cloud SQL | The import operation is taking too long. | Too many active connections can interfere with import operations.
Close unused operations. Check the CPU and memory usage of your Cloud SQL instance to make sure there are plenty of resources available. The best way to ensure maximum resources for the import is to restart the instance before beginning the operation.
A restart:
Closes all connections.
Ends any tasks that may be consuming resources. |
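For example (the instance name is a placeholder):
gcloud sql instances restart MY_INSTANCE   # closes all connections before starting the import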
GCP Cloud SQL | An import operation fails with an error that a table doesn't exist. | Tables can have foreign key dependencies on other tables, and depending on the order of operations, one or more of those tables might not yet exist during the import operation.
Things to try:
Add the following line at the start of the dump file:
SET FOREIGN_KEY_CHECKS=0;
Additionally, add this line at the end of the dump file:
SET FOREIGN_KEY_CHECKS=1;
These settings deactivate data integrity checks while the import operation is in progress, and reactivate them after the data is loaded. This doesn't affect the integrity of the data on the database, because the data was already validated during the creation of the dump file. |
GCP Cloud SQL | Getting the error Operations information is not found in logs | You want to find more information about an operation.
For example, a user was deleted but you can't find out who did it. The logs show the operation started but don't provide any more information. You must enable audit logging for detailed and personal identifying information (PII) like this to be logged. |
GCP Cloud SQL | Slow performance after restarting MySQL. | Cloud SQL allows caching of data in the InnoDB buffer pool. However,
after a restart, this cache is always empty, and all reads require a round trip to the backend to get data. As a result, queries can be slower than expected until the cache is filled. |
GCP Cloud SQL | I am unable to manually delete binary logs. | Binary logs cannot be manually deleted. Binary logs are automatically
deleted with their associated automatic backup, which generally happens after about seven days. |
GCP Cloud SQL | How do I find information about temporary files. | A file named ibtmp1 is used for storing temporary data. This file is reset upon database restart. To find information about temporary file usage, connect to the database and execute the following query:
SELECT * FROM INFORMATION_SCHEMA.FILES WHERE TABLESPACE_NAME='innodb_temporary'\G |
GCP Cloud SQL | How do I find out about table sizes. | This information is available in the database.
Connect to the database and execute the following query:
SELECT TABLE_SCHEMA, TABLE_NAME, sum(DATA_LENGTH+INDEX_LENGTH)/pow(1024,2) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA NOT IN ('PERFORMANCE_SCHEMA','INFORMATION_SCHEMA','SYS','MYSQL') GROUP BY TABLE_SCHEMA, TABLE_NAME; |
GCP Cloud SQL | My data is being automatically deleted. | Most likely a script is running somewhere in your environment.
Look in the logs around the time of the deletion and see if there's a rogue script running from a dashboard or another automated process. |
GCP Cloud SQL | When I try to delete a user, I get an error message that the user cannot be
deleted. | The user probably has objects in the database that depend on it. You need to drop those objects or reassign them to another user.
Find out which objects are dependent on the user, then drop or reassign those objects to a different user. |
GCP Cloud SQL | Unable to create read replica - unknown error. | There's probably a more specific error in the log files. Inspect the logs in Cloud Logging to find the actual error.
If the error is: set Service Networking service account as servicenetworking.serviceAgent role on consumer project, then disable and re-enable the Service Networking API. This action creates the service account necessary to continue with the process. |
GCP Cloud SQL | Changing parallel replication flags results in an error. | An incorrect value is set for one or more of these flags.
On the primary instance that's displaying the error message, set the parallel replication flags:
1. Modify the binlog_transaction_dependency_tracking and transaction_write_set_extraction flags:
binlog_transaction_dependency_tracking=COMMIT_ORDER
transaction_write_set_extraction=OFF
2. Add the slave_pending_jobs_size_max flag:
slave_pending_jobs_size_max=33554432
3. Modify the transaction_write_set_extraction flag:
transaction_write_set_extraction=XXHASH64
4. Modify the binlog_transaction_dependency_tracking flag:
binlog_transaction_dependency_tracking=WRITESET |
GCP Cloud SQL | Getting an error when deleting an instance. | If deletion protection is enabled for an instance, confirm your plans to
delete the instance. Then disable deletion protection before deleting the instance. |
GCP Cloud SQL | I am not able to see the current operation's status. | The Google Cloud console reports only success or failure when the operation is done. It isn't designed to show warnings or other updates.
Run the gcloud sql operations list command to list all operations for the given Cloud SQL instance. |
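A minimal example, assuming a placeholder instance name:
gcloud sql operations list --instance=MY_INSTANCE --limit=10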
GCP Functions | Deployment failure: Insufficient permissions to (re)configure a trigger
(permission denied for bucket <BUCKET_ID>). Please, give owner permissions to the editor role of the bucket and try again. | Reset this service account to the default role.
or
Grant the runtime service account the cloudfunctions.serviceAgent role.
or
Grant the runtime service account the storage.buckets.{get, update} and the resourcemanager.projects.get permissions. |
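A hedged sketch of granting the service-agent role from the CLI (the project ID and service-account email are placeholders):
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:RUNTIME_SA@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/cloudfunctions.serviceAgent"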
GCP Functions | Function deployment fails while executing function's global scope | For a more detailed error message, look into your function's build logs,
as well as your function's runtime logs. If it is unclear why your function failed to execute its global scope, consider temporarily moving the code into the request invocation, using lazy initialization of the global variables. This allows you to add extra log statements around your client libraries, which could be timing out on their instantiation (especially if they are calling other services), or crashing/throwing exceptions altogether. Additionally, you can try increasing the function timeout. |
GCP Functions | When a function is attempted to be deployed, its global scope is used. | 1. Disable Lifecycle Management on the buckets required by Container Registry.
2. Delete all the images of affected functions. You can access build logs to find the image paths. Reference script to bulk delete the images. Note that this does not affect the functions that are currently deployed.
3. Redeploy the functions. |
GCP Functions | Serving permission error due to "allow internal traffic only" configuration | You can:
1. Ensure that the request is coming from your Google Cloud project or VPC Service Controls service perimeter.
or
2. Change the ingress settings to allow all traffic for the function. |
GCP Functions | Getting error as your client does not have permission to the requested URL | Make sure that your requests include an Authorization:
Bearer ID_TOKEN header, and that the token is an ID token, not an access or refresh token. If you are generating this token manually with a service account's private key, you must exchange the self-signed JWT token for a Google-signed Identity token, following this guide. |
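A minimal sketch of an authenticated call, assuming gcloud is logged in as an identity allowed to invoke the function (the URL is a placeholder):
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
    https://REGION-PROJECT_ID.cloudfunctions.net/FUNCTION_NAME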
GCP Functions | Attempt to invoke function using curl redirects to Google login page | Make sure you specify the name of your function correctly. You can
always check using gcloud functions call which returns the correct 404 error for a missing function. |
GCP Functions | error message
In Cloud Logging logs: "Infrastructure cannot communicate with function.
There was likely a crash or deadlock in the user-provided code." | Different runtimes can crash under different scenarios. To find the root cause, output detailed debug level logs, check your application logic, and test for edge cases.
The Cloud Functions Python37 runtime currently has a known limitation
on the rate that it can handle logging. If log statements from a Python37 runtime instance are written at a sufficiently high rate, it can produce this error. Python runtime versions >= 3.8 do not have this limitation. We encourage users to migrate to a higher version of the Python runtime to avoid this issue. |
GCP Functions | Function stopping in mid-execution, or continues running after my code
finishes | If your function terminates early, you should make sure all your function's asynchronous tasks have been completed before doing any of the following:
1. returning a value
2. resolving or rejecting a returned Promise object (Node.js functions only)
3. throwing uncaught exceptions and/or errors
4. sending an HTTP response
5. calling a callback function
If your function fails to terminate once all asynchronous tasks have completed, you should verify that your function is correctly signaling Cloud Functions once it has completed. In particular, make sure that you perform one of the operations listed above as soon as your function has finished its asynchronous tasks. |
GCP Functions | Getting an error that a user with the Project Viewer or Cloud Functions role cannot
deploy a function | Assign the user an additional role, the Service Account User IAM role
(roles/iam.serviceAccountUser), scoped to the Cloud Functions runtime service account. |
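A hedged example of granting that role on the default runtime service account (the project ID and user email are placeholders):
gcloud iam service-accounts add-iam-policy-binding \
    YOUR_PROJECT_ID@appspot.gserviceaccount.com \
    --member="user:deployer@example.com" \
    --role="roles/iam.serviceAccountUser"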
GCP Functions | Deployment service account missing the Service Agent role when
deploying functions | Reset this service account to the default role. |
GCP Functions | Deployment service account missing Pub/Sub permissions when
deploying an event-driven function | You can:
Reset this service account to the default role.
or
Grant the pubsub.subscriptions.* and pubsub.topics.* permissions to your service account manually. |
GCP Functions | Getting the error message default runtime service account does not exist | 1. Specify a user-managed runtime service account when deploying your 1st gen functions.
or
2. Recreate the default service account @appspot.gserviceaccount.com for your project. |
GCP Functions | User with Project Editor role cannot make a function public | 1. Assign the deployer either the Project Owner or the Cloud Functions Admin role, both of which contain the cloudfunctions.functions.setIamPolicy permission.
or
2. Grant the permission manually by creating a custom role. |
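As an illustration, the permission can also be granted per function from the CLI (the function name and region are placeholders; 1st gen shown):
gcloud functions add-iam-policy-binding FUNCTION_NAME --region=REGION \
    --member="allUsers" --role="roles/cloudfunctions.invoker"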
GCP Functions | Is there a way to keep track of dates on Firestore using cloud functions | One approach would be to create a scheduled function that scans your database for documents to update every minute or every five minutes. This is a good approach for popular applications with a consistent usage rate.
To improve efficiency, you can use a Firestore onCreate trigger to defer a Cloud Task Function to update the document. As each purchase is made, a Cloud Task can be scheduled to execute in 72 hours from the purchase date where it sets promo to true. This has the benefit of not running jobs that don't have any documents to update. |
GCP Functions | Is it possible to route Google Cloud Functions egress traffic through
multiple rotating IPs? | 1. Create a Serverless VPC Connector
2. Create a Cloud NAT Gateway and have it include the subnet that you assigned to the Serverless VPC Connector
3. Configure your Cloud Function to use the Serverless VPC Connector for all its egress
Now that specific Cloud Function using that specific VPC Connector will route its outbound traffic through that specific Cloud NAT Gateway.
You can repeat this process as many times as necessary. To make this work with your Cloud Function you will have to deploy them as multiple Cloud Functions rather than a single Cloud Function. |
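A hedged outline of those steps with gcloud, using placeholder names and a dedicated connector subnet; exact flags may need adjusting for your setup:
# 1. connector on its own subnet
gcloud compute networks vpc-access connectors create my-connector \
    --region=REGION --subnet=my-connector-subnet
# 2. reserve a static IP, then create a router + NAT gateway covering that subnet
gcloud compute addresses create my-static-ip --region=REGION
gcloud compute routers create my-router --network=my-network --region=REGION
gcloud compute routers nats create my-nat --router=my-router --region=REGION \
    --nat-custom-subnet-ip-ranges=my-connector-subnet \
    --nat-external-ip-pool=my-static-ip
# 3. route all of the function's egress through the connector
gcloud functions deploy my-function --runtime=python311 --trigger-http --source=. \
    --vpc-connector=my-connector --egress-settings=all-traffic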
GCP Functions | How do I set entry point in cloud function? | In the Entry point field, enter the entry point to your function in your
source code. This is the code that will be executed when your function runs. The value of this flag must be a function name or fully-qualified class name that exists in your source code |
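The same can be set from the CLI; a minimal sketch with placeholder names:
gcloud functions deploy my-function --runtime=python311 --trigger-http \
    --entry-point=my_handler --source=.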
GCP Functions | Serverless VPC Access connector is not ready or does not exist | List your subnets to check whether your connector uses a /28 subnet mask.
If it does not, recreate or create a new connector to use a /28 subnet. Note the following considerations:
1. If you recreate the connector, you do not need to redeploy other functions. You might experience a network interruption as the connector is recreated.
2. If you create a new alternate connector, redeploy your functions to use the new connector and then delete the original connector. This method avoids network interruption. |
GCP Functions | Cloud Functions logs are not appearing in Log Explorer | Use the client library interface to flush buffered log entries before
exiting the function or use the library to write log entries synchronously. You can also synchronously write logs directly to stdout or stderr. |
GCP Functions | Cloud Functions logs are not appearing via Log Router Sink | Make sure no exclusion filter is set for
resource.type="cloud_functions" |
GCP Functions | Python GCP Cloud function connecting to Cloud SQL Error:
"ModuleNotFoundError: No module named 'google.cloud.sql'" | The error “ModuleNotFoundError: No module named 'google.cloud.sql” occurs as the google.cloud.sql module is not installed in the requirement.txt file. You can install it by using the command pip install “google.cloud.sql”
Also I would like to suggest you to check whether you have assigned the “Cloud SQL Client” role to the service account.
Also I would like to suggest you to check whether you have enabled the "Cloud SQL Admin API" within your Google cloud project.
As you already stated VPC connector and Cloud SQL instance are in the same VPC network, also make sure that they are in the same region.
Also check whether the installed packages in the requirements.txt are compatible with your python version you are using. |
GCP Functions | Unable to give Cloud Functions Admin role to my account on Firebase's
project setting | The origin of this issue is unknown. You can go to Manage roles, find
Cloud Functions Admin, and create a custom role out of it. Then you can add this custom role instead.
|
Azure Functions | When adding two timed functions to the same function app, only one of
them is triggered | This could be caused by a number of issues, such as a wrong configuration.
In my case, I had the configuration just right, but found a "feature" in Azure Functions. If adding two timed functions with the same class name and the same schedule, Azure executes one of the two functions twice. Changing the class name in one of the functions fixes the issue. |
Azure Functions | Azure Function not triggering when deployed, but works correctly in local
debugging | There are a few things you can check and try to resolve the problem:
Check the connection string: Verify that the connection string for the Event Hub trigger in the local.settings.json file and the connection string in the Azure Function App settings are identical (except for the "Endpoint" part). Make sure that the connection string in the Azure Function App settings is using the correct Event Hub namespace and Event Hub name. Check the function.json file: Ensure that your function.json file has the correct configuration for the Event Hub trigger binding. Verify the type, name, direction, eventHubName, and connection properties. |
Azure Functions | How do I access a virtual machine through point-to-site VPN from a
Function? | You can secure communication between a web app and a virtual
machine using Azure Point-to-Site VPN. The solution is to select App Service Plan under Hosting Plan. Running the Function on the App Service plan (rather than on the Consumption plan) opens up the Networking settings in the Function app settings view. |
Azure Functions | How do I set a static IP in Functions? | Deploying a function in an App Service Environment is the primary way to have static inbound and outbound IP addresses for your functions.
You can also use a virtual network NAT gateway to route outbound traffic through a public IP address that you control |
Azure Functions | How do I restrict internet access to my function? | You can restrict internet access in a couple of ways:
1. Private endpoints: Restrict inbound traffic to your function app by private link over your virtual network, effectively blocking inbound traffic from the public internet.
2. IP restrictions: Restrict inbound traffic to your function app by IP range.
Under IP restrictions, you are also able to configure Service Endpoints, which restrict your Function to only accept inbound traffic from a particular virtual network.
3. Removal of all HTTP triggers: For some applications, it's enough to simply avoid HTTP triggers and use any other event source to trigger your function.
Keep in mind that the Azure portal editor requires direct access to your running function. Any code changes through the Azure portal will require the device you're using to browse the portal to have its IP added to the approved list. But you can still use anything under the platform features tab with network restrictions in place. |
Azure Functions | How do I restrict my function app to a virtual network? | You are able to restrict inbound traffic for a function app to a virtual network using Service Endpoints. This configuration still allows the function app to make outbound calls to the internet.
To completely restrict a function such that all traffic flows through a virtual network, you can use private endpoints with outbound virtual network integration or an App Service Environment. |
Azure Functions | How can I access resources in a virtual network from a function app? | You can access resources in a virtual network from a running function by
using virtual network integration. |
Azure Functions | How can I trigger a function from a resource in a virtual network? | You are able to allow HTTP triggers to be called from a virtual network using Service Endpoints or Private Endpoint connections.
You can also trigger a function from all other resources in a virtual network by deploying your function app to a Premium plan, App Service plan, or App Service Environment. |
Azure Functions | How can I deploy my function app in a virtual network? | Deploying to an App Service Environment is the only way to create a
function app that's wholly inside a virtual network. |
Azure Functions | In the Azure portal, it says 'Azure Functions runtime is unreachable' | Besides the normal network restrictions that could prevent your
function app from accessing the storage account, there is a known issue where an App_Offline.htm file in the file system instructs the platform that your app is unreachable. It's certainly plausible, so check the Kudu site (or use az rest) to see if that file exists, remove it, and retry the operation. |
Azure Functions | Orchestration is stuck in the Pending state | Use the following steps to troubleshoot orchestration instances that remain stuck indefinitely in the "Pending" state.
1. Check the Durable Task Framework traces for warnings or errors for the impacted orchestration instance ID. A sample query can be found in the Trace Errors/Warnings section.
2. Check the Azure Storage control queues assigned to the stuck orchestrator to see if its "start message" is still there. For more information on control queues, see the Azure Storage provider control queue documentation.
3. Change your app's platform configuration version to "64 Bit". Sometimes orchestrations don't start because the app is running out of memory. Switching to a 64-bit process allows the app to allocate more total memory. This only applies to App Service Basic, Standard, Premium, and Elastic Premium plans. Free or Consumption plans do not support 64-bit processes. |
Azure Functions | "ERROR: Exception calling "Fill" with "1" argument(s): "Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding." " | Here are a few suggestions:
1. Have you tried with a simple query from Azure Function and worked (different query that executes within few seconds)? If so, then try setting CommandTimeout as 0.
2. Make sure there is network connectivity between Azure Functions and SQL Server, and that the Function App can access the SQL server. See the doc Typical causes and resolutions for the error for common causes and resolutions. Is there any VNET integration or firewall between the services? Review the networking setup of Azure Functions (https://learn.microsoft.com/en-us/azure/azure-functions/functions-networking-options?tabs=azure-cli) and use the tcpping tool to test the connectivity (Tools). |
Azure Functions | While creating the function app from the portal, the storage section is
missing | Retry the same operation by logging in to the portal from a different browser,
signing out and signing in again in the same browser, or clearing the browser cache. |
Azure Functions | How do I add or access an app.config file in Azure functions to add a
database connection string? | The best way to do this is to add a Connection String from the Azure portal:
1. From your Function App UI, click Function App Settings
2. Settings / Application Settings
3. Add connection strings
They will then be available using the same logic as if they were in a web.config, e.g.
var conn = System.Configuration.ConfigurationManager
.ConnectionStrings["MyConn"].ConnectionString; |
Azure Functions | How to rename an Azure Function?
| The UI does not directly support renaming a Function, but you can work around this using the following manual steps:
1. Stop your Function App. To do this, go under Function app settings / Go To App Service Settings, and click on the Stop button.
2. Go to Kudu Console: Function app settings / Go to Kudu (article about that)
3. In Kudu Console, go to D:\home\site\wwwroot and rename the Function folder to the new name
4. Now go to D:\home\data\Functions\secrets and rename [oldname].json to [newname].json
5. Then go to D:\home\data\Functions\sampledata and rename [oldname].dat to [newname].dat
6. Start your function app, in the same place where you stopped it above. In the Functions UI, click the refresh button in the top left corner, and your renamed function should appear. |
Azure Functions | Azure function apps logs not showing | The log window is a bit fragile and doesn't always show the logs. However, logs are also written to the log files.
You can access these logs from the Kudu console: https://[your-function-app].scm.azurewebsites.net/
From the menu, select Debug console > CMD
On the list of files, go into LogFiles > Application > Functions > Function > [Name of your function]
There you will see a list of log files. |
Azure Functions | How can I use PostgreSQL with Azure Functions without maxing out
connections? | This is the classic problem of using shared resources. You have 50 of
these resources in this case. The most effective way to support more consumers would be to reduce the time each consumer uses the resource. Reducing the Connection Idle Lifetime substantially is probably the most effective way. Increasing Timeout does help reduce errors (and is a good choice), but it doesn't increase the throughput. It just smooths out the load. Reducing Maximum Pool size is also good. |
Azure Functions | I have a queue based function app, however even after publishing
messages to queue - function does not get triggered? | Azure function expects queue messages to be base64 encoded to trigger it.
So if the message pushed to the queue is not base64 encoded, the function trigger ignores it. |
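A hedged example of pushing a correctly encoded message with the Azure CLI (the account and queue names are placeholders):
az storage message put --account-name mystorageacct --queue-name myqueue \
    --content "$(echo -n '{"id":42}' | base64)"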
Azure Functions | Azure Functions Cannot Authenticate to Storage Account | You must add the Storage Account user_impersonation permission to the
Service Principal. |
Azure Functions | How can I assign Graph Sites.ReadWrite.All permissions in Tenant B to my
Tenant A app? | There are two ways to achieve this:
Using App Registration or Federated Managed Identity
App Registration
In order to assign Graph Sites.ReadWrite.All permissions in Tenant B to your Tenant A app, you will need to create an app registration for your Azure Function in Tenant B.
Here are the steps you can follow:
1)Register your Azure Function in Tenant B: a. Sign in to the Azure portal (https://portal.azure.com/) using an account with admin privileges in Tenant B. b. Navigate to "Azure Active Directory" > "App registrations" > "New registration". c. Provide a name for your app registration (e.g., "AzFunction-TenantB"), and then click "Register".
2)Grant Graph Sites.ReadWrite.All permissions to the app registration in Tenant B: a. In the app registration page for "AzFunction-TenantB", go to "API permissions" > "Add a permission". b. Select "Microsoft Graph" and choose the "Application permissions" tab. c. Expand the "Sites" group and check the "Sites.ReadWrite.All" permission. d. Click "Add permissions" to save your changes.
3)Grant admin consent for the permissions: a. Still in the "API permissions" tab, click on the "Grant admin consent for [Tenant B]" button. You'll need to be an admin in Tenant B to perform this action.
4) Share the client ID and tenant ID with Tenant A: a. In the "Overview" tab of the "AzFunction-TenantB" app registration, make a note of the "Application (client) ID" and "Directory (tenant) ID" values.
5)Configure your Azure Function in Tenant A to use the new app registration in Tenant B: a. Sign in to the Azure portal (https://portal.azure.com/) using an account with privileges to manage your Azure Function in Tenant A. b. Go to the Azure Function App, navigate to the "Configuration" tab, and update the following values:
TENANT_B_CLIENT_ID: Set this to the "Application (client) ID" from step 4.
TENANT_B_TENANT_ID: Set this to the "Directory (tenant) ID" from step 4.
6)Update your Azure Function code to use the new app registration when calling Microsoft Graph: a. Use the new TENANT_B_CLIENT_ID and TENANT_B_TENANT_ID values when acquiring a token for Microsoft Graph. This will ensure that your Azure Function uses the app registration from Tenant B when calling the API.
Federated Managed Identity
https://svrooij.io/2022/12/16/poc-multi-tenant-managed-identity/#post
https://blog.identitydigest.com/azuread-federate-mi/
Note: You may also need to configure the necessary network and firewall settings to allow access to Tenant B from Tenant A.
You may also want to consider granting the necessary permissions to users in Tenant A to access the data in Tenant B. This can be done using Azure AD B2B collaboration. |
Azure Synapse | Queries using Azure AD authentication fail after 1 hour | The following steps can be followed to work around the problem.
1. It's recommended to switch to Service Principal, Managed Identity, or Shared Access Signature instead of using user identity for long-running queries.
2. Restarting client (SSMS/ADS) acquires new token to establish the connection. |
Azure Synapse | Query failures from serverless SQL pool to Azure Cosmos DB analytical
store | The following actions can be taken as a quick mitigation:
1. Retry the failed query. It will automatically refresh the expired token.
2. Disable the private endpoint. Before applying this change, confirm with your security team that it meets your company security policies. |
Azure Synapse | Azure Cosmos DB analytical store view propagates wrong attributes in the
column | The following actions can be taken as a quick mitigation:
1. Recreate the view by renaming the columns.
2. Avoid using views if possible. |
Azure Synapse | Failed to delete Synapse workspace & Unable to delete virtual network | The problem can be mitigated by retrying the delete operation. |
Azure Synapse | Synapse notebook connection has closed unexpectedly | Try switching your network environment, such as inside/outside corpnet, or access Synapse Notebook on another workstation.
If you can run notebook on the same workstation but in a different network environment, please work with your network administrator to find out whether the WebSocket connection has been blocked.
If you can run notebook on a different workstation but in the same network environment, please ensure you didn’t install any browser plugin that may block the WebSocket request. |
Azure Synapse | Websocket connection was closed unexpectedly. | To resolve this issue, rerun your query.
1. Try Azure Data Studio or SQL Server Management Studio for the same queries instead of Synapse Studio for further investigation.
2. If this message occurs often in your environment, get help from your network administrator. You can also check firewall settings, and check the Troubleshooting guide.
3. If the issue continues, create a support ticket through the Azure portal. |
Azure Synapse | Serverless databases aren't shown in Synapse Studio | If you don't see the databases that are created in serverless SQL pool,
check to see if your serverless SQL pool started. If serverless SQL pool is deactivated, the databases won't show. Execute any query, for example, SELECT 1, on serverless SQL pool to activate it and make the databases appear. |
Azure Synapse | Synapse Serverless SQL pool shows as unavailable | Incorrect network configuration is often the cause of this behavior. Make
sure the ports are properly configured. If you use a firewall or private endpoints, check these settings too.
Finally, make sure the appropriate roles are granted and have not been revoked. |
Azure Synapse | Can't read, list, or access files in Azure Data Lake Storage | If you use an Azure AD login without explicit credentials, make sure that your Azure AD identity can access the files in storage. To access the files, your Azure AD identity must have the Blob Data Reader permission, or permissions to List and Read access control lists (ACL) in ADLS. For more information, see Query fails because file cannot be opened.
If you access storage by using credentials, make sure that your managed identity or SPN has the Data Reader or Contributor role or specific ACL permissions. If you used a shared access signature token, make sure that it has rl permission and that it hasn't expired.
If you use a SQL login and the OPENROWSET function without a data source, make sure that you have a server-level credential that matches the storage URI and has permission to access the storage. |
Azure Synapse | query fails with the error File cannot be opened because it does not exist or it is used by another process | If your query fails with the error File cannot be opened because it does not exist or it is used by another process and you're sure that both files exist and aren't used by another process, serverless SQL pool can't access the file. This problem usually happens because your Azure AD identity doesn't have rights to access the file or because a firewall is blocking access to the file.
By default, serverless SQL pool tries to access the file by using your Azure AD identity. To resolve this issue, you must have proper rights to access the file. The easiest way is to grant yourself a Storage Blob Data Contributor role on the storage account you're trying to query. |
Azure Synapse | Query fails because it can't be executed due to current resource constraints | This message means serverless SQL pool can't execute at this moment. Here are some troubleshooting options:
Make sure data types of reasonable sizes are used.
If your query targets Parquet files, consider defining explicit types for string columns because they'll be VARCHAR(8000) by default. Check inferred data types.
If your query targets CSV files, consider creating statistics.
To optimize your query, see Performance best practices for serverless SQL pool. |
Azure Synapse | Query fails with the error message Bulk load data conversion error (type
mismatches or invalid character for the specified code page) for row n, column m [columnname] in the data file [filepath]. | To resolve this problem, inspect the file and the data types you chose. Also
check if your row delimiter and field terminator settings are correct. Inspecting can be done by reading the problematic columns as VARCHAR and examining the returned values. |
Azure Synapse | Query fails with the error message Column [column-name] of type
[type-name] is not compatible with external data type […], it's likely that a PARQUET data type was mapped to an incorrect SQL data type. | To resolve this issue, inspect the file and the data types you chose. This
mapping table helps to choose a correct SQL data type. As a best practice, specify mapping only for columns that would otherwise resolve into the VARCHAR data type. Avoiding VARCHAR when possible leads to better performance in queries. |