Question | Description | Answer | Link
---|---|---|---|
Why am I seeing duplicate messages in Amazon SQS for the same Amazon S3 event? | I'm seeing duplicate messages in Amazon Simple Queue Service (Amazon SQS) for the same Amazon Simple Storage Service (Amazon S3) event. Why is this happening? | "I'm seeing duplicate messages in Amazon Simple Queue Service (Amazon SQS) for the same Amazon Simple Storage Service (Amazon S3) event. Why is this happening?ResolutionAmazon S3 is designed to deliver notifications with a high degree of reliability using built-in backoff and retry mechanisms. In rare occasions, the retry mechanism might cause duplicate notifications for the same object event.Amazon S3 event notifications are delivered as JSON objects that contain a sequencer key. This key is a hexadecimal value that you can use to identify the event sequence of PUTs and DELETEs for the same object. Duplicate event notifications for a specific object event have the same value for the sequencer key.For applications that must identify duplicate notifications, it's a best practice to maintain a secondary database or index of S3 objects using event notifications. Then, store and compare the sequencer key values to check for duplicates as each event notification is processed.Follow" | https://repost.aws/knowledge-center/s3-duplicate-sqs-messages |
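
As a rough illustration of the sequencer-based check described above, the following Python sketch deduplicates S3 event notifications read from SQS. The in-memory set and the `is_duplicate` helper are illustrative stand-ins (not part of any AWS API); a production setup would use a durable store such as DynamoDB.

```python
import json

# A minimal sketch of the sequencer-based duplicate check described above, applied
# to S3 event notifications read from SQS. The in-memory set is a stand-in for a
# durable store (for example, a DynamoDB table).
seen = set()

def is_duplicate(message_body: str) -> bool:
    """Return True if every record in this notification was already processed."""
    event = json.loads(message_body)
    duplicate = True
    for record in event.get("Records", []):
        s3 = record["s3"]
        # Duplicate notifications for the same object event share the same
        # sequencer value, so (bucket, key, sequencer) identifies the event.
        key = (s3["bucket"]["name"], s3["object"]["key"], s3["object"]["sequencer"])
        if key not in seen:
            seen.add(key)
            duplicate = False
    return duplicate
```
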
How do I calculate the Agent non-response metric in Amazon Connect? | I want to know how to calculate the Agent non-response metric in Amazon Connect. | "I want to know how to calculate the Agent non-response metric in Amazon Connect.ResolutionThe Agent non-response metric increases when an agent doesn't accept an incoming contact, or the caller disconnects the call. Amazon Connect tracks the Agent non-response metric as a real-time metric and a historical metric.You can use the contact trace records (CTRs) and contact ID to manually calculate the Agent non-response metric. The contact ID must have the queue name for the queue that you're calculating the Agent non-response metric for. The following attributes indicate that the agent missed the contact:The Agent connection attempt value is greater than zero.The initiation method is either Inbound, Transfer, or Callback.No value is present under the Agent column, such as, a Connected to Agent timestamp or Agent interaction duration.For more information, see Contact records data model.The following scenarios increase Agent non-response metrics:Each time Amazon Connect routes a contact to an agent, but the agent doesn't answer the call. If the agent doesn't answer the call, then Amazon Connect routes the call to another agent to handle. Because agents can miss a single contact multiple times (including by the same agent), you count it multiple times.A caller disconnects when in a customer queue flow. For example, if the caller reaches a loop prompt and disconnects before the agent connects, the Agent non-response metric increases. By default, when Amazon Connect routes a call to an agent, the agent has 20 seconds to accept or decline the incoming contact.Example scenarioFor example, Amazon Connect transfers an inbound call to Queue A. Because Amazon Connect transfers the call to the queue, the customer queue flow runs. Two agents are available in Queue A, Agent A and Agent B.A call initiates to Agent A for 20 seconds, but Agent A misses the call. The call then initiates to Agent B for 20 seconds, but Agent B misses the call. There are now no agents to route the call to, and the customer queue flow starts over. Then, Agent A becomes Available, and the call initiates to Agent A for 10 seconds, but the caller disconnects before Agent A accepts the call.The Agent non-response metric value for this scenario is three.The Agent non-response metric calculates a value of three because both Agent A and Agent B miss the initiated call. Agent A is available again for the initiated call, but the caller disconnects before the agent answers the call. Because the disconnected call rang for the agent, you calculate it as an Agent non-response.Related informationHow do I identify who disconnected a call in my Amazon Connect contact center?How do I connect agents in my Amazon Connect contact center to incoming calls automatically?Follow" | https://repost.aws/knowledge-center/connect-calculate-agent-non-response |
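
The following Python sketch shows one way to apply the manual counting logic above to exported contact trace records. Field names such as Queue.Name, AgentConnectionAttempts, InitiationMethod, and Agent.ConnectedToAgentTimestamp are assumptions drawn from the contact records data model; verify them against your own CTR export before relying on this.

```python
# A sketch of the manual calculation described above, applied to contact trace
# records (CTRs) that have already been exported (for example, from a Kinesis
# stream). Field names should be verified against your own CTR export.

def count_agent_non_response(ctrs, queue_name):
    total = 0
    for ctr in ctrs:
        queue = (ctr.get("Queue") or {}).get("Name")
        agent = ctr.get("Agent") or {}
        attempts = ctr.get("AgentConnectionAttempts") or 0
        if (
            queue == queue_name
            and attempts > 0
            and ctr.get("InitiationMethod") in ("INBOUND", "TRANSFER", "CALLBACK")
            and not agent.get("ConnectedToAgentTimestamp")
        ):
            # A single contact can be missed several times, so each routing
            # attempt is counted, mirroring the example scenario above.
            total += attempts
    return total
```
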
How can I fix AWS CodePipeline when it runs twice? | My AWS CodePipeline runs twice. How can I resolve this issue? | "My AWS CodePipeline runs twice. How can I resolve this issue?Short descriptionThere are two common reasons AWS CodePipeline runs more than once:The PollForSourceChanges parameter is set to true, and that causes a second, polling-triggered launch of the pipeline.There's a duplicate CloudWatch Events rule with the same target as the pipeline, and that causes the pipeline to run twice.To resolve the issue, first look at the pipeline history to confirm what's causing the pipeline to run twice.If there are more than one of the same CloudWatch Events rule-triggered launches of the pipeline, delete or disable any duplicate rules.If there are polling-triggered launches of the pipeline, read the Understand the default behavior of the PollForSourceChanges parameter section. Then, based on your scenario, complete the steps in one of the following sections:If you created your pipeline with AWS CloudFormation, then complete the steps in the Update your AWS CloudFormation template section.If you created your pipeline with the AWS Command Line Interface (AWS CLI), then complete the steps in the Update your pipeline with a JSON file section.If you created your pipeline with the AWS SDK, then complete the steps in the Update your pipeline based on the configuration syntax of your language section.Important: Update your pipeline using the same method that you created it with. Avoid making out-of-band changes to your pipeline, and make sure that you complete the steps in the section that applies to your scenario only. For example, if you created your pipeline with AWS CloudFormation, then follow the steps in the Update your AWS CloudFormation template section only.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Confirm what's causing the pipeline to run twice1. Open the CodePipeline console.2. In Name, choose the name of the pipeline.3. Choose View history.4. In the Trigger column, check to see if there are any duplicate CloudWatch Events rule-triggered or polling-triggered launches of the pipeline.5. 
If there are more than one of the same CloudWatch Events rule-triggered launches, delete or disable any duplicate rules.-or-If there are polling-triggered launches of the pipeline, take the following troubleshooting steps.Understand the default behavior of the PollForSourceChanges parameterConsider the following:The default behavior of the PollForSourceChanges parameter is determined by the method used to create the pipeline.In many cases, the value of PollForSourceChanges is set to true by default and must be disabled.If you create your pipeline with the CodePipeline console, then the source detection method is automatically set to Amazon CloudWatch Events (the recommended way of detecting change in your source).If you create your pipeline with AWS CloudFormation, the AWS CLI, or the AWS SDK and don't specify the change detection method, then PollForSourceChanges is set to true by default (depending on the creation method).If you create your pipeline using a method other than the CodePipeline console and then update your pipeline out-of-band using the console, CodePipeline automatically creates an extra CloudWatch Events rule.If you create a CloudWatch Events rule in your AWS CloudFormation template or create a webhook for your GitHub version 1 repository and don't set the PollForSourceChanges parameter, then you end up with two ways to detect changes in the source. This causes your pipeline to run twice.Update your AWS CloudFormation templateIn your AWS CloudFormation template or pipeline configuration file, set the PollForSourceChanges parameter to false.Note: The PollForSourceChanges parameter is set to true by default.For more information on GitHub version 1 Webhooks, see Use webhooks to start a pipeline.Update your pipeline with a JSON file1. Copy your pipeline structure to a JSON file:$ aws codepipeline get-pipeline --name NAME_OF_YOUR_PIPELINE > pipeline.json2. Open the pipeline.json file in a text editor, and then add the PollForSourceChanges parameter to the source actions configuration section. Set the parameter to false.3. Remove the following metadata fields from the file:"metadata":{}"created""pipelineARN""updated"Important: The metadata lines must be removed from the pipeline.json file so the following update-pipeline command can use it.4. Save the pipeline.json file, and then run the following update-pipeline command to apply the changes to the file:$ aws codepipeline update-pipeline --cli-input-json file://pipeline.jsonFor more information, see Edit a pipeline (AWS CLI).Update your pipeline based on the configuration syntax of your languageFor instructions on updating your pipeline, see the SDK documentation for your language.For example, if you deployed your pipeline with Python, you can set PollForSourceChanges to false in the configuration section of your pipeline.Follow" | https://repost.aws/knowledge-center/codepipeline-running-twice |
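
For pipelines managed with the AWS SDK, a Python (boto3) sketch of the same change is shown below; the pipeline name is a placeholder, and this is only one possible way to apply the update.

```python
import boto3

# A sketch of the same change through the AWS SDK (the Python equivalent of the
# JSON-file steps above): fetch the pipeline structure, set PollForSourceChanges
# to "false" on the source action, and push the structure back. Only update a
# pipeline this way if it was created with the SDK or CLI; pipelines created with
# CloudFormation should be changed in their template instead.
codepipeline = boto3.client("codepipeline")

pipeline = codepipeline.get_pipeline(name="NAME_OF_YOUR_PIPELINE")["pipeline"]

for stage in pipeline["stages"]:
    for action in stage["actions"]:
        if action["actionTypeId"]["category"] == "Source":
            # Configuration values are strings, so "false" (not False) is used.
            action["configuration"]["PollForSourceChanges"] = "false"

# update_pipeline accepts only the pipeline structure, which mirrors the
# "remove the metadata fields" step above.
codepipeline.update_pipeline(pipeline=pipeline)
```
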
How can I integrate AWS services into my iOS app? | What's the best way to integrate AWS Cloud services into a native iOS app? | "What's the best way to integrate AWS Cloud services into a native iOS app?ResolutionIt's a best practice to use Amplify iOS to integrate AWS Cloud services into a native iOS app. For information on how to start a project, see Amplify libraries for iOS.Related informationGetting started (Amplify docs)How is the SDK different from the Amplify Libraries for iOS?Follow" | https://repost.aws/knowledge-center/mobile-amplify-ios |
How can I avoid Route 53 API throttling errors? | How can I avoid API throttling errors in Amazon Route 53? | "How can I avoid API throttling errors in Amazon Route 53?Short descriptionWhen you perform bulk API calls to Route 53, you might receive an HTTP 400 (Bad Request) error.A response header that contains a Code element with a value of Throttling and a Message element with a value of Rate exceeded indicates rate throttling. Rate throttling happens when the number of API requests is greater than the hard limit of five requests per second (per account).If Route 53 can't process the request before the next request for the same hosted zone, Then subsequent requests are rejected with another HTTP 400 error. The response header contains both of the following:A Code element with a value of PriorRequestNotCompleteA Message element with a value of the request was rejected because Route 53 was still processing a prior request.API calls from AWS Identity and Access Management (IAM) users in the same account count towards global rate throttling for the account. API calls from these IAM users also impact API calls made from the AWS Management Console.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Use the following methods to avoid rate throttling.Batching requestsGroup individual operations of the same type into one change batch operation to reduce API calls using the AWS CLI or your preferred SDK.For example, request to CREATE, DELETE, or UPSERT (update and insert) several records with one batch operation. Use the change-resource-record-sets command in the AWS CLI to perform bulk resource record operations.Be aware that:UPSERT requests (update and insert) are counted twice.There are quotas for the elements and characters in change-resource-record-sets API calls.Use error retries and exponential backoffTo avoid throttling, add error retries and exponential backoff to your Route 53 API calls. For example, use a simple exponential backoff algorithm that retries the call in 2^i seconds, where i is the number of retries.Randomize start timesRandomize the start time for calling Route 53 APIs. Be sure that there aren't multiple applications processing the logic at the same time. Simultaneous requests can cause throttling.Introduce "sleep time" between callsIf the code function calls to Route 53 APIs are consecutive, add "sleep time" between the two calls to minimize the risk of throttling.Related informationQuotas (Route 53)Follow" | https://repost.aws/knowledge-center/route-53-avoid-throttling-errors |
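
A minimal Python (boto3) sketch of the retry-with-exponential-backoff approach described above is shown below; the hosted zone ID and change batch are placeholders supplied by the caller.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

# A sketch of error retries with exponential backoff and jitter around a batched
# change-resource-record-sets call.
route53 = boto3.client("route53")

def change_records_with_backoff(hosted_zone_id, change_batch, max_retries=5):
    for attempt in range(max_retries):
        try:
            return route53.change_resource_record_sets(
                HostedZoneId=hosted_zone_id,
                ChangeBatch=change_batch,
            )
        except ClientError as error:
            if error.response["Error"]["Code"] not in ("Throttling", "PriorRequestNotComplete"):
                raise
            # Wait 2^i seconds plus random jitter so concurrent callers don't
            # retry in lockstep.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("Route 53 request was still throttled after all retries")
```
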
How do I create an Amazon RDS MySQL cross-Region replica in another AWS account? | I want to create an Amazon Relational Database Service (Amazon RDS) for MySQL replica in a different AWS Region and account from the source DB instance. How can I do this? | "I want to create an Amazon Relational Database Service (Amazon RDS) for MySQL replica in a different AWS Region and account from the source DB instance. How can I do this?Short descriptionYou can create an Amazon RDS for MySQL replica in another AWS Region and account from the source DB instance for the following use cases:Improving disaster recoveryScaling out globallyMigrating between AWS Regions and accountsNote: There's no direct way to create a cross-Region replica in another AWS account using the Amazon RDS console or AWS Command Line Interface (AWS CLI). The steps outlined in this article set up an external binlog-based replication between two RDS for MySQL instances in different AWS accounts or Regions.ResolutionTo create an Amazon RDS for MySQL cross-Region replica in another AWS account, follow these steps.Note: Account A contains the primary RDS for MySQL instance in the source Region. Account B contains the replica RDS for MySQL in the target Region.1. In Account A (the primary Amazon RDS for MySQL instance), make sure that binary logging is active. By default, automated backups and binary logging are activated in RDS for MySQL. Binary logging is activated whenever automated backups are activated.Note: An outage occurs if you change the backup retention period from "0" to a non-zero value, or vice versa.2. Update your binlog retention period using the following command:mysql> CALL mysql.rds_set_configuration(name,value);Tip: Choose a time period that retains the binary log files on your replication source long enough for changes to be applied before deletion. Amazon RDS retains binlog files on a MySQL instance for up to 168 hours (7 days). For more information, see mysql.rds_set_configuration.For example, the following syntax sets the binlog retention period to 24 hours:mysql> call mysql.rds_set_configuration('binlog retention hours', 24);3. Create a replication user on the primary instance in Account A, and then grant the required privileges:mysql> CREATE USER 'repl_user'@'<domain_name>' IDENTIFIED BY '<password>';4. Grant the (required) REPLICATION CLIENT and REPLICATION SLAVE privileges to the user created in Step 3:mysql> GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'<domain_name>';5. Create a cross-Region read replica in the primary account (Account A) with the target Region selected.6. Log in to the created replica instance in Account A. Then, confirm that the replica is caught up with the primary DB instance:mysql> SHOW SLAVE STATUS\GNote: Check to make sure that the Seconds_Behind_Master value is "0". When the value is "0", the replica is caught up to the primary DB instance. For more information, see Monitoring read replication.7. After the read replica is caught up to the primary DB instance, stop replication on the replica instance created in Step 5. To stop replication, use the following syntax:mysql> call mysql.rds_stop_replication();8. Run the SHOW SLAVE STATUS command on the replica, and then record the output values for Relay_Master_Log_File and Exec_Master_Log_Pos. The Relay_Master_Log_File and Exec_Master_Log_Pos values are your binary log coordinates, which are used for setting up external replication in later steps.9. 
Create a DB snapshot of your replica instance in Account A.(Optional) Or, you can use a native tool that generates logical backups (such as mysqldump) to export data from the replica instance in Account A. The native tool can then be used to import the data to a newly created RDS for MySQL instance of the same version in Account B. With this approach, you don't have to copy and share snapshots or AWS KMS keys between the two accounts. If you decide to use this approach, skip to Step 12 to set up network access and replication between both instances. Before you take this approach, you must import data into the Amazon RDS for MySQL instance in Account B.10. Share the DB snapshot with Account B.Note: If your DB snapshot is encrypted, then the AWS KMS key used to encrypt the snapshot must be shared with the target account. For more information, see Sharing encrypted snapshots.11. Restore the DB snapshot in Account B.Note: A DB instance can't be restored from a shared encrypted snapshot. Instead, make a copy of the DB snapshot and restore the DB instance from the copied version.12. Set up network access between Account A (the source account) and Account B (destination account). The network access allows traffic to flow between the source and destination accounts.13. Configure the inbound security group rules for Account A's primary DB instance. This configuration allows traffic to flow over the public internet from the newly created RDS for MySQL instance in Account B (the destination account). The security groups protect your Amazon RDS for MySQL instance.For private replication traffic, a VPC peering connection must be created and accepted between the two AWS accounts.14. Set up external replication on your target instance in Account B. Use repl_user within the command as a parameter. Note: The CALL mysql.rds_set_external_master command should be run by a DB user with execute command privileges.mysql> CALL mysql.rds_set_external_master ( host_name , host_port , replication_user_name , replication_user_password , mysql_binary_log_file_name , mysql_binary_log_file_location );For example:mysql> CALL mysql.rds_set_external_master (mytestinnstance.us-east-1.rds.amazonaws.com', 3306, 'repl_user', '<password>', 'mysql-bin-changelog.000031', 107, 0);mytestinnstance.us-east-1.rds.amazonaws.com: primary instance endpoint 3306: primary instance port repl_user: replication user name created in Step 3 password: user password created in Step 3 mysql-bin-changelog.000031: binary log file name from the output of Step 8 107: binary log position from the output of Step 815. Start the replication on the restored instance in Account B:CALL mysql.rds_start_replication;Here's an example output:+-------------------------+| Message |+-------------------------+| Slave running normally. |+-------------------------+16. Run the following command on Account B instance to check your replication status:mysql> show replica status \GNote: For MySQL version 8.0.22 and higher, SHOW SLAVE STATUS is deprecated, and SHOW REPLICA STATUS is available to use instead. For more information, see Checking replication status on the MySQL website.17. Delete the replica (which acted as an intermediate instance) created in Step 5. This replica was used to extract binary log coordinates without having to suspend writes on the primary instance in Account A.Cross-Region replication considerationsConsider the following approaches for cross-Region replication:A source DB instance can have cross-Region read replicas in multiple AWS Regions. 
For more information, see Creating a read replica in a different AWS Region.You can expect to see a higher lag time for any read replica that is in a different AWS Region than the source instance. This lag time comes from the longer network channels between regional data centers. For information about replication lag time, see Monitoring read replication.The data transferred for cross-Region replication incurs Amazon RDS data transfer charges. For more information about these data transfer charges, see cross-Region replication costs.Related informationCross-Region read replicas for Amazon RDS for MySQLFollow" | https://repost.aws/knowledge-center/rds-mysql-cross-region-replica |
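
A rough Python (boto3) sketch of steps 10 and 11 (sharing the snapshot and restoring a copy in Account B) is shown below. The profile names, snapshot identifiers, account IDs, and KMS key ARN are placeholders, and both calls run in the target Region where the cross-Region replica and its snapshot live.

```python
import boto3

# Account A: share the replica's manual snapshot with Account B.
rds_a = boto3.Session(profile_name="account-a").client("rds", region_name="us-west-2")
rds_a.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="replica-snapshot",
    AttributeName="restore",
    ValuesToAdd=["111122223333"],  # Account B's AWS account ID
)

# Account B: copy the shared encrypted snapshot, then restore from the copy
# (a shared encrypted snapshot can't be restored directly).
rds_b = boto3.Session(profile_name="account-b").client("rds", region_name="us-west-2")
rds_b.copy_db_snapshot(
    SourceDBSnapshotIdentifier="arn:aws:rds:us-west-2:444455556666:snapshot:replica-snapshot",
    TargetDBSnapshotIdentifier="replica-snapshot-copy",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID",  # a key Account B can use
)
# Wait for the snapshot copy to reach the available state before restoring.
rds_b.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="target-replica-instance",
    DBSnapshotIdentifier="replica-snapshot-copy",
)
```
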
How can I delete an Amazon VPC link for my Amazon API Gateway REST API? | I'm trying to delete my Amazon Virtual Private Cloud (Amazon VPC) link for my Amazon API Gateway REST API and received the following error:"Cannot delete VPC link referenced in format of [Method:Resource]."How can I resolve this? | "I'm trying to delete my Amazon Virtual Private Cloud (Amazon VPC) link for my Amazon API Gateway REST API and received the following error:"Cannot delete VPC link referenced in format of [Method:Resource]."How can I resolve this?Short descriptionIf a method or resource in your API still uses the Amazon VPC link integration, then you can't delete the link.ResolutionTo delete the Amazon VPC link, follow these steps depending on whether the REST API is deployed.Deployed REST APIsSwitch the integration type from the Amazon VPC link to another type. For example, a mock integration, HTTP integration, or AWS integration type. After switching the integration type, redeploy the REST API to the same stage name that you previously deployed. Then, try deleting the Amazon VPC link again.Undeployed REST APIsSwitch the integration type from the Amazon VPC link to another type. For example, a mock integration, HTTP integration, or AWS integration type. Then, try deleting the Amazon VPC link again.Related informationTutorial: Build a REST API with API Gateway private integrationHow can I access an API Gateway private REST API in another AWS account using an interface VPC endpoint?Follow" | https://repost.aws/knowledge-center/delete-vpc-link-amazon-gateway
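
To locate the methods that still reference the link, a Python (boto3) sketch like the following can help. The VPC link ID is a placeholder, and accounts with more than 500 APIs or resources would need pagination.

```python
import boto3

# A sketch that scans your REST APIs for integrations still referencing the VPC
# link, which is what blocks the delete.
apigateway = boto3.client("apigateway")
VPC_LINK_ID = "abc123"

for api in apigateway.get_rest_apis(limit=500)["items"]:
    for resource in apigateway.get_resources(restApiId=api["id"], limit=500)["items"]:
        for method in resource.get("resourceMethods", {}):
            try:
                integration = apigateway.get_integration(
                    restApiId=api["id"], resourceId=resource["id"], httpMethod=method
                )
            except apigateway.exceptions.NotFoundException:
                continue  # method without an integration
            if integration.get("connectionId") == VPC_LINK_ID:
                print(f"{api['name']}: {method} {resource.get('path')}")
```
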
How can I test the resiliency of my Direct Connect connection? | I want to be sure that traffic is routed over redundant virtual interfaces when one of my virtual interfaces is out of service. | "I want to be sure that traffic is routed over redundant virtual interfaces when one of my virtual interfaces is out of service.Short descriptionUse the Failover Testing feature to test the resiliency of AWS Direct Connect connections. With this feature, turn off one or more Border Gateway Protocol (BGP) sessions on a Direct Connect virtual interface for a configured duration. Then, verify that traffic is routed to redundant virtual interfaces as appropriate.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Before you begin testing, be sure that you have redundant Direct Connect virtual interfaces or VPN connections to avoid an outage.Start the failover testNote: You can run the test on any type of virtual interface (public, private, or transit). However, only the owner of the AWS account that includes the virtual interface can initiate the test.1. Open the Direct Connect console.2. In the navigation pane, choose Virtual Interfaces.3. Select your virtual interface.4. Choose Actions, and then choose Bring down BGP.5. In the Start failure test dialog box, complete the following:For Peerings, choose the peering session to bring down for your test (IPv4 or IPv6).For Test maximum time, enter the duration of the test in minutes. The maximum value is 4,320 minutes (72 hours), and the default value is 180 minutes (3 hours).For To Confirm test, enter Confirm, and then choose Confirm.The BGP peering session is now in the DOWN state. To verify that there are no outages and validate the resiliency of your connection, send traffic to your virtual interface.Note: If required, you can stop the test immediately.You can also use the StartBgpFailoverTest API call with AWS Command Line Interface (AWS CLI) or AWS SDK to perform the failover test.View the failover test historyIn the Direct Connect console, check the Test history column on your virtual interface page. Or, use the ListVirtualInterfaceTestHistory API call in the AWS CLI or AWS SDK.Test history data is stored for up to 365 days.Stop the failover testNote: You can stop the failover test at any time.1. Open the Direct Connect console.2. In the navigation pane, choose Virtual Interfaces.3. Select your virtual interface.4. Choose Actions, and then choose Cancel test.You can also use the StopBgpFailoverTest API call with AWS CLI or AWS SDK to stop the failover test.Related informationWhat is AWS Direct Connect?Configure redundant connectionsFollow" | https://repost.aws/knowledge-center/direct-connect-test-resiliency |
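
For the API-based path mentioned above, the following Python (boto3) sketch starts a failover test, lists the stored history, and shows how to stop a test. The virtual interface ID is a placeholder.

```python
import boto3

directconnect = boto3.client("directconnect")

# Start the failover test, matching the console steps above.
test = directconnect.start_bgp_failover_test(
    virtualInterfaceId="dxvif-ffabc123",
    testDurationInMinutes=180,  # default is 180; maximum is 4320 (72 hours)
)
print(test["virtualInterfaceTest"]["testId"], test["virtualInterfaceTest"]["status"])

# Review the stored test history (kept for up to 365 days).
history = directconnect.list_virtual_interface_test_history(
    virtualInterfaceId="dxvif-ffabc123"
)
for item in history["virtualInterfaceTestHistory"]:
    print(item["testId"], item["status"])

# Stop the test early if needed.
# directconnect.stop_bgp_failover_test(virtualInterfaceId="dxvif-ffabc123")
```
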
"Why do I see "audit: backlog limit exceeded" errors in my EC2 Linux instance's screenshot and system logs, and what can I do to avoid this?" | "I see "audit callbacks suppressed" and "audit: backlog limit exceeded" error messages in my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance's screenshot and system logs. Why am I receiving these messages, and how can I prevent them from reoccurring?" | "I see "audit callbacks suppressed" and "audit: backlog limit exceeded" error messages in my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance's screenshot and system logs. Why am I receiving these messages, and how can I prevent them from reoccurring?Short descriptionThe audit backlog buffer in a Linux system is a kernel level socket buffer queue that the operating system uses to maintain or log audit events. When a new audit event triggers, the system logs the event and adds it to the audit backlog buffer queue.The backlog_limit parameter value is the number of audit backlog buffers. The parameter is set to 320 by default, as shown in the following example:# auditctl -senabled 1failure 1pid 2264rate_limit 0backlog_limit 320lost 0backlog 0Audit events logged beyond the default number of 320 cause the following errors on the instance:audit: audit_backlog=321 > audit_backlog_limit=320 audit: audit_lost=44393 audit_rate_limit=0 audit_backlog_limit=320 audit: backlog limit exceeded-or-audit_printk_skb: 153 callbacks suppressedaudit_printk_skb: 114 callbacks suppressedAn audit buffer queue at or exceeding capacity might also cause the instance to freeze or remain in an unresponsive state.To avoid backlog limit exceeded errors, increase the backlog_limit parameter value. Large servers have a larger number of audit logs triggered, so increasing buffer space helps avoid error messages.Note: Increasing the audit buffer consumes more of the instance's memory. How large you make the backlog_limit parameter depends on the total memory of the instance. If the system has enough memory, you can try doubling the existing backlog_limit parameter value.The following is a calculation of the memory required for the auditd backlog. Use this calculation to determine how large you can make the backlog queue without causing memory stress on your instance.One audit buffer = 8970 BytesDefault number of audit buffers (backlog_limit parameter) = 320320 * 8970 = 2870400 Bytes, or 2.7 MiBThe size of the audit buffer is defined by the MAX_AUDIT_MESSAGE_LENGTH parameter. For more information, see MAX_AUDIT_MESSAGE_LENGTH in the Linux audit library on github.com.Note: If your instance is inaccessible and you see backlog limit exceeded messages in the system log, stop and start the instance. Then, perform the following steps to change the audit buffer value.ResolutionNote: In this example, we're changing the backlog_limit parameter value to 8192 buffers. 8192 buffers equals 70 MiB of memory based on the preceding calculation. You can use any value based on your memory calculation.Access the instance using SSH.Verify the current audit buffer size.Note: The backlog_limit parameter is listed as -b. 
For more information, see auditctl(8) on the auditctl-man-pageAmazon Linux 1 and other operating systems that don't have systemd:$ sudo cat /etc/audit/audit.rules# This file contains the auditctl rules that are loaded# whenever the audit daemon is started via the initscripts.# The rules are simply the parameters that would be passed# to auditctl.# First rule - delete all-D# Increase the buffers to survive stress events.# Make this bigger for busy systems-b 320# Disable system call auditing.# Remove the following line if you need the auditing.-a never,task# Feel free to add below this line. See auditctl man pageAmazon Linux 2 and other operating systems that use systemd:$ sudo cat /etc/audit/audit.rules# This file is automatically generated from /etc/audit/rules.d-D-b 320-f 1Access the audit.rules file using an editor, such as the vi editor:Amazon Linux 1 and other operating systems that don't use systemd:$ sudo vi /etc/audit/audit.rulesAmazon Linux 2 and other operating systems that use systemd:$ sudo vi /etc/audit/rules.d/audit.rulesEdit the -b parameter to a larger value. The following example changes the -b value to 8192.$ sudo cat /etc/audit/audit.rules# This file contains the auditctl rules that are loaded# whenever the audit daemon is started via the initscripts.# The rules are simply the parameters that would be passed# to auditctl.# First rule - delete all-D# Increase the buffers to survive stress events.# Make this bigger for busy systems-b 8192# Disable system call auditing.# Remove the following line if you need the auditing.-a never,task# Feel free to add below this line. See auditctl man page$ sudo auditctl -senabled 1failure 1pid 2264rate_limit 0backlog_limit 320lost 0backlog 0Restart the auditd service. The new backlog_limit value takes effect. The value also updates in auditctl -s, as shown in the following example:# sudo service auditd stopStopping auditd: [ OK ]# sudo service auditd startStarting auditd: [ OK ]# auditctl -senabled 1failure 1pid 26823rate_limit 0backlog_limit 8192lost 0backlog 0Note: If your instance is inaccessible and you see backlog limit exceeded messages in the system log, stop and start the instance. Then, perform the preceding steps to change the audit buffer value.Follow" | https://repost.aws/knowledge-center/troubleshoot-audit-backlog-errors-ec2 |
Why is the burst balance for my Amazon EBS volume low? | "The burst balance for my Amazon Elastic Block Store (Amazon EBS) volume is low, and I don't know why." | "The burst balance for my Amazon Elastic Block Store (Amazon EBS) volume is low, and I don't know why.ResolutionAmazon EBS volume types that use burst performance, such as gp2, st1, and sc1, have a baseline performance depending on volume size. If your workload is driving I/O traffic to one of these volume types beyond its baseline performance, then burst credit gets spent. If your workload continues to drive I/O traffic beyond the baseline performance, then your burst credit runs low. If your burst credit reaches zero, then these volume types get throttled at their baseline IOPS or throughput. Volumes start to earn burst credits again only after the workload stops, or its activity reaches below the volume type's baseline performance.Note: Volume type gp3 doesn't use burst performance.For more information, see the following AWS Documentation:Overview of General Purpose SSD volumes (gp2 volume type)Throughput Optimized HDD volumes (st1 volume type)Cold HDD volumes (sc1 volume type)Follow" | https://repost.aws/knowledge-center/ebs-volume-burst-balance-low |
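
To see when burst credits were being spent, a Python (boto3) sketch like the following pulls the BurstBalance CloudWatch metric for one volume; the volume ID is a placeholder.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Fetch the BurstBalance metric for the last 24 hours at 5-minute resolution.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="BurstBalance",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Minimum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])
```
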
Why isn't my attached EBS volume showing in my OS or in Disk Management on my EC2 Windows instance? | An Amazon Elastic Block Store (Amazon EBS) volume isn’t reflected in my operating system or in Disk Management on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance. How do I troubleshoot this? | "An Amazon Elastic Block Store (Amazon EBS) volume isn’t reflected in my operating system or in Disk Management on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance. How do I troubleshoot this?Resolution1. Connect to your Windows instance using Remote Desktop.2. Open the Disk Management menu, and then choose Action, Rescan Disks. Verify that the attached volume appears.3. If the volume doesn't appear in Disk Management, then the instance might have outdated storage drivers (AWS PV or AWS NVMe). EC2 Xen instances, such as t2 instances, use AWS PV drivers as storage drivers. AWS NVMe drivers are needed for storage on instances built on the Nitro system, such as m5 instances.Note: For instance type specifications, see Amazon EC2 instance types.To verify the driver versions installed on your EC2 Windows instance, run the following command in PowerShell:Get-WmiObject Win32_PnpSignedDriver | Select-Object DeviceName, DriverVersion, InfName | Where-Object {$_.DeviceName -like "*AWS*" -OR $_.DeviceName -like "*Amazon*"}The preceding command returns the current driver used by the instance. The command returns the ENA and NVMe driver versions for Nitro-based instances. The command returns the PV driver package version for Xen-based systems.Verify that the installed driver versions are the latest version. For Xen-based instance types, verify the version of the AWS PV Storage Host Adapter. For Nitro-based instance types, verify the driver version of the AWS NVMe Elastic Block Storage Adapter. For more information, see AWS PV driver package history or AWS NVMe driver version history.Note: It's a best practice to create a backup of your instance before upgrading the drivers.4. If the drivers aren't the latest version, see Upgrade PV drivers or AWS NVMe drivers for Windows instances.Related informationHow do I attach a new EBS volume to a running Amazon EC2 Windows instance?Follow" | https://repost.aws/knowledge-center/ec2-windows-volumes-not-showing |
How do I determine if my Lambda function is timing out? | "My AWS Lambda function is experiencing intermittent errors, and it's not clear why when I review the function's Amazon CloudWatch metrics. Could my Lambda function's increased error rate be caused by timeout issues? If so, how do I determine if my Lambda function is timing out?" | "My AWS Lambda function is experiencing intermittent errors, and it's not clear why when I review the function's Amazon CloudWatch metrics. Could my Lambda function's increased error rate be caused by timeout issues? If so, how do I determine if my Lambda function is timing out?Short descriptionWhen reviewing your Lambda function's CloudWatch log group, search for the phrase Task timed out. Then, use the request IDs of the associated timed-out invocations to retrieve the full logs for each invocation timeout.To troubleshoot any timeout errors that you identify, see How do I troubleshoot Lambda function invocation timeout errors?Note: When a Lambda function invocation times out, a Task timed out error message appears in the failed invocation's CloudWatch logs, not an Error message. If you search your function's CloudWatch logs for Error messages only, then the search returns only code-related runtime errors, not invocation timeout errors. For more information, see Monitoring AWS Lambda errors using Amazon CloudWatch.ResolutionPrerequisitesIf you haven't done so already, grant CloudWatch logging permissions to your Lambda function. For more information, see AWS managed policies for Lambda features .Retrieve the request IDs of any timed-out invocations by searching the function's CloudWatch log group for the phrase "Task timed out"Note: CloudWatch Logs Insights queries incur charges based on the amount of data that's queried. For more information, see Amazon CloudWatch pricing.1. Open the Functions page of the Lambda console.2. Choose a function.3. Choose Monitor.4. Choose View logs in CloudWatch. The function's Log group details page opens in the CloudWatch console.5. Choose View in Logs Insights.6. In the Logs Insights query text box, enter the following query, and then choose Run query:fields @timestamp, @requestId, @message, @logStream| filter @message like "Task timed out"| sort @timestamp desc| limit 100The response returns a list of request IDs for the timed-out invocations.For more information, see Analyzing log data with CloudWatch Logs Insights.Note: For large log groups, consider limiting the scope of the search by adding a datetime function to the Insights query. For more information, see CloudWatch Logs Insights query syntax.Use the request IDs of the timed-out invocations to retrieve the full logs for each invocation timeoutOutput logs using the standard logging functionality for the programming language that you're using. For language-specific instructions, see the Using the AWS CLI section of Accessing Amazon CloudWatch Logs for AWS Lambda.Follow" | https://repost.aws/knowledge-center/lambda-verify-invocation-timeouts |
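
The same Logs Insights query can be run with the SDK instead of the console; a Python (boto3) sketch is shown below. The log group name is a placeholder, and the query incurs Logs Insights charges.

```python
import time
from datetime import datetime, timedelta

import boto3

logs = boto3.client("logs")

# Run the "Task timed out" query from the resolution above over the last day.
query_id = logs.start_query(
    logGroupName="/aws/lambda/my-function",
    startTime=int((datetime.utcnow() - timedelta(days=1)).timestamp()),
    endTime=int(datetime.utcnow().timestamp()),
    queryString=(
        "fields @timestamp, @requestId, @message, @logStream"
        ' | filter @message like "Task timed out"'
        " | sort @timestamp desc"
        " | limit 100"
    ),
)["queryId"]

# Poll until the query finishes, then print each matching row.
results = logs.get_query_results(queryId=query_id)
while results["status"] in ("Scheduled", "Running"):
    time.sleep(1)
    results = logs.get_query_results(queryId=query_id)

for row in results["results"]:
    print({field["field"]: field["value"] for field in row})
```
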
Why are Lambda@Edge CloudWatch logs not being delivered? | I associated an AWS Lambda@Edge function with an Amazon CloudFront distribution as a trigger. I can't find the Lambda@Edge function's execution logs populated in the Amazon CloudWatch log streams. How can I troubleshoot why they're missing? | "I associated an AWS Lambda@Edge function with an Amazon CloudFront distribution as a trigger. I can't find the Lambda@Edge function's execution logs populated in the Amazon CloudWatch log streams. How can I troubleshoot why they're missing?Lambda@Edge logs fail to populate if the AWS Identity and Access Management (IAM) role associated with the Lambda@Edge function lacks the required permissions. Logs can also appear missing if you are checking the incorrect Region from the console.ResolutionCheck the permissions for the IAM role associated with the Lambda@Edge functionVerify that the function execution role has the required permissions to create log groups and streams and put log events into any AWS Region. Log delivery fails if the execution role associated with the Lambda function doesn't have the required permissions.An example IAM policy attached to the Lambda@Edge execution role:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:*:*:*" ] } ]}For more information about the permissions required to send data to CloudWatch Logs, see Setting IAM permissions and roles for Lambda@Edge.Check the logs in the Region where the Lambda function was invokedWhen the Lambda@Edge function runs, Lambda creates CloudWatch log streams in the AWS Region closest to the location where the function is invoked. The log group name is formatted as /aws/lambda/us-east-1.function-name, where function-name is the name of the Lambda function.To locate the Lambda@Edge function logs, you must determine which Region(s) the function is being invoked in, and then view the logs.To find the Region where the function was invoked:Log in to the AWS Management Console and open the CloudFront console.Choose Monitoring under the Telemetry category.Select the Lambda@Edge tab.Select the Lambda@Edge function, then choose View function metrics.The monitoring page shows which Regions the replica functions are invoked in during a specific time period. From there, you can find the CloudWatch logs for the function in each Region. To do this, choose View function logs, and then select the Region where the function is being invoked.Note: If you see errors in a particular Region, choose the Region showing errors in the graph. To learn more, see Determining the Lambda@Edge Region.Or, to determine the edge location that the request was routed to, check the x-amz-cf-pop response header value. Then, check the corresponding Region in CloudWatch to see the log files. For example, if x-amz-cf-pop is set to IAD89-P1, the request was served in the us-east-1 Region, where IAD is the airport code.When Lambda returns an invalid response to CloudFront, CloudFront writes error messages to log files and pushes them to the CloudWatch Region where the Lambda function executed. Log groups have the following format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is the distribution’s ID. 
To find the Region where the CloudWatch log file is located, see Determining the Lambda@Edge Region.Related informationSetting IAM permissions and roles for Lambda@EdgeCloudWatch metrics and logs for Lambda@Edge functionsDetermining if your account pushes logs to CloudWatchFollow" | https://repost.aws/knowledge-center/lambda-edge-logs |
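
Another way to locate the Regions holding the replica function's logs is to scan every enabled Region for the log group, as in the following Python (boto3) sketch; the function name is a placeholder.

```python
import boto3

# Search each Region for the /aws/lambda/us-east-1.<function-name> log group that
# Lambda@Edge creates near the location where the function is invoked.
FUNCTION_NAME = "my-edge-function"
LOG_GROUP = f"/aws/lambda/us-east-1.{FUNCTION_NAME}"

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    logs = boto3.client("logs", region_name=region)
    groups = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)["logGroups"]
    if groups:
        print(f"{region}: {groups[0]['logGroupName']}")
```
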
How can I configure a custom domain endpoint for multiple API Gateway APIs behind a CloudFront web distribution? | "I want to use an Amazon API Gateway custom domain endpoint behind an Amazon CloudFront web distribution. Then, I want to forward the API request to multiple APIs using base path mapping. How can I do this?" | "I want to use an Amazon API Gateway custom domain endpoint behind an Amazon CloudFront web distribution. Then, I want to forward the API request to multiple APIs using base path mapping. How can I do this?ResolutionCreate the custom domain name for your REST API, HTTP API, or WebSocket APIIf you haven't already done so, create your custom domain name, and then associate it with two different APIs.Note: A custom domain name for a WebSocket API can't be mapped to REST APIs or HTTP APIs.For REST APIs, follow the instructions in Setting up custom domain names for REST APIs.For HTTP APIs, follow the instructions in Setting up custom domain names for HTTP APIs.For WebSocket APIs, follow the instructions in Setting up custom domain names for WebSocket APIs.Note: After a custom domain name is created in API Gateway, you must create or update your DNS provider's resource record to map to your API endpoint. For more information, see Register a domain name.The example in this article uses a REST API Regional custom domain name setup.Example API endpoint URLshttps://restapiId1.execute-api.us-west-2.amazonaws.com/example1/homehttps://restapiId2.execute-api.us-west-2.amazonaws.com/example2/homeExample custom domain URL (without base path mapping)https://apigw.customdomain.com/example1/homehttps://apigw.customdomain.com/example2/homeCreate a CloudFront web distribution1. Open the CloudFront console, and then choose Create Distribution.2. On the Select a delivery method for your content page, under Web, choose Get Started.3. On the Create Distribution page, for Origin Domain, paste your API's custom domain URL similar to the following example:Origin domain name examplehttps://apigw.customdomain.comFor Origin path, leave it blank . Note: Entering an incorrect base path for origin path when invoking the CloudFront web distribution can result in an error. For example, an unauthorized request error that returns the error "Missing Authentication Token" and a 403 Forbidden response code.5. For Minimum Origin SSL Protocol, it's a best practice to choose TLSv1.2. Don't choose SSLv3. API Gateway doesn't support the SSLv3 protocol.6. For Protocol, choose HTTPS Only. Note: API Gateway doesn't support unencrypted HTTP endpoints. For more information, see Amazon API Gateway FAQs.7. (Optional) To forward custom headers to your origin, choose Add header, and enter your Header name and Value. Note: For a list of custom headers that CloudFront can't add, see Custom headers that CloudFront can't add to origin requests.8. Choose Create Distribution.After CloudFront creates your distribution, the value of the Status column for your distribution changes from InProgress to Deployed.For more information, see Creating a distribution.Test your CloudFront web distribution1. Open the CloudFront console, copy the Domain Name of your web distribution to your clipboard similar to the following example:Non-custom domain name examplea222222bcdefg5.cloudfront.net2. Follow the instructions for Testing a distribution.A successful test returns a 200 OK response. If you get a 500 server error code, then the web distribution might not be deployed. 
If you get no response, the CloudFront DNS record hasn't propagated yet.After the CloudFront distribution is created, your setup is configured as follows:a222222bcdefg5.cloudfront.net/example1/home --> apigw.customdomain.com/example1/home --> API-1a222222bcdefg5.cloudfront.net/example2/home --> apigw.customdomain.com/example2/home --> API-2You are now able to make a request to two APIs from a single CloudFront distribution with your API Gateway custom domain name.To configure forwarding for incoming authorization headers for your CloudFront web distribution, see How do I set up API Gateway with my own CloudFront distribution?Related informationChoose an endpoint type to set up for an API Gateway APIWorking with API mappings for Websocket APIsWorking with API mappings for REST APIsWorking with API mappings for HTTP APIsFollow" | https://repost.aws/knowledge-center/api-gateway-domain-cloudfront |
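
For the base path mapping part of this setup, a Python (boto3) sketch is shown below. The domain and API IDs follow the placeholders above, while the "prod" stage is an illustrative assumption rather than a value from the walkthrough.

```python
import boto3

# Map apigw.customdomain.com/example1 and /example2 to two different REST APIs.
apigateway = boto3.client("apigateway", region_name="us-west-2")

apigateway.create_base_path_mapping(
    domainName="apigw.customdomain.com",
    basePath="example1",
    restApiId="restapiId1",
    stage="prod",
)
apigateway.create_base_path_mapping(
    domainName="apigw.customdomain.com",
    basePath="example2",
    restApiId="restapiId2",
    stage="prod",
)
```
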
How do I add HTTP security headers to CloudFront responses? | I want to add HTTP security headers to Amazon CloudFront responses. How can I do this? | "I want to add HTTP security headers to Amazon CloudFront responses. How can I do this?Short descriptionHTTP Security headers improve the privacy and security of a web application and protect it from vulnerabilities at the client side. The most common HTTP security headers are:Referrer PolicyStrict Transport Security (HSTS)Content Security Policy (CSP)X-Content-Type-OptionsX-Frame-OptionsX-XSS-ProtectionCloudFront response header policies allow you to add one or more HTTP security headers to a response from CloudFront.ResolutionYou can use the managed security headers response policy that includes pre-defined values for the most common HTTP security headers. Or, you can create a custom response header policy with custom security headers and values that can be added to the required CloudFront behavior.Create a custom response headers policy from AWS consoleOpen the CloudFront console.From the navigation menu, choose Policies. Then, choose Response headers.Choose Create response headers policy.Under Security headers, select each of the security headers that you want to add to the policy. Add or select the required values for each header.Under Custom headers, add the custom security headers and values that you want CloudFront to add to the responses.Fill out other fields as required. Then, select Create.Attach response headers policy to a cache behaviorAfter you create a response headers policy, attach it to a cache behavior in a CloudFront distribution. To attach a managed or custom security headers response policy to an existing CloudFront distribution:Open the CloudFront console.Choose the distribution you want to update.Under the Behaviors tab, select the cache behavior you want to modify. Then, choose Edit.For Response headers policy, choose SecurityHeadersPolicy or choose the custom policy that you created.Choose Save changes.The following is an example of CloudFront response with HTTP security response headers :curl -I https://dxxxxxxxbai33q.cloudfront.netHTTP/2 200content-type: text/htmlcontent-length: 9850vary: Accept-Encodingdate: xxxxxxxxxlast-modified: xxxxxxxetag: "c59c5ef71f3350489xxxxxxxxxx"x-amz-server-side-encryption: AES256cache-control: no-store, no-cache, privatex-amz-version-id: nullaccept-ranges: bytesserver: AmazonS3x-xss-protection: 1; mode=blockx-frame-options: SAMEORIGINreferrer-policy: strict-origin-when-cross-originx-content-type-options: nosniffstrict-transport-security: max-age=31536000x-cache: Miss from cloudfrontvia: 1.1 12142717248e0e7148a5c1a9151ab918.cloudfront.net (CloudFront)x-amz-cf-pop: BOS50-C3x-amz-cf-id: nHNANTZYdkQkE5BmsqlisPTiodFhVCK-Sf9Zp4iJzNs04eWi1_hEig==Follow" | https://repost.aws/knowledge-center/cloudfront-http-security-headers |
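
A custom response headers policy can also be created with the SDK; the Python (boto3) sketch below mirrors the example response headers above. The policy name is a placeholder, and the exact config shape should be checked against the current SDK documentation.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Create a response headers policy carrying common HTTP security headers.
policy = cloudfront.create_response_headers_policy(
    ResponseHeadersPolicyConfig={
        "Name": "example-security-headers-policy",
        "SecurityHeadersConfig": {
            "StrictTransportSecurity": {
                "Override": True,
                "AccessControlMaxAgeSec": 31536000,
                "IncludeSubdomains": False,
                "Preload": False,
            },
            "ContentTypeOptions": {"Override": True},
            "FrameOptions": {"Override": True, "FrameOption": "SAMEORIGIN"},
            "XSSProtection": {"Override": True, "Protection": True, "ModeBlock": True},
            "ReferrerPolicy": {
                "Override": True,
                "ReferrerPolicy": "strict-origin-when-cross-origin",
            },
        },
    }
)
print(policy["ResponseHeadersPolicy"]["Id"])
```
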
How can I resolve the "CNAMEAlreadyExists" error when I create an edge-optimized custom domain name for my API Gateway API? | I get a "CNAMEAlreadyExists" error when I try to create an edge-optimized custom domain name for my Amazon API Gateway API. How can I resolve this? | "I get a "CNAMEAlreadyExists" error when I try to create an edge-optimized custom domain name for my Amazon API Gateway API. How can I resolve this?Short descriptionThe "CNAMEAlreadyExists" error occurs if:The CNAME record type for your custom domain name already exists and points to an Amazon CloudFront distribution.There is a CloudFront distribution configured with an alternate domain name or CNAME that matches your custom domain name.Note: It's not uncommon to receive "Too Many Requests" errors when you make several custom domain name updates in a short timeframe. These errors occur because of the low quota for the CreateDomainName API (one request every 30 seconds per account). For more information, see API Gateway quotas for creating, deploying and managing an API.Important: You can't use the same CNAME record for more than one CloudFront distribution. Using the same CNAME record returns the following error:One or more of the CNAMEs you provided are already associated with a different resource. (Service: AmazonCloudFront; Status Code: 409; Error Code: CNAMEAlreadyExists; Request ID: a123456b-c78d-90e1-23f4-gh5i67890jkl)To resolve these errors and create an edge-optimized custom domain name, you must first delete the existing CNAME record pointing to a CloudFront distribution.ResolutionConfirm if the custom domain name previously existed1. To confirm if the custom domain name previously existed, run a DNS lookup command on the domain name.On Linux, Unix, or macOS systems:dig abc.example.com +allOn Windows:nslookup abc.example.comNote: Replace abc.example.com with your domain name.2. If the custom domain name previously existed and its DNS record is still there, then use dig to get the CNAME record in the output:abc.example.com. 0 IN CNAME d27am47dhauq2.cloudfront.net.Important:You must delete this record before you can create the custom domain name.It's a best practice to modify DNS settings in a development or testing environment first. Manually modifying production DNS settings might cause downtime.If the output shows an A record (IPv4 address) instead of a CNAME record, then you must update the record. The updated record must point the custom domain name (A alias) to the CloudFront distribution.If a dig or nslookup is done on the domain name and the record is an A alias, check the CloudFront distribution. Make sure that the CloudFront distribution isn't configured with an alternate domain name. For more information, see Comparison of alias and CNAME records.Delete the CNAME record or update your CloudFront distributionDo one or both of the following depending on your configuration:Remove the CNAME record that points to your CloudFront distribution.Update your CloudFront distribution and remove the alternate domain name or CNAME record.If you have a third-party DNS service provider, then follow your provider's process to delete the CNAME record that points to your CloudFront distribution.If you use Amazon Route 53, delete the record in Route 53 that points to CloudFront.After you have made the configuration changes, wait several minutes for the DNS changes to propagate. 
Then, retry creating the custom domain name.Note: If you receive "CNAMEAlreadyExists" errors, see How do I resolve the error CNAMEAlreadyExists when setting up a CNAME alias for my Amazon CloudFront distribution?Related informationHow can I set up a custom domain name for my API Gateway API?Building a multi-Region serverless application with Amazon API Gateway and AWS LambdaFollow" | https://repost.aws/knowledge-center/api-gateway-cname-request-errors |
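
To find which of your own distributions already claims the alternate domain name, a Python (boto3) sketch like the following can help. The domain is a placeholder, and an alias held by another AWS account won't show up in this scan.

```python
import boto3

# Scan the distributions in this account for an alternate domain name (CNAME)
# matching the custom domain that triggered CNAMEAlreadyExists.
cloudfront = boto3.client("cloudfront")
DOMAIN = "abc.example.com"

for page in cloudfront.get_paginator("list_distributions").paginate():
    for dist in page["DistributionList"].get("Items", []):
        if DOMAIN in dist["Aliases"].get("Items", []):
            print(f"{DOMAIN} is attached to distribution {dist['Id']} ({dist['DomainName']})")
```
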
How do I resolve the "No 'Access-Control-Allow-Origin' header is present on the requested resource" error from CloudFront? | I'm getting the CORS error "No 'Access-Control-Allow-Origin'" on my requested resource in Amazon CloudFront. Why am I getting this and how can I resolve it? | "I'm getting the CORS error "No 'Access-Control-Allow-Origin'" on my requested resource in Amazon CloudFront. Why am I getting this and how can I resolve it?ResolutionConfirm the origin's cross-origin resource sharing (CORS) policy allows the origin to return the Access-Control-Allow-Origin headerNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Run the following command to confirm the origin server returns the Access-Control-Allow-Origin header. Replace example.com with the required origin header. Replace https://www.example.net/video/call/System.generateId.dwr with the URL of the resource that's returning the header error.curl -H "Origin: example.com" -v "https://www.example.net/video/call/System.generateId.dwr"If the CORS policy allows the origin server to return the Access-Control-Allow-Origin header, you see a response similar to the following:HTTP/1.1 200 OKServer: nginx/1.10.2Date: Mon, 01 May 2018 03:06:41 GMTContent-Type: text/htmlContent-Length: 3770Last-Modified: Thu, 16 Mar 2017 01:50:52 GMTConnection: keep-aliveETag: "58c9ef7c-eba"Access-Control-Allow-Origin: example.comAccept-Ranges: bytesIf CORS headers are not returned in the response, then the origin server is not correctly setup for CORS. Set up a CORS policy on your custom origin or Amazon Simple Storage Service (Amazon S3) origin.Configure the CloudFront distribution to forward the appropriate headers to the origin serverAfter you set up a CORS policy on your origin server, configure your CloudFront distribution to forward the origin headers to the origin server. If your origin server is an Amazon S3 bucket, then configure your distribution to forward the following headers to Amazon S3:Access-Control-Request-HeadersAccess-Control-Request-MethodOriginTo forward the headers to the origin server, CloudFront has two pre-defined policies depending on your origin type: CORS-S3Origin and CORS-CustomOrigin.To add a pre-defined policy to your distribution:Open your distribution from the CloudFront console.Choose the Behaviors tab.Choose Create Behavior. Or, select an existing behavior, and then choose Edit.Under Cache key and origin requests, choose Cache policy and origin request policy. Then, for Origin request policy, choose CORS-S3Origin or CORS-CustomOrigin from the dropdown list. For more information, see Using the managed origin request policies.Note: To create your own cache policy instead, see Creating cache policies.Choose Create Behavior. Or, choose Save changes if you're editing an existing behavior.To forward the headers using a cache policy:Create a cache policy.Under Cache key settings, for Headers, select Include the following headers. From the Add header dropdown list, choose one of the headers required by your origin.Fill in the cache policy settings as required by the behavior that you're attaching the policy to.Attach the cache policy to the behavior of your CloudFront distribution.To forward the headers using legacy cache settings:Open your distribution from the CloudFront console.Choose the Behaviors tab.Choose Create Behavior. 
Or, select an existing behavior, and then choose Edit.Under Cache key and origin requests, select Legacy cache settings.In the Headers dropdown list, choose the headers required by your origin. Choose Add custom to add headers required by your origin that don't appear in dropdown list.Choose Create Behavior. Or, choose Save changes if you're editing an existing behavior.Note: Be sure also to forward the header as part of your client request to CloudFront, which CloudFront forwards to the origin.Configure the CloudFront distribution's cache behavior to allow the OPTIONS method for HTTP requestsIf you still see errors after updating your CORS policy and forwarding the appropriate headers, allow the OPTIONS HTTP method in your distribution's cache behavior. By default, CloudFront only allows the GET and HEAD methods. However, some web browsers can issue requests for the OPTIONS method. To turn on the OPTIONS method on your CloudFront distribution:Open your distribution from the CloudFront console.Choose the Behaviors tab.Choose Create Behavior. Or, select an existing behavior, and then choose Edit.For Allowed HTTP Methods, select GET, HEAD, OPTIONS.Choose Create Behavior. Or, choose Save changes if you're editing an existing behavior.Configure your CloudFront response policy to return the required Access-Control-Allow-Origin headersIf the origin server isn't accessible or can't be set up to return the appropriate CORS headers, configure a CloudFront to return the required CORS headers. To configure, create response headers policies:Open your distribution from the CloudFront console.Choose the Behaviors tab.Choose Create Behavior. Or, select an existing behavior, and then choose Edit.For Response headers policy:Select an existing response policy from the dropdown list.-or-Choose Create policy to create a new response headers policy . In the new policy, under Cross-origin resource sharing, turn on CORS.Fill in other settings as needed and choose Create policy.From the Create Behavior page, choose the policy you created from the dropdown list.Choose Create Behavior. Or, choose Save changes if you're editing an existing behavior.Note: CloudFront typically deploys changes to distributions within five minutes. After you edit your distribution, invalidate the cache to clear previously cached responses.Related informationConfiguring CloudFront to respect CORS settingsConfiguring cross-origin resource sharing (CORS)Using the managed response headers policiesAdd a cross-origin resource sharing (CORS) header to the responseFollow" | https://repost.aws/knowledge-center/no-access-control-allow-origin-error |
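
If the origin is an S3 bucket, a minimal CORS configuration like the following Python (boto3) sketch makes S3 return the Access-Control-Allow-Origin header checked by the curl test above. The bucket name and allowed origin are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Apply a basic CORS policy to the S3 origin bucket.
s3.put_bucket_cors(
    Bucket="my-origin-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://example.com"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```
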
"How do I configure the NGINX Ingress Controller to increase the client request body, activate CORS to allow additional headers, and use WebSocket to work with Amazon EKS?" | "I want to configure the NGINX Ingress Controller to increase the size of the client request body with my Amazon Elastic Kubernetes Service (Amazon EKS) cluster. I also want to activate Cross-Origin Resource Sharing (CORS) to allow additional headers, and use WebSocket with the NGINX Ingress Controller." | "I want to configure the NGINX Ingress Controller to increase the size of the client request body with my Amazon Elastic Kubernetes Service (Amazon EKS) cluster. I also want to activate Cross-Origin Resource Sharing (CORS) to allow additional headers, and use WebSocket with the NGINX Ingress Controller.Short descriptionChoose one of the following configuration options:To increase the size of the client request body, complete the steps in the Configure maximum body size section.To activate CORS to allow additional headers, complete the steps in the Activate CORS section.To use WebSocket with the NGINX Ingress Controller, complete the steps in the Use WebSocket section.ResolutionConfigure maximum body sizeIf your body size request exceeds the maximum allowed size of the client request body, then the NGINX Ingress Controller returns an HTTP 413 error. Use the client_max_body_size parameter to configure a larger size:nginx.ingress.kubernetes.io/proxy-body-size: 8mNote: The default value of the proxy-body-size is 1 M. Make sure to change the number to the size you need.Note: In some cases, you might need to increase the maximum size for all post body data and file uploads. For PHP, you must increase the post_max_size and upload_max_file_size values in the php.ini configuration.Activate CORSTo activate CORS in an Ingress rule, add the following annotation:nginx.ingress.kubernetes.io/enable-cors: "true"The following example shows that header X-Forwarded-For is accepted:nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"You can find other headers in the Enable CORS (from the GitHub website) section of the NGINX Ingress Controller documentation.Use WebSocketNGINX supports WebSocket (from the NGINX website) versions 1.3 or later, without requirement. To avoid a closed connection, you must increase the proxy-read-timeout and proxy-send-timeout values.In the following example, 120 seconds is set for proxy read timeout and proxy send timeout:nginx.ingress.kubernetes.io/proxy-read-timeout: "120"nginx.ingress.kubernetes.io/proxy-send-timeout: "120"Note: The default value of the preceding two annotations is 60 seconds.Follow" | https://repost.aws/knowledge-center/eks-configure-nginx-ingress-controller |
Amazon Cognito isn't delivering MFA text messages to my app's users. How do I troubleshoot this? | "When the users in my Amazon Cognito user pool sign in to my app, they don't receive a multi-factor authentication (MFA) text message with their one-time password (OTP) as expected. How do I troubleshoot these message delivery failures?" | "When the users in my Amazon Cognito user pool sign in to my app, they don't receive a multi-factor authentication (MFA) text message with their one-time password (OTP) as expected. How do I troubleshoot these message delivery failures?Short descriptionAmazon Cognito's MFA SMS (text) messages are sent using Amazon Simple Notification Service (Amazon SNS). When SMS messages from Amazon SNS aren't delivered as expected, you can troubleshoot the delivery failure reason using Amazon CloudWatch Logs.Delivery failure commonly occurs when an AWS account hits its monthly service quota for SMS spending. If your account hit that service quota, see how to request a service quota increase and then begin monitoring your account's SMS usage.ResolutionIf you haven't done so already, enable CloudWatch logs for your SMS messages. Then, follow these instructions.Review delivery logs using CloudWatchCheck the provider response logs of SMS deliveries in the CloudWatch console. In each delivery status log, the providerResponse attribute contains the reason for delivery success or failure.Note: If you just now enabled CloudWatch logs for SMS messages, you won't see logs of your account's past SMS usage from before you enabled logging.As a test, you can use Amazon SNS to send an SMS message to your own mobile phone. If the test message doesn't arrive, then check the logs for the provider response.View the month-to-date SMS spendingLook at your account's Amazon SNS metrics to see the month-to-date SMS spending (SMSMonthToDateSpentUSD).Open the CloudWatch console.In the left navigation pane, choose Metrics.Under All metrics, choose SNS, and then choose Metrics with no dimensions.Under Metric Name, expand SMSMonthToDateSpentUSD, and then choose Graph this metric only.Note: On the Graphed metrics tab, confirm that Statistic is set to Maximum.In the graph, note the value of the metric.For more information, see Graphing a metric.Check the monthly service quota for SMS spendingLook at your account's monthly Amazon SNS service quota for SMS spending. Compare it to your account's month-to-date SMS spending to determine if it hit the monthly quota**.**Open the Amazon SNS console.In the left navigation pane, choose Text messaging (SMS).Under Text messaging preferences, note the value for Account spend limit.For more information, see Setting SMS messaging preferences and Amazon Simple Notification Service endpoints and quotas.(Optional) Request a service quota increase for SMS spendingIf your account hit the monthly Amazon SNS service quota for SMS spending but you want to send more SMS messages, request a service quota increase. If you expect your monthly SMS usage to stay the same (or increase), then a service quota increase also prevents the issue from happening again.Set an alarm and monitor SMS usageIn addition to a service quota increase, keeping informed of your account's SMS activity can help you avoid hitting the monthly service quota. Do any of the following:Create a CloudWatch alarm for the SMSMonthToDateSpentUSD metric. 
Set the alarm to notify you well in advance of hitting the SMS spending quota.Monitor your account's SMS metrics and logs using CloudWatch to stay aware of your account's usage and anticipate your costs.View SMS delivery statistics and subscribe to daily SMS usage reports from Amazon SNS.Related informationAmazon SNS FAQsSMS message spending in USD (Service Quotas console)Monitoring Amazon SNS topics using CloudWatchAdding multi-factor authentication (MFA) to a user poolAdding advanced security to a user poolFollow" | https://repost.aws/knowledge-center/cognito-troubleshoot-mfa-sms-delivery |
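The spend limit and month-to-date metric described above can also be checked from the AWS CLI. A sketch with placeholder dates:

```bash
# Current account-level SMS preferences, including MonthlySpendLimit.
aws sns get-sms-attributes --attributes MonthlySpendLimit

# Month-to-date SMS spend reported by CloudWatch (maximum over the period).
aws cloudwatch get-metric-statistics \
  --namespace AWS/SNS \
  --metric-name SMSMonthToDateSpentUSD \
  --statistics Maximum \
  --period 86400 \
  --start-time 2023-05-01T00:00:00Z \
  --end-time 2023-05-31T23:59:59Z
```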
How can I access resources in a peered VPC over Client VPN? | I have an AWS Client VPN endpoint connected to a target virtual private cloud (VPC). I have other VPCs peered with the target VPC. I need to access the peered VPCs through the same endpoint. How can I access resources in a peered VPC over Client VPN? | "I have an AWS Client VPN endpoint connected to a target virtual private cloud (VPC). I have other VPCs peered with the target VPC. I need to access the peered VPCs through the same endpoint. How can I access resources in a peered VPC over Client VPN?ResolutionBefore you begin:Create or identify a VPC with at least one subnet. Find the subnet in the VPC that you plan to associate with the Client VPN endpoint, and then note its IPv4 CIDR ranges. For more information, see Subnets for your VPC.Identify a suitable CIDR range for the client IP addresses that doesn't overlap with the VPC CIDR.Review the limitations and rules for Client VPN endpoints.To provide access to resources in a peered VPC:Create a VPC peering connection between the VPCs.Test the VPC peering connection. Confirm that instances in both VPCs can communicate as if they're in the same network.Create a Client VPN endpoint in the same Region as the target VPC.Associate the subnet that you previously identified with the Client VPN endpoint that you created in step 3.Add an authorization rule to give clients access to the target VPC. For Destination network to enable, enter the IPv4 CIDR range of the VPC.Add an authorization rule to give clients access to the peered VPC. For Destination network, enter the IPv4 CIDR range of the peered VPC.Add an endpoint route to direct traffic to the peered VPC. For Route destination, enter the IPv4 CIDR range of the peered VPC. For Target VPC Subnet ID, select the subnet that you associated with the Client VPN endpoint.Add a rule to the security groups for your resources in both VPCs. Use this rule to allow traffic from the security group that you applied to the subnet association in step 4. Then, confirm that the network access control lists (ACLs) in both VPCs allow traffic between the target VPC and the peered VPC.Follow" | https://repost.aws/knowledge-center/client-vpn-access-peered-vpc-resources |
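Steps 6 and 7 above can also be performed with the AWS CLI. A sketch, assuming a placeholder Client VPN endpoint ID and subnet ID, and a peered VPC CIDR of 10.1.0.0/16:

```bash
# Authorization rule for the peered VPC CIDR (step 6).
aws ec2 authorize-client-vpn-ingress \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --target-network-cidr 10.1.0.0/16 \
  --authorize-all-groups

# Route that sends peered-VPC traffic through the associated subnet (step 7).
aws ec2 create-client-vpn-route \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16 \
  --target-vpc-subnet-id subnet-0123456789abcdef0
```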
How can I use IAM policy tags to restrict how an EC2 instance or EBS volume can be created? | "I want to allow AWS Identity and Access Management (IAM) users or groups access to launch new Amazon Elastic Compute Cloud (Amazon EC2) instances. I also want to allow IAM users access to create new Amazon Elastic Block Store (Amazon EBS) volumes, but only when they apply specific tags." | "I want to allow AWS Identity and Access Management (IAM) users or groups access to launch new Amazon Elastic Compute Cloud (Amazon EC2) instances. I also want to allow IAM users access to create new Amazon Elastic Block Store (Amazon EBS) volumes, but only when they apply specific tags.Short descriptionSpecify tags for EC2 instances and EBS volumes as part of the API call that creates the resources. Using this principle, you can require IAM users to tag specific resources by applying conditions to their IAM policy. The following example policies don't allow users to create security groups or key pairs, so users must select pre-existing security groups and key pairs.The following example IAM policies allow users to:Launch EC2 instances that have matching tag keys and valuesLaunch EC2 instances that have at least one matching tag and valueLaunch EC2 instances that have at least one matching tag keyLaunch EC2 instances that have only the specified list of tagsResolutionLaunch EC2 instances that have matching tag keys and valuesThe following example policy allows a user to launch an EC2 instance and create an EBS volume only if the user applies all the tags that are defined in the policy using the qualifier ForAllValues. If the user applies a tag that's not included in the policy, then the action is denied. To enforce case sensitivity, use the condition aws:TagKeys.Note: Modify key1 and value1 in the example policies to include the tags and values that apply to your resources:{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowToDescribeAll", "Effect": "Allow", "Action": [ "ec2:Describe*" ], "Resource": "*" }, { "Sid": "AllowRunInstances", "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": [ "arn:aws:ec2:*::image/*", "arn:aws:ec2:*::snapshot/*", "arn:aws:ec2:*:*:subnet/*", "arn:aws:ec2:*:*:network-interface/*", "arn:aws:ec2:*:*:security-group/*", "arn:aws:ec2:*:*:key-pair/*" ] }, { "Sid": "AllowRunInstancesWithRestrictions", "Effect": "Allow", "Action": [ "ec2:CreateVolume", "ec2:RunInstances" ], "Resource": [ "arn:aws:ec2:*:*:volume/*", "arn:aws:ec2:*:*:instance/*", "arn:aws:ec2:*:*:network-interface/*" ], "Condition": { "StringEquals": { "aws:RequestTag/key1": "value1", "aws:RequestTag/key2": "value2" }, "ForAllValues:StringEquals": { "aws:TagKeys": [ "key1", "key2" ] } } }, { "Sid": "AllowCreateTagsOnlyLaunching", "Effect": "Allow", "Action": [ "ec2:CreateTags" ], "Resource": [ "arn:aws:ec2:*:*:volume/*", "arn:aws:ec2:*:*:instance/*", "arn:aws:ec2:*:*:network-interface/*" ], "Condition": { "StringEquals": { "ec2:CreateAction": [ "RunInstances", "CreateVolume" ] } } } ]}Important: To launch EC2 instances successfully, this policy must include matching tag keys and values. 
If the key and value pairs don't match, then you receive the error "Launch Failed" or similar type of API failure message.Example resultsKey/ValueResultkey1/value1 and key2/value2allowkey1/value1denykey1/value2denyno keys and valuesdenyLaunch EC2 instances that have at least one matching tag and valueIn the following example, replace the AllowRunInstancesWithRestrictions condition block to allow a user to launch an EC2 instance and create EBS volumes when at least one tag key is named key1 and its value is value1. Any number of additional tags can be added in the RunInstances request:"Condition": { "StringEquals": { "aws:RequestTag/key1": "value1" }, "ForAnyValue:StringEquals": { "aws:TagKeys": [ "key1" ] }}Example resultsKey/ValueResultkey1/value1 and key2/value2allowkey1/value1allowkey1/value2denyno keys and valuesdenyLaunch EC2 instances that have at least one matching tag keyIn the following policy example, replace the AllowRunInstancesWithRestrictions condition block to allow a user to launch an EC2 instance and create EBS volumes when at least one tag key is named key1. No specific value is required for the key1 tag and any number of additional tags can be added in the RunInstances request."Condition": { "ForAnyValue:StringEquals": { "aws:TagKeys": [ "key1" ] }}Example resultsKey/ValueResultkey1/value1 and key2/value2allowkey1/value1allowkey1/value2allowno keys and valuesdenyLaunch EC2 instances that have only the specified list of tagsIn the following example policy, replace the AllowRunInstancesWithRestrictions condition block to allow a user to launch an EC2 instance and create EBS volumes only when tag keys key1 and key2 are provided in the request. No specific value is required for either tag keys, and no additional tags can be added in the RunInstances request:"Condition": { "StringLike": { "aws:RequestTag/key1": "*", "aws:RequestTag/key2": "*" }, "ForAllValues:StringEquals": { "aws:TagKeys": [ "key1", "key2" ] }}Note: The StringLike condition is required to make sure that all tags are present.Example resultsKey/ValueResultkey1/AnyValue and key2/AnyValueAllowkey1/AnyValueDenykey2/AnyValueDenyNo keys or valuesDenykey1/AnyValue, key2/AnyValue, key3/AnyValueDenyRelated informationHow do I create an IAM policy to control access to Amazon EC2 resources using tags?Creating a condition with multiple keys or valuesExample IAM identity-based policiesTag your Amazon EC2 resourcesActions, resources, and condition keys for Amazon EC2Follow" | https://repost.aws/knowledge-center/iam-policy-tags-restrict |
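One way to exercise the first policy is to launch a test instance that tags both the instance and its volume in the RunInstances call. A sketch with placeholder AMI, subnet, and tag values; under that policy, omitting either tag should be denied:

```bash
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --subnet-id subnet-0123456789abcdef0 \
  --tag-specifications \
    'ResourceType=instance,Tags=[{Key=key1,Value=value1},{Key=key2,Value=value2}]' \
    'ResourceType=volume,Tags=[{Key=key1,Value=value1},{Key=key2,Value=value2}]'
```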
Why did Security Hub initiate the finding "Lambda function policies should prohibit public access"? | AWS Security Hub returned a control check response for an AWS Lambda function. | "AWS Security Hub returned a control check response for an AWS Lambda function.Short descriptionSecurity Hub contains a finding type similar to the following one:"[Lambda.1] Lambda function policies should prohibit public access"This control response fails for the following reasons:The Lambda function is publicly accessible.You invoke the Lambda function from Amazon Simple Storage Service (Amazon S3), and the policy doesn't include a condition for AWS:SourceAccount.ResolutionTo resolve this issue, either update the policy to remove the permissions that allow public access, or add the AWS:SourceAccount condition to the policy.Note:To update the resource-based policy, you must use the AWS Command Line Interface (AWS CLI).If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Use the Lambda console to view a function's resource-based policy. Depending on your use case, you can remove or update permissions for the Lambda function.To remove permissions from the Lambda function, run the AWS CLI command remove-permission similar to the following:$ aws lambda remove-permission --function-name <function-name> --statement-id <statement-id>To update permissions for the Lambda function, run the AWS CLI command add-permission similar to the following:$ aws lambda add-permission --function-name <function-name> --statement-id <new-statement-id> --action lambda:InvokeFunction --principal s3.amazonaws.com --source-account <account-id> --source-arn <bucket-arn>To verify that the permissions are removed or updated, repeat the instructions to view a function's resource-based policy.Note: If you remove the only statement in the policy, then the policy is empty.For more information, see Security Hub controls reference.Related informationlambda-function-public-access-prohibitedHow can I use Security Hub to monitor security issues for my AWS environment?Follow" | https://repost.aws/knowledge-center/security-hub-lambda-finding |
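Before removing or updating a statement, you can dump the function's resource-based policy to find the statement ID. A sketch with a placeholder function name:

```bash
# Review the existing statements and note the Sid of the public one.
aws lambda get-policy --function-name my-function --query Policy --output text
```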
How do I use multiple CIDR ranges with Amazon EKS? | "I want to use multiple CIDR ranges with Amazon Elastic Kubernetes Service (Amazon EKS) to address issues with my pods. For example, how do I run pods with different CIDR ranges added to my Amazon Virtual Private Cloud (Amazon VPC)? Also, how can I add more IP addresses to my subnet when my subnet runs out of IP addresses? Finally, how can I be sure that pods running on worker nodes have different IP ranges?" | "I want to use multiple CIDR ranges with Amazon Elastic Kubernetes Service (Amazon EKS) to address issues with my pods. For example, how do I run pods with different CIDR ranges added to my Amazon Virtual Private Cloud (Amazon VPC)? Also, how can I add more IP addresses to my subnet when my subnet runs out of IP addresses? Finally, how can I be sure that pods running on worker nodes have different IP ranges?Short descriptionBefore you complete the steps in the Resolution section, confirm that you have the following:A running Amazon EKS clusterAccess to a version (no earlier than 1.16.284) of the AWS Command Line Interface (AWS CLI)AWS Identity and Access Management (IAM) permissions to manage an Amazon VPCA kubectl with permissions to create custom resources and edit the DaemonsSetAn installed version of jq (from the jq website) on your systemA Unix-based system with a Bash shellKeep in mind:You can associate private (RFC 1918) and public (non-RFC 1918) CIDR blocks to your VPC before or after you create your cluster.In scenarios with carrier-grade network address translation (NAT), 100.64.0.0/10 is a private network range. This private network range is used in shared address space for communications between a service provider and its subscribers. You must have a NAT gateway configured at the route table for pods to communicate with the internet. Daemonsets aren't supported on AWS Fargate clusters. To add secondary CIDR ranges to Fargate profiles, use subnets from your VPC's secondary CIDR blocks. Then, tag your new subnets before adding the subnets to your Fargate profile.Important: In certain circumstances, Amazon EKS can't communicate with nodes launched in subnets from additional CIDR blocks added to a VPC after a cluster is created. An updated range caused by adding CIDR blocks to an existing cluster can take as long as five hours to appear.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.In the following resolution, first set up your VPC. Then, configure the CNI plugin to use a new CIDR range.Add additional CIDR ranges to expand your VPC network1. Find your VPCs.If your VPCs have a tag, then run the following command to find your VPC:VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=yourVPCName | jq -r '.Vpcs[].VpcId')If your VPCs don't have a tag, then run the following command to list all the VPCs in your AWS Region:aws ec2 describe-vpcs --filters | jq -r '.Vpcs[].VpcId'2. To attach your VPC to a VPC_ID variable, run the following command:export VPC_ID=vpc-xxxxxxxxxxxx3. To associate an additional CIDR block with the range 100.64.0.0/16 to the VPC, run the following command:aws ec2 associate-vpc-cidr-block --vpc-id $VPC_ID --cidr-block 100.64.0.0/16Create subnets with a new CIDR range1. To list all the Availability Zones in your AWS Region, run the following command:aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[*].ZoneName'Note: Replace us-east-1 with your AWS Region.2. 
Choose the Availability Zone where you want to add the subnets, and then assign those Availability Zones to variables. For example:export AZ1=us-east-1aexport AZ2=us-east-1bexport AZ3=us-east-1cNote: You can add more Availability Zones by creating more variables.3. To create new subnets under the VPC with the new CIDR range, run the following commands:CUST_SNET1=$(aws ec2 create-subnet --cidr-block 100.64.0.0/19 --vpc-id $VPC_ID --availability-zone $AZ1 | jq -r .Subnet.SubnetId)CUST_SNET2=$(aws ec2 create-subnet --cidr-block 100.64.32.0/19 --vpc-id $VPC_ID --availability-zone $AZ2 | jq -r .Subnet.SubnetId)CUST_SNET3=$(aws ec2 create-subnet --cidr-block 100.64.64.0/19 --vpc-id $VPC_ID --availability-zone $AZ3 | jq -r .Subnet.SubnetId)Tag the new subnetsFor clusters running on Kubernetes 1.18 and earlier, you must tag all subnets so that Amazon EKS can discover the subnets.Note: Amazon EKS supports the automatic discovery of subnets without any kubernetes.io tags starting at Kubernetes version 1.19. For more information, see the changelog on the Kubernetes GitHub site.1. (Optional) Add a name tag for your subnets by setting a key-value pair. For example:aws ec2 create-tags --resources $CUST_SNET1 --tags Key=Name,Value=SubnetAaws ec2 create-tags --resources $CUST_SNET2 --tags Key=Name,Value=SubnetBaws ec2 create-tags --resources $CUST_SNET3 --tags Key=Name,Value=SubnetC2. For clusters running on Kubernetes 1.18 and below, tag the subnet for discovery by Amazon EKS. For example:aws ec2 create-tags --resources $CUST_SNET1 --tags Key=kubernetes.io/cluster/yourClusterName,Value=sharedaws ec2 create-tags --resources $CUST_SNET2 --tags Key=kubernetes.io/cluster/yourClusterName,Value=sharedaws ec2 create-tags --resources $CUST_SNET3 --tags Key=kubernetes.io/cluster/yourClusterName,Value=sharedNote: Replace yourClusterName with the name of your Amazon EKS cluster.If you're planning to use Elastic Load Balancing, then consider adding additional tags.Associate your new subnet to a route table1. To list the entire route table under the VPC, run the following command:aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID |jq -r '.RouteTables[].RouteTableId'2. For the route table that you want to associate with your subnet, run the following command to export to the variable. Then, replace rtb-xxxxxxxxx with the values from step 1:export RTASSOC_ID=rtb-xxxxxxxxx3. Associate the route table to all new subnets. For example:aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET1aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET2aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET3For more information, see Routing.Configure the CNI plugin to use the new CIDR range1. Make sure that the latest recommended version of the vpc-cni plugin is running in the cluster.To verify the version that's running in the cluster, run the following command:kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2<br>To check the latest recommended version of vpc-cni, and update the plugin if needed, see Updating the Amazon VPC CNI plugin for Kubernetes add-on.2. To turn on custom network configuration for the CNI plugin, run the following command:kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true3. 
To add the ENIConfig label for identifying your worker nodes, run the following command:kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone4. To create an ENIConfig custom resource for all subnets and Availability Zones, run the following commands:cat <<EOF | kubectl apply -f -apiVersion: crd.k8s.amazonaws.com/v1alpha1kind: ENIConfigmetadata: name: $AZ1spec: subnet: $CUST_SNET1EOFcat <<EOF | kubectl apply -f -apiVersion: crd.k8s.amazonaws.com/v1alpha1kind: ENIConfigmetadata: name: $AZ2spec: subnet: $CUST_SNET2EOFcat <<EOF | kubectl apply -f -apiVersion: crd.k8s.amazonaws.com/v1alpha1kind: ENIConfigmetadata: name: $AZ3spec: subnet: $CUST_SNET3EOFNote: The ENIConfig must match the Availability Zone of your worker nodes.5. Launch the new worker nodes.Note: This step allows the CNI plugin (ipamd) to allocate IP addresses from the new CIDR range on the new worker nodes.When using custom networking, the primary network interface isn't used for pod placement. In this case, you must first update max-pods using the following formula:maxPods = (number of interfaces - 1) * (max IPv4 addresses per interface - 1) + 2For a self-managed node group: Deploy the node group using the instructions in Launching self-managed Amazon Linux nodes. Don’t specify the subnets that you used in the ENIConfig resources that you deployed. Instead, specify the following text for the BootstrapArguments parameter:--use-max-pods false --kubelet-extra-args '--max-pods=<20>'For a managed node group without a launch template or with a launch template without an AMI ID specified: Managed node groups automatically calculate the Amazon EKS recommended max pods value. Follow the steps in Creating a managed node group. Or, use the Amazon EKS CLI to create the managed node group:aws eks create-nodegroup --cluster-name <sample-cluster-name> --nodegroup-name <sample-nodegroup-name> --subnets <subnet-123 subnet-456> --node-role <arn:aws:iam::123456789012:role/SampleNodeRole>Note: For the subnet field, don’t specify the subnets that you specified in the ENIConfig resources. More options can be specified as needed.For a managed node group with a launch template with a specified AMI ID: Provide the **'--max-pods=’** extra argument as user data in the launch template. In your launch template, specify an Amazon EKS optimized AMI ID, or a custom AMI built off the Amazon EKS optimized AMI. Then, deploy the node group using a launch template and provide the following user data in the launch template:#!/bin/bash/etc/eks/bootstrap.sh <my-cluster-name> --kubelet-extra-args <'--max-pods=20'>6. After creating your node group, note the security group for the subnet and apply the security group to the associated ENIConfig.In the following example, replace sg-xxxxxxxxxxxx with your security group:cat <<EOF | kubectl apply -f -apiVersion: crd.k8s.amazonaws.com/v1alpha1kind: ENIConfigmetadata: name: $AZ1spec: securityGroups: - sg-xxxxxxxxxxxx subnet: $CUST_SNET1EOFcat <<EOF | kubectl apply -f -apiVersion: crd.k8s.amazonaws.com/v1alpha1kind: ENIConfigmetadata: name: $AZ2spec: securityGroups: - sg-xxxxxxxxxxxx subnet: $CUST_SNET2EOFcat <<EOF | kubectl apply -f -apiVersion: crd.k8s.amazonaws.com/v1alpha1kind: ENIConfigmetadata: name: $AZ3spec: securityGroups: - sg-xxxxxxxxxxxx subnet: $CUST_SNET3EOF7. Terminate the old worker nodes. Then, test the configuration by launching a new deployment. 
Ten new pods are added, and pods scheduled on the new worker nodes receive IP addresses from the new CIDR range:kubectl create deployment nginx-test --image=nginx --replicas=10
kubectl get pods -o wide --selector=app=nginx-testFollow" | https://repost.aws/knowledge-center/eks-multiple-cidr-ranges |
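A few verification commands for the steps above, reusing the shell variables defined earlier; the nginx-test deployment is the one created in the final step:

```bash
# Confirm the new subnets carry CIDRs from the 100.64.0.0/16 secondary range.
aws ec2 describe-subnets --subnet-ids $CUST_SNET1 $CUST_SNET2 $CUST_SNET3 \
  --query 'Subnets[*].[SubnetId,AvailabilityZone,CidrBlock]' --output table

# Confirm the ENIConfig custom resources exist and custom networking is on.
kubectl get eniconfigs.crd.k8s.amazonaws.com
kubectl describe daemonset aws-node -n kube-system | grep -i AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG

# Pod IPs on the new nodes should come from 100.64.0.0/16.
kubectl get pods -o wide --selector=app=nginx-test
```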
Which Amazon VPC options do I need to turn on to use my private hosted zone? | "I created a private hosted zone and associated it with a virtual private cloud (VPC). However, my domain names still aren't resolving. Which Amazon Virtual Private Cloud (Amazon VPC) options do I need to turn on to get my private hosted zone to work?" | "I created a private hosted zone and associated it with a virtual private cloud (VPC). However, my domain names still aren't resolving. Which Amazon Virtual Private Cloud (Amazon VPC) options do I need to turn on to get my private hosted zone to work?ResolutionDomain Name System (DNS) hostnames and DNS resolution are required settings for private hosted zones. DNS queries for private hosted zones can be resolved by the Amazon-provided VPC DNS server only. As a result, these options must be turned on for your private hosted zone to work. To modify these options, see View and update DNS attributes for your VPC.DNS hostnamesFor non-default VPCs that aren't created using the Amazon VPC wizard, this option is turned off by default. If you create a private hosted zone for a domain and create records without turning on DNS hostnames, private hosted zones aren't turned on.To use a private hosted zone, this option must be turned on.DNS resolutionPrivate hosted zones accept DNS queries only from a VPC DNS server. The IP address of the VPC DNS server is the reserved IP address at the base of the VPC IPv4 network range plus two. Turning on DNS resolution allows you to use the VPC DNS server as a resolver for performing DNS resolution.Keep this option turned off if you're using a custom DNS server in the DHCP options set and you're not using a private hosted zone.This option and DNS hostnames must be turned on to resolve endpoint domains to private IP addresses for AWS Managed Services. Examples of these services include AWS PrivateLink and Amazon Relational Database Service (Amazon RDS).Related informationWorking with private hosted zonesFollow" | https://repost.aws/knowledge-center/vpc-enable-private-hosted-zone |
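Both attributes can also be checked and turned on from the AWS CLI. A sketch with a placeholder VPC ID; each attribute must be modified in a separate call:

```bash
# Check the current settings.
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

# Turn them on (one attribute per call).
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value":true}'
```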
How can I use Amazon CloudWatch metrics to identify NAT gateway bandwidth issues? | "My NAT gateway is not receiving the bandwidth that I expect, and I want to identify bandwidth issues using Amazon CloudWatch metrics." | "My NAT gateway is not receiving the bandwidth that I expect, and I want to identify bandwidth issues using Amazon CloudWatch metrics.Short descriptionTo identify the source of bandwidth issues with your NAT gateway, follow these steps:Benchmark the networking throughput for your NAT gateway traffic and bytes per second for your Amazon Elastic Compute Cloud (Amazon EC2) instances.Review the CloudWatch metrics for the NAT gateway that has issues.Check all the instances behind the NAT gateway, and verify their CloudWatch metrics.Compare the results between the benchmarking tests and the CloudWatch metrics.ResolutionBenchmark the networking throughput1. Set up a test environment to benchmark your network throughput between Amazon EC2 Linux instances in the same virtual private cloud (VPC).2. Benchmark the traffic (bytes per second) that an instance can handle.3. Repeat these steps for the different instance types that you have running behind the NAT gateway. To identify the instance types, see Check the instances behind the NAT gateway section below.Review the CloudWatch metrics for issues with throughput or NAT gateway bandwidth1. Open the CloudWatch console.2. In the navigation pane, under Metrics, search for the NAT gateway.3. Select the NAT gateway, and then choose the PacketsDropCount metric. Note: A healthy NAT gateway always has a value of zero. A non-zero value indicates an ongoing transient issue with the NAT gateway. If the value isn't zero, then refer to the AWS Health Dashboard. If there are no notifications on the AWS Personal Health Dashboard, then open a case with AWS Support.4. Select the NAT gateway, and then confirm that there's a value of zero for the ErrorPortAllocation metric**.Note**: A value greater than zero indicates that too many concurrent connections to the same destination are open through the NAT gateway.5. Select BytesOutToDestination, BytesOutToSource, BytesInFromDestination, and BytesInFromSource. Note: Bandwidth is calculated as [( BytesOutToDestination + BytesOutToSource + BytesInFromDestination + BytesInFromSource) * 8 / Time period in seconds].If you need more than 100 Gbps of bandwidth bursts, then split the resources between multiple subnets and create multiple NAT gateways. For optimal performance, create your EC2 instances across private subnets that are in the same Availability Zone as your NAT gateway.Check the instances behind the NAT gateway1. Open the Amazon VPC console.2. In the navigation pane, under Route Tables, select the route tables that have routes pointing to the NAT gateway.3. Select the Subnet Association view, and note all the subnet IDs.4. Open the Amazon EC2 console.5. In the navigation pane, under Instances, choose the settings icon to view the Show/Hide Columns.6. Select Subnet ID and Instance Type.7. Note the IDs of all the instances that are launched in the subnets noted in step 3.Verify the CloudWatch metrics for all the instances behind the NAT gateway1. Open the Amazon CloudWatch console.2. In the navigation pane, under Metrics, choose EC2.3. Select the IDs of all the instances behind the NAT gateway that were noted previously.4. Under the Metric Name column, select NetworkIn/NetworkOut and CPUUtilization on all the instances during the time that you experienced bandwidth issues.5. 
Confirm that there are no CPU spikes or abnormal increases in traffic at the same time as the bandwidth issue.6. Activate the flow logs at the subnet level to review the traffic flowing through the NAT gateway. For more information about enabling flow logs, see Logging IP traffic using VPC Flow Logs.Compare the resultsCheck if the sum of networking throughput metrics across all instances behind the NAT gateway is more than 100 Gbps bursts. In this case, your bandwidth on the NAT gateway reflects a value that's greater than 100 Gbps. If your bandwidth on the NAT gateway is greater than 100 Gbps, then it's a best practice to split your traffic across multiple NAT gateways.If the sum of throughput metrics is less than 100 Gbps bursts, then the NAT gateway's bandwidth reflects a value that's less than 100 Gbps. If your bandwidth on the NAT gateway is less than 100 Gbps, then the NAT gateway can sufficiently handle the traffic flowing through it.Related informationHow do I set up a NAT gateway for a private subnet in Amazon VPC?NAT gatewaysWhy can't my Amazon EC2 instance in a private subnet connect to the internet using a NAT gateway?Compare NAT gateways and NAT instancesFollow" | https://repost.aws/knowledge-center/cloudwatch-nat-gateway-bandwidth |
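The byte metrics used in the bandwidth calculation above can be pulled from the CLI. A sketch for one of the four metrics, with a placeholder NAT gateway ID and time window; repeat for the other three metrics and apply the formula from the article:

```bash
aws cloudwatch get-metric-statistics \
  --namespace AWS/NATGateway \
  --metric-name BytesOutToDestination \
  --dimensions Name=NatGatewayId,Value=nat-0123456789abcdef0 \
  --statistics Sum \
  --period 300 \
  --start-time 2023-05-01T00:00:00Z \
  --end-time 2023-05-01T01:00:00Z
# Bandwidth (bits per second) for a period = sum of the four byte metrics * 8 / period in seconds
```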
How do I resolve Route 53 private hosted zones when using an AWS Managed Microsoft AD directory? | Resources in my AWS Directory Service for Microsoft Active Directory domain can’t resolve DNS records in my Amazon Route 53 private hosted zone. How can I resolve this issue? | "Resources in my AWS Directory Service for Microsoft Active Directory domain can’t resolve DNS records in my Amazon Route 53 private hosted zone. How can I resolve this issue?Short descriptionBy default, DNS queries for private hosted zones are resolved only by the AmazonProvidedDNS server. However, you can configure DNS forwarder settings to send requests destined for the Route 53 private hosted zone to the AmazonProvidedDNS instead.Note: the AWS Managed Microsoft AD server won't contact the AmazonProvidedDNS server for private hosted zone domains under the following circumstances:The AWS Managed Microsoft AD server hosts a zone with the same Route 53 private hosted name. For example, a DNS zone named example1.com manually created on an AWS Managed Microsoft AD and Route 53 has two private hosted zones: example1.com and example2.com. The AWS Managed Microsoft AD will respond all DNS queries to example1.com authoritatively and won't forward example1.com queries to Route 53. DNS queries targeting the domain example2.com will be successfully forwarded to Route 53.The AWS Managed Microsoft AD domain has the same name of the Route 53 private hosted zone. For example, the AWS Managed Microsoft AD is named example1.com during its launch. A DNS zone named example1.com is automatically created on the AWS Managed Microsoft AD. If Route 53 has a private hosted zone named example1.com, then AWS Managed Microsoft AD responds all DNS queries to example1.com authoritatively. It won't forward example1.com queries to Route 53. DNS queries targeting other domains, such as example2.com, are successfully forwarded to Route 53.The AWS Managed Microsoft AD has a DNS zone named "." (root). For example, the AWS Managed Microsoft AD is named myexample.com during its launch, so a DNS Zone myexample.com is created automatically on AWS Managed Microsoft AD. Route 53 hosts two private hosted zones example1.com and example2.com. In this case, the AWS Managed Microsoft AD won't forward any requests to Route 53. Name resolution fails for DNS zones example1.com and example2.com and internet names such as www.amazon.com.For more information, see DNS terminology on the IETF website.ResolutionFirst, install the Active Directory Domain Services and Active Directory Lightweight Directory Services Tools on a domain-joined Amazon Elastic Compute Cloud (Amazon EC2) instance.Note: In the Features tree, be sure to select AD DS, AD LDS Tools, and DNS Server Tools.Then, follow these steps:Log in to the Remote Server Administration Tools (RSAT) instance using the Administrator account.Open the DNS management tool from Windows Administrative Tools.Connect to the DNS server using the IP address of one of your Managed AD domain controllers.Expand DNS, open the context (right-click) menu for the domain name, and then choose Properties.From the Forwarders tab, edit the IP address of the forwarding servers to point to the AmazonProvidedDNS.Note: The AmazonProvidedDNS is the second address of the VPC. For example, if the VPC CIDR is 10.0.0.0/16, then the AmazonProvidedDNS is 10.0.0.2. 
For more information, see Amazon DNS server.Repeat steps 3 to 5 entering the IP address of each additional domain controller in your Managed AD domain.Related informationRemote Server Administration Tools (RSAT) for Windows on the Microsoft websiteFollow" | https://repost.aws/knowledge-center/ds-private-hosted-zones-msft-ad |
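After the forwarders are in place, you can verify resolution from a domain-joined instance. A sketch using dig, assuming a hypothetical private hosted zone record app.example2.com, a Managed AD DNS address of 10.0.1.10, and a VPC resolver address of 10.0.0.2:

```bash
# Query the Managed AD DNS server directly -- it should now forward to Route 53.
dig @10.0.1.10 app.example2.com +short

# Compare with the Amazon-provided resolver (VPC base + 2).
dig @10.0.0.2 app.example2.com +short
```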
How do I grant IAM permissions to a Lambda function using an AWS SAM template? | I want to grant permissions to AWS Lambda functions in my AWS Serverless Application Model (AWS SAM) application. How do I define a Lambda execution role with scoped permissions in an AWS SAM template? | "I want to grant permissions to AWS Lambda functions in my AWS Serverless Application Model (AWS SAM) application. How do I define a Lambda execution role with scoped permissions in an AWS SAM template?Short descriptionTo define a Lambda execution role in an AWS SAM template, you can use the following AWS::Serverless::Function resource properties:Policies—Allow you to create a new execution role using predefined policies that can be scoped to your Lambda function.Role—Allows you to define an AWS Identity and Access Management (IAM) role to use as the function's execution role.PermissionsBoundary—Allows you to set an IAM permissions boundary for the execution role that you create.Note: The Policies and Roles properties can't be used together. Using the Role property is helpful when your execution role requires permissions that are too specific to use predefined policies.ResolutionSpecify policies for a new Lambda execution roleFor the Policies property, enter any combination of the following:The name of an AWS managed policyThe name of an AWS SAM policy templateAn inline policy documentNote: AWS SAM policy templates are scoped to specific AWS resources. See Policy template table for a list of policy templates and the permissions that they give to your Lambda functions.The following are some example AWS SAM YAML templates with Policies defined:Example AWS SAM YAML template with an AWS managed policy namedAWSTemplateFormatVersion: '2010-09-09'Transform: 'AWS::Serverless-2016-10-31' bResources: MyFunction: Type: 'AWS::Serverless::Function' Properties: Handler: index.handler Runtime: nodejs8.10 CodeUri: 's3://my-bucket/function.zip' Policies: # Give the Lambda service access to poll your DynamoDB Stream - AmazonDynamoDBFullAccessExample AWS SAM YAML template with an AWS SAM policy template (SQSPollerPolicy) definedMyFunction: Type: 'AWS::Serverless::Function' Properties: CodeUri: ${codeuri} Handler: hello.handler Runtime: python2.7 Policies: - SQSPollerPolicy: QueueName: !GetAtt MyQueue.QueueNameExample AWS SAM YAML template with an inline policy document definedAWSTemplateFormatVersion: '2010-09-09'Transform: 'AWS::Serverless-2016-10-31'Resources: MyFunction: Type: 'AWS::Serverless::Function' Properties: Handler: index.handler Runtime: nodejs8.10 CodeUri: 's3://my-bucket/function.zip' Policies: - Statement: - Sid: SSMDescribeParametersPolicy Effect: Allow Action: - ssm:DescribeParameters Resource: '*' - Sid: SSMGetParameterPolicy Effect: Allow Action: - ssm:GetParameters - ssm:GetParameter Resource: '*'(Optional) Specify an IAM permissions boundaryTo set the maximum permissions allowed for your Lambda function's execution role, use an IAM permissions boundary.To set an IAM permissions boundary, do the following in your AWS SAM YAML template:Specify the Amazon Resource Name (ARN) of a permissions boundaryFor the PermissionsBoundary property, enter the ARN of a permissions boundary. For example:Properties: PermissionsBoundary: arn:aws:iam::123456789012:policy/LambdaBoundariesNote: You can define PermissionsBoundary only if you're creating a new role with your AWS SAM template. 
You can't set a permissions boundary for an existing Role that you specify.Specify a Lambda execution roleFor the Role property, enter one of the following:The ARN of a Lambda execution role that has an IAM permissions policy attached.A reference to a Role resource that you've defined in the same AWS SAM template.Note: If you don't specify a Role in your AWS SAM template, then an execution role is created when you deploy your application. This execution role includes any Policies that you define.Example AWS SAM YAML template with the Role property definedAWSTemplateFormatVersion: '2010-09-09'Transform: 'AWS::Serverless-2016-10-31'Resources: MyFunction: Type: 'AWS::Serverless::Function' Properties: Handler: index.handler Runtime: nodejs8.10 CodeUri: 's3://my-bucket/function.zip' Role: arn:aws:iam::111111111111:role/SAMPolicyPackage and deploy your application1. In the AWS SAM command line interface (AWS SAM CLI), run the sam build command to build and package your application.Note: If you receive errors when running the AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.2. Run the sam deploy command to deploy your AWS SAM application package.For more information, see Building applications and Deploying serverless applications.Related informationGetting started with AWS SAMAWS Serverless Application Model (AWS SAM) (AWS SAM GitHub repo)Policy templates (AWS SAM GitHub repo)Managed policies and inline policiesValidating AWS SAM template filesFollow" | https://repost.aws/knowledge-center/lambda-sam-template-permissions |
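After deployment, you can inspect the execution role that was actually created. A sketch, assuming a stack named my-sam-app and the implicit role logical ID MyFunctionRole that AWS SAM derives from the function's logical ID (both names are assumptions here):

```bash
sam validate            # check the template before building
sam build && sam deploy --guided

# Look up the execution role created for MyFunction and list its policies.
ROLE_NAME=$(aws cloudformation describe-stack-resource \
  --stack-name my-sam-app --logical-resource-id MyFunctionRole \
  --query 'StackResourceDetail.PhysicalResourceId' --output text)
aws iam list-attached-role-policies --role-name "$ROLE_NAME"
aws iam list-role-policies --role-name "$ROLE_NAME"
```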
How do I migrate MySQL data to an Aurora MySQL DB cluster using Amazon S3? | I want to migrate data from MySQL to an Amazon Aurora MySQL DB cluster. How do I restore MySQL data to an Aurora MySQL DB cluster using Amazon Simple Storage Service (Amazon S3)? | "I want to migrate data from MySQL to an Amazon Aurora MySQL DB cluster. How do I restore MySQL data to an Aurora MySQL DB cluster using Amazon Simple Storage Service (Amazon S3)?ResolutionAmazon Aurora MySQL is compatible with MySQL 5.6 and MySQL 5.7 versions. To restore a MySQL innobackupex backup, first go to the Percona website and install Percona Xtrabackup (version 2.3 or later) to your Amazon Elastic Compute Cloud (Amazon EC2) instance:sudo yum install percona-xtrabackup-file_name_and_extensionNote: Replace file_name_and_extension with the appropriate file name and extension based on your Percona Xtrabackup package. See the following example:sudo yum install percona-xtrabackup-24-2.4.7-1.el7.x86_64.rpmAfter installing Percona Xtrabackup, back up the data that you want to migrate to Aurora MySQL. Then, upload the backup to Amazon S3 to perform the restoration. For more information see the Percona documentation for The Backup Cycle.Connect to an EC2 instance and back up your MySQL database Connect to the instance where the MySQL database is running by using SSH.2. Install Percona Xtrabackup:sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm -ysudo yum install perl-DBD-MySQL -ysudo yum install percona-xtrabackup -y3. Back up the database:xtrabackup --backup --user=<myuser> --password --stream=xbstream \--target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \- </on-premises/s3-restore/backup/backup>.xbstreamThis command creates a backup of your MySQL database that is split into multiple xbstream files.Note: Aurora doesn't restore everything from your source. After your database is restored successfully, you can re-create the following:User accountsFunctionsStored proceduresTime zone informationUpload your backup to an S3 bucket1. Create an S3 bucket.Note: Your bucket must be in the same Region as your EC2 instance and your new Aurora DB cluster.2. Choose the bucket that you created, and then choose Create Folder.3. Choose the folder, and then choose Upload.4. Upload the files, and then set the permissions.5. Set the properties, and then choose Upload.Note: When you upload a file to an Amazon S3 bucket, you can use server-side encryption to encrypt the data.Import your database from Amazon S3 to Aurora1. Open the Amazon Relational Database Service (Amazon RDS) console, and then choose Dashboard in the navigation pane.2. Choose Restore Aurora DB Cluster from S3.3. Enter the Source Engine Version that you noted earlier.4. From the S3 Backup Location dropdown menu, select the S3 bucket that you created. Enter your S3 Bucket Prefix.Note: Don't use leading or trailing slashes ("/") when entering the bucket name in the S3 Bucket Prefix field.5. Create an AWS Identity and Access Management (IAM) role to give Amazon RDS permission to access the S3 bucket, and then choose Next Step.6. Specify your DB details and choose Next Step.7. Configure your Advanced Settings and Database Options. Enter the IAM role that you created for the DB Cluster Identifier.8. Choose Launch DB Instance.9. 
After the cluster is available, choose View Your DB Instances to confirm that the Aurora DB instance was created successfully.Related informationMigrating data from MySQL by using an Amazon S3 bucketMigrating data from an external MySQL database to an Amazon Aurora MySQL DB clusterFollow" | https://repost.aws/knowledge-center/migrate-mysql-aurora-innobackup |
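The upload and restore steps can also be scripted. A hedged sketch with placeholder bucket, cluster, and IAM role names and an assumed source engine version:

```bash
# Upload the xbstream backup files to the bucket prefix.
aws s3 sync /on-premises/s3-restore/backup s3://my-backup-bucket/mysql-backup/

# Restore a new Aurora MySQL cluster from the uploaded backup.
aws rds restore-db-cluster-from-s3 \
  --db-cluster-identifier my-aurora-cluster \
  --engine aurora-mysql \
  --master-username admin \
  --master-user-password 'MySecretPassword1!' \
  --source-engine mysql \
  --source-engine-version 5.7.30 \
  --s3-bucket-name my-backup-bucket \
  --s3-prefix mysql-backup \
  --s3-ingestion-role-arn arn:aws:iam::123456789012:role/aurora-s3-restore-role
```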
How do I troubleshoot HTTP 502 errors when I make requests through a Classic Load Balancer? | I see HTTP 502 errors when my client makes requests to a website through a Classic Load Balancer (CLB). How can I troubleshoot this? | "I see HTTP 502 errors when my client makes requests to a website through a Classic Load Balancer (CLB). How can I troubleshoot this?Short descriptionHTTP 502 (bad gateway) errors can occur for one of the following reasons:The web server or associated backend application servers running on EC2 instances return a message that can't be parsed by your Classic Load Balancer.The web server or associated backend application servers return a 502 error message of their own.To find the source of these 502 errors:Turn on Elastic Load Balancing (ELB) access logs on your Classic Load Balancer to see the backend and ELB response code for each request. An access log entry contains two fields: an elb_status_code and a backend_status_code. Use these codes to determine the source of the 502 error.View the load balancer Amazon CloudWatch metrics to see backend-generated 502 errors, which appear under the HTTPCode_Backend_5XX metric. ELB-generated 502 errors appear under the HTTPCode_ELB_5XX metric.If the backend response is the source of the ELB 502 error, the issue might be caused by:A response containing more than one CRLF between each header.A response containing a Content-Length header which contains a non-integer.A response that has more bytes in the body than the Content-Length header value.If the 502 error is generated by your backend servers, contact your application's owner. If the 502 error is generated by the Classic Load Balancer, the HTTP response from the backend is malformed. Follow these steps to troubleshoot ELB-generated 502 errors:Resolution1. Check if the response body returned by the backend application complies with HTTP specifications. Refer to the following documentation from RFC Editor:RFC 7230 - HTTP/1.1: Message Syntax and RoutingRFC 7231 - HTTP/1.1: Semantics and ContentRFC 7232 - HTTP/1.1: Conditional RequestsRFC 7233 - HTTP/1.1: Range RequestsRFC 7234 - HTTP/1.1: CachingRFC 7235 - HTTP/1.1: Authentication2. Confirm that the response header has the correct syntax: a key and the value, such as Content-Type:text. Be sure that Content-Length or transfer encoding is not missed in the HTTP response header. For more information about web server HTTP header fields, see the Internet Assigned Numbers Authority documentation at List of HTTP header fields. Examine the HTTP responses returned by running a command similar to the following:curl -vko /dev/null server_instance_IP3. Check the ELB access log for duplicate HTTP 502 errors. 502 errors for both elb_status_code and backend_status_code indicate that there's a problem with one or more of the web server instances. Identify which web server instances are exhibiting the problem, then check the web server logs of the backend web server instances. 
See the following log locations for some common web servers and operating systems:Apache logsThe web server logs for CentOS, RHEL, Fedora, and Amazon Linux are located in the /var/log/httpd/ directory.The web server logs for Debian and Ubuntu Linux are located in the /var/log/apache2 and /var/log/lighttpd/ directories.NGINX logsThe NGINX access log location is defined in the nginx.conf file: access_log /path/to/access.logThe default location is /var/log/nginx/access.logIIS logsThe web server logs for Windows IIS 7, IIS 7.5 and IIS 8.0 are stored in the inetpub\logs\Logfiles directory. For more information about the Internet Information Services (IIS) logs, see Microsoft's documentation at The HTTP status code in IIS 7.0 and later versions. If you confirmed that your 502 errors are ELB-generated and that your backend's response conforms to RFC conventions, contact AWS Support.Related informationTroubleshoot a Classic Load Balancer: Response code metricsTutorial: Create a Classic Load BalancerIdentity and access management for Elastic Load BalancingConfigure health checks for your Classic Load BalancerElastic Load Balancing Connection timeout managementFollow" | https://repost.aws/knowledge-center/load-balancer-http-502-errors |
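To confirm whether the 502s are ELB-generated or backend-generated, you can compare the two CloudWatch metrics from the CLI. A sketch with a placeholder load balancer name and time window:

```bash
for METRIC in HTTPCode_ELB_5XX HTTPCode_Backend_5XX; do
  aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name "$METRIC" \
    --dimensions Name=LoadBalancerName,Value=my-classic-lb \
    --statistics Sum \
    --period 300 \
    --start-time 2023-05-01T00:00:00Z \
    --end-time 2023-05-01T06:00:00Z
done
```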
How do I troubleshoot slow logs in Amazon OpenSearch Service? | "I enabled search slow logs for my Amazon OpenSearch Service domain. However, I receive an error, or the slow logs don't appear in my Amazon CloudWatch log group. How do I resolve this?" | "I enabled search slow logs for my Amazon OpenSearch Service domain. However, I receive an error, or the slow logs don't appear in my Amazon CloudWatch log group. How do I resolve this?ResolutionI receive an error when I try to set up slow logsIf your AWS account exceeds ten resource policies for your Region, you receive the following error message in Amazon CloudWatch Logs:"Unable to create the Resource Access Policy - You have reached the maximum limit for number of Resource Access Policies for CloudWatch Logs. Please select an existing policy and edit it or delete an older policy and try again."To resolve this error message, create a resource policy that includes multiple log groups.For example:{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": "es.amazonaws.com" }, "Action": [ "logs:PutLogEvents", "logs:CreateLogStream" ], "Resource": [ "ARN-Log-Group-1", "ARN-Log-Group-2", "ARN-Log-Group-3", "ARN-Log-Group-4" ] }]}Note: The AWS Identity and Access Management (IAM) policy limit can't be increased.I don't see any slow logs being deliveredIf you don't see your slow logs being delivered to CloudWatch, check your IAM policy or OpenSearch Service thresholds.Because OpenSearch Service requires permission to write to CloudWatch Logs, you must have the proper IAM policy to log your queries. To update your IAM policy, navigate to Search Slow Logs, and then choose Select Setup.For example:{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": "es.amazonaws.com" }, "Action": [ "logs:PutLogEvents", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:us-east-1:588671893395:log-group:/aws/aes/domains/myes/search-logs:*" }]}Also, make sure to set an appropriate timing threshold for your domain. For example, if all your requests complete before the set threshold, then your logs won't be delivered to your log group.You can also set individual INDEX level thresholds for each debug level (TRACE, DEBUG, INFO, and WARN).For example, you can set the threshold for WARN debug levels to ten seconds for the YOURINDEXNAME index in OpenSearch Dashboards:PUT /YOURINDEXNAME/_settings{"index.search.slowlog.threshold.query.warn": "10s"}Note: You can set TRACE to "0" milliseconds to log all queries for your domain. However, because logging all queries is resource-intensive, your domain performance might be impacted.Then, check your threshold using the following command:GET/YOURINDEXNAME/_settings?prettyOpenSearch Service logs any queries that exceed the defined threshold.Best practicesAvoid making multiple configuration changes (such as enabling or disabling logs that are published to CloudWatch) at the same time. Too many configuration changes at one time trigger multiple blue/green deployments. Multiple blue/green deployments can cause the OpenSearch Service domain to get stuck in a processing state. 
For more information about blue/green deployment, see Making configuration changes in OpenSearch Service.Set your threshold for both the query phase and fetch phase to identify slow search queries.Test with a low threshold value, and slowly increase the threshold to log only the queries that are affecting performance or requiring optimization.Choose the appropriate number of shards for your cluster to optimize cluster performance. For more information about shard maintenance, see Amazon OpenSearch Service best practices.For slow logs, enable logging at the TRACE, DEBUG, INFO, and WARN debug levels. Because each debug level logs different categories of information, it's a best practice to enable logging according to the request status.Related informationAnalyzing Amazon OpenSearch Service slow logs using CloudWatch Logs streaming and OpenSearch DashboardsHow do I troubleshoot CloudWatch Logs so that it streams to my Amazon OpenSearch Service domain?Viewing Amazon OpenSearch Service error logsFollow" | https://repost.aws/knowledge-center/opensearch-troubleshoot-slow-logs |
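Two of the fixes above can be scripted: consolidating log groups into a single CloudWatch Logs resource policy, and setting an index-level threshold over the domain's REST API. A sketch with placeholder policy name, log group ARNs, domain endpoint, and credentials:

```bash
# One resource policy that covers several slow-log log groups.
aws logs put-resource-policy \
  --policy-name AES-slow-logs \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "es.amazonaws.com"},
      "Action": ["logs:PutLogEvents", "logs:CreateLogStream"],
      "Resource": ["ARN-Log-Group-1", "ARN-Log-Group-2"]
    }]
  }'

# Set a WARN threshold of 10 seconds on one index (same call as in Dashboards).
curl -XPUT -u 'master-user:master-password' \
  'https://my-domain-endpoint/YOURINDEXNAME/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.search.slowlog.threshold.query.warn": "10s"}'
```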
How do I resolve the error "The association iip-assoc-xxxxxxxx is not the active association" on my EC2 instance? | I'm receiving the following error message on my Amazon Elastic Compute Cloud (Amazon EC2) instance while updating the instance profile:"The association iip-assoc-xxxxxxxx is not the active association"How do I resolve this error? | "I'm receiving the following error message on my Amazon Elastic Compute Cloud (Amazon EC2) instance while updating the instance profile:"The association iip-assoc-xxxxxxxx is not the active association"How do I resolve this error?Short descriptionThis error usually occurs when you attempt to update the instance profile while a previous disassociation is still unfulfilled by the API. You can use the AWS Command Line Interface (AWS CLI) to identify if an unfulfilled disassociation is causing the error, and to correct the issue.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Resolution1. Run the following command to identify the instance profile associations for the instance:aws ec2 describe-iam-instance-profile-associations --filters "Name=instance-id,Values=i-xxxxxxxxxxxxxxxxx"The command output has multiple associations, each with a unique association ID (AssociationID) and status (State). Some of the associations are in the associating state and some are in the disassociating state, as shown in the following example output:{"IamInstanceProfileAssociations": [ { "AssociationId": "iip-assoc-xxxxxxxxxxxxxxxx", "InstanceId": "i-xxxxxxxxxxxxxxxx", "IamInstanceProfile": { "Arn": "arn:aws:iam::xxxxxxxxxx:instance-profile/xxxxxxx", "Id": "xxxxxxxxxxxxxxxxxx" }, "State": "disassociating" }, { "AssociationId": "iip-assoc-xxxxxxxxxxxxxxxx", "InstanceId": "i-xxxxxxxxxxxxxxxx", "IamInstanceProfile": { "Arn": "arn:aws:iam::xxxxxxxxxxxx:instance-profile/xxxxxxxxx", "Id": "xxxxxxxxxxxxxxxx" }, "State": "associating" } ]}2. Run the following command to disassociate all the association IDs, including those in associating and disassociating states. Replace iip-assoc-xxxxxxxxxxxxxxxxxx with the appropriate association-id.aws ec2 disassociate-iam-instance-profile --association-id iip-assoc-xxxxxxxxxxxxxxxxxx3. After disassociating all association IDs, try updating the instance profile again.Note: If the error persists after following the resolution steps, stop and start the instance. Then, run the disassociate-iam-instance-profile command again. Be aware that data stored in instance store volumes is lost when you stop the instance. Before stopping the instance, review the list of the effects of stopping an instance.Related informationUsing instance profilesHow do I attach or replace an instance profile on an Amazon EC2 instance?Follow" | https://repost.aws/knowledge-center/ec2-resolve-active-assocation-error |
How can I resolve a custom domain name from resources in my VPC without modifying the DHCP options set? | I want to resolve a custom domain name from resources in my Amazon Virtual Private Cloud (Amazon VPC) without modifying the DHCP options set. How can I do this? | "I want to resolve a custom domain name from resources in my Amazon Virtual Private Cloud (Amazon VPC) without modifying the DHCP options set. How can I do this?ResolutionTo configure a private hosted zone to specify how Amazon Route 53 responds to DNS queries for domains within your VPCs, do the following:Create a private hosted zone for your domain name and attach it to the required VPC.Create records in the private hosted zone to specify how Route 53 responds to DNS queries from resources in the attached VPC.(Optional) Associate a Route 53 private hosted zone with a VPC on a different AWS account.Follow" | https://repost.aws/knowledge-center/vpc-resolve-domain-without-dhcp-changes |
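The console steps above can also be done with the AWS CLI. The following is a minimal sketch; the domain name, VPC ID, Region, hosted zone ID, and record values are placeholders only:
# Create a private hosted zone attached to the VPC
aws route53 create-hosted-zone \
  --name internal.example.com \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
  --caller-reference my-private-zone-2024
# Add a record that resolves only from resources inside the attached VPC
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"app.internal.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"10.0.1.25"}]}}]}'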
"Why is my Windows WorkSpace stuck in the starting, rebooting, or stopping status?" | "My Amazon WorkSpaces Windows WorkSpace is stuck in the starting, rebooting, or stopping status." | "My Amazon WorkSpaces Windows WorkSpace is stuck in the starting, rebooting, or stopping status.Short descriptionThe WorkSpace status depends on health checks of the WorkSpace's components, such as PCOIP Standard Agent, SkylightWorkSpacesConfigService, or Web Access agent. Sometimes the WorkSpace's operating system (OS) takes a long time to boot. Or, the WorkSpace's components fail to report the actual status of the WorkSpace. These issues might cause the status to become stuck in the starting, rebooting, or stopping state.ResolutionIf your Windows WorkSpace is stuck in the starting, rebooting, or stopping status, then review the following common causes and troubleshooting tips:Pending updatesIf the Workspaces console or client application sends a reboot command to the WorkSpace, then the WorkSpace might be busy applying updates. A reboot action also initiates any pending WorkSpaces component updates.Windows updates or other operating system (OS) activity can delay the boot process, and WorkSpaces health checks might take longer to reflect the actual status. Wait a few minutes, and then try again to connect to the WorkSpace.WorkSpaces services in stopped state or blocked by antivirus softwareTo verify that the required WorkSpaces services are started at the OS level, follow these steps:1. Connect to the WorkSpace using Remote Desktop Connection (RDP).2. Open Administrative Tools.3. Open the context (right-click) menu for Services, and then choose Open. Check if the Status is Stopped for any of the following services:STXHD Hosted Application ServicePCoIP Standard Agent for WindowsSkyLightWorkspaceConfigService4. If the status is Stopped, then open the context (right-click) menu for the service, and choose Start. The status now Running.5. Verify that the Startup Type for all three services is set to Automatic. To change the Startup Type, open the context (right-click) menu for the service, and then choose Properties. For Startup type, choose Automatic, and then choose Ok.If the status of the three services is Running, then antivirus software might be blocking the services. To fix this, set up an allow list for the locations where the service components are installed. For more information, see Required configuration and service components for WorkSpaces.Network adapters turned onVerify that both network adapters on the WorkSpace are turned on:1. Connect to the WorkSpace using RDP.2. Open the Start menu, and then navigate to Control Panel > Network and Internet > Network and Sharing Center.3. Verify that there are two networks listed in the View your active networks section. If there's only one network listed, then choose Change adapter settings. Confirm that Ethernet 3 is turned on. If necessary, open the context (right-click) menu for Ethernet 3, and then choose Enable.Restart the WorkSpaceIf you still have issues, then restart the WorkSpace from the WorkSpaces console. Wait a few minutes for the WorkSpace to come online, and then try again to connect to the WorkSpace. The status of the WorkSpace changes within 90 minutes.Related informationWhy can't I create a new WorkSpace?Follow" | https://repost.aws/knowledge-center/workspaces-stuck-status |
Why is CloudFront serving outdated content from Amazon S3? | "I'm using Amazon CloudFront to serve objects stored in Amazon Simple Storage Service (Amazon S3). I updated my objects in Amazon S3, but my CloudFront distribution is still serving the previous versions of those files." | "I'm using Amazon CloudFront to serve objects stored in Amazon Simple Storage Service (Amazon S3). I updated my objects in Amazon S3, but my CloudFront distribution is still serving the previous versions of those files.Short descriptionBy default, CloudFront caches a response from Amazon S3 for 24 hours (Default TTL of 86,400 seconds). If your request lands at an edge location that served the Amazon S3 response within 24 hours, then CloudFront uses the cached response. This happens even if you updated the content in Amazon S3.Use one of the following ways to push the updated Amazon S3 content from CloudFront:Invalidate files.Update existing content with a CloudFront distribution.ResolutionInvalidate the Amazon S3 objectsYou can invalidate an Amazon S3 object to remove it from the CloudFront distribution's cache. After the object is removed from the cache, the next request retrieves the object directly from Amazon S3.Before you run this process, consider the following:You can't invalidate specific versions of an object that uses cookies or headers to vary the response. CloudFront invalidates all versions of the object in this case.Each AWS account is allowed 1,000 free invalidation paths per month. For more information, see Amazon CloudFront pricing.When you create an invalidation, be sure that the object paths meet the following requirements:The object paths must be for individual objects or the paths must end with the wildcard character (*). For example, you can't run an invalidation on a path similar to /images/*.jpeg because the path isn't for an individual object, and it doesn't end in a wildcard.The specified path must exactly match the capitalization of the object's path. Invalidation requests are case-sensitive.To remove specific versions of an object based on a query string, include QueryString in the invalidation path.Object invalidations typically take from 10 to 100 seconds to complete. You can check the status of an invalidation by viewing your distribution from the CloudFront console.Use object versioningIf you update content frequently, it's a best practice that you use object versioning to clear the CloudFront distribution's cache. For frequent cache refreshes, using object versioning might cost less than using invalidations.Use one of these ways to add versioning to your objects:Store new versions of the object at the origin with the version number in the key name. For example, if you update /image_v1.png, then you store a new version of the object as /image_v2.png.Update the object at the origin but cache based on a query string with the object version. For example, the query string updates from /image.png?ver=1 to /image.png?ver=2. You can use a cache policy to specify which query strings are included in the cache key and origin requests.Note: You can still request the previous version (/image.png?ver=1) while it's available in the CloudFront cache.Consider the following advantages and disadvantages for each method of object versioning:Storing new versions of the object at the origin (Amazon S3) allows you to revert changes to previous versions that are still available under the previous names. 
However, storing multiple versions of an object can increase your storage costs.Updating the object at the origin but caching based on the query string can reduce your storage costs. However, to prepare for any rollbacks, it's a best practice to keep previous object versions offline.Note: Specifying versioned file names or directory names is not related to Amazon S3 Object Versioning. Using the Amazon S3 Versioning feature does not update the content automatically. You must specify file paths carefully, as you can't cancel an invalidation request after you have started one.Related informationManaging how long content stays in the cache (expiration)Query string forwarding and cachingFollow" | https://repost.aws/knowledge-center/cloudfront-serving-outdated-content-s3 |
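As an illustration of the invalidation option described above, the following AWS CLI sketch removes one object and one wildcard path from the cache and then checks progress; the distribution ID, invalidation ID, and paths are placeholders:
aws cloudfront create-invalidation \
  --distribution-id E1ABCD23EXAMPLE \
  --paths "/images/logo.png" "/css/*"
# Check whether the invalidation is InProgress or Completed
aws cloudfront get-invalidation --distribution-id E1ABCD23EXAMPLE --id I2J3EXAMPLEEXAMPLE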
Why are the metrics on the DynamoDB console different from the CloudWatch metrics? | The graphs on the Metrics tab in the Amazon DynamoDB console are different from the graphs in the Amazon CloudWatch console. | "The graphs on the Metrics tab in the Amazon DynamoDB console are different from the graphs in the Amazon CloudWatch console.ResolutionThe metrics in the CloudWatch console are raw and provide more statistics options than the metrics in the DynamoDB console. The metrics in the DynamoDB console are average values over one-minute intervals. For example, ConsumedWriteCapacityUnits is the sum of the consumed units over one minute, divided by the number of seconds in a minute (60).To set the graphs to look the same in both CloudWatch and DynamoDB, be sure that the period and the time range are the same:Open the DynamoDB console.In the navigation pane, choose Tables.Choose your table, and then choose the Metrics tab.Choose View all CloudWatch metrics to open the CloudWatch console.Choose the category that the metric is in, such as Table metrics.Check the boxes next to the table name for the metrics that you want to see.Choose the Graphed metrics tab.In the Statistic dropdown list, choose Sum.In the Period dropdown list, choose 1 Minute.If there are provisioned and consumed metrics on the CloudWatch graph, then use the arrows in the Y Axis column to move the provisioned values to the right Y axis and the consumed values to the left Y axis.Use a math expression to divide the metrics by 60 (for example, m2/60).Missing metricsIf CloudWatch doesn't list a metric for DynamoDB, then it's probably because DynamoDB doesn't have recent data for that metric. CloudWatch lists only the metrics that were active in the past two weeks. This prevents you from seeing too many older metrics when you call ListMetrics.Related informationDynamoDB metrics and dimensionsUsing metric mathMonitoring with Amazon CloudWatchGraphing a metricFollow" | https://repost.aws/knowledge-center/dynamodb-cloudwatch-metrics-differences |
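To reproduce the DynamoDB console's per-second average outside either console, you can pull the one-minute Sum from CloudWatch and divide it by 60. A minimal CLI sketch, with the table name and time range as placeholders:
aws cloudwatch get-metric-statistics \
  --namespace AWS/DynamoDB \
  --metric-name ConsumedWriteCapacityUnits \
  --dimensions Name=TableName,Value=MyTable \
  --start-time 2024-01-01T00:00:00Z --end-time 2024-01-01T01:00:00Z \
  --period 60 --statistics Sum
# Divide each returned Sum by 60 to approximate the per-second average that the DynamoDB console graphs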
Why do I get zero records when I query my Amazon Athena table? | "I ran a CREATE TABLE statement in Amazon Athena with expected columns and their data types. When I run the query SELECT * FROM table-name, the output is "Zero records returned."" | "I ran a CREATE TABLE statement in Amazon Athena with expected columns and their data types. When I run the query SELECT * FROM table-name, the output is "Zero records returned."ResolutionHere are some common reasons why the query might return zero records.File selected in crawler settingsIf you're using a crawler, be sure that the crawler is pointing to the Amazon Simple Storage Service (Amazon S3) bucket rather than to a file.Incorrect LOCATION pathVerify the Amazon S3 LOCATION path for the input data. If the input LOCATION path is incorrect, then Athena returns zero records.Double slash in LOCATION pathAthena doesn't support table location paths that include a double slash (//). For example, the following LOCATION path returns empty results:s3://doc-example-bucket/myprefix//input//To resolve this issue, copy the files to a location that doesn't have double slashes. Here is an example AWS Command Line Interface (AWS CLI) command to do so:aws s3 cp s3://doc-example-bucket/myprefix//input// s3://doc-example-bucket/myprefix/input/ --recursiveNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Data for multiple tables stored in the same S3 prefixGlue crawlers create separate tables for data that's stored in the same S3 prefix. However, when you query those tables in Athena, you get zero records.For example, your Athena query returns zero records if your table location is similar to the following:s3://doc-example-bucket/table1.csvs3://doc-example-bucket/table2.csvTo resolve this issue, create individual S3 prefixes for each table similar to the following:s3://doc-example-bucket/table1/table1.csvs3://doc-example-bucket/table2/table2.csvThen, run a query similar to the following to update the location for your table table1:ALTER TABLE table1 SET LOCATION 's3://doc-example-bucket/table1';Partitions not yet loadedAthena creates metadata only when a table is created. The data is parsed only when you run the query. If your table has defined partitions, the partitions might not yet be loaded into the AWS Glue Data Catalog or the internal Athena data catalog. Use MSCK REPAIR TABLE or ALTER TABLE ADD PARTITION to load the partition information into the catalog.MSCK REPAIR TABLE: If the partitions are stored in a format that Athena supports, run MSCK REPAIR TABLE to load a partition's metadata into the catalog. 
For example, if you have a table that is partitioned on Year, then Athena expects to find the data at Amazon S3 paths similar to the following:s3://doc-example-bucket/athena/inputdata/year=2020/data.csvs3://doc-example-bucket/athena/inputdata/year=2019/data.csvs3://doc-example-bucket/athena/inputdata/year=2018/data.csvIf the data is located at the Amazon S3 paths that Athena expects, then repair the table by running a command similar to the following:CREATE EXTERNAL TABLE Employee ( Id INT, Name STRING, Address STRING) PARTITIONED BY (year INT)ROW FORMAT DELIMITED FIELDS TERMINATED BY ','LOCATION 's3://doc-example-bucket/athena/inputdata/';After the table is created, load the partition information:MSCK REPAIR TABLE Employee;After the data is loaded, run the following query again:SELECT * FROM Employee;ALTER TABLE ADD PARTITION: If the partitions aren't stored in a format that Athena supports, or are located at different Amazon S3 paths, run ALTER TABLE ADD PARTITION for each partition. For example, suppose that your data is located at the following Amazon S3 paths:s3://doc-example-bucket/athena/inputdata/2020/data.csvs3://doc-example-bucket/athena/inputdata/2019/data.csvs3://doc-example-bucket/athena/inputdata/2018/data.csvGiven these paths, run a command similar to the following:ALTER TABLE Employee ADD PARTITION (year=2020) LOCATION 's3://doc-example-bucket/athena/inputdata/2020/' PARTITION (year=2019) LOCATION 's3://doc-example-bucket/athena/inputdata/2019/' PARTITION (year=2018) LOCATION 's3://doc-example-bucket/athena/inputdata/2018/'After the data is loaded, run the following query again:SELECT * FROM Employee;Hive hidden filesVerify that your file names don't start with an underscore (_) or a dot (.).Example:s3://doc-example-bucket/athena/inputdata/_file1s3://doc-example-bucket/athena/inputdata/.file2If the files in your S3 path have names that start with an underscore or a dot, then Athena considers these files as placeholders. Athena ignores these files when processing a query. For more information, see Athena cannot read hidden files. If all the files in your S3 path have names that start with an underscore or a dot, then you get zero records.Note: If your S3 path includes placeholders along with files whose names start with different characters, then Athena ignores only the placeholders and queries the other files. Therefore, you might get one or more records.Related informationCreating tables in AthenaUsing AWS Glue crawlersFollow" | https://repost.aws/knowledge-center/athena-empty-results |
How can I resolve the AWS STS AssumeRoleWithWebIdentity API call error "InvalidIdentityToken"? | The AWS Security Token Service (AWS STS) API call AssumeRoleWithWebIdentity failed with an "InvalidIdentityToken" error. | "The AWS Security Token Service (AWS STS) API call AssumeRoleWithWebIdentity failed with an "InvalidIdentityToken" error.Short descriptionIf your AssumeRoleWithWebIdentity API call fails, then you might receive an error that's similar to the following message:"An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation. Couldn't retrieve verification key from your identity provider."This error might occur in the following scenarios:The .well_known URL and jwks_uri of the identity provider (IdP) are inaccessible from the public internet.A custom firewall is blocking the requests.There's latency of more than 5 seconds in API requests from the IdP to reach the AWS STS endpoint.STS is making too many requests to your .well_known URL or the jwks_uri of the IdP.Note: Because this issue fails on the client side, AWS CloudTrail event history doesn't log this error.ResolutionVerify public access for .well_known and jwks_uriVerify that the .well_known URL and jwks_uri of the IdP are publicly accessible. This can be checked using your browser, Windows command, or Linux command. To do this, complete one of the following actions:To check access, navigate to the following links in your browser:https://BASE_SERVER_URL/.well-known/openid-configurationhttps://BASE_SERVER_URL/.well-known/jwks.json-or-Run the following commands:Windows:wget https://BASE_SERVER_URL/.well-known/openid-configurationwget https://BASE_SERVER_URL/.well-known/jwks.jsonLinux:curl https://BASE_SERVER_URL/.well-known/openid-configurationcurl https://BASE_SERVER_URL/.well-known/jwks.jsonNote: To confirm if you can access the links, check for the 200 status code in the request response.Check firewall settingsIf the .well_known URL and jwks_uri of the IdP aren't accessible, then check the firewall settings. Make sure that the domains aren't on a deny list.Depending on the current configuration of the firewall, the domains might need to be added to an allow list.If the firewall settings aren't accessible, then use the browser with a device from a different network, such as a phone. To check access from the browser, use the instructions in step 1. If the web request succeeds, then the firewall is blocking the request.If the server that's making the AssumeRoleWithWebIdentity API call is an Amazon Elastic Compute Cloud (Amazon EC2) instance, then check the configuration settings. For instructions, see Why can't I connect to a website that is hosted on my EC2 instance?Check operation latencyCheck the latency for the total operation. This includes the following attributes:Request/Response time from STSRequest/Response time from IdPMinimize STS latencyUse AWS Regional endpoints instead of global endpoints for the STS service. This verifies that the requests are routed to the geographically closest server to minimize latency. For more information, see Writing code to use AWS STS Regions.Note: For AWS SDKs, the Region parameter routes the request's destination endpoint to where the call is made within the sts_regional_endpoint configuration.Evaluate IdP latencyThe IdP makes requests to the STS endpoint. 
To check if the request to the STS endpoint takes too long, analyze the IdP's outgoing packets within the IdP logs.Note: If the request from the IdP to the STS endpoint takes more than 5 seconds, then the request might time out and fail. You can contact your identity provider to request an increase for geographical availability to reduce latency for this API call.(Optional) Use exponential backoff and increase retriesThe AssumeRoleWithWebIdentity API depends on retrieving information from the identity provider (IdP). To avoid throttling errors, most IdPs have API limits, and API calls might not get the required keys back from the IdP. To help successfully assume a role if the API has intermittent issues reaching your IdP, take the following steps:Use exponential backoff.Increase your retries, and set a maximum number of retries. Also, implement a maximum delay interval. For more information, see Error retries and exponential backoff in AWS.Reduce STS requests to .well_known and jwks_uriIf your JSON Web Key Set (JWKS) sets either Pragma: no-cache or Cache-Control: no-cache response headers, then STS doesn't cache your JWKS. For keys that are referenced in an ID_TOKEN but aren't in the cache, STS performs a callback. In this case, STS might make too many requests to your .well_known URL and jwks_uri.Therefore, to reduce callbacks from STS, verify that your JWKS doesn't set either of these response headers. This allows STS to cache your JWKS.Related informationWelcome to the AWS Security Token Service API ReferenceHow can I resolve API throttling or "Rate exceeded" errors for IAM and AWS STS?Follow" | https://repost.aws/knowledge-center/iam-sts-invalididentitytoken |
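For the retry and Regional-endpoint guidance above, the AWS CLI exposes retry mode and maximum-attempts settings. A minimal shell sketch, assuming a placeholder role ARN and token file:
export AWS_RETRY_MODE=standard   # the standard and adaptive modes include exponential backoff
export AWS_MAX_ATTEMPTS=5
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::123456789012:role/example-role \
  --role-session-name example-session \
  --web-identity-token file://web-identity-token.jwt \
  --region us-east-1               # with AWS CLI v2 this targets the Regional STS endpoint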
How can I troubleshoot Lambda function invoke issues with Amazon EFS integration? | I invoked an AWS Lambda function with Amazon Elastic File System (Amazon EFS) integration and received an error message. | "I invoked an AWS Lambda function with Amazon Elastic File System (Amazon EFS) integration and received an error message.Short descriptionThe following are prerequisites for mounting Amazon EFS access points with Lambda:The Lambda function's execution role must have the following elasticfilesystem permissions:elasticfilesystem:ClientMountelasticfilesystem:ClientWrite (not required for read-only connections)Your AWS Identity and Access Management (IAM) user must have the following permissions:elasticfilesystem:DescribeMountTargetsThe EFS File System security group must allow NFS (port 2049) inbound traffic from the Lambda security group or IP address range.The Lambda security group must allow NFS (port 2049) outbound traffic to the EFS security group or IP address range.The Lambda function and Amazon EFS access points must be in the same AWS Region and Availability Zone.For more information, see How do I create the correct EFS access point configuration to mount my file system using a Lambda function?ResolutionFollow these troubleshooting steps for the error message with your Lambda function.EFSMountFailureException:The Lambda function couldn't mount the configured EFS file system due to a permission or configuration issue. Check the Lambda function's permissions. Then, confirm that the EFS file system and access point exist and are ready for use. For more information, see Function could not mount the EFS file system.EFSMountConnectivityException:The Lambda function couldn't make a network connection to the configured EFS file system with the NFS protocol (TCP port 2049). Check the security group and routing configuration for the Amazon Virtual Private Cloud (Amazon VPC) subnets. For more information, see Function could not connect to the EFS file system.EFSMountTimeoutException:The Lambda function was able to make a network connection to the configured EFS file system, but the mount operation timed out. Retry invoking the Lambda function. If the Lambda function times out again, then limit the function's reserved concurrency to reduce the load volume on the EFS file system. For more information, see Function could not mount the EFS file system due to timeout.PermissionError: Permission denied: '/mnt/xyz/abc':Lambda doesn't have access to the specified Amazon EFS access point. To troubleshoot Amazon EFS access points, see What are common EFS access point configurations?For more information, see Troubleshoot invocation issues in Lambda.Related informationWorking with Amazon EFS access pointsFollow" | https://repost.aws/knowledge-center/lambda-invoke-efs
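To confirm which access point a function mounts, and to correct it if needed, a CLI sketch such as the following can help; the function name, access point ARN, and mount path are placeholders:
# Show the function's current EFS configuration
aws lambda get-function-configuration \
  --function-name my-function \
  --query 'FileSystemConfigs'
# Point the function at the intended access point and local mount path
aws lambda update-function-configuration \
  --function-name my-function \
  --file-system-configs Arn=arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0,LocalMountPath=/mnt/efs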
How do I synchronize SQL Server Agent jobs between the primary and secondary hosts in my RDS for SQL Server Multi-AZ instance? | I want to synchronize SQL Server Agent jobs between the primary and secondary hosts in my Amazon Relational Database (Amazon RDS) for Microsoft SQL Server Multi-AZ instance. How do I do this? | "I want to synchronize SQL Server Agent jobs between the primary and secondary hosts in my Amazon Relational Database (Amazon RDS) for Microsoft SQL Server Multi-AZ instance. How do I do this?Short descriptionAmazon RDS for SQL Server uses always on/mirroring for the Multi-AZ setup behind the scenes. SQL Server Agent jobs are stored in the msdb system database. This system database isn't replicated as part of your Multi-AZ deployment. So, the SQL Server Agent jobs aren't synchronized automatically. You must recreate the jobs on the new primary after failover. However, the jobs are present on the old primary where they were originally created. If you fail back the instance to the previous primary (where the jobs were initially created) you still see the jobs. To create the jobs in Multi-AZ, create the jobs in the primary (active) instance first. Then, fail over the RDS instance and create the same jobs on the new primary instance.To avoid manually creating jobs on the new primary, turn on SQL Agent Job replication. When job replication is turned on in your Multi-AZ environment, the SQL Server Agent jobs are copied from the primary host to the secondary host automatically. You don't have to create the jobs manually on the new primary replica because they synchronized through the agent replication feature. The jobs are available in both the replicas.For more information, see Multi-AZ deployments for Amazon RDS for Microsoft SQL Server.ResolutionTurn on the SQL agent replication featureRun the following procedure with the admin account on the primary instance to turn on SQL Server Agent job replication.Important note: Make sure that you run this procedure on the instance that has all the agent jobs available. If the instance without the available agent jobs becomes the primary and you turn on this feature, then any jobs on the secondary instance are deleted. Also note that all existing and newly created jobs are replicated as part of this feature.EXECUTE msdb.dbo.rds_set_system_database_sync_objects@object_types = 'SQLAgentJob';Validate that the SQL agent replication feature is turned onUse the following query to verify that the replication feature is turned on:SELECT * from msdb.dbo.rds_fn_get_system_database_sync_objects();The preceding query returns SQLagentjob for object_class if the replication feature is turned on. 
If the query returns nothing, then the feature isn't turned on.Verify when modified and new jobs last synchronized with the secondaryUse the following query to check the last_sync_time of the replication:SELECT * from msdb.dbo.rds_fn_server_object_last_sync_time();In the query results, if the sync time is past the job updated or creation time, then the job is synced with the secondary.Note: If you don't know the time of the job creation or update, run the following query to retrieve the timestamp, and then run the preceding query:select name as 'JobName',JobStatus = CASE When enabled =1 Then 'Active' Else 'Inactive' END,date_created As 'JobCreatedOn' ,date_modified as 'LastModified'from msdb..sysjobsNote: It takes few minutes for the jobs to synchronize between the replicas.If you want to perform DB failover to confirm that the jobs are replicated, wait for the last_sync_time to update before proceeding with the Multi-AZ failover.Status of the jobs on secondaryA SQL Server Agent XP is in the Disabled state with or without using the replication feature on the secondary replica. So, the jobs don't run on the secondary server.Supported and unsupported job categories for agent job replicationJobs in the following categories are replicated:[Uncategorized (Local)][Uncategorized (Multi-Server)][Uncategorized]Data CollectorDatabase Engine Tuning AdvisorDatabase MaintenanceFull-TextNote: Only jobs that use step type as T-SQL are replicated.The following are categories that don't support replication:Jobs with step types such as SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), replication, and PowerShell.Jobs that use Database Mail and server-level objects.Turn off SQL Server Agent job replicationTo turn off SQL Server Agent job replication, run the following command:EXECUTE msdb.dbo.rds_set_system_database_sync_objects @object_types = '';After you turn off replication, modifications to existing and newly created jobs no longer sync with the other replica.Follow" | https://repost.aws/knowledge-center/rds-sql-server-sync-sql-agent-jobs |
Why is the traffic for my web content getting routed to the wrong CloudFront edge location? | "I'm using Amazon CloudFront to distribute my web content. However, the traffic to my website is routed to the wrong edge location. How can I fix this?" | "I'm using Amazon CloudFront to distribute my web content. However, the traffic to my website is routed to the wrong edge location. How can I fix this?Short descriptionCloudFront routes traffic based on the distribution's price class, associated geolocation databases, and EDNS0-Client-Subnet support. Depending on the combination of these factors, your website's viewers might be routed to an unexpected edge location. This can increase the overall latency for retrieving an object from a CloudFront edge location.To troubleshoot routing to an unexpected edge location, check the following:The price class supports the edge location that you expect.The DNS resolver supports Anycast routing.The DNS resolver supports EDNS0-Client-Subnet.ResolutionThe price class supports the edge location that you expectCheck the edge locations that are included in the price class of your CloudFront distribution. You can update the price class of your distribution if you want to include other edge locations.The DNS resolver supports Anycast routingIf the DNS resolver supports Anycast routing, then there are multiple edge locations that the DNS resolver uses. This means that a requester's edge location is based on optimal latency, which might result in an unexpected location for the resolver's IP address.To check if the DNS resolver supports Anycast, run one of these commands multiple times:Note: In these example commands, be sure to replace example.com with the DNS resolver domain name that you're using.On Linux or macOS, run a dig command, similar to the following:dig +nocl TXT o-o.myaddr.l.example.comOn Windows, run an nslookup command, similar to the following:nslookup -type=txt o-o.myaddr.l.example.comIf the output includes the same IP address each time you run the command, then the DNS resolver doesn't support Anycast. If the output includes a different IP address each time you run the command, then the DNS resolver supports Anycast. This might explain an unexpected edge location.The DNS resolver supports EDNS0-Client-SubnetTo determine how you can avoid incorrect routing, first check if the DNS resolver supports EDNS0-Client-Subnet by running one of these commands:Note: In these example commands, be sure to replace example.com with the DNS resolver domain name that you're using.On Linux or macOS, run a dig command, similar to the following:dig +nocl TXT o-o.myaddr.l.example.comOn Windows, run an nslookup command, similar to the following:nslookup -type=txt o-o.myaddr.l.example.comNote: Check the TTL value, and be sure to run the command when the TTL expires. Otherwise, you might get a cached response from the recursive resolver.If the DNS resolver doesn't support EDNS0-Client-Subnet, then the output is similar to the following:$ dig +nocl TXT o-o.myaddr.l.example.com +short"192.0.2.1"In the previous example, 192.0.2.1 is the IP address of the closest DNS server that's using Anycast. This DNS resolver doesn't support EDNS0-Client-Subnet. 
To avoid incorrect routing, you can do one of the following:Change to a recursive DNS resolver that's located geographically closer to your website's clients.Change to a DNS resolver that does support EDNS0-Client-Subnet.If the DNS resolver supports EDNS0-Client-Subnet, then the output contains a truncated client subnet (/24 or /32) to the CloudFront authoritative name server, similar to the following:$ dig +nocl TXT o-o.myaddr.l.example.com @8.8.8.8 +short"192.0.2.1""edns0-client-subnet 198.51.100.0/24"In the previous example, 192.0.2.1 is the closest DNS resolver IP address. Additionally, the client-subnet range is 198.51.100.0/24, which is used to respond to DNS queries. To avoid incorrect routing when the DNS resolver does support EDNS0-Client-Subnet, confirm that a public geolocation database is associated with the client-subnet range that's sending the query to the DNS resolver. If the DNS resolver is forwarding the truncated version of the client IP addresses to CloudFront name servers, then CloudFront checks a database that's based on several public geolocation databases. The IP addresses must be correctly mapped in the geolocation database so that requests are routed correctly.If the DNS resolver supports EDNS0-Client-Subnet, you can verify the edge location that traffic is routed to by first resolving your CloudFront CNAME by running a DNS lookup command like dig:$ dig dftex7example.cloudfront.net. +short13.224.77.10913.224.77.6213.224.77.6513.224.77.75Then, run a reverse DNS lookup on the IP addresses that are returned by the previous command:$ dig -x 13.224.77.62 +shortserver-13-224-77-62.man50.r.cloudfront.net.In the previous example, the traffic is routed to the Manchester edge location.Tip: For an additional test, you can use a public DNS resolver that supports EDNS0-client-subnet, such as 8.8.8.8 or 8.8.4.4. Send queries with edge location IP addresses to the public DNS resolver. Then, check the results of the DNS queries to see whether CloudFront has the correct information about your EDNS0-client-subnet.Follow" | https://repost.aws/knowledge-center/cloudfront-unexpected-edge-location
How can I see who's been accessing my Amazon S3 buckets and objects? | I'm looking for a way to track who's accessing my Amazon Simple Storage Service (Amazon S3) buckets and objects. How can I do that? | I'm looking for a way to track who's accessing my Amazon Simple Storage Service (Amazon S3) buckets and objects. How can I do that?ResolutionYou can track who's accessing your bucket and objects in the following ways:Use Amazon S3 server access logging to see information about requests to your buckets and objects. You can use Amazon Athena to analyze your server access logs.Use AWS CloudTrail to track API calls to your Amazon S3 resources. You can also use Athena to query your CloudTrail logs.Related informationMonitoring Amazon S3Follow | https://repost.aws/knowledge-center/s3-see-bucket-users |
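As a sketch of the first option (server access logging) using the CLI, with the bucket names as placeholders:
aws s3api put-bucket-logging \
  --bucket my-source-bucket \
  --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-log-bucket","TargetPrefix":"access-logs/"}}'
Note that the target bucket must be in the same Region as the source bucket and must allow log delivery (for example, through its bucket policy), and logs are delivered on a best-effort basis with some delay.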
How do I restore the backup of my Amazon DynamoDB table to a different Region? | I want to restore the backup of my Amazon DynamoDB table to a different Region. | "I want to restore the backup of my Amazon DynamoDB table to a different Region.ResolutionTo restore your DynamoDB table to a different Region, you can use either of the following approaches.Restore a DynamoDB table to a different Region using DynamoDBOpen the DynamoDB console.In the navigation pane, choose Backups.In the list displayed, choose the backup from which you want to restore the table.Choose Restore.For Name of restored table, enter the new table name.For Secondary indexes, select your desired option.For Destination AWS Region, select Cross-Region.For Select the destination AWS Region, choose the Region of your choice.For Encryption key management, select your desired option.Choose Restore.Restore a DynamoDB table to a different Region using AWS GlueYou can use an AWS Glue job to restore a DynamoDB table to another Region. AWS Glue provides more flexibility with the restoration process. You might choose this approach if you don't want to restore all attributes or fields to the target table in the new Region. This approach works only for a table that's exported to Amazon Simple Storage Service (Amazon S3).1. After exporting the DynamoDB table to Amazon S3 using the Export to S3 feature, create an AWS Glue job. Be sure to specify the following information in the Script tab:datasource0 = glueContext.create_dynamic_frame.from_options( connection_type="dynamodb", connection_options={ "dynamodb.export": "ddb", "dynamodb.tableArn": "arn:aws:dynamodb:source-region:account-number:table/TableName", "dynamodb.unnestDDBJson": True, "dynamodb.s3.bucket": "example-bucket", "dynamodb.s3.prefix": "dynamodb", "dynamodb.s3.bucketOwner": "account-number", })Note: Be sure to use the transform node ApplyMapping, and specify the fields that must be present in the target table. This setting automatically generates the PySpark code based on the input provided.Example:applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("resource_id", "string", "resource_id", "string")], transformation_ctx = "applymapping1")2. Specify a sink operation to directly write to the destination table in the target Region.Example:glueContext.write_dynamic_frame_from_options (frame = MappedFrame, connection_type = "dynamodb", connection_options = { "dynamodb.region": "example-region", "dynamodb.output.tableName": "example_table", "dynamodb.throughput.write.percent": "1.0" })3. Run the job from the AWS Glue console to load data from the current Region to the target Region.Related informationRestoring a DynamoDB table from a backupPoint-in-time recovery for DynamoDBWorking with jobs on the AWS Glue consoleFollow" | https://repost.aws/knowledge-center/dynamodb-restore-cross-region |
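The AWS Glue approach above relies on the Export to S3 feature. If point-in-time recovery is enabled on the source table, you might start that export from the CLI with a sketch like the following; the table ARN, bucket, and prefix are placeholders:
aws dynamodb export-table-to-point-in-time \
  --table-arn arn:aws:dynamodb:us-east-1:123456789012:table/SourceTable \
  --s3-bucket example-bucket \
  --s3-prefix dynamodb \
  --export-format DYNAMODB_JSON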
How can I associate an Application Load Balancer with my Lightsail instance? | I want to use the features of an Amazon Application Load Balancer on my Amazon Lightsail instance. How do I do that? | "I want to use the features of an Amazon Application Load Balancer on my Amazon Lightsail instance. How do I do that?Short descriptionYou can use a Lightsail load balancer to add redundancy to your web application or to handle more web traffic. You can also use a Lightsail load balancer to install an Amazon-provided SSL certificate for your website hosted in a Lightsail instance. However, the Lightsail load balancer has limitations when compared to the Amazon Application Load Balancer. Some of the areas where the Application Load Balancer provides increased flexibility include the following:Flexibility in load balancer health checks.End-to-end encryption of data in transit between the load balancer and the instance.Use of firewall services such as AWS WAF, and so on.If you want to use these features for your website in Lightsail, you can associate an Application Load Balancer with the Lightsail instance. To set up Application Load Balancer, do the following:Configure Amazon VPC peering in Lightsail to allow the instance to connect to AWS resources such as Application Load Balancer.(Optional) Generate an AWS Certificate Manager (ACM) SSL certificate if you want to associate it with your website.Configure the Load Balancer with the target in the target group set to the Lightsail instance's private IP address.Point your domain to the new load balancer in DNS.ResolutionConfigure VPC peering in LightsailFor instructions on configuring VPC peering, see Set up Amazon VPC peering to work with AWS resources outside of Lightsail.You must activate VPC peering for the Region where your Lightsail instance is located. To do this, you must have a default Amazon VPC in that Region and the necessary AWS Identity and Access Management (IAM) permissions. For more information, see What are the minimum IAM permissions needed to set up communication between Lightsail and other AWS services using VPC peering?To check whether you have a default VPC, see View your default VPC and default subnets.If you don’t have a default Amazon VPC, then you can create one. To learn more, see Create a default VPC.(Optional) Generate an ACM certificateTo associate an ACM certificate with your domain and access websites using HTTPS, see Requesting a public certificate using the console.It's a best practice to give two names to the certificate. For example, example.com and *.example.com. By doing this, the same certificate can be used for the main domain and for subdomains, such as www.example.com or xyz.example.com. Keep in mind that this doesn't cover a Wildcard SSL certificate for two levels for the subdomain, for example abc.xyz.example.com.It's also a best practice to use DNS domain validation instead of email validation. 
DNS validation has multiple benefits over email validation.If you have issues validating the ownership of the domain using either DNS or email, see the following:Why is my AWS Certificate Manager (ACM) certificate DNS validation status still pending validation?Why am I not receiving validation emails when using ACM to issue or renew a certificate?Configure the target group for the load balancer and register the targetFor instructions, see Configure a target group.When configuring your target, keep the following in mind:Make sure to choose IP as the target type.Keep the protocol as HTTP and port as 80 if you don't have an SSL certificate installed inside your Lightsail instance. Make sure that there aren't any HTTPS redirections configured inside your instance. Otherwise, this might cause an infinite redirection loop error.Keep the protocol as HTTPS and the port as 443 if you want to encrypt the data in transit from the Application Load Balancer to your Lightsail instance. Make sure that you have an SSL certificate installed in your instance for this.Choose the default VPC in the VPC section.In the Register target section, choose Other Private IP addresses under Network and specify the private IP address of your Lightsail instance. For information on obtaining the private IP address of your Lightsail instance, see Private and public IPv4 addresses for instances.Configure the load balancerFor instructions, see Configure a load balancer and a listener.When configuring the load balancer, keep the following in mind:Make sure that you choose the default VPC and at least two Availability Zones. You can choose any Availability Zones.Choose a security group or create a new one. Make sure that the security group has port 80 open. Also, open port 443 if you're attaching an ACM certificate to the load balancer.Add a new HTTPS listener if you want to access your website with HTTPS using an ACM certificate.Point both the HTTP and the HTTPS listener to the target group created in the previous step.Update the DNS entries of the domain to point to the ALB DNS nameIt's a best practice to use Amazon name servers and Amazon Route 53 for the domains that use Application Load Balancer with the website. This is because AWS provides the DNS name for the load balancer, not the IP address. Most name servers don't support adding a hostname for apex domain, such as example.com. They only support this for subdomains such as www.example.com or blog.example.com. However, Route 53 provides the alias feature that allows you to directly point the apex domain, example.com, to the load balancer DNS name.Note: Even if you use Lightsail DNS for your domain, you must switch the DNS to Route 53. This is because it's not possible to point the apex domain to the Application Load Balancer DNS name in Lightsail DNS.To update the name servers of the domain to Amazon, if it's not already using Amazon, see Making Route 53 the DNS service for a domain that's in use.To get the DNS name for a load balancer, see Getting the DNS name for an ELB load balancer.To update your Route 53 hosted zone to point the domain to the load balancer DNS name, see Routing traffic to an ELB load balancer.Note: If there are already DNS records for the domain in Route 53 that are pointing to an EC2 instance IP address, edit those records instead of adding new records.Final checkAccess the domain in your browser and confirm that the website is loading correctly. 
Now that you have connected an Application Load Balancer with your Lightsail instance, you can use its different features that aren't present with Lightsail Load Balancer. You can also set up firewall services such as AWS WAF.Follow" | https://repost.aws/knowledge-center/lightsail-add-application-load-balancer |
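If you prefer the CLI over the console for the target group steps above, the following is a hedged sketch; the VPC ID, target group ARN, and the Lightsail instance's private IP address are placeholders:
# Create an IP-type target group in the default VPC
aws elbv2 create-target-group \
  --name lightsail-targets \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip
# Register the Lightsail instance's private IP address as the target
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/lightsail-targets/0123456789abcdef \
  --targets Id=172.26.5.10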
Why does failover take longer than expected on my AWS Direct Connect connection after configuring BFD? | "I enabled Bidirectional Forwarding Detection (BFD) for my AWS Direct Connect connection for fast failure detection and failover with graceful restart enabled. However, I have failover delays or connection issues." | "I enabled Bidirectional Forwarding Detection (BFD) for my AWS Direct Connect connection for fast failure detection and failover with graceful restart enabled. However, I have failover delays or connection issues.ResolutionIt's a best practice not to configure graceful restart and BFD at the same time to avoid failover or connection issues. For fast failover, configure BFD without graceful restart enabled.For more information, see BGP quotas.Related informationDirect Connect network requirementsFollow" | https://repost.aws/knowledge-center/bfd-failover-direct-connect |
How do I troubleshoot Amazon Pinpoint campaign message failures? | "My Amazon Pinpoint campaign isn't processing or targeting any endpoints, and it isn't sending any messages. How do I troubleshoot campaign failures in Amazon Pinpoint?" | "My Amazon Pinpoint campaign isn't processing or targeting any endpoints, and it isn't sending any messages. How do I troubleshoot campaign failures in Amazon Pinpoint?ResolutionFor endpoints that aren't processed or targeted1. Verify if the campaign has the Use recipients local time setting activated. For instructions, see Sending the campaign at a specific date and time in the Amazon Pinpoint User Guide.2. If the setting is activated, then make sure that each endpoint definition includes a valid Demographic.Timezone attribute.Note: If an endpoint doesn't have a valid Demographic.Timezone attribute defined and the campaign's Use recipients local time setting is activated, the endpoint isn't processed.For endpoints that are processed and targeted, but not receiving messagesAmazon Pinpoint campaigns can fail to deliver messages to specific endpoints for many reasons. To troubleshoot message failures to specific endpoints, first configure Amazon Pinpoint to send information about events to Amazon Kinesis. Then, recreate the message failures and review the failed events in Amazon Kinesis to identify what's causing the error. After you've identified what's causing the error, remediate the issue.Note: The event_type value in each streaming event indicates the cause of most message failures. If the event_type value doesn't provide a clear indication of what's causing the message failure, then review your campaign's CampaignSendMessageThrottled metric in Amazon CloudWatch. This metric shows the number of campaign messages that weren't sent because your AWS account's ability to send messages was throttled. For more information, see Amazon Pinpoint quotas.For failed emailsMake sure that you identify any streaming events with the following event_type value: _email.rendering_failure. This event type usually indicates that the email template includes a variable that is either not valid or missing.-or-(If you don't have Kinesis streaming activated) Review your campaign's Amazon Simple Email Service (Amazon SES) Rendering Failures metric in Amazon CloudWatch. This metric also usually indicates that the email template includes a variable that is either not valid or missing.If you see streaming events with an _email.rendering_failure event type, or any Amazon SES Rendering Failures metrics in CloudWatch, do the following:1. Verify that all of the message variables in the template file have a corresponding endpoint attribute in the endpoint definition (segment file).2. Verify that all of the message variables in the template file are in the correct format in the endpoint definition.3. Specify the Default attribute values for all of the message variables in the template file. For more information, see Creating email templates in the Amazon Pinpoint User Guide.Note: If you don't specify a default value and a value doesn't exist for a recipient, then Amazon Pinpoint doesn't send the message.For more information, see Adding personalized content to message templates in the Amazon Pinpoint User Guide.Follow" | https://repost.aws/knowledge-center/pinpoint-campaign-message-failures |
"Why does my Amazon SageMaker notebook instance get stuck in the Pending state, and then fail?" | "When I create or start an Amazon SageMaker notebook instance, the instance enters the Pending state. The notebook instance appears to be stuck in this state, and then it fails." | "When I create or start an Amazon SageMaker notebook instance, the instance enters the Pending state. The notebook instance appears to be stuck in this state, and then it fails.Short descriptionThe Pending status means that SageMaker is creating the notebook instance. If any step in the creation process fails, SageMaker attempts to create the notebook again. This is why a notebook might stay in the Pending state longer than expected. If SageMaker still can't create the notebook instance, the status eventually changes to Failed .ResolutionConfirm the failure reasonCheck the FailureReason response in the DescribeNotebookInstance API. You can also find the failure reason on the SageMaker console:To see a pop-up window that shows a shortened version of the failure reason, pause on Failed in the Status column.To see the full failure reason, choose the name of the notebook instance. The failure reason appears at the top of the Notebook instance settings section.Use the failure reason to troubleshoot the root cause.Common errors"fatal: unable to access 'https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/': Failed to connect to github.com port 443: Connection timed out"This error happens when the networking configuration for the notebook instance doesn't support the domain name or connection for the external Git repository.Important: Notebook instances that are deployed in a Virtual Private Cloud (VPC) don't automatically inherit custom route tables, like subnet route tables for VPC peering connections. If you need a custom route table, create a lifecycle configuration script that adds the route on startup. For more information, see Understanding Amazon SageMaker notebook instance networking configurations and advanced routing options.To validate that the Git connection is active and that you can connect to the repository from a notebook instance: Create a new notebook instance without an associated Git repository. Then, open the Jupyter console and use a terminal session to run the following commands:1.FSPResolve the hostname of the server:dig repo_hostnameIf the answer section of the output is empty, the notebook wasn't able to resolve the hostname. For example, the answer section for github.com displays as:;; ANSWER SECTION:github.com. 16 IN A 20.248.137.482.FSPIf the answer section of the output contains a response, the domain name resolution works. You can then run the following command to test the connection to the hostname:curl -v your-git-repo-url:4433.FSPIf the connection is refused or times out, verify the VPC security group rules and route tables. If the connection is successful, use git commands to test your credentials:git pull https://your-git-repo-url"Lifecycle Configuration failed"If a lifecycle configuration script runs for longer than five minutes, it fails, and the notebook instance is neither created nor started. For suggestions on how to decrease script runtime, see Customize a notebook instance using a lifecycle configuration script. 
To troubleshoot issues with the script, check the Amazon CloudWatch logs for the lifecycle configuration:Log group: /aws/sagemaker/NotebookInstancesLog stream: notebook-instance-name/LifecycleConfigOnStart or notebook-instance-name/LifecycleConfigOnCreate"This Notebook Instance type 'ml.m4.xlarge' is temporarily unavailable. We apologize for the inconvenience. Please try again in a few minutes, or try a different instance type."This error happens when Amazon Elastic Compute Cloud (Amazon EC2) doesn't have enough available capacity for the instance type that you selected. Capacity varies based on the demand for that instance type in that Region at that time. Try the request again later to see if capacity levels have changed. Or, choose a different instance type.HTTP 500 internal errorsAn HTTP 500 error indicates that an unexpected error occurred while creating the notebook instance. To rule out transient issues, try creating the notebook instance again.Related informationAssociate Git repositories with SageMaker notebook instancesCommon errorsFollow" | https://repost.aws/knowledge-center/sagemaker-notebook-pending-fails |
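To pull the lifecycle configuration logs mentioned above without opening the CloudWatch console, a CLI sketch such as the following can be used; the notebook instance name is a placeholder:
aws logs filter-log-events \
  --log-group-name /aws/sagemaker/NotebookInstances \
  --log-stream-name-prefix my-notebook-instance/LifecycleConfigOnStart \
  --limit 50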
How can I identify and resolve unwanted health checks from Route 53? | My server is receiving unwanted requests from Amazon Route 53 health check servers. | "My server is receiving unwanted requests from Amazon Route 53 health check servers.Short descriptionWhen you associate health checks with an endpoint, Amazon Route 53 sends health check requests to the endpoint IP address. These health checks validate that the endpoint IP addresses are operating as intended. An issue might occur if an incorrect IP address is specified or if a health check isn't updated or deleted when necessary.ResolutionIdentify the source of unwanted requests1. Use Route 53 IP address ranges to find the source IP address of the unwanted request. For more information, see ROUTE53_HEALTHCHECKS in IP address ranges of Route 53 servers.2. Check the application server logs to determine whether the Route 53 health check servers sent the request. When performing health checks, Route 53 health checks set the following HTTP header:"Amazon-Route53-Health-Check-Service (ref <reference ID/ b5996862-d894-4595-88da-7940808e9665>; report http://amzn.to/1vsZADi)"Example Application Load Balancer access log:http 2020-05-12T14:14:25.000265Z app/myapplicationloadbalancer 54.241.32.97:49816 10.0.3.64:80 -1 -1 -1 502 - 241 288 "GET http:// <ALB DNS NAME>:80/ HTTP/1.1" "Amazon-Route53-Health-Check-Service (ref b5996862-d894-4595-88da-7940808e9665; report http://amzn.to/1vsZADi)" - - arn:aws:elasticloadbalancing:us-east-1:<account ID>:targetgroup/mytargetgroupExample Microsoft Internet Information Services (IIS) access log:Amazon+Route+53+Health+Check+Service;+ref:b5996862-d894-4595-88da-7940808e9665;+report+http://amzn.to/1vsZADiExample Apache access log:54.228.16.1 - - [time] "GET / HTTP/1.1" 403 3839 "-" "Amazon Route 53 Health Check Service; ref:47d9bc51-39d6-4cd9-9a7f-4c981c5db165; report http://amzn.to/1vsZADi"Example NGINX access log:NGINX access log entry: 54.232.40.80 - - [time] "GET / HTTP/1.1" 200 3770 "-" "Amazon Route 53 Health Check Service; ref:2e44063d-3b85-47c3-801e-6748cd542386; report http://amzn.to/1vsZADi" "-"Delete or block the source of unwanted requests1. Copy the health check ID from the application service logs.2. If the health check is available from your AWS account, then update the health check to monitor the intended IP address or domain name. Or, if it's no longer required, then delete the health check.If the health check isn't available from your AWS account, then block the IP address of the health check. To block the IP address, use firewall rules, security groups, or network access control lists (NACLs).Important: To report suspected Route 53 health check abuse, see Stop unwanted Amazon Route 53 health checks.Related informationHow can I stop Route 53 health check requests that are sent to my application?Follow" | https://repost.aws/knowledge-center/route-53-fix-unwanted-health-checks |
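Once you have the reference ID from the request header, you can look up and then update or delete the health check from the owning AWS account with the CLI. A sketch with placeholder values:
# List health checks and match the ID reported in the request header
aws route53 list-health-checks \
  --query 'HealthChecks[].{Id:Id,Target:HealthCheckConfig.IPAddress,Domain:HealthCheckConfig.FullyQualifiedDomainName}'
# Point the health check at the intended IP address, or delete it if it's no longer needed
aws route53 update-health-check --health-check-id 11111111-2222-3333-4444-555555555555 --ip-address 203.0.113.10
aws route53 delete-health-check --health-check-id 11111111-2222-3333-4444-555555555555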
How do I troubleshoot a circuit breaker exception in Amazon OpenSearch Service? | "I'm trying to send data to my Amazon OpenSearch Service cluster. However, I receive a circuit breaking exception error that states that my data is too large. Why is this happening and how do I resolve this?" | "I'm trying to send data to my Amazon OpenSearch Service cluster. However, I receive a circuit breaking exception error that states that my data is too large. Why is this happening and how do I resolve this?Short descriptionWhen a request reaches OpenSearch Service nodes, circuit breakers estimate the amount of memory needed to load the required data. OpenSearch Service then compares the estimated size with the configured heap size limit. If the estimated size of your data is greater than the available heap size, then the query is terminated. As a result, a CircuitBreakerException is thrown to prevent overloading the node.OpenSearch Service uses the following circuit breakers to prevent JVM OutofMemoryError exceptions:RequestFielddataIn flight requestsAccountingParentNote: It's important to know which of these five circuit breakers raised the exception because each circuit breaker has its own tuning needs. For more information about circuit breaker types, see Circuit breaker settings on the Elasticsearch website.To obtain the current memory usage per node and per breaker, use the following command:GET _nodes/stats/breakerAlso, note that circuit breakers are only a best-effort mechanism. While circuit breakers provide some resiliency against overloading a node, you might still end up receiving an OutOfMemoryError. Circuit breakers can track memory only if it's explicitly reserved, so estimating the exact memory usage upfront isn't always possible. For example, if you have a small amount of memory heap, the relative overhead of untracked memory is larger. For more information about circuit breakers and node resiliency, see Improving node resiliency with the real memory circuit breaker on the Elasticsearch website.To avoid overloading your data nodes, follow the tips provided in the Troubleshooting high JVM memory pressure section.ResolutionCircuit breaker exceptionIf you're using Elasticsearch version 7.x and higher with 16 GB of heap, then you receive the following error when the circuit breaker limit is reached:{ "error": { "root_cause": [{ "type": "circuit_breaking_exception", "reason": "[parent] Data too large, data for [<http_request>] would be [16355096754/15.2gb], which is larger than the limit of [16213167308/15gb]", "bytes_wanted": 16355096754, "bytes_limit": 16213167308 }], "type": "circuit_breaking_exception", "reason": "[parent] Data too large, data for [<http_request>] would be [16355096754/15.2gb], which is larger than the limit of [16213167308/15gb]", "bytes_wanted": 16355096754, "bytes_limit": 16213167308 }, "status": 503}This example output indicates that the data to be processed is too large for the parent circuit breaker to handle. The parent circuit breaker (a circuit breaker type) is responsible for the overall memory usage of your cluster. When a parent circuit breaker exception occurs, the total memory used across all circuit breakers has exceeded the set limit. A parent breaker throws an exception when the cluster exceeds 95% of 16 GB, which is 15.2 GB of heap.You can verify this logic by calculating the difference between memory usage and set circuit breaker limit. 
Use the values from our example output, and subtract "real usage: [15283269136/14.2gb]" from "limit of [16213167308/15gb]". This calculation shows that this request needs around 1.02 GB of new bytes reserved memory to successfully process the request. However, in this example, the cluster had less than 0.8 GB of available free memory heap when the data request came in. As a result, the circuit breaker trips.The circuit breaker exception message can be interpreted like this:data for [<http_request>]: Client sends HTTP requests to a node in your cluster. OpenSearch Service either processes the request locally or passes it onto another node for additional processing.would be [#]: How the heap size looks when the request is processed.limit of [#]: Current circuit breaker limit.real usage: Actual usage of the JVM heap.new bytes reserved: Actual memory needed to process the request.JVM memory pressureA circuit breaking exception is often caused by high JVM memory pressure. JVM memory pressure refers to the percentage of Java heap that is used for all data nodes in your cluster. Check the JVMMemoryPressure metric in Amazon CloudWatch to determine current usage.Note: JVM heap size of a data node is set to half the size of physical memory (RAM), up to 32 GB. For example, if the physical memory (RAM) is 128 GB per node, the heap size will still be 32 GB (the maximum heap size). Otherwise, heap size is calculated as half the size of physical memory.High JVM memory pressure can be caused by following:Increase in the number of requests to the cluster. Check the IndexRate and SearchRate metrics in Amazon CloudWatch to determine your current load.Aggregation, wildcards, and using wide time ranges in your queries.Unbalanced shard allocation across nodes or too many shards in a cluster.Index mapping explosions.Using the fielddata data structure to query data. Fielddata can consume a large amount of heap space, and remains in the heap for the lifetime of a segment. As a result, JVM memory pressure remains high on the cluster when fielddata is used. For more information, see Fielddata on the Elasticsearch website.Troubleshooting high JVM memory pressureTo resolve high JVM memory pressure, try the following tips:Reduce incoming traffic to your cluster, especially if you have a heavy workload.Consider scaling the cluster to obtain more JVM memory to support your workload.If cluster scaling isn't possible, try reducing the number of shards by deleting old or unused indices. Because shard metadata is stored in memory, reducing the number of shards can reduce overall memory usage.Enable slow logs to identify faulty requests.Note: Before enabling configuration changes, verify that JVM memory pressure is below 85%. This way, you can avoid additional overhead to existing resources.Optimize search and indexing requests, and choose the correct number of shards. For more information about indexing and shard count, see Get started with OpenSearch Service: How many shards do I need?Disable and avoid using fielddata. By default, fielddata is set to "false" on a text field unless it's explicitly defined as otherwise in index mappings.Change your index mapping type to a keyword, using reindex API or create or update index template API. You can use the keyword type as an alternative for performing aggregations and sorting on text fields.Avoid aggregating on text fields to prevent increases in field data. When you use more field data, more heap space is consumed. 
Use the cluster stats API operation to check your field data.Clear the fielddata cache with the following API call:POST /index_name/_cache/clear?fielddata=true (index-level cache)POST */_cache/clear?fielddata=true (cluster-level cache)Warning: If you clear the fielddata cache, any in-progress queries might be disrupted.For more information, see How do I troubleshoot high JVM memory pressure on my OpenSearch Service cluster?Related informationBest practices for Amazon OpenSearch ServiceHow can I improve the indexing performance on my Amazon OpenSearch Service cluster?Follow" | https://repost.aws/knowledge-center/opensearch-circuit-breaker-exception |
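To apply the checks described above from the command line, the following sketch queries the per-node breaker statistics and the JVMMemoryPressure CloudWatch metric. The domain endpoint, domain name, account ID, and time window are placeholders.
# Current memory usage and limit for each circuit breaker, per node
curl -s "https://your-domain-endpoint/_nodes/stats/breaker?pretty"
# Maximum JVM memory pressure in 5-minute periods for the chosen window
aws cloudwatch get-metric-statistics --namespace AWS/ES --metric-name JVMMemoryPressure --dimensions Name=DomainName,Value=your-domain Name=ClientId,Value=111122223333 --statistics Maximum --period 300 --start-time 2022-01-01T00:00:00Z --end-time 2022-01-01T01:00:00Z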
Why can't my Secrets Manager rotation function connect to an Aurora PostgreSQL database using scram-sha-256? | My AWS Secrets Manager rotation function can't connect to an Amazon Aurora PostgreSQL database using the scram-sha-256 algorithm. | "My AWS Secrets Manager rotation function can't connect to an Amazon Aurora PostgreSQL database using the scram-sha-256 algorithm.Short descriptionIf your database is Aurora PostgreSQL version 13 or later, the rotation function might fail to connect to the database if:The database uses scram-sha-256 to encrypt passwords.The rotation function uses a libpq-based client of version 9 or earlier.Important: If you set up automatic secret rotation before December 30, 2021, then your rotation function bundled an older version of libpq that doesn't support scram-sha-256.ResolutionFollow these steps to check database users for scram-sha-256 encryption and to check the rotation function's libpq version.Determine which database users use scram-sha-256 encryptionTo check for users with scram-sha-256 encrypted passwords, see the AWS blog SCRAM authentication in Amazon Relational Database Service for PostgreSQL 13.Determine what version of libpq your rotation function uses1. Open the Lambda console.2. In the navigation pane, choose Functions, and then select the Lambda function name that failed to rotate.3. Choose the Code tab.4. Choose Actions, choose Export function, and then choose Download deployment package.5. Uncompress the zip file into the work directory.6. Run the following Linux command in the work directory:readelf -a libpq.so.5 | grep RUNPATHIf you see the string PostgreSQL-9.4.x, or any major version less than 10, then the rotation function doesn't support scram-sha-256.Example output for a rotation function that doesn't support scram-sha-256:0x000000000000001d (RUNPATH) Library runpath: [/local/p4clients/pkgbuild-a1b2c/workspace/build/PostgreSQL/PostgreSQL-9.4.x_client_only.123456.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/lib:/local/p4clients/pkgbuild-a1b2c/workspace/src/PostgreSQL/build/private/install/lib]Example output for a rotation function that supports scram-sha-256:0x000000000000001d (RUNPATH) Library runpath: [/local/p4clients/pkgbuild-a1b2c/workspace/build/PostgreSQL/PostgreSQL-10.x_client_only.123456.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/tmp/brazil-path/build.libfarm/lib:/local/p4clients/pkgbuild-a1b2c/workspace/src/PostgreSQL/build/private/install/lib]If your database uses scram-sha-256 and the example output indicates that the rotation function doesn't support scram-sha-256, then you must recreate your rotation function.Related informationTroubleshoot AWS Secrets Manager rotationFollow" | https://repost.aws/knowledge-center/secrets-manager-scram-sha-256 |
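As a quick check before you recreate the rotation function, the following psql commands show one way to confirm whether scram-sha-256 is in play; the endpoint, user, and database names are placeholders, and the queries assume you can already connect to the cluster.
# Show the password encryption method the cluster currently uses
psql -h your-cluster-endpoint -U postgres -d postgres -c "SHOW password_encryption;"
# List users whose stored passwords already use scram-sha-256
psql -h your-cluster-endpoint -U postgres -d postgres -c "SELECT usename FROM pg_shadow WHERE passwd LIKE 'SCRAM-SHA-256%';"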
How can I calculate my burst credit billing on my EC2 instance? | I want to calculate my burst credit billing on my Amazon Elastic Compute Cloud (Amazon EC2) instance. How can I do this? | "I want to calculate my burst credit billing on my Amazon Elastic Compute Cloud (Amazon EC2) instance. How can I do this?ResolutionNote: Burst credit billing applies to EC2 burstable instance types only. For example, T4g, T3a, and T3 instance types, and the previous generation T2 instance types.You can configure burstable instances to run in one of two modes: standard mode or unlimited mode.You can check whether your burstable performance instance is configured as unlimited or standard using the EC2 console or the AWS Command Line Interface (AWS CLI). For more information, see View the credit specification of a burstable performance instance and View the default credit specification.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Billing in standard modeIn standard mode, you're charged using the standard On-Demand hourly rate for your instance.Billing in unlimited modeIn unlimited mode, the hourly instance price automatically covers all CPU usage spikes for the shorter of the following:The time that the instance's average CPU utilization is at or below the baseline over a rolling 24-hour period.The instance's lifetime.If the instance runs at a higher CPU utilization for a prolonged period, it can do so for a flat additional rate per vCPU-Hour. For information on vCPU-Hour charges, see T2/T3/T4g Unlimited Mode Pricing.Credits that are spent by an instance in unlimited mode after it depletes its accrued credit balance are called surplus credits. For information on when surplus credit charges occur, see Surplus credits can incur charges.You can use the Amazon CloudWatch metric CPUSurplusCreditsCharged to see surplus credits that are charged for your instance. For more information, see Additional CloudWatch metrics for burstable performance instances.For examples of how EC2 burstable instances in unlimited mode spend credits and are charged, see Unlimited mode examples.Related informationKey concepts and definitions for burstable performance instancesFollow" | https://repost.aws/knowledge-center/ec2-calculate-burst-credit-billing |
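The following commands are a minimal sketch for checking the two inputs that drive burst credit charges: the instance's credit specification and the CPUSurplusCreditsCharged metric. The instance ID and time window are placeholders.
# Check whether the instance is in standard or unlimited mode
aws ec2 describe-instance-credit-specifications --instance-ids i-0123456789abcdef0
# Sum the surplus credits charged per hour over the chosen window
aws cloudwatch get-metric-statistics --namespace AWS/EC2 --metric-name CPUSurplusCreditsCharged --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --statistics Sum --period 3600 --start-time 2022-01-01T00:00:00Z --end-time 2022-01-02T00:00:00Z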
How can I automatically attach a persistent secondary EBS volume to a new EC2 Linux Spot Instance at boot? | I want to use a user data script to automatically attach a persistent secondary Amazon Elastic Block Store (Amazon EBS) volume to my new Amazon Elastic Compute Cloud (Amazon EC2) Linux Spot Instance at boot. How do I do this? | "I want to use a user data script to automatically attach a persistent secondary Amazon Elastic Block Store (Amazon EBS) volume to my new Amazon Elastic Compute Cloud (Amazon EC2) Linux Spot Instance at boot. How do I do this?Short descriptionTo automatically attach a persistent secondary EBS volume to a new EC2 Linux Spot Instance at boot, add a user data script to an EC2 launch template. Use the template when configuring your Spot Instance Request.PrerequisiteCreate or use an AWS Identity and Access Management (IAM) role that at a minimum has attach-volume access granted for Amazon EC2. This role will be attached to the launch template.ResolutionStep 1: Configure a launch template with an IAM role and user data script1. Open the Amazon EC2 console.2. Select Launch Templates, and then select Create launch template.3. Choose the instance AMI, type, and size. Or, choose an existing AMI.4. Associate a key pair with the template.5. Choose a subnet in the same Availability Zone as the EBS volume.6. Select Advanced details.7. Add the IAM role that at a minimum has attach-volume access granted, as shown in the following example:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AttachVolume", "ec2:DetachVolume" ], "Resource": [ "arn:aws:ec2:*:*:instance/*", "arn:aws:ec2:*:*:volume/vol-xxxxxxxxxxxx" ] }, { "Effect": "Allow", "Action": "ec2:DescribeVolumes", "Resource": "arn:aws:ec2:*:*:volume/vol-xxxxxxxxxxxx" } ] }8. Add a user data script to the template. The following is an example user data script. Replace the region and volume-id to match your environment.#!/bin/bash OUTPUT=$(curl http://169.254.169.254/latest/meta-data/instance-id) aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxx --device /dev/xvdf --instance-id $OUTPUT --region ap-southeast-1Step 2: Configure a Spot Request using the launch template created in Step 11. Select Spot Instance, and then select Request Spot Instance.2. Select Launch Templates, and then choose the Launch Template created in Step 1. All the information configured on the template auto-populates.3. Choose the same Availability Zone as the EBS volume.4. Select Create Spot Request.After the Spot request completes, the persistent secondary EBS volume is attached to the new Spot Instance automatically at boot.Follow" | https://repost.aws/knowledge-center/ec2-linux-spot-instance-attach-ebs-volume |
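If your launch template requires IMDSv2 (token-based instance metadata access), the metadata call in the example user data must send a session token. The following variant is a sketch under that assumption; the volume ID and Region are placeholders, as in the original script.
#!/bin/bash
# Request an IMDSv2 session token, then use it to read the instance ID
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxx --device /dev/xvdf --instance-id $INSTANCE_ID --region ap-southeast-1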
How can I set up a private network connection between a file gateway and Amazon S3? | I want to set up a private network connection between AWS Storage Gateway's file interface (file gateway) and Amazon Simple Storage Service (Amazon S3). I don't want my gateway to communicate with AWS services over the internet. How can I do that? | "I want to set up a private network connection between AWS Storage Gateway's file interface (file gateway) and Amazon Simple Storage Service (Amazon S3). I don't want my gateway to communicate with AWS services over the internet. How can I do that?Short descriptionYou can set up a private network connection between a file gateway and Amazon S3 within an Amazon Virtual Private Cloud (Amazon VPC) where the gateway appliance connects with service endpoints over an internal private network. To set up this private connection within a VPC, do the following:Create either a VPC gateway endpoint or an interface endpoint for Amazon S3.Create a file gateway using a VPC endpoint.Amazon S3 File Gateway supports two Amazon S3 endpoints. However, you need to create only one type of endpoint based on your use case.Note: Amazon S3 gateway endpoints can't be used with on-premises gateways. An Amazon S3 gateway endpoint is used with Amazon EC2 instance-based gateways. Amazon S3 interface endpoints can be used with both on-premises and EC2 instance-based gatewaysResolutionCreate a VPC gateway endpoint for Amazon S3Open the Amazon VPC console.In the navigation pane, choose Endpoints.Choose Create Endpoint.For Service category, select AWS services.For Service Name, select the Service Name that ends with s3 and has Type as Gateway.For VPC, select the VPC that you want to use when accessing Storage Gateway.For Configure route tables, select the Route Table ID for your configuration.Choose Create endpoint.When using Gateway VPC endpoints, VPC endpoint policies are used to restrict access and only allow requests to S3 buckets from authorized users. In addition, you can control which buckets are accessible from a particular VPC. This is the best practices model for accessing S3 from a VPC in the same Region. To use a Gateway VPC endpoint from on-premises applications, or to access S3 from a VPC in a different AWS Region, you must have set up a fleet of proxy servers with private IP addresses in your VPC. 
This results in changes to your on-premises applications so that they direct requests to the proxy servers, and then forward them to S3 through your VPC endpoint.Create a VPC interface endpoint for Amazon S3Open the Amazon VPC console.In the navigation pane, choose Endpoints.Choose Create Endpoint.For Service category, select AWS services.For Service Name, select the service name that ends with s3 and has Type as Interface.For VPC, select the VPC and subnets that you want to use when accessing Storage Gateway.For Security group, select the security group where port 443 is opened.Choose Create endpoint.Create a file gateway using the VPC endpointTo create a file gateway using a VPC endpoint, you must create a VPC endpoint for Storage Gateway, create and configure a file gateway and activate your gateway in a VPC.Note: If you're using on-premises Storage Gateway using a private connectivity with AWS, then you can use interface endpoint for Amazon S3 that works without an Amazon Elastic Compute Cloud (Amazon EC2) proxy.Create file share using the VPC interface endpoint for Amazon S3With Amazon S3 File Gateway, you can create a file share that can be accessed using either the Network File System (NFS) or Server Message Block (SMB) protocol. For more information on creating a file share, see Creating a file share.Test the network connectivityNote: Testing the connectivity helps you to check if the Storage Gateway appliance can connect with the service endpoint over the required TCP port.Connect to the file gateway's Amazon EC2 host instance using SSH.In the SSH session, enter 3 to select 3: Test Network Connectivity.The tests return [ PASSED ] for a successful network connection.Related informationUse cases (AWS PrivateLink and VPC endpoints)Performing maintenance tasks on the local consoleSecure hybrid access to Amazon S3 using AWS PrivateLinkFollow" | https://repost.aws/knowledge-center/storage-gateway-file-gateway-private-s3 |
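If you prefer to create the interface endpoint with the AWS CLI instead of the console, the following command is a sketch of the same configuration; the Region, VPC, subnet, and security group IDs are placeholders.
# Create an S3 interface endpoint in the VPC used by the gateway (the security group must allow port 443)
aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id vpc-0abcd1234567890ab --service-name com.amazonaws.ap-southeast-1.s3 --subnet-ids subnet-0abcd1234567890ab --security-group-ids sg-0abcd1234567890ab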
How can I share CloudHSM clusters with other AWS accounts? | My organization has multiple AWS accounts. How can I share my AWS CloudHSM clusters with these AWS accounts? | "My organization has multiple AWS accounts. How can I share my AWS CloudHSM clusters with these AWS accounts?Short descriptionYou can use AWS Resource Access Manager to share the subnets of the Amazon Virtual Private Cloud (Amazon VPC) that contains your CloudHSM cluster with other AWS accounts.ResolutionUse AWS RAM to access CloudHSM with another AWS account in AWS Organizations. In the following example, Account 1 contains the CloudHSM cluster, and Account 2 contains the CloudHSM client instance.Use AWS RAM to enable sharingWith your Organizations management account, open the AWS RAM console in the same Region as your CloudHSM, and choose Settings.Select the Enable sharing within your AWS Organization check box.With your Organizations management account, open the AWS Organizations console.Choose Settings, and note the Organization ID.Create a resource share from Account 1 to share with other accountsOpen the AWS RAM console with Account 1 in the same Region as your CloudHSM.In the navigation pane, in Shared by me, choose Resource shares.Choose Create resource share.In Name, enter a name for the resource share.In Resources, choose the Amazon VPC subnet ID for your CloudHSM.In Principals, uncheck Allow external accounts.In the Add AWS account number search pane, enter the Organization ID, choose Add, and then choose Create resource share.Note: You can also share Organizational Units (OUs) and AWS accounts.Configure the security group to allow the CloudHSM client to connect to the CloudHSM clusterOpen the CloudHSM console with Account 1 in the same Region as your CloudHSM cluster.In the navigation pane, choose Clusters.In Cluster ID, choose the CloudHSM cluster that you want to share.In Security group, choose the security group.Choose the Inbound tab, and then choose Edit.Choose Add Rule.In Port Range, enter 2223-2225.In Source, enter the private IP address of your client instance, and then choose Save.Note: To get the client instance private IP address, see view the IPv4 addresses using the EC2 console.Create client instances for the subnets shared with Account 2Open the Amazon EC2 console with Account 2, choose Launch Instance, and then select an Amazon Machine Image (AMI).Choose Next: Configure Instance Details.In Network, choose the Amazon VPC that's shared with Account 2.In Subnet, choose the subnet that's shared with Account 2.In Auto-assign Public IP, choose Enable, and then choose Next: Add Storage.Choose Next: Add Tags, and then choose Next: Configure Security Group.In Assign a security group, choose either Create a new security group or Select an existing security group (depending on your instance type).Choose Review and Launch, and then choose Launch.Choose an existing key pair or create a new one (depending on your instance type), and then select the agreement check box.Choose Launch Instances.Related informationSharing your AWS resourcesWorking with security groupsShared subnets permissionsFollow" | https://repost.aws/knowledge-center/cloudhsm-share-clusters |
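The console steps for AWS RAM can also be scripted. The following AWS CLI commands are a sketch; the share name, subnet ARN, Region, account ID, and organization ARN are placeholders.
# Turn on sharing with AWS Organizations (run once from the management account)
aws ram enable-sharing-with-aws-organization
# Create the resource share from Account 1 and share the CloudHSM subnet with the organization
aws ram create-resource-share --name cloudhsm-subnet-share --resource-arns arn:aws:ec2:ap-southeast-2:111122223333:subnet/subnet-0abcd1234567890ab --principals arn:aws:organizations::111122223333:organization/o-exampleorgid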
Why is the AWS Glue crawler running for a long time? | "The AWS Glue crawler has been running for several hours or longer, and is still not able to identify the schema in my data store." | "The AWS Glue crawler has been running for several hours or longer, and is still not able to identify the schema in my data store.Short descriptionHere are some common causes of long crawler run times:Frequently adding new data: During the first crawler run, the crawler reads the first megabyte of each file to infer the schema. During subsequent crawler runs, the crawler lists files in the target, including files that were crawled during the first run, and reads the first megabyte of new files. The crawler doesn't read files that were read in the previous crawler run. This means that subsequent crawler runs are often faster. This is due to the incremental crawl feature, if activated. With this option, crawler only reads new data in subsequent crawl runs. However, when you add a lot of files or folders to your data store between crawler runs, the run time increases each time.**Crawling compressed files:**Compressed files take longer to crawl. That's because the crawler must download the file and decompress it before reading the first megabyte or listing the file.Note: For Apache Parquet, Apache Avro, and Apache Orc files, the crawler doesn't crawl the first megabyte. Instead, the crawler reads the metadata stored in each file.ResolutionBefore you start troubleshooting, consider whether or not you need to run a crawler. Unless you need to create a table in the AWS Glue Data Catalog and use the table in an extract, transform, and load (ETL) job or a downstream service, such as Amazon Athena, you don't need to run a crawler. For ETL jobs, you can use from_options to read the data directly from the data store and use the transformations on the DynamicFrame. When you do this, you don't need a crawler in your ETL pipeline. If you determine that a crawler makes sense for your use case, use one or more of the following methods to reduce crawler run times.Use an exclude patternAn exclude pattern tells the crawler to skip certain files or paths. Exclude patterns reduce the number of files that the crawler must list, making the crawler run faster. For example, use an exclude pattern to exclude meta files and files that have already been crawled. For more information, including examples of exclude patterns, see Include and exclude patterns.Use the sample size featureThe AWS Glue crawler supports the sample size feature. With this feature, you can specify the number of files in each leaf folder to be crawled when crawling sample files in a dataset. When this feature is turned on, the crawler randomly selects some files in each leaf folder to crawl instead of crawling all the files in the dataset. If you have previous knowledge about your data formats and know that schemas in your folders do not change, then use the sampling crawler. Turning on this feature significantly reduces the crawler run time.Run multiple crawlersInstead of running one crawler on the entire data store, consider running multiple crawlers. Running multiple crawlers for a short amount of time is better than running one crawler for a long time. For example, assume that you are partitioning your data by year, and that each partition contains a large amount of data. 
If you run a different crawler on each partition (each year), the crawlers complete faster.Combine smaller files to create larger onesIt takes more time to crawl a large number of small files than a small number of large files. That's because the crawler must list each file and must read the first megabyte of each new file.Related informationHow do I resolve the "Unable to infer schema" exception in AWS Glue?Why does the AWS Glue crawler classify my fixed-width data file as UNKNOWN when I use a built-in classifier to parse the file?Follow" | https://repost.aws/knowledge-center/long-running-glue-crawler |
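The exclude pattern and sample size options can also be set with the AWS CLI. The following command is a sketch; the crawler name, bucket path, exclusion globs, and sample size are placeholders that you should adapt to your data layout.
# Exclude metadata and temporary files, and crawl at most 20 files per leaf folder
aws glue update-crawler --name your-crawler --targets '{"S3Targets":[{"Path":"s3://your-bucket/your-prefix/","Exclusions":["**/_metadata","**/*.tmp"],"SampleSize":20}]}'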
How do I get SSM agent logs for Fargate tasks that have Amazon ECS Exec activated? | "I want to get AWS Systems Manager Agent (SSM Agent) logs for AWS Fargate tasks that have Amazon Elastic Container Service (Amazon ECS) Exec activated. But, I don't know how." | "I want to get AWS Systems Manager Agent (SSM Agent) logs for AWS Fargate tasks that have Amazon Elastic Container Service (Amazon ECS) Exec activated. But, I don't know how.Short descriptionBefore you start using Amazon ECS Exec, see Prerequisites for using ECS Exec.To get SSM agent logs for Fargate tasks that have ECS Exec activated, create an Amazon Elastic File System (Amazon EFS) file system. Then, mount the Amazon EFS file system on the Fargate container. Finally, mount the same file system on an Amazon Elastic Compute Cloud (Amazon EC2) instance to get the SSM agent logs.Important: Your Amazon EFS file system, Amazon ECS cluster, and Fargate tasks must all be in the same Amazon Virtual Private Cloud (Amazon VPC).Note: The following resolution is only for Fargate tasks that have ECS Exec activated. Use the resolution steps solely for debugging. Launch it as a standalone task, or keep your desiredCount of tasks to "1" in your Amazon ECS service to avoid overriding logs. You can also use the following resolution for scenarios where you must check non stderr/stdout logs from the containers.ResolutionCreate your Amazon EFS file system and mount it on a Fargate container in a Task or Service.Create your Amazon EFS file system.Note the Amazon EFS ID and security group ID.Edit your file system security group rules to allow inbound connections on port 2049 from the security group that's associated with your Fargate task.Update your Amazon ECS security group to allow outbound connections on port 2049 to your file system's security group.Open the Amazon ECS console, and navigate to your Amazon ECS task definition.In the Volumes section, choose Add volume.For Name, enter a name for your volume.For Volume type, enter "EFS".For File system ID, enter the ID for your file system.In the Containers definition section, navigate to the STORAGE AND LOGGING section, and select the volume that you created for the source volume.For Container path, select /var/log/amazon.Update the Fargate service or task with the task definition that you created.Mount the Amazon EFS file system on an Amazon EC2 instance and get the SSM Agent logs1. Mount your file system on an EC2 instance.2. 
Run the following command to get the log data:sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-01b0bxxxxxxxx.efs.ap-southeast-1.amazonaws.com:/ /efsExample output:# df -hFilesystem Size Used Avail Use% Mounted onfs-01b0bxxxxxxxx.efs.us-west-2.amazonaws.com:/ 8.0E 0 8.0E 0% /efsThe following is an example of logs stored at path /var/log/amazon/ssm/amazon-ssm-agent.log in the Fargate container:[root@ip-172-31-32-32 efs]# cd ssm/[root@ip-172-31-32-32 ssm]# lsamazon-ssm-agent.log audits[root@ip-172-31-32-32 ssm]# cat amazon-ssm-agent.log | tail -n 102022-10-20 11:50:34 INFO [ssm-agent-worker] [MessageService] [MessageHandler] ended idempotency deletion thread2022-10-20 11:50:37 INFO [ssm-agent-worker] [MessageService] [MGSInteractor] send failed reply thread started2022-10-20 11:50:37 INFO [ssm-agent-worker] [MessageService] [MGSInteractor] send failed reply thread done2022-10-20 11:55:37 INFO [ssm-agent-worker] [MessageService] [MGSInteractor] send failed reply thread started2022-10-20 11:55:37 INFO [ssm-agent-worker] [MessageService] [MGSInteractor] send failed reply thread done2022-10-20 12:00:34 INFO [ssm-agent-worker] [MessageService] [MessageHandler] started idempotency deletion thread2022-10-20 12:00:34 WARN [ssm-agent-worker] [MessageService] [MessageHandler] [Idempotency] encountered error open /var/lib/amazon/ssm/170b15cacf5846ed836bcd7903cbee48-2531612879/idempotency: no such file or directory while listing replies in /var/lib/amazon/ssm/170b15cacf5846ed836bcd7903cbee48-2531612879/idempotency2022-10-20 12:00:34 INFO [ssm-agent-worker] [MessageService] [MessageHandler] ended idempotency deletion thread2022-10-20 12:00:37 INFO [ssm-agent-worker] [MessageService] [MGSInteractor] send failed reply thread started2022-10-20 12:00:37 INFO [ssm-agent-worker] [MessageService] [MGSInteractor] send failed reply thread done[root@ip-172-31-32-32 ssm]#Follow" | https://repost.aws/knowledge-center/ecs-exec-ssm-logs-fargate-tasks |
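Before you mount the file system, it can help to confirm what Amazon ECS reports about the SSM agent in the task. The following commands are a sketch; the cluster name, task ID, and mount path are placeholders.
# Check the status of the ExecuteCommandAgent managed agent for the task
aws ecs describe-tasks --cluster your-cluster --tasks your-task-id --query "tasks[].containers[].managedAgents"
# After mounting the file system on the EC2 instance as shown above, follow the agent log
sudo tail -f /efs/ssm/amazon-ssm-agent.log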
How do I make sure that my client I/O isn't disrupted because of security patches? | I want to know some best practices for maintaining high availability in MSK clusters during security patching. | "I want to know some best practices for maintaining high availability in MSK clusters during security patching.Short descriptionAmazon Managed Streaming for Apache Kafka (Amazon MSK) uses rolling updates to maintain high availability, and support cluster I/O during patching. During this process, brokers are rebooted one by one, and the next broker isn't rebooted until the partitions on the current rebooted broker fully catches up (in-sync). It's normal to see transient disconnect errors on your clients during this update process.To prevent clients from experiencing downtime during security patching, use the following best practices to make the clusters highly available.ResolutionSet up a three-AZ clusterIf an Availability Zone fails, then a three-AZ cluster guards against any downtime.Amazon MSK sets the broker.rack broker property to achieve a rack-aware replication assignment for fault tolerance at the Availability Zone level. This means that when you use a three-AZ cluster with a replication factor (RF) of three, each of the three partition replicas is in a separate Availability Zone.Note: Having a two-AZ cluster with an RF-3 doesn’t allow each of the three partition replicas to be in a separate Availability Zone. Amazon MSK doesn't allow you to create a cluster in a single Availability Zone.Make sure that the replication factor is the Availability Zone countWhen you restart a broker during security patching, the leader becomes unavailable. As a result, one of the follower replicas gets elected as the new leader so that the cluster can continue to service clients.An RF-1 can lead to unavailable partitions during a rolling update because the cluster doesn’t have any replicas to promote as a new leader. An RF-2, with a minimum in-sync replica (minISR) of one, might result in data loss, even when producer acknowledgement (acks) is set to "all." For a write to be successful, a minISR of one requires only the leader to acknowledge the write. If the leader replica's broker goes down immediately after the acknowledgement but before the follower replica catches up, then data loss occurs. For more information about min.insync.replicas, see the Apache Kafka Documentation.Set minimum minISR to at most RF-1Setting minISR to the value of RF can result in producer failures when one broker is out of service because of a rolling update. If the replicas don't send an acknowledgement for the producer to write, then the producer raises an exception. For example, if AZ equals three and RF equals three, then the producer waits for all three partition replicas (including the leader) to acknowledge the messages. When one of the brokers is out of service, only two of the three partitions return the acknowledgements, resulting in producer exceptions.This scenario assumes the producer acks is set to "all." When you set producer acks to "all", the record isn't lost as long as at least one in-sync replica remains alive. For more details about producer acks, see the Apache Kafka Documentation.Include at least one broker from each AZ in the client connection stringThe client uses a single broker's endpoint to bootstrap a connection to the cluster. 
During the initial client connection, the broker sends metadata with information about the brokers that the client must access.When a broker becomes unavailable, the connection fails. For example, you have only one broker in a client's connection string. During patching, the client fails to establish an initial connection with the cluster when you restart the broker.Or, you have multiple brokers in a client connection string. In this case, the client's connection string allows failover when the broker that's used for establishing the connection goes offline. For more information on how to set up a connection string with multiple brokers, see Getting the bootstrap brokers for an Amazon MSK cluster.Allow retriesWhen you reboot a broker, leader partitions on that broker become unavailable. As a result, Apache Kafka promotes replica partitions on another broker as new leaders. The client now requests a metadata update to locate the new leader partitions. During this change, it's normal for your client to experience transient errors.By default, clients have retries built in to them to handle these types of transient errors. Confirm that your client is configured for retries. For more information on configuring retries, see the Apache Kafka Documentation.Follow" | https://repost.aws/knowledge-center/msk-avoid-disruption-during-patching |
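The following commands and settings are a sketch of how these recommendations look in practice; the topic name and bootstrap broker strings are placeholders, and the client property names assume a standard Apache Kafka producer.
# Bootstrap string that includes one broker from each Availability Zone
BOOTSTRAP="b-1.your-cluster:9092,b-2.your-cluster:9092,b-3.your-cluster:9092"
# Create a topic with RF equal to the Availability Zone count and minISR of RF-1
kafka-topics.sh --create --topic your-topic --partitions 6 --replication-factor 3 --config min.insync.replicas=2 --bootstrap-server "$BOOTSTRAP"
# Producer properties to keep: acks=all, and leave retries at its default (effectively unlimited in recent clients)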
How do I delete a stack instance from a CloudFormation stack set in a closed or suspended AWS account? | "I want to delete a stack instance from an AWS CloudFormation stack set, but the deletion fails because the target AWS account is closed or suspended." | "I want to delete a stack instance from an AWS CloudFormation stack set, but the deletion fails because the target AWS account is closed or suspended.Short descriptionWhen an AWS account is closed or suspended, the CloudFormation StackSets administration role can no longer access the StackSets execution role in that account. This prevents stack set operations from running on stack instances for that account. If you try to delete a stack instance in a closed or suspended account, you can get an error message. Then, the stack instance status can change to INOPERABLE.To delete stack instances for closed or suspended accounts, you must perform the DeleteStackInstances operation with the RetainStacks option set to true. This decouples the stack instance from the stack set without deleting the stack instance in the target account.The following resolution steps depend on the permissions model that the stack set uses: self-managed permissions or service-managed permissions with AWS Organizations.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionDelete inoperable stack instances for stack sets with self-managed permissionsFor stack sets that use self-managed permissions, you can delete the INOPERABLE stack instance with either the CloudFormation console or AWS CLI.CloudFormation console:1. Open the CloudFormation console.2. From the navigation pane, choose StackSets.3. From the StackSet name column, select the stack set that contains the instance that you want to delete.4. Choose Actions, and then choose Delete stacks from StackSets.5. On the Set deployment options page, for Account numbers, enter the 12-digit account number of the AWS account that's closed or suspended.6. In the Specify regions section, choose the AWS Region of your stack instance.7. In the Deployment options section, turn on Retain stacks, and then choose Next.8. On the Review page, choose Submit.AWS CLI:In the AWS CLI, run the following command:$ aws cloudformation delete-stack-instances --stack-set-name YourStackSetName --accounts YourStackInstanceAccount --regions YourStackInstanceRegion --retain-stacksNote: Replace YourStackSetName with the name of your stack set. Replace YourStackInstanceAccount with the AWS account number of the closed or suspended account. Replace YourStackInstanceRegion with the Region where the stack instance is located.Delete inoperable stack instances for stack sets with service-managed permissionsFor stack sets that use service-managed permissions, operations from the CloudFormation console can target only entire organizational units (OUs). You must use the AWS CLI to delete a specific stack instance from a single account.In the AWS CLI, run the following command:aws cloudformation delete-stack-instances --stack-set-name YourStackSetName --deployment-targets Accounts=YourStackInstanceAccount --regions YourStackInstanceRegion --retain-stacksNote: Replace YourStackSetName with the name of your stack set. Replace YourStackInstanceAccount with the AWS account number of the closed or suspended account. 
Replace YourStackInstanceRegion with the Region where the stack instance is located.Related informationPermission models for stack setsStack set and stack instance status codesFollow" | https://repost.aws/knowledge-center/cloudformation-removal-stacksets |
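To find which stack instances are already INOPERABLE before running the delete, the following command is a sketch; the stack set name is a placeholder.
# List accounts and Regions whose stack instances are in INOPERABLE status
aws cloudformation list-stack-instances --stack-set-name YourStackSetName --query "Summaries[?Status=='INOPERABLE'].[Account,Region,Status]" --output table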
How do I resolve a failed health check for a load balancer in Amazon EKS? | My load balancer keeps failing the health check in my Amazon Elastic Kubernetes Service (Amazon EKS). | "My load balancer keeps failing the health check in my Amazon Elastic Kubernetes Service (Amazon EKS).Short descriptionTo troubleshoot health check issues with the load balancer in your Amazon EKS, complete the steps in the following sections:Check the status of the podCheck the pod and service label selectorsCheck for missing endpointsCheck the service traffic policy and cluster security groups for Application Load BalancersVerify that your EKS is configured for targetPortVerify that your AWS Load Balancer Controller has the correct permissionsCheck the ingress annotations for issues with Application Load BalancersCheck the Kubernetes Service annotations for issues with Network Load BalancersManually test a health checkCheck the networkingRestart the kube-proxyResolutionCheck the status of the podCheck if the pod is in Running status and the containers in the pods are ready:$ kubectl get pod -n YOUR_NAMESPACENote: Replace YOUR_NAMESPACE with your Kubernetes namespace.Example output:NAME READY STATUS RESTARTS AGEpodname 1/1 Running 0 16sNote: If the application container in the pod isn't running, then the load balancer health check isn't answered and fails.Check the pod and service label selectorsFor pod labels, run the following command:$ kubectl get pod -n YOUR_NAMESPACE --show-labelsExample output:NAME READY STATUS RESTARTS AGE LABELSalb-instance-6cc5cd9b9-prnxw 1/1 Running 0 2d19h app=alb-instance,pod-template-hash=6cc5cd9b9To verify that your Kubernetes Service is using the pod labels, run the following command to check that its output matches the pod labels:$ kubectl get svc SERVICE_NAME -n YOUR_NAMESPACE -o=jsonpath='{.spec.selector}{"\n"}'Note: Replace SERVICE_NAME with your Kubernetes Service and YOUR_NAMESPACE with your Kubernetes namespace.Example output:{"app":"alb-instance"}Check for missing endpointsThe Kubernetes controller for the service selector continuously scans for pods that match its selector, and then posts updates to an endpoint object. If you selected an incorrect label, then no endpoint appears.Run the following command:$ kubectl describe svc SERVICE_NAME -n YOUR_NAMESPACEExample output:Name: alb-instanceNamespace: defaultLabels: <none>Annotations: <none>Selector: app=alb-instance-1 Type: NodePortIP Family Policy: SingleStackIP Families: IPv4IP: 10.100.44.151IPs: 10.100.44.151Port: http 80/TCPTargetPort: 80/TCPNodePort: http 32663/TCPEndpoints: <none> Session Affinity: NoneExternal Traffic Policy: ClusterEvents: <none>Check if the endpoint is missing:$ kubectl get endpoints SERVICE_NAME -n YOUR_NAMESPACEExample output:NAME ENDPOINTS AGEalb-instance <none> 2d20hCheck the service traffic policy and cluster security groups for issues with Application Load BalancersUnhealthy targets in the Application Load Balancer target groups happen for two reasons. Either the service traffic policy, spec.externalTrafficPolicy, is set to Local instead of Cluster. Or, the node groups in a cluster have different cluster security groups associated with them, and traffic cannot flow freely between the node groups.Verify that the traffic policy is correctly configured:$ kubectl get svc SERVICE_NAME -n YOUR_NAMESPACE -o=jsonpath='{.spec.externalTrafficPolicy}{"\n"}'Example output:LocalChange the setting to Cluster:$ kubectl edit svc SERVICE_NAME -n YOUR_NAMESPACECheck the cluster security groups1. 
Open the Amazon EC2 console.2. Select the healthy instance.3. Choose the Security tab and check the security group ingress rules.4. Select the unhealthy instance.5. Choose the Security tab and check the security group ingress rules.If the security group for each instance is different, then you must modify the security ingress rule in the security group console:1. From the Security tab, select the security group ID.2. Choose the Edit inbound rules button to modify ingress rules.3. Add inbound rules to allow traffic from the other node groups in the cluster.Verify that your service is configured for targetPortYour targetPort must match the containerPort in the pod that the service is sending traffic to.To verify what your targetPort is configured to, run the following command:$ kubectl get svc SERVICE_NAME -n YOUR_NAMESPACE -o=jsonpath="{.items[*]}{.metadata.name}{'\t'}{.spec.ports[].targetPort}{'\t'}{.spec.ports[].protocol}{'\n'}"Example output:alb-instance8080TCPIn the preceding example output, the targetPort is configured to 8080. However, because the containerPort is set to 80 you must configure the targetPort to 80.Verify that your AWS Load Balancer Controller has the correct permissionsThe AWS Load Balancer Controller must have the correct permissions to update security groups to allow traffic from the load balancer to instances or pods. If the controller doesn't have the correct permissions, then you receive errors.Check for errors in the AWS Load Balancer Controller deployment logs:$ kubectl logs deploy/aws-load-balancer-controller -n kube-systemCheck for errors in the individual controller pod logs:$ kubectl logs CONTROLLER_POD_NAME -n YOUR_NAMESPACENote: Replace CONTROLLER_POD_NAME with your controller pod name and YOUR_NAMESPACE with your Kubernetes namespace.Check the ingress annotations for issues with Application Load BalancersFor issues with the Application Load Balancer, check the Kubernetes ingress annotations:$ kubectl describe ing INGRESS_NAME -n YOUR_NAMESPACENote: Replace INGRESS_NAME with the name of your Kubernetes Ingress and YOUR_NAMESPACE with your Kubernetes namespace.Example output:Name: alb-instance-ingressNamespace: defaultAddress: k8s-default-albinsta-fcb010af73-2014729787.ap-southeast-2.elb.amazonaws.comDefault backend: alb-instance:80 (192.168.81.137:8080)Rules: Host Path Backends ---- ---- -------- awssite.cyou / alb-instance:80 (192.168.81.137:8080)Annotations: alb.ingress.kubernetes.io/scheme: internet-facing kubernetes.io/ingress.class: alb Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfullyReconciled 25m (x7 over 2d21h) ingress Successfully reconciledTo find ingress annotations that are specific to your use case, see Ingress annotations (from the Kubernetes website).Check the Kubernetes Service annotations for issues with Network Load BalancersFor issues with the Network Load Balancer, check the Kubernetes Service annotations:$ kubectl describe svc SERVICE_NAME -n YOUR_NAMESPACEExample output:Name: nlb-ipNamespace: defaultLabels: <none>Annotations: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing service.beta.kubernetes.io/aws-load-balancer-type: external Selector: app=nlb-ipType: LoadBalancerIP Family Policy: SingleStackIP Families: IPv4IP: 10.100.161.91IPs: 10.100.161.91LoadBalancer Ingress: k8s-default-nlbip-fff2442e46-ae4f8cf4a182dc4d.elb.ap-southeast-2.amazonaws.comPort: http 80/TCPTargetPort: 80/TCPNodePort: http 
31806/TCPEndpoints: 192.168.93.144:80Session Affinity: NoneExternal Traffic Policy: ClusterEvents: <none>Note: Take note of the APPLICATION_POD_IP. You'll need it to run a health check command.To find Kubernetes Service annotations that are specific to your use case, see Service annotations (from the Kubernetes website).Manually test a health checkCheck your application pod IP address:$ kubectl get pod -n YOUR_NAMESPACE -o wideRun a test pod to manually test a health check within the cluster for HTTP health checks:$ kubectl run -n YOUR_NAMESPACE troubleshoot -it --rm --image=amazonlinux -- /bin/bashFor HTTP health checks:# curl -Iv APPLICATION_POD_IP/HEALTH_CHECK_PATHNote: Replace APPLICATION_POD_IP with your application pod IP and HEALTH_CHECK_PATH with the ALB Target group health check path.Example command:# curl -Iv 192.168.81.137Example output:* Trying 192.168.81.137:80...* Connected to 192.168.81.137 (192.168.81.137) port 80 (#0)> HEAD / HTTP/1.1> Host: 192.168.81.137> User-Agent: curl/7.78.0> Accept: */*> * Mark bundle as not supporting multiuse< HTTP/1.1 200 OKHTTP/1.1 200 OK< Server: nginx/1.21.3Server: nginx/1.21.3< Date: Tue, 26 Oct 2021 05:10:17 GMTDate: Tue, 26 Oct 2021 05:10:17 GMT< Content-Type: text/htmlContent-Type: text/html< Content-Length: 615Content-Length: 615< Last-Modified: Tue, 07 Sep 2021 15:21:03 GMTLast-Modified: Tue, 07 Sep 2021 15:21:03 GMT< Connection: keep-aliveConnection: keep-alive< ETag: "6137835f-267"ETag: "6137835f-267"< Accept-Ranges: bytesAccept-Ranges: bytes< * Connection #0 to host 192.168.81.137 left intactCheck the HTTP response status code. If the response status code is 200 OK, then it means that your application is responding correctly on the health check path.If the HTTP response status code is 3xx or 4xx, then you can change your health check path. The following annotation can respond with 200 OK:alb.ingress.kubernetes.io/healthcheck-path: /ping-or-You can use the following annotation on the ingress resource to add a successful health check response status code range:alb.ingress.kubernetes.io/success-codes: 200-399For TCP health checks, use the following command to install the netcat command:# yum update -y && yum install -y ncTest the TCP health checks:# nc -z -v APPLICATION_POD_IP CONTAINER_PORT_NUMBERNote: Replace APPLICATION_POD_IP with your application pod IP and CONTAINER_PORT_NUMBER with your container port.Example command:# nc -z -v 192.168.81.137 80Example output:Ncat: Version 7.50 ( https://nmap.org/ncat )Ncat: Connected to 192.168.81.137:80.Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.Check the networkingFor networking issues, verify the following:The multiple node groups in the EKS cluster can freely communicate with each otherThe network access control list (network ACL) that's associated with the subnet where your pods are running allows traffic from the load balancer subnet CIDR rangeThe network ACL that's associated with your load balancer subnet should allow return traffic on the ephemeral port range from the subnet where the pods are runningThe route table allows local traffic from within the VPC CIDR rangeRestart the kube-proxyIf the kube-proxy that runs on each node isn't behaving correctly, then it could fail to update the iptables rules for the service and endpoints. 
Restart the kube-proxy to force it to recheck and update iptables rules:kubectl rollout restart daemonset.apps/kube-proxy -n kube-systemExample output:daemonset.apps/kube-proxy restartedRelated informationHow do I set up an Application Load Balancer using the AWS Load Balancer Controller on an Amazon EC2 node group in Amazon EKS?How do I troubleshoot service load balancers for Amazon EKS?How can I tag the Amazon VPC subnets in my Amazon EKS cluster for automatic subnet discovery by load balancers or ingress controllers?Follow" | https://repost.aws/knowledge-center/eks-resolve-failed-health-check-alb-nlb |
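It can also help to look at the health check results from the load balancer side. The following command is a sketch; the target group ARN is a placeholder that you can find on the load balancer's Target groups tab or with describe-target-groups.
# Show each registered target, its health state, and the reason code reported by the health check
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:ap-southeast-2:111122223333:targetgroup/your-target-group/0123456789abcdef --query "TargetHealthDescriptions[].{Target:Target.Id,Port:Target.Port,State:TargetHealth.State,Reason:TargetHealth.Reason}"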
How can I resolve an ERROR 2026 SSL connection error when connecting to an Amazon RDS for MySQL or Aurora DB instance? | "I'm trying to connect to my Amazon Relational Database Service (Amazon RDS) DB instance or cluster using Secure Sockets Layer (SSL). I received the following error:"ERROR 2026 (HY000): SSL connection error"How can I resolve ERROR 2026 for Amazon RDS for MySQL, Amazon Aurora for MySQL, or Amazon Aurora Serverless?" | "I'm trying to connect to my Amazon Relational Database Service (Amazon RDS) DB instance or cluster using Secure Sockets Layer (SSL). I received the following error:"ERROR 2026 (HY000): SSL connection error"How can I resolve ERROR 2026 for Amazon RDS for MySQL, Amazon Aurora for MySQL, or Amazon Aurora Serverless?Short descriptionThere are three different types of error messages for ERROR 2026:ERROR 2026 (HY000): SSL connection error: SSL certificate validation failureERROR 2026 (HY000): SSL connection error: Server doesn't support SSLERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmationSee the following troubleshooting steps for each error message.ResolutionERROR 2026 (HY000): SSL connection error: SSL certificate validation failureTo troubleshoot this error, first validate whether you're using the cluster endpoint or the DB instance endpoint. To learn how Amazon RDS supports SSL, see Using SSL with a MySQL DB instance or Using SSL with Aurora MySQL DB clusters.If you use a client that supports Subject Alternative Names (SAN), then you can use only the cluster endpoint. If your client doesn't support SAN, you must use the endpoint of the primary DB instance.Note: The default MySQL command line client doesn't support SAN.If you receive this error when trying to connect to the cluster endpoint, try connecting to the endpoint of the primary DB instance in the connection string. For example, you can connect to the cluster endpoint. In the following example, the cluster endpoint is abcdefg-clust.cluster-xxxx.us-east-1.rds.amazonaws.com. The DB instance endpoint is abcdefg-inst.xxxx.us-east-1.rds.amazonaws.com.Connect using the cluster endpoint[ec2-user@ip-192-0-2-0 ~]$ mysql -h abcdefg-clust.cluster-xxxx.us-east-1.rds.amazonaws.com --ssl-ca rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY -u test -p testEnter password:ERROR 2026 (HY000): SSL connection error: SSL certificate validation failureConnect using the DB instance endpoint[ec2-user@ip-192-0-2-0 ~]$ mysql -h abcdefg-inst.xxxx.us-east-1.rds.amazonaws.com --ssl-ca rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY -u test -p testEnter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 26ERROR 2026 (HY000): SSL connection error: Server doesn't support SSLYou can receive this error if the server or engine version that you use doesn't support SSL. To resolve this error, migrate to an engine that supports SSL connections.ERROR 2026 (HY000): SSL connection error: SSL_CTX_set_default_verify_paths failed or ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmationYou can receive this error if the certificate identifier (certificate file name) isn't correct. You can also receive this error if the certificate identifier isn't supported by the MySQL client, for example with Aurora Serverless. 
If you use Aurora Serverless clusters and you use the MySQL client to connect to Aurora Serverless, then you must use the MySQL 8.0-compatible MySQL commands.Be sure to use the correct certificate identifier name and the correct path to the certificate to connect successfully. Before connecting, confirm that you have downloaded the correct certificate. For more information, see Using SSL to encrypt a connection to a DB instance.The root certificate file is in the Downloads directory in an Amazon Elastic Compute Cloud (Amazon EC2) instance. In the following example, you enter the incorrect path, which results in ERROR 2026:[ec2-user@ip-192-0-2-0 ~]$ mysql -h abcdefg-clust.cluster-xxxxx.us-east-1.rds.amazonaws.com --ssl-ca rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY -u test -p testEnter password:ERROR 2026 (HY000): SSL connection error: SSL_CTX_set_default_verify_paths failedNote: This example uses the connection string in the home directory, but the root certificate is in the Downloads directory.In the following example, you use the path to the root certificate to connect successfully:[ec2-user@ip-192-0-2-0 ~]$ mysql -h abcdefg-clust.cluster-xxxx.us-east-1.rds.amazonaws.com --ssl-ca /home/ec2-user/Downloads/rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY -u test -p testEnter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 26You can also receive this error if you don't have permissions to the directory that the certificate is stored in. Be sure that the certificate is in a directory that you have permissions to access. See the following examples to connect with and without permissions:Connecting with insufficient permissions[ec2-user@ip-192-0-2-0 ~]$ sudo chmod 700 rds-combined-ca-bundle.pem [ec2-user@ip-192-0-2-0 ~]$ mysql -h abcdefg-inst.xxxx.us-east-1.rds.amazonaws.com --ssl-ca rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY -u test -p testEnter password: ERROR 2026 (HY000): SSL connection error: SSL_CTX_set_default_verify_paths failedConnecting with the correct permissions[ec2-user@ip-192-0-2-0 ~]$ sudo chmod 755 rds-combined-ca-bundle.pem[ec2-user@ip-192-0-2-0 ~]$ mysql -h abcdefg-inst.xxxx.us-east-1.rds.amazonaws.com --ssl-ca rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY -u test -p testEnter password: Welcome to the MySQL monitor. Commands end with ; or \g.Your MySQL connection id is 810Related informationUsing TLS/SSL with Aurora serverlessFollow" | https://repost.aws/knowledge-center/rds-error-2026-ssl-connection |
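If you haven't downloaded a certificate bundle yet, the following commands are one way to fetch the current combined bundle and connect with it. The bundle file name differs from the rds-combined-ca-bundle.pem used in the examples above, and the endpoint, user, and database are placeholders.
# Download the combined certificate bundle that covers all Regions
wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
# Connect using the downloaded bundle with full identity verification
mysql -h your-cluster-endpoint --ssl-ca global-bundle.pem --ssl-mode=VERIFY_IDENTITY -u your-user -p your-database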
What do I do when I receive an abuse report from AWS about my resources? | "AWS notified me that my account's resources were reported for abusive activity. What does this mean, and what do I need to do?" | "AWS notified me that my account's resources were reported for abusive activity. What does this mean, and what do I need to do?Short descriptionThe AWS Trust & Safety Team sends abuse reports to the security contact on your account. If there is no security contact listed, then the AWS Trust & Safety Team contacts you using the email address listed on your account.If you require more information, reply to the email from the AWS Trust & Safety Team. The team will request additional information from the reporter.If you observe unauthorized activity, or if you believe that an unauthorized party has accessed your AWS account, see What do I do if I notice unauthorized activity in my AWS account? Make sure that your instances and applications are properly secured by following the shared responsibility model.You can use AWS services, such as Amazon GuardDuty and AWS Trusted Advisor, to help you identify opportunities to secure your resources.ResolutionIf you receive an abuse notice from AWS, do the following:Review the abuse notice to see what content or activity was reported. Logs that implicate abuse are included along with the abuse report, as provided by the reporter.Reply directly to the abuse report and explain how you're preventing the abusive activity from recurring in the future. Note: If you don't respond to an abuse notice within 24 hours, AWS might block your resources or suspend your AWS account.The AWS Trust & Safety Team doesn't provide technical support. If you need technical help and have a Developer or Business AWS Support plan, see How do I get technical support from AWS? For additional resources, see How do I get help with my AWS account and resources?Related informationHow do I report abuse of AWS resources?Follow" | https://repost.aws/knowledge-center/aws-abuse-report |
"I am unable to connect to an EC2 Windows instance using RDP and receive an error, "The remote session was disconnected because there are no Remote Desktop License Servers available." How can I resolve this?" | "When connecting to an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance using Remote Desktop Protocol (RDP), I receive this error message: "The remote session was disconnected because there are no Remote Desktop License Servers available to provide a license. Please contact the server administrator." How can I resolve this?" | "When connecting to an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance using Remote Desktop Protocol (RDP), I receive this error message: "The remote session was disconnected because there are no Remote Desktop License Servers available to provide a license. Please contact the server administrator." How can I resolve this?ResolutionThe error indicates there is an issue with the Remote Desktop Services host settings on your Amazon EC2 Windows instance.If you don't need to run more than two concurrent RDP sessions, uninstall the Remote Desktop Service.Troubleshooting CALsIf you installed the Remote Desktop services host settings on your Amazon EC2 Windows instance to allow more than 2 concurrent RDP client sessions, then you must purchase Client Access Licenses (CALs). RDS provides a licensing grace period of 120 days. After the grace period expires, you must purchase CALs to allow more than two concurrent RDP user sessions.For assistance with purchasing and installing CALs, contact your VAR or Microsoft Partner.If you purchased CALs and receive the error, then the server might not be reachable by the instance that you're connecting to. For more information, see Cannot connect to RDS because no RD Licensing servers are available.Uninstall Remote Desktop ServicesOpen Remote Desktop client using below command and connect to the Windows server:mstsc /adminRun the following Windows PowerShell command as an administrator:Uninstall-WindowsFeature -name Remote-Desktop-Services -includemanagementtools -confirmReboot the machine.Note: If you still can't connect to a Windows instance using RDP, you might have to perform additional troubleshooting. For more information, see How do I troubleshoot Remote Desktop connection issues to my Amazon EC2 Windows instance?Related informationConnect to your Windows instanceCommand-line reference mstscRDP displays a black screen instead of the desktopFollow" | https://repost.aws/knowledge-center/concurrent-users-windows-ec2-rdp |
My static website hosted on Amazon S3 and served through CloudFront is down. Why? | "I'm using Amazon Simple Storage Service (Amazon S3) to host a static website, and I'm using Amazon CloudFront to serve the website. However, the website is down. How can I fix this?" | "I'm using Amazon Simple Storage Service (Amazon S3) to host a static website, and I'm using Amazon CloudFront to serve the website. However, the website is down. How can I fix this?ResolutionBefore you begin, confirm the following:You have internet access.The origin domain name that's specified on your CloudFront distribution points to the correct S3 bucket with no typos or other errors.If you have internet access and the origin domain name is correct, then check the error response that you get from trying to access your website:403 Access Denied errorA 403 Access Denied error indicates that there's a permissions problem that's causing your website to appear down. For troubleshooting instructions, see I’m using an S3 website endpoint as the origin of my CloudFront distribution. Why am I getting 403 Access Denied errors?Important: Be sure to check the Block Public Access settings for your website's S3 bucket. These settings can block anonymous requests to your website. Block Public Access settings can apply to AWS accounts or individual buckets.404 Not Found errorA 404 Not Found error indicates that the request is pointing to a website object that doesn't exist. Check the following:Verify that the URL to the website object doesn't have any typos.Verify that the website object exists on the S3 bucket that's hosting your website. You can check the bucket using the Amazon S3 console. Or, you can run the list-objects command using the AWS Command Line Interface (AWS CLI).Internal errorIf the response indicates there's an internal error, then there might be an internal service issue that's affecting your website. Check the AWS Service Health Dashboard for possible issues.Related informationWeb distribution diagnosticFollow" | https://repost.aws/knowledge-center/s3-website-cloudfront-down |
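The 404 and Block Public Access checks described above can also be scripted. The following is a minimal boto3 sketch under the assumption of hypothetical bucket and object names; replace them with your own values.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "my-website-bucket", "index.html"  # hypothetical names

try:
    s3.head_object(Bucket=bucket, Key=key)
    print(f"{key} exists in {bucket}")
except ClientError as err:
    # A 404 here means the website object is missing from the origin bucket.
    print(f"head_object failed: {err.response['Error']['Code']}")

try:
    block = s3.get_public_access_block(Bucket=bucket)
    # Review these flags; they can block anonymous requests to the website.
    print(block["PublicAccessBlockConfiguration"])
except ClientError as err:
    # NoSuchPublicAccessBlockConfiguration means no bucket-level block is configured.
    print(f"No bucket-level public access block: {err.response['Error']['Code']}")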
Why am I receiving "imported-openssh-key" or "Putty Fatal Error" errors when connecting to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance? | I'm receiving "imported-openssh-key" or "Putty Fatal Error" errors when connecting to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance. How can I fix these errors? | "I'm receiving "imported-openssh-key" or "Putty Fatal Error" errors when connecting to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance. How can I fix these errors?Short descriptionWhen connecting to my Linux instance using SSH, I receive an error similar to the following:"Using username "root". Authentication with public key "imported-openssh-key" Please login as the user "ec2-user" rather than the user "root"."-or-When using the PuTTY client, I receive an error similar to the following:"PuTTY Fatal Error: Disconnected: No supported authentication methods available (server sent: publickey) OKThese errors might occur under the following circumstances:You're not connecting with the appropriate user name for your AMI when you negotiate an SSH session with an EC2 instance.You're using the wrong private key when you negotiate an SSH session with an EC2 instance.ResolutionVerify that you're connecting with the correct user nameOn your local machine, verify that you're connecting with an appropriate user name. For a complete list of appropriate user names, see Troubleshoot connecting to your instance.Verify that the private key is correct1.FSPOpen the Amazon EC2 console, and then choose Instances.2.FSPFind the EC2 instance that you want to connect to using SSH.3.FSPIn the Key Name column, verify the name of the private key that you're using to connect through SSH:PuTTYVerify that the SSH private key matches the private key you see in the Key Name column for your EC2 instance in the console.Verify that you converted your private key (.pem) file to the format recognized by PuTTY (.ppk). For more information, see Convert your private key using PuTTYgen.macOS or LinuxRun the following command to make sure that you changed the permissions on your key pair file so that only you can view it:$ chmod 400 my-key-pair.pemCheck the directory and file name that you specify after the -i flag to make sure that it's the correct path to your private key, as shown in the following example command:$ ssh -i my-key-pair.pem [email protected] the EC2 serial consoleIf you turned on EC2 serial console for Linux, then you can use the serial console to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Before using the serial console, grant access to it at the account level. Then, create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven’t configured access to the serial console, then follow the instructions in Method 2, 3, or 4. 
For information on configuring the EC2 serial console for Linux, see Configure access to the EC2 serial console.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Related informationWhy can't I connect to my Amazon EC2 Linux instance using SSH?Connect to your Linux Instance from Windows using PuTTYAmazon EC2 key pairs and Windows instancesFollow" | https://repost.aws/knowledge-center/linux-credentials-error |
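The key-pair check in step 3 above can also be done outside the console. The following boto3 sketch prints the key pair name associated with an instance so you can compare it with the private key file you pass to ssh -i; the instance ID and Region are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical Region

# Hypothetical instance ID; replace with your own.
reservations = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        # KeyName is the value shown in the Key Name column of the console.
        print(instance["InstanceId"], instance.get("KeyName"))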
How do I use customer profiles in Amazon Connect? | "In Amazon Connect, I want to start using customer profiles." | "In Amazon Connect, I want to start using customer profiles.ResolutionTo start using customer profiles, it's a best practice to begin with the Amazon Connect customer profiles workshop.Frequently asked questionsHow do I see the customer profiles for a specific customer?You can see specific customer profiles by changing the instance URL for your Contact Control Panel (CCP) to replace ccp-v2 with agent-app-v2.For example, if your URL is instance-test.my.connect.aws, then the URLs look similar to the following:To access the CCP for calls, chats, and tasks, the URL is: https://instance-test.my.connect.aws/ccp-v2/To access the CCP for calls, chats, tasks, and to use customer profiles and the Voice ID feature, the URL is: https://instance-test.my.connect.aws/agent-app-v2/How do I know when a specific profile was created?When a profile is created, the CreateProfile API is logged in AWS CloudTrail. For more information, see Amazon Connect information in CloudTrail.Tip: You can configure an Amazon EventBridge rule to notify you when the CreateProfile API is logged. For more information, see Getting started with Amazon EventBridge.How do I associate, create, and update a customer profile during a call flow using a contact flow block?To change a customer profile during a call flow, use the Customer Profiles flow block. The Customer Profiles flow block lets you retrieve, update, and create a customer profile.How do I use a customer profiles contact flow block to validate an inbound call?You can create customer profiles to validate every inbound call using a customer's phone number. Use the customer's phone number from System Attributes to perform a profile search with the field _phone. Be sure to perform this search within the same block. If the search returns a profile, then you have validated that the customer's phone number is from an existing customer.How do I see custom attributes in the customer profile in the agent application?Custom attributes aren't visible in the agent application by default. To see custom attributes in the agent application, you must design a custom agent application panel.How can I run the Identity Resolution manually?You can use the JobSchedule API to schedule the Identity Resolution to run at a specific time.Troubleshooting the "Conflict executing request: Duplicate key" error messageThis error message occurs when configuring Amazon Simple Storage Service (Amazon S3) data mapping on your customer profile. This error happens because the CSV file isn't formatted correctly, such as when it contains empty columns or extra data.To resolve this error, make sure that there isn't extra data in the file when formatting the CSV. Refer to the sample CSV file for an example of the correct format.Related informationActivate customer profiles for your instanceFollow" | https://repost.aws/knowledge-center/connect-customer-profiles |
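The inbound-call validation described above can also be performed outside a flow block with the public SearchProfiles API. The following is a hedged boto3 sketch, not from the original article: the domain name and phone number are placeholders, and it assumes the customer-profiles client and search_profiles call match the current API shape.

import boto3

profiles = boto3.client("customer-profiles", region_name="us-east-1")  # hypothetical Region

# Hypothetical Customer Profiles domain and caller number.
response = profiles.search_profiles(
    DomainName="my-connect-profiles-domain",
    KeyName="_phone",          # same search key used in the flow block example above
    Values=["+12065550100"],
)

if response["Items"]:
    print("Existing customer:", response["Items"][0]["ProfileId"])
else:
    print("No matching profile found")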
How can I resolve the errors that I receive when I integrate API Gateway with a Lambda function? | I want to resolve the errors that I receive when I integrate Amazon API Gateway with an AWS Lambda function. | "I want to resolve the errors that I receive when I integrate Amazon API Gateway with an AWS Lambda function.ResolutionTurn on logging for your API and stage1. In the API Gateway console, find the Stage Editor for your API.2. On the Stage Editor pane, choose the Logs/Tracing tab.3. On the Logs/Tracing tab, for CloudWatch Settings, do the following to turn on logging:Choose the Enable CloudWatch Logs check box.For Log level, choose INFO to generate logs for all requests. Or, choose ERROR to generate logs only for requests to your API that result in an error.For REST APIs, choose the Log full requests/responses data check box. Or, for WebSocket APIs, choose the Log full message data check box.4. Under Custom Access Logging, do the following to turn on access logging:Choose the Enable Access Logging check box.For Access Log Destination ARN, enter the Amazon Resource Name (ARN) of a CloudWatch log group or an Amazon Kinesis Data Firehose stream.Enter a Log Format. For guidance, choose CLF, JSON, XML, or CSV to see an example in that format.5. Choose Save Changes.Note: The console doesn't confirm that the settings are saved.For more information, see Set up CloudWatch API logging using the API Gateway console.Determine integration types, verify errors, and take next steps1. Determine whether a Lambda proxy integration or a Lambda custom integration is set up in API Gateway. You can verify the integration type by reviewing the Lambda function output or by running the get-integration command.2. Verify that the errors in API Gateway correspond to the errors in Lambda. Run the following CloudWatch Logs Insights query to find an error status code during a specified time frame:parse @message '(*) *' as reqId, message | filter message like /Method completed with status: \d\d\d/ | parse message 'Method completed with status: *' as status | filter status != 200 | sort @timestamp asc | limit 50Then, run the following CloudWatch Logs Insights query to search for Lambda error logs during the same time frame:fields @timestamp, @message | filter @message like /(?i)(Exception|error|fail)/ | sort @timestamp desc | limit 203. Based on the type of error that you identify in your logs, choose one of the following:If you receive the following error, then complete the steps in the Resolve concurrency issues section.(XXXXX) Lambda invocation failed with status: 429. 
Lambda request id: XXXXXXXXXX(XXXXX) Execution failed due to configuration error: Rate Exceeded.(XXXXX) Method completed with status: 500If you receive either of the following errors, then complete the steps in the Resolve timeout issues section.For a Lambda custom integration:< 29 sec:(XXXXX) Method response body after transformations: {"errorMessage":"2019-08-14T02:45:14.133Z xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Task timed out after xx.01 seconds"}> 29 sec:(XXXXX) Execution failed due to a timeout errorFor a Lambda proxy integration:< 29 sec:(XXXXX) Endpoint response body before transformations: {"errorMessage":"2019-08-14T02:50:25.865Z xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Task timed out after xx.01 seconds"}> 29 sec:(XXXXX) Execution failed due to a timeout errorIf you receive the following error, then complete the steps in the Resolve function errors section.(XXXXX) Execution failed due to configuration error: Malformed Lambda proxy response(XXXXX) Method response body after transformations: {"errorMessage": "Syntax error in module 'lambda_function'"}Resolve concurrency issuesYou receive 429 throttling errors or 500 errors when additional requests come in from API Gateway faster than your Lambda function can scale.To resolve these errors, analyze the Count (API Gateway), Throttles (Lambda), and ConcurrentExecutions (Lambda) metrics in CloudWatch. Consider the following:Count (API Gateway) is the total number of API requests in a given period.Throttles (Lambda) are the number of invocation requests that are throttled. When all function instances are processing requests and no concurrency is available to scale up, Lambda rejects additional requests with the TooManyRequestsException error. Throttled requests and other invocation errors don't count as invocations or errors.ConcurrentExecutions (Lambda) is the number of function instances that are processing events. If this number reaches your concurrent executions quota for the AWS Region, then additional invocation requests are throttled. Invocation requests are also throttled when the number of function instances reaches the reserved concurrency limit that you configured on the function.Note: For more information, see API Gateway metrics and Working with Lambda function metrics.If you set reserved concurrency on your Lambda function, then set a higher reserved concurrency value for the Lambda function. Or, remove the reserved concurrency value from the Lambda function. The function then draws from the pool of unreserved concurrent executions.If you don't set reserved concurrency on the Lambda function, then check the ConcurrentExecutions metric to find out the usage. For more information, see Lambda quotas.Resolve timeout issuesThe integration timeout is 29 seconds (a hard limit) for all API Gateway integrations. You might encounter two scenarios when you build an API Gateway API with Lambda integration. The scenarios are when the timeout is less than 29 seconds or greater than 29 seconds.If your Lambda function timeout is less than 29 seconds, then check your Lambda logs to investigate this issue. If your Lambda function must run for longer than 29 seconds, consider invoking the Lambda function asynchronously.For Lambda custom integration, complete the following steps:1. Open the API Gateway console.2. From the navigation pane, choose APIs, and then choose your API.3. Choose Resources, and then choose your method.4. Choose Integration Request.5. Choose Method Request.6. Expand HTTP Request Headers.7. Choose Add header.8. 
For Name, enter a name for your header. For example: X-Amz-Invocation-TypeImportant: You must map your header from 'Event' (single quotes are required).For Lambda proxy integration:Use two Lambda functions: function A and function B. API Gateway first invokes function A synchronously. Then, function A invokes function B asynchronously. Function A can return a successful response to API Gateway when function B is invoked asynchronously.If you're using Lambda proxy integration, consider changing it to a custom integration. However, you must configure the mapping templates to transform the request/response to your desired format. For more information, see Set up asynchronous invocation of the backend Lambda function.Note: Because an asynchronous Lambda function runs in the background, your client can't receive data from a Lambda function directly. You must have an intermediate database to store any persistent data.Resolve function errorsIf you receive a function error when you invoke your API, then check your Lambda function for any syntax errors. This error also appears if your Lambda function isn't returning a valid JSON object that API Gateway expects for proxy integrations.To resolve this error, complete the steps in the Turn on logging for your API and stage section.Related informationHandle standard Lambda errors in API GatewayHandle custom Lambda errors in API GatewayFollow" | https://repost.aws/knowledge-center/api-gateway-lambda-integration-errors |
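The CloudWatch Logs Insights query from step 2 above can also be run programmatically. The following boto3 sketch runs the error-status query against the last hour of logs; the API Gateway execution log group name is a placeholder that you must replace with your own.

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # hypothetical Region

query = (
    "parse @message '(*) *' as reqId, message "
    "| filter message like /Method completed with status: \\d\\d\\d/ "
    "| parse message 'Method completed with status: *' as status "
    "| filter status != 200 | sort @timestamp asc | limit 50"
)

# Hypothetical execution log group for an API Gateway stage.
start = logs.start_query(
    logGroupName="API-Gateway-Execution-Logs_abcdef1234/prod",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query finishes, then print each matching log row.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})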
How do I port my phone number that's not registered in the US or Canada (non +1 country code) in Amazon Connect? | "I am trying to port my non-US (non +1 country code) numbers in Amazon Connect, but I am not sure how I can do this. How do I start the process for porting my numbers?" | "I am trying to port my non-US (non +1 country code) numbers in Amazon Connect, but I am not sure how I can do this. How do I start the process for porting my numbers?ResolutionCreate a support case to port your telephone numbers into Amazon Connect. Submit the port request only when you are prepared to port the telephone number because it can be difficult to cancel or change the request after it has started.Note: This process is for phone numbers not registered in the US or Canada. For all other numbers, see How do I port my phone number registered in the US or Canada (+1 country code) in Amazon Connect?Include the following information when submitting your case to AWS Support:Telephone number, including all digits and any leading zerosNumber type (local/DID, toll-free, ITFS, UIFN, and so on)Country that the number is registered inName of the current carrierAccount number with current carrier, as well as any PIN or billing telephone number informationCopy of an invoice or bill with current carrier, with sensitive information redactedAmazon Connect target instance ARNExact name of the contact flow where the numbers must be mapped to after receiving porting approvalNote: Specifying the contact flow information allows AWS Support to map the number to a contact flow when adding the telephone number to the Amazon Connect instance. This helps to avoid downtime during the porting process.Preferred porting window timeframeNote: AWS Support will attempt to meet the window timeframe, but can't guarantee to meet it.After submitting the port request, it takes up to five business days to check telephone number portability with our carriers. If the number is portable, then AWS Support will reach out for more information.Note: AWS Support notifies you if there is a cost to port an international number before submitting the port to the current carrier.After AWS Support collects all additional information and completed forms from you, the information is submitted to our partner carrier. Then, the partner carrier submits the port request to the current carrier. If the current carrier accepts the request to port, they propose a porting date. This is known as the mutual acceptance of a port. Mutual port acceptance takes from five to fifteen business days to complete after all correct and completed forms are submitted. At this point, your preferred porting window timeframe is considered. The international number porting process varies by country, carrier, and local regulations. The porting process is completed on the scheduled date and time provided to you by AWS Support.AWS Support informs you when the process is complete and successful porting has been verified. It's best practice for you to test the ported numbers by placing test calls to make sure that the attached contact flow is initiated.Follow" | https://repost.aws/knowledge-center/connect-port-non-us-number |
How do I resolve the error "Timeout waiting for connection from pool" in Amazon EMR? | My Apache Hadoop job in Amazon EMR fails with the error message "Timeout waiting for connection from pool". | "My Apache Hadoop job in Amazon EMR fails with the error message "Timeout waiting for connection from pool".ResolutionThis error usually happens when you reach the Amazon EMR File System (EMRFS) connection limit for Amazon Simple Storage Service (Amazon S3). To resolve this error, increase the value of the fs.s3.maxConnections property. You can do this while your cluster is running or when you create a new cluster.1. Connect to the master node using SSH.2. Run the following command to open the emrfs-site.xml file as sudo. This file is located in the /usr/share/aws/emr/emrfs/conf directory.sudo vi /usr/share/aws/emr/emrfs/conf/emrfs-site.xml3. Set the fs.s3.maxConnections property to a value above 50. In the following example, the value is set to 100. You might need to choose a higher value, depending on how many concurrent S3 connections that your applications need.Note: If you launch your cluster with Apache HBase, then the fs.s3.maxConnections value is set to 1000 by default. If increasing the fs.s3.maxConnections value doesn't resolve the timeout error, then check your applications for connection leaks.<property> <name>fs.s3.maxConnections</name> <value>100</value></property>4. Repeat steps 2 and 3 on all core and task nodes. Use the same fs.s3.maxConnections value that you used on the master node.Note: With Amazon EMR version 5.21.0 and later, you can reconfigure cluster applications and specify additional configuration classifications for each instance group in a running cluster. For more information, see Reconfigure an instance group in a running cluster.5. Run the Hadoop job again. Your application must use the new value for fs.s3.maxConnections without a service restart.To set the value of the fs.s3.maxConnections property on all nodes when you launch a new cluster, use a configuration object similar to the following. For more information, see Configure applications.[ { "Classification": "emrfs-site", "Properties": { "fs.s3.maxConnections": "100" } }]Related informationWork with storage and file systemsFollow" | https://repost.aws/knowledge-center/emr-timeout-connection-wait |
How can I troubleshoot "ERROR: null value in column violates not-null constraint" for my AWS DMS full load and CDC task? | I have an AWS Database Migration Service (AWS DMS) task that is both full load and change data capture (CDC). I received an error that says: "ERROR: null value in column violates not-null constraint." How do I resolve this issue? | "I have an AWS Database Migration Service (AWS DMS) task that is both full load and change data capture (CDC). I received an error that says: "ERROR: null value in column violates not-null constraint." How do I resolve this issue?ResolutionIf you have large object (LOB) columns in your migration task, then you see a log entry similar to the following:MessagesE: Command failed to load data with exit error code 1, Command output: ERROR: null value in column ’xyz’ violates not-null constraintWhen AWS DMS migrates LOB columns, first all the data except for the LOB column is migrated into your target table, and AWS DMS inserts a NULL record in the LOB column. Then, AWS DMS updates the rows in the target table with the LOB column data.If the target isn't created by AWS DMS, check the target data description language (DDL) to see if a NOT NULL attribute is specified. If there is a NOT NULL attribute, alter the table to remove the NOT NULL constraint on LOB column datatypes. Then, resume your AWS DMS task.See the following table for AWS DMS task behavior for LOB modes and task phases:LOB ModeFull LoadChange Data CaptureFull LOB ModeNOT NULL constraint isn't allowedNOT NULL constraint isn't allowedLimited LOB ModeNULL constraint is allowedNOT NULL constraint isn't allowedRelated InformationMigrating Large Binary Objects (LOBs)Follow" | https://repost.aws/knowledge-center/dms-error-null-value-column |
How can I collect logs from the Windows instances in my Elastic Beanstalk environment? | I want to collect logs from the Windows instances in my AWS Elastic Beanstalk environment. | "I want to collect logs from the Windows instances in my AWS Elastic Beanstalk environment.Short descriptionYou can use the AWSSupport-CollectElasticBeanstalkLogs automation to collect logs from the Windows instances in your Elastic Beanstalk environment. For Windows instances, you must collect logs one at a time by connecting to each individual Windows instance using the Remote Desktop Protocol (RDP). However, you can avoid this manual process by using the AWSSupport-CollectElasticBeanstalkLogs automation to collect logs from multiple Windows instances automatically.By default, the automation uploads the log bundles for your instances as .zip files to either of the following:The default Elastic Beanstalk bucket in the your accountAn Amazon Simple Storage Service (Amazon S3) bucket that you specifyThe automation collects log files from the following locations:C:\Program Files\Amazon\ElasticBeanstalk\HealthD\Logs\*C:\Program Files\Amazon\ElasticBeanstalk\logs\*C:\cfn\log\*C:\inetpub\logs\*Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionYou can run the automation using the AWS Systems Manager console or AWS CLI.Console:1. Open the Systems Manager console.2. On the navigation pane, in the Change Management section, choose Automation.3. Choose Execute automation.4. On the Owned by Amazon tab, search for and select the AWSSupport-CollectElasticBeanstalkLogs automation document, and then choose Next.5. In the Input parameters section, enter the fields appropriate for your scenario.6. Choose Execute.To monitor the progress of your automation:1. On the navigation pane of the Systems Manager console, in the Change Management section, choose Automation.2. From the Execution ID column, choose your automation.3. Review the progress of your automation. The Execution steps section shows what stage the automation is currently in. The Outputs section includes logs that can help you troubleshoot issues if your automation fails.AWS CLI:1. Collect bundle logs and upload them to the default S3 bucket for Elastic Beanstalk in your account:aws ssm start-automation-execution --document-name "AWSSupport-CollectElasticBeanstalkLogs"\ --parameters "InstanceId=INSTANCEID,\ EnvironmentId=ENVIRONMENTID,\ AutomationAssumeRole=AUTOMATIONROLE"2. Collect bundle logs and upload them to your S3 bucket:aws ssm start-automation-execution --document-name "AWSSupport-CollectElasticBeanstalkLogs"\ --parameters "InstanceId=INSTANCEID,\ EnvironmentId=ENVIRONMENTID,\ S3BucketName=BUCKETNAME,\ S3BucketPath=BUCKETPATH,\ AutomationAssumeRole=AUTOMATIONROLE"3. Retrieve the execution output:aws ssm get-automation-execution --automation-execution-id EXECUTIONID --output text --query 'AutomationExecution.Outputs'Follow" | https://repost.aws/knowledge-center/elastic-beanstalk-windows-instance-logs |
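The same automation can be started from an SDK instead of the AWS CLI. The following boto3 sketch mirrors the CLI example above; the instance ID, environment ID, bucket, prefix, and role ARN are placeholders.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # hypothetical Region

execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-CollectElasticBeanstalkLogs",
    Parameters={
        "InstanceId": ["i-0123456789abcdef0"],          # hypothetical values
        "EnvironmentId": ["e-abcdefghij"],
        "S3BucketName": ["my-log-bucket"],
        "S3BucketPath": ["beanstalk-logs/"],
        "AutomationAssumeRole": ["arn:aws:iam::111122223333:role/AutomationRole"],
    },
)

# Retrieve the execution status and outputs, as in step 3 of the CLI example.
execution_id = execution["AutomationExecutionId"]
details = ssm.get_automation_execution(AutomationExecutionId=execution_id)
print(details["AutomationExecution"]["AutomationExecutionStatus"])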
How can I automate the configuration of HTTP proxy for EKS worker nodes with user data? | I want to automate the configuration of HTTP proxy for Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes with user data. | "I want to automate the configuration of HTTP proxy for Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes with user data.Short descriptionTo set up a proxy on worker nodes, you must configure the necessary components of your Amazon EKS cluster to communicate from the proxy. Components include, but are not limited to, the kubelet systemd service, kube-proxy, aws-node pods, and yum update. To automate the configuration of proxy for worker nodes with Docker runtime, do the following: Note: The following resolution applies only nodes where the underlying runtime is Docker, and doesn't apply to nodes with containerd runtime. For nodes with containerd runtime, see How can I automate the configuration of HTTP proxy for EKS containerd nodes?ResolutionFind your cluster's IP CIDR block:$ kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}'; echoThe preceding command returns either 10.100.0.1 or 172.20.0.1. This means that your cluster IP CIDR block is either 10.100.0.0/16 or 172.20.0.0/16.Create a ConfigMap file named proxy-env-vars-config.yaml based on the output from the command in step 1.If the output has an IP from the range 172.20.x.x, then structure your ConfigMap file as follows:apiVersion: v1kind: ConfigMapmetadata: name: proxy-environment-variables namespace: kube-systemdata: HTTP_PROXY: http://customer.proxy.host:proxy_port HTTPS_PROXY: http://customer.proxy.host:proxy_port NO_PROXY: 172.20.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.comNote: Replace VPC_CIDR_RANGE with the IPv4 CIDR block of your cluster's VPC.If the output has an IP from the range 10.100.x.x, then structure your ConfigMap file as follows:apiVersion: v1kind: ConfigMapmetadata: name: proxy-environment-variables namespace: kube-systemdata: HTTP_PROXY: http://customer.proxy.host:proxy_port HTTPS_PROXY: http://customer.proxy.host:proxy_port NO_PROXY: 10.100.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.comNote: Replace VPC_CIDR_RANGE with the IPv4 CIDR block of your cluster's VPC.Amazon EKS clusters with private API server endpoint access, private subnets, and no internet access require additional endpoints. If you're building a cluster with the preceding configuration, then you must create and add endpoints for the following services:Amazon Elastic Container Registry (Amazon ECR)Amazon Simple Storage Service (Amazon S3)Amazon Elastic Compute Cloud (Amazon EC2)Amazon Virtual Private Cloud (Amazon VPC)For example, you can use the following endpoints: api.ecr.us-east-1.amazonaws.com, dkr.ecr.us-east-1.amazonaws.com, s3.amazonaws.com, s3.us-east-1.amazonaws.com, and ec2.us-east-1.amazonaws.com.Important: You must add the public endpoint subdomain to the NO_PROXY variable. For example, add the .s3.us-east-1.amazonaws.com domain for Amazon S3 in the us-east-1 AWS Region. If you activate endpoint private access for your Amazon EKS cluster, then you must add the Amazon EKS endpoint to the NO_PROXY variable. 
For example, add the .us-east-1.eks.amazonaws.com domain for your Amazon EKS cluster in the us-east-1 AWS Region.Verify that the NO_PROXY variable in configmap/proxy-environment-variables (used by the kube-proxy and aws-node pods) includes the Kubernetes cluster IP address space. For example, 10.100.0.0/16 is used in the preceding code example for the ConfigMap file where the IP range is from 10.100.x.x.Apply the ConfigMap:$ kubectl apply -f /path/to/yaml/proxy-env-vars-config.yamlTo configure the Docker daemon and kubelet, inject user data into your worker nodes. For example:Content-Type: multipart/mixed; boundary="==BOUNDARY=="MIME-Version: 1.0--==BOUNDARY==Content-Type: text/cloud-boothook; charset="us-ascii"#Set the proxy hostname and portPROXY="proxy.local:3128"MAC=$(curl -s http://169.254.169.254/latest/meta-data/mac/)VPC_CIDR=$(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks | xargs | tr ' ' ',')#Create the docker systemd directorymkdir -p /etc/systemd/system/docker.service.d#Configure yum to use the proxycloud-init-per instance yum_proxy_config cat << EOF >> /etc/yum.confproxy=http://$PROXYEOF#Set the proxy for future processes, and use as an include filecloud-init-per instance proxy_config cat << EOF >> /etc/environmenthttp_proxy=http://$PROXYhttps_proxy=http://$PROXYHTTP_PROXY=http://$PROXYHTTPS_PROXY=http://$PROXYno_proxy=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.comNO_PROXY=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.comEOF#Configure docker with the proxycloud-init-per instance docker_proxy_config tee <<EOF /etc/systemd/system/docker.service.d/proxy.conf >/dev/null[Service]EnvironmentFile=/etc/environmentEOF#Configure the kubelet with the proxycloud-init-per instance kubelet_proxy_config tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null[Service]EnvironmentFile=/etc/environmentEOF#Reload the daemon and restart docker to reflect proxy configuration at launch of instancecloud-init-per instance reload_daemon systemctl daemon-reload cloud-init-per instance enable_docker systemctl enable --now --no-block docker--==BOUNDARY==Content-Type:text/x-shellscript; charset="us-ascii"#!/bin/bashset -o xtrace#Set the proxy variables before running the bootstrap.sh scriptset -asource /etc/environment/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}# Use the cfn-signal only if the node is created through an AWS CloudFormation stack and needs to signal back to an AWS CloudFormation resource (CFN_RESOURCE_LOGICAL_NAME) that waits for a signal from this EC2 instance to progress through either:# - CreationPolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html# - UpdatePolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html# cfn-signal will signal back to AWS CloudFormation using https transport, so set the proxy for an HTTPS connection to AWS CloudFormation/opt/aws/bin/cfn-signal --exit-code $? 
\ --stack ${AWS::StackName} \ --resource CFN_RESOURCE_LOGICAL_NAME \ --region ${AWS::Region} \ --https-proxy $HTTPS_PROXY--==BOUNDARY==--Important: You must update or create yum, Docker, and kubelet configuration files before starting the Docker daemon and kubelet.For an example of user data that's injected into worker nodes using an AWS CloudFormation template, see Launching self-managed Amazon Linux nodes.Update the aws-node and kube-proxy pods:$ kubectl patch -n kube-system -p '{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node$ kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxyIf you change the ConfigMap, then apply the updates, and set the ConfigMap in the pods again. For example:$ kubectl set env daemonset/kube-proxy --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'$ kubectl set env daemonset/aws-node --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'Important: You must reapply all YAML modifications to the Kubernetes objects kube-proxy or aws-node when these objects are upgraded. To update a ConfigMap to a default value, use the eksctl utils update-kube-proxy or eksctl utils update-aws-node commands.Tip: If the proxy loses connectivity to the API server, then the proxy becomes a single point of failure and your cluster's behavior can be unpredictable. To prevent your proxy from becoming a single point of failure, run your proxy behind a service discovery namespace or load balancer.Confirm that the proxy variables are used in the kube-proxy and aws-node pods:$ kubectl describe pod kube-proxy-xxxx -n kube-systemThe output should look similar to the following:Environment: HTTPS_PROXY: <set to the key 'HTTPS_PROXY' of config map 'proxy-environment-variables'> Optional: false HTTP_PROXY: <set to the key 'HTTP_PROXY' of config map 'proxy-environment-variables'> Optional: false NO_PROXY: <set to the key 'NO_PROXY' of config map 'proxy-environment-variables'> Optional: falseIf you're not using AWS PrivateLink, then verify access to API endpoints through a proxy server for Amazon EC2, Amazon ECR, and Amazon S3.Follow" | https://repost.aws/knowledge-center/eks-http-proxy-configuration-automation
How do I move a Direct Connect connection from one AWS account to another? | I want to move an AWS Direct Connect connection from one AWS account to another. | "I want to move an AWS Direct Connect connection from one AWS account to another.Short descriptionTo move a Direct Connect connection between AWS accounts, you must transfer ownership of that connection. The source (current owner) and destination (new owner) AWS accounts must each open separate AWS Support cases to transfer ownership of the Direct Connect connection. This process confirms that both the source and the destination AWS accounts agree to the transfer.The transfer of a Direct Connect connection changes ownership and billing of the connection to the destination account. No changes are made to the Direct Connect connection during the transfer. Also, no network outages occur during the transfer of ownership.Before beginning an ownership transfer, understand the following:Transfer of ownership is only available for Direct Connect dedicated connections. If you have a hosted connection from an AWS partner, then transfer of ownership isn't possible. Reach out to your Direct Connect partner and ask them to order a new hosted connection in the required account and to delete the previous connection.Virtual interfaces associated with the Direct Connect connection aren't part of the transfer. They remain associated with the AWS account where they were created.Direct Connect gateways can't be moved from one AWS account to another.ResolutionImportant: Make sure that the current owner and new owner agree to the Direct Connect connection transfer before opening an AWS Support case to start the ownership transfer.1. Open an AWS Support case from the source account (the current owner of the Direct Connect connection) containing the following information:Direct Connect connection IDCurrent owner account IDNew owner account IDThis statement or similar: I would like to migrate my Direct Connect connection (connection name/ID) from account (account number) to account (account number).2. Open an AWS Support case from the destination account (the new owner of the Direct Connect connection) containing the following information:Direct Connect connection IDCurrent owner account IDNew owner account IDThis statement or similar: I would like to migrate my Direct Connect connection (connection name/ID) from account (account number) to account (account number).3. Copy and save the case number/ID of the AWS Support case created in step 2.4. Update the source account's case with the destination account's case number/ID.5. Wait for AWS Support to review both cases and process the ownership transfer.6. When you receive confirmation of the migration completion, verify that the Direct Connect connection has moved to the new account. If you have any issues, reach out to AWS Support in the support case.Related informationAWS Direct Connect connectionsFollow" | https://repost.aws/knowledge-center/direct-connect-ownership-transfer
Why am I getting the error "FATAL: remaining connection slots are reserved for non replicate superuser connections" when connecting to my Amazon RDS for PostgreSQL even though I haven't reached the max_connections limit? | I'm getting the error "FATAL: remaining connection slots are reserved for non replicate superuser connections" when I connect to my Amazon Relational Database Service (Amazon RDS) for PostgreSQL even though I haven't reached the max_connections limit. | "I'm getting the error "FATAL: remaining connection slots are reserved for non replicate superuser connections" when I connect to my Amazon Relational Database Service (Amazon RDS) for PostgreSQL even though I haven't reached the max_connections limit.Short descriptionIn Amazon RDS for PostgreSQL, the actual maximum number of available connections to non-superusers is calculated as follows:max_connections - superuser_reserved_connections - rds.rds_superuser_reserved_connections.The default value for superuser_reserved_connections is 3, and the default value for rds.rds_superuser_reserved_connections is 2.For example, if you set the value of max_connections to 100, then the actual number of available connections for a non-superuser is calculated as follows:100 - 3 - 2 = 95.The Amazon CloudWatch metric DatabaseConnections indicates the number of client network connections to the database instance at the operating system level. This metric is calculated by measuring the actual number of TCP connections to the instance on port 5432. The number of database sessions might be higher than this metric value because the metric value doesn't include the following:Backend processes that no longer have a network connection but aren't cleaned up by the database. (For example: The connection is terminated due to network issues but the database isn't aware until it attempts to return the output to the client.)Backend processes created by the database engine job scheduler. (For example: pg_cron)Amazon RDS connections.You might get this error because the application that connects to the RDS for PostgreSQL instance abruptly creates and drops connections. This might cause the backend connection to remain open for some time. This condition might create a discrepancy between the values of pg_stat_activity view and the CloudWatch metric DatabaseConnections.ResolutionTroubleshoot the errorTo troubleshoot this error, perform the following checks:Review the CloudWatch metric DatabaseConnections.Use Performance Insights to view the numbackends counter metric. This value provides information on the number of connections at the time that the error occurred. If you didn't turn on Performance Insights, log in to your instance as the primary user. Then, view the number of backends by running the following query:SELECT count(*) FROM pg_stat_activity;If you find some idle connections that can be terminated, you can terminate these backends using the pg_terminate_backend() function. You can view all the idle connections that you want to terminate by running the following query. 
This query displays information about backend processes with one of the following states for more than 15 minutes: 'idle', 'idle in transaction', 'idle in transaction (aborted)' and 'disabled'.SELECT * FROM pg_stat_activityWHERE pid <> pg_backend_pid()AND state in ('idle', 'idle in transaction', 'idle in transaction (aborted)', 'disabled')AND state_change < current_timestamp - INTERVAL '15' MINUTE;Note: Be sure to update the query according to your use case.After identifying all the backend processes that must be terminated, terminate these processes by running the following query.Note: This example query terminates all backend processes in one of the states mentioned before for more than 15 minutes.SELECT pg_terminate_backend(pid) FROM pg_stat_activityWHERE pid <> pg_backend_pid()AND state in ('idle', 'idle in transaction', 'idle in transaction (aborted)', 'disabled')AND state_change < current_timestamp - INTERVAL '15' MINUTEAND usename != 'rdsadmin';To terminate all idle backend processes, run the following query:SELECT pg_terminate_backend(pid) FROM pg_stat_activityWHERE pid <> pg_backend_pid()AND state in ('idle', 'idle in transaction', 'idle in transaction (aborted)', 'disabled')AND usename != 'rdsadmin';Note: You can't terminate backend processes that are created with rdsadmin. Therefore, you must exclude them from termination.Important: If you can't connect to your RDS for PostgreSQL instance with the rds_superuser privileges, then consider closing your application gracefully to free some connections.Manage the number of database connectionsUse connection poolingIn most cases, you can use a connection pooler, such as an RDS Proxy or any third party connection pooler, to manage the number of connections that are open at any one given time. For example, if you set the max_connections value of your RDS for PostgreSQL instance to 500, you can prevent errors related to max_connection by having a connection pooler that's configured for a maximum of 400 connections.Increase max_connections valueYou might consider increasing the value of max_connections depending on your use case. However, setting a very high value for max_connections might result in memory issues based on the workload and instance class of the database instance.Note: If you increase the value of max_connections, you must reboot the instance for the change to take effect.Terminate idle connectionsYou might set the idle_in_transaction_session_timeout parameter to a value that's appropriate for your use case. Any session that's idle within an open transaction for longer than the time specified in this parameter is terminated. For example if you set this parameter to 10 minutes, any query that's idle in transaction for more than 10 minutes is terminated. This parameter helps in managing connections that are stuck in this particular state.For PostgreSQL versions 14 and later, you can use the idle_session_timeout parameter. After you set this parameter, any session that's idle for more than the specified time, but not within an open transaction, is terminated.For PostgreSQL versions 14 and later, you can use the client_connection_check_interval parameter. With this parameter, you can set the time interval between optional checks for client connection when running queries. The check is performed by polling the socket. This check allows long-running queries to be ended sooner if the kernel reports that the connection is closed. 
This parameter helps in situations where PostgreSQL doesn't know about the lost connection with a backend process.Increase the rds.rds_superuser_reserved_connections valueYou might consider increasing the value of the rds.rds_superuser_reserved_connections parameter. The default value for this parameter is set to 2. Increasing the value of this parameter allows for more connections from users with the rds_superuser role attached. With this role, the users can run administrative tasks, such as terminating an idle connection using the pg_terminate_backend() command.Follow" | https://repost.aws/knowledge-center/rds-postgresql-error-connection-slots |
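To see how close you are to the limit described above, you can compare max_connections with the current backend count directly from SQL. The following sketch uses psycopg2 as an assumed dependency; the endpoint and credentials are placeholders, and the per-state breakdown mirrors the pg_stat_activity queries above.

import psycopg2

# Hypothetical RDS endpoint and credentials; replace with your own.
conn = psycopg2.connect(
    host="my-postgres.xxxxxx.us-east-1.rds.amazonaws.com",
    dbname="postgres", user="postgres", password="********",
)

with conn, conn.cursor() as cur:
    cur.execute("SHOW max_connections;")
    max_connections = int(cur.fetchone()[0])

    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    backends = cur.fetchone()[0]

    cur.execute(
        "SELECT state, count(*) FROM pg_stat_activity "
        "WHERE pid <> pg_backend_pid() GROUP BY state;"
    )
    # Remember that a few slots are reserved for superusers and RDS itself.
    print(f"{backends} backends of max_connections={max_connections}")
    for state, count in cur.fetchall():
        print(state, count)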
How do I invoke a Lambda function using a cross-account Kinesis stream? | I want to invoke an AWS Lambda function using an Amazon Kinesis stream that's in another AWS account. How do I set that up? | "I want to invoke an AWS Lambda function using an Amazon Kinesis stream that's in another AWS account. How do I set that up?Short descriptionLambda doesn't currently support cross-account triggers from Kinesis.As a workaround, create a "poller" Lambda function in the same account as the Kinesis stream (account 1). Then, configure the function to invoke a second "processor" Lambda function in the other account (account 2).Warning: This configuration removes many of the benefits of using Kinesis Data Streams. You can't block records or create sequential ordering within a shard after completing this procedure. It's a best practice to use this workaround only if your application doesn't need those features.ResolutionConfigure a "poller" Lambda function in the same account as the Kinesis stream (account 1)1. Create a Lambda function in account 1 that includes an execution role.Note: You can create the function using the Lambda console, or by building and uploading your own deployment package.2. Add the required permissions for your Kinesis stream to the function's execution role.3. Configure the Kinesis stream as the function's event source.Important: When you create the event source mapping, make sure that the Lambda function and the Kinesis stream are in the same account.Configure a "processor" Lambda function in the second account (account 2)1. Create a Lambda function in account 2 that includes an execution role.2. Create an AWS Identity and Access Management (IAM) role (invocation role) in account 2.Note: This invocation role is assumed by the "poller" function in account 1 to invoke the "processor" function in account 2.3. Modify the invocation role's policy in account 2 in the following ways:Give the invocation role permissions to invoke (using the lambda:InvokeFunction action) the "processor" Lambda function in account 2.Edit the trust relationship to allow the "poller" Lambda function's execution role in account 1 to assume the invocation role in account 2.For more information, see Identity-based IAM policies for Lambda and Creating a role to delegate permissions to an AWS service.Configure the "poller" Lambda function in account 1 to assume the invocation role in account 21. Give the execution role policy in account 1 permissions to call the AssumeRole API and assume the invocation role in account 2. Make sure that you use the sts:AssumeRole action. For more information, see Granting a user permissions to switch roles.2. Update the "poller" function in account 1 so that it does the following:Assumes the invocation role in account 2. For instructions, see Switching to an IAM role (AWS API).Passes the input event from Kinesis to the "processor" function in account 2. To have the function pass the input event, instantiate a service client and use the appropriate SDK method to request asynchronous invocation (Event invocation type).Note: To determine the appropriate SDK method to call, see the SDK documentation for your runtime.3. Configure a dead-letter queue (DLQ) for the function in account 2. 
This allows you to investigate or retry any missed events if a function error occurs.Configure a destination for failed-event recordsConfigure a destination for failed-event records for the event source that's set up for the "poller" function in account 1.Note: Configuring a destination for failed-event records tells Lambda to send details about discarded records to the destination queue or topic.Related informationUsing AWS Lambda with Amazon KinesisTutorial: Using AWS Lambda with Amazon KinesisServerless cross account stream replication using AWS Lambda, Amazon DynamoDB, and Amazon Kinesis FirehoseEasy authorization of AWS Lambda functionsFollow" | https://repost.aws/knowledge-center/lambda-cross-account-kinesis-stream |
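A minimal sketch of the "poller" function in account 1 described above: it assumes the invocation role in account 2 and forwards the Kinesis batch to the "processor" function asynchronously. The role ARN and function ARN are placeholders, and error handling and batching refinements are omitted for brevity.

import json
import boto3

STS = boto3.client("sts")

# Hypothetical invocation role and processor function in account 2.
INVOCATION_ROLE_ARN = "arn:aws:iam::222222222222:role/cross-account-invoke-role"
PROCESSOR_FUNCTION = "arn:aws:lambda:us-east-1:222222222222:function:processor"

def handler(event, context):
    # Assume the invocation role in account 2, as configured above.
    creds = STS.assume_role(
        RoleArn=INVOCATION_ROLE_ARN,
        RoleSessionName="kinesis-poller",
    )["Credentials"]

    lambda_account2 = boto3.client(
        "lambda",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Event invocation type = asynchronous invocation, as described above.
    lambda_account2.invoke(
        FunctionName=PROCESSOR_FUNCTION,
        InvocationType="Event",
        Payload=json.dumps(event).encode("utf-8"),
    )
    return {"forwardedRecords": len(event.get("Records", []))}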
How do I assign a static source IP address for all instances in a load-balanced Elastic Beanstalk environment? | I want to assign a single static IP address to my load balanced AWS Elastic Beanstalk environment to identify traffic coming from it. | "I want to assign a single static IP address to my load balanced AWS Elastic Beanstalk environment to identify traffic coming from it.Short descriptionYou can use a network address translation (NAT) gateway to map multiple IP addresses into a single, publicly exposed IP address. When your Elastic Beanstalk environment uses a NAT gateway, the backend instances in your environment are launched in private subnets. Elastic Beanstalk routes outbound traffic through the NAT gateway. You can identify the source of the outbound traffic from the backend instances by the Elastic IP address. The Elastic IP address is a static IP address required by the NAT gateway.ResolutionElastic Beanstalk launches your Amazon Elastic Compute Cloud (Amazon EC2) instances into private subnets. These private subnets use a NAT gateway with an attached Elastic IP address as a default route. The load balancer is in public subnets. Elastic Beanstalk routes all external traffic to and from the load balancer through an internet gateway.Important: Before getting started, set up a NAT gateway.Open the Elastic Beanstalk console.Choose Create New Application, and then complete the steps to create your application.Note: If you're using an existing application, then skip to step 3 and configure your environment.Choose Actions, and then choose Create environment.Choose Web server environment, and then choose Select.Choose the options in the Environment information and rest of the Platform and Application Code sections based on the needs of your environment.Choose Configure more options.For Configuration presets, choose High availability. This sets your environment to a load-balanced environment.For the Network card, choose Edit, and then edit as follows:For VPC, choose your VPC.In the Load balancer settings section, for Visibility, choose Public.In the Load balancer subnets table, choose the public subnets.In the Instance settings section, clear Public IP address.In the Instance subnets table, choose only private subnets with the NAT gateway that you set up earlierChoose Save.Choose Create environment.The Elastic Beanstalk environment that you created now has all outbound traffic originating from a single Elastic IP address.Follow" | https://repost.aws/knowledge-center/elastic-beanstalk-static-IP-address |
How does AWS WAF handle body inspections for HTTP requests? | How does AWS WAF handle body inspections for HTTP requests? | "How does AWS WAF handle body inspections for HTTP requests?ResolutionAWS WAF inspects the first 8 KB (8,192 bytes) of the request body. This is a hard service limit and can't be changed.For example:If the body is 5,000 bytes: All the content in the body can be inspected by AWS WAF.If the body is 8,500 bytes: Contents from bytes 1 through 8,192 bytes are inspected by AWS WAF. All content from 8,193 bytes to 8,500 bytes isn't inspected.This limit is important when configuring rules because AWS WAF can't check the body content after 8,192 bytes. Any XSS or SQL injection attack pattern that appears after the first 8,192 bytes won't be detected.To protect against attacks on uninspected body portions, use one of the following:AWS Managed Rules Core rule setThe SizeRestrictions_BODY rule within the AWS Managed Rules Core rule set (CRS) checks request bodies that are over 8 KB (8,192 bytes). Request bodies over 8 KB are blocked.Custom body inspection ruleWhen you configure a custom body inspection rule, you can choose the oversize request handling action. This action takes effect when the request body is larger than 8,192 bytes.For example, you configure a custom rule that inspects the request body for XSS injection attacks, and your request body is 9,000 bytes. You can choose from the following oversize handling actions:Continue: AWS WAF inspects bytes 1 through 8,192 of the body content for an XSS attack. The remaining content from bytes 8,193 through 9,000 isn't inspected.Match: AWS WAF marks this request as containing an XSS attack and takes the rule action (either ALLOW or BLOCK). It doesn’t matter whether the request body includes an XSS attack pattern or not.Not match: AWS WAF marks this request as not containing an XSS attack regardless of the request body content.When using the AWS Managed Core rule set, legitimate requests with a body size larger than 8,192 bytes might be blocked by the SizeRestrictions_BODY rule. You can create an allow rule to explicitly allow the request.For example, if a customer has a legitimate request from the URL “/upload”, you can configure the rules as follows:1. In your web ACL, override the SizeRestrictions_BODY rule action to Count in the rule group.2. Add a label matching rule to your web ACL after the Core rule set. Use the following logic in the rule:Has a label “awswaf:managed:aws:core-rule-set:SizeRestrictions_Body”AND NOT (URL path contains “/upload”)Action: BLOCKUsing the preceding configuration, requests with the URL "/upload" that have a body size larger than 8,192 bytes are allowed. Any requests that aren't from this URL are blocked.Related informationOversize handling for request componentsFollow" | https://repost.aws/knowledge-center/waf-http-request-body-inspection
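The oversize handling choice described above is configured per rule on the body field to match. The following boto3 sketch is a hedged illustration, not an official template: it assumes the wafv2 FieldToMatch Body setting accepts an OversizeHandling value of MATCH, uses placeholder names and REGIONAL scope, and calls check_capacity only to validate the rule shape without changing any web ACL.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # hypothetical Region

rule = {
    "Name": "body-xss-with-oversize-match",   # hypothetical rule name
    "Priority": 1,
    "Statement": {
        "XssMatchStatement": {
            # MATCH: treat any body larger than the inspection limit as an XSS hit.
            "FieldToMatch": {"Body": {"OversizeHandling": "MATCH"}},
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BodyXssOversizeMatch",
    },
}

# Returns the WCU cost of the rule without deploying it to a web ACL.
print(wafv2.check_capacity(Scope="REGIONAL", Rules=[rule]))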
How can I assign a static IP address to a Lambda function? | I want to assign a static IP address to an AWS Lambda function. | "I want to assign a static IP address to an AWS Lambda function.ResolutionFollow these steps to assign a static IP address to a Lambda function.Step 1: Connect a Lambda function to an Amazon Virtual Private Cloud (Amazon VPC)Configure your Lambda function to connect to an Amazon VPC. The Lambda function is assigned an elastic network interface (ENI) with a private IP address. The Lambda elastic network interface private IP address can't be assumed as the static IP address because it's changed during the elastic network interface lifecycle.For more information, see Configuring a Lambda function to access resources in a VPC.Note: It's a best practice not to place Lambda functions in an Amazon VPC unless the function must access other resources in the Amazon VPC.Step 2: Grant internet access to a Lambda function in an Amazon VPCInternet access from a private subnet requires network address translation (NAT). To give internet access to an Amazon VPC-connected Lambda function, route its outbound traffic to a NAT gateway or NAT instance in a public subnet. Make sure that the NAT gateway or NAT instance has a route to an internet gateway.For more information, see How do I give internet access to a Lambda function that's connected to an Amazon VPC?Step 3: Associate the NAT gateway or instance with an Elastic IP addressAssociate an Elastic IP address with the public NAT gateway or instance. The NAT gateway or instance replaces the source IP address of the instances with the Elastic IP address. This Elastic IP address can be assumed as the static IP address for the Lambda function.Note:It's a best practice to create multiple subnets across different Availability Zones. This practice creates redundancy and allows the Lambda service to maintain high availability for your function.You can't associate an Elastic IP address with a private NAT gateway or instance.You're limited to associating 2 Elastic IP addresses to your public NAT gateway or instance by default. For more information, see Elastic IP addresses quotas.Related informationConnect to the internet using an internet gatewayInternet and service access for VPC-connected functionsFollow" | https://repost.aws/knowledge-center/lambda-static-ip |
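To confirm that outbound traffic from the VPC-connected function really leaves through the NAT gateway's Elastic IP, the function itself can check its public source address. The following is a small sketch of a handler that calls the checkip.amazonaws.com endpoint using only the standard library; it assumes the function already has the NAT route described above.

import urllib.request

def handler(event, context):
    # The address returned here should match the Elastic IP that is
    # associated with the public NAT gateway.
    with urllib.request.urlopen("https://checkip.amazonaws.com", timeout=5) as resp:
        egress_ip = resp.read().decode("utf-8").strip()
    print(f"Outbound traffic appears to come from {egress_ip}")
    return {"egressIp": egress_ip}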
Why are my daily SMS usage reports from Amazon SNS not appearing in my Amazon S3 bucket? | "I subscribed to daily SMS usage reports from Amazon Simple Notification Service (Amazon SNS). However, the reports aren't appearing in the Amazon Simple Storage Service (Amazon S3) bucket that I created to receive the reports. How do I troubleshoot the issue?" | "I subscribed to daily SMS usage reports from Amazon Simple Notification Service (Amazon SNS). However, the reports aren't appearing in the Amazon Simple Storage Service (Amazon S3) bucket that I created to receive the reports. How do I troubleshoot the issue?ResolutionImportant: It takes 24 hours for an SMS usage report to be available in the Amazon S3 bucket that you created to receive the reports. If more than 24 hours has passed and the report still hasn't populated in your Amazon S3 bucket, then follow the troubleshooting steps in this article.Verify that your S3 bucket policy has the required permissionsReview your S3 bucket policy to confirm that it allows Amazon SNS to perform the following actions:s3:PutObjects3:GetBucketLocations3:ListBucketTo review and edit your S3 bucket policy, follow the instructions in Adding a bucket policy using the Amazon S3 console. For an example bucket policy that includes the required permissions, see Subscribing to daily usage reports.Verify that you've subscribed the correct S3 bucket to receive daily SMS usage reportsFollow the instructions in the To subscribe to daily usage reports section of Subscribing to daily usage reports. Make sure that the S3 bucket that you subscribed is the same bucket that you verified has the required permissions to receive the reports.Test the setupSend two or more SMS messages using Amazon SNS. If your S3 bucket is configured correctly, then Amazon SNS puts a CSV file with usage data in the following location after 24 hours:<my-s3-bucket>/SMSUsageReports/<region>/YYYY/MM/DD/00x.csv.gzNote: Each file can contain up to 50,000 records. If the records for a day exceed this quota, then Amazon SNS adds multiple files. For an example daily usage report, see Viewing daily SMS usage reports.Follow" | https://repost.aws/knowledge-center/sns-not-getting-daily-sms-usage-reports |
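If you prefer to subscribe to the daily reports programmatically rather than in the console, a minimal boto3 sketch might look like the following; the bucket name is a placeholder, and the bucket policy described above must already be in place.

```python
import boto3

sns = boto3.client("sns")

# Point daily SMS usage reports at the (placeholder) bucket that has the
# required s3:PutObject, s3:GetBucketLocation, and s3:ListBucket permissions.
sns.set_sms_attributes(
    attributes={"UsageReportS3Bucket": "my-sms-usage-reports-bucket"}
)
```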
How can I upgrade my Amazon Aurora MySQL DB cluster to a new version? | I have an Amazon Aurora MySQL-Compatible DB cluster that is currently running version 2.x. How can I upgrade it to Aurora MySQL-Compatible version 3.x? | "I have an Amazon Aurora MySQL-Compatible DB cluster that is currently running version 2.x. How can I upgrade it to Aurora MySQL-Compatible version 3.x?Short descriptionAmazon Aurora versions 2.x are MySQL 5.7 compatible and Amazon Aurora versions 3.x are MySQL 8.0 compatible. Currently, Amazon Relational Database Service (Amazon RDS) doesn't allow in-place upgrade of Aurora MySQL 2.x clusters to Aurora MySQL 3.x. In-place upgrades apply only to Aurora MySQL 1.x clusters to Aurora MySQL 2.x.Note: Perform the update on a non-production DB cluster first. Then, monitor how the changes in the new version affect your instances and applications.Upgrade your Amazon Aurora MySQL DB cluster from version 2.x to version 3.x using the following methods:Take a snapshot of your DB cluster and then restore it to Aurora MySQL 3.xSet up manual replication to switch your serviceUse the AWS Database Migration Service (AWS DMS) to migrate your serviceNote: Downtime varies depending on the method that you use.ResolutionUpgrade using SnapshotFollow these steps to upgrade from Aurora MySQL 2.x to Aurora MySQL 3.x using a snapshot:Open the AWS RDS console.From the navigation pane, choose Databases, and then choose your Aurora 2.x DB cluster.Choose Actions, and then choose Take Snapshot.From the navigation panel, choose Snapshots.After the snapshot of the DB cluster is created, choose the snapshot and then choose Actions.Choose Restore Snapshot.In the Instance Specification section, for DB Engine Version, choose one of the Aurora 3.X (compatible with MySQL 8.0.23) versions available.Enter the configuration details, and then choose Restore DB Instance.After the Aurora 3.x cluster becomes available, you can redirect connections to the new DB instance.Note: If you use a snapshot to upgrade your Aurora DB cluster from version 2.x to version 3.x, and your database supports a live application, stop the application before taking the snapshot. This makes sure that you don't lose recent changes to your data. Downtime occurs from the time the snapshot creation starts until the new database is created and enters a running state.Upgrade using manual replicationNote: When you set up manual replication to upgrade your application, downtime occurs when switching from Aurora MySQL 2.x to Aurora MySQL 3.x.Turn on binary logs on the source Aurora MySQL 2.x DB cluster.Increase the retention period of your DB cluster.Take a snapshot of the Aurora MySQL 2.x DB cluster.Restore the snapshot to Aurora MySQL version 3.x.Capture the bin log position from the restored DB cluster.Start the replication from Aurora MySQL 2.x to Aurora MySQL 3.x. For more information, see Configuring binary log file position replication with an external source instance.After the replication is in sync, point your application to Aurora MySQL 3.x.Upgrade using AWS DMSYou can also use AWS DMS to upgrade your application, with minimal downtime. This upgrade is more complex than the previous options. To perform this migration, create an Aurora MySQL DB instance version 3.x. Then, perform data replication from Aurora MySQL version 2.x to 3.x using AWS DMS. 
Downtime occurs when the application moves to Aurora MySQL 3.x.Related informationCreating a DB cluster snapshotRestoring from a DB cluster snapshotGetting started with AWS Database Migration ServiceFollow" | https://repost.aws/knowledge-center/aurora-mysql-upgrade-cluster |
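A minimal boto3 sketch of the snapshot-and-restore method described above; the cluster, snapshot, and instance identifiers are placeholders, and the 3.x engine version shown is only an example of a value you might pick from the versions available in your Region.

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing Aurora MySQL 2.x cluster.
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="aurora2-pre-upgrade-snapshot",
    DBClusterIdentifier="my-aurora2-cluster",
)
# Wait for the snapshot to reach the "available" state (for example, by polling
# describe_db_cluster_snapshots) before restoring.

# 2. Restore the snapshot as a new Aurora MySQL 3.x cluster.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="my-aurora3-cluster",
    SnapshotIdentifier="aurora2-pre-upgrade-snapshot",
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.02.0",  # example 3.x version
)

# 3. Add a DB instance to the restored cluster so it can serve connections.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora3-instance-1",
    DBClusterIdentifier="my-aurora3-cluster",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)
```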
How do I resolve "Invalid mapping expression specified" errors from API Gateway? | "I used an AWS CloudFormation template—or OpenAPI definition—to create an Amazon API Gateway API with a proxy resource. When I try to add URL path parameters to the API, I get an "Invalid mapping expression specified" error message. How do I resolve the error?" | "I used an AWS CloudFormation template—or OpenAPI definition—to create an Amazon API Gateway API with a proxy resource. When I try to add URL path parameters to the API, I get an "Invalid mapping expression specified" error message. How do I resolve the error?Short descriptionAPI Gateway returns an Invalid mapping expression specified error when the proxy path parameter {proxy+} doesn't have a defined a URL path parameter mapping.To resolve the error, do the following:For an AWS CloudFormation template, define the RequestParameters property of the AWS::ApiGateway::Method section.For an OpenAPI definition, define the parameters section under the x-amazon-apigateway-any-method object.ResolutionFor CloudFormation templates1. Update the CloudFormation template so that the RequestParameters value is set to true.Example CloudFormation template RequestParameters..... ProxyMethod: Type: 'AWS::ApiGateway::Method' Properties: . . RequestParameters: method.request.path.proxy: true Integration: RequestParameters: integration.request.path.proxy: 'method.request.path.proxy' IntegrationHttpMethod: ANY . ....2. Update your API by using the CloudFormation template to update the AWS CloudFormation stack.Note: For more information on how to update API Gateway resources, see Amazon API Gateway resource type reference.For OpenAPI definitions1. Update your API definition so that the parameters under the x-amazon-apigateway-any-method section have the following values: "x-amazon-apigateway-any-method": { "parameters": [ { "name": "proxy", "in": "path", "required": true, "type": "string" } ]........}2. Update your API by importing your updated API definition file into API Gateway.Note: For more information, see Describing parameters on the OpenAPI website.Test the setup1. In the API Gateway console, choose the name of your API.2. Add URL path parameters as needed for your use case. If your proxy path parameter includes a correctly defined a URL path parameter mapping, then no error appears.Related informationx-amazon-apigateway-integration.requestParameters objectSet up a proxy integration with a proxy resourceSetting up data transformations for REST APIsFollow" | https://repost.aws/knowledge-center/api-gateway-proxy-path-parameter-error |
How do I configure a white label name server using Route 53? | I want to configure a white label name server for multiple domains using Amazon Route 53. How can I do this? | "I want to configure a white label name server for multiple domains using Amazon Route 53. How can I do this?ResolutionYou can use white label name servers (also known as vanity name servers or private name servers) to use the same name server for multiple domains. For example, if you owned both example.com and example.net, you could set up one name server to serve requests for both domains.For instructions on how to set up a white label name server, see Configuring white label name servers.Related informationWorking with public hosted zonesFollow" | https://repost.aws/knowledge-center/route53-white-label-name-server |
Why does the rollover index action in my ISM policy keep failing in Amazon OpenSearch Service? | "I want to use Index State Management (ISM) to roll over my indices on my Amazon OpenSearch Service cluster. However, my index fails to roll over, and I receive an error. Why is this happening and how do I resolve this?" | "I want to use Index State Management (ISM) to roll over my indices on my Amazon OpenSearch Service cluster. However, my index fails to roll over, and I receive an error. Why is this happening and how do I resolve this?Short descriptionIf you received a "Failed to rollover index" error, your rollover action might have failed for one of the following reasons:The rollover target doesn't exist.The rollover alias is missing.The index name doesn't match the index pattern.The rollover alias is pointing to a duplicated alias in an index template.You have maximum resource utilization on your cluster.To resolve this issue, use the explain API to identify the cause of your error. Then, check your ISM policy. For more information about setting up the rollover action in your ISM policy, see How do I use ISM to manage low storage space in OpenSearch Service?Note: The following resolution applies only to the OpenSearch API. For the legacy Open Distro API, refer to Open Distro's ISM API operations.ResolutionUsing the explain APITo identify the root cause of your "Failed to rollover index" error, use the explain API:GET _plugins/_ism/explain/logs-000001?prettyHere's an example output of the explain API:{ "logs-000001": { "index.plugins.index_state_management.policy_id": "rollover-workflow", "index": "logs-000001", "index_uuid": "JUWl2CSES2mWYXqpJJ8qlA", "policy_id": "rollover-workflow", "policy_seq_no": 2, "policy_primary_term": 1, "rolled_over": false, "state": { "name": "open", "start_time": 1614738037066 }, "action": { "name": "rollover", "start_time": 1614739372091, "index": 0, "failed": true, "consumed_retries": 0, "last_retry_time": 0 }, "retry_info": { "failed": false, "consumed_retries": 0 }, "info": { "cause": "rollover target [rolling-indices] does not exist", "message": "Failed to rollover index [index=logs-000001]" } }}This example output shows that the indices failed to roll over because the target rollover alias (rolling-indices) didn't exist.The rollover target doesn't existIf the explain API returns the cause as "rollover target [rolling-indices] does not exist", then check whether the index was bootstrapped with the rollover alias:GET _cat/aliasesThe output lists all the current aliases in the cluster and their associated indices. If ISM indicates that your rollover target doesn't exist, then a rollover alias name and failed index association are missing.To resolve the failed index association, attach the rollover alias to the index:POST /_aliases{ "actions": [{ "add": { "index": "logs-000001", "alias": "my-data" } }]}After you attach the rollover alias, retry the rollover action on the managed index in OpenSearch Service:POST _plugins/_ism/retry/logs-000001For more information, see Retry failed index on the OpenSearch website.When you retry the failed index, you might receive an "Attempting to retry" status message. If OpenSearch Service is attempting to retry, then wait for the next ISM cycle to run. ISM cycles run every 30 to 48 minutes. 
If the rollover action is successful, then you receive the following message: "Successfully rolled over index".The rollover alias is missingIf the explain API output identifies the cause of your rollover failure to be a missing rollover alias, then check the settings of the failed index:GET <failed-index-name>/_settingsIf you see that the index.plugins.index_state_management.rollover_alias setting is missing, then manually add the setting to your index:PUT /<failed-index-name>/_settings{ "index.plugins.index_state_management.rollover_alias" : "<rollover-alias>"}Use the retry failed index API to retry the rollover operation on the failed index. While the rollover action is being retried, update your policy template:PUT _index_template/<template-name>Make sure to use the same settings from your existing policy template so that your rollover alias is applied to the newly created indices. For example:PUT _index_template/<existing-template> { "index_patterns": [ "<index-pattern*>" ], "template": { "settings": { "plugins.index_state_management.rollover_alias": "<rollover-alias>" } }}The index name doesn't match the index patternIf your ISM policy indicates that your rollover operation failed because your index name and index pattern don't match, then check the failed index's name. For successful rollovers, the index names must match the following regex pattern:`^.*-\d+$`This regex pattern conveys that index names must include text followed by a hyphen (-), and one or more digits. If the index name doesn't follow this pattern, and your first index has data written onto it, then consider re-indexing the data. When you re-index the data, use the correct name for your new index. For example:POST _reindex{ "source": { "index": "<failed-index>" }, "dest": { "index": "my-new-index-000001" }}While the reindex data API is running, detach the rollover alias from the failed index. Then, add the rollover alias to the new index so that the data source can continue to write the incoming data to a new index.For example:POST /_aliases{ "actions": [{ "remove": { "index": "<failed-index>", "alias": "<rollover-alias>" } }, { "add": { "index": "my-new-index-000001", "alias": "<rollover-alias>" } }]}Manually attach the ISM policy to the new index using the following API call:POST _plugins/_ism/add/my-new-index-*{ "policy_id": "<policy_id>"}Update the existing template to reflect the new index pattern name. For example:PUT _index_template/<existing-template> { "index_patterns": ["<my-new-index-pattern*>"],}Note: Your ISM policy and rollover alias must reflect the successive indices created with the same index pattern.The rollover alias is pointing to a duplicated alias in an index templateIf the explain API indicates that your index rollover failed because a rollover alias is pointing to a duplicated alias, then check your index template settings:GET _index_template/<template-name>Check whether your template contains an additional aliases section (with another alias that points to the same index):{ "index_patterns": ["my-index*"], "settings": { "index.plugins.index_state_management.rollover_alias": "<rollover-alias>" }, "aliases": { "another_alias": { "is_write_index": true } }}The presence of an additional alias confirms the reason for your rollover operation failure, because multiple aliases cause the rollover to fail. 
To resolve this failure, update the template settings without specifying any aliases:PUT _index_template/<template-name>Then, perform the retry API on the failed index:POST _plugins/_ism/retry/logs-000001Important: If an alias points to multiple indices, then make sure that only one index has write access enabled. The rollover API automatically enables write access for the index that the rollover alias points to. This means that you don't need to specify any aliases for the "is_write_index" setting when you perform the rollover operation in ISM.You have maximum resource utilization on your clusterThe maximum resource utilization on your cluster could be caused by either a circuit breaker exception or lack of storage space.Circuit breaker exceptionIf the explain API returns a circuit breaker exception, your cluster was likely experiencing high JVM memory pressure when the rollover API was called. To troubleshoot JVM memory pressure issues, see How do I troubleshoot high JVM memory pressure on my OpenSearch Service cluster?After the JVM memory pressure falls below 75%, you can retry the activity on the failed index with the following API call:POST _plugins/_ism/retry/<failed-index-name>Note: You can use index patterns (*) to retry the activities on multiple failed indices.If you experience infrequent JVM spikes on your cluster, you can also update the ISM policy with the following retry block for the rollover action:{ "actions": { "retry": { "count": 3, "backoff": "exponential", "delay": "10m" } }}In your ISM policy, each action has an automated retry based on the count parameter. If your previous operation fails, check the "delay" parameter to see how long you'll need to wait for ISM to retry the action.Lack of storage spaceIf your cluster is running out of storage space, then OpenSearch Service triggers a write block on the cluster causing all write operations to return a ClusterBlockException. Your ClusterIndexWritesBlocked metric values shows a value of "1", indicating that the cluster is blocking requests. Therefore, any attempts to create a new index fail. The explain API call also returns a 403 IndexCreateBlockException, indicating that the cluster is out of storage space. To troubleshoot the cluster block exception, see How do I resolve the 403 "index_create_block_exception" error in OpenSearch Service?After the ClusterIndexWritesBlocked metric returns to "0", retry the ISM action on the failed index. If your JVM memory pressure exceeds 92% for more than 30 minutes, a write block could be triggered. If you encounter a write block, you must troubleshoot the JVM memory pressure instead. For more information about how to troubleshoot JVM memory pressure, see How do I troubleshoot high JVM memory pressure on my OpenSearch Service cluster?Follow" | https://repost.aws/knowledge-center/opensearch-failed-rollover-index |
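The explain and retry calls shown above can also be scripted; this minimal Python sketch assumes a domain with fine-grained access control and an internal master user (placeholder endpoint and credentials). With IAM-based access you would sign the requests with SigV4 instead.

```python
import requests

host = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder
auth = ("master-user", "master-password")                     # placeholder

# Why did the managed index fail?
explain = requests.get(f"{host}/_plugins/_ism/explain/logs-000001", auth=auth)
print(explain.json())

# Retry the failed ISM action after fixing the cause.
retry = requests.post(f"{host}/_plugins/_ism/retry/logs-000001", auth=auth)
print(retry.json())
```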
How do I update the mailing address associated with my AWS account? | "My mailing address changed, and I want to update my address information in my AWS account." | "My mailing address changed, and I want to update my address information in my AWS account.ResolutionYou can change the AWS account root user mailing address associated with an AWS account only after you sign in as that user.You can update the AWS account mailing address using an AWS Identity and Access Management (IAM) user if:The IAM user activated on the AWS account has administrator access permission.The IAM user has access to the AWS Billing console.Important: If your AWS account and Amazon.com retail account share the same email address and password, then updating the mailing address for one of the accounts changes information for the other account.To update your mailing address information, do the following:Log in the AWS Billing and Cost Management console for the AWS account that you want to update.If you're already logged in to the console and don't see your account information: select your account ID at the upper-right corner of the screen, and then select Account from the dropdown list.Choose Edit next to Contact Information.Update your contact information associated with your account, including your mailing address, telephone number, and website address.Choose Update.The address in PDF invoices that are downloadable from the AWS Billing Dashboard is related to your payment method. To update the billing address, see How do I change billing information on the PDF version of the AWS invoice that I receive by email?Related informationManaging an AWS accountHow do I update my telephone number associated with my AWS account?How do I change the email address that's associated with my AWS account?Follow" | https://repost.aws/knowledge-center/update-aws-account-mailing-address |
How do I allow a domain user to access the SQL Server instance on an EC2 instance? | I want to access the SQL Server instance on an Amazon Elastic Compute Cloud (Amazon EC2) instance as a domain user. How can I do this? | "I want to access the SQL Server instance on an Amazon Elastic Compute Cloud (Amazon EC2) instance as a domain user. How can I do this?Short descriptionBy default, only the built-in local administrator account can access a SQL Server instance launched from an Amazon Web Services (AWS) Windows AMI. You can use SQL Server Management Studio (SSMS) to add domain users so that they can access and manage SQL Server.ResolutionTo give a domain user access to the SQL Server instance, follow these steps:Connect to your instance using Remote Desktop Protocol (RDP) as a local administrator.Open SSMS. For Authentication, choose Windows Authentication to log in with the built-in local administrator.Choose Connect.In Object Explorer, expand Security.Open the context (right-click) menu for Logins, and then select New Login.For Login name, select Windows authentication. Enter DomainName\username, replacing DomainName with your domain NetBIOS name, and username with your Active Directory user name.On the Server Roles page, select the server roles that you want to grant to the Active Directory user.Select the General page, and then choose Ok.Log out from the instance, and then log in again as a domain user.Open SSMS. For Authentication, choose Windows authentication to log in with your domain user account.Choose Connect.Note: Performing these steps allows the user to access the SQL Server tables.Related informationTutorial: Get started with Amazon EC2 Windows instancesTroubleshoot EC2 Windows instancesFollow" | https://repost.aws/knowledge-center/ec2-domain-user-sql-server |
How do I get a report that shows the usage of individual member accounts in my organization in AWS Organizations? | I want to see a breakdown of usage by individual member accounts in my organization. | "I want to see a breakdown of usage by individual member accounts in my organization.ResolutionYou can filter the cost data associated with each member account in an organization using Cost Explorer. After you configure a report in Cost Explorer, you can save the data as a .csv file. For more information about setting up and using Cost Explorer, see Analyzing your costs with Cost Explorer.You can also have a daily AWS Cost and Usage report delivered to an Amazon Simple Storage Service (Amazon S3) bucket of your choice. For more information, see What are AWS Cost and Usage Reports?Note: Only someone signed in to the management account in an organization can activate and download billing reports.Related informationViewing your billFiltering the data that you want to viewWhat is AWS Organizations?Follow" | https://repost.aws/knowledge-center/consolidated-linked-billing-report |
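If you want the same per-account breakdown programmatically, a minimal boto3 sketch against the Cost Explorer API (run from the management account) might look like this; the time period values are placeholders.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

# Print the monthly unblended cost per member account.
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```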
How can I access OpenSearch Dashboards from outside of a VPC using Amazon Cognito authentication? | My Amazon OpenSearch Service cluster is in a virtual private cloud (VPC). I want to access the OpenSearch Dashboards endpoint from outside this VPC. | "My Amazon OpenSearch Service cluster is in a virtual private cloud (VPC). I want to access the OpenSearch Dashboards endpoint from outside this VPC.ResolutionUse one of these methods to access OpenSearch Dashboards from outside of a VPC using Amazon Cognito authentication:Use an SSH tunnelFor more information, see How can I use an SSH tunnel to access OpenSearch Dashboards from outside of a VPC with Amazon Cognito authentication?Advantages: Provides a secure connection over the SSH protocol. All connections use the SSH port.Disadvantages: Requires client-side configuration and a proxy server.Use an NGINX proxyFor more information, see How can I use an NGINX proxy to access OpenSearch Dashboards from outside a VPC that's using Amazon Cognito authentication?Advantages: Setup is easier because only server-side configuration is required. Uses standard HTTP (port 80) and HTTPS (port 443).Disadvantages: Requires a proxy server. The security level of the connection depends on how the proxy server is configured.(Optional) If fine-grained access control (FGAC) is turned on, add an Amazon Cognito authenticated roleIf fine-grained access control (FGAC) is turned on for your OpenSearch Service cluster, you might encounter a missing role error. To resolve the missing role error, perform the following steps:1. Sign in to the Amazon OpenSearch Service console.2. From the navigation pane, under Managed clusters, choose Domains.3. Choose Actions, and then choose Edit security configurations.4. Choose Set IAM ARN as your master user.5. In the IAM ARN field, add the ARN of the Amazon Cognito authenticated role.6. Choose Submit.For more information about fine-grained access control, see Tutorial: IAM master user and Amazon Cognito.Use VPNFor more information, see What is AWS Site-to-Site VPN?Advantages: Secure connection between your on-premises equipment and your VPCs. Uses standard TCP and UDP for TLS VPN.Disadvantages: Requires VPN setup and client-side configuration.Note: To allow or restrict access to resources, you must modify the VPC network configuration and the security groups associated with the OpenSearch Service domain. For more information, see Testing VPC domains.Related informationHow do I troubleshoot Amazon Cognito authentication issues with OpenSearch Dashboards?Configuring Amazon Cognito authentication for OpenSearch DashboardsI get a "User: anonymous is not authorized" error when I try to access my OpenSearch Service clusterFollow" | https://repost.aws/knowledge-center/opensearch-dashboards-vpc-cognito
How do I allow users to authenticate to an Amazon RDS MySQL DB instance using their IAM credentials? | I want to connect to an Amazon Relational Database Service (Amazon RDS) database (DB) instance that is running MySQL. I want to use AWS Identity and Access Management (IAM) credentials instead of using native authentication methods. How can I do that? | "I want to connect to an Amazon Relational Database Service (Amazon RDS) database (DB) instance that is running MySQL. I want to use AWS Identity and Access Management (IAM) credentials instead of using native authentication methods. How can I do that?Short descriptionUsers can connect to an Amazon RDS DB instance or cluster using IAM user or role credentials and an authentication token. IAM database authentication is more secure than native authentication methods because of the following:IAM database authentication tokens are generated using your AWS access keys. You don't need to store database user credentials.Authentication tokens have a lifespan of 15 minutes, so you don't need to enforce password resets.IAM database authentication requires a secure socket layer (SSL) connection. All data transmitted to and from your DB instance is encrypted.If your application is running on Amazon Elastic Compute Cloud (Amazon EC2), then you can use your EC2 instance profile credentials to access the database. You don't need to store database passwords on your instance.To set up IAM database authentication using IAM roles, follow these steps:Activate IAM DB authentication on the RDS DB instance.Create a database user account that uses an AWS authentication token.Add an IAM policy that maps the database user to the IAM role.Create an IAM role that allows Amazon RDS access.Attach the IAM role to the Amazon EC2 instance.Generate an AWS authentication token to identify the IAM role.Download the SSL root certificate file or certificate bundle file.Connect to the RDS DB instance using IAM role credentials and the authentication token.Connect to the RDS DB instance using IAM role credentials and SSL certificates.ResolutionBefore you begin, you must launch a DB instance that supports IAM database authentication and an Amazon EC2 instance to connect to the database.Activate IAM DB authentication on the RDS DB instanceYou can turn on IAM database authentication by using the Amazon RDS console, AWS Command Line Interface (AWS CLI), or the Amazon RDS API. If you use the Amazon RDS console to modify the DB instance, then choose Apply Immediately to activate IAM database authentication. Activating IAM Authentication requires a brief outage. For more information on which modifications require outages, see Amazon RDS DB instances.Note: If you choose Apply Immediately, any pending modifications are also applied immediately instead of during your maintenance window. This can cause an extended outage for your instance. For more information, see Using the Apply Immediately setting.Create a database user account that uses an AWS authentication token1. Connect to the DB instance or cluster endpoint by running the following command. Enter the master password to log in.$ mysql -h {database or cluster endpoint} -P {port number database is listening on} -u {master db username} -p2. Create a database user account that uses an AWS authentication token instead of a password:CREATE USER {dbusername} IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS';3. By default, the database user is created with no privileges. This appears as GRANT USAGE when you run SHOW GRANTS FOR {dbusername}. 
To require a user account to connect using SSL, run this command:ALTER USER {dbusername} REQUIRE SSL;4. Run the exit command to close MySQL. Then, log out from the DB instance.Add an IAM policy that maps the database user to the IAM role1. Open the IAM console.2. Choose Policies from the navigation pane.3. Choose Create Policy.4. Enter a policy that allows the rds-db:connect Action to the required user. For more information on creating this policy, see Creating and using an IAM policy for IAM database access.Note: Make sure to edit the Resource value with the details of your database resources, such as your DB instance identifier and database user name.5. Choose Next: Tags.6. Choose Next: Review.7. For Name, enter a policy name.8. Choose Create policy.Create an IAM role that allows Amazon RDS access1. Open the IAM console.2. Choose Roles from the navigation pane.3. Choose Create role.4. Choose AWS service.5. Choose EC2.6. For Select your use case, choose EC2, and then choose Next: Permissions.7. In the search bar, find the IAM policy that you previously created in the “Add an IAM policy that maps the database user” section.8. Choose Next: Tags.9. Choose Next: Review.10. For Role Name, enter a name for this IAM role.11. Choose Create Role.Attach the IAM role to the Amazon EC2 instance1. Open the Amazon EC2 console.2. Choose the EC2 instance that you use to connect to Amazon RDS.3. Attach your newly created IAM role to the EC2 instance.4. Connect to your EC2 instance using SSH.Generate an AWS authentication token to identify the IAM roleAfter you connect to your Amazon EC2 instance, run the following AWS CLI command to generate an authentication token.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, confirm that you're running a recent version of the AWS CLI.$ aws rds generate-db-auth-token --hostname {db or cluster endpoint} --port 3306 --username {db username}Copy and store this authentication token for later use. The token expires within 15 minutes of creation.Or, you can generate a token using an AWS SDK.Download the SSL root certificate file or certificate bundle fileRun this command to download the root certificate that works for all Regions:$ wget https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pemConnect to the RDS DB instance using IAM role credentials and the authentication tokenAfter you download the certificate file, run one of the following commands to connect to the DB instance with SSL.Note: If your application doesn't accept certificate chains, then run the following command to download the certificate bundle:$ wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pemRDSHOST="rdsmysql.abcdefghijk.us-west-2.rds.amazonaws.com"TOKEN="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 3306 --region us-west-2 --username {db username})"Depending on the certificate that you are using (RootCA or Bundle), run one of the following commands:RootCA command:mysql --host=$RDSHOST --port=3306 --ssl-ca=/sample_dir/rds-ca-2019-root.pem --enable-cleartext-plugin --user={db username} --password=$TOKENBundle command:mysql --host=$RDSHOST --port=3306 --ssl-ca=/sample_dir/rds-combined-ca-bundle.pem --enable-cleartext-plugin --user={db username} --password=$TOKENNote: If you're using a MariaDB client, the --enable-cleartext-plugin option isn't required.Connect to the RDS DB instance using IAM role credentials and SSL certificatesAfter you download the certificate file, connect to the DB instance with SSL. 
For more information, see Connecting to a DB instance running the MySQL database engine.Related informationIAM database authentication for MariaDB, MySQL, and PostgreSQLWhat are the least privileges required for a user to perform creates, deletes, modifications, backup, and recovery for an Amazon RDS DB instance?Follow" | https://repost.aws/knowledge-center/users-connect-rds-iam |
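The same token generation and SSL connection can also be done from Python; this is a minimal sketch assuming a recent PyMySQL, a placeholder endpoint and database user, and the root certificate downloaded to the working directory.

```python
import boto3
import pymysql  # pip install pymysql

host = "rdsmysql.abcdefghijk.us-west-2.rds.amazonaws.com"  # placeholder endpoint
user = "db_iam_user"                                       # placeholder DB user

rds = boto3.client("rds", region_name="us-west-2")
token = rds.generate_db_auth_token(
    DBHostname=host, Port=3306, DBUsername=user, Region="us-west-2"
)

# The 15-minute token is used as the password; SSL is required for IAM auth.
connection = pymysql.connect(
    host=host,
    port=3306,
    user=user,
    password=token,
    ssl_ca="rds-ca-2019-root.pem",
)
with connection.cursor() as cursor:
    cursor.execute("SELECT CURRENT_USER()")
    print(cursor.fetchone())
```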
How can I troubleshoot and resolve high CPU utilization on my DocumentDB instances? | I am experiencing high CPU utilization on my Amazon DocumentDB (with MongoDB compatibility) instances. How can I resolve high CPU utilization? | "I am experiencing high CPU utilization on my Amazon DocumentDB (with MongoDB compatibility) instances. How can I resolve high CPU utilization?Short DescriptionCPU Utilization on your Amazon DocumentDB instances can increase for the following reasons:User-initiated heavy workloadsNon-efficient queriesOverburdening the writer or specific DB instance in the cluster, instead of balancing the load within the cluster.Use the following resources to troubleshoot CPU performance issues:To determine why a DB instance is running slowly, see How Do I Determine Why a System Suddenly Runs Slowly?To identify and terminate long running queries on a DB instance, see How Do I Find and Terminate Long Running or Blocked Queries?To check if a query is progressing, see How Do I Know When a Query Is Making Progress?To determine why a query takes a long time to run, see How Can I See a Query Plan and Optimize a Query?ResolutionSplit workload using replicaSetIf you have a DocumentDB cluster with multiple DB instances, check if the writer CPU is high and if the readers are sitting idle. This means the writer is overloaded.To resolve this, split the workload using replicaSet or use multiple connection pools to route read queries to the reader DB instances.Specify the readPreference for your connectionWhen you connect as a replica set, you can specify the readPreference for the connection. If you specify a read preference of secondaryPreferred, the client routes read queries to your replicas, and write queries to your primary DB instance. The following example shows the connection string in Python:## Create a MongoDB client, open a connection to Amazon DocumentDB as a## replica set and specify the read preference as secondary preferredNote: Reads from a read-replica are eventually consistent.Add one or more reader instances to the clusterIf you have a DocumentDB cluster with a single DB instance (writer only), add one or multiple reader DB instances to the cluster. Then, use readPreference=secondaryPreferred to handle the load efficiently.Use Profiler to identify slow queriesIf the load is spread evenly across all replicas, use profiler to identify slow queries over time.Scale up the instance class of your DB instancesYou can also scale up the instance class of the DB instances in the DocumentDB cluster to handle the workload.Note: Scaling up the instance class increases the cost. Refer to DocumentDB pricing for more information.Related InformationPerformance and Resource UtilizationBest practices for DocumentDBFollow" | https://repost.aws/knowledge-center/documentdb-troubleshoot-high-cpu |
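The Python connection example referenced in the row above is cut off after its comments; the following pymongo sketch shows what such a replica-set connection with a secondaryPreferred read preference might look like. The cluster endpoint, credentials, and database/collection names are placeholders, and global-bundle.pem is assumed to be the Amazon DocumentDB CA bundle in the working directory.

```python
import pymongo  # pip install pymongo

# Connect as a replica set and route reads to reader instances.
client = pymongo.MongoClient(
    "mongodb://dbuser:dbpassword@"
    "mycluster.cluster-abcdefghij.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)

db = client.sampledb
# Reads go to replicas (eventually consistent); writes still go to the primary.
print(db.samplecollection.find_one())
```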
How do I direct traffic to specific resources or AWS Regions based on the query's geographic location? | "I have resources in multiple AWS Regions, and I want users to be routed to the resource that's geographically closest to them." | "I have resources in multiple AWS Regions, and I want users to be routed to the resource that's geographically closest to them.ResolutionTo route client requests to:The resource endpoint geographically closest to the source of the request.Based on the client’s location.Or, based on the client’s resolver location.You can use an Amazon Route 53 hosted zone with a geolocation routing policy:(Optional) If you haven't done so already, create a Route 53 hosted zone.In the Route 53 console, open the Hosted zones pane, select your hosted zone, and choose Go to Record Sets.Choose Create Record Set, and fill in the fields with the following:For Name, enter www.For Type, choose A-IPV4 Address.For Alias, choose Yes and select a resource endpoint.For Routing Policy, choose Geolocation. Then, for Location, choose the geographic region that you want to direct to the endpoint you selected.For Set ID, enter a unique friendly name that's meaningful to you for the record set.Repeat step 3 for any other geographic regions and endpoints you want to use.Note: To choose an endpoint for traffic to be routed to from regions other than the ones you specifically selected:Create a new record set by using these steps.Choose Default under Location.Related informationValues specific for geolocation recordsChoosing a routing policyFollow" | https://repost.aws/knowledge-center/geolocation-routing-policy |
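A minimal boto3 sketch of one geolocation record from the steps above; the hosted zone ID, record name, continent code, and alias target values are placeholders, and you would repeat the change (or add more entries) for each location plus a Default record.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1D633PJN98FT9",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "europe-users",         # friendly set ID
                    "GeoLocation": {"ContinentCode": "EU"},  # who gets this answer
                    "AliasTarget": {
                        "HostedZoneId": "Z32O12XQLNTSW2",    # placeholder target zone
                        "DNSName": "my-eu-alb-123456789.eu-west-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```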
Why did I receive an Amazon S3 GetBucketAcl permission error when updating my ACM Private Certificate Authority CRL configuration? | "I updated my AWS Certificate Manager (ACM) private certificate authority (CA) to configure a certificate revocation list (CRL). However, I received an error similar to the following:"The ACM Private CA Service Principal 'acm-pca.amazonaws.com' requires 's3:GetBucketAcl' permissions."How can I resolve this?" | "I updated my AWS Certificate Manager (ACM) private certificate authority (CA) to configure a certificate revocation list (CRL). However, I received an error similar to the following:"The ACM Private CA Service Principal 'acm-pca.amazonaws.com' requires 's3:GetBucketAcl' permissions."How can I resolve this?Short descriptionACM Private CA places the CRL into an Amazon Simple Storage Service (Amazon S3) bucket that you designate for use. Your Amazon S3 bucket must be secured by an attached permissions policy. Authorized users and service principals require Put permission to allow ACM Private CA to place objects in the bucket, and Get permission to retrieve them.For more information, see Access policies for CRLs in Amazon S3.ResolutionFollow these instructions to replace the default Amazon S3 policy with the following less permissive policy.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.1. Open the Amazon S3 console.2. From the list of buckets, open the bucket that you want to place the CRL in.3. Choose the Permissions tab.4. In Bucket policy, choose Edit.5. In Policy, copy and paste the following policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "acm-pca.amazonaws.com" }, "Action": [ "s3:PutObject", "s3:PutObjectAcl", "s3:GetBucketAcl", "s3:GetBucketLocation" ], "Resource": [ "arn:aws:s3:::your-crl-storage-bucket/*", "arn:aws:s3:::your-crl-storage-bucket" ], "Condition": { "StringEquals": { "aws:SourceAccount": "account", "aws:SourceArn": "arn:partition:acm-pca:region:account:certificate-authority/CA_ID" } } } ]}Note: Replace the S3 bucket name, account ID, and ACM PCA ARN with your variables.6. Choose Save changes.7. Follow the instructions to encrypt your CRLs.8. Update the CA revocation configuration using the AWS CLI command update-certificate-authority similar to the following:$ aws acm-pca update-certificate-authority --certificate-authority-arn <Certification_Auhtority_ARN> --revocation-configuration file://revoke_config.txtThe revoke_config.txt file contains revocation information similar to the following:{ "CrlConfiguration": { "Enabled": <true>, "ExpirationInDays": <7>, "CustomCname": "<example1234.cloudfront.net>", "S3BucketName": "<example-test-crl-bucket-us-east-1>", "S3ObjectAcl": "<BUCKET_OWNER_FULL_CONTROL>" }}Note:If you have disabled the Block Public Access (BPA) feature in Amazon S3, then you can specify either BUCKET_OWNER_FULL_CONTROL or PUBLIC_READ as the value.If you configured your CRL using the AWS Management Console, you might receive a "ValidationException" error. Repeat step 8 to update the CA revocation configuration using the AWS CLI.Related informationEnabling the S3 Block Public Access (BPA) featureSecurity best practices for Amazon S3GetBucketAclFollow" | https://repost.aws/knowledge-center/acm-ca-crl-s3-getbucketacl |
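Step 8 can also be performed without the JSON file by calling the API directly; this boto3 sketch mirrors the revoke_config.txt values shown above, with the CA ARN, CNAME, and bucket name as placeholders.

```python
import boto3

acmpca = boto3.client("acm-pca")

acmpca.update_certificate_authority(
    CertificateAuthorityArn=(
        "arn:aws:acm-pca:us-east-1:111122223333:"
        "certificate-authority/11111111-2222-3333-4444-555555555555"
    ),
    RevocationConfiguration={
        "CrlConfiguration": {
            "Enabled": True,
            "ExpirationInDays": 7,
            "CustomCname": "example1234.cloudfront.net",
            "S3BucketName": "example-test-crl-bucket-us-east-1",
            "S3ObjectAcl": "BUCKET_OWNER_FULL_CONTROL",
        }
    },
)
```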
How can I reset my WordPress login password in my Lightsail instance? | I want to change or reset the login password of my WordPress website in an Amazon Lightsail instance. | "I want to change or reset the login password of my WordPress website in an Amazon Lightsail instance.Short descriptionYou can change your WordPress admin password through one of two methods, depending on how your Bitnami stack is installed.Note: The following resolution applies only to your WordPress admin password, and doesn't address OS or database password recovery. For information on how to change your database password, see Modify the database password on the Bitnami website.ResolutionThe file paths that are used in the following steps depend on your Bitnami stack. Follow the resolution that's appropriate for your setup:The Bitnami stack uses native Linux system packagesThe Bitnami stack is a self-contained installationFor more information on Bitnami stack installation, see Understand upcoming changes to Bitnami Stacks on the Bitnami website.To identify your Bitnami installation type, run the following command:test ! -f "/opt/bitnami/common/bin/openssl" && echo "Approach A: Using system packages." || echo "Approach B: Self-contained installation."Run the following command in your instance to retrieve the admin login credentials of Lightsail WordPress websites:cat /home/bitnami/bitnami_credentialsThe user name for the login is always user. To reset the password of this user, follow the resolution steps that apply to your Bitnami stack.The Bitnami stack uses native Linux system packages1. Run the following command to see the list of login users in the database. You must enter the MySQL root password. This password is located in the/home/bitnami/bitnami_application_password file:mysql -u root -p bitnami_wordpress -e "SELECT * FROM wp_users;"Note: The password isn't displayed as you enter it so that it isn't visible to other users. If you receive an Access Denied error when using the preceding command, then reset the password. For more information, see Modify the default MariaDB administrator password and Modify the MySQL administrator password on the Bitnami site.2. Note the ID of the user that you want to reset the password for. Then, run the following command. Be sure to replace NEWPASSWORD with your desired password and ADMIN-ID with the user ID obtained in step 1:mysql -u root -p bitnami_wordpress -e "UPDATE wp_users SET user_pass=MD5('NEWPASSWORD') WHERE ID='ADMIN-ID';"Note: The preceding command asks you for the MySQL password that you obtained from the /home/bitnami/bitnami_application_password file. The password isn't displayed as you enter it so that it isn't visible to other users.The Bitnami stack is a self-contained installationThe Bitnami stack provides the bnconfig. This script resets the WordPress admin login password. Run the following command to use the script and reset the password. Be sure to replace NEWPASSWORD with your desired password:/opt/bitnami/apps/wordpress/bnconfig --userpassword "NEWPASSWORD"Note: The bnconfig script can only reset the password of the user that's named user. To reset the password of any other users, follow the steps in the previous section The Bitnami stack uses native Linux system packages.Follow" | https://repost.aws/knowledge-center/lightsail-reset-wordpress-password |
How do I migrate a Lambda function to another AWS account or Region using the AWS CLI? | I need to move an AWS Lambda function from one AWS account (or AWS Region) to another. How can I do that using the AWS Command Line Interface (AWS CLI)? | "I need to move an AWS Lambda function from one AWS account (or AWS Region) to another. How can I do that using the AWS Command Line Interface (AWS CLI)?Short descriptionTo migrate a Lambda function to a second AWS account or Region using the AWS CLI, do the following:1. Run the GetFunction command to download the Lambda function deployment package.2. Configure the AWS CLI for the second AWS account or Region that you want to move the function to.Note: You can configure a new AWS CLI profile for your second AWS account or Region as well.3. Run the CreateFunction command to create a new function in the second AWS account or Region.Note: You can also migrate a Lambda function using the Lambda console or an AWS Serverless Application Model (AWS SAM).ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Run the GetFunction command to download the Lambda function deployment package1. Run the following GetFunction command:aws lambda get-function --function-name my-functionImportant: Replace my-function with the name of the function you want to migrate.2. In the command response, open the URL link after "Location":. The link will appear in a code block similar to the following one:"Code": { "RepositoryType": "S3", "Location": "https://awslambda-us-west-2-tasks.s3.us-west-2.amazonaws.com/snapshots/123456789012/my-function..." },Note: Opening the link following "Location": will download the deployment package.Configure the AWS CLI for the second AWS account or Region that you want to move the function to1. Run the following Configure command:aws configure --profile profilenameImportant: Change profilename to a recognizable name for your second AWS account or Region.2. Enter the following input values to pass the AWS Identity and Access Management (IAM) user credentials of the second AWS account and the Region:For AWS Access Key ID [None]: enter the access key of an IAM user in the second AWS account. Or, if you’re migrating the function to another Region, enter the access key of an IAM user in your first AWS account.For AWS Secret Access Key [None]: enter the secret access key of the same IAM user.For Default region name [None]: enter the AWS Region you’re migrating your function to.For more information, see Configuring the AWS CLI.Run the CreateFunction command to create a new function in the second AWS account or Region.Note: You need the Lambda function deployment package and execution role to run the CreateFunction command.1. 
Run the following CreateFunction command using the AWS CLI profile that you just configured:aws lambda create-function \ --function-name my-function \ --runtime nodejs10.x \ --zip-file fileb://my-function.zip \ --handler my-function.handler \ --role arn:aws:iam::123456789012:role/service-role/MyTestFunction-role-tges6bf4 \ --profile profilenameImportant: Before running the command, replace the following values with the information from the function that you want to migrate:For function-name, enter the name of your function.For runtime, enter the runtime of your function.For zip-file, enter the file path of your function’s deployment package.For handler, enter the handler name of your function.For role, enter the Lambda execution role ARN that’s in the AWS account that you want to migrate your function to.For profile, enter the AWS CLI profile name you created when you ran the Configure command.Note: If you’re migrating a function to another AWS Region, but keeping it in the same AWS account, you can keep using the same execution role.2. Run the following list-functions command to confirm that the migration worked:aws lambda list-functions \ --profile profilenameImportant: Replace profilename with the AWS CLI profile name that you created when you ran the Configure command.Follow" | https://repost.aws/knowledge-center/lambda-function-migration-aws-cli |
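The same migration can be scripted with the SDK instead of the AWS CLI; this boto3 sketch assumes two configured profiles (placeholder names), placeholder Regions, and an execution role ARN that already exists in the destination account.

```python
import boto3
import urllib.request

source = boto3.Session(profile_name="default", region_name="us-east-1").client("lambda")
target = boto3.Session(profile_name="profilename", region_name="eu-west-1").client("lambda")

# Download the deployment package from the presigned Code.Location URL.
function = source.get_function(FunctionName="my-function")
zip_bytes = urllib.request.urlopen(function["Code"]["Location"]).read()

# Recreate the function in the destination account/Region.
target.create_function(
    FunctionName="my-function",
    Runtime=function["Configuration"]["Runtime"],
    Handler=function["Configuration"]["Handler"],
    Role="arn:aws:iam::123456789012:role/service-role/MyTestFunction-role-tges6bf4",
    Code={"ZipFile": zip_bytes},
)
```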
How do I troubleshoot setup issues when I integrate Fluent Bit with Container Insights for Amazon EKS? | I want to troubleshoot setup issues when I integrate Fluent Bit with Container Insights for Amazon Elastic Kubernetes Service (Amazon EKS). | "I want to troubleshoot setup issues when I integrate Fluent Bit with Container Insights for Amazon Elastic Kubernetes Service (Amazon EKS).Short descriptionFluent Bit is a lightweight log processor and forwarder that you use to collect container logs in Amazon CloudWatch.It's a best practice to use Fluent Bit instead of Fluentd (a legacy open-source program) because of its low resource footprint and the Use_Kubelet feature. For more information, see Turn on the Use_Kubelet feature for large clusters.To allow Fluent Bit to deliver container logs to Amazon CloudWatch Logs, you must grant AWS Identity and Access Management (IAM) permissions to Fluent Bit. With Amazon EKS, there are two ways to grant IAM permissions:Attach a policy to the IAM role of your worker nodes.Use an IAM service account role.If you grant IAM permissions to Fluent Bit, then Fluent Bit is able to perform the following actions:logs:DescribeLogGroupslogs:DescribeLogStreamslogs:CreateLogGrouplogs:CreateLogStreamlogs:PutLogEventsCommon issues include:Fluent Bit pods crash.Fluent Bit doesn't send logs to CloudWatch.Fluent Bit pods return CreateContainerConfigError.ResolutionSet up an IAM role for service accountCreate an IAM role for the cloudwatch-agent service account for the amazon-cloudwatch namespace with the AWS managed policy CloudWatchAgentServerPolicy.1. Run the following commands to set up environment variables:export CLUSTER="clustername"export AWS_REGION="awsregion"export AWS_ACCOUNT="awsaccountid"Note: Replace "clustername", "awsregion", and "awsaccountid" (including the quotation marks) with your own cluster name, AWS Region, and account ID.2. Run the following eksctl command:eksctl create iamserviceaccount \ --name cloudwatch-agent \ --namespace amazon-cloudwatch \ --cluster $CLUSTER \ --attach-policy-arn "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy" \ --approve \ --override-existing-serviceaccountsFor more information on Container Insights prerequisites, see Verify prerequisites.Set up the CloudWatch agent to collect cluster metrics and turn on Container Insights1. To deploy Container Insights using the quick start, run the following command:ClusterName="my-cluster-name"RegionName="my-cluster-region"FluentBitHttpPort='2020'FluentBitReadFromHead='Off'[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' | kubectl apply -f -Note: Replace "my-cluster-name" and "my-cluster-region" (including the quotation marks) with your own cluster name and AWS Region.The preceding command creates a namespace, ClusterRole, ClusterRoleBinding, and ConfigMap for the CloudWatch agent and Fluent Bit.2. 
After that command runs, run the following command to create the Fluent Bit service account:eksctl create iamserviceaccount \ --name fluent-bit \ --namespace amazon-cloudwatch \ --cluster $CLUSTER \ --attach-policy-arn "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy" \ --approve \ --override-existing-serviceaccounts3. Run the following command to validate that the CloudWatch agent deploys:kubectl get pods -n amazon-cloudwatch4. When complete, the CloudWatch agent creates the log group /aws/containerinsights/Cluster_Name/performance. Then, the CloudWatch agent sends the performance log events to the log group.TroubleshootingFluent Bit pods crash1. Check for error messages in the Fluent Bit pod logs. Run these commands to look for events from Fluent Bit pods:kubectl -n amazon-cloudwatch logs -l k8s-app=fluent-bit kubectl -n amazon-cloudwatch describe pod fluent_pod pod_name2. Verify that cluster-info (the Fluent Bit configuration file stored in ConfigMap) is accurate and has no syntax mistakes. Make sure that all cluster name and Region values are set. For more information, see amazon-cloudwatch-container-insights on the GitHub website.Fluent Bit doesn't send logs to CloudWatch1. Verify that the output plugin is set up properly in the Fluent Bit configuration file. To check if there are any data ship errors, use the following command to check the Fluent Bit pod logs:kubectl -n amazon-cloudwatch logs fluent_pod_name2. Make sure that the Fluent Bit pods have the necessary IAM permissions to stream logs to CloudWatch. Your Amazon EKS worker nodes submit metrics and logs to CloudWatch because of the CloudWatchAgentServerPolicy enforced by IAM. There are two ways to grant the necessary IAM permissions:Add a policy to the worker nodes' IAM roles.Create an IAM role for the cluster's service accounts, and affix the policy to it.See the section Set up an IAM role for service account for more information.Fluent Bit pods stuck in CreateContainerConfigErrorIf the pod status is CreateContainerConfigError, then run the following command to get the exact error message:kubectl describe pod pod_name -n amazon-cloudwatchIn the Events section from the output of the command, look for an error message like the following:Error syncing pod ("fluent-bit-xxxxxxx"), skipping: failed to "StartContainer" with CreateContainerConfigError: "configmap \"fluent-bit-config\" not found"If you see this error message, then it's likely that you didn't create the ConfigMap for Fluent Bit (fluent-bit-config). Follow the installation steps again to be sure to create the ConfigMap.Related informationSet up the CloudWatch agent to collect cluster metricsQuick Start with the CloudWatch agent and Fluent BitEnable debug logging (GitHub website)Follow" | https://repost.aws/knowledge-center/eks-fluent-bit-container-insights |
How can I start over with a fresh installation of my current Lightsail instance while keeping the same static IP address? | I have a running Amazon Lightsail instance but want to start over with a fresh install of the same instance. How can I do this while keeping the same static IP address? | "I have a running Amazon Lightsail instance but want to start over with a fresh install of the same instance. How can I do this while keeping the same static IP address?Short descriptionTo start over with a clean version of the server image on the existing instance, clean up the installations and data on the server manually.Or, you can launch a new Lightsail instance from the same Lightsail blueprint of your instance. Then you can attach the same static IP addressed used on the existing instance to the new instance.ResolutionTo launch a new Lightsail instance and attach the static IP from the previous instance to it, do the followingOpen the Amazon Lightsail console.Select Create instance.Select the same platform as the existing instance (for example, Linux/Unix). Then, select the same blueprint as the existing instance (for example, Django).(Optional) Change the SSH key pair, if required.Select the Lightsail instance plan of your choice.Name your new Lightsail instance and then select Create Instance.After the new Lightsail instance launches, return to the Lightsail home page and select the original instance.On the Networking tab of the original instance, locate the static IP address.Select the menu icon (⋮) and then select Manage, Detach.On the confirmation screen, select Yes, detach.After the static IP detaches, in the Attach to an instance option, select the new Lightsail instance name, and then select Attach.Note: You can attach static IP addresses to instances only in the same Region as the original instance.Related informationCreate a static IP and attach it to an instance in Amazon LightsailFollow" | https://repost.aws/knowledge-center/lightsail-fresh-install-of-instance |
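The static IP move described in the console steps above can also be done with the SDK; a minimal boto3 sketch with placeholder static IP and instance names follows. Both instances must be in the same Region as the static IP.

```python
import boto3

lightsail = boto3.client("lightsail")

# Detach the static IP from the old instance, then attach it to the new one.
lightsail.detach_static_ip(staticIpName="StaticIp-1")
lightsail.attach_static_ip(staticIpName="StaticIp-1", instanceName="Django-new-1")
```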
Why did I receive an Amazon GuardDuty CryptoCurrency:EC2/BitcoinTool.B!DNS finding type for my Amazon EC2 instance? | Amazon GuardDuty detected a CryptoCurrency finding with my Amazon Elastic Compute Cloud (Amazon EC2) instance. | "Amazon GuardDuty detected a CryptoCurrency finding with my Amazon Elastic Compute Cloud (Amazon EC2) instance.Short descriptionThe GuardDuty CryptoCurrency:EC2/BitcoinTool.B!DNS finding type indicates that an Amazon EC2 instance in your AWS environment is querying a domain name. The domain name is associated with cryptocurrency-related activity such as Bitcoin mining. If you don't expect this behavior, it might be a result of unauthorized activity on your AWS account.ResolutionIf you use your EC2 instance with cryptocurrency or with blockchain activity, this finding type might be expected activity for your environment. It's a best practice to set up a suppression rule for this finding type. For more information and instructions, see CryptoCurrency:EC2/BitcoinTool.B!DNS.If this activity is unexpected, then follow the instructions to remediate a compromised EC2 instance in your AWS environment.For more information, see How Amazon GuardDuty uses its data sources.Related informationCreating custom responses to GuardDuty findings with Amazon CloudWatch EventsHow to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hostsFollow" | https://repost.aws/knowledge-center/resolve-guardduty-crypto-alerts |
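If the cryptocurrency activity is expected, the suppression rule mentioned above can be created as an archive filter. The following AWS CLI sketch shows one way to do that; the detector ID and filter name are placeholder assumptions, and you would normally add more criteria (for example, the instance ID) so that unrelated findings aren't archived.

```bash
# Sketch: create a suppression rule (an archive filter) for this finding type.
# The detector ID and filter name are placeholders (assumptions) - substitute your own values,
# and narrow the criteria so you don't archive findings for unexpected activity.
aws guardduty create-filter \
  --detector-id "12abc34d567e8fa901bc2d34e56789f0" \
  --name "expected-crypto-activity" \
  --action ARCHIVE \
  --finding-criteria '{"Criterion": {"type": {"Eq": ["CryptoCurrency:EC2/BitcoinTool.B!DNS"]}}}'
```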
How do I use the results of an Amazon Athena query in another query? | I want to use the results of an Amazon Athena query to perform a second query. | "I want to use the results of an Amazon Athena query to perform a second query.ResolutionUse one of the following methods to use the results of an Athena query in another query:CREATE TABLE AS SELECT (CTAS): A CTAS query creates a new table from the results of a SELECT statement in another query. CTAS is useful for transforming data that you want to query regularly. CTAS has some limitations. For example, you can specify a maximum of 100 new partitions. For more information, see Considerations and limitations for CTAS queries. For examples, see Examples of CTAS queries.Create a view: Views are useful for querying the results of small-to-medium size queries that are specific and not expected to change. For more information, see Working with views.Use the WITH clause to run multiple select statements at the same time: Use the WITH clause to define one or more subqueries. Each subquery defines a temporary table, similar to a view definition. Use WITH clause subqueries to efficiently define tables that you can use when the query runs. For more information, see Parameters. Example:WITH temp AS (SELECT * FROM tbl1 WHERE col1 = 1) SELECT * FROM tbl2, temp;Related informationHow can I access and download the results of an Amazon Athena query?Follow" | https://repost.aws/knowledge-center/athena-query-results |
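As a concrete illustration of the CTAS option, the following AWS CLI sketch runs a CREATE TABLE AS SELECT statement whose result can then be queried like any other table in a second query. The database name, table names, and S3 output location are placeholder assumptions.

```bash
# Sketch: run a CTAS query from the AWS CLI so the results of one query become a new,
# queryable table. The database, table names, and S3 output location are placeholders
# (assumptions) - replace them with your own values.
aws athena start-query-execution \
  --query-string "CREATE TABLE my_ctas_table AS SELECT col1, col2 FROM tbl1 WHERE col1 = 1" \
  --query-execution-context Database=my_database \
  --result-configuration OutputLocation=s3://my-athena-results-bucket/ctas/

# A later query can then reference my_ctas_table like any other table:
#   SELECT * FROM my_ctas_table JOIN tbl2 ON my_ctas_table.col1 = tbl2.col1;
```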
How do I troubleshoot issues with the PTR record that I'm using for reverse DNS in Route 53? | How do I troubleshoot issues with the pointer record (PTR) that I'm using for reverse DNS in Amazon Route 53? | "How do I troubleshoot issues with the pointer record (PTR) that I'm using for reverse DNS in Amazon Route 53?Short descriptionReverse DNS resolution (rDNS) is used to determine the domain name associated with an IP address. This resolution is the reverse of the usual forward DNS lookup of an IP address from a domain name.Reverse DNS records in a public hosted zone might not work if:The reverse DNS record for the AWS resource isn't properly configured. If you're using an AWS Elastic IP address, complete the following steps to create a reverse DNS record:For AWS Elastic IP addresses in the US East (Ohio), Africa (Cape Town), Asia Pacific (Mumbai), Canada (Central), and Europe (Milan) Regions – Update the reverse DNS address using the Amazon Elastic Compute Cloud (Amazon EC2) console or the AWS Command Line Interface (AWS CLI).For AWS Elastic IP addresses in all other Regions – See Request to remove reverse DNS and email sending limitations.(When using non-AWS resources) The IP addresses belong to a third party, such as another cloud computing platform or your internet service provider (ISP). Contact the owner of the IP addresses to configure reverse DNS.Reverse DNS records in a private hosted zone might not work if:The private hosted zone for the reverse DNS domain isn't associated with the Amazon Virtual Private Cloud (Amazon VPC).The IP address that's queried doesn't match the private hosted zone reverse DNS domain name.The "DNS support" and "DNS hostname" options aren't enabled in the Amazon VPC.The private hosted zone can be queried using only the VPC DNS server.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Check for reverse DNS record set issuesUse the following command to check whether the reverse DNS record value returned from the DNS resolver matches the expected value. If the IP address isn't resolving to the expected reverse DNS record, check the IP address owner.On Linux or macOS, use dig -x <IP_Address>:$ dig -x 3.23.155.245; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.2 <<>> -x 3.23.155.245;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31167;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4096;; QUESTION SECTION:;245.155.23.3.in-addr.arpa. IN PTR;; ANSWER SECTION:245.155.23.3.in-addr.arpa. 
298 IN PTR ec2-3-23-155-245.us-east-2.compute.amazonaws.com.;; Query time: 0 msec;; SERVER: 10.10.0.2#53(10.10.0.2);; WHEN: Fri Apr 09 16:14:57 UTC 2021;; MSG SIZE rcvd: 116On Windows, use nslookup <IP_Address>:$ nslookup 3.23.155.245
245.155.23.3.in-addr.arpa name = ec2-3-23-155-245.us-east-2.compute.amazonaws.comIdentify the IP address ownerUse the following command to check which organization owns the IP address:whois <IP_Address>Contact the IP address owner to create or update the reverse DNS recordIf you're using an AWS Elastic IP address, complete the following steps to create a reverse DNS record:For AWS Elastic IP addresses in the US East (Ohio), Africa (Cape Town), Asia Pacific (Mumbai), Canada (Central), and Europe (Milan) Regions – Update the reverse DNS address using the Amazon Elastic Compute Cloud (Amazon EC2) console or the AWS Command Line Interface (AWS CLI).For AWS Elastic IP addresses in all other Regions – See Request to remove reverse DNS and email sending limitations.(When using non-AWS resources) The IP addresses belong to a third party, such as another cloud computing platform or your internet service provider (ISP). Contact the owner of the IP addresses to configure reverse DNS.Check that the private hosted zone is associated with the appropriate Amazon VPCImportant: The following steps apply only if the reverse DNS record is in a Route 53 private hosted zone.1. Open the Route 53 console.2. In the navigation pane, choose Hosted Zones.3. Select the hosted zone that you're using for the reverse DNS domain.4. Choose View Details.5. Expand Hosted zone details.6. Verify that the private hosted zone is associated with the appropriate Amazon VPC.Check that the DNS hostnames and DNS resolution parameters are enabled1. Open the Amazon VPC console.2. In the navigation pane, choose Your VPCs.3. Select the VPC ID of the Amazon VPC where you're resolving the reverse DNS record.4. In the Description pane, confirm that DNS hostnames and DNS resolution are enabled.Confirm that your custom DNS servers are correctly configured in your Amazon VPCPrivate hosted zones are resolvable only through the Amazon VPC DNS. To confirm that your Amazon VPC settings are correctly configured, follow these steps:1. Open the Amazon VPC console.2. In the navigation pane, choose DHCP Option Sets.3. Select the VPC DHCP Option Set ID associated with your Amazon VPC.4. In the Details pane, confirm that the Domain name server is set to the Amazon-provided DNS servers of your Amazon VPC. For example, if the CIDR range for your Amazon VPC is 10.0.0.0/16, then the IP address of the Amazon VPC DNS server is 10.0.0.2 (VPC CIDR + 2) or AmazonProvidedDNS.Related informationHow do I enable reverse DNS functionality for Route 53 with a PTR record?Follow" | https://repost.aws/knowledge-center/route-53-fix-ptr-record-reverse-dns
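For the private hosted zone case, the PTR record itself can be created or repaired through the Route 53 API. The following AWS CLI sketch upserts a PTR record and then verifies it from inside the VPC; the hosted zone ID, IP address, and hostname are placeholder assumptions.

```bash
# Sketch: create (or update) a PTR record in a Route 53 private hosted zone for the
# reverse DNS domain. The hosted zone ID, IP address, and hostname are placeholders
# (assumptions) - replace them with your own values.
aws route53 change-resource-record-sets \
  --hosted-zone-id "Z0123456789EXAMPLE" \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "25.1.0.10.in-addr.arpa",
        "Type": "PTR",
        "TTL": 300,
        "ResourceRecords": [{"Value": "app01.example.internal"}]
      }
    }]
  }'

# Then verify the record resolves from an instance inside the associated VPC:
dig -x 10.0.1.25 +short
```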
Why am I receiving errors when using yum on my EC2 instance running Amazon Linux 1 or Amazon Linux 2? | Why am I receiving errors when using yum on my Amazon Elastic Compute Cloud (Amazon EC2) instance running Amazon Linux 1 or Amazon Linux 2? | "Why am I receiving errors when using yum on my Amazon Elastic Compute Cloud (Amazon EC2) instance running Amazon Linux 1 or Amazon Linux 2?Short descriptionUse the output messages of the yum command to determine what error occurred. The following are common error messages:Connection timed out XXX millisecondsHTTP Error 403 - ForbiddenCould not resolve host: xxxxxxxxx.$awsregion.$awsdomainHTTP Error 407 - Proxy Authentication RequiredResolving timed out after 5000 millisecondsResolutionConnection timed out XXXX milliseconds1. Verify that the security group attached to your EC2 instance allows outbound HTTP/HTTPS traffic.2. Verify that the network ACLs associated with your EC2 instance's subnet allow outbound HTTP/HTTPS traffic.The following example shows a custom network ACL that allows outbound traffic on ports 80 and 443:
Inbound rules (Rule#, Type, Protocol, Port Range, Source, Allow/Deny):
100, Custom TCP Rule, TCP (6), 1024-65535, 0.0.0.0/0, ALLOW
101, Custom TCP Rule, TCP (6), 1024-65535, ::/0, ALLOW
*, ALL Traffic, ALL, ALL, ::/0, DENY
*, ALL Traffic, ALL, ALL, 0.0.0.0/0, DENY
Outbound rules (Rule#, Type, Protocol, Port Range, Destination, Allow/Deny):
100, HTTP (80), TCP (6), 80, 0.0.0.0/0, ALLOW
101, HTTPS (443), TCP (6), 443, 0.0.0.0/0, ALLOW
102, HTTP (80), TCP (6), 80, ::/0, ALLOW
103, HTTPS (443), TCP (6), 443, ::/0, ALLOW
*, ALL Traffic, ALL, ALL, ::/0, DENY
*, ALL Traffic, ALL, ALL, 0.0.0.0/0, DENY
3. Verify that your EC2 instance has access to Amazon Linux repositories using one of the following options:Your instance is in a public subnet with an internet gateway. For more information, see Turn on internet access.Your instance is in a private subnet with a NAT gateway. For more information, see NAT gateways.Your instance is in a private subnet with a NAT instance. For more information, see NAT instances.Your instance is in a public or private subnet with an Amazon Simple Storage Service (Amazon S3) VPC endpoint. For more information, see How can I update yum or install packages without internet access on EC2 instances running Amazon Linux 1 or Amazon Linux 2?Your instance is in a private subnet with a proxy. To configure yum to use a proxy, modify the /etc/yum.conf file with the following parameters. In the following example, replace proxy-port, proxy-user-name, and proxy-password with the correct values for your proxy.
proxy=http://proxy-server-IP-address:proxy_port
proxy_username="proxy-user-name"
proxy_password="proxy-password"
For more information, see Using yum with a proxy server on the fedoraproject.org website.4. After configuring your instance using one of the preceding options, run the following curl command to confirm that the instance can access the repository. In the following command, replace us-east-1 with your instance's Region.
Amazon Linux 1
curl -I repo.us-east-1.amazonaws.com
Amazon Linux 2
curl -I amazonlinux.us-east-1.amazonaws.com
Note: curl is pre-installed on all AMIs, but the Amazon Linux repositories aren't accessible without credentials. curl can't take the credentials of a yum repository. You receive an access denied error message similar to the one below. The curl command is used to test whether the timeout issue is still occurring.
The error message shows that the network is reachable and the timeout issue is no longer occurring:
$ curl -I amazonlinux.us-east-1.amazonaws.com
HTTP/1.1 403 Forbidden
x-amz-bucket-region: us-east-1
x-amz-request-id: xxxxxxxx
x-amz-id-2: xxxxxxxxxxxx=
Content-Type: application/xml
Date: Thu, 17 Nov 2022 16:59:59 GMT
Server: AmazonS3
To install software, such as telnet, run the following command:sudo yum install telnetHTTP Error 403 - Forbidden1. If you use an Amazon S3 VPC endpoint in your instance's VPC, verify that the attached policy allows the s3:GetObject API call on the following resources:Amazon Linux 1:"arn:aws:s3:::packages.region.amazonaws.com/*""arn:aws:s3:::repo.region.amazonaws.com/*"Amazon Linux 2:"arn:aws:s3:::amazonlinux.region.amazonaws.com/*""arn:aws:s3:::amazonlinux-2-repos-region/*"Note: Replace the Region in the preceding examples with your instance's Region.For more information, see Endpoint policies for Amazon S3.2. If you use a proxy to access Amazon Linux repositories, verify that the subdomains .amazonaws.com are on the allow list in your proxy configuration.Could not resolve host: xxxxxxxx.$awsregion.$awsdomain1. Run the following commands to verify that the directory /etc/yum/vars defines the custom yum variables. The directory must include the variables awsdomain and awsregion. In the following example command, replace us-east-1 with your instance's Region.
$ cat /etc/yum/vars/awsregion
us-east-1
$ cat /etc/yum/vars/awsdomain
amazonaws.com
2. Verify the DNS resolution of your instance. The instance must resolve the domain name of the Amazon Linux repositories:
$ dig amazonlinux.us-east-1.amazonaws.com
$ dig repo.us-east-1.amazonaws.com
Queries to the Amazon provided DNS server at the 169.254.169.253 IPv4 address and the fd00:ec2::253 IPv6 address will succeed. Queries to the Amazon provided DNS server at the reserved IP address at the base of the VPC IPv4 network range plus two will also succeed. The IPv6 address is accessible only on EC2 instances built on the Nitro System.HTTP Error 407 - Proxy Authentication RequiredThis occurs if your proxy can't complete the request because yum doesn't have proper authentication credentials for your proxy server. To configure yum to use a proxy, modify the /etc/yum.conf file with the following parameters:
proxy=http://proxy-server-IP-address:proxy_port
proxy_username=proxy-user-name
proxy_password=proxy-password
Resolving timed out after 5000 millisecondsRun the following command to verify that the /etc/resolv.conf file has the correct IP for your DNS server:
cat /etc/resolv.conf
nameserver YourDNSIP
You can modify the time-out period of 5000 milliseconds by modifying the timeout value in the yum configuration file.To check the query time using dig, run the following command:$ dig repo.us-east-1.amazonaws.com | grep timeFollow" | https://repost.aws/knowledge-center/ec2-troubleshoot-yum-errors-al1-al2
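For the HTTP 403 case described above, the following AWS CLI sketch shows what an S3 gateway endpoint policy permitting the Amazon Linux 2 repository buckets might look like. The endpoint ID is a placeholder assumption, and us-east-1 should be replaced with your instance's Region.

```bash
# Sketch: apply an S3 gateway endpoint policy that allows yum to reach the Amazon Linux 2
# repositories, per the "HTTP Error 403 - Forbidden" section. The endpoint ID is a
# placeholder (assumption); replace it and us-east-1 with your own values.
aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id "vpce-0123456789abcdef0" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "AllowYumRepoAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::amazonlinux.us-east-1.amazonaws.com/*",
        "arn:aws:s3:::amazonlinux-2-repos-us-east-1/*"
      ]
    }]
  }'
```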
How can I determine if my load balancer supports SSL/TLS renegotiation? | How can I determine if my load balancer supports Secure Sockets Layer/Transport Layer Security (SSL/TLS) renegotiation? | "How can I determine if my load balancer supports Secure Sockets Layer/Transport Layer Security (SSL/TLS) renegotiation?ResolutionAlthough only the client can initiate a session resumption, either side can initiate session renegotiation. Support of SSL/TLS renegotiation varies by load balancer type:Classic Load Balancers support secure client-initiated renegotiations for incoming SSL/TLS client connections. Classic Load Balancers also support server-initiated renegotiation for the backend SSL/TLS connection. **Note:** If you need to disable client-initiated renegotiations for incoming SSL/TLS connections, you can migrate to an Application Load Balancer where these renegotiations aren't supported.Application Load Balancers don't support SSL/TLS renegotiation for client or target connections.Network Load Balancers don't support SSL/TLS renegotiation.All load balancers support session resumption. However, only Network Load Balancers support resuming an SSL session that was originally negotiated with a different IP associated with the same load balancer.Related InformationUpdate the SSL Negotiation Configuration of Your Classic Load BalancerFollow" | https://repost.aws/knowledge-center/elb-ssl-tls-renegotiation-support
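One hands-on way to observe these differences is a manual probe with openssl s_client, which can request a client-initiated renegotiation from an open session. Treat this as a rough sketch rather than an authoritative test: the hostname is a placeholder assumption, renegotiation only applies to TLS 1.2 and earlier, and the interactive R command depends on your OpenSSL build.

```bash
# Sketch: probe renegotiation behavior from a client. The hostname is a placeholder (assumption).
# Renegotiation exists only in TLS 1.2 and earlier; results also depend on your OpenSSL version.
openssl s_client -connect my-load-balancer.example.com:443 -tls1_2
# After the handshake, check the "Secure Renegotiation IS/IS NOT supported" line in the output,
# then type a single capital R and press Enter to request a client-initiated renegotiation.
# A Classic Load Balancer listener typically completes the new handshake, while an Application
# Load Balancer or Network Load Balancer typically closes the connection instead.
```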
How do I use my Windows 10 desktop license with WorkSpaces? | I want to use my Windows desktop license with Amazon WorkSpaces. How can I do that? | "I want to use my Windows desktop license with Amazon WorkSpaces. How can I do that?Short descriptionAmazon WorkSpaces provides two options for Windows-based operating systems: Windows Server 2016 and Windows Server 2019. Both are based on Windows 10 desktop experience.If you have Windows desktop licenses for Windows 10 Pro or Windows 10 Enterprise and your licensing agreement allows it, you can bring your own license (BYOL) to WorkSpaces. For more information, see Bring Your Own Windows desktop licenses.ResolutionTo bring your Windows 10 license to a WorkSpace, you must meet BYOL prerequisites. BYOL for WorkSpaces supports the following Windows 10 versions:Windows 10 Version 2004 (May 2020 Update)Windows 10 Version 20H2 (October 2020 Update)Windows 10 Version 21H1 (May 2021 Update)Windows 10 Version 21H2 (December 2021 Update)You must meet additional requirements to use BYOL. For example, you must commit to running 100 WorkSpaces in an AWS Region each month on dedicated hardware. For more information, see Requirements.The requirements are different for BYOL for Graphics WorkSpaces. Contact AWS Support for more information about BYOL for Graphics WorkSpaces.Setting up BYOL in WorkSpacesNext, you must check your account’s eligibility for BYOL and have your eligibility verified. When verified, you can activate BYOL.1. Open the WorkSpaces console.2. In the navigation pane, choose Account Settings.3. Under Bring your own license (BYOL), choose View WorkSpaces BYOL settings. If your account isn’t eligible for BYOL, a message provides guidance for next steps.4. If you must start setting up BYOL, contact your AWS account manager or sales representative, or create a case in the AWS Support Center.5. For your contact to verify your eligibility for BYOL, they must get specific information from you. Share your answers to the following questions:Have you reviewed and accepted the BYOL requirements listed earlier? [Yes/No]Account name:AWS account ID:AWS Support level for the account: [Basic (no paid support), Developer, Business, Enterprise]Capacity request/Bundle type/Region: [Include the number of WorkSpaces in each bundle for each Region. Specify how the total number is split across Value, Performance, Power, or Graphics WorkSpaces.]Technical contact information: Name, email, and phone number (Note: The technical contact must have AWS admin privileges in the AWS account that the virtual machine imports to.)Are you buying WorkSpaces from a reseller? [Yes/No]What is your ramp-up plan? For example, how many BYOL WorkSpaces do you plan to create? What is the ramp-up timeline?In what Regions do you need your account set up for BYOL?How many BYOL WorkSpaces do you plan to deploy in each Region (a minimum of 100 in each Region)?Does your organization have any other AWS accounts set up for BYOL in the same Region? If yes, do you want to link these accounts so that they use the same underlying hardware?Note: If you have linked accounts, the total number of WorkSpaces deployed in these accounts is aggregated to determine your eligibility for BYOL. Be aware that linking the accounts takes additional time. If you want to link the accounts, be ready to provide the account numbers to your contact.6. When your eligibility for BYOL is confirmed, activate BYOL for your account.Follow" | https://repost.aws/knowledge-center/workspaces-byol |
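Before starting the eligibility process, you can check whether the account already has dedicated tenancy (which BYOL requires) turned on. The following AWS CLI sketch only reads the current state; enabling BYOL itself still goes through the console flow and the verification described above.

```bash
# Sketch: check the account's current dedicated tenancy status, which BYOL depends on.
# This is read-only; it does not enable BYOL.
aws workspaces describe-account
# Look for "DedicatedTenancySupport" and "DedicatedTenancyManagementCidrRange" in the response.
```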
How can I access my Amazon EC2 Mac instance through a GUI? | I have an Amazon Elastic Compute Cloud (Amazon EC2) macOS instance on a dedicated host. I want to access the instance through a GUI so that I can have the premier experience of the macOS environment. | "I have an Amazon Elastic Compute Cloud (Amazon EC2) macOS instance on a dedicated host. I want to access the instance through a GUI so that I can have the premier experience of the macOS environment.ResolutionNote: The following steps are tested for macOS Mojave 10.14.6 and macOS Catalina 10.15.7.1. Connect to your EC2 macOS instance using SSH.LinuxUse the following command to connect to your EC2 macOS instance as ec2-user over SSH. Replace keypair_file with your key pair and Instance-Public-IP with the public IP of your instance.% ssh -i keypair_file ec2-user@Instance-Public-IP
WindowsWindows 10 and newer versions of Windows Server have an OpenSSH client installed by default. Or, you can activate the OpenSSH client by selecting Settings, Apps, Apps & features, Manage optional features, Add a feature, and then select OpenSSH Client. If you're using an older version of Windows, then use Git Bash to implement the preceding command.Note: You can make the instance accessible through a public IP address or an Elastic IP address while it's in a public subnet. In this case, use a bastion or jump server to connect to the instance. Or, you can establish a connection using AWS VPN or AWS Direct Connect that allows you to access your instance through a private IP. For security reasons, traffic to the VNC server is tunneled using SSH. It's a best practice to avoid opening VNC ports in your security groups.2. Run the following commands to install and start VNC (macOS Screen Sharing) on the macOS instance:
sudo defaults write /var/db/launchd.db/com.apple.launchd/overrides.plist com.apple.screensharing -dict Disabled -bool false
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.screensharing.plist
3. Run the following command to set a password for ec2-user:sudo /usr/bin/dscl . -passwd /Users/ec2-user
4. Create an SSH tunnel to the VNC port. In the following command, replace keypair_file with your SSH key path, and replace 192.0.2.0 with your instance's IP address or DNS name:ssh -i keypair_file -L 5900:localhost:5900 ec2-user@192.0.2.0
Note: Keep the SSH session running while you're in the remote session.5. Using a VNC client, connect to localhost:5900.Note: macOS has a built-in VNC client. For Windows, you can use RealVNC Viewer for Windows. For Linux, you can use Remmina. Other clients, such as TightVNC running on Windows, don't work with this resolution.For macOS: To access the VNC viewer, open Finder, select Go, and then select Connect to Server. Or, use the keyboard shortcut CMD + K. Then, enter the following in the Server Address field:vnc://localhost:5900
For Windows: Using the RealVNC Viewer client, connect to the macOS host over the SSH Local Port Forwarding tunnel. Select New Connection from the File drop-down menu. Then, complete the following fields:VNC Server: localhost:5900 Encryption: Let VNC Server Choose. Select OK.Note: If you experience authentication errors with RealVNC, then set Encryption to Prefer On or Prefer Off until one of those settings works.6. The macOS GUI launches. 
Connect to the remote session of the macOS instance as ec2-user using the password that you set in step 3.You're now logged in to your macOS desktop.Related informationHow do I install a GUI on my Amazon EC2 instance running Amazon Linux 2?Follow" | https://repost.aws/knowledge-center/ec2-mac-instance-gui-access |
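The SSH and VNC steps above can be wrapped in a small helper script. The following bash sketch assumes a key path and host IP as placeholders, and you still set the ec2-user password interactively (step 3) before pointing a VNC client at localhost:5900.

```bash
#!/bin/bash
# Sketch: wrap the manual steps above into one helper. The key path and host are placeholders
# (assumptions) - replace them with your own values. Setting the ec2-user password (step 3)
# is still done interactively over SSH.
KEY="$HOME/keys/my-mac-key.pem"
HOST="192.0.2.0"   # public or reachable private IP of the EC2 Mac instance

# Enable macOS Screen Sharing (VNC) on the instance.
ssh -i "$KEY" ec2-user@"$HOST" 'sudo defaults write /var/db/launchd.db/com.apple.launchd/overrides.plist com.apple.screensharing -dict Disabled -bool false && sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.screensharing.plist'

# Open the SSH tunnel; leave this running, then connect your VNC client to localhost:5900.
ssh -i "$KEY" -N -L 5900:localhost:5900 ec2-user@"$HOST"
```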