How do I troubleshoot failed Amazon EC2 restore jobs using AWS Backup?
"I'm using AWS Backup to restore an Amazon Elastic Compute Cloud (Amazon EC2) instance from a recovery point in AWS Backup. However, I get an encoded error message that says "You are not authorized to perform this operation. Please consult the permissions associated with your AWS Backup role(s), and refer to the AWS Backup documentation for more details.""
"I'm using AWS Backup to restore an Amazon Elastic Compute Cloud (Amazon EC2) instance from a recovery point in AWS Backup. However, I get an encoded error message that says "You are not authorized to perform this operation. Please consult the permissions associated with your AWS Backup role(s), and refer to the AWS Backup documentation for more details."ResolutionThis error typically occurs under the following conditions:The original Amazon EC2 instance has an instance profile attached to it.You try to restore the instance on the AWS Backup console using the setting Default role for Restore role and Restore with Original IAM Role for Instance IAM role.To resolve this issue, use either of the following options based on your use case.Use the Proceed with no IAM role optionWhen you run the restore job for the instance in the AWS Backup console, select Proceed with no IAM role for Instance IAM role. With this option, you can restore the instance, and the restored instance doesn't have an instance profile attached to it. Later, you can attach the instance profile to this restored instance.Use the Restore with Original IAM role optionWhen you run the restore job, you can select Restore with Original IAM Role for Instance IAM role after attaching additional permissions to your Restore role:1.    If you know which role you used for the restore, then skip to step 2. Otherwise, run the decode-authorization-message command using the AWS Command Line Interface (AWS CLI) to find the role that was used to restore the instance. If you're using a Linux-based operating system, then you can combine this command with the jq tool to get a viewer-friendly output:# aws sts decode-authorization-message --encoded-message (encoded error message) --query DecodedMessage --output text | jq '.'Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.You get an output similar to the following:{ "allowed": false,….. "context": { "principal": { "id": "AROAAAAAAAAAA:AWSBackup-AWSBackupDefaultServiceRole", "arn": "arn:aws:sts::111122223333:assumed-role/AWSBackupDefaultServiceRole/AWSBackup-AWSBackupDefaultServiceRole" }, "action": "iam:PassRole", "resource": "arn:aws:iam::111122223333:role/AmazonSSMRoleForInstancesQuickSetup", "conditions": { "items": [….. }The example output shows that the Restore role is same as AWSBackupDefaultServiceRole. The Restore role must have the iam:PassRole permission so that it can interact with the role AmazonSSMRoleForInstancesQuickSetup, which is required to restore the instance.2.    Open the IAM console, and create the following policy: Note: Replace 111122223333 with your AWS account ID.{ "Version": "2012-10-17", "Statement": [ { "Action": "iam:PassRole", "Resource": "arn:aws:iam::111122223333:role/*", "Effect": "Allow" } ]}Then, attach this policy to your Restore role.3.    After you update the IAM role, re-run the restore job.Related informationAccess controlRestoring an Amazon EC2 instanceFollow"
https://repost.aws/knowledge-center/aws-backup-encoded-authorization-failure
How do I use FSx for ONTAP REST APIs?
I want to use the NetApp ONTAP REST API to manage my Amazon FSx for NetApp ONTAP resources. How do I do this?
"I want to use the NetApp ONTAP REST API to manage my Amazon FSx for NetApp ONTAP resources. How do I do this?ResolutionPrerequisitesThe API requestor must be able to make HTTPS connections to the FSx for ONTAP file system and to the storage virtual machine (SVM) endpoints. For more information, see Using the NetApp ONTAP REST API.Run Hello World on the FSx for ONTAP file system with curlAt the command line interface of a Linux instance on your Amazon Virtual Private Cloud (Amazon VPC) running the FSx for ONTAP file system, do the following:1.    Enter your fsxadmin password and the endpoint DNS name. In the following example command, replace Password and fs-XXXXXXXX.fsx.region.amazonaws.com with the correct values for your use case:$ CRED=fsxadmin:Password$ ONTAP=management.fs-XXXXXXXX.fsx.region.amazonaws.com2.    Run the following command to retrieve the ONTAP software version:$ curl -X GET -u ${CRED} -k "https://${ONTAP}/api/cluster?fields=version"{ "version": { "full": "NetApp Release 9.10.1RC1P1: Sat Nov 27 18:08:32 UTC 2021", "generation": 9, "major": 10, "minor": 1 }, "_links": { "self": { "href": "/api/cluster" } }}Example REST API: Get volumesThe following is an example of the GET command used to retrieve volumes:$ curl -X GET -u ${CRED} -k "https://${ONTAP}/api/storage/volumes"{ "records": [ { "uuid": "504c8162-a435-11ec-bb13-130f21c56a08", "name": "svm1_root", "_links": { "self": { "href": "/api/storage/volumes/504c8162-a435-11ec-bb13-130f21c56a08" } } }, { "uuid": "956f5ce9-a435-11ec-bb13-130f21c56a08", "name": "vol1", "_links": { "self": { "href": "/api/storage/volumes/956f5ce9-a435-11ec-bb13-130f21c56a08" } } } ], "num_records": 2, "_links": { "self": { "href": "/api/storage/volumes" } }}Note: Some APIs don't run on FSx for ONTAP.Use the NetApp BlueXP API Swagger interfaceYou can access some NetApp ONTAP APIs using the BlueXP r Connector Swagger interface. For more information, see Learn about BlueXP and How to log in to BlueXP (formerly Cloud Manager) API Swagger interface on the NetApp website.The following steps are an example of accessing the Swagger interface using BlueXP and posting credentials.1.    In the BlueXP Connector, select the Help menu, represented by a question mark, then select API.Or, go to the Swagger interface directly (example URL: http://connectorip/occm/api-doc/).2.    Select User Management Operations.3.    Select auth : Authentication operations.4.    Select POST /auth/login, Expand Operations.5.    Select Model Schema.6.    Select Click to set as parameter value under the model schema that displays the following:{"email": "string","password": "string"}7.    In the Value field where model schema is populated, edit the string to provide your correct email and password:{"email": "[email protected]","password": "xxxxxxxxxx"}Note: The email id isn'tfsxadmin. The email is the Cloud Central Auth0 email id used to log in to the BlueXP Connector. The password is set in plain text.8.    Select Try it out!9.    Verify that the login is successful from the Response Code.Example: Get volumes API using the BlueXP Swagger interface1.    In the Swagger interface, select FSx (Data ONTAP cluster) working environment operations.2.    Select fsx/volumes.3.    Select GET /fsx/volumes, Expand Operations.4.    In the Value field, enter the file system ID that you want to retrieve the volumes list from.5.    Select Try it out!6.    
Verify that the login is successful from the Response Code and the Response Body.Related informationONTAP REST API Python sample scripts now available on GitHubFollow"
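As another quick check from the same Linux instance, a request like the following lists the SVMs on the file system. This is a sketch that reuses the CRED and ONTAP variables set earlier and assumes the ONTAP /api/svm/svms endpoint; the exact fields returned can vary by ONTAP release.

# Reuses the CRED and ONTAP variables from the Hello World example.
curl -X GET -u ${CRED} -k "https://${ONTAP}/api/svm/svms"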
https://repost.aws/knowledge-center/fsx-ontap-rest-apis
Why am I not receiving validation emails when using ACM to issue or renew a certificate?
Why didn't I receive the validation email to issue or renew AWS Certificate Manager (ACM) certificates?
"Why didn't I receive the validation email to issue or renew AWS Certificate Manager (ACM) certificates?Short descriptionACM sends the validation emails to the five common system addresses as long as an MX record exists for the domain. For a list of the default email addresses, see MX record.ACM also sends a domain validation email to the email addresses associated with the domain registrant, technical contact, and administrative contact fields in the WHOIS listing. For more information, see validate domain ownership with email.Some domain registrars don't populate the contact information in WHOIS ("Who is") data. Your ACM certificate issue or renewal can be affected if:Your domain registrar doesn't include contact email addresses in WHOIS data.You use custom emails addresses in WHOIS for certificate validation.The WHOIS lookup for email validation is performed on the apex domain and searches for email addresses in the domain registrant, technical contact, and administrative contact fields. Verify your listed email addresses using a WHOIS query. For additional information, see Enabling or disabling privacy protection for contact information for a domain. If your domain has privacy protection enabled, you might not receive a reply or received a response similar to the following:Registrant ContactName: Data Protected Data ProtectedOrganization: Data ProtectedMailing Address: 123 Data Protected, Toronto ON M6K 3M1 CAPhone: +1.0000000000Ext:Fax: +1.0000000000Fax Ext:Email:[email protected]:ACM isn't compatible with CAPTCHA. ACM might not locate WHOIS data configured with a CAPTCHA text.AWS doesn't control WHOIS data and can't prevent WHOIS server throttling. For more information, see WHOIS throttling.ResolutionTwo options are available depending on your preference and the effort required for maintaining or switching.You can't convert an ACM certificate's validation method from email to DNS or from DNS to email. To switch validation methods, request a new ACM certificate to replace the previous one.If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Option 1 - use emailCheck your certificate for your domain to verify the email addresses.Open the ACM console, and then choose List certificates.Choose the certificate that you want to renew.In Domains, note the Registered owners field. Usually the registered owners include admin@, administrator@, hostmaster@, postmaster@, [email protected] the five system email addresses aren't listed, confirm that the domain has at least one valid MX record using the following commands:Linux and macOS$dig mx example.comWindows$nslookup -q=mx example.comThe mail servers indicated in the MX record are sent the validation emails similar to the following:;; ANSWER SECTION:example.com. 599 IN MX 10 mail1.example.com.example.com. 599 IN MX 20 mail2.example.com.You can also use Amazon Simple Email Service (Amazon SES) and Amazon Simple Notification Service (Amazon SNS) to receive an ACM validation email if:You don't have an MX recordYour domain registrar doesn't support email forwarding.Follow the instructions for resending the validation email using the AWS Management Console or the AWS CLI.For more information, see Troubleshoot email validation problems.Option 2 - use DNSTo switch to DNS validation, recreate the ACM certificate, and then select DNS for validation. 
DNS validation has several advantages over email validation, especially if Amazon Route 53 is the DNS provider for your domain.DNS requires that you create one CNAME record per domain name used only for requesting an ACM certificate. Email validation sends up to eight email messages per domain name.You can request additional ACM certificates for your fully qualified domain name (FQDN) if the DNS record is in use.ACM automatically renews certificates that you validated using DNS. ACM renews each certificate before expiration if the certificate and DNS record are both in use.ACM can add the CNAME record for you if you use Route 53 to manage your public DNS records.Automation using the DNS validation process is less complex than using the email validation process.You can switch to DNS validation at no additional cost.Services integrated with AWS Certificate Manager using the previous ACM certificate must be updated to use the new certificate. This is because new ACM certificates generate an Amazon Resource Name (ARN). You can't retain the ARN with a new ACM certificate. Only renewed ACM certificates retain the same ARN.You can establish the Region for an ACM certificate by running the AWS CLI command describe-certificate similar to the following:$aws acm describe-certificate --certificate-arn arn:aws:acm:region:12345678911:certificate/123456-1234-1234-1234-123456789 --output text |grep INUSEBYFor more information, see DNS validation.Related informationTroubleshoot certificate validationManaged renewal for ACM certificatesCheck a certificate's renewal statusRenewal for domains validated by emailFollow"
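If you stay with email validation and want to trigger the emails again from the AWS CLI rather than the console, a sketch like the following resends the validation email. The certificate ARN and domain names shown are placeholders.

# Replace the certificate ARN and domains with your own values.
aws acm resend-validation-email \
  --certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/example-certificate-id \
  --domain example.com \
  --validation-domain example.com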
https://repost.aws/knowledge-center/acm-email-validation-custom
How do I delete a Network Load Balancer that's "currently associated with another service"?
"I'm trying to delete my Network Load Balancer. However, I receive an error that the "Network Load Balancer is currently associated with another service.""
"I'm trying to delete my Network Load Balancer. However, I receive an error that the "Network Load Balancer is currently associated with another service."Short descriptionThis error indicates that a Network Load Balancer that you want to delete is already associated with a virtual private cloud (VPC) endpoint service. Before you can delete a Network Load Balancer, you must first disassociate it from any associated VPC endpoint services.ResolutionDisassociate the Network Load Balancer from any associated VPC endpoint servicesOpen the Amazon Virtual Private Cloud (Amazon VPC) console.In the navigation pane, choose Endpoint Services.Review the Network Load Balancer tab for each of your endpoint services to determine whether your Network Load Balancer is associated with an endpoint service.Choose the Endpoint Connections tab to determine which endpoint connections are attached to your endpoint service.For the connections that aren't in the Rejected state, choose Actions, and then choose Reject endpoint connection request.After you delete the associated endpoints, use one of the options in the following section to disassociate the Network Load Balancer.Delete the Network Load Balancer and keep the endpoint service activeTo delete a Network Load Balancer but keep the endpoint service active, associate the endpoint service with a different Network Load Balancer:Open the Amazon VPC console.In the navigation pane, choose Endpoint services.Select the endpoint service that's associated with the Network Load Balancer.Choose Actions, Associate or disassociate load balancers.Clear the selection for the Network Load Balancer that you want to delete, and then select a Network Load Balancer to replace it.Delete a Network Load Balancer and associated endpoint serviceDelete both the Network Load Balancer and the associated endpoint service:Open the Amazon VPC console.Choose the endpoint service that you want to delete.Choose Action, and then choose Delete endpoint services.Delete the Network Load Balancer.Follow"
https://repost.aws/knowledge-center/elb-fix-nlb-associated-with-service
How do I move data or files from EFS Standard-Infrequent Access or EFS One Zone-Infrequent Access to EFS Standard or EFS One Zone storage class?
"I want to move data or files from Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access EFS (Standard-IA) to EFS Standard storage. Or, I want to move data or files from EFS One Zone-Infrequent Access (EFS One Zone-IA) to EFS One Zone. How can I do this?"
"I want to move data or files from Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access EFS (Standard-IA) to EFS Standard storage. Or, I want to move data or files from EFS One Zone-Infrequent Access (EFS One Zone-IA) to EFS One Zone. How can I do this?ResolutionMove files to Standard storage or One Zone storageMethod 1:To move files from Standard-IA or One Zone IA to Standard or One Zone storage, do the following:1.    Temporarily copy the files to another location.2.    Move them back to the original location on the EFS file system. For example, if EFS is mounted at /efs/file-old, bring it back to Standard storage by copying to another path, as shown in the following example:$ cp /efs/file-old /efs/file-newNote: Moving or renaming files using the mv command doesn't transfer files to Standard storage class. If files must remain in the Standard storage class, first stop Lifecycle Management on the file system, and then copy the files following the preceding steps.For more information, see Infrequent access performance.Method 2:Use Amazon EFS Intelligent-Tiering. If access patterns change, EFS Intelligent‐Tiering automatically moves files back to the EFS Standard or EFS One Zone storage classes when the Transition out of IA lifecycle policy is set to On first access.Verify which files are moved to Infrequent Access storageAmazon EFS lifecycle management uses an internal timer to track when a file was last accessed. Amazon EFS doesn't use the publicly viewable POSIX file system attributes. Whenever a file in Standard storage is written to or read from, the Lifecycle Management timer resets.The following are prerequisites for files to be moved to infrequent access:Files that are 128 KiB or larger and haven't been accessed or modified for at least 30 days can be transitioned to the new storage class. The 30 days requirement might vary depending on the number of days set in lifecycle management configuration of your file system.Modifications to a file's metadata that don't change the file don't delay a transition.Metadata operations, such as listing the contents of a directory, don't count as file access.File metadata is always stored in Standard storage to ensure consistent metadata performance. File metadata includes file names, ownership information, and file system directory structure.For more information, see Amazon EFS lifecycle management.Follow"
https://repost.aws/knowledge-center/efs-move-to-different-storage-class
How do I sign up for an AWS Activate package?
I'm interested in an AWS Activate package. How do I sign up?
"I'm interested in an AWS Activate package. How do I sign up?ResolutionAWS Activate offers two packages: the Founders package and the Portfolio package.The AWS Activate Founders package is available for startups that aren't associated with an AWS Activate Provider. The AWS Activate Providers include select venture capital firms, accelerators, incubators, and other startup-enabling organizations. For more information on how to qualify for the AWS Founders package, see Getting Started with AWS Activate.The AWS Activate Portfolio package is available to startups that are associated with an AWS Activate Provider. For a non-exhaustive list of AWS Activate Providers, see AWS Activate Providers. You can contact your AWS Activate Provider for more information on how to qualify for the AWS Activate Portfolio package.For more information about these packages, see AWS Activate.Note: If you're an agency, IT shop, or a consultancy, consider the AWS Partner Network instead.Related informationApply for AWS ActivateAWS Activate FAQRedeem your AWS Promotional CreditFollow"
https://repost.aws/knowledge-center/activate-portfolio-package
"I'm a root user, power user, or administrator with cross-account access. Why doesn't the Amazon S3 console show all the buckets that I have access to?"
"I'm a root user, or I have power user access or administrator access across several AWS accounts. However, when I sign in to the Amazon Simple Storage Service (Amazon S3) console, I don't see all the buckets that I have access to. How can I access those buckets that aren't listed?"
"I'm a root user, or I have power user access or administrator access across several AWS accounts. However, when I sign in to the Amazon Simple Storage Service (Amazon S3) console, I don't see all the buckets that I have access to. How can I access those buckets that aren't listed?ResolutionThere's no way to list all the Amazon S3 buckets that you have access to across several AWS accounts. By default, the Amazon S3 console lists only the buckets that are owned by the account that you use to sign in. The console doesn't list buckets in other accounts, even if you have access to them.Note: To access an individual bucket in another account, you must know the bucket name.To access an individual bucket in another account using the Amazon S3 console, replace doc-examplebucket with the bucket name, like this:https://s3.console.aws.amazon.com/s3/buckets/doc-examplebucket/Important: For the direct console link to work, you must have permissions to access the bucket using the console.You can also access an individual bucket in another account using the AWS Command Line Interface (AWS CLI), AWS SDK, or Amazon S3 REST API.Follow these steps to configure the AWS CLI to access an Amazon S3 bucket in another account:1.    Use the AWS Identity and Access Management (IAM) console to create access keys for the IAM user that has access to that account.2.    Install the AWS CLI.3.    Configure the AWS CLI using the access keys that you created.After you configure the AWS CLI, you can run commands that send requests to the bucket. For example, run this command to copy an object from the bucket to your local machine:aws s3 cp s3://doc-examplebucket/objectname/local/pathNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Related informationIdentity and Access Management in Amazon S3Follow"
https://repost.aws/knowledge-center/s3-bucket-cross-account-access
How can I prevent IAM policies from allowing a user or role to access a KMS key in AWS KMS?
"I want to secure my AWS Key Management Service (AWS KMS) KMS key from access by AWS Identity and Access Management (IAM) identities. However, the default KMS key policy allows IAM identities in the account to access the KMS key with IAM permissions."
"I want to secure my AWS Key Management Service (AWS KMS) KMS key from access by AWS Identity and Access Management (IAM) identities. However, the default KMS key policy allows IAM identities in the account to access the KMS key with IAM permissions.Short descriptionThe default KMS key policy contains the following statement:{ "Sid": "Enable IAM User Permissions", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:root" }, "Action": "kms:*", "Resource": "*"}In the preceding example, the Effect and Principal elements don't refer to the AWS root user account. The Amazon Resource Names (ARN) allows permissions to the KMS key with this IAM policy. If you attach the required permissions to the IAM entity, then any principal in the AWS account 111122223333 has root access to the KMS key.ResolutionYou can prevent IAM entities from accessing the KMS key and allow the root user account to manage the key. This also prevents the root user account from losing access to the KMS key.Replace the Sid "Enable IAM User Permissions" in the default KMS key policy with the Sid "EnableRootAccessAndPreventPermissionDelegation". Also, add a Condition element similar to the one in the following policy:Important: Replace the account 111122223333 with your account number, and be sure that the condition key aws:PrincipalType is set to Account.{ "Sid": "EnableRootAccessAndPreventPermissionDelegation", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:root" }, "Action": "kms:*", "Resource": "*", "Condition": { "StringEquals": { "aws:PrincipalType": "Account" } }}You can add key administrator IAM users or roles to allow managing the key in the statement with Sid "Allow access for Key Administrators". You can also allow IAM users or roles to use the key for cryptographic operations and with other AWS services. Add the IAM user or role ARNs to the statements with the Sid “Allow use of the key” and “Allow attachment of persistent resources”.Note: You must create the key with the modified policy with the root user account. Or, use a principal that’s allowed in the statement “Allow access for Key Administrators”. 
This prevents the "MalformedPolicyDocumentException" policy error.The modified default KMS key policy is similar to the following:{ "Id": "key-consolepolicy-1", "Version": "2012-10-17", "Statement": [ { "Sid": "EnableRootAccessAndPreventPermissionDelegation", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:root" }, "Action": "kms:*", "Resource": "*", "Condition": { "StringEquals": { "aws:PrincipalType": "Account" } } }, { "Sid": "Allow access for Key Administrators", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:user/KMSAdminUser", "arn:aws:iam::111122223333:role/KMSAdminRole" ] }, "Action": [ "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*", "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*", "kms:Delete*", "kms:TagResource", "kms:UntagResource", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion" ], "Resource": "*" }, { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:user/ExampleUser", "arn:aws:iam::111122223333:role/ExampleRole" ] }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" }, { "Sid": "Allow attachment of persistent resources", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:user/ExampleUser", "arn:aws:iam::111122223333:role/ExampleRole" ] }, "Action": [ "kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant" ], "Resource": "*", "Condition": { "Bool": { "kms:GrantIsForAWSResource": "true" } } } ]}The key policy allows the following permissions:The AWS root user account full access to the key.The principals KMSAdminUser and KMSAdminRole to perform management operations on the key.The principals ExampleUser and ExampleRole to use the key.Related informationBest practices for managing AWS access keysAWS KMS keysFollow"
https://repost.aws/knowledge-center/kms-prevent-access
Why do I have replication lags and errors in my RDS for PostgreSQL DB instance?
I am getting replication errors and lags in my Amazon Relational Database Service (Amazon RDS) for PostgreSQL instance.
"I am getting replication errors and lags in my Amazon Relational Database Service (Amazon RDS) for PostgreSQL instance.Short descriptionYou can scale reads for your Amazon RDS for PostgreSQL DB instance by adding read replicas to the instance. RDS for PostgreSQL uses PostgreSQL native streaming replication to create a read-only copy of a source DB instance. This read replica DB instance is an asynchronously created physical replica of the source DB instance. This means that sometimes the replica can't keep up with the primary DB instance. As a result, replication lag can occur. The replica DB is created by a special connection that transmits the write ahead log (WAL) data from the source DB instance to the read replica. Therefore, the read replica checks the WAL logs to replicate the changes done on the primary instance. When the read replica can't find the WAL on the primary instance, the read replica is recovered from the archived WAL data in Amazon Simple Storage Service (Amazon S3). For more information, see How streaming replication works for different RDS for PostgreSQL versions.You can monitor replication lag in Amazon CloudWatch by viewing the Amazon RDS ReplicaLag metric. This metric shows how far a read replica has fallen behind its source DB instance. Amazon RDS monitors the replication status of your read replica. Then, it updates the Replication State field in the Amazon RDS console to Error if replication stops for any reason. The ReplicaLag metric indicates how well a read replica is keeping up with the source DB instance and the amount of latency between the source DB instance and a specific read instance.ResolutionYou might see one of the following errors in the RDS for PostgreSQL error logs when replica lag increases:Streaming replication has stopped: You get this error when streaming replication between the primary and replica instances breaks down. In this case, replication starts replaying from archive location in Amazon S3, leading to further increase of replica lag.Streaming replication has been terminated: You get this error when replication is stopped for more than 30 consecutive days, either manually or due to a replication error. In this case, Amazon RDS terminates replication between the primary DB instance and all read replicas to prevent increased storage requirements on the primary instance and longer failover times.The read replica instance is available even after replication is terminated. However, replication can't be resumed because the transaction logs required by the read replica are deleted from the primary instance after replication is terminated.The most common reasons for increase in the replica lag are the following:Configuration differences between the primary and replica instancesHeavy write workload on the primary instanceTransactions that are running for a long timeExclusive lock on primary instance tablesCorrupted or missing WAL fileNetwork issuesIncorrect parameter settingNo transactionsConfiguration differences between primary instance and read replicaIncorrect replica instance configurations can affect replication performance. Read replica handles a write workload that's similar to that of the source instance along with additional read queries. Therefore, use replicas of the same or higher instance class and storage type as the source instance. Because the replica must replay the same write activity as the source instance, the use of a lower-instance class replica can cause high latency for the replica and increase replication lag. 
Mismatched storage configurations increase the replication lag as well.Heavy write workload on the primary instanceA heavy write workload on the primary instance might create a high influx of WAL files. An increase in the number of WAL files and replaying of these files on read replicas might slow down the overall replication performance. Therefore, when you see an increase in replica lag, be sure to check the write activity on the primary instance. You can use CloudWatch metrics or Enhanced Monitoring to analyze this workload. View values for TransactionLogsDiskUsage, TransactionLogsGeneration, WriteIOPS, WriteThroughput, and WriteLatency to find if the source instance is under heavy write workload. You can also check for bottlenecks at the throughput level. Each instance type has its dedicated throughput. For more information, see Hardware specifications for DB instance classes.To avoid this issue, control and distribute write activity for the source instance. Instead of performing many write activities together, break your task into smaller task bundles, and then distribute these bundles evenly across multiple transactions. You can use CloudWatch alerts for metrics, such as Writelatency and WriteIOPS, to be notified of heavy writes on the source instance.Transactions that are running for a long timeActive transactions that are running for a long time in the database might interfere with the WAL replication process, thereby increasing the replication lag. Therefore, be sure to monitor the runtime of active transactions with the PostgreSQL pg_stat_activity view.Run a query on the primary instance similar to the following to find the process ID (PID) of the query that's running for a long time:SELECT datname, pid,usename, client_addr, backend_start,xact_start, current_timestamp - xact_start AS xact_runtime, state,backend_xmin FROM pg_stat_activity WHERE state='active';After identifying the PID of the query, you can choose to end the query.Run a query on the primary instance similar to the following to terminate the query:SELECT pg_terminate_backend(PID);You can also choose to rewrite or tune the query to avoid transactions that are running for a long time.Exclusive lock on primary instance tablesWhen you run commands, such as DROP TABLE, TRUNCATE, REINDEX, VACUUM FULL, REFRESH MATERIALIZED VIEW (without CONCURRENTLY), on the primary instance, PostgreSQL processes an Access Exclusive lock. This lock prevents all other transactions from accessing the table for the lock’s hold duration. Usually, the table remains locked until the transaction ends. This lock activity is recorded in WAL, and is then replayed and held by the read replica. The longer the table remains under an Access Exclusive lock, the longer the replication lag.To avoid this issue, it's a best practice to monitor the transactions by periodically querying the pg_locks and pg_stat_activity catalog tables.Example:SELECT pid, usename, pg_blocking_pids(pid) AS blocked_by, QUERY AS blocked_query<br>FROM pg_stat_activity<br>WHERE cardinality(pg_blocking_pids(pid)) > 0;Corrupted or missing WAL fileA corrupted or missing WAL file can result in replica lag. In this case, you see an error in PostgreSQL logs stating that the WAL can't be opened. 
You might also see the error "requested WAL segment XXX has already been removed".Network issuesA network interruption between the primary and replica instances might cause issues with streaming replication that might result in an increased replica lag.Incorrect parameter settingIncorrectly setting some of the custom parameters in the server configuration parameter group might cause an increased replica lag. The following are some of the parameters that you must set correctly:wal_keep_segments: This parameter specifies the number of WAL files that the primary instance keeps in the pg_wal directory. The default value for this parameter is set to 32. If this parameter isn't set to a value that's high enough for your deployment, the read replica might fall behind, causing the streaming replication to stop. In this case, RDS generates a replication error and begins recovery on the read replica by replaying the primary instance's archived WAL data from S3. This recovery process continues until the read replica can continue streaming replication.Note: In PostgreSQL version 13, the wal_keep_segments parameter is named wal_keep_size. This parameter serves the same purpose as wal_keep_segments. However, the default value for this parameter is defined in MB (2048 MB) rather than the number of files.max_wal_senders: This parameter specifies the maximum number of connections that the primary instance can support at the same time over the streaming replication protocol. The default value for this parameter for RDS for PostgreSQL 13 and higher releases is 20. This parameter should be set to a value that's slightly higher than the actual number of read replicas. If this parameter is set to a value that's less than the number of read replicas, then replication stops.hot_standby_feedback: This parameter specifies whether the replica instance sends feedback to the primary instance about queries that are currently running in the replica instance. By turning on this parameter, you curate the following error message at the source instance and postpone the VACUUM operation on related tables, unless the read query in the replica instance is completed. Therefore, a replica instance that has hot_standby_feedback turned on can serve long-running queries. However, this parameter can bloat tables at the source instance. Be sure to monitor long-running queries in replica instances to avoid serious issues such as out-of-storage and Transaction ID Wraparound in the primary instance.ERROR: canceling statement due to conflict with recoveryDetail: User query might have needed to see row versions that must be removedmax_standby_streaming_delay/max_standby_archive_delay: You can enable parameters, such as max_standby_archive_delay or max_standby_streaming_delay, on the replica instance for completing long-running read queries. These parameters pause WAL replay in the replica if the source data is modified when read queries are running on the replica. A value of -1 for these parameters lets the WAL replay wait until the read query completes. However, this pause increases the replication lag indefinitely and causes high storage consumption at the source due to WAL accumulation.No transactionsIf no user transactions are occurring on the source DB instance, then the PostgreSQL read replica reports a replication lag of up to five minutes.Related informationWorking with read replicas for Amazon RDS for PostgreSQLPostgreSQL documentation for Server configurationFollow"
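To put a number on the lag while you investigate these causes, a sketch like the following runs two standard PostgreSQL queries through psql. The endpoints and user name are placeholders; the replay_lag column in pg_stat_replication requires PostgreSQL 10 or later.

# On the read replica: how far WAL replay is behind, as an interval.
psql -h my-replica.xxxxxxxx.us-east-1.rds.amazonaws.com -U postgres -d postgres \
  -c "SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;"

# On the primary: streaming state and replay lag for each connected replica.
psql -h my-primary.xxxxxxxx.us-east-1.rds.amazonaws.com -U postgres -d postgres \
  -c "SELECT client_addr, state, replay_lag FROM pg_stat_replication;"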
https://repost.aws/knowledge-center/rds-postgresql-replication-lag
I received an email with my AWS Activate Founders or Portfolio package information. Where do I find my AWS promotional credit?
I received an email with my AWS Activate Founders or Portfolio package information. Where do I find my AWS promotional credit?
"I received an email with my AWS Activate Founders or Portfolio package information. Where do I find my AWS promotional credit?ResolutionIf you receive an email welcoming you to AWS Activate along with benefit information, your AWS Activate Founders or Portfolio package application is approved and processed. Your AWS promotional credits are directly added to the AWS account that you specified on your application.Check the Credits page of the Billing and Cost Management console to see your account's active credits and promotions.Related informationGetting started with AWS ActivateAWS Activate FAQFollow"
https://repost.aws/knowledge-center/view-activate-credits
How do I set up logging for Amazon Pinpoint voice messages for Amazon Pinpoint SMS and Voice v2 API?
I want to monitor the status of the voice messages that I send through Amazon Pinpoint.
"I want to monitor the status of the voice messages that I send through Amazon Pinpoint.ResolutionTo log the status of Amazon Pinpoint voice messages, you must configure a configuration set and event destination. After you set up the event destination, map it to your configuration set. This allows you to receive the response information about the voice messages that you send through Amazon Pinpoint.You can configure any of the following AWS resources as Amazon Pinpoint voice event destinations:Amazon Simple Notification Service (Amazon SNS) topicsAmazon CloudWatch LogsAmazon Kinesis Data Firehose streamsTo configure an event destination, use either the Amazon Pinpoint SMS and Voice messaging v2 APIs or one of the AWS SDKs.Note: The following resolution applies only to Amazon Pinpoint SMS and Voice v2 API. For Amazon Pinpoint SMS and Voice v1 API, see How do I set up logging for Amazon Pinpoint voice messages for Amazon Pinpoint SMS and Voice v1 API?Configure an Amazon SNS topic as an Amazon Pinpoint voice event destination1.    To create a configuration set, run the following create-configuration-set (pinpoint-sms-voice v2) AWS CLI command:aws pinpoint-sms-voice-v2 create-configuration-set --configuration-set-name VoiceSNSNote: You can replace VoiceSNS with any name for your configuration set.2.    Subscribe the endpoint for which you want to log voice messages to an Amazon SNS topic. The SNS topic can be either a new or existing topic. For instructions, see To subscribe an endpoint to an Amazon SNS topic.Note: To create a new Amazon SNS topic using the AWS CLI, run the following create-topic command:aws sns create-topic --name pinpointsmsvoice3.    You must have the following permission in your SNS topic access policy. This allows the Amazon Pinpoint SMS voice service to deliver logs:`{ "Sid": "pinpointsmsvoice", "Effect": "Allow", "Principal": { "Service": "sms-voice.amazonaws.com" }, "Action": "SNS:Publish", "Resource": "arn:aws:sns:us-east-1:ACCOUNT_ID:`pinpointsmsvoice`" }`Note: Replace us-east-1 with your AWS Region. Replace ACCOUNT_ID with your AWS account ID. Replace pinpointsmsvoice with the name of your SNS topic.4.    In a text editor, create an input request file named matching.json for MatchingEventTypes. Specify the events that you want to receive, or specify "ALL" to receive all events:["ALL"]5.    To map the event destination to the configuration-set-name, run the following create-event-destination command:`aws pinpoint-sms-voice-v2 create-event-destination --configuration-set-name VoiceSNS --event-destination-name VoiceSNS --matching-event-types file://matching.json --sns-destination TopicArn=arn:aws:sns:`us-east-1:ACCOUNT\_ID:pinpointsmsvoiceNote: Replace us-east-1 with your Region. Replace ACCOUNT_ID with your AWS account ID. Replace pinpointsmsvoice with the name of your SNS topic.6.    To test the setup, use the SendVoiceMessage v2 API operation to send an Amazon Pinpoint voice message. After a few minutes, the event appears in the endpoint that's subscribed to the SNS topic.Configure CloudWatch Logs as an Amazon Pinpoint voice event destination1.    To create a configuration set, run the following create-configuration-set (pinpoint-sms-voice v2) AWS CLI command:aws pinpoint-sms-voice-v2 create-configuration-set --configuration-set-name VoiceCWNote: You can replace VoiceCW with any name for your configuration set.2.    Create a new CloudWatch log group that receives voice message logs. 
Run the following create-log-group:aws logs create-log-group --log-group-name /aws/pinpoint/voice-or-Use an existing CloudWatch log group to complete the following steps.3.    Get your CloudWatch log group's Amazon Resource Names (ARN): Open the CloudWatch console. In the left navigation pane, choose Logs. Then, choose Log groups. In the Log group column, choose your log group's name. In the Log group details pane, copy the ARN value. This is your log group's ARN.4.    Create a new AWS Identity and Access Management (IAM) role for the Amazon Pinpoint service to assume. For instructions, see Creating a role for an AWS service (console) or Creating a role for a service (AWS CLI). When you configure the role, modify the role's trust policy so that it includes the following permissions statement in the policy's principal section:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "sms-voice.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}Note: This permissions statement allows the sms-voice service to assume the IAM role.5.    Modify the IAM role's permissions policy so that it includes the following permissions statement:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:CreateLogGroup", "logs:DescribeLogStreams", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:*" } ]}Note: This permissions statement grants permissions to call specific CloudWatch Logs API operations. For more information, see the CloudWatch Logs permissions reference.6.    In a text editor, create an input request file named CloudWatchDestination.json. Then, enter the following destination parameters into the file:`{ "IamRoleArn": "arn:aws:iam::ACCOUNT_ID:role/IAM_ROLE", "LogGroupArn": "arn:aws:logs:us-east-1:ACCOUNT_ID:LOG_GROUP:`pinpointsmsvoice`:" }`Note: Replace the value for IamRoleArn with your IAM role ARN. Replace the value for LogGroupArn with your log group ARN and SNS topic name.7.    In a text editor, create an input request file named matching.json for MatchingEventTypes. Specify the events that you want to receive, or specify "ALL" to receive all events:["ALL"]Important: Make sure that you replace VoiceCW with your configuration's set's name.8.    Map the event destination to the configuration-set-name. To do this, run the following create-event-destination command:aws pinpoint-sms-voice-v2 create-event-destination --configuration-set-name VoiceCW --event-destination-name CloudWatch_Destination --matching-event-types file://matching.json --cloud-watch-logs-destination file://CloudWatchDestination.json9.    Test the setup by sending an Amazon Pinpoint voice message using the SendVoiceMessage v2 API operation. After a few minutes, the event appears in the endpoint that's subscribed to the Amazon SNS topic.Configure a Kinesis Data Firehose stream as an Amazon Pinpoint voice event destination1.    To create a configuration set, run the following create-configuration-set (pinpoint-sms-voice v2) AWS CLI command:aws pinpoint-sms-voice-v2 create-configuration-set --configuration-set-name VoiceKinesisNote: You can replace VoiceKinesis with any name for your configuration set.2.    Create a Kinesis Data Firehose delivery stream. For the Destination setting, choose Amazon Simple Storage Service (Amazon S3).Important: Accept the default IAM service role. Then, copy the name of the IAM service role to your clipboard. You need role name for the following steps.3.    
Modify the IAM role's permissions policy so that it includes the following permissions statement in the policy's principal section:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "firehose.amazonaws.com", "sms-voice.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ]}This permissions statement grants the sms-voice service to assume the IAM role.4.    Modify the IAM service role's permissions policy so that it includes the following permissions statement:Important: Don't remove any of the IAM service role's default permissions statements.{ "Sid": "VisualEditor0", "Effect": "Allow", "Action": "firehose:*", "Resource": "*"}5.    In a text editor, create an input request file named KinesisFirehoseDestination.json. Then, copy and paste the following destination parameters into the file:{ "IamRoleArn": "arn:aws:iam::191418023309:role/IAM_ROLE", "DeliveryStreamArn": "arn:aws:firehose:us-east-1:ACCOUNT_ID:deliverystream/KINESIS_FIREHOSE_NAME"}Note: Replace us-east-1 with your Region. Replace ACCOUNT_ID with your AWS account ID. Replace KINESIS_FIREHOSE_NAME with the name of your Kinesis Data Firehose stream. Replace IAM_ROLE with your IAM role's name.6.    To map the event destination to the configuration-set-name, run the create-event-destination command with an input request file.In a text editor, create an input request file named matching.json for MatchingEventTypes. Specify the events that you want to receive, or specify ALL to receive all events:["ALL"]Then, run the create-event-destination command:aws pinpoint-sms-voice-v2 create-event-destination --configuration-set-name VoiceKinesis --event-destination-name KinesisFirehose_Destination --matching-event-types file://matching.json --kinesis-firehose-destination file://KinesisFirehoseDestination.jsonImportant: Make sure that you replace VoiceKinesis with your configuration's set's name.7.    To test the setup, use the SendVoiceMessage v2 API operation to send an Amazon Pinpoint voice message. After a few minutes, the event appears in the Amazon S3 bucket that you configured when you created the Kinesis Data Firehose stream.Follow"
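After any of the three setups, you can confirm that the configuration set and its event destination exist before sending a test message. A sketch, assuming the VoiceSNS configuration set name from the first example:

# Lists the configuration set and any event destinations mapped to it.
aws pinpoint-sms-voice-v2 describe-configuration-sets \
  --configuration-set-names VoiceSNS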
https://repost.aws/knowledge-center/pinpoint-voice-message-logging-setup-v2
My virtual interface BGP status for Direct Connect is down in the AWS console. What should I do?
My virtual interface BGP status for AWS Direct Connect is down in the AWS console. How can I troubleshoot this issue?
"My virtual interface BGP status for AWS Direct Connect is down in the AWS console. How can I troubleshoot this issue?ResolutionYour virtual interface status might be down because of configuration issues with the OSI Layer 2 or Border Gateway Protocol (BGP).OSI Layer 2 configurationVerify that your OSI layer 2 is configured correctly by confirming the following:You configured the correct VLAN ID with dot1Q encapsulation on your device—such as a router or switch—as displayed in the Direct Connect console.The peer IP addresses' configuration is identical on your device and in the Direct Connect console.All the intermediate devices along the path are configured for dot1Q VLAN tagging with correct VLAN ID, and VLAN-tagged traffic is preserved on the AWS side of Direct Connect device.Note: Some network providers might also use Q-in-Q tagging, which can alter your tagged VLAN. AWS Direct Connect service doesn't support Q-in-Q tagging.Your device is learning the media access control (MAC) address of the AWS Direct Connect device for the configured VLAN ID from the Address Resolution Protocol (ARP) table.Your device can ping the Amazon peer IP sourcing from your peer IP.For more information, see Troubleshooting layer 2 (data link) issues.BGP ConfigurationIf the OSI layer 2 configuration looks good, then confirm the BGP configuration on your device by verifying the following:The local ASN and remote ASN, as provided in the Downloaded configuration file.The neighbor IP address and BGP MD5 password, as provided in the Downloaded configuration file.Your device isn't blocking ingress or egress traffic on TCP port 179 and other appropriate ephemeral ports.Your device isn't advertising more than 100 prefixes to AWS by BGP. By default, AWS only accepts up to 100 prefixes using a BGP session on AWS Direct Connect. For more information, see Direct Connect quotas.After confirming these configurations, your virtual interface BGP status is now up.For more information, see How can I troubleshoot BGP connection issues over Direct Connect?Related informationCreate a virtual interfaceTroubleshooting AWS Direct ConnectAWS Direct Connect FAQsFollow"
https://repost.aws/knowledge-center/virtual-interface-bgp-down
Why isn't CloudFront returning my default root object from a subdirectory?
Why isn't Amazon CloudFront returning my default root object from a subfolder or subdirectory?
"Why isn't Amazon CloudFront returning my default root object from a subfolder or subdirectory?ResolutionThe default root object feature for CloudFront supports only the root of the origin that your distribution points to. CloudFront doesn't return default root objects in subdirectories. For more information, see Specifying a Default Root Object.If your CloudFront distribution must return the default root object from a subfolder or subdirectory, then you can integrate Lambda@Edge with your distribution. For an example configuration, see Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using [email protected]: You are charged an additional fee when you use Lambda@Edge. For more information, see Lambda@Edge Pricing.Related informationConfiguring an index documentFollow"
https://repost.aws/knowledge-center/cloudfront-default-root-object-subdirectory
How can I receive custom email notifications when a resource is created in my AWS account using AWS Config service?
"I created an Amazon EventBridge rule to initiate on service event types when new AWS resources are created. However, the responses are in JSON format. How can I receive an email response with a custom notification?"
"I created an Amazon EventBridge rule to initiate on service event types when new AWS resources are created. However, the responses are in JSON format. How can I receive an email response with a custom notification?ResolutionYou can use a custom event pattern with the EventBridge rule to match an AWS Config supported resource type. Then, route the response to an Amazon Simple Notification Service (Amazon SNS) topic.In the following example, SNS notifications are received when a new Amazon Elastic Compute Cloud (Amazon EC2) instance is created using the AWS::EC2::Instance resource type.Note: You can replace the resource type for your specific AWS service.1.    If you haven't already created an Amazon SNS topic, then follow the instructions for Getting started with Amazon SNS.Note: The Amazon SNS topic must be in the same Region as your AWS Config service.2.    Open the EventBridge console, and then choose Rules from the navigation pane.3.    Choose Create rule.4.    For Name, enter a name for your rule. You can optionally enter a Description.5.    For Rule type, choose Rule with an event pattern, then choose Next.6.    For Event source, choose AWS events or EventBridge partner events.7.    In the Event pattern pane, choose Custom patterns (JSON editor), and then paste the following example event pattern:Note: You can replace the EC2::Instance resource type with other resources. For a list of available resource types, see the resourceType section in ResourceIdentifier.{ "source": [ "aws.config" ], "detail-type": [ "Config Configuration Item Change" ], "detail": { "messageType": [ "ConfigurationItemChangeNotification" ], "configurationItem": { "resourceType": [ "AWS::EC2::Instance" ], "configurationItemStatus": [ "ResourceDiscovered" ] } }}8.    Choose Next.9.    For Target types, select AWS service.10.    For Select a target, choose SNS topic.11.    For Topic, choose your SNS topic.12.    Expand Additional settings. Then, for Configure target input, choose Input transformer.13.    Choose Configure input transformer. Then, under Target input transformer for the Input Path text box, enter the following example path:{ "awsRegion": "$.detail.configurationItem.awsRegion", "awsAccountId": "$.detail.configurationItem.awsAccountId", "resource_type": "$.detail.configurationItem.resourceType", "resource_ID": "$.detail.configurationItem.resourceId", "configurationItemCaptureTime": "$.detail.configurationItem.configurationItemCaptureTime"}14.    For the Template text box, enter the following example template:"On <configurationItemCaptureTime> AWS Config service recorded a creation of a new <resource_type> with Id <resource_ID> in the account <awsAccountId> region <awsRegion>. For more details open the AWS Config console at https://console.aws.amazon.com/config/home?region=<awsRegion>#/timeline/<resource_type>/<resource_ID>/configuration"15.    Choose Confirm. Then, choose Next.16.    Optionally, you can Add new tag. Then, choose Next.17.    Choose Create rule.18.    If an event type is initiated, then you receive an SNS email notification with the custom fields populated from step 13 similar to the following:"On ExampleTime AWS Config service recorded a creation of a new AWS::EC2::Instance with Id ExampleID in the account AccountID region ExampleRegion. 
For more details open the AWS Config console at https://console.aws.amazon.com/config/home?region=*ExampleRegion*#/timeline/AWS::EC2::Instance/*ExampleID*/configuration"Related informationHow can I configure an EventBridge rule for GuardDuty to send custom SNS notifications if specific AWS service event types trigger?How can I receive custom email notifications when a resource is deleted in my AWS account using AWS Config service?Follow"
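If you script the rule instead of using the console, a partial sketch like the following creates the rule from the event pattern in step 7 and attaches the SNS topic as a target. The rule name, topic ARN, and file name are placeholders, and the input transformer from steps 12 through 14 is easier to configure in the console.

# Save the event pattern from step 7 to config-pattern.json, then create the rule.
aws events put-rule \
  --name config-ec2-instance-created \
  --event-pattern file://config-pattern.json

# Attach the SNS topic as a target (replace the ARN with your topic's ARN).
aws events put-targets \
  --rule config-ec2-instance-created \
  --targets "Id"="sns-topic","Arn"="arn:aws:sns:us-east-1:111122223333:MyConfigTopic"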
https://repost.aws/knowledge-center/config-email-resource-created
How do I troubleshoot an increase in the TargetResponseTime metric for an Application Load Balancer?
I noticed an increase in the Application Load Balancer TargetResponseTime metric. How do I troubleshoot this issue?
"I noticed an increase in the Application Load Balancer TargetResponseTime metric. How do I troubleshoot this issue?Short descriptionTargetResponseTime is the time elapsed, in seconds, between when the request leaves the load balancer and when a response from the target is received. This is equivalent to the target_processing_time field in the Application Load Balancer access logs.Possible causes of an increase in TargetResponseTime include:The hosts are unhealthy.The backend instances are overwhelmed by too many requests.There's high CPU utilization on the backend instances.One or more of the targets is faulty.There are issues with web application dependencies running on backend instances.ResolutionThe hosts are unhealthyVerify that all the Application Load Balancer targets are healthy. See, How do I troubleshoot and fix failing health checks for Application Load Balancers?The backend instances are overwhelmed by too many requestsCheck the Sum statistic of the Amazon CloudWatch RequestCount and ActiveConnectionCount metrics for your Application Load Balancer. A sum increase that coincides with the increase in TargetResponseTime can indicate that the backend instances are overwhelmed by the request load.To resolve this issue, configure an Auto Scaling group for your backend instances. See, Tutorial: Set up a scaled and load-balanced application.There's high CPU utilization on the backend instancesCheck the CloudWatch CPUUtilization metric of your backend instances. If CPU utilization is high, or there is a spike in utilization, upgrade your instances to a larger instance type.One or more of the targets is faultyIf you're experiencing faulty targets, the follow these steps to resolve the issue:1.    Enable access logging for your load balancer.2.    Download the access logs for the time range when TargetResponseTime is high. For example, to download the access logs between 2022-02-01T03:00 and 2022-02-01T03:35 using AWS Command Line Interface (AWS CLI):aws s3 cp s3://bucket-name[/prefix]/AWSLogs/aws-account-id/elasticloadbalancing/region/2022/02/01/ ./alblogs --recursive --exclude "*" --include "*20220201T03[0123]*"Note: Replace bucket-name with your bucket's name, aws-account-id with your AWS account's ID, and region with the AWS Region that your account is located in.3.    Use command line tools to analyze the access logs:Elastic Load Balancing (ELB) access logs are compressed in a .gzip format.Optional step: To extract the logs, use the following command:$ gzip -dr ./alblogsExample scenariosTo get the maximum latency for target_processing_time, run the following command.Compressed log file:$zcat *.log.gz | awk '$7 != -1' | awk 'BEGIN{a=0}{if ($7>0+a) a=$7} END{print a}'Uncompressed log file:$cat *.log | awk '$7 != -1' | awk 'BEGIN{a=0}{if ($7>0+a) a=$7} END{print a}'-or-To count the number of requests that have a target_processing_time ">=N" seconds per target, modify N with the number of seconds for your requirements.Example command for compressed log file:$zcat *.log.gz | awk '{if($7 >= N){print $5}}' | sort | uniq -cExample command for uncompressed log file:$cat *.log | awk '{if($7 >= N){print $5}}' | sort | uniq -cExample output:12 10.10.20.111:80 12 10.10.60.163:80254 10.10.70.7:806 10.10.80.109:8020656 10.3.19.141:80In the preceding example, the target with IP address 10.3.19.141 accounts for most of the increase in TargetResponseTime. 
In this case, check the Operating System (OS) and web application for the target.There are issues with web application dependencies running on backend instancesRun a packet capture on the target to identify the delay in target response. For a Linux OS, use tcpdump.To capture a complete incoming and outgoing POST HTTP transmission, including HTTP request and response on port TCP/80:tcpdump -i any -ns 0 -A 'tcp dst port 80 and tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504F5354 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x3C21444F'To capture a complete incoming and outgoing GET HTTP transmission, including HTTP request and response on port TCP/80:tcpdump -i any -ns 0 -A 'tcp dst port 80 and tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x3C21444F'Example outputs:14:04:12.186593 IP 10.10.30.219.13000 > 10.10.60.10.http: Flags [P.], seq 641705534:641705793, ack 1587610435, win 106, options [nop,nop,TS val 1165674323 ecr 263805247], length 259: HTTP: GET / HTTP/1.1 E..7."@...I. .. < 2..P&?.>^..C...j9...... Ez.S..Y?GET / HTTP/1.1 X-Forwarded-For: 54.66.76.204 X-Forwarded-Proto: http X-Forwarded-Port: 80 Host: labalbinternet-1236602672.ap-southeast-2.elb.amazonaws.com X-Amzn-Trace-Id: Root=1-6254355c-15db4904726649b66a1e47d7 User-Agent: curl/7.79.1 Accept: */* ................14:04:21.187892 IP 10.10.60.10.http > 10.10.30.219.13000: Flags [P.], seq 1:592, ack 259, win 488, options [nop,nop,TS val 263814250 ecr 1165674323], length 591: HTTP: HTTP/1.1 200 OK E...\.@[email protected]. < ...P2.^..C&?.A....qn..... ..|jEz.SHTTP/1.1 200 OK Date: Mon, 11 Apr 2022 14:04:12 GMT Server: Apache/2.4.52 () OpenSSL/1.0.2k-fips X-Powered-By: PHP/7.2.34 Upgrade: h2,h2c Connection: Upgrade Transfer-Encoding: chunked Content-Type: text/html; charset=UTF-8 159 PHP file name: /index.php<br> ................Note: In the preceding example outputs, the GET HTTP request is received at 14:04:12 and the target responds at 14:04:21. The TargetResponseTime is approximately 9 seconds. You can use X-Amzn-Trace-Id: Root to trace this record in the access logs.Example command for compressed log file:$zcat *.log.gz | awk '{if($20 ~ "1-6254355c-15db4904726649b66a1e47d7"){print $6,$7,$8 }}'Example command for uncompressed log file:$cat *.log | awk '{if($20 ~ "1-6254355c-15db4904726649b66a1e47d7"){print $6,$7,$8 }}'Example output:0.008 9.002 0.000Follow"
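The following is an optional, hedged sketch that isn't part of the original steps. It assumes the standard Application Load Balancer access-log layout used above, where field 5 is the target and field 7 is target_processing_time, and it prints the average target response time per target so you can compare targets instead of only counting slow requests:
# Average target_processing_time per target, sorted highest first (uncompressed logs)
cat *.log | awk '$7 != -1 {sum[$5]+=$7; cnt[$5]++} END {for (t in sum) printf "%s %.3f\n", t, sum[t]/cnt[t]}' | sort -k2 -nr
For compressed logs, replace cat *.log with zcat *.log.gz. A target whose average is far above the others is usually the one to investigate first.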
https://repost.aws/knowledge-center/alb-troubleshoot-targetresponsetime
When can I add secondary objects to a target database during AWS DMS migration?
"At what stage of migration can I add secondary objects to my target database with AWS Database Migration Service (AWS DMS)? Also, what task settings can I use that will allow me to create secondary objects on the target database?"
"At what stage of migration can I add secondary objects to my target database with AWS Database Migration Service (AWS DMS)? Also, what task settings can I use that will allow me to create secondary objects on the target database?Short descriptionAWS DMS creates tables on the target database using the TargetTablePrepMode option. When AWS DMS creates target tables, it migrates only the objects that it needs to effectively migrate data to the target. For example, AWS DMS creates tables, primary keys, and in some cases, unique indexes. But, it doesn't create secondary indexes, non-primary key constraints, data defaults, or user accounts. For more information, see Foreign keys and secondary indexes are missing.If you create tables manually on the target before migration, then it's a best practice to drop secondary objects such as secondary indexes before migration starts.Note: This doesn't apply for a change data capture (CDC) only task.So, to make sure that migration is successful and to improve task performance, it's important that you understand when to create secondary objects. The timing varies depending on the migration method that the task uses:Full load only (migrate existing data)Full load and CDC (migrate existing data and replicate ongoing changes)CDC (replicate data changes only)ResolutionFull load onlyFor a full load only task, it's a best practice to drop primary keys and all secondary objects before the start of migration. Don't create these objects until after full load completes. If you have secondary objects on the target database during full load, then you might see additional maintenance overhead.If you have foreign keys on the target, this can cause the task to fail. This happens because the task loads groups of tables together in no specific order, unless you have manually specified in table mappings.Similarly, insert, update, or delete triggers might cause errors if they are present on the target database. For example, a row insert that's is triggered by an insert trigger on a previously loaded table might cause duplicate rows. Other types of triggers also affect performance because they cause added processing.Full load and CDCFor full load and CDC tasks, it's a best practice to drop all secondary objects before migration starts. But, you must apply secondary objects on the target database at different phases of migration.Review the stages of full load and CDC tasks migration, and at which stage to apply specific secondary objects:The full load of existing data - Add secondary indexes after the task has completed full load, but before it applies the captured cached changes.The application of cached changes - Add foreign keys (referential integrity constraints) after the task has applied cached changes.Ongoing replication - Create triggers after migration is complete and before the application cutover.While full load is in progress, any changes that you make to the tables that are being loaded are then cached. These cached changes are applied when the full load for the table completes. After full load completes, and the cached changes are applied, then the target tables are transactionally consistent. AWS DMS then begins the ongoing replication phase. 
For more information on this, see High-level view of AWS DMS.To stop the task during migration, use these task settings:Use StopTaskCachedChangesNotApplied to stop the task before applying cached changes.Use StopTaskCachedChangesApplied to stop the task after applying cached changes.Note: You can turn on both StopTaskCachedChangesNotApplied and StopTaskCachedChangesApplied using the AWS Command Line Interface (AWS CLI). If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.CDC only tasksFor CDC only tasks, you can create the secondary indexes and the foreign keys on the target database before the migration. Then, create the triggers on the target after the migration is complete, and before the application cutover.Related informationCreating a taskFull-load task settingsFollow"
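As an illustration only, the following hedged AWS CLI sketch shows one way to set the stop points mentioned above by supplying a task settings file to modify-replication-task. The task ARN and file name are placeholders, the settings belong to the FullLoadSettings section of the task settings JSON, you can set either or both stop points, and the task must be stopped before you modify its settings.
# task-settings.json (placeholder file name)
{ "FullLoadSettings": { "StopTaskCachedChangesNotApplied": true, "StopTaskCachedChangesApplied": true } }
# Apply the settings to an existing (stopped) task; the ARN is a placeholder
aws dms modify-replication-task --replication-task-arn arn:aws:dms:us-east-1:111122223333:task:EXAMPLETASKARN --replication-task-settings file://task-settings.json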
https://repost.aws/knowledge-center/dms-secondary-objects-target
How do I manage permissions across namespaces for IAM users in an Amazon EKS cluster?
I want to manage user permissions for my AWS Identity and Access Management (IAM) users across namespaces in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
"I want to manage user permissions for my AWS Identity and Access Management (IAM) users across namespaces in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.Short descriptionTo manage user permissions across namespaces in an Amazon EKS cluster, you must:Create an IAM role that can be assumed by members of your organization.Create a Kubernetes role-based access control (RBAC) role (Role) and role binding (RoleBinding) (from the Kubernetes website) for your cluster.Map the IAM roles to the RBAC roles and groups using aws-auth ConfigMap.Note: When a cluster is created, only the Amazon Resource Name (ARN) of the IAM user or role that created the cluster is added to the aws-auth ConfigMap. Also, it is only this ARN that has system:masters permissions. This means that only the cluster creator can add more users or roles to the aws-auth ConfigMap.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Create an IAM role that can be assumed by members of your organizationTo give members of your organization access to a namespace, you must create an IAM role that can be assumed by those members.1.    Create a role to delegate permissions to an IAM user.2.    To verify that a user has permission to assume the IAM role from step 1, configure the AWS CLI. Then, run the following command from that user's workstation:$ aws sts assume-role --role-arn arn:aws:iam::yourAccountID:role/yourIAMRoleName --role-session-name abcde{ "Credentials": { "AccessKeyId": "yourAccessKeyId", "SecretAccessKey": "yourSecretAccessKey", "SessionToken": "yourSessionToken", "Expiration": "2020-01-30T01:57:17Z" }, "AssumedRoleUser": { "AssumedRoleId": "yourAssumedRoleId", "Arn": "arn:aws:iam::yourAccountID:role/yourIAMRoleName" }}Note: Replace yourAccessKeyId, yourSecretAccessKey, yourSessionToken, yourAssumedRoleId, yourAccountID, and yourIAMRoleName with your values.3.    To configure the IAM user's kubectl to always use the role when accessing the Kubernetes API, run the following command to update the kubeconfig file:$ aws eks update-kubeconfig --name yourClusterName --role-arn arn:aws:iam::yourAccountID:role/yourIAMRoleNameNote: Replace yourClusterName, y****ourAccountID, and yourIAMRoleName with your values.Create a Kubernetes RBAC role and role binding for your clusterImportant: The following steps must be completed from a workstation that's configured to access Kubernetes. The user must be a cluster creator or an IAM entity that already has access through the aws-auth ConfigMap. The IAM role created in previous steps hasn't been given access to the cluster yet.You can bind a cluster role (ClusterRole) to a role binding. This is possible because an RBAC role and role binding are Kubernetes namespaced resources. However, you can't bind a role to a cluster role binding (ClusterRoleBinding).1.    To list all built-in cluster roles and bind the cluster role admin to a role binding for that namespace, run the following command:$ kubectl get clusterrole2.    To see the permissions associated with the cluster role admin, run the following command:$ kubectl describe clusterrole adminImportant: To use an existing namespace, you can skip the following step 3. If you choose a different name for the namespace test, replace the values for the namespace parameter in the following steps 4 and 6.3.    
To create the namespace test that grants access to the IAM users as part of the IAM group, run the following command:$ kubectl create namespace test4.    To create a Kubernetes RBAC role, copy the following code into a new YAML file (for example, role.yaml):kind: RoleapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: k8s-test-role namespace: testrules: - apiGroups: - "" - "apps" - "batch" - "extensions" resources: - "configmaps" - "cronjobs" - "deployments" - "events" - "ingresses" - "jobs" - "pods" - "pods/attach" - "pods/exec" - "pods/log" - "pods/portforward" - "secrets" - "services" verbs: - "create" - "delete" - "describe" - "get" - "list" - "patch" - "update"Note: The preceding role allows users to perform all the actions in the verbs section.5.    To create the RBAC role, run the following command:$ kubectl apply -f role.yaml6.    To create a Kubernetes role binding, copy the following code into a new YAML file (for example, rolebinding.yaml):kind: RoleBindingapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: k8s-test-rolebinding namespace: testsubjects:- kind: User name: k8s-test-userroleRef: kind: Role name: k8s-test-role apiGroup: rbac.authorization.k8s.ioNote: The role binding is a namespaced resource that binds the RBAC role in the roleRef section to the user in the subjects section. You don't need to create the user k8s-test-user, because Kubernetes doesn't have a resource type user.7.    To create the RBAC role binding, run the following command:$ kubectl apply -f rolebinding.yamlMap the IAM role to the RBAC role and group using the aws-auth ConfigMap1.    To associate the IAM role yourIAMRoleName with the Kubernetes user k8s-test-user, run the following command:$ eksctl create iamidentitymapping --cluster yourClusterName --arn arn:aws:iam::yourAccountID:role/yourIAMRoleName --username k8s-test-userNote: Replace yourClusterName, yourAccountID, and yourIAMRoleName with your values.Test the access to the namespace1.    To test access to the namespace test, assume the IAM role yourIAMRoleName for a user that you created, and then run the following command:$ kubectl create job hello -n test --image=busybox -- echo "Hello World"Note: The preceding command creates a job by using the RBAC role k8s-test-role that you created earlier.2.    To check the pod and job in the namespace test, run the following commands:$ kubectl get job -n testNAME COMPLETIONS DURATION AGEhello 1/1 4s 15s$ kubectl get pods -n testNAME READY STATUS RESTARTS AGEhello-tpjmf 0/1 Completed 0 2m34sThe IAM users that are part of the IAM group can now use this namespace and deploy their applications.Related informationEnabling IAM user and role access to your clusterFollow"
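As an optional check that isn't part of the original steps, the following hedged sketch uses kubectl impersonation to confirm the mapping behaves as expected. It assumes the k8s-test-user and test namespace from the preceding steps and must be run by an identity that already has cluster-admin access (for example, the cluster creator).
$ kubectl auth can-i create pods --namespace test --as k8s-test-user
yes
$ kubectl auth can-i create pods --namespace default --as k8s-test-user
no
The first command should return yes because the role binding grants access in the test namespace, and the second should return no because no binding exists in other namespaces.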
https://repost.aws/knowledge-center/eks-iam-permissions-namespaces
How do I set up my Application Load Balancer to authenticate users through an Amazon Cognito user pool in another AWS account?
"I want to use Amazon Cognito authentication on my Application Load Balancer, but my user pool is in another AWS account. Can I use a cross-account user pool for authentication?"
"I want to use Amazon Cognito authentication on my Application Load Balancer, but my user pool is in another AWS account. Can I use a cross-account user pool for authentication?Short descriptionCurrently, only Amazon Cognito user pools in the same account are supported by default when configuring your Application Load Balancer for user authentication. However, as a workaround, you can set up your cross-account user pool as an OpenID Connect (OIDC) identity provider (IdP).Follow these instructions to configure your Application Load Balancer in one account ("account B") for authentication through a user pool in another account ("account A").ResolutionCreate an Application Load BalancerIf you haven't done so already, in account B create an Application Load Balancer with an HTTPS listener.Note: The authenticate-cognito and authenticate-oidc rule action types are supported only with HTTPS listeners.Get the DNS name of your Application Load Balancer in account BIn account B, open the Load Balancers page of the Amazon Elastic Compute Cloud (Amazon EC2) console.Select your Application Load Balancer.On the Description tab, copy your load balancer's DNS name. You need this to access your load balancer's endpoint URL for testing later.Create and configure a user pool in account AIn account A, create an Amazon Cognito user pool with an app client. For the app client, be sure to select the Generate client secret check box. For more information, see Prepare to use Amazon Cognito.Note: During user pool creation, configure the settings that you want for production. Some user pool settings can't be changed after you create the user pool. For example, you can't change the standard attributes required for user registration.In the Amazon Cognito console, on the General settings page for your user pool, note the Pool Id. You need this later for getting your user pool's OIDC configuration details.In the left navigation pane, under General settings, choose App clients.On the App clients page, do the following:Choose Show Details.Copy the App client id and App client secret. You need these later for configuring your Application Load Balancer for user authentication.In the left navigation pane, under App integration, choose App client settings.On the App client settings page, do the following:Under Enabled Identity Providers, select the Cognito User Pool check box.For Callback URL(s), enter https://loadBalancerDNSName/oauth2/idpresponse. Or, if you mapped a custom domain to your load balancer using a CNAME record, enter https://CNAME/oauth2/idpresponse.Note: Replace loadBalancerDNSName with the DNS name that you copied from the Amazon EC2 console. If you're using a CNAME record, replace CNAME with your custom domain.For Sign out URL(s), enter a URL where you want your users to be redirected after signing out. For testing, you can enter any valid URL, such as https://example.com/.Under Allowed OAuth Flows, select at least the Authorization code grant check box.Under Allowed OAuth Scopes, select at least the openid check box. The openid scope returns an ID token. Select any additional OAuth scopes according to your requirements.Choose Save changes.In the left navigation pane, under App integration, choose Domain name.Add a domain name for your user pool.For more information, see Configuring a user pool app client and Adding OIDC identity providers to a user pool.Get your user pool's OIDC configuration detailsAccess your user pool's OIDC configuration endpoint. 
You need the configuration details to set up your user pool as an OIDC IdP on your Application Load Balancer.In your browser, enter the following URL:https://cognito-idp.region.amazonaws.com/userPoolId/.well-known/openid-configurationNote: Replace region with the AWS Region of your user pool. (For example, us-east-1.) Replace userPoolId with your user pool's ID that you noted earlier.Copy the JSON response that you see in your browser. Note the values for the following:authorization_endpointissuerscopes_supportedtoken_endpointuserinfo_endpointConfigure your Application Load Balancer in account BIn account B, on the Load Balancers page of the Amazon EC2 console, select your Application Load Balancer.On the Listeners tab, under Rules, choose View/edit rules for your HTTPS listener.In the menu bar, choose the pencil icon (Edit rules).Next to the default rule for your HTTPS listener, choose the pencil icon (Edit Rule).Under THEN, choose Add action, and then do the following:Choose Authenticate.For Authenticate, choose OIDC.For Issuer, enter the issuer value from your user pool's OIDC configuration.For Authorization endpoint, enter the authorization_endpoint value.For Token endpoint, enter the token_endpoint value.For User info endpoint, enter the userinfo_endpoint value.For Client ID, enter the App client id that you copied earlier from the Amazon Cognito console.For Client secret, enter the App client secret that you copied earlier.Expand Advanced settings.For Scope, enter the scopes that you configured for your user pool app client, separated by spaces. You can find the scopes in your user pool's OIDC configuration. For example, if the scopes_supported value in the configuration is ["openid","email","phone","profile"], enter openid email phone profile.Choose the check mark icon.Under THEN, choose Add action, and then do the following:Note: If you can't choose Add action, delete the existing routing action (such as Redirect to) using the trash can icon, and then try again.Choose Forward to.For Forward to, choose one or more target groups.(Optional) Configure Group-level stickiness.Choose the check mark icon.Choose Update. The HTTPS listener's default rule is updated.For more information, see Edit a rule.Test the setupIn your browser, enter either of the following URLs:https://loadBalancerDNSName/https://CNAME/Note: Replace loadBalancerDNSName with the DNS name that you copied earlier from the Amazon EC2 console. Or, replace CNAME with your custom domain.You're redirected to the Amazon Cognito hosted web UI for your user pool. When a user signs in here and is authenticated by the user pool, they're redirected to the target.Related informationGetting started with Application Load BalancersSimplify login with Application Load Balancer built-in authenticationAuthenticate users Using an Application Load BalancerListener rules for your Application Load BalancerOIDC user pool IdP authentication flowFollow"
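If you script the listener configuration instead of using the console, the same authenticate-then-forward behavior can be approximated with the AWS CLI, as in the following hedged sketch. The listener ARN, target group ARN, and angle-bracket values are placeholders that you replace with the OIDC configuration details and app client values gathered above.
aws elbv2 modify-listener --listener-arn <https-listener-arn> --default-actions '[{"Type":"authenticate-oidc","Order":1,"AuthenticateOidcConfig":{"Issuer":"<issuer>","AuthorizationEndpoint":"<authorization_endpoint>","TokenEndpoint":"<token_endpoint>","UserInfoEndpoint":"<userinfo_endpoint>","ClientId":"<app-client-id>","ClientSecret":"<app-client-secret>","Scope":"openid"}},{"Type":"forward","Order":2,"TargetGroupArn":"<target-group-arn>"}]'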
https://repost.aws/knowledge-center/cognito-user-pool-cross-account-alb
How can I use Okta with my AWS Managed Microsoft AD to provide multi-factor authentication for end users connecting to an AWS Client VPN endpoint?
How can I use Okta with my AWS Directory Service for Microsoft Active Directory to provide multi-factor authentication (MFA) for end users who are connecting to an AWS Client VPN endpoint?
"How can I use Okta with my AWS Directory Service for Microsoft Active Directory to provide multi-factor authentication (MFA) for end users who are connecting to an AWS Client VPN endpoint?Short descriptionAWS Client VPN supports the following types of end user authentication:Mutual authenticationMicrosoft Active Directory authenticationDual authentication (Mutual + Microsoft Active Directory-based authentication)The MFA service must be turned on for the AWS Managed Microsoft AD (not directly on the Client VPN). Be sure that your AWS Managed Microsoft AD type supports MFA. MFA functionality is supported by both new and existing Client VPNs.To set up MFA for end users who are connecting to a Client VPN endpoint using Okta:Complete the IT administrator configuration tasks to set up the required services.Then, have each end user complete the end user configuration tasks to establish their secure connection to the Client VPN endpoint.ResolutionNote: The following tasks must be completed by IT administrators, except for the last section that must be completed by end users.Create and configure an AWS Managed Microsoft AD1.    Create an AWS Managed Microsoft AD directory.2.    Join a Windows EC2 instance to the AWS Managed Microsoft AD.This instance is used to install services in the AWS Managed Microsoft AD and to manage users and groups in the AWS Managed Microsoft AD. When launching the instance, be sure that the instance is associated with the AWS Managed Microsoft AD. Also, be sure to add an AWS Identity and Access Management (IAM) role with the "AmazonSSMManagedInstanceCore" and "AmazonSSMDirectoryServiceAcces" policies attached.3.    Install the AWS Managed Microsoft AD services. Then, configure the AWS Managed Microsoft AD users and groups.First, log in to (or use a Remote Desktop Connection to connect to) the instance that you created in step 2 using the following command. Be sure to replace Your Admin password with the Admin password that you created in step 1.User name: Admin@ad_DNS_namePassword: Your Admin passwordThen, install the following services using PowerShell (in Admin mode):install-windowsfeature rsat-ad-tools, rsat-ad-admincenter, gpmc, rsat-dns-server -confirm:$falseNext, create Microsoft AD users and Microsoft AD groups. Then,add your users to their appropriate Microsoft AD groups.Note: These users are the same end users who will connect to the Client VPN service. While creating users in the AWS Managed Microsoft AD, be sure to provide both first and last names. Otherwise, Okta might not import users from the AWS Managed Microsoft AD.Finally, use the following command to get the SID for your Microsoft AD groups. Be sure to replace Your-AD-group-name with your Microsoft AD group name.Get-ADGroup -Identity <Your-AD-group-name>Note: You need the SID to authorize the Microsoft AD users of this group when you configure the Client VPN authorization rules.Install and configure Okta1.    Sign up for an Okta account using your work email address. You'll receive an authorization email with the following details:Okta organization nameOkta homepage URLUsername (Admin_email) Temporary Password2.    Log in using your Okta homepage URL, and then change the temporary password.3.    Install Okta Verify on the IT administrator's mobile device. Follow the in-app prompts to verify your identity.4.    Launch another EC2 Windows instance. This instance is used to configure and manage the Okta Radius application. 
Be sure that the instance is associated with the AWS Managed Microsoft AD, has the correct IAM role, and has internet access.5.    Use Remote Desktop to connect to the instance. Then, log in to Okta (https://<company_name>.okta.com) using your credentials from step 1.6.    Choose Settings, and then choose Downloads. Then, download the Okta Radius Server Agents and AD Agent Installer on your instance.To install the Okta RADIUS Server Agents:Provide the RADIUS shared secret key and the RADIUS port. Be sure to note these values, because you'll use them later to turn on MFA on your AWS Managed Microsoft AD.(Optional) Configure the RADIUS Agent proxy, if applicable.To register this agent with your domain, enter the custom domain that you registered with Okta.sub-domain: company_name (from https://<company_name>.okta.com)After authentication, you're prompted to allow access to the Okta RADIUS Agent. Choose Allow to complete the installation process.To install the Okta AD Agent Installer:Choose the domain that you plan to manage with this agent. Be sure to use the same domain as your Microsoft AD's domain.Select a user who is part of your Microsoft AD (or create a new user). Be sure that this user is part of the Admin group within your Microsoft AD. The Okta Microsoft AD agent runs as this user.After you enter the credentials, you're prompted to authenticate and proceed to install the Microsoft AD agent.(Optional) Configure the RADIUS Agent proxy, if applicable.To register this agent with your domain, enter the custom domain that you registered with Okta.sub-domain: company_name (from https://<company_name>.okta.com)7.    In the same Windows EC2 instance, choose Services. Then, verify that both Okta Radius Server Agents and AD Agent Installer are installed and are in the Running state.Import AD users from your AWS Managed Microsoft AD to Okta1.    Log in to your Okta account using your Okta homepage URL and credentials.2.    From the top navigation bar in Okta, choose Directory, and then choose Directory Integrations.3.    Select your AWS Managed Microsoft AD, and then activate the directory. After it's activated, choose Import, Import Now, and then Full Import.4.    Select the Microsoft AD users and groups that you want to import from your AWS Managed Microsoft AD to Okta.5.    Choose Confirm Assignments, and then select Auto-activate users after confirmation.6.    In your directory, verify the status of your imported users under People. Your users should all be in the Active state. If not, select each individual user and activate them manually.Install the Radius application and assign it to your Microsoft AD users1.    On your Okta homepage, choose Applications, Add Application. Search for Radius Application, and then choose Add.2.    Under Sign-On Options, be sure that Okta performs primary authentication is not selected. For UDP Port, choose the port that you selected during installation of the Okta Radius Server Agents. For Secret Key, choose the key that you selected during installation of the Okta Radius Server Agents.3.    For Application username format, choose AD SAM account name.4.    Assign the Radius application to your Microsoft AD users and groups. Choose Assign. Then, choose Assign to People or Assign to Groups (depending on your use case). Select all of the names of the desired Microsoft AD users or groups. Choose Done.Turn on MFA for your users1.    On your Okta homepage, choose Security, Multifactor, Factor Types.2.    For Okta Verify, choose Okta Verify with Push.3.    
Choose Factor Enrollment, and then choose Add Rule.4.    To assign this MFA rule to the Radius application, choose Applications, Radius Application, Sign On Policy, and Add Rule.5.    Under Conditions, confirm that the rule applies to Users assigned this app. For Actions, choose Prompt for factor.Modify the security group configuration1.    Log in to the AWS Management Console.2.    Choose Security groups.3.    Select the security group for the directory controllers.4.    Edit the outbound rule for the security group of the Microsoft AD to allow UDP 1812 (or the Radius service port) for the destination IP address (private IP address) of your Radius Server. Or, you can allow all traffic, if your use case permits.Turn on MFA on your AWS Managed Microsoft AD1.    Open the AWS Directory Service console.2.    Choose Directory Service, and then choose Directories.3.    Select your directory.4.    Under Networking & security, choose Multi-factor authentication. Then, choose Actions, Enable.5.    Specify the following:RADIUS server DNS name or IP addresses: Enter the private IP address of the EC2 Radius instance.Display label: Enter a label name.Port: Enter the port that you selected during installation of the Okta Radius Server Agents.Shared secret code: Choose the key that you selected while installing the Okta Radius Server Agents.Protocol: Choose PAP.Server timeout: Set the desired value.Max RADIUS request retries: Set the desired value.Create the Client VPN endpoint1.    After the AWS Managed Microsoft AD and MFA are set up, create the Client VPN endpoint using the Microsoft AD that MFA is turned on for.2.    Download the new client configuration file and distribute it to your end users.Note: You can download the client configuration file from the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the API command.3.    Confirm that the client configuration file includes the following parameters:auth-user-passstatic-challenge "Enter MFA code " 1Note: If you're using dual authentication (for example, mutual authentication + AD-based authentication), also be sure to add the client certificate and client key to the configuration file.End user configuration tasks1.    Make sure that the Okta Verify mobile application is installed on your mobile device.2.    Log in to the Okta homepage using the following credentials:Okta homepage URL: https://<company_name>.okta.comUsername: End user's AD namePassword: End user's AD password3.    Follow the provided instructions to set up MFA.4.    Install the AWS Client VPN for Desktop tool.Note: You can also connect to the Client VPN endpoint using any other standard OpenVPN-based client tool.5.    Create a profile using the client configuration file provided by your IT administrator.6.    To connect to the Client VPN endpoint, enter your Microsoft AD user credentials when prompted. Then, enter the MFA code generated by your Okta Verify application.Follow"
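For reference, the outbound RADIUS rule described in the security group section above can also be added with the AWS CLI. This is a hedged sketch; the security group ID and RADIUS server IP address are placeholders.
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol udp --port 1812 --cidr 10.0.1.25/32
The group ID is the security group of the directory controllers, and the CIDR is the private IP address of the EC2 instance that runs the Okta RADIUS agent.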
https://repost.aws/knowledge-center/client-vpn-use-okta-for-mfa-to-endpoint
How can I troubleshoot missing CloudWatch logs for API Gateway REST APIs?
"I have activated Amazon CloudWatch logging for Amazon API Gateway, but I couldn't find any logs. How do I get the CloudWatch logs for troubleshooting API Gateway REST APIs?"
"I have activated Amazon CloudWatch logging for Amazon API Gateway, but I couldn't find any logs. How do I get the CloudWatch logs for troubleshooting API Gateway REST APIs?Short descriptionYou can use CloudWatch logging can be used to help debug issues related to request execution or client access to your API. CloudWatch logging includes execution logging and access logging.For execution logging, API Gateway manages the CloudWatch logs including creating log groups and log streams. For access logging, you can create your own log groups or choose existing log groups.Not all client-side errors rejected by API Gateway are logged into execution logs. For example, a client making an API request to an incorrect resource path of your REST API returns a 403 "Missing Authentication Token" response. This type of response isn't logged into execution logs. Use CloudWatch access logging to troubleshoot client-side errors.For more information, see CloudWatch log formats for API Gateway.API Gateway might not generate logs for:413 Request Entity Too Large errors.Excessive 429 Too Many Requests errors.400 series errors from requests sent to a custom domain that has no API mapping.500 series errors caused by internal failures.For more information, see Monitoring REST APIs.ResolutionVerify API Gateway permissions for CloudWatch loggingTo activate CloudWatch Logs, you must grant API Gateway permission to read and write logs to CloudWatch for your account. The AmazonAPIGatewayPushToCloudWatchLogs managed policy has the required permissions.Create an AWS Identity and Access Management (IAM) role with apigateway.amazonaws.com as its trusted entity. Then, attach the following policy to the IAM role, and set the IAM role ARN on the cloudWatchRoleArn property for your AWS Account:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:DescribeLogGroups", "logs:DescribeLogStreams", "logs:PutLogEvents", "logs:GetLogEvents", "logs:FilterLogEvents" ], "Resource": "*" } ]}Make sure that:AWS Security Token Service (AWS STS) is activated for your AWS Region. For more information, see Managing AWS STS in an AWS Region.The IAM role is activated for all AWS Regions where you want to activate CloudWatch logs.For more information, see Permissions for CloudWatch logging.Verify API Gateway logging settingsVerify that the CloudWatch execution or access logging settings are activated for API Gateway.Note: You can activate execution logging and access logging independent of each other.1.    Open the API Gateway console.2.    In the navigation pane, choose APIs.3.    Choose your API, and then choose Stages.4.    In Stages, choose your stage, and then choose the Logs/Tracing tab.5.    In CloudWatch Settings, verify the following:Enable CloudWatch Logs is selected.        Log level is set to INFO. Note: If Log level is set to ERROR, only requests for errors in API Gateway are logged. Successful API requests aren't logged.Log full requests/responses data and Enable Detailed CloudWatch Metrics are selected for additional log data. Note: It's a best practice not to enable Log full requests/responses data for production APIs which can result in logging sensitive data.6.    In Custom Access Logging, verify that Enable Access Logging is selected.Verify logging method and override if necessaryBy default, all API resources use the same configurations as their stage. 
This setting can be overridden to have different configurations for each method if you don't want to inherit from the stage.1.    Open the API Gateway console.2.    In the navigation pane, choose APIs.3.    Choose your API, and then choose Stages.4.    In Stages, expand your stage name, and then choose your HTTP method. For example, GET.5.    In Settings, choose Override for this method.6.    In CloudWatch settings, make any additional log changes for your use case if needed, and then choose Save Changes.For more information, see Setting up CloudWatch logging for a REST API in API Gateway.Related informationHow do I find API Gateway REST API errors in my CloudWatch logs?How can I set up access logging for API Gateway?How do I turn on CloudWatch Logs for troubleshooting my API Gateway REST API or WebSocket API?Follow"
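Outside of the console steps above, the same settings can also be checked or applied with the AWS CLI. The following is a hedged sketch, with the IAM role ARN, REST API ID, and stage name as placeholders.
# Set the account-level CloudWatch role that API Gateway uses for logging
aws apigateway update-account --patch-operations op=replace,path=/cloudwatchRoleArn,value=arn:aws:iam::111122223333:role/example-apigateway-cloudwatch-role
# Turn on INFO-level execution logging for every method in a stage
aws apigateway update-stage --rest-api-id a1b2c3d4e5 --stage-name prod --patch-operations 'op=replace,path=/*/*/logging/loglevel,value=INFO'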
https://repost.aws/knowledge-center/api-gateway-missing-cloudwatch-logs
How do I view server activity for an Amazon RDS for MySQL DB instance?
How can I view the server activity for an Amazon Relational Database Service (Amazon RDS) DB instance that is running MySQL?
"How can I view the server activity for an Amazon Relational Database Service (Amazon RDS) DB instance that is running MySQL?ResolutionYou can use server activity to help identify the source of performance issues. You can review the state of the InnoDB storage engine, identify running queries, or find deadlocks on the DB instance.You must have MySQL PROCESS server administration privileges to see all the threads running on a MySQL DB instance. If you don't have admin privileges, SHOW PROCESSLIST shows only the threads associated with the MySQL account that you're using. You must also have MySQL PROCESS server admin privileges to use SHOW ENGINE. And, you need MYSQL PROCESS server admin privileges to view information about the state of the InnoDB storage engine.To view the server activity for a DB instance, follow these steps:1.    Turn on the general and slow query logs for your MySQL DB instance.2.    Connect to the DB instance running the MySQL database engine.3.    Run these commands:SHOW FULL PROCESSLIST\GSHOW ENGINE INNODB STATUS\GNote: To view more than the first 100 characters of each statement, use the FULL keyword.4.    Check which transactions are waiting and which transactions are blocking the transactions that are waiting. Run one of these commands depending on the version of Amazon RDS for MySQL you're running:For versions 5.6 and 5.7:SELECT r.trx_id waiting_trx_id, r.trx_mysql_thread_id waiting_thread, r.trx_query waiting_query, b.trx_id blocking_trx_id, b.trx_mysql_thread_id blocking_thread, b.trx_query blocking_query FROM information_schema.innodb_lock_waits w INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;For version 8.0ON r.trx_id = w.requesting_engine_transaction_id;INNER JOIN information_schema.innodb_trx rON b.trx_id = w.blocking_engine_transaction_idINNER JOIN information_schema.innodb_trx bFROM performance_schema.data_lock_waits wb.trx_query blocking_queryb.trx_mysql_thread_id blocking_thread,b.trx_id blocking_trx_id,r.trx_query waiting_query,r.trx_mysql_thread_id waiting_thread,r.trx_id waiting_trx_id,Note: It's a best practice to gather these outputs at short, consecutive intervals (for example, 60 seconds). Reviewing multiple outputs can provide a broader view of the state of the engine. This helps you troubleshoot problems with performance.Related informationMonitoring metrics in an Amazon RDS instanceFollow"
https://repost.aws/knowledge-center/rds-mysql-server-activity
How do I use a PostgreSQL database as the external metastore for Hive on Amazon EMR?
I want to use an Amazon Relational Database Service (Amazon RDS) for PostgreSQL DB instance as my external metastore for Apache Hive on Amazon EMR.
"I want to use an Amazon Relational Database Service (Amazon RDS) for PostgreSQL DB instance as my external metastore for Apache Hive on Amazon EMR.ResolutionBefore you begin, note the following:This solution assumes that you already have an active PostgreSQL database.If you're using Amazon EMR release version 5.7 or earlier, download the PostgreSQL JDBC driver. Then, add the driver to the Hive library path ( /usr/lib/hive/lib). Amazon EMR release versions 5.8.0 and later come with the PostgreSQL JDBC driver in the Hive library path.To configure a PostgreSQL DB instance as the external metastore for Hive, do the following:1.    Create an Amazon RDS for PostgreSQL DB instance and create the database. Note that you can do this while creating the DB instance from Amazon RDS in the AWS Console. You can specify the database name in the Initial database name field under Additional configuration. Or, you can connect the PostgreSQL database instance and then create the database.2.    Modify the DB instance security group to allow connections on port 5432 between your database and the ElasticMapReduce-master security group. For more information, see VPC security groups.3.    Launch an Amazon EMR cluster without an external metastore. Amazon EMR uses the default MySQL database in this case.4.    Connect to the master node using SSH.5.    Replace the Hive configuration with the following properties.Replace the following values in the example:mypostgresql.testabcd1111.us-west-2.rds.amazonaws.com with the endpoint of your DB instancemypgdb with the name of your PostgreSQL databasedatabase_username with the DB instance usernamedatabase_password with the DB instance password[hadoop@ip-X-X-X-X ~]$ sudo vi /etc/hive/conf/hive-site.xml<property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:postgresql://mypostgresql.testabcd1111.us-west-2.rds.amazonaws.com:5432/mypgdb</value> <description>PostgreSQL JDBC driver connection URL</description> </property> <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>org.postgresql.Driver</value> <description>PostgreSQL metastore driver class name</description> </property> <property> <name>javax.jdo.option.ConnectionUserName</name> <value>database_username</value> <description>the username for the DB instance</description> </property> <property> <name>javax.jdo.option.ConnectionPassword</name> <value>database_password</value> <description>the password for the DB instance</description> </property>6.    Run the following commands to create the PostgreSQL schema:[hadoop@ip-X-X-X-X ~]$ cd /usr/lib/hive/bin/[hadoop@ip-X-X-X-X bin]$ ./schematool -dbType postgres -initSchema Metastore connection URL: jdbc:postgresql://mypostgresql.testabcd1111.us-west-2.rds.amazonaws.com:5432/mypgdbMetastore Connection Driver : org.postgresql.DriverMetastore connection User: testStarting metastore schema initialization to 2.3.0Initialization script hive-schema-2.3.0.postgres.sqlInitialization script completedschemaTool completed7.    
Stop and start Hive services so that the updated settings take effect:[hadoop@ip-X-X-X-X bin]$ sudo initctl list |grep -i hivehive-server2 start/running, process 11818hive-hcatalog-server start/running, process 12708[hadoop@ip-X-X-X-X bin]$ sudo stop hive-server2hive-server2 stop/waiting[hadoop@ip-X-X-X-X bin]$ sudo stop hive-hcatalog-serverhive-hcatalog-server stop/waiting[hadoop@ip-X-X-X-X bin]$ sudo start hive-server2hive-server2 start/running, process 18798[hadoop@ip-X-X-X-X bin]$ sudo start hive-hcatalog-serverhive-hcatalog-server start/running, process 19614You can choose to automate the steps 5 through 7 in the preceding process by running the following bash script (hive_postgres_emr_step.sh) as a step job in the EMR cluster.## Automated Bash script to update the hive-site.xml and restart Hive## Parametersrds_db_instance_endpoint='<rds_db_instance_endpoint>'rds_db_instance_port='<rds_db_instance_port>'rds_db_name='<rds_db_name>'rds_db_instance_username='<rds_db_instance_username>'rds_db_instance_password='<rds_db_instance_password>'############################# Copying the original hive-site.xmlsudo cp /etc/hive/conf/hive-site.xml /tmp/hive-site.xml############################# Changing the JDBC URLold_jdbc=`grep "javax.jdo.option.ConnectionURL" -A +3 -B 1 /tmp/hive-site.xml | grep "<value>" | xargs`sudo sed -i "s|$old_jdbc|<value>jdbc:postgresql://$rds_db_instance_endpoint:$rds_db_instance_port/$rds_db_name</value>|g" /tmp/hive-site.xml############################# Changing the Driver nameold_driver_name=`grep "javax.jdo.option.ConnectionDriverName" -A +3 -B 1 /tmp/hive-site.xml | grep "<value>" | xargs`sudo sed -i "s|$old_driver_name|<value>org.postgresql.Driver</value>|g" /tmp/hive-site.xml############################# Changing the database userold_db_username=`grep "javax.jdo.option.ConnectionUserName" -A +3 -B 1 /tmp/hive-site.xml | grep "<value>" | xargs`sudo sed -i "s|$old_db_username|<value>$rds_db_instance_username</value>|g" /tmp/hive-site.xml############################# Changing the database password and descriptionconnection_password=`grep "javax.jdo.option.ConnectionPassword" -A +3 -B 1 /tmp/hive-site.xml | grep "<value>" | xargs`sudo sed -i "s|$connection_password|<value>$rds_db_instance_password</value>|g" /tmp/hive-site.xmlold_password_description=`grep "javax.jdo.option.ConnectionPassword" -A +3 -B 1 /tmp/hive-site.xml | grep "<description>" | xargs`new_password_description='<description>the password for the DB instance</description>'sudo sed -i "s|$old_password_description|$new_password_description|g" /tmp/hive-site.xml############################# Moving hive-site to backupsudo mv /etc/hive/conf/hive-site.xml /etc/hive/conf/hive-site.xml_bkupsudo mv /tmp/hive-site.xml /etc/hive/conf/hive-site.xml############################# Init Schema for Postgres/usr/lib/hive/bin/schematool -dbType postgres -initSchema############################# Restart Hive## Check Amazon Linux version and restart HiveOS_version=`uname -r`if [[ "$OS_version" == *"amzn2"* ]]; then echo "Amazon Linux 2 instance, restarting Hive..." 
sudo systemctl stop hive-server2 sudo systemctl stop hive-hcatalog-server sudo systemctl start hive-server2 sudo systemctl start hive-hcatalog-serverelif [[ "$OS_version" == *"amzn1"* ]]; then echo "Amazon Linux 1 instance, restarting Hive" sudo stop hive-server2 sudo stop hive-hcatalog-server sudo start hive-server2 sudo start hive-hcatalog-serverelse echo "ERROR: OS version different from AL1 or AL2."fiecho "--------------------COMPLETED--------------------"Be sure to replace the following values in the script:rds_db_instance_endpoint with the endpoint of your DB instancerds_db_instance_port with the port of your DB instancerds_db_name with the name of your PostgreSQL databaserds_db_instance_username with the DB instance user namerds_db_instance_password with the DB instance passwordUpload the script to Amazon S3. You can run the script as a step job using the Amazon EMR Console, AWS Command Line Interface (AWS CLI), or the API. To use the Amazon EMR console to run the script, do the following:1.    Open the Amazon EMR console.2.    On the Cluster List page, select the link for your cluster.3.    On the Cluster Details page, choose the Steps tab.4.    On the Steps tab, choose Add step.5.    In the Add step dialog box, retain the default values for Step type and Name.6.    For JAR location, enter the following:command-runner.jar7.    For Arguments, enter the following:bash -c "aws s3 cp s3://example_bucket/script/hive_postgres_emr_step.sh .; chmod +x hive_postgres_emr_step.sh; ./hive_postgres_emr_step.sh"Replace the S3 location in the command with the location where you stored the script.8.    Choose Add to run the step job.After the step job is completed, do the following to verify the Hive configuration updates:1.    Log in to the Hive shell and create a Hive table.Note: Be sure to replace test_postgres in the example with the name of your Hive table.[hadoop@ip-X-X-X-X bin]$ hiveLogging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j2.properties Async: truehive> show databases;OKdefaultTime taken: 0.569 seconds, Fetched: 1 row(s)hive> create table test_postgres(a int,b int);OKTime taken: 0.708 seconds2.    Install PostgreSQL:[hadoop@ip-X-X-X-X bin]$ sudo yum install postgresql3.    Connect to the PostgreSQL DB instance using the command line.Replace the following values in the command:mypostgresql.testabcd1111.us-west-2.rds.amazonaws.com with the endpoint of your DB instancemypgdb with the name of your PostgreSQL databasedatabase_username with the DB instance user name[hadoop@ip-X-X-X-X bin]$ psql --host=mypostgresql.testabcd1111.us-west-2.rds.amazonaws.com --port=5432 --username=database_username --password --dbname=mypgdb4.    When prompted, enter the password for the DB instance.5.    Run the following command to confirm that you can access the Hive table that you created earlier:mypgdb=> select * from "TBLS"; TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER | RETENTION | SD_ID | TBL_NAME | TBL_TYPE | VIEW_EXPANDED_TEXT | VIEW_ORIGINAL_TEXT | IS_REWRITE_ENABLED --------+-------------+-------+------------------+--------+-----------+-------+---------------+---------------+--------------------+--------------------+-------------------- 1 | 1555014961 | 1 | 0 | hadoop | 0 | 1 | test_postgres | MANAGED_TABLE | | | f(1 row)Your Amazon EMR cluster is now using the PostgreSQL database as the external metastore for Hive.Related informationConfiguring an external metastore for HiveConnecting to a DB instance running the PostgreSQL database engineFollow"
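To submit the same step from the AWS CLI instead of the console, the following hedged sketch can be used; the cluster ID is a placeholder, and the S3 path is the example bucket used above.
aws emr add-steps --cluster-id j-EXAMPLECLUSTERID --steps '[{"Type":"CUSTOM_JAR","Name":"ConfigureHivePostgresMetastore","ActionOnFailure":"CONTINUE","Jar":"command-runner.jar","Args":["bash","-c","aws s3 cp s3://example_bucket/script/hive_postgres_emr_step.sh .; chmod +x hive_postgres_emr_step.sh; ./hive_postgres_emr_step.sh"]}]'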
https://repost.aws/knowledge-center/postgresql-hive-metastore-emr
How do I resolve the trace quota error in X-Ray?
"I want to resolve the error "This trace has reached its maximum allocated quota. For more information, see AWS X-Ray endpoints and quotas"."
"I want to resolve the error "This trace has reached its maximum allocated quota. For more information, see AWS X-Ray endpoints and quotas".Short descriptionThe following scenarios might cause trace documents to exceed the allowed limit in AWS X-Ray:You sent an increased number of batched segments for a trace where the trace document size exceeds 500 KB size.You sent non-batched segments for a trace where trace document size exceeds 100 KB size.You added custom segments, metadata, and annotations that increased the trace document size.The upper limit of an X-Ray trace document size changes dynamically as per the number of segments that you send together. This is due to the limit exceeded trace feature. For a high number of segments that you send together in a batch that's attached to a trace, the upper limit is 500 KB. For individual segments that you send with a time gap that's attached to a trace, the upper limit is 100 KB.The faster you send a trace (the more segments that you batch together and send), the more the compression efficiency increases. The slower you send a trace (send segments individually with a time gap), the more the trace splits into multiple revisions. Also, the slower you send a trace, the more it consumes storage capacity for cache in the backend. Traces that last longer produce more duplicates and results in X-Ray collecting less data.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, then make sure that you’re using the most recent AWS CLI version.View the trace in the X-Ray console, or run the following command to confirm that the trace document exceeds the size limit:aws xray batch-get-traces --trace-ids <EXAMPLE_TRACE_ID>Note: Replace EXAMPLE_TRACE_ID with your trace's ID.X-Ray collects the following information for a larger trace:{ "Id": "<EXAMPLE_TRACE_ID>", "Duration": 187.859, "LimitExceeded": true, "Segments": [ ... ] .... }Note: If the LimitExceeded parameter is true, then you exceeded the trace or segment quota.Increased number of batched segments for a trace where trace document size exceeds 500 KB sizeThis scenario occurs when you integrate Step Functions with X-Ray. When you integrate Step Functions with X-Ray, you can't customize what X-Ray does or doesn't trace. When you pass the trace ID through all the AWS Lambda functions, segment data gets added to the trace ID.If you're using Step Functions, then complete the following steps:Go to Step Functions, and then turn off active tracing.Pass the trace ID across Lambda functions only for critical workflows, and don't exceed the trace upper limit.If Lambda is receiving the trace header from upstream services, then remove the PutTraceSegments permission from the Lambda execution role. The upper limit is dynamically adjusted.If you aren't using Step Functions, then complete the following steps:Debug your code to check if you're passing the same trace ID for different requests.Break the trace. To do this, don't pass the trace ID in the invocation to downstream services.Create a new trace ID from the breaking point.Note: If you create a trace ID for every application, then your trace count increases. For easier tracing, keep the critical application workflows as part of one trace.For more information, see AWS X-Ray and Step Functions.Non-batched segments for a trace where trace document size exceeds 100 KB sizeFor this scenario, use the solution from the preceding section.Break traces for the new upper limit of 100 KB. 
Or, batch more segments in the application to increase the upper limit to 500 KB. Batching segments is supported only with the OpenTelemetry SDK. If you're using the X-Ray SDK, then change the way that the application sends segments.Added custom segments, metadata, and annotations that increased the document sizeTo reduce the trace document size, don't add extra custom segments to the same trace ID. Add custom segments only for necessary workflows. Also, to reduce the trace document size, reduce the metadata and annotations in the traces.Follow"
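A hedged sketch for spot-checking several traces at once with the batch-get-traces command shown above; the trace ID is the same placeholder used earlier, and the --query expression only trims the output.
aws xray batch-get-traces --trace-ids <EXAMPLE_TRACE_ID> --query 'Traces[].[Id,LimitExceeded,Duration]' --output table
Any row where LimitExceeded is True is a trace that hit the quota and needs one of the fixes described in this article.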
https://repost.aws/knowledge-center/x-ray-trace-quota-error
Why can't I delete a security group attached to my Amazon VPC?
I'm getting errors when trying to delete a security group for my Amazon Virtual Private Cloud (Amazon VPC).
"I'm getting errors when trying to delete a security group for my Amazon Virtual Private Cloud (Amazon VPC).ResolutionThe security group is a default security groupIf you try to delete the default security group, then you get the following error:"error: Client.CannotDelete"All VPCs have a default security group. If you don't specify a different security group when you launch the instance, then a default security group is automatically associated with your instance. You can't delete a default security group. But, you can change the default security group's rules. For more information, see Default security groups for your VPCs.A rule in another security group references the security groupIf a rule in another security group references the security group, then you receive the following error:"An error occurred (DependencyViolation) when calling the DeleteSecurityGroup operation: resource sg-xyz has a dependent object"You can't delete a security group that's referenced by a security group rule. You must remove the reference to delete the security group. To modify security group rules, see Security group rules.For example, security group A (sg-A) has a rule that references security group B (sg-B). To delete security group sg-B, you must first remove the rule that references sg-B. Complete the following steps to remove the rule that references the security group that you want to delete:1.    Open the Amazon VPC console.2.    In the navigation pane, choose Security Groups.3.    Select the security group that you want to update.4.    Choose Actions, Edit inbound rules or Actions, Edit outbound rules, depending on your use case.5.    Choose Delete for the rule that you want to delete.6.    Choose Save rules.A security group that's in another Amazon VPC with an established peering connection might also reference this security group. To delete the security group, either remove the reference or delete the VPC peering connection.Complete the following steps:1.    Open the Amazon VPC console.2.    In the navigation pane, choose Peering Connections.3.    Select the VPC peering connection, and then choose Actions, Delete VPC Peering Connection.4.    In the confirmation dialog box, choose Yes, delete.Note: Use the DescribeSecurityGroupReferences API to describe the VPCs on the other side of a VPC peering connection that reference the security groups that you're deleting.The security group is associated with an instance that's in the Running or Stopped stateTo determine if the security group is assigned to an instance, complete the following steps:1.    Open the Amazon Elastic Compute Cloud (Amazon EC2) console.2.    In the navigation pane, choose Instances.3.    In the search bar that's in the content pane, enter Client filter.4.    From the dropdown menu, choose Instance state (client).5.    Choose Instance state (client): running.6.    Repeat steps 3–5. Then, choose Instance state (client): stopped.7.    In the filtered list, choose either Security Group ID or Security Group Name. Then, choose the security group ID or security group name. Any instances that are assigned to the security group appear in the filtered instance list.Note: To change the security group that's assigned to an instance, see Work with security groups.The security group is associated with a network interfaceYou can't delete a security group that's associated with a requester-managed network interface. Requester-managed network interfaces are automatically created for managed resources, such as Application Load Balancer nodes. 
Services and resources, such as AWS Lambda, Amazon FSx, Redis, and Memcached, have security groups that are always attached to the elastic network interface. To delete or detach these elastic network interfaces, you must delete the resource that the network interface represents. The AWS service then automatically detaches and deletes the network interface for you.If your interface is attached to AWS managed resources, then you might receive the following errors when deleting these types of security groups.Example error message:"Error detaching network interface. eni-xxxxxxxx:Network interface 'eni-xxxxxxxx' is currently in use"To resolve this error, complete the following steps:1.    Open the Amazon EC2 console.2.    In the navigation pane, choose Network Interfaces.3.    Search for the ID of the elastic network interface that you're detaching or deleting.4.    Select the elastic network interface, and then choose the Details tab.5.    Important: Review the Description to find which resource the elastic network interface is attached to.6.    If you're no longer using the corresponding AWS service, then first delete the service. The elastic network interface is automatically removed from your VPC.You can't delete a security group that's associated with a network interface that's used on VPC endpoints. If you try to delete the security group, then you might get an error similar to the following one:"An error occurred (DependencyViolation) when calling the DeleteSecurityGroup operation: resource sg-xyz has a dependent object"To delete the security group, remove or replace the security group from the interface endpoint:1.    Open the Amazon VPC console.2.    In the navigation pane, choose Endpoints, and then select the interface endpoint.3.    Choose Actions, Manage security groups.4.    Select or deselect the security groups as required, and then choose Save.Note: Run the following command in the AWS Command Line Interface (AWS CLI) to find network interfaces that are associated with a security group. Replace <group-id> with your security group's ID and <region> with your AWS Region.aws ec2 describe-network-interfaces --filters Name=group-id,Values=<group-id> --region <region> --output jsonReview the command output. If the output is empty as shown in the following example, then no resources are associated with the security group:Example output:{ "NetworkInterfaces": []}Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.You're not authorized to perform the DeleteSecurityGroup operationIf you receive the following error, then you might not have the correct permissions to delete security groups:"Failed to delete security groups. An unknown error happened. You are not authorized to perform "DeleteSecurityGroup" operation"1.    Check the AWS CloudTrail logs for DeleteSecurityGroup API calls. If the following error message appears in the logs, then the error is related to the AWS Identity and Access Management (IAM) permissions:"errorMessage: You are not authorized to perform this operation."2.    Verify that the DeleteSecurityGroup action is added in AWS IAM policies.3.    Check with your organization to make the necessary changes in their service control policies (SCPs), and then change the permission for the user. 
If you're not the primary account owner, then ask the primary account owner to change the SCPs.Note: An SCP restricts permissions for IAM users and roles in member accounts, including the member account's root user. Permissions blocked at any level above an account, either implicitly or explicitly (using a Deny), apply to all users and roles in the affected account. Even if the account administrator attaches the AdministratorAccess IAM policy with */* permissions to the user, the permissions blocked by the SCP remain blocked.For more information, see SCP effects on permissions and Work with security groups: https://docs.aws.amazon.com/vpc/latest/userguide/security-groups.html#working-with-security-groupsFollow"
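As a quicker way to find what still references a security group before you delete it, you can run AWS CLI checks similar to the following sketch. The security group ID and Region are placeholders; replace them with your own values, and run the last command only if you use VPC peering.
# Find other security groups whose inbound or outbound rules reference the security group
aws ec2 describe-security-groups --filters Name=ip-permission.group-id,Values=sg-0123456789abcdef0 --query 'SecurityGroups[].GroupId' --region us-east-1
aws ec2 describe-security-groups --filters Name=egress.ip-permission.group-id,Values=sg-0123456789abcdef0 --query 'SecurityGroups[].GroupId' --region us-east-1
# Find references from security groups in peered VPCs
aws ec2 describe-security-group-references --group-id sg-0123456789abcdef0 --region us-east-1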
https://repost.aws/knowledge-center/troubleshoot-delete-vpc-sg
How can I manage static IP addresses on my Lightsail instances using AWS CLI commands?
I want to detach my static IP address from my Amazon Lightsail instance and attach it to a new Lightsail instance. How can I do this using AWS Command Line Interface (AWS CLI) commands?
"I want to detach my static IP address from my Amazon Lightsail instance and attach it to a new Lightsail instance. How can I do this using AWS Command Line Interface (AWS CLI) commands?Short descriptionFor a list of Amazon Lightsail AWS CLI commands, see the AWS CLI Command Reference and the Amazon Lightsail API Reference.Important: Keep in mind the following when using AWS CLI commands:If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.JSON is the default AWS CLI output. You can use the default, or append --output json to the commands to receive output as shown in the following examples. For more information, see Controlling command output from the AWS CLI.For general information on solving AWS CLI errors, please refer Why am I receiving errors when running AWS CLI commands?The AWS CLI output displays timestamps in Unix Epoch time. Use one of the following methods to convert the timestamp into UTC:macOS:Remove the decimal point from the timestamp and any digits to the right of the decimal point, and then run the following command:# date -r 1602175741 -uThu Oct 8 16:49:01 UTC 2020Linux:Run the following command:# date -d @1602175741.603 -uThu Oct 8 16:49:01 UTC 2020Windows:Convert the timestamp using a converter, such as epochconverter.com.ResolutionDetaching the static IP address from an existing Lightsail instanceRun the detach-static-ip command to detach the static IP address from the instance. The following example detaches the static IP address from an instance in the eu-west-1 Region. Replace the --static-ip-name and --region with the appropriate values for your request.# aws lightsail detach-static-ip --static-ip-name StaticIpForTestLightsailInstance1 --region eu-west-1{ "operations": [ { "id": "c86e552e-c21a-4cdf-aa68-05fb20574e8b", "resourceName": "StaticIpForTestLightsailInstance1", "resourceType": "StaticIp", "createdAt": 1602182597.168, "location": { "availabilityZone": "all", "regionName": "eu-west-1" }, "isTerminal": true, "operationDetails": "TestLightsailInstance1", "operationType": "DetachStaticIp", "status": "Succeeded", "statusChangedAt": 1602182597.168 }, { "id": "4b9dcaa7-be3a-4dfd-8ac0-32f0238c0833", "resourceName": "TestLightsailInstance1", "resourceType": "Instance", "createdAt": 1602182597.17, "location": { "availabilityZone": "eu-west-1a", "regionName": "eu-west-1" }, "isTerminal": true, "operationDetails": "StaticIpForTestLightsailInstance1", "operationType": "DetachStaticIp", "status": "Succeeded", "statusChangedAt": 1602182597.17 } ]}Attaching the static IP address to a new Lightsail instance1.    Run the attach-static-ip command to attach a static IP address to a new instance from backup. 
Replace --static-ip-name, --instance-name, and --region with the appropriate values for your request.# aws lightsail attach-static-ip --static-ip-name StaticIpForTestLightsailInstance1 --instance-name RestoredTestLightsailInstance1-New --region eu-west-1{ "operations": [ { "id": "192c4917-c332-49c8-88ab-60484a42c98f", "resourceName": "StaticIpForTestLightsailInstance1", "resourceType": "StaticIp", "createdAt": 1602182686.46, "location": { "availabilityZone": "all", "regionName": "eu-west-1" }, "isTerminal": true, "operationDetails": "RestoredTestLightsailInstance1-New", "operationType": "AttachStaticIp", "status": "Succeeded", "statusChangedAt": 1602182686.46 }, { "id": "fb93c012-e3a2-4908-8746-01a4ae018440", "resourceName": "RestoredTestLightsailInstance1-New", "resourceType": "Instance", "createdAt": 1602182686.463, "location": { "availabilityZone": "eu-west-1a", "regionName": "eu-west-1" }, "isTerminal": true, "operationDetails": "StaticIpForTestLightsailInstance1", "operationType": "AttachStaticIp", "status": "Succeeded", "statusChangedAt": 1602182686.463 } ]}2.    Run the get-instances command to verify that the static IP address is assigned to your instance.# aws lightsail get-instances --region eu-west-1 --query 'instances[].{name:name,createdAt:createdAt,blueprintId:blueprintId,bundleid:bundleId,blueprintName:blueprintName,publicIpAddress:publicIpAddress,InstanceID:supportCode}' --output table----------------------------------------------------------------------------------------------------------------------------------------------------------------------| GetInstances |+----------------------------------+------------------+----------------+------------+-----------------+------------------------------------------+-------------------+| InstanceID | blueprintId | blueprintName | bundleid | createdAt | name | publicIpAddress |+----------------------------------+------------------+----------------+------------+-----------------+------------------------------------------+-------------------+| 11178xxxxxxx/i-09f6xxxx| wordpress | WordPress | large_2_0 | 1602182374.625 | RestoredTestLightsailInstance1-New | 52.210.xx.xx |+----------------------------------+------------------+----------------+------------+-----------------+------------------------------------------+-------------------+Related informationHow can I manage my Lightsail instance using AWS CLI commands?How can I manage my snapshots and create backups for my Lightsail instances using AWS CLI commands?Enabling or disabling automatic snapshots for instances or disks in Amazon LightsailLightsail docsFollow"
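If you want to confirm which instance a static IP address is currently attached to before you detach it, the following get-static-ips query is one way to check. The Region is an example value; adjust it for your setup.
aws lightsail get-static-ips --region eu-west-1 --query 'staticIps[].{name:name,ipAddress:ipAddress,isAttached:isAttached,attachedTo:attachedTo}' --output table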
https://repost.aws/knowledge-center/lightsail-aws-cli-static-ips
What do I need to know about the IP addresses assigned to my Amazon RDS DB instances?
I'm looking for information related to the IP addresses assigned to my Amazon Relational Database Services (Amazon RDS) instances.
"I'm looking for information related to the IP addresses assigned to my Amazon Relational Database Services (Amazon RDS) instances.ResolutionWhen Amazon RDS creates a DB instance in a virtual private cloud (VPC), a network interface is assigned to your DB instance using an IP address from your DB subnet group. Two different types of IP addresses are assigned to your instance based on the configuration of your instance.Private IP address: When you launch a DB instance inside a VPC, the DB instance has a private IP address for traffic inside the VPC. This IP address isn't accessible from the internet. This IP address is used for connecting to the instance from the resources inside the same VPC. By default, every Amazon RDS DB instance has a private IP address. This IP address is assigned from the range that you defined in your DB subnet group.Public IP address: The public IP address is accessible from the internet. This IP address is used for connecting to the instance from the resources outside of the VPC or from internet. A public IP address is assigned to your DB instance only when the configuration setting Publicly accessible is selected for the instance.I've selected the Publicly accessible setting for my RDS instance, but a public IP address isn't assigned to the instanceTypically this happens when the subnets in your DB subnet group are private subnets.To resolve this issue, do the following:Open the Amazon RDS console.In the navigation pane, choose Subnet groups.Choose the subnet group that the DB instance is associated with.You can view the subnet groups with the VPC ID and subnet IDs of the subnets associated.Open the Amazon VPC console.In the navigation pane, choose Internet Gateways.Check whether your VPC is attached to an internet gateway.If your VPC isn't attached to an internet gateway, then create and attach an internet gateway to your VPC.In the navigation pane, choose Route tables.Choose the route table associated with your VPC.Choose the Subnet associations tab. Then, verify that all the subnets in your DB subnet group are attached to the route table.If the subnets aren't associated with the route table, choose Edit subnet associations. Then, select the subnet to be associated with the route table. For more information, see Associate a subnet with a route table.Choose the Routes tab. Then, verify that all the subnets in your DB subnet group have 0.0.0.0/0 in the Destination field and the internet gateway ID in the Target field.If the subnets have different values for the Destination and Target fields, then edit the route to include the preceding values. For more information, see Add and remove routes from a route table.Open the Amazon RDS console.In the navigation pane, choose Databases.Select the DB instance that you want to modify, and then choose Modify.Under Connectivity, expand the Additional configuration section, and then select Publicly accessible.Choose Continue.Choose Modify DB instance.Note: Be sure that your subnet group doesn't include a combination of public and private subnets. This combination might result in situations, such as the primary instance running in the public subnet while the secondary is running in a private subnet in a Multi-AZ configuration. These situations result in connectivity issues after a failover.I want to find the private and public IP addresses for my Amazon RDS DB instanceIn Amazon RDS, the IP addresses are dynamic while the endpoints are static. Therefore, it's a best practice to use endpoints to connect to your instance. 
Every Amazon RDS instance has an endpoint. To find the endpoint, also called the DNS Name, of your instance, do the following:Open the Amazon RDS console.In the navigation pane, choose Databases.Choose the database instance for which you want to find the IP address.Choose the Connectivity & security tab.You can see the endpoint information under the Endpoint & port section.When you try to connect to your DB instance from resources within the same VPC, your RDS endpoint automatically resolves to the private IP address. When you connect to your DB instance from either outside the VPC or the internet, the endpoint resolves to a public IP address.You can also find the IP address of your RDS instance by running either of the following commands:nslookup example-rds-endpoint-or-dig example-rds-endpointYou might see an output similar to the following when you run the nslookup command for an RDS DB instance:Output from an Amazon Elastic Compute Cloud (Amazon EC2) instance in the same VPC resolves to a private IP address:[ec2-user@ip-172-xx-xx-xx ~]$ nslookup myoracledb.xxxxx.us-east-1.rds.amazonaws.comServer: xxx.xxx.xxx.xxxAddress: xxx.xxx.xxx.xxx#53Non-authoritative answer: myoracledb.xxxxx.us-east-1.rds.amazonaws.com canonical name = ec2-3-232-189-42.compute-1.amazonaws.com.Name: ec2-3-232-189-42.compute-1.amazonaws.comAddress: 172.31.8.27Output from an Amazon EC2 instance in a different VPC resolves to the public IP address:[ec2-user@ip-172-xx-xx-xx ~]$ nslookup myoracledb.xxxxx.us-east-1.rds.amazonaws.comServer: xxx.xxx.xxx.xxxAddress: xxx.xxx.xxx.xxx#53Non-authoritative answer: myoracledb.xxxxx.us-east-1.rds.amazonaws.com canonical name = ec2-3-232-189-42.compute-1.amazonaws.com.Name: ec2-3-232-189-42.compute-1.amazonaws.comAddress: 3.232.189.42The IP addresses of my DB instances aren't consistentBecause the IP address of your instance is dynamic, you can't assign a static IP address or an Elastic IP address to your instance. The IP address assigned to an RDS DB instance changes under one or more of the following conditions:The instance is stopped and started again.Note: When the instance is rebooted, the IP addresses don't change.The underlying host is replaced because of circumstances such as instance failure and DB instance class update.A hardware maintenance happened on the instance.The instance is in a Multi-AZ environment, and a failover happened.The operating system of the DB instance undergoes software patching.A manual failover of the DB instance is initiated using a reboot with failover.The DB engine undergoes a major or minor version upgrade.There is an outage in the Availability Zone of the instance.Related informationWorking with a DB instance in a VPCFollow"
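As a quick check from the AWS CLI, you can confirm whether a DB instance is publicly accessible and retrieve its endpoint with a describe-db-instances query similar to the following sketch. The instance identifier is a placeholder.
aws rds describe-db-instances --db-instance-identifier mydbinstance --query 'DBInstances[0].{Endpoint:Endpoint.Address,Port:Endpoint.Port,PubliclyAccessible:PubliclyAccessible,SubnetGroup:DBSubnetGroup.DBSubnetGroupName}'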
https://repost.aws/knowledge-center/rds-ip-address-issues
I am not able to launch EC2 instances with encrypted AMIs or encrypted volumes using Amazon EC2 Auto Scaling
"Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling failed to launch instances using encrypted Amazon Machine Image (AMI) or encrypted volumes. The AWS Identity and Access Management (IAM) identities (users, roles) used to create the Amazon EC2 Auto Scaling has administrator permissions."
"Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling failed to launch instances using encrypted Amazon Machine Image (AMI) or encrypted volumes. The AWS Identity and Access Management (IAM) identities (users, roles) used to create the Amazon EC2 Auto Scaling has administrator permissions.Short descriptionAmazon EC2 Auto Scaling uses service-linked roles for the required permissions to call other AWS services. The permissions for SLR are hardcoded by AWS and can't be changed. By default, permissions provided to Amazon EC2 Auto Scaling SLR don't include permissions to access AWS KMS keys.You can use AWS managed keys or customer managed keys to encrypt Amazon Elastic Block Store (Amazon EBS) volumes or AMIs with Amazon EC2 Auto Scaling. Amazon EC2 Auto Scaling doesn't need additional permissions to use AWS managed keys. However, Amazon EC2 Auto Scaling SLR must have additional permissions with customer managed keys.ResolutionFollow these instructions depending on if Amazon EC2 Auto Scaling is using the customer managed key present in the same or external AWS account.Note:The following examples use the default Amazon EC2 Auto Scaling SLR AWSServiceRoleForAutoScaling, but you can also create a unique role name.If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.AWS KMS grants must be created from the account that owns the Amazon EC2 Auto Scaling group, not the AWS KMS account. For more information, see Using grants.Amazon EC2 Auto Scaling is using the customer managed key present in the same AWS accountFollow the instructions for changing a key policy and add the following example statement:Note: Replace 123456789012 with the account ID where the Amazon EC2 Auto Scaling group is deployed.{ "Sid": "Allow service-linked role use of the KMS", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::123456789012:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling" ] }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*"},{ "Sid": "Allow attachment of persistent resources", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::123456789012:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling" ] }, "Action": [ "kms:CreateGrant" ], "Resource": "*", "Condition": { "Bool": { "kms:GrantIsForAWSResource": true } }}Amazon EC2 Auto Scaling is using the customer managed key present in the external AWS account1.    Follow the instructions for changing a key policy. Modify the key policy to grant permissions to the IAM entity present in the external AWS account for performing the CreateGrant API action:{ "Sid": "Allow external account 111122223333 use of the KMS", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:root" ] }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*"},{ "Sid": "Allow attachment of persistent resources in external account 111122223333", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:root" ] }, "Action": [ "kms:CreateGrant" ], "Resource": "*"}2.    
Use the AWS CLI command create-grant with the credentials of an IAM entity present in the AWS account that owns the Amazon EC2 Auto Scaling group.Note: Replace 444455556666 with the account ID where the KMS key is present.$ aws kms create-grant --key-id arn:aws:kms:us-west-2:444455556666:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d --grantee-principal arn:aws:iam::111122223333:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling --operations Decrypt GenerateDataKeyWithoutPlaintext ReEncryptFrom ReEncryptTo CreateGrantNote: Be sure that the IAM entity has permissions to perform the CreateGrant API action. If CreateGrant permissions are missing, then add the following statement to the IAM entity's attached policy:{ "Sid": "AllowCreationOfGrantForTheKMSinExternalAccount444455556666", "Effect": "Allow", "Action": "kms:CreateGrant", "Resource": "arn:aws:kms:us-west-2:444455556666:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d"}Related informationService-linked roles for Amazon EC2 Auto ScalingRequired KMS key policy for use with encrypted volumesFollow"
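To verify that the grant was created for the service-linked role, you can list the grants on the key from a principal that has kms:ListGrants on it. This is a sketch; the key ARN and Region are placeholders taken from the example above.
aws kms list-grants --key-id arn:aws:kms:us-west-2:444455556666:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d --region us-west-2 --query 'Grants[].{Grantee:GranteePrincipal,Operations:Operations}'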
https://repost.aws/knowledge-center/kms-launch-ec2-instance
How can I troubleshoot database connection errors on WordPress-based applications hosted in Lightsail?
I'm receiving one or both of the following errors when connecting to my WordPress-based application:"Error establishing a database connection.""One or more database tables are unavailable. The database may need to be repaired."How can I resolve these errors?
"I'm receiving one or both of the following errors when connecting to my WordPress-based application:"Error establishing a database connection.""One or more database tables are unavailable. The database may need to be repaired."How can I resolve these errors?Short descriptionThe error "Error establishing a database connection" might occur for the following reasons:There are corrupted database tables.The remote database connection is disabled.The database service is down.There is insufficient space on your volume.There are incorrect login credentials in your WordPress Configuration file.ResolutionCorrupted database tablesOpen the wp-admin page of your website (for example, example.com/wp-admin) in the browser and look for the error "One or more database tables are unavailable. The database may need to be repaired.". If you see this error, then you're getting the "Error establishing database connection" error due to corrupted database tables. To repair corrupted tables, do the following:1.    Access the wp-config.php file using a text editor, such as the vi editor.$ sudo vi wp-config.php2.    Add the following line in your wp-config.php file. Make sure to add the line immediately before the line "That’s all, stop editing! Happy blogging".define('WP_ALLOW_REPAIR' ,true);3.    After adding the preceding setting to the file, access the following URL and then run Repair Database:/wp-admin/maint/repair.php (for example, example.com/wp-admin/maint/repair.php)4.    After running the database repair, remove the line of code you added to your wp-config.php file. If you don't remove this line, then anyone can run the repair on your database.Disabled remote database connectionSometimes databases reside on a remote database server. If the database server doesn't allow remote connections from the instance hosting the website, then you can't connect to the database. To troubleshoot this, do the following:1.    Check the configuration file wp-config.php for the DB_HOST value. If the host isn't localhost or 127.0.0.1, then the database resides in a remote server, as shown in the following example:define('DB_HOST', '192.168.22.9');2.    Try to telnet from the server to the remote server on port 3306. If you can't connect, then remote connections aren't allowed in the database configuration on the remote server. Or, there's a firewall on the remote server that's blocking the connection. Contact the external database owner or support for assistance allowing connections from your Lightsail instance.It's a best practice to store the website database in a Lightsail managed database for high availability and security.Database service is downNote: The following file paths and commands might change depending on whether your Lightsail WordPress instance uses MySQL or MariaDB. Also, the file paths and commands vary depending on whether the instance uses native Linux system packages (Approach A), or if it's a self-contained installation (Approach B). To identify the database server type and which approach to follow, run the following commands:test ! -f "/opt/bitnami/common/bin/openssl" && echo "Approach A" || echo "Approach B"test -d /opt/bitnami/mariadb && echo "MariaDB" || echo "MySQL"1.    If you verified that there are no table corruption and no remote database connection issues, and WordPress still can't connect to the database, then your database server might be down. This might happen due to database configuration issues, heavy traffic on a server, low disk space, low available memory, and so on. 
Check the database service status using the following command:MySQL database serversudo /opt/bitnami/ctlscript.sh status mysqlMariaDB database serversudo /opt/bitnami/ctlscript.sh status mariadb2.    If the preceding command shows that database is in the stopped state, then try starting the database service using the following command:MySQL database serversudo /opt/bitnami/ctlscript.sh start mysqlMariaDB database serversudo /opt/bitnami/ctlscript.sh start mariadb3.    If you're still not able to start the database service and you're seeing errors during the start process, then check the database service logs to identify the root cause and troubleshoot the issue. The main database service log file is located at one of the following locations in your Lightsail WordPress Instance:MySQL database server following Approach A: /opt/bitnami/mysql/logs/mysqld.logMySQL database server following Approach B: /opt/bitnami/mysql/data/mysqld.logMariaDB database server following Approach A: /opt/bitnami/mariadb/logs/mysqld.logMariaDB database server following Approach B: /opt/bitnami/mariadb/data/mysqld.logDatabase performance and connectivity can be affected by low disk space and/or low available memory. Check these resources using the df and free commands.Insufficient space on your volumeIf the free disk space on your volume is 100% or just below 100%, then the database service might go down.1.    Run the following command:$ sudo df -hThe preceding command lists the amount of free disk space, as shown in the following example:Filesystem Size Used Avail Use% Mounted ondevtmpfs 1.9G 0 1.9G 0% /devtmpfs 1.9G 0 1.9G 0% /dev/shmtmpfs 1.9G 400K 1.9G 1% /runtmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup/dev/nvme0n1p1 8.0G 8.0G 0G 100% /tmpfs 389M 0 389M 0% /run/user/10002.    If the command output shows that you don't have enough available space, you can resize your instance to bigger size. Or, you can delete unnecessary files from the server to create free space.3.    After increasing the free disk space, restart the database service.Incorrect login credentials in your WordPress settingsWordPress needs a specific database connection string, which includes a user name, password, and host to access the database. If any of those items have changed, then WordPress can't access the database.1.    To verify you're using the correct connection string, get the connection string details DB_NAME, DB_HOST, DB_USER and DB_PASSWORD in the wp-config.php file.2.    Access the database from a terminal using the connection string. Make sure to replace DB_NAME, DB_HOST AND DB_USER with the values you got in step 1.sudo mysql 'DB_NAME' -h 'DB_HOST' -u 'DB_USER' -pEnter password: ********Note: The password isn't displayed as you enter it so that it won't be visible to other users.3.    Press the ENTER key after entering the password.If you're getting an Access Denied error when using the preceding command, then it usually means that the credentials are incorrect.If you're using a remote DB host, then add the correct connection string in the wp-config.php file. If the database is in the same server, then make sure that DB_NAME is bitnami_wordpress and DB_USER is bn_wordpress.To reset the database user password, do the following:1.    Use the following command to access /home/bitnami/bitnami_credentials. Make a note of the root database password.sudo cat /home/bitnami/bitnami_credentials2.    Log in to MySQL/MariaDB shell using the following command, then enter the password you got from the preceding command. 
If you're not able to log in to the shell using the database root password, then reset the password (MySQL or MariaDB).sudo mysql -u root -pEnter password: ********3.    Inside the MySQL or MariaDB shell, run the following query to make sure that database bitnami_wordpress exists:show databases;4.    Run the following query to make sure that the database user bn_wordpress exists:SELECT user FROM mysql.user;5.    Reset the password of the database user "bn_wordpress" using the following query.Note: Replace PASSWORD with the password you got from the wp-config.php file.MySQL databasealter user 'bn_wordpress'@'localhost' identified by 'PASSWORD';alter user 'bn_wordpress'@'127.0.0.1' identified by 'PASSWORD';MariaDB databasealter user 'bn_wordpress'@'%' identified by 'PASSWORD';Note: If none of the preceding resolutions work, you can restore your instance using a backup snapshot.Follow"
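As a first-pass health check before digging into the logs, you can run commands similar to the following from an SSH session on the Lightsail instance. This is a sketch; the exact service names and log paths vary between MySQL and MariaDB and between Approach A and Approach B, as described above.
sudo /opt/bitnami/ctlscript.sh status    # confirm whether the database service is running
sudo df -h /                             # check for a full root volume
free -m                                  # check available memory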
https://repost.aws/knowledge-center/lightsail-wordpress-fix-database-errors
Why can’t I create an Amazon VPC peering connection with a VPC in another AWS account?
"I'm trying to create an Amazon Virtual Private Cloud (Amazon VPC) peering connection between my Amazon VPC and a VPC that's associated with another AWS account. However, I get the error: "The connection failed due to incorrect VPC-ID, Account ID, or overlapping CIDR range.""
"I'm trying to create an Amazon Virtual Private Cloud (Amazon VPC) peering connection between my Amazon VPC and a VPC that's associated with another AWS account. However, I get the error: "The connection failed due to incorrect VPC-ID, Account ID, or overlapping CIDR range."ResolutionCheck Amazon VPC settingsWhen you create your VPC peering connection with a VPC in another AWS account, make sure to check the following settings:The account ID for VPC (Accepter) is entered correctly.The correct VPC ID for VPC (Accepter) is selected.None of the primary CIDR blocks or secondary CIDR blocks for your selected VPC (Requester) and VPC (Accepter) overlap.Note: If your VPCs have overlapping CIDR blocks, then you can’t create a VPC peering connection. You must delete and recreate one of the VPCs with a non-overlapping CIDR block.Implement a best practice for overlapping CIDR blocksUse a Private NAT gateway and Application Load Balancer through AWS Transit Gateway to establish private communication between two VPCs that have overlapping CIDR blocks.Complete the following steps.First VPC:Add a secondary nonidentical CIDR.Create additional private subnets in the primary and secondary blocks of the CIDRs.Create a Private NAT Gateway in the secondary CIDR subnet to establish a private IP address from the subnet range.Second VPC:Add a secondary CIDR to the primary overlapping CIDR.Create another identical private subnet in the primary CIDR.In the newly created private subnet, launch an application hosting Amazon Elastic Compute Cloud (Amazon EC2) instance.Create two more private subnets in different Availability Zones.Create an internal Application Load Balancer, and then select the subnets from step 4.Configure the Application Load Balancer, and register the instance by selecting the launched instance as Target. Note: The Application Load Balancer's targets must be the workloads in the primary CIDR's private subnet that the workloads from the first VPC need to access. Also, make sure that the registered targets are healthy.Transit Gateway:Create a Transit Gateway, and choose Disabled for the default route table propagation.Create Transit Gateway VPC attachments for each VPC by associating appropriate subnets in each Availability Zone.Enter routes in the Transit Gateway route table to route destination CIDRs to the VPC attachments.VPC route table:For first VPC:Edit the route table of the workload in the private subnet.Add a static route for the secondary destination CIDR through the Private NAT Gateway.Create or modify a NAT subnet route table where the Private NAT gateway is launched.Add a route entry to the secondary CIDR of the destination VPC through Transit Gateway.For second VPC:Edit the route table of the Application Load Balancer subnets.Add routes for the return traffic of the first VPC's secondary CIDR through Transit Gateway.Connectivity check:Connect to the workload's instance of the first VPC using SSH.Test the connectivity of the target instance in the second VPC through Application Load Balancer.Important: Make sure that the Availability Zone or the subnets in the VPC attachment align with the Availability Zone or subnets of the NAT Gateway.Limitations:Routing traffic to a Private NAT Gateway and another VPC makes the on-premises network unidirectional. 
When the on-premises network is unidirectional, resources on the other side of the connections can't use a NAT gateway.You can route your NAT gateway to Transit Gateway only for outbound private communication between two VPCs or a VPC and your on-premises network.Because NAT Gateways perform only source NAT, the preceding setup allows only the source to initiate a connection to the destination VPC. If you need bidirectional traffic from the second overlapping VPC to the first, then you must reverse the setup. Create the NAT Gateway in the second VPC, and have Application Load Balancer target the instance in the first VPC.Follow"
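Before you create the peering connection, you can compare the CIDR blocks of both VPCs from the AWS CLI to rule out overlap. The VPC ID and Region in this sketch are placeholders; for the accepter VPC in another account, run the same command with that account's credentials.
aws ec2 describe-vpcs --vpc-ids vpc-0aaa1111bbb22222c --query 'Vpcs[].{VpcId:VpcId,Cidrs:CidrBlockAssociationSet[].CidrBlock}' --region us-east-1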
https://repost.aws/knowledge-center/vpc-peering-connection-error
How can I troubleshoot issues failing over to a secondary cluster in my Amazon Aurora global database due to minor version mismatch?
I want to troubleshoot minor version mismatch between primary and secondary Amazon Aurora PostgreSQL-Compatible clusters in the same Aurora global cluster.
"I want to troubleshoot minor version mismatch between primary and secondary Amazon Aurora PostgreSQL-Compatible clusters in the same Aurora global cluster.Short descriptionAlthough both the primary and secondary clusters are on the same Aurora PostgreSQL-Compatible versions, you might experience version mismatch. This happens because the primary and secondary clusters are running on differentpatches of the same version. So you aren't able to failover to the secondary cluster in your Amazon Aurora global database, and you get an error similar to this:Target cluster <DB cluster ID> must be on the same engine version as the current primary cluster.ResolutionRun the following command to check what versions your primary and secondary Aurora clusters are running:>> select AURORA_VERSION();To bring both clusters to the same version, check if there are any pending maintenance actions on the cluster:>> aws rds describe-pending-maintenance-actions --resource-identifier <ARN of the cluster>If any pending maintenance action is available, apply the updates required on the cluster.Log in to the Amazon Relational Database Service (Amazon RDS) console.From the navigation pane, choose Databases.Choose the DB cluster that you want to update.For Actions, choose Upgrade Now or Upgrade at Next Window, depending on when you want to apply updates.For more information, seeApplying updates for a DB cluster.Note: The Aurora version upgrade process causes downtime. It's a best practice to perform the upgrade during your planned maintenance window. After you apply the patch, you can confirm that your primary and secondary Aurora clusters are running the same versions by running this command again:>> select AURORA_VERSION();After you have confirmed that both versions of your Amazon Aurora cluster are the same, initiate a manual failover in your Aurora global database.Related informationUsing failover in an Amazon Aurora global databaseaurora_versionFollow"
https://repost.aws/knowledge-center/aurora-postgresql-global-mismatch
How does elastic resize work in Amazon Redshift?
I want to perform an elastic resize on my Amazon Redshift cluster. How does elastic resize work and what are some considerations for using elastic resize?
"I want to perform an elastic resize on my Amazon Redshift cluster. How does elastic resize work and what are some considerations for using elastic resize?Short descriptionAmazon Redshift allows you to migrate to a certain number of nodes during a cluster resize. By default, Amazon Redshift aims to maintain the same number of data slices in the target cluster. Slice mapping reduces the time required to resize a cluster. During the slice-mapping process, Amazon Redshift reshuffles the cluster data slices to the new compute nodes in your target cluster.For classic resize, all rows are copied to the cluster, mapping them to a slice based on distribution settings. For elastic resize, you can copy the whole slice of data to a node where the slice is mapped.If you plan to perform an elastic resize on your Amazon Redshift cluster, consider the following:Elastic resize doesn't sort tables or reclaim disk space. Run VACUUM to sort tables and reclaim disk space.Elastic resize is available only for Amazon Redshift clusters that use the EC2-VPC platform.Elastic resize often requires less time to complete than a classic resize. To compare, classic resize operations provision a new cluster while copying over data from your source cluster. The classic resize operation first distributes the data to the new nodes according to distribution style. Then, it runs the ANALYZE command to update table statistics. This means that a classic resize operation takes more time to complete than an elastic resize operation on your Amazon Redshift cluster.ResolutionHow elastic resize worksIn Amazon Redshift, elastic resize can work differently depending on the target node type. Check whether the target node type is the same as the source node type.To check your node type, sign in to the Amazon Redshift console. From the navigation menu, choose Clusters. The Clusters page indicates the node type under each cluster name. Or, you can use the describe-clusters AWS Command Line Interface (AWS CLI) command to obtain more information about your Amazon Redshift cluster:aws redshift describe-clusters --region <Cluster Region>Example 1: Target node type is same as the existing node typeAmazon Redshift automatically redistributes data to new nodes when you resize a cluster using elastic resize (add or remove nodes without changing the node type).Unlike classic resize (which provisions a new cluster and transfers data to it), elastic resize doesn't create a new cluster. Elastic resize typically completes within a few minutes. You can expect a small increase in your query runtime while elastic resize completes data redistribution in the background.Note: Your Amazon Redshift cluster is temporarily unavailable for a few minutes during the metadata migration. For more information about the Amazon Redshift elastic resize process, see Elastic resize.Example 2: Target node type is different from the existing node typeIf your node type changed, Amazon Redshift creates a snapshot first. A new target cluster is then provisioned with the latest data from the snapshot, and data is transferred to the new cluster in the background. During the data transfer, your Amazon Redshift cluster operates in read-only mode and all writes are blocked. When the resize nears completion, Amazon Redshift automatically updates the new cluster’s endpoint to match your existing cluster’s endpoint. 
All connections to the original cluster are then closed.Node count limitations for DC2, DS2, and RA3 node typesIf you perform an elastic resize on your Amazon Redshift cluster, then note the following limitations for DC2, DS2, and RA3 node types:For dc2.large or ds2.xlarge node types, use half or double the current number of nodes. For example, a cluster with 6 nodes can be resized to either 12 nodes or 3 nodes.For dc2.8xlarge, ds2.8xlarge, or ra3.xlplus node types, use half or up to double the number of nodes. For example, a cluster with 6 nodes can be resized to 3, 4, 5, 7, 8, 9, 10, 11, or 12 nodes.For ra3.16xlarge or ra3.4xlarge node types, use a quarter of or up to four times the current number of nodes. For example, an ra3 cluster with 8 nodes can be resized to 2, 3, 4, 5, 6, 7, or 9 through 32 nodes.Elastic resize best practicesConsider the following best practices when performing an elastic resize on your Amazon Redshift cluster:Activate automated snapshots or take a manual snapshot before you start the elastic resize process, especially if you're resizing a new cluster. By default, manual snapshots are retained indefinitely, even after you delete your cluster. An automated snapshot, however, is periodically taken by Amazon Redshift and then deleted at the end of a retention period.Use the describe-node-configuration-options AWS CLI command to get possible node configurations for a resize operation.Note: If you receive an error when running your AWS CLI command, be sure that you’re using the most recent version of the CLI.VACUUM the cluster before resizing. Elastic resize doesn't automatically delete rows that are marked for deletion.Use the resize-cluster command to easily specify all node configuration changes. You can also resize your cluster using the Amazon Redshift console.Here's an example of how to use the describe-node-configuration-options command:aws redshift describe-node-configuration-options --action-type resize-cluster --cluster-identifier <Cluster Name> --region <Cluster Region>Here's an example of how to use the resize-cluster command:aws redshift resize-cluster --cluster-identifier <Cluster Name> --cluster-type multi-node --node-type <Target Node Type> --number-of-nodes <Number of Target Nodes> --no-classic --region <Cluster Region>Additional considerationsReview these additional considerations when performing an elastic resize on your Amazon Redshift cluster:A cluster snapshot is required for an elastic resize. You can manage snapshots using the console or the Amazon Redshift CLI and API.Note: If you receive an error when running the AWS CLI command, be sure to use the most recent version of the CLI.After you initiate an elastic resize operation in Amazon Redshift, the operation can't be canceled. Wait until the resize operation completes before performing another resize operation or cluster reboot.The new node configuration must have enough storage for existing data. Even when you add nodes, your new configuration might not have enough storage because of the way that the data is redistributed. For more information about storage space, see Why does a table in an Amazon Redshift cluster consume more or less disk storage space than expected?Performing an elastic resize on your Amazon Redshift cluster can cause data skew between nodes from an uneven distribution of data slices. If you observe data skew in your Amazon Redshift cluster, perform a classic resize instead.Perform a classic resize on your Amazon Redshift cluster if it's the best option for your use case. 
For example, you can perform a classic resize if you're resizing to a single-node cluster. An elastic resize allows you to add or remove nodes from the cluster by preserving the original configuration's slice count. However, it can introduce performance variation. If you want your node slices to match the number of slices in your target node type, use a classic resize. For more information, see Resizing clusters in Amazon Redshift.When an elastic resize is started and a snapshot operation is underway, the resize can fail if the snapshot doesn’t complete within a few minutes.Elastic resize can't be used to resize from or to single node clusters.For reserved nodes, such as DS2 reserved nodes, you can upgrade to RA3 reserved nodes when you perform a resize. You can upgrade the node when you perform an elastic resize or use the console to restore from a snapshot.Related informationResizing clusters in Amazon RedshiftHow do I resize an Amazon Redshift cluster?Scale your Amazon Redshift clusters up and down in minutes to get the performance you need, when you need itFollow"
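To monitor an elastic resize after you start it, you can poll describe-resize, which reports the resize status and progress for the cluster. The placeholders below follow the same convention as the commands above.
aws redshift describe-resize --cluster-identifier <Cluster Name> --region <Cluster Region>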
https://repost.aws/knowledge-center/redshift-elastic-resize
How can I get notifications to check for Amazon S3 IP address changes?
How can I get notifications to check for Amazon Simple Storage Service (Amazon S3) IP address changes?
"How can I get notifications to check for Amazon Simple Storage Service (Amazon S3) IP address changes?ResolutionCreate an Amazon Simple Notification Service (Amazon SNS) subscription, and then subscribe to the following SNS topic:arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChangedNote: For more information on this SNS topic, see Subscribe to AWS public IP address changes via Amazon SNS.After the SNS subscription is configured, you receive a notification every time there's a change to the JSON file that contains AWS IP address ranges. To check for updated Amazon S3 IP addresses, you must filter the JSON file.Related informationAWS IP address rangesFollow"
https://repost.aws/knowledge-center/s3-ip-address-change-notification
Why am I unable to read or update a KMS key policy in AWS KMS?
"I want to update a AWS KMS key policy in AWS Key Management Service (AWS KMS). I verified that I have administrator permissions for my AWS Identity and Access Management (IAM) identities (users, groups, and roles), but I can't read or update the KMS key policy."
"I want to update a AWS KMS key policy in AWS Key Management Service (AWS KMS). I verified that I have administrator permissions for my AWS Identity and Access Management (IAM) identities (users, groups, and roles), but I can't read or update the KMS key policy.Short descriptionIAM principals must have the API action permission GetKeyPolicy to read a key policy, and PutKeyPolicy to update a policy. These permissions are granted either directly with the key policy, or a combination of the key and IAM policies. For more information, see Managing access to AWS KMS keys.The default KMS key IAM policy contains a statement similar to the following:{ "Sid": "Enable IAM User Permissions", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:root" }, "Action": "kms:*", "Resource": "*"}The IAM entities for the AWS account 111122223333 can perform any AWS KMS actions allowed in the attached policy. If the entities can't perform API actions such as GetKeyPolicy or PutKeyPolicy even if allowed permissions in their attached policies, then the statement "Enable IAM User Permissions" might have changed.ResolutionVerify IAM policy permissionsMake sure that your IAM entities have permission to read and update a KMS key similar to the following IAM policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*", "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*", "kms:Delete*", "kms:TagResource", "kms:UntagResource", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion" ], "Resource": "arn:aws:kms:*:111122223333:key/*" } ]}Use CloudTrail event history1.    Open the AWS CloudTrail console, and then choose Event history.2.    Choose the Lookup attributes dropdown list, and then choose Event name.3.    In the search window, enter PutKeyPolicy.4.    Open the most recent PutKeyPolicy event.5.    In Event record, copy the policy, and paste it into your favorite text editor.6.    Parse the policy into a readable format.7.    In the IAM policy Sid "Allow access for Key Administrators", note the IAM identity administrators similar to the following:{ "Sid": "Allow access for Key Administrators", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:role/Administrator" ] },Key administrators can then be used to regain access to the key.Use Athena queriesIf the CloudTrail event history event is past 90 days, you can use Amazon Athena to search through CloudTrail logs.For instructions, see Using the CloudTrail console to create an Athena table for CloudTrail logs.For more information, see How do I automatically create tables in Athena to search through CloudTrail logs?Related informationBest practices for managing AWS access keysAWS KMS keysFollow"
https://repost.aws/knowledge-center/update-kms-policy
Why aren't my Amazon EC2 Reserved Instances applying to my AWS billing in the way that I expected?
"I purchased an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance (RI), but it's being used in a way that I didn't expect."
"I purchased an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance (RI), but it's being used in a way that I didn't expect.ResolutionIf your RI isn't applying to any usage at all, see Why isn't my Reserved Instance applying to my AWS billing?The most common reason that an RI discount isn't applying to the instance in the way that you expect is that the terms of the RI don't match any instances running on your account. The RI that you purchase must match the platform, type, size (or size footprint), Availability Zone (if applicable), and tenancy of a running On-Demand Instance for it to apply. Check the terms of your RI and the instance it applies to.If your RI is active and matches the specifications of a running On-Demand Instance, then use Cost Explorer to analyze your spending and usage. You can use AWS Cost Explorer to generate the RI Utilization and RI coverage reports. For more information, see How do I view my Reserved Instance utilization and coverage?If the RI terms do match the instance that you intended it to apply to, consider the following:RI discounts are applied differently in an organization's consolidated bill, depending on whether RI sharing is turned on or off. For more information, see How is the pricing benefit of a Reserved Instance applied across an organization's consolidated bill?For size-flexible RIs, the billing benefit isn't necessarily applied to an instance exactly matching the terms of the RI before it matches a complementary grouping of smaller instances. For example, if you run an m3.large instance and two m3.medium instances, and you purchase a Reserved Instance for an m3.large instance, the RI billing benefit might apply to either of these groups of instances. To see which instances are covered by RIs on your account, use one of the bill reporting options here: How do I find out if my Amazon EC2 Reserved Instances are being fully used?Related informationHow can I find out if my Amazon EC2 Reserved Instance provides regional benefit or size flexibility?Follow"
https://repost.aws/knowledge-center/reserved-instance-applying-why
How do I increase logging levels for the AWS SCT when using AWS DMS?
How can I increase logging levels for the AWS Schema Conversion Tool (AWS SCT) when using the AWS Database Migration Service (AWS DMS)?
"How can I increase logging levels for the AWS Schema Conversion Tool (AWS SCT) when using the AWS Database Migration Service (AWS DMS)?ResolutionInstall the AWS SCT, and then confirm that you installed the required database drivers on your system.After installation, check that you see the AWS Schema Conversion Tool in your applications folder.Open the application by choosing the icon.Note: If you see an update notification, you can choose to update before or after your project is complete.If an auto-project window opens, close the window, and then create a project manually.Choose Settings, and then select Global settings.From the Global settings window, choose Logging.In the Debug mode field, select True.From the Message level section, you can modify the following types of logs:GeneralLoaderParserPrinterResolverTelemetryConverterBy default, all Message levels are set to Info. For each of the Message level types, you can choose from these levels of logging:Trace (most detailed logging)DebugInfoWarningError (least detailed logging)CriticalMandatoryChoose Apply to modify other settings for your project. Choose OK to close the Global settings window.Related informationHow do I install the AWS SCT and database drivers for Windows to convert the database schema for my AWS DMS task?Working with AWS DMS TasksTroubleshooting migration tasks in AWS Database Migration ServiceWhat is the AWS Schema Conversion Tool?Follow"
https://repost.aws/knowledge-center/increase-logging-sct-dms
How can I get temporary credentials for an IAM Identity Center user using the AWS CLI?
I want to get temporary credentials for an AWS IAM Identity Center (successor to AWS Single Sign-On) user.
"I want to get temporary credentials for an AWS IAM Identity Center (successor to AWS Single Sign-On) user.Short descriptionConfiguring a named profile to use IAM Identity Center creates a JSON file in the $ cd ~/.aws/sso/cache directory. The JSON file contains a JSON Web Token (JWT) used to get the temporary security credentials with the get-role-credentials API call. The access token is valid for 8 hours as noted in the expiresAt timestamp in the JSON file. Expired tokens must re-authenticate using the get-role-credentials API call.ResolutionYou can use the AWS Command Line Interface (AWS CLI) to get the temporary credentials for an IAM Identity Center user.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Open the JSON file and copy the access token:$ cat 535a8450b05870c9045c8a7b95870.json{"startUrl": "https://my-sso-portal.awsapps.com/start", "region": "us-east-1", "accessToken": "eyJlbmMiOiJBM….”, "expiresAt": "2020-06-17T10:02:08UTC"}Run the AWS CLI command get-role-credentials to get the credentials for the IAM Identity Center user similar to the following:$ aws sso get-role-credentials --account-id 123456789012 --role-name <permission-set-name> --access-token eyJlbmMiOiJBM…. --region <enter_the_same_sso_region_same_in_the_JSON_file>Example output:{ "roleCredentials": { "accessKeyId": "ASIA*************”, "secretAccessKey": “**********************************”, "sessionToken": “****************************************”, "expiration": 1592362463000 }}Then, follow the instructions to configure the credentials as environment variables.Related informationHow do I use IAM Identity Center permission sets?Follow"
https://repost.aws/knowledge-center/sso-temporary-credentials
How can I reference a resource in another stack from an AWS CloudFormation template?
I want to reference a resource in another AWS CloudFormation stack when I create a template.
"I want to reference a resource in another AWS CloudFormation stack when I create a template.Short descriptionThe following resolution provides an example of one method to create a cross-stack reference. For additional instructions, see Walkthrough: Refer to resource outputs in another AWS CloudFormation stack.Note: To reference a resource in another AWS CloudFormation stack, you must first create cross-stack references. To create a cross-stack reference, use the export field to flag the value of a resource output for export. Then, use the Fn::ImportValue intrinsic function to import the value in any stack within the same AWS Region and account. AWS CloudFormation identifies exported values by the names specified in the template. These names must be unique to your AWS Region and account.ResolutionThe following steps show how to create an AWS CloudFormation stack named NetworkStack. This stack creates network-related resources and exports named ${AWS::StackName}-SecurityGroupID and ${AWS::StackName}-SubnetID. The ${AWS::StackName} is replaced by NetworkStack after stack creation. The final export names are NetworkStack-SecurityGroupID and NetworkStack-SubnetID.Create a stack to export output values1.    Create an AWS CloudFormation stack using this template.2.    Name the stack NetworkStack.Note: NetworkStack exports the subnet and security group values.Create an Amazon Elastic Compute Cloud (Amazon EC2) instance using an imported subnet and security group1.    Open the AWS CloudFormation console.2.    Choose Create Stack, and then choose Design template.3.    In the Parameters tab of the code editor, choose Template.4.    Copy and paste the following template into the code editor, and then update the template with appropriate values for InstanceType and ImageId:{ "Parameters": { "NetworkStackParameter": { "Type": "String" } }, "Resources": { "WebServerInstance": { "Type": "AWS::EC2::Instance", "Properties": { "InstanceType": "t2.micro", "ImageId": "ami-a1b23456", "NetworkInterfaces": [ { "GroupSet": [ { "Fn::ImportValue": { "Fn::Sub": "${NetworkStackParameter}-SecurityGroupID" } } ], "AssociatePublicIpAddress": "true", "DeviceIndex": "0", "DeleteOnTermination": "true", "SubnetId": { "Fn::ImportValue": { "Fn::Sub": "${NetworkStackParameter}-SubnetID" } } } ] } } }}Important: In the template in step 4, use the NetworkStack resource stack as the value for NetworkStackParameter. The NetworkStack value replaces the correct stack name in the corresponding Fn::ImportValue functions.Note: For examples of import and export templates, see Fn::ImportValue.5.    Choose the Create stack icon, and then choose Next.6.    For Stack name, enter a name for your stack.7.    For Parameters, enter the network stack name (NetworkStack) that you want to cross-reference.8.    Choose Next, choose Next again, and then choose Create.9.    After the stack creation is complete, open the Amazon EC2 console.10.    In the navigation pane, choose Instances, and then choose the instance that you created with the template in step 4.11.    Choose the Description view, and then verify that the security group and subnet are configured.Important: You can't delete the source stack or the source stack's export values while another stack is importing these values. To update the source stack's export values, first manually replace the actual values in the stacks that are importing the source stack's export values. 
Then, you can update the export values of the source stack.To list all stacks that are importing an exported output value, run the list-imports command. To list all exports in an AWS Region, use the AWS CloudFormation console or run the list-exports command. The export name must be unique for the account per AWS Region.Related informationHow do I use parameters in AWS Systems Manager Parameter Store to share values between CloudFormation stacks?AWS CloudFormation templatesAWS::EC2::InstanceFollow"
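To confirm that the exports exist and to see which stacks import them, you can use list-exports and list-imports. This sketch assumes the NetworkStack export names used above and a configured default Region.
aws cloudformation list-exports --query "Exports[?starts_with(Name, 'NetworkStack')]"
aws cloudformation list-imports --export-name NetworkStack-SubnetID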
https://repost.aws/knowledge-center/cloudformation-reference-resource
How do I remove an AWS Backup Vault Lock?
I want to delete an AWS Backup Vault Lock for my backup vault.
"I want to delete an AWS Backup Vault Lock for my backup vault.ResolutionWhen you create a vault lock, you have a choice of two modes: governance or compliance mode. For more information on modes, see Vault lock modes.A vault lock with compliance mode has a grace time period. To delete a vault lock with compliance mode, you must delete the lock before the grace time expires. After the grace time is expired, the vault and its lock are immutable. No user or service can change it. If you try to delete the vault lock after the grace time period, you receive an InvalidRequestException error.To delete a vault lock with governance mode, you must have the appropriate AWS Identity and Access Management (IAM) permissions. The required IAM permission to delete a backup vault is backup:DeleteBackupVaultLockConfiguration.Delete a vault lock using the AWS Backup consoleTo delete a vault lock with governance mode or compliance mode (during grace time), complete the following steps:Open the AWS Backup console.In the navigation pane, under My account, choose Backup vaults. Then, choose Backup Vault Lock.Choose the vault lock you want to remove. Then, choose Manage vault lock.Choose Delete vault lock. A confirmation window appears.Enter confirm in the text box and then choose confirm.If the steps have been completed successfully, then a Success banner appears at the top of the console.Delete a vault lock programmaticallyTo delete your vault lock during grace time using an AWS Command Line Interface (AWS CLI) command, use DeleteBackupVaultLockConfiguration.The following is an example of the DeleteBackupVaultLockConfiguration command:Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.aws backup delete-backup-vault-lock-configuration \--backup-vault-name my_vault_to_lockImportant: Deleting the vault lock doesn't delete the backup vault or recovery point. You can delete the vault or recovery point after the lock is removed.Related informationEnhance the security posture of your backups with AWS Backup Vault LockVault lock removal during grace time (Compliance mode)Follow"
https://repost.aws/knowledge-center/backup-delete-vault-lock
Which log group is causing a sudden increase in my CloudWatch Logs bill?
"My Amazon CloudWatch Logs bill is unusually high, and I want to determine which log group is increasing my CloudWatch Logs costs."
"My Amazon CloudWatch Logs bill is unusually high, and I want to determine which log group is increasing my CloudWatch Logs costs.Short descriptionSudden increases in CloudWatch Logs bills often result from an increase in ingested or storage data in a particular log group. Use CloudWatch Logs metrics to check data usage, and review your AWS bill to identify the log group that's responsible for bill increases.ResolutionCheck the amount of data that you're ingestingThe IncomingBytes metric shows you the amount of ingested data in your CloudWatch log groups in near real time. This metric can help you to determine the following points:Which log group is the highest contributor towards your billWhether there's a spike in the incoming data to your log groups or a gradual increase due to new applicationsHow much data is pushed during a particular period of timeQuery a small set of log groups using the CloudWatch console1.    Open the Amazon CloudWatch console.2.    In the navigation pane, choose Metrics.3.    For each of your log groups, select the individual IncomingBytes metrics. Then, choose the Graphed metrics tab.4.    For Statistic, choose Sum.5.    For Period, choose 30 Days.6.    Choose the Graph options tab, and then choose Number.7.    At the top right of the graph, choose Custom, and then choose Absolute. Select a start and end date that corresponds with the last 30 days.Query a large set of log groups using the CloudWatch console1.    Open the Amazon CloudWatch console.2.    In the navigation pane, choose All metrics.3.    Choose the Graphed metrics tab. Then, from the Add metric dropdown list, choose Start with an empty expression.4.    Copy the following math expression, and then paste it into the Edit math expression field:SORT(REMOVE_EMPTY(SEARCH('{AWS/Logs,LogGroupName} MetricName="IncomingBytes"', 'Sum', 2592000)),SUM, DESC)After you paste the expression, choose Apply.5.    Choose the Graph options tab, and then choose Number.6.    At the top right of the graph, choose Custom. Then, choose Absolute. Select a start and end date that corresponds with the last 30 days.Note: You can graph up to 500 metrics using this method.Query a large set of log groups using an API callNote: Before you run the following API calls, review the associated costs with making API calls. It's a best practice to distribute the ListMetrics call to avoid throttling. The default limit for ListMetrics is 25 transactions per second. However, you can request a limit increase, if necessary.1.    Make a ListMetrics call. This call finds all log group names that ingested data in the past 14 days. Use the following parameters:Namespace: AWS/LogsMetricName: IncomingBytes2.    Make a GetMetricData call. This call finds the sum of all incoming bytes in a month for every log group name that you get from the ListMetrics call. Use the following parameters:Namespace: AWS/LogsMetricName: IncomingBytesDimensions: As received from the ListMetrics callStartTime: [Date and time 14 days before the current date]EndTime: [Current date and time]Period: [EndTime - StartTime, in seconds]Statistics: Sum3.    To display the log group names with the highest Ingested data amounts, sort the resulting data points in descending order.To be sure that ingested data charges don't exceed a specified limit in the future, create a CloudWatch alarm.Review your storage data usageCheck your most recent AWS bill to see how much storage data you used in the previous billing cycle.Related informationCloudWatch billing and costFollow"
https://repost.aws/knowledge-center/cloudwatch-logs-bill-increase
"How do I restore, resize, or create an EBS persistent storage snapshot in Amazon EKS for disaster recovery or when the EBS modification rate is exceeded?"
"I want to use an Amazon Elastic Block Store (Amazon EBS) persistent storage snapshot in Amazon Elastic Kubernetes Service (Amazon EKS) for disaster recovery. How do I create, resize, or restore such a snapshot? Or, I exceeded my Amazon EBS modification rate. But I still need to resize, restore, or create a snapshot of my Amazon EBS persistent storage in Amazon EKS."
"I want to use an Amazon Elastic Block Store (Amazon EBS) persistent storage snapshot in Amazon Elastic Kubernetes Service (Amazon EKS) for disaster recovery. How do I create, resize, or restore such a snapshot? Or, I exceeded my Amazon EBS modification rate. But I still need to resize, restore, or create a snapshot of my Amazon EBS persistent storage in Amazon EKS.Short descriptionYou're modifying your Amazon EBS persistent storage in Amazon EKS, and you receive the following error:errorCode: Client.VolumeModificationRateExceedederrorMessage: You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volumeAfter you modify a volume, you must wait at least six hours before you can continue to modify the volume. Make sure that the volume is in the in-use or available state before you modify it again.Your organization might have a Disaster Recovery (DR) objective with a Recovery Time Objective (RTO) that's less than six hours. For RTOs that are less than six hours, create a snapshot and restore your volume using the Amazon EBS Container Storage Interface (CSI) driver.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Use the Amazon EBS CSI driver and external snapshotter to do the following:Create an Amazon EBS snapshot of the PersistentVolumeClaim.Restore the PersistentVolumeClaim.Bind the PersistentVolumeClaim to the workload.Pre-requisites:An existing Amazon EKS cluster with worker nodes. If you don't have one, then create your Amazon EKS cluster.The latest versions of AWS CLI, eksctl, and kubectl.An Amazon EBS CSI driver AWS Identity and Access Management (IAM) role for service accounts.Install the Amazon EBS CSI driver with the external snapshotter1.    Check if you have an existing IAM OpenID Connect (OIDC) provider for your cluster:% cluster_name=ebs% oidc_id=$(aws eks describe-cluster --name cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)% aws iam list-open-id-connect-providers | grep $oidc_idNote: Replace cluster_name with your cluster's name.Example output:"Arn": "arn:aws:iam::XXXXXXXXXX:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/B7E2BC2980D17C9A5A3889998CB22B23"Note: If you don't have an IAM OIDC provider, then create one for your cluster.2.    Install the external snapshotter.Note: You must install the external snapshotter before you install the Amazon EBS CSI add-on. 
Also, you must install the external snapshotter components in the following order:CustomResourceDefinition (CRD) for volumesnapshotclasses, volumesnapshots, and volumesnapshotcontentsmkdir crdcd crdwget https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/kustomization.yamlwget https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yamlwget https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yamlwget https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yamlkubectl apply -k ../crdRBAC, such as ClusterRole, and ClusterRoleBindingkubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yamlController deploymentkubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml3.    Create your Amazon EBS CSI plugin IAM role using eksctl:eksctl create iamserviceaccount \ --name ebs-csi-controller-sa \ --namespace kube-system \ --cluster cluster_name \ --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \ --approve \ --role-only \ --role-name AmazonEKS_EBS_CSI_DriverRole4.    Add the Amazon EBS CSI add-on using eksctl:eksctl create addon --name aws-ebs-csi-driver --cluster cluster_name --service-account-role-arn arn:aws:iam::account_id:role/AmazonEKS_EBS_CSI_DriverRole --forceNote: Replace account_id with your AWS account ID.5.    Confirm that the Amazon EBS CSI driver and external snapshotter pods are running:% kubectl get pods -A | egrep "csi|snapshot"Create a StatefulSet with Amazon EBS persistent storage1.    Download the manifests from the GitHub website.2.    Create the StorageClass and VolumeSnapshotClass:% kubectl apply -f manifests/classes/Example output:volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc createdstorageclass.storage.k8s.io/ebs-sc created3.    Deploy the StatefulSet on your cluster along with the PersistentVolumeClaim:% kubectl apply -f manifests/app/Example output:service/cassandra createdStatefulSet.apps/cassandra created4.    Verify that the pods are in Running status:% kubectl get podsExample output:NAME READY STATUS RESTARTS AGEcassandra-0 1/1 Running 0 33mcassandra-1 1/1 Running 0 32mcassandra-2 1/1 Running 0 30m5.    Verify that the PersistenVolumeClaim is bound to your PersisentVolume:% kubectl get pvcExample output:NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEpersistentvolumeclaim/cassandra-data-cassandra-0 Bound pvc-b3ab4971-37dd-48d8-9f59-8c64bb65b2c8 2Gi RWO ebs-sc 28mpersistentvolumeclaim/cassandra-data-cassandra-1 Bound pvc-6d68170e-2e51-40f4-be52-430790684e51 2Gi RWO ebs-sc 28mpersistentvolumeclaim/cassandra-data-cassandra-2 Bound pvc-d0403adb-1a6f-44b0-9e7f-a367ad7b7353 2Gi RWO ebs-sc 26m...Note: Note the names of each PersistentVolumeClaim to compare to the PersistentVolumeClaim names in the snapshot manifest.6.    
To test the StatefulSet, write content to the PersistentVolumeClaim:for i in {0..2}; do kubectl exec "cassandra-$i" -- sh -c 'echo "$(hostname)" > /cassandra_data/data/file.txt'; doneCreate a snapshotThe persistentVolumeClaimName in the snapshot manifest must match the name of the PersistentVolumeClaim that you created for each pod in the StatefulSet. For example:apiVersion: snapshot.storage.k8s.io/v1kind: VolumeSnapshotmetadata: name: cassandra-data-snapshot-0spec: volumeSnapshotClassName: csi-aws-vsc source: persistentVolumeClaimName: cassandra-data-cassandra-01.    Create a snapshot from each PersistenVolumeClaim:% kubectl apply -f manifests/snapshot/Example output:volumesnapshot.snapshot.storage.k8s.io/cassandra-data-snapshot-0 createdvolumesnapshot.snapshot.storage.k8s.io/cassandra-data-snapshot-1 createdvolumesnapshot.snapshot.storage.k8s.io/cassandra-data-snapshot-2 created2.    After the state is completed, verify that the snapshots are available on the Amazon Elastic Compute Cloud (Amazon EC2) console:aws ec2 describe-snapshots --filters "Name=tag-key,Values=*ebs*" --query 'Snapshots[*].{VOL_ID:VolumeId,SnapshotID:SnapshotId,State:State,Size:VolumeSize,Name:[Tags[?Key==`Name`].Value] [0][0]}' --output table---------------------------------------------------------------------------------------------------------------------------------------| DescribeSnapshots |+------------------------------------------------------------+-------+-------------------------+------------+-------------------------+| Name | Size | SnapshotID | State | VOL_ID |+------------------------------------------------------------+-------+-------------------------+------------+-------------------------+| ebs-dynamic-snapshot-c6c9cb3c-2dab-4833-9124-40a0abde170d | 2 | snap-057c5e2de3957d855 | pending | vol-01edf53ee26a615f5 || ebs-dynamic-snapshot-1c1ad0c5-a93a-468f-ab54-576db5d630d4 | 2 | snap-02bf49a3b78ebf194 | completed | vol-0289efe54525dca4a || ebs-dynamic-snapshot-760c48e7-33ff-4b83-a6fb-6ef13b8d31b7 | 2 | snap-0101c3d2efa40af19 | completed | vol-0fe68c9ac2f6375a4 |+------------------------------------------------------------+-------+-------------------------+------------+-------------------------+Restore the snapshotYou can restore a PersistentVolumeClaim from a snapshot that's created from an existing PersistentVolumeClaim using the same name of the PersistentVolumeClaim. When you recreate the StatefulSet, the PersistentVolumeClaim dynamically provisions a PersistentVolume and is automatically bound to the StatefulSet pods. The StatefulSet PersistenVolumeClaim name format is: PVC\_TEMPLATE\_NAME-STATEFULSET\_NAME-REPLICA\_INDEX.To restore a snapshot follow these steps:1.    Delete the existing StatefulSet workload:kubectl delete -f manifests/app/Cassandra_statefulset.yamlNote: Deleting the workload also deletes the StatefulSet pods. The snapshot that you created acts as a backup.Example output:statefulset.apps "cassandra" deleted2.    Forcefully delete the PersistentVolumeClaim:for i in {0..2}do kubectl delete pvc cassandra-data-cassandra-$i --forcedoneNote: Deleting the PersistentVolumeClaim also deletes the PersistentVolume.3.    Restore the PersistentVolumeClaim from the snapshot using the same name of the PersistentVolumeClaim that you created:kubectl apply -f manifests/snapshot-restore/Example output:persistentvolumeclaim/cassandra-data-cassandra-0 createdpersistentvolumeclaim/cassandra-data-cassandra-1 createdpersistentvolumeclaim/cassandra-data-cassandra-2 created4.    
Verify that each PersistentVolumeClaim is in Pending status:kubectl get pvcExample output:NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEcassandra-data-cassandra-0 Pending ebs-sc 24scassandra-data-cassandra-1 Pending ebs-sc 23scassandra-data-cassandra-2 Pending ebs-sc 22s5.    Recreate the StatefulSet with the original manifest:kubectl apply -f manifests/app-restore/Note: To resize the storage, define the StatefulSet with a new storage size.Example output:StatefulSet.apps/cassandra created6.    Check the content of the Amazon EBS storage to confirm that the snapshot and restore work:for i in {0..2}; do kubectl exec "cassandra-$i" -- sh -c 'cat /cassandra_data/data/file.txt'; donecassandra-0cassandra-1cassandra-2Resize the PersistentVolumeClaimYou can modify the .spec.resources.requests.storage of the PersistentVolumeClaim to automatically reflect the size that you defined in the StatefulSet manifest:for i in {0..2}do echo "Resizing cassandra-$i" kubectl patch pvc cassandra-data-cassandra-$i -p '{ "spec": { "resources": { "requests": { "storage": "4Gi" }}}}'doneNote: 4Gi is an example storage size. Define a storage size that's suitable for your use case.Confirm that the new storage size is reflected on the Amazon EC2 console and in the pods:% aws ec2 describe-volumes --filters "Name=tag-key,Values=*pvc*" --query 'Volumes[*].{ID:VolumeId,Size:Size,Name:[Tags[?Key==`Name`].Value] [0][0]}' --output table-------------------------------------------------------------------------------------------| DescribeVolumes |+------------------------+--------------------------------------------------------+-------+| ID | Name | Size |+------------------------+--------------------------------------------------------+-------+| vol-01266a5f1f8453e06 | ebs-dynamic-pvc-359a87f6-b584-49fa-8dd9-e724596b2b43 | 4 || vol-01b63a941450342d9 | ebs-dynamic-pvc-bcc6f2cd-383d-429d-b75f-846e341d6ab2 | 4 || vol-041486ec92d4640af | ebs-dynamic-pvc-fa99a595-84b7-49ad-b9c2-1be296e7f1ce | 4 |+------------------------+--------------------------------------------------------+-------+% for i in {0..2}do echo "Inspecting cassandra-$i" kubectl exec -it cassandra-$i -- lsblk kubectl exec -it cassandra-$i -- df -hdone...Run the following Kubectl commands to clean up your StatefulSetTo delete the resources that you created for your StatefulSet, run the following kubectl commands:app-restorekubectl delete -f manifests/app-restoresnapshot-restorekubectl delete -f manifests/snapshot-restoresnapshotkubectl delete -f manifests/snapshotclasseskubectl delete -f manifests/classesCassandrakubectl delete -f manifests/app/Cassandra_service.yamlRelated informationModifyVolumeFollow"
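At any point in this procedure (for example, before you delete the original PersistentVolumeClaims), you can confirm that the snapshots are ready to use by inspecting the VolumeSnapshot objects directly. A minimal check, assuming the snapshot names created by the manifests above:
% kubectl get volumesnapshot -o custom-columns=NAME:.metadata.name,READY:.status.readyToUse,CONTENT:.status.boundVolumeSnapshotContentName
% kubectl get volumesnapshotcontent
A snapshot is safe to restore from only after readyToUse reports true.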
https://repost.aws/knowledge-center/eks-modify-persistent-storage-snapshot
How do I troubleshoot issues with my CloudFront distribution's connection to a custom origin over HTTPS?
"I configured my Amazon CloudFront distribution to connect to a custom origin using HTTPS. Now, I receive the "CloudFront could not connect to Origin" error with the HTTP status code 502 (Bad Gateway)."
"I configured my Amazon CloudFront distribution to connect to a custom origin using HTTPS. Now, I receive the "CloudFront could not connect to Origin" error with the HTTP status code 502 (Bad Gateway).ResolutionVerify that the CloudFront distribution's Origin Domain Name matches the certificate domain nameVerify that the Origin Domain Name specified on your CloudFront distribution matches a domain name on your origin SSL/TLS certificate. The distribution's Origin Domain Name can match either of the following:The domain name specified as the certificate's Common Name (CN)The domain name specified in the certificate's Subject Alternative Name (SAN)If the Origin Domain Name doesn't match any domain name associated with your certificate, then CloudFront returns the HTTP status code 502 (Bad Gateway).Check for any missing intermediary certificate authoritiesUse an SSL checker to test whether your origin's certificate chain is available and doesn't need any intermediary certificate authorities.If you're using Elastic Load Balancing as your custom origin and must update the certificate chain, then do the following:Upload the certificate again with the correct certificate chain.-or-Use AWS Certificate Manager (ACM) to request a public certificate that secures your domain. ACM is fully integrated with Elastic Load Balancing.Test your origin's supported protocol policy and ciphersFor the SSL handshake to succeed, your origin must support the ciphers that CloudFront uses.If your origin protocol policy has SSLv3 turned on, then CloudFront uses only SSLv3 to communicate to your origin from the command line or Windows terminal.Note: OpenSSL is usually available by default on Linux and macOS systems. OpenSSL for Windows is available on the OpenSSL website.To test if your origin supports the ciphers that CloudFront uses, run the following OpenSSL commands.If your origin protocol policy is set to SSLv3, then run:echo | openssl s_client -ssl3 -cipher 'ECDHE-RSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-SHA384 ECDHE-RSA-AES256-SHA ECDHE-RSA-AES128-GCM-SHA256 ECDHE-RSA-AES128-SHA256 ECDHE-RSA-AES128-SHA AES256-SHA AES128-SHA DES-CBC3-SHA RC4-MD5 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-SHA256 ECDHE-ECDSA-AES128-SHA' -connect your.origin.domain:443If your origin is using TLS, then test your origin for each protocol using these commands:For TLS, run:echo | openssl s_client -tls1 -cipher 'ECDHE-RSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-SHA384 ECDHE-RSA-AES256-SHA ECDHE-RSA-AES128-GCM-SHA256 ECDHE-RSA-AES128-SHA256 ECDHE-RSA-AES128-SHA AES256-SHA AES128-SHA DES-CBC3-SHA RC4-MD5 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-SHA256 ECDHE-ECDSA-AES128-SHA' -connect your.origin.domain:443 -servername your.origin.domainFor TLS 1.1, run:echo | openssl s_client -tls1_1 -cipher 'ECDHE-RSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-SHA384 ECDHE-RSA-AES256-SHA ECDHE-RSA-AES128-GCM-SHA256 ECDHE-RSA-AES128-SHA256 ECDHE-RSA-AES128-SHA AES256-SHA AES128-SHA DES-CBC3-SHA RC4-MD5 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-SHA256 ECDHE-ECDSA-AES128-SHA' -connect your.origin.domain:443 -servername your.origin.domainFor TLS 1.2, run:echo | openssl s_client -tls1_2 -cipher 'ECDHE-RSA-AES128-SHA256 ECDHE-RSA-AES256-SHA384 AES256-SHA AES128-SHA DES-CBC3-SHA RC4-MD5 ECDHE-ECDSA-AES256-GCM-SHA384 
ECDHE-ECDSA-AES256-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-SHA256 ECDHE-ECDSA-AES128-SHA' -connect your.origin.domain:443 -servername your.origin.domainNote: Set the value of -servername to the origin domain name. Or, if you're using CloudFront to forward the Host header, set -servername to the CNAME from the CloudFront request.If you successfully connect to the origin, then you see output from the preceding commands that's similar to the following. The output confirms that your connection is successfully established using the SSL or TLS version and supported ciphers.New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-SHA256Server public key is 2048 bitSecure Renegotiation IS supportedCompression: NONEExpansion: NONENo ALPN negotiatedSSL-Session:Protocol : TLSv1.2Cipher : ECDHE-RSA-AES128-SHA256....Timeout : 7200 (sec)Verify return code: 0 (ok)----DONENote: For more troubleshooting guidance on 502 errors, see HTTP 502 status code (Bad Gateway).Related informationRequiring HTTPS for communication between CloudFront and your custom originFollow"
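To compare the certificate that your origin actually presents with the Origin Domain Name configured in CloudFront, you can dump the certificate's subject and validity dates. The following is a minimal sketch; replace your.origin.domain with your origin's domain name:
echo | openssl s_client -connect your.origin.domain:443 -servername your.origin.domain 2>/dev/null | openssl x509 -noout -subject -issuer -dates
If the subject's CN (or a SAN entry, which you can view by adding -text) doesn't cover the Origin Domain Name, or the certificate is expired, CloudFront returns the 502 error described above.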
https://repost.aws/knowledge-center/cloudfront-connectivity-custom-origin
How can I configure my EC2 Spot Instances so that the root EBS volume won’t be deleted when I terminate the instance?
I want to configure my Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances so that the root Amazon Elastic Block Store (Amazon EBS) volume isn't deleted when I terminate the instance.
"I want to configure my Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances so that the root Amazon Elastic Block Store (Amazon EBS) volume isn't deleted when I terminate the instance.ResolutionBy default, when EC2 Spot Instances are terminated, all EBS volumes attached to that instance are deleted because the DeleteOnTermination attribute is set to true.To change the DeleteOnTermination attribute for a new Spot requestOpen the Amazon EC2 console, and then choose Spot Requests from the navigation pane.Choose Request Spot Instances.Choose an Availability Zone, and then choose Next.In EBS volumes, clear Delete.After you clear the Delete box, finish creating your Spot request. Any new instances that are launched when this Spot request is fulfilled will have DeleteOnTermination set to false.Note: EBS volumes with DeleteOnTermination set to false incur charges and remain in your EBS volume console. You must manually delete the volume. For more information, see Amazon EBS pricing.To change the DeleteOnTermination attribute for a running Spot InstanceUse the modify-instance-attribute command in the AWS Command Line Interface (AWS CLI) to configure the root EBS volume to persist on termination.Related informationSpot InstancesFollow"
https://repost.aws/knowledge-center/ami-preserve-ebs-spot
"What are the least privileges required for a user to perform creates, deletes, modifications, backup, and recovery for an Amazon RDS DB instance?"
I want to limit the access that I give my AWS Identity and Access Management (IAM) users to an Amazon Relational Database Service (Amazon RDS) DB instance. How can I grant IAM users the least privileges required to perform a specific action for an Amazon RDS DB instance?
"I want to limit the access that I give my AWS Identity and Access Management (IAM) users to an Amazon Relational Database Service (Amazon RDS) DB instance. How can I grant IAM users the least privileges required to perform a specific action for an Amazon RDS DB instance?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.1.    Open the IAM console, and then choose Users from the navigation pane.2.    Choose Add user, and then enter a User name.3.    For Access type, choose AWS Management Console access, and then create a password for using the Amazon RDS console. To provide access to the AWS Command Line Interface (AWS CLI), choose Programmatic access.Important: For Programmatic access, be sure to download the access key ID and the secret access key by choosing Download.csv. You need the keys to create the security tokens later.4.    Review the permissions and tags, and then choose Create user. This creates an IAM user with the IAMUserChangePassword policy.5.    Create IAM policies for the actions that you want to perform in Amazon RDS.6.    Return to the IAM console, and then choose Users from the navigation pane.7.    Choose the IAM user that you created.8.    From the Permissions tab, choose Add inline policy.9.    Choose the JSON tab, and then enter one or more of the following policies based on your use case.Note: The following policies provide the least privileges required to perform the specified actions. You might see errors (such as IAMUser is not authorized to perform: rds:Action) in the Amazon RDS console because this privilege isn't present in the policy. Most often, this error occurs for Describe actions. The error is expected, and it doesn't affect your ability to perform those actions. 
To avoid this error, you can modify the following example IAM policies, or you can perform actions by using the AWS CLI.Creating and deleting RDS DB instancesThe following policy allows users to create RDS DB instances without encryption activated:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVpcAttribute", "ec2:DescribeSecurityGroups", "ec2:DescribeInternetGateways", "ec2:DescribeAvailabilityZones", "ec2:DescribeVpcs", "ec2:DescribeAccountAttributes", "ec2:DescribeSubnets", "rds:Describe*", "rds:ListTagsForResource", "rds:CreateDBInstance", "rds:CreateDBSubnetGroup" ], "Resource": "*" } ]}The following policy allows users to create RDS DB instances with encryption activated:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVpcAttribute", "ec2:DescribeSecurityGroups", "ec2:DescribeInternetGateways", "ec2:DescribeAvailabilityZones", "ec2:DescribeVpcs", "ec2:DescribeAccountAttributes", "ec2:DescribeSubnets", "rds:Describe*", "rds:ListTagsForResource", "rds:CreateDBInstance", "rds:CreateDBSubnetGroup", "kms:ListAliases" ], "Resource": "*" } ]}Note: To use a customer managed key for encryption instead of the default AWS managed key, you must authorize the use of a customer managed key.The following policy allows users to delete RDS DB instances:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "rds:DeleteDBInstance", "rds:DescribeDBInstances" ], "Resource": "*" } ]}The following policy allows users to create and delete RDS DB instances:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVpcAttribute", "ec2:DescribeSecurityGroups", "ec2:DescribeInternetGateways", "ec2:DescribeAvailabilityZones", "ec2:DescribeVpcs", "ec2:DescribeAccountAttributes", "ec2:DescribeSubnets", "rds:Describe*", "rds:ListTagsForResource", "rds:CreateDBInstance", "rds:CreateDBSubnetGroup", "rds:DeleteDBInstance" ], "Resource": "*" } ]}Stopping and starting RDS DB instancesThe following policy allows users to stop and start RDS DB instances:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "rds:StopDBInstance", "rds:StartDBInstance", "rds:Describe*" ], "Resource": "*" } ]}Performing backup and recovery (creating DB snapshots, restoring DB instance from DB snapshots, and point in time restore)The following policy allows users to create DB snapshots:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "rds:Describe*", "rds:CreateDBSnapshot" ], "Resource": "*" } ]}The following policy allows users to restore RDS DB instances using DB snapshots:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:Describe*", "rds:Describe*", "rds:RestoreDBInstanceFromDBSnapshot" ], "Resource": "*" } ]}The following policy allows users to perform point in time recovery:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:Describe*", "rds:Describe*", "rds:RestoreDBInstanceToPointInTime" ], "Resource": "*" } ]}Modifying RDS DB instancesThe following policy allows users to change DB instance class type, allocated storage, storage type, and instance version:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:Describe*", "rds:Describe*", "rds:ModifyDBInstance" ], "Resource": "*" } ]}Activating Enhanced Monitoring and Performance InsightsThe following policy allows users to activate Enhanced Monitoring. 
Be sure to replace AccountID with each account that is receiving the enhanced monitoring role:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:GetRole", "iam:ListRoles", "rds:ModifyDBInstance", "rds:Describe*", "ec2:Describe*" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "iam:PassRole" ], "Resource": "arn:aws:iam::AccountID:role/rds-monitoring-role" } ]}Note: When used with an iam:PassRole, a wildcard (*) is overly permissive because it allows iam:PassRole permissions on all resources. Therefore, it's a best practice to specify the Amazon Resource Names (ARNs), as shown in the example earlier.The following policy allows users to activate Performance Insights:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "rds:ModifyDBInstance", "ec2:Describe*", "rds:Describe*", "pi:*" ], "Resource": "*" } ]}Creating, modifying, and deleting DB parameter groups and DB option groupsThe following policy allows users to create, modify, or delete DB parameter groups and option groups:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:Describe*", "rds:Describe*", "rds:CreateDBParameterGroup", "rds:ModifyDBParameterGroup", "rds:DeleteDBParameterGroup", "rds:CreateOptionGroup", "rds:ModifyOptionGroup", "rds:DeleteOptionGroup" ], "Resource": "*" } ]}Viewing Amazon CloudWatch metrics from the Amazon RDS consoleThe following policy allows users to view CloudWatch metrics from the Amazon RDS console:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "rds:Describe*", "cloudwatch:GetMetricData", "cloudwatch:GetMetricStatistics" ], "Resource": "*" } ]}10.    Choose Review policy.11.    Enter a Name for your policy, and then choose Create policy.Related informationIdentity and access management for Amazon RDSHow do I allow users to authenticate to an Amazon RDS MySQL DB instance using their IAM credentials?Follow"
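If you prefer to attach one of these policies from the AWS CLI instead of the console, you can add it as an inline policy with put-user-policy. The following is a minimal sketch; the user name, policy name, and file path are placeholders, and the JSON file contains one of the policy documents shown above:
aws iam put-user-policy --user-name example-rds-operator --policy-name rds-stop-start --policy-document file://rds-stop-start-policy.json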
https://repost.aws/knowledge-center/rds-iam-least-privileges
Why is my Elastic Beanstalk environment in the invalid state?
I want to troubleshoot the error "Environment is in an invalid state for this operation. Must be ready" while in an AWS Elastic Beanstalk environment.
"I want to troubleshoot the error "Environment is in an invalid state for this operation. Must be ready" while in an AWS Elastic Beanstalk environment.Short descriptionWhen you receive this error, make sure that there is no ongoing operation in the environment. If there is an ongoing operation, then you must either wait for the update to complete or cancel the in-progress updates according to your requirements. You can start your updates over when the environment becomes ready again. If there is no ongoing operation in the environment and you are still receiving the error, then your environment might be in an Unrecoverable state. This state prevents further operations from being performed in the environment. If you need further help returning the environment to the Available state, contact AWS Support. However, there are things you can check before you contact AWS Support.ResolutionElastic Beanstalk creates an AWS CloudFormation stack in the backend to manage the resources associated with the environment. You can check this stack in the CloudFormation console with the name awseb-(env-ID)-stack.When the Elastic Beanstalk environment goes into anUnrecoverable state, the CloudFormation stack shows a *_FAILED status. Before the AWS Support team can change the environment to Available, the stack must show a *_COMPLETE status.To correct the *_FAILED status of your CloudFormation stack, do the following steps according to the stack status:"UPDATE_ROLLBACK_FAILED" status1.    Navigate to the CloudFormation console. Then, identify the resource that failed to update during the rollback from the respective stack events.2.    Bring the stack to the UPDATE_ROLLBACK_COMPLETE status by selecting the Continue Update Rollback option from the CloudFormation console.3.    In the Continue update rollback dialog box, expand Advanced troubleshooting. In the Resources to skip - optional section, select the resource that failed to update.4.    Choose Continue update rollback. The stack now shows the UPDATE_ROLLBACK_COMPLETE status.5.    Contact the AWS Support team to change the environment to an Available state.6.    When the environment is in Available status, you can perform further updates on the environment."DELETE_FAILED" status1.    Navigate to the CloudFormation console. Then identify the resource that failed to delete from the respective stack events.2.    Manually delete the resource that failed to delete. For example, if the resource that failed to delete is a security group, then delete it from the Amazon Elastic Compute Cloud (Amazon EC2) console.3.    Delete the CloudFormation stack from the CloudFormation console. The stack now shows a DELETE_COMPLETE status.4.    Contact the Elastic Beanstalk support team to change the environment to an Available state.5.    When the environment is the Available state, you can rebuild or terminate the environment."CREATE_FAILED" statusIf your stack has this status, it's a best practice to create a new Elastic Beanstalk environmen, and then terminate the current one. This is because the state of the stack isn't stable enough perform a rollback. It's a best practice to not perform further updates on the current environment.Before terminating the current environment, try the following:Leverage saved configurations if you want to have similar configurations for your new environment.Perform blue/green deployments and when the new environment is working correctly, perform the CNAME Swap between the URLs of the two environments.Follow"
https://repost.aws/knowledge-center/elastic-beanstalk-invalid-state
How do I troubleshoot call failures from an Amazon Connect instance?
My Amazon Connect instance is producing inbound and outbound Call failure errors.
"My Amazon Connect instance is producing inbound and outbound Call failure errors.ResolutionTo troubleshoot inbound Call failure errors from an Amazon Connect instanceConfirm that the number on the instance is attached to a contact flow by doing the following:In the Amazon Connect console, choose Routing.Choose Phone Numbers.Choose Phone Number, and then verify that the number generating the error is assigned to a published contact flow.Note: The MisconfiguredPhoneNumbers metric shows the number of failed calls because the phone number isn’t associated with a flow.Confirm that the Allow incoming calls check box is selected in your instance's Telephony Options page settings. Fore instructions, see step 4 in Step 2: Create an instance.Confirm that the instance isn't over its concurrent call quota by doing the following:In the Amazon CloudWatch console, choose Metrics from the left sidebar.From list of services, choose Connect.Locate the instance ID that's generating the error.Check the CallsBreachingActiveCallsQuota metric box next to the instance ID.If the CallsBreachingActiveCallsQuota metric populates, then the concurrent active call limit is breached.Note: To see the number of calls that are rejected because the rate of calls per second exceeds your concurrent call quota, review the following metrics:CallsPerIntervalThrottledCalls-or-If you're unsure about the percentage of your instance's concurrent call quota being exhausted, then do the following:Check the concurrent calls percentage metric box.Confirm that the concurrent calls percentage is below 1%.The concurrent calls percentage metric ranges from 0-1. 1 means that the active calls limit is 100% used.For more information, see Monitoring your instance using CloudWatch.Note: The default concurrent calls quota is 10. To increase your instance’s concurrent calls quota, use the Amazon Connect service quotas increase form. You must sign in to your AWS account to access the form. Amazon Connect might lower these default limits to prevent fraud or malicious use.Confirm that the Call failure error isn't on the source telecom-carrier side by doing the following:Call the number that's generating the error using different telecom carriers.If the number is reachable from one carrier, but not another, then the issue is on the source carrier's side.To troubleshoot outbound Call failure errors from an Amazon Connect instanceConfirm that the country code of the number generating the error message is on the Countries you can call by default list .Note: To allow calling to additional countries, submit a service quota increase request. You must sign in to your AWS account to access the form. The countries listed in the default Contact Control Panel (CCP) are the countries that are allowed for your Connect instance.Confirm that the number dialed is in E.164 format on Wikipedia.Note: Make sure to remove any leading and trailing digits. For example, you must remove the "0" long-distance code that often appears before the United Kingdom's country code.Confirm that the Allow outgoing calls check box is selected in your instance's Telephony Options page settings. Fore instructions, see step 4 in Step 2: Create an instance.Confirm that the instance isn't over its concurrent call quota. 
(See step 3 of the To troubleshoot inbound Call failure errors from an Amazon Connect Instance section of this article.)Confirm that the user has the permission CCP: Outbound in their security profile.Confirm that the Outbound Queue has the Outbound caller ID number configured in the Outbound caller configuration. Confirm that the Call failure error isn't on the destination number's side by doing the following:Call the number that's generating the error using a number outside of Amazon Connect.If the destination number can’t be reached outside of Amazon Connect, then the issue is on the destination number’s side.Related informationUse CloudWatch metrics to calculate concurrent call quotaFollow"
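If you want to check the quota-related metrics from the AWS CLI rather than the console, you can first list the metrics to discover the exact dimensions that your instance reports, and then pull statistics for the metric you need. The following is a minimal sketch; the instance ID and time range are placeholders, and the dimension names should be confirmed against the output of the first command:
aws cloudwatch list-metrics --namespace AWS/Connect --metric-name CallsBreachingActiveCallsQuota
aws cloudwatch get-metric-statistics --namespace AWS/Connect --metric-name CallsBreachingActiveCallsQuota --dimensions Name=InstanceId,Value=your-instance-id Name=MetricGroup,Value=VoiceCalls --start-time 2023-05-01T00:00:00Z --end-time 2023-05-02T00:00:00Z --period 300 --statistics Sum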
https://repost.aws/knowledge-center/resolve-amazon-connect-call-failures
How do I resolve the error "Failed to receive X resource signal(s) within the specified duration" in AWS CloudFormation?
"In AWS CloudFormation, I receive the following error message: "Failed to receive X resource signal(s) within the specified duration." How can I resolve this error?"
"In AWS CloudFormation, I receive the following error message: "Failed to receive X resource signal(s) within the specified duration." How can I resolve this error?Short descriptionYou get this error when an Amazon Elastic Compute Cloud (Amazon EC2) instance, Auto Scaling group, or a WaitCondition doesn't receive success signals from one or more instances in the time period specified by the CreationPolicy attribute.This error can occur in one of the following scenarios:Scenario 1: The cfn-signal script isn't installed on one or more instances of the AWS CloudFormation stack.Scenario 2: There are syntax errors or incorrect values in the AWS CloudFormation template.Scenario 3: The value of the Timeout property for the CreationPolicy attribute is too low.Scenario 4: The cfn-signal isn't sent from the Amazon EC2 instance.Note: The troubleshooting scenarios for this error apply only to AWS CloudFormation stacks created with Linux instances. The scenarios don't apply to Windows instances. For more information, see How to troubleshoot stack creating issues.ResolutionBefore following the steps in the troubleshooting scenarios, set the Rollback on failure option for your AWS CloudFormation stack to No.Scenario 1: The cfn-signal script isn't installed on one or more instances of the AWS CloudFormation stackTo confirm that the cfn-signal script is installed on the instance that is configured to send signals to AWS CloudFormation resources, complete the following steps:1.    Connect to your Linux instance using SSH.2.    Confirm that the cfn-signal script is installed using either of the following commands.To confirm that the cfn-signal script is located in your directory, run the following command:$ sudo find / -name cfn-signal/opt/aws/bin/cfn-signal/opt/aws/apitools/cfn-init-1.4-30.amzn2/bin/cfn-signalTo confirm that the AWS CloudFormation helper scripts package that contains the cfn-signal script is installed, run the following command:$ sudo rpm -q aws-cfn-bootstrapaws-cfn-bootstrap-1.4-30.amzn2.noarchImportant: The preceding command works only on distributions that use the RPM Package Manager.Note: By default, AWS CloudFormation helper scripts are installed on Amazon Linux Amazon Machine Images (AMIs). If AWS CloudFormation helper scripts aren't installed, see CloudFormation helper scripts reference for installation instructions.Scenario 2: There are syntax errors or incorrect values in the AWS CloudFormation templateTo confirm that the UserData property is configured to signal the AWS CloudFormation resources specified by the CreationPolicy attribute, complete the following steps:1.    In a code editor, open the AWS CloudFormation template for your stack, and then find the UserData property section.2.    Check for errors, including syntax errors, missing spaces, misspellings, and other typos.3.    Confirm that the values for the stack, resource, and Region properties are correct.Note: If you use a bootstrap script that includes the UserData property and calls the cfn-signal script, then check the bootstrap script for syntax errors or incorrect values.If you signal within the cfn-init commands key, look for information about the signal in the cfn-init logs. To search for errors in the cloud-init logs or cfn-init logs, connect to your Amazon EC2 instance using SSH. 
Then, look for detailed error or failure messages by searching for the keyword "error" or "failure" in the following logs:/var/log/cloud-init-output.log/var/log/cloud-init.log/var/log/cfn-init.log/var/log/cfn-init-cmd.log/var/log/cfn-wire.logTo parse all instances of the words "error" or "failure" in any /var/log/cfn or /var/log/cloud-init files, run the following command:grep -ni 'error\|failure' $(sudo find /var/log -name cfn\* -or -name cloud-init\*)Note: The preceding command returns the file name, line number, and error message.Scenario 3: The value of the Timeout property for the CreationPolicy attribute is too lowThe value of the Timeout property is defined by the CreationPolicy attribute. To confirm that the value is high enough for tasks to run before the cfn-signal script sends signals to AWS CloudFormation resources, complete the following steps.Important: The following steps work only if the instance isn't terminated (for example, by an Auto Scaling group). You already set the Rollback on failure option of your AWS CloudFormation stack to No. This option means that there is no failure rollback, and the instance won't be terminated until you delete the stack. You can connect to the instance using SSH, and then continue with the following troubleshooting steps.1.    In a code editor, open the AWS CloudFormation template for your stack, and then find the value of the Timeout property.Note: The value of the Timeout property is the maximum amount of time AWS CloudFormation waits for a signal before returning an error.2.    To get an estimate of when the cfn-signal script is triggered, connect to the instance using SSH, and then run the following command:less /var/log/cfn-init.logThe log file shows a timestamp when the SUCCESS signal is sent to AWS CloudFormation resources. See the following example:2019-01-11 12:46:40,101 [DEBUG] Signaling resource EC2Instance in stack XXXX with unique ID i-045a536a3dfc8ccad and status SUCCESS3.    Open the AWS CloudFormation console.4.    To see the resource failure timestamp for the "Failed to receive X resource signal(s) within the specified duration" event, choose the Events view.5.    For Status Reason, expand the row for the event with the status reason "Failed to receive X resource signal(s) within the specified duration."6.    Compare the signaling timestamp to the resource failure timestamp.Note: Notice that the signal was sent after the Amazon EC2 resource failed to be created. The signal is sent before the Amazon EC2 resource is created or fails to be created.Scenario 4: The cfn-signal isn't sent from the Amazon EC2 instanceThe SignalResource API is useful when you want to send signals from anywhere other than from an Amazon EC2 instance.For example, you can use an AWS Lambda function to call the SignalResource API and then send the signal to the AWS CloudFormation stack. In such a scenario, check your Lambda logs with Amazon CloudWatch Logs. These logs help you figure out why the signal isn't sent to the AWS CloudFormation stack.Follow"
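For reference, a common way to emit the signal is to run cfn-signal as the last command in UserData, passing the exit code of the bootstrap step that precedes it. The following is a minimal sketch; mystack, EC2Instance, and the Region are hypothetical placeholders, so use your stack name and the logical ID of the resource that carries the CreationPolicy attribute:
/opt/aws/bin/cfn-init -v --stack mystack --resource EC2Instance --region us-east-1
/opt/aws/bin/cfn-signal -e $? --stack mystack --resource EC2Instance --region us-east-1
If cfn-init fails, cfn-signal sends a failure signal instead of success, and AWS CloudFormation marks the resource as failed instead of waiting for the timeout to expire.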
https://repost.aws/knowledge-center/cloudformation-failed-signal
What are best practices for migrating my ElastiCache for Redis cluster?
What best practices should I use when migrating my Amazon ElastiCache for Redis cluster?
"What best practices should I use when migrating my Amazon ElastiCache for Redis cluster?Short descriptionAmazon ElastiCache currently supports offline methods using backup to migrate an ElastiCache Redis cluster within a Region, across Regions in the same account, or across accounts.Note: For information on migrating a Redis cluster on an Amazon Elastic Compute Cloud (Amazon EC2) instance to ElastiCache, see Online migration to ElastiCache.ResolutionMigrate an ElastiCache Redis cluster within a Region1.    Create an ElastiCache backup of your ElastiCache for Redis cluster. Review the backup constraints before creating your backup.2.    Creating a new Redis cluster by restoring for the backup.Note: This method can be used to migrate an ElastiCache for Redis cluster to a different Availability Zone within a Region. Or, you can use it to turn on in-transit encryption or at-rest encryption for an existing cluster.Migrate an ElastiCache Redis cluster to a different Region1.    Create an ElastiCache backup of your ElastiCache for Redis cluster. Review the backup constraints before creating your backup.2.    Create an Amazon Simple Storage Service (S3) bucket in the same AWS Region as the Redis cluster.3.    Grant ElastiCache access to the S3 bucket.4.    Export the ElastiCache Backup to the S3 bucket.5.    Create anS3 bucket in your destination Region.6.    Install and configure the AWS Command Line Interface (AWS CLI).Configure the AWS CLI by running the following command:aws configureEnter access keys (access key ID and secret access key) of your AWS Identity and Access Management (IAM) user or role.7.    Use the AWS CLI to copy the .rdb backup file from the source Region's S3 bucket to the destination Region's S3 bucket:aws s3 cp s3://SourceBucketName/BackupName.rdb s3://DestinationBucketName/BackupName.rdb --acl bucket-owner-full-control --source-region SourceRegionName --region DestinationRegionName8.    In the destination Region, go to the Amazon S3 console and grant ElastiCache read access to the .rdb file.9.    Create an ElastiCache for Redis cluster by seeding the cluster with the .rdb file.Migrate an ElastiCache Redis cluster to a different account1.    Create a backup of your cluster. Review the backup constraints before creating your backup.2.    Create an Amazon S3 bucket in the same AWS Region as the Redis cluster.3.    Grant ElastiCache access to the S3 bucket.4.    Export the ElastiCache backup to the S3 bucket.5.    Create an S3 bucket in your destination account. The bucket must be in the same Region as the Redis cluster.6.    Install and configure the AWS Command Line Interface (AWS CLI).Configure the AWS CLI by running the following command:aws configureEnter access keys (access key ID and secret access key) of your source account's AWS Identity and Access Management (IAM) user or role.7.    Copy the .rdb backup file from the source account's S3 bucket to the destination account's S3 bucket.Note: If the source and destination Regions are different, copy the .rdb file using the following command:aws s3 cp s3://SourceAccountBucketName/BackupName.rdb s3://DestinationAccountBucketName/BackupName.rdb --acl bucket-owner-full-control --source-region SourceRegionName --region DestinationRegionName8.    In the destination account, open the Amazon S3 console and grant ElastiCache read access to the .rdb file.9.    Create an ElastiCache for Redis cluster by seeding the cluster with the .rdb file.Follow"
https://repost.aws/knowledge-center/elasticache-redis-migrate-best-practices
Why am I receiving the error "Your requested instance type is not supported in your requested Availability Zone" when launching an EC2 instance?
I received the error message "Your requested instance type is not supported in your requested Availability Zone" when launching an Amazon Elastic Compute Cloud (Amazon EC2) instance. How can I determine which Availability Zone to use?
"I received the error message "Your requested instance type is not supported in your requested Availability Zone" when launching an Amazon Elastic Compute Cloud (Amazon EC2) instance. How can I determine which Availability Zone to use?Short descriptionSome Availability Zones don't support particular instance types. If you receive the error "Your requested instance type is not supported in your requested Availability Zone," do the following:Determine which Availability Zones support your instance type.Retry the request and specify an Availability Zone that supports your chosen instance type. Or, submit the request without specifying an Availability Zone.Note: This error is different from an insufficient instance capacity error. For information on insufficient capacity errors, see Insufficient instance capacity.ResolutionDetermine which Availability Zones support your instance typeFrom the Amazon EC2 console:Open the Amazon EC2 console.Choose the Region where you want to launch the instance.Select Instance Types.For Filter instance types, enter your preferred instance type.Select your preferred instance type.Under Networking, review the Availability Zones listed under Availability Zones.From the AWS Command Line Interface (AWS CLI):Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Use the describe-instance-type-offerings command and include filters for the Availability Zone and the instance type you want to launch. Add additional filters as needed. The following example command filters the search results by Availability Zone, instance type, and Region. For more information on the describe-instance-type-offerings command, see describe-instance-type-offerings in the AWS CLI Command Reference.# aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=instance-type,Values=c5.xlarge --region af-south-1 --output tableNote: Availability Zone names might not map to the same location across accounts. For more information, see Availability Zones. Use the location-type availability-zone-id command to have the output list Availability Zone IDs. You can use the Availability Zone ID to validate the Availability Zone mapping on your account.Retry the requestWhen launching the instance, choose a supported Availability Zone during instance launch. You can launch an instance to an Availability Zone using the old launch wizard, new launch wizard, or using AWS CLI. Or, don't specify an Availability Zone in your request. If you don't specify an Availability Zone, Amazon EC2 chooses an Availability Zone for you that supports your instance type.Follow"
https://repost.aws/knowledge-center/ec2-instance-type-not-supported-az-error
How do I troubleshoot data loading errors while using the COPY command in Amazon Redshift?
"I tried to use the COPY command to load a flat file. However, I'm experiencing data loading issues or errors in Amazon Redshift. How do I troubleshoot this?"
"I tried to use the COPY command to load a flat file. However, I'm experiencing data loading issues or errors in Amazon Redshift. How do I troubleshoot this?Short descriptionUse the STL_LOAD_ERRORS table to identify any data loading errors that occur during a flat file load. The STL_LOAD_ERRORS table can help you track the progress of a data load, recording any failures or errors along the way. After you troubleshoot the identified issue, reload the data in the flat file while using the COPY command.Tip: If you're using the COPY command to load a flat file in Parquet format, you can also use the SVL_S3LOG table. The SVL_S3LOG table can be used to identify any data loading errors.ResolutionNote: The following steps use an example dataset of cities and venues.1.    Check the data in your sample flat file to confirm that the source data is valid.For example:7|BMO Field|Toronto|ON|016|TD Garden|Boston|MA|023|The Palace of Auburn Hills|Auburn Hills|MI|028|American Airlines Arena|Miami|FL|037|Staples Center|Los Angeles|CA|042|FedExForum|Memphis|TN|052|PNC Arena|Raleigh|NC ,25 |059|Scotiabank Saddledome|Calgary|AB|066|SAP Center|San Jose|CA|073|Heinz Field|Pittsburgh|PA|65050In this example demo.txt file, five fields are used, separated by a pipe character. For more information, see Load LISTING from a pipe-delimited file (default delimiter).2.    Open the Amazon Redshift console.3.    Create a sample table using the following DDL:CREATE TABLE VENUE1(VENUEID SMALLINT,VENUENAME VARCHAR(100),VENUECITY VARCHAR(30),VENUESTATE CHAR(2),VENUESEATS INTEGER) DISTSTYLE EVEN;4.    Create a view to preview the relevant columns from the STL_LOAD_ERRORS table:create view loadview as(select distinct tbl, trim(name) as table_name, query, starttime,trim(filename) as input, line_number, colname, err_code,trim(err_reason) as reasonfrom stl_load_errors sl, stv_tbl_perm spwhere sl.tbl = sp.id);This view can help you identify the cause of the data loading error.5.    Use the COPY command to load the data:copy Demofrom 's3://your_S3_bucket/venue/'iam_role 'arn:aws:iam::123456789012:role/redshiftcopyfroms3'delimiter '|' ;Note: Replace your_S3_bucket with the name of your S3 bucket. Then, replace arn:aws:iam::123456789012:role/redshiftcopyfroms3 with the Amazon Resource Name (ARN) for your AWS Identity and Access Management (IAM) role. This IAM role must be able to access data from your S3 bucket. For more information, see Parameters.6.    Query the load view to display and review the error load details of the table:testdb=# select * from loadview where table_name='venue1';tbl | 265190table_name | venue1query | 5790starttime | 2017-07-03 11:54:22.864584input | s3://your_S3_bucket/venue/venue_pipe0000_part_00line_number | 7colname | venuestateerr_code | 1204reason | Char length exceeds DDL lengthIn this example, the exception is caused by the length value, which must be added to the venuestate column. The (NC ,25 |) value is longer than the length defined in the VENUESTATE CHAR(2) DDL.You can resolve this exception in two different ways:If the data is expected to exceed the defined length of the column, then review and update the table definition to modify the column length.-or-If the data isn't properly formatted or transformed, then modify the data in file to use the correct value.The output from this query includes the following important information:The file causing the error.The column causing the error.The line number in the input file.The reason for the exception.7.    
Modify the data in your load file to use the correct values (the length must align with the defined column length):7|BMO Field|Toronto|ON|016|TD Garden|Boston|MA|023|The Palace of Auburn Hills|Auburn Hills|MI|028|American Airlines Arena|Miami|FL|037|Staples Center|Los Angeles|CA|042|FedExForum|Memphis|TN|052|PNC Arena|Raleigh|NC|059|Scotiabank Saddledome|Calgary|AB|066|SAP Center|San Jose|CA|073|Heinz Field|Pittsburgh|PA|650508.    Reload the data:testdb=# copy Demofrom 's3://your_S3_bucket/venue/'iam_role 'arn:aws:iam::123456789012:role/redshiftcopyfroms3' delimiter '|' ;INFO: Load into table 'venue1' completed, 808 record(s) loaded successfully.Note: The STL_LOAD_ERRORS table can hold only a limited amount of logs (typically for around 4 to 5 days). Also, standard users can view only their own data when querying the STL_LOAD_ERRORS table. To view all the table data, you must be a superuser.Related informationAmazon Redshift best practices for designing tablesAmazon Redshift best practices for loading dataSystem tables for troubleshooting data loadsWorking with recommendations from Amazon Redshift AdvisorFollow"
https://repost.aws/knowledge-center/redshift-stl-load-errors
What are some use cases for using an object ACL in Amazon S3?
I want to delegate access to my Amazon Simple Storage Service (Amazon S3) objects using an access control list (ACL). What are some use cases for using an object or bucket ACL?
"I want to delegate access to my Amazon Simple Storage Service (Amazon S3) objects using an access control list (ACL). What are some use cases for using an object or bucket ACL?ResolutionAmazon S3 access control lists (ACLs) enable you to manage access to S3 buckets and objects. Every S3 bucket and object has an ACL attached to it as a subresource. The ACLs define which AWS accounts or groups are granted access along with the type of access. When you submit a request against a resource, Amazon S3 checks the corresponding ACL to confirm that you have the required access permissions.Most use cases where access is granted to objects or buckets no longer require ACLs. However, in some cases, using an ACL might be more appropriate. For example, here are some use cases for when you might need to use an ACL to manage bucket or object access:An object ACL is the only way to grant access to objects that are not owned by the bucket owner. By default, when another AWS account uploads an object to your S3 bucket, that account (the object writer) owns the object. Additionally, the object writer has access to the object, and can grant other users access to it using ACLs.Object ACLs can be used when you need to manage permissions at the object level. For example, if you need to delegate access to an entire folder you can use a bucket policy. However, if the access permissions vary by object, granting permissions to individual objects using a bucket policy might not be practical. Therefore, an object ACL might be more appropriate for managing object access.If you want to own new objects written to your bucket by other AWS accounts (and your ACL isn't disabled), apply the bucket owner preferred setting. With this setting, new objects that are written with the bucket-owner-full-control ACL are automatically owned by the bucket owner (and not the object writer). All other ACL behaviors remain in place.Note: To disable an ACL, use the bucket owner enforced setting for S3 Object Ownership. When ACLs are disabled, you can easily maintain a bucket with objects uploaded (cross-account) by different AWS accounts using bucket policies. If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to set or update ACLs fail, returning the AccessControlListNotSupported error code. However, requests to read ACLs will still be supported.Bucket ACLs can be used to grant permissions to AWS services like Amazon CloudFront to perform certain actions to your bucket. For example, when you create or update a CloudFront distribution and enable CloudFront logging, CloudFront updates the bucket ACL. This update gives the awslogsdelivery account FULL_CONTROL permissions to write logs to your bucket. 
For more information, see Permissions required to configure standard logging and to access your log files.Applying ACLs to objectsExample 1If you're uploading an object to a bucket in a different AWS account, use the bucket-owner-full-control canned ACL:aws s3api put-object --bucket destination_bucket --key dir-1/myfile --body dir-1/myfile --acl bucket-owner-full-controlThe bucket-owner-full-control canned ACL provides access to the bucket owner's account.Note: Amazon S3 supports a set of predefined ACLs known as canned ACLs (such as the bucket-owner-full-control ACL used in this example).Example 2The object uploader can also add an ACL to grant read permissions to other AWS accounts:aws s3api put-object --bucket destination_mybucket --key dir/myfile --body dir/myfile --grant-read [email protected],id=canonical-id-of-accountNote: You can only specify a grantee using email addresses in the following AWS Regions: N. Virginia, N. California, Oregon, Singapore, Sydney, Tokyo, Ireland, and São Paulo.Example 3You can also update the ACL of an existing object:aws s3api put-object-acl --bucket destination_bucket --key dir/myfile --acl bucket-owner-full-controlExample 4Amazon S3 has a set of predefined groups. You can use object ACLs to grant permissions to the users who are part of these predefined groups.For example, you can grant object access to any authenticated AWS user by granting access to theAuthenticated Users group:aws s3api put-object --bucket destination_mybucket --key dir/myfile --body dir/myfile --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsersNote: Before granting access to the Authenticated Users group, disable the Block Public Access settings for ACLs at both the account and bucket level. Otherwise, you'll get an Access Denied error. To troubleshoot ACL-related Access Denied errors, see A user with permission to add objects to my Amazon S3 bucket is getting Access Denied errors. Why?Related informationHow do I troubleshoot 403 Access Denied errors from Amazon S3?Managing access with ACLsWhen to use an ACL-based access policy (bucket and object ACLs)Follow"
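To confirm which grants ended up on a bucket or object after running any of the preceding examples, you can read the ACL back with the AWS CLI. This is a minimal sketch that reuses the placeholder bucket and key names from the examples above:
aws s3api get-object-acl --bucket destination_bucket --key dir/myfile
aws s3api get-bucket-acl --bucket destination_bucket
The response lists the owner and each grantee along with the permission that was granted.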
https://repost.aws/knowledge-center/s3-object-acl-use-cases
How do I troubleshoot timeout issues when I query CloudTrail data using Athena?
"When I use Amazon Athena to query my AWS CloudTrail data, my queries take a long time to run or they time out."
"When I use Amazon Athena to query my AWS CloudTrail data, my queries take a long time to run or they time out.ResolutionCloudTrail logs can grow in size over time even if you partition the CloudTrail table to reduce the run time of the queries. Queries against a highly partitioned table have a higher planning time and do not complete quickly.To resolve the timeout issue, you can manually create a CloudTrail table using partition projection. This allows Athena to dynamically calculate the value of CloudTrail tables instead of scanning through a large list of partitions. With partition projection, you don't need to manage partitions because partition values and locations are calculated from configuration.To create a CloudTrail table partitioned by timestamp with partition projection, see Creating the table for CloudTrail logs in Athena using partition projection.To create a CloudTrail table for multiple accounts that's partitioned by year, month, and day with partition projection, use a command similar to the following:CREATE EXTERNAL TABLE ctrail_pp_ymd (eventversion STRING,useridentity STRUCT< type:STRING, principalid:STRING, arn:STRING, accountid:STRING, invokedby:STRING, accesskeyid:STRING, userName:STRING,sessioncontext:STRUCT<attributes:STRUCT< mfaauthenticated:STRING, creationdate:STRING>,sessionissuer:STRUCT< type:STRING, principalId:STRING, arn:STRING, accountId:STRING, userName:STRING>>>,eventtime STRING,eventsource STRING,eventname STRING,awsregion STRING,sourceipaddress STRING,useragent STRING,errorcode STRING,errormessage STRING,requestparameters STRING,responseelements STRING,additionaleventdata STRING,requestid STRING,eventid STRING,resources ARRAY<STRUCT< ARN:STRING, accountId:STRING, type:STRING>>,eventtype STRING,apiversion STRING,readonly STRING,recipientaccountid STRING,serviceeventdetails STRING,sharedeventid STRING,vpcendpointid STRING)PARTITIONED BY (account string, region string, year string, month string, day string)ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'LOCATION 's3://doc_example_bucket/AWSLogs/'TBLPROPERTIES ( 'projection.enabled'='true', 'projection.day.type'='integer', 'projection.day.range'='01,31', 'projection.day.digits'='2', 'projection.month.type'='integer', 'projection.month.range'='01,12', 'projection.month.digits'='2', 'projection.region.type'='enum', 'projection.region.values'='us-east-1,us-east-2,us-west-2', 'projection.year.type'='integer', 'projection.year.range'='2015,2021', 'projection.account.type'='enum', 'projection.account.values'='111122223334444,5555666677778888', 'storage.location.template'='s3://doc_example_bucket/AWSLogs/${account}/CloudTrail/${region}/${year}/${month}/${day}')Be sure to replace the following in the preceding query:ctrail_pp_ymd with the name of the CloudTrail tabledoc_example_bucket with the name of the Amazon Simple Storage Service (Amazon S3) bucket where you want to create the CloudTrail table1111222233334444 and 5555666677778888 with the account IDs of accounts that you want to create the CloudTrail table forus-east-1,us-east-2,us-west-2 with the Region that you want to create the CloudTrail table forTable attributes and properties based on your use caseProjection ranges based on your use case (for example, if your CloudTrail data is available only from year 2018, then replace the projection range for partition column year with '2018,2021')To create a 
CloudTrail table for multiple accounts under the same organization , use a command similar to the following:CREATE EXTERNAL TABLE ctrail_pp_ymd_org (eventversion STRING,useridentity STRUCT< type:STRING, principalid:STRING, arn:STRING, accountid:STRING, invokedby:STRING, accesskeyid:STRING, userName:STRING,sessioncontext:STRUCT<attributes:STRUCT< mfaauthenticated:STRING, creationdate:STRING>,sessionissuer:STRUCT< type:STRING, principalId:STRING, arn:STRING, accountId:STRING, userName:STRING>>>,eventtime STRING,eventsource STRING,eventname STRING,awsregion STRING,sourceipaddress STRING,useragent STRING,errorcode STRING,errormessage STRING,requestparameters STRING,responseelements STRING,additionaleventdata STRING,requestid STRING,eventid STRING,resources ARRAY<STRUCT< ARN:STRING, accountId:STRING, type:STRING>>,eventtype STRING,apiversion STRING,readonly STRING,recipientaccountid STRING,serviceeventdetails STRING,sharedeventid STRING,vpcendpointid STRING)PARTITIONED BY (account string, region string, year string, month string, day string)ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'LOCATION 's3://doc_example_bucket/AWSLogs/doc_example_orgID/'TBLPROPERTIES ( 'projection.enabled'='true', 'projection.day.type'='integer', 'projection.day.range'='01,31', 'projection.day.digits'='2', 'projection.month.type'='integer', 'projection.month.range'='01,12', 'projection.month.digits'='2', 'projection.region.type'='enum', 'projection.region.values'='us-east-1,us-east-2,us-west-2', 'projection.year.type'='integer', 'projection.year.range'='2010,2100', 'projection.account.type'='enum', 'projection.account.values'='111122223334444,5555666677778888', 'storage.location.template'='s3://doc_example_bucket/AWSLogs/doc_example_orgID/${account}/CloudTrail/${region}/${year}/${month}/${day}')Note: If you need to query the CloudTrail data before year 2010, then be sure to update the year range in projection.year.range.Be sure to replace the following in the above query:ctrail_pp_ymd_org with the name of the CloudTrail tabledoc_example_bucket with the name of the Amazon S3 bucket where you want to create the CloudTrail tabledoc_example_orgID with your organization ID1111222233334444 and 5555666677778888 with the account IDs of accounts that you want to create the CloudTrail table forus-east-1, us-east-2, and us-west-2 with the Regions where you want to create the CloudTrail tableTable attributes and properties based on your use caseProjection ranges based on your use case (for example, if your CloudTrail data is available only from year 2018, then replace the projection range for partition column year with '2018,2021')When you run your queries, be sure to include restrictive conditions on the partition columns in your queries. This allows Athena to scan less data and speeds up query processing.For example, you can run a command similar to the following to find out which user made the GetObject request to the S3 bucket. 
The table in this query uses the year, month, and day partitioning strategy.Note: Be sure to have event logging activated for Amazon S3 in CloudTrail.SELECT useridentity.arn, eventtime FROM "ctrail_pp_ymd"where eventname = 'GetObject'and year = '2021'and month = '05'and region = 'us-east-1'and cast(json_extract(requestparameters, '$.bucketName')as varchar) ='doc_example_bucket'Be sure to replace the following in the above query:ctrail_pp_ymd with the name of the CloudTrail tabledoc_example_bucket with the name of the S3 bucket where you want to create the CloudTrail tableRestrictive conditions based on your use caseIf you have timeout issues even after implementing the above steps, then you can request a service quota increase.Related informationQuerying AWS CloudTrail logsFollow"
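If you prefer to create the partition projection table from a script rather than the Athena query editor, you can submit the same DDL with the AWS CLI. The following is only a sketch: it assumes the DDL is saved locally as create_ctrail_pp_ymd.sql, and the workgroup, database, Region, and results location shown are placeholders for your own values:
aws athena start-query-execution \
    --query-string file://create_ctrail_pp_ymd.sql \
    --query-execution-context Database=default \
    --work-group primary \
    --result-configuration OutputLocation=s3://doc_example_bucket/athena-query-results/ \
    --region us-east-1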
https://repost.aws/knowledge-center/athena-cloudtrail-data-timeout
How do I create a DHCP options set for my AWS directory?
I want to create a Dynamic Host Configuration Protocol (DHCP) options set for my AWS Directory Service directory. How can I do this?
"I want to create Dynamic Host Configuration Protocol (DHCP) options set for my AWS Directory Service directory. How can I do this?Short DescriptionBy default, Amazon Virtual Private Cloud (Amazon VPC) uses AWS DNS instead of your directory service DNS. It's a best practice that you create a DHCP options set for your AWS Directory Service directory. Then, assign the DHCP options set to the VPC that your directory runs in. This allows any instances in that VPC to point to the specified domain and for DNS servers to resolve their domain names.ResolutionTo create a DHCP options set for your AWS directory, follow the instructions to Create a DHCP Options Set.Follow"
https://repost.aws/knowledge-center/enable-dhcp-options-set-directory-service
Why am I experiencing a data delivery failure with Kinesis Data Firehose?
I'm trying to send data from Amazon Kinesis Data Firehose to my Amazon OpenSearch Service domain. Why am I experiencing a data delivery failure?
"I'm trying to send data from Amazon Kinesis Data Firehose to my Amazon OpenSearch Service domain. Why am I experiencing a data delivery failure?Short descriptionA failed delivery between Kinesis Data Firehose and Amazon OpenSearch Service can be caused by the following reasons:Invalid delivery destinationNo incoming dataDisabled Kinesis Data Firehose logsLack of proper permissionsAWS Lambda function invocation issuesOpenSearch Service domain health issuesResolutionInvalid delivery destinationConfirm that you have specified a valid Kinesis Data Firehose delivery destination and that you are using the correct ARN. You can check whether your delivery was successful by viewing the DeliveryToElasticsearch.Success metric in Amazon CloudWatch. A DeliveryToElasticsearch.Success metric value of zero is confirmation that the deliveries are unsuccessful. For more information about the DeliveryToElasticsearch.Success metric, see Delivery to OpenSearch Service in Data delivery CloudWatch metrics.No incoming dataConfirm that there is incoming data for Kinesis Data Firehose by monitoring the IncomingRecords and IncomingBytes metrics. A value of zero for those metrics means that there are no records reaching Kinesis Data Firehose. For more information about the IncomingRecords and IncomingBytes metrics, see Data ingestion through direct PUT in Data ingestion metrics.If the delivery stream uses Amazon Kinesis Data Streams as a source, then check the IncomingRecords and IncomingBytes metrics of the Kinesis data stream. These two metrics indicate whether there is incoming data. A value of zero confirms that there are no records reaching the streaming services.Check the DataReadFromKinesisStream.Bytes and DataReadFromKinesisStream.Records metrics to verify whether data is coming from Kinesis Data Streams to Kinesis Data Firehose. For more information about the data metrics, see Data ingestion through Kinesis Data Streams. A value of zero can indicate a failure to deliver to OpenSearch Service rather than a failure between Kinesis Data Streams and Kinesis Data Firehose.You can also check to see if the PutRecord and PutRecordBatch API calls for Kinesis Data Firehose are called properly. If you aren't seeing any incoming data flow metrics, check the producer that is performing the PUT operations. For more information about troubleshooting producer application issues, see Troubleshooting Amazon Kinesis Data Streams producers.Disabled Kinesis Data Firehose logsMake sure that Logging is enabled for Kinesis Data Firehose. Otherwise, the error logs result in a delivery failure. Then, check for the /aws/kinesisfirehose/delivery-stream-name log group name in CloudWatch Logs.In the Kinesis Data Firehose role, the following permissions are required:{ "Action": [ "logs:PutLogEvents" ]},{ "Resource": [ "arn:aws:logs:region:account-id:log-group:log-group-name:log-stream:log-stream-name" ]}Verify that you have granted Kinesis Data Firehose access to a public OpenSearch Service destination. 
If you are using the data transformation feature, then you must also grant access to AWS Lambda.Lack of proper permissionsThere are several permissions required depending on the configuration of Kinesis Data Firehose.To deliver records to an Amazon Simple Storage Service (Amazon S3) bucket, the following permissions are required:{ "Effect": "Allow", "Action": [ "s3:AbortMultipartUpload", "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::bucket-name", "arn:aws:s3:::bucket-name/*" ] }Note: To use this policy, the Amazon S3 bucket resource must be present.If your Kinesis Data Firehose is encrypted at rest, the following permissions are required:{ "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ], "Resource": [ "arn:aws:kms:region:account-id:key/key-id" ], "Condition": { "StringEquals": { "kms:ViaService": "s3.region.amazonaws.com" }, "StringLike": { "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::bucket-name/prefix*" } }}To allow permissions for OpenSearch Service access, you can update your policy like this:{ "Effect": "Allow", "Action": [ "es:DescribeElasticsearchDomain", "es:DescribeElasticsearchDomains", "es:DescribeElasticsearchDomainConfig", "es:ESHttpPost", "es:ESHttpPut" ], "Resource": [ "arn:aws:es:region:account-id:domain/domain-name", "arn:aws:es:region:account-id:domain/domain-name/*" ]},{ "Effect": "Allow", "Action": [ "es:ESHttpGet" ], "Resource": [ "arn:aws:es:region:account-id:domain/domain-name/_all/_settings", "arn:aws:es:region:account-id:domain/domain-name/_cluster/stats", "arn:aws:es:region:account-id:domain/domain-name/index-name*/_mapping/type-name", "arn:aws:es:region:account-id:domain/domain-name/_nodes", "arn:aws:es:region:account-id:domain/domain-name/_nodes/stats", "arn:aws:es:region:account-id:domain/domain-name/_nodes/*/stats", "arn:aws:es:region:account-id:domain/domain-name/_stats", "arn:aws:es:region:account-id:domain/domain-name/index-name*/_stats" ]}If you are using Kinesis Data Streams as a source, update your permissions like this:{ "Effect": "Allow", "Action": [ "kinesis:DescribeStream", "kinesis:GetShardIterator", "kinesis:GetRecords", "kinesis:ListShards" ], "Resource": "arn:aws:kinesis:region:account-id:stream/stream-name"}To configure Kinesis Data Firehose for data transformation, you can update your policy like this:{ "Effect": "Allow", "Action": [ "lambda:InvokeFunction", "lambda:GetFunctionConfiguration" ], "Resource": [ "arn:aws:lambda:region:account-id:function:function-name:function-version" ]}AWS Lambda function invocation issuesCheck the Kinesis Data Firehose ExecuteProcessing.Success and Errors metrics to be sure that Kinesis Data Firehose has invoked your function. If Kinesis Data Firehose hasn't invoked your Lambda function, then check the invocation time to see if it is beyond the timeout parameter. Your Lambda function might require a greater timeout value or need more memory to complete in time. For more information about invocation metrics, see Using invocation metrics.To identify the reasons that Kinesis Data Firehose isn't invoking the Lambda function, check the Amazon CloudWatch Logs group for /aws/lambda/lambda-function-name. If data transformation fails, then the failed records are delivered to the S3 bucket as a backup in the processing-failed folder. The records in your S3 bucket also contain the error message for failed invocation. 
For more information about resolving Lambda invocation failures, see Data transformation failure handling.OpenSearch Service domain health issuesCheck the following metrics to confirm that OpenSearch Service is in good health:CPU utilization: If this metric is consistently high, the data node might be unable to respond to any requests or incoming data. You might need to scale your cluster.JVM memory pressure: If the JVM memory pressure is consistently above 80%, the cluster might be triggering memory circuit breaker exceptions. These exceptions can prevent the data from being indexed.ClusterWriteBlockException: This indexing block occurs when your domain is under high JVM memory pressure or if more storage space is needed. If a data node doesn't have enough space, then new data cannot be indexed. For more information about troubleshooting OpenSearch Service issues, see Amazon OpenSearch Service troubleshooting.Follow"
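To check the delivery and ingestion metrics described above from the command line, you can pull them from CloudWatch. This is a minimal sketch that uses the metric name referenced in this article; the delivery stream name and time window are placeholders:
aws cloudwatch get-metric-statistics \
    --namespace AWS/Firehose \
    --metric-name DeliveryToElasticsearch.Success \
    --dimensions Name=DeliveryStreamName,Value=my-delivery-stream \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-01T06:00:00Z \
    --period 300 \
    --statistics Sum
A sum of zero over the window is consistent with the failed-delivery condition described above.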
https://repost.aws/knowledge-center/kinesis-data-firehose-delivery-failure
How can I troubleshoot a State Manager association that failed or that is stuck in pending status?
"I created a State Manager association scheduled to run on my managed Amazon Elastic Compute Cloud (Amazon EC2) instance. However, the association status is failed or stuck in pending."
"I created a State Manager association scheduled to run on my managed Amazon Elastic Compute Cloud (Amazon EC2) instance. However, the association status is failed or stuck in pending.ResolutionAWS Systems Manager State Manager association is a configuration assigned to your managed instances. The configuration defines the state that you want to maintain on your instances.When you create a State Manager association, Systems Manager binds the schedule, targets, document, and parameter information that you specify to the managed instances. The association status is initially pending while the system tries to reach all targets and immediately apply the state specified in the association.Troubleshoot an association stuck in pending/failed statusIf the State Manager association remains stuck in pending or failed state, first confirm that the latest version of SSM Agent is installed.Then, verify the status of the resource where the association is applied, and view the history to confirm if there were any invocations.From the Systems Manager console State Manager Associations page, choose the hyperlinked Association id for the association that is stuck in pending or failed state.Choose the Execution history tab to view the invocation history.If the history lists invocations, choose the hyperlinked Execution id to see the resource type, status, and other details. Then, proceed to the Identify the cause of the failure section of this article.If there aren't any invocations listed in the history, verify that the instance is a managed instance. From the Systems Manager console, the instance must be listed under Managed instances, and the SSM Agent ping status must be Online.To troubleshoot managed instances, see Why is my EC2 instance not displaying as a managed node or showing a "Connection lost" status in Systems Manager?Identify the cause of the failureIf the history lists invocations, from Execution ID Association execution targets page, select the target instance Resource ID, and then choose Output. The output displays details and an error message about why the association failed.For Systems Manager Automation document errors, see Troubleshooting Systems Manager Automation.For Run Command document errors, see Troubleshooting Systems Manager Run Command.Note: The output differs depending on the Systems Manager document that you use. For more information, see AWS Systems Manager documents.Review SSM Agent logsReview the SSM Agent logs for more details about the Run Command document failure:For Linux and macOS, locate the logs in the following directories:/var/log/amazon/ssm/amazon-ssm-agent.log/var/log/amazon/ssm/errors.log/var/log/amazon/ssm/audits/amazon-ssm-agent-audit-YYYY-MM-DDNote: SSM Agent stderr and stdout files write to the /var/lib/amazon/ssm directory.For Windows, locate the logs in the following directories:%PROGRAMDATA%\Amazon\SSM\Logs\amazon-ssm-agent.log%PROGRAMDATA%\Amazon\SSM\Logs\errors.log%PROGRAMDATA%\Amazon\SSM\Logs\audits\amazon-ssm-agent-audit-YYYY-MM-DDRelated informationUnderstanding automation statusesAWS Systems Manager State ManagerFollow"
https://repost.aws/knowledge-center/ssm-state-manager-association-fail
"How do I calculate bandwidth and set a CloudWatch alarm for AWS Direct Connect, AWS Site-Site VPN, or Transit Gateway connections?"
"I have an AWS Direct Connect, AWS Site-to-Site VPN Connection, or an AWS Transit Gateway connection. I want to use Amazon CloudWatch metrics to calculate bandwidth and set up a bandwidth-based notification."
"I have an AWS Direct Connect, AWS Site-to-Site VPN Connection, or an AWS Transit Gateway connection. I want to use Amazon CloudWatch metrics to calculate bandwidth and set up a bandwidth-based notification.Short descriptionUse CloudWatch metrics to calculate bandwidth for an AWS Direct Connect connection, AWS Site-to-Site VPN, or AWS Transit Gateway connection. Then, create a CloudWatch alarm with an Simple Notification Service (Amazon SNS) notification to be alerted when bandwidth exceeds the values that you set.Important: Bandwidth calculation is approximate and doesn’t provide exact bandwidth usage up to the moment. A CloudWatch metrics-based alarm is effective for connections that exceed bandwidth for a duration of 15 minutes or more.Based on the type of connection that you have, see the related Resolution section to calculate the bandwidth and set up the CloudWatch alarm. You can also set up Amazon SNS notifications for the CloudWatch alarm.ResolutionDirect Connect connectionAccess the CloudWatch console.In the navigation pane, choose Metrics. Then, choose All metrics.Under All Metrics, choose DX. Then, choose Connection metrics.Select ConnectionBpsIngress and ConnectionBpsEgress for the Direct Connect connection that you want to measure.Choose the Graphed metrics tab and set following:Statistics: MaximumPeriod: 5 minutesChoose Add Math. From the dropdown list, choose Start with an empty expression.After you choose Start with an empty expression, a math expression box appears. In this box, enter (m1+m2).This formula calculates total bandwidth. The variables represent the following values:m1 = ConnectionBpsIngress (in bits per second)m2 = ConnectionBpsEgress (in bits per second)Choose Apply.In the graphed metrics section, the expression that you added and the metrics in the expression are listed. To see the representation in the graph section, select only the expression that you added: (m1+m2).The Output result is in bits per second.Configure your CloudWatch alarm for Direct Connect connections. Use the math expression that you calculated in the previous step for Metric. When creating the alarm, set the following values:For Select metric, re-enter the expression from the steps in the Calculate bandwidth for a Direct Connect connection section of this article. Select only the expression that you created.In the Conditions section, set bandwidth value that you want to monitor as a condition. For example, to be notified when bandwidth reaches 100Mbps, enter Greater/Equal(>=) 1,000,000,00.In the Additional configuration section, set Datapoints to alarm to 3 out of 3.[Optional] Set up Amazon SNS notifications for the CloudWatch alarm.Site-to-Site VPN connectionAccess the CloudWatch console.In the navigation pane, choose Metrics. Then, choose All metrics.Under All Metrics, choose VPN. Then, choose VPN tunnel metrics.Select TunnelDataIn and TunnelDataOut metrics for the VPN tunnel that you want to measure.Choose the Graphed metrics tab and set following parameters:Statistics: SUMPeriod: 5 minutesChoose Add Math. From the dropdown list, choose Start with an empty expression.After you choose Start with an empty expression, a math expression box appears. In this box, enter (m1+m2)*8/300.This formula converts bytes to bits, divides by time in seconds to calculate output in bits per second. 
The variables represent the following values:m1 = TunnelDataIn (in bytes)m2 = TunnelDataOut (in bytes)Choose Apply.In the graphed metrics section, the expression that you added and the metrics in the expression are listed. To see the representation in the graph section, select only the expression you added: (m1+m2)*8/300. The Output result is in bits per second.Configure your CloudWatch alarm for VPN connections. Use the math expression that you calculated in the previous step for Metric. When creating the alarm, set the following values:For Select metric, re-enter the expression from the steps in the Calculate bandwidth for a Site-to-Site VPN connection section of this article. Select only the expression that you created.In the Conditions section, set the bandwidth value that you want to monitor as a condition. For example, to be notified when bandwidth reaches 100Mbps, enter Greater/Equal(>=) 100,000,000.In the Additional configuration section, set Datapoints to alarm to 3 out of 3.[Optional] Set up Amazon SNS notifications for the CloudWatch alarm.Transit Gateway attachmentThe following procedure calculates bandwidth on a Transit Gateway VPC attachment. You can use a similar calculation on different types of attachments.Access the CloudWatch console.In the navigation pane, choose Metrics. Then, choose All metrics.Under All Metrics, choose Transit Gateway. Then, choose Per-TransitGatewayAttachment Metrics.Select BytesIn and BytesOut metrics for the Transit Gateway attachment that you want to measure.Choose the Graphed metrics tab, and then set the following parameters:Statistics: SUMPeriod: 5 minutesChoose Add Math. From the dropdown list, choose Start with an empty expression.After you choose Start with an empty expression, a math expression box appears. In this box, enter (m1+m2)*8/300.This formula converts bytes to bits and divides by time in seconds to calculate output in bits per second. The variables represent the following values:m1 = BytesIn (in bytes)m2 = BytesOut (in bytes)Choose Apply.In the graphed metrics section, the expression that you added and the metrics in the expression are listed. To see the representation in the graph section, select only the expression you added: (m1+m2)*8/300. The Output result is in bits per second.Create a CloudWatch alarm based on a metric math expression. Use the math expression that you calculated in the previous step for Metric. When creating the alarm, set the following values:For Select metric, re-enter the expression from the steps in the Calculate bandwidth for a Transit Gateway attachment section. Select only the expression that you created.In the Conditions section, set the bandwidth value that you want to monitor as a condition. For example, to be notified when bandwidth reaches 100Mbps, enter Greater/Equal(>=) 100,000,000.In the Additional configuration section, set Datapoints to alarm to 3 out of 3.[Optional] Set up Amazon SNS notifications for the CloudWatch alarm.Related informationVPN tunnel metrics and dimensionsAWS Direct Connect metrics and dimensionsAWS Transit Gateway metricsFollow"
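You can also create the same alarm from the AWS CLI by passing the metric math definition directly. The following is a sketch for the Site-to-Site VPN case only; the alarm name, tunnel outside IP address, SNS topic ARN, and threshold are placeholders, and the expression mirrors the (m1+m2)*8/300 calculation described above:
aws cloudwatch put-metric-alarm \
    --alarm-name vpn-tunnel-bandwidth-100mbps \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --threshold 100000000 \
    --evaluation-periods 3 \
    --datapoints-to-alarm 3 \
    --alarm-actions arn:aws:sns:us-west-2:123456789012:example-topic \
    --metrics '[
      {"Id":"m1","ReturnData":false,"MetricStat":{"Metric":{"Namespace":"AWS/VPN","MetricName":"TunnelDataIn","Dimensions":[{"Name":"TunnelIpAddress","Value":"203.0.113.10"}]},"Period":300,"Stat":"Sum"}},
      {"Id":"m2","ReturnData":false,"MetricStat":{"Metric":{"Namespace":"AWS/VPN","MetricName":"TunnelDataOut","Dimensions":[{"Name":"TunnelIpAddress","Value":"203.0.113.10"}]},"Period":300,"Stat":"Sum"}},
      {"Id":"e1","Expression":"(m1+m2)*8/300","Label":"TunnelBandwidthBps","ReturnData":true}
    ]'
For Direct Connect or Transit Gateway, swap in the corresponding namespace, metric names, dimensions, and statistic described above.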
https://repost.aws/knowledge-center/cloudwatch-connection-bandwidth-alarms
Why is my Amazon EMR cluster unreachable?
I can't connect to my Amazon EMR cluster.
"I can't connect to my Amazon EMR cluster.Short descriptionThe following are common reasons that your EMR cluster might be unreachable:There's a permissions issue in the security group rules.The network setup is incorrect for clusters that are provisioned in a private subnet.There's an issue with cluster authentication setup.There are resource constraints in the cluster nodes.The Amazon EMR service daemon is stopped.ResolutionAmazon EMR security group rules1.    Verify that the security group rules are correct. For more information, see Working with Amazon EMR-managed security groups.2.    Verify that TCP on port 8443 is allowed. Port 8443 allows the cluster manager to talk to the cluster master node.3.    Verify that SSH on port 22 is allowed, if you're trying to connect to the cluster through SSH.    If outside users or applications are not able to reach the EMR cluster, then validate the related rules that are set in managed-security groups. Also validate the rules in additional security groups.EMR clusters in a private subnetIn addition to the items mentioned in the previous section, verify the following for EMR clusters that are in a private subnet:1.    Verify that the additional managed security group for service access is added. Verify that the rules allow the cluster manager to communicate with the cluster nodes. For more information, see Amazon EMR-managed security group for service access (private subnets).2.    If you're using a bastion host and you can't reach Amazon EMR through the bastion host, then do the following:Verify that the bastion host security group allows inbound traffic from the client system.Verify that the EMR cluster security groups allow inbound traffic from the bastion host.As network configuration setups vary, make sure that the end-to-end connection is properly set without any black holes.Authentication methodsTo make sure that authentication is set up correctly, do the following:1.    If authentication uses an Amazon Elastic Compute Cloud (Amazon EC2) keypair, then verify that it's created and configured correctly. For more information, see Use an Amazon EC2 key pair for SSH credentials.2.    If authentication uses Kerberos, verify that it is configured correctly. For more information, see Use Kerberos authentication.Resource constraints in the cluster nodes1.    Verify that the underlying master node is in running state and isn't terminated.2.    Check the Instance-state log of the master node to determine how resources are being used.Run the following command to check for the top CPU user:ps auxwww --sort -%cpu | head -10Run the following command to check the kernel's performance:dmesg | tail -n 25Run the following command to check memory usage:free -mRun the following command to check disk usage:df -hEMR cluster daemonsThe master node's instance controller (I/C) is the daemon that runs on the cluster nodes. The instance controller communicates with the Amazon EMR control plane and the rest of the cluster. Run the following commands to make sure that it's in running state:Run the following command to check the status of the instance controller:sudo systemctl status instance-controllerRun the following command to start the instance controller:sudo systemctl start instance-controllerFollow"
https://repost.aws/knowledge-center/emr-troubleshoot-unreachable-cluster
Why am I not able to connect to my AWS Glue development endpoint using SSH?
I'm unable to connect to my AWS Glue development endpoint using SSH.-or-I'm unable to use SSH port forwarding to connect to my AWS Glue development endpoint.
"I'm unable to connect to my AWS Glue development endpoint using SHH.-or-I'm unable to use SSH port forwarding to connect to my AWS Glue development endpoint.ResolutionConnect to development endpoint using SSHBe sure that you've changed the permissions on your key pair file in the development endpoint. Confirm that only you can view the file by running the following command:$ chmod 400 my-key-pair.pemConfirm the following:The path to your private key is correct.The private extension is .pem and enclosed in double quotes.Be sure to check the network connectivity to the development endpoint on port 22 using tools such as Telnet or Netcat.Be sure that your security group allows traffic from your IP address on port 22. Check if the rule for the outbound traffic is correct. The rule for the outbound traffic must confirm that the outbound traffic is open to all ports. Or, the rule must be a self-referencing rule with the following parameters: Type as All TCP, Protocol as TCP, Port Range as ALL, and Source with the same security group name as the Group ID. For more information, see Setting up your network for a development endpoint.Be sure that the Edit DNS Hostnames setting is turned on in the virtual private cloud (VPC) used for the AWS Glue development endpoint. Also, confirm that the Amazon Simple Storage Service (Amazon S3) endpoint is attached to the VPC subnet used for the development endpoint.If you are using PuTTY as the SSH client, then convert your private .pem file to a .ppk file using the PuTTYgen tool.Connect to development endpoint using SSH port forwardingSSH port forwarding requires a public DNS address to connect to the development endpoint. To add a public DNS address to your development endpoint, do the following:1.    Create a development endpoint with a VPC.2.    In the AWS Glue console, choose Dev endpoints. Note the Private address for your development endpoint. You will use this address in the next step.3.    In the Amazon Elastic Compute Cloud (Amazon EC2) console, choose Network & Security from the navigation pane. Then, choose Network Interfaces. In the Network interfaces page, search for the Private IPv4 DNS address that corresponds to the Private Address field on your development endpoint.4.    Allocate an Elastic IP address and associate the address to the elastic network interface using the following instructions:For Resource type, choose Network interface.For Network interface, choose the elastic network interface that you noted in the previous step.Verify that the address in the Private IP address field is same as the IP address of your endpoint.5.    To validate the setup, use the command similar to the following to check if you can connect to the development endpoint using SSH:ssh -i dev-endpoint-private-key.pem glue@elastic-ip6.    If you can connect successfully using this command, then use the same Elastic IP address in the actual command:ssh -i private-key-file-path -NTL 9007:169.254.76.1:9007 glue@elastic_ipFor more information, see Accessing your development endpoint.Note: If you are trying to connect a Jupyter notebook to a development endpoint and can't create a port forwarding tunnel, then check the port in the ssh command. Be sure that the port used in the command is 8998 instead of 9007.For more information, seeTutorial: Set up a Jupyter notebook in JupyterLab to test and debug ETL scripts.Related informationManaging your development endpointFollow"
https://repost.aws/knowledge-center/glue-connect-development-endpoint-ssh
How do I copy and restore an Amazon Redshift Serverless snapshot to a different AWS account?
I want to copy and restore an Amazon Redshift Serverless snapshot from one AWS account to another AWS account. How can I do that?
"I want to copy and restore an Amazon Redshift Serverless snapshot from one AWS account to another AWS account. How can I do that?ResolutionPerform a copy and restore from one AWS account to another using the Amazon Redshift console or AWS Command Line Interface (AWS CLI).Before you begin, consider the following:You can restore a snapshot to an Amazon Redshift Serverless namespace only if it's in Available status and is associated with an available workgroup.Restoring an Amazon Redshift Serverless namespace from a snapshot replaces all of the namespace's databases with databases in the snapshot.During the Restore, Amazon Redshift Serverless will be unavailable.Copy and restore using the Amazon Redshift consoleConvert a recovery point of Amazon Redshift Serverless to a snapshot in the source accountOpen the Amazon Redshift console.In the navigation pane, choose Redshift serverless, and then choose Data backup.Under Recovery points, choose the Creation time of the recovery point that you want to convert to a snapshot.Choose Create snapshot from recovery point.Enter a Snapshot identifier and retention period.Choose Create.Share the snapshot in the source account with another AWS accountOpen the Amazon Redshift console.In the navigation pane, choose Redshift serverless, and then choose Data backup.Choose the snapshot you created previously.Choose Actions, Manage access.Choose Add AWS account located under Provide access to serverless accounts and enter an AWS account ID (destination).Choose Save changes.Restore the snapshot to an Amazon Redshift Serverless namespace in the destination accountOpen the Amazon Redshift console.In the navigation pane, choose Redshift serverless, and then choose Data backup.Choose the snapshot shared to the AWS account ID to restore. You can restore only one snapshot at a time.Choose Actions, Restore to serverless namespace.Choose an available namespace to restore to. You can restore to only namespaces with an Available status.Choose Restore.Copy and restore using the AWS CLINote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Convert a recovery point of Amazon Redshift Serverless to a snapshot in the source account1.    Use the list-recovery-points command for a list of snapshots created after a start-time. Run the following command, and replace namespacename with your namespace, us-west-2 with your AWS Region, and starttime with your recovery point start time in UTC:aws redshift-serverless list-recovery-points --namespace-name <namespacename> --region <region name> --start-time <starttime>In the following example, the Redshift Serverless cluster is in the US West (Oregon) Region with a namespace of default and a start time of 2022-09-06T07:10 UTC:aws redshift-serverless list-recovery-points --namespace-name default --region us-west-2 --start-time 2022-09-06T07:102.    Use the convert-recovery-point-to-snapshot command to create a snapshot and retention period. 
Run the following command, and replace recoveryPointId with the recovery point ID output from step 1, days with the number of days to retain the snapshot, snapshot name with your snapshot's name, and us-west-2 with your AWS Region:aws redshift-serverless convert-recovery-point-to-snapshot --recovery-point-id <recoveryPointId> --retention-period <days> --snapshot-name <snapshot name> --region <region name>In the following example, the snapshot is named snapshot01 with a retention period of three days, and is in the US West (Oregon) Region**.**aws redshift-serverless convert-recovery-point-to-snapshot --recovery-point-id 72acee50-34df-45f6-865f-46aa178ada82 --retention-period 3 --snapshot-name snapshot01 --region us-west-23.    Use the get-snapshot command to confirm the snapshot was created. Run the following command, and replace snapshot name with the name of your snapshot:aws redshift-serverless get-snapshot --snapshot-name <snapshot name>Share the snapshot in the source account with another AWS accountUse the put-resource-policy command to provide another AWS account access to the snapshot. Run the following command, and replace destination account ID with the destination AWS account ID and snapshot arn with the snapshot's ARN:aws redshift-serverless put-resource-policy --policy "{\"Version\": \"2012-10-17\", \"Statement\" : [{ \"Sid\": \"AllowUserRestoreFromSnapshot\", \"Principal\":{\"AWS\": [\”<destination account ID>\”]}, \"Action\": [\"redshift-serverless:RestoreFromSnapshot\"] , \"Effect\": \"Allow\" }]}" --resource-arn <snapshot arn>In the following example, access is provided to the snapshot ARN in account number 123456789012:aws redshift-serverless put-resource-policy --policy "{\"Version\": \"2012-10-17\", \"Statement\" : [{ \"Sid\": \"AllowUserRestoreFromSnapshot\", \"Principal\":{\"AWS\": [\"123456789012\"]}, \"Action\": [\"redshift-serverless:RestoreFromSnapshot\"] , \"Effect\": \"Allow\" }]}" --resource-arn arn:aws:redshift-serverless:us-west-2:112233445566:snapshot/4978ca91-24ba-4196-91ad-9d372f72b0feRestore the snapshot to an Amazon Redshift Serverless namespace in the destination account1.    Use the list-snapshots command to list the snapshot in your AWS Region. Run the following command, and replace us-west-2 with your AWS Region:aws redshift-serverless list-snapshots --region us-west-22.    Use the restore-from-snapshot command to restore the snapshot to an Amazon Redshift Serverless. Run the following command, and replace snapshot name with the name of your snapshot, workgroup name with the name of your workgroup, and snapshot arn with the snapshot ARN from the preceding command:aws redshift-serverless restore-from-snapshot --namespace-name <namespace name > --workgroup-name <workgroup name> --snapshot-arn <snapshot arn>In the following example, the account ID 112233445566 has an Amazon Redshift Serverless in Available state with a namespace name of restore and a workgroup name of restore:aws redshift-serverless restore-from-snapshot --namespace-name restore --workgroup-name restore --snapshot-arn arn:aws:redshift-serverless:us-west-2:112233445566:snapshot/4978ca91-24ba-4196-91ad-9d372f72b0feFollow"
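Because the restore succeeds only when the target namespace and workgroup are in Available status, it can help to confirm both in the destination account before running restore-from-snapshot. This is a minimal sketch; the namespace and workgroup names are placeholders:
aws redshift-serverless get-namespace --namespace-name restore
aws redshift-serverless get-workgroup --workgroup-name restore
Check the status fields in both responses before starting the restore.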
https://repost.aws/knowledge-center/redshift-copy-restore-different-account
Why doesn't the CurrItems metric in my Memcached cluster decrease when the keys expire?
Why doesn't the CurrItems metric in my Memcached cluster decrease when the keys expire?
"Why doesn't the CurrItems metric in my Memcached cluster decrease when the keys expire?ResolutionThis is an expected behavior. Memcached uses lazy expiration to remove keys when the time to live (TTL) expires. This means that the key isn't removed from the node even though it's expired. However, when someone tries to access an expired key, Memcached checks the key, identifies that the key is expired, and then removes it from memory.If there are no free chunks or free pages left in the appropriate slab class to accommodate new keys, then Memcached removes the expired keys, or uses a least recently used (LRU) algorithm to evict keys if it can't find an already expired key. The CurrItems metric decreases after the keys are removed from memory.The lru_crawler thread, which is an optional background thread, also clears expired keys from memory. The lru_crawler is a conservative task, and, even if enabled, has limited action over expired keys. Therefore, even with lru_crawler enabled, it might take some time for the CurrItems metric and keyspace memory to show a decrease in usage.Use TTL to make a key available for a particular amount of time. When the key becomes invalid (expires), it's eventually removed. Keys can't be retrieved after their expiration time.Related informationMemcached specific parametersFollow"
https://repost.aws/knowledge-center/currlitems-metric-memcached-keys-expire
How do I delay Auto Scaling termination of unhealthy Amazon EC2 instances so I can troubleshoot them?
"My Amazon Elastic Compute Cloud (Amazon EC2) instance was marked as unhealthy and moved to the "Auto Scaling Terminating" state. Then, my Amazon EC2 instance terminated before I could determine the cause of the problem."
"My Amazon Elastic Compute Cloud (Amazon EC2) instance was marked as unhealthy and moved to the "Auto Scaling Terminating" state. Then, my Amazon EC2 instance terminated before I could determine the cause of the problem.Short descriptionAdd a lifecycle hook to your AWS Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state. In this state, you can access instances before they're terminated, and then troubleshoot why they were marked as unhealthy.By default, an instance remains in the Terminating:Wait state for 3600 seconds (1 hour). To increase this time, use the heartbeat-timeoutparameter in the put-lifecycle-hook API call. The maximum time that you can keep an instance in the Terminating:Wait state is 48 hours or 100 times the heartbeat timeout, whichever is smaller.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent version of the AWS CLI.Use the following steps to configure a lifecycle hook using the AWS CLI. Then, create the necessary Amazon Simple Notification Service (Amazon SNS) topic and AWS Identity and Access Management (IAM) permissions.Or, you can configure a lifecycle hook using the AWS Management Console. Then, refer to the following to manage Amazon SNS topics and IAM permissions in the console:Creating a role to delegate permissions to an AWS serviceTutorial: Creating an Amazon SNS topicCreate an Amazon SNS topic1.    Create a topic where AWS Auto Scaling can send lifecycle notifications. The following example calls the create-topic command to create the ASNotifications topic:$ aws sns create-topic --name ASNotificationsAn Amazon Resource Name (ARN) similar to the following is returned:"TopicArn": "arn:aws:sns:us-west-2:123456789012:ASNotifications"2.    Create a subscription to the topic. You must have a subscription to receive the LifecycleActionToken that's required to extend the heartbeat timeout of the pending state or complete the lifecycle action. The following example uses the subscribe command to create a subscription that uses the email protocol (SMTP) with the endpoint email address [email protected].$ aws sns subscribe --topic-arn arn:aws:sns:us-west-2:123456789012:ASNotifications --protocol email --notification-endpoint [email protected] IAM permissionsIAM permissions are configured by creating an IAM role that grants the AWS Auto Scaling service permissions to send to the SNS topic. To complete this task, create a text file that contains the appropriate policy. Then, reference the file in the create-role command.1.    Use a text editor (such as vi) to create the text file:$ sudo vi assume-role.txt2.    Paste the following in the text file, and then save the file.{  "Version": "2012-10-17",  "Statement": [{      "Sid": "",      "Effect": "Allow",      "Principal": {        "Service": "autoscaling.amazonaws.com"      },      "Action": "sts:AssumeRole"    }  ]}3.    Use the aws iam create-role command to create the IAM role AS-Lifecycle-Hook-Role from the policy saved to assume-role.txt:$ aws iam create-role --role-name AS-Lifecycle-Hook-Role --assume-role-policy-document file://assume-role.txtThe output contains the ARN for the role. Be sure to save both the ARN of the IAM role and the SNS topic.4.    Add permissions to the role to allow AWS Auto Scaling to send SNS notifications when a lifecycle hook event occurs. 
The following example uses the attach-role-policy command to attach the managed policy AutoScalingNotificationAccessRole to the IAM role AS-Lifecycle-Hook-Role:$ aws iam attach-role-policy --role-name AS-Lifecycle-Hook-Role --policy-arn arn:aws:iam::aws:policy/service-role/AutoScalingNotificationAccessRoleThis managed policy grants the following permissions:{  "Version": "2012-10-17",  "Statement": [{      "Effect": "Allow",      "Resource": "*",      "Action": [        "sqs:SendMessage",        "sqs:GetQueueUrl",        "sns:Publish"      ]    }  ]}Important: The AWS managed policy AutoScalingNotificationAccessRole allows the AWS Auto Scaling service to make calls to all SNS topics and queues. To restrict AWS Auto Scaling's access to only specific SNS topics or queues, use the following sample policy.{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Resource": "arn:aws:sns:us-west-2:123456789012:ASNotifications", "Action": [ "sqs:SendMessage", "sqs:GetQueueUrl", "sns:Publish" ] } ]}Configure the lifecycle hookNext, use the put-lifecycle-hook command to configure the lifecycle hook:aws autoscaling put-lifecycle-hook --lifecycle-hook-name AStroublshoot --auto-scaling-group-name MyASGroup        --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING        --notification-target-arn arn:aws:sns:us-west-2:123456789012:ASNotifications        --role-arn arn:aws:iam::123456789012:role/AS-Lifecycle-Hook-Role Be sure to substitute your own AWS Auto Scaling group name, SNS target ARN, and IAM role ARN before running this command.This command:Names the lifecycle hook (AStroubleshoot)Identifies the AWS Auto Scaling group that is associated with the lifecycle hook (MyASGroup)Configures the hook for the instance termination lifecycle stage (EC2_INSTANCE_TERMINATING)Specifies the SNS topic's ARN (arn:aws:sns:us-west-2:123456789012:ASNotifications)Specifies the IAM role's ARN (arn:aws:iam::123456789012:role/AS-Lifecycle-Hook-Role)Test the lifecycle hookTo test the lifecycle hook, choose an instance and then useterminate-instance-in-auto-scaling group to terminate the instance. This forces AWS Auto Scaling to terminate the instance, similar to when the instance becomes unhealthy. After the instance moves to theTerminating:Wait state, you can keep your instance in this state usingrecord-lifecycle-action-heartbeat. Or, allow the termination to complete usingcomplete-lifecycle-action.aws autoscaling complete-lifecycle-action --lifecycle-hook-name my-lifecycle-hook        --auto-scaling-group-name MyASGroup --lifecycle-action-result CONTINUE        --instance-id i-0e7380909ffaab747Related informationAmazon EC2 Auto Scaling lifecycle hooksFollow"
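To confirm that the hook is attached and to keep an instance in the Terminating:Wait state longer while you troubleshoot, you can use commands similar to the following. This is a sketch that reuses the example group, hook, and instance names from this article; adjust them to the names you actually created:
aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name MyASGroup

aws autoscaling record-lifecycle-action-heartbeat \
    --auto-scaling-group-name MyASGroup \
    --lifecycle-hook-name AStroubleshoot \
    --instance-id i-0e7380909ffaab747
Each heartbeat restarts the heartbeat timeout, up to the maximum described above (48 hours or 100 times the heartbeat timeout, whichever is smaller).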
https://repost.aws/knowledge-center/auto-scaling-delay-termination
How do I host multiple public websites using IIS on the same EC2 Windows Server instance?
I want to use Internet Information Services (IIS) to host multiple websites on a single Amazon Elastic Compute Cloud (Amazon EC2) Windows Server instance.
"I want to use Internet Information Services (IIS) to host multiple websites on a single Amazon Elastic Compute Cloud (Amazon EC2) Windows Server instance.ResolutionIf you have multiple public websites, then you can host them in IIS on the same EC2 Windows Server instance. IIS uses bindings to differentiate between websites. Bindings are a combination of the protocol type, IP address, port, and hostname. To avoid IP address and port conflicts, you must add a hostname.To configure the IIS server on your instance to host multiple websites, follow these steps:Set up your websites on the EC2 Windows Server instance. Make sure to install and configure IIS (from the Microsoft website) on the instance.Use the Amazon EC2 console to get the IP addresses of the EC2 Windows Server instance.Use Remote Desktop Protocol to connect to the instance.Open IIS Manager. In the Connections pane, choose the site that you want to add a hostname to.In the Actions pane, complete the following steps:Choose Bindings.Choose Edit.In the hostname field, enter a name.Choose OK.Update the hosts file with the hostname to access the website locally from the IIS server:Open a text editor with the Run as Administrator option, and then open the hosts file in the C:\Windows\System32\drivers\etc directory.Enter the private IP address of the EC2 instance and the hostname of your website, and then save the file.Repeat steps 4 through 6 for each additional website.To access your websites, use the private IP address locally from the hosts file on the same EC2 Windows Server instance. For public access, use your DNS host provider or Amazon Route 53 to add the public IP address in your public hosted zone.Follow"
https://repost.aws/knowledge-center/multiple-public-website-ec2-iis
How can I troubleshoot why Amazon Redshift switched to one-by-one mode because a bulk operation failed during an AWS DMS task?
I have an AWS Database Migration Service (AWS DMS) task that is migrating data to Amazon Redshift as the target. But my task changed to one-by-one mode because a bulk operation failed. How do I troubleshoot this issue?
"I have an AWS Database Migration Service (AWS DMS) task that is migrating data to Amazon Redshift as the target. But my task changed to one-by-one mode because a bulk operation failed. How do I troubleshoot this issue?Short descriptionDuring the change data capture (CDC) phase of a task, AWS DMS uses a single thread to read change data from the source and apply changes to the target. Because the thread can handle only a certain number of transactions at a time, depending on the rate of changes on the source, sometimes the thread can't keep the target in sync with the source. This happens more often when Amazon Redshift is the target for AWS DMS because commits are expensive in OLAP engines. By default, AWS DMS uses Batch Apply mode to process the changes in batches. When using Batch Apply mode, AWS DMS does the following:Collects changes from a batch that is controlled by Batch Apply settings.Creates a net changes table that contains all the changes from the batch to the target instance.Uses an algorithm that groups transactions and applies them in bulk to the target.When a migration task that is replicating data to Amazon Redshift has an issue applying a batch, AWS DMS doesn't fail the whole batch. AWS DMS breaks the batch down and switches to one-by-one mode to apply transactions. When AWS DMS encounters the transaction that caused the batch to fail, AWS DMS logs the transaction to the awsdms_apply_exceptions table on the Amazon Redshift target. Then, AWS DMS applies the other transactions in the batch one by one until all transactions from that batch are applied onto the target. Finally, AWS DMS switches back to Batch Apply mode for a new batch and continues to use Batch Apply unless another batch fails.ResolutionYou can see whether your batch failed and AWS DMS used one-by-one mode by checking the AWS DMS task log. Each time a batch fails and AWS DMS switches to one-by-one mode, you see the following log entry:Messages[TARGET_APPLY ]I: Bulk apply operation failed. Trying to execute bulk statements in 'one-by-one' mode (bulk_apply.c:2175)When this happens, AWS DMS applies transactions sequentially onto the target until AWS DMS encounters an issue with any transaction in the batch. If AWS DMS encounters an issue, it logs the transaction and you see a log entry similar to the following:Messages[TARGET_APPLY ]W: Source changes that would have had no impact were not applied to the target database. Refer to the 'awsdms_apply_exceptions' table for details. (endpointshell.c:5984)Note: By default, the awsdms_apply_exceptions table is created in the public schema, unless you specify a control table schema in the task settings of your AWS DMS task.After AWS DMS logs the transaction, it completes the application of all the transactions from that batch. Then, AWS DMS switches to Batch Apply again, and you see a message in the log that is similar to the following:Messages[TARGET_APPLY ]I: Switch back to bulk apply mode (bulk_apply.c:4751)Amazon Redshift is an OLAP data warehouse that is optimized to run complex analytic queries. However, Amazon Redshift performance can be affected when running transactional changes from an OLTP database. As a result, when Batch Apply fails and AWS DMS switches to one-by-one mode, you can see that target latency increases for the duration of the time that AWS DMS runs transactions in a one-by-one mode. After AWS DMS switches back to Bulk Apply, the target latency reduces.To resolve this issue, connect to the Amazon Redshift target. 
Then, get the output from the awsdms_apply_exceptions table to identify the query that caused the batch to fail:select * from public.awsdms_apply_exceptions order by 4 desc;After you find the query that caused the batch to fail (for example, update conflicts or constraint violations), then you can resolve the conflicts to prevent tasks from moving to one-by-one mode.Related informationDebugging Your AWS DMS Migrations: What to Do When Things Go WrongUsing an Amazon Redshift Database as a Target for AWS Database Migration ServiceFollow"
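If the task logs are also published to Amazon CloudWatch Logs, you can count how often the task fell back to one-by-one mode before you start tuning. The following is a minimal sketch; the log group and task ID are placeholders, and the assumed dms-task-<task ID> log stream naming may differ in your account, so replace both with the values from your own replication instance and task.
#!/bin/bash
# Placeholder names - replace with your replication instance's log group and your task ID.
LOG_GROUP="dms-tasks-my-replication-instance"
TASK_ID="ABC123EXAMPLETASKID"

# Count how many times the task switched to one-by-one mode in the last 24 hours.
aws logs filter-log-events \
  --log-group-name "$LOG_GROUP" \
  --log-stream-names "dms-task-$TASK_ID" \
  --start-time $(( ( $(date +%s) - 86400 ) * 1000 )) \
  --filter-pattern '"Bulk apply operation failed"' \
  --query 'length(events)'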
https://repost.aws/knowledge-center/dms-task-redshift-bulk-operation
How do I troubleshoot RBAC issues with Amazon EKS?
"When I use my Amazon Elastic Kubernetes Service (Amazon EKS) cluster, I want to troubleshoot errors, such as Access Denied, Unauthorized, and Forbidden."
"When I use my Amazon Elastic Kubernetes Service (Amazon EKS) cluster, I want to troubleshoot errors, such as Access Denied, Unauthorized, and Forbidden.Short descriptionAWS Identity and Access Management (IAM) provides authentication to the cluster, and relies on native Kubernetes role-based access control (RBAC) for authorization. When an IAM user or role creates an Amazon EKS cluster, the IAM entity is added to the Kubernetes RBAC authorization table with system:masters permissions.To add users with administrator access to an Amazon EKS cluster, complete the following steps:Allow the required IAM console permissions for the associated IAM users so that users can perform the necessary cluster operations.Update the aws-auth ConfigMap to provide the additional IAM users with the cluster role and role bindings. For more information, see Add IAM users or roles to your Amazon EKS cluster.Note: The aws-auth ConfigMap doesn't support wildcards. It's a best practice to use eksctl to edit the ConfigMap. Malformed entries can lead to lockout.Run the following kubectl auth can-i command to verify that RBAC permissions are set correctly:kubectl auth can-i list secrets --namespace dev --as daveWhen you run the kubectl command, the authentication mechanism completes the following main steps:Kubectl reads context configuration from ~/.kube/config.The AWS Command Line Interface (AWS CLI) command aws eks get-token is run to get credentials, as defined in .kube/config.The k8s api request is sent and signed with the preceding token.Note: You can't modify the 15-minute expiration on the token that's obtained through aws eks get-token.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Authentication issuesError: "The cluster is inaccessible due to the cluster creator IAM user being deleted"If you receive the preceding error, then you must re-create the cluster creator IAM user with the same name as the cluster. To do this, find information on the cluster admin and cluster creator.If you created the cluster within the last 90 days, then you can search AWS CloudTrail for CreateCluster API calls. Cluster creator permissions are identical to system:masters permissions. If you have other users with system:masters permissions, then you aren't dependent on the cluster creator. If you previously authenticated with the Amazon EKS cluster, then you can review previous authenticator logs in the Amazon CloudWatch log group. Use the following CloudWatch Logs Insights query to check the cluster admin user and role details:fields @timestamp, @message| sort @timestamp desc| filter @logStream like /authenticator/| filter @message like "system:masters"To re-create the cluster creator IAM user and role, run the following commands:Important: Make sure to check all AWS CLI commands and replace all instances of example strings with your required values. For example, replace EXAMPLE-USER with your username.aws iam create-user --user-name <EXAMPLE-USER>aws iam create-role --role-name <EXAMPLE-ROLE>Error: "Could not be assumed because it does not exist or the trusted entity is not correct or an error occurred when calling the AssumeRole operation"If you receive the preceding error, then verify that the trust policy correctly grants assume permissions to the user. 
For more information, see IAM tutorial: Delegate access across AWS accounts using IAM roles.To identify local users that deploy the Amazon EKS cluster by default, run the following command:kubectl get clusterroles -l kubernetes.io/bootstrapping=rbac-defaultsTurn off anonymous access for API actions. Anonymous users have subject set to name: system:unauthenticated. To identify anonymous users, run the following command:kubectl get clusterrolebindings.rbac.authorization.k8s.io -o json | jq '.items[] | select(.subjects[]?.name=="system:unauthenticated")'For more information, see the Amazon EKS best practices guides.Authorization issuesError: "Couldn't get current server API group list"To troubleshoot the preceding error, see Unauthorized or access denied (kubectl).Error: "You must be logged in to the server (Unauthorized)"To troubleshoot the preceding error, see How do I resolve the error "You must be logged in to the server (Unauthorized)"?Error: "You must be logged in to the server (the server has asked for the client to provide credentials)"The preceding error occurs when you use an IAM entity to make API calls and didn't correctly map the IAM entity. You must map the IAM entity to an Amazon EKS role in the cluster's aws-auth ConfigMap. For more information, see Turning on IAM user and role access to your cluster.Error: "Can't describe cluster control plane: AccessDeniedException"The preceding error occurs when you update the kubeconfig with a user and a role who don't have permission to perform the eks:DescribeCluster action.Error: "Current user or role does not have access to Kubernetes objects on this EKS cluster"For information on the preceding error, see Resolve the Kubernetes object access error in Amazon EKS.Error: "Changing the cluster creator IAM to another user/role"After you create a cluster, you can't change the cluster creator IAM to another user because you can't configure a cluster creator IAM.Network issuesError: "Unable to connect to the server: dial tcp 172.xx.xx.xx.xx:443: i/o timeout"If you receive this error, then confirm that the security groups are permitting traffic from the sender's source IP address.Error: "Unable to connect to the server: x509: certificate is valid for *.example.com , example.com , not https://xxx.gr7.us-east-1.eks.amazonaws.com"If you receive this error, then verify that the proxy settings are correct.KUBECONFIG issuesError: "The connection to the server localhost:8080 was refused"The preceding error occurs when the kubeconfig file is missing. The kubeconfig file is located in ~/.kube/config, and kubectl requires the file. This file contains cluster credentials that are required to connect to the cluster API server. If kubectl can't find this file, then it tries to connect to the default address (localhost:8080).Error: "Kubernetes configuration file is group-readable"The preceding error occurs when the permissions for the kubeconfig file are incorrect. To resolve this issue, run the following command:chmod o-r ~/.kube/configchmod g-r ~/.kube/configAWS IAM Identity Center (successor to AWS Single Sign-On) configuration issuesImportant: Remove /aws-reserved/sso.amazonaws.com/ from the rolearn URL. If you don't, then you can't authorize as a valid user.Assign user groups to an IAM permissions policy1.    Open the IAM Identity Center console.2.    Choose the AWS Accounts tab, and then choose AWS account to assign users.3.    Choose Assign Users.4.    Search for the user groups, and then choose Next: Permission sets.5.    
Choose Create new permission set, and then choose Create a custom permission set.6.    Give the permission set a name, and then select the Create a custom permissions policy check box.7.    Copy the following permissions policy, and then paste it into the window:{"Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "*" } ]}8.    Choose Create.Configure role permissions with Kubernetes RBACTo configure role permissions with Kubernetes RBAC, use the following manifest to create an RBAC role:apiVersion: rbac.authorization.k8s.io/v1 kind: Rolemetadata: name: <example name of the RBAC group> namespace: <example name of namespace> rules: - apiGroups: [""] resources: ["services", "endpoints", "pods", "deployments", "ingress"] verbs: ["get", "list", "watch"]Modify the IAM authenticator ConfigMap1.    Run the following command to capture the IAM role of the IAM Identity Center user group that contains the desired user's data:aws iam list-roles | grep Arn2.    Run the following command to modify the authenticator ConfigMap:kubectl edit configmap aws-auth --namespace kube-system3.    Add the following attributes to the ConfigMap under mapRoles:- rolearn: <example arn of the AWS SSO IAM role> username: <example preferred username> groups: - <example name of the RBAC group>Important: Remove /aws-reserved/sso.amazonaws.com/ from the rolearn URL. If you don't, then you can't authorize as a valid user.4.    Update your kubeconfig file by running the following command:aws eks update-kubeconfig --name <example eks cluster> --region <example region>5.    Log in with the IAM Identity Center user name, and then run the kubectl commands.Related informationDefault roles and role bindingsControlling access to EKS clustersFollow"
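As a lighter-weight alternative to editing the aws-auth ConfigMap by hand, the sketch below uses eksctl to add the identity mapping and then verifies the resulting RBAC permissions. The cluster name, Region, role ARN, and username are example values, and the role is mapped to system:masters only for illustration; scope the group down for real users.
#!/bin/bash
# Example values - replace the cluster name, Region, role ARN, and username.
CLUSTER="my-eks-cluster"
REGION="us-east-1"
ROLE_ARN="arn:aws:iam::111122223333:role/EKSAdminRole"   # no /aws-reserved/sso.amazonaws.com/ path

# Map the IAM role to a Kubernetes username and group in the aws-auth ConfigMap.
eksctl create iamidentitymapping \
  --cluster "$CLUSTER" \
  --region "$REGION" \
  --arn "$ROLE_ARN" \
  --username admin-user \
  --group system:masters

# Confirm the mapping, then check what the mapped username is allowed to do.
eksctl get iamidentitymapping --cluster "$CLUSTER" --region "$REGION"
kubectl auth can-i list pods --all-namespaces --as admin-user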
https://repost.aws/knowledge-center/eks-troubleshoot-rbac-errors
How can I resolve API throttling or "Rate exceeded" errors for IAM and AWS STS?
"My application is getting an error message similar to the following:"Throttling: Rate exceeded, status code: 400,""
"My application is getting an error message similar to the following:"Throttling: Rate exceeded, status code: 400,"Short descriptionAPI calls from the AWS Management Console, the AWS Command Line Interface (AWS CLI), and applications contribute to a maximum rate limit for your AWS account.Note: The AWS service rate limits can't be increased.ResolutionFollow these best practices to avoid throttling errors.Implement exponential backoff in your application's code. Exponential backoff allows longer waits each time an API call to AWS is throttled. Depending on the application, the maximum number of delays and the maximum number of retries can vary.Note: AWS SDK implements automatic retry logic and exponential backoff algorithms.Some applications can implement caching to lower the rate of API calls. For example, if your application calls the API call AssumeRole for a cross-account workflow, the temporary credentials you received can be stored and reused for multiple cross-account calls. This means that you don't need to make a new AssumeRole call for each cross-account API call made.If your application is calling AssumeRole and caching the credentials, you can check the maximum session duration of the role's temporary credentials. Lengthening the duration of the temporary credentials makes sure that you don't need to call AssumeRole as often.Spread your API calls over a longer period of time instead of calling the APIs all at once. For example, applications that have a daily job calling SimulatePrincipalPolicy or GenerateServiceLastAccessedDetails to audit permissions for AWS Identity and Access Management (IAM) users and roles. You can stagger the API calls instead of running them at the same time.For applications that dynamically change IAM policy permissions using API calls like CreatePolicyVersion, consider another method. For example, you can use session policies during IAM role assumption.For AWS Security Token Service (AWS STS) throttling errors, consider using Regional STS endpoints instead of sending all AWS STS calls to the global endpoint. Each endpoint has a separate throttling limit. Using Regional AWS STS endpoints can provide applications a faster response time on the AWS STS API calls.If you're not sure which IAM user or role in your AWS account is making a large amount of API calls, use AWS CloudTrail to view Event history.You can also use Amazon Athena to run SQL queries and filter CloudTrail logs. For instructions, see How can I find which API call is causing the "Rate exceeded" error?Because AWS accounts have separate throttling limits, consider spreading the workloads across multiple accounts using AWS Organizations. Creating new AWS accounts are at no additional cost and Organizations provides consolidated billing. Using Service control policies (SCPs) allow you to control the maximum permissions of IAM users and roles across an AWS account. For more information, see Manage accounts through AWS Organizations and How do I get started with AWS Organizations?Related informationHow do I automatically create tables in Amazon Athena to search through AWS CloudTrail logs?Follow"
https://repost.aws/knowledge-center/iam-sts-throttling-error
How can I troubleshoot issues with my weighted routing policy in Route 53?
I get unexpected results when testing the DNS resolution for a weighted routing policy in Amazon Route 53.
"I get unexpected results when testing the DNS resolution for a weighted routing policy in Amazon Route 53.Short descriptionSuppose that you created a text (TXT) record with the name "weighted.awsexampledomain.com". The record has a Time to Live (TTL) of 300 seconds, and has weights configured as follows:NameTypeTTLValuesWeightHealth check statusweighted.awsexampledomain.com.TXT300"Record with Weight 0"Weight=0Health check associatedweighted.awsexampledomain.com.TXT300"Record with Weight 20"Weight=20Health check associatedweighted.awsexampledomain.com.TXT300"Record with Weight 50"Weight=50Health check associatedweighted.awsexampledomain.com.TXT300"Record with Weight 70"Weight=70Health check associatedThis configuration is referenced in the following examples.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.Test your weighted routing policy to identify the issueSend multiple (over 10,000) queries to test your weighted routing policy. Test the DNS resolution from multiple locations or directly query the authoritative name servers to understand the policy. Use the following scripts to send multiple DNS queries for your domain name.Send DNS queries using the recursive resolver:#!/bin/bashfor i in {1..10000}dodomain=$(dig <domain-name> <type> @RecursiveResolver_IP +short)echo -e "$domain" >> RecursiveResolver_results.txtdoneSend DNS queries directly to the authoritative name servers:#!/bin/bashfor i in {1..10000}dodomain=$(dig <domain-name> <type> @AuthoritativeNameserver_IP +short)echo -e "$domain" >> AuthoritativeNameServer_results.txtdoneExample output using the awk tool in the AWS CLI:$ for i in {1..10000}; do domain=$(dig weighted.awsexampledomain.com. TXT @172.16.173.64 +short); echo -e "$domain" >> RecursiveResolver_results.txt; done$ awk ' " " ' RecursiveResolver_results.txt | sort | uniq -c1344 "Record with Weight 20"3780 "Record with Weight 50"4876 "Record with Weight 70"Use your test results to troubleshoot your specific issueIssue: Endpoint resources of the weighted records aren't receiving the expected traffic ratio.Route 53 sends traffic to resources based on the weight assigned to the record as a proportion of the total weight for all records. Intermediate DNS resolvers cache the DNS responses for the duration of the record TTL. Clients are directed to only specific endpoints for the duration due to the cached response.ExampleYou query against the caching DNS resolver 192.168.1.2:$ for i in {1..10000}; do domain=$(dig weighted.awsexampledomain.com. TXT @192.168.1.2 +short); echo -e "$domain" >> CachingResolver_results.txt; done$ awk ' " " ' CachingResolver_results.txt | sort | uniq -c3561 "Record with Weight 20"1256 "Record with Weight 50"5183 "Record with Weight 70"Notice that the preceding results aren't as expected because the cache at the recursive DNS resolver.Issue: Some weighted records aren't returned.If you associate health checks to a resource record set, then Route 53 responds with the record only if the associated health check is successful. For more information, see How Amazon Route 53 determines whether a health check is healthy.If an RRSet in a policy doesn't have an attached health check, then it's always considered healthy. It's also included in the possible responses to DNS queries. Records that fail health checks aren't returned. 
Check the health check configuration, and be sure that it's reported as healthy.If you use "Evaluate Target Health" with the resource record set, then Route 53 relies on the health check reported by the end resource. For more information, see Why is my alias record pointing to an Application Load Balancer marked as unhealthy when I’m using “Evaluate Target Health"?ExampleSome health checks are failing:NameTypeTTLValuesWeightHealth check statusweighted.awsexampledomain.com.TXT300"Record with Weight 0"Weight=0Health Check Successweighted.awsexampledomain.com.TXT300"Record with Weight 20"Weight=20Health Check Successweighted.awsexampledomain.com.TXT300"Record with Weight 50"Weight=50Health Check Failweighted.awsexampledomain.com.TXT300"Record with Weight 70"Weight=70Health Check Success$ for i in {1..10000}; do domain=$(dig weighted.awsexampledomain.com. TXT @192.168.1.2 +short); echo -e "$domain" >> HealthCheck_results.txt; done$ awk ' " " ' HealthCheck_results.txt | sort | uniq -c3602 "Record with Weight 20"6398 "Record with Weight 70"In this example, the "Record with Weight 50" isn't returned by Route 53 because its health check is failing.Issue: All weighted records are unhealthy.Even if none of the records in a group of records are healthy, Route 53 must still provide a response to the DNS queries. However, there's no basis for choosing one record over another. In this case, Route 53 considers all the records in the group to be healthy. One record is selected based on the routing policy and the values that you specify for each record.ExampleNameTypeTTLValuesWeightHealth check statusweighted.awsexampledomain.com.TXT300"Record with Weight 0"Weight=0Health Check Failweighted.awsexampledomain.com.TXT300"Record with Weight 20"Weight=20Health Check Failweighted.awsexampledomain.com.TXT300"Record with Weight 50"Weight=50Health Check Failweighted.awsexampledomain.com.TXT300"Record with Weight 70"Weight=70Health Check Fail$ for i in {1..10000}; do domain=$(dig weighted.awsexampledomain.com. TXT @205.251.194.16 +short); echo -e "$domain" >> All_UnHealthy_results.txt; done$ awk ' " " ' All_UnHealthy_results.txt | sort | uniq -c1446 "Record with Weight 20"3554 "Record with Weight 50"5000 "Record with Weight 70"In this example, Route 53 considered all records healthy (Fail Open). Route 53 responded to the DNS requests based on the configured proportions. "Record with Weight 0" isn't returned because its weight is zero.Note: If you set nonzero weights to some records and zero weights to others, then health checks work the same as when all records have nonzero weights. However, there are a few exceptions:Route 53 initially considers only the healthy nonzero weighted records, if any.If all nonzero records are unhealthy, then Route 53 considers the healthy zero weighted records.ExampleNameTypeTTLValuesWeightHealth Check Statusweighted.awsexampledomain.com.TXT300"Record with Weight 0"Weight=0Health Check Passweighted.awsexampledomain.com.TXT300"Record with Weight 20"Weight=20Health Check Passweighted.awsexampledomain.com.TXT300"Record with Weight 50"Weight=50Health Check Failweighted.awsexampledomain.com.TXT300"Record with Weight 70"Weight=70Health Check Fail$ for i in {1..10000}; do domain=$(dig weighted.awsexampledomain.com. TXT @192.168.1.2 +short); echo -e "$domain" >> HealthCheck_results.txt; done$ awk ' " " ' HealthCheck_results.txt | sort | uniq -c10000 "Record with Weight 20"In this example, Route 53 doesn't consider the record with weight 0. 
Unless all weighted records are unhealthy, Route 53 doesn't return the zero-weighted records.If you set an equal weight for all records in a group, then traffic is routed to all healthy resources with equal probability. If you set "Weight" to zero for all records in a group, then traffic is routed to all healthy resources with equal probability.Related informationChoosing a routing policyHow Amazon Route 53 chooses records when health checking is configuredFollow"
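If you prefer not to send thousands of live DNS queries, you can approximate the same sampling against the Route 53 API with test-dns-answer. The sketch below is an illustration only; the hosted zone ID and record name are placeholders, and 100 iterations is an arbitrary sample size.
#!/bin/bash
# Example values - replace the hosted zone ID and record name with your own.
ZONE_ID="Z0123456789EXAMPLE"
RECORD="weighted.awsexampledomain.com"

# Sample the answer that Route 53 would return for the weighted TXT record.
for i in {1..100}
do
  aws route53 test-dns-answer \
    --hosted-zone-id "$ZONE_ID" \
    --record-name "$RECORD" \
    --record-type TXT \
    --query 'RecordData[0]' \
    --output text
done | sort | uniq -c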
https://repost.aws/knowledge-center/route-53-fix-dns-weighted-routing-issue
What are the best practices to follow when migrating a source RDS MySQL database to a target RDS MySQL database using AWS DMS?
I have a MySQL database that I want to migrate to an Amazon Relational Database Service (Amazon RDS) for MySQL database using AWS Database Migration Service (AWS DMS). What best practices can I use to optimize migration between a source MySQL database and a target MySQL database?
"I have a MySQL database that I want to migrate to an Amazon Relational Database Service (Amazon RDS) for MySQL database using AWS Database Migration Service (AWS DMS). What best practices can I use to optimize migration between a source MySQL database and a target MySQL database?Short descriptionUsing AWS DMS, you can migrate data from a source data store to a target data store. These two data stores are called endpoints. You can migrate between source and target endpoints that use the same database engine, such as from one MySQL database to another MySQL database.Although AWS DMS creates the target schema objects, it creates only the minimum objects that it needs for data to be effectively migrated from the source. So, AWS DMS creates tables, primary keys, and in some cases unique indexes, but it doesn't create objects such as secondary indexes, non-primary key constraints, and data defaults. For more information on what AWS DMS migrates, see High-level view of AWS DMS.Pre-create the tables on the target database before migrationTo preserve the default data definitions, pre-create the tables on the target database before migration. Use one of these approaches, depending on the type of migration you're performing:For homogeneous migrations such as MySQL to MySQL, use the native database engine utilities such as mysqldump to take an export of the table definitions. Then, import these table definitions into the target without the data. After the table definitions are created, use the AWS DMS task with target table preparation mode set to TRUNCATE_BEFORE_LOAD to load the data.For migrations across different database engines, use the AWS Schema Conversion Tool (AWS SCT). You can also use this method for homogeneous databases. The AWS SCT connects to your source and target databases, and then converts the existing database schema from one database engine to another. You can use the AWS SCT to pre-create tables on the target database with the default data definitions intact. Then, use the AWS DMS task with the target table preparation mode set to TRUNCATE_BEFORE_LOAD to load the data. For more information, see Converting your schema by using AWS SCT.ResolutionFollow the best practices for MySQL to MySQL AWS DMS migrationUse these best practices when you migrate data from a MySQL source database to a MySQL target database.Turn off backups and database-specific logs (such as bin, general, and audit) on the target database during migration. If you need to, you can turn them on again to troubleshoot issues.Turn off triggers, other cron jobs, and event schedulers on the target database during migration.Avoid using Multi-AZ on the target Amazon RDS database when performing AWS DMS migration.Avoid applying any other external client traffic to the target database when performing migration.Provision your AWS DMS replication instance, source, and the target databases with the required CPU, memory, storage, and IOPS to avoid resource contention during migration.Configure the source database with the prerequisites for using AWS DMS change data capture (CDC) before you start migration.Use optimized LOB settings such as limited LOB and inline LOB for migration.If the source database contains many tables with a heavy workload, split the tables among multiple tasks. Split the tables based on their size on the source database, the application traffics pattern, and the presence of LOB columns. 
If the table has many LOB (TEXT or JSON) columns with high write traffic on the source, then create a separate task for the table. Transactional consistency is maintained within a task, so it's important that tables in separate tasks don't participate in common transactions.Use the parallel full load mechanism for heavy source tables to reduce migration time. For more information, see Using parallel load for selected tables, views, and collections.Turn off foreign key constraints on the target table during full load migration.Add secondary indexes on the target database before starting the CDC phase of replication.The Amazon RDS primary user doesn't have drop and recreate privileges on the default schema tables. So, avoid migrating default database or schema tables from the source using AWS DMS.Review the documentation on Using AWS DMS to migrate data from MySQL to MySQL for information on which types of data AWS DMS can migrate successfully.Test your workload using the default transactional CDC apply before you use the batch apply CDC method. For more information, see How can I use the DMS batch apply feature to improve CDC replication performance?Test the migration using the same production data on any other QA/DEV database environment before starting the production migration. Make sure to use the same AWS DMS configuration when you do the production migration.For more information, see Improving the performance of an AWS DMS migration.Use the recommended configuration and methods on your source and target databasesPre-create the table DDL on the target MySQL/PostgreSQL databases manually. Then, create an AWS DMS task with target table preparation mode set to DO_NOTHING or TRUNCATE_BEFORE_LOAD to migrate only the data.Run this command to create a dump without data from the source MySQL database:mysqldump -h yourhostnameorIP -u root -p --no-data --skip-triggers --single-transaction database_name > schema.sqlThis command dumps the DDL structure from the source without any data.Next, run this command to restore the DDL structure on the target:mysql -u user -p -h yourhostnameorIP database_name < schema.sqlOr, you can allow AWS DMS to create the tables on the target using the DROP_AND_CREATE target table preparation mode. Then, skip to step 3 to alter the table and add missing objects such as secondary indexes before you resume the task for the CDC phase.Note: By default, AWS DMS creates the table on the target with only the primary key or the unique key. It doesn't migrate any other objects to the target MySQL database.During the full load, AWS DMS doesn't identify the foreign key relational tables. It loads the data randomly, so the table load can fail if the target database has a foreign key check turned on. Use this extra connection attribute (ECA) on the target MySQL endpoint to turn off foreign key checks for this AWS DMS session:initstmt=SET FOREIGN_KEY_CHECKS=0;For more information, see Extra connection attributes when using a MySQL-compatible database as a target for AWS DMS.In the JSON settings, set stop task before applying cached changes to true, and stop task after applying cached changes to true."FullLoadSettings": { "TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD", "CreatePkAfterFullLoad": false, "TransactionConsistencyTimeout": 600, "MaxFullLoadSubTasks": 8, "StopTaskCachedChangesNotApplied": true, <--- set this to true "StopTaskCachedChangesApplied": true, <--- set this to true "CommitRate": 50000}After full load is complete, and before it applies cached changes, the task stops. 
While the task is stopped, create primary key indexes and secondary indexes on the target.Next, resume the task because the task stops again after it applies cached changes. Then, verify the migrated data by using AWS DMS validation output or manual verification before resuming the task again for the CDC replication phase. By completing this step, you can identify any issues and address them before resuming for CDC replication.4.    In the Task full load settings, tune the commitRate setting to speed up the data extraction rate from the source. The default value is 10000, so tune this setting when you are migrating a large amount of data from the source table.CommitRate=50000Note: Changing commitRate to a higher value could affect performance, so make sure to monitor and have enough memory in the replication instance.5.    Add this ECA on the target endpoint to specify the maximum size (in KB) of any .csv file used to transfer data to the target MySQL. The default value is 32,768 KB (32 MB), and valid values range from 1–1,048,576 KB (up to 1.1 GB).maxFileSize=250000;Note: When you use a target instance such as MySQL, Aurora or MariaDB for full load, use this option to allow AWS DMS to create a .csv file in the background to load the data into the target instance. Use a value between 32 MB and 1 GB. But, also consider how much your target instance can handle. If you have multiple tasks loading 1 GB of .csv file, this can cause overhead on your target instance. Make sure that you have an instance with high computing power at the target.6.    Use the limited LOB or inline LOB settings for better performance.Limited LOB mode: When using Limited LOB mode, you specify the maximum size of LOB column data. This allows AWS DMS to pre-allocate resources, and then apply LOB in bulk. If the size of the LOB columns exceeds the size that you specified in the task, AWS DMS truncates the data. AWS DMS then sends warnings to the AWS DMS log file. Using Limited LOB mode improves performance—however, before you run the task, you must identify the maximum LOB size of the data on the source. Then, specify the Max LOB size parameter. It's a best practice to make sure that you have enough memory allocated to the replication instance to handle the task.Inline LOB mode: When using Inline LOB mode, you can migrate LOBs without truncating data or slowing your task performance, by replicating both small and large LOBs. First, specify a value for the InlineLobMaxSize parameter, which is available only when Full LOB mode is set to true. The AWS DMS task transfers the small LOBs inline, which is more efficient. Then, AWS DMS migrates LOBs that are larger than the specified size in Full LOB mode by performing a lookup from the source table. Note, however, that Inline LOB mode works only during the full load phase.Note: You must set InlineLobMaxSize when you specify the task settings for your task.Run these queries to check the LOB size, and then populate the Max LOB size.List the tables that have LOB columns:select tab.table_name,count(*) as columnsfrom information_schema.tables as tabinner join information_schema.columns as colon col.table_schema = tab.table_schemaand col.table_name = tab.table_nameand col.data_type in ('blob', 'mediumblob', 'longblob','text', 'mediumtext', 'longtext')where tab.table_schema = 'your database name'. 
<---- enter database name here and tab.table_type = 'BASE TABLE' group by tab.table_name order by tab.table_name;Check the size of the LOB column:Select (max(length(<COL_NAME>))/(1024)) as "size in KB" from <TABLE_NAME>;Check the size of the LOB columns for all tables, and then populate the maximum size in Max LOB size (K).Using the Max LOB size (K) option with a value greater than 63 KB affects the performance of a full load configured to run in limited LOB mode. During a full load, AWS DMS allocates memory by multiplying the Max LOB size (K) value by the commit rate. Then, the product is multiplied by the number of LOB columns.When AWS DMS can't pre-allocate that memory, AWS DMS starts consuming SWAP memory. This affects the performance of a full load. So, if you experience performance issues when using limited LOB mode, decrease the commit rate until you achieve an acceptable level of performance. Or, consider using inline LOB mode for supported endpoints after first checking the LOB distribution for the table.For more information, see Setting LOB support for source databases in an AWS DMS task.Related informationMigrate from MySQL to Amazon RDS with AWS DMSDatabase migration step-by-step walkthroughsFollow"
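As a sketch of applying the endpoint settings mentioned above without the console, the following AWS CLI call sets the FOREIGN_KEY_CHECKS and maxFileSize extra connection attributes on an existing target endpoint. The endpoint ARN is a placeholder, and modifying the endpoint replaces its current extra connection attributes, so include any attributes that you already rely on.
#!/bin/bash
# Placeholder ARN - replace with your target endpoint's ARN.
TARGET_ENDPOINT_ARN="arn:aws:dms:us-east-1:111122223333:endpoint:EXAMPLEENDPOINT"

# Turn off foreign key checks for the DMS session and raise the .csv file size.
aws dms modify-endpoint \
  --endpoint-arn "$TARGET_ENDPOINT_ARN" \
  --extra-connection-attributes "initstmt=SET FOREIGN_KEY_CHECKS=0;maxFileSize=250000;"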
https://repost.aws/knowledge-center/dms-mysql-source-mysql-target
How can I set up alerts to see when an IAM access key is used?
I want to set up notifications to see when a specific AWS Identity and Access Management (IAM) credential or access key is used.
"I want to set up notifications to see when a specific AWS Identity and Access Management (IAM) credential or access key is used.ResolutionThere are no predefined rules to track and send notifications about the use of IAM credentials. However, you can use a custom rule that combines AWS CloudTrail and Amazon EventBridge. This lets you send a notification to an Amazon Simple Notification Service (Amazon SNS) topic or Amazon Simple Queue Service (Amazon SQS) queue.EventBridge rules are represented as JSON objects. A rule has a simple match or no match logic that applies to events. Based on the structure of events, you can build custom patterns for the specific criteria that you want to match.The following example rule tracks a single access key in the same AWS Region where the rule is configured.Important:You must have an active trail to send events for EventBridge to invoke notifications to your SNS topic or SQS queue.Your trail's management events must be configured as Write-only or All. Trail management events that are configured as Read-only don't invoke the EventBridge rule. For more information, see Read and write events, Events from AWS services, and CloudTrail supported services and integrations.1.    Open the EventBridge console, and then choose Rules.2.    Choose Create rule.3.    Enter a Name for the rule. You can optionally enter a Description. Then, choose Next.4.    For Event source, choose Other.5.    For Creation method, choose Custom pattern (JSON editor).6.    For Event pattern, enter a JSON template that's similar to the following:Note: You can modify this template to track notifications for a range of criteria, such as access keys, login types, or specific identities.{ "detail-type": [ "AWS API Call via CloudTrail" ], "detail": { "userIdentity": { "accessKeyId": [ "AKIAIOSFODNN7EXAMPLE" ] } }}7.    Choose Next.8.    For Target types, choose AWS service. Then, complete the following fields:For Select a target, select SNS topic or SQS queue.For Topic, select the topic that you want to respond to the event. Then, choose Next.9.    (Optional) Choose tags for your rule, if desired.10.    Choose Next to review your rule. Then, choose Create rule.Related informationAmazon EventBridge event patternsGetting credential reports for your AWS accountFollow"
https://repost.aws/knowledge-center/track-access-key-credential
Why am I receiving errors when deleting a static IP address using the Amazon Lightsail console?
I'm receiving the error "The specified Static IP address cannot be released from your account. A reverse DNS record may be associated with the Static IP address." when deleting a static IP address from my Amazon Lightsail console. How do I resolve this issue?
"I'm receiving the error "The specified Static IP address cannot be released from your account. A reverse DNS record may be associated with the Static IP address." when deleting a static IP address from my Amazon Lightsail console. How do I resolve this issue?Short descriptionYou might receive the specified Static IP address cannot be released error message when a reverse Domain Name System (rDNS) record was previously created for your static IP address and the rDNS entry isn't removed.To confirm if the static IP address has rDNS mapping to the domain, run the following command on the Terminal or Powershell:$ nslookup -type=ptr 8.8.8.8Note: Replace IP address 8.8.8.8 with your static IP address.If the rDNS record doesn't end with "compute.amazonaws.com" but instead ends with a custom record of your domain, then you must request for rDNS removal.ResolutionTo resolve the issue, submit a request to remove the rDNS record for the static IP address.Open the Request to remove email sending limitations form.Complete the form using the following information:Email Address: Your email address.Use Case Description: Provide your specific use case for requesting rDNS removal.Elastic IP address: Enter IP address here.Reverse DNS record: Enter "Remove rDNS record".Note: You have the option to enter up to 100 Elastic IP addresses.Choose Submit.Note: rDNS removal might take a few hours to propagate through the system.You receive an email with the Request ID after submitting the request form. It might take up to 48 hours to process your request. If your request is approved, then you receive an email to notify you that the rDNS record is removed. After you receive confirmation that the associated rDNS record is removed, you can delete the static IP address from the Lightsail console. If you don't receive an update within 48 hours after submitting the request, then reply to the initial email that you received.Follow"
https://repost.aws/knowledge-center/lightsail-release-static-ip
How can I audit deleted or missing objects from my Amazon S3 bucket?
There's an object or file that's missing from my Amazon Simple Storage Service (Amazon S3) bucket. Where can I find information about how the object or file was deleted? How can I prevent future accidental deletions?
"There's an object or file that's missing from my Amazon Simple Storage Service (Amazon S3) bucket. Where can I find information about how the object or file was deleted? How can I prevent future accidental deletions?ResolutionTo find out how an S3 object was deleted, you can review either server access logs or AWS CloudTrail logs.Note: Logging must be enabled on the bucket before the deletion event occurs. You receive logs only for events that occurred after logging was enabled.Server access logs track S3 operations performed manually or as part of a lifecycle configuration. To enable server access logging, see How do I enable server access logging for an S3 bucket? For more information on how to analyze server access logs, see How do I analyze my Amazon S3 server access logs using Athena?CloudTrail logs can track object-level data events in an S3 bucket, such as GetObject, DeleteObject, and PutObject. To enable CloudTrail logs for object-level events, see How do I enable object-level logging for an S3 bucket with AWS CloudTrail data events? For more information on how to find specific events, see I enabled object-level logging for my Amazon S3 bucket. Where can I find the events in the CloudTrail event history?Note: By default, CloudTrail records bucket-level events. To get logs for object-level operations like GetObject, DeleteObject, and PutObject, you must configure object-level logging. Object-level logging incurs additional charges, so be sure to review the pricing for CloudTrail data events.To prevent or mitigate future accidental deletions, consider the following features:Enable versioning to keep historical versions of an object.Enable Cross-Region Replication of objects.Enable MFA delete to require multi-factor authentication (MFA) when deleting an object version.Follow"
https://repost.aws/knowledge-center/s3-audit-deleted-missing-objects
How do I troubleshoot connectivity issues that I'm experiencing while using an Amazon VPC?
I'm unable to connect to my destination server using an Amazon Virtual Private Cloud (Amazon VPC) resource as the source.
"I'm unable to connect to my destination server using an Amazon Virtual Private Cloud (Amazon VPC) resource as the source.ResolutionTo troubleshoot VPC connectivity issues, use the AWSSupport-ConnectivityTroubleshooter automation document to check for common issues with:Security group configurationsNetwork access control list (network ACL) configurationsRoute table configurationsConfirm that you have the required permissions to run the automation documentThe following AWS Identity and Access Management (IAM) permissions are required to run the automation document:ec2:DescribeNetworkInterfacesec2:DescribeRouteTablesec2:DescribeSecurityGroupsec2:DescribeNetworkAclsec2:DescribeNatGatewaysec2:DescribeVpcPeeringConnectionsRun the automation documentFrom the AWS Management Console:Open the document in the AWS Systems Manager console. Be sure to open the document in the Region where your resources are located.For SourceIP, enter the private IP address of the VPC resource.For DestinationIP, enter the destination server IP address.For DestinationPort, enter the destination server port.Choose Execute.Monitor the progress of the document's execution. If the document status is Success, the automation didn't find any misconfigurations. If the document status is Failed, check the step that failed for details to resolve the issue.From the AWS Command Line Interface (AWS CLI):Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.For example, to diagnose connectivity issues from 172.31.2.7 to 172.31.2.8 on port 443 in an Amazon VPC:aws ssm start-automation-execution --document-name "AWSSupport-ConnectivityTroubleshooter" --parameters "SourceIP=172.31.2.7,DestinationIP=172.31.2.8,DestinationPort=443" --region <region>Follow"
https://repost.aws/knowledge-center/vpc-fix-connectivity-issues
How can I allocate memory to tasks in Amazon ECS?
I want to use Amazon Elastic Container Service (Amazon ECS) to allocate memory to tasks.
"I want to use Amazon Elastic Container Service (Amazon ECS) to allocate memory to tasks.Short descriptionIn Amazon ECS, memory can be defined at both the task level and at each container level. Memory defined at the task level is the hard limit of memory for the task. At the container level, there are two parameters for allocating memory to tasks: memoryReservation (a soft limit) and memory (a hard limit). For tasks that are hosted on Amazon EC2 instances, the task level memory field is optional, and any value can be used. If a task-level memory value is specified, then the container-level memory value is optional. The value for each parameter is subtracted from the available memory units of an Amazon ECS container instance when a task is running. The calculation is based on the soft limit, hard limit, or task-level memory of a task definition. For more information, see Cluster reservation.Note: The memory and memoryReservation parameters are set as the container definition parameters of an Amazon ECS task definition. If you specify a value for both container-level memory and memoryReservation, then memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used. For more information, see Memory.ResolutionBefore you start, check to see that you have an Amazon ECS cluster that includes an Amazon Elastic Compute Cloud (Amazon EC2) instance. For more information on creating a cluster, see Creating a cluster using the classic console. For more information on configuring the cluster and container instance, see Container Instance Memory Management.View the memory allocations of a container instanceOpen the Amazon ECS console.In the navigation pane, choose Clusters.Choose the cluster that you created.Choose the ECS Instances view, and then choose the container instance included with the cluster that you created from the Container Instance column.Note: The Details pane shows that the memory in the Available column is equal to the memory in the Registered column.For statistics on the resource usage of the instance, connect to the instance using SSH, and then run the docker stats command.Create a task definition with a soft limitFrom the Amazon ECS console, in the navigation pane, choose Task Definitions.Choose Create new Task Definition.For the launch type, choose EC2, and then choose Next step.For Task Definition Name, enter a name.In the Container Definitions section, choose Add container.For Container name, enter a name.For Image, enter nginx or the appropriate Docker image for your environment.For Memory Limits (MiB), choose Soft limit, and then enter 700.Choose Add, and then choose Create.Run the task definition with a soft limitFrom the Amazon ECS console, in the navigation pane, choose Clusters, and then choose the cluster that you created.Choose the Tasks view, and then choose Run new Task.Choose Launch Type as EC2, choose the task definition that you created with soft limit, and then choose Run Task. 
Note: Task Definition and Cluster can be prepopulated with the name of the task definition and cluster that you created earlier if you are using ECS for the first time.When the Last status column of the task with a soft limit shows as RUNNING, move to next step.Note: To update the status of the task to RUNNING, refresh the page.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that the memory in the Available column is less than the memory in the Registered column.For statistics on the resource usage of the instance, connect to the instance using SSH, and then run the docker stats command.Choose Clusters from the navigation pane, and then choose the cluster.Choose the Tasks view, select the task with a soft limit, and then choose Stop.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that the memory in the Available column is equal to the memory in the Registered column.Create a new revision of a task definition with a hard limitFrom the Amazon ECS console, in the navigation pane, choose Task Definitions.Select the task definition that you created with a soft limit, and then choose Create new revision.In the Container Definitions section, in the Container Name column, choose the container that you added for the task definition with a soft limit.For Memory Limits (MiB), choose Hard limit, and then enter 1000.Choose Update, and then choose Create.Run the revised task definition with a hard limitFrom the Amazon ECS console, in the navigation pane, choose Clusters, and then choose the cluster that you created.Choose the Tasks view, and then choose Run new Task.Choose Launch Type as EC2For Task Definition, choose the task definition with a hard limit that you created, and then choose Run Task.When the Last status column of the revised task with a hard limit shows as RUNNING, move to next step.Note: To update the status of the task to RUNNING, refresh the page.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that available memory in the Available column is less than the memory in the Registered column.For statistics on the resource usage of the instance, connect to the instance using SSH, and then run the docker stats command.Choose Clusters from the navigation pane, and then choose the cluster.Choose the Tasks view, select the task with a hard limit, and then choose Stop.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that the memory in the Available column is equal to the memory in the Registered column.Create a new revision of a task definition with both a soft limit and a hard limitFrom the Amazon ECS console, in the navigation pane, choose Task Definitions.Select the task definition that you created with a hard limit, and then choose Create new revision.In the Container Definitions section, in the Container Name column, choose the container that you added for the task definition with a hard limit.For Memory Limits (MiB), choose Soft limit, and then enter 700.Choose Add Hard limit, and then enter 1200.Choose Update, and then choose Create.Run the revised task definition with both a soft limit and a hard limitFrom the Amazon ECS console, in the navigation pane, choose Clusters, and then choose the cluster that you created.Choose the Tasks view, and then choose Run new Task.Choose Launch Type as 
EC2. For Task Definition, choose the task definition that you created with both a soft limit and a hard limit, and then choose Run Task.When the Last status column of the task shows as RUNNING, move to next step.Note: To update the status of the task to RUNNING, refresh the page.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that the memory in the Available column is less than the memory in the Registered column.For statistics on the resource usage of the instance, connect to the instance using SSH, and then run the docker stats command.Choose Clusters from the navigation pane, and then choose the cluster.Choose the Tasks view, select the task with both a soft limit and a hard limit, and then choose Stop.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that the memory in the Available column is equal to the memory in the Registered column.Create a new revision of a task definition with Task Level Memory limitFrom the Amazon ECS console, in the navigation pane, choose Task Definitions.Select the task definition that you created with a hard and soft limit, and then choose Create new revision.Under the Task Size section, for Task memory (MiB), enter 1000.In the Container Definitions section, in the Container Name column, choose the container that you added for the task definition with a hard and soft limit.For Memory Limits (MiB), remove Soft limit by choosing the x icon on the right side. Next, for Hard limit, remove the value 1200 by selecting it and deleting it.Choose Update, and then choose Create.Run the revised task definition with Task Level Memory limitFrom the Amazon ECS console, in the navigation pane, choose Clusters, and then choose the cluster that you created.Choose the Tasks view, and then choose Run new Task.Choose Launch Type as EC2. For Task Definition, choose the task definition that you created with Task Level memory, and then choose Run Task.When the Last status column of the task shows as RUNNING, move to next step.Note: To update the status of the task to RUNNING, refresh the page.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that the memory in the Available column is less than the memory in the Registered column.For statistics on the resource usage of the instance, connect to the instance using SSH, and then run the docker stats command.Note: You might observe that the memory limit shown by the docker stats command might not be 1000 MiB for the container. This is because the Task Level memory is managed by the ECS agent and not by the Docker daemon.Choose Clusters from the navigation pane, and then choose the cluster.Choose the Tasks view, select the task with Task level memory, and then choose Stop.Choose the ECS Instances view, and then choose the instance from the Container Instance column.Note: The Details pane shows that the memory in the Available column is equal to the memory in the Registered column.Related informationAmazon ECS CloudWatch metricsAmazon EC2 Instance typesLimit a container's access to memoryTask sizeFollow"
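If you prefer to skip the console, the same soft limit and hard limit combination from the walkthrough (700 MiB reservation, 1200 MiB hard limit) can be expressed in a task definition JSON and registered with the AWS CLI. This is a sketch only; the family and container names are examples.
#!/bin/bash
# Write an example task definition with a 700 MiB soft limit and a 1200 MiB hard limit.
cat > taskdef.json <<'EOF'
{
  "family": "memory-demo",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx",
      "essential": true,
      "memoryReservation": 700,
      "memory": 1200
    }
  ]
}
EOF

# Register the task definition so that it can be run on the cluster.
aws ecs register-task-definition --cli-input-json file://taskdef.json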
https://repost.aws/knowledge-center/allocate-ecs-memory-tasks
Why did my Spark job in Amazon EMR fail?
My Apache Spark job in Amazon EMR failed.
"My Apache Spark job in Amazon EMR failed.ResolutionApplication failuresERROR ShuffleBlockFetcherIterator: Failed to get block(s) from ip-192-168-14-250.us-east-2.compute.internal:7337org.apache.spark.network .client.ChunkFetchFailureException: Failure while fetching StreamChunkId[streamId=842490577174,chunkIndex=0]: java.lang.RuntimeException: Executor is not registered (appId=application_1622819239076_0367, execId=661)This issue might occur when the executor worker node in Amazon EMR is in an unhealthy state. When disk utilization for a worker node exceeds the 90% utilization threshold, the YARN NodeManager health service reports the node as UNHEALTHY. Unhealthy nodes are included in Amazon EMR deny lists. In addition, YARN containers aren't allocated to those nodes.To troubleshoot this issue, do the following:Review the resource manager logs from the EMR cluster master node for unhealthy worker nodes. For more information, see How do I resolve ExecutorLostFailure "Slave lost" errors in Spark on Amazon EMR? and review the High disk utilization section.Verify impacted node disk space utilization, review files consuming disk space, and perform data recovery to return nodes to a healthy state. For more information, see Why is the core node in my Amazon EMR cluster running out of disk space?ERROR [Executor task launch worker for task 631836] o.a.s.e.Executor:Exception in task 24.0 in stage 13028.0 (TID 631836) java.util.NoSuchElementException: None.getThis error occurs when there is a problem within the application code and the SparkContext initialization.Make sure that there aren't multiple SparkContext jobs active within the same session. According to the Java Virtual Machine (JVM), You can have one active SparkContext at a time. If you want to initialize another SparkContext, then you must stop the active job before creating a new one.Container killed on request. Exit code is 137This exception occurs when a task in a YARN container exceeds the physical memory allocated for that container. This commonly happens when you have shuffle partitions, inconsistent partition sizes, or a large number of executor cores.Review error details in the Spark driver logs to determine the cause of the error. For more information, see How can I access Spark driver logs on an Amazon EMR cluster?The following is an example error from the driver log:ERROR YarnScheduler: Lost executor 19 on ip-10-109-xx-xxx.aws.com : Container from a bad node: container_1658329343444_0018_01_000020 on host: ip-10-109-xx-xxx.aws.com . Exit status: 137.Diagnostics:Container killed on request. Exit code is 137Container exited with a non-zero exit code 137.Killed by external signalExecutor container 'container_1658329343444_0018_01_000020' was killed with exit code 137. To understand the root cause, you can analyze executor container log.# java.lang.OutOfMemoryError: Java heap space# -XX:OnOutOfMemoryError="kill -9 %p"# Executing /bin/sh -c "kill -9 23573"...The preceding error stack trace indicates that there isn't enough available memory on the executor to continue processing data. This error might happen in different job stages in both narrow and wide transformations.To resolve this issue, do one of the following:Increase executor memory.Note: Executor memory includes memory required for executing the tasks plus overhead memory. 
The sum of these must not be greater than the JVM size and the YARN maximum container size.Add more Spark partitions.Increase the number of shuffle partitions.Reduce the number of executor cores.For more information, see How do I resolve "Container killed on request. Exit code is 137" errors in Spark on Amazon EMR?Spark jobs are in a hung state and not completingSpark jobs might be stuck for multiple reasons. For example, if the Spark driver (application master) process is impacted or executor containers are lost.This commonly happens when you have high disk space utilization, or when you use Spot Instances for cluster nodes and the Spot Instance is terminated. For more information, see How do I resolve ExecutorLostFailure "Slave lost" errors in Spark on Amazon EMR?To troubleshoot this issue, do the following:Review Spark application master or driver logs for any exceptions.Validate the YARN node list for any unhealthy nodes. When disk utilization for a core node exceeds the utilization threshold, the YARN Node Manager health service reports the node as UNHEALTHY. Unhealthy nodes are included in Amazon EMR deny lists. In addition, YARN containers aren't allocated to those nodes.Monitor disk space utilization, and configure Amazon Elastic Block Store (Amazon EBS) volumes to keep utilization below 90% for EMR cluster worker nodes.WARN Executor: Issue communicating with driver in heartbeater org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10000 milliseconds]. This timeout is controlled by spark.executor.heartbeatIntervalSpark executors send heartbeat signals to the Spark driver at intervals specified by the spark.executor.heartbeatInterval property. During long garbage collection pauses, the executor might not send a heartbeat signal. The driver kills executors that fail to send a heartbeat signal for longer than the specified value.TimeoutExceptionTimeout exceptions occur when the executor is under memory constraints or facing out-of-memory (OOM) issues while processing data. This also impacts the garbage collection process, causing further delay.Use one of the following methods to resolve heartbeat timeout errors:Increase executor memory. Also, depending on the application process, repartition your data.Tune garbage collection.Increase the interval for spark.executor.heartbeatInterval.Specify a longer spark.network.timeout period.ExecutorLostFailure "Exit status: -100. Diagnostics: Container released on a *lost* nodeThis error occurs when a core or task node is terminated because of high disk space utilization. The error also occurs when a node becomes unresponsive due to prolonged high CPU utilization or low available memory. For troubleshooting steps, see How can I resolve "Exit status: -100. Diagnostics: Container released on a *lost* node" errors in Amazon EMR?Note: This error might also occur when you use Spot Instances for cluster nodes, and a Spot Instance is terminated. In this scenario, the EMR cluster provisions an On-Demand Instance to replace the terminated Spot Instance. This means that the application might recover on its own. For more information, see Spark enhancements for elasticity and resiliency on Amazon EMR.executor 38: java.sql.SQLException (Network error IOException: Connection timed out (Connection timed out)This issue is related to communication with the SQL database for establishing the socket connection when reading or writing data. 
Verify that the DB host can receive incoming connections on Port 1433 from your EMR cluster security groups.Also, review the maximum number of parallel database connections configured for the SQL DB and the memory allocation for the DB instance class. Database connections also consume memory. If utilization is high, then review the database configuration and the number of allowed connections. For more information, see Maximum number of database connections.Amazon S3 exceptionsHTTP 503 "Slow Down"HTTP 503 exceptions occur when you exceed the Amazon Simple Storage Service (Amazon S3) request rate for the prefix. A 503 exception doesn't always mean that a failure will occur. However, mitigating the problem can improve your application's performance.For more information, see Why does my Spark or Hive job on Amazon EMR fail with an HTTP 503 "Slow Down" AmazonS3Exception?HTTP 403 "Access Denied"HTTP 403 errors are caused by incorrect or not valid credentials, such as:Credentials or roles that are specified in your application code.The policy that's attached to the Amazon EC2 instance profile role.Amazon Virtual Private Cloud (Amazon VPC) endpoints for Amazon S3.S3 source and destination bucket policies.To resolve 403 errors, be sure that the relevant AWS Identity and Access Management (IAM) role or policy allows access to Amazon S3. For more information, see Why does my Amazon EMR application fail with an HTTP 403 "Access Denied" AmazonS3Exception?HTTP 404 "Not Found"HTTP 404 errors indicate that the application expected to find an object in S3, but at the time of the request, the object wasn't found. Common causes include:Incorrect S3 paths (for example, a mistyped path).The file was moved or deleted by a process outside of the application.An operation that caused eventual consistency problems, such as an overwrite.For more information, see Why does my Amazon EMR application fail with an HTTP 404 "Not Found" AmazonS3Exception?Follow"
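The following is a minimal sketch of how the Spark settings discussed above might be applied when submitting a job on the EMR primary node. The memory, core, partition, and timeout values, the bucket name, and the script path are placeholder assumptions, not recommendations; keep executor memory plus overhead within the YARN maximum container size.
# Hypothetical spark-submit showing the tuning options discussed above (all values are placeholders)
spark-submit \
  --deploy-mode cluster \
  --conf spark.executor.memory=8g \
  --conf spark.executor.memoryOverhead=1g \
  --conf spark.executor.cores=3 \
  --conf spark.sql.shuffle.partitions=400 \
  --conf spark.executor.heartbeatInterval=60s \
  --conf spark.network.timeout=600s \
  s3://amzn-s3-demo-bucket/jobs/example_spark_job.py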
https://repost.aws/knowledge-center/emr-troubleshoot-failed-spark-jobs
Why can't my Amazon EC2 instance connect to the internet using an internet gateway?
"My Amazon Elastic Compute Cloud (Amazon EC2) instance in a public subnet has a public IP address or an internet gateway, but can’t access the internet."
"My Amazon Elastic Compute Cloud (Amazon EC2) instance in a public subnet has a public IP address or an internet gateway, but can’t access the internet.Short descriptionTo troubleshoot why your Amazon EC2 can't access the internet, do the following:Verify that the EC2 instance meets all prerequisites.Verify that the instance has a public IP address.Verify that a firewall isn't blocking the access.ResolutionVerify that the instance meets all prerequisitesThe instance must meet the following conditions:The route table that's associated with your instance’s subnet has a default route to an internet gateway (0.0.0.0/0).The internet gateway that's associated with the route isn't deleted.The security group that's attached to the instance’s elastic network interface has rules allowing outbound internet traffic (0.0.0.0/0) for your ports and protocols.The network access control list (network ACL) that is associated with the instance's subnet has rules allowing both outbound and inbound traffic to the internet.Verify that the instance has a public IP addressIf the instance in a public subnet doesn't have a public IP address, then the instance isn't accessible outside the virtual private cloud (VPC) where it resides in. This is true even if the instance has an internet gateway.To allow the instance connectivity to the internet, allocate an Elastic IP address, and then associate this Elastic IP address with the instance.Verify that a firewall isn't blocking accessIf the instance meets the preceding conditions and internet connectivity issues persist, then you might have a local firewall running in the operating system. It's a best practice to use security groups instead of having a local firewall in the operating system. Be sure that disabling the local firewall doesn't impact your workload.# For Uncomplicated Firewallsudo ufw disable# For firewalldsudo systemctl disable firewalld --nowIf you must use a firewall, then the internet connectivity issues are usually related to the OUTPUT chain. You can allow outgoing traffic by running the following commands:sudo iptables -P OUTPUT ACCEPTsudo iptables -I OUTPUT 1 -j ACCEPTWindows Server:For Windows Server default firewalls, run the following command:netsh advfirewall firewall show rule name=allIf the preceding command indicates blocked traffic, then remove the old rule or add a new rule allowing traffic for that specific port. For more information, see the Microsoft documentation for Understanding Windows firewall with advanced security rules.Related informationConnect to the internet using an internet gatewayControl traffic to resources using security groupsWhy can't my Amazon EC2 instance in a private subnet connect to the internet using a NAT gateway?Follow"
https://repost.aws/knowledge-center/ec2-connect-internet-gateway
How can I activate a BAA agreement for my organization using AWS Artifact?
I need to activate or manage a Business Associate Addendum (BAA) agreement for a single AWS account or for all accounts in an organization in AWS Organizations.
"I need to activate or manage a Business Associate Addendum (BAA) agreement for AWS Organizations with an AWS account for an organization.Short descriptionAWS BAA agreements are required for some organizations that are subject to the Health Insurance Portability and Accountability Act (HIPAA). HIPAA compliance safeguards protected health information (PHI). You can use AWS Artifact to manage agreements for your AWS account or for all accounts in your organization if you use AWS Organizations. For more information, see Managing agreements in AWS Artifact.ResolutionFollow these instructions to download and accept the AWS BAA agreement with a single AWS account or multiple accounts in an organization for AWS Organizations.Managing an agreement for a single account in AWS ArtifactManaging an agreement for multiple accounts in AWS ArtifactIn the AWS Artifact console, the Agreement state is now Active.Related informationIntroducing the Self-Service Business Associate AddendumHow to use AWS Artifact to accept an agreement for your accountHIPAA eligible services referenceAWS Artifact FAQsFollow"
https://repost.aws/knowledge-center/activate-artifact-baa-agreement
How do I troubleshoot an AWS DMS task that is failing with error message "ERROR: canceling statement due to statement timeout"?
"I'm migrating data to or from my on-premises PostgreSQL database using AWS Database Migration Service (AWS DMS). The AWS DMS task runs normally for a while, and then the task fails with an error. How do I troubleshoot and resolve these errors?"
"I'm migrating data to or from my on-premises PostgreSQL database using AWS Database Migration Service (AWS DMS). The AWS DMS task runs normally for a while, and then the task fails with an error. How do I troubleshoot and resolve these errors?Short descriptionIf the PostgreSQL database is the source of your migration task, then AWS DMS gets data from the table during the full load phase. Then, AWS DMS reads from the write-ahead logs (WALs) that are kept by the replication slot during the change data capture (CDC) phase.If the PostgreSQL database is the target, then AWS DMS gets the data from the source and creates CSV files in the replication instance. Then, AWS DMS runs a COPY command to insert those records into the target during the full load phase.But, during the CDC phase, AWS DMS runs the exact DML statements from the source WAL logs in transactional apply mode. For batch apply mode, AWS DMS also creates CSV files during the CDC phase. Then, it runs a COPY command to insert the net changes to the target.When AWS DMS tries to either get data from source or put data in the target, it uses the default timeout setting of 60 seconds. If the source or target is heavily loaded or there are locks in the tables, then AWS DMS can't finish running those commands within 60 seconds. So, the task fails with an error that says "canceling statement due to statement timeout," and you see one of these entries in the log:Messages:"]E: RetCode: SQL_ERROR SqlState: 57014 NativeError: 1 Message: ERROR: canceling statement due to statement timeout; Error while executing the query [1022502] (ar_odbc_stmt.c:2738)""]E: test_decoding_create_replication_slot(...) - Unable to create slot 'lrcyli7hfxcwkair_00016402_8917165c_29f0_4070_97dd_26e78336e01b' (on execute(...) phase) [1022506] (postgres_test_decoding.c:392))"To troubleshoot and resolve these errors, follow these steps:Identify the cause of long run times for commands.Increase the timeout value and check the slot creation timeout value.Troubleshoot slot creation issues.ResolutionIdentify the cause of long run times for commandsTo find the command that failed to run during the timeout period, review the AWS DMS task log and the table statistics section of the task. You can also find this information in the PostgreSQL error log file if the parameter log_min_error_statement is set to ERROR or a lower severity. After identifying the command that failed, you can find the failed table names. 
See this example error message from the PostgreSQL error log:ERROR: canceling statement due to statement timeout STATEMENT: <The statement executed>"To find locks on the associated tables, run this command in the source or target (depending where the error is appearing):SELECT blocked_locks.pid AS blocked_pid, blocked_activity.usename AS blocked_user, blocking_locks.pid AS blocking_pid, blocking_activity.usename AS blocking_user, blocked_activity.query AS blocked_statement, blocking_activity.query AS current_statement_in_blocking_process FROM pg_catalog.pg_locks blocked_locks JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid JOIN pg_catalog.pg_locks blocking_locks ON blocking_locks.locktype = blocked_locks.locktype AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid AND blocking_locks.pid != blocked_locks.pid JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid WHERE NOT blocked_locks.GRANTED;If you find any PIDs that are blocked, stop or terminate the blocked PID by running this command:SELECT pg_terminate_backend(blocking_pid);Because dead rows, or "tuples," can increase SELECT time, check for large numbers of dead rows in the source tables by running this command:select * from pg_stat_user_tables where relname= 'table_name';Check to see if the failed target table has primary keys or unique indexes. If the table has no primary keys or unique indexes, then a full table scan is performed during the running of any UPDATE statement. This table scan can take a long time.Increase the timeout valueAWS DMS uses the executeTimeout extra connection attribute in both the source and target endpoints. The default value for executeTimeout is 60 seconds, so AWS DMS times out if a query takes longer than 60 seconds to run.If the error appears in Source_Unload or Source_Capture, then set the timeout value for executeTimeout in the source. If the error appears in Target_Load or Target_Apply, set the timeout value for executeTimeout in the target. Increase the timeout value setting by following these steps:1.    Open the AWS DMS console.2.    Choose Endpoints from the navigation pane.3.    Choose the PostgreSQL endpoint.4.    Choose Actions, and select Modify.5.    Expand the Endpoint-specific settings section.6.    In the field for Extra connection attributes, enter this value:executeTimeout=3600;7.    Choose Save.8.    From the Endpoints pane, choose the name of your PostgreSQL endpoint.9.    From the Connections section, the Status of the endpoint changes from Testing to Successful.You can increase (in milliseconds) the statement_timeout parameter in the PostgreSQL DB instance. The default value is 0, which turns off timeouts for any query. You can also increase the lock_timeout parameter. 
The default value is 0, which turns off timeouts for locks.Troubleshoot slot creation issuesIf the timeout occurred when you created the replication slot in the PostgreSQL database, then you see log entries similar to the following:Messages"]E: test_decoding_create_replication_slot(...) - Unable to create slot 'lrcyli7hfxcwkair_00016402_8917165c_29f0_4070_97dd_26e78336e01b' (on execute(...) phase) [1022506] (postgres_test_decoding.c:392)"You can increase this timeout by configuring the TransactionConsistencyTimeout parameter in the Task settings section. The default value is 600 seconds.PostgreSQL can't create the replication slot if there are any active locks in the database user tables. Check for locks by running this command:select * from pg_locks;Then, to test whether the error has been resolved, run this command to manually create the replication slot in the source PostgreSQL database:select xlog_position FROM pg_create_logical_replication_slot('<Slot name as per the task log>', 'test_decoding');If the command still can't create the slot, then you might need to work with a PostgreSQL DBA to identify the bottleneck and configure your database. If the command is successful, delete the slot that you just created as a test:select pg_drop_replication_slot(‘<slot name>');Finally, restart your migration task.Related informationPostgreSQL documentation for client connection defaultsExtra connection attributes when using PostgreSQL as a target for AWS DMSExtra connection attributes when using PostgreSQL as a DMS sourceUsing a PostgreSQL database as an AWS DMS sourceFollow"
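If you prefer the AWS CLI to the console steps above, setting executeTimeout might look like the following sketch; the endpoint and replication instance ARNs are placeholders.
# Set executeTimeout (in seconds) in the PostgreSQL endpoint's extra connection attributes
aws dms modify-endpoint \
  --endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:EXAMPLEENDPOINT \
  --extra-connection-attributes "executeTimeout=3600"
# Retest the endpoint connection from the replication instance
aws dms test-connection \
  --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:EXAMPLEINSTANCE \
  --endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:EXAMPLEENDPOINT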
https://repost.aws/knowledge-center/dms-error-canceling-statement-timeout
Why aren't the configuration parameters of my DHCP options set passing to instances in the VPC?
"I set custom configuration parameters to the Dynamic Host Configuration Protocol (DHCP) options set for my Amazon Virtual Private Cloud (Amazon VPC). However, those options aren't passing to the Amazon Elastic Compute Cloud (Amazon EC2) instances in the Amazon VPC. How can I fix this?"
"I set custom configuration parameters to the Dynamic Host Configuration Protocol (DHCP) options set for my Amazon Virtual Private Cloud (Amazon VPC). However, those options aren't passing to the Amazon Elastic Compute Cloud (Amazon EC2) instances in the Amazon VPC. How can I fix this?Short descriptionWhen you associate a new set of DHCP options with your Amazon VPC, all new and existing instances in that VPC use the new options. Your instances automatically pick up the changes, depending on how frequently they renew their DHCP leases. You can manually renew the address lease using the operating system on the instance. If you tried renewing the IP address and don't see the new DHCP parameters, then check these resources and configurations to diagnose and troubleshoot the issue:Parameters of the DHCP options setNetwork configuration and operating system kernel parameters of the Amazon EC2 instancesResolutionParameters of the DHCP options setOpen the Amazon VPC console.In the navigation pane, under Virtual Private Cloud, choose DHCP Options Sets.In the resource list, choose the DHCP options set with your custom configuration parameters.In the Summary view, verify that the Options follow the guidelines described in DHCP option sets in Amazon VPC.Domain name parameter:Some Linux operating systems accept multiple domain names separated by spaces. Other Linux operating systems treat the value as a single domain. Windows operating systems treat the value as a single domain. Therefore, it’s a best practice to specify only one domain name when the DHCP option set is associated with a VPC.Domain name servers parameter:You can enter either AmazonProvidedDNS or custom domain name servers. Using both might cause unexpected behavior. Therefore, it’s a best practice to use either AmazonProvidedDNS, or a custom domain name server. You can enter the IP addresses of up to four IPv4 domain name servers. Or, you can add up to three IPv4 domain name servers, an AmazonProvidedDNS, and four IPv6 domain name servers separated by commas. Although you can specify up to eight domain name servers, some operating systems might impose lower limits.Important: You can't modify the DHCP options set after you create the set. To modify your DHCP options set, create a new DHCP options set with the correct parameters and associate it with your VPC.Network configuration and operating system kernel parameters of the Amazon EC2 instancesSearch for any customization (PEERDNS, timeouts, and so on) in the network configuration files that were set either manually or by using scripts. For more information, see User data and shell scripts.Verify that the configuration files used by the operating system are mutable. If the files are immutable, then the instance doesn't receive the configuration parameters from the DHCP options set correctly. When using Linux, configuration files are typically made immutable with the chattr command.Check the operating systems of the Amazon EC2 instances and search for known bugs. If there's a bug related to the issue, then follow the guidelines provided by the operating system. Some helpful articles include: How to configure a domain suffix search list on the Domain Name System clients.Related informationDNS attributes for your VPCFollow"
https://repost.aws/knowledge-center/passing-dhcp-instances-vpc
How do I use logs to track activity in my Amazon Redshift database cluster?
How can I perform database auditing on my Amazon Redshift cluster?
"How can I perform database auditing on my Amazon Redshift cluster?Short descriptionAmazon Redshift provides three logging options:Audit logs: Stored in Amazon Simple Storage Service (Amazon S3) bucketsSTL tables: Stored on every node in the clusterAWS CloudTrail: Stored in Amazon S3 bucketsAudit logs and STL tables record database-level activities, such as which users logged in and when. These tables also record the SQL activities that these users performed and when. CloudTrail tracks activities performed at the service level.Note: To view logs using external tables, use Amazon Redshift Spectrum. For more information, see Analyze database audit logs for security and compliance using Amazon Redshift Spectrum.ResolutionAudit logs and STL tablesThe following table compares audit logs and STL tables. Choose the logging option that's appropriate for your use case.Audit logsSTL tablesMust be enabled. To enable audit logging, follow the steps for Configuring auditing using the console or Configuring logging by using the Amazon Redshift CLI and API.Automatically available on every node in the data warehouse cluster.Audit log files are stored indefinitely unless you define Amazon S3 lifecycle rules to archive or delete files automatically. For more information, see Object lifecycle management.Log history is stored for two to five days, depending on log usage and available disk space. To extend the retention period, use the Amazon Redshift system object persistence utility from AWS Labs on GitHub.Access to audit log files doesn't require access to the Amazon Redshift database.Access to STL tables requires access to the Amazon Redshift database.Reviewing logs stored in Amazon S3 doesn't require database computing resources.Running queries against STL tables requires database computing resources, just as when you run other queries.Using timestamps, you can correlate process IDs with database activities. Cluster restarts don't affect audit logs in Amazon S3.It's not always possible to correlate process IDs with database activities, because process IDs might be recycled when the cluster restarts.Stores information in the following log files:Connection logUser logUser activity log Note: You must enable the enable_user_activity_logging database parameter for the user activity log. For more information, see Enabling logging.| Stores information in multiple tables. Use the following tables to review information similar to what is stored in the S3 audit logs:SVL_STATEMENTTEXT: Provides a complete record of SQL commands that have been run on the system. Combines all of the rows in the STL_DDLTEXT, STL_QUERYTEXT, and STL_UTILITYTEXT tables.STL_CONNECTION_LOG: Logs authentication attempts, connections, or disconnections.|| Records all SQL statements in the user activity logs. | Queries that are run are logged in STL_QUERY. DDL statements are logged in STL_DDLTEXT. The text of non-SELECT SQL commands are logged in STL_UTILITYTEXT. || Statements are logged as soon as Amazon Redshift receives them. Files on Amazon S3 are updated in batch, and can take a few hours to appear. | Logs are generated after each SQL statement is run. || Records who performed what action and when that action happened, but not how long it took to perform the action. | Use the STARTTIME and ENDTIME columns to determine how long an activity took to complete. To determine which user performed an action, combine SVL_STATEMENTTEXT (userid) with PG_USER (usesysid). || You are charged for the storage that your logs use in Amazon S3. 
| There are no additional charges for STL table storage. || Leader-node only queries are recorded. | Leader-node only queries aren't recorded. |CloudTrailUsing information collected by CloudTrail, you can determine what requests were successfully made to AWS services, who made the request, and when the request was made. For more information, see Logging Amazon Redshift API calls with AWS CloudTrail.CloudTrail log files are stored indefinitely in Amazon S3, unless you define lifecycle rules to archive or delete files automatically. For more information, see Object Lifecycle Management.Related informationTuning query performanceUser activity log release noteFollow"
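If you choose the audit log option, turning it on from the AWS CLI might look like the following sketch; the cluster identifier, bucket name, and prefix are placeholders, and the bucket must already grant Amazon Redshift permission to write to it.
# Turn on audit logging for the cluster, delivering log files to Amazon S3
aws redshift enable-logging \
  --cluster-identifier example-redshift-cluster \
  --bucket-name amzn-s3-demo-bucket \
  --s3-key-prefix audit-logs/
# Confirm the logging status
aws redshift describe-logging-status \
  --cluster-identifier example-redshift-cluster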
https://repost.aws/knowledge-center/logs-redshift-database-cluster
How can I resolve an error that I received when using mysqldump on Amazon RDS for MySQL or MariaDB?
"I'm using an Amazon Relational Database Service (Amazon RDS) DB instance that is running MySQL or MariaDB. I'm using mysqldump to import data or export data, and I'm getting an error. How do I troubleshoot and resolve this error?"
"I'm using an Amazon Relational Database Service (Amazon RDS) DB instance that is running MySQL or MariaDB. I'm using mysqldump to import data or export data, and I'm getting an error. How do I troubleshoot and resolve this error?Short descriptionYou can receive the following errors when using mysqldump:Couldn't execute FLUSH TABLES WITH READ LOCK errorsMax_allowed_packet errorsSUPER privilege(s) and DEFINER errorsLost or aborted connection errorsResolutionCouldn't execute FLUSH TABLES WITH READ LOCK errorWhen using the --master-data option with mysqldump to export data, you might receive an error similar to the following:"mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'user'@'%' (using password: YES) (1045)"The --master-data option acquires a FLUSH TABLES WITH READ LOCK. This requires SUPER privileges that the Amazon RDS master user doesn't have, and Amazon RDS doesn't support GLOBAL READ LOCK. When MySQL runs a CHANGE MASTER TO statement to get log information, the binary log file name and position (coordinates) is recorded in the mysqldump file. For more information, see the MySQL Documentation for ER_ACCESS_DENIED_ERROR.To resolve this error, remove the --master-data option. When you remove this option, you aren't given an exact log position in the mysqldump. To work around this issue, either take the mysqldump when your application is stopped, or take the mysqldump from an Amazon RDS read replica. This allows you to get the exact log position by executing SHOW SLAVE STATUS because stopping the replica confirms that the binlog positions do not change. Follow these steps to create a mysqldump from an Amazon RDS MySQL read replica of this RDS DB instance.1.    Set a value for the binary log retention.2.    Stop the replication by running the following command on the read replica:CALL mysql.rds_stop_replication;3.    Take the mysqldump without --master-data=2 from the read replica.4.    Run SHOW SLAVE STATUS on the replica and capture the Master_Log_File and Exec_Master_Log_Pos.5.    If you use the replica for your application, start replication again by using the following stored procedure:CALL mysql.rds_start_replication;If you don't use the replica for your application, you can delete it.Max_allowed_packet errorsWhen using mysqldump to export data, you might receive an error similar to the following:"Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table tb\_name at row: XX"This error occurs when the mysqldump command requests a packet that is larger than the value of the max_allowed_packet parameter that is set for your RDS DB instance. For more information, see the MySQL Documentation for Packet too large.To resolve max_allowed_packet errors, increase the global value for max_allowed_packet, or configure the max_allowed_packet in the mysqldump for that session (rather than globally for the whole database). 
For example, you can modify the command similar to the following:$ mysqldump --max_allowed_packet=1G ......SUPER privilege(s) and DEFINER errorsWhen using mysqldump to import data into an RDS DB instance that is running MySQL or MariaDB, you might receive an error similar to the following:"ERROR 1227 (42000) at line XX: Access denied; you need (at least one of) the SUPER privilege(s) for this operation"This error indicates one or more of the following issues:Your target RDS DB instance has the binary log enabled (backup retention period > 0), and the mysqldump file contains an object, such as a trigger, view, function, or event. For more information, see How can I resolve ERROR 1227 when enabling replication or automated backups on an Amazon RDS MySQL instance?The mysqldump file that you imported tried to create an object with a DEFINER attribute user that doesn't exist in your RDS DB instance, or you tried to create an attribute user that doesn't have the required SUPER user privileges. For more information, see How can I resolve definer errors when importing data to my Amazon RDS for MySQL instance using mysqldump?The command for the line referenced in the error message requires SUPER privilege(s) that aren't provided in RDS DB instances.Lost or aborted connection errorsWhen using mysqldump to import data, you might receive an error similar to the following:"mysqldump: error 2013: lost connection to mysql server during query when dumping table"--or--"mysqldump: Aborted connection XXXXXX to db: 'db_name' user: 'master_user' host: 'XXXXXXX' (Got timeout writing communication packets)"For more information about the cause and resolution of this error, see How do I resolve the error "MySQL server has gone away" when connecting to my Amazon RDS MySQL DB instance?Related informationMySQL Documentation for mysqldumpHow do I enable functions, procedures, and triggers for my Amazon RDS MySQL DB instance?Follow"
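Putting the read replica workflow above together, a sketch of the commands might look like the following; the host name, user, and database name are placeholders, and --single-transaction assumes InnoDB tables.
# Stop replication on the read replica so the binary log coordinates stay fixed
mysql -h example-replica.abcdefghijkl.us-east-1.rds.amazonaws.com -u admin -p \
  -e "CALL mysql.rds_stop_replication;"
# Capture Master_Log_File and Exec_Master_Log_Pos for later use
mysql -h example-replica.abcdefghijkl.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW SLAVE STATUS\G"
# Take the dump without --master-data, raising max_allowed_packet for large rows
mysqldump -h example-replica.abcdefghijkl.us-east-1.rds.amazonaws.com -u admin -p \
  --single-transaction --max_allowed_packet=1G exampledb > exampledb.sql
# Resume replication if you still use the replica for your application
mysql -h example-replica.abcdefghijkl.us-east-1.rds.amazonaws.com -u admin -p \
  -e "CALL mysql.rds_start_replication;"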
https://repost.aws/knowledge-center/mysqldump-error-rds-mysql-mariadb
How can I troubleshoot unusual resource activity with my AWS account?
I want to determine which AWS Identity and Access Management (IAM) user created a resource and restrict access.
"I want to determine which AWS Identity and Access Management (IAM) user created a resource and restrict access.Short descriptionUnauthorized account activity, such as new services that are unexpectedly launched, can indicate that your AWS credentials are compromised. Someone with malicious intent can use your credentials to access your account and perform activities permitted by the policies. For more information, see What do I do if I notice unauthorized activity in my AWS account and the AWS Customer Agreement.ResolutionIdentify the compromised IAM user and access key. Then, disable them. Use AWS CloudTrail to search for API event history associated with the compromised IAM user.In the following example, an Amazon Elastic Compute Cloud (Amazon EC2) instance launched unexpectedly.Note: These instructions apply to long-term security credentials, not temporary security credentials. To disable temporary credentials, see Disabling permissions for temporary security credentials.Identify the Amazon EC2 instance IDOpen the Amazon EC2 console, and then choose Instances.Choose the EC2 instance, and then choose the Description tab.Copy the Instance ID.Locate the IAM access key ID and user name used to launch the instanceOpen the CloudTrail console, and then choose Event history.Select the Filter dropdown list, and then choose Resource name.In the Enter resource name field, paste the EC2 instance ID, and then choose enter on your device.Expand the Event name for RunInstances.Copy the AWS access key, and then note the User name.Disable the IAM user, create a backup IAM access key, and then disable the compromised access keyOpen the IAM console, and then paste the IAM access key ID in the Search IAM bar.Choose the user name, and then choose the Security credentials tab.In Console password, choose Manage.Note: If the AWS Management Console password is set to Disabled, you can skip this step.In Console access, choose Disable, and then choose Apply.Important: Users whose accounts are disabled can't access the AWS Management Console. However, if the user has active access keys, they can still access AWS services using API calls.Follow the instructions to rotate access keys for an IAM user without interrupting your applications (console).For the compromised IAM access key, choose Make inactive.Review CloudTrail event history for activity by the compromised access keyOpen the CloudTrail console, and then choose Event history from the navigation pane.Select the Filter dropdown menu, and then choose AWS access key filter.In the Enter AWS access key field, enter the compromised IAM access key ID.Expand the Event name for the API call RunInstances.Note: You can view event history for the last 90 days.You can also search CloudTrail event history to determine how a security group or resource was changed and API calls that run, stop, start, and terminate EC2 instances.For more information, see Viewing events with CloudTrail event history.Related informationHow can I use CloudTrail to review what API calls and actions have occurred in my AWS account?Security best practices in IAMBest practices for managing AWS access keysManaging IAM policiesAWS security audit guidelinesFollow"
https://repost.aws/knowledge-center/troubleshoot-iam-account-activity
How do I run a command on a new EC2 Windows instance at launch?
I want to run a custom script when I launch a new Amazon Elastic Compute Cloud (Amazon EC2) Windows instance.
"I want to run a custom script when I launch a new Amazon Elastic Compute Cloud (Amazon EC2) Windows instance.Short descriptionTo run a script that starts when the instance launches, add the script to user data. User data is processed by EC2Config (Windows Server 2012 R2 and earlier) or EC2Launch or EC2Launch V2 (Windows Server 2016 and later).ResolutionWhen you add the script to user data, you must enclose it within a special tag. This tag determines whether the commands run in a command prompt window or use Windows PowerShell. For more information, see Run commands on your Windows instance at launch.When you launch a new EC2 Windows instance, you can specify user data during configuration to run a custom script at startup.Important: If you launch an instance from a custom AMI, then you must shut down the original instance that created the AMI. To do this, use EC2Launch, EC2Launch V2, or EC2Config. From the EC2Launch, EC2Launch V2, or EC2Config settings, choose Shutdown with Sysprep or Shutdown without Sysprep.1.    Open the Amazon EC2 console, and then choose AMIs from the navigation pane.2.    Select an AMI, and then choose Launch.3.    Select an instance type, and then choose Next: Configure Instance Details.4.    For Advanced Details, enter your custom script in the User data text box. Be sure to use the correct tag.Note: To run user data scripts every time you reboot or restart the instance, add the following command:<persist>true</persist>5.    Complete the launch wizard to start the instance.For additional troubleshooting, EC2Launch, EC2Launch V2, and EC2Config log files contain the output from the standard output and standard error streams. You can find log files in the following locations:EC2Launch: C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.logEC2Launch V2: C:\ProgramData\Amazon\EC2Launch\log\agent.logEC2Config: C:\Program Files\Amazon\Ec2ConfigService\Logs\Ec2Config.logRelated informationHow do I run a command on an existing EC2 Windows instance when I reboot or start the instance?Follow"
https://repost.aws/knowledge-center/ec2-windows-run-command-new
How do I set the properties of a root volume for an Amazon EC2 instance that I created using an AWS CloudFormation template?
"I want to set the properties of the root volume for an Amazon Elastic Compute Cloud (Amazon EC2) instance that I created using an AWS CloudFormation template. For example, I want to change the size of the root volume, or enable encryption of the root volume."
"I want to set the properties of the root volume for an Amazon Elastic Compute Cloud (Amazon EC2) instance that I created using an AWS CloudFormation template. For example, I want to change the size of the root volume, or enable encryption of the root volume.Short descriptionTo set the properties of the root volume for an EC2 instance, you must identify the device name of the root volume for your Amazon Machine Image (AMI). Then, you can use the BlockDeviceMapping property of an AWS::EC2::Instance resource to set the properties of the root volume.Note: By default, the block devices specified in the block device mapping for the AMI are used by the EC2 instance. To override the AMI block device mapping, use instance block device mapping. For the root volume, you can override only the volume size, volume type, and DeleteOnTermination setting. After the instance is running, you can modify only the DeleteOnTermination setting of the attached Amazon Elastic Block Store (Amazon EBS) volumes.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionIdentify the device name of the root volume of your AMITo find the device name, complete the following steps in either the Amazon EC2 console or the AWS CLI.Using the Amazon EC2 console:1.    Open the Amazon EC2 console.2.    From the navigation bar, select the AWS Region where you want to launch your instances.3.    In the navigation pane, choose AMIs.4.    Use the Filter option to find your AMI, and then select your AMI.5.    On the Details tab, find the Root Device Name. This is where your root device name is listed.Using the AWS CLI command:In the AWS CLI, run the following command:aws ec2 describe-images \ --region us-east-1 \ --image-ids ami-1234567890AWSEXAMPLENote: Replace us-east-1 with your Region. Replace ami-1234567890AWSEXAMPLE with your AMI.The output of the preceding command returns the RootDeviceName field, which shows the device name of the root volume.Set the properties of the root volume for your EC2 instanceUse the BlockDeviceMapping property of an AWS::EC2::Instance resource to set the properties of the root volume for your EC2 instance.In the following JSON and YAML examples, AWS CloudFormation creates an EC2 instance with the size of the root volume set to 30 GB.In the JSON and YAML templates, the DeleteOnTermination property of the root volume is set to true. The DeviceName is set to /dev/xvda because the AMI specified is an Amazon Linux 2 AMI. Finally, the Encrypted property is set to true, which enables default encryption on the root volume.Important: In your template, replace /dev/xvda with the value of the Root Device Name property that you identified earlier. Then, modify the Ebs property in the template based on your requirements.JSON template:{ "AWSTemplateFormatVersion": "2010-09-09", "Description": "AWS CloudFormation Sample Template that shows how to increase the size of the root volume. **WARNING** This template creates an Amazon EC2 instance. You will be billed for the AWS resource used if you create a stack from this template.", "Parameters": { "KeyName": { "Type": "AWS::EC2::KeyPair::KeyName", "Description": "Name of an existing EC2 KeyPair to enable SSH access to the EC2 instance." }, "InstanceType": { "Description": "EC2 instance type", "Type": "String", "Default": "t2.micro", "ConstraintDescription": "Please choose a valid instance type." 
}, "AMIID": { "Description": "The Latest Amazon Linux 2 AMI taken from the public AWS Systems Manager Parameter Store", "Type": "AWS::SSM::Parameter::Value<String>", "Default": "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2" } }, "Resources": { "LinuxInstance": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId": { "Ref": "AMIID" }, "InstanceType": { "Ref": "InstanceType" }, "KeyName": { "Ref": "KeyName" }, "BlockDeviceMappings": [ { "DeviceName": "/dev/xvda", "Ebs": { "VolumeType": "gp2", "VolumeSize": "30", "DeleteOnTermination":"false", "Encrypted": "true" } } ] } } }}YAML template:AWSTemplateFormatVersion: 2010-09-09Description: >- AWS CloudFormation Sample Template that shows how to increase the size of the root volume. **WARNING** This template creates an Amazon EC2 instance. You will be billed for the AWS resource used if you create a stack from this template.Parameters: KeyName: Type: 'AWS::EC2::KeyPair::KeyName' Description: Name of an existing EC2 KeyPair to enable SSH access to the EC2 instance. InstanceType: Description: EC2 instance type Type: String Default: t2.micro ConstraintDescription: Please choose a valid instance type. AMIID: Description: >- The Latest Amazon Linux 2 AMI taken from the public Systems Manager Parameter Store Type: 'AWS::SSM::Parameter::Value<String>' Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2Resources: LinuxInstance: Type: 'AWS::EC2::Instance' Properties: ImageId: !Ref AMIID InstanceType: !Ref InstanceType KeyName: !Ref KeyName BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeType: gp2 VolumeSize: '30' DeleteOnTermination: 'false' Encrypted: 'true'Follow"
https://repost.aws/knowledge-center/cloudformation-root-volume-property
Why am I unable to create an Aurora read replica for my RDS for PostgreSQL instance?
"I'm trying to create an Amazon Aurora read replica for my Amazon Relational Database Service (Amazon RDS) for PostgreSQL instance from the RDS console. However, the option to do so is greyed out in the Amazon RDS console."
"I'm trying to create an Amazon Aurora read replica for my Amazon Relational Database Service (Amazon RDS) for PostgreSQL instance from the RDS console. However, the option to do so is greyed out in the Amazon RDS console.Short descriptionYou can use an Amazon RDS for PostgreSQL DB instance to create a new Amazon Aurora PostgreSQL-Compatible Edition DB cluster by using an Aurora read replica for the migration process. In this case, an Aurora cluster is created with a reader instance. This cluster, called the Replica cluster, acts as a read replica for the RDS for PostgreSQL instance. After creating the Replica cluster and migrating data to Aurora with a replication lag of zero, you can perform a cutover by promoting the Aurora read replica.To create an Aurora read replica for the migration process, see Creating an Aurora read replica.If the option to create an Aurora read replica using the Amazon RDS console isn't available, then be sure that your Aurora PostgreSQL version is compatible with the RDS for PostgreSQL version.ResolutionThe Aurora read replica option is available only for migrating within the same AWS Region and account. The option is available only if the Region offers a compatible version of Aurora PostgreSQL for your RDS for PostgreSQL DB instance. The Aurora PostgreSQL version must be the same as the RDS for PostgreSQL version or a higher minor version in the same major version family.For example, to use this technique to migrate an RDS for PostgreSQL 11.14 DB instance, the Region must offer either of the following:Aurora PostgreSQL version 11.14A higher minor version in the PostgreSQL version 11 familyTo see a list of available versions and the defaults for newly created DB instances, run the AWS Command Line Interface (AWS CLI) command describe-db-engine-versions:aws rds describe-db-engine-versions --engine postgres --query DBEngineVersions[*].EngineVersionaws rds describe-db-engine-versions --engine aurora-postgresql --query DBEngineVersions[*].EngineVersionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.You can compare the results of both commands to check if the engine version of Aurora PostgreSQL is compatible with that of RDS for PostgreSQL.Use the AWS CLI to create an Aurora read replica when the option to create a read replica with the RDS console isn't available to you.To create an Aurora read replica from a source RDS for PostgreSQL DB instance using the AWS CLI, run the create-db-cluster command. Be sure to set the parameter replication-source-identifier to the ARN of the source instance. Running this command creates a headless Aurora DB cluster. A headless cluster is cluster storage without any instances.After the cluster is created, use the create-db-instance command to create the primary instance for your DB cluster.aws rds create-db-cluster --db-cluster-identifier example-aurora-cluster --db-subnet-group-name example-db-subnet --vpc-security-group-ids example-security-group --engine aurora-postgresql --engine-version <same-as-your-rds-instance-version> --replication-source-identifier example-rds-source-instance-arn aws rds create-db-instance --db-cluster-identifier example-aurora-cluster --db-instance-class example-instance-class --db-instance-identifier example-instance identifier --engine aurora-postgresqlRelated informationMigrating data from an RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora read replicaFollow"
https://repost.aws/knowledge-center/rds-postgresql-aurora-read-replica
How can I grant directory access to specific EC2 instances using IAM and EFS access points?
I want to use AWS Identity and Access Management (IAM) and Amazon Elastic File System (Amazon EFS) to grant directory access to specific Amazon Elastic Compute Cloud (Amazon EC2) instances.
"I want to use AWS Identity and Access Management (IAM) and Amazon Elastic File System (Amazon EFS) to grant directory access to specific Amazon Elastic Compute Cloud Amazon (Amazon EC2) instances.Short descriptionUse the same file system for different instances and grant access to specific directories with Amazon EFS access points. To use access points and IAM to control access to your directories, do the following:1.    Create Amazon EFS access points for your file system.2.    Grant ClientMount and ClientWrite permissions to create IAM policies for each instance. Then, create roles for the policies.3.    Create an Amazon EFS policy for your file system.4.    Test your configuration.ResolutionRequirements1.    You must have two Amazon EC2 instances in the same VPC used for your file system—or, you must make sure that the instances can reach your file system. It’s a best practice to use the latest Amazon Linux 2 AMI. The security group attached to the instances must allow outbound access on port 2049 towards your Amazon EFS.2.    Before mounting your file system, add a rule to the mount target security group that allows inbound NFS access from the Amazon EC2 security group. For more information, see Using VPC security groups for Amazon EC2 instances and mount targets.Note: It’s a best practice to use the file system's DNS name as your mounting option. The file system DNS name automatically resolves to the mount target’s IP address in the Availability Zone of the connecting EC2 instance. Get the file system DNS name from the console. Or, if you have the file system ID, then construct the file system DNS name using the following example:file-system-id.efs.aws-region.amazonaws.com3.    To test the connection from your client system where you mount to the EFS FQDN or the short name, you must do one of the following:Perform telnetMount the EFS as an NFS type using the EFS FQDN where amazon-efs-utils aren't required.The following command mounts your file system into both EC2 instances:sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport file-system-id.efs.ap-southeast-2.amazonaws.com:/ /efsNote: Replace the example file system with your file system.If you test by mounting, then after it's mounted, run the umount command on your file system and on all of your EC2 instances. If umount isn't run, then I/O errors occur if you make a mistake when applying the EFS policy later on:sudo umount /efs4.    Using access points and IAM policies on Amazon EFS requires the amazon-efs-utils tool. Run the following command to install amazon-efs-utils:sudo yum install -y amazon-efs-utilsIf you run a distribution other than Amazon Linux 2 and need installation instructions for amazon-efs-utils, see Using the amazon-efs-utils tools. Also, verify the source code from a third-party such as GitHub.Create Amazon EFS access pointsImportant: This resolution assumes that you already created an Amazon EFS without access points or any IAM policy. Make sure that the security group attached to your file system allows inbound access on port 2049 for the EC2 instances in use.Note: Replace the example names (APP_team and DB_team) with your resource names.1.    Open the Amazon EFS console.2.    Choose File systems, select the file system that you want to manage access for, and then choose View details.3.    Choose Access points, and then choose Create access point.4.    Create the first access point by entering a Name and Root directory path. 
For example:Name: App_team_APRoot directory path: /App_teamNote: Amazon EFS creates an access point root directory only if the OwnUid, OwnGID, and permissions are specified for the directory. If you don't provide this information, then Amazon EFS doesn't create the root directory. If the root directory doesn't exist, then attempts to mount using the access point will fail with an error message such as "'b'mount.nfs4: access denied by server while mounting 127.0.0.1:/".If you don't specify the ownership and permissions for an access point root directory, then Amazon EFS won't create the root directory. All attempts to mount the access point will fail, with the error "access denied". For more information, see Enforce root directory access point in EFS.5.    Choose Create access point.6.    Repeat steps 2 through 4 to create a second access point. For example:Name: DB_team_APRoot directory path: /DB_team7.    Choose Create access point.8.    Note the access point IDs. An access point ID is similar to the following example:Access point ID: fsap-0093c87d798ae5ccbNote: You also can use an access point to enforce POSIX identities (user IDs, group IDs) for all file system requests made through the access point. To activate this feature, specify the user and group ID when you create the access point. For more information, see Enforcing a user identify using an access point.Create IAM policies and roles for your instances1.    Open the IAM console.2.    Create IAM policies for each instance. The following policies grant ClientMount and ClientWrite permissions. You can use these policies as a reference. First, replace the value shown in Resource in these examples with your resource's ARN. Then, replace the file system ID and access point ID with the correct values. For example:App_team_policy{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite" ], "Resource": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:file-system/fs-8ce001b4", "Condition": { "ForAnyValue:StringEquals": { "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:access-point/fsap-0093c87d798ae5ccb" } } } ]}DB_team_policy{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite" ], "Resource": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:file-system/fs-8ce001b4", "Condition": { "ForAnyValue:StringEquals": { "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:access-point/fsap-054969ebbe52a6121" } } } ]}3.    Choose Roles, and then choose Create role.4.    Choose EC2 as your use case, and then choose Next: Permissions.5.    Select one of the policies that you just created, and then choose Next: Tags.6.    Choose Next: Review.7.    Enter a Role name, and then choose Create role. For example:Role name: App_team_rolePolicy: App_team_policy8.    Repeat steps 3-7 for the second policy. For example:Role name: DB_team_rolePolicy: DB_team_policyCreate an EFS policy1.    Open the Amazon EFS console.2.    Choose File systems, select your file system, and then choose View Details.3.    Choose File system policy, and then choose Edit in the Policy section.4.    Add the following policy to allow ClientMount and ClientWrite for the access points that you created. 
Make sure you update Resource with your resource's ARN and replace the file system ID and access point ID with the correct values.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite" ], "Resource": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:file-system/fs-8ce001b4", "Condition": { "StringEquals": { "aws:PrincipalArn": "arn:aws:iam::123456789012:role/DBA_team_role", "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:access-point/fsap-054969ebbe52a6121" } } }, { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite" ], "Resource": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:file-system/fs-8ce001b4", "Condition": { "StringEquals": { "aws:PrincipalArn": "arn:aws:iam::123456789012:role/App_team_role", "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:ap-southeast-2:123456789012:access-point/fsap-0093c87d798ae5ccb" } } } ]}5.    Choose Save. Your file system is ready to use.Test your configuration1.    Access your Amazon EC2 instance (DB_team instance, in this example) without attaching any IAM role to the Amazon EC2 instance.2.    Attempt to mount your file system using mount command as NFS type. If you have a file system level resource-based access policy, then you receive an error message similar to the following:sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-8ce001b4.efs.ap-southeast-2.amazonaws.com:/ /efs mount.nfs4: access denied by server while mounting fs-8ce001b4.efs.ap-southeast-2.amazonaws.com:/3.    Assign the role that you created for this Amazon EC2 instance (DB_team_role in this example) to the DB_team Amazon EC2 instance using the Amazon EC2 console. For more information, see How do I assign an existing IAM role to an EC2 instance? If you can't assign a role to an Amazon EC2 instance as a profile role, then you can mount IAM using a named profile as a mount option along with an access point. For example:sudo mount -t efs -o tls,iam,awsprofile=namedprofile,accesspoint=fsap- file-system-id efs-mount-point/4.    Run the mount command to mount your file system using the second access point that you created (DB_team_AP in this example). If the file system mounts successfully, then your instance role is granting you permissions:sudo mount -t efs -o tls,accesspoint=fsap-054969ebbe52a6121,iam fs-8ce001b4:/ /efs5.    Run the umount command to unmount the file system:sudo umount /efs6.    Mount your file system using the first access point that you created (App_team_AP, for example) from the same Amazon EC2 instance that DB_team_role is assigned to. As expected, the mount operation is denied because of EFS access policy and IAM role calling:sudo mount -t efs -o tls,accesspoint=fsap-0093c87d798ae5ccb,iam fs-8ce001b4:/ /efs mount.nfs4: access denied by server while mounting 127.0.0.1:/7.    Use SSH to connect into the App_team instance, and perform the preceding steps. You can't mount the file system using DB_team_AP while using the App_team_role.Your file system now allows mounts only when the EC2 instance involved is using the required IAM role.sudo mount -t efs -o tls,accesspoint=fsap-0093c87d798ae5ccb,iam fs-8ce001b4:/ /efsFollow"
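The access points in this example can also be created from the AWS CLI; a sketch follows. The POSIX ownership values are assumptions, and specifying CreationInfo is what makes Amazon EFS create the root directory, as noted above.
# Create the App_team access point with an explicit root directory and ownership
aws efs create-access-point \
  --file-system-id fs-8ce001b4 \
  --tags Key=Name,Value=App_team_AP \
  --root-directory 'Path=/App_team,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=755}'
# Create the DB_team access point the same way
aws efs create-access-point \
  --file-system-id fs-8ce001b4 \
  --tags Key=Name,Value=DB_team_AP \
  --root-directory 'Path=/DB_team,CreationInfo={OwnerUid=1002,OwnerGid=1002,Permissions=755}'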
https://repost.aws/knowledge-center/efs-access-points-directory-access
How do I check the current status of my VPN tunnel?
I don’t see network traffic flowing on the AWS side of my Amazon Virtual Private Cloud (Amazon VPC) connection. How do I check the AWS VPN tunnel status?
"I don’t see network traffic flowing on the AWS side of my Amazon Virtual Private Cloud (Amazon VPC) connection. How do I check the AWS VPN tunnel status?ResolutionVerify whether you are using static or dynamic Site-to-Site VPN routing. VPN devices that don’t support Border Gateway Protocol (BGP) must use static routing. VPN devices that support BGP can use dynamic routing.Check the current status using the Amazon VPC consoleIf you use a static VPN, then follow these steps:Sign in to the Amazon VPC console.In the navigation pane, under Site-to-Site VPN Connections, choose Site-to-Site VPN Connections.Select your VPN connection.Choose the Tunnel Details view.Review the Status of your VPN tunnel.If the tunnel status is UP, then choose the Static Routes view. Be sure to specify any private networks behind your on-premises firewall.If the tunnel status is DOWN, then verify that your on-premises firewall is properly configured.Be sure to turn on route propagation in your VPC route table.If you use a dynamic VPN with BGP, then follow these steps:Sign in to the Amazon VPC console.In the navigation pane, under Site-to-Site VPN Connections, choose Site-to-Site VPN Connections.Select your VPN connection.Choose the Tunnel Details view.Review the Status of your VPN tunnel.If the tunnel status is UP, then verify that the Details column has one or more BGP routes listed.If the tunnel status is DOWN but the Details column is IPSEC IS UP, then be sure to configure BGP properly on your firewall. Phase 2 of Internet Protocol Security (IPSec) is established, but BGP isn’t established.Be sure to turn on route propagation in your VPC route table.If you continue to experience issues, then follow these steps:Verify that the security groups of Amazon Elastic Compute Cloud (Amazon EC2) instances in your VPC allow appropriate access. For more information, see Security groups for your VPC.Verify that your local firewall allows the same service in its access control lists (ACLs) and firewall policies.For more information, see Troubleshooting your customer gateway device.Monitor your VPN tunnel using Amazon CloudWatchYou can also use CloudWatch to check the status of a VPN tunnel and be notified when the status of the tunnel changes. CloudWatch can be used to access metric data over time to help evaluate the tunnel's stability. For more information, see Monitoring VPN tunnels using Amazon CloudWatch.Related informationHow do I troubleshoot BGP connection issues over VPN?Follow"
https://repost.aws/knowledge-center/check-vpn-tunnel-status
Why isn't Amazon SNS invoking my Lambda function when triggered by a CloudWatch alarm?
I want Amazon Simple Notification Service (Amazon SNS) to invoke my AWS Lambda function when triggered by an Amazon CloudWatch alarm.
"I want Amazon Simple Notification Service (Amazon SNS) to invoke my AWS Lambda function when triggered by an Amazon CloudWatch alarm.Short descriptionThe following scenarios prevent you from invoking your Lambda function:Your Lambda function's resource policy hasn't granted access to invoke the function from the SNS topic. For this scenario, complete the steps in the Check the resource-based policy document for your Lambda function section.Your Lambda function invocations are delayed. For this scenario, complete the steps in the Check your Amazon SNS delivery logs section.Note: The following resolution assumes that you can call the Publish API on your SNS topic, but are getting errors between Amazon SNS and Lambda integration. If you can't see any activity on your SNS topic, see Why didn't I receive an SNS notification for my CloudWatch alarm trigger? That activity can include the following CloudWatch metrics: NumberOfMessagesPublished, NumberOfNotificationsDelivered, or NumberOfNotificationsFailed.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Check the resource-based policy document for your Lambda functionWhen Amazon SNS invokes a Lambda function asynchronously, Lambda returns a 202 HTTP status code back to Amazon SNS. The status code shows that Lambda has accepted the message for later processing. For more information, see Asynchronous invocation. If you get a failed response, you can check your Amazon SNS delivery logs for more information.Choose a resolution based on the scenario in your account.If your SNS topic and Lambda function are in the same account:1.    Open the Lambda console.2.    On the navigation pane, choose Functions, and then choose your function.3.    Choose the Configuration tab, and then choose Permissions.4.    In the Resource-based policy section, choose the policy statement from the Statement ID column to view your policy document. You see the following policy document:statement idyour-statement-idprincipal:sns.amazonaws.comeffectallowactionLambda:InvokeFunctionconditions{ "arnlike": { "aws:sourcearn": "arn:aws:sns:your-aws-region:your-aws-account-id:your-sns-topic-name" } Note: To see your policy document in JSON, choose View policy document in the Resource-based policy section.If you're missing the Lambda resource policy that grants Amazon SNS access to invoke the function, add the following function to your policy document. You can use the Lambda console, or the AWS CLI or AWS CloudShell.Using the command line:aws lambda add-permission \--function-name your-lambda-function-name \--statement-id triggerfromsns-statement-id \--action lambda:invokefunction \--principal sns.amazonaws.com \--source-arn arn:aws:sns:your-aws-region:your-aws-account-id:your-sns-topic-nameNote: Replace your-lambda-function-name, your-aws-region, your-aws-account-id, and your-sns-topic-name with your values. The AWS CLI command uses the default AWS Region. You can override the default Region with the --region flag if your Lambda function is in a different Region.Using the Lambda console:1.    Open the Lambda console.2.    On the navigation pane, choose Functions, and then choose your function.3.    Choose the Configuration tab, and then choose Permissions.4.    In the Resource-based policy section, choose Add permissions.5.    For Principal, select sns.amazonaws.com.6.    For Actions, select Lambda:InvokeFunction.7.    For Statement ID, enter a unique ID.8.    
Choose Save.If your SNS topic and Lambda function are in different accounts:1.    Set up cross-account permissions.2.    In your CloudWatch logs, use delivery status logging to verify that Amazon SNS has successfully sent a message to Lambda or the NumberOfNotificationsDelivered CloudWatch metric.Example of a successful response between Amazon SNS and Lambda:{ "notification": { "messagemd5sum": "your-md5-sum", "messageid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "topicarn": "arn:aws:sns:your-aws-region:your-aws-account-id:your-sns-topic", "timestamp": "2021-04-04 14:08:48.083" }, "delivery": { "deliveryid": "your-sns-delivery-id", "destination": "arn:aws:lambda:your-aws-region:your-aws-account-id:function:your-function-name", "providerresponse": "{\ "lambdarequestid\":\ "your-lambda-request-id\"}", "dwelltimems": 92, "attempts": 1, "statuscode": 202 }, "status": "success"}Check your Amazon SNS delivery logsCheck your SNS topic's delivery logs to your Lambda function. If the response is successful, you see a 202 status code.To view CloudWatch logs for your SNS topic, do the following:1.    Open the CloudWatch console.2.    On the navigation pane, expand Logs, and then choose Log groups.3.    In the filter search box, enter the name of your SNS topic. Two log groups for your SNS topic appear: one for successes and one for failures.4.    Choose the success log group.5.    In the Log streams section, choose Search all.Note: You can also check the timestamp of the request in the Last event time column. Then, search for the Lambda function's Amazon Resource Name (ARN) and name.6.    From the Log stream column, choose the log stream to open it.If you don't see any results, do the following:1.    Choose Log groups, and then choose the failure log groups.2.    In the Log streams section, choose Search all.3.    From the Log stream column, choose the log stream to open it.To troubleshoot failed log groups, do the following:1.    Check if your Lambda function's X-Ray trace has a high dwell time. If there's a high dwell time, use the CloudWatch console to verify that your Lambda functions in that Region have the fewest number of errors and throttles. Be sure to select all functions, and then select the Errors and Throttles metrics.Note: Hundreds of errors and throttles across functions that are invoked asynchronously can back up the internal Lambda queue, resulting in delays in function invocations. It's a best practice to keep the error and throttle rate to minimum, so you can avoid unwanted delays for function invocations. For more information, see Asynchronous invocation.2.    Set a destination Amazon Simple Queue Service (Amazon SQS) queue or Lambda function for separate handling. This prevents message loss and is done because the maximum age of the asynchronous events can be six hours for a Lambda function.Follow"
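If you prefer to verify the configuration from the command line, the following is a minimal sketch. The function name, Region, and topic ARN are placeholders; replace them with your own values.

# Retrieve the function's resource-based policy and confirm that the SNS statement
# (principal sns.amazonaws.com, action lambda:InvokeFunction, and your topic ARN in SourceArn) is present.
aws lambda get-policy \
  --function-name your-lambda-function-name \
  --region your-aws-region \
  --query Policy \
  --output text

# Review the topic's delivery status attributes to confirm that delivery status logging for Lambda is set up.
aws sns get-topic-attributes \
  --topic-arn arn:aws:sns:your-aws-region:your-aws-account-id:your-sns-topic-name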
https://repost.aws/knowledge-center/sns-lambda-function-cloudwatch-alarm
How do I create a linked server in RDS for SQL Server with the source as RDS?
I want to create a linked server from an Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance to SQL Server. How can I do this?
"I want to create a linked server from an Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance to SQL Server. How can I do this?Short descriptionAmazon RDS is a managed service, so users don't have sysadmin access. Directly creating a linked server from a GUI results in an error. To create a linked server, use T-SQL. The only supported targets are SQL Server.PrerequisitesYou must have connectivity between RDS for SQL Server and the target SQL Server.Note: The linked server password and configuration remains intact even after a host replacement.ResolutionRDS for SQL Server instance to RDS for SQL Server instanceIt's a best practice to use the DNS name when creating a linked server with RDS for SQL Server as the source to RDS for SQL Server as the target. Using the DNS name prevents IP address changes due to host replacements or server changes.In Amazon RDS, IP addresses are dynamic and endpoints are static. So, it’s a best practice to use endpoints to connect to your instance. Every Amazon RDS instance has an endpoint.Parameters:@server: Your linked server name.@datasrc: Your RDS endpoint name. For Amazon Elastic Compute Cloud (Amazon EC2) on-premises instance your EC2 on-premises IP address or DNS name.@rmtuser: The login name that has access to the target database.@rmtpassword: The password for the login name.Step 1: Connect to the RDS for SQL Server instanceConnect to the instance using the master login and then run the following command. Make sure that you're using the endpoint and not the IP address. IP addresses of RDS instances might change during a host replacement.EXEC master .dbo.sp_addlinkedserver @server = N'LinkedServerRDSSQL', @srvproduct= N'', @provider= N'SQLNCLI', @datasrc= N'SQL-2019.ckeixtynaaaj.us-east-1.rds.amazonaws.com'goEXEC master .dbo.sp_addlinkedsrvlogin @rmtsrvname=N'LinkedServerRDSSQL' ,@useself=N'False' ,@locallogin=NULL,@rmtuser =N'linkedserverloginname',@rmtpassword='YourStrongPassword'goStep 2: Test the linked server1.    In Microsoft SQL Server Management Studio (SSMS), connect to the RDS instance.2.    On the View menu, select Object Explorer.3.    Select Server Objects, Linked Servers.4.    Right-click your server name, and then select Test the connection.Step 3: Query the linked serverRun the following query:select * from [LinkedServerName].[Databasename].[schemaname].[tablename]RDS for SQL Server instance to an EC2 SQL Server instance or an on-premises SQL ServerStep 1: Create the linked serverCreate the linked server with RDS for SQL Server as the source to SQL Server on an EC2 instance or to an on-premises SQL Server.Run the following commands to create the linked server using the IP address for the remote server:EXEC master .dbo.sp_addlinkedserver @server = N'LinkedServerRDSSQL', @srvproduct= N'', @provider= N'SQLNCLI', @datasrc= N'10.0.0.152'GoEXEC master .dbo.sp_addlinkedsrvlogin @rmtsrvname=N'LinkedServerRDSSQL' ,@useself=N'False' ,@locallogin=NULL,@rmtuser =N'linkedserverloginname',@rmtpassword='YourStrongPassword'GoRun the following commands to create the linked server using the DNS name for the remote server:EXEC master .dbo.sp_addlinkedserver @server = N'LinkedServerRDSSQL', @srvproduct= N'', @provider= N'SQLNCLI', @datasrc= N'ServerName.datacenter.mycompany.com'GoEXEC master .dbo.sp_addlinkedsrvlogin @rmtsrvname=N'LinkedServerRDSSQL' ,@useself=N'False' ,@locallogin=NULL,@rmtuser =N'linkedserverloginname',@rmtpassword='YourStrongPassword'goStep 2: Test the linked server1.    
In Microsoft SQL Server Management Studio (SSMS), connect to the RDS instance.2.    On the View menu, select Object Explorer.3.    Select Server Objects, Linked Servers.4.    Right-click your server name, and then select Test the connection.Step 3: Query the linked serverRun the following query:select * from [LinkedServerName].[Databasename].[schemaname].[tablename]Configure the linked server using Microsoft Windows AuthenticationNote: Configuring a linked server from RDS for SQL Server to an EC2 instance or to an on-premises SQL Server using Windows Authentication isn't supported.PrerequisitesYou must have the domain created and joined with the AWS Managed Microsoft AD.The source EC2 SQL Server instance and target RDS SQL Server must have connectivity.Step 1: Configure the linked server from an EC2 or on-premises SQL Server to RDS for SQL Server using Windows Authentication1.    Log in with your domain login and run the following query to create the linked server.USE [master]GOEXEC sp_addlinkedserver @server=N'LinkedServerToRDSInstance',@srvproduct=N'',@provider=N'SQLNCLI',@datasrc=N'EndpointName';GOEXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'LinkedServerToRDSInstance', @locallogin = NULL , @useself = N'True'GOStep 2: Test the linked server1.    In Microsoft SQL Server Management Studio (SSMS), connect to the RDS instance.2.    On the View menu, select Object Explorer.3.    Select Server Objects, Linked Servers.4.    Right-click your server name, then select Test the connection.Step 3: Query the linked serverRun the following query:select * from [LinkedServerName].[Databasename].[schemaname].[tablename]TroubleshootingYou might receive the following error message when connecting from the client:Msg 18456, Level 14, State 1, Line 21Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'This error is caused by a "double hop". A double hop occurs when one computer connects to another computer to connect to a third computer. Double hops might occur in the following scenarios:There isn't a service principal name configuration (SPN) for AWS Managed AD to process the authentication between the client and the EC2 instance.The linked server is configured using an endpoint that isn't from your domain, such as the RDS instance endpoint. The authentication method to both EC2 and RDS needs to be KERBEROS.To resolve this issue, do the following:Step 1. Check the authentication method to confirm KERBEROS is being picked when connecting to both RDS and EC2Run the following query using the domain login from the client:select @@servername as ServerName, session_id,net_transport, auth_scheme from sys.dm_exec_connections where session_id= @@spid;Step 2: Correct the SPNs for the SQL Server service account that is part of your domain1.    In Active Directory Users and Computers, select YourDomain.com, YourDomain, Users.2.    Right-click YourServiceAccount to view the properties.3.    In the Delegation tab, choose Trust this user for delegation to any service (Kerberos only), and then select OK.4.    Restart the SQL server service on the EC2 instance or the on-premises SQL Server.5.     Add the SPN for the service account as shown in the following example command. Replace YourDomainName\ServiceAccountName and the Ec2name domain with the correct values for your domain.setspn -A MSSQLSvc/Ec2name.domain.com YourDomainName\ServiceAccountNamesetspn -A MSSQLSvc/Ec2name.domain.com:1433 YourDomainName\ServiceAccountName6.    
Run the following command to verify the newly created SPNs:setspn -l YourDomainName\ServiceAccountNameStep 3: Recreate the linked server using the RDS YourDomain.com endpoint1.    Run the following query in RDS for SQL Server to retrieve the server name:select @@servername as ServerName, session_id,net_transport, auth_scheme from sys.dm_exec_connections where session_id= @@spid;2.    In the output from the preceding command, check the server name column to verify the SPN, as shown in the following example.setspn -l YourServerNameThe preceding command output also shows the registered ServicePrincipalNames for your RDS instance, as shown in the following example:MSSQLSvc/YourServerName.yourdomainname.com:14333.    Run the following command to recreate the linked server using the domain login. The data source is the same one that you retrieved from the command output in step 2.USE [master]GOEXEC sp_addlinkedserver @server=N'LinkedServerToRDSInstance',@srvproduct=N'',@provider=N'SQLNCLI',@datasrc=N'YourServerName.YourDomainName.com';GOEXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'LinkedServerToRDSInstance', @locallogin = NULL , @useself = N'True'GO4.    Test connectivity from the client.Note: Currently, only SQL Server is supported as the linked server target. Because Amazon RDS is a managed service, you don't have OS access to install the additional ODBC drivers required to connect to other database engines such as Oracle, MySQL, or PostgreSQL. For heterogeneous linked servers, you can use RDS Custom for SQL Server.Related informationImplement linked servers with Amazon RDS for Microsoft SQL ServerFollow"
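If you want to test the linked server from a command line instead of SSMS, a minimal sketch follows. The endpoint, login, password, linked server name, and database and table names are placeholders taken from or modeled on the examples above.

# Verify that the linked server responds (a return without an error indicates the connection works)
sqlcmd -S SQL-2019.ckeixtynaaaj.us-east-1.rds.amazonaws.com -U masterlogin -P 'YourStrongPassword' \
  -Q "EXEC master.dbo.sp_testlinkedserver N'LinkedServerRDSSQL';"

# Run a test query through the linked server
sqlcmd -S SQL-2019.ckeixtynaaaj.us-east-1.rds.amazonaws.com -U masterlogin -P 'YourStrongPassword' \
  -Q "SELECT TOP 5 * FROM [LinkedServerRDSSQL].[YourDatabase].[dbo].[YourTable];"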
https://repost.aws/knowledge-center/rds-sql-server-create-linked-server
"What are the basic steps for mounting, unmounting, automounting, and on-premises mounting my EFS file system?"
"What are the basic steps for mounting, unmounting, automounting, and on-premises mounting of my Amazon Elastic File System (Amazon EFS) file system?"
"What are the basic steps for mounting, unmounting, automounting, and on-premises mounting of my Amazon Elastic File System (Amazon EFS) file system?ResolutionTo mount your Amazon EFS file system, you can either install the nfs-utils package or the efs-utils package.Mounting EFS with efs-utils tools1.    Run the following command to install the efs-utils package:Amazon Linux or Amazon Linux 2$ sudo yum install -y amazon-efs-utilsUbuntu and Debian-based distributions$ sudo apt-get -y install ./build/amazon-efs-utils*deb./build-deb.sh$ cd /path/to/efs-utils$ git clone https://github.com/aws/efs-utils$ sudo apt-get -y install git binutils$ sudo apt-get updateOther Linux distributions$ sudo yum -y install git$ sudo yum -y install rpm-build$ git clone https://github.com/aws/efs-utils$ cd /path/efs-utils$ sudo yum -y install make$ sudo yum -y install rpm-build$ sudo make rpm$ sudo yum -y install ./build/amazon-efs-utils*rpm2.    After the efs-utils package is installed, open the EFS console.3.    Select File systems.4.    Select the file system that you want to mount.5.    Select Attach.6.    Copy the command under using the EFS mount helper.7.    Connect to the instance through SSH or AWS Systems Manager Session Manager and run the command you copied in step 6:$ sudo mkdir -p /mnt/efs$ sudo mount -t efs -o tls fs-12345678:/ /mnt/efs$ sudo mount -t efs -o tls,accesspoint=fsap-12345678 fs-01233210 /mnt/efsNote: Edit the preceding commands as required by replacing the file system id, mount point, and so on.Mounting EFS with the NFS client1.    Run the following command to install the nfs-utils package:RHEL and CentOS-based distributions$ sudo yum -y install nfs-utilsUbuntu-based distributions$ sudo apt install nfs-common2.    After installing the nfs-utils package, navigate to the EFS console.3.    Select File systems.4.    Select the file system that you want to mount.5.    Select Attach.6.    Copy the command under using the NFS mount helper.7.    Connect to the instance through SSH or Session Manager and run the command you copied in step 6:$ sudo mkdir -p /mnt/efs$ sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport mount-target-DNS:/ ~/efs-mount-point-or-Run the following command to mount using an IP address:$ sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport mount-target-ip:/ ~/efs-mount-pointNote: Edit the preceding commands as required by replacing the file system id, mount point, DNS, IP address, and so on.Unmounting an EFS file systemRun the following command to unmount the file system:$ umount /mnt/efsIf the mount point is busy, then use the -l parameter with the umount command:$ umount -l /mnt/efsAuto-mounting a file system using /etc/fstabRun the following commands to make an entry in /etc/fstab so that the EFS mount persists on reboot:# vim /etc/fstabUsing efs-utilsfs-xxxxxxxx:/ /mnt/efs efs _netdev,nofail,noresvport,tls,iam 0 0Using the NFS clientModify the parameters in fstab as needed for your configuration.fs-XXXXXXXX.efs.REGION.amazonaws.com:/ /mnt/efs nfs4 defaults,_netdev,nofail 0 0# mount -aFor various mounting options using the mount helper, see Automatically mount EFS using /etc/fstab with EFS mount helper.Note: You can mount your file system using an IP address of a mount target in a different Availability Zone than the client (Amazon Elastic Compute Cloud (Amazon EC2)). 
When you do this, consider factors such as cross-Availability Zone data transfer charges, and latency.Mounting EFS on instance launch using the launch wizardWhen launching EC2 instances you can use the launch wizard to add user data automatically for mounting of EFS.1.    Open the EC2 console.2.    Select Launch instances.3.    Select an AMI and an instance type, and then select Next: Configure Instance Details.4.    Configure various parameters as per your requirements. Make sure that you select the required VPC and subnet for EFS mounting.5.    On the Configure instance page, under File systems, choose the EFS file system that you want to mount. The path shown next the file system ID is the mount point that the EC2 instance will use. You can change this path, if needed. User data is automatically generated for mounting EFS in the Advanced details section:#cloud-configpackage_update: truepackage_upgrade: trueruncmd:- yum install -y amazon-efs-utils- apt-get -y install amazon-efs-utils- yum install -y nfs-utils- apt-get -y install nfs-common- file_system_id_1=fs-0cae1679a766bcf49- efs_mount_point_1=/mnt/efs/fs1- mkdir -p "${efs_mount_point_1}"- test -f "/sbin/mount.efs" && printf "\n${file_system_id_1}:/ ${efs_mount_point_1} efs tls,_netdev\n" >> /etc/fstab || printf "\n${file_system_id_1}.efs.us-east-1.amazonaws.com:/ ${efs_mount_point_1} nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0\n" >> /etc/fstab- test -f "/sbin/mount.efs" && grep -ozP 'client-info]\nsource' '/etc/amazon/efs/efs-utils.conf'; if [[ $? == 1 ]]; then printf "\n[client-info]\nsource=liw\n" >> /etc/amazon/efs/efs-utils.conf; fi;- retryCnt=15; waitTime=30; while true; do mount -a -t efs,nfs4 defaults; if [ $? = 0 ] || [ $retryCnt -lt 1 ]; then echo File system mounted successfully; break; fi; echo File system not available, retrying to mount.; ((retryCnt--)); sleep $waitTime; done;>-or-To mount EFS on a custom AMI or with specific options, add custom user data with the required commands in the Advanced details section. For more information, see Run commands on your Linux instance at launch.RHEL and CentOS-based distributions#!/bin/bashsudo mkdir -p /mnt/efssudo yum -y install nfs-utilsUbuntu-based distributions#!/bin/bashsudo mkdir -p /mnt/efssudo apt install nfs-commonsudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport mount-target-ip:/ /mnt/efs6.    Launch the instance.Mounting EFS on-premisesTo mount EFS on your on-premises servers, there must be connectivity between EFS and the on-premises servers. You can use AWS Direct Connect and VPN to establish this connectivity.After establishing connectivity between the on-premises server and EFS's VPC, run the following commands to install the NFS client and mount EFS:$ sudo yum -y install nfs-utils (Red Hat Linux)$ sudo apt-get -y install nfs-common(Ubuntu)$ mkdir ~/efs$ sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport mount-target-IP:/ ~/efsFor more information, see Mounting on your on-premises Linux client with the EFS mount helper over AWS Direct Connect and VPN.Follow"
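After adding the /etc/fstab entry, you can check it before rebooting. The following is a minimal sketch; the mount point /mnt/efs is the example path used above.

# Dry-run all fstab entries to catch syntax errors without actually mounting (-f fake, -a all, -v verbose)
sudo mount -fav

# Mount everything in /etc/fstab, then confirm that the EFS file system is attached
sudo mount -a
df -hT | grep -E 'efs|nfs4'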
https://repost.aws/knowledge-center/efs-mount-automount-unmount-steps
How can I create CloudWatch alarms to monitor my Amazon RDS SQL Server DB instance’s memory consumption using Enhanced Monitoring?
My Amazon Relational Database Service (Amazon RDS) SQL Server DB instance is using more memory than expected. How can I set an Amazon CloudWatch alarm to monitor how much memory is consumed by SQL Server?
"My Amazon Relational Database Service (Amazon RDS) SQL Server DB instance is using more memory than expected. How can I set an Amazon CloudWatch alarm to monitor how much memory is consumed by SQL Server?Short descriptionAfter you enable Enhanced Monitoring for your RDS DB instance, you can create a CloudWatch alarm and use Amazon Simple Notification Service (Amazon SNS) to receive notifications about SQL Server memory consumption. This example uses the Enhanced Monitoring metric sqlServerTotKb to create a CloudWatch alarm and sends an SNS notification about the memory consumed by your Amazon RDS for SQL Server DB instance.ResolutionOpen the CloudWatch console, and then choose Log groups from the navigation pane.Filter for RDSOSMetrics from the list of Log Groups, select RDSOSMetrics. Navigate to Metric filters, and then choose Create Metric Filter.Enter a Filter Pattern for your RDS DB instance, such as {$.instanceID = "nameOfYourRDSInstance"}. For example, you can enter the RDS DB instance name {$.instanceID ="sqltest"}.From the Select Log Data to Test section, choose the resource ID of your RDS DB instance.Note: To find the resource ID of your RDS DB instance, open the Amazon RDS console, and then choose Databases from the navigation pane. Choose your RDS DB instance, and navigate to the Configuration tab. The resource ID appears in the Instance section.Select Next to assign a Filter name.Filter name: MyFilterEnter a Metric Namespace and Metric Name. See the following example:Metric Namespace: EMMetricMetric Name: SQLServerMemoryConsumptionEnter the metric value $.memory.sqlServerTotKb and then choose Next.Choose Create Metric Filter. A custom metric with the name specified is created. This metric reports the Enhanced Monitoring data in a CloudWatch graph.Select the Metric Filter, and then choose Create Alarm.From the Metrics section on the next page, verify the Namespace and Metric name, and then set the Period to 1 minute.From the Conditions section, define the threshold of the metric alarm. See the following example:Threshold type: StaticWhenever SQLServerMemoryConsumption is: Greater > thresholdThen: Enter 20971520Note: To specify 20 GiB as a threshold, enter the value in KiB. For example, 20971520 (20 * 1024 * 1024).Choose Next.From the Configure Action section, choose In Alarm.Select an SNS topic, or choose Create new topic using the email address where you want to receive alerts, and then choose Next.Enter an alarm name and description and choose Next. See the following example:Alarm name: RDS DB instance: SQLTEST: SQL Server Memory Consumption> 20 GiBAlarm description: SQL Server Memory consumption on your RDS DB instance is highFrom the Preview and Create page, verify the details of your alarm.Choose Create alarm.After the alarm is created, you can view it under Alarms on the CloudWatch console. Whenever your SQL Server memory usage exceeds the defined threshold, your alarm enters the ALARM state and you receive an email notification.Follow"
https://repost.aws/knowledge-center/rds-sql-server-memory-consumption
How can I use Security Hub to monitor security issues for my AWS environment?
I want to use AWS Security Hub to monitor security issues in my AWS environment.
"I want to use AWS Security Hub to monitor security issues in my AWS environment.Short descriptionSecurity Hub provides you with a detailed view of your security state and helps check your environment against security standards and best practices.Benefits of Security Hub include:Reduced effort to collect and prioritize findingsAutomatic security checks against best practices and standardsConsolidated view of findings across accounts and providersAbility to automate remediation of findingsSupports integration with Amazon EventBridge.For more information, seeBenefits of AWS Security Hub.ResolutionTo automate remediation of specific findings, you can define custom actions to take when a finding is received.Follow these instructions to create a custom action, define an EventBridge rule, and send findings.Create a custom actionIf you haven't already done so, start the configuration recorder in AWS Config.1.    Open the Security Hub console, choose Settings, and then choose Custom actions.2.    Choose Create custom action.3.    Enter an Action name and Description.4.    For Custom action ID, enter a unique ID, and then choose Create custom action.5.    In Custom action ARN, take note of the ARN.Define a rule in EventBridgeIf you haven't already done so, create an Amazon Simple Notification Service (Amazon SNS) topic.1.    Open the EventBridge console in the same AWS Region as Security Hub, expand Events, and then choose Rules.2.    Choose Create rule.3.    Enter a Rule name and Description.4.    From the Event bus drop down menu, choose either the default or custom bus.5.    Make sure that the Enable the rule on the selected event bus switch is turned on.6.    For Rule type, choose Rule with an event pattern, and then choose Next.7.    For Event source, choose AWS events or EventBridge partner events.8.    In Event pattern, choose the following:For Event source, choose AWS services.For AWS service, choose Security Hub.For Event type, choose Security Hub Findings - Custom Action, choose Specific custom action ARN(s), and then choose Next.9.    Choose the Select a target drop down menu, choose your target type, choose Next, Next, and then choose Create rule.For more information, see Amazon EventBridge event patterns.Send findings to EventBridge1.    Open the Security Hub console, and then choose Findings.2.    Follow the instructions to send findings to EventBridge.Note:You can create up to 50 custom actions.If you created cross-Region aggregation and manage finding from the aggregation Region, create custom actions in that Region.For more information, see Findings in AWS Security Hub.Related informationHow Security Hub worksAWS Security Hub endpoints and quotasFollow"
https://repost.aws/knowledge-center/security-hub-monitor
How do I troubleshoot Systems Manager Agent that's stuck in the starting state or fails to start?
I want to troubleshoot why AWS Systems Manager Agent (SSM Agent) is stuck in the starting state or failing to start.
"I want to troubleshoot why AWS Systems Manager Agent (SSM Agent) is stuck in the starting state or failing to start.Short descriptionA managed instance is an Amazon EC2 instance that's configured for use with Systems Manager. Managed instances can use Systems Manager services such as Run Command, Patch Manager, and Session Manager.Make sure that your Amazon Elastic Compute Cloud (Amazon EC2) instance meets the following prerequisites to be a managed instance:The instance has SSM Agent installed and running.The instance has connectivity to the instance metadata service.The instance has connectivity with Systems Manager endpoints using the SSM Agent.The instance has the correct AWS Identity and Access Management (IAM) role attached to it.SSM Agent doesn't start when these prerequisites aren't met.For SSM Agent version 3.1.501.0 and later, you can use ssm-cli tool to determine whether an instance meets these requirements. With this tool, you can diagnose why an EC2 instance that's running isn't included in the list of managed instances in Systems Manager.If your instance doesn't appear as a managed instance in the Systems Manager console, check the SSM Agent logs to troubleshoot further.You can find the SSM Agent logs for Linux at /var/log/amazon/ssm.You can find the SSM Agent logs for Windows at %PROGRAMDATA%\Amazon\SSM\Logs.Note: If the instance isn't reporting to Systems Manager, then try logging in using RDP (Windows) or SSH (Linux) to collect the logs. If you still can't log in, then stop the instance and detach the root volume. Then, attach the root volume to another instance in the same Availability Zone as a secondary volume to get the logs.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.Make sure that you installed the latest version of SSM AgentIt's a best practice to use the latest version of SSM Agent.For Linux, see Manually install SSM Agent on EC2 instances for Linux.For Windows, see Manually install SSM Agent on EC2 instances for Windows Server.Check connectivity to the instance metadata serviceNote: This connectivity is required only for an EC2 instance and not for hybrid activation.SSM Agent relies on EC2 instance metadata to function correctly. SSM Agent can access instance metadata using Instance Metadata Service Version 1 (IMDSv1) or Instance Metadata Service Version 2 (IMDSv2). 
Make sure that your instance can access IPv4 address of the Instance Metadata Service: 169.254.169.254.To verify connectivity to Instance Metadata Service, run the following command from your EC2 instance:Linux:telnet 169.254.169.254 80orcurl -I http://169.254.169.254/latest/meta-data/Windows:curl http://169.254.169.254/latest/meta-data/ orTest-NetConnection 169.254.169.254 -port 80If your instance can't access metadata, then make sure that metadata is turned on.For existing EC2 instances, do the following to check if metadata is turned on:Open the Amazon EC2 console.In the navigation pane, choose Instances.Select your instance.Choose Actions, Instance settings, Modify instance metadata options.In the Modify instance metadata options dialog box, check whether Instance metadata service is enabled.Or, use the describe-instances command to verify if Instance Metadata Service is turned on:aws ec2 describe-instances --query "Reservations[*].Instances[*].MetadataOptions" --instance-ids i-012345678910The output looks like the following:[ [ { "State": "applied", "HttpTokens": "optional", "HttpPutResponseHopLimit": 1, "HttpEndpoint": "enabled", "HttpProtocolIpv6": "disabled", "InstanceMetadataTags": "disabled" } ]]The field HttpEndpoint in the preceding output indicates whether metadata is turned on.If metadata access is turned off, turn it on.If a proxy is configured in the instance, then make sure that the instance bypasses metadata IP (169.254.169.254). For more information, see the following user guides:Linux: Configuring SSM Agent to use a proxy (Linux)Windows: Configure SSM Agent to use a proxy for Windows Server instancesFor Windows, check the specific route to metadata (169.254.169.254).In PowerShell, run the route print and ipconfig /all commands. Then, check the metadata output: Network Address Netmask Gateway Address 169.254.169.254 255.255.255.255 <Subnet Router Address>Confirm that the Gateway Address field in the output matches the default gateway for the instance's primary network interface.If the route isn't present or the Gateway Address field doesn't match, then do the following:Confirm that the latest version of EC2Config (Windows Server 2012R2 and earlier) or EC2Launch (Windows Server 2016 or later) is installed on the instance.To apply the route to the instance, restart the EC2Config service.If the routes are correct, but the instance is still unable to retrieve metadata, then review your instance's Windows Firewall, third-party firewall, and antivirus configuration. Confirm that traffic to 169.254.169.254 isn't explicitly denied.To manually reset the metadata routes, do the following:Note: These configured changes populate immediately. You don't need to restart the instance for the changes to take effect.Run the following commands to remove the existing metadata routes from the route table:route delete 169.254.169.123route delete 169.254.169.249route delete 169.254.169.250route delete 169.254.169.251route delete 169.254.169.252route delete 169.254.169.253route delete 169.254.169.254Run the following command:ipconfig /allNote the Default Gateway IP that's returned from the command in Step 2.Run the following commands. 
Replace DefaultGatewayIP with the IP address that you retrieved in Step 3.route -p add 169.254.169.123 MASK 255.255.255.255 DefaultGatewayIProute -p add 169.254.169.249 MASK 255.255.255.255 DefaultGatewayIProute -p add 169.254.169.250 MASK 255.255.255.255 DefaultGatewayIProute -p add 169.254.169.251 MASK 255.255.255.255 DefaultGatewayIProute -p add 169.254.169.252 MASK 255.255.255.255 DefaultGatewayIProute -p add 169.254.169.253 MASK 255.255.255.255 DefaultGatewayIProute -p add 169.254.169.254 MASK 255.255.255.255 DefaultGatewayIPRestart SSM Agent.Check connectivity with Systems Manager endpointsThe best method to verify this connectivity depends on your operating system. For a list of Systems Manager endpoints by Region, see AWS Systems Manager endpoints and quotas.Note: In the following examples, the ssmmessages endpoint is required only for AWS Systems Manager Session Manager.For EC2 Linux instances, run either telnet or netcat commands to verify connectivity to endpoints on port 443.Telnettelnet ssm.RegionID.amazonaws.com 443telnet ec2messages.RegionID.amazonaws.com 443telnet ssmmessages.RegionID.amazonaws.com 443Be sure to replace RegionID with your AWS Region ID.If the connection is successful, you get an output that's similar to the following:root@111800186:~# telnet ssm.us-east-1.amazonaws.com 443Trying 52.46.141.158...Connected to ssm.us-east-1.amazonaws.com.Escape character is '^]'.To exit from telnet, hold down the Ctrl key and press the ] key. Enter quit, and then press Enter.Netcatnc -vz ssm.RegionID.amazonaws.com 443nc -vz ec2messages.RegionID.amazonaws.com 443nc -vz ssmmessages.RegionID.amazonaws.com 443Note: Netcat isn't preinstalled on Amazon EC2 instances. To manually install Netcat, see Ncat on the Nmap website.For EC2 Windows instances, run the following Windows PowerShell commands to verify connectivity to endpoints on port 443:Test-NetConnection ssm.RegionID.amazonaws.com -port 443Test-NetConnection ec2messages.RegionID.amazonaws.com -port 443Test-NetConnection ssmmessages.RegionID.amazonaws.com -port 443If the connection is successful, you get an output that's similar to the following:PS C:\Users\ec2-user> Test-NetConnection ssm.us-east-1.amazonaws.com -port 443ComputerName : ssm.us-east-1.amazonaws.comRemoteAddress : 52.46.145.233RemotePort : 443InterfaceAlias : EthernetSourceAddress : 10.35.218.225TcpTestSucceeded : TrueCheck the IAM role for SSM AgentSSM Agent requires certain IAM permissions to make the Systems Manager API calls. You can manage these permissions using one of the following approaches:Default Host Management Configuration allows Systems Manager to manage your Amazon EC2 instances automatically. It allows instance management without the use of instance profiles. This configuration makes sure that Systems Manager has permissions to manage all instances in the Region and account.You can grant access at the instance level using an IAM instance profile. An instance profile is a container that passes IAM role information to an instance at launch. For more information, see Alternative configuration.Related informationWhy is my EC2 instance not displaying as a managed node or showing a "Connection lost" status in Systems Manager?Make an Amazon EBS volume available for use on LinuxMake an Amazon EBS volume available for use on WindowsWhy does my Amazon EC2 Windows instance generate a "Waiting for the metadata service" error?Follow"
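To quickly confirm the agent's state and run the built-in prerequisite checks mentioned above, you can use commands similar to the following sketch on the instance (service names assume the standard SSM Agent installation).

# systemd-based Linux: check and restart SSM Agent
sudo systemctl status amazon-ssm-agent
sudo systemctl restart amazon-ssm-agent

# SSM Agent 3.1.501.0 and later: run diagnostics for metadata, endpoint, and IAM prerequisites
sudo ssm-cli get-diagnostics --output table

# Windows (PowerShell) equivalents:
# Get-Service AmazonSSMAgent
# Restart-Service AmazonSSMAgent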
https://repost.aws/knowledge-center/systems-manager-agent-stuck-start
How can I connect to my Amazon VPC?
There are several options to connect to a virtual private cloud (VPC) in Amazon Virtual Private Cloud (Amazon VPC). How do I decide which option to use?
"There are several options to connect to a virtual private cloud (VPC) in Amazon Virtual Private Cloud (Amazon VPC). How do I decide which option to use?Short descriptionYou can connect to your VPC through the following:A virtual private network (VPN)AWS Direct Connect (DX)A VPC peering connectionA VPC endpointAn internet gatewayA network address translation (NAT) gatewayA NAT instanceA transit gatewayThe best option depends on your specific use case and preferences.ResolutionReview the following options for connecting to your VPC and choose the best one for your use case.VPN connectionYou can establish a VPN connection to an Amazon Web Services (AWS)-managed virtual private gateway, which is the VPN device on the AWS side of the VPN connection.You can use an AWS managed VPN connection or a third-party VPN solution. Use a third-party solution if you require full access and management of the AWS side of the VPN connection.After creating your connection, you can download the Internet Protocol Security (IPsec) VPN configuration from the VPC console. Use the IPsec VPN configuration to configure the firewall or device in your local network that connects to the VPN.DX connectionAn AWS Direct Connect (DX connection) links your internal network to a DX location over a standard 1-Gbps or 10-Gbps Ethernet fiber-optic cable.DX usage is charged per port-hour with additional data transfer rates that vary by AWS Region. For more information, see AWS Direct Connect pricing.VPC peering connectionA VPC peering connection connects two VPCs and routes traffic between them through private IP addresses, which allows the VPCs to function as if they are on the same network. These connections aren't subject to common issues, such as a single point of failure or network bandwidth bottlenecks, because they don't rely on physical hardware.VPC peering is supported for VPCs across all AWS Regions in both the same or different AWS accounts. For more information, see VPC peering limitations.VPC endpointsA VPC endpoint is a private connection between your VPC and another AWS service that doesn't require internet access. The two types of VPC endpoints are interface VPC endpoints (for AWS PrivateLink services) and gateway VPC endpoints. After you configure a VPC endpoint, instances in your VPC can use private IP addresses to communicate with:Resources in other AWS servicesVPC endpoint services hosted by other AWS accountsSupported AWS Marketplace partner servicesInternet gatewayAn internet gateway enables communication between instances in your VPC and the internet. You can scope the route to all destinations not explicitly known to the route table or to a narrower range of IP addresses.NAT gatewayA NAT gateway is a managed service that enables instances in a private subnet of a VPC to connect to the internet or other AWS services without allowing connections to those instances from the internet.Note: Be sure to create the NAT gateway in a public subnet. For more information, see NAT gateways.NAT instanceA NAT instance in the public subnet of a VPC enables instances in the private subnet to initiate outbound IPv4 traffic to the internet or other AWS services while also preventing those instances from receiving inbound traffic initiated by someone on the internet.Note: A NAT gateway is a best practice for common use cases. For more information, see Compare NAT instances and NAT gateways.Transit gatewayA transit gateway acts as a central hub for connecting your VPCs and your on-premises networks. 
For more information, see AWS Transit Gateway.Related informationWhat is Amazon VPC?Amazon VPC quotasConfigure route tablesFollow"
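As one illustration of these options, the following sketch creates a gateway VPC endpoint for Amazon S3 with the AWS CLI so that instances in private subnets can reach S3 without an internet gateway or NAT device. The VPC ID, route table ID, and Region are placeholders.

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0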
https://repost.aws/knowledge-center/connect-vpc
How can I migrate my Amazon Redshift data from one cluster to another?
I want to migrate my Amazon Redshift data from one cluster to another cluster.
"I want to migrate my Amazon Redshift data from one cluster to another cluster.ResolutionYou can use data sharing, Amazon Redshift Spectrum, Amazon Simple Storage Service (Amazon S3), or a cluster snapshot to migrate Amazon Redshift data.Data sharingYou can use data sharing to share live data across Amazon Redshift clusters with AWS accounts or AWS Regions.Note: Amazon Redshift data sharing is available only on RA3 node types. For RA3 node migrations, see How do I migrate my Amazon Redshift cluster to an RA3 node type?For AWS accounts, see Sharing data within an AWS account and Sharing data across AWS accounts.For AWS Regions, see Sharing data across AWS Regions.For more information, see Sharing data across clusters in Amazon Redshift.Amazon Redshift SpectrumYou can use Amazon Redshift Spectrum to query external data. You can also use Amazon Redshift Spectrum to share data on multiple clusters. For instructions, see How do I create and query an external table in Amazon Redshift Spectrum?Amazon S3You can move data from Amazon Redshift to an Amazon S3 bucket. For instructions, see How do I COPY or UNLOAD data from Amazon Redshift to an Amazon S3 bucket in another account?Or, you can use the Amazon Redshift Unload/Copy Utility to migrate data between Amazon Redshift clusters or databases to Amazon S3.Cluster snapshotYou can copy an Amazon Redshift provisioned cluster to another AWS account by creating and sharing a manual snapshot. For instructions, see How do I copy an Amazon Redshift provisioned cluster to a different AWS account?Related informationGetting started data sharingHow do I move my Amazon Redshift provisioned cluster from one VPC to another VPC?Managing snapshots using the AWS Command Line Interface (AWS CLI) and Amazon Redshift APIHow can I copy S3 objects from another AWS account?Follow"
https://repost.aws/knowledge-center/redshift-migrate-data
How can I view execution plans captured by statspack for an Amazon RDS Oracle DB instance?
I captured performance statistics for my Amazon Relational Database Service (Amazon RDS) Oracle DB instance using Statspack. But the Statspack report doesn't have any information about execution plans. How can I view execution plans for queries that were captured by Statspack?
"I captured performance statistics for my Amazon Relational Database Service (Amazon RDS) Oracle DB instance using Statspack. But the Statspack report doesn't have any information about execution plans. How can I view execution plans for queries that were captured by Statspack?Resolution1.    Take a Statspack snapshot that has a snap level greater than or equal to 6 (i_snap_level => 6) to capture SQL execution plans. For more information, see How do I check the performance statistics on an Amazon RDS DB instance that is running Oracle?2.    Confirm that the Begin Snap ID, End Snap ID, and the old hash value for the query by using the Statspack report. In the following example, the Begin Snap ID is 22 and the End Snap ID is 23:STATSPACK report forDatabase DB Id Instance Inst Num Startup Time Release RAC~~~~~~~~ ----------- ------------ -------- --------------- ----------- --- 1234567890 silent 1 03-Jan-20 00:45 12.2.0.1.0 NOHost Name Platform CPUs Cores Sockets Memory (G)~~~~ ---------------- ---------------------- ----- ----- ------- ------------ ip-172-31-22-176 Linux x86 64-bit 2 2 1 3.7Snapshot Snap Id Snap Time Sessions Curs/Sess Comment~~~~~~~~ ---------- ------------------ -------- --------- ------------------Begin Snap: 22 03-Jan-20 01:30:36 41 .8 End Snap: 23 03-Jan-20 01:39:12 40 .8 Elapsed: 8.60 (mins) Av Act Sess: 0.0 DB time: 0.11 (mins) DB CPU: 0.11 (mins)...3.    Find the query to view the execution plan by using the old hash value. The Statspack report includes different "SQL order by" sections. For example, the "SQL ordered by CPU DB/Inst" section lists CPU intensive queries. The following example uses the old hash value 73250552, which is the hash value of the most CPU intensive query around that time:...SQL ordered by CPU DB/Inst: SILENT/silent Snaps: 22-23-> Total DB CPU (s): 7-> Captured SQL accounts for 80.3% of Total DB CPU-> SQL reported below exceeded 1.0% of Total DB CPU CPU CPU per Elapsd Old Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value---------- ------------ ---------- ------ ---------- --------------- ---------- 4.03 3 1.34 60.8 4.08 528,477 73250552Module: SQL*PlusSELECT COUNT(*) FROM HOGE_TBL H1 INNER JOIN HOGE_TBL H2 USING(OBJECT_NAME) 0.75 1 0.75 11.3 0.77 18,994 2912400228Module: sqlplus@ip-172-31-22-176 (TNS V1-V3)BEGIN statspack.snap; END; 0.14 107 0.00 2.1 0.15 732 3879834072select TIME_WAITED_MICRO from V$SYSTEM_EVENT where event = 'Shared IO Pool Memory'...3.    Connect to the DB instance using an Oracle client, such as SQL*Plus.4.    
Invoke a query similar to the following to retrieve the execution plan:SELECT lpad(' ', 1 * ( depth - 1 )) || operation AS operation, object_name, cardinality, bytes, costFROM stats$sql_planWHERE plan_hash_value IN(SELECT plan_hash_value FROM stats$sql_plan_usage WHERE old_hash_value = OLD_HASH_VALUE AND snap_id BETWEEN BEGIN_SNAP_ID AND END_SNAP_ID AND plan_hash_value > 0)ORDER BY plan_hash_value, id;Note: Replace OLD_HASH_VALUE, BEGIN_SNAP_ID, and END_SNAP_ID with your own values.The following example is the execution plan for the query retrieved by SQL*Plus:SQL> col operation format a20col object_name format a20SQL>SELECT lpad(' ', 1 * ( depth - 1 )) || operation AS operation, object_name, cardinality, bytes, costFROM stats$sql_planWHERE plan_hash_value IN(SELECT plan_hash_value FROM stats$sql_plan_usage WHERE old_hash_value = 73250552 AND snap_id BETWEEN 22 AND 23 AND plan_hash_value > 0)OPERATION OBJECT_NAME CARDINALITY BYTES COST-------------------- -------------------- ----------- ---------- ----------SELECT STATEMENT 1119SORT 1 70 HASH JOIN 87756 6142920 1119 TABLE ACCESS HOGE_TBL 72992 2554720 397 TABLE ACCESS HOGE_TBL 72992 2554720 397Related InformationHow do I check the performance statistics on an Amazon RDS DB instance that is running Oracle?Oracle StatspackOracle documentation for Using StatspackFollow"
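If you take the snapshots from a shell, a minimal sketch of step 1 follows. The PERFSTAT password, RDS endpoint, port, and service name are placeholders; adjust them for your instance.

# Take a level 6 Statspack snapshot (level 6 or higher captures SQL execution plans)
sqlplus perfstat/YourPassword@your-rds-endpoint:1521/ORCL <<'EOF'
EXEC statspack.snap(i_snap_level => 6);
EOF
# Run your workload, then repeat the command so that you have a Begin/End snapshot pair to report on.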
https://repost.aws/knowledge-center/rds-oracle-view-plans-statspack
How do I use the Reserved Instance utilization report in Cost Explorer?
I want to use the Cost Explorer reports to understand the utilization of my Reserved Instances (RIs).
"I want to use the Cost Explorer reports to understand the utilization of my Reserved Instances (RIs).ResolutionYou can use the RI utilization reports in Cost Explorer to do the following:View the combined usage of your purchased RIs in the chart by selecting up to ten leases at one time.View the utilization of individual RIs in the chart by selecting the RI from the table.View the utilization of your RIs as the percentage of purchased RI hours in the chart.View the number of RI hours used against the number of RI hours purchased in the table.Define a utilization threshold, known as a utilization target, and identify RIs that meet your utilization target and RIs that are underutilized.To define the utilization target, enter the preferred value for Utilization target (%), and select Show target line on chart. You can see the target utilization as a dotted line in the chart and as an RI utilization status bar in the table.RIs with a red status bar haven't used reservation hours.RIs with a yellow status bar are under your utilization target.RIs with a green status bar have met your utilization target.Instances with a gray bar aren't using reservations.Related informationRI utilization reportsHow do I use the Reserved Instance coverage report in Cost Explorer?How can I use Cost Explorer to analyze my spending and usage?Follow"
https://repost.aws/knowledge-center/cost-explorer-reserved-instance-utilization
Why does my Spark or Hive job on Amazon EMR fail with an HTTP 503 "Slow Down" AmazonS3Exception?
"My Apache Spark or Apache Hive job on Amazon EMR fails with an HTTP 503 "Slow Down" AmazonS3Exception similar to the following:java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID: 2E8B8866BFF00645; S3 Extended Request ID: oGSeRdT4xSKtyZAcUe53LgUf1+I18dNXpL2+qZhFWhuciNOYpxX81bpFiTw2gum43GcOHR+UlJE=), S3 Extended Request ID: oGSeRdT4xSKtyZAcUe53LgUf1+I18dNXpL2+qZhFWhuciNOYpxX81bpFiTw2gum43GcOHR+UlJE="
"My Apache Spark or Apache Hive job on Amazon EMR fails with an HTTP 503 "Slow Down" AmazonS3Exception similar to the following:java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID: 2E8B8866BFF00645; S3 Extended Request ID: oGSeRdT4xSKtyZAcUe53LgUf1+I18dNXpL2+qZhFWhuciNOYpxX81bpFiTw2gum43GcOHR+UlJE=), S3 Extended Request ID: oGSeRdT4xSKtyZAcUe53LgUf1+I18dNXpL2+qZhFWhuciNOYpxX81bpFiTw2gum43GcOHR+UlJE=Short descriptionThis error occurs when the Amazon Simple Storage Service (Amazon S3) request rate for your application exceeds the typically sustained rates of over 5,000 requests per second, and Amazon S3 internally optimizes performance.To improve the success rate of your requests when accessing the S3 data using Amazon EMR, try the following approaches:Modify the retry strategy for S3 requests.Adjust the number of concurrent S3 requests.ResolutionTo help identify the issue with too many requests, it's a best practice to configure Amazon CloudWatch request metrics for the S3 bucket. You can determine the solution that best works for your use case based on these CloudWatch metrics.Configure CloudWatch request metricsTo monitor Amazon S3 requests, turn on CloudWatch request metrics for the bucket. Then, define a filter for the prefix. For a list of useful metrics to monitor, see Monitoring metrics with Amazon CloudWatch.Modify the retry strategy for S3 requestsBy default, EMRFS uses an exponential backoff strategy to retry requests to Amazon S3. The default EMRFS retry limit is 15. However, you can increase the retry limit on a new cluster, on a running cluster, or at application runtime.To increase the retry limit, change the value of fs.s3.maxRetries parameter. If you set a very high value for this parameter, then you might experience longer job duration. Try setting this parameter to a high value (for example, 20), monitor the duration overhead of the jobs, and then adjust this parameter based on your use case.For a new cluster, you can add a configuration object similar to the following when you launch the cluster:[ { "Classification": "emrfs-site", "Properties": { "fs.s3.maxRetries": "20" } }]After the cluster is launched, Spark and Hive applications running on Amazon EMR use the new limit.To increase the retry limit on a running cluster, do the following:1.    Open the Amazon EMR console.2.    In the cluster list, choose the active cluster that you want to reconfigure under Name.3.    Open the cluster details page for the cluster and choose the Configurations tab.4.    In the Filter dropdown list, select the instance group that you want to reconfigure.5.    In Reconfigure dropdown list, choose Edit in table.6.    In the configuration classification table, choose Add configuration, and then enter the following:For Classification: emrfs-siteFor Property: fs.s3.maxRetriesFor Value: the new value for the retry limit (for example, 20)7.    Select Apply this configuration to all active instance groups.8.    
Choose Save changes.After the configuration is deployed, Spark and Hive applications use the new limit.To increase the retry limit at runtime, use a Spark shell session similar to the following:spark> sc.hadoopConfiguration.set("fs.s3.maxRetries", "20")spark> val source_df = spark.read.csv("s3://awsexamplebucket/data/")spark> source_df.write.save("s3://awsexamplebucket2/output/")Here's an example of how to increase the retry limit at runtime for a Hive application:hive> set fs.s3.maxRetries=20;hive> select ....Adjust the number of concurrent S3 requestsIf you have multiple jobs (Spark, Apache Hive, or s3-dist-cp) reading and writing to the same S3 prefix, then you can adjust the concurrency. Start with the most read/write-heavy jobs and lower their concurrency to avoid excessive parallelism. If you configured cross-account access for Amazon S3, keep in mind that other accounts might also be submitting jobs to the same prefix.If you see errors when the job tries to write to the destination bucket, then reduce excessive write parallelism. For example, use Spark .coalesce() or .repartition() operations to reduce the number of Spark output partitions before writing to Amazon S3. You can also reduce the number of cores per executor or reduce the number of executors.If you see errors when the job tries to read from the source bucket, then adjust the size of objects. You can aggregate smaller objects into larger ones to reduce the number of objects that the job reads. Doing this lets your jobs read datasets with fewer read requests. For example, use s3-dist-cp to merge a large number of small files into a smaller number of large files.Related informationBest practices design patterns: optimizing Amazon S3 performanceWhy does my Amazon EMR application fail with an HTTP 403 "Access Denied" AmazonS3Exception?Why does my Amazon EMR application fail with an HTTP 404 "Not Found" AmazonS3Exception?Follow"
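For the new-cluster case, the following AWS CLI sketch applies the same emrfs-site classification at launch. The release label, applications, instance settings, key name, and subnet ID are placeholders, not requirements.

aws emr create-cluster \
  --name "emrfs-retry-example" \
  --release-label emr-6.9.0 \
  --applications Name=Spark Name=Hive \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --ec2-attributes KeyName=your-key,SubnetId=subnet-0123456789abcdef0 \
  --configurations '[{"Classification":"emrfs-site","Properties":{"fs.s3.maxRetries":"20"}}]'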
https://repost.aws/knowledge-center/emr-s3-503-slow-down
How do I build a Direct Connect LAG?
I want to build an AWS Direct Connect link aggregation group (LAG).
"I want to build a AWS Direct Connect link aggregation group (LAG).Short descriptionA LAG is a logical interface that uses the link aggregation control protocol (LACP) to aggregate multiple connections at a single Direct Connect endpoint. This aggregation allows you to treat the multiple connections as a single managed connection.All LAGs have an attribute that determines the minimum number of connections in the LAG required for the LAG to be operational. By default, new LAGs have this attribute set to 0. You can update your LAG to specify a different value. If the number of operational connections fall below your updating setting, then your LAG becomes non-operational. This attribute is also used to prevent over-utilization of the remaining connections.Before you build a LAG, make sure of the following:Your connection must be on the same AWS device (terminating at the same Direct Connect endpoint)All connections must use the same bandwidthThe overall connection limit for the AWS Region where you're creating the LAG can't be exceededAll connections must be dedicated and have speeds of 1 Gbps, 10 Gbps, 100 GbpsImportant: Connections that are associated to the LAG must be on the same device. AWS can't guarantee enough available ports on a given Direct Connect endpoint when you create a LAG or associate connections to a LAG. All connections in a LAG operate in Active/Active mode.ResolutionCreate a LAG by one of the following methods:Create a LAG with new connectionsCreate a LAG with existing connectionsCreate a LAG with new connections1.    Sign into the AWS Management Console and open the Direct Connect console.2.    In the navigation pane, choose LAGs.3.    Choose Create LAG.4.    Under Lag creation type, choose Request new connections, and provide the following information:Lag name: a name for your LAG.Location: the location of your LAG.Port speed: The port speed of your connections.Number of new connections: The number of new connections you're going to create. You can have a maximum of four connections for port speeds of 1G or 10G and two connections for a port speed of 100G.Note: Steps 5 and 6 are optional. If you don't want to configure MAC security settings or add or remove a tag, then skip to step 7.5.    (Optional) Configure MAC security for your connection. Under Additional Settings, select Request a MACsec capable port.Note: MACsec is only available on dedicated connections.6.    (Optional) Add or remove a tag as follows:To add a tag, choose Add tag, and then enter the key name for Key and enter the key value for Value.To remove a tag, choose Remove tag next to the tag.7.    Choose Create LAG.Note: Make sure you download the Letter of Authorization and Connecting Facility Assignment (LOA-CFA) for each new physical connection individually from the Direct Connect console.Create a LAG with existing connections1.    Sign into the AWS Management Console and open the Direct Connect console.2.    In the navigation pane, choose LAGs.3.    Choose Create LAG.4.    Under Lag creation type, choose Use existing connections, and provide the following information:Lag name: a name for your LAG.Existing connections: The Direct Connect connection that you're going to use for the LAG.(Optional) Number of new connections: The number of new connections you're going to create. You can have a maximum of four connections for port speeds of 1G or 10G and two connections for a port speed of 100G.Minimum links: The minimum number of connections that must be operational for the LAG to be operational. 
If you don't specify a value, then a default value of 0 is assigned.5.    (Optional) Add or remove a tag as follows:To add a tag, choose Add tag, and then enter the key name for Key and enter the key value for Value.To remove a tag, choose Remove tag next to the tag.6.    Choose Create LAG.Follow"
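The same operations are available from the AWS CLI. The following is a minimal sketch; the location code, connection ID, and LAG ID are placeholders.

# Create a LAG with two new 10 Gbps dedicated connections
aws directconnect create-lag \
  --location EqDC2 \
  --number-of-connections 2 \
  --connections-bandwidth 10Gbps \
  --lag-name my-lag

# Associate an existing dedicated connection with a LAG, and set the minimum links attribute
aws directconnect associate-connection-with-lag \
  --connection-id dxcon-EXAMPLE \
  --lag-id dxlag-EXAMPLE
aws directconnect update-lag \
  --lag-id dxlag-EXAMPLE \
  --minimum-links 1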
https://repost.aws/knowledge-center/direct-connect-create-lag
Why is my Amazon DynamoDB maximum latency metric high when the average latency is normal?
"When I review the Amazon CloudWatch metrics for my Amazon DynamoDB workloads, the maximum latency metric is high. But, average latency is normal."
"When I review the Amazon CloudWatch metrics for my Amazon DynamoDB workloads, the maximum latency metric is high. But, average latency is normal.ResolutionWhen you analyze the CloudWatch metric SuccessfulRequestLatency, it's a best practice to check the average latency. Maximum latency doesn't give a picture of overall latency on your DynamoDB table. Instead, it shows the maximum time taken by a single request in that period. For example, if you have 100 requests on a DynamoDB table at one time, even if 99 requests take 10 ms and a single request takes 100 ms, then the maximum latency metric is 100 ms.DynamoDB is a mass-scaled distribution system, with thousands of nodes in the backend fleet. So, a DynamoDB table might have multiple partitions in the tablespace, and each partition has multiple copies in the backend fleet. When you make an API call to DynamoDB, the DynamoDB service endpoint receives your call, and then routes it to one of the back end nodes for processing. When the call is successfully processed, DyanamoDB routes the results back to your client.In most cases, the API call is successfully processed in a single attempt, and you observe small latency on the client side. But, sometimes the first attempt fails if the back end node is experiencing:A busy periodFailoverPartition splitConnectivity issuesIn cases like these, the first attempt fails within a timeout on the server side (5000 ms). Then, the server automatically retries the API call on another node, often multiple times. The server returns the result back to your client when the API call is successfully processed. When this happens, you observe elevated latency for that particular request.So, a high maximum latency metric is generally not a cause for concern. If the DynamoDB service observes a consistently high latency from one node, then the service automatically removes that component from the back end fleet. You might observe an elevated level of latency for a certain percentage of API calls when the previously mentioned localized failure occurs on the service side. This is reflected in a high level of maximum SuccessfulRequestLatency in the CloudWatch metrics for the related DynamoDB tables. For this reason, localized failures can increase your maximum latency, but you do not need to take any action to control this failure.But, you can configure your application to react quickly by failing fast with exponential backoff retry. This means that the new request hits the new node, and you get faster results. For more information, see Tuning AWS Java SDK HTTP request settings for latency-aware Amazon DynamoDB applications.Related informationHow can I troubleshoot high latency on an Amazon DynamoDB table?Latency metrics loggingFollow"
https://repost.aws/knowledge-center/dynamodb-maximum-latency
"Why is my EC2 instance slow, unresponsive, or inaccessible, but my CPU and memory consumption aren't high?"
"I'm having trouble connecting to an Amazon Elastic Compute Cloud (Amazon EC2) instance that has an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) root volume. The connection is slow or times out, even though CPU and memory aren't fully used."
"I'm having trouble connecting to an Amazon Elastic Compute Cloud (Amazon EC2) instance that has an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) root volume. The connection is slow or times out, even though CPU and memory aren't fully used.Short descriptionThere are many possible causes of slow or unresponsive EC2 instances when CPU and memory aren't fully used, including:Problems with an external service that your instance relies on.Disk thrashing.Network connectivity issues.This resolution discusses one common cause: depleted I/O burst credits on the gp2 root volume. In most Regions, gp2 is the default storage drive for root volumes. For more information, see Amazon EBS volume types.ResolutionCheck I/O burst credit balanceOpen the Amazon EC2 console.In the navigation pane, choose Instances, and then select the instance.On the Storage tab, choose the Volume ID of the root device.Choose the Monitoring tab for the EBS volume, and then find the Burst Balance metric. A burst balance of 0% means that all the burst credits are used and the volume can't burst above its baseline performance level.Use one of the following methods to resolve this problem.Estimate IOPS requirements, and then modify the volume to support the loadFind the root EBS volume's VolumeReadOps and VolumeWriteOps metrics in Amazon CloudWatch. For more information, see Searching for available metrics.Use the CloudWatch Sum statistic to get the peak levels of VolumeReadOps and VolumeWriteOps. Then, add the two numbers together. Example:VolumeReadOps = 737,000 VolumeWriteOps = 199,000Total = 936,000To estimate how many IOPS that you need, divide the total from the previous step by the number of seconds in the measurement period.Example:Measurement period = 5 minutes (300 seconds)936,000 / 300 = 3,120 IOPSBaseline performance for gp2 volumes scales linearly at 3 IOPS per GiB of volume size. This means that a volume with 3,120 IOPS must be scaled up to 1,040 GiB to improve performance (3,120 / 3 = 1,040).Or, change your volume from gp2 to Provisioned IOPS SSD (io1). With io1 volumes, you can specify how many IOPS you need without increasing the volume size. For more information about when to use io1 volumes, see Provisioned IOPS SSD volumes. For a cost comparison of gp2 and io1 volumes, see Amazon EBS pricing.Change how you distribute your workloadWhen you have multiple applications on an EC2 instance, those applications compete for the root EBS volume’s IOPS. As your workload grows over time, the IOPS demand increases. To improve performance on your instance, consider using additional non-root EBS volumes for your applications. Also, consider using the root volume for the operating system only.If you still have trouble connecting to your EC2 instance after modifying the volume size and the workload distribution, see Troubleshoot connecting to your instance.Related informationI/O characteristics and monitoringHow do I optimize the performance of my Amazon EBS Provisioned IOPS volumes?How are charges for Amazon EBS volumes calculated on my bill?Follow"
https://repost.aws/knowledge-center/ec2-instance-slow-cpu-not-high
How do I recreate a default subnet for my default Amazon VPC?
I deleted the default subnet for my default Amazon Virtual Private Cloud (Amazon VPC). How do I recreate a default subnet for my default Amazon VPC?
"I deleted the default subnet for my default Amazon Virtual Private Cloud (Amazon VPC). How do I recreate a default subnet for my default Amazon VPC?ResolutionYou can create a default subnet in any Availability Zone that doesn't have a default subnet. For more information, see Create a default subnet.Note: Default subnets can be created only in a default Amazon VPC. If you deleted your default Amazon VPC, you can create a new default Amazon VPC that contains a default subnet in each Availability Zone in the Region.Related informationDefault VPCsDelete your default subnets and default VPCWhat is Amazon VPC?Follow"
https://repost.aws/knowledge-center/recreate-default-vpc
How can I enable logs on an Aurora Serverless cluster so I can view and download the logs?
I want to audit database activity to meet compliance requirements for my Amazon Aurora Serverless v1 (Amazon Aurora Serverless version 1) clusters that run Amazon Aurora MySQL-Compatible Edition or Amazon Aurora PostgreSQL-Compatible Edition. Then I want to publish the logs to Amazon CloudWatch to view or download them. How can I do that?
"I want to audit database activity to meet compliance requirements for my Amazon Aurora Serverless v1 (Amazon Aurora Serverless version 1) clusters that run Amazon Aurora MySQL-Compatible Edition or Amazon Aurora PostgreSQL-Compatible Edition. Then I want to publish the logs to Amazon CloudWatch to view or download them. How can I do that?Short descriptionFor Aurora MySQL-Compatible DB clusters, you can enable the slow query log, general log, or audit logs. For Aurora PostgreSQL-Compatible DB clusters, you can control the level of logging by using the log_statement parameter.By design, Aurora Serverless V1 connects to a proxy fleet of DB instances that scales automatically. There isn't a direct DB instance to access and host the log files. This means that you can't view the logs directly from the Amazon Relational Database Service (Amazon RDS) console. However, you can view and download logs that are sent to the CloudWatch console.To enable Advanced Auditing, see How can I enable audit logging for my Amazon Aurora MySQL DB cluster and publish the logs to CloudWatch?ResolutionTo enable logs, first modify the cluster parameter groups for an Aurora Serverless V1 cluster. Aurora Serverless V1 then automatically uploads the logs to CloudWatch. For MySQL-compatible DB clusters, use an Aurora MySQL 5.6/5.7 cluster parameter group family based on your cluster version. For PostgreSQL-compatible DB clusters, use an Aurora PostgreSQL 10 cluster parameter group family.Enabling the logging for Aurora Serverless V1Note: If your DB cluster is already using a custom DB cluster parameter group, then skip steps 1 and 3 of this process.Create a custom DB cluster parameter group.Modify the DB cluster parameter group values. For MySQL-compatible clusters, the error log is enabled by default. To enable the slow query and general logs, modify the following parameters:general_log=1slow_query_log=1For PostgreSQL-compatible clusters, log_statement parameter controls which SQL statements are logged, and the default value is none. Modify the following parameter to log the query and error logs:log_statement=allTip: It's a best practice to set log_statement to all to log all statements when you debug issues in your DB instance. To log all data definition language (DDL) statements (such as CREATE, ALTER, and DROP), set the parameter value to ddl. To log all DDL and data modification language (DML) statements (such as INSERT, UPDATE, and DELETE), set the parameter value to mod.Modify your DB cluster to use the custom DB parameter group that you created in step 2.After you modify your DB cluster, Aurora Serverless V1 attempts to perform an automatic seamless scale to apply the parameter changes.Note: Aurora Serverless V1 uses the ForceApplyCapacityChange timeout action when applying this change. 
This means that that if your Aurora Serverless V1 DB cluster can't find a scaling point before timing out, connections might be dropped.Viewing the logs in CloudWatchBecause Aurora Serverless V1 automatically publishes these logs to CloudWatch, you can view and download the logs and view in the CloudWatch console:Open the CloudWatch console.Choose Log groups from the navigation pane.Select the appropriate log group from the list.For more information, see Monitoring log events in Amazon CloudWatch.Related informationHow do I publish logs for Amazon RDS or Aurora MySQL-Compatible instances to CloudWatch?Publishing Aurora MySQL-Compatible logs to Amazon CloudWatch LogsPublishing Aurora PostgreSQL-Compatible logs to Amazon CloudWatch LogsPublishing database logs to Amazon CloudWatch LogsFollow"
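Here's a rough AWS CLI sketch of the three parameter-group steps for a MySQL-compatible cluster. The parameter group name, family, and cluster identifier are placeholder values; pick the family that matches your cluster version (for example, aurora5.6, aurora-mysql5.7, or aurora-postgresql10), and for PostgreSQL-compatible clusters set log_statement instead of the MySQL log parameters.

# 1. Create a custom DB cluster parameter group (name, family, and description are placeholders)
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-serverless-logs \
    --db-parameter-group-family aurora-mysql5.7 \
    --description "Aurora Serverless v1 logging"

# 2. Turn on the general and slow query logs (MySQL-compatible example)
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-serverless-logs \
    --parameters "ParameterName=general_log,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate"

# 3. Attach the parameter group to the cluster (cluster identifier is a placeholder)
aws rds modify-db-cluster \
    --db-cluster-identifier my-serverless-cluster \
    --db-cluster-parameter-group-name my-serverless-logs \
    --apply-immediately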
https://repost.aws/knowledge-center/aurora-serverless-logs-enable-view