diff --git "a/questions/SAA-C03-v2.json" "b/questions/SAA-C03-v2.json" new file mode 100644--- /dev/null +++ "b/questions/SAA-C03-v2.json" @@ -0,0 +1,8060 @@ +[ + { + "question": "A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage ser vers. They have a Lambda function that connects to a MongoDB Atlas, which is a popular Database as a Ser vice (DBaaS) platform and also uses a third party A PI to fetch certain data for their application. One of th e developers was instructed to create the environme nt variables for the MongoDB database hostname, username, and pa ssword as well as the API credentials that will be used by the Lambda function for DEV, SIT, UAT, and PROD environments. Considering that the Lambda function is storing sen sitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentia ls in plain text? Select the best option that provides ma ximum security.", + "options": [ + "A. Enable SSL encryption that leverages on AWS Cloud HSM to store and encrypt the sensitive information.", + "B. AWS Lambda does not provide encryption for the en vironment variables. Deploy your code", + "C. There is no need to do anything because, by defau lt, AWS Lambda already encrypts the environment", + "D. Create a new KMS key and use it to enable encrypt ion helpers that leverage on AWS Key Management" + ], + "correct": "D. Create a new KMS key and use it to enable encrypt ion helpers that leverage on AWS Key Management", + "explanation": "Explanation/Reference: When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lam bda function is invoked, those values are decrypted and made available to the Lambda code. The first time you create or update Lambda function s that use environment variables in a region, a def ault service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, if you wish to use encryption h elpers and use KMS to encrypt environment variables after your Lambda function is created, you must create yo ur own AWS KMS key and choose it instead of the def ault key. The default key will give errors when chosen. Creating your own key gives you more flexibility, i ncluding the ability to create, rotate, disable, and define access controls, and to audit the encryption keys u sed to protect your data. The option that says: There is no need to do anythi ng because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service is incorrect. Although Lambda encrypts the environment variables in your function by default, the sensitive information would still be visible to other users who have access to the Lambda console. This is because Lambda uses a d efault KMS key to encrypt the variables, which is usually accessible by other users. The best option in this scenario is to use encryption helpers to secure your environmen t variables. The option that says: Enable SSL encryption that le verages on AWS CloudHSM to store and encrypt the sensitive information is also incorrect since enabl ing SSL would encrypt data only when in-transit. Your other teams would still be able to view the pl aintext at-rest. Use AWS KMS instead. The option that says: AWS Lambda does not provide e ncryption for the environment variables. 
Deploy your code to an EC2 instance instead is incorrect since, as mentioned, Lambda does provide encryption functionality for environment variables. References: https://docs.aws.amazon.com/lambda/latest/dg/env_variables.html#env_encrypt https://docs.aws.amazon.com/lambda/latest/dg/tutorial-env_console.html Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/ AWS Lambda Overview - Serverless Computing in AWS: https://www.youtube.com/watch?v=bPVX1zHwAnY",
"references": ""
},
{
"question": "A company hosted an e-commerce website on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The Solutions Architect noticed that the website is receiving a large number of illegitimate external requests from multiple systems with IP addresses that constantly change. To resolve the performance issues, the Solutions Architect must implement a solution that would block the illegitimate requests with minimal impact on legitimate traffic. Which of the following options fulfills this requirement?",
"options": [
"A. Create a regular rule in AWS WAF and associate the web ACL to an Application Load Balancer.",
"B. Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer.",
"C. Create a custom rule in the security group of the Application Load Balancer to block the offending requests.",
"D. Create a custom network ACL and associate it with the subnet of the Application Load Balancer to block the offending requests."
],
"correct": "B. Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer.",
"explanation": "Explanation/Reference: AWS WAF is tightly integrated with Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync services that AWS customers commonly use to deliver content for their websites and applications. When you use AWS WAF on Amazon CloudFront, your rules run in all AWS Edge Locations, located around the world close to your end-users. This means security doesn't come at the expense of performance. Blocked requests are stopped before they reach your web servers. When you use AWS WAF on regional services, such as Application Load Balancer, Amazon API Gateway, and AWS AppSync, your rules run in the region and can be used to protect Internet-facing resources as well as internal resources. A rate-based rule tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span. You can use this type of rule to put a temporary block on requests from an IP address that's sending excessive requests. Based on the given scenario, the requirement is to limit the number of requests from the illegitimate sources without affecting the genuine requests. To accomplish this requirement, you can use an AWS WAF web ACL. There are two types of rules in creating your own web ACL rule: regular and rate-based rules. You need to select the latter to add a rate limit to your web ACL. After creating the web ACL, you can associate it with the ALB. When the rule action triggers, AWS WAF applies the action to additional requests from the IP address until the request rate falls below the limit. Hence, the correct answer is: Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer.
The option that says: Create a regular rule in AWS WAF and associate the web ACL to an Application Load Balancer is incorrect because a regular rule only matches the statement defined in the rule. If you need to add a rate limit to your rule, you should create a rate-based rule. The option that says: Create a custom network ACL and associate it with the subnet of the Application Load Balancer to block the offending requests is incorrect. Although NACLs can help you block incoming traffic, this option wouldn't be able to limit the number of requests from a single IP address that is dynamically changing. The option that says: Create a custom rule in the security group of the Application Load Balancer to block the offending requests is incorrect because a security group can only allow incoming traffic. Remember that you can't deny traffic using security groups. In addition, it is not capable of limiting the rate of traffic to your application, unlike AWS WAF. References: https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html https://aws.amazon.com/waf/faqs/ Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, CloudHSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo",
"references": ""
},
{
"question": "There was an incident in your production environment where the user data stored in the S3 bucket has been accidentally deleted by one of the Junior DevOps Engineers. The issue was escalated to your manager and after a few days, you were instructed to improve the security and protection of your AWS resources. What combination of the following options will protect the S3 objects in your bucket from both accidental deletion and overwriting? (Select TWO.)",
"options": [
"A. Enable Versioning",
"B. Enable Amazon S3 Intelligent-Tiering",
"C. Provide access to S3 data strictly through pre-signed URL only",
"D. Enable Multi-Factor Authentication Delete"
],
"correct": "A. Enable Versioning, D. Enable Multi-Factor Authentication Delete",
"explanation": "Explanation/Reference: By using Versioning and enabling MFA (Multi-Factor Authentication) Delete, you can secure and recover your S3 objects from accidental deletion or overwrite. Versioning is a means of keeping multiple variants of an object in the same bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. You can also optionally add another layer of security by configuring a bucket to enable MFA (Multi-Factor Authentication) Delete, which requires additional authentication for either of the following operations: - Change the versioning state of your bucket - Permanently delete an object version MFA Delete requires two forms of authentication together: - Your security credentials - The concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device Providing access to S3 data strictly through pre-signed URL only is incorrect since a pre-signed URL gives access to the object identified in the URL. Pre-signed URLs are useful when customers perform an object upload to your S3 bucket, but they do not help in preventing accidental deletes.
Disallowing S3 Delete using an IAM bucket policy is incorrect since you still want users to be able to delete objects in the bucket, and you just want to prevent accidental deletions. Disallowing S3 Delete using an IAM bucket policy will restrict all delete operations on your bucket. Enabling Amazon S3 Intelligent-Tiering is incorrect since S3 Intelligent-Tiering does not help in this situation.",
"references": "https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/"
},
{
"question": "A telecommunications company is planning to give AWS Console access to developers. Company policy mandates the use of identity federation and role-based access control. Currently, the roles are already assigned using groups in the corporate Active Directory. In this scenario, what combination of the following services can provide developers access to the AWS console? (Select TWO.)",
"options": [
"A. AWS Directory Service Simple AD",
"B. IAM Roles",
"C. IAM Groups",
"D. AWS Directory Service AD Connector"
],
"correct": "B. IAM Roles, D. AWS Directory Service AD Connector",
"explanation": "Explanation/Reference: Considering that the company is using a corporate Active Directory, it is best to use AWS Directory Service AD Connector for easier integration. In addition, since the roles are already assigned using groups in the corporate Active Directory, it would be better to also use IAM Roles. Take note that you can assign an IAM Role to the users or groups from your Active Directory once it is integrated with your VPC via the AWS Directory Service AD Connector. AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for customers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)-aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, groups, devices, and access. AWS Directory Service Simple AD is incorrect because this just provides a subset of the features offered by AWS Managed Microsoft AD, including the ability to manage user accounts and group memberships, create and apply group policies, securely connect to Amazon EC2 instances, and provide Kerberos-based single sign-on (SSO). In this scenario, the more suitable component to use is the AD Connector since it is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory. IAM Groups is incorrect because this is just a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. In this scenario, the more suitable one to use is IAM Roles in order to have the permissions to create AWS Directory Service resources. Lambda is incorrect because this is primarily used for serverless computing.",
"references": "https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/"
},
{
"question": "An AI-powered Forex trading application consumes thousands of data sets to train its machine learning model. The application's workload requires a high-performance, parallel hot storage to process the training datasets concurrently.
It also needs cost-effective cold storage to archive those datasets that yield low profit. Which of the following Amazon storage services should the developer use?",
"options": [
"A. Use Amazon FSx For Windows File Server and Amazon S3 for hot and cold storage respectively.",
"B. Use Amazon Elastic File System and Amazon S3 for hot and cold storage respectively.",
"C. Use Amazon FSx For Lustre and Amazon EBS Provisioned IOPS SSD (io1) volumes for hot and cold storage respectively.",
"D. Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively."
],
"correct": "D. Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively.",
"explanation": "Explanation/Reference: Hot storage refers to the storage that keeps frequently accessed data (hot data). Warm storage refers to the storage that keeps less frequently accessed data (warm data). Cold storage refers to the storage that keeps rarely accessed data (cold data). In terms of pricing, the colder the data, the cheaper it is to store, and the costlier it is to access when needed. Amazon FSx For Lustre is a high-performance file system for fast processing of workloads. Lustre is a popular open-source parallel file system which stores data across multiple network file servers to maximize performance and reduce bottlenecks. Amazon FSx for Windows File Server is a fully managed Microsoft Windows file system with full support for the SMB protocol, Windows NTFS, and Microsoft Active Directory (AD) integration. Amazon Elastic File System is a fully managed file storage service that makes it easy to set up and scale file storage in the Amazon Cloud. Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. S3 offers different storage tiers for different use cases (frequently accessed data, infrequently accessed data, and rarely accessed data). The question has two requirements: High-performance, parallel hot storage to process the training datasets concurrently. Cost-effective cold storage to keep the archived datasets that are accessed infrequently. In this case, we can use Amazon FSx For Lustre for the first requirement, as it provides a high-performance, parallel file system for hot data. On the second requirement, we can use Amazon S3 for storing cold data. Amazon S3 supports a cold storage system via Amazon S3 Glacier / Glacier Deep Archive. Hence, the correct answer is: Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively. Using Amazon FSx For Lustre and Amazon EBS Provisioned IOPS SSD (io1) volumes for hot and cold storage respectively is incorrect because the Provisioned IOPS SSD (io1) volumes are designed for storing hot data (data that are frequently accessed) used in I/O-intensive workloads. EBS has a storage option called \"Cold HDD,\" but due to its price, it is not ideal for data archiving. EBS Cold HDD is much more expensive than Amazon S3 Glacier / Glacier Deep Archive and is often utilized in applications where sequential cold data is read less frequently. Using Amazon Elastic File System and Amazon S3 for hot and cold storage respectively is incorrect. Although EFS supports concurrent access to data, it does not have the high-performance ability that is required for machine learning workloads. Using Amazon FSx For Windows File Server and Amazon S3 for hot and cold storage respectively is incorrect because Amazon FSx For Windows File Server does not have a parallel file system, unlike Lustre.
References: https://aws.amazon.com/fsx/ https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-storage-optimization/aws-storage-services.html https://aws.amazon.com/blogs/startups/picking-the-right-data-store-for-your-workload/",
"references": ""
},
{
"question": "A newly hired Solutions Architect is assigned to manage a set of CloudFormation templates that are used in the company's cloud architecture in AWS. The Architect accessed the templates and tried to analyze the configured IAM policy for an S3 bucket. { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:Get*\", \"s3:List*\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": \"s3:PutObject\", \"Resource\": \"arn:aws:s3:::boracay/*\" } ] } What does the above IAM policy allow? (Select THREE.)",
"options": [
"A. An IAM user with this IAM policy is allowed to read objects in the boracay S3 bucket but not allowed to list the objects in the bucket.",
"B. An IAM user with this IAM policy is allowed to change access rights for the boracay S3 bucket.",
"C. An IAM user with this IAM policy is allowed to write objects into the boracay S3 bucket.",
"D. An IAM user with this IAM policy is allowed to read objects from the boracay S3 bucket."
],
"correct": "",
"explanation": "Explanation/Reference: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, AWS Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or, if programmatic access is allowed, the user can use access keys to work with the CLI or API. Based on the provided IAM policy, the user is only allowed to get, write, and list all of the objects for the boracay S3 bucket. The s3:PutObject basically means that you can submit a PUT object request to the S3 bucket to store data. Hence, the correct answers are: - An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account. - An IAM user with this IAM policy is allowed to write objects into the boracay S3 bucket. - An IAM user with this IAM policy is allowed to read objects from the boracay S3 bucket. The option that says: An IAM user with this IAM policy is allowed to change access rights for the boracay S3 bucket is incorrect because the template does not have any statements which allow the user to change access rights in the bucket.
The option that says: An IAM user with this IAM pol icy is allowed to read objects in the boracay S3 bucket but not allowed to list the objects in the b ucket is incorrect because it can clearly be seen i n the template that there is a s3:List* which permits the user to list objects. The option that says: An IAM user with this IAM pol icy is allowed to read and delete objects from the boracay S3 bucket is incorrect. Although you can re ad objects from the bucket, you cannot delete any objects. References: https://docs.aws.amazon.com/AmazonS3/latest/API/RES TObjectOps.html https://docs.aws.amazon.com/IAM/latest/UserGuide/ac cess_policies.html", + "references": "" + }, + { + "question": "A retail website has intermittent, sporadic, and un predictable transactional workloads throughout the day that are hard to predict. The website is currently hosted on-premises and is slated to be migrated to AWS. A new relational database is needed that autoscales c apacity to meet the needs of the application's peak load and scales back down when the surge of activity is over. Which of the following option is the MOST cost-effe ctive and suitable database setup in this scenario?", + "options": [ + "A. Launch a DynamoDB Global table with Auto Scaling enabled.", + "B. Launch an Amazon Aurora Serverless DB cluster the n set the minimum and maximum capacity for the", + "C. Launch an Amazon Redshift data warehouse cluster with Concurrency Scaling.", + "D. Launch an Amazon Aurora Provisioned DB cluster wi th burstable performance DB instance class types." + ], + "correct": "B. Launch an Amazon Aurora Serverless DB cluster the n set the minimum and maximum capacity for the", + "explanation": "Explanation/Reference: Amazon Aurora Serverless is an on-demand, auto-scal ing configuration for Amazon Aurora. An Aurora Serverless DB cluster is a DB cluster that automati cally starts up, shuts down, and scales up or down its compute capacity based on your application's needs. Aurora Serverless provides a relatively simple, co st- effective option for infrequent, intermittent, spor adic or unpredictable workloads. It can provide thi s because it automatically starts up, scales compute capacity to match your application's usage and shuts down when it's not in use. Take note that a non-Serverless DB cluster for Auro ra is called a provisioned DB cluster. Aurora Serve rless clusters and provisioned clusters both have the sam e kind of high-capacity, distributed, and highly av ailable storage volume. When you work with Amazon Aurora without Aurora Ser verless (provisioned DB clusters), you can choose your DB instance class size and create Aurora Repli cas to increase read throughput. If your workload changes, you can modify the DB instance class size and change the number of Aurora Replicas. This mode l works well when the database workload is predictabl e, because you can adjust capacity manually based o n the expected workload. However, in some environments, workloads can be int ermittent and unpredictable. There can be periods o f heavy workloads that might last only a few minutes or hours, and also long periods of light activity, or even no activity. Some examples are retail websites with in termittent sales events, reporting databases that p roduce reports when needed, development and testing enviro nments, and new applications with uncertain requirements. In these cases and many others, it ca n be difficult to configure the correct capacity at the right times. 
It can also result in higher costs when you pay for capacity that isn't used. With Aurora Serverless , you can create a database endpoint without specifying the DB instance class s ize. You set the minimum and maximum capacity. With Auro ra Serverless, the database endpoint connects to a proxy fleet that routes the workload to a fleet of resources that are automatically scaled. Because of the proxy fleet, connections are continuous as Aurora Serverless sca les the resources automatically based on the minimu m and maximum capacity specifications. Database clien t applications don't need to change to use the prox y fleet. Aurora Serverless manages the connections automatic ally. Scaling is rapid because it uses a pool of \"w arm\" resources that are always ready to service requests . Storage and processing are separate, so you can s cale down to zero processing and pay only for storage. Aurora Serverless introduces a new serverless DB en gine mode for Aurora DB clusters. Non-Serverless DB clusters use the provisioned DB engine mode. Hence, the correct answer is: Launch an Amazon Auro ra Serverless DB cluster then set the minimum and maximum capacity for the cluster. The option that says: Launch an Amazon Aurora Provi sioned DB cluster with burstable performance DB instance class types is incorrect because an Aurora Provisioned DB cluster is not suitable for intermi ttent, sporadic, and unpredictable transactional workloads . This model works well when the database workload is predictable because you can adjust capacity manuall y based on the expected workload. A better database setup here is to use an Amazon Aurora Serverless cl uster. The option that says: Launch a DynamoDB Global tabl e with Auto Scaling enabled is incorrect because although it is using Auto Scaling, the scenario exp licitly indicated that you need a relational databa se to handle your transactional workloads. DynamoDB is a NoSQL d atabase and is not suitable for this use case. Moreover, the use of a DynamoDB Global table is not warranted since this is primarily used if you need a fully managed, multi-region, and multi-master database th at provides fast, local, read and write performance for massively scaled, global applications. The option that says: Launch an Amazon Redshift dat a warehouse cluster with Concurrency Scaling is inc orrect because this type of database is primarily used for online analytical processing (OLAP) and not for on line transactional processing (OLTP). Concurrency Scalin g is simply an Amazon Redshift feature that automat ically and elastically scales query processing power of yo ur Redshift cluster to provide consistently fast pe rformance for hundreds of concurrent queries. References: https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/aurora-serverless.how-it-works.html https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/aurora-serverless.html", + "references": "" + }, + { + "question": "A popular social media website uses a CloudFront we b distribution to serve their static contents to th eir millions of users around the globe. They are receiv ing a number of complaints recently that their user s take a lot of time to log into their website. There are also occasions when their users are getting HTTP 50 4 errors. You are instructed by your manager to signi ficantly reduce the user's login time to further op timize the system. Which of the following options should you use toget her to set up a cost-effective solution that can im prove your application's performance? 
(Select TWO.)",
"options": [
"A. Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users.",
"B. Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.",
"D. Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses."
],
"correct": "A. Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, D. Set up an origin failover by creating an origin group with two origins.",
"explanation": "Explanation/Reference: Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses at the following points: - After CloudFront receives a request from a viewer (viewer request) - Before CloudFront forwards the request to the origin (origin request) - After CloudFront receives the response from the origin (origin response) - Before CloudFront forwards the response to the viewer (viewer response) In the given scenario, you can use Lambda@Edge to allow your Lambda functions to customize the content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users. In addition, you can set up an origin failover by creating an origin group with two origins, with one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin fails. This will alleviate the occasional HTTP 504 errors that users are experiencing. Therefore, the correct answers are: - Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users. - Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses. The option that says: Use multiple and geographically disperse VPCs to various AWS regions then create a transit VPC to connect all of your resources. In order to handle the requests faster, set up Lambda functions in each region using the AWS Serverless Application Model (SAM) service is incorrect because of the same reason provided above. Although setting up multiple VPCs across various regions which are connected with a transit VPC is valid, this solution still entails higher setup and maintenance costs. A more cost-effective option would be to use Lambda@Edge instead. The option that says: Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution is incorrect because improving the cache hit ratio for the CloudFront distribution is irrelevant in this scenario. You can improve your cache performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content. However, take note that the problem in the scenario is the sluggish authentication process of your global users and not just the caching of the static objects. The option that says: Deploy your application to multiple AWS regions to accommodate your users around the world.
Set up a Route 53 record with latency routin g policy to route incoming traffic to the region th at provides the best latency to the user is incorrect because a lthough this may resolve the performance issue, thi s solution entails a significant implementation cost since you have to deploy your application to multiple AWS re gions. Remember that the scenario asks for a solution that will improve the performance of the application with minimal cost. References: https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/high_availability_origin_failover.h tml https://docs.aws.amazon.com/lambda/latest/dg/lambda -edge.html Check out these Amazon CloudFront and AWS Lambda Ch eat Sheets: https://tutorialsdojo.com/amazon-cloudfront/ https://tutorialsdojo.com/aws-lambda/", + "references": "" + }, + { + "question": "A popular mobile game uses CloudFront, Lambda, and DynamoDB for its backend services. The player data is persisted on a DynamoDB table and the static assets are distributed by CloudFront. However, there are a lot of complaints that saving and retrieving player inform ation is taking a lot of time. To improve the game's performance, which AWS servic e can you use to reduce DynamoDB response times from milliseconds to microseconds?", + "options": [ + "A. DynamoDB Auto Scaling", + "B. Amazon ElastiCache C. AWS Device Farm", + "D. Amazon DynamoDB Accelerator (DAX)" + ], + "correct": "D. Amazon DynamoDB Accelerator (DAX)", + "explanation": "Explanation/Reference: Amazon DynamoDB Accelerator (DAX) is a fully manage d, highly available, in-memory cache that can reduc e Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. Amazon ElastiCache is incorrect because although yo u may use ElastiCache as your database cache, it wi ll not reduce the DynamoDB response time from milliseconds to microseconds as compared with DynamoDB DAX. AWS Device Farm is incorrect because this is an app testing service that lets you test and interact wi th your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. DynamoDB Auto Scaling is incorrect because this is primarily used to automate capacity management for your tables and global secondary indexes. References: https://aws.amazon.com/dynamodb/dax https://aws.amazon.com/device-farm Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/", + "references": "" + }, + { + "question": "A popular social network is hosted in AWS and is us ing a DynamoDB table as its database. There is a requirement to implement a 'follow' feature where u sers can subscribe to certain updates made by a particular user and be notified via email. Which of the following is the most suitable solution that y ou should implement to meet the requirement?", + "options": [ + "A. Using the Kinesis Client Library (KCL), write an application that leverages on DynamoDB Streams Kine sis", + "B. Enable DynamoDB Stream and create an AWS Lambda t rigger, as well as the IAM role which contains all of the permissions that the Lambda function will ne ed at runtime. The data from the stream record will be", + "C. Set up a DAX cluster to access the source DynamoD B table. Create a new DynamoDB trigger and a", + "D. Create a Lambda function that uses DynamoDB Strea ms Kinesis Adapter which will fetch data from the" + ], + "correct": "B. 
Enable DynamoDB Stream and create an AWS Lambda t rigger, as well as the IAM role which contains all of the permissions that the Lambda function will ne ed at runtime. The data from the stream record will be", + "explanation": "Explanation/Reference: A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table . When you enable a stream on a table, DynamoDB captu res information about every modification to data it ems in the table. Whenever an application creates, updates, or delete s items in the table, DynamoDB Streams writes a str eam record with the primary key attribute(s) of the ite ms that were modified. A stream record contains inf ormation about a data modification to a single item in a Dyn amoDB table. You can configure the stream so that t he stream records capture additional information, such as the \"before\" and \"after\" images of modified ite ms. Amazon DynamoDB is integrated with AWS Lambda so th at you can create triggers--pieces of code that automatically respond to events in DynamoDB Streams . With triggers, you can build applications that re act to data modifications in DynamoDB tables. If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function tha t you write. Immediately after an item in the table i s modified, a new record appears in the table's str eam. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. The Lambda function can perform any action s you specify, such as sending a notification or in itiating a workflow. Hence, the correct answer in this scenario is the o ption that says: Enable DynamoDB Stream and create an AWS Lambda trigger, as well as the IAM role which c ontains all of the permissions that the Lambda func tion will need at runtime. The data from the stream reco rd will be processed by the Lambda function which w ill then publish a message to SNS Topic that will notify the subscribers via email. The option that says: Using the Kinesis Client Libr ary (KCL), write an application that leverages on D ynamoDB Streams Kinesis Adapter that will fetch data from t he DynamoDB Streams endpoint. When there are updates made by a particular user, n otify the subscribers via email using SNS is incorr ect because although this is a valid solution, it is mi ssing a vital step which is to enable DynamoDB Stre ams. With the DynamoDB Streams Kinesis Adapter in place, you can begin developing applications via the KCL interface, with the API calls seamlessly direct ed at the DynamoDB Streams endpoint. Remember that the DynamoDB Stream feature is not enabled by default. The option that says: Create a Lambda function that uses DynamoDB Streams Kinesis Adapter which will f etch data from the DynamoDB Streams endpoint. Set up an SNS Topic that will notify the subscribers via emai l when there is an update made by a particular user i s incorrect because just like in the above, you hav e to manually enable DynamoDB Streams first before you c an use its endpoint. The option that says: Set up a DAX cluster to acces s the source DynamoDB table. Create a new DynamoDB trigger and a Lambda function. 
For every update mad e in the user data, the trigger will send data to t he Lambda function which will then notify the subscrib ers via email using SNS is incorrect because the Dy namoDB Accelerator (DAX) feature is primarily used to sign ificantly improve the in-memory read performance of your database, and not to capture the time-ordered seque nce of item-level modifications. You should use DynamoDB Streams in this scenario instead. References: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/Streams.html https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/Streams.Lambda.Tutorial.html Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/", + "references": "" + }, + { + "question": "A suite of web applications is hosted in an Auto Sc aling group of EC2 instances across three Availabil ity Zones and is configured with default settings. There is a n Application Load Balancer that forwards the reque st to the respective target group on the URL path. The scale- in policy has been triggered due to the low number of incoming traffic to the application. Which EC2 instance will be the first one to be term inated by your Auto Scaling group?", + "options": [ + "A. The EC2 instance launched from the oldest launch configuration", + "B. The instance will be randomly selected by the Aut o Scaling group", + "C. The EC2 instance which has the least number of us er sessions", + "D. The EC2 instance which has been running for the l ongest time" + ], + "correct": "A. The EC2 instance launched from the oldest launch configuration", + "explanation": "Explanation/Reference: The default termination policy is designed to help ensure that your network architecture spans Availab ility Zones evenly. With the default termination policy, the be havior of the Auto Scaling group is as follows: 1. If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not pro tected from scale in. If there is more than one Ava ilability Zone with this number of instances, choose the Avai lability Zone with the instances that use the oldes t launch configuration. 2. Determine which unprotected instances in the sel ected Availability Zone use the oldest launch configuration. If there is one such instance, termi nate it. 3. If there are multiple instances to terminate bas ed on the above criteria, determine which unprotect ed instances are closest to the next billing hour. (Th is helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is on e such instance, terminate it. 4. If there is more than one unprotected instance c losest to the next billing hour, choose one of thes e instances at random. The following flow diagram illustrates how the defa ult termination policy works: References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-instance-termination.html#default-termination - policy https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-instance-termination.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "A financial application is composed of an Auto Scal ing group of EC2 instances, an Application Load Bal ancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. 
To protect the confidential data of your customers, you have to ensure that your RDS databas e can only be accessed using the profile credentia ls specific to your EC2 instances via an authenticatio n token. As the Solutions Architect of the company, which of the following should you do to meet the above requirement?", + "options": [ + "A. Create an IAM Role and assign it to your EC2 inst ances which will grant exclusive access to your RDS", + "B. Enable the IAM DB Authentication.", + "C. Configure SSL in your application to encrypt the database connection to RDS.", + "D. Use a combination of IAM and STS to restrict acce ss to your RDS instance via a temporary token." + ], + "correct": "B. Enable the IAM DB Authentication.", + "explanation": "Explanation/Reference: You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works w ith MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you c onnect to a DB instance. Instead, you use an authentication token. An authentication token is a unique string of chara cters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signa ture Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials i n the database, because authentication is managed externally using IAM. You can also still use standa rd database authentication. IAM database authentication provides the following benefits: Network traffic to and from the database is encrypt ed using Secure Sockets Layer (SSL). You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for gre ater security Hence, enabling IAM DB Authentication is the correc t answer based on the above reference. Configuring SSL in your application to encrypt the database connection to RDS is incorrect because an SSL connection is not using an authentication token fro m IAM. Although configuring SSL to your application can improve the security of your data in flight, it is still not a suitable option to use in this scenario . Creating an IAM Role and assigning it to your EC2 i nstances which will grant exclusive access to your RDS instance is incorrect because although you can crea te and assign an IAM Role to your EC2 instances, yo u still need to configure your RDS to use IAM DB Authentica tion. Using a combination of IAM and STS to restrict acce ss to your RDS instance via a temporary token is in correct because you have to use IAM DB Authentication for t his scenario, and not a combination of an IAM and S TS. Although STS is used to send temporary tokens for a uthentication, this is not a compatible use case for RDS.", + "references": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/UsingWithRDS.IAMDBAuth.html Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/" + }, + { + "question": "A pharmaceutical company has resources hosted on bo th their on-premises network and in AWS cloud. They want all of their Software Architects to acces s resources on both environments using their on-pre mises credentials, which is stored in Active Directory. In this scenario, which of the following can be use d to fulfill this requirement?", + "options": [ + "A. 
Set up SAML 2.0-Based Federation by using a Web I dentity Federation.", + "B. Set up SAML 2.0-Based Federation by using a Micro soft Active Directory Federation Service (AD FS).", + "C. Use Amazon VPC", + "D. Use IAM users" + ], + "correct": "B. Set up SAML 2.0-Based Federation by using a Micro soft Active Directory Federation Service (AD FS).", + "explanation": "Explanation/Reference: Since the company is using Microsoft Active Directo ry which implements Security Assertion Markup Language (SAML), you can set up a SAML-Based Federa tion for API Access to your AWS cloud. In this way, you can easily connect to AWS using the login credentials of your on-premises network. AWS supports identity federation with SAML 2.0, an open standard that many identity providers (IdPs) u se. This feature enables federated single sign-on (SSO) , so users can log into the AWS Management Console or call the AWS APIs without you having to create a n IAM user for everyone in your organization. By using SAML, you can simplify the process of configu ring federation with AWS, because you can use the IdP's service instead of writing custom identity pr oxy code. Before you can use SAML 2.0-based federation as des cribed in the preceding scenario and diagram, you must configure your organization's IdP and your AWS account to trust each other. The general process f or configuring this trust is described in the followin g steps. Inside your organization, you must have an IdP that supports SAML 2.0, like Microsoft Active Directory Federation Service (AD FS, part of Windows Server), Shibboleth, or another compatible SAML 2.0 provider. Hence, the correct answer is: Set up SAML 2.0-Based Federation by using a Microsoft Active Directory Federation Service (AD FS). Setting up SAML 2.0-Based Federation by using a Web Identity Federation is incorrect because this is primarily used to let users sign in via a well-know n external identity provider (IdP), such as Login w ith Amazon, Facebook, Google. It does not utilize Active Direct ory. Using IAM users is incorrect because the situation requires you to use the existing credentials stored in their Active Directory, and not user accounts that will b e generated by IAM. Using Amazon VPC is incorrect because this only let s you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a v irtual network that you define. This has nothing to do with user authentication or Active Directory. References: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_ roles_providers_saml.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id _roles_providers.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "A company has 3 DevOps engineers that are handling its software development and infrastructure management processes. One of the engineers accident ally deleted a file hosted in Amazon S3 which has caused disruption of service. What can the DevOps engineers do to prevent this fr om happening again?", + "options": [ + "A. Set up a signed URL for all users.", + "B. Use S3 Infrequently Accessed storage to store the data.", + "C. Create an IAM bucket policy that disables delete operation.", + "D. Enable S3 Versioning and Multi-Factor Authenticat ion Delete on the bucket.(Correct)" + ], + "correct": "D. 
Enable S3 Versioning and Multi-Factor Authenticat ion Delete on the bucket.(Correct)", + "explanation": "Explanation/Reference: To avoid accidental deletion in Amazon S3 bucket, y ou can: - Enable Versioning - Enable MFA (Multi-Factor Authentication) Delete Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versio ning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both uninte nded user actions and application failures. If the MFA (Multi-Factor Authentication) Delete is enabled, it requires additional authentication for either of the following operations: - Change the versioning state of your bucket - Permanently delete an object version Using S3 Infrequently Accessed storage to store the data is incorrect. Switching your storage class to S3 Infrequent Access won't help mitigate accidental de letions. Setting up a signed URL for all users is incorrect. Signed URLs give you more control over access to your content, so this feature deals more on accessi ng rather than deletion. Creating an IAM bucket policy that disables delete operation is incorrect. If you create a bucket poli cy preventing deletion, other users won't be able to d elete objects that should be deleted. You only want to prevent accidental deletion, not disable the action itself.Reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/Vers ioning.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "An application that records weather data every minu te is deployed in a fleet of Spot EC2 instances and uses a MySQL RDS database instance. Currently, there is on ly one RDS instance running in one Availability Zon e. You plan to improve the database to ensure high availab ility by synchronous data replication to another RD S instance. Which of the following performs synchronous data re plication in RDS?", + "options": [ + "A. A. CloudFront running as a Multi-AZ deployment", + "B. B. DynamoDB Read Replica", + "C. C. RDS DB instance running as a Multi-AZ deployme nt", + "D. D. RDS Read Replica" + ], + "correct": "C. C. RDS DB instance running as a Multi-AZ deployme nt", + "explanation": "Explanation/Reference: When you create or modify your DB instance to run a s a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronou s standby replica in a different Availability Zone. Updates to your DB Instance are synchronously replicated ac ross Availability Zones to the standby in order to keep both in sync and protect your latest database updates ag ainst DB instance failure. RDS Read Replica is incorrect as a Read Replica pro vides an asynchronous replication instead of synchronous. DynamoDB Read Replica and CloudFront running as a M ulti-AZ deployment are incorrect as both DynamoDB and CloudFront do not have a Read Replica feature.", + "references": "https://aws.amazon.com/rds/details/multi-az/ Amazon RDS Overview: https://youtu.be/aZmpLl8K1UU Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/" + }, + { + "question": "A Solutions Architect identified a series of DDoS a ttacks while monitoring the VPC. The Architect need s to fortify the current cloud infrastructure to protect the data of the clients. Which of the following is the most suitable solutio n to mitigate these kinds of attacks?", + "options": [ + "A. 
Use AWS Shield Advanced to detect and mitigate DD oS attacks.", + "B. A combination of Security Groups and Network Acce ss Control Lists to only allow authorized traffic t o", + "C. Set up a web application firewall using AWS WAF t o filter, monitor, and block HTTP traffic.", + "D. Using the AWS Firewall Manager, set up a security layer that will prevent SYN floods, UDP reflection" + ], + "correct": "A. Use AWS Shield Advanced to detect and mitigate DD oS attacks.", + "explanation": "Explanation/Reference: For higher levels of protection against attacks tar geting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing(ELB), A mazon CloudFront, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced . In addition to the network and transport layer protections that come with Standard, AWS Shield Adv anced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall. AWS Shield Advanced also gives you 24x7 access to t he AWS DDoS Response Team (DRT) and protection against DDoS related spikes in your Amaz on Elastic Compute Cloud (EC2), Elastic Load Balancing(ELB), Amazon CloudFront, and Amazon Route 53 charges. Hence, the correct answer is: Use AWS Shield Advanc ed to detect and mitigate DDoS attacks. The option that says: Using the AWS Firewall Manage r, set up a security layer that will prevent SYN floods, UDP reflection attacks and other DDoS attac ks is incorrect because AWS Firewall Manager is mainly used to simplify your AWS WAF administration and maintenance tasks across multiple accounts and resources. It does not protect your VPC against DDoS attacks. The option that says: Set up a web application fire wall using AWS WAF to filter, monitor, and block HTTP traffic is incorrect. Even though AWS WAF can help you block common attack patterns to your VPC such as SQL injection or cross-site scripting, this is still not enough to withstand DDoS attacks. It is better to use AWS Shield in this scenario. The option that says: A combination of Security Gro ups and Network Access Control Lists to only allow authorized traffic to access your VPC is inco rrect. Although using a combination of Security Groups and NACLs are valid to provide security to y our VPC, this is not enough to mitigate a DDoS atta ck. You should use AWS Shield for better security prote ction. References: https://d1.awsstatic.com/whitepapers/Security/DDoS_ White_Paper.pdf https://aws.amazon.com/shield/ Check out this AWS Shield Cheat Sheet: https://tutorialsdojo.com/aws-shield/ AWS Security Services Overview - WAF, Shield, Cloud HSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", + "references": "" + }, + { + "question": "a few days, you found out that there are other trav el websites linking and using your photos. This res ulted in financial losses for your business. What is the MOST effective method to mitigate this issue?", + "options": [ + "A. Use CloudFront distributions for your photos.", + "B. Block the IP addresses of the offending websites using NACL.", + "C. Configure your S3 bucket to remove public read ac cess and use pre-signed URLs with expiry dates.", + "D. Store and privately serve the high-quality photos on Amazon WorkDocs instead." + ], + "correct": "C. 
Configure your S3 bucket to remove public read ac cess and use pre-signed URLs with expiry dates.", + "explanation": "Explanation/Reference: In Amazon S3, all objects are private by default. O nly the object owner has permission to access these objects. However, the object owner can optionally share obje cts with others by creating a pre-signed URL, using their own security credentials, to grant time-limited per mission to download the objects. When you create a pre-signed URL for your object, y ou must provide your security credentials, specify a bucket name, an object key, specify the HTTP method (GET t o download the object) and expiration date and time . The pre-signed URLs are valid only for the specified du ration. Anyone who receives the pre-signed URL can then acc ess the object. For example, if you have a video in your bucket and both the bucket and the object are priva te, you can share the video with others by generati ng a pre- signed URL. Using CloudFront distributions for your photos is i ncorrect. CloudFront is a content delivery network service that speeds up delivery of content to your customers. Blocking the IP addresses of the offending websites using NACL is also incorrect. Blocking IP address using NACLs is not a very efficient method because a quick change in IP address would easily bypass th is configuration. Storing and privately serving the high-quality phot os on Amazon WorkDocs instead is incorrect as WorkDocs is simply a fully managed, secure content creation, storage, and collaboration service. It is not a suitable service for storing static content. Amazon WorkDocs is more often used to easily create, edit , and share documents for collaboration and not for servi ng object data like Amazon S3. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Sha reObjectPreSignedURL.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Obj ectOperations.html Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ S3 Pre-signed URLs vs CloudFront Signed URLs vs Ori gin Access Identity (OAI) https://tutorialsdojo.com/s3-pre-signed-urls-vs-clo udfront-signed-urls-vs-origin-access-identity-oai/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "The company that you are working for has a highly a vailable architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scal ing in three Availability Zones. You want to monito r your EC2 instances based on a particular metric, which i s not readily available in CloudWatch. Which of the following is a custom metric in CloudW atch which you have to manually set up?", + "options": [ + "A. Network packets out of an EC2 instance", + "B. CPU Utilization of an EC2 instance", + "C. Disk Reads activity of an EC2 instance", + "D. Memory Utilization of an EC2 instance" + ], + "correct": "D. Memory Utilization of an EC2 instance", + "explanation": "Explanation/Reference: CloudWatch has available Amazon EC2 Metrics for you to use for monitoring. CPU Utilization identifies the processing power required to run an application upon a selected instance. Network Utilization iden tifies the volume of incoming and outgoing network traffic to a single instance. Disk Reads metric is used to det ermine the volume of the data the application reads from t he hard disk of the instance. This can be used to d etermine the speed of the application. 
However, there are ce rtain metrics that are not readily available in Clo udWatch such as memory utilization, disk space utilization, and many others which can be collected by setting up a custom metric. You need to prepare a custom metric using CloudWatc h Monitoring Scripts which is written in Perl. You can also install CloudWatch Agent to collect more s ystem-level metrics from Amazon EC2 instances. Here's the list of custom metrics that you can set up: - Memory utilization - Disk swap utilization - Disk space utilization - Page file utilization - Log co llection CPU Utilization of an EC2 instance, Disk Reads acti vity of an EC2 instance, and Network packets out of an EC2 instance are all incorrect because these metrics are readily available in CloudWatch by defa ult. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /monitoring_ec2.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /mon-scripts.html#using_put_script Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/", + "references": "" + }, + { + "question": "A Solutions Architect needs to make sure that the O n-Demand EC2 instance can only be accessed from thi s IP address (110.238.98.71) via an SSH connection. Whic h configuration below will satisfy this requirement ?", + "options": [ + "A. Security Group Inbound Rule: Protocol UDP, Port Range 22, Source 110.238.98.71/32", + "B. Security Group Inbound Rule: Protocol TCP. Port Range 22, Source 110.238.98.71/0", + "C. Security Group Inbound Rule: Protocol TCP. Port Range 22, Source 110.238.98.71/32", + "D. Security Group Inbound Rule: Protocol UDP, Port Range 22, Source 110.238.98.71/0" + ], + "correct": "C. Security Group Inbound Rule: Protocol TCP. Port Range 22, Source 110.238.98.71/32", + "explanation": "Explanation/Reference: A security group acts as a virtual firewall for you r instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to f ive security groups to the instance. Security group s act at the instance level, not the subnet level. Th erefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. The requirement is to only allow the individual IP of the client and not the entire network. Therefore , the proper CIDR notation should be used. The /32 denote s one IP address and the /0 refers to the entire network. Take note that the SSH protocol uses TCP a nd port 22. Hence, the correct answer is: Protocol TCP, Port R ange 22, Source 110.238.98.71/32 Protocol UDP, Port Range 22, Source 110.238.98.71 /32 and Protocol UDP, Port Range 22, Source 110.238.98.71/0 are incorrect as they are us ing UDP. Protocol TCP, Port Range 22, Source 110.238.98.71 /0 is incorrect because it uses a /0 CIDR notation. Protocol TCP, Port Range 22, Source 110.238.98.71 /0 is incorrect because it allows the entire networ k instead of a single IP.", + "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /using-network-security.html#security-group-rules Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/" + }, + { + "question": "An online cryptocurrency exchange platform is hoste d in AWS which uses ECS Cluster and RDS in Multi-AZ Deployments configuration. 
The application is heavi ly using the RDS instance to process complex read a nd write database operations. To maintain the reliabil ity, availability, and performance of your systems, you have to closely monitor how the different processes or thre ads on a DB instance use the CPU, including the per centage of the CPU bandwidth and total memory consumed by e ach process. Which of the following is the most suitable solutio n to properly monitor your database?", + "options": [ + "A. Use Amazon CloudWatch to monitor the CPU Utilizat ion of your database.", + "B. Create a script that collects and publishes custo m metrics to CloudWatch, which tracks the real-time CPU", + "C. Enable Enhanced Monitoring in RDS.", + "D. Check the CPU% and MEM% metrics which are readily available in the Amazon RDS console that shows" + ], + "correct": "C. Enable Enhanced Monitoring in RDS.", + "explanation": "Explanation/Reference: Amazon RDS provides metrics in real time for the op erating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JS ON output from CloudWatch Logs in a monitoring system of your choice. By default, Enhanced Monitoring metrics are stored in the CloudWatch Log s for 30 days. To modify the amount of time the met rics are stored in the CloudWatch Logs, change the reten tion for the RDSOSMetrics log group in the CloudWat ch console. Take note that there are certain differences betwee n CloudWatch and Enhanced Monitoring Metrics. CloudWatch gathers metrics about CPU utilization fr om the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements, because the hypervisor layer perf orms a small amount of work. Hence, enabling Enhanc ed Monitoring in RDS is the correct answer in this spe cific scenario. The differences can be greater if your DB instances use smaller instance classes, because then there a re likely more virtual machines (VMs) that are managed by the hypervisor layer on a single physical instance. En hanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. Using Amazon CloudWatch to monitor the CPU Utilizat ion of your database is incorrect because although you can use this to monitor the CPU Utiliz ation of your database instance, it does not provid e the percentage of the CPU bandwidth and total memory consumed by each database process in your RDS instance. Take note that CloudWatch gathers metrics about CPU utilizati on from the hypervisor for a DB instance while RDS Enhanced Monitoring gathers its metrics from an age nt on the instance. The option that says: Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS ins tance and then set up a custom CloudWatch dashboard to view the metrics is incorrect because although you can use Amazon CloudWatch Logs and CloudWatch dashboard to monitor the CPU Utilization of the database instance, using CloudWatch alone i s still not enough to get the specific percentage of the CP U bandwidth and total memory consumed by each database processes. The data provided by CloudWatch is not as detailed as compared with the Enhanced Monitoring feature in RDS. 
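For reference, turning on Enhanced Monitoring for an existing DB instance is a single ModifyDBInstance call. The sketch below is only illustrative and assumes an IAM role for Enhanced Monitoring already exists; the instance identifier and role ARN are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The role must trust monitoring.rds.amazonaws.com and have the
# AmazonRDSEnhancedMonitoringRole managed policy attached.
rds.modify_db_instance(
    DBInstanceIdentifier="forex-trading-db",                                 # placeholder
    MonitoringInterval=1,  # collect OS metrics every second (0 turns it off)
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",  # placeholder
    ApplyImmediately=True,
)
```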
Take note as well that y ou do not have direct access to the instances/serve rs of your RDS database instance, unlike with your EC2 in stances where you can install a CloudWatch agent or a custom script to get CPU and memory utilization of your instance. The option that says: Check the CPU% and MEM% metri cs which are readily available in the Amazon RDS console that shows the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance is in correct because the CPU% and MEM% metrics are not readily available in the Amazon RDS console, wh ich is contrary to what is being stated in this opt ion. References: https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/ USER_Monitoring.OS.html#USER_Monitoring.OS.CloudWat chLogs https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/MonitoringOverview.html#monitoring-cloudwatch Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "A government entity is conducting a population and housing census in the city. Each household informat ion uploaded on their online portal is stored in encryp ted files in Amazon S3. The government assigned its Solutions Architect to set compliance policies that verify sensitive data in a manner that meets their compliance standards. They should also be alerted if there are compromised files detected containing personally identifiable information (PII), protected health in formation (PHI) or intellectual properties (IP). Which of the following should the Architect impleme nt to satisfy this requirement?", + "options": [ + "A. Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data.", + "B. Set up and configure Amazon Inspector to send out alert notifications whenever a security violation is", + "C. Set up and configure Amazon Rekognition to monito r and recognize patterns on their Amazon S3 data.", + "D. Set up and configure Amazon GuardDuty to monitor malicious activity on their Amazon S3 data." + ], + "correct": "A. Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data.", + "explanation": "Explanation/Reference: Amazon Macie is an ML-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine learning to recognize sensitive data such as person ally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility i nto where this data is stored and how it is being u sed in your organization. Amazon Macie continuously monitors data access acti vity for anomalies, and delivers alerts when it det ects risk of unauthorized access or inadvertent data leaks. A mazon Macie has ability to detect global access permissions inadvertently being set on sensitive da ta, detect uploading of API keys inside source code , and verify sensitive customer data is being stored and accessed in a manner that meets their compliance standards. Hence, the correct answer is: Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data. 
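To make this concrete, a minimal boto3 sketch for enabling Macie and starting a one-time sensitive-data discovery job on the bucket is shown below; the account ID and bucket name are placeholders used purely for illustration:

```python
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# Enable Macie for the account (this call fails if Macie is already enabled).
macie.enable_macie()

# One-time classification job that scans the census bucket for PII and other
# sensitive data using Macie's managed data identifiers.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="census-pii-scan",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["census-household-data"]}
        ]
    },
)
```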
The option that says: Set up and configure Amazon R ekognition to monitor and recognize patterns on their Amazon S3 data is incorrect because Rekogniti on is simply a service that can identify the object s, people, text, scenes, and activities, as well as detect any inappropriate content on your images or videos. The option that says: Set up and configure Amazon G uardDuty to monitor malicious activity on their Ama zon S3 data is incorrect because GuardDuty is just a threa t detection service that continuously monitors for malicious activity and unauthorized be havior to protect your AWS accounts and workloads. The option that says: Set up and configure Amazon I nspector to send out alert notifications whenever a security violation is detected on their Amazon S3 data is incorrect because Inspector is basically a n automated security assessment service that helps im prove the security and compliance of applications deployed on AWS. References: https://docs.aws.amazon.com/macie/latest/userguide/ what-is-macie.html https://aws.amazon.com/macie/faq/ https://docs.aws.amazon.com/macie/index.html Check out this Amazon Macie Cheat Sheet: https://tutorialsdojo.com/amazon-macie/ AWS Security Services Overview - Secrets Manager, A CM, Macie: https://www.youtube.com/watch?v=ogVamzF2Dzk", + "references": "" + }, + { + "question": "An IT consultant is working for a large financial c ompany. The role of the consultant is to help the d evelopment team build a highly available web application using stateless web servers. In this scenario, which AWS services are suitable f or storing session state data? (Select TWO.)", + "options": [ + "A. RDS", + "B. Redshift Spectrum", + "C. DynamoDB", + "D. Glacier" + ], + "correct": "", + "explanation": "Explanation/Reference: DynamoDB and ElastiCache are the correct answers. Y ou can store session state data on both DynamoDB and ElastiCache. These AWS services provide high-pe rformance storage of key-value pairs which can be used to build a highly available web application. Redshift Spectrum is incorrect since this is a data warehousing solution where you can directly query data from your data warehouse. Redshift is not suitable for s toring session state, but more on analytics and OLA P processes. RDS is incorrect as well since this is a relational database solution of AWS. This relational storage type might not be the best fit for session states, and it migh t not provide the performance you need compared to DynamoDB for the same cost. S3 Glacier is incorrect since this is a low-cost cl oud storage service for data archiving and long-ter m backup. The archival and retrieval speeds of Glacier is too slow for handling session states. References: https://aws.amazon.com/caching/database-caching/ https://aws.amazon.com/caching/session-management/ Check out this Amazon Elasticache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/", + "references": "" + }, + { + "question": "A company has a web application that uses Internet Information Services (IIS) for Windows Server. A fi le share is used to store the application data on the networ k-attached storage of the company's on-premises dat a center. To achieve a highly available system, they plan to migrate the application and file share to A WS. Which of the following can be used to fulfill this requirement? A. Migrate the existing file share configuration to AWS Storage Gateway.", + "options": [ + "B. Migrate the existing file share configuration to Amazon FSx for Windows File Server.", + "C. 
Migrate the existing file share configuration to Amazon EFS.", + "D. Migrate the existing file share configuration to Amazon EBS." + ], + "correct": "B. Migrate the existing file share configuration to Amazon FSx for Windows File Server.", + "explanation": "Explanation/Reference: Amazon FSx for Windows File Server provides fully m anaged Microsoft Windows file servers, backed by a fully native Windows file system. Amazon FSx for Windows File Server has the features, performance, and compatibility to easily lift and shift enterprise a pplications to the AWS Cloud. It is accessible from Windows, Linux, and macOS compute instances and devices. Tho usands of compute instances and devices can access a file system concurrently. In this scenario, you need to migrate your existing file share configuration to the cloud. Among the o ptions given, the best possible answer is Amazon FSx. A fi le share is a specific folder in your file system, including the folder's subfolders, which you make a ccessible to your compute instances via the SMB protocol. To migrate file share configurations from your on-premises file system, you must migrate you r files first to Amazon FSx before migrating your file shar e configuration. Hence, the correct answer is: Migrate the existing file share configuration to Amazon FSx for Windows File Server. The option that says: Migrate the existing file sha re configuration to AWS Storage Gateway is incorrect because AWS Storage Gateway is primarily used to integrate your on-premises network to AWS but not for migrating your applications. Using a fi le share in Storage Gateway implies that you will s till keep your on-premises systems, and not entirely migrate it. The option that says: Migrate the existing file sha re configuration to Amazon EFS is incorrect because it is stated in the scenario that the company is using a file share that runs on a Windows server. Remember that Amazon EFS only supports Linux workloads. The option that says: Migrate the existing file sha re configuration to Amazon EBS is incorrect because EBS is primarily used as block storage for EC2 instances a nd not as a shared file system. A file share is a s pecific folder in a file system that you can access using a server message block (SMB) protocol. Amazon EBS do es not support SMB protocol. References: https://aws.amazon.com/fsx/windows/faqs/ https://docs.aws.amazon.com/fsx/latest/WindowsGuide /migrate-file-share-config-to-fsx.html Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", + "references": "" + }, + { + "question": "A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows sha red file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the give n requirement?", + "options": [ + "A. Create a Network File System (NFS) file share usi ng AWS Storage Gateway.", + "B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory dom ain in", + "C. Launch an Amazon EC2 Windows Server to mount a ne w S3 bucket as a file volume.", + "D. Create a file system using Amazon EFS and join it to an Active Directory domain.", + "A. Convertible Reserved Instances allow you to excha nge for another convertible reserved instance of a", + "B. 
Unused Convertible Reserved Instances can later b e sold at the Reserved Instance Marketplace.", + "C. It can enable you to reserve capacity for your Am azon EC2 instances in multiple Availability Zones a nd", + "D. It runs in a VPC on hardware that's dedicated to a single customer." + ], + "correct": "B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory dom ain in", + "explanation": "Explanation/Reference: Reserved Instances (RIs) provide you with a signifi cant discount (up to 75%) compared to On-Demand instance pricing. You have the flexibility to chang e families, OS types, and tenancies while benefitin g from RI pricing when you use Convertible RIs. One import ant thing to remember here is that Reserved Instanc es are not physical instances, but rather a billing di scount applied to the use of On-Demand Instances in your account. The offering class of a Reserved Instance is either Standard or Convertible. A Standard Reserved Insta nce provides a more significant discount than a Convert ible Reserved Instance, but you can't exchange a St andard Reserved Instance unlike Convertible Reserved Insta nces. You can modify Standard and Convertible Reser ved Instances. Take note that in Convertible Reserved I nstances, you are allowed to exchange another Convertible Reserved instance with a different instance type and tenancy. The configuration of a Reserved Instance comprises a single instance type, platform, scope, and tenanc y over a term. If your computing needs change, you might b e able to modify or exchange your Reserved Instance. When your computing needs change, you can modify yo ur Standard or Convertible Reserved Instances and continue to take advantage of the billing benefit. You can modify the Availability Zone, scope, networ k platform, or instance size (within the same instance type) of your Reserved Instance. You can also sell your unu sed instance for Standard RIs but not Convertible RIs o n the Reserved Instance Marketplace. Hence, the correct options are: - Unused Standard Reserved Instances can later be s old at the Reserved Instance Marketplace. - Convertible Reserved Instances allow you to exchang e for another convertible reserved instance of a different instance family. The option that says: Unused Convertible Reserved I nstances can later be sold at the Reserved Instance Marketplace is incorrect. This is not poss ible. Only Standard RIs can be sold at the Reserved Instance Marketplace. The option that says: It can enable you to reserve capacity for your Amazon EC2 instances in multiple Availability Zones and multiple AWS Regions for any duration is incorrect because you can reserve capacity to a specific AWS Region (regional Reserve d Instance) or specific Availability Zone (zonal Reserved Instance) only. You cannot reserve capacit y to multiple AWS Regions in a single RI purchase. The option that says: It runs in a VPC on hardware that's dedicated to a single customer is incorrect because that is the description of a Dedicated inst ance and not a Reserved Instance. A Dedicated insta nce runs in a VPC on hardware that's dedicated to a sin gle customer. 
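As a rough illustration of how you might check which of your Reserved Instances are Convertible (and therefore exchangeable), consider the sketch below; the commented-out exchange-quote call uses placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List active Reserved Instances and show the offering class of each one.
# Only Convertible RIs can be exchanged, and only Standard RIs can be
# listed on the Reserved Instance Marketplace.
reserved = ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)["ReservedInstances"]

for ri in reserved:
    print(ri["ReservedInstancesId"], ri["InstanceType"], ri.get("OfferingClass"))

# For a Convertible RI, an exchange quote can then be requested against a
# target Convertible offering (IDs below are placeholders):
# quote = ec2.get_reserved_instances_exchange_quote(
#     ReservedInstanceIds=["<convertible-ri-id>"],
#     TargetConfigurations=[{"OfferingId": "<target-convertible-offering-id>"}],
# )
```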
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ri-modifying.html https://aws.amazon.com/ec2/pricing/reserved-instanc es/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-reserved-instances.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /reserved-instances-types.html Amazon EC2 Overview: https://youtu.be/7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A media company has an Amazon ECS Cluster, which us es the Fargate launch type, to host its news websi te. The database credentials should be supplied using e nvironment variables, to comply with strict securit y compliance. As the Solutions Architect, you have to ensure that the credentials are secure and that th ey cannot be viewed in plaintext on the cluster itself. Which of the following is the most suitable solutio n in this scenario that you can implement with mini mal effort?", + "options": [ + "A. In the ECS task definition file of the ECS Cluste r, store the database credentials using Docker Secr ets to", + "B. Use the AWS Systems Manager Parameter Store to ke ep the database credentials and then encrypt them", + "C. Store the database credentials in the ECS task de finition file of the ECS Cluster and encrypt it wit h KMS.", + "D. Use the AWS Secrets Manager to store the database credentials and then encrypt them using AWS KMS." + ], + "correct": "B. Use the AWS Systems Manager Parameter Store to ke ep the database credentials and then encrypt them", + "explanation": "Explanation/Reference: Amazon ECS enables you to inject sensitive data int o your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. This feature is supported by tasks using both the EC2 a nd Fargate launch types. Secrets can be exposed to a container in the follow ing ways: - To inject sensitive data into your containers as environment variables, use the secrets container de finition parameter. - To reference sensitive information in the log con figuration of a container, use the secretOptions co ntainer definition parameter. Within your container definition, specify secrets w ith the name of the environment variable to set in the container and the full ARN of either the Secrets Ma nager secret or Systems Manager Parameter Store parameter containing the sensitive data to present to the container. The parameter that you reference can be from a different Region than the container using it , but must be from within the same account. Hence, the correct answer is the option that says: Use the AWS Systems Manager Parameter Store to kee p the database credentials and then encrypt them usin g AWS KMS. Create an IAM Role for your Amazon ECS task execution role (taskRoleArn) and reference it with your task definition, which allows access to b oth KMS and the Parameter Store. Within your container defi nition, specify secrets with the name of the enviro nment variable to set in the container and the full ARN o f the Systems Manager Parameter Store parameter con taining the sensitive data to present to the container. The option that says: In the ECS task definition fi le of the ECS Cluster, store the database credentia ls using Docker Secrets to centrally manage these sensitive data and securely transmit it to only those contain ers that need access to it. 
Secrets are encrypted during tra nsit and at rest. A given secret is only accessibl e to those services which have been granted explicit access to it via IAM Role, and only while those service tasks are running is incorrect. Although you can use Docker Secrets to secure the sensitive database credentials, this feature is only applicab le in Docker Swarm. In AWS, the recommended way to secure sensitive data is either through the use of Secrets Manager or Systems Manager Parameter Store. The option that says: Store the database credential s in the ECS task definition file of the ECS Cluste r and encrypt it with KMS. Store the task definition JSON file in a private S3 bucket and ensure that HTTPS is enabled on the bucket to encrypt the data in-flight. Create an IAM role to the ECS task defin ition script that allows access to the specific S3 bucket and then pass the --cli-input-json parameter when calling the ECS register-task-defini tion. Reference the task definition JSON file in th e S3 bucket which contains the database credentials is i ncorrect. Although the solution may work, it is not recommended to store sensitive credentials in S3. T his entails a lot of overhead and manual configurat ion steps which can be simplified by simply using the S ecrets Manager or Systems Manager Parameter Store. The option that says: Use the AWS Secrets Manager t o store the database credentials and then encrypt t hem using AWS KMS. Create a resource-based policy for y our Amazon ECS task execution role (taskRoleArn) and reference it with your task definition which allows access to both KMS and AWS Secrets Manager. Within your container definition, specify secrets with the name of the environment variable to set in the container and th e full ARN of the Secrets Manager secret which cont ains the sensitive data, to present to the container is inco rrect. Although the use of Secrets Manager in secur ing sensitive data in ECS is valid, Amazon ECS doesn't support resource-based policies. An example of a resource-based policy is the S3 bucket policy. An E CS task assumes an execution role (IAM role) to be able to call other AWS services like AWS Secrets Manager on your behalf. References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/specifying-sensitive-data.html https://aws.amazon.com/blogs/mt/the-right-way-to-st ore-secrets-using-parameter-store/ Check out these Amazon ECS and AWS Systems Manager Cheat Sheets: https://tutorialsdojo.com/amazon-elastic-container- service-amazon-ecs/ https://tutorialsdojo.com/aws-systems-manager/", + "references": "" + }, + { + "question": "A company needs to deploy at least 2 EC2 instances to support the normal workloads of its application and automatically scale up to 6 EC2 instances to handle the peak load. The architecture must be highly available and fault-tolerant as it is processing mi ssion-critical workloads. As the Solutions Architect of the company, what sho uld you do to meet the above requirement?", + "options": [ + "A. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum", + "B. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum", + "C. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum", + "D. 
Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon EC2 Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances availabl e to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scalin g ensures that your group never goes below this size. You can also specify the maximum number of instanc es in each Auto Scaling group, and Amazon EC2 Auto Scalin g ensures that your group never goes above this siz e. To achieve highly available and fault-tolerant arch itecture for your applications, you must deploy all your instances in different Availability Zones. This wil l help you isolate your resources if an outage occu rs. Take note that to achieve fault tolerance, you need to have redundant resources in place to avoid any system degradation in the event of a server fault o r an Availability Zone outage. Having a fault-toler ant architecture entails an extra cost in running addit ional resources than what is usually needed. This i s to ensure that the mission-critical workloads are processed. Since the scenario requires at least 2 instances to handle regular traffic, you should have 2 instance s running all the time even if an AZ outage occurred. You can use an Auto Scaling Group to automatically scale y our compute resources across two or more Availability Z ones. You have to specify the minimum capacity to 4 instances and the maximum capacity to 6 instances. If each AZ has 2 instances running, even if an AZ f ails, your system will still run a minimum of 2 instances . Hence, the correct answer in this scenario is: Crea te an Auto Scaling group of EC2 instances and set t he minimum capacity to 4 and the maximum capacity to 6 . Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B. The option that says: Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Deploy 4 instances in Av ailability Zone A is incorrect because the instances are only deployed in a single Availabilit y Zone. It cannot protect your applications and dat a from datacenter or AZ failures. The option that says: Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Use 2 Availability Zones and deploy 1 instance for each AZ is incorre ct. It is required to have 2 instances running all the time. If an AZ outage happened, ASG will launch a new ins tance on the unaffected AZ. This provisioning does not happe n instantly, which means that for a certain period of time, there will only be 1 running instance left. The option that says: Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 4. Deploy 2 instances in Av ailability Zone A and 2 instances in Availability Zone B is incorrect. Although this ful fills the requirement of at least 2 EC2 instances a nd high availability, the maximum capacity setting is wrong . It should be set to 6 to properly handle the peak load. If an AZ outage occurs and the system is at its peak load , the number of running instances in this setup wil l only be 4 instead of 6 and this will affect the performance o f your application. 
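A minimal sketch of this configuration with boto3 is shown below; the group name, launch template, and subnet IDs are placeholders, and the two subnets are assumed to be in different Availability Zones:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Minimum of 4 keeps 2 instances per AZ even if one AZ goes down;
# maximum of 6 absorbs the peak load.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-launch-template", "Version": "$Latest"},
    MinSize=4,
    MaxSize=6,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0aaa1111bbb222233,subnet-0ccc3333ddd444455",  # one subnet per AZ
)
```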
References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/what-is-amazon-ec2-auto-scaling.html https://docs.aws.amazon.com/documentdb/latest/devel operguide/regions-and-azs.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "A Docker application, which is running on an Amazon ECS cluster behind a load balancer, is heavily usi ng DynamoDB. You are instructed to improve the databas e performance by distributing the workload evenly and using the provisioned throughput efficiently. Which of the following would you consider to implem ent for your DynamoDB table?", + "options": [ + "A. Use partition keys with low-cardinality attribute s, which have a few number of distinct values for e ach item.", + "B. Reduce the number of partition keys in the Dynamo DB table.", + "C. Use partition keys with high-cardinality attribut es, which have a large number of distinct values fo r each", + "D. Avoid using a composite primary key, which is com posed of a partition key and a sort key." + ], + "correct": "C. Use partition keys with high-cardinality attribut es, which have a large number of distinct values fo r each", + "explanation": "Explanation/Reference: The partition key portion of a table's primary key determines the logical partitions in which a table' s data is stored. This in turn affects the underlying physica l partitions. Provisioned I/O capacity for the tabl e is divided evenly among these physical partitions. Therefore a partition key design that doesn't distribute I/O r equests evenly can create \"hot\" partitions that result in t hrottling and use your provisioned I/O capacity ine fficiently. The optimal usage of a table's provisioned throughp ut depends not only on the workload patterns of ind ividual items, but also on the partition-key design. This d oesn't mean that you must access all partition key values to achieve an efficient throughput level, or even that the percentage of accessed partition key values mu st be high. It does mean that the more distinct partition key values that your workload accesses, the more t hose requests will be spread across the partitioned spac e. In general, you will use your provisioned throug hput more efficiently as the ratio of partition key values ac cessed to the total number of partition key values increases. One example for this is the use of partition keys w ith high-cardinality attributes, which have a large number of distinct values for each item. Reducing the number of partition keys in the Dynamo DB table is incorrect. Instead of doing this, you s hould actually add more to improve its performance to dis tribute the I/O requests evenly and not avoid \"hot\" partitions. Using partition keys with low-cardinality attribute s, which have a few number of distinct values for e ach item is incorrect because this is the exact opposite of the correct answer. Remember that the more distinct pa rtition key values your workload accesses, the more those r equests will be spread across the partitioned space . Conversely, the less distinct partition key values, the less evenly spread it would be across the part itioned space, which effectively slows the performance. The option that says: Avoid using a composite prima ry key, which is composed of a partition key and a sort key is incorrect because as mentioned, a composite prim ary key will provide more partition for the table a nd in turn, improves the performance. Hence, it should be used and not avoided. 
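To illustrate the idea, the sketch below creates a table with a composite primary key whose partition key (order_id) is a high-cardinality attribute; the table and attribute names are only examples:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# order_id has a distinct value per item, so requests are spread evenly
# across partitions; created_at serves as the sort key.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "created_at", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "order_id", "KeyType": "HASH"},     # partition key
        {"AttributeName": "created_at", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)
```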
References: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/bp-partition-key-uniform-load.html https://aws.amazon.com/blogs/database/choosing-the- right-dynamodb-partition-key/ Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU", + "references": "" + }, + { + "question": "An organization needs to provision a new Amazon EC2 instance with a persistent block storage volume to migrate data from its on-premises network to AWS. T he required maximum performance for the storage volume is 64,000 IOPS. In this scenario, which of the following can be use d to fulfill this requirement?", + "options": [ + "A. Launch an Amazon EFS file system and mount it to a Nitro-based Amazon EC2 instance and set the", + "B. Directly attach multiple Instance Store volumes i n an EC2 instance to deliver maximum IOPS performan ce.", + "C. Launch a Nitro-based EC2 instance and attach a Pr ovisioned IOPS SSD EBS volume (io1) with 64,000", + "D. Launch any type of Amazon EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1) with" + ], + "correct": "C. Launch a Nitro-based EC2 instance and attach a Pr ovisioned IOPS SSD EBS volume (io1) with 64,000", + "explanation": "Explanation/Reference: An Amazon EBS volume is a durable, block-level stor age device that you can attach to your instances. After you attach a volume to an instance, you can u se it as you would use a physical hard drive. EBS volumes are flexible. The AWS Nitro System is the underlying platform for the latest generation of EC2 instances that enable s AWS to innovate faster, further reduce the cost of the customers, and deliver added benefits like increase d security and new instance types. Amazon EBS is a persistent block storage volume. It can persist independently from the life of an inst ance. Since the scenario requires you to have an EBS volu me with up to 64,000 IOPS, you have to launch a Nitro-based EC2 instance. Hence, the correct answer in this scenario is: Laun ch a Nitro-based EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1) with 64,000 I OPS. The option that says: Directly attach multiple Inst ance Store volumes in an EC2 instance to deliver maximum IOPS performance is incorrect. Although an Instance Store is a block storage volume, it is not persistent and the data will be gone if the instanc e is restarted from the stopped state (note that th is is different from the OS-level reboot. In OS-level reboot, data still persists in the instance store). An instance store only provides temporary block-level storage for your ins tance. It means that the data in the instance store can be lost if the underlying disk drive fails, if the ins tance stops, and if the instance terminates. The option that says: Launch an Amazon EFS file sys tem and mount it to a Nitro-based Amazon EC2 instance and set the performance mode to Max I/O is incorrect. Although Amazon EFS can provide over 64,000 IOPS, this solution uses a file system and not a block storage volume which is what is ask ed in the scenario. The option that says: Launch an EC2 instance and at tach an io1 EBS volume with 64,000 IOPS is incorrect. In order to achieve the 64,000 IOPS for a provisioned IOPS SSD, you must provision a Nitro- based EC2 instance. The maximum IOPS and throughput are g uaranteed only on Instances built on the Nitro System provisioned with more than 32,000 IOPS . Other instances guarantee up to 32,000 IOPS only. 
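For illustration only, provisioning and attaching such a volume with boto3 could look like the sketch below; the Availability Zone and instance ID are placeholders, and the volume is sized so that the io1 50:1 IOPS-to-GiB ratio allows 64,000 IOPS:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 64,000 IOPS requires at least 1,280 GiB at the 50:1 ratio, and the full
# 64,000 IOPS is only delivered when the volume is attached to a
# Nitro-based instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    VolumeType="io1",
    Size=1300,                       # GiB
    Iops=64000,
)

ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder Nitro-based instance
    Device="/dev/sdf",
)
```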
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-volume-types.html#EBSVolumeTypes_piops https://aws.amazon.com/s3/storage-classes/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /instance-types.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Amazon S3 vs EFS vs EBS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/", + "references": "" + }, + { + "question": "A Solutions Architect designed a serverless archite cture that allows AWS Lambda to access an Amazon DynamoDB table named tutorialsdojo in the US East ( N. Virginia) region. The IAM policy attached to a Lambda function allows it to put and delete items i n the table. The policy must be updated to only all ow two operations in the tutorialsdojo table and prevent o ther DynamoDB tables from being modified. Which of the following IAM policies fulfill this re quirement and follows the principle of granting the least privilege? A.", + "options": [ + "B.", + "C.", + "D." + ], + "correct": "B.", + "explanation": "Explanation/Reference: Every AWS resource is owned by an AWS account, and permissions to create or access a resource are governed by permissions policies. An account admini strator can attach permissions policies to IAM iden tities (that is, users, groups, and roles), and some servi ces (such as AWS Lambda) also support attaching permissions policies to resources. In DynamoDB, the primary resources are tables. Dyna moDB also supports additional resource types, index es, and streams. However, you can create indexes and st reams only in the context of an existing DynamoDB t able. These are referred to as subresources. These resour ces and subresources have unique Amazon Resource Names (ARNs) associated with them. For example, an AWS Account (123456789012) has a Dy namoDB table named Books in the US East (N.Virginia) (us-east-1) region. The ARN of the Boo ks table would be: arn:aws:dynamodb:us-east-1:123456789012:table/Books A policy is an entity that, when attached to an ide ntity or resource, defines their permissions. By us ing an IAM policy and role to control access, it will gran t a Lambda function access to a DynamoDB table It is stated in the scenario that a Lambda function will be used to modify the DynamoDB table named tutorialsdojo. Since you only need to access one ta ble, you will need to indicate that table in the re source element of the IAM policy. Also, you must specify t he effect and action elements that will be generate d in the policy. Hence, the correct answer in this scenario is: { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"TutorialsdojoTablePolicy\", \"Effect\": \"Allow\", \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteItem\" ], \"Resource\": \"arn:aws:dynamodb:us-east-1:12061898120 6:table/tutorialsdojo\" } ] } The IAM policy below is incorrect because the scena rio only requires you to allow the permissions in t he tutorialsdojo table. Having a wildcard: table/* in this policy would allow the Lambda function to modi fy all the DynamoDB tables in your account. { { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"TutorialsdojoTablePolicy\", \"Effect\": \"Allow\", \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteItem\" ], \"Resource\": \"arn:aws:dynamodb:us-east-1:12061898120 6:table/*\" } ] } The IAM policy below is incorrect. The first statem ent is correctly allowing PUT and DELETE actions to the tutorialsdojo DynamoDB table. 
However, the second s tatement counteracts the first one as it allows all DynamoDB actions in the tutorialsdojo table. { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"TutorialsdojoTablePolicy1\", \"Effect\": \"Allow\", \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteIte m\" ], \"Resource\": \"arn:aws:dynamodb:us-east-1:12061898120 61898:table/tutorialsdojo\" }, { \"Sid\": \"TutorialsdojoTablePolicy2\", \"Effect\": \"Allow\", \"Action\": \"dynamodb:*\", \"Resource\": \"arn:aws:dynamodb:us-east-1:12061898120 61898:table/tutorialsdojo\" } ] } The IAM policy below is incorrect. Just like the pr evious option, the first statement of this policy i s correctly allowing PUT and DELETE actions to the tutorialsdoj o DynamoDB table. However, the second statement counteracts the first one as it denies al l DynamoDB actions. Therefore, this policy will not allow any actions on all DynamoDB tables of the AWS account. { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"TutorialsdojoTablePolicy1\", \"Effect\": \"Allow\", \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteItem\" ], \"Resource\": \"arn:aws:dynamodb:us-east-1:12061898120 61898:table/tutorialsdojo\" }, { \"Sid\": \"TutorialsdojoTablePolicy2\", \"Effect\": \"Deny\", \"Action\": \"dynamodb:*\", \"Resource\": \"arn:aws:dynamodb:us-east-1:12061898120 61898:table/*\" } ] } References: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/using-identity-based-policies.html https://docs.aws.amazon.com/IAM/latest/UserGuide/re ference_policies_examples_lambda-access- dynamodb.html https://aws.amazon.com/blogs/security/how-to-create -an-aws-iam-policy-to-grant-aws-lambda-access-to-an - amazon-dynamodb-table/ Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "A company requires all the data stored in the cloud to be encrypted at rest. To easily integrate this with other AWS services, they must have full control over the encryption of the created keys and also the ability to immediately remove the key material from AWS KMS. T he solution should also be able to audit the key us age independently of AWS CloudTrail. Which of the following options will meet this requi rement?", + "options": [ + "A. Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable", + "B. Use AWS Key Management Service to create AWS-owne d CMKs and store the non-extractable key", + "C. Use AWS Key Management Service to create AWS-mana ged CMKs and store the non-extractable key", + "D. Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable" + ], + "correct": "D. Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable", + "explanation": "Explanation/Reference: The AWS Key Management Service (KMS) custom key sto re feature combines the controls provided by AWS CloudHSM with the integration and ease of use of AW S KMS. You can configure your own CloudHSM cluster and authorize AWS KMS to use it as a dedicated key store for your keys rather than the default AWS KMS key store. When you create keys in AWS KMS you can choo se to generate the key material in your CloudHSM cluster. CMKs that are ge nerated in your custom key store never leave the HS Ms in the CloudHSM cluster in plaintext and all AWS KMS o perations that use those keys are only performed in your HSMs. 
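Once a custom key store has been created and connected, creating a CMK whose key material is generated in your CloudHSM cluster is a single CreateKey call; the sketch below is illustrative and the custom key store ID is a placeholder:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Origin AWS_CLOUDHSM tells KMS to generate the key material inside the
# CloudHSM cluster that backs the referenced custom key store.
key = kms.create_key(
    Description="CMK backed by our own CloudHSM cluster",
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId="cks-1234567890abcdef0",  # placeholder custom key store ID
)
print(key["KeyMetadata"]["KeyId"])
```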
AWS KMS can help you integrate with other AWS servi ces to encrypt the data that you store in these ser vices and control access to the keys that decrypt it. To immediately remove the key material from AWS KMS, you can use a custom key store. Take note that each custom key store is associated with an AWS CloudHSM cluster in your AWS account. Therefore, wh en you create an AWS KMS CMK in a custom key store, AWS KMS generates and stores the non-extract able key material for the CMK in an AWS CloudHSM cluster that you own and manage. This is a lso suitable if you want to be able to audit the us age of all your keys independently of AWS KMS or AWS CloudTrai l. Since you control your AWS CloudHSM cluster, you ha ve the option to manage the lifecycle of your CMKs independently of AWS KMS. There are four reasons wh y you might find a custom key store useful: You might have keys that are explicitly required to be protected in a single-tenant HSM or in an HSM o ver which you have direct control. You might have keys that are required to be stored in an HSM that has been validated to FIPS 140-2 lev el 3 overall (the HSMs used in the standard AWS KMS key store are either validated or in the process of bei ng validated to level 2 with level 3 in multiple categ ories). You might need the ability to immediately remove ke y material from AWS KMS and to prove you have done so by independent means. You might have a requirement to be able to audit al l use of your keys independently of AWS KMS or AWS CloudTrail. Hence, the correct answer in this scenario is: Use AWS Key Management Service to create a CMK in a cus tom key store and store the non-extractable key materia l in AWS CloudHSM. The option that says: Use AWS Key Management Servic e to create a CMK in a custom key store and store t he non-extractable key material in Amazon S3 is incorr ect because Amazon S3 is not a suitable storage service to use in storing encryption keys. You have to use AWS CloudHSM instead. The options that say: Use AWS Key Management Servic e to create AWS-owned CMKs and store the non- extractable key material in AWS CloudHSM and Use AW S Key Management Service to create AWS-managed CMKs and store the non-extractable key material in AWS CloudHSM are both incorrect because the scenario requires you to have full cont rol over the encryption of the created key. AWS-owned CMKs and AWS-managed CMKs are managed by AWS. Moreover, these options do not allow you to audit the key usage independently of AWS Clo udTrail. References: https://docs.aws.amazon.com/kms/latest/developergui de/custom-key-store-overview.html https://aws.amazon.com/kms/faqs/ https://aws.amazon.com/blogs/security/are-kms-custo m-key-stores-right-for-you/ Check out this AWS KMS Cheat Sheet: https://tutorialsdojo.com/aws-key-management-servic e-aws-kms/", + "references": "" + }, + { + "question": "An application hosted in EC2 consumes messages from an SQS queue and is integrated with SNS to send ou t an email to you once the process is complete. The O perations team received 5 orders but after a few ho urs, they saw 20 email notifications in their inbox. Which of the following could be the possible culpri t for this issue?", + "options": [ + "A. The web application is not deleting the messages in the SQS queue after it has processed them.", + "B. The web application is set for long polling so th e messages are being sent twice.", + "C. The web application does not have permission to c onsume messages in the SQS queue.", + "D. 
The web application is set to short polling so so me messages are not being picked up" + ], + "correct": "A. The web application is not deleting the messages in the SQS queue after it has processed them.", + "explanation": "Explanation/Reference: Always remember that the messages in the SQS queue will continue to exist even after the EC2 instance has processed it, until you delete that message. You ha ve to ensure that you delete the message after processing to prevent the message from being receiv ed and processed again once the visibility timeout expires. There are three main parts in a distributed messagi ng system: 1. The components of your distributed system (EC2 i nstances) 2. Your queue (distributed on Amazon SQS servers) 3. Messages in the queue. You can set up a system which has several component s that send messages to the queue and receive messages from the queue. The queue redundantly stor es the messages across multiple Amazon SQS servers.Refer to the third step of the SQS Message Lifecycl e: Component 1 sends Message A to a queue, and the mes sage is distributed across the Amazon SQS servers redundantly. When Component 2 is ready to process a message, it consumes messages from the queue, and Message A is returned. While Message A is being processed, it re mains in the queue and isn't returned to subsequent receive requests for the duration of the visibility timeout. Component 2 deletes Message A from the queue to pre vent the message from being received and processed again once the visibility timeout expires. The option that says: The web application is set fo r long polling so the messages are being sent twice is incorrect because long polling helps reduce the cos t of using SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty response s (when messages are available but aren't included in a response). Messages being sent twice in an SQS queue configured with long polling is quite unlikel y. The option that says: The web application is set to short polling so some messages are not being picke d up is incorrect since you are receiving emails from SNS w here messages are certainly being processed. Following the scenario, messages not being picked u p won't result into 20 messages being sent to your inbox. The option that says: The web application does not have permission to consume messages in the SQS queu e is incorrect because not having the correct permiss ions would have resulted in a different response. T he scenario says that messages were properly processed but there were over 20 messages that were sent, he nce, there is no problem with the accessing the queue. References: https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-message- lifecycle.html https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-basic- architecture.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", + "references": "" + }, + { + "question": "A Solutions Architect needs to set up a relational database and come up with a disaster recovery plan to mitigate multi-region failure. The solution require s a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute . Which of the following AWS services can fulfill thi s requirement?", + "options": [ + "A. AWS Global Accelerator", + "B. Amazon Aurora Global Database", + "C. Amazon RDS for PostgreSQL with cross-region read replicas", + "D. 
Amazon DynamoDB global tables" ], "correct": "B. Amazon Aurora Global Database", "explanation": "Explanation/Reference: Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS Regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Aurora Global Database supports storage-based replication that has a latency of less than 1 second. If there is an unplanned outage, one of the secondary regions you assigned can be promoted to read and write capabilities in less than 1 minute. This feature is called Cross-Region Disaster Recovery. An RPO of 1 second and an RTO of less than 1 minute provide you a strong foundation for a global business continuity plan. Hence, the correct answer is: Amazon Aurora Global Database. Amazon DynamoDB global tables is incorrect because it is stated in the scenario that the Solutions Architect needs to create a relational database and not a NoSQL database. When you create a DynamoDB global table, it consists of multiple replica tables (one per AWS Region) that DynamoDB treats as a single unit. Amazon RDS for PostgreSQL with cross-region read replicas is incorrect because this setup is not capable of providing an RPO of 1 second and an RTO of less than 1 minute. The replication of a cross-region RDS read replica is asynchronous and is not as fast compared with Amazon Aurora Global Database. AWS Global Accelerator is incorrect because this is a networking service that simplifies traffic management and improves application performance. AWS Global Accelerator is not a relational database service; therefore, this is not a suitable service to use in this scenario. References: https://aws.amazon.com/rds/aurora/global-database/ https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", "references": "" }, { "question": "A Solutions Architect is hosting a website in an Amazon S3 bucket named tutorialsdojo. The users load the website using the following URL: http://tutorialsdojo.s3-website-us-east-1.amazonaws.com and there is a new requirement to add a JavaScript on the webpages in order to make authenticated HTTP GET requests against the same bucket by using the Amazon S3 API endpoint (tutorialsdojo.s3.amazonaws.com). Upon testing, you noticed that the web browser blocks JavaScript from allowing those requests. Which of the following options is the MOST suitable solution that you should implement for this scenario?", "options": [ "A. Enable Cross-Region Replication (CRR).", "B. Enable Cross-origin resource sharing (CORS) configuration in the bucket.", "C. Enable cross-account access.", "D. Enable Cross-Zone Load Balancing." ], "correct": "B. Enable Cross-origin resource sharing (CORS) configuration in the bucket.", "explanation": "Explanation/Reference: Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. 
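In practical terms, this means attaching a CORS configuration to the bucket. A minimal boto3 sketch for the scenario's bucket, assuming the website origin given in the question, might look like this:

```python
import boto3

s3 = boto3.client("s3")

# Allow JavaScript served from the S3 website endpoint to issue GET requests
# against the S3 API endpoint of the same bucket.
s3.put_bucket_cors(
    Bucket="tutorialsdojo",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://tutorialsdojo.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```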
With CORS support, you can build rich client- side web applications with Amazon S3 and selectively allow c ross-origin access to your Amazon S3 resources. Suppose that you are hosting a website in an Amazon S3 bucket named your-website and your users load t he website endpoint http://your-website.s3-website-us- east-1.amazonaws.com. Now you want to use JavaScrip t on the webpages that are stored in this bucket to be a ble to make authenticated GET and PUT requests agai nst the same bucket by using the Amazon S3 API endpoint for the bucket, your-website.s3.amazonaws.com. A browser would normally block JavaScript from allowi ng those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests f rom your-website.s3- website-us-east-1.amazonaws.com. In this scenario, you can solve the issue by enabli ng the CORS in the S3 bucket. Hence, enabling Cross - origin resource sharing (CORS) configuration in the bucket is the correct answer. Enabling cross-account access is incorrect because cross-account access is a feature in IAM and not in Amazon S3. Enabling Cross-Zone Load Balancing is incorrect bec ause Cross-Zone Load Balancing is only used in ELB and not in S3. Enabling Cross-Region Replication (CRR) is incorrec t because CRR is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions. References: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors .html https://docs.aws.amazon.com/AmazonS3/latest/dev/Man ageCorsUsing.html", + "references": "" + }, + { + "question": "A multi-tiered application hosted in your on-premis es data center is scheduled to be migrated to AWS. The application has a message broker service which uses industry standard messaging APIs and protocols tha t must be migrated as well, without rewriting the mes saging code in your application. Which of the following is the most suitable service that you should use to move your messaging service to AWS?", + "options": [ + "A. Amazon SNS", + "B. Amazon MQ", + "C. Amazon SWF", + "D. Amazon SQS" + ], + "correct": "B. Amazon MQ", + "explanation": "Explanation/Reference: Amazon MQ, Amazon SQS, and Amazon SNS are messaging services that are suitable for anyone from startups to enterprises. If you're using messaging with existing applications and want to move your me ssaging service to the cloud quickly and easily, it is reco mmended that you consider Amazon MQ. It supports in dustry- standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code i n your applications. Hence, Amazon MQ is the correct answer. If you are building brand new applications in the c loud, then it is highly recommended that you consid er Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightwei ght, fully managed message queue and topic services that scale almost infinitely and pro vide simple, easy-to-use APIs. You can use Amazon S QS and SNS to decouple and scale microservices, distribute d systems, and serverless applications, and improve reliability. Amazon SQS is incorrect because although this is a fully managed message queuing service, it does not support an extensive list of industry-standard mess aging APIs and protocol, unlike Amazon MQ. Moreover , using Amazon SQS requires you to do additional chan ges in the messaging code of applications to make i t compatible. 
Amazon SNS is incorrect because SNS is more suitabl e as a pub/sub messaging service instead of a messa ge broker service. Amazon SWF is incorrect because this is a fully-man aged state tracker and task coordinator service and not a messaging service, unlike Amazon MQ, AmazonSQS and Amazon SNS. References: https://aws.amazon.com/amazon-mq/faqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/welcome.html#sqs- difference-from-amazon-mq-sns Check out this Amazon MQ Cheat Sheet: https://tutorialsdojo.com/amazon-mq/", + "references": "" + }, + { + "question": "A company hosts multiple applications in their VPC. While monitoring the system, they noticed that mul tiple port scans are coming in from a specific IP address bloc k that is trying to connect to several AWS resource s inside their VPC. The internal security team has requested that all offending IP addresses be denied for the next 24 hours for security purposes. Which of the following is the best method to quickl y and temporarily deny access from the specified IP addresses?", + "options": [ + "A. Configure the firewall in the operating system of the EC2 instances to deny access from the IP addre ss", + "B. Add a rule in the Security Group of the EC2 insta nces to deny access from the IP Address block.", + "C. Modify the Network Access Control List associated with all public subnets in the VPC to deny access from", + "D. Create a policy in IAM to deny access from the IP Address block." + ], + "correct": "C. Modify the Network Access Control List associated with all public subnets in the VPC to deny access from", + "explanation": "Explanation/Reference: To control the traffic coming in and out of your VP C network, you can use the network access control l ist (ACL). It is an optional layer of security for your VPC th at acts as a firewall for controlling traffic in an d out of one or more subnets. This is the best solution among other options as you can easily add and remove the restr iction in a matter of minutes. Creating a policy in IAM to deny access from the IP Address block is incorrect as an IAM policy does n ot control the inbound and outbound traffic of your VPC. Adding a rule in the Security Group of the EC2 inst ances to deny access from the IP Address block is i ncorrect. Although a Security Group acts as a firewall, it wi ll only control both inbound and outbound traffic a t the instance level and not on the whole VPC. Configuring the firewall in the operating system of the EC2 instances to deny access from the IP addre ss block is incorrect because adding a firewall in the under lying operating system of the EC2 instance is not enough; the attacker can just conne ct to other AWS resources since the network access control list still allows them to do so.", + "references": "http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_ACLs.html Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" + }, + { + "question": "A Forex trading platform, which frequently processe s and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle dat abase. Due to a recent cooling problem in their dat a center, the company urgently needs to migrate their infrastructure to AWS to improve the performance o f their applications. 
As the Solutions Architect, you are r esponsible in ensuring that the database is properl y migrated and should remain available in case of database ser ver failure in the future. Which of the following is the most suitable solutio n to meet the requirement?", + "options": [ + "A. Create an Oracle database in RDS with Multi-AZ de ployments.", + "B. Launch an Oracle Real Application Clusters (RAC) in RDS.", + "C. Launch an Oracle database instance in RDS with Re covery Manager (RMAN) enabled.", + "D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration" + ], + "correct": "A. Create an Oracle database in RDS with Multi-AZ de ployments.", + "explanation": "Explanation/Reference: Amazon RDS Multi-AZ deployments provide enhanced av ailability and durability for Database (DB) Instanc es, making them a natural fit for production database w orkloads. When you provision a Multi-AZ DB Instance , Amazon RDS automatically creates a primary DB Insta nce and synchronously replicates the data to a stan dby instance in a different Availability Zone (AZ). Eac h AZ runs on its own physically distinct, independe nt infrastructure, and is engineered to be highly reli able. In case of an infrastructure failure, Amazon RDS pe rforms an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your D B Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention. In this scenario, the best RDS configuration to use is an Oracle database in RDS with Multi-AZ deploym ents to ensure high availability even if the primary databa se instance goes down. Hence, creating an Oracle da tabase in RDS with Multi-AZ deployments is the correct ans wer. Launching an Oracle database instance in RDS with R ecovery Manager (RMAN) enabled and launching an Oracle Real Application Clusters (RAC) in RDS are i ncorrect because Oracle RMAN and RAC are not supported in RDS. The option that says: Convert the database schema u sing the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle data base to a non-cluster Amazon Aurora with a single instance is incorrect because although this solution is feasible, it takes time to migrate your Oracle database to Aurora, which is not acceptable. Based on this option, the Aurora database is only using a single instance with no Read Replica and is not configured as an Amazon Aurora DB cluster, which could have improved the availability of the database. References: https://aws.amazon.com/rds/details/multi-az/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/Concepts.MultiAZ.html Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "An application is hosted in an AWS Fargate cluster that runs a batch job whenever an object is loaded on an Amazon S3 bucket. The minimum number of ECS Tasks i s initially set to 1 to save on costs, and it will only increase the task count based on the new objects up loaded on the S3 bucket. Once processing is done, t he bucket becomes empty and the ECS Task count should be back to 1. Which is the most suitable option to implement with the LEAST amount of effort?", + "options": [ + "A. 
Set up an alarm in CloudWatch to monitor CloudTra il since the S3 object-level operations are recorde d on", + "B. Set up a CloudWatch Event rule to detect S3 objec t PUT operations and set the target to a Lambda fun ction", + "C. Set up a CloudWatch Event rule to detect S3 objec t PUT operations and set the target to the ECS clus ter", + "D. Set up an alarm in CloudWatch to monitor CloudTra il since this S3 object-level operations are record ed on" + ], + "correct": "C. Set up a CloudWatch Event rule to detect S3 objec t PUT operations and set the target to the ECS clus ter", + "explanation": "Explanation/Reference: You can use CloudWatch Events to run Amazon ECS tas ks when certain AWS events occur. You can set up a CloudWatch Events rule that runs an Amazon ECS task whenever a file is uploaded to a certain Amazon S3 bucket using the Amazon S3 PUT operation. You can also declare a reduced number of ECS tasks whenever a file is deleted on the S3 bucket using t he DELETE operation. First, you must create a CloudWatch Events rule for the S3 service that will watch for object-level operations PUT and DELETE objects. For object-leve l operations, it is required to create a CloudTrail trail first. On the Targets section, select the \"ECS task\" and i nput the needed values such as the cluster name, ta sk definition and the task count. You need two rules one for the scale-up and another for the scale-down of the ECS task count. Hence, the correct answer is: Set up a CloudWatch E vent rule to detect S3 object PUT operations and set the target to the ECS cluster with the increase d number of tasks. Create another rule to detect S3 DELETE operations and set the target to the ECS Cluster wi th 1 as the Task count. The option that says: Set up a CloudWatch Event rul e to detect S3 object PUT operations and set the target to a Lambda function that will run Amazon EC S API command to increase the number of tasks on ECS. Create another rule to detect S3 DELE TE operations and run the Lambda function to reduce the number of ECS tasks is incorrect. Althou gh this solution meets the requirement, creating yo ur own Lambda function for this scenario is not really nec essary. It is much simpler to control ECS task dire ctly as target for the CloudWatch Event rule. Take note tha t the scenario asks for a solution that is the easi est to implement. The option that says: Set up an alarm in CloudWatch to monitor CloudTrail since the S3 object-level operations are recorded on CloudTrail. Create two L ambda functions for increasing/decreasing the ECS task count. Set these as respective targets for the CloudWatch Alarm depending on the S3 event is incorrect because using CloudTrail, CloudWatch A larm, and two Lambda functions creates an unnecessary complexity to what you want to achieve. CloudWatch Events can directly target an ECS task on the Targets section when you create a new rule. The option that says: Set up an alarm in CloudWatch to monitor CloudTrail since this S3 object-level operations are recorded on CloudTrail. Set two alar m actions to update ECS task count to scale- out/scale-in depending on the S3 event is incorrect because you can't directly set CloudWatch Alarms t o update the ECS task count. You have to use CloudWatch Even ts instead. 
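To make the rule wiring concrete, here is a hedged boto3 sketch of the scale-up rule only; every ARN, bucket name, and subnet ID is a hypothetical placeholder, a mirror rule for DeleteObject with a task count of 1 would handle scale-down, and object-level S3 events still require a CloudTrail trail that records data events for the bucket:

```python
import json
import boto3

events = boto3.client("events")

# Rule that matches object-level PutObject calls on the bucket (via CloudTrail).
events.put_rule(
    Name="s3-upload-scale-up",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["PutObject"],
            "requestParameters": {"bucketName": ["example-batch-input-bucket"]},
        },
    }),
)

# Target the ECS cluster directly and run the batch task definition on Fargate.
events.put_targets(
    Rule="s3-upload-scale-up",
    Targets=[{
        "Id": "run-batch-task",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/batch-cluster",
        "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/batch-job:1",
            "TaskCount": 5,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"]}
            },
        },
    }],
)
```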
References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /events/CloudWatch-Events-tutorial-ECS.html https://docs.aws.amazon.com/AmazonCloudWatch/latest /events/Create-CloudWatch-Events-Rule.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Amazon CloudWatch Overview: https://www.youtube.com/watch?v=q0DmxfyGkeU", + "references": "" + }, + { + "question": "In a government agency that you are working for, yo u have been assigned to put confidential tax documents on AWS cloud. However, there is a concern from a security perspective on what can be put on AWS. What are the features in AWS that can ensure data s ecurity for your confidential documents? (Select TW O.)", + "options": [ + "A. Public Data Set Volume Encryption", + "B. S3 On-Premises Data Encryption", + "C. S3 Server-Side Encryption", + "D. EBS On-Premises Data Encryption" + ], + "correct": "", + "explanation": "Explanation/Reference: You can secure the privacy of your data in AWS, bot h at rest and in-transit, through encryption. If yo ur data is stored in EBS Volumes, you can enable EBS Encryptio n and if it is stored on Amazon S3, you can enable client-side and server-side encryption. Public Data Set Volume Encryption is incorrect as p ublic data sets are designed to be publicly accessi ble. EBS On-Premises Data Encryption and S3 On-Premises Data Encryption are both incorrect as there is no s uch thing as On-Premises Data Encryption for S3 and EBS as these services are in the AWS cloud and not on your on-premises network. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngEncryption.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSEncryption.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /using-public-data-sets.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A car dealership website hosted in Amazon EC2 store s car listings in an Amazon Aurora database managed by Amazon RDS. Once a vehicle has been sold, its data must be removed from the current listings and forwa rded to a distributed processing system. Which of the following options can satisfy the give n requirement?", + "options": [ + "A. Create an RDS event subscription and send the not ifications to AWS Lambda. Configure the Lambda", + "B. Create an RDS event subscription and send the not ifications to Amazon SNS. Configure the SNS topic t o", + "C. Create a native function or a stored procedure th at invokes a Lambda function. Configure the Lambda", + "D. Create an RDS event subscription and send the not ifications to Amazon SQS. Configure the SQS queues to", + "A. Attach an EBS volume in your EC2 instance. Use Am azon S3 to store your backup data and configure a", + "B. Attach an EBS volume in your EC2 instance. Use Am azon S3 to store your backup data and configure a", + "C. Attach an instance store volume in your EC2 insta nce. Use Amazon S3 to store your backup data and", + "D. Attach an instance store volume in your existing EC2 instance. Use Amazon S3 to store your backup da ta" + ], + "correct": "B. Attach an EBS volume in your EC2 instance. 
Use Am azon S3 to store your backup data and configure a", + "explanation": "Explanation/Reference: Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications , big data analytics engines, file systems, and med ia workflows are widely deployed on Amazon EBS. Amazon Simple Storage Service (Amazon S3) is an obj ect storage service that offers industry-leading scalability, data availability, security, and perfo rmance. This means customers of all sizes and indus tries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterpri se applications, IoT devices, and big data analytic s. In an S3 Lifecycle configuration, you can define ru les to transition objects from one storage class to another to save on storage costs. Amazon S3 supports a waterfa ll model for transitioning between storage classes, as shown in the diagram below: In this scenario, three services are required to im plement this solution. The mission-critical workloa ds mean that you need to have a persistent block storage vo lume and the designed service for this is Amazon EB S volumes. The second workload needs to have an objec t storage service, such as Amazon S3, to store your backup data. Amazon S3 enables you to configure the lifecycle policy from S3 Standard to different storage classes. For the last one, it needs archive storage such as Amazon S3 Glacier. Hence, the correct answer in this scenario is: Atta ch an EBS volume in your EC2 instance. Use Amazon S 3 to store your backup data and configure a lifecycle po licy to transition your objects to Amazon S3 Glacie r. The option that says: Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup da ta and configure a lifecycle policy to transition your objects to Amazon S3 One Zone-IA is incorrect because this lifecycle policy will transi tion your objects into an infrequently accessed sto rage class and not a storage class for data archiving. The option that says: Attach an instance store volu me in your existing EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy t o transition your objects to Amazon S3 Glacier is incorrect because an Instance Store volu me is simply a temporary block-level storage for EC 2 instances. Also, you can't attach instance store vo lumes to an instance after you've launched it. You can specify the instance store volumes for your instance only w hen you launch it. The option that says: Attach an instance store volu me in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy t o transition your objects to Amazon S3 One Zone- IA is incorrect. Just like the previous option, the use of instance store volume is not suitable for m ission- critical workloads because the data can be lost if the under lying disk drive fails, the instance stops, or if t he instance is terminated. In addition, Amazon S3 Glacier is a mor e suitable option for data archival instead of Amaz on S3 One Zone-IA. 
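To ground the lifecycle idea, a minimal boto3 sketch of such a rule is shown below; the bucket name is hypothetical and the 730-day threshold approximates the two-year requirement:

```python
import boto3

# Hedged sketch: transition every object to S3 Glacier once it is older than
# roughly two years (~730 days). Bucket name is a placeholder.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-user-files",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-two-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 730, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```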
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /AmazonEBS.html https://aws.amazon.com/s3/storage-classes/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Storage Services Cheat Sheets: https://tutorialsdojo.com/aws-cheat-sheets-storage- services/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a company whic h has multiple VPCs in various AWS regions. The Arc hitect is assigned to set up a logging system which will t rack all of the changes made to their AWS resources in all regions, including the configurations made in IAM, CloudFront, AWS WAF, and Route 53. In order to pass the compliance requirements, the solution must ensure t he security, integrity, and durability of the log d ata. It should also provide an event history of all API cal ls made in AWS Management Console and AWS CLI. Which of the following solutions is the best fit fo r this scenario?", + "options": [ + "A. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi -", + "B. Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is -multi-", + "C. Set up a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi -", + "D. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi -" + ], + "correct": "A. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi -", + "explanation": "Explanation/Reference: An event in CloudTrail is the record of an activity in an AWS account. This activity can be an action taken by a user, role, or service that is monitorable by Cloud Trail. CloudTrail events provide a history of both API and non- API account activity made through the AWS Managemen t Console, AWS SDKs, command-line tools, and other AWS services. There are two types of events that ca n be logged in CloudTrail: management events and data events. By default, trai ls log management events, but not data events. A trail can be applied to all regions or a single r egion. As a best practice, create a trail that appl ies to all regions in the AWS partition in which you are working. This is the default setting when you create a trail in the CloudTrail console. For most services, events are recorded in the regio n where the action occurred. For global services su ch as AWS Identity and Access Management (IAM), AWS STS, Amazon CloudFront, and Route 53, events are delivered to any trail that includes global service s, and are logged as occurring in US East (N. Virgi nia) Region. In this scenario, the company requires a secure and durable logging solution that will track all of th e activities of all AWS resources in all regions. CloudTrail can be used for this case with multi-region trail enabled , however, it will only cover the activities of the regional serv ices (EC2, S3, RDS etc.) and not for global service s such as IAM, CloudFront, AWS WAF, and Route 53. In order to satisfy the requirement, you have to add the --include-global-service-events parameter in your AWS CLI command. The option that says: Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include -global-service-events parameters then encrypt log files using KMS encryption. 
Apply Multi Factor Auth entication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the lo gs by configuring the bucket policies is correct because it provides security, integrity, and durabi lity to your log data and in addition, it has the - include- global- service-events parameter enabled which will also in clude activity from global services such as IAM, Ro ute 53, AWS WAF, and CloudFront. The option that says: Set up a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include -global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication ( MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch. The option that says: Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console an d also pass the --is-multi-region-trail parameter then enc rypt log files using KMS encryption. Apply Multi Fa ctor Authentication (MFA) Delete on the S3 bucket and en sure that only authorized users can access the logs by configuring the bucket policies is incorrect becaus e you need to use CloudTrail instead of CloudWatch. In addition, the --include-global-service-events param eter is also missing in this setup. The option that says: Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --no-include-global -service-events parameters then encrypt log files u sing KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only autho rized users can access the logs by configuring the bucket policies is incorrect because the --is-multi-regio n-trail is not enough as you also need to add the --include-global - service-events parameter and not --no-include-glo bal- service-events. References: https://docs.aws.amazon.com/awscloudtrail/latest/us erguide/cloudtrail-concepts.html#cloudtrail-concept s- global-service-events http://docs.aws.amazon.com/IAM/latest/UserGuide/clo udtrail-integration.html https://docs.aws.amazon.com/awscloudtrail/latest/us erguide/cloudtrail-create-and-update-a-trail-by-usi ng- the- aws-cli.html Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", + "references": "" + }, + { + "question": "An online shopping platform is hosted on an Auto Sc aling group of Spot EC2 instances and uses Amazon Aurora PostgreSQL as its database. There is a requi rement to optimize your database workloads in your cluster where you have to direct the write operatio ns of the production traffic to your high-capacity instances and point the reporting queries sent by y our internal staff to the low-capacity instances. Which is the most suitable configuration for your a pplication as well as your Aurora database cluster to achieve this requirement?", + "options": [ + "A. In your application, use the instance endpoint of your Aurora database to handle the incoming produc tion", + "B. Configure your application to use the reader endp oint for both production traffic and reporting quer ies, which", + "D. Create a custom endpoint in Aurora based on the s pecified criteria for the production traffic and an other" + ], + "correct": "D. 
Create a custom endpoint in Aurora based on the s pecified criteria for the production traffic and an other", + "explanation": "Explanation/Reference: Amazon Aurora typically involves a cluster of DB in stances instead of a single instance. Each connecti on is handled by a specific DB instance. When you conn ect to an Aurora cluster, the host name and port th at you specify point to an intermediate handler called an endpoint. Aurora uses the endpoint mechanism to abstract these connections. Thus, you don't have to hardcode all the hostnames or write your own logic for load-balancing and rerouting connections when some DB instances aren't available. For certain Aurora tasks, different instances or gr oups of instances perform different roles. For exam ple, the primary instance handles all data definition langua ge (DDL) and data manipulation language (DML) statements. Up to 15 Aurora Replicas handle read-on ly query traffic. Using endpoints, you can map each connection to the appropriate instance or group of instances based o n your use case. For example, to perform DDL statements yo u can connect to whichever instance is the primary instance. To perform queries, you can conne ct to the reader endpoint, with Aurora automaticall y performing load-balancing among all the Aurora Repl icas. For clusters with DB instances of different capacities or configurations, you can connect to cu stom endpoints associated with different subsets of DB instances. For diagnosis or tuning, you can connect to a specific instance endpoint to examine details about a specific DB instance. The custom endpoint provides load-balanced database connections based on criteria other than the read- only or read-write capability of the DB instances. For example, you might define a custom endpoint to connect to instances that use a particular AWS instance cla ss or a particular DB parameter group. Then you might tell particular groups of users about this cu stom endpoint. For example, you might direct intern al users to low-capacity instances for report generation or ad hoc (one-time) querying, and direct production tra ffic to high- capacity instances. Hence, creating a custom endpoi nt in Aurora based on the specified criteria for t he production traffic and another custom endpoint to h andle the reporting queries is the correct answer. Configuring your application to use the reader endp oint for both production traffic and reporting quer ies, which will enable your Aurora database to automatically p erform load-balancing among all the Aurora Replicas is incorrect because although it is true that a reader endpoint enables your Aurora database to automatic ally perform load-balancing among all the Aurora Replica s, it is quite limited to doing read operations onl y. You still need to use a custom endpoint to load-balance the d atabase connections based on the specified criteria . The option that says: In your application, use the instance endpoint of your Aurora database to handle the incoming production traffic and use the cluster end point to handle reporting queries is incorrect because a cluster endpoint (also known as a writer endpoint) for an Aurora DB cluster simply connects to the current primary DB instance for that DB cluster. Th is endpoint can perform write operations in the dat abase such as DDL statements, which is perfect for handli ng production traffic but not suitable for handling queries for reporting since there will be no write database ope rations that will be sent. 
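As a hedged illustration of the custom endpoint being recommended here, the boto3 sketch below pins a reporting endpoint to the low-capacity replicas; the cluster and instance identifiers are hypothetical placeholders:

```python
import boto3

# Hedged sketch: a custom READER endpoint limited to the low-capacity replicas,
# so internal reporting queries land only on those instances.
rds = boto3.client("rds")

rds.create_db_cluster_endpoint(
    DBClusterIdentifier="prod-aurora-cluster",
    DBClusterEndpointIdentifier="reporting-endpoint",
    EndpointType="READER",
    StaticMembers=["aurora-replica-small-1", "aurora-replica-small-2"],
)
```

A second custom endpoint with different StaticMembers could expose the high-capacity instances to production traffic, which is exactly the kind of control the cluster (writer) endpoint does not offer.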
Moreover, the endpoint d oes not point to lower-capacity or high-capacity instances as per the requirement. A better solution for this is to use a custom endpoint. The option that says: Do nothing since by default, Aurora will automatically direct the production traffic to your high-capacity instances and the rep orting queries to your low-capacity instances is incorrect because Aurora does not do this by defaul t. You have to create custom endpoints in order to accomplish this requirement.", + "references": "https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/Aurora.Overview.Endpoints.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/" + }, + { + "question": "A company is using Amazon S3 to store frequently ac cessed data. When an object is created or deleted, the S3 bucket will send an event notification to the Am azon SQS queue. A solutions architect needs to crea te a solution that will notify the development and opera tions team about the created or deleted objects. Which of the following would satisfy this requireme nt?", + "options": [ + "A. Create a new Amazon SNS FIFO topic for the other team. Grant Amazon S3 permission to send the", + "B. Set up another Amazon SQS queue for the other tea m. Grant Amazon S3 permission to send a notificatio n", + "C. Set up an Amazon SNS topic and configure two Amaz on SQS queues to poll the SNS topic. Grant Amazon", + "D. Create an Amazon SNS topic and configure two Amaz on SQS queues to subscribe to the topic. Grant" + ], + "correct": "", + "explanation": "Explanation/Reference: The Amazon S3 notification feature enables you to r eceive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the e vents you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You stor e this configuration in the notification subresource that is associated with a bucket. Amazon S3 supports the following destinations where it can publish events: - Amazon Simple Notification Service (Amazon SNS) t opic - Amazon Simple Queue Service (Amazon SQS) queue - AWS Lambda In Amazon SNS, the fanout scenario is when a messag e published to an SNS topic is replicated and pushe d to multiple endpoints, such as Amazon SQS queues, HTTP (S) endpoints, and Lambda functions. This allows for parallel asynchronous processing. For example, you can develop an application that pu blishes a message to an SNS topic whenever an order is placed for a product. Then, SQS queues that are sub scribed to the SNS topic receive identical notifications for the new order. An Amazon Elastic Compute Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle the pr ocessing or fulfillment of the order. And you can a ttach another Amazon EC2 server instance to a data wareho use for analysis of all orders received. Based on the given scenario, the existing setup sen ds the event notification to an SQS queue. Since yo u need to send the notification to the development and ope rations team, you can use a combination of Amazon SNS and SQS. By using the message fanout pat tern, you can create a topic and use two Amazon SQS queues to subscribe to the topic. If Amazon SNS receives an event notification, it will publish th e message to both subscribers. 
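A hedged boto3 sketch of that fanout wiring follows; the topic, queue, and bucket names are hypothetical, and the SNS topic policy and SQS queue policies that authorize S3 to publish and SNS to deliver are omitted for brevity:

```python
import boto3

sns = boto3.client("sns")
s3 = boto3.client("s3")

# One topic, fanned out to two existing queues (ARNs are placeholders).
topic_arn = sns.create_topic(Name="s3-object-events")["TopicArn"]
for queue_arn in (
    "arn:aws:sqs:us-east-1:111122223333:dev-team-queue",
    "arn:aws:sqs:us-east-1:111122223333:ops-team-queue",
):
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Point the bucket's create/delete notifications at the single SNS topic.
s3.put_bucket_notification_configuration(
    Bucket="example-data-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]}
        ]
    },
)
```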
Take note that Amazon S3 event notifications are designed to be delivered at least once and to one destination only. You cannot attach two or more SNS topics or SQS queues for S3 event notification. Therefore, you must send the event notification to Amazon SNS. Hence, the correct answer is: Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic. The option that says: Set up another Amazon SQS queue for the other team. Grant Amazon S3 permission to send a notification to the second SQS queue is incorrect because you can only add one SQS queue or SNS topic at a time for Amazon S3 event notifications. If you need to send the events to multiple subscribers, you should implement a message fanout pattern with Amazon SNS and Amazon SQS. The option that says: Create a new Amazon SNS FIFO topic for the other team. Grant Amazon S3 permission to send the notification to the second SNS topic is incorrect. Just as mentioned in the previous option, you can only add one SQS queue or SNS topic at a time for Amazon S3 event notifications. In addition, neither an Amazon SNS FIFO topic nor an Amazon SQS FIFO queue is warranted in this scenario. Both of them can be used together to provide strict message ordering and message deduplication. The FIFO capabilities of each of these services work together to act as a fully managed service to integrate distributed applications that require data consistency in near-real-time. The option that says: Set up an Amazon SNS topic and configure two Amazon SQS queues to poll the SNS topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic is incorrect because you can't poll Amazon SNS. Instead of configuring queues to poll Amazon SNS, you should configure each Amazon SQS queue to subscribe to the SNS topic. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ways-to-add-notification-config-to-bucket.html https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-overview https://docs.aws.amazon.com/sns/latest/dg/welcome.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Amazon SNS Overview: https://www.youtube.com/watch?v=ft5R45lEUJ8", + "references": "" + }, + { + "question": "A company plans to launch an Amazon EC2 instance in a private subnet for its internal corporate web portal. For security purposes, the EC2 instance must send data to Amazon DynamoDB and Amazon S3 via private endpoints that don't pass through the public Internet. Which of the following can meet the above requirements?", + "options": [ + "A. Use AWS VPN CloudHub to route all access to S3 and DynamoDB via private endpoints.", + "B. Use AWS Transit Gateway to route all access to S3 and DynamoDB via private endpoints.", + "C. Use AWS Direct Connect to route all access to S3 and DynamoDB via private endpoints.", + "D. Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints." + ], + "correct": "D. Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints.", + "explanation": "Explanation/Reference: A VPC endpoint allows you to privately connect your VPC to supported AWS and VPC endpoint services powered by AWS PrivateLink without needing an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
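A hedged sketch of creating the two gateway endpoints with boto3 is shown below; the VPC ID, route table ID, and the Region embedded in the service names are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints for S3 and DynamoDB add routes to the private subnet's
# route table so traffic stays on the Amazon network (no IGW or NAT needed).
for service in ("com.amazonaws.us-east-1.s3", "com.amazonaws.us-east-1.dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=service,
        RouteTableIds=["rtb-0123456789abcdef0"],
    )
```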
Instances in your VP C do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not lea ve the Amazon network. In the scenario, you are asked to configure private endpoints to send data to Amazon DynamoDB and Amazon S3 without accessing the public Internet. Am ong the options given, VPC endpoint is the most suitable service that will allow you to use private IP addresses to access both DynamoDB and S3 withou t any exposure to the public internet. Hence, the correct answer is the option that says: Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints. The option that says: Use AWS Transit Gateway to ro ute all access in S3 and DynamoDB to a public endpoint is incorrect because a Transit Gateway sim ply connects your VPC and on-premises networks through a central hub. It acts as a cloud router th at allows you to integrate multiple networks. The option that says: Use AWS Direct Connect to rou te all access to S3 and DynamoDB via private endpoints is incorrect because AWS Direct Connect i s primarily used to establish a dedicated network connection from your premises to AWS. The scenario didn't say that the company is using its on-premise s server or has a hybrid cloud architecture. The option that says: Use AWS VPN CloudHub to route all access in S3 and DynamoDB to a private endpoint is incorrect because AWS VPN CloudHub is m ainly used to provide secure communication between remote sites and not for creating a private endpoint to access Amazon S3 and DynamoDB within the Amazon network. References: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/vpc-endpoints-dynamodb.html https://docs.aws.amazon.com/glue/latest/dg/vpc-endp oints-s3.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company hosted a web application in an Auto Scali ng group of EC2 instances. The IT manager is concer ned about the over-provisioning of the resources that c an cause higher operating costs. A Solutions Archit ect has been instructed to create a cost-effective solution without affecting the performance of the applicati on. Which dynamic scaling policy should be used to sati sfy this requirement?", + "options": [ + "A. Use simple scaling.", + "B. Use suspend and resume scaling.", + "C. Use scheduled scaling.", + "D. Use target tracking scaling." + ], + "correct": "D. Use target tracking scaling.", + "explanation": "Explanation/Reference: An Auto Scaling group contains a collection of Amaz on EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as hea lth check replacements and scaling policies. Both maintaining the number of instances in an Auto Scal ing group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling servic e. The size of an Auto Scaling group depends on the number of instances that you set as the desired cap acity. You can adjust its size to meet demand, eith er manually or by using automatic scaling. Step scaling policies and simple scaling policies a re two of the dynamic scaling options available for you to use. Both require you to create CloudWatch alarms for th e scaling policies. Both require you to specify the high and low thresholds for the alarms. 
Both require you to define whether to add or remove instances, and how many, or set the group to an exact size. The main differe nce between the policy types is the step adjustments that you get with step scaling policies . When step adjustments are applied, and they incre ase or decrease the current capacity of your Auto Scaling group, the adjustments vary based on the size of th e alarm breach. The primary issue with simple scaling is that after a scaling activity is started, the policy must wai t for the scaling activity or health check replacement to com plete and the cooldown period to expire before responding to additional alarms. Cooldown periods h elp to prevent the initiation of additional scaling activities before the effects of previous activities are visib le. With a target tracking scaling policy, you can incr ease or decrease the current capacity of the group based on a target value for a specific metric. This policy wil l help resolve the over-provisioning of your resour ces. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the targ et value, a target tracking scaling policy also adj usts to changes in the metric due to a changing load patter n. Hence, the correct answer is: Use target tracking s caling. The option that says: Use simple scaling is incorre ct because you need to wait for the cooldown period to complete before initiating additional scaling activ ities. Target tracking or step scaling policies can trigger a scaling activity immediately without waiting for th e cooldown period to expire. The option that says: Use scheduled scaling is inco rrect because this policy is mainly used for predic table traffic patterns. You need to use the target tracking scali ng policy to optimize the cost of your infrastructu re without affecting the performance. The option that says: Use suspend and resume scalin g is incorrect because this type is used to tempora rily pause scaling activities triggered by your scaling policies and scheduled actions. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-scaling-target-tracking.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/AutoScalingGroup.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "A company needs to design an online analytics appli cation that uses Redshift Cluster for its data ware house. Which of the following services allows them to moni tor all API calls in Redshift instance and can also provide secured data for auditing and compliance purposes?", + "options": [ + "A. AWS CloudTrail", + "B. Amazon CloudWatch", + "C. AWS X-Ray", + "D. Amazon Redshift Spectrum" + ], + "correct": "A. AWS CloudTrail", + "explanation": "Explanation/Reference: AWS CloudTrail is a service that enables governance , compliance, operational auditing, and risk auditi ng of your AWS account. With CloudTrail, you can log, con tinuously monitor, and retain account activity related to actions across your AWS infrastructure. By default, CloudTrail is enabled on your AWS accou nt when you create it. When activity occurs in your AWS acc ount, that activity is recorded in a CloudTrail eve nt. You can easily view recent events in the CloudTrail console by going to Event history. 
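That event history can also be queried programmatically; a minimal hedged sketch that narrows recent events down to Redshift API calls follows (the event-source string shown is the commonly used one, but confirm it for your Region and partition):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# List recent management events emitted by the Redshift API.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "redshift.amazonaws.com"}
    ],
    MaxResults=50,
)

for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```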
CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, API calls, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Hence, the correct answer is: AWS CloudTrail. Amazon CloudWatch is incorrect. Although this is also a monitoring service, it cannot track the API calls to your AWS resources. AWS X-Ray is incorrect because this is not a suitable service to use to track each API call to your AWS resources. It just helps you debug and analyze your microservices applications with request tracing so you can find the root cause of issues and performance. Amazon Redshift Spectrum is incorrect because this is not a monitoring service but rather a feature of Amazon Redshift that enables you to query and analyze all of your data in Amazon S3 using the open data formats you already use, with no data loading or transformations needed. References: https://aws.amazon.com/cloudtrail/ https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", + "references": "" + }, + { + "question": "A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity but it receives bursts of traffic within seconds whenever there is a new product announcement. The Solutions Architect needs to create a solution that will allow users around the globe to access the data using an API. What should the Solutions Architect do to meet the above requirement?", + "options": [ + "A. Create an API using Amazon API Gateway and use the Amazon ECS cluster with Service Auto Scaling to", + "B. Create an API using Amazon API Gateway and use Amazon Elastic Beanstalk with Auto Scaling to handle", + "C. Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2 instances to", + "D. Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds." + ], + "correct": "D. Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds.", + "explanation": "Explanation/Reference: AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the function returns a response, it stays active and waits to process additional events. If you invoke the function again while the first event is being processed, Lambda initializes another instance, and the function processes the two events concurrently. As more events come in, Lambda routes them to available instances and creates new instances as needed. When the number of requests decreases, Lambda stops unused instances to free up the scaling capacity for other functions. Your functions' concurrency is the number of instances that serve requests at a given time.
For an ini tial burst of traffic, your functions' cumulative concurrency in a Region can reach an initial level of between 5 00 and 3000, which varies per Region. Based on the given scenario, you need to create a s olution that will satisfy the two requirements. The first requirement is to create a solution that will allow the users to access the data using an API. To impl ement this solution, you can use Amazon API Gateway. The secon d requirement is to handle the burst of traffic wit hin seconds. You should use AWS Lambda in this scenario because Lambda functions can absorb reasonable bursts of traffic for approximately 15-3 0 minutes. Lambda can scale faster than the regular Auto Scali ng feature of Amazon EC2, Amazon Elastic Beanstalk, or Amazon ECS. This is because AWS Lambda is more ligh tweight than other computing services. Under the hood, Lambda can run your code to thousands of available AWS-managed EC2 instances (that could already be running) within seconds to accommodate t raffic. This is faster than the Auto Scaling proces s of launching new EC2 instances that could take a few m inutes or so. An alternative is to overprovision yo ur compute capacity but that will incur significant co sts. The best option to implement given the require ments is a combination of AWS Lambda and Amazon API Gateway. Hence, the correct answer is: Create an API using A mazon API Gateway and use AWS Lambda to handle the bursts of traffic. The option that says: Create an API using Amazon AP I Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of t raffic in seconds is incorrect. AWS Lambda is a better option than Amazon ECS since it can handle a sudden burst of traffic within seconds and not min utes. The option that says: Create an API using Amazon AP I Gateway and use Amazon Elastic Beanstalk with Auto Scaling to handle the bursts of traffic i n seconds is incorrect because just like the previo us option, the use of Auto Scaling has a delay of a few minutes as it launches new EC2 instances that will be used by Amazon Elastic Beanstalk. The option that says: Create an API using Amazon AP I Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffi c in seconds is incorrect because the processing time of Amazon EC2 Auto Scaling to provision new re sources takes minutes. Take note that in the scenar io, a burst of traffic within seconds is expected to happ en. References: https://aws.amazon.com/blogs/startups/from-0-to-100 -k-in-seconds-instant-scale-with-aws-lambda/ https://docs.aws.amazon.com/lambda/latest/dg/invoca tion-scaling.html Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/", + "references": "" + }, + { + "question": "A company has a cloud architecture that is composed of Linux and Windows EC2 instances that process high volumes of financial data 24 hours a day, 7 da ys a week. To ensure high availability of the syste ms, the Solutions Architect needs to create a solution that allows them to monitor the memory and disk utilization metrics of all the instances. Which of the following is the most suitable monitor ing solution to implement?", + "options": [ + "A. Enable the Enhanced Monitoring option in EC2 and install CloudWatch agent to all the EC2 instances t o be", + "B. Use Amazon Inspector and install the Inspector ag ent to all EC2 instances.", + "C. Install the CloudWatch agent to all the EC2 insta nces that gathers the memory and disk utilization d ata.", + "D. 
Use the default CloudWatch configuration to EC2 i nstances where the memory and disk utilization metr ics" + ], + "correct": "C. Install the CloudWatch agent to all the EC2 insta nces that gathers the memory and disk utilization d ata.", + "explanation": "Explanation/Reference: Amazon CloudWatch has available Amazon EC2 Metrics for you to use for monitoring CPU utilization, Network utilization, Disk performance, and Disk Rea ds/Writes. In case you need to monitor the below items, you need to prepare a custom metric using a Perl or other shell script, as there are no ready t o use metrics for: Memory utilization Disk swap utilization Disk space utilization Page file utilization Log collection Take note that there is a multi-platform CloudWatch agent which can be installed on both Linux and Windows-based instances. You can use a single agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. This agent supports both Windows Server and Linux and enables you to select the metrics to be collected, including sub-resource metrics such as per-CPU core . It is recommended that you use the new agent instead of t he older monitoring scripts to collect metrics and logs. Hence, the correct answer is: Install the CloudWatc h agent to all the EC2 instances that gathers the memory and disk utilization data. View the custom m etrics in the Amazon CloudWatch console. The option that says: Use the default CloudWatch co nfiguration to EC2 instances where the memory and d isk utilization metrics are already available. Install the AWS Systems Manager (SSM) Agent to all the EC2 instances is incorrect because, by default, CloudWa tch does not automatically provide memory and disk utilization metrics of your instances. You have to set up custom CloudWatch metrics to monitor the memory, disk swap, disk space, and page file utilization of your instances. The option that says: Enable the Enhanced Monitorin g option in EC2 and install CloudWatch agent to all the EC2 instances to be able to view the memory and dis k utilization in the CloudWatch dashboard is incorr ect because Enhanced Monitoring is a feature of Amazon RDS. By default, Enhanced Monitoring metrics are stored for 30 days in the CloudWatch Logs. The option that says: Use Amazon Inspector and inst all the Inspector agent to all EC2 instances is inc orrect because Amazon Inspector is an automated security a ssessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances. It does not provide a custom metric to track the memory and disk utilization of each an d every EC2 instance in your VPC. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /monitoring_ec2.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /mon-scripts.html#using_put_script Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ CloudWatch Agent vs SSM Agent vs Custom Daemon Scri pts: https://tutorialsdojo.com/cloudwatch-agent-vs-ssm-a gent-vs-custom-daemon-scripts/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "A company is in the process of migrating their appl ications to AWS. One of their systems requires a database that can scale globally and handle frequen t schema changes. The application should not have a ny downtime or performance issues whenever there is a schema change in the database. 
It should also provi de a low latency response to high-traffic queries. Which is the most suitable database solution to use to achieve this requirement?", + "options": [ + "A. Redshift", + "B. Amazon DynamoDB", + "C. An Amazon RDS instance in Multi-AZ Deployments co nfiguration", + "D. An Amazon Aurora database with Read Replicas" + ], + "correct": "B. Amazon DynamoDB", + "explanation": "Explanation/Reference: Before we proceed in answering this question, we mu st first be clear with the actual definition of a \"schema\". Basically, the english definition of a sc hema is: a representation of a plan or theory in th e form of an outline or model. Just think of a schema as the \"structure\" or a \"mod el\" of your data in your database. Since the scenar io requires that the schema, or the structure of your data, changes frequently, then you have to pick a d atabase which provides a non-rigid and flexible way of addi ng or removing new types of data. This is a classic example of choosing between a relational database and non-r elational (NoSQL) database. A relational database is known for having a rigid s chema, with a lot of constraints and limits as to w hich (and what type of ) data can be inserted or not. It is p rimarily used for scenarios where you have to suppo rt complex queries which fetch data across a number of tables. It is best for scenarios where you have complex table relationships but for use cases where you need to have a flexible schema, this is not a suitable database to use. For NoSQL, it is not as rigid as a relational datab ase because you can easily add or remove rows or elements in your table/collection entry. It also ha s a more flexible schema because it can store compl ex hierarchical data within a single item which, unlik e a relational database, does not entail changing m ultiple related tables. Hence, the best answer to be used h ere is a NoSQL database, like DynamoDB. When your business requires a low-latency response to high-tr affic queries, taking advantage of a NoSQL system generally makes technical and economic sense. Amazon DynamoDB helps solve the problems that limit the relational system scalability by avoiding them . In DynamoDB, you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data stru ctures are tailored to the specific requirements of your business use cases. Remember that a relational database system does not scale well for the following reasons: - It normalizes data and stores it on multiple tabl es that require multiple queries to write to disk. - It generally incurs the performance costs of an A CID-compliant transaction system. - It uses expensive joins to reassemble required vi ews of query results. For DynamoDB, it scales well due to these reasons: - Its schema flexibility lets DynamoDB store comple x hierarchical data within a single item. DynamoDB is not a totally schemaless database since the very definition of a schema is just the model or struct ure of your data. - Composite key design lets it store related items close together on the same table. An Amazon RDS instance in Multi-AZ Deployments conf iguration and an Amazon Aurora database with Read Replicas are incorrect because both of th em are a type of relational database. Redshift is incorrect because it is primarily used for OLAP systems. 
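To make the schema-flexibility point concrete, here is a hedged sketch that writes two items with different attribute sets to the same table; the table name and key are hypothetical, and the table is assumed to already exist with a ProductId partition key:

```python
import boto3

# Two items in the same table can carry completely different attribute sets;
# only the key schema (here, ProductId) is fixed.
table = boto3.resource("dynamodb").Table("Products")

table.put_item(Item={"ProductId": "p-100", "Name": "Sedan", "Doors": 4})
table.put_item(Item={
    "ProductId": "p-200",
    "Name": "Motorbike",
    "Accessories": ["helmet", "top box"],   # new attributes, no migration needed
    "Specs": {"EngineCc": 650},
})
```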
References: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/bp-general-nosql-design.html https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/bp-relational-modeling.html https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/SQLtoNoSQL.html Also check the AWS Certified Solutions Architect Of ficial Study Guide: Associate Exam 1st Edition and turn to page 161 which talks about NoSQL Databases. Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "A company is using a combination of API Gateway and Lambda for the web services of the online web portal that is being accessed by hundreds of thousa nds of clients each day. They will be announcing a new revolutionary product and it is expected that the w eb portal will receive a massive number of visitors all around the globe. How can you protect the backend systems and applica tions from traffic spikes?", + "options": [ + "A. Use throttling limits in API Gateway", + "B. API Gateway will automatically scale and handle m assive traffic spikes so you do not have to do anyt hing.", + "C. Manually upgrade the EC2 instances being used by API Gateway", + "D. Deploy Multi-AZ in API Gateway with Read Replica" + ], + "correct": "A. Use throttling limits in API Gateway", + "explanation": "Explanation/Reference: Amazon API Gateway provides throttling at multiple levels including global and by a service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate lim it of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gate way to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests pe r second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response. Hence, the correct answer is: Use throttling limits in API Gateway. The option that says: API Gateway will automaticall y scale and handle massive traffic spikes so you do not have to do anything is incorrect. Although it can s cale using AWS Edge locations, you still need to co nfigure the throttling to further manage the bursts of your API s. Manually upgrading the EC2 instances being used by API Gateway is incorrect because API Gateway is a fully managed service and hence, you do not ha ve access to its underlying resources. Deploying Multi-AZ in API Gateway with Read Replica is incorrect because RDS has Multi-AZ and Read Replica capabilities, and not API Gateway.", + "references": "https://aws.amazon.com/api-gateway/faqs/#Throttling _and_Caching Check out this Amazon API Gateway Cheat Sheet: https://tutorialsdojo.com/amazon-api-gateway/" + }, + { + "question": "A company is designing a banking portal that uses A mazon ElastiCache for Redis as its distributed sess ion management component. Since the other Cloud Enginee rs in your department have access to your ElastiCache cluster, you have to secure the session data in the portal by requiring them to enter a pa ssword before they are granted permission to execute Redis commands. As the Solutions Architect, which of the following should you do to meet the above requirement?", + "options": [ + "A. 
Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-",
      "B. Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter.",
      "C. Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token",
      "D. Enable the in-transit encryption for Redis replication groups."
    ],
    "correct": "A. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-",
    "explanation": "Explanation/Reference: Using the Redis AUTH command can improve data security by requiring the user to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server. Hence, the correct answer is: Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled. To require that users enter a password on a password-protected Redis server, include the parameter --auth-token with the correct password when you create your replication group or cluster, and on all subsequent commands to the replication group or cluster. Setting up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster is incorrect because this is not possible in IAM. You have to use the Redis AUTH option instead. Setting up a Redis replication group and enabling the AtRestEncryptionEnabled parameter is incorrect because the Redis At-Rest Encryption feature only secures the data inside the in-memory data store. You have to use the Redis AUTH option instead. Enabling the in-transit encryption for Redis replication groups is incorrect. Although in-transit encryption is part of the solution, it is missing the most important piece, which is the Redis AUTH option. References: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html Check out this Amazon ElastiCache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/ Redis (cluster mode enabled vs disabled) vs Memcached: https://tutorialsdojo.com/redis-cluster-mode-enabled-vs-disabled-vs-memcached/",
    "references": ""
  },
  {
    "question": "A company plans to host a web application in an Auto Scaling group of Amazon EC2 instances. The application will be used globally by users to upload and store several types of files. Based on user trends, files that are older than 2 years must be stored in a different storage class. The Solutions Architect of the company needs to create a cost-effective and scalable solution to store the old files yet still provide durability and high availability. Which of the following approaches can be used to fulfill this requirement? (Select TWO.)",
    "options": [
      "A. Use Amazon EBS volumes to store the files. Configure the Amazon Data Lifecycle Manager (DLM) to",
      "B. Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Glacier after 2 years.",
      "C. Use a RAID 0 storage configuration that stripes multiple Amazon EBS volumes together to store the files.",
      "D. Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Standard-IA after 2"
    ],
    "correct": "",
    "explanation": "Explanation/Reference: Amazon S3 stores data as objects within buckets. An object is a file and any optional metadata that describes the file. 
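Before going further, here is a minimal sketch of the kind of lifecycle rule the correct options above describe, expressed with boto3. The bucket name, prefix, and exact day counts are hypothetical, not taken from the scenario.

```python
import boto3

s3 = boto3.client("s3")

# Minimal sketch (hypothetical bucket/prefix): transition objects to
# S3 Standard-IA at ~2 years and to Glacier 30 days later.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-user-files",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-files",
                "Filter": {"Prefix": "uploads/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 730, "StorageClass": "STANDARD_IA"},
                    {"Days": 760, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```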
To store a file in Amazon S3, y ou upload it to a bucket. When you upload a file as an object, you can set permissions on the object and any metad ata. Buckets are containers for objects. You can ha ve one or more buckets. You can control access for each bu cket, deciding who can create, delete, and list obj ects in it. You can also choose the geographical region where A mazon S3 will store the bucket and its contents and view access logs for the bucket and its objects. To move a file to a different storage class, you ca n use Amazon S3 or Amazon EFS. Both services have lifecycle configurations. Take note that Amazon EFS can only transition a file to the IA storage class after 90 days. Since you need to move the files that are old er than 2 years to a more cost-effective and scalab le solution, you should use the Amazon S3 lifecycle co nfiguration. With S3 lifecycle rules, you can trans ition files to S3 Standard IA or S3 Glacier. Using S3 Glacier e xpedited retrieval, you can quickly access your fil es within 1-5 minutes. Hence, the correct answers are: - Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Glacier after 2 years. - Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Standard-IA afte r 2 years. The option that says: Use Amazon EFS and create a l ifecycle policy that will move the objects to Amazon EFS-IA after 2 years is incorrect because th e maximum days for the EFS lifecycle policy is only 90 days. The requirement is to move the files that are older than 2 years or 730 days. The option that says: Use Amazon EBS volumes to sto re the files. Configure the Amazon Data Lifecycle Manager (DLM) to schedule snapshots of th e volumes after 2 years is incorrect because Amazon EBS costs more and is not as scalable as Ama zon S3. It has some limitations when accessed by multiple EC2 instances. There are also huge costs i nvolved in using the multi-attach feature on a Provisioned IOPS EBS volume to allow multiple EC2 i nstances to access the volume. The option that says: Use a RAID 0 storage configur ation that stripes multiple Amazon EBS volumes together to store the files. Configure the Amazon D ata Lifecycle Manager (DLM) to schedule snapshots of the volumes after 2 years is incorrect because RAID (Redundant Array of Independent Disks) is just a data storage virtualization techno logy that combines multiple storage devices to achi eve higher performance or data durability. RAID 0 can stripe m ultiple volumes together for greater I/O performance than you can achieve with a single volu me. On the other hand, RAID 1 can mirror two volume s together to achieve on-instance redundancy. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lifecycle-mgmt.html https://docs.aws.amazon.com/efs/latest/ug/lifecycle -management-efs.html https://aws.amazon.com/s3/faqs/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "An online medical system hosted in AWS stores sensi tive Personally Identifiable Information (PII) of t he users in an Amazon S3 bucket. Both the master keys and th e unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory re quirements of the company. Which S3 encryption technique should the Architect use?", + "options": [ + "A. Use S3 client-side encryption with a client-side master key.", + "B. Use S3 client-side encryption with a KMS-managed customer master key.", + "C. 
Use S3 server-side encryption with a KMS managed key.", + "D. Use S3 server-side encryption with customer provi ded key." + ], + "correct": "A. Use S3 client-side encryption with a client-side master key.", + "explanation": "Explanation/Reference: Client-side encryption is the act of encrypting dat a before sending it to Amazon S3. To enable client- side encryption, you have the following options: - Use an AWS KMS-managed customer master key. - Use a client-side master key. When using an AWS KMS-managed customer master key t o enable client-side data encryption, you provide a n AWS KMS customer master key ID (CMK ID) to AWS. On the other hand, when you use client-side master key for client-side data encryption, your client-side m aster keys and your unencrypted data are never sen t to AWS. It's important that you safely manage your encrypti on keys because if you lose them, you can't decrypt your data. This is how client-side encryption using client-sid e master key works: When uploading an object - You provide a client-sid e master key to the Amazon S3 encryption client. The client uses the master key only to encrypt the data encryption key that it generates randomly. The process works like this: 1. The Amazon S3 encryption client generates a one- time-use symmetric key (also known as a data encryp tion key or data key) locally. It uses the data key to e ncrypt the data of a single Amazon S3 object. The client generates a separate data key for each o bject. 2. The client encrypts the data encryption key usin g the master key that you provide. The client uploa ds the encrypted data key and its material description as part of the object metadata. The client uses the ma terial description to determine which client-side master k ey to use for decryption. 3. The client uploads the encrypted data to Amazon S3 and saves the encrypted data key as object metad ata (x-amz-meta-x-amz-key) in Amazon S3. When downloading an object - The client downloads t he encrypted object from Amazon S3. Using the mater ial description from the object's metadata, the client determines which master key to use to decrypt the data key. The client uses that master key to de crypt the data key and then uses the data key to de crypt the object. Hence, the correct answer is to use S3 client-side encryption with a client-side master key. Using S3 client-side encryption with a KMS-managed customer master key is incorrect because in client- side encryption with a KMS-managed customer master key, you provide an AWS KMS customer master key ID (CMK ID) to AWS. The scenario clearly indicates tha t both the master keys and the unencrypted data sho uld never be sent to AWS. Using S3 server-side encryption with a KMS managed key is incorrect because the scenario mentioned tha t the unencrypted data should never be sent to AWS, which means that you have to use client-side encryption in order to encrypt the data first before sending to A WS. In this way, you can ensure that there is no u nencrypted data being uploaded to AWS. In addition, the master key used by Server-Side Encryption with AWS KMS- Managed Keys (SSE-KMS) is uploaded and managed by A WS, which directly violates the requirement of not uploading the master key. Using S3 server-side encryption with customer provi ded key is incorrect because just as mentioned above, you have to use client-side encryption in th is scenario instead of server-side encryption. 
For the S3 server-side encryption with customer-provided key ( SSE-C), you actually provide the encryption key as part of your request to upload the object to S3. Us ing this key, Amazon S3 manages both the encryption (as it writes to disks) and decryption (when you access yo ur objects). References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngClientSideEncryption.html", + "references": "" + }, + { + "question": "An application consists of multiple EC2 instances i n private subnets in different availability zones. The application uses a single NAT Gateway for downloadi ng software patches from the Internet to the instan ces. There is a requirement to protect the application f rom a single point of failure when the NAT Gateway encounters a failure or if its availability zone go es down. How should the Solutions Architect redesign the arc hitecture to be more highly available and cost-effe ctive", + "options": [ + "A. Create three NAT Gateways in each availability zo ne. Configure the route table in each private subne t to ensure that instances use the NAT Gateway in the sa me availability zone.", + "B. Create a NAT Gateway in each availability zone. C onfigure the route table in each private subnet to ensure", + "C. Create two NAT Gateways in each availability zone . Configure the route table in each public subnet t o", + "D. Create a NAT Gateway in each availability zone. C onfigure the route table in each public subnet to e nsure" + ], + "correct": "B. Create a NAT Gateway in each availability zone. C onfigure the route table in each private subnet to ensure", + "explanation": "Explanation/Reference: A NAT Gateway is a highly available, managed Networ k Address Translation (NAT) service for your resources in a private subnet to access the Interne t. NAT gateway is created in a specific Availabilit y Zone and implemented with redundancy in that zone. You must create a NAT gateway on a public subnet to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Int ernet from initiating a connection with those insta nces. If you have resources in multiple Availability Zone s and they share one NAT gateway, and if the NAT gateway's Availability Zone is down, resources in t he other Availability Zones lose Internet access. T o create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configu re your routing to ensure that resources use the NAT gatewa y in the same Availability Zone. Hence, the correct answer is: Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instanc es use the NAT Gateway in the same availability zone. The option that says: Create a NAT Gateway in each availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone is incorrect because you should configure the route ta ble in the private subnet and not the public subnet to associate the right instances in the private subnet . The options that say: Create two NAT Gateways in ea ch availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone and Create three NAT Gateways in each availability zone . 
Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone are both incorrect because a single NAT Gateway in each availability z one is enough. NAT Gateway is already redundant in nature, meaning, AWS already handles any failures t hat occur in your NAT Gateway in an availability zo ne. References: https://docs.aws.amazon.com/vpc/latest/userguide/vp c-nat-gateway.html https://docs.aws.amazon.com/vpc/latest/userguide/vp c-nat-comparison.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A tech company has a CRM application hosted on an A uto Scaling group of On-Demand EC2 instances. The application is extensively used during office h ours from 9 in the morning till 5 in the afternoon. Their users are complaining that the performance of the applica tion is slow during the start of the day but then w orks normally after a couple of hours. Which of the following can be done to ensure that t he application works properly at the beginning of t he day?", + "options": [ + "A. Configure a Dynamic scaling policy for the Auto S caling group to launch new instances based on the C PU", + "B. Set up an Application Load Balancer (ALB) to your architecture to ensure that the traffic is properl y", + "C. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the s tart of", + "D. Configure a Dynamic scaling policy for the Auto S caling group to launch new instances based on the" + ], + "correct": "C. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the s tart of", + "explanation": "Explanation/Reference: Scaling based on a schedule allows you to scale you r application in response to predictable load chang es. For example, every week the traffic to your web applica tion starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predicta ble traffic patterns of your web application. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The scheduled action tells Amazon EC2 Auto Scaling to p erform a scaling action at specified times. To crea te a scheduled scaling action, you specify the start tim e when the scaling action should take effect, and t he new minimum, maximum, and desired sizes for the scaling action. At the specified time, Amazon EC2 Auto Scaling updates the group with the values for minim um, maximum, and desired size specified by the scaling action. You can create scheduled actions fo r scaling one time only or for scaling on a recurri ng schedule. Hence, configuring a Scheduled scaling policy for t he Auto Scaling group to launch new instances before the start of the day is the correct answer. You need to configure a Scheduled scaling policy. T his will ensure that the instances are already scaled up and ready before the start of the day since this is wh en the application is used the most. Configuring a Dynamic scaling policy for the Auto S caling group to launch new instances based on the CPU utilization and configuring a Dynamic scali ng policy for the Auto Scaling group to launch new instances based on the Memory utilization are b oth incorrect because although these are valid solutions, it is still better to configure a Schedu led scaling policy as you already know the exact pe ak hours of your application. 
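For illustration only, a scheduled scaling action for an Auto Scaling group can be created ahead of the known peak window. The group name, capacity values, and cron expression below are hypothetical examples, not part of the original answer.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Minimal sketch: scale out shortly before office hours on weekdays.
# Recurrence uses Unix cron syntax and is evaluated in UTC by default.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",                     # hypothetical group name
    ScheduledActionName="scale-out-before-office-hours",
    Recurrence="30 8 * * 1-5",                          # 8:30, before the 9:00 peak
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)
```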
By the time either the CPU or Memory hits a peak, the application already has performance issues, so you need to ensure the scaling is done beforehand using a Scheduled scaling policy. Setting up an Application Load Balancer (ALB) to your architecture to ensure that the traffic is properly distributed on the instances is incorrect. Although the Application Load Balancer can also balance the traffic, it cannot increase the number of instances based on demand.",
    "references": "https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/"
  },
  {
    "question": "A company collects atmospheric data such as temperature, air pressure, and humidity from different countries. Each site location is equipped with various weather instruments and a high-speed Internet connection. The average collected data in each location is around 500 GB and will be analyzed by a weather forecasting application hosted in Northern Virginia. As the Solutions Architect, you need to aggregate all the data in the fastest way. Which of the following options can satisfy the given requirement?",
    "options": [
      "A. Set up a Site-to-Site VPN connection.",
      "B. Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload.",
      "C. Upload the data to the closest S3 bucket. Set up a cross-region replication and copy the objects to the",
      "D. Use AWS Snowball Edge to transfer large amounts of data."
    ],
    "correct": "B. Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload.",
    "explanation": "Explanation/Reference: Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It's a simple storage service that offers industry-leading durability, availability, performance, security, and virtually unlimited scalability at very low cost. Amazon S3 is also designed to be highly flexible. Store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application or a sophisticated web application. Since the weather forecasting application is located in Northern Virginia, you need to transfer all the data to that same AWS Region. With Amazon S3 Transfer Acceleration, you can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfers of larger objects. Multipart upload allows you to upload a single object as a set of parts. After all the parts of your object are uploaded, Amazon S3 presents the data as a single object. This approach is the fastest way to aggregate all the data. Hence, the correct answer is: Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload. The option that says: Upload the data to the closest S3 bucket. Set up a cross-region replication and copy the objects to the destination bucket is incorrect because replicating the objects to the destination bucket takes about 15 minutes. Take note that the requirement in the scenario is to aggregate the data in the fastest way. The option that says: Use AWS Snowball Edge to transfer large amounts of data is incorrect because the end-to-end time to transfer up to 80 TB of data into AWS Snowball Edge is approximately one week. 
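Stepping back to the accepted approach for a moment, here is a minimal sketch of enabling Transfer Acceleration on a bucket and uploading through the accelerated endpoint with a managed multipart upload. The bucket name, key, and local file path are hypothetical.

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# One-time setup: enable Transfer Acceleration on the (hypothetical) bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="weather-data-us-east-1",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerated endpoint; multipart upload is used
# automatically for objects above the configured threshold.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_config = TransferConfig(multipart_threshold=64 * 1024 * 1024, max_concurrency=10)

s3.upload_file(
    Filename="/data/site-readings.tar.gz",          # hypothetical local file
    Bucket="weather-data-us-east-1",
    Key="2024/site-001/readings.tar.gz",
    Config=transfer_config,
)
```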
The option that says: Set up a Site-to-Site VPN con nection is incorrect because setting up a VPN conne ction is not needed in this scenario. Site-to-Site VPN is ju st used for establishing secure connections between an on-premises network and Amazon VPC. Also , this approach is not the fastest way to transfer your data. You must use Amazon S3 Transfer Acceleration. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/rep lication.html https://docs.aws.amazon.com/AmazonS3/latest/dev/tra nsfer-acceleration.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A company plans to build a data analytics applicati on in AWS which will be deployed in an Auto Scaling group of On-Demand EC2 instances and a MongoDB database. It is expected that the database will have high- throughput workloads performing small, random I/O o perations. As the Solutions Architect, you are requ ired to properly set up and launch the required resources i n AWS. Which of the following is the most suitable EBS typ e to use for your database?", + "options": [ + "A. General Purpose SSD (gp2)", + "B. Cold HDD (sc1)", + "C. Throughput Optimized HDD (st1)", + "D. Provisioned IOPS SSD (io1)" + ], + "correct": "D. Provisioned IOPS SSD (io1)", + "explanation": "Explanation/Reference: On a given volume configuration, certain I/O charac teristics drive the performance behavior for your E BS volumes. SSD-backed volumes, such as General Purpos e SSD (gp2) and Provisioned IOPS SSD (io1), deliver consistent performance whether an I/O opera tion is random or sequential. HDD-backed volumes like Throughput Optimized HDD (st1) and Cold HDD (s c1) deliver optimal performance only when I/O operations are large and sequential. In the exam, always consider the difference between SSD and HDD as shown on the table below. This will allow you to easily eliminate specific EBS-types in the options which are not SSD or not HDD, dependin g on whether the question asks for a storage type which has small, random I/O operations or large, sequential I/O operations. Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Unlike gp2 , which uses a bucket and credit model to calculate p erformance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, an d Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the ti me over a given year. General Purpose SSD (gp2) is incorrect because alth ough General Purpose is a type of SSD that can hand le small, random I/O operations, the Provisioned IOPS SSD volumes are much more suitable to meet the needs of I/O-intensive database workloads such as MongoDB, Oracle, MySQL, and many others. Throughput Optimized HDD (st1) and Cold HDD (sc1) a re incorrect because HDD volumes (such as Throughput Optimized HDD and Cold HDD volumes) are more suitable for workloads with large, sequential I/O operations instead of small, random I/O operations. 
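As an illustrative sketch, a Provisioned IOPS SSD volume can be created with an explicit IOPS rate. The Availability Zone, size, IOPS value, and tag below are hypothetical and simply show the io1 parameters discussed above.

```python
import boto3

ec2 = boto3.client("ec2")

# Minimal sketch: create a Provisioned IOPS SSD (io1) volume with a fixed
# IOPS rate for an I/O-intensive database workload. Values are hypothetical.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,           # GiB
    VolumeType="io1",
    Iops=10000,         # consistent provisioned IOPS
    TagSpecifications=[
        {"ResourceType": "volume", "Tags": [{"Key": "Name", "Value": "mongodb-data"}]}
    ],
)
print(volume["VolumeId"])
```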
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSVolumeTypes.html#EBSVolumeTypes_piops https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-io-characteristics.html Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A global IT company with offices around the world h as multiple AWS accounts. To improve efficiency and drive costs down, the Chief Information Officer (CIO) wan ts to set up a solution that centrally manages thei r AWS resources. This will allow them to procure AWS reso urces centrally and share resources such as AWS Tra nsit Gateways, AWS License Manager configurations, or Am azon Route 53 Resolver rules across their various accounts. As the Solutions Architect, which combination of op tions should you implement in this scenario? (Selec t TWO.)", + "options": [ + "A. Use the AWS Identity and Access Management servic e to set up cross-account access that will easily a nd", + "B. Consolidate all of the company's accounts using A WS ParallelCluster.", + "C. Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with", + "D. Use AWS Control Tower to easily and securely shar e your resources with your AWS accounts." + ], + "correct": "", + "explanation": "Explanation/Reference: AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS O rganization. You can share AWS Transit Gateways, Subnets, AWS License Manager configuratio ns, and Amazon Route 53 Resolver rules resources with RAM. Many organizations use multiple accounts to create administrative or billing isolation, and limit the impact of errors. RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own. You can create resources centrally in a multi-account environment, and use RAM to share those resources across accoun ts in three simple steps: create a Resource Share, specif y resources, and specify accounts. RAM is available to you at no additional charge. You can procure AWS resources centrally, and use RA M to share resources such as subnets or License Manager configurations with other accounts. This el iminates the need to provision duplicate resources in every account in a multi-account environment, reducing th e operational overhead of managing those resources in every account. AWS Organizations is an account management service that lets you consolidate multiple AWS accounts into an organization that you create and centrally manage. With Organizations, you can create member accounts and invite existing accounts to join your organization. You can organize those accounts into groups and attach policy-based controls. Hence, the correct combination of options in this s cenario is: - Consolidate all of the company's accounts using A WS Organizations. - Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts. 
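To illustrate the resource-sharing step, here is a minimal boto3 sketch of creating a resource share in AWS RAM. The share name, subnet ARN, and account ID are hypothetical placeholders, not values from the scenario.

```python
import boto3

ram = boto3.client("ram")

# Minimal sketch: share a resource (e.g., a subnet) with another account in
# the organization. The ARN and account ID below are hypothetical.
response = ram.create_resource_share(
    name="shared-network-resources",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234def567890"
    ],
    principals=["222222222222"],     # member account ID (or an OU/organization ARN)
    allowExternalPrincipals=False,   # keep sharing within the organization
)
print(response["resourceShare"]["resourceShareArn"])
```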
The option that says: Use the AWS Identity and Acce ss Management service to set up cross-account access that will easily and securely share your res ources with your AWS accounts is incorrect because although you can delegate access to resources that are in different AWS accounts using IAM, this proce ss is extremely tedious and entails a lot of operational overhead since you have to manually set up cross- a ccount access to each and every AWS account of the company . A better solution is to use AWS Resources Access Manager instead. The option that says: Use AWS Control Tower to easi ly and securely share your resources with your AWS accounts is incorrect because AWS Control Tower simply offers the easiest way to set up and govern a new, secure, multi-account AWS environment. This is not the most suitable service to use to securely s hare your resources across AWS accounts or within your O rganization. You have to use AWS Resources Access Manager (RAM) instead. The option that says: Consolidate all of the compan y's accounts using AWS ParallelCluster is incorrect because AWS ParallelCluster is simply an AWS-suppor ted open-source cluster management tool that makes it easy for you to deploy and manage High-Performance Computing (HPC) clusters on AWS. In this particular scenario, it is more appropriate to use AWS Organiz ations to consolidate all of your AWS accounts. References: https://aws.amazon.com/ram/ https://docs.aws.amazon.com/ram/latest/userguide/sh areable.html", + "references": "" + }, + { + "question": "A tech company that you are working for has underta ken a Total Cost Of Ownership (TCO) analysis evalua ting the use of Amazon S3 versus acquiring more storage hardware. The result was that all 1200 employees wo uld be granted access to use Amazon S3 for the storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates a sing le sign-on feature from your corporate AD or LDAP directory an d also restricts access for each individual user to a designated user folder in an S3 bucket? ( Select TWO.)", + "options": [ + "A. Set up a matching IAM user for each of the 1200 u sers in your corporate directory that needs access to a", + "B. Configure an IAM role and an IAM Policy to access the bucket.", + "C. Use 3rd party Single Sign-On solutions such as At lassian Crowd, OKTA, OneLogin and many others.", + "D. Map each individual user to a designated user fol der in S3 using Amazon WorkDocs to access their" + ], + "correct": "", + "explanation": "Explanation Explanation/Reference: The question refers to one of the common scenarios for temporary credentials in AWS. Temporary credent ials are useful in scenarios that involve identity feder ation, delegation, cross-account access, and IAM roles. In this example, it is called enterprise identity federation considering that you also need to set up a single sign-on (SSO) capability. The correct answers are: - Setup a Federation proxy or an Identity provider - Setup an AWS Security Token Service to generate t emporary tokens - Configure an IAM role and an IAM Policy to access the bucket. In an enterprise identity federation, you can authe nticate users in your organization's network, and t hen provide those users access to AWS without creating new AWS identities for them and requiring them to s ign in with a separate user name and password. This is kno wn as the single sign-on (SSO) approach to temporar y access. 
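Anticipating the SAML-based flow described next, the temporary-credential exchange can be sketched as follows. This is only an illustration; the role and identity-provider ARNs are hypothetical, and the SAML assertion is whatever base64-encoded response your corporate IdP returns.

```python
import boto3

sts = boto3.client("sts")

# Minimal sketch: exchange a SAML assertion from the corporate IdP for
# short-lived credentials. ARNs are hypothetical placeholders.
def get_temporary_credentials(saml_assertion: str):
    response = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::111111111111:role/S3-PersonalFolderAccess",
        PrincipalArn="arn:aws:iam::111111111111:saml-provider/CorporateAD",
        SAMLAssertion=saml_assertion,
        DurationSeconds=3600,
    )
    return response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```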
AWS STS supports open standards like Securi ty Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Micros oft Active Directory. You can also use SAML 2.0 to manage your own solution for federating user identi ties. Using 3rd party Single Sign-On solutions such as At lassian Crowd, OKTA, OneLogin and many others is incorrect since you don't have to use 3rd party sol utions to provide the access. AWS already provides the necessary tools that you can use in th is situation. Mapping each individual user to a designated user f older in S3 using Amazon WorkDocs to access their personal documents is incorrect as there is no dire ct way of integrating Amazon S3 with Amazon WorkDoc s for this particular scenario. Amazon WorkDocs is simply a fully managed, secure content creation, storage, and collaboration service. With Amazon WorkDocs, you ca n easily create, edit, and share content. And becau se it's stored centrally on AWS, you can access it from any where on any device. Setting up a matching IAM user for each of the 1200 users in your corporate directory that needs access to a folder in the S3 bucket is incorrect si nce creating that many IAM users would be unnecessa ry. Also, you want the account to integrate with your A D or LDAP directory, hence, IAM Users does not fit these criteria. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id _roles_providers_saml.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id _roles_providers_oidc.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/iam-s3-user-specific-folder/ AWS Identity Services Overview: https://youtu.be/AIdUw0i8rr0 Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "There are a lot of outages in the Availability Zone of your RDS database instance to the point that yo u have lost access to the database. What could you do to preven t losing access to your database in case that this event happens again?", + "options": [ + "A. Make a snapshot of the database", + "B. Increase the database instance size", + "C. Create a read replica", + "D. Enabled Multi-AZ failover" + ], + "correct": "D. Enabled Multi-AZ failover", + "explanation": "Explanation/Reference: Amazon RDS Multi-AZ deployments provide enhanced av ailability and durability for Database (DB) Instances, making them a natural fit for production database workloads. For this scenario, enabling Mu lti- AZ failover is the correct answer. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and syn chronously replicates the data to a standby instanc e in a different Availability Zone (AZ). Each AZ runs on i ts own physically distinct, independent infrastruct ure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS pe rforms an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Making a snapshot of the database allows you to hav e a backup of your database, but it does not provid e immediate availability in case of AZ failure. So th is is incorrect. Increasing the database instance size is not a solu tion for this problem. Doing this action addresses the need to upgrade your compute capacity but does not solve th e requirement of providing access to your database even in the event of a loss of one of the Availability Zones. 
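For completeness, turning on Multi-AZ for an existing RDS instance (the accepted fix in this question) is a single modification call. The instance identifier is hypothetical; this is a sketch, not the scenario's actual configuration.

```python
import boto3

rds = boto3.client("rds")

# Minimal sketch: convert an existing single-AZ DB instance to Multi-AZ.
# ApplyImmediately applies the change outside the maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db-instance",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```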
Creating a read replica is incorrect because this simply provides enhanced performance for read-heavy database workloads. Although you can promote a read replica, its asynchronous replication might not provide you with the latest version of your database.",
    "references": "https://aws.amazon.com/rds/details/multi-az/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/"
  },
  {
    "question": "A cryptocurrency trading platform is using an API built on AWS Lambda and API Gateway. Due to the recent news and rumors about the upcoming price surge of Bitcoin, Ethereum and other cryptocurrencies, it is expected that the trading platform would have a significant increase in site visitors and new users in the coming days ahead. In this scenario, how can you protect the backend systems of the platform from traffic spikes?",
    "options": [
      "A. Move the Lambda function in a VPC.",
      "B. Enable throttling limits and result caching in API Gateway.",
      "C. Use CloudFront in front of the API Gateway to act as a cache.",
      "D. Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture"
    ],
    "correct": "B. Enable throttling limits and result caching in API Gateway.",
    "explanation": "Explanation/Reference: Amazon API Gateway provides throttling at multiple levels, including global and by service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any request over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response. Hence, enabling throttling limits and result caching in API Gateway is the correct answer. You can add caching to API calls by provisioning an Amazon API Gateway cache and specifying its size in gigabytes. The cache is provisioned for a specific stage of your APIs. This improves performance and reduces the traffic sent to your back end. Cache settings allow you to control the way the cache key is built and the time-to-live (TTL) of the data stored for each method. Amazon API Gateway also exposes management APIs that help you invalidate the cache for each stage. The option that says: Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling is incorrect since there is no need to transfer your applications to other services. Using CloudFront in front of the API Gateway to act as a cache is incorrect because CloudFront only speeds up content delivery, which provides a better latency experience for your users. It does not help much for the backend. Moving the Lambda function in a VPC is incorrect because this answer is irrelevant to what is being asked. 
A VPC is your own virtual private cloud wher e you can launch AWS services.", + "references": "https://aws.amazon.com/api-gateway/faqs/ Check out this Amazon API Gateway Cheat Sheet: https://tutorialsdojo.com/amazon-api-gateway/ Here is an in-depth tutorial on Amazon API Gateway: https://youtu.be/XwfpPEFHKtQ" + }, + { + "question": "A content management system (CMS) is hosted on a fl eet of auto-scaled, On-Demand EC2 instances that us e Amazon Aurora as its database. Currently, the syste m stores the file documents that the users upload i n one of the attached EBS Volumes. Your manager noticed that the system performance is quite slow and he has instructed you to improve the architecture of the s ystem. In this scenario, what will you do to implement a s calable, high-available POSIX-compliant shared file system?", + "options": [ + "A. Create an S3 bucket and use this as the storage f or the CMS", + "B. Upgrading your existing EBS volumes to Provisione d IOPS SSD Volumes", + "C. Use ElastiCache", + "D. Use EFS" + ], + "correct": "D. Use EFS", + "explanation": "Explanation/Reference: Amazon Elastic File System (Amazon EFS) provides si mple, scalable, elastic file storage for use with A WS Cloud services and on-premises resources. When moun ted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system int erface and file system access semantics, allowing y ou to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instanc es can access an Amazon EFS file system at the same ti me, allowing Amazon EFS to provide a common data source for workloads and applications running on mo re than one Amazon EC2 instance. This particular scenario tests your understanding o f EBS, EFS, and S3. In this scenario, there is a fl eet of On-Demand EC2 instances that store file documents f rom the users to one of the attached EBS Volumes. The system performance is quite slow because the ar chitecture doesn't provide the EC2 instances parall el shared access to the file documents. Although an EBS Volume can be attached to multiple EC2 instances, you can only do so on instances within an availability zone. What we need is high-a vailable storage that can span multiple availabilit y zones. Take note as well that the type of storage n eeded here is \"file storage\" which means that S3 is not the best service to use because it is mainly used for \" object storage\", and S3 does not provide the notion of \"folders\" too. This is why using EFS is the correct answer. Upgrading your existing EBS volumes to Provisioned IOPS SSD Volumes is incorrect because an EBS volume is a storage area network (SAN) storage and not a POSIX-compliant shared file system. You have to use EFS instead. Using ElastiCache is incorrect because this is an i n-memory data store that improves the performance o f your applications, which is not what you need since it i s not a file storage.", + "references": "https://aws.amazon.com/efs/ Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/ Check out this Amazon S3 vs EBS vs EFS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/" + }, + { + "question": "A company has a hybrid cloud architecture that conn ects their on-premises data center and cloud infras tructure in AWS. They require a durable storage backup for t heir corporate documents stored on- premises and a local cache that provides low latenc y access to their recently accessed data to reduce data egress charges. 
The documents must be stored to and retrieved from AWS via the Server Message Block (SMB) protocol. These files must immediately be acc essible within minutes for six months and archived for another decade to meet the data compliance. Which of the following is the best and most cost-ef fective approach to implement in this scenario? A. Launch a new file gateway that connects to your o n-premises data center using AWS Storage Gateway. Upload the documents to the file gateway and set up a lifecycle policy to move the data into Glacier f or data archival.", + "options": [ + "B. Use AWS Snowmobile to migrate all of the files fr om the on-premises network. Upload the documents to an", + "C. Establish a Direct Connect connection to integrat e your on-premises network to your VPC. Upload the", + "D. Launch a new tape gateway that connects to your o n-premises data center using AWS Storage Gateway." + ], + "correct": "", + "explanation": "Explanation/Reference: A file gateway supports a file interface into Amazo n Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve object s in Amazon S3 using industry-standard file protocols su ch as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a vi rtual machine (VM) running on VMware ESXi, Microsoft Hype r-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor. The gateway provides access to objects in S3 as fil es or file share mount points. With a file gateway, you can do the following: - You can store and retrieve files directly using t he NFS version 3 or 4.1 protocol. - You can store and retrieve files directly using t he SMB file system version, 2 and 3 protocol. - You can access your data directly in Amazon S3 fr om any AWS Cloud application or service. - You can manage your Amazon S3 data using lifecycl e policies, cross-region replication, and versionin g. You can think of a file gateway as a file system mo unt on S3. AWS Storage Gateway supports the Amazon S3 Standard , Amazon S3 Standard-Infrequent Access, Amazon S3 One Zone-Infrequent Access and Amazon Gla cier storage classes. When you create or update a file share, you have the option to select a storage class for your objects. You can either choose the Amazon S3 Standard or any of the infrequent access storage cl asses such as S3 Standard IA or S3 One Zone IA. Objects stored in any of these storage classes can be transitioned to Amazon Glacier using a Lifecycle Policy. Although you can write objects directly from a file share to the S3-Standard-IA or S3-One Zone-IA stor age class, it is recommended that you use a Lifecycle P olicy to transition your objects rather than write directly from the file share, especially if you're expecting to u pdate or delete the object within 30 days of archiv ing it. Therefore, the correct answer is: Launch a new file gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the docume nts to the file gateway and set up a lifecycle policy to move the data into Glacier for data archi val. The option that says: Launch a new tape gateway tha t connects to your on-premises data center using AWS Storage Gateway. 
Upload the documents to the ta pe gateway and set up a lifecycle policy to move the data into Glacier for archival is incorrec t because although tape gateways provide cost- effective and durable archive backup data in Amazon Glacier, it does not meet the criteria of being retrievable immediately within minutes. It also doe sn't maintain a local cache that provides low laten cy access to the recently accessed data and reduce data egres s charges. Thus, it is still better to set up a fil e gateway instead. The option that says: Establish a Direct Connect co nnection to integrate your on-premises network to your VPC. Upload the documents on Amazon EBS Volume s and use a lifecycle policy to automatically move the EBS snapshots to an S3 bucke t, and then later to Glacier for archival is incorrect because EBS Volumes are not as durable co mpared with S3 and it would be more cost-efficient if you directly store the documents to an S3 bucket. An al ternative solution is to use AWS Direct Connect wit h AWS Storage Gateway to create a connection for high-thr oughput workload needs, providing a dedicated network connection between your on-premis es file gateway and AWS. But this solution is using EBS, hence, this option is still wrong. The option that says: Use AWS Snowmobile to migrate all of the files from the on-premises network. Upload the documents to an S3 bucket and set up a l ifecycle policy to move the data into Glacier for archival is incorrect because Snowmobile is mainly used to migrate the entire data of an on-premises d ata center to AWS. This is not a suitable approach as t he company still has a hybrid cloud architecture wh ich means that they will still use their on-premises da ta center along with their AWS cloud infrastructure . References: https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lifecycle-mgmt.html https://docs.aws.amazon.com/storagegateway/latest/u serguide/StorageGatewayConcepts.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/", + "references": "" + }, + { + "question": "A web application is using CloudFront to distribute their images, videos, and other static contents st ored in their S3 bucket to its users around the world. The compan y has recently introduced a new member-only access to some of its high quality media files. The re is a requirement to provide access to multiple p rivate media files only to their paying subscribers withou t having to change their current URLs. Which of the following is the most suitable solutio n that you should implement to satisfy this require ment?", + "options": [ + "A. Configure your CloudFront distribution to use Mat ch Viewer as its Origin Protocol Policy which will", + "C. Configure your CloudFront distribution to use Fie ld-Level Encryption to protect your private data an d only", + "D. Use Signed Cookies to control who can access the private files in your CloudFront distribution by mo difying" + ], + "correct": "D. Use Signed Cookies to control who can access the private files in your CloudFront distribution by mo difying", + "explanation": "Explanation/Reference: CloudFront signed URLs and signed cookies provide t he same basic functionality: they allow you to control who can access your content. 
If you want to serve private content through CloudFront and you'r e trying to decide whether to use signed URLs or signed cook ies, consider the following: Use signed URLs for the following cases: - You want to use an RTMP distribution. Signed cook ies aren't supported for RTMP distributions. - You want to restrict access to individual files, for example, an installation download for your appl ication. - Your users are using a client (for example, a cus tom HTTP client) that doesn't support cookies. Use signed cookies for the following cases: - You want to provide access to multiple restricted files, for example, all of the files for a video i n HLS format or all of the files in the subscribers' area of a website. - You don't want to change your current URLs. Hence, the correct answer for this scenario is the option that says: Use Signed Cookies to control who can access the private files in your CloudFront distrib ution by modifying your application to determine wh ether a user should have access to your content. For member s, send the required Set-Cookie headers to the view er which will unlock the content only to them. The option that says: Configure your CloudFront dis tribution to use Match Viewer as its Origin Protoco l Policy which will automatically match the user request. Th is will allow access to the private content if the request is a paying member and deny it if it is not a member is incorrect because a Mat ch Viewer is an Origin Protocol Policy which configure s CloudFront to communicate with your origin using HTTP or HTTPS, depending on the protocol of the vie wer request. CloudFront caches the object only once even if viewers make requests using both HTTP and H TTPS protocols. The option that says: Create a Signed URL with a cu stom policy which only allows the members to see the private files is incorrect because Signed URLs are primarily used for providing access to individu al files, as shown on the above explanation. In additi on, the scenario explicitly says that they don't wa nt to change their current URLs which is why implementing Signed Cookies is more suitable than Signed URL. The option that says: Configure your CloudFront dis tribution to use Field-Level Encryption to protect your private data and only allow access to members is incorrect because Field-Level Encryption only allows you to securely upload user-submitted sensit ive information to your web servers. It does not pr ovide access to download multiple private files.", + "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/private-content-choosing-signed- ur ls- cookies.html https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/private-content-signed-cookies.htmlCheck out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/" + }, + { + "question": "A recently acquired company is required to build it s own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, t his company and its parent company will both require secure network connectivity with consi stent throughput from their data centers to the app lications. A solutions architect must ensure one-time data migration and ongoing network connect ivity. Which solution will meet these requirements?", + "options": [ + "A. AWS Direct Connect for both the initial transfer and ongoing connectivity.", + "B. 
AWS Site-to-Site VPN for both the initial transfe r and ongoing connectivity.", + "C. AWS Snowball for the initial transfer and AWS Dir ect Connect for ongoing connectivity.", + "D. AWS Snowball for the initial transfer and AWS Sit e-to-Site VPN for ongoing connectivity." + ], + "correct": "C. AWS Snowball for the initial transfer and AWS Dir ect Connect for ongoing connectivity.", + "explanation": "Explanation/Reference:", + "references": "https://docs.aws.amazon.com/dms/latest/userguide/CH AP_LargeDBs.html https://aws.amazon.com/ directconnect/" + }, + { + "question": "A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in cop yright restrictions, the chief information officer (CIO) wants to block access for certain cou ntries. Which action will meet these requirements?", + "options": [ + "A. Modify the ALB security group to deny incoming tr affic from blocked countries.", + "B. Modify the security group for EC2 instances to de ny incoming traffic from blocked countries.", + "C. Use Amazon CloudFront to serve the application an d deny access to blocked countries.", + "D. Use ALB listener rules to return access denied re sponses to incoming traffic from blocked countries." + ], + "correct": "C. Use Amazon CloudFront to serve the application an d deny access to blocked countries.", + "explanation": "Explanation/Reference: \"block access for certain countries.\" You can use g eo restriction, also known as geo blocking, to prev ent users in specific geographic locations from accessing content that you're distributing thr ough a CloudFront web distribution.", + "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/georestrictions.html" + }, + { + "question": "A company is creating a new application that will s tore a large amount of data. The data will be analy zed hourly and modified by several Amazon EC2 Linux instances that are deployed across multip le Availability Zones. The application team believe s the amount of space needed will continue to grow for the next 6 months. Which set of actions should a solutions architect t ake to support these needs?", + "options": [ + "A. Store the data in an Amazon Elastic Block Store ( Amazon EBS) volume. Mount the EBS volume on the", + "B. Store the data in an Amazon Elastic File System ( Amazon EFS) file system. Mount the file system on t he", + "C. Store the data in Amazon S3 Glacier. Update the S 3 Glacier vault policy to allow access to the appli cation", + "D. Store the data in an Amazon Elastic Block Store ( Amazon EBS) Provisioned IOPS volume shared between" + ], + "correct": "B. Store the data in an Amazon Elastic File System ( Amazon EFS) file system. Mount the file system on t he", + "explanation": "Explanation/Reference: Amazon Elastic File System - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file sy stem for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications , growing and shrinking automatically as you add and remove files, eliminating the need to p rovision and manage capacity to accommodate growth. 
Amazon EFS is designed to provide massively paralle l shared access to thousands of Amazon EC2 instance s, enabling your applications to achieve high levels of aggregate throughput and IOP S with consistent low latencies. Amazon EFS is well suited to support a broad spectr um of use cases from home directories to business-c ritical applications. Customers can use EFS to lift-and- shift existing enterprise appl ications to the AWS Cloud. Other use cases include: big data analytics, web serving and content management, application development and testing, me dia and entertainment workflows, database backups, and container storage. Amazon EFS is a regional service storing data withi n and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZ s, regions, and VPCs, while on-premises servers can access using AWS Direct Connect or AWS VPN.", + "references": "https://aws.amazon.com/efs/" + }, + { + "question": "A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, the application users reported poor application performance when creating new entr ies. These performance issues were caused by users generating different real-time reports from the application during working hours. Which solution will improve the performance of the application when it is moved to AWS?", + "options": [ + "A. Import the data into an Amazon DynamoDB table wit h provisioned capacity. Refactor the application to use", + "B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed", + "C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the applica tion", + "D. Create an Amazon Aurora MySQL Multi-AZ DB cluster . Configure the application to use the backup" + ], + "correct": "C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the applica tion", + "explanation": "Explanation/Reference: Amazon RDS Read Replicas Now Support Multi-AZ Deplo yments Starting today, Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one o r more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are the n asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed. Amazon RDS Multi-AZ deployments provide enhanced av ailability for database instances within a single A WS Region. With Multi-AZ, your data is synchronously replicated to a standby in a diffe rent Availability Zone (AZ). In the event of an inf rastructure failure, Amazon RDS performs an automatic failover to the standby, minimizing disru ption to your applications. You can now use Read Replicas with Multi-AZ as part of a disaster recovery (DR) strategy for your prod uction databases. A well-designed and tested DR plan is critical for maintaining business continuity after a disaster. A Read Replica in a d ifferent region than the source database can be used as a standby database and promoted to becom e the new production database in case of a regional disruption. 
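As an aside, adding a read replica to an existing RDS MySQL instance is one API call. The identifiers and instance class in this sketch are hypothetical and only illustrate the offloading pattern described above.

```python
import boto3

rds = boto3.client("rds")

# Minimal sketch: create a read replica to offload reporting queries from
# the primary MySQL instance. Identifiers and class are hypothetical.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db-primary",
    DBInstanceClass="db.r5.large",
    MultiAZ=True,  # the replica itself spans AZs for added resilience
)
```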
You can also combine Read Replicas with Multi-AZ fo r your database engine upgrade process. You can cre ate a Read Replica of your production database instance and upgrade it to a new database engine version. When the upgrade is complete, you c an stop applications, promote the Read Replica to a standalone database instance, and switch over your applications. Since the database instance is already a Multi-AZ deployment, no additional steps are needed. Overview of Amazon RDS Read Replicas Deploying one or more read replicas for a given sou rce DB instance might make sense in a variety of scenarios, including the following: Scaling beyond the compute or I/O capacity of a sin gle DB instance for read-heavy database workloads. You can direct this excess read traffic to one or more read replicas. Serving read traffic while the source DB instance i s unavailable. In some cases, your source DB instan ce might not be able to take I/O requests, for example due to I/O suspension for backups or sc heduled maintenance. In these cases, you can direct read traffic to your read replicas. For this use case, keep in mind that the data on the re ad replica might be \"stale\" because the source DB i nstance is unavailable. Business reporting or data warehousing scenarios wh ere you might want business reporting queries to ru n against a read replica, rather than your primary, production DB instance. Implementing disaster recovery. You can promote a r ead replica to a standalone instance as a disaster recovery solution if the source DB instance fails.", + "references": "https://aws.amazon.com/about-aws/whats-new/2018/01/ amazon-rds-read-replicas-now-support-multi-az- deployments/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/USER_ReadRepl.html" + }, + { + "question": "stores all data on multiple instances so it can withstand the loss of an instance. The da tabase requires block storage with latency and thro ughput to support several million transactions per second per server. Which storage solution should the solutions archite ct use?", + "options": [ + "A. EBS Amazon Elastic Block Store (Amazon EBS)", + "B. Amazon EC2 instance store", + "C. Amazon Elastic File System (Amazon EFS)", + "D. Amazon S3" + ], + "correct": "B. Amazon EC2 instance store", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "Organizers for a global event want to put daily rep orts online as static HTML pages. The pages are exp ected to generate millions of views from users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been as ked to design an efficient and effective solution. Which action should the solutions architect take to accomplish this?", + "options": [ + "A. Generate presigned URLs for the files.", + "B. Use cross-Region replication to all Regions.", + "C. Use the geoproximity feature of Amazon Route 53.", + "D. Use Amazon CloudFront with the S3 bucket as its o rigin." + ], + "correct": "D. Use Amazon CloudFront with the S3 bucket as its o rigin.", + "explanation": "Explanation/Reference: Using Amazon S3 Origins, MediaPackage Channels, and Custom Origins for Web Distributions Using Amazon S3 Buckets for Your Origin When you use Amazon S3 as an origin for your distri bution, you place any objects that you want CloudFr ont to deliver in an Amazon S3 bucket. You can use any method that is supported by Amazon S3 to get your objects into Amazon S3, for example, the Amazon S3 console or API, or a third-party tool. 
You can create a hierarchy in you r bucket to store the objects, just as you would wi th any other Amazon S3 bucket. Using an existing Amazon S3 bucket as your CloudFro nt origin server doesn't change the bucket in any w ay; you can still use it as you normally would to store and access Amazon S3 objects at the standard Amazon S3 price. You incur regular Amazon S3 charges for storing the objects in the bucket. Using Amazon S3 Buckets Configured as Website Endpo ints for Your Origin You can set up an Amazon S3 bucket that is configur ed as a website endpoint as custom origin with CloudFront. When you configure your CloudFront distribution, fo r the origin, enter the Amazon S3 static website ho sting endpoint for your bucket. This value appears in the Amazon S3 console, on the Properties tab, in the St atic website hosting pane. For example: http://buck et- name.s3-websiteregion. amazonaws.com For more information about specifying Amazon S3 sta tic website endpoints, see Website endpoints in the Amazon Simple Storage Service Developer Guide. When you specify the bucket name in this format as your origin, you can use Amazon S3 redirects and Am azon S3 custom error documents. For more information about Amazon S3 features, see the Amazon S3 documentation. Using an Amazon S3 bucket as your CloudFront origin server doesn ?\u20ac\u2122t change it in any way. You can st ill use it as you normally would and you incur regular Amazon S3 charges.", + "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/ DownloadDistS3AndCustomOrigins.html" + }, + { + "question": "A solutions architect is designing a new service be hind Amazon API Gateway. The request patterns for t he service will be unpredictable and can change suddenly from 0 requests to over 500 per sec ond. The total size of the data that needs to be pe rsisted in a backend database is currently less than 1 GB with unpredictable future growth. Da ta can be queried using simple key-value requests. Which combination of AWS services would meet these requirements? (Choose two.)", + "options": [ + "A. AWS Fargate", + "B. AWS Lambda", + "C. Amazon DynamoDB", + "D. Amazon EC2 Auto Scaling" + ], + "correct": "", + "explanation": "Explanation/Reference:", + "references": "https://aws.amazon.com/about-aws/whats-new/2017/11/ amazon-api-gateway-supports-endpoint-integrations- with-private-vpcs" + }, + { + "question": "A start-up company has a web application based in t he us-east-1 Region with multiple Amazon EC2 instan ces running behind an Application Load Balancer across multiple Availability Zones. As the company ?\u20ac\u2122s user base grows in the us-west-1 Regi on, it needs a solution with low latency and high availability. What should a solutions architect do to accomplish this?", + "options": [ + "A. Provision EC2 instances in us-west-1. Switch the Application Load Balancer to a Network Load Balance r to", + "B. Provision EC2 instances and an Application Load B alancer in us-west-1. Make the load balancer distri bute", + "C. Provision EC2 instances and configure an Applicat ion Load Balancer in us-west-1. Create an accelerat or in", + "D. Provision EC2 instances and configure an Applicat ion Load Balancer in us-west-1. Configure Amazon" + ], + "correct": "", + "explanation": "Explanation/Reference: Register endpoints for endpoint groups: You registe r one or more regional resources, such as Applicati on Load Balancers, Network Load Balancers, EC2 Instances, or Elastic IP addresses, in each endpoin t group. 
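The endpoint-group setup described here can be sketched with boto3 as follows (a hedged example, not from the source): the accelerator name, ALB ARN, and account ID are placeholders, and the Global Accelerator control-plane API is only served from us-west-2.

import boto3, uuid

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(
    Name="web-app-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accel["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}, {"FromPort": 443, "ToPort": 443}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# Register the us-west-1 ALB (placeholder ARN) as an endpoint; Weight controls
# how much traffic this endpoint receives relative to others in the group.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-west-1:111122223333:loadbalancer/app/web/abc123",
        "Weight": 128,
        "ClientIPPreservationEnabled": True,
    }],
    IdempotencyToken=str(uuid.uuid4()),
)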
Then you can set weights to choose how muc h traffic is routed to each endpoint. Endpoints in AWS Global Accelerator Endpoints in AWS Global Accelerator can be Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP addresses. A static IP address serves as a single p oint of contact for clients, and Global Accelerator then distributes incoming traffic across healthy endpoints. Global Accelerator directs traff ic to endpoints by using the port (or port range) t hat you specify for the listener that the endpoint group for the endpoint belongs to. Each endpoint group can have multiple endpoints. Yo u can add each endpoint to multiple endpoint groups , but the endpoint groups must be associated with different listeners. Global Accelerator continually monitors the health of all endpoints that are included in an endpoint g roup. It routes traffic only to the active endpoints that are healthy. If Global Accelerator d oesn ?\u20ac\u2122t have any healthy endpoints to route traff ic to, it routes traffic to all endpoints.", + "references": "https://docs.aws.amazon.com/global-accelerator/late st/dg/about-endpoints.html https://aws.amazon.com/ global-accelerator/faqs/" + }, + { + "question": "A solutions architect is designing a solution to ac cess a catalog of images and provide users with the ability to submit requests to customize images. Image customization parameters will be in a ny request sent to an AWS API Gateway API. The customized image will be generated on demand, and users will receive a link they can clic k to view or download their customized image. The s olution must be highly available for viewing and customizing images. What is the MOST cost-effective solution to meet th ese requirements?", + "options": [ + "A. Use Amazon EC2 instances to manipulate the origin al image into the requested customization. Store th e", + "B. Use AWS Lambda to manipulate the original image t o the requested customization. Store the original a nd", + "C. Use AWS Lambda to manipulate the original image t o the requested customization. Store the original", + "D. Use Amazon EC2 instances to manipulate the origin al image into the requested customization. Store th e" + ], + "correct": "B. Use AWS Lambda to manipulate the original image t o the requested customization. Store the original a nd", + "explanation": "Explanation Explanation/Reference: AWS Lambda is a compute service that lets you run c ode without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few re quests per day to thousands per second. You pay onl y for the compute time you consume ?\u20ac\" there is no charge when your code is not runnin g. With AWS Lambda, you can run code for virtually any type of application or backend service ?\u20ac\" all with zero administration. AWS Lambd a runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. All you need to do is supply your code in one of the languages that AWS Lambda supports. Storing your static content with S3 provides a lot of advantages. But to help optimize your applicatio n ?\u20ac\u2122s performance and security while effectively managing cost, we recommend that you al so set up Amazon CloudFront to work with your S3 bu cket to serve and protect the content. 
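A skeleton of the Lambda-based image customization flow described above might look like the following; the bucket names, environment variables, and the customization step itself are placeholders, and a real function would plug in an imaging library at the marked point.

import boto3, json, os, uuid

s3 = boto3.client("s3")
SOURCE_BUCKET = os.environ.get("SOURCE_BUCKET", "original-images")    # assumed name
OUTPUT_BUCKET = os.environ.get("OUTPUT_BUCKET", "customized-images")  # assumed name

def handler(event, context):
    """Skeleton handler invoked by API Gateway with customization parameters."""
    params = json.loads(event.get("body") or "{}")
    key = params["image_key"]

    original = s3.get_object(Bucket=SOURCE_BUCKET, Key=key)["Body"].read()

    # Placeholder: apply the requested customization (resize, watermark, etc.)
    # with an imaging library of your choice.
    customized = original

    out_key = f"custom/{uuid.uuid4()}-{key}"
    s3.put_object(Bucket=OUTPUT_BUCKET, Key=out_key, Body=customized)

    # Return a time-limited link the user can click to view or download.
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": OUTPUT_BUCKET, "Key": out_key}, ExpiresIn=3600
    )
    return {"statusCode": 200, "body": json.dumps({"download_url": url})}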
CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost effective than delivering it from S3 directly to your users. CloudFront serves content through a worldwide network of data centers called Edge Locations. Using edge servers to cache and serve content improves performance by providing content closer to where viewers are located. CloudFront has edge servers in locations all around the world.", + "references": "https://docs.aws.amazon.com/lambda/latest/dg/welcome.html https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/" + }, + { + "question": "A company is planning to migrate a business-critical dataset to Amazon S3. The current solution design uses a single S3 bucket in the us-east-1 Region with versioning enabled to store the dataset. The company's disaster recovery policy states that all data must be stored in multiple AWS Regions. How should a solutions architect design the S3 solution?", + "options": [ + "A. Create an additional S3 bucket in another Region and configure cross-Region replication.", + "B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS).", + "C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication.", + "D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource" + ], + "correct": "C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication.", + "explanation": "Explanation/Reference:", + "references": "https://medium.com/@KerrySheldon/s3-exercise-2-4-adding-objects-to-an-s3-bucket-with-cross-region-replication-a78b332b7697" + }, + { + "question": "A company's web application uses an Amazon RDS PostgreSQL DB instance to store its application data. During the financial closing period at the start of every month, accountants run large queries that impact the database's performance due to high usage. The company wants to minimize the impact that the reporting activity has on the web application. What should a solutions architect do to reduce the impact on the database with the LEAST amount of effort?", + "options": [ + "A. Create a read replica and direct reporting traffic to the replica.", + "B. Create a Multi-AZ database and direct reporting traffic to the standby.", + "C. Create a cross-Region read replica and direct reporting traffic to the replica.", + "D. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database." + ], + "correct": "A. Create a read replica and direct reporting traffic to the replica.", + "explanation": "Explanation/Reference: Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. When you create a read replica, you first specify an existing DB instance as the source. Then Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot.
Amazon RDS then uses the asynchronous replication method for the DB engine to update the read replica whenever there is a change to the source DB instance. The read replica operate s as a DB instance that allows only readonly connections. Applications connect to a read replica the same way they do to any DB instance. Amazon RD S replicates all databases in the source DB instance.", + "references": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/USER_ReadRepl.html" + }, + { + "question": "A company is hosting its web application in an Auto Scaling group of EC2 instances behind an Applicati on Load Balancer. Recently, the Solutions Architect id entified a series of SQL injection attempts and cro ss- site scripting attacks to the application, which ha d adversely affected their production data. Which of the following should the Architect impleme nt to mitigate this kind of attack?", + "options": [ + "A. Using AWS Firewall Manager, set up security rules that block SQL injection and cross-site scripting attacks.", + "B. Use Amazon GuardDuty to prevent any further SQL i njection and cross-site scripting attacks in your", + "C. Set up security rules that block SQL injection an d cross-site scripting attacks in AWS Web Applicati on", + "D. Block all the IP addresses where the SQL injectio n and cross-site scripting attacks originated using the" + ], + "correct": "C. Set up security rules that block SQL injection an d cross-site scripting attacks in AWS Web Applicati on", + "explanation": "Explanation/Reference: AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwa rded to an Amazon API Gateway API, Amazon CloudFront or an Application Load Balancer. AWS WAF also lets you control access to your content. B ased on conditions that you specify, such as the IP addresses that requests originate from or the value s of query strings, API Gateway, CloudFront or an Application Load Balancer responds to requests eith er with the requested content or with an HTTP 403 status code (Forbidden). You also can configure Clo udFront to return a custom error page when a reques t is blocked. At the simplest level, AWS WAF lets you choose one of the following behaviors: Allow all requests except the ones that you specify This is useful when you want CloudFront or an Application Load Balancer to serve content for a pu blic website, but you also want to block requests f rom attackers. Block all requests except the ones that you specify This is useful when you want to serve content for a restricted website whose users are readily identifi able by properties in web requests, such as the IP addresses that they use to browse to the website. Count the requests that match the properties that y ou specify When you want to allow or block requests based on new properties in web requests, y ou first can configure AWS WAF to count the request s that match those properties without allowing or blo cking those requests. This lets you confirm that yo u didn't accidentally configure AWS WAF to block all the traffic to your website. When you're confident that you specified t he correct properties, you can change the behavior to allow or block requests. Hence, the correct answer in this scenario is: Set up security rules that block SQL injection and cros s- site scripting attacks in AWS Web Application Firewall ( WAF). Associate the rules to the Application Load Balancer. 
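As a hedged sketch of that correct answer, the boto3 calls below create a regional web ACL that uses the AWS managed SQL injection rule group and attach it to an Application Load Balancer; the ACL name, ALB ARN, and account ID are assumptions, and a cross-site scripting protection (for example, the AWS common managed rule set) can be added to the Rules list in the same way.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123"  # placeholder

acl = wafv2.create_web_acl(
    Name="web-app-protection",
    Scope="REGIONAL",                      # REGIONAL scope is required for an ALB
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-sqli-rules",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet"}},
        "OverrideAction": {"None": {}},    # keep the managed rules' own block actions
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "sqli"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "web-app-protection"},
)

# Attach the web ACL to the Application Load Balancer.
wafv2.associate_web_acl(WebACLArn=acl["Summary"]["ARN"], ResourceArn=ALB_ARN)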
Using Amazon GuardDuty to prevent any further SQL i njection and cross-site scripting attacks in your application is incorrect because Amazon GuardD uty is just a threat detection service that continuously monitors for malicious activity and un authorized behavior to protect your AWS accounts an d workloads. Using AWS Firewall Manager to set up security rules that block SQL injection and cross-site scripting attacks, then associating the rules to th e Application Load Balancer is incorrect because AWS Firewall Manager just simplifies your AWS WAF a nd AWS Shield Advanced administration and maintenance tasks across multiple accounts and reso urces. Blocking all the IP addresses where the SQL injecti on and cross-site scripting attacks originated using the Network Access Control List is incorrect because this is an optional layer of security for y our VPC that acts as a firewall for controlling traffic in and o ut of one or more subnets. NACLs are not effective in blocking SQL injection and cross-site scripting attacks References: https://aws.amazon.com/waf/ https://docs.aws.amazon.com/waf/latest/developergui de/what-is-aws-waf.html Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, Cloud HSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", + "references": "" + }, + { + "question": "An insurance company utilizes SAP HANA for its day- to-day ERP operations. Since they can't migrate this database due to customer preferences, they nee d to integrate it with the current AWS workload in the VPC in which they are required to establish a site-to-s ite VPN connection. What needs to be configured outside of the VPC for them to have a successful site-to-site VPN connecti on?", + "options": [ + "A. An EIP to the Virtual Private Gateway", + "B. The main route table in your VPC to route traffic through a NAT instance", + "C. A dedicated NAT instance in a public subnet", + "D. An Internet-routable IP address (static) of the c ustomer gateway's external interface for the on-pre mises" + ], + "correct": "D. An Internet-routable IP address (static) of the c ustomer gateway's external interface for the on-pre mises", + "explanation": "Explanation/Reference: By default, instances that you launch into a virtua l private cloud (VPC) can't communicate with your o wn network. You can enable access to your network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your s ecurity group rules, and creating an AWS managed VP N connection. Although the term VPN connection is a general term, in the Amazon VPC documentation, a VPN connection refers to the connection between your VP C and your own network. AWS supports Internet Protocol security (IPsec) VPN connections. A customer gateway is a physical device or software application on your side of the VPN connection. To create a VPN connection, you must create a custo mer gateway resource in AWS, which provides information to AWS about your customer gateway devi ce. Next, you have to set up an Internet-routable I P address (static) of the customer gateway's external interface. The following diagram illustrates single VPN connec tions. The VPC has an attached virtual private gateway, and your remote network includes a custome r gateway, which you must configure to enable the VPN connection. You set up the routing so that any traffic from the VPC bound for your network is rout ed to the virtual private gateway. 
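The same setup can be sketched with boto3 as follows (a hypothetical example, not from the source): the customer gateway's static public IP, the VPC ID, and the on-premises CIDR are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The customer gateway needs the Internet-routable static IP of the
# on-premises device's external interface (placeholder value below).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
)["CustomerGateway"]

vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0abc1234")  # placeholder VPC

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]

# Route the on-premises CIDR (placeholder) over the VPN connection.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"], DestinationCidrBlock="192.168.0.0/16"
)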
The options that say: A dedicated NAT instance in a public subnet and the main route table in your VPC to route traffic through a NAT instance are inc orrect since you don't need a NAT instance for you to be able to create a VPN connection. An EIP to the Virtual Private Gateway is incorrect since you do not attach an EIP to a VPG. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/VPC_VPN.html https://docs.aws.amazon.com/vpc/latest/userguide/Se tUpVPNConnections.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company has a data analytics application that upd ates a real-time, foreign exchange dashboard and another separate application that archives data to Amazon Redshift. Both applications are configured t o consume data from the same stream concurrently and independently by using Amazon Kinesis Data Streams. However, they noticed that there are a lot of occurrences where a shard iterator expires unexpectedly. Upon checking, they found out that th e DynamoDB table used by Kinesis does not have enough capacity to store the lease data. Which of the following is the most suitable solutio n to rectify this issue?", + "options": [ + "A. Use Amazon Kinesis Data Analytics to properly sup port the data analytics application instead of Kine sis", + "B. Upgrade the storage capacity of the DynamoDB tabl e.", + "C. Increase the write capacity assigned to the shard table.", + "D. Enable In-Memory Acceleration with DynamoDB Accel erator (DAX)." + ], + "correct": "C. Increase the write capacity assigned to the shard table.", + "explanation": "Explanation/Reference: A new shard iterator is returned by every GetRecord s request (as NextShardIterator), which you then us e in the next GetRecords request (as ShardIterator). Typically, this shard iterator does not expire befo re you use it. However, you may find that shard iterators expire because you have not called GetRecords for m ore than 5 minutes, or because you've performed a resta rt of your consumer application. If the shard iterator expires immediately before yo u can use it, this might indicate that the DynamoDB table used by Kinesis does not have enough capacity to st ore the lease data. This situation is more likely t o happen if you have a large number of shards. To sol ve this problem, increase the write capacity assign ed to the shard table. Hence, increasing the write capacity assigned to th e shard table is the correct answer. Upgrading the storage capacity of the DynamoDB tabl e is incorrect because DynamoDB is a fully managed service which automatically scales its stor age, without setting it up manually. The scenario r efers to the write capacity of the shard table as it says that the DynamoDB table used by Kinesis does not h ave enough capacity to store the lease data. Enabling In-Memory Acceleration with DynamoDB Accel erator (DAX) is incorrect because the DAX feature is primarily used for read performance impr ovement of your DynamoDB table from milliseconds response time to microseconds. It does not have any relationship with Amazon Kinesis Data Stream in th is scenario. Using Amazon Kinesis Data Analytics to properly sup port the data analytics application instead of Kinesis Data Stream is incorrect. Although Amazon K inesis Data Analytics can support a data analytics application, it is still not a suitable solution fo r this issue. 
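A minimal boto3 sketch of that fix is shown below; the lease table name is assumed to match the consumer application name, and the capacity values are illustrative only.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# The KCL lease table is named after the consumer application (assumed here).
LEASE_TABLE = "fx-dashboard-consumer"

# Raise the write capacity so lease checkpoints for a large number of shards
# no longer get throttled.
dynamodb.update_table(
    TableName=LEASE_TABLE,
    ProvisionedThroughput={"ReadCapacityUnits": 20, "WriteCapacityUnits": 100},
)

# Wait until the table returns to ACTIVE after the capacity update.
dynamodb.get_waiter("table_exists").wait(TableName=LEASE_TABLE)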
You simply need to increase the write capacity assigned to the shard table in order to rectify the problem which is why switching to Amazon Kinesis D ata Analytics is not necessary.", + "references": "https://docs.aws.amazon.com/streams/latest/dev/kine sis-record-processor-ddb.html https://docs.aws.amazon.com/streams/latest/dev/trou bleshooting-consumers.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" + }, + { + "question": "A web application, which is used by your clients ar ound the world, is hosted in an Auto Scaling group of EC2 instances behind a Classic Load Balancer. You n eed to secure your application by allowing multiple domains to serve SSL traffic over the same IP addre ss. Which of the following should you do to meet the ab ove requirement?", + "options": [ + "A. Use an Elastic IP and upload multiple 3rd party c ertificates in your Classic Load Balancer using the AWS", + "B. Use Server Name Indication (SNI) on your Classic Load Balancer by adding multiple SSL certificates t o", + "C. Generate an SSL certificate with AWS Certificate Manager and create a CloudFront web distribution.", + "D. It is not possible to allow multiple domains to s erve SSL traffic over the same IP address in AWS" + ], + "correct": "C. Generate an SSL certificate with AWS Certificate Manager and create a CloudFront web distribution.", + "explanation": "Explanation/Reference: Amazon CloudFront delivers your content from each e dge location and offers the same security as the Dedicated IP Custom SSL feature. SNI Custom SSL wor ks with most modern browsers, including Chrome version 6 and later (running on Windows XP and late r or OS X 10.5.7 and later), Safari version 3 and l ater (running on Windows Vista and later or Mac OS X 10. 5.6. and later), Firefox 2.0 and later, and Interne t Explorer 7 and later (running on Windows Vista and later). Some users may not be able to access your content b ecause some older browsers do not support SNI and will not be able to establish a connection with Clo udFront to load the HTTPS version of your content. If you need to support non-SNI compliant browsers for HTTPS content, it is recommended to use the Dedicated IP Custom SSL feature. Using Server Name Indication (SNI) on your Classic Load Balancer by adding multiple SSL certificates to allow multiple domains to serve SSL traffic is incorrect because a Classic Load Balanc er does not support Server Name Indication (SNI). You have to use an Application Load Balancer instead or a CloudFront web distribution to allow the SNI featur e. Using an Elastic IP and uploading multiple 3rd part y certificates in your Application Load Balancer using the AWS Certificate Manager is incorrect beca use just like in the above, a Classic Load Balancer does not support Server Name Indication (SNI) and t he use of an Elastic IP is not a suitable solution to allow multiple domains to serve SSL traffic. You ha ve to use Server Name Indication (SNI). The option that says: It is not possible to allow m ultiple domains to serve SSL traffic over the same IP address in AWS is incorrect because AWS does suppor t the use of Server Name Indication (SNI). 
References: https://aws.amazon.com/about-aws/whats-new/2014/03/ 05/amazon-cloudront-announces-sni-custom-ssl/ https://aws.amazon.com/blogs/security/how-to-help-a chieve-mobile-app-transport-security-compliance-by- using-amazon-cloudfront-and-aws-certificate-manager / Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ SNI Custom SSL vs Dedicated IP Custom SSL: https://tutorialsdojo.com/sni-custom-ssl-vs-dedicat ed-ip-custom-ssl/ AWS Security Services Overview - Secrets Manager, A CM, Macie: https://www.youtube.com/watch?v=ogVamzF2Dzk", + "references": "" + }, + { + "question": "A company has two On-Demand EC2 instances inside th e Virtual Private Cloud in the same Availability Zo ne but are deployed to different subnets. One EC2 inst ance is running a database and the other EC2 instan ce a web application that connects with the database. Yo u need to ensure that these two instances can communicate with each other for the system to work properly. What are the things you have to check so that these EC2 instances can communicate inside the VPC? (Sel ect TWO.)", + "options": [ + "A. Ensure that the EC2 instances are in the same Pla cement Group.", + "B. Check if all security groups are set to allow the application host to communicate to the database on the right", + "C. Check if both instances are the same instance cla ss.", + "D. Check if the default route is set to a NAT instan ce or Internet Gateway (IGW) for them to communicat e." + ], + "correct": "", + "explanation": "Explanation/Reference: First, the Network ACL should be properly set to al low communication between the two subnets. The secu rity group should also be properly configured so that yo ur web server can communicate with the database ser ver. Hence, these are the correct answers: Check if all security groups are set to allow the a pplication host to communicate to the database on the right port and protocol. Check the Network ACL if it allows communication be tween the two subnets. The option that says: Check if both instances are t he same instance class is incorrect because the EC2 instances do not need to be of the same class in or der to communicate with each other. The option that says: Check if the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate is incorrect because an Int ernet gateway is primarily used to communicate to t he Internet. The option that says: Ensure that the EC2 instances are in the same Placement Group is incorrect because Placement Group is mainly used to provide l ow-latency network performance necessary for tightly-coupled node-to-node communication. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_Subnets.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "As part of the Business Continuity Plan of your com pany, your IT Director instructed you to set up an automated backup of all of the EBS Volumes for your EC2 instances as soon as possible. What is the fastest and most cost-effective solutio n to automatically back up all of your EBS Volumes?", + "options": [ + "A. Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS snapshots.", + "B. Set your Amazon Storage Gateway with EBS volumes as the data source and store the backups in your on -", + "C. 
Use an EBS-cycle policy in Amazon S3 to automatic ally back up the EBS volumes.", + "D. For an automated solution, create a scheduled job that calls the \"create-snapshot\" command via the A WS" + ], + "correct": "A. Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS snapshots.", + "explanation": "Explanation/Reference: You can use Amazon Data Lifecycle Manager (Amazon D LM) to automate the creation, retention, and deletion of snapshots taken to back up your Amazon EBS volumes. Automating snapshot management helps you to: - Protect valuable data by enforcing a regular back up schedule. - Retain backups as required by auditors or interna l compliance. - Reduce storage costs by deleting outdated backups . Combined with the monitoring features of Amazon Clo udWatch Events and AWS CloudTrail, Amazon DLM provides a complete backup solution for EBS vol umes at no additional cost. Hence, using Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS snapshots is the correct answer as it is the fastes t and most cost-effective solution that provides an automated way of backing up your EBS volumes. The option that says: For an automated solution, cr eate a scheduled job that calls the \"create- snapshot\" command via the AWS CLI to take a snapsho t of production EBS volumes periodically is incorrect because even though this is a valid solut ion, you would still need additional time to create a scheduled job that calls the \"create-snapshot\" comm and. It would be better to use Amazon Data Lifecycl e Manager (Amazon DLM) instead as this provides you t he fastest solution which enables you to automate the creation, retention, and deletion of the EBS sn apshots without having to write custom shell script s or creating scheduled jobs. Setting your Amazon Storage Gateway with EBS volume s as the data source and storing the backups in your on-premises servers through the storage gat eway is incorrect as the Amazon Storage Gateway is used only for creating a backup of data from your o n-premises server and not from the Amazon Virtual Private Cloud. Using an EBS-cycle policy in Amazon S3 to automatic ally back up the EBS volumes is incorrect as there is no such thing as EBS-cycle policy in Amazo n S3. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /snapshot-lifecycle.html http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ebs-creating-snapshot.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw&t=8s", + "references": "" + }, + { + "question": "A website that consists of HTML, CSS, and other cli ent-side Javascript will be hosted on the AWS envir onment. Several high-resolution images will be displayed on the webpage. The website and the photos should hav e the optimal loading response times as possible, and sho uld also be able to scale to high request rates. Which of the following architectures can provide th e most cost-effective and fastest loading experienc e?", + "options": [ + "A. Launch an Auto Scaling Group using an AMI that ha s a pre-configured Apache web server, then configur e", + "B. Create a Nginx web server in an Amazon LightSail instance to host the HTML, CSS, and Javascript file s", + "C. Upload the HTML, CSS, Javascript, and the images in a single bucket. Then enable website hosting. Cr eate", + "D. 
Create a Nginx web server in an EC2 instance to h ost the HTML, CSS, and Javascript files then enable" + ], + "correct": "C. Upload the HTML, CSS, Javascript, and the images in a single bucket. Then enable website hosting. Cr eate", + "explanation": "Explanation/Reference: Amazon S3 is an object storage service that offers industry-leading scalability, data availability, se curity, and performance. Additionally, You can use Amazon S3 to host a static website. On a static website, individual webpages include static content. Amazon S3 is highly scalable and you only pay for what you use, you can start small and grow your application as yo u wish, with no compromise on performance or reliability. Amazon CloudFront is a fast content delivery networ k (CDN) service that securely delivers data, videos , applications, and APIs to customers globally with l ow latency, high transfer speeds. CloudFront can be integrated with Amazon S3 for fast delivery of data originating from an S3 bucket to your end-users. B y design, delivering data out of CloudFront can be more cost- effective than delivering it from S3 directly to yo ur users. The scenario given is about storing and hosting ima ges and a static website respectively. Since we are just dealing with static content, we can leverage the we b hosting feature of S3. Then we can improve the architecture further by integrating it with CloudFr ont. This way, users will be able to load both the web pages and images faster than if we are serving them from a standard webserver. Hence, the correct answer is: Upload the HTML, CSS, Javascript, and the images in a single bucket. Then enable website hosting. Create a CloudFront di stribution and point the domain on the S3 website endpoint. The option that says: Create an Nginx web server in an EC2 instance to host the HTML, CSS, and Javascript files then enable caching. Upload the im ages in a S3 bucket. Use CloudFront as a CDN to deliver the images closer to your end-users is inco rrect. Creating your own web server just to host a static website in AWS is a costly solution. Web Servers on an EC2 instance is usually used for hosting dynami c web applications. Since static websites contain web pag es with fixed content, we should use S3 website hos ting instead. The option that says: Launch an Auto Scaling Group using an AMI that has a pre-configured Apache web server, then configure the scaling policy accor dingly. Store the images in an Elastic Block Store. Then, point your instance's endpoint to AWS Global Accelerator is incorrect. This is how we serve static websites in the old days. Now, with the help of S3 website hosting, we can host our static cont ents from a durable, high-availability, and highly scalable env ironment without managing any servers. Hosting stat ic websites in S3 is cheaper than hosting it in an EC2 instance. In addition, Using ASG for scaling insta nces that host a static website is an over-engineered solutio n that carries unnecessary costs. S3 automatically scales to high requests and you only pay for what you use. The option that says: Create an Nginx web server in an Amazon LightSail instance to host the HTML, CSS, and Javascript files then enable caching. Uplo ad the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to your end-user s is incorrect because although LightSail is cheape r than EC2, creating your own LightSail web server for hos ting static websites is still a relatively expensiv e solution when compared to hosting it on S3. 
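As a hedged sketch of the recommended approach, the boto3 calls below enable static website hosting on a bucket and upload a page and an image; the bucket name and file paths are placeholders, and the bucket policy, public-access settings, and the CloudFront distribution itself are omitted for brevity.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "example-static-site-bucket"   # placeholder; bucket names are globally unique

s3.create_bucket(Bucket=BUCKET)

# Turn the bucket into a static website endpoint.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload pages and images with the correct Content-Type so browsers render them.
s3.upload_file("site/index.html", BUCKET, "index.html",
               ExtraArgs={"ContentType": "text/html"})
s3.upload_file("site/images/hero.jpg", BUCKET, "images/hero.jpg",
               ExtraArgs={"ContentType": "image/jpeg"})

# This website endpoint is then used as the origin of a CloudFront distribution.
print(f"http://{BUCKET}.s3-website-us-east-1.amazonaws.com")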
In addition, S3 automatically scales to high request rates. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Web siteHosting.html https://aws.amazon.com/blogs/networking-and-content -delivery/amazon-s3-amazon-cloudfront-a-match-made- in-the-cloud/ Check out these Amazon S3 and CloudFront Cheat Shee ts: https://tutorialsdojo.com/amazon-s3/ https://tutorialsdojo.com/amazon-cloudfront/", + "references": "" + }, + { + "question": "You have built a web application that checks for ne w items in an S3 bucket once every hour. If new ite ms exist, a message is added to an SQS queue. You have a flee t of EC2 instances which retrieve messages from the SQS queue, process the file, and finally, send you and the user an email confirmatio n that the item has been successfully processed. Your offi cemate uploaded one test file to the S3 bucket and after a couple of hours, you noti ced that you and your officemate have 50 emails fro m your application with the same message. Which of the fol lowing is most likely the root cause why the application has sent you and the user multi ple emails?", + "options": [ + "A. There is a bug in the application.", + "B. By default, SQS automatically deletes the message s that were processed by the consumers. It might be", + "C. The sqsSendEmailMessage attribute of the SQS queu e is configured to 50.", + "D. Your application does not issue a delete command to the SQS queue after processing the message, whic h" + ], + "correct": "D. Your application does not issue a delete command to the SQS queue after processing the message, whic h", + "explanation": "Explanation In this scenario, the main culprit is that your app lication does not issue a delete command to the SQS queue after processing the message, which is why this mes sage went back to the queue and was processed multiple times. The option that says: The sqsSendEmailMessage attri bute of the SQS queue is configured to 50 is incorrect as there is no sqsSendEmailMessage attrib ute in SQS. The option that says: There is a bug in the applica tion is a valid answer but since the scenario did n ot mention that the EC2 instances deleted the processed messag es, the most likely cause of the problem is that th e application does not issue a delete command to the SQS queue as mentioned above. The option that says: By default, SQS automatically deletes the messages that were processed by the consumers. It might be possible that your officemat e has submitted the request 50 times which is why you received a lot of emails is incorrect as SQS do es not automatically delete the messages.", + "references": "https://aws.amazon.com/sqs/faqs/ Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" + }, + { + "question": "A Network Architect developed a food ordering appli cation. The Architect needs to retrieve the instanc e ID, public keys, and public IP address of the EC2 serve r made for tagging and grouping the attributes into the internal application running on-premises. Which of the following options fulfills this requir ement?", + "options": [ + "A. Amazon Machine Image", + "B. Instance user data", + "C. Resource tags", + "D. Instance metadata" + ], + "correct": "D. Instance metadata", + "explanation": "Explanation/Reference: Instance metadata is the data about your instance t hat you can use to configure or manage the running instance. 
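A small Python sketch of querying instance metadata from within an EC2 instance is shown below; it uses the IMDSv2 token flow, which is an addition beyond the IMDSv1 URL quoted above, and it must be run on the instance itself.

import urllib.request

METADATA = "http://169.254.169.254/latest"

# IMDSv2: fetch a session token first, then use it for metadata requests.
token_req = urllib.request.Request(
    f"{METADATA}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def get_metadata(path):
    # Retrieve a single metadata item using the session token.
    req = urllib.request.Request(
        f"{METADATA}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print("instance-id :", get_metadata("instance-id"))
print("public-ipv4 :", get_metadata("public-ipv4"))
print("public-keys :", get_metadata("public-keys/"))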
You can get the instance ID, public keys, public IP address and many other information from the instance metadata by firing a URL command in your i nstance to this URL: http://169.254.169.254/latest/meta-data/ Instance user data is incorrect because this is mai nly used to perform common automated configuration tasks and run scripts after the instance starts. Resource tags is incorrect because these are labels that you assign to an AWS resource. Each tag consi sts of a key and an optional value, both of which you d efine. Amazon Machine Image is incorrect because this main ly provides the information required to launch an instance, which is a virtual server in the cloud.", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ec2-instance-metadata.htm Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" + }, + { + "question": "A DevOps Engineer is required to design a cloud arc hitecture in AWS. The Engineer is planning to devel op a highly available and fault-tolerant architecture th at is composed of an Elastic Load Balancer and an A uto Scaling group of EC2 instances deployed across mult iple Availability Zones. This will be used by an on line accounting application that requires path-based rou ting, host-based routing, and bi-directional commun ication channels using WebSockets. Which is the most suitable type of Elastic Load Bal ancer that will satisfy the given requirement?", + "options": [ + "A. Gateway Load Balancer B. Network Load Balancer", + "C. Application Load Balancer", + "D. Classic Load Balancer" + ], + "correct": "C. Application Load Balancer", + "explanation": "Explanation/Reference: Application Load Balancer operates at the request l evel (layer 7), routing traffic to targets (EC2 ins tances, containers, IP addresses, and Lambda functions) bas ed on the content of the request. Ideal for advance d load balancing of HTTP and HTTPS traffic, Applicati on Load Balancer provides advanced request routing targeted at delivery of modern application architec tures, including microservices and container-based applications. Application Load Balancer simplifies and improves the security of your application, by ensuring that the latest SSL/TLS ciphers and protoc ols are used at all times. If your application is composed of several individu al services, an Application Load Balancer can route a request to a service based on the content of the request su ch as Host field, Path URL, HTTP header, HTTP metho d, Query string, or Source IP address. Host-based Routing: You can route a client request based on the Host field of the HTTP header allowing you to route to multiple domains from the same load balanc er. Path-based Routing: You can route a client request based on the URL path of the HTTP header. HTTP header-based routing: You can route a client r equest based on the value of any standard or custom HTTP header. HTTP method-based routing: You can route a client r equest based on any standard or custom HTTP method. Query string parameter-based routing: You can route a client request based on query string or query parameters. Source IP address CIDR-based routing: You can route a client request based on source IP address CIDR from where the request originates. Application Load Balancers support path-based routi ng, host-based routing, and support for containeriz ed applications hence, Application Load Balancer is th e correct answer. 
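As a hedged illustration of path-based and host-based routing on an Application Load Balancer, the boto3 calls below add two listener rules; the listener and target group ARNs, the path pattern, and the hostname are hypothetical.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/accounting/abc/def"  # placeholder
ORDERS_TG = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/orders/123"              # placeholder
REPORTS_TG = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/reports/456"            # placeholder

# Path-based routing: requests to /orders/* go to the orders target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "PathPatternConfig": {"Values": ["/orders/*"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": ORDERS_TG}],
)

# Host-based routing: reports.example.com goes to the reports target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": ["reports.example.com"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": REPORTS_TG}],
)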
Network Load Balancer is incorrect. Although it can handle WebSockets connections, it doesn't support path-based routing or host-based routing, unlike an Application Load Balancer. Classic Load Balancer is incorrect because this typ e of load balancer is intended for applications tha t are built within the EC2-Classic network only. A CLB doesn't support path-based routing or host-based routing. Gateway Load Balancer is incorrect because this is primarily used for deploying, scaling, and running your third-party virtual appliances. It doesn't hav e a path-based routing or host-based routing featur e. References: https://aws.amazon.com/elasticloadbalancing/feature s https://aws.amazon.com/elasticloadbalancing/faqs/ AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ Application Load Balancer vs Network Load Balancer vs Classic Load Balancer: https://tutorialsdojo.com/application-load-balancer -vs-network-load-balancer-vs-classic-load-balancer/", + "references": "" + }, + { + "question": "A software company has resources hosted in AWS and on-premises servers. You have been requested to create a decoupled architecture for applications wh ich make use of both resources. Which of the follow ing options are valid? (Select TWO.)", + "options": [ + "A. Use SWF to utilize both on-premises servers and E C2 instances for your decoupled application", + "B. Use SQS to utilize both on-premises servers and E C2 instances for your decoupled application", + "C. Use RDS to utilize both on-premises servers and E C2 instances for your decoupled application", + "D. Use DynamoDB to utilize both on-premises servers an d EC2 instances for your decoupled application E. Use VPC peering to connect both on-premises servers and EC2 instances for your decoupled application" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon Simple Queue Service (SQS) and Amazon Simple Workflow Service (SWF) are the services that you can use for creating a decoupled architect ure in AWS. Decoupled architecture is a type of computing architecture that enables computing compo nents or layers to execute independently while stil l interfacing with each other. Amazon SQS offers reliable, highly-scalable hosted queues for storing messages while they travel betwe en applications or microservices. Amazon SQS lets you move data between distributed application components and helps you decouple these components. Amazon SWF is a web service that makes it easy to coordinate work across distributed application comp onents. Using RDS to utilize both on-premises servers and E C2 instances for your decoupled application and using DynamoDB to utilize both on-premises servers and EC2 instances for your decoupled application are incorrect as RDS and DynamoDB are d atabase services. Using VPC peering to connect both on-premises serve rs and EC2 instances for your decoupled application is incorrect because you can't create a VPC peering for your on-premises network and AWS VPC. 
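A minimal producer/consumer sketch of the SQS-based decoupling described above follows (the queue name and message body are assumptions); note the explicit delete_message call after processing, which is also what the earlier duplicate-email scenario hinged on.

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="image-processing-queue")["QueueUrl"]  # assumed name

# Producer side (on-premises server or EC2 instance): enqueue a work item.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"object_key": "uploads/photo-001.jpg"}')

def process(body):
    # Placeholder for the real work (e.g., handling the object referenced in the body).
    print("processing:", body)

# Consumer side: long-poll, process, then delete so the message is not redelivered.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])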
References: https://aws.amazon.com/sqs/ http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-welcome.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/ Amazon Simple Workflow (SWF) vs AWS Step Functions vs Amazon SQS: https://tutorialsdojo.com/amazon-simple-workflow-swf-vs-aws-step-functions-vs-amazon-sqs/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", + "references": "" + }, + { + "question": "A company developed a web application and deployed it on a fleet of EC2 instances that uses Amazon SQS. The requests are saved as messages in the SQS queue, which is configured with the maximum message retention period. However, after thirteen days of operation, the web application suddenly crashed and there are 10,000 unprocessed messages that are still waiting in the queue. Since they developed the application, they can easily resolve the issue but they need to send a communication to the users on the issue. What information should they provide and what will happen to the unprocessed messages?", + "options": [ + "A. Tell the users that unfortunately, they have to resubmit all the requests again.", + "B. Tell the users that unfortunately, they have to resubmit all of the requests since the queue would not be able", + "C. Tell the users that the application will be operational shortly however, requests sent over three days ago will", + "D. Tell the users that the application will be operational shortly and all received requests will be processed after" + ], + "correct": "D. Tell the users that the application will be operational shortly and all received requests will be processed after", + "explanation": "Explanation/Reference: In Amazon SQS, you can configure the message retention period to a value from 1 minute to 14 days. The default is 4 days. Once the message retention limit is reached, your messages are automatically deleted. A single Amazon SQS message queue can contain an unlimited number of messages. However, there is a 120,000 limit for the number of inflight messages for a standard queue and 20,000 for a FIFO queue. Messages are inflight after they have been received from the queue by a consuming component, but have not yet been deleted from the queue. In this scenario, it is stated that the SQS queue is configured with the maximum message retention period. The maximum message retention in SQS is 14 days, which is why the option that says: Tell the users that the application will be operational shortly and all received requests will be processed after the web application is restarted is the correct answer, i.e. there will be no missing messages. The options that say: Tell the users that unfortunately, they have to resubmit all the requests again and Tell the users that the application will be operational shortly, however, requests sent over three days ago will need to be resubmitted are incorrect as there are no missing messages in the queue, thus there is no need to resubmit any previous requests. The option that says: Tell the users that unfortunately, they have to resubmit all of the requests since the queue would not be able to process the 10,000 messages together is incorrect as the queue can contain an unlimited number of messages, not just 10,000 messages.", + "references": "https://aws.amazon.com/sqs/ Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" + }, + { + "question": "A company developed a meal planning application that provides meal recommendations for the week as well as the food consumption of the users.
The appl ication resides on an EC2 instance which requires access to various AWS services for its day-to-day o perations. Which of the following is the best way to allow the EC2 instance to access the S3 bucket and other AWS services?", + "options": [ + "A. Add the API Credentials in the Security Group and assign it to the EC2 instance.", + "B. Store the API credentials in a bastion host.", + "C. Create a role in IAM and assign it to the EC2 ins tance.", + "D. Store the API credentials in the EC2 instance." + ], + "correct": "C. Create a role in IAM and assign it to the EC2 ins tance.", + "explanation": "Explanation/Reference: The best practice in handling API Credentials is to create a new role in the Identity Access Managemen t (IAM) service and then assign it to a specific EC2 instan ce. In this way, you have a secure and centralized way of storing and managing your credentials. Storing the API credentials in the EC2 instance, ad ding the API Credentials in the Security Group and assigning it to the EC2 instance, and storing t he API credentials in a bastion host are incorrect because it is not secure to store nor use the API c redentials from an EC2 instance. You should use IAM service instead.", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ iam-roles-for-amazon-ec2.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/" + }, + { + "question": "An organization stores and manages financial record s of various companies in its on-premises data cent er, which is almost out of space. The management decide d to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional securit y, all records must be prevented from being deleted or overwritten . Which of the following should you do to meet the ab ove requirement?", + "options": [ + "A. Use AWS DataSync to move the data. Store all of y our data in Amazon EFS and enable object lock.", + "B. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and", + "C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and", + "D. Use AWS DataSync to move the data. Store all of y our data in Amazon S3 and enable object lock." + ], + "correct": "D. Use AWS DataSync to move the data. Store all of y our data in Amazon S3 and enable object lock.", + "explanation": "Explanation/Reference: AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools, or license and ma nage expensive commercial network acceleration software. You can use DataSync to migrate active da ta to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises sto rage capacity, or replicate data to AWS for busines s continuity. AWS DataSync enables you to migrate your on-premise s data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. You can configure Data Sync to make an initial copy of your entire dataset , and schedule subsequent incremental transfers of changing data t owards Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten. AWS DataSync is primarily used to migrate existing data to Amazon S3. 
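As a hedged sketch of the S3 Object Lock part of that answer, the boto3 calls below create an Object Lock-enabled bucket with a default compliance-mode retention; the bucket name, region, and seven-year period are illustrative assumptions.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")
BUCKET = "example-financial-records"   # placeholder name

# Object Lock can only be enabled when the bucket is created (versioning is
# turned on automatically for Object Lock buckets).
s3.create_bucket(
    Bucket=BUCKET,
    ObjectLockEnabledForBucket=True,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Default retention in COMPLIANCE mode: objects cannot be deleted or
# overwritten for the retention period (illustrative 7 years).
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)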
On the other hand, AWS Storage Gateway is more suitable if you still want to retai n access to the migrated data and for ongoing updat es from your on-premises file-based applications. Hence, the correct answer in this scenario is: Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock. The option that says: Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock is incorrect because Amazon EFS only supports file locking. Object lock is a featur e of Amazon S3 and not Amazon EFS. The options that says: Use AWS Storage Gateway to e stablish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock is incorre ct because the scenario requires that all of the existing records must be migrated to AWS. The futur e records will also be stored in AWS and not in the on- premises network. This means that setting up a hybr id cloud storage is not necessary since the on- premises storage will no longer be used. The option that says: Use AWS Storage Gateway to es tablish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock is incorr ect because Amazon EBS does not support object lock. Amazon S3 is the only service capable of lock ing objects to prevent an object from being deleted or overwritten. References: https://aws.amazon.com/datasync/faqs/ https://docs.aws.amazon.com/datasync/latest/usergui de/what-is-datasync.html https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lock.html Check out this AWS DataSync Cheat Sheet: https://tutorialsdojo.com/aws-datasync/ AWS Storage Gateway vs DataSync: https://www.youtube.com/watch?v=tmfe1rO-AUs Amazon S3 vs EBS vs EFS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/", + "references": "" + }, + { + "question": "A Solutions Architect created a new Standard-class S3 bucket to store financial reports that are not frequently accessed but should immediately be avail able when an auditor requests them. To save costs, the Architect changed the storage class of the S3 bucke t from Standard to Infrequent Access storage class.In Amazon S3 Standard - Infrequent Access storage c lass, which of the following statements are true? (Select TWO.)", + "options": [ + "A. Ideal to use for data archiving.", + "B. It is designed for data that is accessed less fre quently.", + "C. It provides high latency and low throughput perfo rmance", + "D. It is designed for data that requires rapid acces s when needed." + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Standard - IA offers the high durabil ity, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieva l fee. This combination of low cost and high performance m ake Standard - IA ideal for long-term storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is set at the obje ct level and can exist in the same bucket as Standard, allow ing you to use lifecycle policies to automatically transition objects between storage classes without any application changes. 
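The lifecycle transition mentioned above can be sketched with boto3 as follows; the bucket name, prefix, and 30-day threshold are placeholder assumptions.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-financial-reports",          # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "reports-to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},
            # After 30 days, move objects to Standard-IA: same low latency and
            # high throughput, lower storage price, with a per-GB retrieval fee.
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)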
Key Features: - Same low latency and high throughput performance of Standard - Designed for durability of 99.999999999% of objec ts - Designed for 99.9% availability over a given year - Backed with the Amazon S3 Service Level Agreement for availability - Supports SSL encryption of data in transit and at rest - Lifecycle management for automatic migration of o bjects Hence, the correct answers are: - It is designed for data that is accessed less fre quently. - It is designed for data that requires rapid acces s when needed. The option that says: It automatically moves data t o the most cost-effective access tier without any operational overhead is incorrect as it actually re fers to Amazon S3 - Intelligent Tiering, which is t he only cloud storage class that delivers automatic cost savings by moving objects between different access tiers wh en access patterns change. The option that says: It provides high latency and low throughput performance is incorrect as it shoul d be \"low latency\" and \"high throughput\" instead. S3 automati cally scales performance to meet user demands. The option that says: Ideal to use for data archivi ng is incorrect because this statement refers to Am azon S3 Glacier. Glacier is a secure, durable, and extremel y low-cost cloud storage service for data archiving and long- term backup. References: https://aws.amazon.com/s3/storage-classes/ https://aws.amazon.com/s3/faqs Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A media company is setting up an ECS batch architec ture for its image processing application. It will be hosted in an Amazon ECS Cluster with two ECS tasks that wi ll handle image uploads from the users and image processing. The first ECS task will process t he user requests, store the image in an S3 input bu cket, and push a message to a queue. The second task reads fr om the queue, parses the message containing the obj ect name, and then downloads the object. Once the image is processed and transformed, it will upload the o bjects to the S3 output bucket. To complete the architectu re, the Solutions Architect must create a queue and the necessary IAM permissions for the ECS tasks. Which of the following should the Architect do next ?", + "options": [ + "A. Launch a new Amazon Kinesis Data Firehose and con figure the second ECS task to read from it. Create an", + "B. Launch a new Amazon AppStream 2.0 queue and confi gure the second ECS task to read from it. Create an", + "C. a new Amazon SQS queue and configure the second E CS task to read from it. Create an IAM role that th e", + "D. a new Amazon MQ queue and configure the second EC S task to read from it. Create an IAM role that the" + ], + "correct": "C. a new Amazon SQS queue and configure the second E CS task to read from it. Create an IAM role that th e", + "explanation": "Explanation/Reference: Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived an d embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS ta sk. Amazon ECS supports batch jobs. You can use Amazon ECS Run Task action to run one or more tasks once. The Run Task action starts the ECS task on an instance that meets the task's requirements includ ing CPU, memory, and ports. For example, you can set up an ECS Batch architectu re for an image processing application. 
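Before looking at the full template, note that the core of the second ECS task is just a polling loop over the queue. A minimal sketch is shown below (the queue URL, bucket names, and the message field are assumptions; the actual transform step is omitted):

```python
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-jobs"  # placeholder

while True:
    # Long-poll the queue for the S3 object details pushed by the first task.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        key = body["object_key"]  # hypothetical message field
        s3.download_file("input-bucket", key, "/tmp/in.jpg")
        # ... image transformation happens here ...
        s3.upload_file("/tmp/in.jpg", "output-bucket", key)
        # Delete the message only after successful processing.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```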
You can set up an AWS CloudFormation template that creates an Amazon S3 bucket, an Amazon SQS queue, an Amazon CloudWatch alarm, an ECS cluster, and an ECS task d efinition. Objects uploaded to the input S3 bucket trigger an event that sends object details to the S QS queue. The ECS task deploys a Docker container t hat reads from that queue, parses the message containin g the object name and then downloads the object. On ce transformed it will upload the objects to the S3 ou tput bucket. By using the SQS queue as the location for all obje ct details, you can take advantage of its scalabili ty and reliability as the queue will automatically scale b ased on the incoming messages and message retention can be configured. The ECS Cluster will then be able to scale services up or down based on the number of messages in the queue. You have to create an IAM Role that the ECS task as sumes in order to get access to the S3 buckets and SQS queue. Note that the permissions of the IAM rol e don't specify the S3 bucket ARN for the incoming bucket. This is to avoid a circular dependency issu e in the CloudFormation template. You should always make sure to assign the least amount of privileges needed to an IAM role. Hence, the correct answer is: Launch a new Amazon S QS queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 bucket s and SQS queue. Declare the IAM Role (taskRoleArn) i n the task definition. The option that says: Launch a new Amazon AppStream 2.0 queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 bucket s and AppStream 2.0 queue. Declare the IAM Role (task RoleArn) in the task definition is incorrect because Amazon AppStream 2.0 is a fully managed app lication streaming service and can't be used as a queue. You have to use Amazon SQS instead. The option that says: Launch a new Amazon Kinesis D ata Firehose and configure the second ECS task to read from it. Create an IAM role that the ECS ta sks can assume in order to get access to the S3 buckets and Kinesis Data Firehose. Specify the ARN of the IAM Role in the (taskDefinitionArn) field of the task definition is incorrect because Amazon Kin esis Data Firehose is a fully managed service for delivering real-time streaming data. Although it ca n stream data to an S3 bucket, it is not suitable t o be used as a queue for a batch application in this scenario . In addition, the ARN of the IAM Role should be declared in the taskRoleArn and not in the taskDefi nitionArn field. The option that says: Launch a new Amazon MQ queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assum e in order to get access to the S3 buckets and Amazon MQ queue. Set the (EnableTaskIAMRole) option to true in the task definition is incorrect because Amazon MQ is primarily used as a managed me ssage broker service and not a queue. The EnableTaskIAMRole option is only applicable for Win dows-based ECS Tasks that require extra configurati on. References: https://github.com/aws-samples/ecs-refarch-batch-pr ocessing https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/common_use_cases.html https://aws.amazon.com/ecs/faqs/", + "references": "" + }, + { + "question": "A company has a top priority requirement to monitor a few database metrics and then afterward, send em ail notifications to the Operations team in case there is an issue. 
Which AWS services can accomplish this requirement? (Select TWO.)", + "options": [ + "A. Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server.", + "B. Amazon CloudWatch", + "C. Simple Notification Service (SNS)", + "D. Amazon Simple Email Service" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon CloudWatch and Amazon Simple Notification Service (SNS) are correct. In this requirement, you can use Amazon CloudWatch to monitor the database and then Amazon SNS to send the emails to the Operations team. Take note that you should use SNS instead of SES (Simple Email Service) when you want to monitor your EC2 instances. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. SNS is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon Simple Email Service is incorrect. SES is a cloud-based email sending service designed to send notification and transactional emails. Amazon Simple Queue Service (SQS) is incorrect. SQS is a fully managed message queuing service. It does not monitor applications or send email notifications, unlike SES. Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server is incorrect because BIND is primarily used as a Domain Name System (DNS) web service. This is only applicable if you have a private References: https://aws.amazon.com/cloudwatch/ https://aws.amazon.com/sns/ Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/", + "references": "" + }, + { + "question": "A media company has two VPCs: VPC-1 and VPC-2, with a peering connection between them. VPC-1 only contains private subnets while VPC-2 only contains public subnets. The company uses a single AWS Direct Connect connection and a virtual interface to connect their on-premises network with VPC-1. Which of the following options increase the fault tolerance of the connection to VPC-1? (Select TWO.)", + "options": [ + "A. Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface", + "B. Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as", + "C. Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.", + "D. Establish a hardware VPN over the Internet between VPC-1 and the on-premises network." + ], + "correct": "", + "explanation": "Explanation/Reference: In this scenario, you have two VPCs which have a peering connection with each other. Note that a VPC peering connection does not support edge-to-edge routing. This means that if either VPC in a peering relationship has one of the following connections, you cannot extend the peering relationship to that connection: - A VPN connection or an AWS Direct Connect connection to a corporate network - An Internet connection through an Internet gateway - An Internet connection in a private subnet through a NAT device - A gateway VPC endpoint to an AWS service; for example, an endpoint to Amazon S3. - (IPv6) A ClassicLink connection. You can enable IPv4 communication between a linked EC2-Classic instance and instances in a VPC on the other side of a VPC peering connection.
However, IPv6 is not supported in EC2-Classic, so you cannot extend this connection for IPv6 communication. For example, if VPC A and VPC B are peered, and VPC A has any of these connections, then instances in VPC B cannot use the connection to access resources on the other side of the connection. Similarly, resources on the other side of a connection cannot use the connection to access VPC B. Hence, this means that you cannot use VPC-2 to exte nd the peering relationship that exists between VPC -1 and the on-premises network. For example, traffic f rom the corporate network can't directly access VPC -1 by using the VPN connection or the AWS Direct Conne ct connection to VPC-2, which is why the following options are incorrect: - Use the AWS VPN CloudHub to create a new AWS Dire ct Connect connection and private virtual interface in the same region as VPC-2. - Establish a hardware VPN over the Internet betwee n VPC-2 and the on-premises network. - Establish a new AWS Direct Connect connection and private virtual interface in the same region as VP C-2. You can do the following to provide a highly availa ble, fault-tolerant network connection: - Establish a hardware VPN over the Internet betwee n the VPC and the on-premises network. - Establish another AWS Direct Connect connection a nd private virtual interface in the same AWS region. References: https://docs.aws.amazon.com/vpc/latest/peering/inva lid-peering-configurations.html#edge-to-edge-vgw https://aws.amazon.com/premiumsupport/knowledge-cen ter/configure-vpn-backup-dx/ https://aws.amazon.com/answers/networking/aws-multi ple-data-center-ha-network-connectivity/ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A Solutions Architect of a multinational gaming com pany develops video games for PS4, Xbox One, and Nintendo Switch consoles, plus a number of mobile g ames for Android and iOS. Due to the wide range of their products and services, the architect propose d that they use API Gateway. What are the key features of API Gateway that the a rchitect can tell to the client? (Select TWO.)", + "options": [ + "A. Enables you to run applications requiring high le vels of inter-node communications at scale on AWS", + "B. It automatically provides a query language for yo ur APIs similar to GraphQL.", + "C. You pay only for the API calls you receive and th e amount of data transferred out.", + "D. Provides you with static anycast IP addresses tha t serve as a fixed entry point to your applications hosted in" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Wi th a few clicks in the AWS Management Console, you can create an API that acts as a \"front door\" for a pplications to access data, business logic, or func tionality from your back-end services, such as workloads runn ing on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any web applic ation. Since it can use AWS Lambda, you can run your APIs without servers. Amazon API Gateway handles all the tasks involved i n accepting and processing up to hundreds of thousands of concurrent API calls, including traffi c management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out. 
Hence, the correct answers are: - Enables you to build RESTful APIs and WebSocket A PIs that are optimized for serverless workloads - You pay only for the API calls you receive and th e amount of data transferred out. The option that says: It automatically provides a q uery language for your APIs similar to GraphQL is incorrect because this is not provided by API Gatew ay. The option that says: Provides you with static anyc ast IP addresses that serve as a fixed entry point to your applications hosted in one or more AWS Regions is incorrect because this is a capability of AWS Global Accelerator and not API Gateway. The option that says: Enables you to run applicatio ns requiring high levels of inter-node communications at scale on AWS through its custom-b uilt operating system (OS) bypass hardware interface is incorrect because this is a capability of Elastic Fabric Adapter and not API Gateway. References: https://aws.amazon.com/api-gateway/ https://aws.amazon.com/api-gateway/features/ Check out this Amazon API Gateway Cheat Sheet: https://tutorialsdojo.com/amazon-api-gateway/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "An online events registration system is hosted in A WS and uses ECS to host its front-end tier and an R DS configured with Multi-AZ for its database tier. Wha t are the events that will make Amazon RDS automati cally perform a failover to the standby replica? (Select TWO.)", + "options": [ + "A. Loss of availability in primary Availability Zone", + "B. Storage failure on primary", + "C. Compute unit failure on secondary DB instance", + "D. Storage failure on secondary DB instance" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different tech nologies to provide failover support. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and Mari aDB DB instances use Amazon's failover technology. SQL Server DB instances use SQL Server Database Mirroring (DBM). In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The prima ry DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimiz e latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operati ons as quickly as possible without administrative intervention. The high-availability feature is not a scaling solu tion for read-only scenarios; you cannot use a stan dby replica to serve read traffic. To service read-only traffic, you should use a Read Replica. Amazon RDS automatically performs a failover in the event of any of the following: Loss of availability in primary Availability Zone. Loss of network connectivity to primary. Compute unit failure on primary. Storage failure on primary. 
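For context, the standby replica that these failovers rely on is provisioned simply by enabling Multi-AZ when the DB instance is created (or modified). A minimal boto3 sketch, with the identifier, class, engine, and credentials as placeholder values:

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True tells RDS to maintain a synchronous standby replica
# in a different Availability Zone and fail over to it automatically.
rds.create_db_instance(
    DBInstanceIdentifier="events-registration-db",  # placeholder
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",  # store real credentials in Secrets Manager
    MultiAZ=True,
)
```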
Hence, the correct answers are: - Loss of availability in primary Availability Zone - Storage failure on primary The following options are incorrect because all the se scenarios do not affect the primary database. Automatic failover only occurs if the primary datab ase is the one that is affected. - Storage failure on secondary DB instance - In the event of Read Replica failure - Compute unit failure on secondary DB instance References: https://aws.amazon.com/rds/details/multi-az/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/Concepts.MultiAZ.html Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "A company has multiple VPCs with IPv6 enabled for i ts suite of web applications. The Solutions Archite ct tried to deploy a new Amazon EC2 instance but she r eceived an error saying that there is no IP address available on the subnet. How should the Solutions Architect resolve this pro blem?", + "options": [ + "A. Set up a new IPv6-only subnet with a large CIDR r ange. Associate the new subnet with the VPC then", + "B. Set up a new IPv4 subnet with a larger CIDR range . Associate the new subnet with the VPC and then", + "C. Disable the IPv4 support in the VPC and use the a vailable IPv6 addresses.", + "D. Ensure that the VPC has IPv6 CIDRs only. Remove a ny IPv4 CIDRs associated with the VPC." + ], + "correct": "B. Set up a new IPv4 subnet with a larger CIDR range . Associate the new subnet with the VPC and then", + "explanation": "Explanation/Reference: Amazon Virtual Private Cloud (VPC) is a service tha t lets you launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP add ress range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 for most resources in your virtual private cloud, helping to ensure secure and easy ac cess to resources and applications. A subnet is a range of IP addresses in your VPC. Yo u can launch AWS resources into a specified subnet. When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones. You can al so optionally assign an IPv6 CIDR block to your VPC, a nd assign IPv6 CIDR blocks to your subnets. If you have an existing VPC that supports IPv4 only and resources in your subnet that are configured t o use IPv4 only, you can enable IPv6 support for your VPC and resources. Your VPC can operate in dual-stack mode -- your resources can communicate over IPv4, o r IPv6, or both. IPv4 and IPv6 communication are independent of each other. You cannot disable IPv4 support for your VPC and subnets since this is the default IP addressing system for Amazon VPC and Ama zon EC2. By default, a new EC2 instance uses an IPv4 address ing protocol. To fix the problem in the scenario, y ou need to create a new IPv4 subnet and deploy the EC2 instance in the new subnet. Hence, the correct answer is: Set up a new IPv4 sub net with a larger CIDR range. Associate the new subnet with the VPC and then launch the instance. The option that says: Set up a new IPv6-only subnet with a large CIDR range. Associate the new subnet with the VPC then launch the instance is inc orrect because you need to add IPv4 subnet first before you can create an IPv6 subnet. 
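A minimal sketch of the fix described above (the VPC ID, CIDR ranges, Availability Zone, and AMI are placeholders): optionally associate an additional IPv4 CIDR block with the VPC, then create the larger IPv4 subnet and launch the instance into it.

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder

# Optionally extend the VPC with another IPv4 CIDR block first.
ec2.associate_vpc_cidr_block(VpcId=VPC_ID, CidrBlock="10.0.2.0/23")

# Create a larger IPv4 subnet and deploy the instance into it.
subnet = ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.0.2.0/24",
                           AvailabilityZone="us-east-1a")
ec2.run_instances(ImageId="ami-0abcdef1234567890",  # placeholder AMI
                  InstanceType="t3.micro", MinCount=1, MaxCount=1,
                  SubnetId=subnet["Subnet"]["SubnetId"])
```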
The option that says: Ensure that the VPC has IPv6 CIDRs only. Remove any IPv4 CIDRs associated with the VPC is incorrect because you can't have a VPC with IPv6 CIDRs only. The default IP addressing system in VPC is IPv4. You can only change your VPC to dual-stack mode where your resources can communicate over IPv4, or IPv6, or both, but not ex clusively with IPv6 only. The option that says: Disable the IPv4 support in t he VPC and use the available IPv6 addresses is incorrect because you cannot disable the IPv4 suppo rt for your VPC and subnets since this is the defau lt IP addressing system. References: https://docs.aws.amazon.com/vpc/latest/userguide/vp c-migrate-ipv6.html https://docs.aws.amazon.com/vpc/latest/userguide/vp c-ip-addressing.html https://aws.amazon.com/vpc/faqs/ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "An insurance company plans to implement a message f iltering feature in their web application. To imple ment this solution, they need to create separate Amazon SQS queues for each type of quote request. The entire message processing should not exceed 24 hours. As the Solutions Architect of the company, which of the following should you do to meet the above requirement?", + "options": [ + "A. Create multiple Amazon SNS topics and configure t he Amazon SQS queues to subscribe to the SNS topics .", + "B. Create a data stream in Amazon Kinesis Data Strea ms. Use the Amazon Kinesis Client Library to delive r all", + "C. Create one Amazon SNS topic and configure the Ama zon SQS queues to subscribe to the SNS topic.", + "D. Create one Amazon SNS topic and configure the Ama zon SQS queues to subscribe to the SNS topic. Set" + ], + "correct": "D. Create one Amazon SNS topic and configure the Ama zon SQS queues to subscribe to the SNS topic. Set", + "explanation": "Explanation/Reference: Amazon SNS is a fully managed pub/sub messaging ser vice. With Amazon SNS, you can use topics to simultaneously distribute messages to multiple subs cribing endpoints such as Amazon SQS queues, AWS Lambda functions, HTTP endpoints, email addresses, and mobile devices (SMS, Push). Amazon SQS is a message queue service used by distr ibuted applications to exchange messages through a polling model. It can be used to decouple sending a nd receiving components without requiring each component to be concurrently available. A fanout scenario occurs when a message published t o an SNS topic is replicated and pushed to multiple endpoints, such as Amazon SQS queues, HTTP(S) endpo ints, and Lambda functions. This allows for parallel asynchronous processing. For example, you can develop an application that pu blishes a message to an SNS topic whenever an order is placed for a product. Then, two or more SQS queu es that are subscribed to the SNS topic receive identical notifications for the new order. An Amazo n Elastic Compute Cloud (Amazon EC2) server instance attached to one of the SQS queues can hand le the processing or fulfillment of the order. And you can attach another Amazon EC2 server instance to a data warehouse for analysis of all orders received. By default, an Amazon SNS topic subscriber receives every message published to the topic. You can use Amazon SNS message filtering to assign a filter pol icy to the topic subscription, and the subscriber w ill only receive a message that they are interested in. 
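The filter-policy behavior can be sketched as follows (the topic and queue ARNs and the quote_type message attribute are illustrative assumptions): each SQS queue subscribes with its own filter policy, and the publisher tags every message with the quote request type so SNS routes it to the matching queue only.

```python
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:quote-requests"  # placeholder

# Subscribe the 'auto quotes' queue with a filter on a message attribute.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:auto-quotes",  # placeholder
    Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
)

# The publisher sets the attribute so SNS can fan out to the right queue.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"customer_id": 42}),
    MessageAttributes={"quote_type": {"DataType": "String",
                                      "StringValue": "auto"}},
)
```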
Using Amazon SNS and Amazon SQS together, messages can be delivered to applications that requ ire immediate notification of an event. This method is known as fanout to Amazon SQS queues. Hence, the correct answer is: Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the SNS topic. Set the filter policies in the SNS subscriptions to publish the message to the designated SQS queue based on its quote request type. The option that says: Create one Amazon SNS topic a nd configure the Amazon SQS queues to subscribe to the SNS topic. Publish the same messag es to all SQS queues. Filter the messages in each queue based on the quote request type is incorrect because this option will distribute the same messag es on all SQS queues instead of its designated queue. You nee d to fan-out the messages to multiple SQS queues using a filter policy in Amazon SNS subscrip tions to allow parallel asynchronous processing. By doing so, the entire message processing will not exceed 2 4 hours. The option that says: Create multiple Amazon SNS to pics and configure the Amazon SQS queues to subscribe to the SNS topics. Publish the message to the designated SQS queue based on the quote request type is incorrect because to implement the solution asked in the scenario, you only need to us e one Amazon SNS topic. To publish it to the designated S QS queue, you must set a filter policy that allows you to fanout the messages. If you didn't set a filter pol icy in Amazon SNS, the subscribers would receive al l the messages published to the SNS topic. Thus, using mu ltiple SNS topics is not an appropriate solution fo r this scenario. The option that says: Create a data stream in Amazo n Kinesis Data Streams. Use the Amazon Kinesis Client Library to deliver all the records to the de signated SQS queues based on the quote request type is incorrect because Amazon KDS is not a messa ge filtering service. You should use Amazon SNS and SQS to distribute the topic to the designated q ueue. References: https://aws.amazon.com/getting-started/hands-on/fil ter-messages-published-to-topics/ https://docs.aws.amazon.com/sns/latest/dg/sns-messa ge-filtering.html https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-a s-subscriber.html Check out this Amazon SNS and SQS Cheat Sheets: https://tutorialsdojo.com/amazon-sns/ https://tutorialsdojo.com/amazon-sqs/ Amazon SNS Overview: https://www.youtube.com/watch?v=ft5R45lEUJ8", + "references": "" + }, + { + "question": "A music publishing company is building a multitier web application that requires a key-value store whi ch will save the document models. Each model is composed of band ID, album ID, song ID, composer ID,lyrics, an d other data. The web tier will be hosted in an Amazo n ECS cluster with AWS Fargate launch type. Which of the following is the MOST suitable setup f or the database-tier?", + "options": [ + "A. Launch an Amazon Aurora Serverless database.", + "B. Launch an Amazon RDS database with Read Replicas.", + "C. Launch a DynamoDB table.", + "D. Use Amazon WorkDocs to store the document models." + ], + "correct": "C. Launch a DynamoDB table.", + "explanation": "Explanation/Reference: Amazon DynamoDB is a fast and flexible NoSQL databa se service for all applications that need consisten t, single-digit millisecond latency at any scale. It i s a fully managed cloud database and supports both document and key-value store models. 
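A minimal sketch of what the key-value/document model might look like for this scenario (the table name, key attributes, and item fields are assumptions, not prescribed by the question):

```python
import boto3

dynamodb = boto3.resource("dynamodb")

# Composite key: band_id as the partition key, song_id as the sort key.
table = dynamodb.create_table(
    TableName="SongDocuments",  # placeholder
    KeySchema=[{"AttributeName": "band_id", "KeyType": "HASH"},
               {"AttributeName": "song_id", "KeyType": "RANGE"}],
    AttributeDefinitions=[{"AttributeName": "band_id", "AttributeType": "S"},
                          {"AttributeName": "song_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Each item is a self-contained document.
table.put_item(Item={"band_id": "b-001", "song_id": "s-100",
                     "album_id": "a-010", "composer_id": "c-007",
                     "lyrics": "..."})
```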
Its flexible d ata model, reliable performance, and automatic scal ing of throughput capacity makes it a great fit for mobile , web, gaming, ad tech, IoT, and many other applications. Hence, the correct answer is: Launch a DynamoDB tab le. The option that says: Launch an Amazon RDS database with Read Replicas is incorrect because this is a relational database. This is not suitable to be use d as a key-value store. A better option is to use D ynamoDB as it supports both document and key-value store mo dels. The option that says: Use Amazon WorkDocs to store the document models is incorrect because Amazon WorkDocs simply enables you to share content , provide rich feedback, and collaboratively edit documents. It is not a key-value store like DynamoD B. The option that says: Launch an Amazon Aurora Serve rless database is incorrect because this type of database is not suitable to be used as a key-value store. Amazon Aurora Serverless is an on-demand, au to- scaling configuration for Amazon Aurora where the database will automatically start-up, shut down, and scale capacity up or down based on your application's nee ds. It enables you to run your database in the clou d without managing any database instances. It's a simple, cos t-effective option for infrequent, intermittent, or unpredictable workloads and not as a key-value stor e. References: https://aws.amazon.com/dynamodb/ https://aws.amazon.com/nosql/key-value/ Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU", + "references": "" + }, + { + "question": "An application is hosted in AWS Fargate and uses RD S database in Multi-AZ Deployments configuration with several Read Replicas. A Solutions Architect w as instructed to ensure that all of their database credentials, API keys, and other secrets are encryp ted and rotated on a regular basis to improve data security. The application should also use the lates t version of the encrypted credentials when connect ing to the RDS database. Which of the following is the MOST appropriate solu tion to secure the credentials?", + "options": [ + "A. Store the database credentials, API keys, and oth er secrets to Systems Manager Parameter Store each", + "B. Store the database credentials, API keys, and oth er secrets in AWS KMS.", + "C. Store the database credentials, API keys, and oth er secrets to AWS ACM.", + "D. Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets." + ], + "correct": "D. Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets.", + "explanation": "Explanation/Reference: AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API ke ys, and even arbitrary text. You can store and cont rol access to these secrets centrally by using the Secr ets Manager console, the Secrets Manager command li ne interface (CLI), or the Secrets Manager API and SDK s. In the past, when you created a custom application that retrieves information from a database, you typ ically had to embed the credentials (the secret) for accessing the database directly in the application. When it came time to rotate the credentials, you had to do much more than just create new credentials. You had to invest time to update the application to use the new credentials. Then you had to distribute the updated application. 
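The replacement pattern described in the following paragraphs, an API call at runtime instead of an embedded credential, looks roughly like this minimal sketch (the secret name and JSON fields are placeholders):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the current version of the secret at runtime; nothing is
# hardcoded in the application code or its deployment package.
resp = secrets.get_secret_value(SecretId="prod/app/rds-credentials")  # placeholder
creds = json.loads(resp["SecretString"])

db_user = creds["username"]
db_password = creds["password"]
# ... open the RDS connection with db_user / db_password ...
```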
If you had multiple applications that shared credentials and y ou missed updating one of them, the application wou ld break. Because of this risk, many customers have chosen no t to regularly rotate their credentials, which effe ctively substitutes one risk for another. Secrets Manager enables you to replace hardcoded cr edentials in your code (including passwords), with an API call to Secrets Manager to retrieve the secret prog rammatically. This helps ensure that the secret can 't be compromised by someone examining your code, because the secret simply isn't there. Also, you can configure Secrets Manager to automatically rotate t he secret for you according to a schedule that you specify. This enables you to replace long-term secr ets with short-term ones, which helps to significan tly reduce the risk of compromise. Hence, the most appropriate solution for this scena rio is: Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and oth er secrets. Enable automatic rotation for all of th e credentials. The option that says: Store the database credential s, API keys, and other secrets to Systems Manager Parameter Store each with a SecureString data type. The credentials are automatically rotated by default is incorrect because Systems Manager Parame ter Store doesn't rotate its parameters by default. The option that says: Store the database credential s, API keys, and other secrets to AWS ACM is incorrect because it is just a managed private CA s ervice that helps you easily and securely manage th e lifecycle of your private certificates to allow SSL communication to your application. This is not a s uitable service to store database or any other confidential credentials. The option that says: Store the database credential s, API keys, and other secrets in AWS KMS is incorrect because this only makes it easy for you t o create and manage encryption keys and control the use of encryption across a wide range of AWS services. Thi s is primarily used for encryption and not for hosting your credentials. References: https://aws.amazon.com/secrets-manager/ https://aws.amazon.com/blogs/security/how-to-secure ly-provide-database-credentials-to-lambda-functions - by- using-aws-secrets-manager/ Check out these AWS Secrets Manager and Systems Man ager Cheat Sheets: https://tutorialsdojo.com/aws-secrets-manager/ https://tutorialsdojo.com/aws-systems-manager/ AWS Security Services Overview - Secrets Manager, A CM, Macie: https://www.youtube.com/watch?v=ogVamzF2Dzk", + "references": "" + }, + { + "question": "An advertising company is currently working on a pr oof of concept project that automatically provides SEO analytics for its clients. Your company has a V PC in AWS that operates in a dual-stack mode in which IPv4 and IPv6 communication is allowed. You d eployed the application to an Auto Scaling group of EC2 instances with an Application Load Balancer in fron t that evenly distributes the incoming traffic. You are ready to go live but you need to point your domain name (tut orialsdojo.com) to the Application Load Balancer. In Route 53, which record types will you use to poi nt the DNS name of the Application Load Balancer? ( Select TWO.)", + "options": [ + "A. Alias with a type \"A\" record set", + "B. Non-Alias with a type \"A\" record set", + "C. Alias with a type \"AAAA\" record set .", + "D. 
Alias with a type \"CNAME\" record set" + ], + "correct": "", + "explanation": "Explanation/Reference: The correct answers are: Alias with a type \"AAAA\" r ecord set and Alias with a type \"A\" record set. To route domain traffic to an ELB load balancer, us e Amazon Route 53 to create an alias record that po ints to your load balancer. An alias record is a Route 53 e xtension to DNS. It's similar to a CNAME record, bu t you can create an alias record both for the root domain, su ch as tutorialsdojo.com, and for subdomains, such a s portal.tutorialsdojo.com. (You can create CNAME rec ords only for subdomains.) To enable IPv6 resolution, you would need to create a second resou rce record, tutorialsdojo.com ALIAS AAAA -> myelb.us-west-2.elb.amazonnaws.com, this is assumin g your Elastic Load Balancer has IPv6 support. Non-Alias with a type \"A\" record set is incorrect b ecause you only use Non-Alias with a type \"A\" recor d set for IP addresses. Alias with a type \"CNAME\" record set is incorrect b ecause you can't create a CNAME record at the zone apex. For example, if you register the DNS nam e tutorialsdojo.com, the zone apex is tutorialsdojo.com. Alias with a type of \"MX\" record set is incorrect b ecause an MX record is primarily used for mail serv ers. It includes a priority number and a domain name, fo r example: 10 mailserver.tutorialsdojo.com.", + "references": "https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-to-elb-load-balancer.html https://docs.aws.amazon.com/Route53/latest/Develope rGuide/resource-record-sets-choosing-alias-non- alias.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/" + }, + { + "question": "A Solutions Architect is working for an online hote l booking firm with terabytes of customer data comi ng from the websites and applications. There is an ann ual corporate meeting where the Architect needs to present the booking behavior and acquire new insigh ts from the customers' data. The Architect is looki ng for a service to perform super-fast analytics on ma ssive data sets in near real-time. Which of the following services gives the Architect the ability to store huge amounts of data and perf orm quick and flexible queries on it?", + "options": [ + "A. Amazon DynamoDB", + "B. Amazon RDS", + "C. Amazon Redshift", + "D. Amazon ElastiCache" + ], + "correct": "C. Amazon Redshift", + "explanation": "Explanation/Reference: Amazon Redshift is a fast, scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake. Redshift delivers ten times faster performance tha n other data warehouses by using machine learning, ma ssively parallel query execution, and columnar storage on high-performance disk. You can use Redshift to analyze all your data using standard SQL and your existing Business Intelligen ce (BI) tools. It also allows you to run complex analytic q ueries against terabytes to petabytes of structured and semi- structured data, using sophisticated query optimiza tion, columnar storage on high-performance storage, and massively parallel query execution. Hence, the correct answer is: Amazon Redshift. Amazon DynamoDB is incorrect. DynamoDB is a NoSQL d atabase which is based on key-value pairs used for fast processing of small data that dynamic ally grows and changes. But if you need to scan lar ge amounts of data (ie a lot of keys all in one query) , the performance will not be optimal. 
Amazon ElastiCache is incorrect because this is use d to increase the performance, speed, and redundanc y with which applications can retrieve data by provid ing an in-memory database caching system, and not f or database analytical processes. Amazon RDS is incorrect because this is mainly used for On-Line Transaction Processing (OLTP) applications and not for Online Analytics Processin g (OLAP). References: https://docs.aws.amazon.com/redshift/latest/mgmt/we lcome.html https://docs.aws.amazon.com/redshift/latest/gsg/get ting-started.htm l Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg Check out this Amazon Redshift Cheat Sheet: https://tutorialsdojo.com/amazon-redshift/", + "references": "" + }, + { + "question": "One of your EC2 instances is reporting an unhealthy system status check. The operations team is lookin g for an easier way to monitor and repair these insta nces instead of fixing them manually. How will you automate the monitoring and repair of the system st atus check failure in an AWS environment?", + "options": [ + "A. Write a python script that queries the EC2 API fo r each instance status check", + "B. Write a shell script that periodically shuts down and starts instances based on certain stats.", + "C. implement a third party monitoring tool.", + "D. Create CloudWatch alarms that stop and start the instance based on status check alarms." + ], + "correct": "D. Create CloudWatch alarms that stop and start the instance based on status check alarms.", + "explanation": "Explanation/Reference: Using Amazon CloudWatch alarm actions, you can crea te alarms that automatically stop, terminate, reboo t, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when y ou no longer need an instance to be running. You can u se the reboot and recover actions to automatically reboot those instances or recover them onto new har dware if a system impairment occurs. Writing a python script that queries the EC2 API fo r each instance status check, writing a shell script that periodically shuts down and starts inst ances based on certain stats, and buying and implementing a third party monitoring tool are all incorrect because it is unnecessary to go through s uch lengths when CloudWatch Alarms already has such a f eature for you, offered at a low cost.", + "references": "https://docs.aws.amazon.com/AmazonCloudWatch/latest /monitoring/UsingAlarmActions.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/" + }, + { + "question": "A Solutions Architect needs to set up a bastion hos t in Amazon VPC. It should only be accessed from th e corporate data center via SSH. What is the best way to achieve this?", + "options": [ + "A. Create a small EC2 instance with a security group which only allows access on port 22 via the IP add ress of", + "B. Create a large EC2 instance with a security group which only allows access on port 22 using your own pre-", + "C. Create a small EC2 instance with a security group which only allows access on port 22 using your own pre-", + "D. Create a large EC2 instance with a security group which only allows access on port 22 via the IP add ress of" + ], + "correct": "A. Create a small EC2 instance with a security group which only allows access on port 22 via the IP add ress of", + "explanation": "Explanation/Reference: The best way to implement a bastion host is to crea te a small EC2 instance which should only have a security group from a particular IP address for max imum security. 
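A minimal sketch of that restricted security group (the VPC ID and corporate CIDR range are placeholders): SSH only, and only from the corporate data center's address range.

```python
import boto3

ec2 = boto3.client("ec2")

# Security group for the bastion host.
sg = ec2.create_security_group(GroupName="bastion-ssh",
                               Description="SSH from corporate DC only",
                               VpcId="vpc-0123456789abcdef0")  # placeholder
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",  # placeholder corporate CIDR
                      "Description": "Corporate data center"}],
    }],
)
```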
This will block any SSH Brute Force attacks on your bastion host. It is also recommende d to use a small instance rather than a large one b ecause this host will only act as a jump server to connect to other instances in your VPC and nothing else. Therefore, there is no point of allocating a large instance simply because it doesn't need that much computing power to process SSH (port 22) or RDP (po rt 3389) connections. It is possible to use SSH wit h an ordinary user ID and a pre-configured password as c redentials but it is more secure to use public key pairs for SSH authentication for better security. Hence, the right answer for this scenario is the op tion that says: Create a small EC2 instance with a security group which only allows access on port 22 via the IP address of the corporate data center. Use a private key (.pem) file to connect to the bas tion host. Creating a large EC2 instance with a security group which only allows access on port 22 using your own pre-configured password and creating a small EC 2 instance with a security group which only allows access on port 22 using your own pre-configu red password are incorrect. Even though you have your own pre-configured password, the SSH connection can still be accessed by anyone over the Internet, which poses as a secur ity vulnerability. The option that says: Create a large EC2 instance w ith a security group which only allows access on port 22 via the IP address of the corporate data ce nter. Use a private key (.pem) file to connect to t he bastion host is incorrect because you don't need a large in stance for a bastion host as it does not require mu ch CPU resources. References: https://docs.aws.amazon.com/quickstart/latest/linux -bastion/architecture.html https://aws.amazon.com/blogs/security/how-to-record -ssh-sessions-established-through-a-bastion-host/ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company has a cryptocurrency exchange portal that is hosted in an Auto Scaling group of EC2 instance s behind an Application Load Balancer and is deployed across multiple AWS regions. The users can be found all around the globe, but the majority are fr om Japan and Sweden. Because of the compliance requirements in these two locations, you want the J apanese users to connect to the servers in the ap- northeast-1 Asia Pacific (Tokyo) region, while the Swedish users should be connected to the servers in the eu- west-1 EU (Ireland) region. Which of the following services would allow you to easily fulfill this requirement?", + "options": [ + "A. Use Route 53 Weighted Routing policy.", + "B. Use Route 53 Geolocation Routing policy.", + "C. Set up a new CloudFront web distribution with the geo-restriction feature enabled.", + "D. Set up an Application Load Balancers that will au tomatically route the traffic to the proper AWS reg ion." + ], + "correct": "B. Use Route 53 Geolocation Routing policy.", + "explanation": "Explanation/Reference: Geolocation routing lets you choose the resources t hat serve your traffic based on the geographic loca tion of your users, meaning the location that DNS querie s originate from. For example, you might want all queries from Europe to be routed to an ELB load bal ancer in the Frankfurt region. When you use geolocation routing, you can localize your content and present some or all of your websit e in the language of your users. 
You can also use geoloc ation routing to restrict distribution of content t o only the locations in which you have distribution rights. An other possible use is for balancing load across end points in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoin t. Setting up an Application Load Balancers that will automatically route the traffic to the proper AWS region is incorrect because Elastic Load Balancers distribute traffic among EC2 instances across multi ple Availability Zones but not across AWS regions. Setting up a new CloudFront web distribution with t he geo-restriction feature enabled is incorrect because the CloudFront geo-restriction feature is p rimarily used to prevent users in specific geograph ic locations from accessing content that you're distri buting through a CloudFront web distribution. It do es not let you choose the resources that serve your traffic ba sed on the geographic location of your users, unlik e the Geolocation routing policy in Route 53. Using Route 53 Weighted Routing policy is incorrect because this is not a suitable solution to meet th e requirements of this scenario. It just lets you ass ociate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (forums.tutor ialsdojo.com) and choose how much traffic is routed to each resource. You have to use a Geolocation routin g policy instead. References: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-policy.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/geolocation-routing-policy Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/ Latency Routing vs Geoproximity Routing vs Geolocat ion Routing: https://tutorialsdojo.com/latency-routing-vs-geopro ximity-routing-vs-geolocation-routing/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "An Intelligence Agency developed a missile tracking application that is hosted on both development and production AWS accounts. The Intelligence agency's junior developer only has access to the development account. She has received security clearance to acc ess the agency's production account but the access is only temporary and only write access to EC2 and S3 is al lowed. Which of the following allows you to issue short-li ved access tokens that act as temporary security credentials to allow access to your AWS resources?", + "options": [ + "A. All of the given options are correct.", + "B. Use AWS STS", + "C. Use AWS SSO", + "D. Use AWS Cognito to issue JSON Web Tokens (JWT)" + ], + "correct": "B. Use AWS STS", + "explanation": "Explanation/Reference: AWS Security Token Service (AWS STS) is the service that you can use to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM use rs can use. In this diagram, IAM user Alice in the Dev account (the role-assuming account) needs to access the Pro d account (the role-owning account). Here's how it wo rks: Alice in the Dev account assumes an IAM role (Write Access) in the Prod account by calling AssumeRole. STS returns a set of temporary security credentials . Alice uses the temporary security credentials to ac cess services and resources in the Prod account. 
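The Dev-to-Prod flow can be sketched with boto3 (the role ARN, session name, and bucket are placeholders): the short-lived credentials returned by STS are then used to call services in the production account.

```python
import boto3

sts = boto3.client("sts")

# Alice (Dev account) assumes the WriteAccess role in the Prod account.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/WriteAccess",  # placeholder
    RoleSessionName="alice-temporary-access",
    DurationSeconds=3600,  # short-lived token
)
creds = resp["Credentials"]

# Use the temporary credentials to access resources in the Prod account.
s3_prod = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3_prod.put_object(Bucket="prod-example-bucket", Key="test.txt", Body=b"hello")
```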
Al ice could, for example, make calls to Amazon S3 and Ama zon EC2, which are granted by the WriteAccess role.Using AWS Cognito to issue JSON Web Tokens (JWT) is incorrect because the Amazon Cognito service is primarily used for user authentication a nd not for providing access to your AWS resources. A JSON Web Token (JWT) is meant to be used for user authen tication and session management. Using AWS SSO is incorrect. Although the AWS SSO se rvice uses STS, it does not issue short-lived credentials by itself. AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business ap plications. The option that says All of the above is incorrect as only STS has the ability to provide temporary se curity credentials.", + "references": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id _credentials_temp.html AWS Identity Services Overview: https://www.youtube.com/watch?v=AIdUw0i8rr0 Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/" + }, + { + "question": "A digital media company shares static content to it s premium users around the world and also to their partners who syndicate their media files. The company is loo king for ways to reduce its server costs and secure ly deliver their data to their customers globally with low lat ency. Which combination of services should be used to pro vide the MOST suitable and cost-effective architect ure? (Select TWO.)", + "options": [ + "A. Amazon S3", + "B. AWS Global Accelerator", + "C. AWS Lambda", + "D. Amazon CloudFront" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon CloudFront is a fast content delivery networ k (CDN) service that securely delivers data, videos , applications, and APIs to customers globally with l ow latency, high transfer speeds, all within a deve loper- friendly environment. CloudFront is integrated with AWS both physical lo cations that are directly connected to the AWS glob al infrastructure, as well as other AWS services. Clou dFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code cl oser to customers' users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don't pay for any data transferred between thes e services and CloudFront. Amazon S3 is object storage built to store and retr ieve any amount of data from anywhere on the Intern et. It's a simple storage service that offers an extremely dur able, highly available, and infinitely scalable dat a storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are se parate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (su ch as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good f it for non-HTTP use cases, such as gaming (UDP), IoT (MQTT ), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both ser vices integrate with AWS Shield for DDoS protection. 
Hence, the correct options are Amazon CloudFront an d Amazon S3. AWS Fargate is incorrect because this service is ju st a serverless compute engine for containers that work with both Amazon Elastic Container Service (ECS) and Ama zon Elastic Kubernetes Service (EKS). Although this service is more cost-effective than i ts server-based counterpart, Amazon S3 still costs way less than Fargate, especially for storing static content . AWS Lambda is incorrect because this simply lets yo u run your code serverless, without provisioning or managing servers. Although this is also a cost-effe ctive service since you have to pay only for the co mpute time you consume, you can't use this to store static con tent or as a Content Delivery Network (CDN). A bett er combination is Amazon CloudFront and Amazon S3. AWS Global Accelerator is incorrect because this se rvice is more suitable for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Moreover, there is no direct way that yo u can integrate AWS Global Accelerator with Amazon S3. It 's more suitable to use Amazon CloudFront instead in this scenario. References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/cloudfront-serve-static-website/ https://aws.amazon.com/blogs/networking-and-content -delivery/amazon-s3-amazon-cloudfront-a-match-made- in-the-cloud/ https://aws.amazon.com/global-accelerator/faqs/", + "references": "" + }, + { + "question": "A Solutions Architect is building a cloud infrastru cture where EC2 instances require access to various AWS services such as S3 and Redshift. The Architect wil l also need to provide access to system administrators so they can deploy and test their ch anges. Which configuration should be used to ensure that t he access to the resources is secured and not compromised? (Select TWO.)", + "options": [ + "A. Store the AWS Access Keys in ACM.", + "B. Store the AWS Access Keys in the EC2 instance.", + "C. Enable Multi-Factor Authentication.", + "D. Assign an IAM role to the Amazon EC2 instance." + ], + "correct": "", + "explanation": "Explanation/Reference: In this scenario, the correct answers are: - Enable Multi-Factor Authentication - Assign an IAM role to the Amazon EC2 instance Always remember that you should associate IAM roles to EC2 instances and not an IAM user, for the purpose of accessing other AWS services. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the ap plications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make AP I requests using IAM roles. AWS Multi-Factor Authentication (MFA) is a simple b est practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password ( the first factor--what they know), as well as for a n authentication code from their AWS MFA device (the second factor--what they have). Taken together, these multiple factors provide increased security f or your AWS account settings and resources. You can enable MFA for your AWS account and for individual IAM use rs you have created under your account. MFA can also be used to control access to AWS servi ce APIs. Storing the AWS Access Keys in the EC2 instance is incorrect. 
This is not recommended by AWS as it can be compromised. Instead of storing access keys on an EC2 instance for use by applications that run on the instance and make AWS API requests, you can use an IAM role to provide temporary access keys for these applications. Assigning an IAM user for each Amazon EC2 Instance is incorrect because there is no need to create an IAM user for this scenario since IAM roles already provide greater flexibility and easier management. Storing the AWS Access Keys in ACM is incorrect bec ause ACM is just a service that lets you easily provision, manage, and deploy public and private SS L/TLS certificates for use with AWS services and yo ur internal connected resources. It is not used as a s ecure storage for your access keys. References: https://aws.amazon.com/iam/details/mfa/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /iam-roles-for-amazon-ec2.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "A company plans to migrate all of their application s to AWS. The Solutions Architect suggested to stor e all the data to EBS volumes. The Chief Technical Office r is worried that EBS volumes are not appropriate f or the existing workloads due to compliance requiremen ts, downtime scenarios, and IOPS performance. Which of the following are valid points in proving that EBS is the best service to use for migration? (Select TWO.)", + "options": [ + "A. EBS volumes can be attached to any EC2 Instance i n any Availability Zone.", + "B. When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS", + "C. An EBS volume is off-instance storage that can pe rsist independently from the life of an instance.", + "D. EBS volumes support live configuration changes wh ile in production which means that you can modify t he" + ], + "correct": "", + "explanation": "Explanation/Reference: An Amazon EBS volume is a durable, block-level stor age device that you can attach to a single EC2 instance. You can use EBS volumes as primary storag e for data that requires frequent updates, such as the system drive for an instance or storage for a datab ase application. You can also use them for throughp ut- intensive applications that perform continuous disk scans. EBS volumes persist independently from the running life of an EC2 instance. Here is a list of important information about EBS V olumes: - When you create an EBS volume in an Availability Zone, it is automatically replicated within that zo ne to prevent data loss due to a failure of any single ha rdware component. - An EBS volume can only be attached to one EC2 ins tance at a time. - After you create a volume, you can attach it to a ny EC2 instance in the same Availability Zone - An EBS volume is off-instance storage that can pe rsist independently from the life of an instance. Y ou can specify not to terminate the EBS volume when yo u terminate the EC2 instance during instance creati on. - EBS volumes support live configuration changes wh ile in production which means that you can modify the volume type, volume size, and IOPS capacity wit hout service interruptions. - Amazon EBS encryption uses 256-bit Advanced Encry ption Standard algorithms (AES-256) - EBS Volumes offer 99.999% SLA. 
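The live-configuration-change point in the list above can be sketched as follows (the volume ID and target values are placeholders): the volume stays attached and in use while the modification is applied.

```python
import boto3

ec2 = boto3.client("ec2")
VOLUME_ID = "vol-0123456789abcdef0"  # placeholder

# Change type, size, and IOPS without detaching or stopping the instance.
ec2.modify_volume(VolumeId=VOLUME_ID, VolumeType="gp3", Size=200, Iops=6000)

# Track the modification until it reaches the 'optimizing'/'completed' state.
state = ec2.describe_volumes_modifications(VolumeIds=[VOLUME_ID])
print(state["VolumesModifications"][0]["ModificationState"])
```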
The option that says: When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS region to prevent data loss due to a failure of any single hardware component is incorrect because when you create an E BS volume in an Availability Zone, it is automatically replicated within that zone only, and not on a separate AWS region, to prevent data loss due to a failure of any single hardware component. The option that says: EBS volumes can be attached t o any EC2 Instance in any Availability Zone is incorrect as EBS volumes can only be attached to an EC2 instance in the same Availability Zone. The option that says: Amazon EBS provides the abili ty to create snapshots (backups) of any EBS volume and write a copy of the data in the volume t o Amazon RDS, where it is stored redundantly in multiple Availability Zones is almost correct. But instead of storing the volume to Amazon RDS, the EBS Volume snapshots are actually sent to Amazon S3. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ EBSVolumes.html https://aws.amazon.com/ebs/features/ Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Here is a short video tutorial on EBS: https://youtu.be/ljYH5lHQdxo", + "references": "" + }, + { + "question": "A company needs to assess and audit all the configu rations in their AWS account. It must enforce stric t compliance by tracking all configuration changes ma de to any of its Amazon S3 buckets. Publicly accessible S3 buckets should also be identified aut omatically to avoid data breaches. Which of the following options will meet this requi rement?", + "options": [ + "A. Use AWS CloudTrail and review the event history o f your AWS account.", + "B. Use AWS Trusted Advisor to analyze your AWS envir onment.", + "C. Use AWS IAM to generate a credential report.", + "D. Use AWS Config to set up a rule in your AWS accou nt." + ], + "correct": "D. Use AWS Config to set up a rule in your AWS accou nt.", + "explanation": "Explanation/Reference: AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you ca n review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify c ompliance auditing, security analysis, change management, and operational troubleshooting. You can use AWS Config to evaluate the configuratio n settings of your AWS resources. By creating an AWS Config rule, you can enforce your ideal configu ration in your AWS account. It also checks if the applied configuration in your resources violates an y of the conditions in your rules. The AWS Config dashboard shows the compliance status of your rules and resources. You can verify if your resources comply with your desired configurations and learn w hich specific resources are noncompliant. Hence, the correct answer is: Use AWS Config to set up a rule in your AWS account. The option that says: Use AWS Trusted Advisor to an alyze your AWS environment is incorrect because AWS Trusted Advisor only provides best practice rec ommendations. It cannot define rules for your AWS resources. 
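To illustrate the AWS Config approach described above, here is a minimal boto3 sketch that enables an AWS managed rule flagging publicly readable S3 buckets; the rule name is arbitrary, and it assumes a configuration recorder and delivery channel are already set up:

import boto3

config = boto3.client("config")

# Enable an AWS managed rule that marks S3 buckets allowing public read access as noncompliant.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-public-read-prohibited",
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
    }
)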
The option that says: Use AWS IAM to generate a cre dential report is incorrect because this report wil l not help you evaluate resources. The IAM credential report i s just a list of all IAM users in your AWS account. The option that says: Use AWS CloudTrail and review the event history of your AWS account is incorrect. Although it can track changes and store a history of what happened to your resources, this service still cannot enforce rules to comply with your orga nization's policies. References: https://aws.amazon.com/config/ https://docs.aws.amazon.com/config/latest/developer guide/evaluate-config.html Check out this AWS Config Cheat Sheet: https://tutorialsdojo.com/aws-config/", + "references": "" + }, + { + "question": "A Data Engineer is working for a litigation firm fo r their case history application. The engineer need s to keep track of all the cases that the firm has handled. T he static assets like .jpg, .png, and .pdf files ar e stored in S3 for cost efficiency and high durability. As these f iles are critical to the business, the engineer wan ts to keep track of what's happening in the S3 bucket. The eng ineer found out that S3 has an event notification w henever a delete or write operation happens within the S3 b ucket. What are the possible Event Notification destinatio ns available for S3 buckets? (Select TWO.)", + "options": [ + "A. SQS", + "B. SWF", + "C. SES", + "D. Lambda function" + ], + "correct": "", + "explanation": "Explanation/Reference: The Amazon S3 notification feature enables you to r eceive notifications when certain events happen in your bucket. To enable notifications, you must firs t add a notification configuration identifying the events you want Amazon S3 to publish, and the destinations whe re you want Amazon S3 to send the event notifications. Amazon S3 supports the following destinations where it can publish events: Amazon Simple Notification Service (Amazon SNS) topic - A web service that coo rdinates and manages the delivery or sending of messages to subscribing endpoints or clients. Amazon Simple Queue Service (Amazon SQS) queue - Of fers reliable and scalable hosted queues for storing messages as they travel between computer. AWS Lambda - AWS Lambda is a compute service where you can upload your code and the service can run the code on your behalf using the AWS infrastru cture. You package up and upload your custom code t o AWS Lambda when you create a Lambda function Kinesis is incorrect because this is used to collec t, process, and analyze real-time, streaming data s o you can get timely insights and react quickly to new inform ation, and not used for event notifications. You ha ve to use SNS, SQS or Lambda. SES is incorrect because this is mainly used for se nding emails designed to help digital marketers and application developers send marketing, notification , and transactional emails, and not for sending eve nt notifications from S3. You have to use SNS, SQS or Lambda. SWF is incorrect because this is mainly used to bui ld applications that use Amazon's cloud to coordina te work across distributed components and not used as a way to trigger event notifications from S3. You have t o use SNS, SQS or Lambda. Here's what you need to do in order to start using this new feature with your application: Create the queue, topic, or Lambda function (which I'll call the target for brevity) if necessary. Grant S3 permission to publish to the target or inv oke the Lambda function. 
For SNS or SQS, you do thi s by applying an appropriate policy to the topic or the queue. For Lambda, you must create and supply an IA M role, then associate it with the Lambda function. Arrange for your application to be invoked in respo nse to activity on the target. As you will see in a moment, you have several options here. Set the bucket's Notification Configuration to poin t to the target.", + "references": "https://docs.aws.amazon.com/AmazonS3/latest/dev/Not ificationHowTo.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/" + }, + { + "question": "A company is building an internal application that serves as a repository for images uploaded by a cou ple of users. Whenever a user uploads an image, it would b e sent to Kinesis Data Streams for processing befor e it is stored in an S3 bucket. If the upload was successfu l, the application will return a prompt informing t he user that the operation was successful. The entire processing typically takes about 5 minutes to finish. Which of the following options will allow you to as ynchronously process the request to the application from upload request to Kinesis, S3, and return a reply i n the most cost-effective manner?", + "options": [ + "A. Replace the Kinesis Data Streams with an Amazon S QS queue. Create a Lambda function that will", + "B. Use a combination of SQS to queue the requests an d then asynchronously process them using On-", + "C. Use a combination of Lambda and Step Functions to orchestrate service components and asynchronously", + "D. Use a combination of SNS to buffer the requests a nd then asynchronously process them using On-Demand" + ], + "correct": "A. Replace the Kinesis Data Streams with an Amazon S QS queue. Create a Lambda function that will", + "explanation": "Explanation/Reference: AWS Lambda supports the synchronous and asynchronou s invocation of a Lambda function. You can control the invocation type only when you invoke a Lambda function. When you use an AWS service as a trigger, the invocation type is predetermined for e ach service. You have no control over the invocatio n type that these event sources use when they invoke your Lambda function. Since processing only takes 5 minutes, Lambda is also a cost-effective choice. You can use an AWS Lambda function to process messa ges in an Amazon Simple Queue Service (Amazon SQS) queue. Lambda event source mappings support st andard queues and first-in, first-out (FIFO) queues . With Amazon SQS, you can offload tasks from one com ponent of your application by sending them to a queue and processing them asynchronously. Kinesis Data Streams is a real-time data streaming service that requires the provisioning of shards. A mazon SQS is a cheaper option because you only pay for wh at you use. Since there is no requirement for real- time processing in the scenario given, replacing Kinesis Data Streams with Amazon SQS would save more costs . Hence, the correct answer is: Replace the Kinesis s tream with an Amazon SQS queue. Create a Lambda function that will asynchronously process th e requests. Using a combination of Lambda and Step Functions to orchestrate service components and asynchronously process the requests is incorrect. T he AWS Step Functions service lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. 
Although thi s can be a valid solution, it is not cost-effective since the application does not have a lot of components to orchestrate. Lambda functions can effectively meet the requirements in this scenario without using Ste p Functions. This service is not as cost-effective as Lambda. Using a combination of SQS to queue the requests an d then asynchronously processing them using On-Demand EC2 Instances and Using a combination of SNS to buffer the requests and then asynchronously processing them using On-Demand EC2 Instances are both incorrect as using On- Demand EC2 instances is not cost-effective. It is b etter to use a Lambda function instead. References: https://docs.aws.amazon.com/lambda/latest/dg/welcom e.html https://docs.aws.amazon.com/lambda/latest/dg/lambda -invocation.html https://aws.amazon.com/blogs/compute/new-aws-lambda -controls-for-stream-processing-and- asynchronous-invocations/ AWS Lambda Overview - Serverless Computing in AWS: https://www.youtube.com/watch?v=bPVX1zHwAnY", + "references": "" + }, + { + "question": "A media company hosts large volumes of archive data that are about 250 TB in size on their internal servers. They have decided to move these data to S3 because of its durability and redundancy. The company currently has a 100 Mbps dedicated line con necting their head office to the Internet. Which of the following is the FASTEST and the MOST cost-effective way to import all these data to Amazon S3?", + "options": [ + "A. Upload it directly to S3", + "B. Use AWS Snowmobile to transfer the data over to S 3.", + "C. Establish an AWS Direct Connect connection then t ransfer the data over to S3.", + "D. Order multiple AWS Snowball devices to upload the files to Amazon S3." + ], + "correct": "D. Order multiple AWS Snowball devices to upload the files to Amazon S3.", + "explanation": "Explanation/Reference: AWS Snowball is a petabyte-scale data transport sol ution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Usin g Snowball addresses common challenges with large- scale data transfers including high network costs, long transfer times, and security concerns. Transfe rring data with Snowball is simple, fast, secure, and can be a s little as one-fifth the cost of high-speed Intern et. Snowball is a strong choice for data transfer if yo u need to more securely and quickly transfer teraby tes to many petabytes of data to AWS. Snowball can also be the right choice if you don't want to make expensi ve upgrades to your network infrastructure, if you fre quently experience large backlogs of data, if you'r e located in a physically isolated environment, or if you're in an area where high-speed Internet connections are n ot available or cost-prohibitive. As a rule of thumb, if it takes more than one week to upload your data to AWS using the spare capacity of your existing Internet connection, then you should consi der using Snowball. For example, if you have a 100 Mb connection that you can solely dedicate to transfer ring your data and need to transfer 100 TB of data, it takes more than 100 days to complete data transfer over t hat connection. You can make the same transfer by u sing multiple Snowballs in about a week. Hence, ordering multiple AWS Snowball devices to up load the files to Amazon S3 is the correct answer. Uploading it directly to S3 is incorrect since this would take too long to finish due to the slow Inte rnet connection of the company. 
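As a rough sanity check on that claim, the transfer time for 250 TB over the company's 100 Mbps line can be estimated with a few lines of Python (this ignores protocol overhead, so the real figure would be even longer):

data_bits = 250 * 10**12 * 8          # 250 TB expressed in bits
line_rate = 100 * 10**6               # 100 Mbps dedicated line, in bits per second

seconds = data_bits / line_rate
print(seconds / 86400)                # roughly 231 days, far beyond a practical upload window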
Establishing an AWS Direct Connect connection then transferring the data over to S3 is incorrect since provisioning a line for Direct Connect would take too much time and might not give you the fastest data transfer solution. In addition, the scenario didn't warrant the establishment of a dedicated connection from your on-premises data center to AWS. The primary goal is to just do a one-time migration of data to AWS, which can be accomplished by using AWS Snowball devices. Using AWS Snowmobile to transfer the data over to S3 is incorrect because Snowmobile is more suitable if you need to move extremely large amounts of data to AWS or need to transfer up to 100PB of data. This will be transported on a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. Take note that you only need to migrate 250 TB of data; hence, this is not the most suitable and cost-effective solution. References: https://aws.amazon.com/snowball/ https://aws.amazon.com/snowball/faqs/ S3 Transfer Acceleration vs Direct Connect vs VPN vs Snowball vs Snowmobile: https://tutorialsdojo.com/s3-transfer-acceleration-vs-direct-connect-vs-vpn-vs-snowball-vs-snowmobile/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", "references": "" }, { "question": "A company is working with a government agency to improve traffic planning and maintenance of roadways to prevent accidents. The proposed solution is to manage the traffic infrastructure in real-time, alert traffic engineers and emergency response teams when problems are detected, and automatically change traffic signals to get emergency personnel to accident scenes faster by using sensors and smart devices. Which AWS service will allow the developers of the agency to connect the smart devices to the cloud-based applications?", "options": [ "A. AWS Elastic Beanstalk", "B. AWS CloudFormation", "C. Amazon Elastic Container Service", "D. AWS IoT Core" ], "correct": "D. AWS IoT Core", "explanation": "Explanation/Reference: AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT Core provides secure communication and data processing across different kinds of connected devices and locations so you can easily build IoT applications. AWS IoT Core allows you to connect multiple devices to the cloud and to other devices without requiring you to deploy or manage any servers. You can also filter, transform, and act upon device data on the fly based on the rules you define. With AWS IoT Core, your applications can keep track of and communicate with all of your devices, all the time, even when they aren't connected. Hence, the correct answer is: AWS IoT Core. AWS CloudFormation is incorrect because this is mainly used for creating and managing the architecture and not for handling connected devices. You have to use AWS IoT Core instead. AWS Elastic Beanstalk is incorrect because this is just an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, and other programming languages. Elastic Beanstalk can't be used to connect smart devices to cloud-based applications. Amazon Elastic Container Service is incorrect because this is mainly used for creating and managing Docker containers and not for handling devices.
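As a small, hypothetical illustration of how a smart device or edge application could hand sensor data to AWS IoT Core, consider this boto3 sketch; the topic name and payload are made up for the example, and production devices would more typically use the IoT device SDK over MQTT with X.509 certificates:

import boto3, json

iot_data = boto3.client("iot-data", region_name="us-east-1")

# Publish a traffic-sensor reading to an MQTT topic that IoT rules and downstream applications can act on.
iot_data.publish(
    topic="traffic/intersections/42/status",
    qos=1,
    payload=json.dumps({"congestion": "high", "incident_detected": True}),
)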
References: https://aws.amazon.com/iot-core/ https://aws.amazon.com/iot/", + "references": "" + }, + { + "question": "A commercial bank has a forex trading application. They created an Auto Scaling group of EC2 instances that allow the bank to cope with the current traffi c and achieve cost-efficiency. They want the Auto S caling group to behave in such a way that it will follow a predefined set of parameters before it scales down the number of EC2 instances, which protects the system from unintended slowdown or unavailability. Which of the following statements are true regardin g the cooldown period? (Select TWO.)", + "options": [ + "A. Its default value is 300 seconds.", + "B. It ensures that the Auto Scaling group does not l aunch or terminate additional EC2 instances before the", + "C. It ensures that the Auto Scaling group launches o r terminates additional EC2 instances without any", + "D. Its default value is 600 seconds." + ], + "correct": "", + "explanation": "Explanation/Reference: In Auto Scaling, the following statements are corre ct regarding the cooldown period: It ensures that the Auto Scaling group does not lau nch or terminate additional EC2 instances before th e previous scaling activity takes effect. Its default value is 300 seconds. It is a configurable setting for your Auto Scaling group. The following options are incorrect: - It ensures that before the Auto Scaling group sca les out, the EC2 instances have ample time to cooldown. - It ensures that the Auto Scaling group launches o r terminates additional EC2 instances without any downtime. - Its default value is 600 seconds. These statements are inaccurate and don't depict wh at the word \"cooldown\" actually means for Auto Scaling. The cooldown period is a configurable sett ing for your Auto Scaling group that helps to ensur e that it doesn't launch or terminate additional instances be fore the previous scaling activity takes effect. Af ter the Auto Scaling group dynamically scales using a simple sca ling policy, it waits for the cooldown period to co mplete before resuming scaling activities. The figure below demonstrates the scaling cooldown:Reference: http://docs.aws.amazon.com/autoscaling/latest/userg uide/as-instance-termination.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "An organization needs to control the access for sev eral S3 buckets. They plan to use a gateway endpoin t to allow access to trusted buckets. Which of the following could help you achieve this requirement?", + "options": [ + "A. Generate an endpoint policy for trusted S3 bucket s.", + "B. Generate a bucket policy for trusted VPCs.", + "C. Generate an endpoint policy for trusted VPCs.", + "D. Generate a bucket policy for trusted S3 buckets." + ], + "correct": "A. Generate an endpoint policy for trusted S3 bucket s.", + "explanation": "Explanation A VPC endpoint enables you to privately connect you r VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink withou t requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Insta nces in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. 
When you create a VPC endpoint, you can attach an e ndpoint policy that controls access to the service to which you are connecting. You can modify the endpoi nt policy attached to your endpoint and add or remove the route tables used by the endpoint. An en dpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket pol icies). It is a separate policy for controlling acc ess from the endpoint to the specified service. We can use a bucket policy or an endpoint policy to allow the traffic to trusted S3 buckets. The optio ns that have 'trusted S3 buckets' key phrases will be the p ossible answer in this scenario. It would take you a lot of time to configure a bucket policy for each S3 bucket ins tead of using a single endpoint policy. Therefore, you should use an endpoint policy to control the traffic to th e trusted Amazon S3 buckets. Hence, the correct answer is: Generate an endpoint policy for trusted S3 buckets. The option that says: Generate a bucket policy for trusted S3 buckets is incorrect. Although this is a valid solution, it takes a lot of time to set up a bucket policy for each and every S3 bucket. This can simp ly be accomplished by creating an S3 endpoint policy. The option that says: Generate a bucket policy for trusted VPCs is incorrect because you are generatin g a policy for trusted VPCs. Remember that the scenario only requires you to allow the traffic for trusted S3 buckets, and not to the VPCs. The option that says: Generate an endpoint policy f or trusted VPCs is incorrect because it only allows access to trusted VPCs, and not to trusted Amazon S3 buckets References: https://docs.aws.amazon.com/vpc/latest/userguide/vp c-endpoints-s3.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/connect-s3-vpc-endpoint/ Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company has an enterprise web application hosted on Amazon ECS Docker containers that use an Amazon FSx for Lustre filesystem for its high-performance computing workloads. A warm standby environment is running in another AWS region for disaster recovery . A Solutions Architect was assigned to design a s ystem that will automatically route the live traffic to t he disaster recovery (DR) environment only in the event that the primary application stack exp eriences an outage. What should the Architect do to satisfy this requir ement?", + "options": [ + "A. Set up a CloudWatch Events rule to monitor the pr imary Route 53 DNS endpoint and create a custom", + "B. Set up a Weighted routing policy configuration in Route 53 by adding health checks on both the prima ry", + "C. Set up a failover routing policy configuration in Route 53 by adding a health check on the primary s ervice", + "D. Set up a CloudWatch Alarm to monitor the primary Route 53 DNS endpoint and create a custom Lambda" + ], + "correct": "C. Set up a failover routing policy configuration in Route 53 by adding a health check on the primary s ervice", + "explanation": "Explanation/Reference: Use an active-passive failover configuration when y ou want a primary resource or group of resources to be available majority of the time and you want a secon dary resource or group of resources to be on standb y in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the healthy primary resources. 
If all the primary resou rces are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS quer ies. To create an active-passive failover configuration with one primary record and one secondary record, y ou just create the records and specify Failover for the rou ting policy. When the primary resource is healthy, Route 53 responds to DNS queries using the primary record. W hen the primary resource is unhealthy, Route 53 responds to DNS queries using the secondar y record. You can configure a health check that monitors an e ndpoint that you specify either by IP address or by domain name. At regular intervals that you specify, Route 53 submits automated requests over the Internet to your application, server, or other resource to verify th at it's reachable, available, and functional. Optio nally, you can configure the health check to make requests similar to those that your users make, such as requesting a web page from a specific URL. When Route 53 checks the health of an endpoint, it sends an HTTP, HTTPS, or TCP request to the IP address and port that you specified when you create d the health check. For a health check to succeed, your router and firewall rules must allow inbound traffi c from the IP addresses that the Route 53 health ch eckers use. Hence, the correct answer is: Set up a failover rou ting policy configuration in Route 53 by adding a health check on the primary service endpoint. Confi gure Route 53 to direct the DNS queries to the secondary record when the primary resource is unhea lthy. Configure the network access control list and the route table to allow Route 53 to send reque sts to the endpoints specified in the health checks . Enable the Evaluate Target Health option by setting it to Yes. The option that says: Set up a Weighted routing pol icy configuration in Route 53 by adding health checks on both the primary stack and the DR environ ment. Configure the network access control list and the route table to allow Route 53 to send reque sts to the endpoints specified in the health checks . Enable the Evaluate Target Health option by setting it to Yes is incorrect because Weighted routing simply lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomai n name (blog.tutorialsdojo.com) and choose how much traffi c is routed to each resource. This can be useful fo r a variety of purposes, including load balancing and t esting new versions of software, but not for a fail over configuration. Remember that the scenario says that the solution should automatically route the live t raffic to the disaster recovery (DR) environment only in the event that the primary application stack experi ences an outage. This configuration is incorrectly distributing the traffic on both the primary and DR environment. The option that says: Set up a CloudWatch Alarm to monitor the primary Route 53 DNS endpoint and create a custom Lambda function. Execute the Change ResourceRecordSets API call using the function to initiate the failover to the secondary DNS record is incorrect because setting up a CloudWatch Alarm and using the Route 53 API is not applicable nor useful at all in this scenario. Remember that CloudWatch Alam is primarily used for monitoring CloudWatch metrics. You have to use a Failover routing policy instead. The option that says: Set up a CloudWatch Events ru le to monitor the primary Route 53 DNS endpoint and create a custom Lambda function. 
Execu te theChangeResourceRecordSets API call using the function to initiate the failover to the secondary DNS record is incorrect because the Amazo n CloudWatch Events service is commonly used to deliv er a near real-time stream of system events that describe changes in some Amazon Web Services (AWS) resources. There is no direct way for CloudWatch Events to monitor the status of your Route 53 endpo ints. You have to configure a health check and a failover configuration in Route 53 instead to satis fy the requirement in this scenario. References: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/dns-failover-types.html https://docs.aws.amazon.com/Route53/latest/Develope rGuide/health-checks-types.html https://docs.aws.amazon.com/Route53/latest/Develope rGuide/dns-failover-router-firewall-rules.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a company that uses Chef Configuration management in their data center. She needs to leverage their existing Chef r ecipes in AWS. Which of the following services should she use?", + "options": [ + "A. A. AWS CloudFormation", + "B. B. AWS OpsWorks", + "C. C. Amazon Simple Workflow Service", + "D. D. AWS Elastic Beanstalk" + ], + "correct": "B. B. AWS OpsWorks", + "explanation": "Explanation AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms th at allow you to use code to automate the configurations of your servers. OpsWorks lets you u se Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazo n EC2 instances or on-premises compute environments. OpsWorks has three offerings - AWS Op sworks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks. Amazon Simple Workflow Service is incorrect because AWS SWF is a fully-managed state tracker and task coordinator in the Cloud. It does not let you leverage Chef recipes. AWS Elastic Beanstalk is incorrect because this han dles an application's deployment details of capacit y provisioning, load balancing, auto-scaling, and app lication health monitoring. It does not let you lev erage Chef recipes just like Amazon SWF. AWS CloudFormation is incorrect because this is a s ervice that lets you create a collection of related AWS resources and provision them in a predictable fashi on using infrastructure as code. It does not let yo u leverage Chef recipes just like Amazon SWF and AWS Elastic B eanstalk.", + "references": "https://aws.amazon.com/opsworks/ Check out this AWS OpsWorks Cheat Sheet: https://tutorialsdojo.com/aws-opsworks/ Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-clou dformation-vs-opsworks-vs-codedeploy/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" + }, + { + "question": "An organization is currently using a tape backup so lution to store its application data on-premises. T hey plan to use a cloud storage service to preserve the backup data for up to 10 years that may be accessed about once or twice a year. Which of the following is the most cost-effective o ption to implement this solution?", + "options": [ + "A. Use AWS Storage Gateway to backup the data direct ly to Amazon S3 Glacier Deep Archive.", + "B. 
Order an AWS Snowball Edge appliance to import th e backup directly to Amazon S3 Glacier.", + "C. Use AWS Storage Gateway to backup the data direct ly to Amazon S3 Glacier.", + "D. Use Amazon S3 to store the backup data and add a lifecycle rule to transition the current version to Amazon" + ], + "correct": "A. Use AWS Storage Gateway to backup the data direct ly to Amazon S3 Glacier Deep Archive.", + "explanation": "Explanation/Reference: Tape Gateway enables you to replace using physical tapes on-premises with virtual tapes in AWS without changing existing backup workflows. Tape Gateway su pports all leading backup applications and caches virtual tapes on-premises for low-latency data acce ss. Tape Gateway encrypts data between the gateway and AWS for secure data transfer and compresses dat a and transitions virtual tapes between Amazon S3 and Amazon S3 Glacier, or Amazon S3 Glacier Deep Ar chive, to minimize storage costs. The scenario requires you to backup your applicatio n data to a cloud storage service for long-term ret ention of data that will be retained for 10 years. Since i t uses a tape backup solution, an option that uses AWS Storage Gateway must be the possible answer. Tape G ateway can move your virtual tapes archived in Amazon S3 Glacier or Amazon S3 Glacier Deep Archive storage class, enabling you to further reduce the monthly cost to store long-term data in the cloud b y up to 75%. Hence, the correct answer is: Use AWS Storage Gatew ay to backup the data directly to Amazon S3 Glacier Deep Archive. The option that says: Use AWS Storage Gateway to ba ckup the data directly to Amazon S3 Glacier is incorrect. Although this is a valid solution, movin g to S3 Glacier is more expensive than directly bac king it up to Glacier Deep Archive. The option that says: Order an AWS Snowball Edge ap pliance to import the backup directly to Amazon S3 Glacier is incorrect because Snowball Edg e can't directly integrate backups to S3 Glacier. Moreover, you have to use the Amazon S3 Glacier Dee p Archive storage class as it is more cost-effectiv e than the regular Glacier class. The option that says: Use Amazon S3 to store the ba ckup data and add a lifecycle rule to transition th e current version to Amazon S3 Glacier is incorrect. Although this is a possible solution, it is difficu lt to directly integrate a tape backup solution to S3 wit hout using Storage Gateway. References: https://aws.amazon.com/storagegateway/faqs/ https://aws.amazon.com/s3/storage-classes/ AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", + "references": "" + }, + { + "question": "Both historical records and frequently accessed dat a are stored on an on-premises storage system. The amount of current data is growing at an exponential rate. As the storage's capacity is nearing its lim it, the company's Solutions Architect has decided to move t he historical records to AWS to free up space for t he active data. Which of the following architectures deliver the be st solution in terms of cost and operational manage ment?", + "options": [ + "A. Use AWS Storage Gateway to move the historical re cords from on-premises to AWS. Choose Amazon S3", + "B. Use AWS Storage Gateway to move the historical re cords from on-premises to AWS. Choose Amazon S3", + "C. Use AWS DataSync to move the historical records f rom on-premises to AWS. Choose Amazon S3", + "D. 
Use AWS DataSync to move the historical records f rom on-premises to AWS. Choose Amazon S3 Glacier" + ], + "correct": "D. Use AWS DataSync to move the historical records f rom on-premises to AWS. Choose Amazon S3 Glacier", + "explanation": "Explanation/Reference: AWS DataSync makes it simple and fast to move large amounts of data online between on-premises storage and Amazon S3, Amazon Elastic File System ( Amazon EFS), or Amazon FSx for Windows File Server. Manual tasks related to data transfers can slow down migrations and burden IT operations. DataSync eliminates or automatically handles many o f these tasks, including scripting copy jobs, scheduling, and monitoring transfers, validating da ta, and optimizing network utilization. The DataSyn c software agent connects to your Network File System (NFS), S erver Message Block (SMB) storage, and your self-managed object storage, so you don't have to modify your applications. DataSync can transfer hundreds of terabytes and mil lions of files at speeds up to 10 times faster than open- source tools, over the Internet or AWS Direct Conne ct links. You can use DataSync to migrate active da ta sets or archives to AWS, transfer data to the cloud for timely analysis and processing, or replicate data t o AWS for business continuity. Getting started with DataSync is easy: deploy the DataSync agent, connect it to y our file system, select your AWS storage resources, and star t moving data between them. You pay only for the da ta you move. Since the problem is mainly about moving historical records from on-premises to AWS, using AWS DataSync is a more suitable solution. You can use D ataSync to move cold data from expensive on-premise s storage systems directly to durable and secure long -term storage, such as Amazon S3 Glacier or Amazon S3 Glacier Deep Archive. Hence, the correct answer is the option that says: Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier D eep Archive to be the destination for the data. The following options are both incorrect: - Use AWS Storage Gateway to move the historical re cords from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destinatio n for the data. - Use AWS Storage Gateway to move the historical re cords from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the dat a. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacie r Deep Archive after 30 days. Although you can copy data from on-premises to AWS with Storage Gateway, it is not suitable for transferring large sets of data to AWS. Storage Gat eway is mainly used in providing low-latency access to data by caching frequently accessed data on-premises whi le storing archive data securely and durably in Ama zon cloud storage services. Storage Gateway optimizes d ata transfer to AWS by sending only changed data and compressing data. The option that says: Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S 3 Glacier Deep Archive after 30 days is incorrect because, with AWS DataSync, you can transfer data f rom on-premises directly to Amazon S3 Glacier Deep Archive. You don't have to configure the S3 lifecyc le policy and wait for 30 days to move the data to Glacier Deep Archive. 
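For illustration, a rough boto3 sketch of the DataSync flow described above, assuming a DataSync agent has already been activated and using placeholder ARNs and hostnames:

import boto3

datasync = boto3.client("datasync")

# Destination: an S3 location that writes objects directly into the Glacier Deep Archive storage class.
dest = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-archive-bucket",
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Source: the on-premises NFS share exposed through the activated DataSync agent.
src = datasync.create_location_nfs(
    ServerHostname="nas.example.local",
    Subdirectory="/exports/historical",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0example"]},
)

# Create the transfer task and kick off an execution.
task = datasync.create_task(SourceLocationArn=src["LocationArn"], DestinationLocationArn=dest["LocationArn"])
datasync.start_task_execution(TaskArn=task["TaskArn"])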
References: https://aws.amazon.com/datasync/faqs/ https://aws.amazon.com/storagegateway/faqs/ Check out these AWS DataSync and Storage Gateway Ch eat Sheets: https://tutorialsdojo.com/aws-datasync/ https://tutorialsdojo.com/aws-storage-gateway/ AWS Storage Gateway vs DataSync: https://www.youtube.com/watch?v=tmfe1rO-AUs", + "references": "" + }, + { + "question": "A company is running a multi-tier web application f arm in a virtual private cloud (VPC) that is not connected to their corporate network. They are conn ecting to the VPC over the Internet to manage the f leet of Amazon EC2 instances running in both the public and private subnets. The Solutions Architect has added a bastion host with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further l imit administrative access to all of the instances in the VPC. Which of the following bastion host deployment opti ons will meet this requirement?", + "options": [ + "A. Deploy a Windows Bastion host on the corporate ne twork that has RDP access to all EC2 instances in t he", + "B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access t o the", + "C. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access t o", + "D. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP acc ess to" + ], + "correct": "C. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access t o", + "explanation": "Explanation/Reference: The correct answer is to deploy a Windows Bastion h ost with an Elastic IP address in the public subnet and allow RDP access to bastion only from the corporate IP addresses. A bastion host is a special purpose computer on a n etwork specifically designed and configured to withstand attacks. If you have a bastion host in AW S, it is basically just an EC2 instance. It should be in a public subnet with either a public or Elastic IP address w ith sufficient RDP or SSH access defined in the sec urity group. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. Deploying a Windows Bastion host on the corporate n etwork that has RDP access to all EC2 instances in the VPC is incorrect since you do not deploy the Bastion host to your corporate network. It should be in the public subnet of a VPC. Deploying a Windows Bastion host with an Elastic IP address in the private subnet, and restricting RDP access to the bastion from only the corporate p ublic IP addresses is incorrect since it should be deployed in a public subnet, not a private subnet. Deploying a Windows Bastion host with an Elastic IP address in the public subnet and allowing SSH access to the bastion from anywhere is incorrect. S ince it is a Windows bastion, you should allow RDP access and not SSH as this is mainly used for Linux -based systems.", + "references": "https://docs.aws.amazon.com/quickstart/latest/linux -bastion/architecture.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" + }, + { + "question": "A company is building a transcription service in wh ich a fleet of EC2 worker instances processes an uploaded audio file and generates a text file as an output. They must store both of these frequently a ccessed files in the same durable storage until the text fi le is retrieved by the uploader. 
Due to an expected surge in demand, they have to ensure that the storage is sca lable and can be retrieved within minutes. Which storage option in AWS can they use in this si tuation, which is both cost-efficient and scalable?", + "options": [ + "A. A single Amazon S3 bucket", + "B. Amazon S3 Glacier Deep Archive", + "C. Multiple Amazon EBS volume with snapshots", + "D. Multiple instance stores" + ], + "correct": "A. A single Amazon S3 bucket", + "explanation": "Explanation/Reference: Amazon Simple Storage Service (Amazon S3) is an obj ect storage service that offers industry-leading scalability, data availability, security, and perfo rmance. It provides easy-to-use management features so you can organize your data and configure finely-tun ed access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applica tions for companies all around the world. In this scenario, the requirement is to have cost-e fficient and scalable storage. Among the given opti ons, the best option is to use Amazon S3. It's a simple stor age service that offers a highly-scalable, reliable , and low-latency data storage infrastructure at very low costs. Hence, the correct answer is: A single Amazon S3 bu cket. The option that says: Multiple Amazon EBS volume wi th snapshots is incorrect because Amazon S3 is more cost-efficient than EBS volumes. The option that says: Multiple instance stores is i ncorrect. Just like the option above, you must use Amazon S3 since it is scalable and cost-efficient t han instance store volumes. The option that says: Amazon S3 Glacier Deep Archiv e is incorrect because this is mainly used for data archives with data retrieval times that can take mo re than 12 hours. Hence, it is not suitable for the transcription service where the data are stored and frequently accessed. References: https://aws.amazon.com/s3/pricing/ https://docs.aws.amazon.com/AmazonS3/latest/gsg/Get StartedWithS3.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A company has a static corporate website hosted in a standard S3 bucket and a new web domain name that was registered using Route 53. You are instructed b y your manager to integrate these two services in o rder to successfully launch their corporate website. What are the prerequisites when routing traffic usi ng Amazon Route 53 to a website that is hosted in a n Amazon S3 Bucket? (Select TWO.)", + "options": [ + "A. The S3 bucket must be in the same region as the h osted zone", + "B. The S3 bucket name must be the same as the domain name", + "C. A registered domain name", + "D. The record set must be of type \"MX\"" + ], + "correct": "", + "explanation": "Explanation/Reference: Here are the prerequisites for routing traffic to a website that is hosted in an Amazon S3 Bucket: - An S3 bucket that is configured to host a static website. The bucket must have the same name as your domain or subdomain. For example, if you want to us e the subdomain portal.tutorialsdojo.com, the name of the bucket must be portal.tutorialsdojo.com. - A registered domain name. You can use Route 53 as your domain registrar, or you can use a different registrar. - Route 53 as the DNS service for the domain. If yo u register your domain name by using Route 53, we automatically configure Route 53 as the DNS service for the domain. 
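As an illustrative sketch only, the alias record that ties the domain to the S3 website endpoint could be created with boto3 as shown below; the hosted zone ID of your domain is a placeholder, and the alias target zone ID is the fixed, region-specific value that AWS publishes for S3 website endpoints (the one shown is only an example to be verified for your region):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",                       # your Route 53 hosted zone for the domain
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "portal.tutorialsdojo.com",       # must match the bucket name exactly
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",     # example: S3 website endpoint zone ID for us-east-1
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)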
The option that says: The record set must be of typ e \"MX\" is incorrect since an MX record specifies th e mail server responsible for accepting email messages on behalf of a domain name. This is not what is being asked by the question. The option that says: The S3 bucket must be in the same region as the hosted zone is incorrect. There is no constraint that the S3 bucket must be in the same r egion as the hosted zone in order for the Route 53 service to route traffic into it. The option that says: The Cross-Origin Resource Sha ring (CORS) option should be enabled in the S3 bucket is incorrect because you only need to enable Cross-Origin Resource Sharing (CORS) when your client web application on one domain interacts with the resources in a different domain.", + "references": "https://docs.aws.amazon.com/Route53/latest/Develope rGuide/RoutingToS3Bucket.html Amazon Route 53 Overview: https://www.youtube.com/watch?v=Su308t19ubY Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/" + }, + { + "question": "A company plans to conduct a network security audit . The web application is hosted on an Auto Scaling group of EC2 Instances with an Application Load Balancer in front to evenly distribute the incoming traffic. A Solutions Architect has been tasked to enhance th e security posture of the company's cloud infrastru cture and minimize the impact of DDoS attacks on its reso urces. Which of the following is the most effective soluti on that should be implemented? A. Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin. Use VPC Flow Logs to monitor abnormal traffic patterns. Set up a custom AWS Lambda function that processes the flow logs and invokes Amazon SNS for notification.", + "options": [ + "B. Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin. Use Amazon", + "C. Configure Amazon CloudFront distribution and set an Application Load Balancer as the origin. Create a", + "D. Configure Amazon CloudFront distribution and set Application Load Balancer as the origin. Create a r ate-" + ], + "correct": "D. Configure Amazon CloudFront distribution and set Application Load Balancer as the origin. Create a r ate-", + "explanation": "Explanation/Reference: AWS WAF is a web application firewall that helps pr otect your web applications or APIs against common web exploits that may affect availability, compromi se security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security ru les that block common attack patterns, such as SQL inje ction or cross-site scripting, and rules that filte r out specific traffic patterns you define. You can deplo y AWS WAF on Amazon CloudFront as part of your CDN solution, the Application Load Balancer that fr onts your web servers or origin servers running on EC2, or Amazon API Gateway for your APIs. To detect and mitigate DDoS attacks, you can use AW S WAF in addition to AWS Shield. AWS WAF is a web application firewall that helps detect and miti gate web application layer DDoS attacks by inspecti ng traffic inline. Application layer DDoS attacks use well-formed but malicious requests to evade mitigat ion and consume application resources. You can define c ustom security rules that contain a set of conditio ns, rules, and actions to block attacking traffic. 
Afte r you define web ACLs, you can apply them to CloudF ront distributions, and web ACLs are evaluated in the pr iority order you specified when you configured them . By using AWS WAF, you can configure web access cont rol lists (Web ACLs) on your CloudFront distributions or Application Load Balancers to filt er and block requests based on request signatures. Each Web ACL consists of rules that you can configure to str ing match or regex match one or more request attributes, such as the URI, query-string, HTTP met hod, or header key. In addition, by using AWS WAF's rate- based rules, you can automatically block the IP add resses of bad actors when requests matching a rule exceed a threshold that you define. Requests from offendin g client IP addresses will receive 403 Forbidden er ror responses and will remain blocked until request rat es drop below the threshold. This is useful for mit igating HTTP flood attacks that are disguised as regular we b traffic. It is recommended that you add web ACLs with rate-b ased rules as part of your AWS Shield Advanced protection. These rules can alert you to sudden spi kes in traffic that might indicate a potential DDoS event. A rate-based rule counts the requests that arrive fro m any individual address in any five-minute period. If the number of requests exceeds the limit that you defin e, the rule can trigger an action such as sending y ou a notification. Hence, the correct answer is: Configure Amazon Clou dFront distribution and set Application Load Balancer as the origin. Create a rate-based web ACL rule using AWS WAF and associate it with Amazon CloudFront. The option that says: Configure Amazon CloudFront d istribution and set a Network Load Balancer as the origin. Use VPC Flow Logs to monitor abnormal t raffic patterns. Set up a custom AWS Lambda function that processes the flow logs and invokes A mazon SNS for notification is incorrect because thi s option only allows you to monitor the traffic that is reac hing your instance. You can't use VPC Flow Logs to mitigate DDoS attacks. The option that says: Configure Amazon CloudFront d istribution and set an Application Load Balancer as the origin. Create a security group rul e and deny all the suspicious addresses. Use Amazon SNS for notification is incorrect. To deny s uspicious addresses, you must manually insert the I P addresses of these hosts. This is a manual task whi ch is not a sustainable solution. Take note that at tackers generate large volumes of packets or requests to ov erwhelm the target system. Using a security group i n this scenario won't help you mitigate DDoS attacks. The option that says: Configure Amazon CloudFront d istribution and set a Network Load Balancer as the origin. Use Amazon GuardDuty to block suspiciou s hosts based on its security findings. Set up a custom AWS Lambda function that processes the secur ity logs and invokes Amazon SNS for notification is incorrect because Amazon GuardDuty is just a threat detection service. You should use AWS WAF and create your own AWS WAF rate-based rule s for mitigating HTTP flood attacks that are disguised as regular web traffic. 
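The rate-based rule described above can be sketched with boto3 (wafv2) roughly as follows; the ACL name, request threshold, and metric names are placeholder values, and a web ACL with CLOUDFRONT scope has to be created in the us-east-1 region:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="td-rate-limit-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-http-floods",
        "Priority": 0,
        # Block any client IP that exceeds 2,000 requests in a 5-minute window.
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True, "CloudWatchMetricsEnabled": True, "MetricName": "blockHttpFloods"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True, "CloudWatchMetricsEnabled": True, "MetricName": "tdRateLimitAcl"},
)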
References: https://docs.aws.amazon.com/waf/latest/developergui de/ddos-overview.html https://docs.aws.amazon.com/waf/latest/developergui de/ddos-get-started-rate-based-rules.html https://d0.awsstatic.com/whitepapers/Security/DDoS_ White_Paper.pdf Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, Cloud HSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", + "references": "" + }, + { + "question": "A company runs a messaging application in the ap-no rtheast-1 and ap-southeast-2 region. A Solutions Architect needs to create a routing policy wherein a larger portion of traffic from the Philippines an d North India will be routed to the resource in the ap-northeast- 1 region. Which Route 53 routing policy should the Solutions Architect use?", + "options": [ + "A. Weighted Routing", + "B. Geoproximity Routing", + "C. Latency Routing", + "D. Geolocation Routing Correct Answer: B" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. You can use Route 53 to perform three main functions in any combination: domain registration, DNS routing, and health checking. After you create a hosted zone for your d omain, such as example.com, you create records to t ell the Domain Name System (DNS) how you want traffic to be routed for that domain. For example, you might create records that cause DN S to do the following: Route Internet traffic for example.com to the IP ad dress of a host in your data center. Route email for that domain (jose.rizal@tutorialsdo jo.com) to a mail server (mail.tutorialsdojo.com). Route traffic for a subdomain called operations.man ila.tutorialsdojo.com to the IP address of a differ ent host. Each record includes the name of a domain or a subd omain, a record type (for example, a record with a type of MX routes email), and other information app licable to the record type (for MX records, the hostname of one or more mail servers and a priority for each server). Route 53 has different routing policies that you ca n choose from. Below are some of the policies: Latency Routing lets Amazon Route 53 serve user req uests from the AWS Region that provides the lowest latency. It does not, however, guarantee that users in the same geographic region will be served from the same location. Geoproximity Routing lets Amazon Route 53 route tra ffic to your resources based on the geographic location of your users and your resources. You can also optionally choose to route more traffic or les s to a given resource by specifying a value, known as a bi as. A bias expands or shrinks the size of the geogr aphic region from which traffic is routed to a resource. Geolocation Routing lets you choose the resources t hat serve your traffic based on the geographic loca tion of your users, meaning the location that DNS queries o riginate from. Weighted Routing lets you associate multiple resour ces with a single domain name (tutorialsdojo.com) o r subdomain name (subdomain.tutorialsdojo.com) and ch oose how much traffic is routed to each resource. In this scenario, the problem requires a routing po licy that will let Route 53 route traffic to the re source in the Tokyo region from a larger portion of the Philippin es and North India. You need to use Geoproximity Routing and specify a bias to control the size of the geographic region f rom which traffic is routed to your resource. 
The sampl e image above uses a bias of -40 in the Tokyo regio n and a bias of 1 in the Sydney Region. Setting up the bias configuration in this manner would cause Route 53 to route traffic coming from the middle and northern part of the Philippines, as well as the northern part of I ndia to the resource in the Tokyo Region. Hence, the correct answer is: Geoproximity Routing. Geolocation Routing is incorrect because you cannot control the coverage size from which traffic is ro uted to your instance in Geolocation Routing. It just lets you choose the instances that will serve traffic ba sed on the location of your users. Latency Routing is incorrect because it is mainly u sed for improving performance by letting Route 53 serve user requests from the AWS Region that provid es the lowest latency. Weighted Routing is incorrect because it is used fo r routing traffic to multiple resources in proporti ons that you specify. This can be useful for load balancing and testing new versions of a software. References: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-policy.html#routing-policy-geoproxim ity https://docs.aws.amazon.com/Route53/latest/Develope rGuide/rrsets-working-with.html Latency Routing vs Geoproximity Routing vs Geolocat ion Routing: https://tutorialsdojo.com/latency-routing-vs-geopro ximity-routing-vs-geolocation-routing/", + "references": "" + }, + { + "question": "server-side encryption with Amazon S3-Managed encry ption keys (SSE-S3) to encrypt data using 256- bit Advanced Encryption Standard (AES-256) block ci pher. Which of the following request headers must be used ?", + "options": [ + "A. A. x-amz-server-side-encryption-customer-key", + "B. B. x-amz-server-side-encryption", + "C. C. x-amz-server-side-encryption-customer-algorith m", + "D. D. x-amz-server-side-encryption-customer-key-MD5" + ], + "correct": "B. B. x-amz-server-side-encryption", + "explanation": "Explanation/Reference: Server-side encryption protects data at rest. If yo u use Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3), Amazon S3 will encrypt ea ch object with a unique key and as an additional safeguard, it encrypts the key itself with a master key that it rotates regularly. Amazon S3 server-si de encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES- 256), to encrypt your data. If you need server-side encryption for all of the o bjects that are stored in a bucket, use a bucket po licy. For example, the following bucket policy denies permiss ions to upload an object unless the request include s the x- amz-server-side-encryption header to request server -side encryption: However, if you chose to use server-side encryption with customer-provided encryption keys (SSE-C), yo u must provide encryption key information using the f ollowing request headers: x-amz-server-side-encryption-customer-algorithm x-amz-server-side-encryption-customer-key x-amz-server-side-encryption-customer-key-MD5 Hence, using the x-amz-server-side-encryption heade r is correct as this is the one being used for Amaz on S3-Managed Encryption Keys (SSE-S3). All other options are incorrect since they are used for SSE-C. 
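For reference, the bucket policy that the explanation refers to (it appears only as an image in the original source) would look roughly like the sketch below, applied here with boto3 and a placeholder bucket name:

import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedObjectUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        # Reject any PutObject request whose x-amz-server-side-encryption header is not AES256.
        "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))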
References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ser v-side-encryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngServerSideEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Ser verSideEncryptionCustomerKeys.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A company has a requirement to move 80 TB data ware house to the cloud. It would take 2 months to trans fer the data given their current bandwidth allocation. Which is the most cost-effective service that would allow you to quickly upload their data into AWS?", + "options": [ + "A. A. AWS Snowball Edge", + "B. B. AWS Snowmobile", + "C. C. AWS Direct Connect", + "D. D. Amazon S3 Multipart Upload" + ], + "correct": "A. A. AWS Snowball Edge", + "explanation": "Explanation/Reference: AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local en vironment and the AWS Cloud. Each Snowball Edge device can transport data at spe eds faster than the internet. This transport is don e by shipping the data in the appliances through a regio nal carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. Th e AWS Snowball Edge device differs from the standar d Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality. Snowball Edge devices have three options for device configurations storage optimized, compute optimized, and with GPU. Hence, the correct answer is: AWS Snowball Edge. AWS Snowmobile is incorrect because this is an Exab yte-scale data transfer service used to move extremely large amounts of data to AWS. It is not s uitable for transferring a small amount of data, li ke 80 TB in this scenario. You can transfer up to 100PB per Sno wmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. A more c ost-effective solution here is to order a Snowball Edge device instead. AWS Direct Connect is incorrect because it is prima rily used to establish a dedicated network connecti on from your premises network to AWS. This is not suitable for one-time data transfer tasks, like what is depi cted in the scenario. Amazon S3 Multipart Upload is incorrect because thi s feature simply enables you to upload large object s in multiple parts. It still uses the same Internet con nection of the company, which means that the transf er will still take time due to its current bandwidth allocation. References: https://docs.aws.amazon.com/snowball/latest/ug/what issnowball.html https://docs.aws.amazon.com/snowball/latest/ug/devi ce-differences.html Check out this AWS Snowball Edge Cheat Sheet: https://tutorialsdojo.com/aws-snowball-edge/ AWS Snow Family Overview: https://youtu.be/9Ar-51Ip53Q", + "references": "" + }, + { + "question": "One member of your DevOps team consulted you about a connectivity problem in one of your Amazon EC2 instances. The application architecture is init ially set up with four EC2 instances, each with an EIP address that all belong to a public non-default subnet. You launched another instance to handle the increasing workload of your application. The EC2 in stances also belong to the same security group. 
Everything works well as expected except for one of the EC2 instances which is not able to send nor receive traffic over the Internet. Which of the following is the MOST likely reason fo r this issue?", + "options": [ + "A. A. The EC2 instance is running in an Availability Zone that is not connected to an Internet gateway.", + "B. B. The EC2 instance does not have a public IP add ress associated with it.", + "C. C. The EC2 instance does not have a private IP ad dress associated with it.", + "D. D. The route table is not properly configured to allow traffic to and from the Internet through the Internet" + ], + "correct": "B. B. The EC2 instance does not have a public IP add ress associated with it.", + "explanation": "Explanation/Reference: IP addresses enable resources in your VPC to commun icate with each other, and with resources over the Internet. Amazon EC2 and Amazon VPC support the IPv 4 and IPv6 addressing protocols. By default, Amazon EC2 and Amazon VPC use the IPv4 addressing protocol. When you create a VPC, you must assign it an IPv4 CIDR block (a range of priva te IPv4 addresses). Private IPv4 addresses are not reachable over the Internet. To connect to your ins tance over the Internet, or to enable communication between your instances and other AWS services that have pub lic endpoints, you can assign a globally- unique public IPv4 address to your instance. You can optionally associate an IPv6 CIDR block wit h your VPC and subnets, and assign IPv6 addresses from that block to the resources in your VPC. IPv6 addresses are public and reachable over the Interne t. All subnets have a modifiable attribute that determ ines whether a network interface created in that su bnet is assigned a public IPv4 address and, if applicable, an IPv6 address. This includes the primary network interface (eth0) that's created for an instance whe n you launch an instance in that subnet. Regardless of the subnet attribute, you can still override this setti ng for a specific instance during launch. By default, nondefault subnets have the IPv4 public addressing attribute set to false, and default sub nets have this attribute set to true. An exception is a nonde fault subnet created by the Amazon EC2 launch instance wizard -- the wizard sets the attribute to true. You can modify this attribute using the Amaz on VPC console. In this scenario, there are 5 EC2 instances that be long to the same security group that should be able to connect to the Internet. The main route table is pr operly configured but there is a problem connecting to one instance. Since the other four instances are workin g fine, we can assume that the security group and t he route table are correctly configured. One possible reason for this issue is that the problematic instance do es not have a public or an EIP address. Take note as well that the four EC2 instances all b elong to a public non-default subnet. Which means t hat a new EC2 instance will not have a public IP address by default since the since IPv4 public addressing a ttribute is initially set to false. Hence, the correct answer is the option that says: The EC2 instance does not have a public IP address associated with it. The option that says: The route table is not proper ly configured to allow traffic to and from the Inte rnet through the Internet gateway is incorrect because the other three instances, which are associated with the sam e route table and security group, do not have any issues. 
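For reference, a minimal sketch of the fix implied by the correct option is shown below, using hypothetical instance and subnet IDs: it attaches an Elastic IP to the affected instance and, optionally, enables auto-assigned public IPv4 addresses on the subnet so future launches get one by default.
import boto3
ec2 = boto3.client('ec2')
instance_id = 'i-0123456789abcdef0'   # placeholder: the unreachable instance
subnet_id = 'subnet-0abc1234'         # placeholder: the public non-default subnet
# Option 1: attach an Elastic IP to the affected instance.
allocation = ec2.allocate_address(Domain='vpc')
ec2.associate_address(InstanceId=instance_id, AllocationId=allocation['AllocationId'])
# Option 2: auto-assign public IPv4 addresses to future launches in the subnet.
ec2.modify_subnet_attribute(SubnetId=subnet_id, MapPublicIpOnLaunch={'Value': True})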
The option that says: The EC2 instance is running i n an Availability Zone that is not connected to an Internet gateway is incorrect because there is no r elationship between the Availability Zone and the Internet Gateway (IGW) that may have caused the iss ue. References: http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_Scenario1.html https://docs.aws.amazon.com/vpc/latest/userguide/vp c-ip-addressing.html#vpc-ip-addressing-subnet Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A start-up company that offers an intuitive financi al data analytics service has consulted you about t heir AWS architecture. They have a fleet of Amazon EC2 worke r instances that process financial data and then ou tputs reports which are used by their clients. You must s tore the generated report files in a durable storag e. The number of files to be stored can grow over time as the start-up company is expanding rapidly overseas and hence, they also need a way to distribute the repor ts faster to clients located across the globe. Which of the following is a cost-efficient and scal able storage option that you should use for this sc enario?", + "options": [ + "A. A. Use Amazon S3 as the data storage and CloudFro nt as the CDN.", + "B. B. Use Amazon Redshift as the data storage and Cl oudFront as the CDN.", + "C. C. Use Amazon Glacier as the data storage and Ela stiCache as the CDN.", + "D. D. Use multiple EC2 instance stores for data stor age and ElastiCache as the CDN." + ], + "correct": "A. A. Use Amazon S3 as the data storage and CloudFro nt as the CDN.", + "explanation": "Explanation Explanation/Reference: A Content Delivery Network (CDN) is a critical comp onent of nearly any modern web application. It used to be that CDN merely improved the delivery of content by replicating commonly requested files (static conte nt) across a globally distributed set of caching server s. However, CDNs have become much more useful over time. For caching, a CDN will reduce the load on an appli cation origin and improve the experience of the requestor by delivering a local copy of the content from a nearby cache edge, or Point of Presence (Po P). The application origin is off the hook for opening the connection and delivering the content directly as t he CDN takes care of the heavy lifting. The end result is that t he application origins don't need to scale to meet demands for static content. Amazon CloudFront is a fast content delivery networ k (CDN) service that securely delivers data, videos , applications, and APIs to customers globally with l ow latency, high transfer speeds, all within a deve loper- friendly environment. CloudFront is integrated with AWS both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. Amazon S3 offers a highly durable, scalable, and se cure destination for backing up and archiving your critical data. This is the correct option as the st art-up company is looking for a durable storage to store the audio and text files. In addition, ElastiCache is o nly used for caching and not specifically as a Glob al Content Delivery Network (CDN). Using Amazon Redshift as the data storage and Cloud Front as the CDN is incorrect as Amazon Redshift is usually used as a Data Warehouse. Using Amazon S3 Glacier as the data storage and Ela stiCache as the CDN is incorrect as Amazon S3 Glacier is usually used for data archives. 
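As an illustrative sketch only, the correct combination could be wired up with boto3 roughly as follows; the origin bucket domain is a placeholder and the cache policy ID refers to the AWS managed CachingOptimized policy (verify the current ID in your account before relying on it):
import time
import boto3
cloudfront = boto3.client('cloudfront')
bucket_domain = 'tutorialsdojo-reports.s3.amazonaws.com'   # placeholder origin bucket
response = cloudfront.create_distribution(
    DistributionConfig={
        'CallerReference': str(time.time()),   # any unique string
        'Comment': 'CDN for generated report files',
        'Enabled': True,
        'Origins': {
            'Quantity': 1,
            'Items': [{
                'Id': 'reports-s3-origin',
                'DomainName': bucket_domain,
                # A production setup would normally add origin access
                # control instead of leaving the bucket publicly readable.
                'S3OriginConfig': {'OriginAccessIdentity': ''}
            }]
        },
        'DefaultCacheBehavior': {
            'TargetOriginId': 'reports-s3-origin',
            'ViewerProtocolPolicy': 'redirect-to-https',
            'CachePolicyId': '658327ea-f89d-4fab-a63d-7e88639e58f6'   # managed CachingOptimized policy
        }
    }
)
print(response['Distribution']['DomainName'])   # clients download reports via this domain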
Using multiple EC2 instance stores for data storage and ElastiCache as the CDN is incorrect as data stored in an instance store is not durable. References: https://aws.amazon.com/s3/ https://aws.amazon.com/caching/cdn/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "A company launched a website that accepts high-qual ity photos and turns them into a downloadable video montage. The website offers a free and a premium ac count that guarantees faster processing. All reques ts by both free and premium members go through a single S QS queue and then processed by a group of EC2 instances that generate the videos. The company nee ds to ensure that the premium users who paid for th e service have higher priority than the free members. How should the company re-design its architecture t o address this requirement?", + "options": [ + "A. A. Use Amazon S3 to store and process the photos and then generate the video montage afterward.", + "B. B. Create an SQS queue for free members and anoth er one for premium members. Configure your EC2", + "C. C. For the requests made by premium members, set a higher priority in the SQS queue so it will be", + "D. D. Use Amazon Kinesis to process the photos and g enerate the video montage in real-time." + ], + "correct": "B. B. Create an SQS queue for free members and anoth er one for premium members. Configure your EC2", + "explanation": "Explanation/Reference: Amazon Simple Queue Service (SQS) is a fully manage d message queuing service that enables you to decouple and scale microservices, distributed syste ms, and serverless applications. SQS eliminates the complexity and overhead associated with managing an d operating message oriented middleware, and empowers developers to focus on differentiating wor k. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other service s to be available. In this scenario, it is best to create 2 separate S QS queues for each type of members. The SQS queues for the premium members can be polled first by the EC2 Inst ances and once completed, the messages from the fre e members can be processed next. Hence, the correct answer is: Create an SQS queue f or free members and another one for premium members. Configure your EC2 instances to consume me ssages from the premium queue first and if it is empty, poll from the free members' SQS queue. The option that says: For the requests made by prem ium members, set a higher priority in the SQS queue so it will be processed first compared to the requests made by free members is incorrect as you cannot set a priority to individual items in the SQ S queue. The option that says: Using Amazon Kinesis to proce ss the photos and generate the video montage in real time is incorrect as Amazon Kinesis is used to process streaming data and it is not applicable in this scenario. 
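A minimal worker loop that implements this polling order might look like the sketch below, where the queue URLs and the video-processing function are placeholder assumptions:
import boto3
sqs = boto3.client('sqs')
# Placeholder queue URLs for the two tiers.
PREMIUM_QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/premium-requests'
FREE_QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/free-requests'

def render_video_montage(request_body):
    """Placeholder for the actual video-generation work."""
    print('processing', request_body)

def poll_once():
    """Drain premium messages first; only then look at the free queue."""
    for queue_url in (PREMIUM_QUEUE_URL, FREE_QUEUE_URL):
        response = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=5   # long polling reduces empty receives
        )
        messages = response.get('Messages', [])
        if messages:
            for message in messages:
                render_video_montage(message['Body'])
                sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])
            return   # premium work was found, so check the premium queue first again next time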
The option that says: Using Amazon S3 to store and process the photos and then generating the video montage afterwards is incorrect as Amazon S3 is use d for durable storage and not for processing data.", + "references": "https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-best-practices.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" + }, + { + "question": "A company has developed public APIs hosted in Amazo n EC2 instances behind an Elastic Load Balancer. The APIs will be used by various clients from their respective on-premises data centers. A Solutions Architect received a report that the web service cl ients can only access trusted IP addresses whitelis ted on their firewalls. What should you do to accomplish the above requirem ent?", + "options": [ + "A. A. Associate an Elastic IP address to an Applicat ion Load Balancer.", + "B. B. Associate an Elastic IP address to a Network L oad Balancer.", + "C. C. Create an Alias Record in Route 53 which maps to the DNS name of the load balancer.", + "D. D. Create a CloudFront distribution whose origin points to the private IP addresses of your web serv ers." + ], + "correct": "B. B. Associate an Elastic IP address to a Network L oad Balancer.", + "explanation": "Explanation/Reference: A Network Load Balancer functions at the fourth lay er of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. Afte r the load balancer receives a connection request, it selects a target from the default rule's target gro up. It attempts to open a TCP connection to the sel ected target on the port specified in the listener config uration. Based on the given scenario, web service clients ca n only access trusted IP addresses. To resolve this requirement, you can use the Bring Your Own IP (BYO IP) feature to use the trusted IPs as Elastic IP addresses (EIP) to a Network Load Balancer (NLB). T his way, there's no need to re-establish the whitel ists with new IP addresses. Hence, the correct answer is: Associate an Elastic IP address to a Network Load Balancer. The option that says: Associate an Elastic IP addre ss to an Application Load Balancer is incorrect because you can't assign an Elastic IP address to a n Application Load Balancer. The alternative method you can do is assign an Elastic IP address to a Network Load Balancer in front of the Application Load Balancer. The option that says: Create a CloudFront distribut ion whose origin points to the private IP addresses of your web servers is incorrect because web service client s can only access trusted IP addresses. The fastest way to resolve this requirement is to attach an Elastic IP address to a Network Load Balancer. The option that says: Create an Alias Record in Rou te 53 which maps to the DNS name of the load balancer is incorrect. This approach won't still al low them to access the application because of trust ed IP addresses on their firewalls. 
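A rough boto3 sketch of attaching static addresses at NLB creation time is shown below; the subnet IDs are placeholders, and the allocations could just as well come from BYOIP address ranges that the clients have already whitelisted:
import boto3
ec2 = boto3.client('ec2')
elbv2 = boto3.client('elbv2')
# Placeholder subnets for the two Availability Zones used by the NLB.
subnet_az1 = 'subnet-0aaa1111'
subnet_az2 = 'subnet-0bbb2222'
# Allocate (or bring your own) static addresses that clients can whitelist.
eip_az1 = ec2.allocate_address(Domain='vpc')['AllocationId']
eip_az2 = ec2.allocate_address(Domain='vpc')['AllocationId']
# A Network Load Balancer accepts one Elastic IP per subnet mapping.
response = elbv2.create_load_balancer(
    Name='public-api-nlb',
    Type='network',
    Scheme='internet-facing',
    SubnetMappings=[
        {'SubnetId': subnet_az1, 'AllocationId': eip_az1},
        {'SubnetId': subnet_az2, 'AllocationId': eip_az2}
    ]
)
print(response['LoadBalancers'][0]['DNSName'])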
References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/elb-attach-elastic-ip-to-public-nlb/ https://aws.amazon.com/blogs/networking-and-content -delivery/using-static-ip-addresses-for-application - load- balancers/ https://docs.aws.amazon.com/elasticloadbalancing/la test/network/introduction.html Check out this AWS Elastic Load Balancing Cheat She et: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/", + "references": "" + }, + { + "question": "An accounting application uses an RDS database conf igured with Multi-AZ deployments to improve availability. What would happen to RDS if the prima ry database instance fails?", + "options": [ + "A. A. A new database instance is created in the stan dby Availability Zone.", + "B. B. The canonical name record (CNAME) is switched from the primary to standby instance.", + "C. C. The IP address of the primary DB instance is s witched to the standby DB instance.", + "D. D. The primary database instance will reboot." + ], + "correct": "B. B. The canonical name record (CNAME) is switched from the primary to standby instance.", + "explanation": "Explanation/Reference: In Amazon RDS, failover is automatically handled so that you can resume database operations as quickly as possible without administrative intervention in the event that your primary database instance went dow n. When failing over, Amazon RDS simply flips the cano nical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. The option that says: The IP address of the primary DB instance is switched to the standby DB instance is incorrect since IP addresses are per su bnet, and subnets cannot span multiple AZs. The option that says: The primary database instance will reboot is incorrect since in the event of a failure, there is no database to reboot with. The option that says: A new database instance is cr eated in the standby Availability Zone is incorrect since with multi-AZ enabled, you already have a standby databa se in another AZ. References: https://aws.amazon.com/rds/details/multi-az/ https://aws.amazon.com/rds/faqs/ Amazon RDS Overview: https://www.youtube.com/watch?v=aZmpLl8K1UU Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "requirements is to ensure that the previous state o f a file is preserved and retrievable if a modified version of it is uploaded. Also, to meet regulatory compliance, d ata over 3 years must be retained in an archive and will only be accessible once a year. How should the solutions architect build the soluti on?", + "options": [ + "A. A. Create an S3 Standard bucket with object-level versioning enabled and configure a lifecycle rule that", + "B. B. Create an S3 Standard bucket and enable S3 Obj ect Lock in governance mode.", + "C. C. Create an S3 Standard bucket with S3 Object Lo ck in compliance mode enabled then configure a", + "D. D. Create a One-Zone-IA bucket with object-level versioning enabled and configure a lifecycle rule t hat" + ], + "correct": "A. A. Create an S3 Standard bucket with object-level versioning enabled and configure a lifecycle rule that", + "explanation": "Explanation/Reference: Versioning in Amazon S3 is a means of keeping multi ple variants of an object in the same bucket. You c an use the S3 Versioning feature to preserve, retrieve , and restore every version of every object stored in your buckets. 
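The two building blocks of the correct option can be sketched with boto3 as follows (the bucket name is a placeholder and 1,095 days is used to approximate the 3-year boundary):
import boto3
s3 = boto3.client('s3')
bucket = 'tutorialsdojo-documents'   # placeholder bucket name
# Enable object-level versioning so earlier versions stay retrievable.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={'Status': 'Enabled'}
)
# Move objects to Glacier Deep Archive roughly 3 years after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-after-3-years',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},   # apply to the whole bucket
            'Transitions': [{'Days': 1095, 'StorageClass': 'DEEP_ARCHIVE'}],
            # Archive superseded (noncurrent) versions on the same schedule.
            'NoncurrentVersionTransitions': [{'NoncurrentDays': 1095, 'StorageClass': 'DEEP_ARCHIVE'}]
        }]
    }
)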
With versioning, you can recover more easi ly from both unintended user actions and applicatio n failures. After versioning is enabled for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of those objects. Hence, the correct answer is: Create an S3 Standard bucket with object-level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years. The S3 Object Lock feature allows you to store obje cts using a write-once-read-many (WORM) model. In the scenario, changes to objects are allowed but th eir previous versions should be preserved and remai n retrievable. If you enable the S3 Object Lock featu re, you won't be able to upload new versions of an object. This feature is only helpful when you want to preve nt objects from being deleted or overwritten for a fixed amount of time or indefinitely. Therefore, the following options are incorrect: - Create an S3 Standard bucket and enable S3 Object Lock in governance mode. - Create an S3 Standard bucket with S3 Object Lock in compliance mode enabled then configure a lifecycle rule that transfers files to Amazon S3 Gl acier Deep Archive after 3 years. The option that says: Create a One-Zone-IA bucket w ith object-level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years is incorrect. On e- Zone-IA is not highly available as it only relies on one avail ability zone for storing data. References: https://docs.aws.amazon.com/AmazonS3/latest/usergui de/Versioning.html https://aws.amazon.com/blogs/aws/new-amazon-s3-stor age-class-glacier-deep-archive/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "For data privacy, a healthcare company has been ask ed to comply with the Health Insurance Portability and Accountability Act (HIPAA). The company stores all its backups on an Amazon S3 bucket. It is required that data stored on the S3 bucket must be encrypted . What is the best option to do this? (Select TWO.)", + "options": [ + "A. A. Before sending the data to Amazon S3 over HTTP S, encrypt the data locally first using your own", + "B. B. Enable Server-Side Encryption on an S3 bucket to make use of AES-128 encryption.", + "C. C. Store the data in encrypted EBS snapshots.", + "D. D. Store the data on EBS volumes with encryption enabled instead of using Amazon S3." + ], + "correct": "", + "explanation": "Explanation Server-side encryption is about data encryption at rest--that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permi ssions, there is no difference in the way you acces s encrypted or unencrypted objects. For example, if y ou share your objects using a pre-signed URL, that URL works the same way for both encrypted and unenc rypted objects. 
You have three mutually exclusive options depending on how you choose to manage the encryption keys: Use Server-Side Encryption with Amazon S3-Managed K eys (SSE-S3) Use Server-Side Encryption with AWS KMS-Managed Key s (SSE-KMS) Use Server-Side Encryption with Customer-Provided K eys (SSE-C) The options that say: Before sending the data to Am azon S3 over HTTPS, encrypt the data locally first using your own encryption keys and Enable Server-Si de Encryption on an S3 bucket to make use of AES-256 encryption are correct because these option s are using client-side encryption and Amazon S3- Managed Keys (SSE-S3) respectively. Client-side enc ryption is the act of encrypting data before sendin g it to Amazon S3 while SSE-S3 uses AES-256 encryption. Storing the data on EBS volumes with encryption ena bled instead of using Amazon S3 and storing the data in encrypted EBS snapshots are incorrect b ecause both options use EBS encryption and not S3. Enabling Server-Side Encryption on an S3 bucket to make use of AES-128 encryption is incorrect as S3 doesn't provide AES-128 encryption, only AES-256 . References: http://docs.aws.amazon.com/AmazonS3/latest/dev/Usin gEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngClientSideEncryption.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": ": A company is planning to launch an application whic h requires a data warehouse that will be used for t heir infrequently accessed data. You need to use an EBS Volume that can handle large, sequential I/O operations. Which of the following is the most cost-effective s torage type that you should use to meet the require ment?", + "options": [ + "A. A. Cold HDD (sc1)", + "B. B. Throughput Optimized HDD (st1)", + "C. C. Provisioned IOPS SSD (io1)", + "D. D. EBS General Purpose SSD (gp2)" + ], + "correct": "A. A. Cold HDD (sc1)", + "explanation": "Explanation Cold HDD volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit tha n Throughput Optimized HDD, this is a good fit idea l for large, sequential cold-data workloads. If you r equire infrequent access to your data and are looki ng to save costs, Cold HDD provides inexpensive block sto rage. Take note that bootable Cold HDD volumes are not supported. Cold HDD provides the lowest cost HDD volume and is designed for less frequently accessed workloads. Hence, Cold HDD (sc1) is the correct answer. In the exam, always consider the difference between SSD and HDD as shown on the table below. This will allow you to easily eliminate specific EBS-types in the options which are not SSD or not HDD, dependin g on whether the question asks for a storage type which has small, random I/O operations or large, sequential I/O operations. EBS General Purpose SSD (gp2) is incorrect because a General purpose SSD volume costs more and it is mainly used for a wide variety of workloads. It is recommended to be used as system boot volumes, virt ual desktops, low-latency interactive apps, and many mo re. Provisioned IOPS SSD (io1) is incorrect because thi s costs more than Cold HDD and thus, not cost- effective for this scenario. 
It provides the highes t performance SSD volume for mission-critical low-l atency or high-throughput workloads, which is not needed i n the scenario. Throughput Optimized HDD (st1) is incorrect because this is primarily used for frequently accessed, throughput-intensive workloads. In this scenario, C old HDD perfectly fits the requirement as it is use d for their infrequently accessed data and provides the l owest cost, unlike Throughput Optimized HDD. References: https://aws.amazon.com/ebs/details/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSVolumeTypes.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": ": A company is receiving semi-structured and structur ed data from different sources every day. The Solutions Architect plans to use big data processin g frameworks to analyze vast amounts of data and ac cess it using various business intelligence tools and stand ard SQL queries. Which of the following provides the MOST high-perfo rming solution that fulfills this requirement? A. A. Use Amazon Kinesis Data Analytics and store th e processed data in Amazon DynamoDB.", + "options": [ + "B. B. Use AWS Glue and store the processed data in A mazon S3.", + "C. C. Create an Amazon EC2 instance and store the pr ocessed data in Amazon EBS.", + "D. D. Create an Amazon EMR cluster and store the pro cessed data in Amazon Redshift." + ], + "correct": "D. D. Create an Amazon EMR cluster and store the pro cessed data in Amazon Redshift.", + "explanation": "Explanation Amazon EMR is a managed cluster platform that simpl ifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and anal yze vast amounts of data. By using these frameworks and related open-source projects, such a s Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence wo rkloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and o ut of other AWS data stores and databases. Amazon Redshift is the most widely used cloud data warehouse. It makes it fast, simple and cost-effect ive to analyze all your data using standard SQL and your e xisting Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petab ytes of structured and semi-structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. The key phrases in the scenario are \"big data proce ssing frameworks\" and \"various business intelligenc e tools and standard SQL queries\" to analyze the data . To leverage big data processing frameworks, you n eed to use Amazon EMR. The cluster will perform data tr ansformations (ETL) and load the processed data int o Amazon Redshift for analytic and business intellige nce applications. Hence, the correct answer is: Create an Amazon EMR cluster and store the processed data in Amazon Redshift. The option that says: Use AWS Glue and store the pr ocessed data in Amazon S3 is incorrect because AWS Glue is just a serverless ETL service that craw ls your data, builds a data catalog, performs data preparation, data transformation, and data ingestio n. It won't allow you to utilize different big data frameworks effectively, unlike Amazon EMR. In addit ion, the S3 Select feature in Amazon S3 can only ru n simple SQL queries against a subset of data from a specific S3 object. 
To perform queries in the S3 bu cket, you need to use Amazon Athena. The option that says: Use Amazon Kinesis Data Analy tics and store the processed data in Amazon DynamoDB is incorrect because Amazon DynamoDB doesn 't fully support the use of standard SQL and Business Intelligence (BI) tools, unlike Amazon Red shift. It also doesn't allow you to run complex ana lytic queries against terabytes to petabytes of structure d and semi-structured data. The option that says: Create an Amazon EC2 instance and store the processed data in Amazon EBS is incorrect because a single EBS-backed EC2 instance is quite limited in its computing capability. Moreo ver, it also entails an administrative overhead since yo u have to manually install and maintain the big dat a frameworks for the EC2 instance yourself. The most suitable solution to leverage big data frameworks i s to use EMR clusters. References: https://docs.aws.amazon.com/emr/latest/ManagementGu ide/emr-what-is-emr.html https://docs.aws.amazon.com/redshift/latest/dg/load ing-data-from-emr.html Check out this Amazon EMR Cheat Sheet: https://tutorialsdojo.com/amazon-emr/", + "references": "" + }, + { + "question": ": A company has a dynamic web app written in MEAN sta ck that is going to be launched in the next month. There is a probability that the traffic will be qui te high in the first couple of weeks. In the event of a load failure, how can you set up DNS failover to a stati c website?", + "options": [ + "A. A. Add more servers in case the application fails .", + "B. B. Duplicate the exact application architecture i n another region and configure DNS weight-", + "C. C. Enable failover to an application hosted in an on-premises data center.", + "D. D. Use Route 53 with the failover option to a sta tic S3 website bucket or CloudFront" + ], + "correct": "D. D. Use Route 53 with the failover option to a sta tic S3 website bucket or CloudFront", + "explanation": "Explanation For this scenario, using Route 53 with the failover option to a static S3 website bucket or CloudFront distribution is correct. You can create a new Route 53 with the failover option to a static S3 website bucket or CloudFront distribution as an alternative . Duplicating the exact application architecture in a nother region and configuring DNS weight-based routing is incorrect because running a duplicate sy stem is not a cost-effective solution. Remember tha t you are trying to build a failover mechanism for your web a pp, not a distributed setup. Enabling failover to an application hosted in an on -premises data center is incorrect. Although you ca n set up failover to your on-premises data center, you are n ot maximizing the AWS environment such as using Route 53 failover. Adding more servers in case the application fails i s incorrect because this is not the best way to han dle a failover event. If you add more servers only in cas e the application fails, then there would be a peri od of downtime in which your application is unavailable. Since there are no running servers on that period, your application will be unavailable for a certain perio d of time until your new server is up and running.", + "references": "https://aws.amazon.com/premiumsupport/knowledge-cen ter/fail-over-s3-r53/ http://docs.aws.amazon.com/Route53/latest/Developer Guide/dns-failover.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/" + }, + { + "question": ": A company is running a custom application in an Aut o Scaling group of Amazon EC2 instances. 
Several instances are failing due to insufficient swap spac e. The Solutions Architect has been instructed to troubleshoot the issue and effectively monitor the available swap space of each EC2 instance. Which of the following options fulfills this requir ement?", + "options": [ + "A. A. Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.", + "B. B. Create a new trail in AWS CloudTrail and confi gure Amazon CloudWatch Logs to monitor", + "C. C. Create a CloudWatch dashboard and monitor the SwapUsed metric.", + "D. D. Enable detailed monitoring on each instance an d monitor the SwapUtilization metric." + ], + "correct": "A. A. Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.", + "explanation": "Explanation Amazon CloudWatch is a monitoring service for AWS c loud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and t rack metrics, collect and monitor log files, and se t alarms. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as we ll as custom metrics generated by your applications and services, and any log files your a pplications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilizati on, application performance, and operational health . The main requirement in the scenario is to monitor the SwapUtilization metric. Take note that you can' t use the default metrics of CloudWatch to monitor the Sw apUtilization metric. To monitor custom metrics, yo u must install the CloudWatch agent on the EC2 instan ce. After installing the CloudWatch agent, you can now collect system metrics and log files of an EC2 instance. Hence, the correct answer is: Install the CloudWatc h agent on each instance and monitor the SwapUtilization metric. The option that says: Enable detailed monitoring on each instance and monitor the SwapUtilization metric is incorrect because you can't monitor the S wapUtilization metric by just enabling the detailed monitoring option. You must install the CloudWatch agent on the instance. The option that says: Create a CloudWatch dashboard and monitor the SwapUsed metric is incorrect because you must install the CloudWatch agent first to add the custom metric in the dashboard. The option that says: Create a new trail in AWS Clo udTrail and configure Amazon CloudWatch Logs to monitor your trail logs is incorrect because Clo udTrail won't help you monitor custom metrics. CloudTrail is specifically used for monitoring API activities in an AWS account. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /mon-scripts.html https://aws.amazon.com/cloudwatch/faqs/ Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Amazon CloudWatch Overview: https://www.youtube.com/watch?v=q0DmxfyGkeU", + "references": "" + }, + { + "question": ": A start-up company has an EC2 instance that is host ing a web application. The volume of users is expec ted to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the abov e requirement for the given scenario? (Select TWO.)", + "options": [ + "A. A. Set up an AWS WAF behind your EC2 Instance.", + "B. B. Set up an S3 Cache in front of the EC2 instanc e.", + "C. C. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue.", + "D. D. 
Set up two EC2 instances and use Route 53 to r oute traffic based on a Weighted Routing" + ], + "correct": "", + "explanation": "Explanation Using an Elastic Load Balancer is an ideal solution for adding elasticity to your application. Alterna tively, you can also create a policy in Route 53, such as a Weighted routing policy, to evenly distribute the traffic to 2 or more EC2 instances. Hence, setting up two E C2 instances and then put them behind an Elastic Load balancer (ELB) and setting up two EC2 instance s and using Route 53 to route traffic based on a Weighted Routing Policy are the correct answers. Setting up an S3 Cache in front of the EC2 instance is incorrect because doing so does not provide elasticity and scalability to your EC2 instances. Setting up an AWS WAF behind your EC2 Instance is i ncorrect because AWS WAF is a web application firewall that helps protect your web ap plications from common web exploits. This service i s more on providing security to your applications. Setting up two EC2 instances deployed using Launch Templates and integrated with AWS Glue is incorrect because AWS Glue is a fully managed extra ct, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for an alytics. It does not provide scalability or elastic ity to your instances. References: https://aws.amazon.com/elasticloadbalancing http://docs.aws.amazon.com/Route53/latest/Developer Guide/Welcome.html Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", + "references": "" + }, + { + "question": ": A company plans to migrate its suite of containeriz ed applications running on-premises to a container service in AWS. The solution must be cloud-agnostic and use an open-source platform that can automatically manage containerized workloads and se rvices. It should also use the same configuration a nd tools across various production environments. What should the Solution Architect do to properly m igrate and satisfy the given requirement?", + "options": [ + "A. A. Migrate the application to Amazon Container Re gistry (ECR) with Amazon EC2 instance", + "B. B. Migrate the application to Amazon Elastic Kube rnetes Service with EKS worker nodes.", + "C. C. Migrate the application to Amazon Elastic Cont ainer Service with ECS tasks that use the", + "D. D. Migrate the application to Amazon Elastic Cont ainer Service with ECS tasks that use the" + ], + "correct": "B. B. Migrate the application to Amazon Elastic Kube rnetes Service with EKS worker nodes.", + "explanation": "Explanation Amazon EKS provisions and scales the Kubernetes con trol plane, including the API servers and backend persistence layer, across multiple AWS availability zones for high availability and fault tolerance. A mazon EKS automatically detects and replaces unhealthy contro l plane nodes and provides patching for the control plane. Amazon EKS is integrated with many AWS services to provide scalability and security for your applications. These services include Elastic Load B alancing for load distribution, IAM for authenticat ion, Amazon VPC for isolation, and AWS CloudTrail for lo gging. To migrate the application to a container service, you can use Amazon ECS or Amazon EKS. But the key point in this scenario is cloud-agnostic and open-s ource platform. Take note that Amazon ECS is an AWS proprietary container service. 
This means that it i s not an open-source platform. Amazon EKS is a port able, extensible, and open-source platform for managing c ontainerized workloads and services. Kubernetes is considered cloud-agnostic because it allows you to move your containers to other cloud service provide rs. Amazon EKS runs up-to-date versions of the open-sou rce Kubernetes software, so you can use all of the existing plugins and tools from the Kubernetes comm unity. Applications running on Amazon EKS are fully compatible with applications running on any standar d Kubernetes environment, whether running in on- premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modifica tion required. Hence, the correct answer is: Migrate the applicati on to Amazon Elastic Kubernetes Service with EKS worker nodes. The option that says: Migrate the application to Am azon Container Registry (ECR) with Amazon EC2 instance worker nodes is incorrect because Amazon E CR is just a fully-managed Docker container registry. Also, this option is not an open-source p latform that can manage containerized workloads and services. The option that says: Migrate the application to Am azon Elastic Container Service with ECS tasks that use the AWS Fargate launch type is incorrect b ecause it is stated in the scenario that you have t o migrate the application suite to an open-source platform. A WS Fargate is just a serverless compute engine for containers. It is not cloud-agnostic since you cann ot use the same configuration and tools if you move d it to another cloud service provider such as Microsoft Az ure or Google Cloud Platform (GCP). The option that says: Migrate the application to Am azon Elastic Container Service with ECS tasks that use the Amazon EC2 launch type. is incorrect b ecause Amazon ECS is an AWS proprietary managed container orchestration service. You should use Amazon EKS since Kubernetes is an open-source platform and is considered cloud-agnostic. With Kub ernetes, you can use the same configuration and too ls that you're currently using in AWS even if you move your containers to another cloud service provider. References: https://docs.aws.amazon.com/eks/latest/userguide/wh at-is-eks.html https://aws.amazon.com/eks/faqs/ Check out our library of AWS Cheat Sheets: https://tutorialsdojo.com/links-to-all-aws-cheat-sh eets/", + "references": "" + }, + { + "question": ": A company recently adopted a hybrid architecture th at integrates its on-premises data center to AWS cl oud. You are assigned to configure the VPC and implement the required IAM users, IAM roles, IAM groups, and IAM policies. In this scenario, what is the best practice when cr eating IAM policies?", + "options": [ + "A. A. Determine what users need to do and then craft policies for them that let the users perform", + "B. B. Grant all permissions to any EC2 user.", + "C. C. Use the principle of least privilege which mea ns granting only the permissions required to", + "D. D. Use the principle of least privilege which mea ns granting only the least number of people" + ], + "correct": "C. C. Use the principle of least privilege which mea ns granting only the permissions required to", + "explanation": "Explanation One of the best practices in AWS IAM is to grant le ast privilege. When you create IAM policies, follow the standard s ecurity advice of granting least privilege--that is , granting only the permissions required to perform a task. 
De termine what users need to do and then craft polici es for them that let the users perform only those tasks. Therefore, using the principle of least privilege w hich means granting only the permissions required to perform a task is the correct answer. Start with a minimum set of permissions and grant a dditional permissions as necessary. Defining the ri ght set of permissions requires some understanding of the u ser's objectives. Determine what is required for th e specific task, what actions a particular service su pports, and what permissions are required in order to perform those actions. Granting all permissions to any EC2 user is incorre ct since you don't want your users to gain access t o everything and perform unnecessary actions. Doing s o is not a good security practice. Using the principle of least privilege which means granting only the least number of people with full root access is incorrect because this is not the co rrect definition of what the principle of least pri vilege is. Determining what users need to do and then craft po licies for them that let the users perform those tasks including additional administrative operation s is incorrect since there are some users who you should not give administrative access to. You shoul d follow the principle of least privilege when prov iding permissions and accesses to your resources.", + "references": "https://docs.aws.amazon.com/IAM/latest/UserGuide/be st-practices.html#use-groups-for-permissions Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/ Service Control Policies (SCP) vs IAM Policies: https://tutorialsdojo.com/service-control-policies- scp-vs-iam-policies/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" + }, + { + "question": "A company hosted a web application on a Linux Amazo n EC2 instance in the public subnet that uses a def ault network ACL. The instance uses a default security g roup and has an attached Elastic IP address. The network ACL has been configured to block all tr affic to the instance. The Solutions Architect must allow incoming traffic on port 443 to access the applicat ion from any source. Which combination of steps will accomplish this req uirement? (Select TWO.)", + "options": [ + "A. A. In the Network ACL, update the rule to allow i nbound TCP connection on port 443 from source 0.0.0 .0/0", + "B. B. In the Security Group, add a new rule to allow TCP connection on port 443 from source 0.0.0.0/0", + "C. C. In the Security Group, create a new rule to al low TCP connection on port 443 to destination 0.0.0 .0/0", + "D. D. In the Network ACL, update the rule to allow o utbound TCP connection on port 32768 - 65535 to", + "A. It enables you to establish a private and dedica ted network connection between your network and you r VPC", + "B. It provides a cost-effective, hybrid connection from your VPC to your on-premises data centers whic h", + "C. It allows you to connect your AWS cloud resource s to your on-premises data center using secure and private", + "D. It provides a networking connection between two VPCs which enables you to route traffic between the m" + ], + "correct": "C. 
It allows you to connect your AWS cloud resource s to your on-premises data center using secure and private", + "explanation": "Explanation Amazon VPC offers you the flexibility to fully mana ge both sides of your Amazon VPC connectivity by cr eating a VPN connection between your remote network and a so ftware VPN appliance running in your Amazon VPC network. This option is recommended if y ou must manage both ends of the VPN connection either for compliance purposes or for leveraging ga teway devices that are not currently supported by Amazon VPC's VPN solution. You can connect your Amazon VPC to remote networks and users using the following VPN connectivity options: AWS Site-to-Site VPN - creates an IPsec VPN connect ion between your VPC and your remote network. On the AWS side of the Site-to-Site VPN connection, a virtual private gateway or transit gateway provi des two VPN endpoints (tunnels) for automatic failover. AWS Client VPN - a managed client-based VPN service that provides secure TLS VPN connections between your AWS resources and on-premises networks. AWS VPN CloudHub - capable of wiring multiple AWS S ite-to-Site VPN connections together on a virtual private gateway. This is useful if you want to enab le communication between different remote networks that uses a Site-to-Site VPN connection. Third-party software VPN appliance - You can create a VPN connection to your remote network by using a n Amazon EC2 instance in your VPC that's running a th ird party software VPN appliance. 5 of 137 With a VPN connection, you can connect to an Amazon VPC in the cloud the same way you connect to your branches while establishing secure and private sess ions with IP Security (IPSec) or Transport Layer Security (TLS) tunnels. Hence, the correct answer is the option that says: It allows you to connect your AWS cloud resources t o your on-premises data center using secure and private se ssions with IP Security (IPSec) or Transport Layer Security (TLS) tunnels since one of the main advantages of having a VPN connection is that you w ill be able to connect your Amazon VPC to other remote net works securely. The option that says: It provides a cost-effective, hybrid connection from your VPC to your on-premise s data centers which bypasses the public Internet is incor rect. Although it is true that a VPN provides a cost-effective, hybrid connection from y our VPC to your on-premises data centers, it certai nly does not bypass the public Internet. A VPN connection ac tually goes through the public Internet, unlike the AWS Direct Connect connection which has a direct an d dedicated connection to your on-premises network. The option that says: It provides a networking conn ection between two VPCs which enables you to route traffic between them using private IPv4 addresses or IPv6 a ddresses is incorrect because this actually describes VPC Peering and not a VPN connec tion. The option that says: It enables you to establish a private and dedicated network connection between y our network and your VPC is incorrect because this is t he advantage of an AWS Direct Connect connection an d not a VPN. 
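To make the moving parts concrete, here is a hedged boto3 sketch of setting up a Site-to-Site VPN; the VPC ID, on-premises public IP, and ASN are placeholders, and a real deployment still needs route propagation plus the configuration of the on-premises VPN device:
import boto3
ec2 = boto3.client('ec2')
vpc_id = 'vpc-0123456789abcdef0'     # placeholder VPC
on_prem_public_ip = '198.51.100.7'   # placeholder public IP of the on-premises device
# The customer gateway represents the on-premises side of the tunnel.
cgw = ec2.create_customer_gateway(Type='ipsec.1', PublicIp=on_prem_public_ip, BgpAsn=65000)['CustomerGateway']
# The virtual private gateway is the AWS side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type='ipsec.1')['VpnGateway']
ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw['VpnGatewayId'])
# The VPN connection provides two IPsec tunnels between the gateways.
vpn = ec2.create_vpn_connection(
    Type='ipsec.1',
    CustomerGatewayId=cgw['CustomerGatewayId'],
    VpnGatewayId=vgw['VpnGatewayId'],
    Options={'StaticRoutesOnly': True}
)['VpnConnection']
print(vpn['VpnConnectionId'])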
References: http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/vpn-connections.html https://docs.aws.amazon.com/whitepapers/latest/aws- vpc-connectivity-options/software-vpn-network-to- amazon.html Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company has an e-commerce application that saves the transaction logs to an S3 bucket. You are instr ucted by the CTO to configure the application to keep the transaction logs for one month for troubleshooting purposes, and then afterward, purge the logs. What should you do to accomplish this requirement?", + "options": [ + "A. A. Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a", + "B. B. Add a new bucket policy on the Amazon S3 bucke t.", + "C. C. Create a new IAM policy for the Amazon S3 buck et that automatically deletes the logs after a mont h", + "D. D. Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion of data" + ], + "correct": "A. A. Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a", + "explanation": "Explanation/Reference: In this scenario, the best way to accomplish the re quirement is to simply configure the lifecycle conf iguration rules on the Amazon S3 bucket to purge the transact ion logs after a month. Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified a s follows: Transition actions In which you define when object s transition to another storage class. For example, you may choose to transition objects to the STANDAR D_IA (IA, for infrequent access) storage class 30 days after creation or archive objects to the GLACI ER storage class one year after creation. Expiration actions In which you specify when the o bjects expire. Then Amazon S3 deletes the expired objects on your behalf. Hence, the correct answer is: Configure the lifecyc le configuration rules on the Amazon S3 bucket to purge the transaction logs after a month. The option that says: Add a new bucket policy on th e Amazon S3 bucket is incorrect as it does not provide a solution to any of your needs in this sce nario. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for th e bucket and the objects in it. The option that says: Create a new IAM policy for t he Amazon S3 bucket that automatically deletes the logs after a month is incorrect because IAM pol icies are primarily used to specify what actions ar e allowed or denied on your S3 buckets. You cannot co nfigure an IAM policy to automatically purge logs f or you in any way. The option that says: Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion of data is incorrect. CORS allows client web applications that are loaded in one doma in to interact with resources in a different domain. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lifecycle-mgmt.html https://docs.amazonaws.cn/en_us/AmazonS3/latest/use rguide/lifecycle-transition-general-considerations. 
html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a large insura nce firm. To maintain compliance with HIPAA laws, a ll data that is backed up or stored on Amazon S3 needs to b e encrypted at rest. In this scenario, what is the best method of encryp tion for the data, assuming S3 is being used for st oring financial-related data? (Select TWO.)", + "options": [ + "A. A. Store the data in encrypted EBS snapshots", + "B. B. Encrypt the data using your own encryption key s then copy the data to Amazon S3 over HTTPS", + "C. C. Enable SSE on an S3 bucket to make use of AES- 256 encryption", + "D. D. Store the data on EBS volumes with encryption enabled instead of using Amazon S3" + ], + "correct": "", + "explanation": "Explanation Explanation/Reference: Data protection refers to protecting data while in- transit (as it travels to and from Amazon S3) and a t rest (while it is stored on disks in Amazon S3 data centers). Y ou can protect data in transit by using SSL or by u sing client- side encryption. You have the following options for protecting data at rest in Amazon S3. Use Server-Side Encryption You request Amazon S3 t o encrypt your object before saving it on disks in its data centers and decrypt it when you download the object s. Use Client-Side Encryption You can encrypt data cl ient-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process , the encryption keys, and related tools. Hence, the following options are the correct answer s: - Enable SSE on an S3 bucket to make use of AES-256 encryption - Encrypt the data using your own encryption keys t hen copy the data to Amazon S3 over HTTPS endpoints . This refers to using a Server-Side Encryption with Customer-Provided Keys (SSE-C). Storing the data in encrypted EBS snapshots and sto ring the data on EBS volumes with encryption enable d instead of using Amazon S3 are both incorrect becau se all these options are for protecting your data i n your EBS volumes. Note that an S3 bucket does not use EB S volumes to store your data. Using AWS Shield to protect your data at rest is in correct because AWS Shield is mainly used to protec t your entire VPC against DDoS attacks. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ser v-side-encryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngClientSideEncryption.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A Solutions Architect working for a startup is desi gning a High Performance Computing (HPC) applicatio n which is publicly accessible for their customers. T he startup founders want to mitigate distributed de nial- of-service (DDoS) attacks on their application. Which of the following options are not suitable to be implemented in this scenario? (Select TWO.) A. A. Use Dedicated EC2 instances to ensure that each instance has the maximum performance possible.", + "options": [ + "B. B. Add multiple Elastic Fabric Adapters (EFA) to each EC2 instance to increase the network bandwidth .", + "C. C. Use an Application Load Balancer with Auto Sca ling groups for your EC2 instances. Prevent direct", + "D. D. Use AWS Shield and AWS WAF." 
+ ], + "correct": "", + "explanation": "Explanation/Reference: Take note that the question asks about the viable m itigation techniques that are NOT suitable to preve nt Distributed Denial of Service (DDoS) attack. A Denial of Service (DoS) attack is an attack that can make your website or application unavailable to end users. To achieve this, attackers use a variety of techniques that consume network or other resources, disrupting access for legitimate end users. To protect your system from DDoS attack, you can do the following: - Use an Amazon CloudFront service for distributing both static and dynamic content. - Use an Application Load Balancer with Auto Scalin g groups for your EC2 instances then restrict direc t Internet traffic to your Amazon RDS database by dep loying to a private subnet. - Set up alerts in Amazon CloudWatch to look for hi gh Network In and CPU utilization metrics. Services that are available within AWS Regions, lik e Elastic Load Balancing and Amazon Elastic Compute Cloud (EC2), allow you to build Distributed Denial of Service resiliency and scale to handle unexpecte d volumes of traffic within a given region. Services that are available in AWS edge locations, like Amaz on CloudFront, AWS WAF, Amazon Route53, and Amazon API Gateway, allow you to take advantage of a global network of edge locations that can provide y our application with greater fault tolerance and increased scale for managing larger volumes of traf fic. In addition, you can also use AWS Shield and AWS WA F to fortify your cloud network. AWS Shield is a managed DDoS protection service that is available i n two tiers: Standard and Advanced. AWS Shield Standard applies always-on detection and inline mit igation techniques, such as deterministic packet filtering and priority-based traffic shaping, to mi nimize application downtime and latency. AWS WAF is a web application firewall that helps pr otect web applications from common web exploits that could affect application availability, comprom ise security, or consume excessive resources. You c an use AWS WAF to define customizable web security rul es that control which traffic accesses your web applications. If you use AWS Shield Advanced, you c an use AWS WAF at no extra cost for those protected resources and can engage the DRT to create WAF rule s. Using Dedicated EC2 instances to ensure that each i nstance has the maximum performance possible is not a viable mitigation technique because Dedica ted EC2 instances are just an instance billing opti on. Although it may ensure that each instance gives the maximum performance, that by itself is not enough to mitigate a DDoS attack. Adding multiple Elastic Fabric Adapters (EFA) to ea ch EC2 instance to increase the network bandwidth is also not a viable option as this is ma inly done for performance improvement, and not for DDoS attack mitigation. Moreover, you can attach on ly one EFA per EC2 instance. An Elastic Fabric Adapter (EFA) is a network device that you can atta ch to your Amazon EC2 instance to accelerate High- Performance Computing (HPC) and machine learning ap plications. The following options are valid mitigation techniqu es that can be used to prevent DDoS: - Use an Amazon CloudFront service for distributing both static and dynamic content. - Use an Application Load Balancer with Auto Scalin g groups for your EC2 instances. Prevent direct Internet traffic to your Amazon RDS database by dep loying it to a new private subnet. - Use AWS Shield and AWS WAF. 
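To illustrate the CloudWatch alerting technique mentioned above, here is a minimal boto3 sketch (not part of the original scenario) that raises an alarm when an instance's inbound traffic spikes; the alarm name, instance ID, threshold, and SNS topic ARN are hypothetical placeholders:
import boto3

cloudwatch = boto3.client('cloudwatch')

# Minimal sketch: alarm on unusually high inbound traffic for one instance.
# The alarm name, instance ID, threshold, and SNS topic below are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName='tdojo-high-network-in',
    Namespace='AWS/EC2',
    MetricName='NetworkIn',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    Statistic='Sum',
    Period=300,                      # evaluate 5-minute windows
    EvaluationPeriods=2,             # two consecutive breaches before alarming
    Threshold=5000000000,            # assumed limit of roughly 5 GB per 5 minutes
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:ddos-alerts']
)
The alarm action could notify the operations team or trigger an automated response while the WAF and Shield protections described above absorb the attack traffic.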
References: https://aws.amazon.com/answers/networking/aws-ddos- attack-mitigation/ https://d0.awsstatic.com/whitepapers/DDoS_White_Pap er_June2015.pdf Best practices on DDoS Attack Mitigation: https://youtu.be/HnoZS5jj7pk/", + "references": "" + }, + { + "question": "An application needs to retrieve a subset of data f rom a large CSV file stored in an Amazon S3 bucket by using simple SQL expressions. The queries are made within Amazon S3 and must only return the needed da ta. Which of the following actions should be taken?", + "options": [ + "A. A. Perform an S3 Select operation based on the buck et's name and object's metadata. B. B. Perform an S3 Select operation based on the buck et's name and object tags.", + "C. C. Perform an S3 Select operation based on the bu cket's name.", + "D. D. Perform an S3 Select operation based on the bu cket's name and object's key." + ], + "correct": "D. D. Perform an S3 Select operation based on the bu cket's name and object's key.", + "explanation": "Explanation/Reference: S3 Select enables applications to retrieve only a s ubset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only th e data needed by your application, you can achieve drastic performance increases. Amazon S3 is composed of buckets, object keys, obje ct metadata, object tags, and many other components as shown below: An Amazon S3 bucket name is globally unique, and th e namespace is shared by all AWS accounts. An Amazon S3 object key refers to the key name, whi ch uniquely identifies the object in the bucket. An Amazon S3 object metadata is a name-value pair t hat provides information about the object. . An Amazon S3 object tag is a key-pair value used fo r object tagging to categorize storage. You can perform S3 Select to query only the necessa ry data inside the CSV files based on the bucket's name and the object's key. The following snippet below shows how it is done us ing boto3 ( AWS SDK for Python ): Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam client = boto3.client('s3') resp = client.select_object_content( Bucket='tdojo-bucket', # Bucket Name. Key='s3-select/tutorialsdojofile.csv', # Object Key . ExpressionType= 'SQL', . Expression = \"select \\\"Sample\\\" from s3object s whe re s.\\\"tutorialsdojofile\\\" in ['A', 'B']\" Hence, the correct answer is the option that says: Perform an S3 Select operation based on the bucket' s name and object's key. The option that says: Perform an S3 Select operatio n based on the bucket's name and object's metadata is incorrect because metadata is not needed when query ing subsets of data in an object using S3 Select. The option that says: Perform an S3 Select operatio n based on the bucket's name and object tags is incorrect because object tags just provide addition al information to your object. This is not needed w hen querying with S3 Select although this can be useful for S3 Batch Operations. You can categorize object s based on tag values to provide S3 Batch Operations with a list of objects to operate on. . The option that says: Perform an S3 Select operatio n based on the bucket's name is incorrect because you need both the bucket's name and the object key to successfully perform an S3 Select operation. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/s3- glacier-select-sql-reference-select.html . 
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" },
{ "question": "A startup has resources deployed on the AWS Cloud. It is now going through a set of scheduled audits by an external auditing firm for compliance. Which of the following services available in AWS can be utilized to help ensure the right information is present for auditing purposes?",
"options": [
"A. A. Amazon CloudWatch",
"B. B. Amazon EC2",
"C. C. AWS CloudTrail",
"D. D. Amazon VPC"
],
"correct": "C. C. AWS CloudTrail",
"explanation": "Explanation/Reference: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. CloudTrail provides visibility into user activity by recording actions taken on your account. CloudTrail records important information about each action, including who made the request, the services used, the actions performed, parameters for the actions, and the response elements returned by the AWS service. This information helps you to track changes made to your AWS resources and troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards. Hence, the correct answer is: AWS CloudTrail. Amazon VPC is incorrect because a VPC is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. It does not provide the auditing information that was asked for in this scenario. Amazon EC2 is incorrect because EC2 is a service that provides secure, resizable compute capacity in the cloud and does not provide the needed information in this scenario, just like the option above. Amazon CloudWatch is incorrect because this is a monitoring tool for your AWS resources. Like the above options, it does not provide the needed information to satisfy the requirement in the scenario.",
"references": "https://aws.amazon.com/cloudtrail/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/" },
{ "question": "A Solutions Architect is designing a highly available environment for an application. She plans to host the application on EC2 instances within an Auto Scaling Group. One of the conditions requires data stored on root EBS volumes to be preserved if an instance terminates. What should be done to satisfy the requirement?",
"options": [
"A. A. Enable the Termination Protection option for all EC2 instances.",
"B. B. Set the value of DeleteOnTermination attribute of the EBS volumes to False.",
"C. C. Configure ASG to suspend the health check process for each EC2 instance.",
"D. D. Use AWS DataSync to replicate root volume data to Amazon S3."
],
"correct": "B. B. Set the value of DeleteOnTermination attribute of the EBS volumes to False.",
"explanation": "Explanation/Reference: By default, Amazon EBS root device volumes are automatically deleted when the instance terminates.
However, by default, any additional EBS volumes tha t you attach at launch, or any EBS volumes that you attach to an existing instance persist even after t he instance terminates. This behavior is controlled by the volume's DeleteOnTermination attribute, which you c an modify. . To preserve the root volume when an instance termin ates, change the DeleteOnTermination attribute for the root volume to False. This EBS attribute can be changed through the AWS C onsole upon launching the instance or through CLI/API command. Hence, the correct answer is the option that says: Set the value of DeleteOnTermination attribute of t he EBS volumes to False. The option that says: Use AWS DataSync to replicate root volume data to Amazon S3 is incorrect because AWS DataSync does not work with Amazon EBS volumes. DataSync can copy data between Network File System (NFS) shares, Server Message Bl ock (SMB) shares, self-managed object storage AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windo ws File Server file systems. The option that says: Configure ASG to suspend the health check process for each EC2 instance is incorrect because suspending the health check proce ss will prevent the ASG from replacing unhealthy EC 2 instances. This can cause availability issues to th e application. The option that says: Enable the Termination Protec tion option for all EC2 instances is incorrect. Termination Protection will just prevent your insta nce from being accidentally terminated using the Amazon EC2 console. References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/deleteontermination-ebs/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /terminating-instances.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A large telecommunications company needs to run ana lytics against all combined log files from the Appl ication Load Balancer as part of the regulatory requirement s. Which AWS services can be used together to collect logs and then easily perform log analysis?", + "options": [ + "A. A. Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a c ustom-", + "B. B. Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.", + "C. C. Amazon DynamoDB for storing and EC2 for analyz ing the logs.", + "D. D. Amazon EC2 with EBS volumes for storing and an alyzing the log files." + ], + "correct": "B. B. Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.", + "explanation": "Explanation/Reference: In this scenario, it is best to use a combination o f Amazon S3 and Amazon EMR: Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files. Access logging in the ELB is stored in Amazon S3 which means that the following are valid options: - Amazon S3 for storing the ELB log files and an EC 2 instance for analyzing the log files using a cust om-built application. - Amazon S3 for storing ELB log files and Amazon EM R for analyzing the log files. However, log analysis can be automatically provided by Amazon EMR, which is more economical than . building a custom-built log analysis application an d hosting it in EC2. Hence, the option that says: A mazon S3 for storing ELB log files and Amazon EMR for ana lyzing the log files is the best answer between the two. 
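To connect the chosen answer to an actual configuration step, the following boto3 sketch enables access logging on an Application Load Balancer so that the log files land in the S3 bucket that Amazon EMR would later analyze. The load balancer ARN and bucket name are hypothetical placeholders, and the bucket policy must already allow Elastic Load Balancing to write to it:
import boto3

elbv2 = boto3.client('elbv2')

# Minimal sketch: turn on ALB access logging to an S3 bucket.
# The load balancer ARN, bucket name, and prefix are placeholder assumptions.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn='arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/tdojo-alb/50dc6c495c0c9188',
    Attributes=[
        {'Key': 'access_logs.s3.enabled', 'Value': 'true'},
        {'Key': 'access_logs.s3.bucket', 'Value': 'tdojo-elb-access-logs'},
        {'Key': 'access_logs.s3.prefix', 'Value': 'alb'}
    ]
)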
Access logging is an optional feature of Elastic Lo ad Balancing that is disabled by default. After you enable access logging for your load balancer, Elast ic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time. . Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically sca lable Amazon EC2 instances. It securely and reliabl y handles a broad set of big data use cases, includin g log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific si mulation, and bioinformatics. You can also run othe r popular distributed frameworks such as Apache Spark , HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. The option that says: Amazon DynamoDB for storing a nd EC2 for analyzing the logs is incorrect because DynamoDB is a noSQL database solution of AW S. It would be inefficient to store logs in DynamoDB while using EC2 to analyze them. The option that says: Amazon EC2 with EBS volumes f or storing and analyzing the log files is incorrect because using EC2 with EBS would be costly, and EBS might not provide the most durable storage for you r logs, unlike S3. The option that says: Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application is incor rect because using EC2 to analyze logs would be inefficient and expensive since you will have to pr ogram the analyzer yourself. References: . https://aws.amazon.com/emr/ https://docs.aws.amazon.com/elasticloadbalancing/la test/application/load-balancer-access-logs.html", + "references": "" + }, + { + "question": "A company deployed a high-performance computing (HP C) cluster that spans multiple EC2 instances across multiple Availability Zones and processes various w ind simulation models. Currently, the Solutions Architect is experiencing a slowdown in their appli cations and upon further investigation, it was disc overed that it was due to latency issues. Which is the MOST suitable solution that the Soluti ons Architect should implement to provide low-laten cy network performance necessary for tightly-coupled n ode-to-node communication of the HPC cluster?", + "options": [ + "A. A. Set up AWS Direct Connect connections across m ultiple Availability Zones for increased", + "B. B. Set up a spread placement group across multipl e Availability Zones in multiple AWS Regions.", + "C. C. Set up a cluster placement group within a sing le Availability Zone in the same AWS Region.", + "D. D. Use EC2 Dedicated Instances." + ], + "correct": "C. C. Set up a cluster placement group within a sing le Availability Zone in the same AWS Region.", + "explanation": "Explanation/Reference: When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can u se . placement groups to influence the placement of a gr oup of interdependent instances to meet the needs o f your workload. Depending on the type of workload, y ou can create a placement group using one of the following placement strategies: Cluster packs instances close together inside an A vailability Zone. 
This strategy enables workloads t o achieve the low-latency network performance necessa ry for tightly-coupled node-to-node communication that is typical of HPC applications. . Partition spreads your instances across logical pa rtitions such that groups of instances in one parti tion do not share the underlying hardware with groups of in stances in different partitions. This strategy is t ypically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka. Spread strictly places a small group of instances across distinct underlying hardware to reduce corre lated failures. . Cluster placement groups are recommended for applic ations that benefit from low network latency, high network throughput, or both. They are also recommen ded when the majority of the network traffic is between the instances in the group. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choos e an instance type that supports enhanced networking. Partition placement groups can be used to deploy la rge distributed and replicated workloads, such as H DFS, HBase, and Cassandra, across distinct racks. When y ou launch instances into a partition placement grou p, Amazon EC2 tries to distribute the instances evenly across the number of partitions that you specify. You can also launch instances into a specific partition to have more control over where the instances are placed. Spread placement groups are recommended for applica tions that have a small number of critical instance s that should be kept separate from each other. Launc hing instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks. Spread placement groups provide access to distinct racks, and are th erefore suitable for mixing instance types or launc hing instances over time. A spread placement group can s pan multiple Availability Zones in the same Region. You can have a maximum of seven running instances p er Availability Zone per group. . Hence, the correct answer is: Set up a cluster plac ement group within a single Availability Zone in th e same AWS Region. . The option that says: Set up a spread placement gro up across multiple Availability Zones in multiple AWS Regions is incorrect because although using a p lacement group is valid for this particular scenari o, you can only set up a placement group in a single A WS Region only. A spread placement group can span multiple Availability Zones in the same Region. The option that says: Set up AWS Direct Connect con nections across multiple Availability Zones for increased bandwidth throughput and more consistent network experience is incorrect because this is primarily used for hybrid architectures. It bypasse s the public Internet and establishes a secure, ded icated connection from your on-premises data center into A WS, and not used for having low latency within yourAWS network. The option that says: Use EC2 Dedicated Instances i s incorrect because these are EC2 instances that ru n in a VPC on hardware that is dedicated to a single cus tomer and are physically isolated at the host hardw are level from instances that belong to other AWS accou nts. It is not used for reducing latency. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ placement-groups.html https://aws.amazon.com/hpc/ Check out this Amazon EC2 Cheat Sheet: . 
https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam An investment bank is working with an IT team to ha ndle the launch of the new digital wallet system. T he applications will run on multiple EBS-backed EC2 in stances which will store the logs, transactions, an d billing statements of the user in an S3 bucket. Due to tight security and compliance requirements, the IT team is exploring options on how to safely store se nsitive data on the EBS volumes and S3. Which of the below options should be carried out wh en storing sensitive data on AWS? (Select TWO.)", + "options": [ + "A. A. Create an EBS Snapshot", + "B. B. Enable Amazon S3 Server-Side or use Client-Sid e Encryption", + "C. C. Enable EBS Encryption", + "D. D. Migrate the EC2 instances from the public to p rivate subnet." + ], + "correct": "", + "explanation": "Explanation/Reference: Enabling EBS Encryption and enabling Amazon S3 Serv er-Side or use Client-Side Encryption are correct. Amazon EBS encryption offers a simple encryption so lution for your EBS volumes without the need to bui ld, maintain, and secure your own key management infras tructure. . In Amazon S3, data protection refers to protecting data while in-transit (as it travels to and from Am azon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit b y using SSL or by using client-side encryption. You have the fo llowing options to protect data at rest in Amazon S3. Use Server-Side Encryption You request Amazon S3 t o encrypt your object before saving it on disks in its data centers and decrypt it when you download t he objects. Use Client-Side Encryption You can encrypt data cl ient-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process , the encryption keys, and related tools. Creating an EBS Snapshot is incorrect because this is a backup solution of EBS. It does not provide security of data inside EBS volumes when executed. Migrating the EC2 instances from the public to priv ate subnet is incorrect because the data you want t o secure are those in EBS volumes and S3 buckets. Mov ing your EC2 instance to a private subnet involves a different matter of security practice, which does n ot achieve what you want in this scenario. Using AWS Shield and WAF is incorrect because these protect you from common security threats for your web applications. However, what you are trying to achieve is securing and encrypting your data in side EBS and S3. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ EBSEncryption.html http://docs.aws.amazon.com/AmazonS3/latest/dev/Usin gEncryption.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a large IT con sulting firm. One of the clients is launching a fil e sharing web application in AWS which requires a dur able storage service for hosting their static conte nts such as PDFs, Word Documents, high-resolution image s, and many others. Which type of storage service should the Architect use to meet this requirement?", + "options": [ + "A. A. Amazon RDS instance", + "B. B. Amazon EBS volume", + "C. C. Amazon EC2 instance store", + "D. D. Amazon S3" + ], + "correct": "D. D. Amazon S3", + "explanation": "Explanation/Reference: Amazon S3 is storage for the Internet. 
It's a simple storage service that offers software developers a durable, highly-scalable, reliable, and low-latency data storage infrastructure at very low costs. Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects.
Remember that the scenario requires a durable storage for static content. These two keywords are actually referring to S3, since it is highly durable and suitable for storing static content. Hence, Amazon S3 is the correct answer.
Amazon EBS volume is incorrect because this is not as durable compared with S3. In addition, it is best to store the static contents in S3 rather than EBS.
Amazon EC2 instance store is incorrect because it is definitely not suitable - the data it holds will be wiped out immediately once the EC2 instance is restarted.
Amazon RDS instance is incorrect because an RDS instance is just a database and not suitable for storing static content. By default, RDS is not durable, unless you launch it in a Multi-AZ Deployments configuration.",
"references": "https://aws.amazon.com/s3/faqs/ https://d1.awsstatic.com/whitepapers/Storage/AWS%20Storage%20Services%20Whitepaper-v9.pdf#page=24 Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" },
{ "question": "An on-premises server is using an SMB network file share to store application data. The application produces around 50 MB of data per day but it only needs to access some of it for daily processes. To save on storage costs, the company plans to copy all the application data to AWS; however, they want to retain the ability to retrieve data with the same low-latency access as the local file share. The company does not have the capacity to develop the needed tool for this operation. Which AWS service should the company use?",
"options": [
"A. A. AWS Storage Gateway",
"B. B. Amazon FSx for Windows File Server",
"C. C. AWS Virtual Private Network (VPN)",
"D. D. AWS Snowball Edge"
],
"correct": "A. A. AWS Storage Gateway",
"explanation": "Explanation/Reference: AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low-latency access to data in AWS for on-premises applications.
Specifically for this scenario, you can use Amazon FSx File Gateway to support the SMB file share for the on-premises application. It also meets the requirement for low-latency access. Amazon FSx File Gateway helps accelerate your file-based storage migration to the cloud to enable faster performance, improved data protection, and reduced cost.
Hence, the correct answer is: AWS Storage Gateway.
AWS Virtual Private Network (VPN) is incorrect because this service is mainly used for establishing encrypted connections from an on-premises network to AWS.
Amazon FSx for Windows File Server is incorrect. This won't provide low-latency access since all the files are stored on AWS, which means that they will be accessed via the internet. AWS Storage Gateway supports local caching without any development overhead, making it suitable for low-latency applications.
AWS Snowball Edge is incorrect.
A Snowball edge is a type of Snowball device with on-board storage and compute power that can do local processing in a ddition to transferring data between your local environment and the AWS Cloud. It's just a data mig ration tool and not a storage service. References: https://aws.amazon.com/storagegateway/ https://docs.aws.amazon.com/storagegateway/latest/u serguide/CreatingAnSMBFileShare.html AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", + "references": "" + }, + { + "question": "A company is setting up a cloud architecture for an international money transfer service to be deploye d in AWS which will have thousands of users around the globe . The service should be available 24/7 to avoid any business disruption and should be resilient eno ugh to handle the outage of an entire AWS region. T o meet this requirement, the Solutions Architect has deplo yed their AWS resources to multiple AWS Regions. He needs to use Route 53 and configure it to set al l of the resources to be available all the time as much as possible. When a resource becomes unavailable, Rout e 53 should detect that it's unhealthy and stop in cluding it when responding to queries. Which of the following is the most fault-tolerant r outing configuration that the Solutions Architect s hould use in this scenario?", + "options": [ + "A. A. Configure an Active-Active Failover with One P rimary and One Secondary Resource.", + "B. B. Configure an Active-Passive Failover with Mult iple Primary and Secondary Resources.", + "C. C. Configure an Active-Passive Failover with Weig hted Records.", + "D. D. Configure an Active-Active Failover with Weigh ted routing policy." + ], + "correct": "D. D. Configure an Active-Active Failover with Weigh ted routing policy.", + "explanation": "Explanation/Reference: You can use Route 53 health checking to configure a ctive-active and active-passive failover configurations. You configure active-active failove r using any routing policy (or combination of routi ng policies) other than failover, and you configure ac tive-passive failover using the failover routing po licy. Active-Active Failover Use this failover configuration when you want all o f your resources to be available the majority of th e time. When a resource becomes unavailable, Route 53 can d etect that it's unhealthy and stop including it whe n responding to queries. In active-active failover, all the records that hav e the same name, the same type (such as A or AAAA), and the same routing policy (such as weighted or latenc y) are active unless Route 53 considers them unheal thy. Route 53 can respond to a DNS query using any healt hy record. Active-Passive Failover Use an active-passive failover configuration when y ou want a primary resource or group of resources to be available the majority of the time and you want a s econdary resource or group of resources to be on st andby in case all the primary resources become unavailabl e. When responding to queries, Route 53 includes on ly the healthy primary resources. If all the primary r esources are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS queries. 
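As a rough sketch of the active-active pattern described above, the boto3 call below upserts two weighted A records that share the same name, each tied to its own health check, so Route 53 stops returning any record whose endpoint becomes unhealthy. The hosted zone ID, domain name, IP addresses, and health check IDs are hypothetical placeholders:
import boto3

route53 = boto3.client('route53')

def weighted_record(ip, set_id, health_check_id):
    # One weighted A record bound to a health check (all values are placeholders).
    return {
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'pay.tutorialsdojo.com',
            'Type': 'A',
            'SetIdentifier': set_id,
            'Weight': 50,
            'TTL': 60,
            'ResourceRecords': [{'Value': ip}],
            'HealthCheckId': health_check_id
        }
    }

route53.change_resource_record_sets(
    HostedZoneId='Z1111111111111',   # placeholder hosted zone ID
    ChangeBatch={
        'Changes': [
            weighted_record('203.0.113.10', 'region-a-endpoint', '11111111-2222-3333-4444-555555555555'),
            weighted_record('203.0.113.20', 'region-b-endpoint', '66666666-7777-8888-9999-000000000000')
        ]
    }
)
Because both records stay active, traffic is spread across the healthy endpoints, and an unhealthy one is simply dropped from DNS answers.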
Configuring an Active-Passive Failover with Weighte d Records and configuring an Active-Passive Failover with Multiple Primary and Secondary Resour ces are incorrect because an Active-Passive Failover is mainly used when you want a primary res ource or group of resources to be available most of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. In this scenario, all of your resources should be available all the time as much as possible which is why you have to use an Ac tive-Active Failover instead. Configuring an Active-Active Failover with One Prim ary and One Secondary Resource is incorrect because you cannot set up an Active-Active Failover with One Primary and One Secondary Resource. Remember that an Active-Active Failover uses all av ailable resources all the time without a primary no r a secondary resource. References: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/dns-failover-types.html https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-policy.html https://docs.aws.amazon.com/Route53/latest/Develope rGuide/dns-failover-configuring.html Amazon Route 53 Overview: https://www.youtube.com/watch?v=Su308t19ubY Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", + "references": "" + }, + { + "question": "A company has a global online trading platform in w hich the users from all over the world regularly up load terabytes of transactional data to a centralized S3 bucket. What AWS feature should you use in your present sys tem to improve throughput and ensure consistently f ast data transfer to the Amazon S3 bucket, regardless o f your user's location?", + "options": [ + "A. A. Use CloudFront Origin Access Identity", + "B. B. Amazon S3 Transfer Acceleration", + "C. C. FTP", + "D. D. AWS Direct Connect" + ], + "correct": "B. B. Amazon S3 Transfer Acceleration", + "explanation": "Explanation/Reference: Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Tran sfer Acceleration leverages Amazon CloudFront's globally distributed AWS Edge Locations. As data ar rives at an AWS Edge Location, data is routed to yo ur Amazon S3 bucket over an optimized network path. FTP is incorrect because the File Transfer Protocol does not guarantee fast throughput and consistent, fast data transfer. AWS Direct Connect is incorrect because you have us ers all around the world and not just on your on- premises data center. Direct Connect would be too c ostly and is definitely not suitable for this purpo se. Using CloudFront Origin Access Identity is incorrec t because this is a feature which ensures that only CloudFront can serve S3 content. It does not increa se throughput and ensure fast delivery of content t o your customers.", + "references": "http://docs.aws.amazon.com/AmazonS3/latest/dev/tran sfer-acceleration.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ S3 Transfer Acceleration vs Direct Connect vs VPN v s Snowball vs Snowmobile: https://tutorialsdojo.com/s3-transfer-acceleration- vs-direct-connect-vs-vpn-vs-snowball-vs-snowmobile/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" + }, + { + "question": "In Amazon EC2, you can manage your instances from t he moment you launch them up to their termination. 
You can flexibly control your computing costs by ch anging the EC2 instance state. Which of the following statements is true regarding EC2 billing? (Select TWO.)", + "options": [ + "A. A. You will be billed when your Reserved instance is in terminated state.", + "B. B. You will be billed when your Spot instance is preparing to stop with a stopping state.", + "C. C. You will not be billed for any instance usage while an instance is not in the running state.", + "D. D. You will be billed when your On-Demand instanc e is in pending state." + ], + "correct": "", + "explanation": "Explanation/Reference: By working with Amazon EC2 to manage your instances from the moment you launch them through their termination, you ensure that your customers have th e best possible experience with the applications or sites that you host on your instances. The following illu stration represents the transitions between instanc e states. Notice that you can't stop and start an instance st ore-backed instance: . Below are the valid EC2 lifecycle instance states: pending - The instance is preparing to enter the ru nning state. An instance enters the pending state w hen it launches for the first time, or when it is restarte d after being in the stopped state. running - The instance is running and ready for use . stopping - The instance is preparing to be stopped. Take note that you will not billed if it is prepar ing to stop however, you will still be billed if it is jus t preparing to hibernate. stopped - The instance is shut down and cannot be used. The instance can be restarted a t any time. shutting-down - The instance is preparing to be ter minated. terminated - The instance has been permanently dele ted and cannot be restarted. Take note that Reserve d Instances that applied to terminated instances are still billed until the end of their term according to their payment option. The option that says: You will be billed when your On-Demand instance is preparing to hibernate with a stopping state is correct because when the instan ce state is stopping, you will not billed if it is preparing to stop however, you will still be billed if it is just preparing to hibernate. The option that says: You will be billed when your Reserved instance is in terminated state is correctbecause Reserved Instances that applied to terminat ed instances are still billed until the end of thei r term according to their payment option. I actually raise d a pull-request to Amazon team about the billing conditions for Reserved Instances, which has been a pproved and reflected on your official AWS Documentation: https://github.com/awsdocs/amazon-ec 2-user-guide/pull/45 The option that says: You will be billed when your On-Demand instance is in pending state is incorrect because you will not be billed if your instance is in pending state. The option that says: You will be billed when your Spot instance is preparing to stop with a stopping state is incorrect because you will not be billed i f your instance is preparing to stop with a stoppin g state. The option that says: You will not be billed for an y instance usage while an instance is not in the running state is incorrect because the statement is not entirely true. You can still be billed if your instance is preparing to hibernate with a stopping state. 
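As a quick illustration, the boto3 snippet below checks which lifecycle state an instance is currently in, and therefore whether it is accruing charges; the instance ID is a placeholder:
import boto3

ec2 = boto3.client('ec2')

# Minimal sketch: print the lifecycle state (pending, running, stopping, stopped,
# shutting-down, or terminated) of one instance. The instance ID is a placeholder.
response = ec2.describe_instances(InstanceIds=['i-0123456789abcdef0'])
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'], instance['State']['Name'])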
References: https://github.com/awsdocs/amazon-ec2-user-guide/pu ll/45 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ec2-instance-lifecycle.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A Solutions Architect for a global news company is configuring a fleet of EC2 instances in a subnet th at currently is in a VPC with an Internet gateway atta ched. All of these EC2 instances can be accessed fr om the Internet. The architect launches another subnet and deploys an EC2 instance in it, however, the ar chitect is not able to access the EC2 instance from the Int ernet. What could be the possible reasons for this issue? (Select TWO.)", + "options": [ + "A. A. The route table is not configured properly to send traffic from the EC2 instance to the", + "B. B. The Amazon EC2 instance does not have a public IP address associated with it.", + "C. C. The Amazon EC2 instance is not a member of the same Auto Scaling group.", + "D. D. The Amazon EC2 instance does not have an attac hed Elastic Fabric Adapter (EFA)." + ], + "correct": "", + "explanation": "Explanation/Reference: Your VPC has an implicit router and you use route t ables to control where network traffic is directed. Each subnet in your VPC must be associated with a route table, which controls the routing for the subnet (s ubnet route table). You can explicitly associate a subnet with a particular route table. Otherwise, the subn et is implicitly associated with the main route table. A subnet can only be associated with one route tabl e at a time, but you can associate multiple subnets with the same subnet route table. You can optionally ass ociate a route table with an internet gateway or a virtual private gateway (gateway route table). This enables you to specify routing rules for inbound traffic t hat enters your VPC through the gateway Be sure that the subnet route table also has a rout e entry to the internet gateway. If this entry does n't exist, the instance is in a private subnet and is inaccess ible from the internet. In cases where your EC2 instance cannot be accessed from the Internet (or vice versa), you usually hav e to check two things: . - Does it have an EIP or public IP address? - Is the route table properly configured? Below are the correct answers: - Amazon EC2 instance does not have a public IP add ress associated with it. - The route table is not configured properly to sen d traffic from the EC2 instance to the Internet thr ough the Internet gateway. The option that says: The Amazon EC2 instance is no t a member of the same Auto Scaling group is incorrect since Auto Scaling Groups do not affect I nternet connectivity of EC2 instances. The option that says: The Amazon EC2 instance doesn 't have an attached Elastic Fabric Adapter (EFA) is incorrect because Elastic Fabric Adapter i s just a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the applic ation performance of an on-premises HPC cluster, wi th the scalability, flexibility, and elasticity provid ed by AWS. However, this component is not required in order for your EC2 instance to access the public In ternet. The option that says: The route table is not config ured properly to send traffic from the EC2 instance to the Internet through the customer gateway (CGW) is incorrect since CGW is used when you are setting up a VPN. 
The correct gateway should be an Internet gateway. References: http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_Scenario2.html https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Route_Tables.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company has clients all across the globe that acc ess product files stored in several S3 buckets, whi ch are behind each of their own CloudFront web distributio ns. They currently want to deliver their content to a specific client, and they need to make sure that on ly that client can access the data. Currently, all of their clients can access their S3 buckets directly using an S3 URL or through their CloudFront distribution. The Solutions Architect must serve the private content via CloudFront only, to secure the distribution of files. Which combination of actions should the Architect i mplement to meet the above requirements? (Select TW O.)", + "options": [ + "A. A. Use S3 pre-signed URLs to ensure that only the ir client can access the files. Remove permission t o use", + "B. B. Use AWS App Mesh to ensure that only their cli ent can access the files.", + "C. C. Restrict access to files in the origin by crea ting an origin access identity (OAI) and give it pe rmission to", + "D. D. Use AWS Cloud Map to ensure that only their cl ient can access the files." + ], + "correct": "", + "explanation": "Explanation/Reference: Many companies that distribute content over the Int ernet want to restrict access to documents, busines s data, media streams, or content that is intended for sele cted users, for example, users who have paid a fee. To securely serve this private content by using CloudF ront, you can do the following: - Require that your users access your private conte nt by using special CloudFront signed URLs or signe d cookies. - Require that your users access your Amazon S3 con tent by using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn't necessary, but it i s recommended to prevent users from bypassing the restrictions that you specify in signed URLs or sig ned cookies. You can do this by setting up an origi n access identity (OAI) for your Amazon S3 bucket. You can a lso configure the custom headers for a private HTTP server or an Amazon S3 bucket configured as a websi te endpoint. All objects and buckets by default are private. The pre-signed URLs are useful if you want your user/customer to be able to upload a specific objec t to your bucket, but you don't require them to hav e AWS security credentials or permissions. You can generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET. If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a pre- signed object URL without writing any code. Anyone who receives a valid pre-signed URL can then programmatically upload an object. Hence, the correct answers are: - Restrict access to files in the origin by creatin g an origin access identity (OAI) and give it permi ssion to read the files in the bucket. - Require the users to access the private content b y using special CloudFront signed URLs or signed co okies. The option that says: Use AWS App Mesh to ensure th at only their client can access the files is incorr ect because AWS App Mesh is just a service mesh that pr ovides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. 
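For the signed URL technique described above, a minimal Python sketch using botocore's CloudFrontSigner is shown below. It assumes that a trusted key pair (or key group) has already been associated with the distribution; the key pair ID, private key file, and distribution URL are hypothetical placeholders:
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the private key that matches the public key
    # registered with the distribution (the key file path is a placeholder).
    with open('cloudfront_private_key.pem', 'rb') as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner('K2ABCDEFGHIJKL', rsa_signer)   # placeholder key pair ID
signed_url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/private/report.pdf',  # placeholder object URL
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1)
)
print(signed_url)
Anyone holding this URL can fetch the object through CloudFront until the expiration time, while direct S3 access remains blocked by the OAI bucket policy.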
The option that says: Use AWS Cloud Map to ensure t hat only their client can access the files is incor rect because AWS Cloud Map is simply a cloud resource di scovery service that enables you to name your application resources with custom names and automat ically update the locations of your dynamically cha nging resources. The option that says: Use S3 pre-signed URLs to ens ure that only their client can access the files. Remove permission to use Amazon S3 URLs to read the files for anyone else is incorrect. Although this could be a valid solution, it doesn't satisfy the r equirement to serve the private content via CloudFr ont only to secure the distribution of files. A better solut ion is to set up an origin access identity (OAI) th en use Signed URL or Signed Cookies in your CloudFront web distribution. References: https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/PrivateContent.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Pre signedUrlUploadObject.html Check out this Amazon CloudFront cheat sheet: https://tutorialsdojo.com/amazon-cloudfront/ S3 Pre-signed URLs vs CloudFront Signed URLs vs Ori gin Access Identity (OAI) https://tutorialsdojo.com/s3-pre-signed-urls-vs-clo udfront-signed-urls-vs-origin-access-identity-oai/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "A company plans to use a durable storage service to store on-premises database backups to the AWS clou d. To move their backup data, they need to use a servi ce that can store and retrieve objects through stan dard file storage protocols for quick recovery. Which of the following options will meet this requi rement?", + "options": [ + "A. A. Use Amazon EBS volumes to store all the backup data and attach it to an Amazon EC2 instance.", + "B. B. Use AWS Snowball Edge to directly backup the d ata in Amazon S3 Glacier.", + "C. C. Use the AWS Storage Gateway file gateway to st ore all the backup data in Amazon S3.", + "D. D. Use the AWS Storage Gateway volume gateway to store the backup data and directly access it using" + ], + "correct": "C. C. Use the AWS Storage Gateway file gateway to st ore all the backup data in Amazon S3.", + "explanation": "Explanation/Reference: File Gateway presents a file-based interface to Ama zon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols. File Gate way allows your existing file-based applications or dev ices to use secure and durable cloud storage withou t needing to be modified. With File Gateway, your con figured S3 buckets will be available as Network Fil e System (NFS) mount points or Server Message Block ( SMB) file shares. To store the backup data from on-premises to a dura ble cloud storage service, you can use File Gateway to store and retrieve objects through standard file st orage protocols (SMB or NFS). File Gateway enables your existing file-based applications, devices, and work flows to use Amazon S3, without modification. File Gateway securely and durably stores both file conte nts and metadata as objects while providing your on - premises applications low-latency access to cached data. Hence, the correct answer is: Use the AWS Storage G ateway file gateway to store all the backup data in Amazon S3. The option that says: Use the AWS Storage Gateway v olume gateway to store the backup data and directly access it using Amazon S3 API actions is i ncorrect. 
Although this is a possible solution, you cannot directly access the volume gateway using Ama zon S3 APIs. You should use File Gateway to access your data in Amazon S3. The option that says: Use Amazon EBS volumes to sto re all the backup data and attached it to an Amazon EC2 instance is incorrect. Take note that in the scenar io, you are required to store the backup data in a durable storage service. An Amazon EBS volume is not highly durable like Amazon S3. Also, file storage protoco ls such as NFS or SMB, are not directly supported by EBS. The option that says: Use AWS Snowball Edge to dire ctly backup the data in Amazon S3 Glacier is incorr ect because AWS Snowball Edge cannot store and retrieve objects through standard file storage protocols. A lso, Snowball Edge can't directly integrate backups to S 3 Glacier. References: https://aws.amazon.com/storagegateway/faqs/ https://aws.amazon.com/s3/storage-classes/ Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", + "references": "" + }, + { + "question": "A large insurance company has an AWS account that c ontains three VPCs (DEV, UAT and PROD) in the same region. UAT is peered to both PROD and DEV using a VPC peering connection. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. Which of the fo llowing options helps the company accomplish this?", + "options": [ + "A. Change the DEV and PROD VPCs to have overlapping CIDR blocks to be able to connect them.", + "B. Create a new VPC peering connection between PROD and DEV with the appropriate routes.", + "C. Create a new entry to PROD in the DEV route table using the VPC peering connection as the target.", + "D. Do nothing. Since these two VPCs are already conn ected via UAT, they already have a connection to ea ch" + ], + "correct": "B. Create a new VPC peering connection between PROD and DEV with the appropriate routes.", + "explanation": "Explanation/Reference: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connecti on between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. AWS uses the existing infrastructure of a VPC to cr eate a VPC peering connection; it is neither a gate way nor a VPN connection and does not rely on a separat e piece of physical hardware. There is no single po int of failure for communication or a bandwidth bottlen eck. Creating a new entry to PROD in the DEV route table using the VPC peering connection as the target is incorrect because even if you configure t he route tables, the two VPCs will still be disconn ected until you set up a VPC peering connection between t hem. Changing the DEV and PROD VPCs to have overlapping CIDR blocks to be able to connect them is incorrect because you cannot peer two VPCs with ove rlapping CIDR blocks. The option that says: Do nothing. 
Since these two V PCs are already connected via UAT, they already have a connection to each other is incorrect as tra nsitive VPC peering is not allowed hence, even thou gh DEV and PROD are both connected in UAT, these two V PCs do not have a direct connection to each other.", + "references": "https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpc-peering.html Check out these Amazon VPC and VPC Peering Cheat Sh eets: https://tutorialsdojo.com/amazon-vpc/ https://tutorialsdojo.com/vpc-peering/ Here is a quick introduction to VPC Peering: https://youtu.be/i1A1eH8vLtk" + }, + { + "question": "Due to the large volume of query requests, the data base performance of an online reporting application significantly slowed down. The Solutions Architect is trying to convince her client to use Amazon RDS Read Replica for their application instead of setti ng up a Multi-AZ Deployments configuration. What are two benefits of using Read Replicas over M ulti-AZ that the Architect should point out? (Selec t TWO.)", + "options": [ + "A. A. It enhances the read performance of your prima ry database by increasing its IOPS and accelerates its", + "B. B. Allows both read and write operations on the r ead replica to complement the primary database.", + "C. C. Provides synchronous replication and automatic failover in the case of Availability Zone service failures.", + "D. D. Provides asynchronous replication and improves the performance of the primary database by taking" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon RDS Read Replicas provide enhanced performan ce and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB ins tance for read-heavy database workloads. You can create one or more replicas of a given sour ce DB Instance and serve high-volume application re ad traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone D B instances. For the MySQL, MariaDB, PostgreSQL, and Oracle data base engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance . It then uses the engines' native asynchronous replication to update the read replica whenever the re is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amaz on RDS replicates all databases in the source DB instance. When you create a read replica for Amazon RDS for M ySQL, MariaDB, PostgreSQL, and Oracle, Amazon RDS sets up a secure communications channel using p ublic-key encryption between the source DB instance and the read replica, even when replicatin g across regions. Amazon RDS establishes any AWS security configurations such as adding security gro up entries needed to enable the secure channel. You can also create read replicas within a Region o r between Regions for your Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle database instances encrypted at rest with AWS Key Management Service (KMS). Hence, the correct answers are: - It elastically scales out beyond the capacity con straints of a single DB instance for read-heavy dat abase workloads. - Provides asynchronous replication and improves th e performance of the primary database by taking read-heavy database workloads from it. 
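As a brief illustration of how a replica is provisioned, the boto3 call below creates a read replica from an existing source DB instance; the identifiers and instance class are hypothetical placeholders:
import boto3

rds = boto3.client('rds')

# Minimal sketch: create an asynchronous read replica of an existing RDS instance.
# The identifiers and instance class are placeholder assumptions.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='reporting-db-replica-1',
    SourceDBInstanceIdentifier='reporting-db-primary',
    DBInstanceClass='db.r5.large'
)
The application can then point its read-only queries at the replica's endpoint while writes continue to go to the primary.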
The option that says: Allows both read and write op erations on the read replica to complement the primary database is incorrect as Read Replicas are primarily used to offload read-only operations from the primary database instance. By default, you can't do a write operation to your Read Replica. The option that says: Provides synchronous replicat ion and automatic failover in the case of Availabil ity Zone service failures is incorrect as this is a ben efit of Multi-AZ and not of a Read Replica. Moreove r, Read Replicas provide an asynchronous type of repli cation and not synchronous replication. The option that says: It enhances the read performa nce of your primary database by increasing its IOPS and accelerates its query processing via AWS Global Accelerator is incorrect because Read Replicas do not do anything to upgrade or increase the read thr oughput on the primary DB instance per se, but it provides a way for your application to fetch data f rom replicas. In this way, it improves the overall performance of your entire database-tier (and not j ust the primary DB instance). It doesn't increase t he IOPS nor use AWS Global Accelerator to accelerate t he compute capacity of your primary database. AWS Global Accelerator is a networking service, not rel ated to RDS, that direct user traffic to the neares t application endpoint to the client, thus reducing i nternet latency and jitter. It simply routes the tr affic to the closest edge location via Anycast. References: https://aws.amazon.com/rds/details/read-replicas/ https://aws.amazon.com/rds/features/multi-az/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/ Additional tutorial - How do I make my RDS MySQL re ad replica writable? https://youtu.be/j5da6d2TIPc", + "references": "" + }, + { + "question": "A major TV network has a web application running on eight Amazon T3 EC2 instances. The number of requests that the application processes are consist ent and do not experience spikes. To ensure that ei ght instances are running at all times, the Solutions A rchitect should create an Auto Scaling group and di stribute the load evenly between all instances. Which of the following options can satisfy the give n requirements?", + "options": [ + "A. A. Deploy eight EC2 instances with Auto Scaling i n one Availability Zone behind an Amazon Elastic Lo ad", + "B. B. Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Am azon", + "C. C. Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer .", + "D. D. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availabi lity zone" + ], + "correct": "D. D. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availabi lity zone", + "explanation": "Explanation/Reference: The best option to take is to deploy four EC2 insta nces in one Availability Zone and four in another availability zone in the same region behind an Amaz on Elastic Load Balancer. In this way, if one availability zone goes down, there is still another available zone that can accommodate traffic. When the first AZ goes down, the second AZ will onl y have an initial 4 EC2 instances. This will eventu ally be scaled up to 8 instances since the solution is u sing Auto Scaling. 
The 110% compute capacity for the 4 servers might c ause some degradation of the service, but not a tot al outage since there are still some instances that ha ndle the requests. Depending on your scale-up configuration in your Auto Scaling group, the addit ional 4 EC2 instances can be launched in a matter o f minutes. T3 instances also have a Burstable Performance capa bility to burst or go beyond the current compute capacity of the instance to higher performance as r equired by your workload. So your 4 servers will be able to manage 110% compute capacity for a short period of time. This is the power of cloud computing versu s our on-premises network architecture. It provides e lasticity and unparalleled scalability. Take note that Auto Scaling will launch additional EC2 instances to the remaining Availability Zone/s in the event of an Availability Zone outage in the reg ion. Hence, the correct answer is the option that s ays: Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amaz on Elastic Load Balancer. The option that says: Deploy eight EC2 instances wi th Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer is incorrect because t his architecture is not highly available. If that Availability Zone goes down then your web applicati on will be unreachable. The options that say: Deploy four EC2 instances wit h Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer and D eploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer are incorrect because the ELB is designed to only run in one region and not across multiple regi ons. References: https://aws.amazon.com/elasticloadbalancing/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-increase-availability.html AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/", + "references": "" + }, + { + "question": "An aerospace engineering company recently adopted a hybrid cloud infrastructure with AWS. One of the Solutions Architect's tasks is to launch a VPC with both public and private subnets for their EC2 inst ances as well as their database instances. Which of the following statements are true regardin g Amazon VPC subnets? (Select TWO.)", + "options": [ + "A. A. Each subnet spans to 2 Availability Zones.", + "B. B. EC2 instances in a private subnet can communic ate with the Internet only if they have an Elastic IP.", + "C. C. The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27 netmask (32 IP", + "D. D. Each subnet maps to a single Availability Zone ." + ], + "correct": "", + "explanation": "Explanation/Reference: A VPC spans all the Availability Zones in the regio n. After creating a VPC, you can add one or more su bnets in each Availability Zone. When you create a subnet, y ou specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside enti rely within one Availability Zone and cannot span z ones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Av ailability Zones. By launching instances in separate Availabil ity Zones, you can protect your applications from t he failure of a single location. 
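As a small illustration of this one-subnet-per-AZ rule, the boto3 sketch below creates a VPC and two subnets pinned to different Availability Zones; the CIDR blocks and zone names are assumptions for the example.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# The VPC spans the whole Region; each subnet is a slice of the VPC CIDR
# block that lives entirely within one Availability Zone.
vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']

ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24', AvailabilityZone='us-east-1a')
ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.2.0/24', AvailabilityZone='us-east-1b')
```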
Below are the important points you have to remember about subnets: - Each subnet maps to a single Availability Zone. - Every subnet that you create is automatically ass ociated with the main route table for the VPC. - If a subnet's traffic is routed to an Internet ga teway, the subnet is known as a public subnet. The option that says: EC2 instances in a private su bnet can communicate with the Internet only if they have an Elastic IP is incorrect. EC2 instances in a private subnet can communicate with the Internet not just by having an Elastic IP, but also with a public IP address vi a a NAT Instance or a NAT Gateway. Take note that t here is a distinction between private and public IP addresses . To enable communication with the Internet, a publ ic IPv4 address is mapped to the primary private IPv4 addre ss through network address translation (NAT). The option that says: The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27 netmask (32 IP addresses) is incorrect because the allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addres ses) and not /27 netmask. The option that says: Each subnet spans to 2 Availa bility Zones is incorrect because each subnet must reside entirely within one Availability Zone and cannot sp an zones. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/VPC_Subnets.html https://docs.aws.amazon.com/vpc/latest/userguide/vp c-ip-addressing.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "A company plans to set up a cloud infrastructure in AWS. In the planning, it was discussed that you ne ed to deploy two EC2 instances that should continuously r un for three years. The CPU utilization of the EC2 instances is also expected to be stable and predict able. Which is the most cost-efficient Amazon EC2 Pricing type that is most appropriate for this scenario?", + "options": [ + "A. A. Spot instances", + "B. B. Reserved Instances", + "C. C. Dedicated Hosts", + "D. D. On-Demand instances" + ], + "correct": "B. B. Reserved Instances", + "explanation": "Explanation/Reference: Reserved Instances provide you with a significant d iscount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are a ssigned to a specific Availability Zone, they provi de a capacity reservation, giving you additional confide nce in your ability to launch instances when you ne ed them. For applications that have steady state or predicta ble usage, Reserved Instances can provide significa nt savings compared to using On-Demand instances. Reserved Instances are recommended for: - Applications with steady state usage - Applications that may require reserved capacity - Customers that can commit to using EC2 over a 1 o r 3 year term to reduce their total computing costs References: https://aws.amazon.com/ec2/pricing/ https://aws.amazon.com/ec2/pricing/reserved-instanc es/ Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A Solutions Architect is unable to connect to the n ewly deployed EC2 instance via SSH using a home com puter. 
However, the Architect was able to successfully acc ess other existing instances in the VPC without any issues. Which of the following should the Architect check a nd possibly correct to restore connectivity?", + "options": [ + "A. A. Configure the Security Group of the EC2 instan ce to permit ingress traffic over port 22 from your IP.", + "B. B. Configure the Network Access Control List of y our VPC to permit ingress traffic over port 22 from your IP.", + "C. C. Use Amazon Data Lifecycle Manager.", + "D. D. Configure the Security Group of the EC2 instan ce to permit ingress traffic over port 3389 from yo ur IP." + ], + "correct": "A. A. Configure the Security Group of the EC2 instan ce to permit ingress traffic over port 22 from your IP.", + "explanation": "Explanation/Reference: When connecting to your EC2 instance via SSH, you n eed to ensure that port 22 is allowed on the securi ty group of your EC2 instance. A security group acts as a virtual firewall that co ntrols the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instan ces. You can modify the rules for a security group at any time; the new rules are automatically applied to all inst ances that are associated with the security group. Using Amazon Data Lifecycle Manager is incorrect be cause this is primarily used to manage the lifecycl e of your AWS resources and not to allow certain traf fic to go through. Configuring the Network Access Control List of your VPC to permit ingress traffic over port 22 from your IP is incorrect because this is not neces sary in this scenario as it was specified that you were able to connect to other EC2 instances. In addition , Network ACL is much suitable to control the traff ic that goes in and out of your entire VPC and not jus t on one EC2 instance. Configure the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP is incorrect because this is relevant to RDP and not SSH.", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ using-network-security.html Check out these AWS Comparison of Services Cheat Sh eets: https://tutorialsdojo.com/comparison-of-aws-service s/" + }, + { + "question": "A Solutions Architect needs to deploy a mobile appl ication that can collect votes for a popular singin g competition. Millions of users from around the worl d will submit votes using their mobile phones. Thes e votes must be collected and stored in a highly scal able and highly available data store which will be queried for real-time ranking. Which of the following combination of services shou ld the architect use to meet this requirement?", + "options": [ + "A. A. Amazon Redshift and AWS Mobile Hub", + "B. B. Amazon Relational Database Service (RDS) and A mazon MQ", + "C. C. Amazon Aurora and Amazon Cognito", + "D. D. Amazon DynamoDB and AWS AppSync" + ], + "correct": "D. D. Amazon DynamoDB and AWS AppSync", + "explanation": "Explanation/Reference: When the word durability pops out, the first servic e that should come to your mind is Amazon S3. Since this service is not available in the answer options, we can look at the other data store available which is Amazon DynamoDB. DynamoDB is durable, scalable, and highly available data store which can be used for real-time tabulat ion. 
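As a rough sketch of the kind of table such a voting app could write to, the boto3 snippet below creates an on-demand DynamoDB table; the table and attribute names are illustrative assumptions only.

```python
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# On-demand (PAY_PER_REQUEST) capacity lets the table absorb unpredictable
# spikes of votes without any capacity planning.
dynamodb.create_table(
    TableName='SingingCompetitionVotes',
    AttributeDefinitions=[
        {'AttributeName': 'ContestantId', 'AttributeType': 'S'},
        {'AttributeName': 'VoteId', 'AttributeType': 'S'},
    ],
    KeySchema=[
        {'AttributeName': 'ContestantId', 'KeyType': 'HASH'},
        {'AttributeName': 'VoteId', 'KeyType': 'RANGE'},
    ],
    BillingMode='PAY_PER_REQUEST',
)
```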
You can also use AppSync with DynamoDB to make it e asy for you to build collaborative apps that keep s hared data updated in real time. You just specify the dat a for your app with simple code statements and AWS AppSync manages everything needed to keep the app d ata updated in real time. This will allow your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries and combine data from these services to provide the exa ct data you need for your app. Amazon Redshift and AWS Mobile Hub are incorrect as Amazon Redshift is mainly used as a data warehouse and for online analytic processing (OLAP). Although this service can be used for this scenario, Dynam oDB is still the top choice given its better durability an d scalability. Amazon Relational Database Service (RDS) and Amazon MQ and Amazon Aurora and Amazon Cognito are possible answers in this scenario, howe ver, DynamoDB is much more suitable for simple mobile apps that do not have complicated data relat ionships compared with enterprise web applications. It is stated in the scenario that the mobile app will be used from around the wo rld, which is why you need a data storage service which can be supported globally. It would be a mana gement overhead to implement multi-region deployment for your RDS and Aurora database instanc es compared to using the Global table feature of DynamoDB. References: https://aws.amazon.com/dynamodb/faqs/ https://aws.amazon.com/appsync/ Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "A FinTech startup deployed an application on an Ama zon EC2 instance with attached Instance Store volumes and an Elastic IP address. The server is on ly accessed from 8 AM to 6 PM and can be stopped from 6 PM to 8 AM for cost efficiency using Lambda with the script that automates this based on tags. Which of the following will occur when the EC2 inst ance is stopped and started? (Select TWO.)", + "options": [ + "A. A. The underlying host for the instance is possib ly changed.", + "B. B. The ENI (Elastic Network Interface) is detache d.", + "C. C. All data on the attached instance-store device s will be lost.", + "D. D. The Elastic IP address is disassociated with t he instance." + ], + "correct": "", + "explanation": "Explanation/Reference: This question did not mention the specific type of EC2 instance, however, it says that it will be stop ped and started. Since only EBS-backed instances can be sto pped and restarted, it is implied that the instance is EBS-backed. Remember that an instance store-backed instance can only be rebooted or terminated and its data will be erased if the EC2 instance is either s topped or terminated. If you stopped an EBS-backed EC2 instance, the volu me is preserved but the data in any attached instan ce store volume will be erased. Keep in mind that an E C2 instance has an underlying physical host compute r. If the instance is stopped, AWS usually moves the i nstance to a new host computer. Your instance may s tay on the same host computer if there are no problems with the host computer. In addition, its Elastic IP address is disassociated from the instance if it is an EC2-Classic instance. 
Otherwise, if it is an EC 2-VPC instance, the Elastic IP address remains associated . Take note that an EBS-backed EC2 instance can have attached Instance Store volumes. This is the reason why there is an option that mentions the Instance S tore volume, which is placed to test your understan ding of this specific storage type. You can launch an EB S-backed EC2 instance and attach several Instance S tore volumes but remember that there are some EC2 Instan ce types that don't support this kind of set up. Hence, the correct answers are: - The underlying host for the instance is possibly changed. - All data on the attached instance-store devices w ill be lost. The option that says: The ENI (Elastic Network Inte rface) is detached is incorrect because the ENI wil l stay attached even if you stopped your EC2 instance. The option that says: The Elastic IP address is dis associated with the instance is incorrect because t he EIP will actually remain associated with your instance even after stopping it. The option that says: There will be no changes is i ncorrect because there will be a lot of possible ch anges in your EC2 instance once you stop and start it again. AWS may move the virtualized EC2 instance to anoth er host computer; the instance may get a new public IP address, and the data in your attached instance st ore volumes will be deleted. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ec2-instance-lifecycle.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ComponentsAMIs.html#storage-for-the-root-device Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/", + "references": "" + }, + { + "question": "A media company recently launched their newly creat ed web application. Many users tried to visit the website, but they are receiving a 503 Service Unava ilable Error. The system administrator tracked the EC2 instance status and saw the capacity is reaching it s maximum limit and unable to process all the reque sts. To gain insights from the application's data, they need to launch a real-time analytics service. Which of the following allows you to read records i n batches?", + "options": [ + "A. A. Create a Kinesis Data Stream and use AWS Lambd a to read records from the data stream.", + "B. B. Create an Amazon S3 bucket to store the captur ed data and use Amazon Athena to analyze the data.", + "C. C. Create a Kinesis Data Firehose and use AWS Lam bda to read records from the data stream.", + "D. D. Create an Amazon S3 bucket to store the captur ed data and use Amazon Redshift Spectrum to analyze" + ], + "correct": "A. A. Create a Kinesis Data Stream and use AWS Lambd a to read records from the data stream.", + "explanation": "Explanation/Reference: Amazon Kinesis Data Streams (KDS) is a massively sc alable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sourc es. You can use an AWS Lambda function to process recor ds in Amazon KDS. By default, Lambda invokes your function as soon as records are available in t he stream. Lambda can process up to 10 batches in e ach shard simultaneously. If you increase the number of concurrent batches per shard, Lambda still ensures in- order processing at the partition-key level. 
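A minimal sketch of wiring a Lambda function to a Kinesis data stream so that records are read in batches is shown below; the stream ARN, function name, and batch settings are placeholder assumptions.

```python
import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')

# Lambda polls the stream and hands each invocation a batch of records.
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:kinesis:us-east-1:111122223333:stream/analytics-stream',
    FunctionName='process-analytics-records',
    StartingPosition='LATEST',
    BatchSize=100,              # records delivered per invocation
    ParallelizationFactor=1,    # concurrent batches per shard (1-10)
)
```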
The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the funct ion returns a response, it stays active and waits t o process additional events. If you invoke the functi on again while the first event is being processed, Lambda initializes another instance, and the function proc esses the two events concurrently. As more events c ome in, Lambda routes them to available instances and c reates new instances as needed. When the number of requests decreases, Lambda stops unused instances t o free upscaling capacity for other functions. Since the media company needs a real-time analytics service, you can use Kinesis Data Streams to gain insights from your data. The data collected is avai lable in milliseconds. Use AWS Lambda to read recor ds in batches and invoke your function to process records from the ba tch. If the batch that Lambda reads from the stream only has one record in it, Lambda sends only one re cord to the function. Hence, the correct answer in this scenario is: Crea te a Kinesis Data Stream and use AWS Lambda to read records from the data stream. The option that says: Create a Kinesis Data Firehos e and use AWS Lambda to read records from the data stream is incorrect. Although Amazon Kinesis D ata Firehose captures and loads data in near real- time, AWS Lambda can't be set as its destination. Y ou can write Lambda functions and integrate it with Kinesis Data Firehose to request additional, custom ized processing of the data before it is sent downs tream. However, this integration is primarily used for str eam processing and not the actual consumption of th e data stream. You have to use a Kinesis Data Stream in this scenario. The options that say: Create an Amazon S3 bucket to store the captured data and use Amazon Athena to analyze the data and Create an Amazon S3 bucket to store the captured data and use Amazon Redshift Spectrum to analyze the data are both inco rrect. As per the scenario, the company needs a rea l- time analytics service that can ingest and process data. You need to use Amazon Kinesis to process the data in real-time. References: https://aws.amazon.com/kinesis/data-streams/ https://docs.aws.amazon.com/lambda/latest/dg/with-k inesis.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/503-error-classic/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/", + "references": "" + }, + { + "question": "The media company that you are working for has a vi deo transcoding application running on Amazon EC2. Each EC2 instance polls a queue to find out which v ideo should be transcoded, and then runs a transcod ing process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. This application has a large backlog of vid eos which need to be transcoded. Your manager would like to reduce this backlog by adding more EC2 inst ances, however, these instances are only needed unt il the backlog is reduced. In this scenario, which type of Amazon EC2 instance is the most cost-effective type to use?", + "options": [ + "A. A. Spot instances", + "B. B. Reserved instances", + "C. C. Dedicated instances", + "D. D. On-demand instances" + ], + "correct": "A. A. Spot instances", + "explanation": "Explanation/Reference: You require an instance that will be used not as a primary server but as a spare compute resource to augment the transcoding process of your application . 
These instances should also be terminated once th e backlog has been significantly reduced. In addition , the scenario mentions that if the current process is interrupted, the video can be transcoded by another instance based on the queuing system. This means t hat the application can gracefully handle an unexpected termination of an EC2 instance, like in the event of a Spot instance termination when the Spot price is greater than your set maximu m price. Hence, an Amazon EC2 Spot instance is the best and cost-effective option for this scenario. Amazon EC2 Spot instances are spare compute capacit y in the AWS cloud available to you at steep discounts compared to On-Demand prices. EC2 Spot en ables you to optimize your costs on the AWS cloud and scale your application's throughput up to 10X f or the same budget. By simply selecting Spot when launching EC2 instances, you can save up-to 90% on On-Demand prices. The only difference between On- Demand instances and Spot Instances is that Spot in stances can be interrupted by EC2 with two minutes of notification when the EC2 needs the capacity back. You can specify whether Amazon EC2 should hibernate , stop, or terminate Spot Instances when they are interrupted. You can choose the interruption behavi or that meets your needs. Take note that there is no \"bid price\" anymore for Spot EC2 instances since March 2018. You simply hav e to set your maximum price instead. Reserved instances and Dedicated instances are inco rrect as both do not act as spare compute capacity. On-demand instances is a valid option but a Spot in stance is much cheaper than On-Demand. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /spot-interruptions.html http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ how-spot-instances-work.html https://aws.amazon.com/blogs/compute/new-amazon-ec2 -spot-pricing Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A company has an On-Demand EC2 instance located in a subnet in AWS that hosts a web application. The security group attached to this EC2 instance has th e following Inbound Rules: The Route table attached to the VPC is shown below. You can establish an SSH connection into the EC2 instance from the Internet. However, you are not ab le to connect to the web server using your Chrome browser. Which of the below steps would resolve the issue?", + "options": [ + "A. A. In the Route table, add this new route entry: 10.0.0.0/27 -> local", + "B. B. In the Route table, add this new route entry: 0.0.0.0 -> igw-b51618cc", + "C. C. In the Security Group, add an Inbound HTTP rul e.", + "D. D. In the Security Group, remove the SSH rule." + ], + "correct": "C. C. In the Security Group, add an Inbound HTTP rul e.", + "explanation": "Explanation/Reference: In this particular scenario, you can already connec t to the EC2 instance via SSH. This means that ther e is no problem in the Route Table of your VPC. To fix t his issue, you simply need to update your Security Group and add an Inbound rule to allow HTTP traffic . The option that says: In the Security Group, remove the SSH rule is incorrect as doing so will not sol ve the issue. It will just disable SSH traffic that is already available. 
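For reference, adding the missing inbound HTTP rule could look like the boto3 sketch below; the security group ID is a placeholder.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Allow inbound HTTP (TCP port 80) from anywhere to the instance's
# security group, while leaving the existing SSH rule untouched.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 80,
        'ToPort': 80,
        'IpRanges': [{'CidrIp': '0.0.0.0/0', 'Description': 'web traffic'}],
    }],
)
```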
The options that say: In the Route table, add this new route entry: 0.0.0.0 -> igw-b51618cc and In the Route table, add this new route entry: 10.0.0.0/27 -> local are incorrect as there is no need to chang e the Route Tables.", + "references": "http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_SecurityGroups.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" + }, + { + "question": "A company is hosting an application on EC2 instance s that regularly pushes and fetches data in Amazon S3. Due to a change in compliance, the instances need t o be moved on a private subnet. Along with this cha nge, the company wants to lower the data transfer costs by configuring its AWS resources. How can this be accomplished in the MOST cost-effic ient manner?", + "options": [ + "A. A. Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.", + "B. B. Set up a NAT Gateway in the public subnet to c onnect to Amazon S3.", + "C. C. Create an Amazon S3 interface endpoint to enab le a connection between the instances and Amazon S3 .", + "D. D. Set up an AWS Transit Gateway to access Amazon S3.", + "A. A. Spot Instances", + "B. B. On-Demand Capacity Reservations", + "C. C. Reserved Instances", + "D. D. On-Demand Instances" + ], + "correct": "A. A. Spot Instances", + "explanation": "Explanation/Reference: Amazon EC2 Spot instances are spare compute capacit y in the AWS cloud available to you at steep discounts compared to On-Demand prices. It can be i nterrupted by AWS EC2 with two minutes of notification when the EC2 needs the capacity back. To use Spot Instances, you create a Spot Instance r equest that includes the number of instances, the instance type, the Availability Zone, and the maxim um price that you are willing to pay per instance h our. If your maximum price exceeds the current Spot pric e, Amazon EC2 fulfills your request immediately if capacity is available. Otherwise, Amazon EC2 waits until your request can be fulfilled or until you ca ncel the request. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ using-spot-instances.html https://aws.amazon.com/ec2/spot/ Amazon EC2 Overview: https://youtu.be/7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a financial co mpany. The manager wants to have the ability to automatically transfer obsolete data from their S3 bucket to a low-cost storage system in AWS. What is the best solution that the Architect can pr ovide to them?", + "options": [ + "A. A. Use an EC2 instance and a scheduled job to tra nsfer the obsolete data from their S3 location to A mazon", + "B. B. Use Lifecycle Policies in S3 to move obsolete data to Glacier.", + "C. C. Use CloudEndure Migration.", + "D. D. Use Amazon SQS." + ], + "correct": "A. A. Use an EC2 instance and a scheduled job to tra nsfer the obsolete data from their S3 location to A mazon", + "explanation": "Explanation/Reference: In this scenario, you can use lifecycle policies in S3 to automatically move obsolete data to Glacier. Lifecycle configuration in Amazon S3 enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more r ules, where each rule defines an action for Amazon S3 to apply to a group of objects. 
These actions can be classified as follows: Transition actions In which you define when object s transition to another storage class. For example, you may choose to transition objects to the STANDAR D_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLAC IER storage class one year after creation. Expiration actions In which you specify when the o bjects expire. Then Amazon S3 deletes the expired objects on your behalf. The option that says: Use an EC2 instance and a sch eduled job to transfer the obsolete data from their S3 location to Amazon S3 Glacier is incorrect becau se you don't need to create a scheduled job in EC2 as you can simply use the lifecycle policy in S3. The option that says: Use Amazon SQS is incorrect a s SQS is not a storage service. Amazon SQS is primarily used to decouple your applications by que ueing the incoming requests of your application. The option that says: Use CloudEndure Migration is incorrect because this service is just a highly automated lift-and-shift (rehost) solution that sim plifies, expedites, and reduces the cost of migrati ng applications to AWS. You cannot use this to automat ically transition your S3 objects to a cheaper stor age class. References: http://docs.aws.amazon.com/AmazonS3/latest/dev/obje ct-lifecycle-mgmt.html . https://aws.amazon.com/blogs/aws/archive-s3-to-glac ier/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A manufacturing company has EC2 instances running i n AWS. The EC2 instances are configured with Auto Scaling. There are a lot of requests being lost bec ause of too much load on the servers. The Auto Scal ing is launching new EC2 instances to take the load accord ingly yet, there are still some requests that are b eing lost. Which of the following is the MOST suitable solutio n that you should implement to avoid losing recentl y submitted requests?", + "options": [ + "A. A. Set up Amazon Aurora Serverless for on-demand, auto-scaling configuration of your EC2 Instances a nd", + "B. B. Use an Amazon SQS queue to decouple the applic ation components and scale-out the EC2 instances", + "C. C. Use larger instances for your application with an attached Elastic Fabric Adapter (EFA).", + "D. D. Replace the Auto Scaling group with a cluster placement group to achieve a low-latency network" + ], + "correct": "B. B. Use an Amazon SQS queue to decouple the applic ation components and scale-out the EC2 instances", + "explanation": "Explanation/Reference: Amazon Simple Queue Service (SQS) is a fully manage d message queuing service that makes it easy to decouple and scale microservices, distributed syste ms, and serverless applications. Building applicati ons from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS makes it simple and cost-effective to decouple and coordi nate the components of a cloud application. Using SQS, y ou can send, store, and receive messages between software components at any volume, without losing m essages or requiring other services to be always available. The number of messages in your Amazon SQS queue doe s not solely define the number of instances needed. In fact, the number of instances in the fle et can be driven by multiple factors, including how long it takes to process a message and the acceptable amoun t of latency (queue delay). 
The solution is to use a backlog per instance metri c with the target value being the acceptable backlo g per instance to maintain. You can calculate these numbe rs as follows: Backlog per instance: To determine your backlog per instance, start with the Amazon SQS metric ApproximateNumberOfMessages to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number b y the fleet's running capacity, which for an Auto S caling group is the number of instances in the InService s tate, to get the backlog per instance. Acceptable backlog per instance: To determine your target value, first calculate what your application can accept in terms of latency. Then, take the acceptab le latency value and divide it by the average time that an EC2 instance takes to process a message. To illustrate with an example, let's say that the c urrent ApproximateNumberOfMessages is 1500 and the fleet's running capacity is 10. If the average proc essing time is 0.1 seconds for each message and the longest acceptable latency is 10 seconds then the a cceptable backlog per instance is 10 / 0.1, which e quals 100. This means that 100 is the target value for yo ur target tracking policy. Because the backlog per instance is currently at 150 (1500 / 10), your fleet scales out by five instances to maintain proportion to the ta rget value. Hence, the correct answer is: Use an Amazon SQS que ue to decouple the application components and scale-out the EC2 instances based upon the Approxim ateNumberOfMessages metric in Amazon CloudWatch. Replacing the Auto Scaling group with a cluster pla cement group to achieve a low-latency network performance necessary for tightly-coupled node-to-n ode communication is incorrect because although it is true that a cluster placement group allows you t o achieve a low-latency network performance, you st ill need to use Auto Scaling for your architecture to a dd more EC2 instances. Using larger instances for your application with an attached Elastic Fabric Adapter (EFA) is incorrect because using a larger EC2 instance would not preve nt data from being lost in case of a larger spike. You can take advantage of the durability and elasticity of SQS to keep the messages available for consumpt ion by your instances. Elastic Fabric Adapter (EFA) is simply a network interface for Amazon EC2 instances that enables customers to run applications requirin g high levels of inter-node communications at scale on AWS. Setting up Amazon Aurora Serverless for on-demand, auto-scaling configuration of your EC2 Instances and also enabling Amazon Aurora Parallel Query feat ure for faster analytical queries over your current data is incorrect because although the Amazon Auror a Parallel Query feature provides faster analytical queries over your current data, Amazon Aurora Serve rless is an on-demand, auto-scaling configuration f or your database, and NOT for your EC2 instances. This is actually an auto-scaling configuration for your Amazon Aurora database and not for your compute ser vices. 
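The arithmetic above can be written out directly; the numbers below simply mirror the worked example.

```python
# Values from the worked example above.
approximate_number_of_messages = 1500   # SQS ApproximateNumberOfMessages
running_capacity = 10                   # InService instances in the Auto Scaling group
average_processing_time = 0.1           # seconds to process one message
acceptable_latency = 10                 # longest acceptable queue delay in seconds

backlog_per_instance = approximate_number_of_messages / running_capacity        # 150.0
acceptable_backlog_per_instance = acceptable_latency / average_processing_time  # 100.0

# With a target value of 100 and a current backlog of 150 per instance,
# the group scales out from 10 to 15 instances (1500 / 15 = 100).
print(backlog_per_instance, acceptable_backlog_per_instance)
```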
References: https://aws.amazon.com/sqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/welcome.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-using-sqs-queue.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", + "references": "" + }, + { + "question": "A travel company has a suite of web applications ho sted in an Auto Scaling group of On-Demand EC2 instances behind an Application Load Balancer that handles traffic from various web domains such as i- love-manila.com, i-love-boracay.com, i-love-cebu.co m and many others. To improve security and lessen t he overall cost, you are instructed to secure the syst em by allowing multiple domains to serve SSL traffi c without the need to reauthenticate and reprovision your cer tificate everytime you add a new domain. This migra tion from HTTP to HTTPS will help improve their SEO and Google search ranking. Which of the following is the most cost-effective s olution to meet the above requirement?", + "options": [ + "A. A. Use a wildcard certificate to handle multiple sub-domains and different domains.", + "B. B. Add a Subject Alternative Name (SAN) for each additional domain to your certificate.", + "C. C. Upload all SSL certificates of the domains in the ALB using the console and bind multiple certifi cates to", + "D. D. Create a new CloudFront web distribution and c onfigure it to serve HTTPS requests using dedicated IP" + ], + "correct": "C. C. Upload all SSL certificates of the domains in the ALB using the console and bind multiple certifi cates to", + "explanation": "Explanation/Reference: SNI Custom SSL relies on the SNI extension of the T ransport Layer Security protocol, which allows mult iple domains to serve SSL traffic over the same IP addre ss by including the hostname which the viewers are trying to connect to. You can host multiple TLS secured applications, eac h with its own TLS certificate, behind a single loa d balancer. In order to use SNI, all you need to do i s bind multiple certificates to the same secure lis tener on your load balancer. ALB will automatically choose the op timal TLS certificate for each client. These featur es are provided at no additional charge. To meet the requirements in the scenario, you can u pload all SSL certificates of the domains in the AL B using the console and bind multiple certificates to the s ame secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate fo r each client using Server Name Indication (SNI). Hence, the correct answer is the option that says: Upload all SSL certificates of the domains in the A LB using the console and bind multiple certificates to the s ame secure listener on your load balancer.ALB will automatically choose the optimal TLS certificate fo r each client using Server NameIndication (SNI). Using a wildcard certificate to handle multiple sub -domains and different domains is incorrect because a wildcard certificate can only handle multiple sub-d omains but not different domains. Adding a Subject Alternative Name (SAN) for each ad ditional domain to your certificate is incorrect b ecause although using SAN is correct, you will still have to reauthenticate and reprovision your certificate every time you add a new domain. One of the requirements in th e scenario is that you should not have to reauthent icate and reprovision your certificate hence, this soluti on is incorrect. 
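To illustrate the SNI approach, the boto3 sketch below attaches additional ACM certificates to an existing HTTPS listener; the listener and certificate ARNs are placeholders, not values from the scenario.

```python
import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')

# The listener keeps its default certificate; the extra certificates are
# served to clients via SNI based on the hostname they request.
elbv2.add_listener_certificates(
    ListenerArn='arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/travel-alb/0123456789abcdef/fedcba9876543210',
    Certificates=[
        {'CertificateArn': 'arn:aws:acm:us-east-1:111122223333:certificate/boracay-cert-id'},
        {'CertificateArn': 'arn:aws:acm:us-east-1:111122223333:certificate/cebu-cert-id'},
    ],
)
```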
The option that says: Create a new CloudFront web d istribution and configure it to serve HTTPS requests using dedicated IP addresses in order to a ssociate your alternate domain names with a dedicated IP address in each CloudFront edge locati on is incorrect because although it is valid to use dedicated IP addresses to meet this requirement, th is solution is not cost-effective. Remember that if you configure CloudFront to serve HTTPS requests using dedicated IP addresses, you incur an additional monthly charge. The charge begins when you associat e your SSL/TLS certificate with your CloudFront distribution. You can just simply upload the certif icates to the ALB and use SNI to handle multiple domains in a cost-effective manner. References: https://aws.amazon.com/blogs/aws/new-application-lo ad-balancer-sni/ https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/cnames-https-dedicated-ip-or- sni.html#cnames-https-dedicated-ip https://docs.aws.amazon.com/elasticloadbalancing/la test/application/create-https-listener.html Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ SNI Custom SSL vs Dedicated IP Custom SSL: https://tutorialsdojo.com/sni-custom-ssl-vs-dedicat ed-ip-custom-ssl/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "A new online banking platform has been re-designed to have a microservices architecture in which compl ex applications are decomposed into smaller, independe nt services. The new platform is using Docker consi dering that application containers are optimal for running small, decoupled services. The new solution should remove the need to provision and manage servers, let you s pecify and pay for resources per application, and i mprove security through application isolation by design. Which of the following is the MOST suitable service to use to migrate this new platform to AWS?", + "options": [ + "A. A. Amazon EBS", + "B. B. Amazon EFS", + "C. C. Amazon EKS D. D. AWS Fargate" + ], + "correct": "", + "explanation": "Explanation/Reference: AWS Fargate is a serverless compute engine for cont ainers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernet es Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate remove s the need to provision and manage servers, lets yo u specify and pay for resources per application, and improves security through application isolation by design. Fargate allocates the right amount of compute, elim inating the need to choose instances and scale clus ter capacity. You only pay for the resources required t o run your containers, so there is no over-provisio ning and paying for additional servers. Fargate runs eac h task or pod in its own kernel providing the tasks and pods their own isolated compute environment. This e nables your application to have workload isolation and improved security by design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission critical applications on Fargate. Hence, the correct answer is: AWS Fargate. Amazon EKS is incorrect because this is more suitab le to run the Kubernetes management infrastructure and not Docker. It does not remove the need to provisio n and manage servers nor let you specify and pay for resources per application, unlike AWS Fargate. 
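To make the Fargate model concrete, a rough sketch of registering a Fargate-only task definition and launching it is shown below; the cluster, image, role, and network IDs are placeholder assumptions.

```python
import boto3

ecs = boto3.client('ecs', region_name='us-east-1')

# CPU and memory are declared per task, which is how you specify and pay
# for resources per application instead of per server.
ecs.register_task_definition(
    family='checkout-service',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    executionRoleArn='arn:aws:iam::111122223333:role/ecsTaskExecutionRole',
    containerDefinitions=[{
        'name': 'checkout',
        'image': '111122223333.dkr.ecr.us-east-1.amazonaws.com/checkout:latest',
        'essential': True,
    }],
)

ecs.run_task(
    cluster='banking-platform',
    launchType='FARGATE',
    taskDefinition='checkout-service',
    networkConfiguration={'awsvpcConfiguration': {
        'subnets': ['subnet-aaaa1111'],
        'assignPublicIp': 'DISABLED',
    }},
)
```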
Amazon EFS is incorrect because this is a file syst em for Linux-based workloads for use with AWS Cloud services and on-premises resources. Amazon EBS is incorrect because this is primarily u sed to provide persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. References: https://aws.amazon.com/fargate/ https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/ECS_GetStarted_Fargate.html Check out this Amazon ECS Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-container- service-amazon-ecs/", + "references": "" + }, + { + "question": "A company has established a dedicated network conne ction from its on-premises data center to AWS Cloud using AWS Direct Connect (DX). The core network ser vices, such as the Domain Name System (DNS) service and Active Directory services, are all hosted on-pr emises. The company has new AWS accounts that will also require consistent and dedicated access to these ne twork services. Which of the following can satisfy this requirement with the LEAST amount of operational overhead and in a cost-effective manner?", + "options": [ + "A. A. Create a new AWS VPN CloudHub. Set up a Virtua l Private Network (VPN) connection for additional", + "B. B. Set up a new Direct Connect gateway and integr ate it with the existing Direct Connect connection.", + "C. C. Set up another Direct Connect connection for e ach and every new AWS account that will be added.", + "D. D. Create a new Direct Connect gateway and integr ate it with the existing Direct Connect connection. Set up" + ], + "correct": "D. D. Create a new Direct Connect gateway and integr ate it with the existing Direct Connect connection. Set up", + "explanation": "Explanation/Reference: AWS Transit Gateway provides a hub and spoke design for connecting VPCs and on-premises networks. You can attach all your hybrid connectivity (VPN an d Direct Connect connections) to a single Transit Gateway consolidating and controlling your organiza tion's entire AWS routing configuration in one plac e. It also controls how traffic is routed among all th e connected spoke networks using route tables. This hub and spoke model simplifies management and reduces o perational costs because VPCs only connect to the Transit Gateway to gain access to the connected net works. By attaching a transit gateway to a Direct Connect gateway using a transit virtual interface, you can manage a single connection for multiple VPCs or VPN s that are in the same AWS Region. You can also advertise prefixes from on-premises to AWS and from AWS to on-premises. The AWS Transit Gateway and AWS Direct Connect solu tion simplify the management of connections between an Amazon VPC and your networks over a priv ate connection. It can also minimize network costs, improve bandwidth throughput, and provide a more re liable network experience than Internet-based connections. Hence, the correct answer is: Create a new Direct C onnect gateway and integrate it with the existing Direct Connect connection. Set up a Transit Gateway between AWS accounts and associate it with the Direct Connect gateway. The option that says: Set up another Direct Connect connection for each and every new AWS account that will be added is incorrect because this soluti on entails a significant amount of additional cost. Setting up a single DX connection requires a substantial bu dget and takes a lot of time to establish. It also has high management overhead since you will need to manage a ll of the Direct Connect connections for all AWS accounts. 
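A rough boto3 sketch of this hub-and-spoke setup is shown below; the gateway names, ASN, and advertised prefix are assumptions for illustration only.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
dx = boto3.client('directconnect', region_name='us-east-1')

# One Transit Gateway acts as the hub that the VPCs of all AWS accounts attach to.
tgw = ec2.create_transit_gateway(Description='hybrid-hub')['TransitGateway']

# A single Direct Connect gateway, associated with the Transit Gateway,
# lets the existing DX connection reach every attached VPC.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName='onprem-dx-gateway',
    amazonSideAsn=64512,
)['directConnectGateway']

dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw['directConnectGatewayId'],
    gatewayId=tgw['TransitGatewayId'],
    addAllowedPrefixesToDirectConnectGateway=[{'cidr': '10.0.0.0/8'}],
)
```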
The option that says: Create a new AWS VPN CloudHub . Set up a Virtual Private Network (VPN) connection for additional AWS accounts is incorrect because a VPN connection is not capable of providing consistent and dedicated access to the on -premises network services. Take note that a VPN connection traverses the public Internet and doesn' t use a dedicated connection. The option that says: Set up a new Direct Connect g ateway and integrate it with the existing Direct Connect connection. Configure a VPC peering connect ion between AWS accounts and associate it with Direct Connect gateway is incorrect because VP C peering is not supported in a Direct Connect connection. VPC peering does not support transitive peering relationships. References: https://docs.aws.amazon.com/directconnect/latest/Us erGuide/direct-connect-transit-gateways.html https://docs.aws.amazon.com/whitepapers/latest/aws- vpc-connectivity-options/aws-direct-connect-aws-tra nsit- gateway.html https://aws.amazon.com/blogs/networking-and-content -delivery/integrating-sub-1-gbps-hosted-connections - with-aws-transit-gateway/ Check out this AWS Transit Gateway Cheat Sheet: https://tutorialsdojo.com/aws-transit-gateway/", + "references": "" + }, + { + "question": "A company is storing its financial reports and regu latory documents in an Amazon S3 bucket. To comply with the IT audit, they tasked their Solutions Architect to track all new objects added to the bucket as we ll as the removed ones. It should also track whether a versio ned object is permanently deleted. The Architect mu st configure Amazon S3 to publish notifications for th ese events to a queue for post-processing and to an Amazon SNS topic that will notify the Operations te am. Which of the following is the MOST suitable solutio n that the Architect should implement?", + "options": [ + "A. A. Create a new Amazon SNS topic and Amazon SQS q ueue. Add an S3 event notification configuration on", + "B. B. Create a new Amazon SNS topic and Amazon MQ. A dd an S3 event notification configuration on the", + "C. C. Create a new Amazon SNS topic and Amazon MQ. A dd an S3 event notification configuration on the", + "D. D. Create a new Amazon SNS topic and Amazon SQS q ueue. Add an S3 event notification configuration on" + ], + "correct": "D. D. Create a new Amazon SNS topic and Amazon SQS q ueue. Add an S3 event notification configuration on", + "explanation": "Explanation/Reference: The Amazon S3 notification feature enables you to r eceive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the e vents you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You stor e this configuration in the notification subresource that is associated with a bucket. Amazon S3 provides an API for you to manage this subresource. Amazon S3 event notifications typically deliver eve nts in seconds but can sometimes take a minute or l onger. If two writes are made to a single non-versioned objec t at the same time, it is possible that only a sing le event notification will be sent. If you want to ensure th at an event notification is sent for every successf ul write, you can enable versioning on your bucket. With versioni ng, every successful write will create a new versio n of your object and will also send an event notification. Amazon S3 can publish notifications for the followi ng events: 1. New object created events 2. Object removal events 3. 
Restore object events 4. Reduced Redundancy Storage (RRS) object lost eve nts 5. Replication events Amazon S3 supports the following destinations where it can publish events: 1. Amazon Simple Notification Service (Amazon SNS) topic 2. Amazon Simple Queue Service (Amazon SQS) queue 3. AWS Lambda If your notification ends up writing to the bucket that triggers the notification, this could cause an execution loop. For example, if the bucket triggers a Lambda function each time an object is uploaded and the function uploads an object to the bucket, then the function indirectly triggers itself. To avoid this, use two buckets, or configure the trigger to only apply to a prefix used for incoming objects. Hence, the correct answers is: Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on the bucket to p ublish s3:ObjectCreated:* and s3:ObjectRemoved:Delete event types to SQS and SNS. The option that says: Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the bucket to publish s3:ObjectAdd ed:* and s3:ObjectRemoved:* event types to SQS and SNS is incorrect. There is no s3:ObjectAdded:* type in Amazon S3. You should add an S3 event notification configuration on the bucket to publish events of th e s3:ObjectCreated:* type instead. Moreover, Amazon S3 does support Amazon MQ as a destination to publish events. The option that says: Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on the bucket to publish s3:ObjectCre ated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS and SNS is incorrect because the s3:Ob jectRemoved:DeleteMarkerCreated type is only trigge red when a delete marker is created for a versioned obj ect and not when an object is deleted or a versione d object is permanently deleted. The option that says: Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the bucket to publish s3:ObjectCre ated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS and SNS is incorrect because Amazon S3 does public event messages to Amazon MQ. You should use an Amazon SQS instead. In addition, the s3:ObjectRemoved:DeleteMarkerCreated type is only triggered when a delete marker is created for a ver sioned object. Remember that the scenario asked to publish events when an object is deleted or a versioned obj ect is permanently deleted. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Not ificationHowTo.html https://docs.aws.amazon.com/AmazonS3/latest/dev/way s-to-add-notification-config-to-bucket.html https://aws.amazon.com/blogs/aws/s3-event-notificat ion/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Amazon SNS Overview: https://www.youtube.com/watch?v=ft5R45lEUJ8", + "references": "" + }, + { + "question": "A data analytics company is setting up an innovativ e checkout-free grocery store. Their Solutions Arch itect developed a real-time monitoring application that u ses smart sensors to collect the items that the cus tomers are getting from the grocery's refrigerators and sh elves then automatically deduct it from their accou nts. The company wants to analyze the items that are fre quently being bought and store the results in S3 fo r durable storage to determine the purchase behavior of its customers. What service must be used to easily capture, transf orm, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk?", + "options": [ + "A. 
Amazon Kinesis Data Firehose",
      "B. Amazon Redshift",
      "C. Amazon Kinesis",
      "D. Amazon SQS"
    ],
    "correct": "A. Amazon Kinesis Data Firehose",
    "explanation": "Explanation/Reference: Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you are already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. In this setup, you gather the data from your smart refrigerators and use Kinesis Data Firehose to prepare and load the data. S3 will be used as a method of durably storing the data for analytics and the eventual ingestion of data for output using analytical tools. You can use Amazon Kinesis Data Firehose in conjunction with Amazon Kinesis Data Streams if you need to implement real-time processing of streaming big data. Kinesis Data Streams provides an ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering). Amazon Simple Queue Service (Amazon SQS) is different from Amazon Kinesis Data Firehose. SQS offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. Amazon Kinesis Data Firehose is primarily used to load streaming data into data stores and analytics tools. Hence, the correct answer is: Amazon Kinesis Data Firehose. Amazon Kinesis is incorrect because this is the streaming data platform of AWS and has four distinct services under it: Kinesis Data Firehose, Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics. For the specific use case just as asked in the scenario, use Kinesis Data Firehose. Amazon Redshift is incorrect because this is mainly used for data warehousing making it simple and cost-effective to analyze your data across your data warehouse and data lake. It does not meet the requirement of being able to load and stream data into data stores for analytics. You have to use Kinesis Data Firehose instead. Amazon SQS is incorrect because you can't capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk using this service. You have to use Kinesis Data Firehose instead.
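For reference, a minimal boto3 sketch of creating such a delivery stream into S3 is shown below; the stream name, bucket, IAM role, and buffering settings are placeholder assumptions.

```python
import boto3

firehose = boto3.client('firehose', region_name='us-east-1')

# Producers write records to the delivery stream; Firehose buffers,
# compresses, and loads them into the S3 bucket.
firehose.create_delivery_stream(
    DeliveryStreamName='grocery-purchases',
    DeliveryStreamType='DirectPut',
    ExtendedS3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::111122223333:role/firehose-delivery-role',
        'BucketARN': 'arn:aws:s3:::grocery-purchase-analytics',
        'BufferingHints': {'IntervalInSeconds': 60, 'SizeInMBs': 5},
        'CompressionFormat': 'GZIP',
    },
)

# Individual purchase events can then be pushed like this:
firehose.put_record(
    DeliveryStreamName='grocery-purchases',
    Record={'Data': b'{"item": "milk", "qty": 1}\n'},
)
```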
References: https://aws.amazon.com/kinesis/data-firehose/ https://aws.amazon.com/kinesis/data-streams/faqs/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/", + "references": "" + }, + { + "question": "A company is using Amazon VPC that has a CIDR block of 10.31.0.0/27< that is connected to the on- prem ises data center. There was a requirement to create a La mbda function that will process massive amounts of cryptocurrency transactions every minute and then s tore the results to EFS. After setting up the serve rless architecture and connecting the Lambda function to the VPC, the Solutions Architect noticed an increas e in invocation errors with EC2 error types such as EC2T hrottledException at certain times of the day. Which of the following are the possible causes of t his issue? (Select TWO.)", + "options": [ + "A. A. You only specified one subnet in your Lambda f unction configuration. That single subnet runs out of", + "B. B. The attached IAM execution role of your functi on does not have the necessary permissions to acces s the", + "C. C. The associated security group of your function does not allow outbound connections.", + "D. D. Your VPC does not have sufficient subnet ENIs or subnet IPs." + ], + "correct": "", + "explanation": "Explanation/Reference: You can configure a function to connect to a virtua l private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a priv ate network for resources such as databases, cache instances, or internal services. Connect your funct ion to the VPC to access private resources during execution. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VP C, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (E NIs) that enable your function to connect securely to other resources within your private VPC. Lambda functions cannot connect directly to a VPC w ith dedicated instance tenancy. To connect to resources in a dedicated VPC, peer it to a second V PC with default tenancy. Your Lambda function automatically scales based on the number of events it processes. If your Lambda function accesses a VPC, you must make sure that yo ur VPC has sufficient ENI capacity to support the scale requirements of your Lambda function. It is a lso recommended that you specify at least one subne t in each Availability Zone in your Lambda function conf iguration. By specifying subnets in each of the Availability Z ones, your Lambda function can run in another Availability Zone if one goes down or runs out of I P addresses. If your VPC does not have sufficient E NIs or subnet IPs, your Lambda function will not scale as requests increase, and you will see an increase in invocation errors with EC2 error types like EC2Thro ttledException. For asynchronous invocation, if you see an increase in errors without corresponding Clo udWatch Logs, invoke the Lambda function synchronously in the console to get the error respo nses. Hence, the correct answers for this scenario are: - You only specified one subnet in your Lambda func tion configuration. That single subnet runs out of available IP addresses and there is no other sub net or Availability Zone which can handle the peak load. 
- Your VPC does not have sufficient subnet ENIs or subnet IPs. The option that says: Your VPC does not have a NAT gateway is incorrect because an issue in the NAT Gateway is unlikely to cause a request throttling i ssue or produce an EC2ThrottledException error in Lambda. As per the scenario, the issue is happening only at certain times of the day, which means that the issue is only intermittent and the function works a t other times. We can also conclude that an availab ility issue is not an issue since the application is alre ady using a highly available NAT Gateway and not ju st a NAT instance. The option that says: The associated security group of your function does not allow outbound connections is incorrect because if the associated security group does not allow outbound connections then the Lambda function will not work at all in the fir st place. Remember that as per the scenario, the is sue only happens intermittently. In addition, Internet traffic restrictions do not usually produce EC2ThrottledException errors. The option that says: The attached IAM execution ro le of your function does not have the necessary permissions to access the resources of your VPC is incorrect because just as what is explained above, the issue is intermittent and thus, the IAM execution r ole of the function does have the necessary permiss ions to access the resources of the VPC since it works a t those specific times. In case the issue is indeed caused by a permission problem then an EC2AccessDeniedExce ption the error would most likely be returned and not an EC2ThrottledException error. References: https://docs.aws.amazon.com/lambda/latest/dg/vpc.ht ml https://aws.amazon.com/premiumsupport/knowledge-cen ter/internet-access-lambda-function/ https://aws.amazon.com/premiumsupport/knowledge-cen ter/lambda-troubleshoot-invoke-error-502-500/ Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/", + "references": "" + }, + { + "question": "A tech startup is launching an on-demand food deliv ery platform using Amazon ECS cluster with an AWS Fargate serverless compute engine and Amazon Aurora . It is expected that the database read queries wil l significantly increase in the coming weeks ahead. A Solutions Architect recently launched two Read Rep licas to the database cluster to improve the platform's scal ability. Which of the following is the MOST suitable configu ration that the Architect should implement to load balance all of the incoming read requests equally to the tw o Read Replicas?", + "options": [ + "A. A. Use the built-in Reader endpoint of the Amazon Aurora database.", + "B. B. Enable Amazon Aurora Parallel Query. C. C. Create a new Network Load Balancer to evenly d istribute the read queries to the Read Replicas of the", + "D. D. Use the built-in Cluster endpoint of the Amazo n Aurora database." + ], + "correct": "A. A. Use the built-in Reader endpoint of the Amazon Aurora database.", + "explanation": "Explanation/Reference: Amazon Aurora typically involves a cluster of DB in stances instead of a single instance. Each connecti on is handled by a specific DB instance. When you connect to an Aurora cluster, the hostname and port that you specify point to an intermediate handler called an endpoint. Aurora uses the endpoint mechanism to abstract these connections. Thus, you don't have to hardcode all the hostnames or write your own logic for load-balancing and rerouting connections when some DB instances aren't available. 
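To make the endpoint abstraction concrete, here is a minimal sketch of an application connecting through the reader endpoint; the hostname, credentials, and query are placeholders, and a MySQL-compatible Aurora cluster accessed with the PyMySQL driver is assumed:

import pymysql

# The cluster's reader endpoint (note the "cluster-ro" form) load-balances
# read-only connections across the Aurora Replicas; the writer/cluster
# endpoint would be used for INSERT/UPDATE/DDL statements instead.
conn = pymysql.connect(
    host="mydbcluster.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder
    user="app_readonly",
    password="example-password",
    database="bets",
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM wagers")  # served by one of the replicas
    print(cur.fetchone())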
For certain Aurora tasks, different instances or gr oups of instances perform different roles. For exam ple, the primary instance handles all data definition la nguage (DDL) and data manipulation language (DML) statements. Up to 15 Aurora Replicas handle read-on ly query traffic. Using endpoints, you can map each connection to the appropriate instance or group of instances based o n your use case. For example, to perform DDL statemen ts you can connect to whichever instance is the primary instance. To perform queries, you can conne ct to the reader endpoint, with Aurora automaticall y performing load-balancing among all the Aurora Repl icas. For clusters with DB instances of different capacities or configurations, you can connect to cu stom endpoints associated with different subsets of DB instances. For diagnosis or tuning, you can connect to a specific instance endpoint to examine details about a specific DB instance. A reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections t o the DB cluster. Use the reader endpoint for read op erations, such as queries. By processing those stat ements on the read-only Aurora Replicas, this endpoint red uces the overhead on the primary instance. It also helps the cluster to scale the capacity to handle simulta neous SELECT queries, proportional to the number of Aurora Replicas in the cluster. Each Aurora DB clus ter has one reader endpoint. If the cluster contains one or more Aurora Replicas , the reader endpoint load-balances each connection request among the Aurora Replicas. In that case, yo u can only perform read-only statements such as SELECT in that session. If the cluster only contain s a primary instance and no Aurora Replicas, the re ader endpoint connects to the primary instance. In that case, you can perform write operations through the endpoint. Hence, the correct answer is to use the built-in Re ader endpoint of the Amazon Aurora database. The option that says: Use the built-in Cluster endp oint of the Amazon Aurora database is incorrect because a cluster endpoint (also known as a writer endpoint) simply connects to the current primary DB instance for that DB cluster. This endpoint can per form write operations in the database such as DDL statements, which is perfect for handling productio n traffic but not suitable for handling queries for reporting since there will be no write database ope rations that will be sent. The option that says: Enable Amazon Aurora Parallel Query is incorrect because this feature simply enables Amazon Aurora to push down and distribute t he computational load of a single query across thousands of CPUs in Aurora's storage layer. Take n ote that it does not load balance all of the incomi ng read requests equally to the two Read Replicas. Wit h Parallel Query, query processing is pushed down t o the Aurora storage layer. The query gains a large a mount of computing power, and it needs to transfer far less data over the network. In the meantime, the Au rora database instance can continue serving transac tions with much less interruption. This way, you can run transactional and analytical workloads alongside ea ch other in the same Aurora database, while maintainin g high performance. 
The option that says: Create a new Network Load Bal ancer to evenly distribute the read queries to the Read Replicas of the Amazon Aurora database is inco rrect because a Network Load Balancer is not the suitable service/component to use for this requirem ent since an NLB is primarily used to distribute tr affic to servers, not Read Replicas. You have to use the built-in Reader endpoint of the Amazon Aurora datab ase instead. References: https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/Aurora.Overview.Endpoints.html https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/Aurora.Overview.html https://aws.amazon.com/rds/aurora/parallel-query/ Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", + "references": "" + }, + { + "question": "A company is using multiple AWS accounts that are c onsolidated using AWS Organizations. They want to c opy several S3 objects to another S3 bucket that belong ed to a different AWS account which they also own. The Solutions Architect was instructed to set up the ne cessary permissions for this task and to ensure tha t the destination account owns the copied objects and not the account it was sent from. How can the Architect accomplish this requirement?", + "options": [ + "A. A. Set up cross-origin resource sharing (CORS) in S3 by creating a bucket policy that allows an IAM user or", + "B. B. Enable the Requester Pays feature in the sourc e S3 bucket. The fees would be waived through", + "C. C. Configure cross-account permissions in S3 by c reating an IAM customer-managed policy that allows an", + "D. D. Connect the two S3 buckets from two different AWS accounts to Amazon WorkDocs. Set up cross-" + ], + "correct": "", + "explanation": "Explanation/Reference: By default, an S3 object is owned by the account th at uploaded the object. That's why granting the destination account the permissions to perform the cross-account copy makes sure that the destination owns the copied objects. You can also change the ownersh ip of an object by changing its access control list (ACL) to bucket-owner-full-control. However, object ACLs can be difficult to manage for multiple objects, so it's a best practice to grant programmatic cross-account permissions to the desti nation account. Object ownership is important for managing permissions using a bucket policy. For a b ucket policy to apply to an object in the bucket, t he object must be owned by the account that owns the b ucket. You can also manage object permissions using the object's ACL. However, object ACLs can be diffi cult to manage for multiple objects, so it's best practice to use the bucket policy as a centralized method for setting permissions. To be sure that a destination account owns an S3 ob ject copied from another account, grant the destina tion account the permissions to perform the cross-accoun t copy. Follow these steps to configure cross-accou nt permissions to copy objects from a source bucket in Account A to a destination bucket in Account B: - Attach a bucket policy to the source bucket in Ac count A. - Attach an AWS Identity and Access Management (IAM ) policy to a user or role in Account B. - Use the IAM user or role in Account B to perform the cross-account copy. 
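Once those cross-account permissions are in place, the copy itself (the third step) can be as simple as the sketch below, run with credentials from the destination account; the bucket names and key are placeholders:

import boto3

s3 = boto3.client("s3")  # credentials belong to Account B (destination)

s3.copy_object(
    CopySource={"Bucket": "source-bucket-account-a", "Key": "reports/2023.csv"},
    Bucket="destination-bucket-account-b",
    Key="reports/2023.csv",
    # Because Account B performs the copy, it owns the new object; the ACL below
    # matters when the source account pushes objects into the bucket instead.
    ACL="bucket-owner-full-control",
)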
Hence, the correct answer is: Configure cross-accou nt permissions in S3 by creating an IAM customer- managed policy that allows an IAM user or role to c opy objects from the source bucket in one account to the destination bucket in the other acco unt. Then attach the policy to the IAM user or role that you want to use to copy objects between accoun ts. The option that says: Enable the Requester Pays fea ture in the source S3 bucket. The fees would be waived through Consolidated Billing since both AWS accounts are part of AWS Organizations is incorrect because the Requester Pays feature is pri marily used if you want the requester, instead of t he bucket owner, to pay the cost of the data transfer request and download from the S3 bucket. This solut ion lacks the necessary IAM Permissions to satisfy the requirement. The most suitable solution here is to configure cross-account permissions in S3. The option that says: Set up cross-origin resource sharing (CORS) in S3 by creating a bucket policy that allows an IAM user or role to copy objects fro m the source bucket in one account to the destination bucket in the other account is incorrec t because CORS simply defines a way for client web applications that are loaded in one domain to inter act with resources in a different domain, and not o n a different AWS account. The option that says: Connect the two S3 buckets fr om two different AWS accounts to Amazon WorkDocs. Set up cross-account access to integrate the two S3 buckets. Use the Amazon WorkDocs console to copy the objects from one account to the other with modified object ownership assigned to the destination account is incorrect because Amazon WorkDocs is commonly used to easily collaborate, share content, provide rich feedback, and collabora tively edit documents with other users. There is no direct way for you to integrate WorkDocs and an Ama zon S3 bucket owned by a different AWS account. A better solution here is to use cross-account permis sions in S3 to meet the requirement. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/exa mple-walkthroughs-managing-access-example2.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/copy-s3-objects-account/ https://aws.amazon.com/premiumsupport/knowledge-cen ter/cross-account-access-s3/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A document sharing website is using AWS as its clou d infrastructure. Free users can upload a total of 5 GB data while premium users can upload as much as 5 TB . Their application uploads the user files, which c an have a max file size of 1 TB, to an S3 Bucket. In this scenario, what is the best way for the appl ication to upload the large files in S3?", + "options": [ + "A. A. Use Multipart Upload", + "B. B. Use a single PUT request to upload the large f ile", + "C. C. Use AWS Import/Export", + "D. D. Use AWS Snowball" + ], + "correct": "A. A. Use Multipart Upload", + "explanation": "Explanation/Reference: The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objec ts can range in size from a minimum of 0 bytes to a ma ximum of 5 terabytes. The largest object that can b e uploaded in a single PUT is 5 gigabytes. For object s larger than 100 megabytes, customers should consi der using the Multipart Upload capability. The Multipart upload API enables you to upload larg e objects in parts. You can use this API to upload new large objects or make a copy of an existing object. 
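For illustration, a rough boto3 sketch of the three-step flow described next; the bucket, key, and part contents are placeholders, and a real upload would split the file into parts of 5 MB to 5 GB each:

import boto3

s3 = boto3.client("s3")
bucket, key = "example-docs-bucket", "uploads/big-file.bin"  # placeholders

# Step 1: initiate the multipart upload
mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)

# Step 2: upload the parts (only one tiny part shown here)
part = s3.upload_part(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    PartNumber=1, Body=b"example bytes",
)

# Step 3: complete the upload so S3 assembles the object from the parts
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
)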
Multipart uploading is a three-step process: you i nitiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multi part upload request, Amazon S3 constructs the objec t from the uploaded parts and you can then access the object just as you would any other object in your bucket. Using a single PUT request to upload the large file is incorrect because the largest file size you can upload using a single PUT request is 5 GB. Files la rger than this will fail to be uploaded. Using AWS Snowball is incorrect because this is a m igration tool that lets you transfer large amounts of data from your on-premises data center to AWS S3 an d vice versa. This tool is not suitable for the giv en scenario. And when you provision Snowball, the devi ce gets transported to you, and not to your custome rs. Therefore, you bear the responsibility of securing the device. Using AWS Import/Export is incorrect because Import /Export is similar to AWS Snowball in such a way that it is meant to be used as a migration tool, an d not for multiple customer consumption such as in the given scenario. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpu overview.html https://aws.amazon.com/s3/faqs/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A solutions architect is formulating a strategy for a startup that needs to transfer 50 TB of on-premi ses data to Amazon S3. The startup has a slow network transfer speed between its data center and AWS which causes a bottleneck for data migration. Which of the following should the solutions archite ct implement?", + "options": [ + "A. A. Integrate AWS Storage Gateway File Gateway wit h the on-premises data center.", + "B. B. Request an Import Job to Amazon S3 using a Sno wball device in the AWS Snowball Console.", + "C. C. Enable Amazon S3 Transfer Acceleration on the target S3 bucket.", + "D. D. Deploy an AWS Migration Hub Discovery agent in the on-premises data center." + ], + "correct": "B. B. Request an Import Job to Amazon S3 using a Sno wball device in the AWS Snowball Console.", + "explanation": "Explanation/Reference: AWS Snowball uses secure, rugged devices so you can bring AWS computing and storage capabilities to your edge environments, and transfer data into and out of AWS. The service delivers you Snowball Edge devices with storage and optional Amazon EC2 and AW S IOT Greengrass compute in shippable, hardened, secure cases. With AWS Snowball, you bring cloud ca pabilities for machine learning, data analytics, processing, and storage to your edge for migrations , short-term data collection, or even long-term deployments. AWS Snowball devices work with or with out the internet, do not require a dedicated IT operator, and are designed to be used in remote env ironments. Hence, the correct answer is: Request an Import Job to Amazon S3 using a Snowball device in the AWS Snowball Console. The option that says: Deploy an AWS Migration Hub D iscovery agent in the on-premises data center is incorrect. The AWS Migration Hub service is just a central service that provides a single location to track the progress of application migrations across multi ple AWS and partner solutions. The option that says: Enable Amazon S3 Transfer Acc eleration on the target S3 bucket is incorrect because this S3 feature is not suitable for large-s cale data migration. 
Enabling this feature won't al ways guarantee faster data transfer as it's only benefic ial for long-distance transfer to and from your Ama zon S3 buckets. The option that says: Integrate AWS Storage Gateway File Gateway with the on-premises data center is incorrect because this service is mostly used fo r building hybrid cloud solutions where you still n eed on- premises access to unlimited cloud storage. Based o n the scenario, this service is not the best option because you would still rely on the existing low ba ndwidth internet connection. References: https://aws.amazon.com/snowball https://aws.amazon.com/blogs/storage/making-it-even -simpler-to-create-and-manage-your-aws-snow-family- jobs/ Check out this AWS Snowball Cheat Sheet: https://tutorialsdojo.com/aws-snowball/ AWS Snow Family Overview: https://www.youtube.com/watch?v=9Ar-51Ip53Q", + "references": "" + }, + { + "question": "A global online sports betting company has its popu lar web application hosted in AWS. They are plannin g to develop a new online portal for their new business venture and they hired you to implement the cloud architecture for a new online portal that will acce pt bets globally for world sports. You started to d esign the system with a relational database that runs on a si ngle EC2 instance, which requires a single EBS volu me that can support up to 30,000 IOPS. In this scenario, which Amazon EBS volume type can you use that will meet the performance requirements of this new online portal?", + "options": [ + "A. A. EBS General Purpose SSD (gp2)", + "B. B. EBS Cold HDD (sc1)", + "C. C. EBS Provisioned IOPS SSD (io1)", + "D. D. EBS Throughput Optimized HDD (st1)" + ], + "correct": "C. C. EBS Provisioned IOPS SSD (io1)", + "explanation": "Explanation/Reference: The scenario requires a storage type for a relation al database with a high IOPS performance. For these scenarios, SSD volumes are more suitable to use ins tead of HDD volumes. Remember that the dominant performance attribute of SSD is IOPS while HDD is T hroughput. In the exam, always consider the difference between SSD and HDD as shown on the table below. This will allow you to easily eliminate specific EBS-types in the options which are not SSD or not HDD, dependin g on whether the question asks for a storage type whi ch has small, random I/O operations or large, sequential I/O operations. Since the requirement is 30,000 IOPS, you have to u se an EBS type of Provisioned IOPS SSD. This provides sustained performance for mission-critical low-latency workloads. Hence, EBS Provisioned IOPS SSD (io1) is the correct answer. EBS Throughput Optimized HDD (st1) and EBS Cold HDD (sc1) are incorrect because these are HDD volumes which are more suitable for large streaming workloads rather than transactional database workloads. EBS General Purpose SSD (gp2) is incorrect because although a General Purpose SSD volume can be used for this scenario, it does not provide the hig h IOPS required by the application, unlike the Prov isioned IOPS SSD volume.", + "references": "https://aws.amazon.com/ebs/details/ Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/" + }, + { + "question": "A company needs to use Amazon Aurora as the Amazon RDS database engine of their web application. The Solutions Architect has been instructed to impl ement a 90-day backup retention policy. Which of the following options can satisfy the give n requirement?", + "options": [ + "A. A. 
Configure an automated backup and set the back up retention period to 90 days.", + "B. B. Create a daily scheduled event using CloudWatc h Events and AWS Lambda to directly download the", + "C. C. Configure RDS to export the automated snapshot automatically to Amazon S3 and create a lifecycle", + "D. D. Create an AWS Backup plan to take daily snapsh ots with a retention period of 90 days." + ], + "correct": "D. D. Create an AWS Backup plan to take daily snapsh ots with a retention period of 90 days.", + "explanation": "Explanation/Reference: AWS Backup is a centralized backup service that mak es it easy and cost-effective for you to backup you r application data across AWS services in the AWS Clo ud, helping you meet your business and regulatory backup compliance requirements. AWS Backup makes pr otecting your AWS storage volumes, databases, and file systems simple by providing a central plac e where you can configure and audit the AWS resourc es you want to backup, automate backup scheduling, set retention policies, and monitor all recent backup and restore activity. In this scenario, you can use AWS Backup to create a backup plan with a retention period of 90 days. A backup plan is a policy expression that defines whe n and how you want to back up your AWS resources. You assign resources to backup plans, and AWS Backu p then automatically backs up and retains backups for those resources according to the backup plan. Hence, the correct answer is: Create an AWS Backup plan to take daily snapshots with a retention period of 90 days. The option that says: Configure an automated backup and set the backup retention period to 90 days is incorrect because the maximum backup retention p eriod for automated backup is only 35 days. The option that says: Configure RDS to export the a utomated snapshot automatically to Amazon S3 and create a lifecycle policy to delete the object after 90 days is incorrect because you can't export an automated snapshot automatically to Amazon S3. You must export the snapshot manually. The option that says: Create a daily scheduled even t using CloudWatch Events and AWS Lambda to directly download the RDS automated snapshot to an S3 bucket. Archive snapshots older than 90 days to Glacier is incorrect because you cannot dir ectly download or export an automated snapshot in R DS to Amazon S3. You have to copy the automated snapshot first for i t to become a manual snapshot, which you can move t o an Amazon S3 bucket. A better solution for this sce nario is to simply use AWS Backup. References: https://docs.aws.amazon.com/aws-backup/latest/devgu ide/create-a-scheduled-backup.html https://aws.amazon.com/backup/faqs/ Check out these AWS Cheat Sheets: https://tutorialsdojo.com/links-to-all-aws-cheat-sh eets/", + "references": "" + }, + { + "question": "A company is deploying a Microsoft SharePoint Serve r environment on AWS using CloudFormation. The Solutions Architect needs to install and configure the architecture that is composed of Microsoft Acti ve Directory (AD) domain controllers, Microsoft SQL Server 2012, multiple Amazon EC2 instances to host the Microsof t SharePoint Server and many other dependencies. The Architect needs to ensure that the required compone nts are properly running before the stack creation proc eeds. Which of the following should the Architect do to m eet this requirement?", + "options": [ + "A. A. Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal", + "B. B. 
Configure the DependsOn attribute in the Cloud Formation template. Send a success signal after the", + "C. C. Configure a CreationPolicy attribute to the in stance in the CloudFormation template. Send a succe ss", + "D. D. Configure a UpdatePolicy attribute to the inst ance in the CloudFormation template. Send a success" + ], + "correct": "C. C. Configure a CreationPolicy attribute to the in stance in the CloudFormation template. Send a succe ss", + "explanation": "Explanation/Reference: You can associate the CreationPolicy attribute with a resource to prevent its status from reaching cre ate complete until AWS CloudFormation receives a specif ied number of success signals or the timeout period is exceeded. To signal a resource, you can use the cfn-signal helper script or SignalResource API. AWS CloudFormation publishes valid signals to the stack events so that you track the number of signals sen t. The creation policy is invoked only when AWS CloudF ormation creates the associated resource. Currently , the only AWS CloudFormation resources that support creation policies are AWS::AutoScaling::AutoScalingGroup, AWS::EC2::Insta nce, and AWS::CloudFormation::WaitCondition. Use the CreationPolicy attribute when you want to w ait on resource configuration actions before stack creation proceeds. For example, if you install and configure software applications on an EC2 instance, you might want those applications to be running before proceeding. In such cases, you can add a CreationPo licy attribute to the instance, and then send a success signal to the instance after the applications are i nstalled and configured. Hence, the option that says: Configure a CreationPo licy attribute to the instance in the CloudFormation template. Send a success signal afte r the applications are installed and configured using the cfn-signal helper script is correct. The option that says: Configure the DependsOn attri bute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-init helper script is incorrect because the cfn-init helper script is not suitable to be used to signal another resource. Yo u have to use cfn-signal instead. And although you can use th e DependsOn attribute to ensure the creation of a specific resource follows another, it is still bett er to use the CreationPolicy attribute instead as i t ensures that the applications are properly running before t he stack creation proceeds. The option that says: Configure a UpdatePolicy attr ibute to the instance in the CloudFormation template. Send a success signal after the applicati ons are installed and configured using the cfn-sign al helper script is incorrect because the UpdatePolicy attribute is primarily used for updating resources and for stack update rollback operations. The option that says: Configure the UpdateReplacePo licy attribute in the CloudFormation template. Send a success signal after the applications are in stalled and configured using the cfn-signal helper script is incorrect because the UpdateReplacePolicy attribute is primarily used to retain or in some c ases, back up the existing physical instance of a resourc e when it is replaced during a stack update operati on. 
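As a rough illustration (the resource name, signal count, and timeout are placeholders, not values from the scenario), the relevant resource fragment of such a template might look like the following, shown here as a Python dictionary for brevity; the instance's UserData would end by running cfn-signal once SharePoint and its dependencies are installed and configured:

# Sketch of an EC2 instance resource that blocks stack creation until it is signaled.
sharepoint_instance = {
    "SharePointInstance": {
        "Type": "AWS::EC2::Instance",
        "CreationPolicy": {
            "ResourceSignal": {
                "Count": 1,          # wait for one success signal from cfn-signal
                "Timeout": "PT30M",  # mark the resource as failed if no signal arrives within 30 minutes
            }
        },
        "Properties": {
            # ImageId, InstanceType, and UserData that installs the software and then calls:
            # cfn-signal -e $? --stack <stack-name> --resource SharePointInstance --region <region>
        },
    }
}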
References: https://docs.aws.amazon.com/AWSCloudFormation/lates t/UserGuide/aws-attribute-creationpolicy.html https://docs.aws.amazon.com/AWSCloudFormation/lates t/UserGuide/deploying.applications.html#deployment- walkthrough-cfn-signal https://aws.amazon.com/blogs/devops/use-a-creationp olicy-to-wait-for-on-instance-configurations/ Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/ AWS CloudFormation - Templates, Stacks, Change Sets : https://www.youtube.com/watch?v=9Xpuprxg7aY", + "references": "" + }, + { + "question": "A company needs to collect gigabytes of data per se cond from websites and social media feeds to gain i nsights on its product offerings and continuously improve t he user experience. To meet this design requirement , you have developed an application hosted on an Auto Sca ling group of Spot EC2 instances which processes th e data and stores the results to DynamoDB and Redshif t. The solution should have a built-in enhanced fan -out feature. Which fully-managed AWS service can you use to coll ect and process large streams of data records in re al- time with the LEAST amount of administrative overhe ad?", + "options": [ + "A. A. Amazon Redshift with AWS Cloud Development Kit (AWS CDK)", + "B. B. Amazon Managed Streaming for Apache Kafka (Ama zon MSK)", + "C. C. Amazon Kinesis Data Streams", + "D. D. Amazon S3 Access Points", + "A. A. Amazon ElastiCache", + "B. B. Amazon DynamoDB", + "C. C. Amazon RDS", + "D. D. Amazon Redshift" + ], + "correct": "B. B. Amazon DynamoDB", + "explanation": "Explanation/Reference: Basically, a database service in which you no longe r need to worry about database management tasks suc h as hardware or software provisioning, setup, and co nfiguration is called a fully managed database. Thi s means that AWS fully manages all of the database ma nagement tasks and the underlying host server. The main differentiator here is the keyword \"scaling\" i n the question. In RDS, you still have to manually scale up your resources and create Read Replicas to impro ve scalability while in DynamoDB, this is automatically done. Amazon DynamoDB is the best option to use in this s cenario. It is a fully managed non-relational datab ase service you simply create a database table, set yo ur target utilization for Auto Scaling, and let the service handle the rest. You no longer need to worry about database management tasks such as hardware or software provisioning, setup, and configuration, so ftware patching, operating a reliable, distributed database cluster, or partitioning data over multipl e instances as you scale. DynamoDB also lets you ba ckup and restore all your tables for data archival, help ing you meet your corporate and governmental regula tory requirements. Amazon RDS is incorrect because this is just a \"man aged\" service and not \"fully managed\". This means that you still have to handle the backups and other administrative tasks such as when the automated OS patching will take place. Amazon ElastiCache is incorrect. Although ElastiCac he is fully managed, it is not a database service b ut an In-Memory Data Store. Amazon Redshift is incorrect. Although this is full y managed, it is not a database service but a Data Warehouse. 
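For illustration only (the table and attribute names are placeholders), creating such a fully managed table is a single call; with on-demand (PAY_PER_REQUEST) billing there is no capacity to manage at all, while provisioned mode would instead pair the table with a target utilization for Auto Scaling:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="PlayerData",  # placeholder
    AttributeDefinitions=[{"AttributeName": "player_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "player_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # no read/write capacity to provision, patch, or scale manually
)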
References: https://aws.amazon.com/dynamodb/ https://aws.amazon.com/products/databases/ Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/", + "references": "" + }, + { + "question": "A tech company is currently using Auto Scaling for their web application. A new AMI now needs to be u sed for launching a fleet of EC2 instances. Which of the fo llowing changes needs to be done?", + "options": [ + "A. A. Create a new target group.", + "B. B. Do nothing. You can start directly launching E C2 instances in the Auto Scaling group with the sam e launch configuration.", + "C. C. Create a new launch configuration.", + "D. D. Create a new target group and launch configura tion." + ], + "correct": "C. C. Create a new launch configuration.", + "explanation": "Explanation/Reference: A launch configuration is a template that an Auto S caling group uses to launch EC2 instances. When you create a launch configuration, you specify informat ion for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. I f you've launched an EC2 instance before, you specified the same information in order to launch the instance. You can specify your launch configuration with mult iple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scalin g group at a time, and you can't modify a launch configuration after you've created it. Therefore, i f you want to change the launch configuration for a n Auto Scaling group, you must create a launch configurati on and then update your Auto Scaling group with the new launch configuration. For this scenario, you have to create a new launch configuration. Remember that you can't modify a lau nch configuration after you've created it. Hence, the correct answer is: Create a new launch c onfiguration. The option that says: Do nothing. You can start dir ectly launching EC2 instances in the Auto Scaling group with the same launch configuration is incorre ct because what you are trying to achieve is change the AMI being used by your fleet of EC2 instances. Therefore, you need to change the launch configurat ion to update what your instances are using. The option that says: create a new target group and create a new target group and launch configuration are both incorrect because you only want to change the AMI being used by your instances, and not the instances themselves. Target groups are primarily u sed in ELBs and not in Auto Scaling. The scenario didn't mention that the architecture has a load bal ancer. Therefore, you should be updating your launc h configuration, not the target group. References: http://docs.aws.amazon.com/autoscaling/latest/userg uide/LaunchConfiguration.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/AutoScalingGroup.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "A large financial firm needs to set up a Linux bast ion host to allow access to the Amazon EC2 instance s running in their VPC. For security purposes, only t he clients connecting from the corporate external p ublic IP address 175.45.116.100 should have SSH access to the host. Which is the best option that can meet the customer 's requirement?", + "options": [ + "A. A. Security Group Inbound Rule: Protocol UDP, Po rt Range 22, Source 175.45.116.100/32", + "B. B. Security Group Inbound Rule: Protocol TCP. 
Po rt Range 22, Source 175.45.116.100/32", + "C. C. Network ACL Inbound Rule: Protocol TCP, Port Range-22, Source 175.45.116.100/0", + "D. D. Network ACL Inbound Rule: Protocol UDP, Port Range 22, Source 175.45.116.100/32" + ], + "correct": "B. B. Security Group Inbound Rule: Protocol TCP. Po rt Range 22, Source 175.45.116.100/32", + "explanation": "Explanation/Reference: A bastion host is a special purpose computer on a n etwork specifically designed and configured to withstand attacks. The computer generally hosts a s ingle application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer. When setting up a bastion host in AWS, you should o nly allow the individual IP of the client and not t he entire network. Therefore, in the Source, the prope r CIDR notation should be used. The /32 denotes one IP address and the /0 refers to the entire network. The option that says: Security Group Inbound Rule: Protocol UDP, Port Range 22, Source 175.45.116.100/32 is incorrect since the SSH protoc ol uses TCP and port 22, and not UDP. The option that says: Network ACL Inbound Rule: Pro tocol UDP, Port Range 22, Source 175.45.116.100/32 is incorrect since the SSH protoc ol uses TCP and port 22, and not UDP. Aside from th at, network ACLs act as a firewall for your whole VPC s ubnet while security groups operate on an instance level. Since you are securing an EC2 instance, you should be using security groups. The option that says: Network ACL Inbound Rule: Pro tocol TCP, Port Range-22, Source 175.45.116.100/0 is incorrect as it allowed the ent ire network instead of a single IP to gain access t o the host.", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ec2-instance-metadata.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" + }, + { + "question": "A Solutions Architect is managing a company's AWS a ccount of approximately 300 IAM users. They have a new company policy that requires changing the assoc iated permissions of all 100 IAM users that control the access to Amazon S3 buckets. What will the Solutions Architect do to avoid the t ime-consuming task of applying the policy to each u ser?", + "options": [ + "A. A. Create a new policy and apply it to multiple I AM users using a shell script.", + "B. B. Create a new S3 bucket access policy with unli mited access for each IAM user.", + "C. C. Create a new IAM role and add each user to the IAM role.", + "D. D. Create a new IAM group and then add the users that require access to the S3 bucket. Afterward, ap ply" + ], + "correct": "D. D. Create a new IAM group and then add the users that require access to the S3 bucket. Afterward, ap ply", + "explanation": "Explanation/Reference: In this scenario, the best option is to group the s et of users in an IAM Group and then apply a policy with the required access to the Amazon S3 bucket. T his will enable you to easily add, remove, and manage the users instead of manually adding a polic y to each and every 100 IAM users. Creating a new policy and applying it to multiple I AM users using a shell script is incorrect because you need a new IAM Group for this scenario and not assign a policy to each user via a shell script. Th is method can save you time but afterward, it will be difficult to manage all 100 users that are not cont ained in an IAM Group. 
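To make the recommended group-based approach concrete, here is a minimal sketch using the AWS SDK for Python; the group name, policy ARN, and user names are placeholders:

import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="s3-bucket-users")
iam.attach_group_policy(
    GroupName="s3-bucket-users",
    PolicyArn="arn:aws:iam::123456789012:policy/S3BucketAccess",  # assumed customer-managed policy
)
# Changing permissions later means editing one policy, not touching 100 users.
for user in ["alice", "bob"]:  # in practice, the IAM users that need the S3 access
    iam.add_user_to_group(GroupName="s3-bucket-users", UserName=user)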
Creating a new S3 bucket access policy with unlimit ed access for each IAM user is incorrect because you need a new IAM Group and the method is also tim e-consuming. Creating a new IAM role and adding each user to the IAM role is incorrect because you need to use an IAM Group and not an IAM role. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_ groups.html AWS Identity Services Overview: https://www.youtube.com/watch?v=AIdUw0i8rr0 Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", + "references": "" + }, + { + "question": "A company needs to launch an Amazon EC2 instance wi th persistent block storage to host its application . The stored data must be encrypted at rest. Which of the following is the most suitable storage solution in this scenario?", + "options": [ + "A. A. Amazon EBS volume with server-side encryption (SSE) enabled.", + "B. B. Amazon EC2 Instance Store with SSL encryption.", + "C. C. Encrypted Amazon EBS volume using AWS KMS.", + "D. D. Encrypted Amazon EC2 Instance Store using AWS KMS." + ], + "correct": "C. C. Encrypted Amazon EBS volume using AWS KMS.", + "explanation": "Explanation/Reference: Amazon Elastic Block Store (Amazon EBS) provides bl ock-level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices. You can mount these volumes as devices on your instances. EBS volumes that are att ached to an instance are exposed as storage volumes that persist independently from the life of the ins tance. Amazon EBS is the persistent block storage volume a mong the options given. It is mainly used as the ro ot volume to store the operating system of an EC2 inst ance. To encrypt an EBS volume at rest, you can use AWS KMS customer master keys for the encryption of both the boot and data volumes of an EC2 instance. Hence, the correct answer is: Encrypted Amazon EBS volume using AWS KMS. The options that say: Amazon EC2 Instance Store wit h SSL encryption and Encrypted Amazon EC2 Instance Store using AWS KMS are both incorrect bec ause the scenario requires persistent block storage and not temporary storage. Also, enabling SSL is no t a requirement in the scenario as it is primarily used to encrypt data in transit. The option that says: Amazon EBS volume with server -side encryption (SSE) enabled is incorrect because EBS volumes are only encrypted using AWS KM S. Server-side encryption (SSE) is actually an option for Amazon S3, but not for Amazon EC2. References: https://aws.amazon.com/ebs/faqs/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /AmazonEBS.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A company is generating confidential data that is s aved on their on-premises data center. As a backup solution, the company wants to upload their data to an Amazon S3 bucket. In compliance with its internal securit y mandate, the encryption of the data must be done be fore sending it to Amazon S3. The company must spen d time managing and rotating the encryption keys as w ell as controlling who can access those keys. Which of the following methods can achieve this req uirement? (Select TWO.)", + "options": [ + "A. A. Set up Client-Side Encryption using a client-s ide master key.", + "B. B. 
Set up Client-Side Encryption with a customer master key stored in AWS Key Management Service", + "C. C. Set up Client-Side Encryption with Amazon S3 m anaged encryption keys.", + "D. D. Set up Server-Side Encryption (SSE) with EC2 k ey pair." + ], + "correct": "", + "explanation": "Explanation/Reference: Data protection refers to protecting data while in- transit (as it travels to and from Amazon S3) and a t rest (while it is stored on disks in Amazon S3 data cent ers). You can protect data in transit by using SSL or by using client-side encryption. You have the followin g options for protecting data at rest in Amazon S3: Use Server-Side Encryption You request Amazon S3 t o encrypt your object before saving it on disks in its data centers and decrypt it when you download t he objects. Use Server-Side Encryption with Amazon S3-Managed K eys (SSE-S3) Use Server-Side Encryption with AWS KMS-Managed Key s (SSE-KMS) Use Server-Side Encryption with Customer-Provided K eys (SSE-C) Use Client-Side Encryption You can encrypt data cl ient-side and upload the encrypted data to Amazon S 3. In this case, you manage the encryption process, the e ncryption keys, and related tools. Use Client-Side Encryption with AWS KMSManaged Cust omer Master Key (CMK) Use Client-Side Encryption Using a Client-Side Mast er Key Hence, the correct answers are: - Set up Client-Side Encryption with a customer mas ter key stored in AWS Key Management Service (AWS KMS). - Set up Client-Side Encryption using a client-side master key. The option that says: Set up Server-Side Encryption with keys stored in a separate S3 bucket is incorrect because you have to use AWS KMS to store your encryption keys or alternatively, choose an AWS-managed CMK instead to properly implement Serve r-Side Encryption in Amazon S3. In addition, storing any type of encryption key in Amazon S3 is actually a security risk and is not recommended. The option that says: Set up Client-Side encryption with Amazon S3 managed encryption keys is incorrect because you can't have an Amazon S3 manag ed encryption key for client-side encryption. As it s name implies, an Amazon S3 managed key is fully man aged by AWS and also rotates the key automatically without any manual intervention. For this scenario, you have to set up a customer master key (CMK) in AWS KMS that you can manage, rotate, and a udit or alternatively, use a client-side master key that you manually maintain. The option that says: Set up Server-Side encryption (SSE) with EC2 key pair is incorrect because you can't use a key pair of your Amazon EC2 instance fo r encrypting your S3 bucket. You have to use a clie nt- side master key or a customer master key stored in AWS KMS. References: http://docs.aws.amazon.com/AmazonS3/latest/dev/Usin gEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngClientSideEncryption.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A company deployed several EC2 instances in a priva te subnet. The Solutions Architect needs to ensure the security of all EC2 instances. Upon checking the ex isting Inbound Rules of the Network ACL, she saw th is configuration: If a computer with an IP address of 110.238.109.37 sends a request to the VPC, what will happen?", + "options": [ + "A. A. Initially, it will be allowed and then after a while, the connection will be denied.", + "B. B. It will be denied.", + "C. C. 
Initially, it will be denied and then after a while, the connection will be allowed.", "D. D. It will be allowed." ], "correct": "D. D. It will be allowed.", "explanation": "Explanation/Reference: Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it's applied immediately regardless of any higher-numbered rule that may contradict it. We have 3 rules here: 1. Rule 100 permits all traffic from any source. 2. Rule 101 denies all traffic coming from 110.238.109.37. 3. The Default Rule (*) denies all traffic from any source. Rule 100 will be evaluated first. If there is a match, it will allow the request. Otherwise, it will then go to Rule 101 and repeat the same process until it reaches the default rule. In this case, when there is a request from 110.238.109.37, it will go through Rule 100 first. As Rule 100 permits all traffic from any source, it will allow this request and will not further evaluate Rule 101 (which denies 110.238.109.37) nor the default rule.", "references": "http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" }, { "question": "A company currently has an Augmented Reality (AR) mobile game that has a serverless backend. It is using a DynamoDB table which was launched using the AWS CLI to store all the user data and information gathered from the players and a Lambda function to pull the data from DynamoDB. The game is being used by millions of users each day to read and store data. How would you design the application to improve its overall performance and make it more scalable while keeping the costs low? (Select TWO.)", "options": [ "A. A. Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity.", "B. B. Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on the client device using ElastiCache.", "C. C. Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication.", "D. D. Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU and WCU." ], "correct": "A and C", "explanation": "Explanation/Reference: The correct answers are the options that say: - Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity. - Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement from milliseconds to microseconds even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management. Amazon API Gateway lets you create an API that acts as a \"front door\" for applications to access data, business logic, or functionality from your back-end services, such as code running on AWS Lambda. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs.
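To make the auto scaling part of the first correct option concrete, the sketch below registers a table's read capacity with Application Auto Scaling and attaches a target-tracking policy; the table name, capacity limits, and target value are placeholders, and write capacity would be configured the same way:

import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameUserData",                      # placeholder table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=1000,
)
autoscaling.put_scaling_policy(
    PolicyName="GameUserDataReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameUserData",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # keep consumed reads around 70% of provisioned capacity
        "PredefinedMetricSpecification": {"PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
    },
)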
AWS Lambda scales your functions automatically on your behalf. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code. Since your code is stateless, AWS Lambda can start as many copies of your function as needed without lengthy deployment and configuration delays. The option that says: Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on the client device using ElastiCache is incorrect. Although CloudFront delivers content faster to your users using edge locations, you still cannot integrate a DynamoDB table with CloudFront as these two are incompatible. The option that says: Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU and WCU is incorrect because AWS Single Sign-On (SSO) is a cloud SSO service that just makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. This will not be of much help on the scalability and performance of the application. It is costly to manually set the provisioned read and write capacity to a higher RCU and WCU because this capacity will run round the clock and will still be the same even if the incoming traffic is stable and there is no need to scale. The option that says: Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds is incorrect because, by default, Auto Scaling is not enabled in a DynamoDB table which is created using the AWS CLI. References: https://aws.amazon.com/lambda/faqs/ https://aws.amazon.com/api-gateway/faqs/ https://aws.amazon.com/dynamodb/dax/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": "A large financial firm in the country has an AWS environment that contains several Reserved EC2 instances hosting a web application that has been decommissioned last week. To save costs, you need to stop incurring charges for the Reserved instances as soon as possible. What cost-effective steps will you take in this circumstance? (Select TWO.)", "options": [ "A. A. Contact AWS to cancel your AWS subscription.", "B. B. Go to the Amazon.com online shopping website and sell the Reserved instances.", "C. C. Go to the AWS Reserved Instance Marketplace and sell the Reserved instances.", "D. D. Terminate the Reserved instances as soon as possible to avoid getting billed at the on-demand price when it expires." ], "correct": "C and D", "explanation": "Explanation/Reference: The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers' unused Standard Reserved Instances, which vary in terms of lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity. Hence, the correct answers are: - Go to the AWS Reserved Instance Marketplace and sell the Reserved instances. - Terminate the Reserved instances as soon as possible to avoid getting billed at the on-demand price when it expires.
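For illustration, a Marketplace listing can also be created programmatically, roughly as in the sketch below; the Reserved Instance ID, instance count, and pricing schedule are placeholders, and the same listing is just as easily created from the EC2 console:

import boto3

ec2 = boto3.client("ec2")

ec2.create_reserved_instances_listing(
    ReservedInstancesId="11111111-2222-3333-4444-555555555555",  # placeholder RI ID
    InstanceCount=4,                                             # how many of the RIs to list
    PriceSchedules=[{"Term": 6, "Price": 120.0, "CurrencyCode": "USD"}],  # price for the remaining 6-month term
    ClientToken="decommissioned-web-app-ri-listing",             # idempotency token
)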
Stopping the Reserved instances as soon as possible is incorrect because a stopped instance can still be restarted. Take note that when a Reserved Instance expires, any instances that were covered by the Reserved Instance are billed at the on-demand price which costs significantly higher. Since the applic ation is already decommissioned, there is no point of kee ping the unused instances. It is also possible that there are associated Elastic IP addresses, which will inc ur charges if they are associated with stopped inst ances Contacting AWS to cancel your AWS subscription is i ncorrect as you don't need to close down your AWS account. Going to the Amazon.com online shopping website and selling the Reserved instances is incorrect as you have to use AWS Reserved Instance Marketplace t o sell your instances. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ri-market-general.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-instance-lifecycle.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A company generates large financial datasets with m illions of rows. The Solutions Architect needs to s tore all the data in a columnar fashion to reduce the number of disk I/O requests and reduce the amount of data needed to load from the disk. The bank has an exist ing third-party business intelligence application t hat will connect to the storage service and then generate da ily and monthly financial reports for its clients a round the globe. In this scenario, which is the best storage service to use to meet the requirement?", + "options": [ + "A. A. Amazon Redshift", + "B. B. Amazon RDS", + "C. C. Amazon DynamoDB", + "D. D. Amazon Aurora" + ], + "correct": "A. A. Amazon Redshift", + "explanation": "Explanation/Reference: Amazon Redshift is a fast, scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake. Redshift delivers ten times faster performance tha n other data warehouses by using machine learning, ma ssively parallel query execution, and columnar storage on high-performance disk. In this scenario, there is a requirement to have a storage service that will be used by a business int elligence application and where the data must be stored in a columnar fashion. Business Intelligence reporting systems are a type of Online Analytical Processing (OLAP) which Redshift is known to support. In addition, Redshift also provides columnar storage, unlike the other options. Hence, the correct answer in this scenario is Amazo n Redshift. References: https://docs.aws.amazon.com/redshift/latest/dg/c_co lumnar_storage_disk_mem_mgmnt.html https://aws.amazon.com/redshift/ Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg Check out this Amazon Redshift Cheat Sheet: https://tutorialsdojo.com/amazon-redshift/ Here is a case study on finding the most suitable a nalytical tool - Kinesis vs EMR vs Athena vs Redshift: https://youtu.be/wEOm6aiN4ww", + "references": "" + }, + { + "question": "A Solutions Architect needs to set up a bastion hos t in the cheapest, most secure way. The Architect s hould be the only person that can access it via SSH. Which of the following steps would satisfy this req uirement?", + "options": [ + "A. A. Set up a large EC2 instance and a security gro up that only allows access on port 22", + "B. B. 
Set up a large EC2 instance and a security gro up that only allows access on port 22 via your IP a ddress", + "C. C. Set up a small EC2 instance and a security gro up that only allows access on port 22 via your IP a ddress", + "D. D. Set up a small EC2 instance and a security gro up that only allows access on port 22" + ], + "correct": "C. C. Set up a small EC2 instance and a security gro up that only allows access on port 22 via your IP a ddress", + "explanation": "Explanation/Reference: A bastion host is a server whose purpose is to prov ide access to a private network from an external network, such as the Internet. Because of its expos ure to potential attack, a bastion host must minimi ze the chances of penetration. To create a bastion host, you can create a new EC2 instance which should only have a security group fr om a particular IP address for maximum security. Since the cost is also considered in the question, you s hould choose a small instance for your host. By default, t2.micro instance is used by AWS but you can change these settings during deployment. Setting up a large EC2 instance and a security grou p which only allows access on port 22 via your IP address is incorrect because you don't need to prov ision a large EC2 instance to run a single bastion host. At the same time, you are looking for the cheapest solution possible. The options that say: Set up a large EC2 instance a nd a security group which only allows access on port 22 and Set up a small EC2 instance and a secur ity group which only allows access on port 22 are both incorrect because you did not set your specifi c IP address to the security group rules, which pos sibly means that you publicly allow traffic from all sour ces in your security group. This is wrong as you sh ould only be the one to have access to the bastion host. References: https://docs.aws.amazon.com/quickstart/latest/linux -bastion/architecture.html https://aws.amazon.com/blogs/security/how-to-record -ssh-sessions-established-through-a-bastion-host/ Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "An online stocks trading application that stores fi nancial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a stric t compliance requirement where a surprise audit can happen at anytime and you should be able to retrieve the r equired data in under 15 minutes under all circumst ances. Your manager instructed you to ensure that retrieva l capacity is available when you need it and should handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the ab ove requirement? (Select TWO.)", + "options": [ + "A. A. Specify a range, or portion, of the financial data archive to retrieve.", + "B. B. Use Bulk Retrieval to access the financial dat a.", + "C. C. Purchase provisioned retrieval capacity.", + "D. D. Retrieve the data using Amazon Glacier Select." + ], + "correct": "", + "explanation": "Explanation/Reference: Expedited retrievals allow you to quickly access yo ur data when occasional urgent requests for a subse t of . archives are required. For all but the largest arch ives (250 MB+), data accessed using Expedited retri evals are typically made available within 15 minutes. Pro visioned Capacity ensures that retrieval capacity f or Expedited retrievals is available when you need it. 
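For illustration, requesting such an expedited retrieval with the AWS SDK for Python might look like the sketch below; the vault name and archive ID are placeholders:

import boto3

glacier = boto3.client("glacier")

glacier.initiate_job(
    accountId="-",                  # "-" means the account that owns the credentials
    vaultName="financial-archive",  # placeholder vault
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",  # placeholder
        "Tier": "Expedited",                # served from provisioned capacity when it has been purchased
    },
)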
To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs. If you have purchased provisioned capacity, then all expedited retrievals are automatically served through your provisioned capacity. Provisioned capacity ensures that your retrieval capacity for expedited retrievals is available when you need it. Each unit of capacity provides that at least three expedited retrievals can be performed every five minutes and provides up to 150 MB/s of retrieval throughput. You should purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes. Without provisioned capacity, Expedited retrievals are accepted, except for rare situations of unusually high demand. However, if you require access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity. Retrieving the data using Amazon Glacier Select is incorrect because this is not an archive retrieval option and is primarily used to perform filtering operations using simple Structured Query Language (SQL) statements directly on your data archive in Glacier. Using Bulk Retrieval to access the financial data is incorrect because bulk retrievals typically complete within 5 to 12 hours; hence, this does not satisfy the requirement of retrieving the data within 15 minutes. The provisioned capacity option is also not compatible with Bulk retrievals. Specifying a range, or portion, of the financial data archive to retrieve is incorrect because using ranged archive retrievals is not enough to meet the requirement of retrieving the whole archive in the given timeframe. In addition, it does not provide additional retrieval capacity, which is what the provisioned capacity option can offer. References: https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html https://docs.aws.amazon.com/amazonglacier/latest/dev/glacier-select.html Check out this Amazon S3 Glacier Cheat Sheet: https://tutorialsdojo.com/amazon-glacier/", "references": "" }, { "question": "A company has a web application that is relying entirely on slower disk-based databases, causing it to perform slowly. To improve its performance, the Solutions Architect integrated an in-memory data store into the web application using ElastiCache. How does Amazon ElastiCache improve database performance?", "options": [ "A. A. By caching database query results.", "B. B. It reduces the load on your database by routing read queries from your applications to the Read Replica.", "C. C. It securely delivers data to customers globally with low latency and high transfer speeds.", "D. D. It provides an in-memory cache that delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second." ], "correct": "A. A. By caching database query results.", "explanation": "Explanation/Reference: ElastiCache improves the performance of your database through caching query results. The primary purpose of an in-memory key-value store is to provide ultra-fast (submillisecond latency) and inexpensive access to copies of data. Most data stores have areas of data that are frequently accessed but seldom updated. Additionally, querying a database is always slower and more expensive than locating a key in a key-value pair cache.
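As a rough sketch of the cache-aside pattern this refers to (the Redis endpoint and the fetch_product helper are hypothetical, and the redis-py client is assumed to be installed), an application might wrap its expensive queries like this:

import json
import redis

cache = redis.Redis(host="my-cluster.abc123.0001.apse2.cache.amazonaws.com", port=6379)

def get_product(product_id, db):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:                   # cache hit: skip the slow disk-based database
        return json.loads(cached)
    row = db.fetch_product(product_id)       # hypothetical expensive database query
    cache.setex(key, 300, json.dumps(row))   # keep the result for 5 minutes
    return row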
Some database querie s are especially expensive to perform, for example, queries that involve joins across multiple tables o r queries with intensive calculations. By caching such query results, you pay the price of the query once and then are able to quickly retrie ve the data multiple times without having to re-execute th e query. The option that says: It securely delivers data to customers globally with low latency and high transf er speeds is incorrect because this option describes w hat CloudFront does and not ElastiCache. The option that says: It provides an in-memory cach e that delivers up to 10x performance improvement from milliseconds to microseconds or ev en at millions of requests per second is incorrect because this option describes what Amazon DynamoDB Accelerator (DAX) does and not ElastiCache. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB. Amazon ElastiCache cannot provi de a performance improvement from milliseconds to microseconds, let alone millions of requests per second like DAX can. The option that says: It reduces the load on your d atabase by routing read queries from your applications to the Read Replica is incorrect becau se this option describes what an RDS Read Replica does and not ElastiCache. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Regio n or in a different AWS Region. References: https://aws.amazon.com/elasticache/ https://docs.aws.amazon.com/AmazonElastiCache/lates t/red-ug/elasticache-use-cases.html Check out this Amazon Elasticache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/", + "references": "" + }, + { + "question": "You are automating the creation of EC2 instances in your VPC. Hence, you wrote a python script to trig ger the Amazon EC2 API to request 50 EC2 instances in a sin gle Availability Zone. However, you noticed that af ter 20 successful requests, subsequent requests failed. Wh at could be a reason for this issue and how would y ou resolve it?", + "options": [ + "A. By default, AWS allows you to provision a maximum of 20 instances per region. Select a different reg ion", + "B. There was an issue with the Amazon EC2 API. Just resend the requests and these will be provisioned", + "C. By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select a di fferent", + "D. There is a vCPU-based On-Demand Instance limit pe r region which is why subsequent requests failed. J ust" + ], + "correct": "D. There is a vCPU-based On-Demand Instance limit pe r region which is why subsequent requests failed. J ust", + "explanation": "Explanation/Reference: You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasin g 20 Reserved Instances, and requesting Spot Instance s per your dynamic Spot limit per region. New AWS accounts may start with limits that are low er than the limits described here. If you need more instances, complete the Amazon EC2 limit increase request form with your use case, an d . your limit increase will be considered. Limit incre ases are tied to the region they were requested for . Hence, the correct answer is: There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit i ncrease form to AWS and retry the failed requests o nce approved. The option that says: There was an issue with the A mazon EC2 API. 
Just resend the requests and these will be provisioned successfully is incorrect becau se you are limited to running On-Demand Instances p er your vCPU-based On-Demand Instance limit. There is also a limit of purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region hence, there is no problem with t he EC2 API. The option that says: By default, AWS allows you to provision a maximum of 20 instances per region. Select a different region and retry the failed requ est is incorrect. There is no need to select a diff erent region since this limit can be increased after subm itting a request form to AWS. The option that says: By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select a different Availability Zone and retry the failed request is incorrect beca use the vCPU-based On-Demand Instance limit is set per region and not per Availa bility Zone. This can be increased after submitting a request form to AWS. References: https://docs.aws.amazon.com/general/latest/gr/aws_s ervice_limits.html#limits_ec2 https://aws.amazon.com/ec2/faqs/#How_many_instances _can_I_run_in_Amazon_EC2 Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A company has a decoupled application in AWS using EC2, Auto Scaling group, S3, and SQS. The Solutions Architect designed the architecture in such a way t hat the EC2 instances will consume the message from the SQS queue and will automatically scale up or down b ased on the number of messages in the queue. In this scenario, which of the following statements is false about SQS?", + "options": [ + "A. A. Amazon SQS can help you build a distributed ap plication with decoupled components.", + "B. B. FIFO queues provide exactly-once processing.", + "C. C. Standard queues preserve the order of messages .", + "D. D. Standard queues provide at-least-once delivery , which means that each message is delivered at lea st" + ], + "correct": "C. C. Standard queues preserve the order of messages .", + "explanation": "Explanation/Reference: All of the answers are correct except for the optio n that says: Standard queues preserve the order of messages. Only FIFO queues can preserve the order o f messages and not standard queues.", + "references": "https://aws.amazon.com/sqs/faqs/ Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/" + }, + { + "question": "A production MySQL database hosted on Amazon RDS is running out of disk storage. The management has consulted its solutions architect to increase t he disk space without impacting the database perfor mance. How can the solutions architect satisfy the require ment with the LEAST operational overhead?", + "options": [ + "A. A. Change the default_storage_engine of the DB in stance's parameter group to MyISAM.", + "B. B. Modify the DB instance storage type to Provisi oned IOPS.", + "C. C. Modify the DB instance settings and enable sto rage autoscaling.", + "D. D. Increase the allocated storage for the DB inst ance." + ], + "correct": "C. C. Modify the DB instance settings and enable sto rage autoscaling.", + "explanation": "Explanation/Reference: RDS Storage Auto Scaling automatically scales stora ge capacity in response to growing database workloads, with zero downtime. 
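A minimal boto3 sketch of enabling it on an existing instance (the instance identifier and storage ceiling are hypothetical) could look like this:

import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage turns on storage autoscaling; the current allocation is untouched
# and RDS grows the volume automatically as free space runs low.
rds.modify_db_instance(
    DBInstanceIdentifier="production-mysql",
    MaxAllocatedStorage=1000,   # GiB ceiling for automatic growth
    ApplyImmediately=True,
)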
Under-provisioning could result in application down time, and over-provisioning could result in underutilized resources and higher costs. With RDS Storage Auto Scaling, you simply set your desired maximum storage limit, and Auto Scaling takes care of the rest. RDS Storage Auto Scaling continuously monitors actu al storage consumption, and scales capacity up automatically when actual utilization approaches pr ovisioned storage capacity. Auto Scaling works with new and existing database instances. You can enable Auto Scaling with just a few clicks in the AWS Management Console. There is no additional cost for RDS Storage Auto Scaling. You pay only for the RDS resources needed to run your applications. Hence, the correct answer is: Modify the DB instanc e settings and enable storage autoscaling. The option that says: Increase the allocated storag e for the DB instance is incorrect. Although this w ill solve the problem of low disk space, increasing the allocated storage might cause performance degradat ion during the change. The option that says: Change the default_storage_en gine of the DB instance's parameter group to MyISAM is incorrect. This is just a storage engine for MySQL. It won't increase the disk space in any way. The option that says: Modify the DB instance storag e type to Provisioned IOPS is incorrect. This may improve disk performance but it won't solve the pro blem of low database storage. References: https://aws.amazon.com/about-aws/whats-new/2019/06/ rds-storage-auto-scaling/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/ USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "A company installed sensors to track the number of people who visit the park. The data is sent every d ay to an Amazon Kinesis stream with default settings for pro cessing, in which a consumer is configured to proc ess the data every other day. You noticed that the S3 bucke t is not receiving all of the data that is being se nt to the Kinesis stream. You checked the sensors if they are properly sending the data to Amazon Kinesis and ve rified that the data is indeed sent every day. What could be the reason for this?", + "options": [ + "A. A. By default, Amazon S3 stores the data for 1 da y and moves it to Amazon Glacier.", + "B. B. There is a problem in the sensors. They probab ly had some intermittent connection hence, the data is", + "C. C. By default, the data records are only accessib le for 24 hours from the time they are added to a Kinesis", + "D. Your AWS account was hacked and someone has delet ed some data in your Kinesis stream." + ], + "correct": "C. C. By default, the data records are only accessib le for 24 hours from the time they are added to a Kinesis", + "explanation": "Explanation/Reference: Kinesis Data Streams supports changes to the data r ecord retention period of your stream. A Kinesis da ta stream is an ordered sequence of data records meant to be written to and read from in real-time. Data records are therefore stored in shards in your stre am temporarily. The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis data stream stores records from 2 4 hours by default to a maximum of 8760 hours (365 days). This is the reason why there are missing data in yo ur S3 bucket. 
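One of the fixes discussed below, raising the stream's retention period, can be sketched with boto3 as follows (the stream name and the 72-hour value are hypothetical):

import boto3

kinesis = boto3.client("kinesis")

# Extend retention beyond the 24-hour default so a consumer that runs every other day
# still finds the records it has not yet processed.
kinesis.increase_stream_retention_period(
    StreamName="park-visitor-stream",
    RetentionPeriodHours=72,
)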
To fix this, you can either configure your sensors to send the data everyday instead of every other day or alternatively, you can increase the re tention period of your Kinesis data stream. The option that says: There is a problem in the sen sors. They probably had some intermittent connection hence, the data is not sent to the strea m is incorrect. You already verified that the senso rs are working as they should be hence, this is not the ro ot cause of the issue. The option that says: By default, Amazon S3 stores the data for 1 day and moves it to Amazon Glacier is incorrect because by default, Amazon S3 does not store the data for 1 day only and move it to Amazo n Glacier. The option that says: Your AWS account was hacked a nd someone has deleted some data in your Kinesis stream is incorrect. Although this could be a possibility, you should verify first if there ar e other more probable reasons for the missing data in your S3 bucket. Be sure to follow and apply security bes t practices as well to prevent being hacked by someon e. By default, the data records are only accessible fo r 24 hours from the time they are added to a Kinesis stream, which depicts the root cause of thi s issue.", + "references": "http://docs.aws.amazon.com/streams/latest/dev/kines is-extended-retention.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" + }, + { + "question": "An auto scaling group of Linux EC2 instances is cre ated with basic monitoring enabled in CloudWatch. You noticed that your application is slow so you as ked one of your engineers to check all of your EC2 instances. After checking your instances, you notic ed that the auto scaling group is not launching mor e instances as it should be, even though the servers already have high memory usage. Which of the following options should the Architect implement to solve this issue?", + "options": [ + "A. A. Enable detailed monitoring on the instances.", + "B. B. Install AWS SDK in the EC2 instances. Create a script that will trigger the Auto Scaling event if there is", + "C. C. Modify the scaling policy to increase the thre shold to scale out the number of instances.", + "D. D. Install the CloudWatch agent to the EC2 instan ces which will trigger your Auto Scaling group to s cale out." + ], + "correct": "D. D. Install the CloudWatch agent to the EC2 instan ces which will trigger your Auto Scaling group to s cale out.", + "explanation": "Explanation/Reference: Amazon CloudWatch agent enables you to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. The agent suppor ts both Windows Server and Linux and allows you to select the metrics to be collected, including sub-r esource metrics such as per-CPU core. The premise of the scenario is that the EC2 servers have high memory usage, but since this specific me tric is not tracked by the Auto Scaling group by default , the scaling out activity is not being triggered. Remember that by default, CloudWatch doesn't monito r memory usage but only the CPU utilization, Network utilization, Disk performance, and Disk Rea ds/Writes. This is the reason why you have to install a CloudW atch agent in your EC2 instances to collect and mon itor the custom metric (memory usage), which will be use d by your Auto Scaling Group as a trigger for scali ng activities. Hence, the correct answer is: Install the CloudWatc h agent to the EC2 instances which will trigger your Auto Scaling group to scale out. 
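For illustration, once the agent publishes memory metrics, an alarm on that custom metric could be wired to a scale-out policy roughly like this (the policy ARN, Auto Scaling group name, and threshold are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical ARN of a simple scale-out policy attached to the Auto Scaling group.
scale_out_policy_arn = "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example-scale-out"

# The CloudWatch agent publishes memory usage under the CWAgent namespace.
cloudwatch.put_metric_alarm(
    AlarmName="asg-high-memory",
    Namespace="CWAgent",
    MetricName="mem_used_percent",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out_policy_arn],
)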
The option that says: Install AWS SDK in the EC2 in stances. Create a script that will trigger the Auto Scaling event if there is a high memory usage is in correct because AWS SDK is a set of programming tools that allow you to create applications that ru n using Amazon cloud services. You would have to program the alert which is not the best strategy fo r this scenario. The option that says: Enable detailed monitoring on the instances is incorrect because detailed monitoring does not provide metrics for memory usag e. CloudWatch does not monitor memory usage in its default set of EC2 metrics and detailed monitoring just provides a higher frequency of metrics (1-minu te frequency). The option that says: Modify the scaling policy to increase the threshold to scale out the number of instances is incorrect because you are already maxi ng out your usage, which should in effect cause an auto-scaling event. References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /monitoring/Install-CloudWatch-Agent.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /viewing_metrics_with_cloudwatch.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /monitoring_ec2.html Check out these Amazon EC2 and CloudWatch Cheat She ets: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ https://tutorialsdojo.com/amazon-cloudwatch/", + "references": "" + }, + { + "question": "A technical lead of the Cloud Infrastructure team w as consulted by a software developer regarding the required AWS resources of the web application that he is bui lding. The developer knows that an Instance Store o nly provides ephemeral storage where the data is automa tically deleted when the instance is terminated. T o ensure that the data of the web application persist s, the app should be launched in an EC2 instance th at has a durable, block-level storage volume attached. The d eveloper knows that they need to use an EBS volume, but they are not sure what type they need to use. In this scenario, which of the following is true ab out Amazon EBS volume types and their respective us age? (Select TWO.)", + "options": [ + "A. A. Single root I/O virtualization (SR-IOV) volume s are suitable for a broad range of workloads, incl uding", + "B. B. Provisioned IOPS volumes offer storage with co nsistent and low-latency performance, and are desig ned", + "C. C. Magnetic volumes provide the lowest cost per g igabyte of all EBS volume types and are ideal for", + "D. D. General Purpose SSD (gp3) volumes with multi-a ttach enabled offer consistent and low-latency" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon EBS provides three volume types to best meet the needs of your workloads: General Purpose (SSD) , Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) is the new, SSD-backed, gener al purpose EBS volume type that is recommended as the default choice for customers. General Purpose ( SSD) volumes are suitable for a broad range of workloads, including small to medium sized database s, development, and test environments, and boot volumes. Provisioned IOPS (SSD) volumes offer storage with c onsistent and low-latency performance and are designed for I/O intensive applications such as lar ge relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS vol ume types. Magnetic volumes are ideal for workloads where data are accessed infrequently, and applications where the lowest storage cost is important. Take note that th is is a Previous Generation Volume. 
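As a brief, hypothetical sketch of how these volume types are selected at creation time with boto3 (the Availability Zone and sizes are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS for an I/O-intensive database volume (100 GiB at the 50:1 limit).
ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="io1", Iops=5000)

# Cold HDD for infrequently accessed data where the lowest storage cost matters most.
ec2.create_volume(AvailabilityZone="us-east-1a", Size=500, VolumeType="sc1")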
The latest low- cost magnetic storage types are Cold HDD (sc1) and Throu ghput Optimized HDD (st1) volumes. Hence, the correct answers are: - Provisioned IOPS volumes offer storage with consi stent and low-latency performance, and are designed for I/ O intensive applications such as large relational o r NoSQL databases. - Magnetic volumes provide the lowest cost per giga byte of all EBS volume types and are ideal for work loads where data is accessed infrequently, and applicatio ns where the lowest storage cost is important. The option that says: Spot volumes provide the lowe st cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infr equently, and applications where the lowest storage cost is important is incorrect because ther e is no EBS type called a \"Spot volume\" however, th ere is an Instance purchasing option for Spot Instances . The option that says: General Purpose SSD (gp3) vol umes with multi-attach enabled offer consistent and low-latency performance, and are designed for a pplications requiring multi-az resiliency is incorrect because the multi-attach feature can only be enabled on EBS Provisioned IOPS io2 or io1 volumes. In addition, multi-attach won't offer mult i-az resiliency because this feature only allows an EBS volume to be attached on multiple instances within an availability zone. The option that says: Single root I/O virtualizatio n (SR-IOV) volumes are suitable for a broad range of workloads, including small to medium-sized datab ases, development and test environments, and boot volumes is incorrect because SR-IOV is related with Enhanced Networking on Linux and not in EBS. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSVolumeTypes.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /AmazonEBS.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A media company needs to configure an Amazon S3 buc ket to serve static assets for the public-facing we b application. Which methods ensure that all of the o bjects uploaded to the S3 bucket can be read public ly all over the Internet? (Select TWO.)", + "options": [ + "A. A. Create an IAM role to set the objects inside t he S3 bucket to public read.", + "B. B. Grant public read access to the object when up loading it using the S3 Console.", + "C. C. Configure the cross-origin resource sharing (C ORS) of the S3 bucket to allow objects to be public ly", + "D. D. Do nothing. Amazon S3 objects are already publ ic by default." + ], + "correct": "", + "explanation": "Explanation/Reference: By default, all Amazon S3 resources such as buckets , objects, and related subresources are private whi ch means that only the AWS account holder (resource ow ner) that created it has access to the resource. Th e resource owner can optionally grant access permissi ons to others by writing an access policy. In S3, y ou also set the permissions of the object during uploa d to make it public. Amazon S3 offers access policy options broadly cate gorized as resource-based policies and user policie s. Access policies you attach to your resources (bucke ts and objects) are referred to as resource-based p olicies. For example, bucket policies and access control lis ts (ACLs) are resource-based policies. You can also attach access policies to users in your account. Th ese are called user policies. 
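Both approaches can be illustrated with a short, hedged boto3 sketch (the bucket name and object are hypothetical, and S3 Block Public Access is assumed to be disabled for the bucket):

import json
import boto3

s3 = boto3.client("s3")

# Grant public read on an individual object at upload time.
s3.put_object(Bucket="example-static-assets", Key="css/site.css",
              Body=b"body { margin: 0; }", ACL="public-read")

# Or attach a bucket policy that makes every object publicly readable.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-static-assets/*",
    }],
}
s3.put_bucket_policy(Bucket="example-static-assets", Policy=json.dumps(policy))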
You may choose to use resource-based policies, user policies, or some com bination of these to manage permissions to your Amazon S3 resources. You can also manage the public permissions of your objects during upload. Under Manage public permissions, you can grant read access to your obje cts to the general public (everyone in the world), for all of the files that you're uploading. Granting public read access is applicable to a small subset of use cases such as when buckets are used for websites. Hence, the correct answers are: - Grant public read access to the object when uploa ding it using the S3 Console. - Configure the S3 bucket policy to set all objects to public read. The option that says: Configure the cross-origin re source sharing (CORS) of the S3 bucket to allow objects to be publicly accessible from all domains is incorrect. CORS will only allow objects from one domain (travel.cebu.com) to be loaded and accessibl e to a different domain (palawan.com). It won't necessarily expose objects for public access all ov er the internet. The option that says: Creating an IAM role to set t he objects inside the S3 bucket to public read is incorrect. You can create an IAM role and attach it to an EC2 instance in order to retrieve objects fr om the S3 bucket or add new ones. An IAM Role, in itself, cannot directly make the S3 objects public or chang e the permissions of each individual object. The option that says: Do nothing. Amazon S3 objects are already public by default is incorrect because , by default, all the S3 resources are private, so on ly the AWS account that created the resources can a ccess them. References: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-a ccess-control.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Buc ketRestrictions.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A Fortune 500 company which has numerous offices an d customers around the globe has hired you as their Principal Architect. You have staff and customers t hat upload gigabytes to terabytes of data to a cent ralized S3 bucket from the regional data centers, across conti nents, all over the world on a regular basis. At th e end of the financial year, there are thousands of data bei ng uploaded to the central S3 bucket which is in ap - southeast-2 (Sydney) region and a lot of employees are starting to complain about the slow upload time s. You were instructed by the CTO to resolve this issue as soon as possible to avoid any delays in processing their global end of financial year (EOFY) reports. Which feature in Amazon S3 enables fast, easy, and secure transfer of your files over long distances b etween your client and your Amazon S3 bucket?", + "options": [ + "A. A. Cross-Region Replication", + "B. B. Multipart Upload", + "C. C. AWS Global Accelerator", + "D. D. Transfer Acceleration" + ], + "correct": "D. D. Transfer Acceleration", + "explanation": "Explanation/Reference: Amazon S3 Transfer Acceleration enables fast, easy, and secure transfer of files over long distances between your client and your Amazon S3 bucket. Tran sfer Acceleration leverages Amazon CloudFront's globally distributed AWS Edge Locations. As data ar rives at an AWS Edge Location, data is routed to yo ur Amazon S3 bucket over an optimized network path. Amazon S3 Transfer Acceleration can speed up conten t transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger object s. 
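A minimal sketch of enabling and using the feature with boto3, assuming a hypothetical central bucket name, might look like this:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the central bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="eofy-central-reports",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint so data enters AWS at the nearest edge location.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("reports.zip", "eofy-central-reports", "2023/reports.zip")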
Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over t he Internet. S3 Transfer Acceleration (S3TA) reduce s the variability in Internet routing, congestion and speeds that can affect transfers, and logically sh ortens the distance to S3 for remote applications. S3TA improv es transfer performance by routing traffic through Amazon CloudFront's globally distributed Edge Locat ions and over AWS backbone networks, and by using network protocol optimizations. Hence, Transfer Acceleration is the correct answer. AWS Global Accelerator is incorrect because this se rvice is primarily used to optimize the path from y our users to your applications which improves the perfo rmance of your TCP and UDP traffic. Using Amazon S3 Transfer Acceleration is a more suitable service for this scenario. Cross-Region Replication is incorrect because this simply enables you to automatically copy S3 objects from one bucket to another bucket that is placed in a different AWS Region or within the same Region. Multipart Upload is incorrect because this feature simply allows you to upload a single object as a se t of parts. You can upload these object parts independen tly and in any order. If transmission of any part f ails, you can retransmit that part without affecting othe r parts. After all parts of your object are uploade d, Amazon S3 assembles these parts and creates the obj ect. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. References: https://aws.amazon.com/s3/faqs/ https://aws.amazon.com/s3/transfer-acceleration/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A company has a web-based order processing system t hat is currently using a standard queue in Amazon SQS. The IT Manager noticed that there are a lot of cases where an order was processed twice. This iss ue has caused a lot of trouble in processing and made the customers very unhappy. The manager has asked you t o ensure that this issue will not recur. What can you do to prevent this from happening agai n in the future? (Select TWO.)", + "options": [ + "A. A. Alter the visibility timeout of SQS.", + "B. B. Alter the retention period in Amazon SQS.", + "C. C. Replace Amazon SQS and instead, use Amazon Sim ple Workflow service.", + "D. D. Use an Amazon SQS FIFO Queue instead." + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon SQS FIFO (First-In-First-Out) Queues have al l the capabilities of the standard queue with additional capabilities designed to enhance messagi ng between applications when the order of operation s and events is critical, or where duplicates can't b e tolerated, for example: - Ensure that user-entered commands are executed in the right order. - Display the correct product pri ce by . sending price modifications in the right order. - P revent a student from enrolling in a course before registering for an account. Amazon SWF provides useful guarantees around task a ssignments. It ensures that a task is never duplicated and is assigned only once. Thus, even th ough you may have multiple workers for a particular activity type (or a number of instances of a decide r), Amazon SWF will give a specific task to only on e worker (or one decider instance). 
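The SQS FIFO half of the answer can be sketched briefly as well (the queue and group names are hypothetical); content-based deduplication discards a message whose body matches one sent within the preceding five minutes:

import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="orders.fifo",   # FIFO queues require the .fifo suffix
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"order_id": "12345"}',
    MessageGroupId="orders",   # messages in the same group are delivered in order
)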
Additionally, Ama zon SWF keeps at most one decision task outstanding at a time for a workflow execution. Thus, you can r un multiple decider instances without worrying abou t two instances operating on the same execution simul taneously. These facilities enable you to coordinat e your workflow without worrying about duplicate, los t, or conflicting tasks. The main issue in this scenario is that the order m anagement system produces duplicate orders at times . Since the company is using SQS, there is a possibil ity that a message can have a duplicate in case an EC2 . instance failed to delete the already processed mes sage. To prevent this issue from happening, you hav e to use Amazon Simple Workflow service instead of SQS. Therefore, the correct answers are: - Replace Amazon SQS and instead, use Amazon Simple Workflow service. - Use an Amazon SQS FIFO Queue instead. Altering the retention period in Amazon SQS is inco rrect because the retention period simply specifies if the Amazon SQS should delete the messages that have been in a queue for a certain period of time. Altering the visibility timeout of SQS is incorrect because for standard queues, the visibility timeou t isn't a guarantee against receiving a message twice. To a void duplicate SQS messages, it is better to design your applications to be idempotent (they should not be a ffected adversely when processing the same message more than once). Changing the message size in SQS is incorrect becau se this is not related at all in this scenario. References: https://aws.amazon.com/swf/faqs/ https://aws.amazon.com/swf/ https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-visibility-timeout.html Check out this Amazon SWF Cheat Sheet: https://tutorialsdojo.com/amazon-simple-workflow-am azon-swf/ Amazon Simple Workflow (SWF) vs AWS Step Functions vs Amazon SQS: https://tutorialsdojo.com/amazon-simple-workflow-sw f-vs-aws-step-functions-vs-amazon-sqs/", + "references": "" + }, + { + "question": "A startup plans to develop a multiplayer game that uses UDP as the protocol for communication between clients and game servers. The data of the users wil l be stored in a key-value store. As the Solutions Architect, you need to implement a solution that will distribu te the traffic across a number of servers. Which of the following could help you achieve this requirement?", + "options": [ + "A. A. Distribute the traffic using Network Load Bala ncer and store the data in Amazon DynamoDB.", + "B. B. Distribute the traffic using Application Load Balancer and store the data in Amazon RDS.", + "C. C. Distribute the traffic using Network Load Bala ncer and store the data in Amazon Aurora.", + "D. D. Distribute the traffic using Application Load Balancer and store the data in Amazon DynamoDB." + ], + "correct": "A. A. Distribute the traffic using Network Load Bala ncer and store the data in Amazon DynamoDB.", + "explanation": "Explanation/Reference: A Network Load Balancer functions at the fourth lay er of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. Afte r the load balancer receives a connection request, it selects a target from the target group for the defa ult rule. For UDP traffic, the load balancer select s a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP addr ess, and destination port. A UDP flow has the same sourc e and destination, so it is consistently routed to a single target throughout its lifetime. 
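As an illustrative sketch only (the VPC ID, load balancer ARN, and port are hypothetical), a UDP target group and listener on a Network Load Balancer could be created like this:

import boto3

elbv2 = boto3.client("elbv2")

tg = elbv2.create_target_group(
    Name="game-servers",
    Protocol="UDP",
    Port=7777,
    VpcId="vpc-0a1b2c3d",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/abc123",
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)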
Different UD P flows have different source IP addresses and port s, so they can be routed to different targets. In this scenario, a startup plans to create a multi player game that uses UDP as the protocol for communications. Since UDP is a Layer 4 traffic, we can limit the option that uses Network Load Balance r. The data of the users will be stored in a key-value store. This means that we should select Amazon DynamoDB since it supports both document and key-va lue store models. Hence, the correct answer is: Distribute the traffi c using Network Load Balancer and store the data in Amazon DynamoDB. The option that says: Distribute the traffic using Application Load Balancer and store the data in Amazon DynamoDB is incorrect because UDP is not sup ported in Application Load Balancer. Remember that UDP is a Layer 4 traffic. Therefore, you shoul d use a Network Load Balancer. The option that says: Distribute the traffic using Network Load Balancer and store the data in Amazon Aurora is incorrect because Amazon Aurora is a relational database service. Instead of Aurora, you should use Amazon DynamoDB. The option that says: Distribute the traffic using Application Load Balancer and store the data in Amazon RDS is incorrect because Application Load Ba lancer only supports application traffic (Layer 7). Also, Amazon RDS is not suitable as a key-value sto re. You should use DynamoDB since it supports both document and key-value store models. References: https://aws.amazon.com/blogs/aws/new-udp-load-balan cing-for-network-load-balancer/ https://docs.aws.amazon.com/elasticloadbalancing/la test/network/introduction.html Check out this AWS Elastic Load Balancing Cheat She et: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/", + "references": "" + }, + { + "question": "An online trading platform with thousands of client s across the globe is hosted in AWS. To reduce late ncy, you have to direct user traffic to the nearest applicat ion endpoint to the client. The traffic should be r outed to the closest edge location via an Anycast static IP addr ess. AWS Shield should also be integrated into the solution for DDoS protection. Which of the following is the MOST suitable service that the Solutions Architect should use to satisfy the above requirements?", + "options": [ + "A. A. AWS WAF", + "B. B. Amazon CloudFront", + "C. C. AWS PrivateLink", + "D. D. AWS Global Accelerator" + ], + "correct": "D. D. AWS Global Accelerator", + "explanation": "Explanation/Reference: AWS Global Accelerator is a service that improves t he availability and performance of your application s with local or global users. It provides static IP a ddresses that act as a fixed entry point to your ap plication endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances. AWS Global Accelerator uses the AWS global network to optimize the path from your users to your applications, improving the performance of your TCP and UDP traffic. AWS Global Accelerator continually monitors the health of your application endpoints and will detect an unhealthy endpoint an d redirect traffic to healthy endpoints in less than 1 minute. Many applications, such as gaming, media, mobile ap plications, and financial applications, need very l ow latency for a great user experience. To improve the user experience, AWS Global Accelerator directs us er traffic to the nearest application endpoint to the client, thus reducing internet latency and jitter. 
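A hedged boto3 sketch of provisioning such an accelerator for UDP game traffic (the names and port are hypothetical; the Global Accelerator API itself is served from us-west-2):

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)

ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7777, "ToPort": 7777}],
)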
It routes the traffic to the closest edge location via Anycast, t hen by routing it to the closest regional endpoint over the AWS global network. AWS Global Accelerator quickly reacts to changes in network performance to improve your users' application performance. AWS Global Accelerator and Amazon CloudFront are se parate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (su ch as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good f it for non-HTTP use cases, such as gaming (UDP), IoT ( MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both ser vices integrate with AWS Shield for DDoS protection. Hence, the correct answer is AWS Global Accelerator . Amazon CloudFront is incorrect because although thi s service uses edge locations, it doesn't have the capability to route the traffic to the closest edge location via an Anycast static IP address. AWS WAF is incorrect because the this service is ju st a web application firewall that helps protect yo ur web applications or APIs against common web exploit s that may affect availability, compromise security , or consume excessive resources AWS PrivateLink is incorrect because this service s imply provides private connectivity between VPCs, AWS services, and on-premises applications, securel y on the Amazon network. It doesn't route traffic t o the closest edge location via an Anycast static IP address. References: https://aws.amazon.com/global-accelerator/ https://aws.amazon.com/global-accelerator/faqs/ Check out this AWS Global Accelerator Cheat Sheet: https://tutorialsdojo.com/aws-global-accelerator/", + "references": "" + }, + { + "question": "A company launched an online platform that allows p eople to easily buy, sell, spend, and manage their cryptocurrency. To meet the strict IT audit require ments, each of the API calls on all of the AWS reso urces should be properly captured and recorded. You used CloudTrail in the VPC to help you in the compliance , operational auditing, and risk auditing of your AWS account. In this scenario, where does CloudTrail store all o f the logs that it creates?", + "options": [ + "A. A. DynamoDB", + "B. B. Amazon S3", + "C. C. Amazon Redshift", + "D. D. A RDS instance" + ], + "correct": "B. B. Amazon S3", + "explanation": "Explanation/Reference: CloudTrail is enabled on your AWS account when you create it. When activity occurs in your AWS account, that activity is recorded in a CloudTrail event. You can easily view events in the CloudTrail console by going to Event history. Event history allows you to view, search, and downl oad the past 90 days of supported activity in your AWS account. In addition, you can create a CloudTrail t rail to further archive, analyze, and respond to ch anges in your AWS resources. A trail is a configuration that enables the delivery of events to an Amazon S3 buc ket that you specify. You can also deliver and analyze events in a trail with Amazon CloudWatch Logs and Amazon CloudWatch Events. You can create a trail wi th the CloudTrail console, the AWS CLI, or the CloudTrail API. The rest of the answers are incorrect. 
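For reference, a minimal boto3 sketch of creating such a trail (the trail and bucket names are hypothetical, and the bucket is assumed to already carry a policy that lets CloudTrail write to it):

import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-logs",
    IsMultiRegionTrail=True,   # capture API calls from every region
)
cloudtrail.start_logging(Name="org-audit-trail")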
DynamoDB and an RDS instance are for database; Amazon Redshift is used for data warehouse that scales hor izontally and allows you to store terabytes and pet abytes of data. References: https://docs.aws.amazon.com/awscloudtrail/latest/us erguide/how-cloudtrail-works.html https://aws.amazon.com/cloudtrail/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", + "references": "" + }, + { + "question": "An application is using a RESTful API hosted in AWS which uses Amazon API Gateway and AWS Lambda. There is a requirement to trace and analyze user re quests as they travel through your Amazon API Gatew ay APIs to the underlying services. Which of the following is the most suitable service to use to meet this requirement?", + "options": [ + "A. A. CloudWatch", + "B. B. CloudTrail", + "C. C. AWS X-Ray", + "D. D. VPC Flow Logs" + ], + "correct": "C. C. AWS X-Ray", + "explanation": "Explanation/Reference: You can use AWS X-Ray to trace and analyze user req uests as they travel through your Amazon API Gateway APIs to the underlying services. API Gatewa y supports AWS X-Ray tracing for all API Gateway endpoint types: regional, edge-optimized, and priva te. You can use AWS X-Ray with Amazon API Gateway in all regions where X-Ray is available. X-Ray gives you an end-to-end view of an entire req uest, so you can analyze latencies in your APIs and their backend services. You can use an X-Ray servic e map to view the latency of an entire request and that of the downstream services that are integrated with X-Ray. And you can configure sampling rules to tel l X- Ray which requests to record, at what sampling rate s, according to criteria that you specify. If you c all an API Gateway API from a service that's already being traced, API Gateway passes the trace through, even if X-Ray tracing is not enabled on the API. You can enable X-Ray for an API stage by using the API Gateway management console, or by using the API Gateway API or CLI. VPC Flow Logs is incorrect because this is a featur e that enables you to capture information about the IP traffic going to and from network interfaces in you r entire VPC. Although it can capture some details about the incoming user requests, it is still better to u se AWS X-Ray as it provides a better way to debug a nd analyze your microservices applications with reques t tracing so you can find the root cause of your is sues and performance. CloudWatch is incorrect because this is a monitorin g and management service. It does not have the capability to trace and analyze user requests as th ey travel through your Amazon API Gateway APIs. CloudTrail is incorrect because this is primarily u sed for IT audits and API logging of all of your AW S resources. It does not have the capability to trace and analyze user requests as they travel through y our Amazon API Gateway APIs, unlike AWS X-Ray.", + "references": "https://docs.aws.amazon.com/apigateway/latest/devel operguide/apigateway-xray.html Check out this AWS X-Ray Cheat Sheet: https://tutorialsdojo.com/aws-x-ray/ Instrumenting your Application with AWS X-Ray: https://tutorialsdojo.com/instrumenting-your-applic ation-with-aws-x-ray/" + }, + { + "question": "A real-time data analytics application is using AWS Lambda to process data and store results in JSON f ormat to an S3 bucket. To speed up the existing workflow, you have to use a service where you can run sophis ticated Big Data analytics on your data without moving them into a separate analytics system. 
Which of the following group of services can you use to meet this requirement?", "options": [ "A. A. Amazon X-Ray, Amazon Neptune, DynamoDB", "B. B. S3 Select, Amazon Neptune, DynamoDB DAX", "C. C. Amazon Glue, Glacier Select, Amazon Redshift", "D. D. S3 Select, Amazon Athena, Amazon Redshift Spectrum", "A. A. Set the IOPS to 400 then maintain a low queue length.", "B. B. Set the IOPS to 500 then maintain a low queue length.", "C. C. Set the IOPS to 800 then maintain a low queue length.", "D. D. Set the IOPS to 600 then maintain a high queue length." ], "correct": "B. B. Set the IOPS to 500 then maintain a low queue length.", "explanation": "Explanation/Reference: Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Unlike gp2, which uses a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year. An io1 volume can range in size from 4 GiB to 16 TiB. You can provision from 100 IOPS up to 64,000 IOPS per volume on Nitro system instance families and up to 32,000 on other instance families. The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1.
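To illustrate how that ratio translates into an API call, a hedged boto3 sketch (the Availability Zone is hypothetical) is shown here:

import boto3

ec2 = boto3.client("ec2")

size_gib = 10
max_iops = min(size_gib * 50, 64000)   # the 50:1 ratio caps a 10 GiB io1 volume at 500 IOPS

ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=size_gib,
    VolumeType="io1",
    Iops=max_iops,
)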
Therefore, for instance, a 10 GiB volume can be pro visioned with up to 500 IOPS. Any volume 640 GiB in size or greater allows provisioning up to a maximum of 32,000 IOPS (50 \u00d7 640 GiB = 32,000). Hence, the correct answer is to set the IOPS to 500 then maint ain a low queue length. Setting the IOPS to 400 then maintaining a low queu e length is incorrect because although a value of 400 is an acceptable value, it is not the maximum v alue for the IOPS. You will not fully utilize the available IOPS that the volume can offer if you jus t set it to 400. The options that say: Set the IOPS to 600 then main tain a high queue length and Set the IOPS to 800 then maintain a low queue length are both incorrect because the maximum IOPS for the 10 GiB volume is only 500. Therefore, any value greater than the maximum amount, such as 600 or 800, is wrong. Moreover, you should keep the latency down by maint aining a low queue length, and not higher. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ EBSVolumeTypes.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-io-characteristics.html Amazon EBS Overview - SSD vs HDD: https://youtube.com/watch?v=LW7x8wyLFvw&t=8s Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "https://aws.amazon.com/s3/features/#Query_in_Place Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg Check out these AWS Cheat Sheets: https://tutorialsdojo.com/amazon-s3/ https://tutorialsdojo.com/amazon-athena/ https://tutorialsdojo.com/amazon-redshift/ QUESTION 216 A company has a High Performance Computing (HPC) cl uster that is composed of EC2 Instances with Provisioned IOPS volume to process transaction-inte nsive, low-latency workloads. The Solutions Archite ct must maintain high IOPS while keeping the latency down b y setting the optimal queue length for the volume. The size of each volume is 10 GiB. Which of the following is the MOST suitable configu ration that the Architect should set up?" + }, + { + "question": "A Solutions Architect is designing the cloud archit ecture for the enterprise application suite of the company. Both the web and application tiers need to access t he Internet to fetch data from public APIs. However , these servers should be inaccessible from the Internet. Which of the following steps should the Architect i mplement to meet the above requirements? A. A. Deploy the web and application tier instances to a public subnet and then allocate an Elastic IP address to each EC2 instance.", + "options": [ + "B. B. Deploy the web and application tier instances to a private subnet and then allocate an Elastic IP address", + "C. C. Deploy a NAT gateway in the private subnet and add a route to it from the public subnet where the web", + "D. D. Deploy a NAT gateway in the public subnet and add a route to it from the private subnet where the web" + ], + "correct": "D. D. Deploy a NAT gateway in the public subnet and add a route to it from the private subnet where the web", + "explanation": "Explanation/Reference: You can use a network address translation (NAT) gat eway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection w ith those instances. You are charged for creating and u sing a NAT gateway in your account. NAT gateway hourly usage and data processing rates apply. Amazon EC2 charges for data transfer also apply. 
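A minimal boto3 sketch of the setup described in this explanation (the subnet and route table IDs are hypothetical placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and place the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId="subnet-0a1b2c3d",          # public subnet
                             AllocationId=eip["AllocationId"])

# Send Internet-bound traffic from the private subnets' route table to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0d4e5f6a",                                  # private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)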
NAT gateways are not supported for IPv6 traf fic--use an egress-only internet gateway instead. To create a NAT gateway, you must specify the publi c subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to asso ciate with the NAT gateway when you create it. The Elastic IP address cannot be changed once you assoc iate it with the NAT Gateway. After you've created a NAT gateway, you must update the route table associated with one or more of you r private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your pri vate subnets to communicate with the internet. Each NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone. You have a limit on the number of NAT gateways you can create in an Availability Zone. Hence, the correct answer is to deploy a NAT gatewa y in the public subnet and add a route to it from the private subnet where the web and application ti ers are hosted. Deploying the web and application tier instances to a private subnet and then allocating an Elastic IP address to each EC2 instance is incorrect because a n Elastic IP address is just a static, public IPv4 address. In this scenario, you have to use a NAT Gateway ins tead. Deploying a NAT gateway in the private subnet and a dding a route to it from the public subnet where the web and application tiers are hosted is i ncorrect because you have to deploy a NAT gateway in the public subnet instead and not on a private o ne. Deploying the web and application tier instances to a public subnet and then allocating an Elastic IP address to each EC2 instance is incorrect because h aving an EIP address is irrelevant as it is only a static, public IPv4 address. Moreover, you should deploy th e web and application tier in the private subnet in stead of a public subnet to make it inaccessible from the Internet and then just add a NAT Gateway to allow outbound Internet connection.", + "references": "https://docs.aws.amazon.com/vpc/latest/userguide/vp c-nat-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/" + }, + { + "question": "A company has a web application hosted in AWS cloud where the application logs are sent to Amazon CloudWatch. Lately, the web application has recentl y been encountering some errors which can be resolved simply by restarting the instance. What will you do to automatically restart the EC2 i nstances whenever the same application error occurs ?", + "options": [ + "A. A. First, look at the existing CloudWatch logs fo r keywords related to the application error to crea te a", + "B. B. First, look at the existing CloudWatch logs fo r keywords related to the application error to crea te a", + "C. C. First, look at the existing Flow logs for keyw ords related to the application error to create a c ustom", + "D. D. First, look at the existing Flow logs for keyw ords related to the application error to create a c ustom metric. Then, create a CloudWatch alarm for that cu stom metric which calls a Lambda function that invo kes" + ], + "correct": "A. A. 
First, look at the existing CloudWatch logs fo r keywords related to the application error to crea te a", + "explanation": "Explanation/Reference: In this scenario, you can look at the existing Clou dWatch logs for keywords related to the application error to create a custom metric. Then, create a CloudWatc h alarm for that custom metric which invokes an act ion to restart the EC2 instance. You can create alarms that automatically stop, term inate, reboot, or recover your EC2 instances using Amazon CloudWatch alarm actions. You can use the st op or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover the m onto new hardware if a system impairment occurs. Hence, the correct answer is: First, look at the ex isting CloudWatch logs for keywords related to the application error to create a custom metric. Then, create a CloudWatch alarm for that custom metric which invokes an action to restart the EC2 i nstance. The option that says: First, look at the existing C loudWatch logs for keywords related to the application error to create a custom metric. Then, create an alarm in Amazon SNS for that custom metric which invokes an action to restart the EC2 i nstance is incorrect because you can't create an alarm in Amazon SNS. The following options are incorrect because Flow Lo gs are used in VPC and not on specific EC2 instance : - First, look at the existing Flow logs for keyword s related to the application error to create a cust om metric. Then, create a CloudWatch alarm for that cu stom metric which invokes an action to restart the EC2 instance. First, look at the existing Flow logs for keywords related to the application error to create a custom metric. Then, create a CloudWatch alarm for that cu stom metric which calls a Lambda function that invokes an action to restart the EC2 instance.", + "references": "https://docs.aws.amazon.com/AmazonCloudWatch/latest /monitoring/UsingAlarmActions.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/" + }, + { + "question": "A company decided to change its third-party data an alytics tool to a cheaper solution. They sent a ful l data export on a CSV file which contains all of their an alytics information. You then save the CSV file to an S3 bucket for storage. Your manager asked you to do so me validation on the provided data export. In this scenario, what is the most cost-effective a nd easiest way to analyze export data using standar d SQL?", + "options": [ + "A. A. Create a migration tool to load the CSV export file from S3 to a DynamoDB instance. Once the data has", + "B. B. To be able to run SQL queries, use AWS Athena to analyze the export data file in S3.", + "C. C. Use a migration tool to load the CSV export fi le from S3 to a database that is designed for onlin e analytic", + "D. D. Use mysqldump client utility to load the CSV e xport file from S3 to a MySQL RDS instance. Run som e" + ], + "correct": "B. B. To be able to run SQL queries, use AWS Athena to analyze the export data file in S3.", + "explanation": "Explanation/Reference: Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard S QL. With a few actions in the AWS Management Console, you can point Athena at your data stored i n Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. 
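As a quick, hypothetical sketch (the database, table, and bucket names below are placeholders that assume a table has already been defined over the CSV export), a query could be issued with boto3 like this:

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT client_id, SUM(amount) AS total FROM analytics_export GROUP BY client_id",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])   # poll get_query_execution until the query finishes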
Athena is serverless, so there is no infrastructure to set up or manage, and you pay only for the quer ies you run. Athena scales automatically--executing queries in parallel--so results are fast, even with large datasets and complex queries. Athena helps you analyze unstructured, semi-structu red, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data format s such as Apache Parquet and Apache ORC. You can use Athena to run ad-hoc queries using ANSI SQL , without the need to aggregate or load the data in to Athena. Hence, the correct answer is: To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3. The rest of the options are all incorrect because i t is not necessary to set up a database to be able to analyze the CSV export file. You can use a cost-effective o ption (AWS Athena), which is a serverless service t hat enables you to pay only for the queries you run.", + "references": "https://docs.aws.amazon.com/athena/latest/ug/what-i s.html Check out this Amazon Athena Cheat Sheet: https://tutorialsdojo.com/amazon-athena/" + }, + { + "question": "A company has hundreds of VPCs with multiple VPN co nnections to their data centers spanning 5 AWS Regions. As the number of its workloads grows, the company must be able to scale its networks across multiple accounts and VPCs to keep up. A Solutions Architect is tasked to interconnect all of the comp any's on- premises networks, VPNs, and VPCs into a single gat eway, which includes support for inter- region peer ing across multiple AWS regions. Which of the following is the BEST solution that th e architect should set up to support the required interconnectivity?", + "options": [ + "A. A. Set up an AWS VPN CloudHub for inter-region VP C access and a Direct Connect gateway for the VPN", + "B. B. Set up an AWS Direct Connect Gateway to achiev e inter-region VPC access to all of the AWS resourc es and on-premises data centers. Set up a link aggrega tion group (LAG) to aggregate multiple connections at", + "C. C. Enable inter-region VPC peering that allows pe ering relationships to be established between multi ple", + "D. D. Set up an AWS Transit Gateway in each region t o interconnect all networks within it. Then, route traffic" + ], + "correct": "D. D. Set up an AWS Transit Gateway in each region t o interconnect all networks within it. Then, route traffic", + "explanation": "Explanation/Reference: AWS Transit Gateway is a service that enables custo mers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single g ateway. As you grow the number of workloads running on AWS, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth. Today, you can connect pairs of Amazon VPCs using p eering. However, managing point-to-point connectivity across many Amazon VPCs without the ab ility to centrally manage the connectivity policies can be operationally costly and cumbersome. For on- premises connectivity, you need to attach your AWS VPN to each individual Amazon VPC. This solution ca n be time-consuming to build and hard to manage when the number of VPCs grows into the hundreds. With AWS Transit Gateway, you only have to create a nd manage a single connection from the central gateway to each Amazon VPC, on-premises data center , or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act l ike spokes. 
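Before going further, a rough boto3 sketch of this hub-and-spoke setup is shown below: it creates a regional transit gateway, attaches one VPC as a spoke, and peers the gateway with a transit gateway in another region. All identifiers (VPC, subnet, peer transit gateway, account ID) are hypothetical placeholders, and a real deployment would repeat this per region and add the corresponding route table entries.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed primary region

# Create the regional transit gateway that acts as the hub for this region.
tgw = ec2.create_transit_gateway(
    Description="Hub for us-east-1 VPCs and VPNs",
    Options={"DefaultRouteTableAssociation": "enable", "DefaultRouteTablePropagation": "enable"},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one of the VPCs (a spoke) to the transit gateway.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",           # hypothetical VPC
    SubnetIds=["subnet-0123456789abcdef0"],  # hypothetical subnet; in practice one per AZ
)

# Peer this transit gateway with another transit gateway in a different region.
ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw_id,
    PeerTransitGatewayId="tgw-0fedcba9876543210",  # hypothetical peer transit gateway
    PeerAccountId="123456789012",                  # hypothetical account
    PeerRegion="eu-west-1",
)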
This hub and spoke model significantly simp lifies management and reduces operational costs because each network only has to connect to the Tra nsit Gateway and not to every other network. Any ne w VPC is simply connected to the Transit Gateway and is then automatically available to every other netw ork that is connected to the Transit Gateway. This ease of connectivity makes it easy to scale your networ k as you grow. . It acts as a Regional virtual router for traffic fl owing between your virtual private clouds (VPC) and VPN connections. A transit gateway scales elastically b ased on the volume of network traffic. Routing thro ugh a transit gateway operates at layer 3, where the pack ets are sent to a specific next-hop attachment, bas ed on their destination IP addresses. A transit gateway attachment is both a source and a destination of packets. You can attach the followi ng resources to your transit gateway: - One or more VPCs - One or more VPN connections - One or more AWS Direct Connect gateways - One or more transit gateway peering connections If you attach a transit gateway peering connection, the transit gateway must be in a different Region. Hence, the correct answer is: Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then, route traffic between the transit gateways through a peering connection. The option that says: Set up an AWS Direct Connect Gateway to achieve inter-region VPC access to all of the AWS resources and on-premises data cente rs. Set up a link aggregation group (LAG) to aggregate multiple connections at a single AWS Dire ct Connect endpoint in order to treat them as a single, managed connection. Launch a virtual privat e gateway in each VPC and then create a public virtual interface for each AWS Direct Connect conne ction to the Direct Connect Gateway is incorrect. You can only create a private virtual interface to a Direct Connect gateway and not a public virtual interface. Using a link aggregation group (LAG) is also irrelevant in this scenario because it is just a logical interface that uses the Link Aggregation Control Pr otocol (LACP) to aggregate multiple connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. The option that says: Enable inter-region VPC peeri ng which allows peering relationships to be established between VPCs across different AWS regio ns. This will ensure that the traffic will always stay on the global AWS backbone and will never trav erse the public Internet is incorrect. This would require a lot of manual set up and management overh ead to successfully build a functional, error-free inter- region VPC network compared with just using a Trans it Gateway. Although the Inter-Region VPC Peering provides a cost-effective way to share resources be tween regions or replicate data for geographic redundancy, its connections are not dedicated and h ighly available. Moreover, it doesn't support the company's on-premises data centers in multiple AWS Regions. The option that says: Set up an AWS VPN CloudHub fo r inter-region VPC access and a Direct Connect gateway for the VPN connections to the on-p remises data centers. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway is incorre ct. 
This option doesn't meet the requirement of interconnecting all of the company's on-premises ne tworks, VPNs, and VPCs into a single gateway, which includes support for inter-region peering across mu ltiple AWS regions. As its name implies, the AWS VP N CloudHub is only for VPNs and not for VPCs. It is a lso not capable of managing hundreds of VPCs with multiple VPN connections to their data centers that span multiple AWS Regions. References: https://aws.amazon.com/transit-gateway/ https://docs.aws.amazon.com/vpc/latest/tgw/how-tran sit-gateways-work.html https://aws.amazon.com/blogs/networking-and-content -delivery/building-a-global-network-using-aws-trans it- gateway-inter-region-peering/ Check out this AWS Transit Gateway Cheat Sheet: https://tutorialsdojo.com/aws-transit-gateway/", + "references": "" + }, + { + "question": "A popular augmented reality (AR) mobile game is hea vily using a RESTful API which is hosted in AWS. The API uses Amazon API Gateway and a DynamoDB tabl e with a preconfigured read and write capacity. Based on your systems monitoring, the DynamoDB tabl e begins to throttle requests during high peak load s which causes the slow performance of the game. Which of the following can you do to improve the pe rformance of your app? A. A. Add the DynamoDB table to an Auto Scaling Group.", + "options": [ + "B. B. Create an SQS queue in front of the DynamoDB t able.", + "C. C. Integrate an Application Load Balancer with yo ur DynamoDB table.", + "D. D. Use DynamoDB Auto Scaling" + ], + "correct": "D. D. Use DynamoDB Auto Scaling", + "explanation": "Explanation/Reference: DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisi oned read and write capacity to handle sudden incre ases in traffic, without throttling. When the workload d ecreases, Application Auto Scaling decreases the throughput so that you don't pay for unused provisi oned capacity. Using DynamoDB Auto Scaling is the best answer. Dyn amoDB Auto Scaling uses the AWS Application Auto Scaling service to dynamically adjust provisio ned throughput capacity on your behalf. Integrating an Application Load Balancer with your DynamoDB table is incorrect because an Application Load Balancer is not suitable to be use d with DynamoDB and in addition, this will not incr ease the throughput of your DynamoDB table. Adding the DynamoDB table to an Auto Scaling Group is incorrect because you usually put EC2 instances on an Auto Scaling Group, and not a Dynam oDB table. Creating an SQS queue in front of the DynamoDB tabl e is incorrect because this is not a design principle for high throughput DynamoDB table. Using SQS is for handling queuing and polling the reques t. This will not increase the throughput of DynamoDB w hich is required in this situation.", + "references": "https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/AutoScaling.html Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Amazon DynamoDB Overview: https://youtube.com/watch?v=3ZOyUNIeorU" + }, + { + "question": "A new company policy requires IAM users to change t heir passwords' minimum length to 12 characters. After a random inspection, you found out that there are still employees who do not follow the policy. 
How can you automatically check and evaluate whethe r the current password policy for an account compli es with the company password policy?", + "options": [ + "A. A. Create a Scheduled Lambda Function that will r un a custom script to check compliance against chan ges", + "B. B. Create a CloudTrail trail. Filter the result b y setting the attribute to \"Event Name\" and lookup value to", + "C. C. Create a rule in the Amazon CloudWatch event. Build an event pattern to match events on IAM. Set the", + "D. D. Configure AWS Config to trigger an evaluation that will check the compliance for a user's passwor d" + ], + "correct": "", + "explanation": "Explanation/Reference: AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. In the scenario given, we can utilize AWS Config to check for compliance on the password policy by configuring the Config rule to check the IAM_PASSWO RD_POLICY on an account. Additionally, because Config integrates with AWS Organizations, w e can improve the set up to aggregate compliance information across accounts to a central dashboard. Hence, the correct answer is: Configure AWS Config to trigger an evaluation that will check the compliance for a user's password periodically. Create a CloudTrail trail. Filter the result by set ting the attribute to \"Event Name\" and lookup value to \"ChangePassword\". This easily gives you the list of users who have made changes to their passwords is incorrect because this setup will just give you the name of the users who have made chang es to their respective passwords. It will not give you the ability to check whether their passwords have met the required minimum length. Create a Scheduled Lambda function that will run a custom script to check compliance against changes made to the passwords periodically is a val id solution but still incorrect. AWS Config is alre ady integrated with AWS Lambda. You don't have to creat e and manage your own Lambda function. You just have to define a Config rule where you will check c ompliance, and Lambda will process the evaluation. Moreover, you can't directly create a scheduled fun ction by using Lambda itself. You have to create a rule in AWS CloudWatch Events to run the Lambda function s on the schedule that you define. Create a rule in the Amazon CloudWatch event. Build an event pattern to match events on IAM. Set the event name to \"ChangePassword\" in the event pat tern. Configure SNS to send notifications to you whenever a user has made changes to his passwor d is incorrect because this setup will just alert y ou whenever a user changes his password. Sure, you'll have information about w ho made changes, but that is not enough to check whether it complies with the required minimum passw ord length. This can be easily done in AWS Config. References: https://docs.aws.amazon.com/config/latest/developer guide/evaluate-config-rules.html https://aws.amazon.com/config/ Check out this AWS Config Cheat Sheet: https://tutorialsdojo.com/aws-config/", + "references": "" + }, + { + "question": "A company has stored 200 TB of backup files in Amaz on S3. The files are in a vendor-proprietary format . 
The Solutions Architect needs to use the vendor's proprietary file conversion software to retrieve the files from their Amazon S3 bucket, transform the files to an industry-standard format, and re-upload the files back to Amazon S3. The solution must minimize the data transfer costs. Which of the following options can satisfy the given requirement?", "options": [ "A. A. Export the data using AWS Snowball Edge device. Install the file conversion software on the device.", "B. B. Deploy the EC2 instance in a different Region. Install the conversion software on the instance. Perform", "C. C. Install the file conversion software in Amazon S3. Use S3 Batch Operations to perform data", "D. D. Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion software on the" ], "correct": "D. D. Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion software on the", "explanation": "Explanation/Reference: Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It's a simple storage service that offers industry-leading durability, availability, performance, security, and virtually unlimited scalability at very low costs. Amazon S3 is also designed to be highly flexible. Store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application or a sophisticated web application. You pay for all bandwidth into and out of Amazon S3, except for the following: - Data transferred in from the Internet. - Data transferred out to an Amazon EC2 instance, when the instance is in the same AWS Region as the S3 bucket (including to a different account in the same AWS region). - Data transferred out to Amazon CloudFront. To minimize the data transfer charges, you need to deploy the EC2 instance in the same Region as Amazon S3. Take note that there is no data transfer cost between S3 and EC2 in the same AWS Region. Install the conversion software on the instance to perform data transformation and re-upload the data to Amazon S3. Hence, the correct answer is: Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3. The option that says: Install the file conversion software in Amazon S3. Use S3 Batch Operations to perform data transformation is incorrect because it is not possible to install the software in Amazon S3. The S3 Batch Operations just runs multiple S3 operations in a single request. It can't be integrated with your conversion software. The option that says: Export the data using AWS Snowball Edge device. Install the file conversion software on the device. Transform the data and re-upload it to Amazon S3 is incorrect. Although this is possible, it is not mentioned in the scenario that the company has an on-premises data center. Thus, there's no need for Snowball. The option that says: Deploy the EC2 instance in a different Region. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3 is incorrect because this approach wouldn't minimize the data transfer costs. You should deploy the instance in the same Region as Amazon S3.
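As an illustration of the recommended flow, the sketch below downloads an object to the EC2 instance, runs a placeholder conversion step, and uploads the result back to the same bucket. The bucket, object keys, and conversion command are hypothetical; the important point is that the boto3 client and the instance run in the same region as the bucket, so no data transfer charge applies between S3 and EC2.

import subprocess
import boto3

# The client region matches the bucket's region and the EC2 instance's region.
s3 = boto3.client("s3", region_name="us-east-1")  # assumed region

bucket = "example-backup-bucket"        # hypothetical bucket
source_key = "backups/file0001.vnd"     # hypothetical vendor-format object
target_key = "converted/file0001.std"   # hypothetical industry-standard output

# Pull the object down to the instance's local storage.
s3.download_file(bucket, source_key, "/tmp/file0001.vnd")

# Placeholder for the vendor's proprietary conversion tool installed on the instance.
subprocess.run(["/opt/vendor/convert", "/tmp/file0001.vnd", "/tmp/file0001.std"], check=True)

# Push the converted file back to the same bucket in the same region.
s3.upload_file("/tmp/file0001.std", bucket, target_key)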
References: https://aws.amazon.com/s3/pricing/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": "A web application requires a minimum of six Amazon Elastic Compute Cloud (EC2) instances running at all times. You are tasked to deploy the application to three availability zones in the EU Ireland region (eu-west-1a, eu-west-1b, and eu-west-1c). It is required that the system is fault-tolerant up to the loss of one Availability Zone. Which of the following setups is the most cost-effective solution that also maintains the fault-tolerance of your system?", "options": [ "A. A. 2 instances in eu-west-1a, 2 instances in eu-west-1b, and 2 instances in eu-west-1c", "B. B. 6 instances in eu-west-1a, 6 instances in eu-west-1b, and no instances in eu-west-1c", "C. C. 6 instances in eu-west-1a, 6 instances in eu-west-1b, and 6 instances in eu-west-1c", "D. D. 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c" ], "correct": "D. D. 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c", "explanation": "Explanation/Reference: Basically, fault-tolerance is the ability of a system to remain in operation even in the event that some of its components fail, without any service degradation. In AWS, it can also refer to the minimum number of running EC2 instances or resources which should be running at all times in order for the system to properly operate and serve its consumers. Take note that this is quite different from the concept of High Availability, which is just concerned with having at least one running instance or resource in case of failure. In this scenario, 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c is the correct answer because even if there was an outage in one of the Availability Zones, the system still satisfies the requirement of having a minimum of 6 running instances. It is also the most cost-effective solution among the other options. The option that says: 6 instances in eu-west-1a, 6 instances in eu-west-1b, and 6 instances in eu-west-1c is incorrect because although this solution provides the maximum fault-tolerance for the system, it entails a significant cost to maintain a total of 18 instances across 3 AZs. The option that says: 2 instances in eu-west-1a, 2 instances in eu-west-1b, and 2 instances in eu-west-1c is incorrect because if one Availability Zone goes down, there will only be 4 running instances available. Although this is the most cost-effective solution, it does not provide fault-tolerance. The option that says: 6 instances in eu-west-1a, 6 instances in eu-west-1b, and no instances in eu-west-1c is incorrect because although it provides fault-tolerance, it is not the most cost-effective solution as compared with the options above. This solution has 12 running instances, unlike the correct answer which only has 9 instances. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html https://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf", "references": "" }, { "question": "The company you are working for has a set of AWS resources hosted in ap-northeast-1 region.
You have b een asked by your IT Manager to create an AWS CLI shell script that will call an AWS service which could c reate duplicate resources in another region in the event that ap-northeast-1 region fails. The duplicated re sources should also contain the VPC Peering configuration a nd other networking components from the primary sta ck. Which of the following AWS services could help fulf ill this task?", + "options": [ + "A. A. AWS CloudFormation", + "B. B. Amazon LightSail", + "C. C. Amazon SNS", + "D. D. Amazon SQS" + ], + "correct": "A. A. AWS CloudFormation", + "explanation": "Explanation/Reference: AWS CloudFormation is a service that helps you mode l and set up your Amazon Web Services resources so that you can spend less time managing those resourc es and more time focusing on your applications that run in AWS. You can create a template that describes all the AW S resources that you want (like Amazon EC2 instance s or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. With this, you can deploy an exact copy of your AWS architecture, along with all of the AWS resources which are hosted in one region to another. Hence, the correct answer is AWS CloudFormation. Amazon LightSail is incorrect because you can't use this to duplicate your resources in your VPC. You have to use CloudFormation instead. Amazon SQS and Amazon SNS are both incorrect becaus e SNS and SQS are just messaging services. References: https://docs.aws.amazon.com/AWSCloudFormation/lates t/UserGuide/Welcome.html https://docs.aws.amazon.com/AWSCloudFormation/lates t/UserGuide/using-cfn-cli-creating-stack.html Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/ AWS CloudFormation - Templates, Stacks, Change Sets : https://youtube.com/watch?v=9Xpuprxg7aY", + "references": "" + }, + { + "question": "A technology company is building a new cryptocurren cy trading platform that allows the buying and sell ing of Bitcoin, Ethereum, Ripple, Tether, and many others. You were hired as a Cloud Engineer to build the re quired infrastructure needed for this new trading platform . On your first week at work, you started to create CloudFormation YAML scripts that define all of the needed AWS resources for the application. Your mana ger was shocked that you haven't created the EC2 instan ces, S3 buckets, and other AWS resources straight a way. He does not understand the text-based scripts that you have done and has asked for your clarification. In this scenario, what are the benefits of using th e Amazon CloudFormation service that you should tel l your manager to clarify his concerns? (Select TWO.) A. A. Enables modeling, provisioning, and version-co ntrolling of your entire AWS infrastructure", + "options": [ + "B. B. Allows you to model your entire infrastructure in a text file", + "C. C. A storage location for the code of your applic ation", + "D. D. Provides highly durable and scalable data stor age" + ], + "correct": "", + "explanation": "Explanation/Reference: AWS CloudFormation provides a common language for y ou to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text fil e to model and provision, in an automated and secure man ner, all the resources needed for your applications across all regions and accounts. This file serves a s the single source of truth for your cloud environ ment. 
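To illustrate what such a text-file model looks like in practice, the sketch below submits a deliberately tiny template to CloudFormation in a secondary region. The template body, stack name, and regions are hypothetical assumptions; a real template for this scenario would also declare the VPC peering configuration and the other networking components of the primary stack.

import boto3

# Minimal illustrative template; a real one would model the full stack (VPC peering, subnets, etc.).
TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

# Point the client at the recovery region instead of ap-northeast-1.
cloudformation = boto3.client("cloudformation", region_name="ap-southeast-1")  # assumed DR region

cloudformation.create_stack(
    StackName="duplicate-stack",  # hypothetical stack name
    TemplateBody=TEMPLATE_BODY,
)

# Block until the duplicate resources are fully provisioned.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="duplicate-stack")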
AWS CloudFormation is available at no additional ch arge, and you pay only for the AWS resources needed to run your applications. Hence, the correct answers are: - Enables modeling, provisioning, and version-contr olling of your entire AWS infrastructure - Allows you to model your entire infrastructure in a text file The option that says: Provides highly durable and s calable data storage is incorrect because CloudForm ation is not a data storage service. The option that says: A storage location for the co de of your application is incorrect because CloudFormation is not used to store your applicatio n code. You have to use CodeCommit as a code repository and not CloudFormation. The option that says: Using CloudFormation itself i s free, including the AWS resources that have been created is incorrect because although the use of Cl oudFormation service is free, you have to pay the A WS resources that you created. References: https://aws.amazon.com/cloudformation/ https://aws.amazon.com/cloudformation/faqs/ Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/", + "references": "" + }, + { + "question": "A data analytics company, which uses machine learni ng to collect and analyze consumer data, is using Redshift cluster as their data warehouse. You are i nstructed to implement a disaster recovery plan for their systems to ensure business continuity even in the e vent of an AWS region outage. Which of the following is the best approach to meet this requirement?", + "options": [ + "A. A. Enable Cross-Region Snapshots Copy in your Ama zon Redshift Cluster.", + "B. B. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and sto re it to", + "C. C. Use Automated snapshots of your Redshift Clust er.", + "D. D. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can" + ], + "correct": "A. A. Enable Cross-Region Snapshots Copy in your Ama zon Redshift Cluster.", + "explanation": "Explanation/Reference: You can configure Amazon Redshift to copy snapshots for a cluster to another region. To configure cros s- region snapshot copy, you need to enable this copy feature for each cluster and configure where to cop y snapshots and how long to keep copied automated sna pshots in the destination region. When cross-region copy is enabled for a cluster, all new manual and a utomatic snapshots are copied to the specified regi on. The option that says: Create a scheduled job that w ill automatically take the snapshot of your Redshif t Cluster and store it to an S3 bucket. Restore the s napshot in case of an AWS region outage is incorrect because although this option is possible, this entails a lot of manual work and hence, not t he best option. You should configure cross-region snapshot copy instead. The option that says: Do nothing because Amazon Red shift is a highly available, fully-managed data warehouse which can withstand an outage of an entir e AWS region is incorrect because although Amazon Redshift is a fully-managed data warehouse, you will still need to configure cross-region snaps hot copy to ensure that your data is properly replicate d to another region. 
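For reference, enabling cross-region snapshot copy is a single API call per cluster. The boto3 sketch below uses a hypothetical cluster identifier, destination region, and retention period.

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")  # assumed region of the source cluster

# Copy all new automated and manual snapshots of this cluster to another region.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    DestinationRegion="us-west-2",          # hypothetical DR region
    RetentionPeriod=7,                      # keep copied automated snapshots for 7 days
)

In the event of a region outage, the cluster can then be restored in the destination region from one of the copied snapshots.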
Using Automated snapshots of your Redshift Cluster is incorrect because using automated snapshots is not enough and will not be available in case the en tire AWS region is down.", + "references": "https://docs.aws.amazon.com/redshift/latest/mgmt/ma naging-snapshots-console.html Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg Check out this Amazon Redshift Cheat Sheet: https://tutorialsdojo.com/amazon-redshift/" + }, + { + "question": "A company has a distributed application in AWS that periodically processes large volumes of data acros s multiple instances. The Solutions Architect designe d the application to recover gracefully from any in stance failures. He is then required to launch the applica tion in the most cost-effective way. Which type of EC2 instance will meet this requireme nt?", + "options": [ + "A. A. Dedicated instances B. B. Reserved instances", + "C. C. Spot Instances", + "D. D. On-Demand instances" + ], + "correct": "C. C. Spot Instances", + "explanation": "Explanation/Reference: You require an EC2 instance that is the most cost-e ffective among other types. In addition, the applic ation it will host is designed to gracefully recover in c ase of instance failures. In terms of cost-effectiveness, Spot and Reserved i nstances are the top options. And since the applica tion can gracefully recover from instance failures, the Spot instance is the best option for this case as i t is the cheapest type of EC2 instance. Remember that when y ou use Spot Instances, there will be interruptions. Amazon EC2 can interrupt your Spot Instance when the Spot price ex ceeds your maximum price, when the demand for Spot Instances rise, or when the supply of Spot Instance s decreases. Hence, the correct answer is: Spot Instances. Reserved instances is incorrect. Although you can a lso use reserved instances to save costs, it entail s a commitment of 1-year or 3-year terms of usage. Sinc e your processes only run periodically, you won't b e able to maximize the discounted price of using rese rved instances. Dedicated instances and On-Demand instances are als o incorrect because Dedicated and on-demand instances are not a cost-effective solution to use for your application.", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ how-spot-instances-work.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ Here is an in-depth look at Spot Instances: https://youtu.be/PKvss-RgSjI" + }, + { + "question": "A company plans to reduce the amount of data that A mazon S3 transfers to the servers in order to lower the operating costs as well as lower the latency of ret rieving the data. To accomplish this, you need to u se simple structured query language (SQL) statements t o filter the contents of Amazon S3 objects and retr ieve just the subset of data that you need. Which of the following services will help you accom plish this requirement?", + "options": [ + "A. A. S3 Select", + "B. B. Redshift Spectrum", + "C. C. RDS", + "D. D. AWS Step Functions" + ], + "correct": "A. A. S3 Select", + "explanation": "Explanation/Reference: With Amazon S3 Select, you can use simple structure d query language (SQL) statements to filter the contents of Amazon S3 objects and retrieve just the subset of data that you need. By using Amazon S3 Select to filter this data, you can reduce the amou nt of data that Amazon S3 transfers, which reduces the cost and latency to retrieve this data. 
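As a rough example of the kind of request involved, the boto3 sketch below runs a SQL expression directly against a CSV object with S3 Select, so only the filtered rows leave S3. The bucket, key, and column names are hypothetical placeholders.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # assumed region

response = s3.select_object_content(
    Bucket="example-data-bucket",   # hypothetical bucket
    Key="exports/consumers.csv",    # hypothetical CSV object
    ExpressionType="SQL",
    Expression="SELECT s.name, s.country FROM S3Object s WHERE s.country = 'PH'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "NONE"},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; 'Records' events carry the filtered bytes.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")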
Amazon S3 Select works on objects stored in CSV, JS ON, or Apache Parquet format. It also works with objects that are compressed with GZIP or BZIP2 (for CSV and JSON objects only), and server-side encrypted objects. You can specify the format of th e results as either CSV or JSON, and you can determ ine how the records in the result are delimited. RDS is incorrect. Although RDS is an SQL database w here you can perform SQL operations, it is still no t valid because you want to apply SQL transactions on S3 itself, and not on the database, which RDS cann ot do. Redshift Spectrum is incorrect. Although Amazon Red shift Spectrum provides a similar in-query functionality like S3 Select, this service is more suitable for querying your data from the Redshift e xternal tables hosted in S3. The Redshift queries are run o n your cluster resources against local disk. Redshi ft Spectrum queries run using per-query scale-out reso urces against data in S3 which can entail additiona l costs compared with S3 Select. AWS Step Functions is incorrect because this only l ets you coordinate multiple AWS services into serverless workflows so you can build and update ap ps quickly. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/sel ecting-content-from-objects.html https://docs.aws.amazon.com/redshift/latest/dg/c-us ing-spectrum.html Check out these AWS Cheat Sheets: https://tutorialsdojo.com/amazon-s3/ https://tutorialsdojo.com/amazon-athena/ https://tutorialsdojo.com/amazon-redshift/", + "references": "" + }, + { + "question": "A company plans to migrate a NoSQL database to an E C2 instance. The database is configured to replicat e the data automatically to keep multiple copies of d ata for redundancy. The Solutions Architect needs t o launch an instance that has a high IOPS and sequent ial read/write access. Which of the following options fulfills the require ment if I/O throughput is the highest priority?", + "options": [ + "A. A. Use General purpose instances with EBS volume.", + "B. B. Use Memory optimized instances with EBS volume .", + "C. C. Use Storage optimized instances with instance store volume.", + "D. D. Use Compute optimized instance with instance s tore volume." + ], + "correct": "C. C. Use Storage optimized instances with instance store volume.", + "explanation": "Explanation/Reference: Amazon EC2 provides a wide selection of instance ty pes optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resour ces for your applications. Each instance type inclu des one or more instance sizes, allowing you to scale your resources to the requirements of your target worklo ad. A storage optimized instance is designed for worklo ads that require high, sequential read and write ac cess to very large data sets on local storage. They are optimized to deliver tens of thousands of low-laten cy, random I/O operations per second (IOPS) to applicat ions. Some instance types can drive more I/O throughput than what you can provision for a single EBS volume. You can join multiple volumes together in a RAID 0 configuration to use the available band width for these instances. Based on the given scenario, the NoSQL database wil l be migrated to an EC2 instance. The suitable instance type for NoSQL database is I3 and I3en ins tances. Also, the primary data storage for I3 and I 3en instances is non-volatile memory express (NVMe) SSD instance store volumes. 
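For illustration, launching such a storage optimized instance is an ordinary run_instances call; the sketch below uses a hypothetical AMI, key pair, and subnet, and simply selects an I3en instance type whose NVMe instance store volumes come with the instance.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Launch an I3en storage optimized instance; its NVMe SSD instance store
# volumes are included with the instance type and need no EBS block device mapping.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical Linux AMI
    InstanceType="i3en.2xlarge",
    KeyName="example-key",                # hypothetical key pair
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet
    MinCount=1,
    MaxCount=1,
)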
Since the data is replicat ed automatically, there will be no problem using an in stance store volume. Hence, the correct answer is: Use Storage optimized instances with instance store volume. The option that says: Use Compute optimized instanc es with instance store volume is incorrect because this type of instance is ideal for compute-bound ap plications that benefit from high-performance proce ssors. It is not suitable for a NoSQL database. The option that says: Use General purpose instances with EBS volume is incorrect because this instanceonly provides a balance of computing, memory, and n etworking resources. Take note that the requirement in the scenario is high sequential read and write a ccess. Therefore, you must use a storage optimized instance. The option that says: Use Memory optimized instance s with EBS volume is incorrect. Although this type of instance is suitable for a NoSQL database, it is not designed for workloads that require high, sequ ential read and write access to very large data sets on lo cal storage. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /storage-optimized-instances.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /instance-types.html Amazon EC2 Overview: https://youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A company needs to implement a solution that will p rocess real-time streaming data of its users across the globe. This will enable them to track and analyze g lobally-distributed user activity on their website and mobile applications, including clickstream analysis. The s olution should process the data in close geographic al proximity to their users and respond to user reques ts at low latencies. Which of the following is the most suitable solutio n for this scenario?", + "options": [ + "A. A. Use a CloudFront web distribution and Route 53 with a latency-based routing policy, in order to p rocess", + "B. B. Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximit y to", + "C. C. Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximit y to", + "D. D. Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to pro cess" + ], + "correct": "C. C. Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximit y to", + "explanation": "Explanation/Reference: Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your applicati on, which improves performance and reduces latency. Wit h Lambda@Edge, you don't have to provision or manage infrastructure in multiple locations around the world. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda@Edge, you can enrich your web applicati ons by making them globally distributed and improving their performance -- all with zero server administration. Lambda@Edge runs your code in response to events generated by the Amazon CloudFro nt content delivery network (CDN). Just upload your code to AWS Lambda, which takes care of everything required to run and scale your code with high availability at an AWS location closest to your end user. 
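To give a feel for how this combination fits together, the sketch below is a hypothetical Lambda@Edge viewer-request handler that forwards a small clickstream record to a Kinesis data stream and then returns the request unchanged. The stream name and region are placeholder assumptions; a production function would also need the appropriate IAM permissions and must stay well within the Lambda@Edge time limits.

import json
import boto3

# Lambda@Edge runs close to the viewer; the Kinesis stream lives in one home region.
kinesis = boto3.client("kinesis", region_name="us-east-1")  # assumed stream region


def lambda_handler(event, context):
    # CloudFront passes the viewer request in the event payload.
    request = event["Records"][0]["cf"]["request"]

    record = {
        "uri": request["uri"],
        "method": request["method"],
        "client_ip": request["clientIp"],
    }

    # Send the clickstream record to Kinesis for downstream real-time processing.
    kinesis.put_record(
        StreamName="clickstream-events",  # hypothetical stream
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=record["client_ip"],
    )

    # Return the request unmodified so CloudFront continues processing it.
    return request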
By using Lambda@Edge and Kinesis together, you can process real-time streaming data so that you can track and analyze globally-distributed user activit y on your website and mobile applications, includin g clickstream analysis. Hence, the correct answer in this scenario is the option that says: Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies . Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucke t. The options that say: Use a CloudFront web distribu tion and Route 53 with a latency-based routing policy, in order to process the data in close geogr aphical proximity to users and respond to user requests at low latencies. Process real-time stream ing data using Kinesis and durably store the results to an Amazon S3 bucket and Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket are both i ncorrect because you can only route traffic using Route 53 since it does not have any computing capab ility. This solution would not be able to process a nd return the data in close geographical proximity to your users since it is not using Lambda@Edge. The option that says: Integrate CloudFront with Lam bda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Amazon Athena and durably stor e the results to an Amazon S3 bucket is incorrect because although using Lambda@Edge is cor rect, Amazon Athena is just an interactive query service that enables you to easily analyze data in Amazon S3 using standard SQL. Kinesis should be use d to process the streaming data in real-time. References: https://aws.amazon.com/lambda/edge/ https://aws.amazon.com/blogs/networking-and-content -delivery/global-data-ingestion-with-amazon-cloudfr ont- and-lambdaedge/", + "references": "" + }, + { + "question": "A company is using an On-Demand EC2 instance to hos t a legacy web application that uses an Amazon Instance Store-Backed AMI. The web application shou ld be decommissioned as soon as possible and hence, you need to terminate the EC2 instance. When the instance is terminated, what happens to th e data on the root volume?", + "options": [ + "A. A. Data is automatically saved as an EBS snapshot .", + "B. B. Data is unavailable until the instance is rest arted.", + "C. C. Data is automatically deleted.", + "D. D. Data is automatically saved as an EBS volume." + ], + "correct": "C. C. Data is automatically deleted.", + "explanation": "Explanation/Reference: AMIs are categorized as either backed by Amazon EBS or backed by instance store. The former means that the root device for an instance launched from the A MI is an Amazon EBS volume created from an Amazon EBS snapshot. The latter means that the root device for an instance launched from the AMI is an instan ce store volume created from a template stored in Amaz on S3. The data on instance store volumes persist only dur ing the life of the instance which means that if th e instance is terminated, the data will be automatica lly deleted. 
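As a side note, whether an AMI is instance store-backed or EBS-backed can be confirmed programmatically; a small boto3 check, using a hypothetical AMI ID, might look like this:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

image = ec2.describe_images(ImageIds=["ami-0123456789abcdef0"])["Images"][0]  # hypothetical AMI

# 'instance-store' means the root volume (and its data) disappears on termination;
# 'ebs' means the root volume is an EBS volume that can optionally be retained.
print("Root device type:", image["RootDeviceType"])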
Hence, the correct answer is: Data is automatically deleted.", + "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ComponentsAMIs.html Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/" + }, + { + "question": "A company launched a global news website that is de ployed to AWS and is using MySQL RDS. The website has millions of viewers from all over the world whi ch means that the website has read-heavy database workloads. All database transactions must be ACID c ompliant to ensure data integrity. In this scenario, which of the following is the bes t option to use to increase the read throughput on the MySQL database?", + "options": [ + "A. A. Enable Amazon RDS Read Replicas", + "B. B. Use SQS to queue up the requests C. C. Enable Amazon RDS Standby Replicas", + "D. D. Enable Multi-AZ deployments" + ], + "correct": "A. A. Enable Amazon RDS Read Replicas", + "explanation": "Explanation/Reference: Amazon RDS Read Replicas provide enhanced performan ce and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB ins tance for read-heavy database workloads. You can create o ne or more replicas of a given source DB Instance a nd serve high-volume application read traffic from mul tiple copies of your data, thereby increasing aggre gate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL , MariaDB, Oracle, and PostgreSQL as well as Amazon Aurora. Enabling Multi-AZ deployments is incorrect because the Multi-AZ deployments feature is mainly used to achieve high availability and failover support for your database. Enabling Amazon RDS Standby Replicas is incorrect b ecause a Standby replica is used in Multi-AZ deployments and hence, it is not a solution to redu ce read-heavy database workloads. Using SQS to queue up the requests is incorrect. Al though an SQS queue can effectively manage the requests, it won't be able to entirely improve the read-throughput of the database by itself. References: https://aws.amazon.com/rds/details/read-replicas/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/USER_ReadRepl.html Amazon RDS Overview: https://youtube.com/watch?v=aZmpLl8K1UU Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "A company is using the AWS Directory Service to int egrate their on-premises Microsoft Active Directory (AD) domain with their Amazon EC2 instances via an AD co nnector. The below identity-based policy is attache d to the IAM Identities that use the AWS Directory servi ce: { \"Version\":\"2012-10-17\", \"Statement\":[ { \"Sid\":\"DirectoryTutorialsDojo1234\", \"Effect\":\"Allow\", \"Action\":[ \"ds:*\" ], \"Resource\":\"arn:aws:ds:us-east-1:987654321012:dire ctory/d-1234567890\" }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:*\" ], \"Resource\":\"*\" } ] } Which of the following BEST describes what the abov e resource policy does?", + "options": [ + "A. A. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory name o f:", + "B. B. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID:", + "C. C. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID:", + "D. D. 
Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-" + ], + "correct": "D. D. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-", + "explanation": "Explanation/Reference: AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices , and administrators use them to manage access to inf ormation and resources. AWS Directory Service provides multiple directory choices for customers w ho want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, gr oups, devices, and access. Every AWS resource is owned by an AWS account, and permissions to create or access the resources are governed by permissions policies. An account admini strator can attach permissions policies to IAM identities (that is, users, groups, and roles), and some services (such as AWS Lambda) also support attaching permissions policies to resources. The following resource policy example allows all ds calls as long as the resource contains the directo ry ID \"d-1234567890\". { \"Version\":\"2012-10-17\", \"Statement\":[ { \"Sid\":\"VisualEditor0\", \"Effect\":\"Allow\", \"Action\":[ \"ds:*\" ], \"Resource\":\"arn:aws:ds:us-east-1:123456789012:direc tory/d-1234567890\" }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:*\" ], \"Resource\":\"*\" } ] } Hence, the correct answer is the option that says: Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-123456789 0. The option that says: Allows all AWS Directory Serv ice (ds) calls as long as the resource contains the directory ID: DirectoryTutorialsDojo1234 is incorre ct because DirectoryTutorialsDojo1234 is the Statement ID (SID) and not the Directory ID. The option that says: Allows all AWS Directory Serv ice (ds) calls as long as the resource contains the directory ID: 987654321012 is incorrect because the numbers: 987654321012 is the Account ID and not the Directory ID. The option that says: Allows all AWS Directory Serv ice (ds) calls as long as the resource contains the directory name of: DirectoryTutorialsDojo1234 is in correct because DirectoryTutorialsDojo1234 is the Statement ID (SID) and not the Directory name. References: https://docs.aws.amazon.com/directoryservice/latest /admin-guide/IAM_Auth_Access_IdentityBased.html https://docs.aws.amazon.com/directoryservice/latest /admin-guide/IAM_Auth_Access_Overview.html AWS Identity Services Overview: https://youtube.com/watch?v=AIdUw0i8rr0 Check out this AWS Identity & Access Management (IA M) Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "A company recently launched an e-commerce applicati on that is running in eu-east-2 region, which stric tly requires six EC2 instances running at all times. In that region, there are 3 Availability Zones (AZ) t hat you can use - eu-east-2a, eu-east-2b, and eu-east-2c. Which of the following deployments provide 100% fau lt tolerance if any single AZ in the region becomes unavailable? (Select TWO.)", + "options": [ + "A. A. eu-east-2a with four EC2 instances, eu-east-2b with two EC2 instances, and eu-east-2c with two EC 2", + "B. B. 
eu-east-2a with two EC2 instances, eu-east-2b with four EC2 instances, and eu-east-2c with two EC 2", + "C. C. eu-east-2a with six EC2 instances, eu-east-2b with six EC2 instances, and eu-east-2c with no EC2", + "D. D. eu-east-2a with three EC2 instances, eu-east-2 b with three EC2 instances, and eu-east-2c with thr ee" + ], + "correct": "", + "explanation": "Explanation/Reference: Fault Tolerance is the ability of a system to remai n in operation even if some of the components used to build the system fail. In AWS, this means that in t he event of server fault or system failures, the nu mber of running EC2 instances should not fall below the min imum number of instances required by the system for it to work properly. So if the application requires a minimum of 6 instances, there should be at least 6 instances running in case there is an outage in one of the Availability Zones or if there are server i ssues. In this scenario, you have to simulate a situation where one Availability Zone became unavailable for each option and check whether it still has 6 running ins tances. Hence, the correct answers are: eu-east-2a with six EC2 instances, eu-east-2b with six EC2 instances, and eu-east-2c with no EC2 instances and eu-east-2a with three EC2 instances, eu-east-2b with three EC2 instances, and eu-east-2c with three EC2 instances because eve n if one of the availability zones were to go down, there would still be 6 active instances.", + "references": "https://media.amazonwebservices.com/AWS_Building_Fa ult_Tolerant_Applications.pdf" + }, + { + "question": "A newly hired Solutions Architect is checking all o f the security groups and network access control li st rules of the company's AWS resources. For security purposes, the MS SQL connection via port 1433 of th e database tier should be secured. Below is the secur ity group configuration of their Microsoft SQL Serv er database: The application tier hosted in an Auto Scaling grou p of EC2 instances is the only identified resource that needs to connect to the database. The Architect sho uld ensure that the architecture complies with the best practice of granting least privilege. Which of the following changes should be made to th e security group configuration?", + "options": [ + "A. A. For the MS SQL rule, change the Source to the Network ACL ID attached to the application tier.", + "B. B. For the MS SQL rule, change the Source to the security group ID attached to the application tier.", + "C. C. For the MS SQL rule, change the Source to the EC2 instance IDs of the underlying instances of the Auto", + "D. D. For the MS SQL rule, change the Source to the static AnyCast IP address attached to the applicati on tier." + ], + "correct": "B. B. For the MS SQL rule, change the Source to the security group ID attached to the application tier.", + "explanation": "Explanation/Reference: A security group acts as a virtual firewall for you r instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security g roups act at the instance level, not the subnet level. Th erefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. If you launch an instance using the Amazon EC2 API or a command line tool and you don't specify a security group, the instance is automatically assig ned to the default security group for the VPC. 
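To make the recommended change concrete, the sketch below adds an inbound MS SQL rule to the database security group whose source is the application tier's security group rather than 0.0.0.0/0. Both group IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

DB_SG_ID = "sg-0aaa1111bbbb2222c"   # hypothetical database tier security group
APP_SG_ID = "sg-0ddd3333eeee4444f"  # hypothetical application tier security group

# Allow TCP 1433 only from instances that belong to the application tier's security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 1433,
            "ToPort": 1433,
            "UserIdGroupPairs": [{"GroupId": APP_SG_ID}],
        }
    ],
)

The existing 0.0.0.0/0 rule would then be removed with a corresponding revoke_security_group_ingress call.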
If y ou launch an instance using the Amazon EC2 console, yo u have an option to create a new security group for the instance. For each security group, you add rules that control the inbound traffic to instances, and a separate s et of rules that control the outbound traffic. This secti on describes the basic things that you need to know about security groups for your VPC and their rules. Amazon security groups and network ACLs don't filte r traffic to or from link-local addresses (169.254.0.0/16) or AWS reserved IPv4 addresses (th ese are the first four IPv4 addresses of the subnet , including the Amazon DNS server address for the VPC ). Similarly, flow logs do not capture IP traffic t o or from these addresses. In the scenario, the security group configuration a llows any server (0.0.0.0/0) from anywhere to estab lish an MS SQL connection to the database via the 1433 p ort. The most suitable solution here is to change t he Source field to the security group ID attached to t he application tier. Hence, the correct answer is the option that says: For the MS SQL rule, change the Source to the security group ID attached to the application tier. The option that says: For the MS SQL rule, change t he Source to the EC2 instance IDs of the underlying instances of the Auto Scaling group is i ncorrect because using the EC2 instance IDs of the underlying instances of the Auto Scaling group as t he source can cause intermittent issues. New instan ces will be added and old instances will be removed fro m the Auto Scaling group over time, which means tha t you have to manually update the security group sett ing once again. A better solution is to use the sec urity group ID of the Auto Scaling group of EC2 instances . The option that says: For the MS SQL rule, change t he Source to the static AnyCast IP address attached to the application tier is incorrect becau se a static AnyCast IP address is primarily used fo r AWS Global Accelerator and not for security group c onfigurations. The option that says: For the MS SQL rule, change t he Source to the Network ACL ID attached to the application tier is incorrect because you have to u se the security group ID instead of the Network ACL ID of the application tier. Take note that the Network ACL covers the entire subnet which means that othe r applications that use the same subnet will also be affected. References: https://docs.aws.amazon.com/vpc/latest/userguide/VP C_SecurityGroups.html https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Security.html", + "references": "" + }, + { + "question": "A company is building an internal application that processes loans, accruals, and interest rates for t heir clients. They require a storage service that is abl e to handle future increases in storage capacity of up to 16 TB and can provide the lowest-latency access to the ir data. The web application will be hosted in a si ngle m5ad.24xlarge Reserved EC2 instance that will proce ss and store data to the storage service. Which of the following storage services would you r ecommend?", + "options": [ + "A. A. EFS", + "B. B. Storage Gateway", + "C. C. EBS", + "D. D. S3" + ], + "correct": "C. C. EBS", + "explanation": "Explanation/Reference: Amazon Web Services (AWS) offers cloud storage serv ices to support a wide range of storage workloads such as Amazon S3, EFS and EBS. Amazon EFS is a fil e storage service for use with Amazon EC2. 
Amazon EFS provides a file system interface, file s ystem access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon S3 is an object storage service. Amazon S3 makes da ta available through an Internet API that can be accessed anywhere. Amazon EBS is a block-level stor age service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. You can also increase EBS storage for up to 16TB or add new volumes for additional storage. In this scenario, the company is looking for a stor age service which can provide the lowest-latency ac cess to their data which will be fetched by a single m5a d.24xlarge Reserved EC2 instance. This type of workloads can be supported better by using either E FS or EBS but in this case, the latter is the most suitable storage service. As mentioned above, EBS p rovides the lowest-latency access to the data for y our EC2 instance since the volume is directly attached to the instance. In addition, the scenario does not require concurrently-accessible storage since they only hav e one instance. Hence, the correct answer is EBS. Storage Gateway is incorrect since this is primaril y used to extend your on-premises storage to your A WS Cloud. S3 is incorrect because although this is also highl y available and highly scalable, it still does not provide the lowest-latency access to the data, unlike EBS. Remember that S3 does not reside within your VPC by default, which means the data will traverse the pub lic Internet that may result to higher latency. You can set up a VPC Endpoint for S3 yet still, its latency is greater than that of EBS. EFS is incorrect because the scenario does not requ ire concurrently-accessible storage since the inter nal application is only hosted in one instance. Althoug h EFS can provide low latency data access to the EC 2 instance as compared with S3, the storage service t hat can provide the lowest latency access is still EBS. References: https://aws.amazon.com/ebs/ https://aws.amazon.com/efs/faq/ Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A company has a set of Linux servers running on mul tiple On-Demand EC2 Instances. The Audit team wants to collect and process the application log fi les generated from these servers for their report. Which of the following services is best to use in t his case? A. Amazon S3 Glacier Deep Archive for storing the ap plication log files and AWS ParallelCluster for processing the log files.", + "options": [ + "B. Amazon S3 for storing the application log files a nd Amazon Elastic MapReduce for processing the log files.", + "C. A single On-Demand Amazon EC2 instance for both s toring and processing the log files", + "D. Amazon S3 Glacier for storing the application log files and Spot EC2 Instances for processing them." + ], + "correct": "B. Amazon S3 for storing the application log files a nd Amazon Elastic MapReduce for processing the log files.", + "explanation": "Explanation/Reference: Amazon EMR is a managed cluster platform that simpl ifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and anal yze vast amounts of data. By using these frameworks and related open-source projects such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence wo rkloads. 
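As a rough sketch of how this could look in practice, the boto3 call below spins up a small transient EMR cluster that runs a single Spark step over the log files in S3 and terminates afterwards. The cluster name, bucket paths, script location, release label, and instance sizes are hypothetical assumptions.

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # assumed region

emr.run_job_flow(
    Name="log-processing",                   # hypothetical cluster name
    ReleaseLabel="emr-6.10.0",               # assumed EMR release
    Applications=[{"Name": "Spark"}],
    LogUri="s3://example-logs-bucket/emr/",  # hypothetical log location
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate once the step finishes
    },
    Steps=[
        {
            "Name": "process-application-logs",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-scripts/process_logs.py"],  # hypothetical script
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)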
Additionally, you can use Amazon EMR to transform and move large amounts of data into and o ut of other AWS data stores and databases such as Amazon Simple Storage Service (Amazon S3) and Amazo n DynamoDB. Hence, the correct answer is: Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files. The option that says: Amazon S3 Glacier for storing the application log files and Spot EC2 Instances for processing them is incorrect as Amazon S3 Glaci er is used for data archive only. The option that says: A single On-Demand Amazon EC2 instance for both storing and processing the log files is incorrect as an EC2 instance is not a recommended storage service. In addition, Amazon EC 2 does not have a built-in data processing engine to process large amounts of data. The option that says: Amazon S3 Glacier Deep Archiv e for storing the application log files and AWS ParallelCluster for processing the log files is inc orrect because the long retrieval time of Amazon S3 Glacier Deep Archive makes this option unsuitable. Moreover, AWS ParallelCluster is just an AWS- supported open-source cluster management tool that makes it easy for you to deploy and manage High- Performance Computing (HPC) clusters on AWS. ParallelCluster uses a simpl e text file to model and provision all the resource s needed for your HPC applications in an automated an d secure manner. References: http://docs.aws.amazon.com/emr/latest/ManagementGui de/emr-what-is-emr.html https://aws.amazon.com/hpc/parallelcluster/ Check out this Amazon EMR Cheat Sheet: https://tutorialsdojo.com/amazon-emr/", + "references": "" + }, + { + "question": "A startup launched a new FTP server using an On-Dem and EC2 instance in a newly created VPC with default settings. The server should not be accessib le publicly but only through the IP address 175.45.116.100 and nowhere else. Which of the following is the most suitable way to implement this requirement?", + "options": [ + "A. A. Create a new inbound rule in the security grou p of the EC2 instance with the following", + "B. Create a new Network ACL inbound rule in the subn et of the EC2 instance with the following", + "C. Create a new Network ACL inbound rule in the subn et of the EC2 instance with the following", + "D. Create a new inbound rule in the security group o f the EC2 instance with the following details:" + ], + "correct": "A. A. Create a new inbound rule in the security grou p of the EC2 instance with the following", + "explanation": "Explanation Explanation/Reference: The FTP protocol uses TCP via ports 20 and 21. This should be configured in your security groups or in your Network ACL inbound rules. As required by the scenario, you should only allow the individual IP o f the client and not the entire network. Therefore, i n the Source, the proper CIDR notation should be us ed. The /32 denotes one IP address and the /0 refers to the entire network. It is stated in the scenario that you launched the EC2 instances in a newly created VPC with default s ettings. Your VPC automatically comes with a modifiable defa ult network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic. Hence, you actually don't need to explicit ly add inbound rules to your Network ACL to allow inbound traffic, if your VPC has a default setting. 
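As a quick illustration of the correct rule, here is a minimal boto3 sketch that adds the required inbound rule to the instance's security group. The security group ID is a placeholder; the protocol, port range, and /32 source come straight from the scenario.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow FTP (TCP ports 20-21) only from the single client IP in the scenario.
# "sg-0123456789abcdef0" is a placeholder for the FTP instance's security group ID.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 20,
        "ToPort": 21,
        "IpRanges": [{
            "CidrIp": "175.45.116.100/32",   # /32 denotes exactly one IPv4 address
            "Description": "FTP access for the approved client only",
        }],
    }],
)
```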
The below option is incorrect: Create a new inbound rule in the security group of the EC2 instance with the following details: Protocol: UDP Port Range: 20 - 21 Source: 175.45.116.100/32 Although the configuration of the Security Group is valid, the provided Protocol is incorrect. Take note that FTP uses TCP and not UDP. The below option is also incorrect: Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details: Protocol: TCP Port Range: 20 - 21 Source: 175.45.116.100/0 Allow/Deny: ALLOW Although setting up an inbound Network ACL is valid, the source is invalid since it must be an IPv4 or IPv6 CIDR block. In the provided IP address, the /0 refers to the entire network and not a specific IP address. In addition, it is stated in the scenario that the newly created VPC has default settings and by default, the Network ACL allows all traffic. This means that there is actually no need to configure your Network ACL. Likewise, the below option is also incorrect: Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details: Protocol: UDP Port Range: 20 - 21 Source: 175.45.116.100/0 Allow/Deny: ALLOW Just like in the above, the source is also invalid. Take note that FTP uses TCP and not UDP, which is one of the reasons why this option is wrong. In addition, it is stated in the scenario that the newly created VPC has default settings and by default, the Network ACL allows all traffic. This means that there is actually no need to configure your Network ACL. References: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": "A Solutions Architect is designing a setup for a database that will run on Amazon RDS for MySQL. He needs to ensure that the database can automatically failover to an RDS instance to continue operating in the event of failure. The architecture should also be as highly available as possible. Which among the following actions should the Solutions Architect do?", "options": [ "A. A. Create five cross-region read replicas in each region. In the event of an Availability Zone outage, promote", "B. B. Create five read replicas across different availability zones. In the event of an Availability Zone outage,", "C. C. Create a standby replica in another availability zone by enabling Multi-AZ deployment.", "D. D. Create a read replica in the same region where the DB instance resides. In addition, create a read replica" ], "correct": "C. C. Create a standby replica in another availability zone by enabling Multi-AZ deployment.", "explanation": "Explanation/Reference: You can run an Amazon RDS DB instance in several AZs with Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby DB instance in a different AZ. Your primary DB instance is synchronously replicated across AZs to the secondary instance to provide data redundancy, failover support, eliminate I/O freezes, and minimize latency spikes during systems backup. As described in the scenario, the architecture must meet two requirements: The database should automatically failover to an RDS instance in case of failures. The architecture should be as highly available as possible. Hence, the correct answer is: Create a standby replica in another availability zone by enabling Multi-AZ deployment because it meets both of the requirements. The option that says: Create a read replica in the same region where the DB instance resides.
In addition, create a read replica in a different regi on to survive a region's failure. In the event of a n Availability Zone outage, promote any replica to be come the primary instance is incorrect. Although this architecture provides higher availability since it can survive a region failure, it still does not meet the first requirement since the process is not automated. The architecture should also supp ort automatic failover to an RDS instance in case o f failures. Both the following options are incorrect: - Create five read replicas across different availa bility zones. In the event of an Availability Zone outage, promote any replica to become the primary i nstance - Create five cross-region read replicas in each re gion. In the event of an Availability Zone outage, promote any replica to become the primary instance Although it is possible to achieve high availabilit y with these architectures by promoting a read repl ica into the primary instance in an event of failure, it doe s not support automatic failover to an RDS instance which is also a requirement in the problem. References: https://aws.amazon.com/rds/features/multi-az/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/Concepts.MultiAZ.html Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "A large multinational investment bank has a web app lication that requires a minimum of 4 EC2 instances to run to ensure that it can cater to its users across the globe. You are instructed to ensure fault tole rance of this system. Which of the following is the best option?", + "options": [ + "A. A. Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load", + "B. B. Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Applicati on Load", + "C. C. Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Applicati on Load", + "D. D. Deploy an Auto Scaling group with 1 instance i n each of 4 Availability Zones behind an Applicatio n Load" + ], + "correct": "C. C. Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Applicati on Load", + "explanation": "Explanation/Reference: Fault Tolerance is the ability of a system to remai n in operation even if some of the components used to build the system fail. In AWS, this means that in t he event of server fault or system failures, the nu mber of running EC2 instances should not fall below the min imum number of instances required by the system for it to work properly. So if the application requires a minimum of 4 instances, there should be at least 4 instances running in case there is an outage in one of the Availability Zones or if there are server i ssues. One of the differences between Fault Tolerance and High Availability is that the former refers to the minimum number of running instances. For example, y ou have a system that requires a minimum of 4 running instances and currently has 6 running insta nces deployed in two Availability Zones. There was a component failure in one of the Availability Zones which knocks out 3 instances. In this case, the sys tem can still be regarded as Highly Available since the re are still instances running that can accommodate the requests. However, it is not Fault-Tolerant since t he required minimum of four instances has not been met. 
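To illustrate how a fault-tolerant fleet like this could be declared, here is a minimal boto3 sketch of an Auto Scaling group that keeps 2 instances in each of 3 Availability Zones (6 in total), so losing an entire AZ still leaves at least the required 4 instances. The subnet IDs, launch template name, and target group ARN are placeholder assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread 6 instances across three AZ-specific subnets behind an ALB target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=6,                 # 2 per AZ; an AZ outage still leaves 4 running
    MaxSize=12,
    DesiredCapacity=6,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",  # 3 different AZs
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/0123456789abcdef"],
    HealthCheckType="ELB",     # replace instances that fail load balancer health checks
    HealthCheckGracePeriod=120,
)
```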
Hence, the correct answer is: Deploy an Auto Scalin g group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer. The option that says: Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer is incorrect be cause if one Availability Zone went out, there will only be 2 running instances available out of the re quired 4 minimum instances. Although the Auto Scali ng group can spin up another 2 instances, the fault to lerance of the web application has already been compromised. The option that says: Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer is incorrect because i f the Availability Zone went out, there will be no running instance available to accommodate the reque st. The option that says: Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer is incorrect be cause if one Availability Zone went out, there will only be 3 instances available to accommodate the re quest. References: https://media.amazonwebservices.com/AWS_Building_Fa ult_Tolerant_Applications.pdf https://d1.awsstatic.com/whitepapers/aws-building-f ault-tolerant-applications.pdf AWS Overview Cheat Sheets: https://tutorialsdojo.com/aws-cheat-sheets-overview /", + "references": "" + }, + { + "question": "There is a new compliance rule in your company that audits every Windows and Linux EC2 instances each month to view any performance issues. They have mor e than a hundred EC2 instances running in productio n, and each must have a logging function that collects various system details regarding that instance. Th e SysOps team will periodically review these logs and analyz e their contents using AWS Analytics tools, and the result will need to be retained in an S3 bucket. In this scenario, what is the most efficient way to collect and analyze logs from the instances with m inimal effort?", + "options": [ + "A. A. Install AWS Inspector Agent in each instance w hich will collect and push data to CloudWatch Logs", + "B. B. Install AWS SDK in each instance and create a custom daemon script that would collect and push da ta", + "C. C. Install the AWS Systems Manager Agent (SSM Age nt) in each instance which will automatically colle ct", + "D. D. Install the unified CloudWatch Logs agent in e ach instance which will automatically collect and p ush data" + ], + "correct": "D. D. Install the unified CloudWatch Logs agent in e ach instance which will automatically collect and p ush data", + "explanation": "Explanation/Reference: To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch Logs, AWS offers both a new unified CloudWatch agent, and an older CloudWatch Logs agent. It is recommended to use the unified CloudWatch agent which has the foll owing advantages: - You can collect both logs and advanced metrics wi th the installation and configuration of just one a gent. - The unified agent enables the collection of logs from servers running Windows Server. - If you are using the agent to collect CloudWatch metrics, the unified agent also enables the collect ion of additional system metrics, for in-guest visibility. - The unified agent provides better performance. CloudWatch Logs Insights enables you to interactive ly search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help yo u quickly and effectively respond to operational issues. 
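As a small example of the analysis step, the sketch below runs a CloudWatch Logs Insights query with boto3 against the log group that the unified CloudWatch agent ships logs to. The log group name and the query itself are placeholder assumptions.

```python
import time
import boto3

logs = boto3.client("logs")

# Query the last 30 days of agent-collected logs for error messages.
# "/ec2/system-logs" is a placeholder log group name.
query = logs.start_query(
    logGroupName="/ec2/system-logs",
    startTime=int(time.time()) - 30 * 24 * 3600,
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, @logStream, @message "
        "| filter @message like /error/ "
        "| sort @timestamp desc | limit 50"
    ),
)

# Poll until the query finishes, then read the matching log events.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
print(results["results"])
```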
If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and vali date deployed fixes. CloudWatch Logs Insights includes a purpose-built q uery language with a few simple but powerful commands. CloudWatch Logs Insights provides sample queries, command descriptions, query autocompletion, and log field discovery to help you get started quickly. Sample queries are included f or several types of AWS service logs. The option that says: Install AWS SDK in each insta nce and create a custom daemon script that would collect and push data to CloudWatch Logs periodical ly. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log dat a of all instances is incorrect. Although this is a valid solution, this entails a lot of effort to imp lement as you have to allocate time to install the AWS SDK to each instance and develop a custom monitoring so lution. Remember that the question is specifically looking for a solution that can be implemented with minimal effort. In addition, it is unnecessary and not cost-efficient to enable detailed monitoring in Clo udWatch in order to meet the requirements of this scenario since this can be done using CloudWatch Lo gs. The option that says: Install the AWS Systems Manag er Agent (SSM Agent) in each instance which will automatically collect and push data to CloudWa tch Logs. Analyze the log data with CloudWatch Logs Insights is incorrect. Although this is also a valid solution, it is more efficient to use CloudW atch agent than an SSM agent. Manually connecting to an instance to view log files and troubleshoot an issu e with SSM Agent is time-consuming hence, for more ef ficient instance monitoring, you can use the CloudWatch Agent instead to send the log data to Am azon CloudWatch Logs. The option that says: Install AWS Inspector Agent i n each instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch d ashboard to properly analyze the log data of all instances is incorrect because AWS Inspector is simply a security assessments service which only h elps you in checking for unintended network accessibilit y of your EC2 instances and for vulnerabilities on those EC2 instances. Furthermore, setting up an Amazon Cl oudWatch dashboard is not suitable since its primarily used for scenarios where you have to moni tor your resources in a single view, even those resources that are spread across different AWS Regi ons. It is better to use CloudWatch Logs Insights instead since it enables you to interactively searc h and analyze your log data. References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/WhatIsCloudWatchLogs.html https://docs.aws.amazon.com/systems-manager/latest/ userguide/monitoring-ssm-agent.html https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/AnalyzingLogData.html Amazon CloudWatch Overview: https://youtube.com/watch?v=q0DmxfyGkeU Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ CloudWatch Agent vs SSM Agent vs Custom Daemon Scri pts: https://tutorialsdojo.com/cloudwatch-agent-vs-ssm-a gent-vs-custom-daemon-scripts/", + "references": "" + }, + { + "question": "A company is using an Auto Scaling group which is c onfigured to launch new t2.micro EC2 instances when there is a significant load increase in the ap plication. To cope with the demand, you now need to replace those instances with a larger t2.2xlarge in stance type. How would you implement this change?", + "options": [ + "A. A. 
Change the instance type of each EC2 instance manually.", + "B. B. Create a new launch configuration with the new instance type and update the Auto Scaling Group.", + "C. C. Just change the instance type to t2.2xlarge in the current launch configuration", + "D. D. Create another Auto Scaling Group and attach t he new instance type." + ], + "correct": "B. B. Create a new launch configuration with the new instance type and update the Auto Scaling Group.", + "explanation": "Explanation/Reference: You can only specify one launch configuration for a n Auto Scaling group at a time, and you can't modif y a launch configuration after you've created it. There fore, if you want to change the launch configuratio n for an Auto Scaling group, you must create a launch con figuration and then update your Auto Scaling group with the new launch configuration. Hence, the correct answer is: Create a new launch c onfiguration with the new instance type and update the Auto Scaling Group. The option that says: Just change the instance type to t2.2xlarge in the current launch configuration is incorrect because you can't change your launch conf iguration once it is created. You have to create a new one instead. The option that says: Create another A uto Scaling Group and attach the new instance type is incorrect because you can't directly attach or d eclare the new instance type to your Auto Scaling g roup. You have to create a new launch configuration first , with a new instance type, then attach it to your existing Auto Scaling group. The option that says: Change th e instance type of each EC2 instance manually is incorrect because you can't directly change the ins tance type of your EC2 instance. This should be don e by creating a brand new launch configuration then atta ching it to your existing Auto Scaling group. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/LaunchConfiguration.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/create-asg.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "A company has a two-tier environment in its on-prem ises data center which is composed of an applicatio n tier and database tier. You are instructed to migrate th eir environment to the AWS cloud, and to design the subnets in their VPC with the following requirements: 1. There is an application load balancer that would distribute the incoming traffic among the servers in the application tier. 2. The application tier and the database tier must not be accessible from the public Internet. The application tier should only accept traffic com ing from the load balancer. 3. The database tier contains very sensitive data. It must not share the same subnet with other AWS re sources and its custom route table with other instances in the environment. 4. The environment must be highly available and sca lable to handle a surge of incoming traffic over th e Internet. How many subnets should you create to meet the abov e requirements?", + "options": [ + "A. A. 4", + "B. B. 6", + "C. C. 3", + "D. D. 2" + ], + "correct": "B. B. 6", + "explanation": "Explanation/Reference: The given scenario indicated 4 requirements that sh ould be met in order to successfully migrate their two- tier environment from their on-premises data center to AWS Cloud. The first requirement means that you have to use an application load balancer (ALB) to d istribute the incoming traffic to your application servers. 
The second requirement specifies that both your app lication and database tier should not be accessible from the public Internet. This means that you could crea te a single private subnet for both of your applica tion and database tier. However, the third requirement m entioned that the database tier should not share th e same subnet with other AWS resources to protect its sensitive data. This means that you should provisi on one private subnet for your application tier and an other private subnet for your database tier. The last requirement alludes to the need for using at least two Availability Zones to achieve high availability. This means that you have to distribut e your application servers to two AZs as well as yo ur database which can be set up with a master-slave co nfiguration to properly replicate the data between two zones. If you have more than one private subnet in the sam e Availability Zone that contains instances that ne ed to be registered with the load balancer, you only need to create one public subnet. You need only one pub lic subnet per Availability Zone; you can add the priva te instances in all the private subnets that reside in that particular Availability Zone. Since you have a public internet-facing load balanc er that has a group of backend Amazon EC2 instances that are deployed in a private subnet, you must cre ate the corresponding public subnets in the same Availability Zones. This new public subnet is on to p of the private subnet that is used by your privat e EC2 instances. Lastly, you should associate these publi c subnets to the Internet-facing load balancer to c omplete the setup. To summarize, we need to have one private subnet fo r the application tier and another one for the data base tier. We then need to create another public subnet in the same Availability Zone where the private EC2 instances are hosted, in order to properly connect the public Internet-facing load balancer to your in stances. This means that we have to use a total of 3 subnets consisting of 2 private subnets and 1 public subne t. To meet the requirement of high availability, we ha ve to deploy the stack to two Availability Zones. T his means that you have to double the number of subnets you are using. Take note as well that you must cre ate the corresponding public subnet in the same Availab ility Zone of your private EC2 servers in order for it to properly communicate with the load balancer. Hence, the correct answer is 6 subnets. References: https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Scenario2.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/public-load-balancer-private-ec2/ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A financial firm is designing an application archit ecture for its online trading platform that must ha ve high availability and fault tolerance. Their Solutions A rchitect configured the application to use an Amazo n S3 bucket located in the us-east-1 region to store lar ge amounts of intraday financial data. The stored f inancial data in the bucket must not be affected even if the re is an outage in one of the Availability Zones or if there's a regional service failure. What should the Architect do to avoid any costly se rvice disruptions and ensure data durability?", + "options": [ + "A. A. Create a Lifecycle Policy to regularly backup th e S3 bucket to Amazon Glacier. B. B. Copy the S3 bucket to an EBS-backed EC2 instance .", + "C. C. 
Create a new S3 bucket in another region and c onfigure Cross-Account Access to the bucket located in", + "D. D. Enable Cross-Region Replication." + ], + "correct": "D. D. Enable Cross-Region Replication.", + "explanation": "Explanation/Reference: In this scenario, you need to enable Cross-Region R eplication to ensure that your S3 bucket would not be affected even if there is an outage in one of the A vailability Zones or a regional service failure in us-east-1. When you upload your data in S3, your objects are r edundantly stored on multiple devices across multip le facilities within the region only, where you create d the bucket. Thus, if there is an outage on the en tire region, your S3 bucket will be unavailable if you d o not enable Cross-Region Replication, which should make your data available to another region. Note that an Availability Zone (AZ) is more related with Amazon EC2 instances rather than Amazon S3 so if there is any outage in the AZ, the S3 bucket is usually not affected but only the EC2 instances dep loyed on that zone. . Hence, the correct answer is: Enable Cross-Region R eplication. The option that says: Copy the S3 bucket to an EBS- backed EC2 instance is incorrect because EBS is not as durable as Amazon S3. Moreover, if the Availabil ity Zone where the volume is hosted goes down then the data will also be inaccessible. The option that says: Create a Lifecycle Policy to regularly backup the S3 bucket to Amazon Glacier is incorrect because Glacier is primarily used for dat a archival. You also need to replicate your data to another region for better durability. The option that says: Create a new S3 bucket in ano ther region and configure Cross-Account Access to the bucket located in us-east-1 is incorrect becaus e Cross-Account Access in Amazon S3 is primarily us ed if you want to grant access to your objects to anot her AWS account, and not just to another AWS Region . For example, Account MANILA can grant another AWS a ccount (Account CEBU) permission to access its resources such as buckets and objects. S3 Cross-Acc ount Access does not replicate data from one region to another. A better solution is to enable Cross-Regio n Replication (CRR) instead. References: https://aws.amazon.com/s3/faqs/ . https://aws.amazon.com/s3/features/replication/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A Solutions Architect is designing a monitoring app lication which generates audit logs of all operatio nal activities of the company's cloud infrastructure. T heir IT Security and Compliance team mandates that the application retain the logs for 5 years before the data can be deleted. How can the Architect meet the above requirement?", + "options": [ + "A. A. Store the audit logs in a Glacier vault and us e the Vault Lock feature.", + "B. B. Store the audit logs in an Amazon S3 bucket an d enable Multi-Factor Authentication Delete (MFA De lete)", + "C. C. Store the audit logs in an EBS volume and then take EBS snapshots every month.", + "D. D. Store the audit logs in an EFS volume and use Network File System version 4 (NFSv4) file- locking" + ], + "correct": "A. A. Store the audit logs in a Glacier vault and us e the Vault Lock feature.", + "explanation": "Explanation/Reference: An Amazon S3 Glacier (Glacier) vault can have one r esource-based vault access policy and one Vault Lock policy attached to it. A Vault Lock policy is a vault access policy that you can lock. 
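To show the two-step locking workflow in code, here is a minimal boto3 sketch that attaches a Vault Lock policy denying archive deletion for 5 years (1825 days) and then locks it. The vault name, account ID, and region in the ARN are placeholder assumptions.

```python
import json
import boto3

glacier = boto3.client("glacier")

# Deny deletion of any archive younger than 5 years (1825 days).
lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-delete-for-5-years",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/audit-logs",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "1825"}},
    }],
}

# Step 1: attach the policy in the "in progress" state and receive a lock ID.
lock = glacier.initiate_vault_lock(
    accountId="-",
    vaultName="audit-logs",
    policy={"Policy": json.dumps(lock_policy)},
)

# Step 2: after testing the policy, make it immutable. This must be completed
# within 24 hours of the initiate call, otherwise the lock process expires.
glacier.complete_vault_lock(
    accountId="-",
    vaultName="audit-logs",
    lockId=lock["lockId"],
)
```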
Using a Va ult Lock policy can help you enforce regulatory and com pliance requirements. Amazon S3 Glacier provides a set of API operations for you to manage the Vault L ock policies. As an example of a Vault Lock policy, suppose that you are required to retain archives for one year be fore you can delete them. To implement this requirement, you can create a Vault Lock policy that denies use rs permissions to delete an archive until the archive has existed for one year. You can test this policy before locking it down. After you lock the policy, the pol icy becomes immutable. For more information about t he locking process, see Amazon S3 Glacier Vault Lock. If you want to manage other user permissions that c an be changed, you can use the vault access policy Amazon S3 Glacier supports the following archive op erations: Upload, Download, and Delete. Archives are immutable and cannot be modified. Hence, the co rrect answer is to store the audit logs in a Glacie r vault and use the Vault Lock feature. Storing the audit logs in an EBS volume and then ta king EBS snapshots every month is incorrect because this is not a suitable and secure solution. Anyone who has access to the EBS Volume can simply delete and modify the audit logs. Snapshots can be deleted too. Storing the audit logs in an Amazon S3 bucket and e nabling Multi-Factor Authentication Delete (MFA Delete) on the S3 bucket is incorrect because this would still not meet the requirement. If someo ne has access to the S3 bucket and also has the proper MFA privileges then the audit logs can be edited. Storing the audit logs in an EFS volume and using N etwork File System version 4 (NFSv4) file- locking mechanism is incorrect because the data int egrity of the audit logs can still be compromised i f it is stored in an EFS volume with Network File System ve rsion 4 (NFSv4) file-locking mechanism and hence, not suitable as storage for the files. Although it will provide some sort of security, the file lock c an still be overridden and the audit logs might be edited by so meone else. References: https://docs.aws.amazon.com/amazonglacier/latest/de v/vault-lock.html https://docs.aws.amazon.com/amazonglacier/latest/de v/vault-lock-policy.html https://aws.amazon.com/blogs/aws/glacier-vault-lock / Amazon S3 and S3 Glacier Overview: https://youtube.com/watch?v=1ymyeN2tki4 Check out this Amazon S3 Glacier Cheat Sheet: https://tutorialsdojo.com/amazon-glacier/", + "references": "" + }, + { + "question": "A web application is hosted in an Auto Scaling grou p of EC2 instances deployed across multiple Availability Zones behind an Application Load Balan cer. You need to implement an SSL solution for your system to improve its security which is why you req uested an SSL/TLS certificate from a third-party certificate authority (CA). Where can you safely import the SSL/TLS certificate of your application? (Select TWO.)", + "options": [ + "A. A. An S3 bucket configured with server-side encry ption with customer-provided encryption keys (SSE-C )", + "B. B. AWS Certificate Manager", + "C. C. A private S3 bucket with versioning enabled", + "D. D. CloudFront" + ], + "correct": "", + "explanation": "Explanation/Reference: If you got your certificate from a third-party CA, import the certificate into ACM or upload it to the IAM certificate store. Hence, AWS Certificate Manager a nd IAM certificate store are the correct answers. ACM lets you import third-party certificates from t he ACM console, as well as programmatically. 
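As a brief programmatic example, the sketch below imports a third-party certificate into ACM with boto3. The PEM file names are placeholders for the certificate, private key, and chain issued by the CA; the commented-out call shows the IAM certificate store alternative.

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Import a certificate issued by a third-party CA into ACM.
with open("certificate.pem", "rb") as cert, \
     open("private_key.pem", "rb") as key, \
     open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )
print("Certificate ARN:", response["CertificateArn"])

# If ACM is not available in the region, the same material can be uploaded
# to the IAM certificate store instead, for example:
# boto3.client("iam").upload_server_certificate(
#     ServerCertificateName="third-party-cert",
#     CertificateBody=..., PrivateKey=..., CertificateChain=...)
```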
If ACM is not available in your region, use AWS CLI to upl oad your third-party certificate to the IAM certifi cate store. A private S3 bucket with versioning enabled and an S3 bucket configured with server-side encryption with customer-provided encryption keys (SSE-C) are both incorrect as S3 is not a suitable service to store the SSL certificate. CloudFront is incorrect. Although you can upload ce rtificates to CloudFront, it doesn't mean that you can import SSL certificates on it. You would not be abl e to export the certificate that you have loaded in CloudFront nor assign them to your EC2 or ELB insta nces as it would be tied to a single CloudFront distribution.", + "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/cnames-and-https- procedures.html#cnames-and-https-uploading-certific ates Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ AWS Security Services Overview - Secrets Manager, A CM, Macie: https://youtube.com/watch?v=ogVamzF2Dzk Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/" + }, + { + "question": "A company has a web application hosted in an On-Dem and EC2 instance. You are creating a shell script that needs the instance's public and private IP add resses. What is the best way to get the instance's associat ed IP addresses which your shell script can use?", + "options": [ + "A. A. By using a Curl or Get Command to get the late st metadata information from", + "B. B. By using a CloudWatch metric.", + "C. C. By using a Curl or Get Command to get the late st user data information from", + "D. D. By using IAM." + ], + "correct": "A. A. By using a Curl or Get Command to get the late st metadata information from", + "explanation": "Explanation/Reference: Instance metadata is data about your EC2 instance t hat you can use to configure or manage the running instance. Because your instance metadata is availab le from your running instance, you do not need to u se the Amazon EC2 console or the AWS CLI. This can be helpful when you're writing scripts to run from your instance. For example, you can access the loca l IP address of your instance from instance metadat a to manage a connection to an external application. To view the private IPv4 address, public IPv4 addre ss, and all other categories of instance metadata f rom within a running instance, use the following URL: http://169.254.169.254/latest/meta-data/", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ec2-instance-metadata.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/" + }, + { + "question": "A company needs to integrate the Lightweight Direct ory Access Protocol (LDAP) directory service from the on-premises data center to the AWS VPC using IA M. The identity store which is currently being used is not compatible with SAML. Which of the following provides the most valid appr oach to implement the integration? A. A. Use an IAM policy that references the LDAP ide ntifiers and AWS credentials.", + "options": [ + "B. B. Use AWS Single Sign-On (SSO) service to enable single sign-on between AWS and your LDAP.", + "C. C. 
Develop an on-premises custom identity broker application and use STS to issue short-lived AWS", + "D. D. Use IAM roles to rotate the IAM credentials wh enever LDAP credentials are updated.", + "A. A. Configure RAID 1 in multiple instance store vo lumes.", + "B. B. Attach multiple Provisioned IOPS SSD volumes i n the instance.", + "C. C. Configure RAID 0 in multiple instance store vo lumes.", + "D. D. Enable Transfer Acceleration in Amazon S3." + ], + "correct": "C. C. Configure RAID 0 in multiple instance store vo lumes.", + "explanation": "Explanation/Reference: Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic . RAID 0 configuration enables you to improve your st orage volumes' performance by distributing the I/O across the volumes in a stripe. Therefore, if you a dd a storage volume, you get the straight addition of throughput and IOPS. This configuration can be impl emented on both EBS or instance store volumes. Since the main requirement in the scenario is stora ge performance, you need to use an instance store volume. It uses NVMe or SATA-based SSD to deliver h igh random I/O performance. This type of storage is a good option when you need storage with very lo w latency, and you don't need the data to persist w hen the instance terminates. Hence, the correct answer is: Configure RAID 0 in m ultiple instance store volumes. The option that says: Enable Transfer Acceleration in Amazon S3 is incorrect because S3 Transfer Acceleration is mainly used to speed up the transfe r of gigabytes or terabytes of data between clients and an S3 bucket. The option that says: Configure RAID 1 in multiple instance volumes is incorrect because RAID 1 configuration is used for data mirroring. You need to configure RAID 0 to improve the performance of your storage volumes. The option that says: Attach multiple Provisioned I OPS SSD volumes in the instance is incorrect because persistent storage is not needed in the sce nario. Also, instance store volumes have greater I/ O performance than EBS volumes. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /InstanceStorage.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /raid-config.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a large global media company with multiple office locations all around the world. The Architect is instructed to bu ild a system to distribute training videos to all e mployees. Using CloudFront, what method would be used to serv e content that is stored in S3, but not publicly ac cessible from S3 directly?", + "options": [ + "A. A. Create an Origin Access Identity (OAI) for Clo udFront and grant access to the objects in your S3 bucket", + "B. B. Create a web ACL in AWS WAF to block any publi c S3 access and attach it to the Amazon CloudFront", + "C. C. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the object s in", + "D. D. Create an S3 bucket policy that lists the Clou dFront distribution ID as the principal and the tar get bucket", + "A. A. 
Use Amazon S3 Glacier Deep Archive to store th e data.", + "B. B. Use Amazon S3 to store the data.", + "C. C. Amazon Certificate Manager", + "D. D. Configure Server-Side Encryption with AWS KMS- Managed Keys (SSE-KMS)." + ], + "correct": "A. A. Create an Origin Access Identity (OAI) for Clo udFront and grant access to the objects in your S3 bucket", + "explanation": "Explanation/Reference: Server-side encryption is the encryption of data at its destination by the application or service that receives it. AWS Key Management Service (AWS KMS) is a servi ce that combines secure, highly available hardware and software to provide a key management s ystem scaled for the cloud. Amazon S3 uses AWS KMS customer master keys (CMKs) to encrypt your Ama zon S3 objects. SSE-KMS encrypts only the object data. Any object metadata is not encrypted. If you use customer-managed CMKs, you use AWS KMS via the AWS Management Console or AWS KMS APIs to centrally create encryption keys, define the policies that control how keys can be used, and audit key usage to prove that they are being used correctly. You can use these keys to protect your d ata in Amazon S3 buckets. A customer master key (CMK) is a logical representa tion of a master key. The CMK includes metadata, such as the key ID, creation date, description, and key state. The CMK also contains the key material used to encrypt and decrypt data. You can use a CMK to e ncrypt and decrypt up to 4 KB (4096 bytes) of data. Typically, you use CMKs to generate, encrypt, and d ecrypt the data keys that you use outside of AWS KMS to encrypt your data. This strategy is known as envelope encryption. You have three mutually exclusive options depending on how you choose to manage the encryption keys: Use Server-Side Encryption with Amazon S3-Managed K eys (SSE-S3) Each object is encrypted with a unique key. As an additional safeguard, it encryp ts the key itself with a master key that it regular ly rotates. Amazon S3 server-side encryption uses one of the st rongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data . Use Server-Side Encryption with Customer Master Key s (CMKs) Stored in AWS Key Management Service (SSE-KMS) Similar to SSE-S3, but with some additional benefits and charges for using this service. There are separate permissions for the use of a CMK that provides added protection against unauthorized access of your objects in Amazon S3. S SE-KMS also provides you with an audit trail that shows when your CMK was used and by whom. Additiona lly, you can create and manage customer- managed CMKs or use AWS managed CMKs that are uniqu e to you, your service, and your Region. Use Server-Side Encryption with Customer-Provided K eys (SSE-C) You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption when you access your objec ts. In the scenario, the company needs to store financi al files in AWS which are accessed every week and t he solution should use envelope encryption. This requi rement can be fulfilled by using an Amazon S3 configured with Server-Side Encryption with AWS KMS -Managed Keys (SSE-KMS). Hence, using Amazon S3 to store the data and configuring Server- Side Encryption with AWS KMS-Managed Keys (SSE-KMS) are the correct answers. Using Amazon S3 Glacier Deep Archive to store the d ata is incorrect. 
Although this provides the most cost-effective storage solution, it is not the appr opriate service to use if the files being stored ar e frequently accessed every week. Configuring Server-Side Encryption with Customer-Pr ovided Keys (SSE-C) and configuring Server- Side Encryption with Amazon S3-Managed Keys (SSE-S3 ) are both incorrect. Although you can configure automatic key rotation, these two do not provide you with an audit trail that shows when you r CMK was used and by whom, unlike Server-Side Encryp tion with AWS KMS-Managed Keys (SSE-KMS). References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ser v-side-encryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngKMSEncryption.html https://docs.aws.amazon.com/kms/latest/developergui de/services-s3.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/private-content-restricting-access- to- s3.html#private-content-granting-permissions-to-oai Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ S3 Pre-signed URLs vs CloudFront Signed URLs vs Ori gin Access Identity (OAI) https://tutorialsdojo.com/s3-pre-signed-urls-vs-clo udfront-signed-urls-vs-origin-access-identity-oai/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/ QUESTION 252 A company is looking to store their confidential fi nancial files in AWS which are accessed every week. The Architect was instructed to set up the storage system which uses envelope encryption and automates key rotation. It should also provide an audit trail tha t shows who used the encryption key and by whom for security purposes. Which combination of actions should the Architect i mplement to satisfy the requirement in the most cos t- effective way? (Select TWO.)" + }, + { + "question": "A tech startup has recently received a Series A rou nd of funding to continue building their mobile for ex trading application. You are hired to set up their cloud architecture in AWS and to implement a highly available, fault tolerant system. For their databas e, they are using DynamoDB and for authentication, they have chosen to use Cognito. Since the mobile applic ation contains confidential financial transactions, there is a requirement to add a second authentication met hod that doesn't rely solely on user name and passw ord. How can you implement this in AWS?", + "options": [ + "A. A. Add a new IAM policy to a user pool in Cognito .", + "B. B. Add multi-factor authentication (MFA) to a use r pool in Cognito to protect the identity of your u sers.", + "C. C. Develop a custom application that integrates w ith Cognito that implements a second layer of", + "D. D. Integrate Cognito with Amazon SNS Mobile Push to allow additional authentication via SMS." + ], + "correct": "B. B. Add multi-factor authentication (MFA) to a use r pool in Cognito to protect the identity of your u sers.", + "explanation": "Explanation/Reference: You can add multi-factor authentication (MFA) to a user pool to protect the identity of your users. MF A adds a second authentication method that doesn't re ly solely on user name and password. You can choose to use SMS text messages, or time-based one-time (TOTP ) passwords as second factors in signing in your users. You can also use adaptive authentication wit h its risk-based model to predict when you might ne ed another authentication factor. 
It's part of the use r pool advanced security features, which also inclu de protections against compromised credentials.", + "references": "https://docs.aws.amazon.com/cognito/latest/develope rguide/managing-security.html" + }, + { + "question": "A company has an OLTP (Online Transactional Process ing) application that is hosted in an Amazon ECS cluster using the Fargate launch type. It has an Am azon RDS database that stores data of its productio n website. The Data Analytics team needs to run queri es against the database to track and audit all user transactions. These query operations against the pr oduction database must not impact application performance in any way. Which of the following is the MOST suitable and cos t-effective solution that you should implement?", + "options": [ + "A. A. Upgrade the instance type of the RDS database to a large instance.", + "B. B. Set up a new Amazon Redshift database cluster. Migrate the product database into Redshift and all ow", + "C. C. Set up a new Amazon RDS Read Replica of the pr oduction database. Direct the Data Analytics team t o query the production data from the replica.", + "D. D. Set up a Multi-AZ deployments configuration of your production database in RDS. Direct the Data" + ], + "correct": "C. C. Set up a new Amazon RDS Read Replica of the pr oduction database. Direct the Data Analytics team t o query the production data from the replica.", + "explanation": "Explanation/Reference: Amazon RDS Read Replicas provide enhanced performan ce and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB ins tance for read-heavy database workloads. You can create one or more replicas of a given sour ce DB Instance and serve high-volume application re ad traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone D B instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle and PostgreSQL, as w ell as Amazon Aurora. You can reduce the load on your source DB instance by routing read queries from your applications to t he read replica. These replicas allow you to elastical ly scale out beyond the capacity constraints of a s ingle DB instance for read-heavy database workloads. Because read replicas can be promoted to master sta tus, they are useful as part of a sharding implementation. To shard your database, add a read replica and promote it to master status, then, from each of the resulting DB Instances, delete the data that belongs to the other shard. Hence, the correct answer is: Set up a new Amazon R DS Read Replica of the production database. Direct the Data Analytics team to query the product ion data from the replica. The option that says: Set up a new Amazon Redshift database cluster. Migrate the product database into Redshift and allow the Data Analytics team to fetch data from it is incorrect because Redshift is primarily used for OLAP (Online Analytical Processi ng) applications and not for OLTP. The option that says: Set up a Multi-AZ deployments configuration of your production database in RDS. Direct the Data Analytics team to query the pr oduction data from the standby instance is incorrect because you can't directly connect to the standby instance. This is only used in the event o f a database failover when your primary instance encoun tered an outage. 
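To make the recommended setup concrete, here is a minimal boto3 sketch that creates a read replica of the production database and retrieves the replica endpoint for the Data Analytics team. The DB identifiers and instance class are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the production instance; analytics queries go to
# the replica's endpoint instead of the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-db-analytics-replica",
    SourceDBInstanceIdentifier="prod-db",
    DBInstanceClass="db.r5.large",
    PubliclyAccessible=False,
)

# Wait until the replica is available, then fetch its read-only endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="prod-db-analytics-replica")
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="prod-db-analytics-replica"
)["DBInstances"][0]["Endpoint"]["Address"]
print("Analytics endpoint:", endpoint)
```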
The option that says: Upgrade the instance type of the RDS database to a large instance is incorrect because this entails a significant amount of cost. Moreover, the production database could still be af fected by the queries done by the Data Analytics team. A b etter solution for this scenario is to use a Read R eplica instead. References: https://aws.amazon.com/caching/database-caching/ https://aws.amazon.com/rds/details/read-replicas/ https://aws.amazon.com/elasticache/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "A company deployed an online enrollment system data base on a prestigious university, which is hosted i n RDS. The Solutions Architect is required to monitor the database metrics in Amazon CloudWatch to ensur e the availability of the enrollment system. What are the enhanced monitoring metrics that Amazo n CloudWatch gathers from Amazon RDS DB instances which provide more accurate information? (Select TW O.)", + "options": [ + "A. A. Database Connections", + "B. B. CPU Utilization", + "C. C. RDS child processes.", + "D. D. Freeable Memory" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon RDS provides metrics in real time for the op erating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring sy stem of your choice. CloudWatch gathers metrics about CPU utilization fr om the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements, because the hypervisor la yer performs a small amount of work. The difference s can be greater if your DB instances use smaller ins tance classes, because then there are likely more v irtual machines (VMs) that are managed by the hypervisor l ayer on a single physical instance. Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. In RDS, the Enhanced Monitoring metrics shown in th e Process List view are organized as follows: RDS child processes Shows a summary of the RDS pro cesses that support the DB instance, for example aurora for Amazon Aurora DB clusters and mysqld for MySQL DB instances. Process threads appear nested beneath the parent process. Process threads show CPU utilization only as other metrics are the same for all threads for the process. The console displa ys a maximum of 100 processes and threads. The resu lts are a combination of the top CPU consuming and memo ry consuming processes and threads. If there are more than 50 processes and more than 50 threads, th e console displays the top 50 consumers in each category. This display helps you identify which pro cesses are having the greatest impact on performanc e. RDS processes Shows a summary of the resources use d by the RDS management agent, diagnostics monitoring processes, and other AWS processes that are required to support RDS DB instances. OS processes Shows a summary of the kernel and sys tem processes, which generally have minimal impact on performance. CPU Utilization, Database Connections, and Freeable Memory are incorrect because these are just the regular items provided by Amazon RDS Metrics in Clo udWatch. Remember that the scenario is asking for the Enhanced Monitoring metrics. 
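For reference, Enhanced Monitoring is turned on per DB instance by setting a monitoring interval and a monitoring role, as in the minimal boto3 sketch below. The instance identifier and the role ARN (a role that trusts monitoring.rds.amazonaws.com) are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds")

# Enable Enhanced Monitoring so OS-level metrics (RDS child processes,
# RDS processes, OS processes) are published to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="enrollment-db",
    MonitoringInterval=60,   # seconds; valid values are 0, 1, 5, 10, 15, 30, 60
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```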
References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /monitoring/rds-metricscollected.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/ USER_Monitoring.OS.html#USER_Monitoring.OS.CloudWat chLogs Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "An organization plans to use an AWS Direct Connect connection to establish a dedicated connection between its on-premises network and AWS. The organi zation needs to launch a fully managed solution tha t will automate and accelerate the replication of dat a to and from various AWS storage services. Which of the following solutions would you recommen d?", + "options": [ + "A. A. Use an AWS Storage Gateway file gateway to sto re and retrieve files directly using the SMB file s ystem", + "B. B. Use an AWS DataSync agent to rapidly move the data over the Internet.", + "C. C. Use an AWS DataSync agent to rapidly move the data over a service endpoint.", + "D. D. Use an AWS Storage Gateway tape gateway to sto re data on virtual tape cartridges and asynchronous ly" + ], + "correct": "C. C. Use an AWS DataSync agent to rapidly move the data over a service endpoint.", + "explanation": "Explanation/Reference: AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools or license and man age expensive commercial network acceleration software. You can use DataSync to migrate active da ta to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises sto rage capacity, or replicate data to AWS for busines s continuity. AWS DataSync simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Co nnect. DataSync can copy data between Network File System (NFS), Server Message Block (SMB) file serve rs, self-managed object storage, or AWS Snowcone, and Amazon Simple Storage Service (Amazon S3) bucke ts, Amazon EFS file systems, and Amazon FSx for Windows File Server file systems. You deploy an AWS DataSync agent to your on-premise s hypervisor or in Amazon EC2. To copy data to or from an on-premises file server, you download th e agent virtual machine image from the AWS Console and deploy to your on-premises VMware ESXi, Linux K ernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisor. To copy data to or from an in-c loud file server, you create an Amazon EC2 instance using a DataSync agent AMI. In both cases the agent must be deployed so that it can access your file s erver using the NFS, SMB protocol, or the Amazon S3 API. To set up transfers between your AWS Snowcone device and AWS storage, use the DataSync agent AMI that comes pre-installed on your device. Since the scenario plans to use AWS Direct Connect for private connectivity between on-premises and AWS, you can use DataSync to automate and accelerat e online data transfers to AWS storage services. Th e AWS DataSync agent will be deployed in your on-prem ises network to accelerate data transfer to AWS. To connect programmatically to an AWS service, you wil l need to use an AWS Direct Connect service endpoint. Hence, the correct answer is: Use an AWS DataSync a gent to rapidly move the data over a service endpoint. 
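To sketch what this looks like programmatically, the example below defines an on-premises NFS source (reached through the deployed DataSync agent), an S3 destination, and a transfer task. The agent ARN, file server hostname, bucket name, and IAM role ARN are placeholder assumptions.

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS share, reached through the DataSync agent
# deployed in the data center.
source = datasync.create_location_nfs(
    ServerHostname="fileserver.corp.example.com",
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
)

# Destination: an S3 bucket, accessed through an IAM role DataSync can assume.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-replication-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# Create the transfer task and start an execution.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="onprem-to-s3-replication",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```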
The option that says: Use AWS DataSync agent to rap idly move the data over the Internet is incorrect because the organization will be using an AWS Direc t Connect connection for private connectivity. This means that the connection will not pass through the public Internet. The options that say: Use AWS Storage Gateway tape gateway to store data on virtual tape cartridges and asynchronously copy your backups to AWS and Use AWS Storage Gateway file gateway to store and retrieve files directly using the SMB file syst em protocol are both incorrect because, in the scen ario, you need to accelerate the replication of data, and not establish a hybrid cloud storage architecture. AWS Storage Gateway only supports a few AWS storage ser vices as a target based on the type of gateway that you launched. AWS DataSync is more suitable in auto mating and accelerating online data transfers to a variety of AWS storage services. References: https://aws.amazon.com/datasync/faqs/ https://docs.aws.amazon.com/datasync/latest/usergui de/what-is-datasync.html https://docs.aws.amazon.com/general/latest/gr/dc.ht ml AWS DataSync Overview: https://youtube.com/watch?v=uQDVZfj_VEA Check out this AWS DataSync Cheat Sheet: https://tutorialsdojo.com/aws-datasync/ .", + "references": "" + }, + { + "question": "A large electronics company is using Amazon Simple Storage Service to store important documents. For reporting purposes, they want to track and log ever y request access to their S3 buckets including the requester, bucket name, request time, request actio n, referrer, turnaround time, and error code inform ation. The solution should also provide more visibility in to the object-level operations of the bucket. Which is the best solution among the following opti ons that can satisfy the requirement?", + "options": [ + "A. A. Enable AWS CloudTrail to audit all Amazon S3 b ucket access.", + "B. B. Enable server access logging for all required Amazon S3 buckets.", + "C. C. Enable the Requester Pays option to track acce ss via AWS Billing.", + "D. D. Enable Amazon S3 Event Notifications for PUT a nd POST." + ], + "correct": "B. B. Enable server access logging for all required Amazon S3 buckets.", + "explanation": "Explanation/Reference: Amazon S3 is integrated with AWS CloudTrail, a serv ice that provides a record of actions taken by a us er, role, or an AWS service in Amazon S3. CloudTrail ca ptures a subset of API calls for Amazon S3 as event s, including calls from the Amazon S3 console and code calls to the Amazon S3 APIs. AWS CloudTrail logs provide a record of actions tak en by a user, role, or an AWS service in Amazon S3, while Amazon S3 server access logs provide detailed records for the requests that are made to an S3 bucket. For this scenario, you can use CloudTrail and the S erver Access Logging feature of Amazon S3. However, it is mentioned in the scenario that they need deta iled information about every access request sent to the S3 bucket including the referrer and turn-around time information. These two records are not available in CloudTrail. Hence, the correct answer is: Enable server access logging for all required Amazon S3 buckets. The option that says: Enable AWS CloudTrail to audi t all Amazon S3 bucket access is incorrect because enabling AWS CloudTrail alone won't give de tailed logging information for object-level access. The option that says: Enabling the Requester Pays o ption to track access via AWS Billing is incorrect because this action refers to AWS billing and not f or logging. 
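As a short example of the correct setting, the boto3 sketch below enables server access logging on the monitored bucket and delivers the detailed request records to a separate logging bucket. Both bucket names are placeholders, and the target bucket must already allow the S3 log delivery service to write to it.

```python
import boto3

s3 = boto3.client("s3")

# Turn on server access logging; records include requester, bucket name,
# request time, action, referrer, turnaround time, and error code.
s3.put_bucket_logging(
    Bucket="example-documents-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-access-logs-bucket",
            "TargetPrefix": "documents-bucket-logs/",
        }
    },
)
```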
The option that says: Enabling Amazon S3 Event Noti fications for PUT and POST is incorrect because we are looking for a logging solution and not an ev ent notification. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/clo udtrail-logging.html#cloudtrail-logging-vs-server-l ogs https://docs.aws.amazon.com/AmazonS3/latest/dev/Log Format.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Ser verLogs.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A data analytics company has been building its new generation big data and analytics platform on their AWS cloud infrastructure. They need a storage servi ce that provides the scale and performance that the ir big data applications require such as high throughp ut to compute nodes coupled with read-after-write consistency and low-latency file operations. In add ition, their data needs to be stored redundantly ac ross multiple AZs and allows concurrent connections from multiple EC2 instances hosted on multiple AZs. Which of the following AWS storage services will yo u use to meet this requirement?", + "options": [ + "A. A. Glacier", + "B. B. S3", + "C. C. EBS", + "D. D. EFS" + ], + "correct": "D. D. EFS", + "explanation": "Explanation/Reference: In this question, you should take note of the two k eywords/phrases: \"file operation\" and \"allows concu rrent connections from multiple EC2 instances\". There are various AWS storage options that you can choose bu t whenever these criteria show up, always consider us ing EFS instead of using EBS Volumes which is mainly used as a \"block\" storage and can only have one connection to one EC2 instance at a time. Amazo n EFS provides the scale and performance required for big data applications that require high throughput to compute nodes coupled with read-after-write consist ency and low-latency file operations. Amazon EFS is a fully-managed service that makes it easy to set up and scale file storage in the Amazo n Cloud. With a few clicks in the AWS Management Cons ole, you can create file systems that are accessibl e to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) an d supports full file system access semantics (such as strong consistency and file locking). Amazon EFS file systems can automatically scale fro m gigabytes to petabytes of data without needing to provision storage. Tens, hundreds, or even thousand s of Amazon EC2 instances can access an Amazon EFS file system at the same time, and Amazon EFS provid es consistent performance to each Amazon EC2 instance. Amazon EFS is designed to be highly durab le and highly available. EBS is incorrect because it does not allow concurre nt connections from multiple EC2 instances hosted o n multiple AZs and it does not store data redundantly across multiple AZs by default, unlike EFS. S3 is incorrect because although it can handle conc urrent connections from multiple EC2 instances, it does not have the ability to provide low-latency file op erations, which is required in this scenario. Glacier is incorrect because this is an archiving s torage solution and is not applicable in this scena rio. 
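To make the shared, multi-AZ nature of EFS concrete, a single file system is created once and then exposed through one mount target per Availability Zone, after which any EC2 instance in those subnets can mount the same NFS share. The boto3 sketch below is illustrative only; the subnet IDs and security group ID are placeholders:

```python
import boto3

efs = boto3.client("efs")

# Create a single, regional file system (data is stored redundantly across AZs).
fs = efs.create_file_system(
    CreationToken="tdojo-analytics-fs",   # idempotency token, placeholder value
    PerformanceMode="generalPurpose",
)

# One mount target per Availability Zone; the subnet and security group IDs
# below are placeholders for subnets in different AZs of the same VPC.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```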
References: https://docs.aws.amazon.com/efs/latest/ug/performan ce.html https://aws.amazon.com/efs/faq/ Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/ Check out this Amazon S3 vs EBS vs EFS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/ Here's a short video tutorial on Amazon EFS: https://youtu.be/AvgAozsfCrY", + "references": "" + }, + { + "question": "A company launched an EC2 instance in the newly cre ated VPC. They noticed that the generated instance does not have an associated DNS hostname. Which of the following options could be a valid rea son for this issue?", + "options": [ + "A. A. The newly created VPC has an invalid CIDR bloc k.", + "B. B. The DNS resolution and DNS hostname of the VPC configuration should be enabled.", + "C. C. Amazon Route53 is not enabled.", + "D. D. The security group of the EC2 instance needs t o be modified." + ], + "correct": "B. B. The DNS resolution and DNS hostname of the VPC configuration should be enabled.", + "explanation": "Explanation/Reference: When you launch an EC2 instance into a default VPC, AWS provides it with public and private DNS hostnames that correspond to the public IPv4 and pr ivate IPv4 addresses for the instance. However, when you launch an instance into a non-def ault VPC, AWS provides the instance with a private DNS hostname only. New instances will only be provi ded with public DNS hostname depending on these two DNS attributes: the DNS resolution and DNS hostnames, t hat you have specified for your VPC, and if your instance has a public IPv4 address. In this case, the new EC2 instance does not automat ically get a DNS hostname because the DNS resolutio n and DNS hostnames attributes are disabled in the ne wly created VPC. Hence, the correct answer is: The DNS resolution an d DNS hostname of the VPC configuration should be enabled. The option that says: The newly created VPC has an invalid CIDR block is incorrect since it's very unlikely that a VPC has an invalid CIDR block becau se of AWS validation schemes. The option that says: Amazon Route 53 is not enable d is incorrect since Route 53 does not need to be enabled. Route 53 is the DNS service of AWS, but th e VPC is the one that enables assigning of instance hostnames. The option that says: The security group of the EC2 instance needs to be modified is incorrect since security groups are just firewalls for your instanc es. They filter traffic based on a set of security group rules. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpc-dns.html https://aws.amazon.com/vpc/ Amazon VPC Overview: https://youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company has a global news website hosted in a fle et of EC2 Instances. Lately, the load on the websit e has increased which resulted in slower response tim e for the site visitors. This issue impacts the rev enue of the company as some readers tend to leave the site if it does not load after 10 seconds. Which of the below services in AWS can be used to s olve this problem? (Select TWO.)", + "options": [ + "A. A. Use Amazon ElastiCache for the website's in-me mory data store or cache.", + "B. B. Deploy the website to all regions in different VPCs for faster processing.", + "C. C. Use Amazon CloudFront with website as the cust om origin.", + "D. D. For better read throughput, use AWS Storage Ga teway to distribute the content across multiple reg ions.", + "A. A. 
Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda function.", "B. B. Use CloudWatch Alarms to trigger the Lambda function whenever a new entry is created in the", "C. C. Use Systems Manager Automation to detect new entries in the DynamoDB table then automatically", "D. D. Invoke the Lambda functions using SNS each time that the ECS Cluster successfully processed financial" ], "correct": "A. A. Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda function.", "explanation": "Explanation/Reference: Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers--pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. You can create a Lambda function which can perform a specific action that you specify, such as sending a notification or initiating a workflow. For instance, you can set up a Lambda function to simply copy each stream record to persistent storage, such as EFS or S3, to create a permanent audit trail of write activity in your table. Suppose you have a mobile gaming app that writes to a TutorialsDojoCourses table. Whenever the TopCourse attribute of the TutorialsDojoCourses table is updated, a corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media network. (The function would simply ignore any stream records that are not updates to TutorialsDojoCourses or that do not modify the TopCourse attribute.) Hence, enabling DynamoDB Streams to capture table activity and automatically trigger the Lambda function is the correct answer because the requirement can be met with minimal configuration change using DynamoDB Streams, which can automatically trigger Lambda functions whenever there is a new entry. Using CloudWatch Alarms to trigger the Lambda function whenever a new entry is created in the DynamoDB table is incorrect because CloudWatch Alarms only monitor service metrics, not changes in DynamoDB table data. Invoking the Lambda functions using SNS each time that the ECS Cluster successfully processed financial data is incorrect because you don't need to create an SNS topic just to invoke Lambda functions. You can enable DynamoDB Streams instead to meet the requirement with less configuration. Using Systems Manager Automation to detect new entries in the DynamoDB table then automatically invoking the Lambda function for processing is incorrect because the Systems Manager Automation service is primarily used to simplify common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. It does not have the capability to detect new entries in a DynamoDB table.
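As a rough sketch of how little configuration this takes, the association between the stream and the function is a single event source mapping. The boto3 example below is illustrative; the table name, function name, and batch size are assumptions, not values from the scenario:

```python
import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# Turn on a stream for the table (NEW_IMAGE captures the item after the write).
table = dynamodb.update_table(
    TableName="FinancialRecords",  # placeholder table name
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
)
stream_arn = table["TableDescription"]["LatestStreamArn"]

# Point an existing Lambda function at the stream; Lambda polls it for you.
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="process-financial-records",  # placeholder function name
    StartingPosition="LATEST",
    BatchSize=100,
)
```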
References: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/Streams.Lambda.html https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/Streams.html Check out this Amazon DynamoDB cheat sheet: https://tutorialsdojo.com/amazon-dynamodb/", + "references": "" + }, + { + "question": "A tech company currently has an on-premises infrast ructure. They are currently running low on storage and want to have the ability to extend their storage us ing the AWS cloud. Which AWS service can help them achieve this requir ement?", + "options": [ + "A. A. Amazon Storage Gateway", + "B. B. Amazon Elastic Block Storage", + "C. C. Amazon SQS", + "D. D. Amazon EC2" + ], + "correct": "A. A. Amazon Storage Gateway", + "explanation": "Explanation/Reference: AWS Storage Gateway connects an on-premises softwar e appliance with cloud-based storage to provide seamless integration with data security features be tween your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-e ffective storage that helps maintain data security. Amazon EC2 is incorrect since this is a compute ser vice, not a storage service. Amazon Elastic Block Storage is incorrect since EBS is primarily used as a storage of your EC2 instanc es. Amazon SQS is incorrect since this is a message que uing service, and does not extend your on-premises storage capacity.", + "references": "http://docs.aws.amazon.com/storagegateway/latest/us erguide/WhatIsStorageGateway.html AWS Storage Gateway Overview: https://youtu.be/pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/" + }, + { + "question": "There are a few, easily reproducible but confidenti al files that your client wants to store in AWS wit hout worrying about storage capacity. For the first mont h, all of these files will be accessed frequently b ut after that, they will rarely be accessed at all. The old files will only be accessed by developers so there is no set retrieval time requirement. However, the files unde r a specific tdojo-finance prefix in the S3 bucket will be used for post-processing that requires millisecond retrieval time. Given these conditions, which of the following opti ons would be the most cost-effective solution for y our client's storage needs?", + "options": [ + "A. A. Store the files in S3 then after a month, chan ge the storage class of the tdojo-finance prefix to S3-IA", + "B. B. Store the files in S3 then after a month, chan ge the storage class of the bucket to S3-IA using l ifecycle", + "C. C. Store the files in S3 then after a month, chan ge the storage class of the tdojo-finance prefix to One Zone-", + "D. D. Store the files in S3 then after a month, chan ge the storage class of the bucket to Intelligent-T iering using" + ], + "correct": "C. C. Store the files in S3 then after a month, chan ge the storage class of the tdojo-finance prefix to One Zone-", + "explanation": "Explanation/Reference: Initially, the files will be accessed frequently, a nd S3 is a durable and highly available storage sol ution for that. After a month has passed, the files won't be accessed frequently anymore, so it is a good idea t o use lifecycle policies to move them to a storage class that would have a lower cost for storing them. 
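One way to express that kind of split is a lifecycle configuration similar to the boto3 sketch below (the reasoning behind the specific storage classes follows). The bucket name and the second prefix are placeholders, and the rules assume the non-finance assets sit under their own prefix so the two filters do not overlap:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name. Both transitions use 30 days because S3 only allows
# moving Standard objects to One Zone-IA after they are at least 30 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket="tdojo-media-assets",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "finance-to-onezone-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": "tdojo-finance/"},
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            },
            {
                "ID": "everything-else-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "other-assets/"},  # placeholder prefix for the remaining files
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```

Keeping the two filters non-overlapping avoids having to reason about how S3 resolves conflicting transition rules on the same objects.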
Since the files are easily reproducible and some of them are needed to be retrieved quickly based on a specific prefix filter (tdojo-finance), S3-One Zone IA would be a good choice for storing them. The ot her files that do not contain such prefix would then be moved to Glacier for low-cost archival. This setup would also be the most cost-effective for the clien t. Hence, the correct answer is: Store the files in S3 then after a month, change the storage class of th e tdojo-finance prefix to One Zone-IA while the remai ning go to Glacier using lifecycle policy. The option that says: Storing the files in S3 then after a month, changing the storage class of the bucket to S3-IA using lifecycle policy is incorrect . Although it is valid to move the files to S3-IA, this solution still costs more compared with using a com bination of S3-One Zone IA and Glacier. The option that says: Storing the files in S3 then after a month, changing the storage class of the bucket to Intelligent-Tiering using lifecycle polic y is incorrect. While S3 Intelligent-Tiering can automatically move data between two access tiers (f requent access and infrequent access) when access patterns change, it is more suitable for scenarios where you don't know the access patterns of your da ta. It may take some time for S3 Intelligent-Tiering to an alyze the access patterns before it moves the data to a cheaper storage class like S3-IA which means you ma y still end up paying more in the beginning. In addition, you already know the access patterns of t he files which means you can directly change the st orage class immediately and save cost right away. The option that says: Storing the files in S3 then after a month, changing the storage class of the td ojo- finance prefix to S3-IA while the remaining go to G lacier using lifecycle policy is incorrect. Even though S3-IA costs less than the S3 Standard storag e class, it is still more expensive than S3-One Zon e IA. Remember that the files are easily reproducible so you can safely move the data to S3-One Zone IA and in case there is an outage, you can simply generate th e missing data again. References: https://aws.amazon.com/blogs/compute/amazon-s3-adds -prefix-and-suffix-filters-for-lambda-function-trig gering https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lifecycle-mgmt.html https://docs.aws.amazon.com/AmazonS3/latest/dev/lif ecycle-configuration-examples.html https://aws.amazon.com/s3/pricing Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "To save costs, your manager instructed you to analy ze and review the setup of your AWS cloud infrastru cture. You should also provide an estimate of how much you r company will pay for all of the AWS resources tha t they are using. In this scenario, which of the following will incur costs? (Select TWO.)", + "options": [ + "A. A. A stopped On-Demand EC2 Instance", + "B. B. Public Data Set", + "C. C. EBS Volumes attached to stopped EC2 Instances", + "D. D. A running EC2 Instance" + ], + "correct": "", + "explanation": "Explanation/Reference: Billing commences when Amazon EC2 initiates the boo t sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running \"shutdown -h\", o r through instance failure. When you stop an instance , AWS shuts it down but doesn't charge hourly usage for a stopped instance or data transfer fees. Howev er, AWS does charge for the storage of any Amazon EBS volumes. 
Hence, a running EC2 Instance and EBS Volumes attac hed to stopped EC2 Instances are the right answers and conversely, a stopped On-Demand EC2 Ins tance is incorrect as there is no charge for a stopped EC2 instance that you have shut down. Using Amazon VPC is incorrect because there are no additional charges for creating and using the VPC itself. Usage charges for other Amazon Web Services , including Amazon EC2, still apply at published ra tes for those resources, including data transfer charge s. Public Data Set is incorrect due to the fact that A mazon stores the data sets at no charge to the comm unity and, as with all AWS services, you pay only for the compute and storage you use for your own applicati ons. References: https://aws.amazon.com/cloudtrail/ https://aws.amazon.com/vpc/faqs https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /using-public-data-sets.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "An automotive company is working on an autonomous v ehicle development and deployment project using AWS. The solution requires High Performance Computi ng (HPC) in order to collect, store and manage massive amounts of data as well as to support deep learning frameworks. The Linux EC2 instances that w ill be used should have a lower latency and higher thro ughput than the TCP transport traditionally used in cloud-based HPC systems. It should also enhance the performance of inter-instance communication and must include an OS-bypass functionality to allow th e HPC to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality. Which of the following is the MOST suitable solutio n that you should implement to achieve the above requirements?", + "options": [ + "A. A. Attach an Elastic Network Interface (ENI) on e ach Amazon EC2 instance to accelerate High Performa nce", + "B. B. Attach an Elastic Network Adapter (ENA) on eac h Amazon EC2 instance to accelerate High Performanc e", + "C. C. Attach a Private Virtual Interface (VIF) on ea ch Amazon EC2 instance to accelerate High Performan ce", + "D. D. Attach an Elastic Fabric Adapter (EFA) on each Amazon EC2 instance to accelerate High Performance" + ], + "correct": "D. D. Attach an Elastic Fabric Adapter (EFA) on each Amazon EC2 instance to accelerate High Performance", + "explanation": "Explanation/Reference: An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premis es HPC cluster, with the scalability, flexibility, and elasticity provided by the AWS Cloud. EFA provides lower and more consistent latency and higher throughput than the TCP transport traditiona lly used in cloud-based HPC systems. It enhances the pe rformance of inter-instance communication that is critical for scaling HPC and machine learning appli cations. It is optimized to work on the existing AW S network infrastructure and it can scale depending o n application requirements. EFA integrates with Libfabric 1.9.0 and it supports Open MPI 4.0.2 and Intel MPI 2019 Update 6 for HPC applications, and Nvidia Collective Communications Library (NCCL) for machine learning applications. The OS-bypass capabilities of EFAs are not supporte d on Windows instances. 
If you attach an EFA to a Windows instance, the instance functions as an Elas tic Network Adapter, without the added EFA capabilities. Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to suppor t VPC networking. EFAs provide all of the same tradit ional IP networking features as ENAs, and they also support OS-bypass capabilities. OS-bypass enables H PC and machine learning applications to bypass the operating system kernel and to communicate directly with the EFA device. Hence, the correct answer is to attach an Elastic F abric Adapter (EFA) on each Amazon EC2 instance to accelerate High Performance Computing (HPC). Attaching an Elastic Network Adapter (ENA) on each Amazon EC2 instance to accelerate High Performance Computing (HPC) is incorrect because El astic Network Adapter (ENA) doesn't have OS- bypass capabilities, unlike EFA. Attaching an Elastic Network Interface (ENI) on eac h Amazon EC2 instance to accelerate High Performance Computing (HPC) is incorrect because an Elastic Network Interface (ENI) is simply a logical networking component in a VPC that represen ts a virtual network card. It doesn't have OS-bypas s capabilities that allow the HPC to communicate dire ctly with the network interface hardware to provide low-latency, reliable transport functionality. Attaching a Private Virtual Interface (VIF) on each Amazon EC2 instance to accelerate High Performance Computing (HPC) is incorrect because Pr ivate Virtual Interface just allows you to connect to your VPC resources on your private IP address or en dpoint. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /efa.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /enhanced-networking-ena Check out this Elastic Fabric Adapter (EFA) Cheat S heet: https://tutorialsdojo.com/elastic-fabric-adapter-ef a/", + "references": "" + }, + { + "question": "A financial company instructed you to automate the recurring tasks in your department such as patch management, infrastructure selection, and data sync hronization to improve their current processes. You need to have a service which can coordinate multipl e AWS services into serverless workflows. Which of the following is the most cost-effective s ervice to use in this scenario?", + "options": [ + "A. A. AWS Step Functions", + "B. B. AWS Lambda", + "C. C. SWF", + "D. D. AWS Batch" + ], + "correct": "A. A. AWS Step Functions", + "explanation": "Explanation/Reference: AWS Step Functions provides serverless orchestratio n for modern applications. Orchestration centrally manages a workflow by breaking it into multiple ste ps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications exe cute, Step Functions maintains application state, tracking exactly which workflow step your applicati on is in, and stores an event log of data that is p assed between application components. That means that if networks fail or components hang, your application can pick up right where it left off. Application development is faster and more intuitiv e with Step Functions, because you can define and manage the workflow of your application independent ly from its business logic. Making changes to one does not affect the other. You can easily update an d modify workflows in one place, without having to struggle with managing, monitoring and maintaining multiple point-to-point integrations. 
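As an illustrative sketch of what such a workflow looks like when registered programmatically, the snippet below creates a deliberately tiny two-state state machine with boto3; the state machine name, Lambda ARN, and IAM role ARN are placeholder assumptions:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# A deliberately small Amazon States Language definition: run a patching
# Lambda function, then wait before the next (omitted) step.
definition = {
    "StartAt": "ApplyPatches",
    "States": {
        "ApplyPatches": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:apply-patches",
            "Next": "CoolDown",
        },
        "CoolDown": {"Type": "Wait", "Seconds": 60, "End": True},
    },
}

sfn.create_state_machine(
    name="recurring-maintenance-workflow",                    # placeholder name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111111111111:role/sfn-exec-role",   # placeholder role
)
```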
Step Function s frees your functions and containers from excess cod e, so your applications are faster to write, more r esilient, and easier to maintain. SWF is incorrect because this is a fully-managed st ate tracker and task coordinator service. It does n ot provide serverless orchestration to multiple AWS re sources. AWS Lambda is incorrect because although Lambda is used for serverless computing, it does not provide a direct way to coordinate multiple AWS services in to serverless workflows. AWS Batch is incorrect because this is primarily us ed to efficiently run hundreds of thousands of batc h computing jobs in AWS.", + "references": "https://aws.amazon.com/step-functions/features/ Check out this AWS Step Functions Cheat Sheet: https://tutorialsdojo.com/aws-step-functions/ Amazon Simple Workflow (SWF) vs AWS Step Functions vs Amazon SQS: https://tutorialsdojo.com/amazon-simple-workflow-sw f-vs-aws-step-functions-vs-amazon-sqs/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" + }, + { + "question": "A media company is using Amazon EC2, ELB, and S3 fo r its video-sharing portal for filmmakers. They are using a standard S3 storage class to store all high -quality videos that are frequently accessed only d uring the first three months of posting. As a Solutions Archi tect, what should you do if the company needs to automatically transfer or archive media data from a n S3 bucket to Glacier?", + "options": [ + "A. A. Use Amazon SQS", + "B. B. Use Amazon SWF", + "C. C. Use a custom shell script that transfers data from the S3 bucket to Glacier", + "D. D. Use Lifecycle Policies" + ], + "correct": "D. D. Use Lifecycle Policies", + "explanation": "Explanation/Reference: You can create a lifecycle policy in S3 to automati cally transfer your data to Glacier. Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions In which you define when object s transition to another storage class. For example, you may choose to transition objects to the STANDAR D_IA (IA, for infrequent access) storage class 30 days after creation or archive objects to the GLACI ER storage class one year after creation. Expiration actions In which you specify when the o bjects expire. Then Amazon S3 deletes the expired objects on your behalf.", + "references": "https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lifecycle-mgmt.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" + }, + { + "question": "A company recently migrated their applications to A WS. The Solutions Architect must ensure that the applications are highly available and safe from com mon web security vulnerabilities. Which is the most suitable AWS service to use to mi tigate Distributed Denial of Service (DDoS) attacks from hitting your back-end EC2 instances?", + "options": [ + "A. A. AWS WAF", + "B. B. AWS Firewall Manager", + "C. C. AWS Shield", + "D. D. Amazon GuardDuty" + ], + "correct": "C. C. AWS Shield", + "explanation": "Explanation/Reference: AWS Shield is a managed Distributed Denial of Servi ce (DDoS) protection service that safeguards applications running on AWS. 
AWS Shield provides al ways-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Sh ield - Standard and Advanced. All AWS customers benefit from the automatic protec tions of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most co mmon, frequently occurring network and transport layer DDoS attacks that target your web site or app lications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks. AWS WAF is incorrect because this is a web applicat ion firewall service that helps protect your web ap ps from common exploits that could affect app availabi lity, compromise security, or consume excessive resources. Although this can help you against DDoS attacks, AWS WAF alone is not enough to fully protect your VPC. You still need to use AWS Shield in this scenario. AWS Firewall Manager is incorrect because this just simplifies your AWS WAF administration and maintenance tasks across multiple accounts and reso urces. Amazon GuardDuty is incorrect because this is just an intelligent threat detection service to protect your AWS accounts and workloads. Using this alone will n ot fully protect your AWS resources against DDoS attacks. References: https://docs.aws.amazon.com/waf/latest/developergui de/waf-which-to-choose.html https://aws.amazon.com/answers/networking/aws-ddos- attack-mitigation/ Check out this AWS Shield Cheat Sheet: https://tutorialsdojo.com/aws-shield/ AWS Security Services Overview - WAF, Shield, Cloud HSM, KMS: https://youtube.com/watch?v=-1S-RdeAmMo", + "references": "" + }, + { + "question": "A customer is transitioning their ActiveMQ messagin g broker service onto the AWS cloud in which they require an alternative asynchronous service that su pports NMS and MQTT messaging protocol. The customer does not have the time and resources neede d to recreate their messaging service in the cloud. The service has to be highly available and should requi re almost no management overhead. Which of the following is the most suitable service to use to meet the above requirement?", + "options": [ + "A. A. Amazon MQ", + "B. B. Amazon SNS", + "C. C. AWS Step Functions", + "D. D. Amazon SWF" + ], + "correct": "A. A. Amazon MQ", + "explanation": "Explanation/Reference: Amazon MQ is a managed message broker service for A pache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Connecting yo ur current applications to Amazon MQ is easy becaus e it uses industry-standard APIs and protocols for me ssaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that in most c ases, there's no need to rewrite any messaging code when you migrate to AWS. Amazon MQ, Amazon SQS, and Amazon SNS are messaging services that are suitable for anyone from startups to enterprises. If you're using messaging with existing applications and want to move your messaging service to the cloud quickly and easily, it is recommended that you consider Amazon MQ. It supports industry-standard APIs and protocols so yo u can switch from any standards-based message broke r to Amazon MQ without rewriting the messaging code i n your applications. If you are building brand new applications in the c loud, then it is highly recommended that you consid er Amazon SQS and Amazon SNS. 
Amazon SQS and SNS are l ightweight, fully managed message queue and topic services that scale almost infinitely and pro vide simple, easy-to-use APIs. You can use Amazon S QS and SNS to decouple and scale microservices, distri buted systems, and serverless applications, and imp rove reliability. Hence, Amazon MQ is the correct answer. Amazon SNS is incorrect because this is more suitab le as a pub/sub messaging service instead of a message broker service. Amazon SQS is incorrect. Although this is a fully m anaged message queuing service, it does not support an extensive list of industry-standard messaging AP Is and protocol, unlike Amazon MQ. Moreover, using Amazon SQS requires you to do additional changes in the messaging code of applications to make it compatible. AWS Step Functions is incorrect because this is a s erverless function orchestrator and not a messaging service, unlike Amazon MQ, AmazonSQS, and Amazon SN S. References: https://aws.amazon.com/amazon-mq/ https://aws.amazon.com/messaging/ https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/welcome.html#sqs- difference-from-amazon-mq-sns Check out this Amazon MQ Cheat Sheet: https://tutorialsdojo.com/amazon-mq/", + "references": "" + }, + { + "question": "A company plans to develop a custom messaging servi ce that will also be used to train their AI for an automatic response feature which they plan to imple ment in the future. Based on their research and tes ts, the service can receive up to thousands of messages a day, and all of these data are to be sent to Ama zon EMR for further processing. It is crucial that none of the messages are lost, no duplicates are produc ed, and that they are processed in EMR in the same order as their arrival. Which of the following options can satisfy the give n requirement?", + "options": [ + "A. A. Set up an Amazon SNS Topic to handle the messa ges.", + "B. B. Set up a default Amazon SQS queue to handle th e messages.", + "C. C. Create an Amazon Kinesis Data Stream to collect the messages. D. D. Create a pipeline using AWS Data Pipeline to han dle the messages.", + "A. A. Amazon S3 Glacier Deep Archive", + "B. B. Amazon FSx for Windows File Server", + "C. C. AWS DataSync", + "D. D. Amazon FSx for Lustre" + ], + "correct": "B. B. Amazon FSx for Windows File Server", + "explanation": "Explanation/Reference: Amazon FSx provides fully managed third-party file systems. Amazon FSx provides you with the native compatibility of third-party file systems with feat ure sets for workloads such as Windows-based storag e, high-performance computing (HPC), machine learning, and electronic design automation (EDA). You don't have to worry about managing file servers and storage, as Amazon FSx automates the time- consuming administration tasks such as hardware pro visioning, software configuration, patching, and backups. Amazon FSx integrates the file systems wit h cloud-native AWS services, making them even more useful for a broader set of workloads. Amazon FSx provides you with two file systems to ch oose from: Amazon FSx for Windows File Server for Windows-based applications and Amazon FSx for Lustr e for compute-intensive workloads. For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for \"lift-and-shift\" busi ness-critical application workloads including home directories (user shares), media workflows, and ERP applications. It is accessible from Windows and Li nux instances via the SMB protocol. 
If you have Linux-b ased applications, Amazon EFS is a cloud-native ful ly managed file system that provides simple, scalable, elastic file storage accessible from Linux instanc es via the NFS protocol. For compute-intensive and fast processing workloads , like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre, provides a file system that's optimized fo r performance, with input and output stored on Amazon S3. Hence, the correct answer is: Amazon FSx for Window s File Server. Amazon S3 Glacier Deep Archive is incorrect because this service is primarily used as a secure, durabl e, and extremely low-cost cloud storage for data archi ving and long-term backup. AWS DataSync is incorrect because this service simp ly provides a fast way to move large amounts of dat a online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). Amazon FSx for Lustre is incorrect because this ser vice doesn't support the Windows-based applications as well as Windows servers. References: https://aws.amazon.com/fsx/ https://aws.amazon.com/getting-started/use-cases/hp c/3/ Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", + "references": "" + }, + { + "question": "A Solutions Architect is setting up configuration m anagement in an existing cloud architecture. The Ar chitect needs to deploy and manage the EC2 instances includ ing the other AWS resources using Chef and Puppet. Which of the following is the most suitable service to use in this scenario?", + "options": [ + "A. A. AWS OpsWorks", + "B. B. AWS Elastic Beanstalk", + "C. C. AWS CodeDeploy", + "D. D. AWS CloudFormation" + ], + "correct": "A. A. AWS OpsWorks", + "explanation": "Explanation/Reference: AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms th at allow you to use code to automate the configurations of your servers. OpsWorks lets you u se Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazo n EC2 instances or on-premises compute environments. Reference: https://aws.amazon.com/opsworks/ Check out this AWS OpsWorks Cheat Sheet: https://tutorialsdojo.com/aws-opsworks/ Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-clou dformation-vs-opsworks-vs-codedeploy/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "The start-up company that you are working for has a batch job application that is currently hosted on an EC2 instance. It is set to process messages from a queu e created in SQS with default settings. You configured the application to process the messa ges once a week. After 2 weeks, you noticed that no t all messages are being processed by the application. Wh at is the root cause of this issue?", + "options": [ + "A. The SQS queue is set to short-polling.", + "B. Missing permissions in SQS.", + "C. Amazon SQS has automatically deleted the messages that have been in a queue for more than the", + "D. The batch job application is configured to long p olling." + ], + "correct": "C. Amazon SQS has automatically deleted the messages that have been in a queue for more than the", + "explanation": "Explanation/Reference: Amazon SQS automatically deletes messages that have been in a queue for more than the maximum message retention period. 
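For context on the fix discussed next, the retention period is just a queue attribute, so raising it to the 14-day maximum is a single API call. A minimal boto3 sketch with a placeholder queue URL:

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL. 1209600 seconds = 14 days, the maximum retention
# period Amazon SQS allows; the default is 345600 seconds (4 days).
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111111111111/batch-job-queue",
    Attributes={"MessageRetentionPeriod": "1209600"},
)
```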
The default message retention period is 4 days. Since the queue is configured to the default settings and the batch job application only processes the messages once a week, the messages that are in the queue for more than 4 days are deleted. This is the root cause of the issue. To fix this, you can increase the message retention period to a maximum of 14 days using the SetQueueAttributes action. References: https://aws.amazon.com/sqs/faqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-lifecycle.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", "references": "" }, { "question": "An organization plans to run an application in a dedicated physical server that doesn't use virtualization. The application data will be stored in a storage solution that uses an NFS protocol. To prevent data loss, you need to use a durable cloud storage service to store a copy of your data. Which of the following is the most suitable solution to meet the requirement?", "options": [ "A. A. Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume", "B. B. Use AWS Storage Gateway with a gateway VM appliance for your compute resources. Configure File", "C. C. Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway", "D. D. Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume" ], "correct": "C. C. Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway", "explanation": "Explanation/Reference: AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage by linking it to S3. Storage Gateway provides 3 types of storage solutions for your on-premises applications: file, volume, and tape gateways. The AWS Storage Gateway Hardware Appliance is a physical, standalone, validated server configuration for on-premises deployments. The AWS Storage Gateway Hardware Appliance is a physical hardware appliance with the Storage Gateway software preinstalled on a validated server configuration. The hardware appliance is a high-performance 1U server that you can deploy in your data center, or on-premises inside your corporate firewall. When you buy and activate your hardware appliance, the activation process associates your hardware appliance with your AWS account. After activation, your hardware appliance appears in the console as a gateway on the Hardware page. You can configure your hardware appliance as a file gateway, tape gateway, or volume gateway type. The procedure that you use to deploy and activate these gateway types on a hardware appliance is the same as on a virtual platform. Since the company needs to run a dedicated physical appliance, you can use an AWS Storage Gateway Hardware Appliance. It comes pre-loaded with Storage Gateway software, and provides all the required resources to create a file gateway. A file gateway can be configured to store and retrieve objects in Amazon S3 using the protocols NFS and SMB. Hence, the correct answer in this scenario is: Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data. The option that says: Use AWS Storage Gateway with a gateway VM appliance for your compute resources.
Configure File Gateway to store the appl ication data and backup data is incorrect because as per the scenario, the company needs to use an on -premises hardware appliance and not just a Virtual Machine (VM). The options that say: Use an AWS Storage Gateway ha rdware appliance for your compute resources. Configure Volume Gateway to store the application d ata and backup data and Use an AWS Storage Gateway hardware appliance for your compute resourc es. Configure Volume Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data are both incorrect. As per the scenario, the requirement is a file system that uses an NFS protocol and not iSCSI devices. Am ong the AWS Storage Gateway storage solutions, only fil e gateway can store and retrieve objects in Amazon S3 using the protocols NFS and SMB. References: https://docs.aws.amazon.com/storagegateway/latest/u serguide/hardware-appliance.html https://docs.aws.amazon.com/storagegateway/latest/u serguide/WhatIsStorageGateway.html AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", + "references": "" + }, + { + "question": "A leading media company has recently adopted a hybr id cloud architecture which requires them to migrat e their application servers and databases in AWS. One of their applications requires a heterogeneous dat abase migration in which you need to transform your on-pr emises Oracle database to PostgreSQL in AWS. This entails a schema and code transformation before the proper data migration starts. Which of the following options is the most suitable approach to migrate the database in AWS?", + "options": [ + "A. A. Use Amazon Neptune to convert the source schem a and code to match that of the target database in", + "B. B. First, use the AWS Schema Conversion Tool to c onvert the source schema and application code to", + "C. C. Heterogeneous database migration is not suppor ted in AWS. You have to transform your database fir st", + "D. D. Configure a Launch Template that automatically converts the source schema and code to match that of" + ], + "correct": "B. B. First, use the AWS Schema Conversion Tool to c onvert the source schema and application code to", + "explanation": "Explanation/Reference: AWS Database Migration Service helps you migrate da tabases to AWS quickly and securely. The source database remains fully operational during the migra tion, minimizing downtime to applications that rely on the database. The AWS Database Migration Service ca n migrate your data to and from most widely used commercial and open-source databases. AWS Database Migration Service can migrate your dat a to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Ama zon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database. It can also move data between SQL, NoSQL, and text based targets. In heterogeneous database migrations the source and target databases engines are different, like in th e case of Oracle to Amazon Aurora, Oracle to PostgreSQL, o r Microsoft SQL Server to MySQL migrations. 
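Once the AWS Schema Conversion Tool has produced the target schema (the two-step flow is described just below), the data-movement half can be expressed as a DMS replication task. The boto3 sketch that follows is illustrative only; every ARN, the task identifier, and the table-mapping rule are placeholder assumptions:

```python
import json
import boto3

dms = boto3.client("dms")

# All ARNs are placeholders: the source (Oracle) and target (PostgreSQL)
# endpoints and the replication instance must already exist in DMS.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgresql-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial load plus ongoing changes
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```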
In this case, the schema structure, data types, and da tabase code of source and target databases can be q uite different, requiring a schema and code transformati on before the data migration starts. That makes heterogeneous migrations a two step process. First use the AWS Schema Conversion Tool to convert the source schema and code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database. All the required data type conversions will automatically be done by the AWS D atabase Migration Service during the migration. The source database can be located in your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS. The option that says: Configure a Launch Template t hat automatically converts the source schema and code to match that of the target database. Then , use the AWS Database Migration Service to migrate data from the source database to the target database is incorrect because Launch templates are primarily used in EC2 to enable you to store launch parameters so that you do not have to specify them every time you launch an instance. The option that says: Use Amazon Neptune to convert the source schema and code to match that of the target database in RDS. Use the AWS Batch to effect ively migrate the data from the source database to the target database in a batch process is incorr ect because Amazon Neptune is a fully-managed graph database service and not a suitable service to use to convert the source schema. AWS Batch is not a database migration service and hence, it is not suitable to be used in this sc enario. You should use the AWS Schema Conversion To ol and AWS Database Migration Service instead. The option that says: Heterogeneous database migrat ion is not supported in AWS. You have to transform your database first to PostgreSQL and the n migrate it to RDS is incorrect because heterogeneous database migration is supported in AW S using the Database Migration Service. References: https://aws.amazon.com/dms/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-launch-templates.html https://aws.amazon.com/batch/ Check out this AWS Database Migration Service Cheat Sheet: https://tutorialsdojo.com/aws-database-migration-se rvice/ AWS Migration Services Overview: https://www.youtube.com/watch?v=yqNBkFMnsL8", + "references": "" + }, + { + "question": "A company has both on-premises data center as well as AWS cloud infrastructure. They store their graph ics, audios, videos, and other multimedia assets primari ly in their on-premises storage server and use an S 3 Standard storage class bucket as a backup. Their da ta is heavily used for only a week (7 days) but aft er that period, it will only be infrequently used by their customers. The Solutions Architect is instructed to save storage costs in AWS yet maintain the ability to fe tch a subset of their media assets in a matter of m inutes for a surprise annual data audit, which will be con ducted on their cloud storage. Which of the following are valid options that the S olutions Architect can implement to meet the above requirement? (Select TWO.)", + "options": [ + "A. A. Set a lifecycle policy in the bucket to transi tion the data to S3 Glacier Deep Archive storage cl ass after", + "B. B. Set a lifecycle policy in the bucket to transi tion the data to S3 - Standard IA storage class aft er one week", + "C. C. 
Set a lifecycle policy in the bucket to transi tion the data to Glacier after one week (7 days).", + "D. D. Set a lifecycle policy in the bucket to transi tion to S3 - Standard IA after 30 days" + ], + "correct": "", + "explanation": "Explanation/Reference: You can add rules in a lifecycle configuration to t ell Amazon S3 to transition objects to another Amaz on S3 storage class. For example: When you know that obje cts are infrequently accessed, you might transition them to the STANDARD_IA storage class. Or transitio n your data to the GLACIER storage class in case you want to archive objects that you don't need to access in real time. In a lifecycle configuration, you can define rules to transition objects from one storage class to ano ther to save on storage costs. When you don't know the acce ss patterns of your objects or your access patterns are changing over time, you can transition the objects to the INTELLIGENT_TIERING storage class for automatic cost savings. The lifecycle storage class transitions have a cons traint when you want to transition from the STANDAR D storage classes to either STANDARD_IA or ONEZONE_IA . The following constraints apply: - For larger objects, there is a cost benefit for t ransitioning to STANDARD_IA or ONEZONE_IA. Amazon S 3 does not transition objects that are smaller than 1 28 KB to the STANDARD_IA or ONEZONE_IA storage classes because it's not cost effective. - Objects must be stored at least 30 days in the cu rrent storage class before you can transition them to STANDARD_IA or ONEZONE_IA. For example, you cannot create a lifecycle rule to transition objects to th e STANDARD_IA storage class one day after you create them. Amazon S3 doesn't transition objects within t he first 30 days because newer objects are often acces sed more frequently or deleted sooner than is suita ble for STANDARD_IA or ONEZONE_IA storage. - If you are transitioning noncurrent objects (in v ersioned buckets), you can transition only objects that are at least 30 days noncurrent to STANDARD_IA or ONEZONE_ IA storage. Since there is a time constraint in transitioning o bjects in S3, you can only change the storage class of your objects from S3 Standard storage class to STANDARD_ IA or ONEZONE_IA storage after 30 days. This limitation does not apply on INTELLIGENT_TIERING, G LACIER, and DEEP_ARCHIVE storage class. In addition, the requirement says that the media as sets should be fetched in a matter of minutes for a surprise annual data audit. This means that the ret rieval will only happen once a year. You can use expedited retrievals in Glacier which will allow yo u to quickly access your data (within 15 minutes) w hen occasional urgent requests for a subset of archives are required. In this scenario, you can set a lifecycle policy in the bucket to transition to S3 - Standard IA after 30 days or alternatively, you can directly transition your data to Glacier after one week (7 days). Hence, the following are the correct answers: - Set a lifecycle policy in the bucket to transitio n the data from Standard storage class to Glacier a fter one week (7 days). - Set a lifecycle policy in the bucket to transitio n to S3 - Standard IA after 30 days. 
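To illustrate the audit-time retrieval mentioned above, an object that has already transitioned to Glacier can be brought back with an expedited restore request, which typically makes data available within 1-5 minutes. The boto3 sketch below uses placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key. The restored copy stays available for 'Days' days
# alongside the archived original.
s3.restore_object(
    Bucket="tdojo-media-assets",
    Key="videos/annual-audit-sample.mp4",
    RestoreRequest={
        "Days": 2,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```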
Setting a lifecycle policy in the bucket to transit ion the data to S3 - Standard IA storage class afte r one week (7 days) and setting a lifecycle policy in the bucket to transition the data to S3 - One Zone - Infrequent Access storage class after one week (7 d ays) are both incorrect because there is a constrai nt in S3 that objects must be stored at least 30 days in the current storage class before you can transition them to STANDARD_IA or ONEZONE_IA. You cannot create a life cycle rule to transition objects to either STANDARD_IA or ONEZONE_IA storage class 7 days afte r you create them because you can only do this after the 30-day period has elapsed. Hence, these o ptions are incorrect. Setting a lifecycle policy in the bucket to transit ion the data to S3 Glacier Deep Archive storage cla ss after one week (7 days) is incorrect because althou gh DEEP_ARCHIVE storage class provides the most cost-effective storage option, it does not have the ability to do expedited retrievals, unlike Glacier . In the event that the surprise annual data audit happens, it may take several hours before you can retrieve y our data. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/lif ecycle-transition-general-considerations.html https://docs.aws.amazon.com/AmazonS3/latest/dev/res toring-objects.html https://aws.amazon.com/s3/storage-classes/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a fast-growing startup that just started operations during the pa st 3 months. They currently have an on-premises Active Directory and 10 computers. To save costs in procuring physi cal workstations, they decided to deploy virtual deskto ps for their new employees in a virtual private clo ud in AWS. The new cloud infrastructure should leverage the ex isting security controls in AWS but can still commu nicate with their on-premises network. Which set of AWS services will the Architect use to meet these requirements?", + "options": [ + "A. A. AWS Directory Services, VPN connection, and Am azon S3", + "B. B. AWS Directory Services, VPN connection, and AW S Identity and Access Management", + "C. C. AWS Directory Services, VPN connection, and Cl assicLink", + "D. D. AWS Directory Services, VPN connection, and Am azon Workspaces Correct Answer: D" + ], + "correct": "", + "explanation": "Explanation/Reference: For this scenario, the best answer is: AWS Director y Services, VPN connection, and Amazon Workspaces. First, you need a VPN connection to connect the VPC and your on-premises network. Second, you need AWS Directory Services to integrate with your on-pr emises Active Directory and lastly, you need to use Amazon Workspace to create the needed virtual deskt ops in your VPC. References: https://aws.amazon.com/directoryservice/ https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpn-connections.html https://aws.amazon.com/workspaces/ AWS Identity Services Overview: https://www.youtube.com/watch?v=AIdUw0i8rr0 Check out these cheat sheets on AWS Directory Servi ce, Amazon VPC, and Amazon WorkSpaces: https://tutorialsdojo.com/aws-directory-service/ https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A health organization is using a large Dedicated EC 2 instance with multiple EBS volumes to host its he alth records web application. 
The EBS volumes must be en crypted due to the confidentiality of the data that they are handling and also to comply with the HIPAA (Health Insurance Portability and Accountability A ct) standard. In EBS encryption, what service does AWS use to sec ure the volume's data at rest? (Select TWO.)", + "options": [ + "A. A. By using your own keys in AWS Key Management S ervice (KMS).", + "B. B. By using S3 Server-Side Encryption.", + "C. C. By using the SSL certificates provided by the AWS Certificate Manager (ACM).", + "D. D. By using a password stored in CloudHSM." + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes, and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data at rest security by encrypting your da ta using Amazon-managed keys, or keys you create an d manage using the AWS Key Management Service (KMS). The encryption occurs on the servers that host EC2 instances, providing encryption of data as it m oves between EC2 instances and EBS storage. Hence, the correct answers are: using your own keys in AWS Key Management Service (KMS) and using Amazon-managed keys in AWS Key Management Ser vice (KMS). Using S3 Server-Side Encryption and using S3 Client -Side Encryption are both incorrect as these relate only to S3. Using a password stored in CloudHSM is incorrect as you only store keys in CloudHSM and not passwords. Using the SSL certificates provided by the AWS Cert ificate Manager (ACM) is incorrect as ACM only provides SSL certificates and not data encryption o f EBS Volumes.", + "references": "https://aws.amazon.com/ebs/faqs/ Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/" + }, + { + "question": "A multimedia company needs to deploy web services t o an AWS region that they have never used before. The company currently has an IAM role for its Amazo n EC2 instance that permits the instance to access Amazon DynamoDB. They want their EC2 instances in t he new region to have the exact same privileges. What should be done to accomplish this?", + "options": [ + "A. A. Assign the existing IAM role to instances in t he new region.", + "B. B. Duplicate the IAM role and associated policies to the new region and attach it to the instances.", + "C. C. In the new Region, create a new IAM role and a ssociated policies then assign it to the new instan ce.", + "D. D. Create an Amazon Machine Image (AMI) of the in stance and copy it to the new region." + ], + "correct": "A. A. Assign the existing IAM role to instances in t he new region.", + "explanation": "Explanation/Reference: In this scenario, the company has an existing IAM r ole hence you don't need to create a new one. IAM roles are global services that are available to all regions hence, all you have to do is assign the ex isting IAM role to the instance in the new region. . The option that says: In the new Region, create a n ew IAM role and associated policies then assign it to the new instance is incorrect because you don't need to create another IAM role - there is already an existing one. Duplicating the IAM role and associated policies to the new region and attaching it to the instances i s incorrect as you don't need duplicate IAM roles for each region. One IAM role suffices for the instanc es on two regions. 
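Because IAM is global, "assigning" the existing role in the new Region simply means referencing the same instance profile there, either at launch or by attaching it to a running instance. The boto3 sketch below is illustrative; the Region, instance profile name, and instance ID are placeholder assumptions:

```python
import boto3

# The same instance profile name works in any Region because IAM is global.
ec2 = boto3.client("ec2", region_name="ap-southeast-1")  # the new, never-used Region

ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "dynamodb-access-role"},  # placeholder profile name
    InstanceId="i-0123456789abcdef0",                     # placeholder instance ID
)
```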
Creating an Amazon Machine Image (AMI) of the insta nce and copying it to the new region is incorrect because creating an AMI image does not af fect the IAM role of the instance.", + "references": "https://docs.aws.amazon.com/AmazonS3/latest/dev/Not ificationHowTo.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" + }, + { + "question": "An On-Demand EC2 instance is launched into a VPC su bnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance's security group has an inbound rule to al low SSH from any IP address and does not have any outbo und rules. In this scenario, what are the changes needed to al low SSH connection to the instance?", + "options": [ + "A. A. Both the outbound security group and outbound network ACL need to be modified to allow outbound", + "B. B. No action needed. It can already be accessed f rom any IP address using SSH.", + "C. C. The network ACL needs to be modified to allow outbound traffic.", + "D. D. The outbound security group needs to be modifi ed to allow outbound traffic." + ], + "correct": "C. C. The network ACL needs to be modified to allow outbound traffic.", + "explanation": "Explanation/Reference: In order for you to establish an SSH connection fro m your home computer to your EC2 instance, you need to do the following: - On the Security Group, add an Inbound Rule to all ow SSH traffic to your EC2 instance. - On the NACL, add both an Inbound and Outbound Rul e to allow SSH traffic to your EC2 instance. The reason why you have to add both Inbound and Out bound SSH rule is due to the fact that Network ACLs are stateless which means that responses to al low inbound traffic are subject to the rules for outbound traffic (and vice versa). In other words, if you only enabled an Inbound rule in NACL, the tr affic can only go in but the SSH response will not go out since there is no Outbound rule. Security groups are stateful which means that if an incoming request is granted, then the outgoing tra ffic will be automatically granted as well, regardless o f the outbound rules. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/VPC_ACLs.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /authorizing-access-to-an-instance.html", + "references": "" + }, + { + "question": "A company has a web-based ticketing service that ut ilizes Amazon SQS and a fleet of EC2 instances. The EC2 instances that consume messages from the SQS qu eue are configured to poll the queue as often as possible to keep end-to-end throughput as high as p ossible. The Solutions Architect noticed that polli ng the queue in tight loops is using unnecessary CPU cycle s, resulting in increased operational costs due to empty responses. In this scenario, what should the Solutions Archite ct do to make the system more cost-effective?", + "options": [ + "A. A. Configure Amazon SQS to use long polling by se tting the ReceiveMessageWaitTimeSeconds to zero. .", + "B. B. Configure Amazon SQS to use short polling by s etting the ReceiveMessageWaitTimeSeconds to zero.", + "C. C. Configure Amazon SQS to use short polling by s etting the ReceiveMessageWaitTimeSeconds to a", + "D. D. Configure Amazon SQS to use long polling by se tting the ReceiveMessageWaitTimeSeconds to a" + ], + "correct": "D. D. 
Configure Amazon SQS to use long polling by se tting the ReceiveMessageWaitTimeSeconds to a", + "explanation": "Explanation Explanation/Reference: In this scenario, the application is deployed in a fleet of EC2 instances that are polling messages fr om a single SQS queue. Amazon SQS uses short polling by default, querying only a subset of the servers (bas ed on a weighted random distribution) to determine whe ther any messages are available for inclusion in th e response. Short polling works for scenarios that re quire higher throughput. However, you can also configure the queue to use Long polling instead, to reduce cost. The ReceiveMessageWaitTimeSeconds is the queue attr ibute that determines whether you are using Short or Long polling. By default, its value is zero whic h means it is using Short polling. If it is set to a value greater than zero, then it is Long polling. Hence, configuring Amazon SQS to use long polling b y setting the ReceiveMessageWaitTimeSeconds to a number greater than zero is the correct answer . Quick facts about SQS Long Polling: - Long polling helps reduce your cost of using Amaz on SQS by reducing the number of empty responses when there are no messages available to return in r eply to a ReceiveMessage request sent to an Amazon SQS queue and eliminating false empty responses whe n messages are available in the queue but aren't included in the response. - Long polling reduces the number of empty response s by allowing Amazon SQS to wait until a message is available in the queue before sending a response. U nless the connection times out, the response to the ReceiveMessage request contains at least one of the available messages, up to the maximum number of messages specified in the ReceiveMessage action. - Long polling eliminates false empty responses by querying all (rather than a limited number) of the servers. Long polling returns messages as soon any message becomes available.", + "references": "https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-long-polling.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" + }, + { + "question": "A data analytics company keeps a massive volume of data that they store in their on-premises data cent er. To scale their storage systems, they are looking fo r cloud-backed storage volumes that they can mount using Internet Small Computer System Interface (iSC SI) devices from their on-premises application serv ers. They have an on-site data analytics application tha t frequently accesses the latest data subsets local ly while the older data are rarely accessed. You are require d to minimize the need to scale the on-premises sto rage infrastructure while still providing their web appl ication with low-latency access to the data. . Which type of AWS Storage Gateway service will you use to meet the above requirements?", + "options": [ + "A. A. Volume Gateway in cached mode", + "B. B. Volume Gateway in stored mode", + "C. C. Tape Gateway", + "D. D. File Gateway" + ], + "correct": "A. A. Volume Gateway in cached mode", + "explanation": "Explanation/Reference: In this scenario, the technology company is looking for a storage service that will enable their analy tics application to frequently access the latest data su bsets and not the entire data set (as it was mentio ned that the old data are rarely being used). This requireme nt can be fulfilled by setting up a Cached Volume Gateway in AWS Storage Gateway. 
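For reference, once a Volume Gateway has been activated in cached mode, a cached volume can be provisioned through the Storage Gateway API; the sketch below uses placeholder identifiers and assumes the gateway ARN and its network interface are already known:

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# Sketch only: the gateway ARN, target name, and network interface are placeholders,
# and the gateway is assumed to be already activated in cached volume mode.
sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B",
    VolumeSizeInBytes=1099511627776,          # 1 TiB cached volume backed by Amazon S3
    TargetName="analytics-volume",            # becomes part of the iSCSI target name
    NetworkInterfaceId="10.0.1.25",           # gateway interface the application servers connect to
    ClientToken="analytics-volume-001",       # idempotency token
)
```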
By using cached volumes, you can use Amazon S3 as y our primary data storage, while retaining frequentl y accessed data locally in your storage gateway. Cach ed volumes minimize the need to scale your on- premises storage infrastructure, while still provid ing your applications with low-latency access to frequently accessed data. You can create storage vo lumes up to 32 TiB in size and afterward, attach th ese volumes as iSCSI devices to your on-premises applic ation servers. When you write to these volumes, you r gateway stores the data in Amazon S3. It retains th e recently read data in your on-premises storage gateway's cache and uploads buffer storage. Cached volumes can range from 1 GiB to 32 TiB in si ze and must be rounded to the nearest GiB. Each gateway configured for cached volumes can support u p to 32 volumes for a total maximum storage volume of 1,024 TiB (1 PiB). In the cached volumes solution, AWS Storage Gateway stores all your on-premises application data in a storage volume in Amazon S3. Hence, the correct ans wer is: Volume Gateway in cached mode. Volume Gateway in stored mode is incorrect because the requirement is to provide low latency access to the frequently accessed data subsets locally. Store d Volumes are used if you need low-latency access t o your entire dataset. Tape Gateway is incorrect because this is just a co st-effective, durable, long-term offsite alternativ e for data archiving, which is not needed in this scenari o. File Gateway is incorrect because the scenario requ ires you to mount volumes as iSCSI devices. File Gateway is used to store and retrieve Amazon S3 obj ects through NFS and SMB protocols. References: https://docs.aws.amazon.com/storagegateway/latest/u serguide/StorageGatewayConcepts.html#volume- gateway-concepts https://docs.aws.amazon.com/storagegateway/latest/u serguide/WhatIsStorageGateway.html AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", + "references": "" + }, + { + "question": "An application is hosted in an Auto Scaling group o f EC2 instances. To improve the monitoring process, you have to configure the current capacity to increase or decrease based on a set of scaling adjustments. This should be done by specifying the scaling metrics an d threshold values for the CloudWatch alarms that t rigger the scaling process. Which of the following is the most suitable type of scaling policy that you should use?", + "options": [ + "A. A. Step scaling", + "B. B. Simple scaling", + "C. C. Target tracking scaling", + "D. D. Scheduled Scaling" + ], + "correct": "", + "explanation": "Explanation/Reference: With step scaling, you choose scaling metrics and t hreshold values for the CloudWatch alarms that trig ger the scaling process as well as define how your scal able target should be scaled when a threshold is in breach for a specified number of evaluation periods . Step scaling policies increase or decrease the cu rrent capacity of a scalable target based on a set of sca ling adjustments, known as step adjustments. The adjustments vary based on the size of the alarm bre ach. After a scaling activity is started, the polic y continues to respond to additional alarms, even whi le a scaling activity is in progress. Therefore, al l alarms that are breached are evaluated by Application Auto Scaling as it receives the alarm messages. 
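As a rough sketch of how such a policy is wired up for an EC2 Auto Scaling group (the group, policy, and alarm names are placeholders), the step adjustments and the CloudWatch alarm that triggers them might be defined like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Sketch only: each step adjustment is relative to the alarm threshold (70% CPU here),
# so breaches of 70-85% add one instance and breaches above 85% add two.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-step-scale-out",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    MetricAggregationType="Average",
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 15, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 15, "ScalingAdjustment": 2},
    ],
)

# The CloudWatch alarm supplies the scaling metric and threshold that trigger the policy.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```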
When you configure dynamic scaling, you must define how to scale in response to changing demand. For example, you have a web application that currently runs on two instances and you want the CPU utilizat ion of the Auto Scaling group to stay at around 50 perc ent when the load on the application changes. This gives you extra capacity to handle traffic spikes without maintaining an excessive amount of idle resources. You can configure your Auto Scaling group to scale auto matically to meet this need. The policy type determ ines how the scaling action is performed. Amazon EC2 Auto Scaling supports the following type s of scaling policies: Target tracking scaling - Increase or decrease the current capacity of the group based on a target val ue for a specific metric. This is similar to the way that your thermostat maintains the temperature of your h ome you select a temperature and the thermostat does th e rest. Step scaling - Increase or decrease the current cap acity of the group based on a set of scaling adjust ments, known as step adjustments, that vary based on the s ize of the alarm breach. Simple scaling - Increase or decrease the current c apacity of the group based on a single scaling adju stment. If you are scaling based on a utilization metric th at increases or decreases proportionally to the num ber of instances in an Auto Scaling group, then it is reco mmended that you use target tracking scaling polici es. Otherwise, it is better to use step scaling policie s instead. Hence, the correct answer in this scenario is Step Scaling. Target tracking scaling is incorrect because the ta rget tracking scaling policy increases or decreases the current capacity of the group based on a target val ue for a specific metric, instead of a set of scali ng adjustments. Simple scaling is incorrect because the simple scal ing policy increases or decreases the current capac ity of the group based on a single scaling adjustment, ins tead of a set of scaling adjustments. Scheduled Scaling is incorrect because the schedule d scaling policy is based on a schedule that allows you to set your own scaling schedule for predictable lo ad changes. This is not considered as one of the ty pes of dynamic scaling. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-scale-based-on-demand.html https://docs.aws.amazon.com/autoscaling/application /userguide/application-auto-scaling-step-scaling- policies.html", + "references": "" + }, + { + "question": "A company troubleshoots the operational issues of t heir cloud architecture by logging the AWS API call history of all AWS resources. The Solutions Archite ct must implement a solution to quickly identify th e most recent changes made to resources in their envi ronment, including creation, modification, and dele tion of AWS resources. One of the requirements is that t he generated log files should be encrypted to avoid any security issues. Which of the following is the most suitable approac h to implement the encryption?", + "options": [ + "A. A. Use CloudTrail and configure the destination S 3 bucket to use Server Side Encryption (SSE) with A ES-", + "B. B. Use CloudTrail with its default settings", + "C. C. Use CloudTrail and configure the destination A mazon Glacier archive to use Server-Side Encryption", + "D. D. Use CloudTrail and configure the destination S 3 bucket to use Server-Side Encryption (SSE)." + ], + "correct": "B. B. 
Use CloudTrail with its default settings", + "explanation": "Explanation/Reference: By default, CloudTrail event log files are encrypte d using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an A WS Key Management Service (AWS KMS) key. You can store your log files in your bucket for as long as you want. You can also define Amazon S3 lifecyc le rules to archive or delete log files automatically. If you want notifications about log file delivery and validation, you can set up Amazon SNS notifications . Using CloudTrail and configuring the destination Am azon Glacier archive to use Server-Side Encryption (SSE) is incorrect because CloudTrail st ores the log files to S3 and not in Glacier. Take n ote that by default, CloudTrail event log files are alr eady encrypted using Amazon S3 server-side encrypti on (SSE). Using CloudTrail and configuring the destination S3 bucket to use Server-Side Encryption (SSE) is incorrect because CloudTrail event log files are al ready encrypted using the Amazon S3 server-side encryption (SSE) which is why you do not have to do this anymore. Use CloudTrail and configure the destination S3 buc ket to use Server Side Encryption (SSE) with AES-128 encryption algorithm is incorrect because C loudtrail event log files are already encrypted usi ng the Amazon S3 server-side encryption (SSE) by defau lt. Additionally, SSE-S3 only uses the AES-256 encryption algorithm and not the AES-128. References: https://docs.aws.amazon.com/awscloudtrail/latest/us erguide/how-cloudtrail-works.html https://aws.amazon.com/blogs/aws/category/cloud-tra il/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", + "references": "" + }, + { + "question": "A company has an infrastructure that allows EC2 ins tances from a private subnet to fetch objects from Amazon S3 via a NAT Instance. The company's Solutio ns Architect was instructed to lower down the cost incurred by the current solution. How should the Solutions Architect redesign the arc hitecture in the most cost-efficient manner?", + "options": [ + "A. A. Remove the NAT instance and create an S3 gatew ay endpoint to access S3 objects.", + "B. B. Remove the NAT instance and create an S3 inter face endpoint to access S3 objects.", + "C. C. Replace the NAT instance with NAT Gateway to a ccess S3 objects.", + "D. D. Use a smaller instance type for the NAT instan ce." + ], + "correct": "A. A. Remove the NAT instance and create an S3 gatew ay endpoint to access S3 objects.", + "explanation": "Explanation/Reference: A VPC endpoint enables you to privately connect you r VPC to supported AWS services and VPC endpoint services powered by PrivateLink without re quiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Insta nces in your VPC do not require public IP addresses to communicate with resources in the service. Traff ic between your VPC and the other services does not leave the Amazon network. Endpoints are virtual devices. They are horizontall y scaled, redundant, and highly available VPC components that allow communication between instanc es in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. There are two types of VPC endpoints: interface end points and gateway endpoints. You should create the type of VPC endpoint required by the supported serv ice. 
As a rule of thumb, most AWS services use VPC Interface Endpoint except for S3 and DynamoDB, whic h use VPC Gateway Endpoint. There is no additional charge for using gateway end points. However, standard charges for data transfer and resource usage still apply. Let's assume you created a NAT gateway and you have an EC2 instance routing to the Internet through th e NAT gateway. Your EC2 instance behind the NAT gatew ay sends a 1 GB file to one of your S3 buckets. The EC2 instance, NAT gateway, and S3 Bucket are in the same region US East (Ohio), and the NAT gateway and EC2 instance are in the same availabili ty zone. Your cost will be calculated as follows: - NAT Gateway Hourly Charge: NAT Gateway is charged on an hourly basis. For example, the rate is $0.045 per hour in this region. - NAT Gateway Data Processing Charge: 1 GB data wen t through NAT gateway. The NAT Gateway Data Processing charge is applied and will result i n a charge of $0.045. - Data Transfer Charge: This is the standard EC2 Da ta Transfer charge. 1 GB data was transferred from the EC2 instance to S3 via the NAT gateway. There w as no charge for the data transfer from the EC2 instance to S3 as it is Data Transfer Out to Amazon EC2 to S3 in the same region. There was also no charge for the data transfer between the NAT Gatewa y and the EC2 instance since the traffic stays in t he same availability zone using private IP addresses. There will be a data transfer charge between your N AT Gateway and EC2 instance if they are in the differe nt availability zone. In summary, your charge will be $0.045 for 1 GB of data processed by the NAT gateway and a charge of $0.045 per hour will always apply once the NAT gate way is provisioned and available. The data transfer has no charge in this example. However, if you send the file to a non-AWS Internet location instead, t here will be a data transfer charge as it is data transf er out from Amazon EC2 to the Internet. To avoid the NAT Gateway Data Processing charge in this example, you could set up a Gateway Type VPC endpoint and route the traffic to/from S3 throu gh the VPC endpoint instead of going through the NA T Gateway. There is no data processing or hourly charges for u sing Gateway Type VPC endpoints. Hence, the correct answer is the option that says: Remove the NAT instance and create an S3 gateway endpoint to access S3 objects. The option that says: Replace the NAT instance with NAT Gateway to access S3 objects is incorrect. A NAT Gateway is just a NAT instance that is managed for you by AWS. It provides less operational management and you pay for the hour that your NAT G ateway is running. This is not the most effective solution since you will still pay for the idle time . The option that says: Use a smaller instance type f or the NAT instance is incorrect. Although this mig ht reduce the cost, it still is not the most cost-effi cient solution. An S3 Gateway endpoint is still the best solution because it comes with no additional charge . The option that says: Remove the NAT instance and c reate an S3 interface endpoint to access S3 objects is incorrect. An interface endpoint is an e lastic network interface with a private IP address from the IP address range of your subnet. Unlike a Gateway e ndpoint, you still get billed for the time your int erface endpoint is running and the GB data it has processe d. From a cost standpoint, using the S3 Gateway endpoint is the most favorable solution. 
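As a rough illustration of the recommended change, the Gateway endpoint for S3 can be created with a single API call; the VPC and route table IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch only: a Gateway endpoint adds a prefix-list route for S3 to the chosen
# route tables, so instances in those subnets reach S3 without the NAT instance.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],   # route table of the private subnet
)
```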
References: https://docs.aws.amazon.com/vpc/latest/privatelink/ vpce-gateway.html https://aws.amazon.com/blogs/architecture/reduce-co st-and-increase-security-with-amazon-vpc-endpoints/ https://aws.amazon.com/vpc/pricing/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "An application is hosted on an EC2 instance with mu ltiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to th e instance to protect the confidential data stored in the volumes. Which of the following statements are true about en crypted Amazon Elastic Block Store volumes? (Select TWO.)", + "options": [ + "A. A. Snapshots are automatically encrypted.", + "B. B. All data moving between the volume and the ins tance are encrypted.", + "C. C. Snapshots are not automatically encrypted.", + "D. D. The volumes created from the encrypted snapsho t are not encrypted." + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon Elastic Block Store (Amazon EBS) provides bl ock level storage volumes for use with EC2 instances. EBS volumes are highly available and rel iable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as stor age volumes that persist independently from the life of the instance. When you create an encrypted EBS volume and attach it to a supported instance type, the following type s of data are encrypted: - Data at rest inside the volume - All data moving between the volume and the instan ce - All snapshots created from the volume - All volumes created from those snapshots Encryption operations occur on the servers that hos t EC2 instances, ensuring the security of both data -at- rest and data-in-transit between an instance and it s attached EBS storage. You can encrypt both the bo ot and data volumes of an EC2 instance. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ AmazonEBS.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSEncryption.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A Solutions Architect is working for a multinationa l telecommunications company. The IT Manager wants to consolidate their log streams including the acce ss, application, and security logs in one single sy stem. Once consolidated, the company will analyze these l ogs in real-time based on heuristics. There will be some time in the future where the company will need to valida te heuristics, which requires going back to data sa mples extracted from the last 12 hours. What is the best approach to meet this requirement?", + "options": [ + "A. A. First, configure Amazon Cloud Trail to receive custom logs and then use EMR to apply heuristics o n the logs.", + "B. B. First, send all the log events to Amazon SQS t hen set up an Auto Scaling group of EC2 servers to", + "C. C. First, set up an Auto Scaling group of EC2 ser vers then store the logs on Amazon S3 then finally, use", + "D. D. First, send all of the log events to Amazon Ki nesis then afterwards, develop a client process to apply" + ], + "correct": "D. D. 
First, send all of the log events to Amazon Ki nesis then afterwards, develop a client process to apply", + "explanation": "Explanation/Reference: In this scenario, you need a service that can colle ct, process, and analyze data in real-time hence, t he right service to use here is Amazon Kinesis. Amazon Kinesis makes it easy to collect, process, a nd analyze real-time, streaming data so you can get timely insights and react quickly to new informatio n. Amazon Kinesis offers key capabilities to cost- effectively process streaming data at any scale, al ong with the flexibility to choose the tools that b est suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine le arning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the process ing can begin. All other options are incorrect since these service s do not have real-time processing capability, unli ke Amazon Kinesis.", + "references": "https://aws.amazon.com/kinesis/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" + }, + { + "question": "process both mission-critical data as well as non-e ssential batch jobs. Which of the following is the most cost-effective o ption to use in implementing this architecture?", + "options": [ + "A. A. Use ECS as the container management service th en set up a combination of Reserved and Spot EC2", + "B. B. Use ECS as the container management service th en set up Reserved EC2 Instances for processing bot h", + "C. C. Use ECS as the container management service th en set up On-Demand EC2 Instances for processing", + "D. D. Use ECS as the container management service th en set up Spot EC2 Instances for processing both" + ], + "correct": "A. A. Use ECS as the container management service th en set up a combination of Reserved and Spot EC2", + "explanation": "Explanation/Reference: Amazon ECS lets you run batch workloads with manage d or custom schedulers on Amazon EC2 On- Demand Instances, Reserved Instances, or Spot Insta nces. You can launch a combination of EC2 instances to set up a cost-effective architecture depending o n your workload. You can launch Reserved EC2 instances to process the mission-critical data and Spot EC2 instances for processing non-essential bat ch jobs. There are two different charge models for Amazon El astic Container Service (ECS): Fargate Launch Type Model and EC2 Launch Type Model. With Fargate, you pay for the amount of vCPU and memory resources that your containerized application reque sts while for EC2 launch type model, there is no additional charge. You pay for AWS resources (e.g. EC2 instances or EBS volumes) you create to store a nd run your application. You only pay for what you use , as you use it; there are no minimum fees and no upfront commitments. In this scenario, the most cost-effective solution is to use ECS as the container management service t hen set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. You can use Scheduled Rese rved Instances (Scheduled Instances) which enables you to purchase capacity reservations that recur on a daily, weekly , or monthly basis, with a specified start time and duration, for a one-year term. 
This will ensure tha t you have an uninterrupted compute capacity to pro cess your mission-critical batch jobs. Hence, the correct answer is the option that says: Use ECS as the container management service then se t up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non- essential batch jobs respectively. Using ECS as the container management service then setting up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs is incorrect because processing the non- essential batch jobs can be handled much cheaper by using Spot EC2 instances instead of Reserved Instances. Using ECS as the container management service then setting up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs is incorrect because an On-Demand instance costs more compared to Reserved and Spot E C2 instances. Processing the non-essential batch jo bs can be handled much cheaper by using Spot EC2 insta nces instead of On-Demand instances. Using ECS as the container management service then setting up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs is incorrect because although this set up provides the cheapest solution among other options, it will not be able to meet the required workload. Using Spot instances to process mission-critical workloads is not suitable since these types of instances can be terminated by AWS at any time, which can affect cri tical processing. References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/Welcome.html https://aws.amazon.com/ec2/spot/containers-for-less /get-started/ Check out this Amazon ECS Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-container- service-amazon-ecs/ AWS Container Services Overview: https://www.youtube.com/watch?v=5QBgDX7O7pw", + "references": "" + }, + { + "question": "A financial analytics application that collects, pr ocesses and analyzes stock data in real-time is usi ng Kinesis Data Streams. The producers continually pus h data to Kinesis Data Streams while the consumers process the data in real time. In Amazon Kinesis, w here can the consumers store their results? (Select TWO.)", + "options": [ + "A. A. Glacier Select", + "B. B. Amazon Athena", + "C. C. Amazon Redshift", + "D. D. Amazon S3" + ], + "correct": "", + "explanation": "Explanation/Reference: In Amazon Kinesis, the producers continually push d ata to Kinesis Data Streams and the consumers process the data in real time. Consumers (such as a custom application running on Amazon EC2, or an Amazon Kinesis Data Firehose delivery stream) can s tore their results using an AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3. Hence, Amazon S3 and Amazon Redshift are the correc t answers. The following diagram illustrates the high-level architecture of Kinesis Data Streams: Glacier Select is incorrect because this is not a s torage service. It is primarily used to run queries directly on data stored in Amazon Glacier, retrieving only t he data you need out of your archives to use for analytics. AWS Glue is incorrect because this is not a storage service. It is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. Amazon Athena is incorrect because this is just an interactive query service that makes it easy to ana lyze data in Amazon S3 using standard SQL. 
It is not a storage service where you can store the results processed by the consumers.", "references": "http://docs.aws.amazon.com/streams/latest/dev/key-concepts.html Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" }, { "question": "A client is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The client also uses Amazon Route 53 to manage their public DNS. How should the client configure the DNS zone apex record to point to the load balancer?", "options": [ "A. A. Create an alias for CNAME record to the load balancer DNS name.", "B. B. Create a CNAME record pointing to the load balancer DNS name.", "C. C. Create an A record aliased to the load balancer DNS name.", "D. D. Create an A record pointing to the IP address of the load balancer." ], "correct": "C. C. Create an A record aliased to the load balancer DNS name.", "explanation": "Explanation/Reference: Route 53's DNS implementation connects user requests to infrastructure running inside (and outside) of Amazon Web Services (AWS). For example, if you have multiple web servers running on EC2 instances behind an Elastic Load Balancing load balancer, Route 53 will route all traffic addressed to your website (e.g. www.tutorialsdojo.com) to the load balancer DNS name (e.g. elbtutorialsdojo123.elb.amazonaws.com). Additionally, Route 53 supports the alias resource record set, which lets you map your zone apex (e.g. tutorialsdojo.com) DNS name to your load balancer DNS name. IP addresses associated with Elastic Load Balancing can change at any time due to scaling or software updates. Route 53 responds to each request for an Alias resource record set with one IP address for the load balancer. Creating an A record pointing to the IP address of the load balancer is incorrect. You should be using an Alias record pointing to the DNS name of the load balancer since the IP address of the load balancer can change at any time. Creating a CNAME record pointing to the load balancer DNS name and creating an alias for CNAME record to the load balancer DNS name are incorrect because CNAME records cannot be created for your zone apex. You should create an alias record at the top node of a DNS namespace which is also known as the zone apex. References: http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/setting-up-route53-zoneapex-elb.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", "references": "" }, { "question": "A company plans to use Route 53 instead of an ELB to load balance the incoming requests to the web application. The system is deployed to two EC2 instances to which the traffic needs to be distributed. You want to set a specific percentage of traffic to go to each instance. Which routing policy would you use?", "options": [ "A. A. Failover", "B. B. Weighted", "C. C. Geolocation", "D. D. Latency" ], "correct": "B. B. Weighted", "explanation": "Explanation/Reference: Weighted routing lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (portal.tutorialsdojo.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes including load balancing and testing new versions of software.
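In Route 53 API terms, each record in a weighted set shares the same name and type but carries its own SetIdentifier and Weight; a minimal boto3 sketch with placeholder hosted zone ID, record name, and instance IPs follows:

```python
import boto3

route53 = boto3.client("route53")

# Sketch only: two records share the same name, so traffic is split 64/256 and 192/256.
changes = []
for set_id, ip, weight in [("instance-a", "192.0.2.10", 64), ("instance-b", "192.0.2.11", 192)]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.tutorialsdojo.com",
            "Type": "A",
            "SetIdentifier": set_id,   # distinguishes records that share one name
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",     # placeholder hosted zone ID
    ChangeBatch={"Changes": changes},
)
```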
You can set a specific percentage of how much traffic w ill be allocated to the resource by specifying the weights. For example, if you want to send a tiny portion of your traffic to one resource and the rest to anothe r resource, you might specify weights of 1 and 255. T he resource with a weight of 1 gets 1/256th of the traffic (1/1+255), and the other resource gets 255/ 256ths (255/1+255). You can gradually change the balance by changing th e weights. If you want to stop sending traffic to a resource, you can change the weight for that record to 0. Hence, the correct answer is Weighted. Latency is incorrect because you cannot set a speci fic percentage of traffic for the 2 EC2 instances w ith this routing policy. Latency routing policy is prim arily used when you have resources in multiple AWS Regions and if you need to automatically route traf fic to a specific AWS Region that provides the best latency with less round-trip time. Failover is incorrect because this type is commonly used if you want to set up an active-passive failo ver configuration for your web application. Geolocation is incorrect because this is more suita ble for routing traffic based on the location of yo ur users, and not for distributing a specific percentage of t raffic to two AWS resources.", + "references": "http://docs.aws.amazon.com/Route53/latest/Developer Guide/routing-policy.html Amazon Route 53 Overview: https://youtu.be/Su308t19ubY Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/" + }, + { + "question": "A web application is hosted on an EC2 instance that processes sensitive financial information which is launched in a private subnet. All of the data are s tored in an Amazon S3 bucket. Financial information is accessed by users over the Internet. The security t eam of the company is concerned that the Internet connectivity to Amazon S3 is a security risk. In this scenario, what will you do to resolve this security vulnerability in the most cost-effective m anner?", + "options": [ + "A. A. Change the web architecture to access the fina ncial data in S3 through an interface VPC endpoint, which", + "B. B. Change the web architecture to access the fina ncial data hosted in your S3 bucket by creating a c ustom", + "C. C. Change the web architecture to access the fina ncial data through a Gateway VPC Endpoint.", + "D. D. Change the web architecture to access the fina ncial data in your S3 bucket through a VPN connecti on." + ], + "correct": "C. C. Change the web architecture to access the fina ncial data through a Gateway VPC Endpoint.", + "explanation": "Explanation/Reference: Take note that your VPC lives within a larger AWS n etwork and the services, such as S3, DynamoDB, RDS, and many others, are located outside of your V PC, but still within the AWS network. By default, t he connection that your VPC uses to connect to your S3 bucket or any other service traverses the public Internet via your Internet Gateway. A VPC endpoint enables you to privately connect you r VPC to supported AWS services and VPC endpoint services powered by PrivateLink without re quiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Insta nces in your VPC do not require public IP addresses to communicate with resources in the service. Traff ic between your VPC and the other service does not leave the Amazon network. There are two types of VPC endpoints: interface end points and gateway endpoints. 
You have to create th e type of VPC endpoint required by the supported serv ice. An interface endpoint is an elastic network interfa ce with a private IP address that serves as an entr y point for traffic destined to a supported service. A gate way endpoint is a gateway that is a target for a sp ecified route in your route table, used for traffic destine d to a supported AWS service. Hence, the correct answer is: Change the web archit ecture to access the financial data through a Gateway VPC Endpoint. The option that says: Changing the web architecture to access the financial data in your S3 bucket through a VPN connection is incorrect because a VPN connection still goes through the public Internet. You have to use a VPC Endpoint in this scenario and not VPN, to privately connect your VPC to supporte d AWS services such as S3. The option that says: Changing the web architecture to access the financial data hosted in your S3 bucket by creating a custom VPC endpoint service is incorrect because a \"VPC endpoint service\" is quite different from a \"VPC endpoint\". With the VPC endpoint service, you are the service provider whe re you can create your own application in your VPC and configure it as an AWS PrivateLink-powered service (referred to as an endpoint service). Other AWS pri ncipals can create a connection from their VPC to y our endpoint service using an interface VPC endpoint. The option that says: Changing the web architecture to access the financial data in S3 through an interface VPC endpoint, which is powered by AWS Pri vateLink is incorrect. Although you can use an Interface VPC Endpoint to satisfy the requirement, this type entails an associated cost, unlike a Gate way VPC Endpoint. Remember that you won't get billed if you use a Gateway VPC endpoint for your Amazon S3 bucket, unlike an Interface VPC endpoint that is billed for hourly usage and data processing charge s. Take note that the scenario explicitly asks for the most cost-effective solution. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpc-endpoints.html https://docs.aws.amazon.com/vpc/latest/userguide/vp ce-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A news company is planning to use a Hardware Securi ty Module (CloudHSM) in AWS for secure key storage of their web applications. You have launched the Cloud HSM cluster but after just a few hours, a support s taff mistakenly attempted to log in as the administrator three times using an invalid password in the Hardw are Security Module. This has caused the HSM to be zero ized, which means that the encryption keys on it ha ve been wiped. Unfortunately, you did not have a copy of the keys stored anywhere else. How can you obtain a new copy of the keys that you have stored on Hardware Security Module?", + "options": [ + "A. A. Contact AWS Support and they will provide you a copy of the keys.", + "B. B. Restore a snapshot of the Hardware Security Mo dule.", + "C. C. Use the Amazon CLI to get a copy of the keys.", + "D. D. The keys are lost permanently if you did not h ave a copy." + ], + "correct": "D. D. The keys are lost permanently if you did not h ave a copy.", + "explanation": "Explanation/Reference: Attempting to log in as the administrator more than twice with the wrong password zeroizes your HSM appliance. When an HSM is zeroized, all keys, certi ficates, and other data on the HSM is destroyed. 
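One common safeguard, noted just below, is to restrict which hosts can even reach the HSM elastic network interfaces by tightening the cluster security group; a rough boto3 sketch (both security group IDs are placeholders) might look like this:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch only: admit the CloudHSM client ports (TCP 2223-2225) only from instances
# in a trusted administration security group, rather than from the whole VPC.
ec2.authorize_security_group_ingress(
    GroupId="sg-0cloudhsmcluster000",              # the cluster security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2223,
        "ToPort": 2225,
        "UserIdGroupPairs": [{"GroupId": "sg-0trustedadmins0000"}],  # trusted hosts only
    }],
)
```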
Yo u can use your cluster's security group to prevent an unauthenticated user from zeroizing your HSM. Amazon does not have access to your keys nor to the credentials of your Hardware Security Module (HSM) and therefore has no way to recover your keys if yo u lose your credentials. Amazon strongly recommends that you use two or more HSMs in separate Availabil ity Zones in any production CloudHSM Cluster to avoid loss of cryptographic keys. Refer to the CloudHSM FAQs for reference: Q: Could I lose my keys if a single HSM instance fa ils? Yes. It is possible to lose keys that were created since the most recent daily backup if the CloudHSM cluster that you are using fails and you are not us ing two or more HSMs. Amazon strongly recommends that you use two or more HSMs, in separate Availabi lity Zones, in any production CloudHSM Cluster to avoid loss of cryptographic keys. Q: Can Amazon recover my keys if I lose my credenti als to my HSM? No. Amazon does not have access to your keys or cre dentials and therefore has no way to recover your keys if you lose your credentials. References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/stop-cloudhsm/ https://aws.amazon.com/cloudhsm/faqs/ https://d1.awsstatic.com/whitepapers/Security/secur ity-of-aws-cloudhsm-backups.pdf", + "references": "" + }, + { + "question": "A company deployed a web application that stores st atic assets in an Amazon Simple Storage Service (S3 ) bucket. The Solutions Architect expects the S3 buck et to immediately receive over 2000 PUT requests an d 3500 GET requests per second at peak hour. What should the Solutions Architect do to ensure op timal performance?", + "options": [ + "A. A. Do nothing. Amazon S3 will automatically manag e performance at this scale.", + "B. B. Use Byte-Range Fetches to retrieve multiple ra nges of an object data per GET request.", + "C. C. Add a random prefix to the key names.", + "D. D. Use a predictable naming scheme in the key nam es such as sequential numbers or date time" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon S3 now provides increased performance to sup port at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, whi ch can save significant processing time for no addi tional charge. Each S3 prefix can support these request ra tes, making it simple to increase performance significantly. Applications running on Amazon S3 today will enjoy this performance improvement with no changes, and customers building new applications on S3 do not ha ve to make any application customizations to achiev e this performance. Amazon S3's support for parallel requests means you can scale your S3 performance by the factor of your compute cluster, without making any customizations to your application. Performance scales per prefix, so you can use as many prefixes as you need in parallel to achieve the required throughput. There are no limits to the number of pr efixes. This S3 request rate performance increase removes a ny previous guidance to randomize object prefixes t o achieve faster performance. That means you can now use logical or sequential naming patterns in S3 obj ect naming without any performance implications. This i mprovement is now available in all AWS Regions. Using Byte-Range Fetches to retrieve multiple range s of an object data per GET request is incorrect because although a Byte-Range Fetch helps you achie ve higher aggregate throughput, Amazon S3 does not support retrieving multiple ranges of data per GET request. 
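Each request can carry only one Range header; as a rough illustration (the bucket and key are placeholders), a single-range fetch with boto3 looks like this, and several such requests can run concurrently to raise aggregate throughput:

```python
import boto3

s3 = boto3.client("s3")

# Sketch only: fetch just the first 1 MiB of a large object in one GET request.
part = s3.get_object(
    Bucket="tutorialsdojo-assets",
    Key="large-object.bin",
    Range="bytes=0-1048575",
)
data = part["Body"].read()
```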
Using the Range HTTP header in a GET Objec t request, you can fetch a byte-range from an object, transferring only the specified portion. You can u se concurrent connections to Amazon S3 to fetch differ ent byte ranges from within the same object. Fetchi ng smaller ranges of a large object also allows your a pplication to improve retry times when requests are interrupted. Adding a random prefix to the key names is incorrec t. Adding a random prefix is not required in this scenario because S3 can now scale automatically to adjust perfomance. You do not need to add a random prefix anymore for this purpose since S3 has increa sed performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which covers the workload in the scenario. Using a predictable naming scheme in the key names such as sequential numbers or date time sequences is incorrect because Amazon S3 already ma intains an index of object key names in each AWS region. S3 stores key names in alphabetical order. The key name dictates which partition the key is st ored in. Using a sequential prefix increases the likelih ood that Amazon S3 will target a specific partition for a large number of your keys, overwhelming the I/O cap acity of the partition. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/req uest-rate-perf-considerations.html https://d1.awsstatic.com/whitepapers/AmazonS3BestPr actices.pdf https://docs.aws.amazon.com/AmazonS3/latest/dev/Get tingObjectsUsingAPIs.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A financial company wants to store their data in Am azon S3 but at the same time, they want to store th eir frequently accessed data locally on their on-premis es server. This is due to the fact that they do not have the option to extend their on-premises storage, which i s why they are looking for a durable and scalable s torage service to use in AWS. What is the best solution fo r this scenario?", + "options": [ + "A. A. Use the Amazon Storage Gateway - Cached Volume s.", + "B. B. Use both Elasticache and S3 for frequently acc essed data.", + "C. C. Use Amazon Glacier.", + "D. D. Use a fleet of EC2 instance with EBS volumes t o store the commonly used data.", + "A. A. Upload the data to S3 then use a lifecycle pol icy to transfer data to S3 One Zone-IA.", + "B. B. Upload the data to Amazon FSx for Windows File Server using the Server Message Block (SMB)", + "C. C. Upload the data to S3 then use a lifecycle pol icy to transfer data to S3-IA.", + "D. D. Upload the data to S3 and set a lifecycle poli cy to transition data to Glacier after 0 days." + ], + "correct": "D. D. Upload the data to S3 and set a lifecycle poli cy to transition data to Glacier after 0 days.", + "explanation": "Explanation/Reference: Glacier is a cost-effective archival solution for l arge amounts of data. Bulk retrievals are S3 Glacie r's lowest-cost retrieval option, enabling you to retri eve large amounts, even petabytes, of data inexpens ively in a day. Bulk retrievals typically complete within 5 12 hours. You can specify an absolute or relati ve time period (including 0 days) after which the spec ified Amazon S3 objects should be transitioned to Amazon Glacier. Hence, the correct answer is the option that says: Upload the data to S3 and set a lifecycle policy to transition data to Glacier after 0 days. Glacier has a management console that you can use t o create and delete vaults. 
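As a rough sketch of the 0-day transition (the bucket name and prefix are placeholders), the lifecycle rule can be applied like this:

```python
import boto3

s3 = boto3.client("s3")

# Sketch only: Days=0 moves objects to the GLACIER storage class as soon as the
# lifecycle processing picks them up after upload.
s3.put_bucket_lifecycle_configuration(
    Bucket="tutorialsdojo-financial-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-immediately",
            "Status": "Enabled",
            "Filter": {"Prefix": "audit/"},
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }],
    },
)
```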
However, you cannot directly upload archives to Glacier by using the ma nagement console. To upload data such as photos, videos, and other documents, you must either use th e AWS CLI or write code to make requests by using either the REST API directly or by using the AWS SD Ks. Take note that uploading data to the S3 Console and setting its storage class of \"Glacier\" is a differ ent story as the proper way to upload data to Glacier is stil l via its API or CLI. In this way, you can set up y our vaults and configure your retrieval options. If you uploaded your data using the S3 console then it wi ll be managed via S3 even though it is internally using a Glacier storage class. Uploading the data to S3 then using a lifecycle pol icy to transfer data to S3-IA is incorrect because using Glacier would be a more cost-effective soluti on than using S3-IA. Since the required retrieval p eriod should not exceed more than a day, Glacier would be the best choice. Uploading the data to Amazon FSx for Windows File S erver using the Server Message Block (SMB) protocol is incorrect because this option costs mor e than Amazon Glacier, which is more suitable for storing infrequently accessed data. Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessi ble over the industry-standard Server Message Block (SMB) protocol. Uploading the data to S3 then using a lifecycle pol icy to transfer data to S3 One Zone-IA is incorrect because with S3 One Zone-IA, the data will only be stored in a single availability zone and thus, this storage solution is not durable. It also costs more compared to Glacier. References: https://aws.amazon.com/glacier/faqs/ https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lifecycle-mgmt.html https://docs.aws.amazon.com/amazonglacier/latest/de v/uploading-an-archive.html Amazon S3 and S3 Glacier Overview: https://www.youtube.com/watch?v=1ymyeN2tki4 Check out this Amazon S3 Glacier Cheat Sheet: https://tutorialsdojo.com/amazon-glacier/", + "references": "https://aws.amazon.com/storagegateway/faqs/ Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/ QUESTION 296 A company has 10 TB of infrequently accessed financ ial data files that would need to be stored in AWS. These data would be accessed infrequently during sp ecific weeks when they are retrieved for auditing purposes. The retrieval time is not strict as long as it does not exceed 24 hours. Which of the following would be a secure, durable, and cost-effective solution for this scenario?" + }, + { + "question": "A company has an On-Demand EC2 instance with an att ached EBS volume. There is a scheduled job that creates a snapshot of this EBS volume every midnigh t at 12 AM when the instance is not used. One night , there has been a production incident where you need to pe rform a change on both the instance and on the EBS volume at the same time when the snapshot is curren tly taking place. Which of the following scenario is true when it com es to the usage of an EBS volume while the snapshot is in progress?", + "options": [ + "A. A. The EBS volume can be used in read-only mode w hile the snapshot is in progress.", + "B. B. The EBS volume cannot be used until the snapsh ot completes.", + "C. C. The EBS volume cannot be detached or attached to an EC2 instance until the snapshot completes", + "D. D. The EBS volume can be used while the snapshot is in progress." + ], + "correct": "D. D. 
The EBS volume can be used while the snapshot is in progress.", + "explanation": "Explanation/Reference: Snapshots occur asynchronously; the point-in-time s napshot is created immediately, but the status of t he snapshot is pending until the snapshot is complete (when all of the modified blocks have been transfer red to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where man y blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the vol ume hence, you can still use the EBS volume normally. When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapsho t. The replicated volume loads data lazily in the background so that you can begin using it immediate ly. If you access data that hasn't been loaded yet, the volume immediately downloads the requested data fro m Amazon S3, and then continues loading the rest of the volume's data in the background. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-creating-snapshot.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSSnapshots.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "In a startup company you are working for, you are a sked to design a web application that requires a No SQL database that has no limit on the storage size for a given table. The startup is still new in the mark et and it has very limited human resources who can take care of the database infrastructure. Which is the most suitable service that you can imp lement that provides a fully managed, scalable and highly available NoSQL service?", + "options": [ + "A. A. SimpleDB", + "B. B. Amazon Neptune", + "C. C. DynamoDB", + "D. D. Amazon Aurora" + ], + "correct": "C. C. DynamoDB", + "explanation": "Explanation/Reference: The term \"fully managed\" means that Amazon will man age the underlying infrastructure of the service hence, you don't need an additional human resource to support or maintain the service. Therefore, Amaz on DynamoDB is the right answer. Remember that Amazon RDS is a managed service but not \"fully managed\" as you still have the option to maintain a nd configure the underlying server of the database. Amazon DynamoDB is a fast and flexible NoSQL databa se service for all applications that need consisten t, single-digit millisecond latency at any scale. It i s a fully managed cloud database and supports both document and key-value store models. Its flexible d ata model, reliable performance, and automatic scal ing of throughput capacity make it a great fit for mobi le, web, gaming, ad tech, IoT, and many other applications. Amazon Neptune is incorrect because this is primari ly used as a graph database. Amazon Aurora is incorrect because this is a relati onal database and not a NoSQL database. SimpleDB is incorrect. Although SimpleDB is also a highly available and scalable NoSQL database, it ha s a limit on the request capacity or storage size for a given table, unlike DynamoDB.", + "references": "https://aws.amazon.com/dynamodb/ Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU" + }, + { + "question": "A leading e-commerce company is in need of a storag e solution that can be simultaneously accessed by 1 000 Linux servers in multiple availability zones. 
The s ervers are hosted in EC2 instances that use a hiera rchical directory structure via the NFSv4 protocol. The ser vice should be able to handle the rapidlynchanging data at scale while still maintaining high performance. It should also be highly durable and highly available whenever the servers will pull data from it, with little nee d for management. As the Solutions Architect, which of the following services is the most cost-effective choice that you should use to meet the above requirement?", + "options": [ + "A. A. EFS", + "B. B. S3", + "C. C. EBS", + "D. D. Storage Gateway" + ], + "correct": "A. A. EFS", + "explanation": "Explanation/Reference: Amazon Web Services (AWS) offers cloud storage serv ices to support a wide range of storage workloads such as EFS, S3 and EBS. You have to understand whe n you should use Amazon EFS, Amazon S3 and Amazon Elastic Block Store (EBS) based on the speci fic workloads. In this scenario, the keywords are rapidly changing data and 1000 Linux servers. Amazon EFS is a file storage service for use with A mazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as st rong consistency and file locking), and concurrentl y- accessible storage for up to thousands of Amazon EC 2 instances. EFS provides the same level of high availability and high scalability like S3 however, this service is more suitable for scenarios where i t is required to have a POSIX-compatible file system or if you are storing rapidly changing data. Data that must be updated very frequently might be better served by storage solutions that take into a ccount read and write latencies, such as Amazon EBS volume s, Amazon RDS, Amazon DynamoDB, Amazon EFS, or relational databases running on Amazon EC2. Amazon EBS is a block-level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-l atency access to data from a single EC2 instance. Amazon S3 is an object storage service. Amazon S3 m akes data available through an Internet API that ca n be accessed anywhere. In this scenario, EFS is the best answer. As stated above, Amazon EFS provides a file system interface , file system access semantics (such as strong consistency and file locking), and concurrently-accessible sto rage for up to thousands of Amazon EC2 instances. EFS pr ovides the performance, durability, high availability, and storage capacity needed by the 10 00 Linux servers in the scenario. S3 is incorrect because although this provides the same level of high availability and high scalabilit y like EFS, this service is not suitable for storing data which are rapidly changing, just as mentioned in th e above explanation. It is still more effective to use EFS as it offers strong consistency and file locking wh ich the S3 service lacks. EBS is incorrect because an EBS Volume cannot be sh ared by multiple instances. Storage Gateway is incorrect because this is primar ily used to extend the storage of your on-premises data center to your AWS Cloud. References: https://docs.aws.amazon.com/efs/latest/ug/how-it-wo rks.html https://aws.amazon.com/efs/features/ https://d1.awsstatic.com/whitepapers/AWS%20Storage% 20Services%20Whitepaper-v9.pdf#page=9 Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/", + "references": "" + }, + { + "question": "A company has an application hosted in an Amazon EC S Cluster behind an Application Load Balancer. 
The Solutions Architect is building a sophisticated web filtering solution that allows or blocks web requests based on the country that the requests originate from. However, the solution should still allow specific IP addresses from that country. Which combination of steps should the Architect implement to satisfy this requirement? (Select TWO.)", + "options": [ + "A. In the Application Load Balancer, create a listener rule that explicitly allows requests from approved IP addresses", + "B. Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country", + "C. Place a Transit Gateway in front of the VPC where the application is hosted and set up Network ACLs that block requests that originate from a specific country", + "D. Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set" + ], + "correct": "B. Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country, D. Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set", + "explanation": "Explanation/Reference: If you want to allow or block web requests based on the country that the requests originate from, create one or more geo match conditions. A geo match condition lists countries that your requests originate from. Later in the process, when you create a web ACL, you specify whether to allow or block requests from those countries. You can use geo match conditions with other AWS WAF Classic conditions or rules to build sophisticated filtering. For example, if you want to block certain countries but still allow specific IP addresses from that country, you could create a rule containing a geo match condition and an IP match condition. Configure the rule to block requests that originate from that country and do not match the approved IP addresses. As another example, if you want to prioritize resources for users in a particular country, you could include a geo match condition in two different rate-based rules. Set a higher rate limit for users in the preferred country and set a lower rate limit for all other users. If you are using the CloudFront geo restriction feature to block a country from accessing your content, any request from that country is blocked and is not forwarded to AWS WAF Classic. So if you want to allow or block requests based on geography plus other AWS WAF Classic conditions, you should not use the CloudFront geo restriction feature. Instead, you should use an AWS WAF Classic geo match condition. Hence, the correct answers are: - Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set. - Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country. The option that says: In the Application Load Balancer, create a listener rule that explicitly allows requests from approved IP addresses is incorrect because a listener rule just checks for connection requests using the protocol and port that you configure. It only determines how the load balancer routes the requests to its registered targets. The option that says: Set up a geo match condition in the Application Load Balancer that blocks requests that originate from a specific country is incorrect because you can't configure a geo match condition in an Application Load Balancer. You have to use AWS WAF instead.
The option that says: Place a Transit Gateway in fr ont of the VPC where the application is hosted and set up Network ACLs that block requests that origin ate from a specific country is incorrect because AWS Transit Gateway is simply a service that enable s customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a s ingle gateway. Using this type of gateway is not warranted in this scenario. Moreover, Network ACLs are not suitable for blocking requests from a speci fic country. You have to use AWS WAF instead. References: https://docs.aws.amazon.com/waf/latest/developergui de/classic-web-acl-geo-conditions.html https://docs.aws.amazon.com/waf/latest/developergui de/how-aws-waf-works.html Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, Cloud HSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", + "references": "" + }, + { + "question": "A company plans to migrate a MySQL database from an on-premises data center to the AWS Cloud. This database will be used by a legacy batch application that has steady-state workloads in the morning but has its peak load at night for the end-of-day processin g. You need to choose an EBS volume that can handle a maximum of 450 GB of data and can also be used as t he system boot volume for your EC2 instance. Which of the following is the most cost-effective s torage type to use in this scenario?", + "options": [ + "A. A. Amazon EBS Throughput Optimized HDD (st1)", + "B. B. Amazon EBS Provisioned IOPS SSD (io1)", + "C. C. Amazon EBS General Purpose SSD (gp2)", + "D. D. Amazon EBS Cold HDD (sc1)" + ], + "correct": "C. C. Amazon EBS General Purpose SSD (gp2)", + "explanation": "Explanation/Reference: In this scenario, a legacy batch application which has steady-state workloads requires a relational My SQL database. The EBS volume that you should use has to handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 inst ance. Since HDD volumes cannot be used as a bootable volume, we can narrow down our options by selecting SSD volumes. In addition, SSD volumes are more suitable for transactional database worklo ads, as shown in the table below: General Purpose SSD (gp2) volumes offer cost-effect ive storage that is ideal for a broad range of workloads. These volumes deliver single-digit milli second latencies and the ability to burst to 3,000 IOPS for extended periods of time. AWS designs gp2 volum es to deliver the provisioned performance 99% of th e time. A gp2 volume can range in size from 1 GiB to 16 TiB. Amazon EBS Provisioned IOPS SSD (io1) is incorrect because this is not the most cost-effective EBS type and is primarily used for critical business ap plications that require sustained IOPS performance. Amazon EBS Throughput Optimized HDD (st1) is incorr ect because this is primarily used for frequently accessed, throughput-intensive workloads. Although it is a low-cost HDD volume, it cannot be used as a system boot volume. Amazon EBS Cold HDD (sc1) is incorrect. 
Although Am azon EBS Cold HDD provides lower cost HDD volume compared to General Purpose SSD, it cannot b e used as a system boot volume.", + "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSVolumeTypes.html#EBSVolumeTypes_gp2 Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/" + }, + { + "question": "A loan processing application is hosted in a single On-Demand EC2 instance in your VPC. To improve the scalability of your application, you have to use Au to Scaling to automatically add new EC2 instances t o handle a surge of incoming requests. Which of the following items should be done in orde r to add an existing EC2 instance to an Auto Scalin g group? (Select TWO.) A. A. You have to ensure that the instance is launch ed in one of the Availability Zones defined in your Auto Scaling group.", + "options": [ + "B. B. You must stop the instance first.", + "C. C. You have to ensure that the AMI used to launch the instance still exists.", + "D. D. You have to ensure that the instance is in a d ifferent Availability Zone as the Auto Scaling grou p." + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon EC2 Auto Scaling provides you with an option to enable automatic scaling for one or more EC2 instances by attaching them to your existing Auto S caling group. After the instances are attached, the y become a part of the Auto Scaling group. The instance that you want to attach must meet the following criteria: - The instance is in the running state. - The AMI used to launch the instance must still ex ist. - The instance is not a member of another Auto Scal ing group. - The instance is launched into one of the Availabi lity Zones defined in your Auto Scaling group. - If the Auto Scaling group has an attached load ba lancer, the instance and the load balancer must bot h be in EC2-Classic or the same VPC. If the Auto Scaling group has an attached target group, the instance a nd the load balancer must both be in the same VPC. Based on the above criteria, the following are the correct answers among the given options: - You have to ensure that the AMI used to launch th e instance still exists. - You have to ensure that the instance is launched in one of the Availability Zones defined in your Auto Scaling group. The option that says: You must stop the instance fi rst is incorrect because you can directly add a run ning EC2 instance to an Auto Scaling group without stopp ing it. The option that says: You have to ensure that the A MI used to launch the instance no longer exists is incorrect because it should be the other way around . The AMI used to launch the instance should still exist. The option that says: You have to ensure that the i nstance is in a different Availability Zone as the Auto Scaling group is incorrect because the instanc e should be launched in one of the Availability Zon es defined in your Auto Scaling group. References: http://docs.aws.amazon.com/autoscaling/latest/userg uide/attach-instance-asg.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/scaling_plan.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "An e-commerce application is using a fanout messagi ng pattern for its order management system. 
For eve ry order, it sends an Amazon SNS message to an SNS top ic, and the message is replicated and pushed to multiple Amazon SQS queues for parallel asynchronou s processing. A Spot EC2 instance retrieves the message from each SQS queue and processes the messa ge. There was an incident that while an EC2 instance is currently processing a message, the ins tance was abruptly terminated, and the processing w as not completed in time. In this scenario, what happens to the SQS message?", + "options": [ + "A. A. The message will be sent to a Dead Letter Queu e in AWS DataSync.", + "B. B. The message is deleted and becomes duplicated in the SQS when the EC2 instance comes online.", + "C. C. When the message visibility timeout expires, t he message becomes available for processing by othe r", + "D. D. The message will automatically be assigned to the same EC2 instance when it comes back online wit hin" + ], + "correct": "C. C. When the message visibility timeout expires, t he message becomes available for processing by othe r", + "explanation": "Explanation/Reference: A \"fanout\" pattern is when an Amazon SNS message is sent to a topic and then replicated and pushed to multiple Amazon SQS queues, HTTP endpoints, or emai l addresses. This allows for parallel asynchronous processing. For example, you could develop an appli cation that sends an Amazon SNS message to a topic whenever an order is placed for a product. Then, th e Amazon SQS queues that are subscribed to that top ic would receive identical notifications for the new o rder. The Amazon EC2 server instance attached to on e of the queues could handle the processing or fulfillme nt of the order, while the other server instance co uld be attached to a data warehouse for analysis of all or ders received. When a consumer receives and processes a message fr om a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message . Because Amazon SQS is a distributed system, there's no guarantee that the consumer actually rec eives the message (for example, due to a connectivi ty issue, or due to an issue in the consumer applicati on). Thus, the consumer must delete the message fro m the queue after receiving and processing it. Immediately after the message is received, it remai ns in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a vis ibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours. The option that says: The message will automaticall y be assigned to the same EC2 instance when it comes back online within or after the visibility ti meout is incorrect because the message will not be automatically assigned to the same EC2 instance onc e it is abruptly terminated. When the message visibility timeout expires, the message becomes ava ilable for processing by other EC2 instances. The option that says: The message is deleted and be comes duplicated in the SQS when the EC2 instance comes online is incorrect because the message will not be deleted and won't be duplicated in the SQS queue when the EC2 instance comes online. The option that says: The message will be sent to a Dead Letter Queue in AWS DataSync is incorrect because although the message could be programmatica lly sent to a Dead Letter Queue (DLQ), it won't be handled by AWS DataSync but by Amazon SQS instead. 
AWS DataSync is primarily used to simplify your migration with AWS. It makes it simple and fast to move large amounts of data online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). References: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", + "references": "" + }, + { + "question": "A company needs to use Amazon S3 to store irreproducible financial documents. For their quarterly reporting, the files are required to be retrieved after a period of 3 months. There will be some occasions when a surprise audit will be held, which requires access to the archived data that they need to present immediately. What will you do to satisfy this requirement in a cost-effective way?", + "options": [ + "A. Use Amazon S3 Standard", + "B. Use Amazon S3 Standard - Infrequent Access", + "C. Use Amazon S3 - Intelligent Tiering", + "D. Use Amazon Glacier Deep Archive" + ], + "correct": "B. Use Amazon S3 Standard - Infrequent Access", + "explanation": "Explanation/Reference: In this scenario, the requirement is to have a storage option that is cost-effective and has the ability to access or retrieve the archived data immediately. The cost-effective options are Amazon Glacier Deep Archive and Amazon S3 Standard - Infrequent Access (Standard - IA). However, the former option is not designed for rapid retrieval of data, which is required for the surprise audit. Hence, using Amazon Glacier Deep Archive is incorrect and the best answer is to use Amazon S3 Standard - Infrequent Access. Using Amazon S3 Standard is incorrect because the standard storage class is not cost-efficient in this scenario. It costs more than Glacier Deep Archive and S3 Standard - Infrequent Access. Using Amazon S3 - Intelligent Tiering is incorrect because the Intelligent Tiering storage class entails an additional fee for monitoring and automation of each object in your S3 bucket vs. the Standard storage class and S3 Standard - Infrequent Access. Amazon S3 Standard - Infrequent Access is an Amazon S3 storage class for data that is accessed less frequently but requires rapid access when needed. Standard - IA offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes Standard - IA ideal for long-term storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is set at the object level and can exist in the same bucket as Standard, allowing you to use lifecycle policies to automatically transition objects between storage classes without any application changes. References: https://aws.amazon.com/s3/storage-classes/ https://aws.amazon.com/s3/faqs/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ S3 Standard vs S3 Standard-IA vs S3 One Zone IA vs S3 Intelligent Tiering: https://tutorialsdojo.com/s3-standard-vs-s3-standard-ia-vs-s3-one-zone-ia/", + "references": "" + }, + { + "question": "A company has a running m5ad.large EC2 instance with a default attached 75 GB SSD instance-store backed volume. You shut it down and then start the instance. You noticed that the data which you have saved earlier on the attached volume is no longer available.
What might be the cause of this?", + "options": [ + "A. A. The EC2 instance was using EBS backed root vol umes, which are ephemeral and only live for the lif e of", + "B. B. The EC2 instance was using instance store volu mes, which are ephemeral and only live for the life of the", + "C. C. The volume of the instance was not big enough to handle all of the processing data.", + "D. D. The instance was hit by a virus that wipes out all data." + ], + "correct": "B. B. The EC2 instance was using instance store volu mes, which are ephemeral and only live for the life of the", + "explanation": "Explanation/Reference: An instance store provides temporary block-level st orage for your instance. This storage is located on disks that are physically attached to the host comp uter. Instance store is ideal for temporary storage of information that changes frequently, such as buffer s, caches, scratch data, and other temporary conten t, or for data that is replicated across a fleet of insta nces, such as a load-balanced pool of web servers. An instance store consists of one or more instance store volumes exposed as block devices. The size of an instance store as well as the number of devices ava ilable varies by instance type. While an instance s tore is dedicated to a particular instance, the disk subsys tem is shared among instances on a host computer. The data in an instance store persists only during the lifetime of its associated instance. If an inst ance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances: - The underlying disk drive fails - The instance stops - The instance terminates", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ InstanceStorage.html Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" + }, + { + "question": "A company has several microservices that send messa ges to an Amazon SQS queue and a backend application that poll the queue to process the mess ages. The company also has a Service Level Agreemen t (SLA) which defines the acceptable amount of time t hat can elapse from the point when the messages are received until a response is sent. The backend oper ations are I/O-intensive as the number of messages is constantly growing, causing the company to miss its SLA. The Solutions Architect must implement a new architecture that improves the application's proces sing time and load management. Which of the following is the MOST effective soluti on that can satisfy the given requirement?", + "options": [ + "A. A. Create an AMI of the backend application's EC2 instance and launch it to a cluster placement grou p.", + "B. B. Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto Scaling group", + "C. C. Create an AMI of the backend application's EC2 instance and replace it with a larger instance siz e.", + "D. D. Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto Scaling group" + ], + "correct": "D. D. Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto Scaling group", + "explanation": "Explanation/Reference: Amazon Simple Queue Service (SQS) is a fully manage d message queuing service that enables you to decouple and scale microservices, distributed syste ms, and serverless applications. 
SQS eliminates the complexity and overhead associated with managing an d operating message-oriented middleware and empowers developers to focus on differentiating wor k. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other service s to be available. The ApproximateAgeOfOldestMessage metric is useful when applications have time-sensitive messages and you need to ensure that messages are processed within a specific time period. You can use this met ric to set Amazon CloudWatch alarms that issue alerts w hen messages remain in the queue for extended periods of time. You can also use alerts to take ac tion, such as increasing the number of consumers to process messages more quickly. With a target tracking scaling policy, you can scal e (increase or decrease capacity) a resource based on a target value for a specific CloudWatch metric. To c reate a custom metric for this policy, you need to use AWS CLI or AWS SDKs. Take note that you need to cre ate an AMI from the instance first before you can create an Auto Scaling group to scale the instances based on the ApproximateAgeOfOldestMessage metric. Hence, the correct answer is: Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto Scaling Group and configure a tar get tracking scaling policy based on the ApproximateAgeOfOldestMessage metric. The option that says: Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto Scaling Group and configure a target tra cking scaling policy based on the CPUUtilization metric with a target value of 80% is incorrect. Alt hough this will improve the backend processing, the scaling policy based on the CPUUtilization metric i s not meant for time-sensitive messages where you n eed to ensure that the messages are processed within a specific time period. It will only trigger the scal e-out activities based on the CPU Utilization of the curr ent instances, and not based on the age of the mess age, which is a crucial factor in meeting the SLA. To sa tisfy the requirement in the scenario, you should u se the ApproximateAgeOfOldestMessage metric. The option that says: Create an AMI of the backend application's EC2 instance and replace it with a larger instance size is incorrect because replacing the instance with a large size won't be enough to dynamically handle workloads at any level. You need to implement an Auto Scaling group to automaticall y adjust the capacity of your computing resources. The option that says: Create an AMI of the backend application's EC2 instance and launch it to a cluster placement group is incorrect because a clus ter placement group is just a logical grouping of E C2 instances. Instead of launching the instance in a p lacement group, you must set up an Auto Scaling gro up for your EC2 instances and configure a target track ing scaling policy based on the ApproximateAgeOfOldestMessage metric. 
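For illustration, a minimal sketch of such a target tracking policy using the AWS SDK for Python (boto3) might look like the following; the Auto Scaling group name, queue name, and the 300-second target value are hypothetical and would depend on the company's actual SLA.

import boto3

autoscaling = boto3.client('autoscaling')

# Hypothetical names: the Auto Scaling group launched from the new AMI and the
# SQS queue that the backend application polls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='backend-processing-asg',
    PolicyName='scale-on-oldest-message-age',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'CustomizedMetricSpecification': {
            'MetricName': 'ApproximateAgeOfOldestMessage',
            'Namespace': 'AWS/SQS',
            'Dimensions': [{'Name': 'QueueName', 'Value': 'orders-queue'}],
            'Statistic': 'Maximum',
        },
        # Assumed SLA threshold: keep the oldest message below 5 minutes. The group
        # scales out when the metric rises above this target and scales in when it falls.
        'TargetValue': 300.0,
    },
)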
References: https://aws.amazon.com/about-aws/whats-new/2016/08/ new-amazon-cloudwatch-metric-for-amazon-sqs- monitors-the-age-of-the-oldest-message/ https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-available-cloudwatch- metrics.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-using-sqs-queue.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", + "references": "" + }, + { + "question": "A company needs secure access to its Amazon RDS for MySQL database that is used by multiple applications. Each IAM user must use a short-lived authentication token to connect to the database. Which of the following is the most suitable solutio n in this scenario?", + "options": [ + "A. A. Use AWS Secrets Manager to generate and store short-lived authentication tokens.", + "B. B. Use an MFA token to access and connect to a da tabase.", + "C. C. Use IAM DB Authentication and create database accounts using the AWS-provided", + "D. D. Use AWS SSO to access the RDS database." + ], + "correct": "C. C. Use IAM DB Authentication and create database accounts using the AWS-provided", + "explanation": "Explanation/Reference: You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works w ith MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you c onnect to a DB instance. An authentication token is a string of characters t hat you use instead of a password. After you genera te an authentication token, it's valid for 15 minutes bef ore it expires. If you try to connect using an expi red token, the connection request is denied. Since the scenario asks you to create a short-lived authentication token to access an Amazon RDS datab ase, you can use an IAM database authentication when con necting to a database instance. Authentication is handled by AWSAuthenticationPlugin--an AWS-provided plugin that works seamlessly with IAM to authentic ate your IAM users. IAM database authentication provides the following benefits: Network traffic to and from the database is encrypt ed using Secure Sockets Layer (SSL). You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for gre ater security Hence, the correct answer is the option that says: Use IAM DB Authentication and create database accounts using the AWS-provided AWSAuthenticationPl ugin plugin in MySQL. The options that say: Use AWS SSO to access the RDS database is incorrect because AWS SSO just enables you to centrally manage SSO access and user permissions for all of your AWS accounts managed through AWS Organizations. The option that says: Use AWS Secrets Manager to ge nerate and store short-lived authentication tokens is incorrect because AWS Secrets Manager is not a suitable service to create an authentication token to access an Amazon RDS database. It's primarily us ed to store passwords, secrets, and other sensitive credentials. It can't generate a short-lived token either. You have to use IAM DB Authentication inste ad. The option that says: Use an MFA token to access an d connect to a database is incorrect because you can't use an MFA token to connect to your database. You have to set up IAM DB Authentication instead. 
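For illustration, generating such a short-lived token with the AWS SDK for Python (boto3) could look like the sketch below; the endpoint, port, and database user are hypothetical placeholders.

import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Hypothetical RDS endpoint and a database user that was created with the
# AWS-provided AWSAuthenticationPlugin.
token = rds.generate_db_auth_token(
    DBHostname='mydb.abcdefg12345.us-east-1.rds.amazonaws.com',
    Port=3306,
    DBUsername='app_user',
)

# The token is passed as the password when opening the MySQL connection over SSL,
# and it expires 15 minutes after it is generated.
print(token)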
References: https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/ UsingWithRDS.IAMDBAuth.Connecting.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/UsingWithRDS.IAMDBAuth.DBAccounts.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "A company has a web application hosted on a fleet o f EC2 instances located in two Availability Zones t hat are all placed behind an Application Load Balancer. As a Solutions Architect, you have to add a health check configuration to ensure your application is h ighly-available. Which health checks will you implement?", + "options": [ + "A. A. ICMP health check", + "B. B. FTP health check", + "C. C. HTTP or HTTPS health check", + "D. D. TCP health check" + ], + "correct": "C. C. HTTP or HTTPS health check", + "explanation": "Explanation/Reference: A load balancer takes requests from clients and dis tributes them across the EC2 instances that are reg istered with the load balancer. You can create a load balan cer that listens to both the HTTP (80) and HTTPS (4 43) ports. If you specify that the HTTPS listener sends requests to the instances on port 80, the load bal ancer terminates the requests, and communication from the load balancer to the instances is not encrypted. I f the HTTPS listener sends requests to the instances on p ort 443, communication from the load balancer to th e instances is encrypted. If your load balancer uses an encrypted connection to communicate with the instances, you can optional ly enable authentication of the instances. This ensure s that the load balancer communicates with an insta nce only if its public key matches the key that you spe cified to the load balancer for this purpose. The type of ELB that is mentioned in this scenario is an Application Elastic Load Balancer. This is us ed if you want a flexible feature set for your web applic ations with HTTP and HTTPS traffic. Conversely, it only allows 2 types of health check: HTTP and HTTPS. Hence, the correct answer is: HTTP or HTTPS health check. ICMP health check and FTP health check are incorrec t as these are not supported. TCP health check is incorrect. A TCP health check i s only offered in Network Load Balancers and Classi c Load Balancers. References: http://docs.aws.amazon.com/elasticloadbalancing/lat est/classic/elb-healthchecks.html https://docs.aws.amazon.com/elasticloadbalancing/la test/application/introduction.html Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ EC2 Instance Health Check vs ELB Health Check vs Au to Scaling and Custom Health Check: https://tutorialsdojo.com/ec2-instance-health-check -vs-elb-health-check-vs-auto-scaling-and-custom-hea lth- check/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "A startup needs to use a shared file system for its .NET web application running on an Amazon EC2 Windows instance. The file system must provide a hi gh level of throughput and IOPS that can also be integrated with Microsoft Active Directory. Which is the MOST suitable service that you should use to achieve this requirement?", + "options": [ + "A. A. Amazon FSx for Windows File Server", + "B. B. AWS Storage Gateway - File Gateway", + "C. C. Amazon EBS Provisioned IOPS SSD volumes", + "D. D. Amazon Elastic File System" + ], + "correct": "A. A. 
Amazon FSx for Windows File Server", + "explanation": "Explanation/Reference: Amazon FSx for Windows File Server provides fully m anaged, highly reliable, and scalable file storage accessible over the industry-standard Service Messa ge Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative f eatures such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx supports the use of Microsoft's Distribu ted File System (DFS) Namespaces to scale-out performance across multiple file systems in the sam e namespace up to tens of Gbps and millions of IOPS . The key phrases in this scenario are \"file system\" and \"Active Directory integration.\" You need to implement a solution that will meet these requireme nts. Among the options given, the possible answers are FSx Windows File Server and File Gateway. But you n eed to consider that the question also states that you need to provide a high level of throughput and IOPS . Amazon FSx Windows File Server can scale-out storage to hundreds of petabytes of data with tens of GB/s of throughput performance and millions of I OPS. Hence, the correct answer is: Amazon FSx for Window s File Server. Amazon EBS Provisioned IOPS SSD volumes is incorrec t because this is just a block storage volume and not a full-fledged file system. Amazon EBS is prima rily used as persistent block storage for EC2 insta nces. Amazon Elastic File System is incorrect because it is stated in the scenario that the startup uses an Amazon EC2 Windows instance. Remember that Amazon E FS can only handle Linux workloads. AWS Storage Gateway - File Gateway is incorrect. Al though it can be used as a shared file system for Windows and can also be integrated with Microsoft A ctive Directory, Amazon FSx still has a higher leve l of throughput and IOPS compared with AWS Storage Gateway. Amazon FSX is capable of providing hundreds of thousands (or even millions) of IOPS. References: https://aws.amazon.com/fsx/windows/faqs/ https://docs.aws.amazon.com/fsx/latest/WindowsGuide /what-is.html Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", + "references": "" + }, + { + "question": "A company plans to implement a hybrid architecture. They need to create a dedicated connection from th eir Amazon Virtual Private Cloud (VPC) to their on-prem ises network. The connection must provide high bandwidth throughput and a more consistent network experience than Internet-based solutions. Which of the following can be used to create a priv ate connection between the VPC and the company's on - premises network?", + "options": [ + "A. A. Transit VPC", + "B. B. AWS Site-to-Site VPN", + "C. C. AWS Direct Connect", + "D. D. Transit Gateway with equal-cost multipath rout ing (ECMP)" + ], + "correct": "C. C. AWS Direct Connect", + "explanation": "Explanation/Reference: AWS Direct Connect links your internal network to a n AWS Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Dire ct Connect router. With this connection, you can create virtual interf aces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing internet ser vice providers in your network path. An AWS Direct Connect location provides access to AWS in t he region with which it is associated. 
You can use a single connection in a public Region or AWS GovClou d (US) to access public AWS services in all other public Regions Hence, the correct answer is: AWS Direct Connect. The option that says: Transit VPC is incorrect beca use this in itself is not enough to integrate your on- premises network to your VPC. You have to either us e a VPN or a Direct Connect connection. A transit VPC is primarily used to connect multiple VPCs and remote networks in order to create a global network transit center and not for establishing a dedicated connection to your on-premises network. The option that says: Transit Gateway with equal-co st multipath routing (ECMP) is incorrect because a transit gateway is commonly used to connect multipl e VPCs and on-premises networks through a central hub. Just like transit VPC, a transit gateway is no t capable of establishing a direct and dedicated co nnection to your on-premises network. The option that says: AWS Site-to-Site VPN is incor rect because this type of connection traverses the public Internet. Moreover, it doesn't provide a hig h bandwidth throughput and a more consistent networ k experience than Internet-based solutions. References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/connect-vpc/ https://docs.aws.amazon.com/directconnect/latest/Us erGuide/Welcome.html Check out this AWS Direct Connect Cheat Sheet: https://tutorialsdojo.com/aws-direct-connect/ S3 Transfer Acceleration vs Direct Connect vs VPN v s Snowball vs Snowmobile: https://tutorialsdojo.com/s3-transfer-acceleration- vs-direct-connect-vs-vpn-vs-snowball-vs-snowmobile/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", + "references": "" + }, + { + "question": "A startup launched a fleet of on-demand EC2 instanc es to host a massively multiplayer online role-play ing game (MMORPG). The EC2 instances are configured wit h Auto Scaling and AWS Systems Manager. What can be used to configure the EC2 instances wit hout having to establish an RDP or SSH connection t o each instance?", + "options": [ + "A. A. EC2Config", + "B. B. AWS Config", + "C. C. Run Command", + "D. D. AWS CodePipeline" + ], + "correct": "C. C. Run Command", + "explanation": "Explanation/Reference: You can use Run Command from the console to configu re instances without having to login to each instance. AWS Systems Manager Run Command lets you remotely a nd securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Sys tems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command L ine Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost. Hence, the correct answer is: Run Command.", + "references": "https://docs.aws.amazon.com/systems-manager/latest/ userguide/execute-remote-commands.html AWS Systems Manager Overview: https://www.youtube.com/watch?v=KVFKyMAHxqY Check out this AWS Systems Manager Cheat Sheet: https://tutorialsdojo.com/aws-systems-manager/" + }, + { + "question": "A company has a UAT and production EC2 instances ru nning on AWS. They want to ensure that employees who are responsible for the UAT instances don't have the access to work on the production instances to minimize security risks. 
Which of the following would be the best way to ach ieve this?", + "options": [ + "A. A. Define the tags on the UAT and production serv ers and add a condition to the IAM policy which all ows", + "B. B. Launch the UAT and production instances in dif ferent Availability Zones and use Multi Factor", + "C. C. Launch the UAT and production EC2 instances in separate VPC's connected by VPC peering.", + "D. D. Provide permissions to the users via the AWS R esource Access Manager (RAM) service to only accessEC2 instances that are used for production or devel opment." + ], + "correct": "A. A. Define the tags on the UAT and production serv ers and add a condition to the IAM policy which all ows", + "explanation": "Explanation/Reference: For this scenario, the best way to achieve the requ ired solution is to use a combination of Tags and I AM policies. You can define the tags on the UAT and pr oduction EC2 instances and add a condition to the I AM policy which allows access to specific tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many reso urces of the same type -- you can quickly identify a specific resource based on the tags you've assigned to it. By default, IAM users don't have permission to crea te or modify Amazon EC2 resources, or perform tasks using the Amazon EC2 API. (This means that they als o can't do so using the Amazon EC2 console or CLI.) To allow IAM users to create or modify resources an d perform tasks, you must create IAM policies that grant IAM users permission to use the specific reso urces and API actions they'll need, and then attach those policies to the IAM users or groups that require th ose permissions. Hence, the correct answer is: Define the tags on th e UAT and production servers and add a condition to the IAM policy which allows access to specific t ags. The option that says: Launch the UAT and production EC2 instances in separate VPC's connected by VPC peering is incorrect because these are just net work changes to your cloud architecture and don't h ave any effect on the security permissions of your user s to access your EC2 instances. The option that says: Provide permissions to the us ers via the AWS Resource Access Manager (RAM) service to only access EC2 instances that are used for production or development is incorrect because the AWS Resource Access Manager (RAM) is primarily used to securely share your resources across AWS accounts or within your Organization and not on a s ingle AWS account. You also have to set up a custom IAM Policy in order for this to work. The option that says: Launch the UAT and production instances in different Availability Zones and use Multi Factor Authentication is incorrect becaus e placing the EC2 instances to different AZs will o nly improve the availability of the systems but won't h ave any significance in terms of security. You have to set up an IAM Policy that allows access to EC2 instance s based on their tags. In addition, a Multi-Factor Authentication is not a suitable security feature t o be implemented for this scenario. 
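As a rough sketch of this approach (the Environment tag key, its UAT value, and the specific EC2 actions below are assumptions for illustration), the tag-conditioned policy could be created with the AWS SDK for Python (boto3) as follows:

import json
import boto3

iam = boto3.client('iam')

# Instances are assumed to be tagged Environment=UAT or Environment=Production.
# This policy only allows the listed actions on instances carrying the UAT tag.
uat_only_policy = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Effect': 'Allow',
            'Action': ['ec2:StartInstances', 'ec2:StopInstances', 'ec2:RebootInstances'],
            'Resource': 'arn:aws:ec2:*:*:instance/*',
            'Condition': {'StringEquals': {'ec2:ResourceTag/Environment': 'UAT'}},
        }
    ],
}

iam.create_policy(
    PolicyName='uat-instances-only',
    PolicyDocument=json.dumps(uat_only_policy),
)

The resulting managed policy would then be attached to the IAM group of the users who are responsible for the UAT instances only.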
References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ Using_Tags.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /iam-policies-for-amazon-ec2.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "An investment bank has a distributed batch processi ng application which is hosted in an Auto Scaling group of Spot EC2 instances with an SQS queue. You configured your components to use client-side buffering so that the calls made from the client wi ll be buffered first and then sent as a batch reque st to SQS. What is a period of time during which the SQS queue prevents other consuming components from receiving and processing a message?", + "options": [ + "A. A. Processing Timeout", + "B. B. Receiving Timeout", + "C. C. Component Timeout", + "D. D. Visibility Timeout" + ], + "correct": "D. D. Visibility Timeout", + "explanation": "Explanation/Reference: The visibility timeout is a period of time during w hich Amazon SQS prevents other consuming components from receiving and processing a message. When a consumer receives and processes a message fr om a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message . Because Amazon SQS is a distributed system, there's no guarantee that the consumer actually rec eives the message (for example, due to a connectivi ty issue, or due to an issue in the consumer applicati on). Thus, the consumer must delete the message fro m the queue after receiving and processing it. Immediately after the message is received, it remai ns in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a vis ibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours. References: https://aws.amazon.com/sqs/faqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-visibility-timeout.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", + "references": "" + }, + { + "question": "An organization created a new CloudFormation templa te that creates 4 EC2 instances that are connected to one Elastic Load Balancer (ELB). Which section of t he template should be configured to get the Domain Name Server hostname of the ELB upon the creation o f the AWS stack?", + "options": [ + "A. A. Resources", + "B. B. Parameters", + "C. C. Mappings", + "D. D. Outputs" + ], + "correct": "D. D. Outputs", + "explanation": "Explanation/Reference: Outputs is an optional section of the CloudFormatio n template that describes the values that are retur ned whenever you view your stack's properties.", + "references": "https://docs.aws.amazon.com/AWSCloudFormation/lates t/UserGuide/template-anatomy.html https://aws.amazon.com/cloudformation/ Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/ AWS CloudFormation - Templates, Stacks, Change Sets : https://www.youtube.com/watch?v=9Xpuprxg7aY" + }, + { + "question": "A company is planning to launch a High Performance Computing (HPC) cluster in AWS that does Computational Fluid Dynamics (CFD) simulations. The solution should scale-out their simulation jobs to experiment with more tunable parameters for faster and more accurate results. The cluster is composed of Windows servers hosted on t3a.medium EC2 instances. 
As the Solutions Architect, you should ensure that the architecture provides higher bandwidth, higher packet per second (PPS) performance, and consistent ly lower inter-instance latencies. Which is the MOST suitable and cost-effective solut ion that the Architect should implement to achieve the above requirements?", + "options": [ + "A. A. Use AWS ParallelCluster to deploy and manage t he HPC cluster to provide higher bandwidth, higher", + "B. B. Enable Enhanced Networking with Intel 82599 Vi rtual Function (VF) interface on the Windows EC2", + "C. C. Enable Enhanced Networking with Elastic Fabric Adapter (EFA) on the Windows EC2 Instances.", + "D. D. Enable Enhanced Networking with Elastic Networ k Adapter (ENA) on the Windows EC2 Instances." + ], + "correct": "D. D. Enable Enhanced Networking with Elastic Networ k Adapter (ENA) on the Windows EC2 Instances.", + "explanation": "Explanation/Reference: Enhanced networking uses single root I/O virtualiza tion (SR-IOV) to provide high-performance networkin g capabilities on supported instance types. SR-IOV is a method of device virtualization that provides hi gher I/O performance and lower CPU utilization when comp ared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, high er packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networki ng. Amazon EC2 provides enhanced networking capabilitie s through the Elastic Network Adapter (ENA). It supports network speeds of up to 100 Gbps for suppo rted instance types. Elastic Network Adapters (ENAs ) provide traditional IP networking features that are required to support VPC networking. An Elastic Fabric Adapter (EFA) is simply an Elasti c Network Adapter (ENA) with added capabilities. It provides all of the functionality of an ENA, with a dditional OS-bypass functionality. OS-bypass is an access model that allows HPC and machine learning a pplications to communicate directly with the networ k interface hardware to provide low-latency, reliable transport functionality. The OS-bypass capabilities of EFAs are not supporte d on Windows instances. If you attach an EFA to a Windows instance, the instance functions as an Elas tic Network Adapter, without the added EFA capabilities. Hence, the correct answer is to enable Enhanced Net working with Elastic Network Adapter (ENA) on the Windows EC2 Instances. Enabling Enhanced Networking with Elastic Fabric Ad apter (EFA) on the Windows EC2 Instances is incorrect because the OS-bypass capabilities of the Elastic Fabric Adapter (EFA) are not supported on Windows instances. Although you can attach EFA to y our Windows instances, this will just act as a regu lar Elastic Network Adapter, without the added EFA capa bilities. Moreover, it doesn't support the t3a.medi um instance type that is being used in the HPC cluster . Enabling Enhanced Networking with Intel 82599 Virtu al Function (VF) interface on the Windows EC2 Instances is incorrect because although you can attach an Intel 82599 Virtual Function (VF) interf ace on your Windows EC2 Instances to improve its networ king capabilities, it doesn't support the t3a.mediu m instance type that is being used in the HPC cluster . 
Using AWS ParallelCluster to deploy and manage the HPC cluster to provide higher bandwidth, higher packet per second (PPS) performance, and low er inter-instance latencies is incorrect because an AWS ParallelCluster is just an AWS-supported ope n-source cluster management tool that makes it easy for you to deploy and manage High Performance Compu ting (HPC) clusters on AWS. It does not provide higher bandwidth, higher packet per second (PPS) pe rformance, and lower inter-instance latencies, unli ke ENA or EFA. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /enhanced-networking.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /efa.html", + "references": "" + }, + { + "question": "A Solutions Architect needs to ensure that all of t he AWS resources in Amazon VPC don't go beyond thei r respective service limits. The Architect should pre pare a system that provides real-time guidance in provisioning resources that adheres to the AWS best practices. Which of the following is the MOST appropriate serv ice to use to satisfy this task?", + "options": [ + "A. A. Amazon Inspector", + "B. B. AWS Trusted Advisor", + "C. C. AWS Cost Explorer", + "D. D. AWS Budgets" + ], + "correct": "B. B. AWS Trusted Advisor", + "explanation": "Explanation/Reference: AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps. Whether establishing new workflows, developing appl ications, or as part of ongoing improvement, take advantage of the recommendations provided by Truste d Advisor on a regular basis to help keep your solutions provisioned optimally. Trusted Advisor includes an ever-expanding list of checks in the following five categories: Cost Optimization recommendations that can potenti ally save you money by highlighting unused resources and opportunities to reduce your bill. Security identification of security settings that could make your AWS solution less secure. Fault Tolerance recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources. Performance recommendations that can help to impro ve the speed and responsiveness of your applications. Service Limits recommendations that will tell you when service usage is more than 80% of the service limit. Hence, the correct answer in this scenario is AWS T rusted Advisor. AWS Cost Explorer is incorrect because this is just a tool that enables you to view and analyze your c osts and usage. You can explore your usage and costs usi ng the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. It has an easy-to-use interface that lets you visualize, und erstand, and manage your AWS costs and usage over time. AWS Budgets is incorrect because it simply gives yo u the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to ex ceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization dr ops below the threshold you define. Amazon Inspector is incorrect because it is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. 
Amazon Inspector automatically assesses applications for exposure, vulnerabilities , and deviations from best practices. References: https://aws.amazon.com/premiumsupport/technology/tr usted-advisor/ https://aws.amazon.com/premiumsupport/technology/tr usted-advisor/faqs/ Check out this AWS Trusted Advisor Cheat Sheet: https://tutorialsdojo.com/aws-trusted-advisor/", + "references": "" + }, + { + "question": "A local bank has an in-house application that handl es sensitive financial data in a private subnet. Af ter the data is processed by the EC2 worker instances, they will be delivered to S3 for ingestion by other ser vices. How should you design this solution so that the dat a does not pass through the public Internet?", + "options": [ + "A. A. Provision a NAT gateway in the private subnet with a corresponding route entry that directs the d ata to", + "B. B. Create an Internet gateway in the public subne t with a corresponding route entry that directs the data to", + "C. C. Configure a VPC Endpoint along with a correspo nding route entry that directs the data to S3.", + "D. D. Configure a Transit gateway along with a corre sponding route entry that directs the data to S3." + ], + "correct": "C. C. Configure a VPC Endpoint along with a correspo nding route entry that directs the data to S3.", + "explanation": "Explanation/Reference: The important concept that you have to understand i n this scenario is that your VPC and your S3 bucket are located within the larger AWS network. However, the traffic coming from your VPC to your S3 bucket is traversing the public Internet by default. To bette r protect your data in transit, you can set up a VP C endpoint so the incoming traffic from your VPC will not pass through the public Internet, but instead through the private AWS network. A VPC endpoint enables you to privately connect you r VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring a n Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VP C do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other services does not le ave the Amazon network. Endpoints are virtual devices. They are horizontall y scaled, redundant, and highly available VPC components that allow communication between instanc es in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. Hence, the correct answer is: Configure a VPC Endpo int along with a corresponding route entry that directs the data to S3. The option that says: Create an Internet gateway in the public subnet with a corresponding route entry that directs the data to S3 is incorrect beca use the Internet gateway is used for instances in t he public subnet to have accessibility to the Internet . The option that says: Configure a Transit gateway a long with a corresponding route entry that directs the data to S3 is incorrect because the Transit Gat eway is used for interconnecting VPCs and on-premis es networks through a central hub. Since Amazon S3 is outside of VPC, you still won't be able to connect to it privately. The option that says: Provision a NAT gateway in th e private subnet with a corresponding route entry that directs the data to S3 is incorrect because NA T Gateway allows instances in the private subnet to gain access to the Internet, but not vice versa. 
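For illustration, a gateway endpoint for S3 could be provisioned with the AWS SDK for Python (boto3) as in the minimal sketch below; the region, VPC ID, and route table ID are hypothetical placeholders.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# The route table is assumed to be the one associated with the private subnet that
# hosts the worker instances, so their S3 traffic stays on the private AWS network.
response = ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',
    ServiceName='com.amazonaws.us-east-1.s3',
    RouteTableIds=['rtb-0123456789abcdef0'],
)
print(response['VpcEndpoint']['VpcEndpointId'])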
References: https://docs.aws.amazon.com/vpc/latest/userguide/vp c-endpoints.html https://docs.aws.amazon.com/vpc/latest/userguide/vp ce-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "An online shopping platform is hosted on an Auto Sc aling group of On-Demand EC2 instances with a default Auto Scaling termination policy and no inst ance protection configured. The system is deployed across three Availability Zones in the US West regi on (us-west-1) with an Application Load Balancer in front to provide high availability and fault tolera nce for the shopping platform. The us-west-1a, us-w est-1b, and us-west-1c Availability Zones have 10, 8 and 7 running instances respectively. Due to the low numb er of incoming traffic, the scale-in operation has bee n triggered. Which of the following will the Auto Scaling group do to determine which instance to terminate first i n this scenario? (Select THREE.)", + "options": [ + "A. A. Select the instance that is farthest to the next billing hour. B. B. Select the instance that is closest to the next billing hour.", + "C. C. Select the instances with the most recent laun ch configuration.", + "D. D. Choose the Availability Zone with the most num ber of instances, which is the us-west-1a Availabil ity" + ], + "correct": "", + "explanation": "Explanation/Reference: The default termination policy is designed to help ensure that your network architecture spans Availab ility Zones evenly. With the default termination policy, the behavior of the Auto Scaling group is as follow s: 1. If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not pro tected from scale in. If there is more than one Ava ilability Zone with this number of instances, choose the Avai lability Zone with the instances that use the oldes t launch configuration. 2. Determine which unprotected instances in the sel ected Availability Zone use the oldest launch configuration. If there is one such instance, termi nate it. 3. If there are multiple instances to terminate bas ed on the above criteria, determine which unprotect ed instances are closest to the next billing hour. (Th is helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is on e such instance, terminate it. 4. If there is more than one unprotected instance c losest to the next billing hour, choose one of thes e instances at random. The following flow diagram illustrates how the defa ult termination policy works: Reference: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-instance-termination.html#default-termination - policy Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "An application is hosted in an On-Demand EC2 instan ce and is using Amazon SDK to communicate to other AWS services such as S3, DynamoDB, and many others. As part of the upcoming IT audit, you need to ensu re that all API calls to your AWS resources are logged and durably stored. Which is the most suitable service that you should use to meet this requirement?", + "options": [ + "A. A. Amazon API Gateway", + "B. B. AWS CloudTrail", + "C. C. Amazon CloudWatch", + "D. D. AWS X-Ray" + ], + "correct": "B. B. 
AWS CloudTrail", + "explanation": "Explanation/Reference: AWS CloudTrail increases visibility into your user and resource activity by recording AWS Management Console actions and API calls. You can identify whi ch users and accounts called AWS, the source IP address from which the calls were made, and when th e calls occurred. Amazon CloudWatch is incorrect because this is prim arily used for systems monitoring based on the server metrics. It does not have the capability to track API calls to your AWS resources. AWS X-Ray is incorrect because this is usually used to debug and analyze your microservices applicatio ns with request tracing so you can find the root cause of issues and performance. Unlike CloudTrail, it d oes not record the API calls that were made to your AWS resources. Amazon API Gateway is incorrect because this is not used for logging each and every API call to your AWS resources. It is a fully managed service that m akes it easy for developers to create, publish, mai ntain, monitor, and secure APIs at any scale.", + "references": "https://aws.amazon.com/cloudtrail/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/" + }, + { + "question": "A company has recently adopted a hybrid cloud archi tecture and is planning to migrate a database hoste d on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that th e database is ACID-compliant and can handle complex q ueries of the application. Which type of database service should the Architect use?", + "options": [ + "A. A. Amazon RDS", + "B. B. Amazon Redshift", + "C. C. Amazon DynamoDB", + "D. D. Amazon Aurora" + ], + "correct": "D. D. Amazon Aurora", + "explanation": "Explanation/Reference: Amazon Aurora (Aurora) is a fully managed relationa l database engine that's compatible with MySQL and PostgreSQL. You already know how MySQL and Post greSQL combine the speed and reliability of high-end commercial databases with the simplicity a nd cost-effectiveness of open-source databases. The code, tools, and applications you use today with yo ur existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can d eliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL with out requiring changes to most of your existing applications. Aurora includes a high-performance storage subsyste m. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fa st distributed storage. The underlying storage grow s automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically amo ng the most challenging aspects of database configuration and administration. For Amazon RDS MariaDB DB instances, the maximum pr ovisioned storage limit constrains the size of a table to a maximum size of 64 TB when using InnoDB file-per-table tablespaces. This limit also constra ins the system tablespace to a maximum size of 16 TB. InnoDB file- per-table tablespaces (with tables each in their ow n tablespace) is set by default for Amazon RDS MariaD B DB instances. Hence, the correct answer is Amazon Aurora. Amazon Redshift is incorrect because this is primar ily used for OLAP applications and not for OLTP. Moreover, it doesn't scale automatically to handle the exponential growth of the database. 
Amazon DynamoDB is incorrect. Although you can use this to have an ACID-compliant database, it is not capable of handling complex queries and highly tran sactional (OLTP) workloads. Amazon RDS is incorrect. Although this service can host an ACID-compliant relational database that can handle complex queries and transactional (OLTP) wor kloads, it is still not scalable to handle the grow th of the database. Amazon Aurora is the better choice as its underlying storage can grow automatically as needed. References: https://aws.amazon.com/rds/aurora/ https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/SQLtoNoSQL.html https://aws.amazon.com/nosql/ Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", + "references": "" + }, + { + "question": "A healthcare company stores sensitive patient healt h records in their on-premises storage systems. The se records must be kept indefinitely and protected fro m any type of modifications once they are stored. Compliance regulations mandate that the records mus t have granular access control and each data access must be audited at all levels. Currently, there are millions of obsolete records that are not accessed by their web application, and their on-premises storage is q uickly running out of space. The Solutions Architec t must design a solution to immediately move existing records to AWS and support the ever-growing number of new health records. Which of the following is the most suitable solutio n that the Solutions Architect should implement to meet the above requirements?", + "options": [ + "A. A. Set up AWS Storage Gateway to move the existin g health records from the on-premises network to th e", + "B. B. Set up AWS DataSync to move the existing healt h records from the on-premises network to the AWS", + "C. C. Set up AWS Storage Gateway to move the existin g health records from the on-premises network to th e", + "D. D. Set up AWS DataSync to move the existing healt h records from the on-premises network to the AWS" + ], + "correct": "B. B. Set up AWS DataSync to move the existing healt h records from the on-premises network to the AWS", + "explanation": "Explanation/Reference: AWS Storage Gateway is a set of hybrid cloud servic es that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gate way to integrate AWS Cloud storage with existing on-site workloads so they can simplify storage mana gement and reduce costs for key hybrid cloud storag e use cases. These include moving backups to the clou d, using on-premises file shares backed by cloud storage, and providing low latency access to data i n AWS for on-premises applications. AWS DataSync is an online data transfer service tha t simplifies, automates, and accelerates moving dat a between on-premises storage systems and AWS Storage services, as well as between AWS Storage services. You can use DataSync to migrate active datasets to AWS, archive data to free up on-premises storage capacity, replicate data to AWS for business contin uity, or transfer data to the cloud for analysis an d processing. Both AWS Storage Gateway and AWS DataSync can send data from your on-premises data center to AWS and vice versa. However, AWS Storage Gateway is mor e suitable to be used in integrating your storage services by replicating your data while AWS DataSyn c is better for workloads that require you to move or migrate your data. 
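For the migration portion of the correct answer, here is a minimal boto3 sketch of a one-time DataSync transfer from an on-premises NFS share to the new S3 bucket. It assumes a DataSync agent has already been deployed and activated on-premises and that an IAM role granting DataSync access to the bucket exists; every identifier below is a placeholder, not a value from the scenario.

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")  # assumed region

# Source: an NFS export on the on-premises storage system, reached
# through an already-activated DataSync agent (placeholder ARN).
source = datasync.create_location_nfs(
    ServerHostname="onprem-nas.example.internal",    # placeholder hostname
    Subdirectory="/exports/health-records",          # placeholder export path
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0example"]},
)

# Destination: the new S3 bucket, accessed through an IAM role that
# lets DataSync write objects (placeholder values).
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-health-records",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Create the transfer task and start the migration run.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="migrate-health-records",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])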
You can also use a combination of DataSync and File Gateway to minimize your on-premises infrastructur e while seamlessly connecting on-premises application s to your cloud storage. AWS DataSync enables you to automate and accelerate online data transfers to AWS storage services. File Gateway is a fully mana ged solution that will automate and accelerate the repl ication of data between the on-premises storage sys tems and AWS storage services. AWS CloudTrail is an AWS service that helps you ena ble governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as ev ents in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs. There are two types of events that you configure yo ur CloudTrail for: - Management Events - Data Events Management Events provide visibility into managemen t operations that are performed on resources in your AWS account. These are also known as control p lane operations. Management events can also include non-API events that occur in your account. Data Events, on the other hand, provide visibility into the resource operations performed on or within a resource. These are also known as data plane operat ions. It allows granular control of data event logg ing with advanced event selectors. You can currently lo g data events on different resource types such as Amazon S3 object-level API activity (e.g. GetObject , DeleteObject, and PutObject API operations), AWS Lambda function execution activity (the Invoke API) , DynamoDB Item actions, and many more. With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or over written for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory require ments that require WORM storage or to simply add another layer of protection against object changes and deletion. You can record the actions that are taken by users, roles, or AWS services on Amazon S3 resources and maintain log records for auditing and compliance pu rposes. To do this, you can use server access loggi ng, AWS CloudTrail logging, or a combination of both. A WS recommends that you use AWS CloudTrail for logging bucket and object-level actions for your Am azon S3 resources. Hence, the correct answer is: Set up AWS DataSync t o move the existing health records from the on- premises network to the AWS Cloud. Launch a new Ama zon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket. The option that says: Set up AWS Storage Gateway to move the existing health records from the on- premises network to the AWS Cloud. Launch a new Ama zon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Even ts and Amazon S3 Object Lock in the bucket is incorrect. The requirement explicitly say s that the Solutions Architect must immediately mov e the existing records to AWS and not integrate or re plicate the data. Using AWS DataSync is a more suitable service to use here since the primary obje ctive is to migrate or move data. You also have to use Data Events here and not Management Events in Cloud Trail, to properly track all the data access and changes to your objects. The option that says: Set up AWS Storage Gateway to move the existing health records from the on- premises network to the AWS Cloud. 
Launch an Amazon EBS-backed EC2 instance to store both the existing and new records. Enable Amazon S3 server a ccess logging and S3 Object Lock in the bucket is incorrect. Just as mentioned in the previous opt ion, using AWS Storage Gateway is not a recommended service to use in this situation since the objectiv e is to move the obsolete data. Moreover, using Ama zon EBS to store health records is not a scalable solut ion compared with Amazon S3. Enabling server access logging can help audit the stored objects. However, it is better to CloudTrail as it provides more gra nular access control and tracking. The option that says: Set up AWS DataSync to move t he existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bu cket to store existing and new records. Enable AWS CloudTrail with Management Events and Am azon S3 Object Lock in the bucket is incorrect. Although it is right to use AWS DataSync to move the health records, you still have to conf igure Data Events in AWS CloudTrail and not Management Ev ents. This type of event only provides visibility into management operations that are performed on re sources in your AWS account and not the data events that are happening in the individual objects in Ama zon S3. References: https://aws.amazon.com/datasync/faqs/ https://aws.amazon.com/about-aws/whats-new/2020/12/ aws-cloudtrail-provides-more-granular-control-of-da ta- event-logging/ https://docs.aws.amazon.com/AmazonS3/latest/usergui de/object-lock.html Check out this AWS DataSync Cheat Sheet: https://tutorialsdojo.com/aws-datasync/ AWS Storage Gateway vs DataSync: https://www.youtube.com/watch?v=tmfe1rO-AUs", + "references": "" + }, + { + "question": "A top IT Consultancy has a VPC with two On-Demand E C2 instances with Elastic IP addresses. You were notified that the EC2 instances are currently under SSH brute force attacks over the Internet. The IT Security team has identified the IP addresses where these attacks originated. You have to immediately implement a temporary fix to stop these attacks whi le the team is setting up AWS WAF, GuardDuty, and AWS Shield Advanced to permanently fix the security vulnerability. Which of the following provides the quickest way to stop the attacks to the instances?", + "options": [ + "A. A. Remove the Internet Gateway from the VPC", + "B. B. Assign a static Anycast IP address to each EC2 instance", + "C. C. Place the EC2 instances into private subnets", + "D. D. Block the IP addresses in the Network Access C ontrol List" + ], + "correct": "", + "explanation": "Explanation/Reference: A network access control list (ACL) is an optional layer of security for your VPC that acts as a firew all for controlling traffic in and out of one or more s ubnets. You might set up network ACLs with rules si milar to your security groups in order to add an addition al layer of security to your VPC. The following are the basic things that you need to know about network ACLs: - Your VPC automatically comes with a modifiable de fault network ACL. By default, it allows all inboun d and outbound IPv4 traffic and, if applicable, IPv6 traffic. - You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until y ou add rules. - Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatica lly associated with the default network ACL. 
- You can associate a network ACL with multiple sub nets; however, a subnet can be associated with only one network ACL at a time. When you associate a net work ACL with a subnet, the previous association is removed. - A network ACL contains a numbered list of rules t hat we evaluate in order, starting with the lowest numbered rule, to determine whether traffic is allo wed in or out of any subnet associated with the net work ACL. The highest number that you can use for a rule is 32766. We recommend that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on. - A network ACL has separate inbound and outbound r ules, and each rule can either allow or deny traffi c. - Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbou nd traffic (and vice versa). The scenario clearly states that it requires the qu ickest way to fix the security vulnerability. In th is situation, you can manually block the offending IP addresses u sing Network ACLs since the IT Security team alread y identified the list of offending IP addresses. Alte rnatively, you can set up a bastion host, however, this option entails additional time to properly set up a s you have to configure the security configurations of your bastion host. Hence, blocking the IP addresses in the Network Acc ess Control List is the best answer since it can quickly resolve the issue by blocking the IP addres ses using Network ACL. Placing the EC2 instances into private subnets is i ncorrect because if you deploy the EC2 instance in the private subnet without public or EIP address, it wo uld not be accessible over the Internet, even to yo u. Removing the Internet Gateway from the VPC is incor rect because doing this will also make your EC2 instance inaccessible to you as it will cut down th e connection to the Internet. Assigning a static Anycast IP address to each EC2 i nstance is incorrect because a static Anycast IP address is primarily used by AWS Global Accelerator to enable organizations to seamlessly route traffi c to multiple regions and improve availability and perfo rmance for their end-users. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/VPC_ACLs.html https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Security.html Security Group vs NACL: https://tutorialsdojo.com/security-group-vs-nacl/", + "references": "" + }, + { + "question": "A web application hosted in an Auto Scaling group o f EC2 instances in AWS. The application receives a burst of traffic every morning, and a lot of users are complaining about request timeouts. The EC2 ins tance takes 1 minute to boot up before it can respond to user requests. The cloud architecture must be redes igned to better respond to the changing traffic of the ap plication. How should the Solutions Architect redesign the arc hitecture?", + "options": [ + "A. A. Create a new launch template and upgrade the siz e of the instance. B. B. Create a step scaling policy and configure an in stance warm-up time condition.", + "C. C. Create a CloudFront distribution and set the E C2 instance as the origin.", + "D. D. Create a Network Load Balancer with slow-start mode." + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon EC2 Auto Scaling helps you maintain applicat ion availability and allows you to automatically add or remove EC2 instances according to conditions you define. 
You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use t he dynamic and predictive scaling features of EC2 Auto Scaling to add or remove EC2 instances. Dynamic scaling responds to changing demand and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand. Dynamic sc aling and predictive scaling can be used together t o scale faster. Step scaling applies \"step adjustments\" which means you can set multiple actions to vary the scaling depending on the size of the alarm breach. When you create a step scaling policy, you can also specify the number of seconds that it takes for a newly launche d instance to warm up. Hence, the correct answer is: Create a step scaling policy and configure an instance warm-up time condition. The option that says: Create a Network Load Balance r with slow start mode is incorrect because Network Load Balancer does not support slow start m ode. If you need to enable slow start mode, you should use Application Load Balancer. The option that says: Create a new launch template and upgrade the size of the instance is incorrect because a larger instance does not always improve t he boot time. Instead of upgrading the instance, yo u should create a step scaling policy and add a warm- up time. The option that says: Create a CloudFront distribut ion and set the EC2 instance as the origin is incorrect because this approach only resolves the t raffic latency. Take note that the requirement in t he scenario is to resolve the timeout issue and not th e traffic latency. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-scaling-simple-step.html https://aws.amazon.com/ec2/autoscaling/faqs/ Check out these AWS Cheat Sheets: https://tutorialsdojo.com/aws-auto-scaling/ https://tutorialsdojo.com/step-scaling-vs-simple-sc aling-policies-in-amazon-ec2/", + "references": "" + }, + { + "question": "A Solutions Architect joined a large tech company w ith an existing Amazon VPC. When reviewing the Auto Scaling events, the Architect noticed that their we b application is scaling up and down multiple times within the hour. What design change could the Architect make to opti mize cost while preserving elasticity?", + "options": [ + "A. A. Change the cooldown period of the Auto Scaling group and set the CloudWatch metric to a higher", + "B. B. Add provisioned IOPS to the instances", + "C. C. Increase the base number of Auto Scaling insta nces for the Auto Scaling group", + "D. D. Increase the instance type in the launch confi guration" + ], + "correct": "A. A. Change the cooldown period of the Auto Scaling group and set the CloudWatch metric to a higher", + "explanation": "Explanation/Reference: Since the application is scaling up and down multip le times within the hour, the issue lies on the coo ldown period of the Auto Scaling group. The cooldown period is a configurable setting for y our Auto Scaling group that helps to ensure that it doesn't launch or terminate additional instances be fore the previous scaling activity takes effect. Af ter the Auto Scaling group dynamically scales using a simpl e scaling policy, it waits for the cooldown period to complete before resuming scaling activities. When you manually scale your Auto Scaling group, th e default is not to wait for the cooldown period, b ut you can override the default and honor the cooldown period. 
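To illustrate the cooldown fix described above, here is a minimal boto3 sketch that lengthens the group's cooldown period and raises the CloudWatch alarm threshold that drives the scaling policy. The group name, thresholds, and policy ARN are assumed placeholders, not values from the scenario.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Lengthen the cooldown so a new scaling activity waits for the
# previous one to take effect (placeholder group name and values).
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    DefaultCooldown=600,  # seconds
)

# Raise the alarm threshold that triggers the scale-out policy so
# brief metric spikes no longer cause repeated scaling actions.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-scale-out",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,                       # higher threshold than before
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:111122223333:scalingPolicy:example"],  # placeholder policy ARN
)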
If an instance becomes unhealthy, the Auto Scaling group does not wait for the cooldown period to complete before replacing the unhealthy instanc e.", + "references": "http://docs.aws.amazon.com/autoscaling/latest/userg uide/as-scale-based-on-demand.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" + }, + { + "question": "A data analytics startup is collecting clickstream data and stores them in an S3 bucket. You need to l aunch an AWS Lambda function to trigger the ETL jobs to r un as soon as new data becomes available in Amazon S3. Which of the following services can you use as an e xtract, transform, and load (ETL) service in this s cenario?", + "options": [ + "A. A. S3 Select", + "B. B. AWS Glue", + "C. C. Redshift Spectrum", + "D. D. AWS Step Functions" + ], + "correct": "B. B. AWS Glue", + "explanation": "Explanation/Reference: AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customer s to prepare and load their data for analytics. You c an create and run an ETL job with a few clicks in t he AWS Management Console. You simply point AWS Glue t o your data stored on AWS, and AWS Glue discovers your data and stores the associated metad ata (e.g. table definition and schema) in the AWS G lue Data Catalog. Once cataloged, your data is immediat ely searchable, queryable, and available for ETL. AWS Glue generates the code to execute your data tr ansformations and data loading processes. Reference: https://aws.amazon.com/glue/ Check out this AWS Glue Cheat Sheet: https://tutorialsdojo.com/aws-glue/", + "references": "" + }, + { + "question": "A company is running a batch job on an EC2 instance inside a private subnet. The instance gathers inpu t data from an S3 bucket in the same region through a NAT Gateway. The company is looking for a solution that will reduce costs without imposing risks on re dundancy or availability. Which solution will accomplish this?", + "options": [ + "A. A. Deploy a Transit Gateway to peer connection be tween the instance and the S3 bucket.", + "B. B. Re-assign the NAT Gateway to a lower EC2 insta nce type.", + "C. C. Replace the NAT Gateway with a NAT instance ho sted on a burstable instance type.", + "D. D. Remove the NAT Gateway and use a Gateway VPC e ndpoint to access the S3 bucket from the instance." + ], + "correct": "D. D. Remove the NAT Gateway and use a Gateway VPC e ndpoint to access the S3 bucket from the instance.", + "explanation": "Explanation/Reference: A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend th e functionality of gateway endpoints by using priva te IP addresses to route requests to Amazon S3 from wi thin your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gat eway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endp oints in the same VPC. There is no additional charge for using gateway end points. However, standard charges for data transfer and resource usage still apply. Hence, the correct answer is: Remove the NAT Gatewa y and use a Gateway VPC endpoint to access the S3 bucket from the instance. The option that says: Replace the NAT Gateway with a NAT instance hosted on burstable instance type is incorrect. This solution may possibly reduc e costs, but the availability and redundancy will b e compromised. 
The option that says: Deploy a Transit Gateway to p eer connection between the instance and the S3 bucket is incorrect. Transit Gateway is a service t hat is specifically used for connecting multiple VP Cs through a central hub. The option that says: Re-assign the NAT Gateway to a lower EC2 instance type is incorrect. NAT Gateways are fully managed resources. You cannot ac cess nor modify the underlying instance that hosts it. References: https://docs.aws.amazon.com/AmazonS3/latest/usergui de/privatelink-interface-endpoints.html https://docs.aws.amazon.com/vpc/latest/privatelink/ vpce-gateway.html Amazon VPC Overview: https://youtu.be/oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A top investment bank is in the process of building a new Forex trading platform. To ensure high availability and scalability, you designed the trad ing platform to use an Elastic Load Balancer in fro nt of an Auto Scaling group of On-Demand EC2 instances acros s multiple Availability Zones. For its database tie r, you chose to use a single Amazon Aurora instance to take advantage of its distributed, fault-tolerant, and self-healing storage system. In the event of system failure on the primary datab ase instance, what happens to Amazon Aurora during the failover?", + "options": [ + "A. A. Aurora will attempt to create a new DB Instanc e in the same Availability Zone as the original ins tance and", + "B. B. Aurora will first attempt to create a new DB I nstance in a different Availability Zone of the ori ginal", + "C. C. Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point atthe healthy", + "D. D. Amazo n Aurora flips the A record of your DB I nstance to point at the healthy replica, which in t urn is" + ], + "correct": "A. A. Aurora will attempt to create a new DB Instanc e in the same Availability Zone as the original ins tance and", + "explanation": "Explanation/Reference: Failover is automatically handled by Amazon Aurora so that your applications can resume database operations as quickly as possible without manual ad ministrative intervention. If you have an Amazon Aurora Replica in the same or a different Availability Zone, when failing over, Amazon Aurora flips the canonical name record (CNAM E) for your DB Instance to point at the healthy replica, which in turn is promoted to become the ne w primary. Start-to-finish, failover typically comp letes within 30 seconds. If you are running Aurora Serverless and the DB ins tance or AZ become unavailable, Aurora will automatically recreate the DB instance in a differe nt AZ. If you do not have an Amazon Aurora Replica (i.e. s ingle instance) and are not running Aurora Serverle ss, Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance . This replacement of the original instance is done o n a best-effort basis and may not succeed, for exam ple, if there is an issue that is broadly affecting the Ava ilability Zone. Hence, the correct answer is the option that says: Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance and is done on a best-effort basis. 
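As a general illustration (not part of the exam scenario itself), the sketch below uses boto3 to read an Aurora cluster's writer and reader endpoints and, optionally, to trigger a manual failover for testing. The cluster identifier is a placeholder; connecting applications through the cluster endpoint is what makes the CNAME flip during failover transparent when a replica exists.

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

# Applications should connect to the cluster (writer) endpoint, not to
# an individual instance, so a failover's CNAME flip is transparent.
cluster = rds.describe_db_clusters(DBClusterIdentifier="forex-aurora-cluster")["DBClusters"][0]
print("Writer endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])

# Optional: trigger a manual failover to test the application's
# reconnection logic (only meaningful when an Aurora Replica exists).
rds.failover_db_cluster(DBClusterIdentifier="forex-aurora-cluster")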
The options that say: Amazon Aurora flips the canon ical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is p romoted to become the new primary and Amazon Aurora flips the A record of your DB Instance to po int at the healthy replica, which in turn is promoted to become the new primary are incorrect be cause this will only happen if you are using an Amazon Aurora Replica. In addition, Amazon Aurora f lips the canonical name record (CNAME) and not the A record (IP address) of the instance. The option that says: Aurora will first attempt to create a new DB Instance in a different Availabilit y Zone of the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in the original Availability Zone in which the instanc e was first launched is incorrect because Aurora wi ll first attempt to create a new DB Instance in the sa me Availability Zone as the original instance. If u nable to do so, Aurora will attempt to create a new DB Insta nce in a different Availability Zone and not the ot her way around. References: https://aws.amazon.com/rds/aurora/faqs/ https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/Concepts.AuroraHighAvailability.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", + "references": "" + }, + { + "question": "The social media company that you are working for n eeds to capture the detailed information of all HTTP requests that went through their public-facing application load balancer every five minutes. They want to use this data for analyzing traffic pa tterns and for troubleshooting their web applications in AWS. Which of the following options meet the customer requirements?", + "options": [ + "A. Enable Amazon CloudWatch metrics on the applicati on load balancer.", + "B. Enable AWS CloudTrail for their application load balancer.", + "C. Add an Amazon CloudWatch Logs agent on the applic ation load balancer.", + "D. Enable access logs on the application load balanc er." + ], + "correct": "D. Enable access logs on the application load balanc er.", + "explanation": "Explanation/Reference: Elastic Load Balancing provides access logs that ca pture detailed information about requests sent to y our load balancer. Each log contains information such a s the time the request was received, the client's I P address, latencies, request paths, and server respo nses. You can use these access logs to analyze traf fic patterns and troubleshoot issues. Access logging is an optional feature of Elastic Lo ad Balancing that is disabled by default. After you enable access logging for your load balancer, Elast ic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time. Hence, the correct answer is: Enable access logs on the application load balancer. The option that says: Enable AWS CloudTrail for the ir application load balancer is incorrect because AWS CloudTrail is primarily used to monitor and rec ord the account activity across your AWS resources and not your web applications. You cannot use Cloud Trail to capture the detailed information of all HT TP requests that go through your public-facing Applica tion Load Balancer (ALB). CloudTrail can only trackthe resource changes made to your ALB, but not the actual IP traffic that goes through it. For this us e case, you have to enable the access logs feature instead. 
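To make the correct answer concrete, the following is a minimal boto3 sketch that turns on access logs for an Application Load Balancer and points them at an S3 bucket. The load balancer ARN and bucket name are placeholders, and the bucket must separately have a policy that allows the regional Elastic Load Balancing log-delivery account to write to it (not shown here).

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # assumed region

# Enable access logging on the ALB; each log entry records the client
# IP, request path, latencies, and backend response for every request.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/public-alb/0123456789abcdef",  # placeholder ARN
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-alb-access-logs"},  # placeholder bucket
        {"Key": "access_logs.s3.prefix", "Value": "public-alb"},
    ],
)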
The option that says: Add an Amazon CloudWatch Logs agent on the application load balancer is incorrect because you cannot directly install a Clo udWatch Logs agent to an Application Load Balancer. This is commonly installed on an Amazon EC2 instanc e and not on a load balancer. The option that says: Enable Amazon CloudWatch metr ics on the application load balancer is incorrect because CloudWatch doesn't track the actu al traffic to your ALB. It only monitors the change s to your ALB itself and the actual IP traffic that it d istributes to the target groups. References: http://docs.aws.amazon.com/elasticloadbalancing/lat est/application/load-balancer-access-logs.html https://docs.aws.amazon.com/elasticloadbalancing/la test/application/load-balancer-monitoring.html AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ Application Load Balancer vs Network Load Balancer vs Classic Load Balancer vs Gateway Load Balancer: https://tutorialsdojo.com/application-load-balancer -vs-network-load-balancer-vs-classic-load-balancer/", + "references": "" + }, + { + "question": "A company has an application hosted in an Auto Scal ing group of Amazon EC2 instances across multiple Availability Zones behind an Application Load Balan cer. There are several occasions where some instances are automatically terminated after failin g the HTTPS health checks in the ALB and then purge s all the ephemeral logs stored in the instance. A So lutions Architect must implement a solution that co llects all of the application and server logs effectively. She should be able to perform a root cause analysi s based on the logs, even if the Auto Scaling group immedia tely terminated the instance. What is the EASIEST way for the Architect to automa te the log collection from the Amazon EC2 instances ?", + "options": [ + "A. A. Add a lifecycle hook to your Auto Scaling grou p to move instances in the Terminating state to the", + "B. B. Add a lifecycle hook to your Auto Scaling grou p to move instances in the Terminating stateto the", + "C. C. the Pending:Wait state to delay the terminatio n of the unhealthy Amazon EC2 instances.", + "D. Add a lifecycle hook to your Auto Scaling group t o move instances in the Terminating state to the" + ], + "correct": "B. B. Add a lifecycle hook to your Auto Scaling grou p to move instances in the Terminating stateto the", + "explanation": "Explanation/Reference: The EC2 instances in an Auto Scaling group have a p ath, or lifecycle, that differs from that of other EC2 instances. The lifecycle starts when the Auto Scali ng group launches an instance and puts it into serv ice. The lifecycle ends when you terminate the instance, or the Auto Scaling group takes the instance out o f service and terminates it. You can add a lifecycle hook to your Auto Scaling g roup so that you can perform custom actions when instances launch or terminate. When Amazon EC2 Auto Scaling responds to a scale ou t event, it launches one or more instances. These instances start in the Pending state. If you added an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook to your Auto Scaling group, the instances move from the Pending state to the Pending:Wait state. After you complete the lifecycle action, the instan ces enter the Pending:Proceed state. When the insta nces are fully configured, they are attached to the Auto Scaling group and they enter the InService state. 
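A corresponding hook exists for the scale-in path described in the next paragraph. As a minimal boto3 sketch, assuming an Auto Scaling group named web-asg (a placeholder), registering the termination hook and later signaling completion could look like this:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

# Pause terminating instances in Terminating:Wait for up to 10 minutes
# so the CloudWatch agent (or a Lambda-triggered push) can ship logs.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="push-logs-before-terminate",
    AutoScalingGroupName="web-asg",                       # placeholder group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=600,
    DefaultResult="CONTINUE",  # terminate anyway if the hook times out
)

# After the logs are confirmed in CloudWatch Logs, resume termination.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="push-logs-before-terminate",
    AutoScalingGroupName="web-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",                     # placeholder instance ID
)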
When Amazon EC2 Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. If you added an autoscaling:EC2_INSTANCE_TERMINATING lifecycle hook to your Auto Scaling group, the instances move from the Terminating state to the Terminating: Wait state. After you complete the lifecycle action , the instances enter the Terminating:Proceed state. When the instances are fully terminated, they enter the Terminated state. Using CloudWatch agent is the most suitable tool to use to collect the logs. The unified CloudWatch ag ent enables you to do the following: - Collect more system-level metrics from Amazon EC2 instances across operating systems. The metrics ca n include in-guest metrics, in addition to the metric s for EC2 instances. The additional metrics that ca n be collected are listed in Metrics Collected by the Cl oudWatch Agent . - Collect system-level metrics from on-premises ser vers. These can include servers in a hybrid environ ment as well as servers not managed by AWS. - Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. StatsD is supported on both Linux servers and serve rs running Windows Server. collectd is supported on ly on Linux servers. - Collect logs from Amazon EC2 instances and on-pre mises servers, running either Linux or Windows Server. You can store and view the metrics that you collect with the CloudWatch agent in CloudWatch just as yo u can with any other CloudWatch metrics. The default namespace for metrics collected by the CloudWatch agent is CWAgent, although you can specify a differ ent namespace when you configure the agent. Hence, the correct answer is: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state to delay the termination of unhealthy Amazon EC2 instances. Configure a CloudWatch Events rule for the EC2 Inst ance-terminate Lifecycle Action Auto Scaling Event with an associated Lambda function. Trigger t he CloudWatch agent to push the application logs and then resume the instance termination once all t he logs are sent to CloudWatch Logs. The option that says: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Pending:Wait state to dela y the termination of the unhealthy Amazon EC2 instances. Configure a CloudWatch Events rule for t he EC2 Instance-terminate Lifecycle Action Auto Scaling Event with an associated Lambda function. S et up an AWS Systems Manager Automation script that collects and uploads the application logs from the instance to a CloudWatch Logs group. Configure the solution to only resume the instance terminatio n once all the logs were successfully sent is incor rect because the Pending:Wait state refers to the scale- out action in Amazon EC2 Auto Scaling and not for scale-in or for terminating the instances. The option that says: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state to delay the termination of the unhealthy Amazon EC2 instances. Set up AWS Step Functions to collect the application logs and send them to a CloudWatch Log group. Configure the solution to resume the ins tance termination as soon as all the logs were successfully sent to CloudWatch Logs is incorrect b ecause using AWS Step Functions is inappropriate in collecting the logs from your EC2 instances. 
You should use a CloudWatch agent instead. The option that says: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state to delay the termination of the unhealthy Amazon EC2 instances. Configure a CloudWatch Events rule for the EC2 Instance Terminate Successful Auto Scaling Event with an associated Lambda function. Set up the AWS Systems Manager Run Command service to run a script that collects and uploads the application logs from the instance to a CloudWatch Logs group. Resume the instance termination once all the logs are sent is incorrect because although this solution could work, it entails a lot of effort to write a custom script that the AWS Systems Manager Run Command will run. Remember that the scenario asks for a solution that you can implement with the least amount of effort. This solution can be simplified by automatically uploading the logs using a CloudWatch Agent. You have to use the EC2 Instance-terminate Lifecycle Action event instead. References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/cloud-watch-events.html#terminate-successful https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-delay-termination/ Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": "A company needs to set up a cost-effective architecture for a log processing application that has frequently accessed, throughput-intensive workloads with large, sequential I/O operations. The application should be hosted in an already existing On-Demand EC2 instance in the VPC. You have to attach a new EBS volume that will be used by the application. Which of the following is the most suitable EBS volume type that you should use in this scenario?", "options": [ "A. EBS Throughput Optimized HDD (st1)", "B. EBS General Purpose SSD (gp2)", "C. EBS Provisioned IOPS SSD (io1)", "D. EBS Cold HDD (sc1)" ], "correct": "A. EBS Throughput Optimized HDD (st1)", "explanation": "Explanation/Reference: In the exam, always consider the difference between SSD and HDD volume types. This will allow you to easily eliminate specific EBS types in the options, depending on whether the question asks for a storage type that handles small, random I/O operations or large, sequential I/O operations. Since the scenario has workloads with large, sequential I/O operations, we can narrow down our options by selecting HDD volumes instead of SSD volumes, which are more suitable for small, random I/O operations. Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported. Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data. EBS Provisioned IOPS SSD (io1) is incorrect because Amazon EBS Provisioned IOPS SSD is not the most cost-effective EBS type and is primarily used for critical business applications that require sustained IOPS performance. EBS General Purpose SSD (gp2) is incorrect. 
Althoug h an Amazon EBS General Purpose SSD volume balances price and performance for a wide variety o f workloads, it is not suitable for frequently acce ssed, throughput-intensive workloads. Throughput Optimize d HDD is a more suitable option to use than General Purpose SSD. EBS Cold HDD (sc1) is incorrect. Although this prov ides lower cost HDD volume compared to General Purpose SSD, it is much suitable for less frequentl y accessed workloads.", + "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSVolumeTypes.html#EBSVolumeTypes_st1 Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw&t=8s Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/" + }, + { + "question": "A company plans to design a highly available archit ecture in AWS. They have two target groups with thr ee EC2 instances each, which are added to an Applicati on Load Balancer. In the security group of the EC2 instance, you have verified that port 80 for HTTP i s allowed. However, the instances are still showing out of service from the load balancer. What could be the root cause of this issue?", + "options": [ + "A. A. The wrong subnet was used in your VPC", + "B. B. The instances are using the wrong AMI.", + "C. C. The health check configuration is not properly defined.", + "D. D. The wrong instance type was used for the EC2 i nstance." + ], + "correct": "C. C. The health check configuration is not properly defined.", + "explanation": "Explanation/Reference: Since the security group is properly configured, th e issue may be caused by a wrong health check configuration in the Target Group. Your Application Load Balancer periodically sends r equests to its registered targets to test their sta tus. These tests are called health checks. Each load bal ancer node routes requests only to the healthy targ ets in the enabled Availability Zones for the load balance r. Each load balancer node checks the health of eac h target, using the health check settings for the tar get group with which the target is registered. Afte r your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connec tion that was established for the health check.", + "references": "http://docs.aws.amazon.com/elasticloadbalancing/lat est/classic/elb-healthchecks.html AWS Elastic Load Balancing Overview: https://www.youtube.com/watch?v=UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ ELB Health Checks vs Route 53 Health Checks For Tar get Health Monitoring: https://tutorialsdojo.com/elb-health-checks-vs-rout e-53-health-checks-for-target-health-monitoring/" + }, + { + "question": "A company is using AWS IAM to manage access to AWS services. The Solutions Architect of the company created the following IAM policy for AWS Lambda: { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"lambda:CreateFunction\", \"lambda:DeleteFunction\" ], \"Resource\": \"*\" }, { \"Effect\": \"Deny\", \"Action\": [ \"lambda:CreateFunction\", \"lambda:DeleteFunction\", \"lambda:InvokeFunction\", \"lambda:TagResource\" ], \"Resource\": \"*\", \"Condition\": { \"IpAddress\": { \"aws:SourceIp\": \"187.5.104.11/32\" } } } ] } Which of the following options are allowed by this policy? A. A. Delete an AWS Lambda function from any network a ddress.", + "options": [ + "B. B. 
Create an AWS Lambda function using the 187.5. 104.11/32 address.", + "C. C. Delete an AWS Lambda function using the 187.5. 104.11/32 address.", + "D. D. Create an AWS Lambda function using the 100.22 0.0.11/32 address." + ], + "correct": "D. D. Create an AWS Lambda function using the 100.22 0.0.11/32 address.", + "explanation": "Explanation/Reference: You manage access in AWS by creating policies and a ttaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an o bject in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determ ine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. You can use AWS Identity and Access Management (IAM ) to manage access to the Lambda API and resources like functions and layers. Based on the g iven IAM policy, you can create and delete a Lambda function from any network address except for the IP address 187.5.104.11/32. Since the IP address, 100.220.0.11/32 is not denied in the policy, you ca n use this address to create a Lambda function. Hence, the correct answer is: Create an AWS Lambda function using the 100.220.0.11/32 address. The option that says: Delete an AWS Lambda function using the 187.5.104.11/32 address is incorrect because the source IP used in this option is denied by the IAM policy. The option that says: Delete an AWS Lambda function from any network address is incorrect. You can't delete a Lambda function from any network add ress because the address 187.5.104.11/32 is denied by the policy. The option that says: Create an AWS Lambda function using the 187.5.104.11/32 address is incorrect. Just like the option above, the IAM policy denied t he IP address 187.5.104.11/32. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/ac cess_policies.html https://docs.aws.amazon.com/lambda/latest/dg/lambda -permissions.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "A company has multiple AWS Site-to-Site VPN connect ions placed between their VPCs and their remote network. During peak hours, many employees are expe riencing slow connectivity issues, which limits the ir productivity. The company has asked a solutions arc hitect to scale the throughput of the VPN connectio ns. Which solution should the architect carry out?", + "options": [ + "A. A. Add more virtual private gateways to a VPC and enable Equal Cost Multipath Routing (ECMR) to get", + "B. B. Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway and attach", + "C. C. Re-route some of the VPN connections to a seco ndary customer gateway device on the remote", + "D. D. Modify the VPN configuration by increasing the number of tunnels to scale the throughput." + ], + "correct": "B. B. Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway and attach", + "explanation": "Explanation/Reference: With AWS Transit Gateway, you can simplify the conn ectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a sing le VPN connection. AWS Transit Gateway also enables you to scale the I Psec VPN throughput with equal-cost multi-path (ECMP) routing support over multiple VPN tunnels. A single VPN tunnel still has a maximum throughput of 1.25 Gbps. 
If you establish multiple VPN tunnels to an ECMP-enabled transit gateway, it can scale beyond the default limit of 1.25 Gbps. Hence, the correct answer is: Associate the VPCs to an Equal Cost Multipath Routing (ECMR)- enabled transit gateway and attach additional VPN t unnels. The option that says: Add more virtual private gate ways to a VPC and enable Equal Cost Multipath Routing (ECMR) to get higher VPN bandwidth is incor rect because a VPC can only have a single virtual private gateway attached to it one at a tim e. Also, there is no option to enable ECMR in a vir tual private gateway. The option that says: Modify the VPN configuration by increasing the number of tunnels to scale the throughput is incorrect. The maximum tunnel for a V PN connection is two. You cannot increase this beyond its limit. The option that says: Re-route some of the VPN conn ections to a secondary customer gateway device on the remote network's end is incorrect. This woul d only increase connection redundancy and won't increase throughput. For example, connections can f ailover to the secondary customer gateway device in case the primary customer gateway device becomes un available. References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/transit-gateway-ecmp-multiple-tunnels/ https://aws.amazon.com/blogs/networking-and-content -delivery/scaling-vpn-throughput-using-aws-transit- gateway/ Check out this AWS Transit Gateway Cheat Sheet: https://tutorialsdojo.com/aws-transit-gateway/", + "references": "" + }, + { + "question": "A company has a web application hosted in their on- premises infrastructure that they want to migrate t o AWS cloud. Your manager has instructed you to ensur e that there is no downtime while the migration process is on-going. In order to achieve this, your team decided to divert 50% of the traffic to the n ew application in AWS and the other 50% to the applica tion hosted in their on-premises infrastructure. On ce the migration is over and the application works wit h no issues, a full diversion to AWS will be implemented. The company's VPC is connected to its on-premises network via an AWS Direct Connect connection. Which of the following are the possible solutions t hat you can implement to satisfy the above requirem ent? (Select TWO.)", + "options": [ + "A. A. Use an Application Elastic Load balancer with Weighted Target Groups to divert and proportion the traffic", + "B. B. Use Route 53 with Failover routing policy to d ivert and proportion the traffic between the on-pre mises and", + "C. C. Use AWS Global Accelerator to divert and propo rtion the HTTP and HTTPS traffic between the on-", + "D. Use a Network Load balancer with Weighted Target Groups to divert the traffic between the on-premise s" + ], + "correct": "", + "explanation": "Explanation/Reference: Application Load Balancers support Weighted Target Groups routing. With this feature, you will be able to do weighted routing of the traffic forwarded by a r ule to multiple target groups. This enables various use cases like blue-green, canary and hybrid deployment s without the need for multiple load balancers. It even enables zero-downtime migration between on-premises and cloud or between different compute types like EC2 and Lambda. To divert 50% of the traffic to the new application in AWS and the other 50% to the application, you c an also use Route 53 with Weighted routing policy. Thi s will divert the traffic between the on-premises a nd AWS-hosted application accordingly. 
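For the Route 53 half of the answer, here is a minimal boto3 sketch that creates two weighted CNAME records for the same name so that roughly half of the DNS resolutions point to the on-premises endpoint and half to the ALB in AWS. The hosted zone ID, record name, and target host names are placeholders, not values from the scenario.

import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, target, weight):
    # Build one weighted CNAME record set for the shared name.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",            # placeholder record name
            "Type": "CNAME",
            "SetIdentifier": identifier,
            "Weight": weight,                      # equal weights => 50/50 split
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",             # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            weighted_record("on-premises", "app.onprem.example.com", 50),
            weighted_record("aws", "public-alb-123456789.us-east-1.elb.amazonaws.com", 50),
        ],
    },
)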
Weighted routing lets you associate multiple resour ces with a single domain name (tutorialsdojo.com) o r subdomain name (portal.tutorialsdojo.com) and choos e how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software . You can set a specific percentage of how much traffic w ill be allocated to the resource by specifying the weights. For example, if you want to send a tiny portion of your traffic to one resource and the rest to anothe r resource, you might specify weights of 1 and 255. T he resource with a weight of 1 gets 1/256th of the traffic (1/1+255), and the other resource gets 255/ 256ths (255/1+255). You can gradually change the balance by changing th e weights. If you want to stop sending traffic to a resource, you can change the weight for that record to 0. When you create a target group in your Application Load Balancer, you specify its target type. This determines the type of target you specify when regi stering with this target group. You can select the following target types: 1. instance - The targets are specified by instance ID. 2. ip - The targets are IP addresses. 3. Lambd a - The target is a Lambda function. When the target type is ip, you can specify IP addr esses from one of the following CIDR blocks: - 10.0.0.0/8 (RFC 1918) - 100.64.0.0/10 (RFC 6598) - 172.16.0.0/12 (RFC 1918) - 192.168.0.0/16 (RFC 1918) - The subnets of the VPC for the target group These supported CIDR blocks enable you to register the following with a target group: ClassicLink instances, instances in a VPC that is peered to the load balancer VPC, AWS resources that are addressa ble by IP address and port (for example, databases), an d on-premises resources linked to AWS through AWS Direct Connect or a VPN connection. Take note that you can not specify publicly routabl e IP addresses. If you specify targets using an ins tance ID, traffic is routed to instances using the primar y private IP address specified in the primary netwo rk interface for the instance. If you specify targets using IP addresses, you can route traffic to an ins tance using any private IP address from one or more netwo rk interfaces. This enables multiple applications o n an instance to use the same port. Each network interfa ce can have its own security group. Hence, the correct answers are the following option s: - Use an Application Elastic Load balancer with Wei ghted Target Groups to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the app lication hosted in their on-premises infrastructure. - Use Route 53 with Weighted routing policy to dive rt the traffic between the on-premises and AWS- hosted application. Divert 50% of the traffic to th e new application in AWS and the other 50% to the application hosted in their on-premises infrastruct ure. The option that says: Use a Network Load balancer w ith Weighted Target Groups to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the applica tion hosted in their on-premises infrastructure is incorrect because a Network Load balancer doesn' t have Weighted Target Groups to divert the traffic between the on-premises and AWS-hosted application. 
The option that says: Use Route 53 with Failover ro uting policy to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the applica tion hosted in their on-premises infrastructure is incorrect because you cannot divert and proporti on the traffic between the on-premises and AWS-host ed application using Route 53 with Failover routing po licy. This is primarily used if you want to configu re active-passive failover to your application archite cture. The option that says: Use AWS Global Accelerator to divert and proportion the HTTP and HTTPS traffic between the on-premises and AWS-hosted appl ication. Ensure that the on-premises network has an AnyCast static IP address and is connected t o your VPC via a Direct Connect Gateway is incorrect because although you can control the prop ortion of traffic directed to each endpoint using A WS Global Accelerator by assigning weights across the endpoints, it is still wrong to use a Direct Connec t Gateway and an AnyCast IP address since these are n ot required at all. You can only associate static I P addresses provided by AWS Global Accelerator to reg ional AWS resources or endpoints, such as Network Load Balancers, Application Load Balancers, EC2 Ins tances, and Elastic IP addresses. Take note that a Direct Connect Gateway, per se, doesn't establish a connection from your on-premises network to your Amazon VPCs. It simply enables you to use your AWS Direct Connect connection to connect to two or more VPCs that are located in different AWS Regions . References: http://docs.aws.amazon.com/Route53/latest/Developer Guide/routing-policy.html https://aws.amazon.com/blogs/aws/new-application-lo ad-balancer-simplifies-deployment-with-weighted-tar get- groups/ https://docs.aws.amazon.com/elasticloadbalancing/la test/application/load-balancer-target-groups.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", + "references": "" + }, + { + "question": "An operations team has an application running on EC 2 instances inside two custom VPCs. The VPCs are located in the Ohio and N.Virginia Region respectiv ely. The team wants to transfer data between the instances without traversing the public internet. Which combination of steps will achieve this? (Sele ct TWO.)", + "options": [ + "A. A. Re-configure the route table's target and dest ination of the instances' subnet.", + "B. B. Deploy a VPC endpoint on each region to enable a private connection.", + "C. C. Create an Egress-only Internet Gateway.", + "D. D. Set up a VPC peering connection between the VP Cs." + ], + "correct": "", + "explanation": "Explanation/Reference: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 a ddresses. Instances in either VPC can communicate with each other as if they are within the same netw ork. You can create a VPC peering connection betwee n your own VPCs, or with a VPC in another AWS account . The VPCs can be in different regions (also known as an inter-region VPC peering connection). Inter-Region VPC Peering provides a simple and cost -effective way to share resources between regions or replicate data for geographic redundancy. 
Built on the same horizontally scaled, redundant, and hig hly available technology that powers VPC today, Inter-R egion VPC Peering encrypts inter-region traffic wit h no single point of failure or bandwidth bottleneck. Traffic using Inter-Region VPC Peering always stay s on the global AWS backbone and never traverses the pub lic internet, thereby reducing threat vectors, such as common exploits and DDoS attacks. Hence, the correct answers are: - Set up a VPC peering connection between the VPCs. - Re-configure the route table's target and destina tion of the instances' subnet. The option that says: Create an Egress only Interne t Gateway is incorrect because this will just enabl e outbound IPv6 communication from instances in a VPC to the internet. Take note that the scenario requi res private communication to be enabled between VPCs fr om two different regions. The option that says: Launch a NAT Gateway in the p ublic subnet of each VPC is incorrect because NAT Gateways are used to allow instances in private subnets to access the public internet. Note that t he requirement is to make sure that communication betw een instances will not traverse the internet. The option that says: Deploy a VPC endpoint on each region to enable private connection is incorrect. VPC endpoints are region-specific only and do not s upport inter-region communication. References: https://docs.aws.amazon.com/vpc/latest/peering/what -is-vpc-peering.html https://aws.amazon.com/about-aws/whats-new/2017/11/ announcing-support-for-inter-region-vpc-peering/ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company plans to design an application that can h andle batch processing of large amounts of financia l data. The Solutions Architect is tasked to create t wo Amazon S3 buckets to store the input and output data. The application will transfer the data between mult iple EC2 instances over the network to complete the data processing. Which of the following options would reduce the dat a transfer costs?", + "options": [ + "A. A. Deploy the Amazon EC2 instances in private sub nets in different Availability Zones.", + "B. B. Deploy the Amazon EC2 instances in the same Av ailability Zone.", + "C. C. Deploy the Amazon EC2 instances in the same AW S Region.", + "D. D. Deploy the Amazon EC2 instances behind an Appl ication Load Balancer." + ], + "correct": "B. B. Deploy the Amazon EC2 instances in the same Av ailability Zone.", + "explanation": "Explanation/Reference: Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminat es your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requir ements or spikes in popularity, reducing your need to forecast traffic. In this scenario, you should deploy all the EC2 ins tances in the same Availability Zone. If you recall , data transferred between Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache instances, and Elastic Network Interfaces in the same Availability Zone is free. Instead of using the public network to transfer the data, you can use the private network to reduce the overall data transfer costs. 
Hence, the correct answer is: Deploy the Amazon EC2 instances in the same Availability Zone. The option that says: Deploy the Amazon EC2 instanc es in the same AWS Region is incorrect because even if the instances are deployed in the same Regi on, they could still be charged with inter-Availabi lity Zone data transfers if the instances are distribute d across different availability zones. You must dep loy the instances in the same Availability Zone to avoid th e data transfer costs. The option that says: Deploy the Amazon EC2 instanc es behind an Application Load Balancer is incorrect because this approach won't reduce the ov erall data transfer costs. An Application Load Bala ncer is primarily used to distribute the incoming traffi c to underlying EC2 instances. The option that says: Deploy the Amazon EC2 instanc es in private subnets in different Availability Zones is incorrect. Although the data transfer betw een instances in private subnets is free, there wil l be an issue with retrieving the data in Amazon S3. Rememb er that you won't be able to connect to your Amazon S3 bucket if you are using a private subnet unless you have a VPC Endpoint. References: https://aws.amazon.com/ec2/pricing/on-demand/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /concepts.html https://aws.amazon.com/blogs/mt/using-aws-cost-expl orer-to-analyze-data-transfer-costs/ Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "An intelligence agency is currently hosting a learn ing and training portal in AWS. Your manager instru cted you to launch a large EC2 instance with an attached EBS Volume and enable Enhanced Networking. What are the valid case scenarios in using Enhanced Netw orking? (Select TWO.) A. A. When you need a low packet-per-second performa nce", + "options": [ + "B. B. When you need a consistently lower inter-insta nce latencies", + "C. C. When you need a dedicated connection to your o n-premises data center", + "D. D. When you need a higher packet per second (PPS) performance", + "A. A. Enable the Requester Pays feature in the Amazo n S3 bucket.", + "B. B. Create a bucket policy that will require the u sers to set the object's ACL to bucket-owner- full- control.", + "C. C. Create a CORS configuration in the S3 bucket.", + "D. D. Enable server access logging and set up an IAM policy that will require the users to set the obje ct's ACL" + ], + "correct": "B. B. Create a bucket policy that will require the u sers to set the object's ACL to bucket-owner- full- control.", + "explanation": "Explanation/Reference: Amazon S3 stores data as objects within buckets. An object is a file and any optional metadata that describes the file. To store a file in Amazon S3, y ou upload it to a bucket. When you upload a file as an object, you can set permissions on the object and any metadata. Buckets are containers for objects. You can have one or mo re buckets. You can control access for each bucket, de ciding who can create, delete, and list objects in it. You can also choose the geographical Region where Amazo n S3 will store the bucket and its contents and vie w access logs for the bucket and its objects. By default, an S3 object is owned by the AWS accoun t that uploaded it even though the bucket is owned by another account. To get full access to the object, the object owner must explicitly grant the bucket o wner access. 
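As the next paragraph describes, a bucket policy can reject uploads that do not grant the bucket owner full control. A minimal boto3 sketch of such a policy follows; the bucket name is a hypothetical placeholder.

import boto3
import json

s3 = boto3.client('s3')

bucket = 'tutorialsdojo-shared-uploads'  # hypothetical bucket name

# Deny PutObject requests that do not set the bucket-owner-full-control canned ACL.
policy = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Sid': 'RequireBucketOwnerFullControl',
            'Effect': 'Deny',
            'Principal': '*',
            'Action': 's3:PutObject',
            'Resource': f'arn:aws:s3:::{bucket}/*',
            'Condition': {
                'StringNotEquals': {'s3:x-amz-acl': 'bucket-owner-full-control'}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

With this policy in place, an upload succeeds only when the external user sets the bucket-owner-full-control canned ACL on the object.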
You can create a bucket policy to require e xternal users to grant bucket-owner-full-control wh en uploading objects so the bucket owner can have full access to the objects. Hence, the correct answer is: Create a bucket polic y that will require the users to set the object's A CL to bucket-owner-full-control. The option that says: Enable the Requester Pays fea ture in the Amazon S3 bucket is incorrect because this option won't grant the bucket owner full acces s to the uploaded objects in the S3 bucket. With Requester Pays buckets, the requester, instead of t he bucket owner, pays the cost of the request and t he data download from the bucket. The option that says: Create a CORS configuration i n the S3 bucket is incorrect because this option on ly allows cross-origin access to your Amazon S3 resour ces. If you need to grant the bucket owner full con trol in the uploaded objects, you must create a bucket p olicy and require external users to grant bucket-ow ner- full-control when uploading objects. The option that says: Enable server access logging and set up an IAM policy that will require the user s to set the bucket's ACL to bucket-owner-full-contro l is incorrect because this option only provides detailed records for the requests that are made to a bucket. In addition, the bucket-owner-full-contro l canned ACL must be associated with the bucket polic y and not to an IAM policy. This will require the users to set the object's ACL (not the bucket's) to bucket-owner-full-control. References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/s3-bucket-owner-access/ https://aws.amazon.com//premiumsupport/knowledge-ce nter/s3-require-object-ownership/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ enhanced-networking.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ QUESTION 338 A company is using Amazon S3 to store frequently ac cessed data. The S3 bucket is shared with external users that will upload files regularly. A Solutions Architect needs to implement a solution that will grant the bucket owner full access to all uploaded object s in the S3 bucket. What action should be done to achieve this task?" + }, + { + "question": "A Solutions Architect designed a real-time data ana lytics system based on Kinesis Data Stream and Lambda. A week after the system has been deployed, the users noticed that it performed slowly as the d ata rate increases. The Architect identified that the p erformance of the Kinesis Data Streams is causing t his problem. Which of the following should the Architect do to i mprove performance?", + "options": [ + "A. A. Replace the data stream with Amazon Kinesis Da ta Firehose instead.", + "B. B. Implement Step Scaling to the Kinesis Data Str eam.", + "C. C. Increase the number of shards of the Kinesis s tream by using the UpdateShardCount", + "D. D. Improve the performance of the stream by decre asing the number of its shards using the MergeShard" + ], + "correct": "C. C. Increase the number of shards of the Kinesis s tream by using the UpdateShardCount", + "explanation": "Explanation/Reference: Amazon Kinesis Data Streams supports resharding, wh ich lets you adjust the number of shards in your stream to adapt to changes in the rate of data flow through the stream. Resharding is considered an advanced operation. There are two types of resharding operations: shard split and shard merge. 
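A minimal boto3 sketch of the UpdateShardCount call named in the correct answer; the stream name and target shard count are hypothetical, and the split/merge mechanics it relies on are described right after this sketch.

import boto3

kinesis = boto3.client('kinesis')

# Double the capacity of the stream by raising the shard count.
# UNIFORM_SCALING splits (or merges) shards evenly across the stream.
kinesis.update_shard_count(
    StreamName='analytics-stream',   # hypothetical stream name
    TargetShardCount=8,              # hypothetical target; assumes 4 shards today
    ScalingType='UNIFORM_SCALING',
)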
In a shard split, you divid e a single shard into two shards. In a shard merge, you combine two shards into a single shard. Resharding is always pairwise in the sense that you cannot split into more than two shards in a single operation, an d you cannot merge more than two shards in a single opera tion. The shard or pair of shards that the reshardi ng operation acts on are referred to as parent shards. The shard or pair of shards that result from the r esharding operation are referred to as child shards. Splitting increases the number of shards in your st ream and therefore increases the data capacity of t he stream. Because you are charged on a per-shard basi s, splitting increases the cost of your stream. Sim ilarly, merging reduces the number of shards in your stream and therefore decreases the data capacity--and cost--of the stream. If your data rate increases, you can also increase the number of shards allocated to your stream to ma intain the application performance. You can reshard your s tream using the UpdateShardCount API. The throughput of an Amazon Kinesis data stream is desi gned to scale without limits via increasing the num ber of shards within a data stream. Hence, the correct answer is to increase the number of shards of the Kinesis stream by using the UpdateShardCount comman d. Replacing the data stream with Amazon Kinesis Data Firehose instead is incorrect because the throughput of Kinesis Firehose is not exceptionally higher than Kinesis Data Streams. In fact, the throughput of an Amazon Kinesis data stream is desi gned to scale without limits via increasing the num ber of shards within a data stream. Improving the performance of the stream by decreasi ng the number of its shards using the MergeShard command is incorrect because merging the shards will effectively decrease the performance of the stream rather than improve it. Implementing Step Scaling to the Kinesis Data Strea m is incorrect because there is no Step Scaling feature for Kinesis Data Streams. This is only appl icable for EC2. References: https://aws.amazon.com/blogs/big-data/scale-your-am azon-kinesis-stream-capacity-with-updateshardcount/ https://aws.amazon.com/kinesis/data-streams/faqs/ https://docs.aws.amazon.com/streams/latest/dev/kine sis-using-sdk-java-resharding.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/", + "references": "" + }, + { + "question": "A fast food company is using AWS to host their onli ne ordering system which uses an Auto Scaling group of EC2 instances deployed across multiple Availabil ity Zones with an Application Load Balancer in fron t. To better handle the incoming traffic from various digital devices, you are planning to implement a ne w routing system where requests which have a URL of < server>/api/android are forwarded to one specific target group named \"Android-Target-Group\". Converse ly, requests which have a URL of /api/ios are forwarded to another separate target group name d \"iOS-Target-Group\". How can you implement this change in AWS?", + "options": [ + "A. A. Use path conditions to define rules that forwa rd requests to different target groups based on the URL in", + "B. B. Replace your ALB with a Gateway Load Balancer then use path conditions to define rules that forwa rd", + "C. C. Use host conditions to define rules that forwa rd requests to different target groups based on the", + "D. D. Replace your ALB with a Network Load Balancer then use host conditions to define rules that forwa rd" + ], + "correct": "A. A. 
Use path conditions to define rules that forwa rd requests to different target groups based on the URL in", + "explanation": "Explanation/Reference: If your application is composed of several individu al services, an Application Load Balancer can route a request to a service based on the content of the re quest such as Host field, Path URL, HTTP header, HT TP method, Query string, or Source IP address. Path-ba sed routing allows you to route a client request ba sed on the URL path of the HTTP header. Each path condi tion has one path pattern. If the URL in a request matches the path pattern in a listener rule exactly , the request is routed using that rule. A path pattern is case-sensitive, can be up to 128 characters in length, and can contain any of the fo llowing characters. You can include up to three wildcard ch aracters. AZ, az, 09 _ - . $ / ~ \" ' @ : + & (using &) * (matches 0 or more characters) ? (matches exactly 1 character) Example path patterns /img/* /js/* You can use path conditions to define rules that fo rward requests to different target groups based on the URL in the request (also known as path-based routin g). This type of routing is the most appropriate solution for this scenario hence, the correct answe r is: Use path conditions to define rules that forw ard requests to different target groups based on the URL in the request. The option that says: Use host conditions to define rules that forward requests to different target groups based on the hostname in the host header. Th is enables you to support multiple domains using a single load balancer is incorrect because h ost-based routing defines rules that forward reques ts to different target groups based on the hostname in th e host header instead of the URL, which is what is needed in this scenario. The option that says: Replace your ALB with a Gatew ay Load Balancer then use path conditions to define rules that forward requests to different tar get groups based on the URL in the request is incorrect because a Gateway Load Balancer does not support path-based routing. You must use an Application Load Balancer. The option that says: Replace your ALB with a Netwo rk Load Balancer then use host conditions to define rules that forward requests to different tar get groups based on the URL in the request is incorrect because a Network Load Balancer is used f or applications that need extreme network performance and static IP. It also does not support path-based routing which is what is needed in this scenario. Furthermore, the statement mentions host- based routing even though the scenario is about pat h- based routing. References: https://docs.aws.amazon.com/elasticloadbalancing/la test/application/introduction.html#application-load - balancer-benefits https://docs.aws.amazon.com/elasticloadbalancing/la test/application/load-balancer-listeners.html#path- conditions Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ Application Load Balancer vs Network Load Balancer vs Classic Load Balancer: https://tutorialsdojo.com/application-load-balancer -vs-network-load-balancer-vs-classic-load-balancer/", + "references": "" + }, + { + "question": "A website hosted on Amazon ECS container instances loads slowly during peak traffic, affecting its availability. 
Currently, the container instances ar e run behind an Application Load Balancer, and CloudWatch alarms are configured to send notificati ons to the operations team if there is a problem in availability so they can scale out if needed. A sol utions architect needs to create an automatic scali ng solution when such problems occur. Which solution could satisfy the requirement? (Sele ct TWO.) A. A. Create an AWS Auto Scaling policy that scales out the ECS cluster when the cluster's CPU utilizat ion is too high.", + "options": [ + "B. B. Create an AWS Auto Scaling policy that scales out the ECS service when the ALB hits a high CPU", + "C. C. Create an AWS Auto Scaling policy that scales out an ECS service when the ALB endpoint becomes", + "D. D. Create an AWS Auto Scaling policy that scales out the ECS service when the service's memory" + ], + "correct": "", + "explanation": "Explanation/Reference: AWS Auto Scaling monitors your applications and aut omatically adjusts capacity to maintain steady, pre dictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across m ultiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. In this scenario, you can set up a scaling policy t hat triggers a scale-out activity to an ECS service or ECS container instance based on the metric that you pre fer. The following metrics are available for instances: CPU Utilization Disk Reads Disk Read Operations Disk Writes Disk Write Operations Network In Network Out Status Check Failed (Any) Status Check Failed (Instance) Status Check Failed (System) The following metrics are available for ECS Service : ECSServiceAverageCPUUtilization--Average CPU utiliz ation of the service. ECSServiceAverageMemoryUtilization--Average memory utilization of the service. ALBRequestCountPerTarget--Number of requests comple ted per target in an Application Load Balancer target group. Hence, the correct answers are: - Create an AWS Auto scaling policy that scales out the ECS service when the service's memory utilizat ion is too high. - Create an AWS Auto scaling policy that scales out the ECS cluster when the cluster's CPU utilization is too high. The option that says: Create an AWS Auto scaling po licy that scales out an ECS service when the ALB endpoint becomes unreachable is incorrect. This wou ld be a different problem that needs to be addresse d differently if this is the case. An unreachable ALB endpoint could mean other things like a misconfigu red security group or network access control lists. The option that says: Create an AWS Auto scaling po licy that scales out the ECS service when the ALB h its a high CPU utilization is incorrect. ALB is a managed resource. You cannot track nor view its resource u tilization. The option that says: Create an AWS Auto scaling po licy that scales out the ECS cluster when the ALB target group's CPU utilization is too high is i ncorrect. AWS Auto Scaling does not support this metric for ALB. 
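A minimal boto3 sketch of a target-tracking policy on the ECSServiceAverageMemoryUtilization metric discussed above; the cluster name, service name, capacity bounds, and target value are hypothetical.

import boto3

autoscaling = boto3.client('application-autoscaling')

resource_id = 'service/web-cluster/web-service'   # hypothetical cluster/service names

# Register the ECS service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId=resource_id,
    ScalableDimension='ecs:service:DesiredCount',
    MinCapacity=2,
    MaxCapacity=10,
)

# Scale out when the service's average memory utilization rises above the target value.
autoscaling.put_scaling_policy(
    PolicyName='ecs-memory-target-tracking',
    ServiceNamespace='ecs',
    ResourceId=resource_id,
    ScalableDimension='ecs:service:DesiredCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 75.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ECSServiceAverageMemoryUtilization'
        },
    },
)

An equivalent policy on ECSServiceAverageCPUUtilization covers the cluster CPU case mentioned above.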
References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/service-configure-auto-scaling.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-instance-monitoring.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "A disaster recovery team is planning to back up on- premises records to a local file server share throu gh SMB protocol. To meet the company's business contin uity plan, the team must ensure that a copy of data from 48 hours ago is available for immediate access . Accessing older records with delay is tolerable. Which should the DR team implement to meet the obje ctive with the LEAST amount of configuration effort ?", + "options": [ + "A. A. Use an AWS Storage File gateway with enough st orage to keep data from the last 48 hours. Send the", + "B. B. Create an AWS Backup plan to copy data backups to a local SMB share every 48 hours.", + "C. C. Mount an Amazon EFS file system on the on-prem ises client and copy all backups to an NFS share.", + "D. D. Create an SMB file share in Amazon FSx for Win dows File Server that has enough storage to store a ll" + ], + "correct": "A. A. Use an AWS Storage File gateway with enough st orage to keep data from the last 48 hours. Send the", + "explanation": "Explanation/Reference: Amazon S3 File Gateway presents a file interface th at enables you to store files as objects in Amazon S3 using the industry-standard NFS and SMB file protoc ols, and access those files via NFS and SMB from your data center or Amazon EC2, or access those fil es as objects directly in Amazon S3. When you deploy File Gateway, you specify how much disk space you want to allocate for local cache. This local cache acts as a buffer for writes and pr ovides low latency access to data that was recently written to or read from Amazon S3. When a client writes dat a to a file via File Gateway, that data is first wr itten to the local cache disk on the gateway itself. Once th e data has been safely persisted to the local cache , only then does the File Gateway acknowledge the write ba ck to the client. From there, File Gateway transfer s the data to the S3 bucket asynchronously in the bac kground, optimizing data transfer using multipart parallel uploads, and encrypting data in transit us ing HTTPS. In this scenario, you can deploy an AWS Storage Fil e Gateway to the on-premises client. After activati ng the File Gateway, create an SMB share and mount it as a local disk at the on-premises end. Copy the backups to the SMB share. You must ensure that you size the File Gateway's local cache appropriately t o the backup data that needs immediate access. After the backup is done, you will be able to access the older data but with a delay. There will be a small delay since data (not in cache) needs to be retrieved fro m Amazon S3. Hence, the correct answer is: Use an AWS Storage Fi le gateway with enough storage to keep data from the last 48 hours. Send the backups to an SMB share mounted as a local disk. The option that says: Create an SMB file share in A mazon FSx for Windows File Server that has enough storage to store all backups. Access the fil e share from on-premises is incorrect because this requires additional setup. You need to set up a Dir ect Connect or VPN connection from on-premises to AWS first in order for this to work. 
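A rough boto3 sketch of creating the SMB share on an already-activated File Gateway, as described in this answer. The gateway ARN, IAM role, and bucket ARN are hypothetical placeholders, and this assumes guest access has already been configured on the gateway.

import boto3
import uuid

sgw = boto3.client('storagegateway')

# Expose an S3 bucket as an SMB share on an activated File Gateway
# (all ARNs below are hypothetical placeholders).
sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN='arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-ABCD1234',
    Role='arn:aws:iam::111122223333:role/FileGatewayS3AccessRole',
    LocationARN='arn:aws:s3:::dr-backup-bucket',
    DefaultStorageClass='S3_STANDARD',
    Authentication='GuestAccess',
)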
The option that says: Mount an Amazon EFS file syst em on the on-premises client and copy all backups to an NFS share is incorrect because the fi le share required in the scenario needs to be using the SMB protocol. The option that says: Create an AWS Backup plan to copy data backups to a local SMB share every 48 hours is incorrect. AWS Backup only works on AWS re sources. References: https://aws.amazon.com/blogs/storage/easily-store-y our-sql-server-backups-in-amazon-s3-using-file-gate way/ https://aws.amazon.com/storagegateway/faqs/ AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", + "references": "" + }, + { + "question": "An application is using a Lambda function to proces s complex financial data that run for 15 minutes on average. Most invocations were successfully process ed. However, you noticed that there are a few terminated invocations throughout the day, which ca used data discrepancy in the application. Which of the following is the most likely cause of this issue?", + "options": [ + "A. A. The failed Lambda functions have been running for over 15 minutes and reached the maximum", + "B. B. The Lambda function contains a recursive code and has been running for over 15 minutes.", + "C. C. The concurrent execution limit has been reache d.", + "D. D. The failed Lambda Invocations contain a Servic eException error which means that the AWS Lambda" + ], + "correct": "A. A. The failed Lambda functions have been running for over 15 minutes and reached the maximum", + "explanation": "Explanation/Reference: A Lambda function consists of code and any associat ed dependencies. In addition, a Lambda function als o has configuration information associated with it. I nitially, you specify the configuration information when you create a Lambda function. Lambda provides an AP I for you to update some of the configuration data. You pay for the AWS resources that are used to run your Lambda function. To prevent your Lambda function from running indefinitely, you specify a t imeout. When the specified timeout is reached, AWS Lambda terminates execution of your Lambda function . It is recommended that you set this value based o n your expected execution time. The default timeout i s 3 seconds and the maximum execution duration per request in AWS Lambda is 900 seconds, which is equi valent to 15 minutes. Hence, the correct answer is the option that says: The failed Lambda functions have been running for over 15 minutes and reached the maximum execution t ime. Take note that you can invoke a Lambda function syn chronously either by calling the Invoke operation o r by using an AWS SDK in your preferred runtime. If y ou anticipate a long-running Lambda function, your client may time out before function execution compl etes. To avoid this, update the client timeout or y our SDK configuration. The option that says: The concurrent execution limi t has been reached is incorrect because, by default , the AWS Lambda limits the total concurrent executio ns across all functions within a given region to 10 00. By setting a concurrency limit on a function, Lambd a guarantees that allocation will be applied specif ically to that function, regardless of the amount of traff ic processing the remaining functions. If that limi t is exceeded, the function will be throttled but not te rminated, which is in contrast with what is happeni ng in the scenario. 
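If the terminations are indeed caused by a timeout configured below the 900-second ceiling, the fix is a one-line configuration change; a minimal boto3 sketch with a hypothetical function name:

import boto3

lambda_client = boto3.client('lambda')

# Raise the function timeout to the 15-minute (900-second) maximum so that
# long-running invocations are not terminated early.
lambda_client.update_function_configuration(
    FunctionName='financial-batch-processor',  # hypothetical function name
    Timeout=900,
)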
The option that says: The Lambda function contains a recursive code and has been running for over 15 minutes is incorrect because having a recursive code in your Lambda function does not directly resu lt to an abrupt termination of the function execution. Th is is a scenario wherein the function automatically calls itself until some arbitrary criteria is met. This c ould lead to an unintended volume of function invoc ations and escalated costs, but not an abrupt termination because Lambda will throttle all invocations to the function. The option that says: The failed Lambda Invocations contain a ServiceException error which means that the AWS Lambda service encountered an internal error is incorrect because although this is a valid root cause, it is unlikely to have several Se rviceException errors throughout the day unless the re is an outage or disruption in AWS. Since the scenario says that the Lambda function runs for about 10 to 15 minutes, the maximum execution duration is the most likely cause of the issue and not the AWS Lambda service encountering an internal error. References: https://docs.aws.amazon.com/lambda/latest/dg/limits .html https://docs.aws.amazon.com/lambda/latest/dg/resour ce-model.html AWS Lambda Overview - Serverless Computing in AWS: https://www.youtube.com/watch?v=bPVX1zHwAnY Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/", + "references": "" + }, + { + "question": "A company launched a cryptocurrency mining server o n a Reserved EC2 instance in us-east-1 region's private subnet that uses IPv6. Due to the financial data that the server contains, the system should b e secured to prevent any unauthorized access and to m eet the regulatory compliance requirements. In this scenario, which VPC feature allows the EC2 instance to communicate to the Internet but prevent s inbound traffic?", + "options": [ + "A. A. Egress-only Internet gateway", + "B. B. NAT Gateway", + "C. C. NAT instances", + "D. D. Internet Gateway" + ], + "correct": "A. A. Egress-only Internet gateway", + "explanation": "Explanation/Reference: An egress-only Internet gateway is a horizontally s caled, redundant, and highly available VPC componen t that allows outbound communication over IPv6 from i nstances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection wit h your instances. Take note that an egress-only Internet gateway is f or use with IPv6 traffic only. To enable outbound-o nly Internet communication over IPv4, use a NAT gateway instead. Hence, the correct answer is: Egress-only Internet gateway. NAT Gateway and NAT instances are incorrect because these are only applicable for IPv4 and not IPv6. Even though these two components can enable the EC2 instance in a private subnet to communicate to the Internet and prevent inbound traffic, it is only li mited to instances which are using IPv4 addresses a nd not IPv6. The most suitable VPC component to use is the egress-only Internet gateway. Internet Gateway is incorrect because this is prima rily used to provide Internet access to your instan ces in the public subnet of your VPC, and not for private subnets. However, with an Internet gateway, traffic originating from the public Internet will also be a ble to reach your instances. The scenario is asking you to prevent inbound access, so this is not the correct answer. 
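A minimal boto3 sketch of creating an egress-only internet gateway and routing all outbound IPv6 traffic through it; the VPC ID and route table ID are hypothetical.

import boto3

ec2 = boto3.client('ec2')

# Create the egress-only internet gateway for the VPC (IPv6 outbound only).
response = ec2.create_egress_only_internet_gateway(VpcId='vpc-0abc1234')  # hypothetical VPC ID
eigw_id = response['EgressOnlyInternetGateway']['EgressOnlyInternetGatewayId']

# Send all outbound IPv6 traffic from the private subnet through the gateway;
# connections cannot be initiated from the internet over this route.
ec2.create_route(
    RouteTableId='rtb-0def5678',          # hypothetical private subnet route table
    DestinationIpv6CidrBlock='::/0',
    EgressOnlyInternetGatewayId=eigw_id,
)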
Reference: https://docs.aws.amazon.com/vpc/latest/userguide/eg ress-only-internet-gateway.html Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A multinational corporate and investment bank is re gularly processing steady workloads of accruals, lo an interests, and other critical financial calculation s every night from 10 PM to 3 AM on their on-premis es data center for their corporate clients. Once the p rocess is done, the results are then uploaded to th e Oracle General Ledger which means that the processing shou ld not be delayed or interrupted. The CTO has decided to move its IT infrastructure to AWS to sav e costs. The company needs to reserve compute capacity in a specific Availability Zone to properl y run their workloads. As the Senior Solutions Architect, how can you impl ement a cost-effective architecture in AWS for thei r financial system?", + "options": [ + "A. A. Use Dedicated Hosts which provide a physical h ost that is fully dedicated to running your instanc es, and", + "B. B. Use On-Demand EC2 instances which allows you t o pay for the instances that you launch and use by the", + "C. C. Use Regional Reserved Instances to reserve cap acity on a specific Availability Zone and lower dow n the", + "D. D. Use On-Demand Capacity Reservations, which pro vide compute capacity that is always available on t he" + ], + "correct": "D. D. Use On-Demand Capacity Reservations, which pro vide compute capacity that is always available on t he", + "explanation": "Explanation/Reference: On-Demand Capacity Reservations enable you to reser ve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any d uration. This gives you the ability to create and m anage Capacity Reservations independently from the billin g discounts offered by Savings Plans or Regional Reserved Instances. By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you nee d it, for as long as you need it. You can create Capa city Reservations at any time, without entering int o a one- year or three-year term commitment, and the capacit y is available immediately. Billing starts as soon as the capacity is provisioned and the Capacity Reservatio n enters the active state. When you no longer need it, cancel the Capacity Reservation to stop incurring c harges. When you create a Capacity Reservation, you specify : - The Availability Zone in which to reserve the cap acity - The number of instances for which to reserve capa city - The instance attributes, including the instance t ype, tenancy, and platform/OS Capacity Reservations can only be used by instances that match their attributes. By default, they are automatically used by running instances that match the attributes. If you don't have any running insta nces that match the attributes of the Capacity Reservati on, it remains unused until you launch an instance with matching attributes. In addition, you can use Savings Plans and Regional Reserved Instances with your Capacity Reservations to benefit from billing discounts. AWS automaticall y applies your discount when the attributes of a Capacity Reservation match the attributes of a Savi ngs Plan or Regional Reserved Instance. Hence, the correct answer is to use On-Demand Capac ity Reservations, which provide compute capacity that is always available on the specified recurring schedule. 
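A minimal boto3 sketch of creating such a Capacity Reservation; the instance type, platform, Availability Zone, and count are hypothetical.

import boto3

ec2 = boto3.client('ec2')

# Reserve capacity for four instances in a specific Availability Zone.
# No one- or three-year commitment is required; billing starts once the
# reservation is active and stops when it is cancelled.
ec2.create_capacity_reservation(
    InstanceType='c5.4xlarge',        # hypothetical instance type
    InstancePlatform='Linux/UNIX',
    AvailabilityZone='us-east-1a',    # hypothetical Availability Zone
    InstanceCount=4,
    EndDateType='unlimited',
)

Running instances whose attributes match the reservation automatically consume the reserved capacity.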
Using On-Demand EC2 instances which allows you to p ay for the instances that you launch and use by the second. Reserve compute capacity in a specif ic Availability Zone to avoid any interruption is incorrect because although an On-Demand instance is stable and suitable for processing critical data, it costs more than any other option. Moreover, the cri tical financial calculations are only done every ni ght from 10 PM to 3 AM only and not 24/7. This means th at your compute capacity will not be utilized for a total of 19 hours every single day. On-Demand insta nces cannot reserve compute capacity at all. So thi s option is incorrect. Using Regional Reserved Instances to reserve capaci ty on a specific Availability Zone and lower down the operating cost through its billing discoun ts is incorrect because this feature is available i n Zonal Reserved Instances only and not on Regional R eserved Instances. Using Dedicated Hosts which provide a physical host that is fully dedicated to running your instances, and bringing your existing per-socket, per-core, or per-VM software licenses to reduce costs is incorrect because the use of a fully dedicated phys ical host is not warranted in this scenario. Moreov er, this will be underutilized since you only run the proces s for 5 hours (from 10 PM to 3 AM only), wasting 19 hours of compute capacity every single day. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-capacity-reservations.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /instance-purchasing-options.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "A Solutions Architect needs to set up the required compute resources for the application which have wo rkloads that require high, sequential read and write access to very large data sets on local storage. Which of the following instance type is the most su itable one to use in this scenario?", + "options": [ + "A. A. Compute Optimized Instances", + "B. B. Memory Optimized Instances", + "C. C. General Purpose Instances", + "D. D. Storage Optimized Instances" + ], + "correct": "D. D. Storage Optimized Instances", + "explanation": "Explanation/Reference: Storage optimized instances are designed for worklo ads that require high, sequential read and write ac cess to very large data sets on local storage. They are optimized to deliver tens of thousands of low-laten cy, random I/O operations per second (IOPS) to applicat ions. Hence, the correct answer is: Storage Optimized Ins tances. Memory Optimized Instances is incorrect because the se are designed to deliver fast performance for workloads that process large data sets in memory, w hich is quite different from handling high read and write capacity on local storage. Compute Optimized Instances is incorrect because th ese are ideal for compute-bound applications that benefit from high-performance processors, such as b atch processing workloads and media transcoding. General Purpose Instances is incorrect because thes e are the most basic type of instances. They provid e a balance of compute, memory, and networking resource s, and can be used for a variety of workloads. 
Sinc e you are requiring higher read and write capacity, s torage optimized instances should be selected inste ad.", + "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /storage-optimized-instances.html Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" + }, + { + "question": "Architect has been instructed to restrict access to the database tier to only accept traffic from the application-tier and deny traffic from other source s. The application-tier is composed of application servers hosted in an Auto Scaling group of EC2 instances. Which of the following options is the MOST suitable solution to implement in this scenario?", + "options": [ + "A. A. Set up the Network ACL of the database subnet to deny all inbound non-database traffic from the s ubnet", + "B. B. Set up the security group of the database tier to allow database traffic from a specified list of application", + "C. C. Set up the security group of the database tier to allow database traffic from the security group of the", + "D. D. Set up the Network ACL of the database subnet to allow inbound database traffic from the subnet o f the" + ], + "correct": "C. C. Set up the security group of the database tier to allow database traffic from the security group of the", + "explanation": "Explanation/Reference: A security group acts as a virtual firewall for you r instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security g roups act at the instance level, not the subnet level. Th erefore, each instance in a subnet in your VPC coul d be assigned to a different set of security groups. If you don't specify a particular group at launch time , the instance is automatically assigned to the default s ecurity group for the VPC. For each security group, you add rules that control the inbound traffic to instances, and a separate s et of rules that control the outbound traffic. This secti on describes the basic things you need to know abou t security groups for your VPC and their rules. You can add or remove rules for a security group wh ich is also referred to as authorizing or revoking inbound or outbound access. A rule applies either t o inbound traffic (ingress) or outbound traffic (eg ress). You can grant access to a specific CIDR range, or t o another security group in your VPC or in a peer V PC (requires a VPC peering connection). In the scenario, the servers of the application-tie r are in an Auto Scaling group which means that the number of EC2 instances could grow or shrink over t ime. An Auto Scaling group could also cover one or more Availability Zones (AZ) which have their own s ubnets. Hence, the most suitable solution would be to set up the security group of the database tier to a llow database traffic from the security group of th e application servers since you can utilize the secur ity group of the application-tier Auto Scaling grou p as the source for the security group rule in your data base tier. Setting up the security group of the database tier to allow database traffic from a specified list of application server IP addresses is incorrect becaus e the list of application server IP addresses will change over time since an Auto Scaling group can add or re move EC2 instances based on the configured scaling policy. 
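The recommended rule, which allows database traffic only from the application tier's security group, can be sketched with boto3 as follows; the group IDs and port are hypothetical, and the discussion of the IP-address-list option continues after the sketch.

import boto3

ec2 = boto3.client('ec2')

# Allow MySQL traffic to the database tier only from instances that belong
# to the application tier's security group (group IDs are hypothetical).
ec2.authorize_security_group_ingress(
    GroupId='sg-0db111111111',           # database-tier security group
    IpPermissions=[
        {
            'IpProtocol': 'tcp',
            'FromPort': 3306,
            'ToPort': 3306,
            'UserIdGroupPairs': [
                {'GroupId': 'sg-0app22222222', 'Description': 'App-tier Auto Scaling group instances'}
            ],
        }
    ],
)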
This will create inconsistencies in your ap plication because the newly launched instances, whi ch are not included in the initial list of IP addresses, w ill not be able to access the database. Setting up the Network ACL of the database subnet t o deny all inbound non-database traffic from the subnet of the application-tier is incorrect bec ause doing this could affect the other EC2 instance s of other applications, which are also hosted in the sa me subnet of the application-tier. For example, a l arge subnet with a CIDR block of /16 could be shared by several applications. Denying all inbound non- database traffic from the entire subnet will impact other applications which use this subnet. Setting up the Network ACL of the database subnet t o allow inbound database traffic from the subnet of the application-tier is incorrect because although this solution can work, the subnet of the application-tier could be shared by another tier or another set of EC2 instances other than the applic ation- tier. This means that you would inadvertently be gr anting database access to unauthorized servers host ed in the same subnet other than the application-tier. References: https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Security.html#VPC_Security_Comparison http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_SecurityGroups.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A Solutions Architect needs to launch a web applica tion that will be served globally using Amazon CloudFront. The application is hosted in an Amazon EC2 instance which will be configured as the origin server to process and serve dynamic content to its customers. Which of the following options provides high availa bility for the application?", + "options": [ + "A. A. Launch an Auto Scaling group of EC2 instances and configure it to be part of an origin group.", + "B. B. Use Lambda@Edge to improve the performance of your web application and ensure high availability. Set", + "C. C. Use Amazon S3 to serve the dynamic content of your web application and configure the S3 bucket to be", + "D. D. Provision two EC2 instances deployed in differ ent Availability Zones and configure them to be par t of an" + ], + "correct": "D. D. Provision two EC2 instances deployed in differ ent Availability Zones and configure them to be par t of an", + "explanation": "Explanation/Reference: An origin is a location where content is stored, an d from which CloudFront gets content to serve to vi ewers. Amazon CloudFront is a service that speeds up the d istribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locat ions. When a user requests content that you're serv ing with CloudFront, the user is routed to the edge loc ation that provides the lowest latency (time delay) , so that content is delivered with the best possible pe rformance. You can also set up CloudFront with origin failover for scenarios that require high availability. An o rigin group may contain two origins: a primary and a seco ndary. If the primary origin is unavailable or retu rns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin. To set up origin failover, you mu st have a distribution with at least two origins. The scenario uses an EC2 instance as an origin. 
Tak e note that we can also use an EC2 instance or a cu stom origin in configuring CloudFront. To achieve high a vailability in an EC2 instance, we need to deploy t he instances in two or more Availability Zones. You al so need to configure the instances to be part of th e origin group to ensure that the application is high ly available. Hence, the correct answer is: Provision two EC2 ins tances deployed in different Availability Zones and configure them to be part of an origin group. The option that says: Use Amazon S3 to serve the dy namic content of your web application and configure the S3 bucket to be part of an origin gro up is incorrect because Amazon S3 can only serve static content. If you need to host dynamic content , you have to use an Amazon EC2 instance instead. The option that says: Launch an Auto Scaling group of EC2 instances and configure it to be part of an origin group is incorrect because you must have at least two origins to set up an origin failover in CloudFront. In addition, you can't directly use a s ingle Auto Scaling group as an origin. The option that says: Use Lambda@Edge to improve th e performance of your web application and ensure high availability. Set the Lambda@Edge funct ions to be part of an origin group is incorrect because Lambda@Edge is primarily used for serverles s edge computing. You can't set Lambda@Edge functions as part of your origin group in CloudFron t. References: https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/high_availability_origin_failover.h tml https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/Introduction.html https://aws.amazon.com/cloudfront/faqs/ Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/", + "references": "" + }, + { + "question": "A multinational company has been building its new d ata analytics platform with high-performance computing workloads (HPC) which requires a scalable , POSIX-compliant storage service. The data need to be stored redundantly across multiple AZs and allow s concurrent connections from thousands of EC2 instances hosted on multiple Availability Zones. Which of the following AWS storage service is the m ost suitable one to use in this scenario?", + "options": [ + "A. A. Amazon S3", + "B. B. Amazon EBS Volumes", + "C. C. Amazon Elastic File System", + "D. D. Amazon ElastiCache" + ], + "correct": "C. C. Amazon Elastic File System", + "explanation": "Explanation/Reference: In this question, you should take note of this phra se, \"allows concurrent connections from multiple EC 2 instances\". There are various AWS storage options t hat you can choose but whenever these criteria show up, always consider using EFS instead of using EBS Volumes which is mainly used as a \"block\" storage and can only have one connection to one EC2 instanc e at a time. Amazon EFS is a fully-managed service that makes it easy to set up and scale file storage in the Amazo n Cloud. With a few clicks in the AWS Management Cons ole, you can create file systems that are accessibl e to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) an d supports full file system access semantics (such as strong consistency and file locking). Amazon EFS file systems can automatically scale fro m gigabytes to petabytes of data without needing to provision storage. 
Tens, hundreds, or even thousand s of Amazon EC2 instances can access an Amazon EFS file system at the same time, and Amazon EFS provid es consistent performance to each Amazon EC2 instance. Amazon EFS is designed to be highly durab le and highly available. References: https://docs.aws.amazon.com/efs/latest/ug/performan ce.html https://aws.amazon.com/efs/faq/ Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/ Here's a short video tutorial on Amazon EFS: https://youtu.be/AvgAozsfCrY", + "references": "" + }, + { + "question": "A company requires corporate IT governance and cost oversight of all of its AWS resources across its divisions around the world. Their corporate divisio ns want to maintain administrative control of the d iscrete AWS resources they consume and ensure that those re sources are separate from other divisions. Which of the following options will support the aut onomy of each corporate division while enabling the corporate IT to maintain governance and cost oversight? (Sele ct TWO.)", + "options": [ + "A. A. Use AWS Trusted Advisor and AWS Resource Group s Tag Editor", + "B. B. Create separate VPCs for each division within the corporate IT AWS account. Launch an AWS Transit", + "C. C. Use AWS Consolidated Billing by creating AWS O rganizations to link the divisions' accounts to a p arent", + "D. D. Create separate Availability Zones for each di vision within the corporate IT AWS account Improve" + ], + "correct": "", + "explanation": "Explanation/Reference: You can use an IAM role to delegate access to resou rces that are in different AWS accounts that you ow n. You share resources in one account with users in a different account. By setting up cross-account acce ss in this way, you don't need to create individual IAM u sers in each account. In addition, users don't have to sign out of one account and sign into another in or der to access resources that are in different AWS accounts. You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With conso lidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You c an also get a cost report for each member account t hat is associated with your master account. Consolidate d billing is offered at no additional charge. AWS a nd AISPL accounts can't be consolidated together. The combined use of IAM and Consolidated Billing wi ll support the autonomy of each corporate division while enabling corporate IT to maintain governance and cost oversight. Hence, the correct choices are: - Enable IAM cross-account access for all corporate IT administrators in each child account - Use AWS Consolidated Billing by creating AWS Orga nizations to link the divisions' accounts to a parent corporate account Using AWS Trusted Advisor and AWS Resource Groups T ag Editor is incorrect. Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS be st practices. It only provides you alerts on areas whe re you do not adhere to best practices and tells yo u how to improve them. It does not assist in maintaining governance over your AWS accounts. Additionally, th e AWS Resource Groups Tag Editor simply allows you to add, edit, and delete tags to multiple AWS resources at once for easier identification and mon itoring. Creating separate VPCs for each division within the corporate IT AWS account. 
Launch an AWS Transit Gateway with equal-cost multipath routing ( ECMP) and VPN tunnels for intra-VPC communication is incorrect because creating separat e VPCs would not separate the divisions from each other since they will still be operating under the same account and therefore contribute to the same b illing each month. AWS Transit Gateway connects VPCs and o n-premises networks through a central hub and acts as a cloud router where each new connection is only made once. For this particular scenario, it i s suitable to use AWS Organizations instead of settin g up an AWS Transit Gateway since the objective is for maintaining administrative control of the AWS resou rces and not for network connectivity. Creating separate Availability Zones for each divis ion within the corporate IT AWS account. Improve communication between the two AZs using the AWS Global Accelerator is incorrect because you do not need to create Availability Zones. They are already provided for you by AWS right from the start, and not all services support multiple AZ dep loyments. In addition, having separate Availability Zones in your VPC does not meet the requirement of suppor ting the autonomy of each corporate division. The AWS Global Accelerator is a service that uses the A WS global network to optimize the network path from your users to your applications and not between you r Availability Zones. References: http://docs.aws.amazon.com/awsaccountbilling/latest /aboutv2/consolidated-billing.html https://docs.aws.amazon.com/IAM/latest/UserGuide/tu torial_cross-account-with-roles.html Check out this AWS Billing and Cost Management Chea t Sheet: https://tutorialsdojo.com/aws-billing-and-cost-mana gement/", + "references": "" + }, + { + "question": "A game company has a requirement of load balancing the incoming TCP traffic at the transport level (Layer 4) to their containerized gaming servers hos ted in AWS Fargate. To maintain performance, it sho uld handle millions of requests per second sent by game rs around the globe while maintaining ultra-low lat encies. Which of the following must be implemented in the c urrent architecture to satisfy the new requirement?", + "options": [ + "A. A. Launch a new microservice in AWS Fargate that acts as a load balancer since using an ALB or NLB w ith", + "B. B. Create a new record in Amazon Route 53 with We ighted Routing policy to load balance the incoming", + "D. D. Launch a new Network Load Balancer." + ], + "correct": "D. D. Launch a new Network Load Balancer.", + "explanation": "Explanation/Reference: Elastic Load Balancing automatically distributes in coming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying loa d of your application traffic in a single Availabilit y Zone or across multiple Availability Zones. Elast ic Load Balancing offers three types of load balancers that all feature the high availability, automatic scali ng, and robust security necessary to make your applications fault-tolerant. They are: Application Load Balance r, Network Load Balancer, and Classic Load Balancer Network Load Balancer is best suited for load balan cing of TCP traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) an d is capable of handling millions of requests per second while maintaining ultra-low latencies. 
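A minimal boto3 sketch of a TCP (Layer 4) Network Load Balancer with an IP-type target group, which is how Fargate tasks register as targets; the names, subnets, VPC ID, and port are hypothetical.

import boto3

elbv2 = boto3.client('elbv2')

# Layer 4 load balancer for the game servers (subnet IDs are hypothetical).
nlb = elbv2.create_load_balancer(
    Name='game-servers-nlb',
    Type='network',
    Scheme='internet-facing',
    Subnets=['subnet-0aaa1111', 'subnet-0bbb2222'],
)
nlb_arn = nlb['LoadBalancers'][0]['LoadBalancerArn']

# Fargate tasks register by IP address, so the target group uses TargetType 'ip'.
tg = elbv2.create_target_group(
    Name='game-servers-tg',
    Protocol='TCP',
    Port=7777,                      # hypothetical game port
    VpcId='vpc-0ccc3333',
    TargetType='ip',
)

# Forward all TCP traffic on the listener port to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol='TCP',
    Port=7777,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tg['TargetGroups'][0]['TargetGroupArn']}],
)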
Netwo rk Load Balancer is also optimized to handle sudden and volatile traffic patterns. Hence, the correct answer is to launch a new Networ k Load Balancer. The option that says: Launch a new Application Load Balancer is incorrect because it cannot handle TCP or Layer 4 connections, only Layer 7 (HTTP and HTTP S). The option that says: Create a new record in Amazon Route 53 with Weighted Routing policy to load balance the incoming traffic is incorrect because a lthough Route 53 can act as a load balancer by assi gning each record a relative weight that corresponds to h ow much traffic you want to send to each resource, it is still not capable of handling millions of requests per second while maintaining ultra-low latencies. Y ou have to use a Network Load Balancer instead. The option that says: Launch a new microservice in AWS Fargate that acts as a load balancer since using an ALB or NLB with Fargate is not possible is incorrect because you can place an ALB and NLB in front of your AWS Fargate cluster. References: https://aws.amazon.com/elasticloadbalancing/feature s/#compare https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/load-balancer-types.html https://aws.amazon.com/getting-started/projects/bui ld-modern-app-fargate-lambda-dynamodb-python/module - two/ Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/", + "references": "" + }, + { + "question": "A tech company is running two production web server s hosted on Reserved EC2 instances with EBS- backed root volumes. These instances have a consist ent CPU load of 90%. Traffic is being distributed t o these instances by an Elastic Load Balancer. In add ition, they also have Multi-AZ RDS MySQL databases for their production, test, and development environ ments. What recommendation would you make to reduce cost i n this AWS environment without affecting availabili ty and performance of mission-critical systems? Choose the best answer.", + "options": [ + "A. A. Consider using On-demand instances instead of Reserved EC2 instances", + "B. B. Consider using Spot instances instead of reser ved EC2 instances", + "C. C. Consider not using a Multi-AZ RDS deployment f or the development and test database", + "D. D. Consider removing the Elastic Load Balancer" + ], + "correct": "C. C. Consider not using a Multi-AZ RDS deployment f or the development and test database", + "explanation": "Explanation/Reference: One thing that you should notice here is that the c ompany is using Multi-AZ databases in all of their environments, including their development and test environment. This is costly and unnecessary as thes e two environments are not critical. It is better to use Multi-AZ for production environments to reduce costs, which is why the option that says: Consider not usi ng a Multi-AZ RDS deployment for the development and test database is the correct answer . The option that says: Consider using On-demand inst ances instead of Reserved EC2 instances is incorrect because selecting Reserved instances is c heaper than On-demand instances for long term usage due to the discounts offered when purchasing reserv ed instances. The option that says: Consider using Spot instances instead of reserved EC2 instances is incorrect because the web servers are running in a production environment. Never use Spot instances for producti on level web servers unless you are sure that they are not that critical in your system. 
This is because your spot instances can be terminated once the maximum price goes over the maximum amount that you specified. The option that says: Consider removing the Elastic Load Balancer is incorrect because the Elastic Loa d Balancer is crucial in maintaining the elasticity a nd reliability of your system. References: https://aws.amazon.com/rds/details/multi-az/ https://aws.amazon.com/pricing/cost-optimization/ Amazon RDS Overview: https://www.youtube.com/watch?v=aZmpLl8K1UU Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", + "references": "" + }, + { + "question": "A Solutions Architect is managing a three-tier web application that processes credit card payments and online transactions. Static web pages are used on t he front-end tier while the application tier contai ns a single Amazon EC2 instance that handles long-runnin g processes. The data is stored in a MySQL database . The Solutions Architect is instructed to decouple t he tiers to create a highly available application. Which of the following options can satisfy the give n requirement?", + "options": [ + "A. A. Move all the static assets and web pages to Am azon S3. Re-host the application to Amazon Elastic", + "B. B. Move all the static assets, web pages, and the backend application to a larger instance. Use Auto Scaling", + "C. C. Move all the static assets to Amazon S3. Set c oncurrency limit in AWS Lambda to move the applicat ion", + "D. D. Move all the static assets and web pages to Am azon CloudFront. Use Auto Scaling in Amazon EC2" + ], + "correct": "A. A. Move all the static assets and web pages to Am azon S3. Re-host the application to Amazon Elastic", + "explanation": "Explanation/Reference: Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS makes it easy to u se containers as a building block for your applications by eliminating the need for you to ins tall, operate, and scale your own cluster managemen t infrastructure. Amazon ECS lets you schedule long-r unning applications, services, and batch processes using Docker containers. Amazon ECS maintains appli cation availability and allows you to scale your containers up or down to meet your application's ca pacity requirements. The requirement in the scenario is to decouple the services to achieve a highly available architecture . To accomplish this requirement, you must move the exis ting set up to each AWS services. For static assets , you should use Amazon S3. You can use Amazon ECS fo r your web application and then migrate the database to Amazon RDS with Multi-AZ deployment. Decoupling you r app with application integration services allows them to remain interoperable, but if one ser vice has a failure or spike in workload, it won't a ffect the rest of them. Hence, the correct answer is: Move all the static a ssets and web pages to Amazon S3. Re-host the application to Amazon Elastic Container Service (Am azon ECS) containers and enable Service Auto Scaling. Migrate the database to Amazon RDS with Mu lti-AZ deployments configuration. The option that says: Move all the static assets to Amazon S3. Set concurrency limit in AWS Lambda to move the application to a serverless architectur e. Migrate the database to Amazon DynamoDB is incorrect because Lambda functions can't process lo ng-running processes. 
Take note that a Lambda function has a maximum processing time of 15 minute s. The option that says: Move all the static assets, w eb pages, and the backend application to a larger instance. Use Auto Scaling in Amazon EC2 instance. Migrate the database to Amazon Aurora is incorrect because static assets are more suitable a nd cost-effective to be stored in S3 instead of sto ring them in an EC2 instance. The option that says: Move all the static assets an d web pages to Amazon CloudFront. Use Auto Scaling in Amazon EC2 instance. Migrate the databas e to Amazon RDS with Multi-AZ deployments configuration is incorrect because you can't store data in Amazon CloudFront. Technically, you only st ore cache data in CloudFront, but you can't host applic ations or web pages using this service. You have to use Amazon S3 to host the static web pages and use Clou dFront as the CDN. References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/service-auto-scaling.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/Concepts.MultiAZ.html Check out this Amazon ECS Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-container- service-amazon-ecs/", + "references": "" + }, + { + "question": "A company plans to use a cloud storage service to t emporarily store its log files. The number of files to be stored is still unknown, but it only needs to be ke pt for 12 hours. Which of the following is the most cost-effective s torage class to use in this scenario?", + "options": [ + "A. A. Amazon S3 Standard-IA", + "B. B. Amazon S3 One Zone-IA", + "C. C. Amazon S3 Standard", + "D. D. Amazon S3 Glacier Deep Archive" + ], + "correct": "C. C. Amazon S3 Standard", + "explanation": "Explanation/Reference: AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Amazon Simple Storage Service (Amazon S3) is an obj ect storage service that offers industry-leading scalability, data availability, security, and perfo rmance. Amazon S3 also offers a range of storage cl asses for the objects that you store. You choose a class depending on your use case scenario and performance access requirements. All of these storage classes o ffer high durability. The scenario requires you to select a cost-effectiv e service that does not have a minimum storage duration since the data will only last for 12 hours . Among the options given, only Amazon S3 Standard has the feature of no minimum storage duration. It is also the most cost-effective storage service bec ause you will only be charged for the last 12 hours, unl ike in other storage classes where you will still b e charged based on its respective storage duration (e .g. 30 days, 90 days, 180 days). S3 Intelligent-Tie ring also has no minimum storage duration and this is de signed for data with changing or unknown access patters. S3 Standard-IA is designed for long-lived but infre quently accessed data that is retained for months o r years. Data that is deleted from S3 Standard-IA wit hin 30 days will still be charged for a full 30 day s. S3 Glacier Deep Archive is designed for long-lived but rarely accessed data that is retained for 7-10 years or more. Objects that are archived to S3 Glacier De ep Archive have a minimum of 180 days of storage, a nd objects deleted before 180 days incur a pro-rated c harge equal to the storage charge for the remaining days. Hence, the correct answer is: Amazon S3 Standard. Amazon S3 Standard-IA is incorrect because this sto rage class has a minimum storage duration of at lea st 30 days. 
Remember that the scenario requires the da ta to be kept for 12 hours only. Amazon S3 One Zone-IA is incorrect. Just like S3 St andard-IA, this storage class has a minimum storage duration of at least 30 days. Amazon S3 Glacier Deep Archive is incorrect. Althou gh it is the most cost-effective storage class amon g all other options, it has a minimum storage duratio n of at least 180 days which is only suitable for b ackup and data archival. If you store your data in Glacie r Deep Archive for only 12 hours, you will still be charged for the full 180 days. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/sto rage-class-intro.html https://aws.amazon.com/s3/storage-classes/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ S3 Standard vs S3 Standard-IA vs S3 One Zone-IA Che at Sheet: https://tutorialsdojo.com/s3-standard-vs-s3-standar d-ia-vs-s3-one-zone-ia/", + "references": "" + }, + { + "question": "A company created a VPC with a single subnet then l aunched an On-Demand EC2 instance in that subnet. You have attached an Internet gateway (IGW) to the VPC and verified that the EC2 instance has a public IP. The main route table of the VPC is as shown below: However, the instance still cannot be reached from the Internet when you tried to connect to it from y our computer. Which of the following should be made to the route table to fix this issue?", + "options": [ + "A. A. Modify the above route table: 10.0.0.0/27 -> Y our Internet Gateway", + "B. B. Add the following entry to the route table: 10 .0.0.0/27 -> Your Internet Gateway", + "C. C. Add this new entry to the route table: 0.0.0.0 /27 -> Your Internet Gateway", + "D. D. Add this new entry to the route table: 0.0.0.0 /0 -> Your Internet Gateway" + ], + "correct": "D. D. Add this new entry to the route table: 0.0.0.0 /0 -> Your Internet Gateway", + "explanation": "Explanation/Reference: Apparently, the route table does not have an entry for the Internet Gateway. This is why you cannot connect to the EC2 instance. To fix this, you have to add a route with a destination of 0.0.0.0/0 for IPv4 traffic or ::/0 for IPv6 traffic, and then a target of the Internet gateway ID (igw-xxxxxxxx). This should be the correct route table configuratio n after adding the new entry. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_Route_Tables.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A large Philippine-based Business Process Outsourci ng company is building a two-tier web application in their VPC to serve dynamic transacti on-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database but for the web tier, they are still deciding what service they will use. What AWS services should you leverage to build an elastic and scalable web tier ?", + "options": [ + "A. Amazon RDS with Multi-AZ and Auto Scaling", + "B. Elastic Load Balancing, Amazon EC2, and Auto Scal ing", + "C. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3", + "D. Amazon EC2, Amazon DynamoDB, and Amazon S3" + ], + "correct": "B. Elastic Load Balancing, Amazon EC2, and Auto Scal ing", + "explanation": "Explanation/Reference: Amazon RDS is a suitable database service for onlin e transaction processing (OLTP) applications. However, the question asks for a list of AWS servic es for the web tier and not the database tier. 
Also , when it comes to services providing scalability and elas ticity for your web tier, you should always conside r using Auto Scaling and Elastic Load Balancer. To build an elastic and a highly-available web tier , you can use Amazon EC2, Auto Scaling, and Elastic Load Balancing. You can deploy your web servers on a fleet of EC2 instances to an Auto Scaling group, which will automatically monitor your applications and automatically adjust capacity to maintain stead y, predictable performance at the lowest possible cost . Load balancing is an effective way to increase th e availability of a system. Instances that fail can b e replaced seamlessly behind the load balancer whil e other instances continue to operate. Elastic Load Balanci ng can be used to balance across instances in multi ple availability zones of a region. The rest of the options are all incorrect since the y don't mention all of the required services in bui lding a highly available and scalable web tier, such as EC2 , Auto Scaling, and Elastic Load Balancer. Although Amazon RDS with Multi-AZ and DynamoDB are highly sc alable databases, the scenario is more focused on building its web tier and not the database tier. Hence, the correct answer is Elastic Load Balancing , Amazon EC2, and Auto Scaling. The option that says: Elastic Load Balancing, Amazo n RDS with Multi-AZ, and Amazon S3 is incorrect because you can't host your web tier using Amazon S 3 since the application is doing a dynamic transact ions. Amazon S3 is only suitable if you plan to have a st atic website. The option that says: Amazon RDS with Multi-AZ and Auto Scaling is incorrect because the focus of the question is building a scalable web tier. You need a service, like EC2, in which you can run your web tier. The option that says: Amazon EC2, Amazon DynamoDB, and Amazon S3 is incorrect because you need Auto Scaling and ELB in order to scale the web tier . References: https://media.amazonwebservices.com/AWS_Building_Fa ult_Tolerant_Applications.pdf https://d1.awsstatic.com/whitepapers/aws-building-f ault-tolerant-applications.pdf https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-increase-availability.html", + "references": "" + }, + { + "question": "A game development company operates several virtual reality (VR) and augmented reality (AR) games which use various RESTful web APIs hosted on their on-premises data center. Due to the unprecedented growth of their company, they decided to migrate th eir system to AWS Cloud to scale out their resource s as well to minimize costs. Which of the following should you recommend as the most cost-effective and scalable solution to meet t he above requirement?", + "options": [ + "A. A. Use AWS Lambda and Amazon API Gateway.", + "B. B. Set up a micro-service architecture with ECS, ECR, and Fargate.", + "C. C. Use a Spot Fleet of Amazon EC2 instances, each with an Elastic Fabric Adapter (EFA) for more", + "D. D. Host the APIs in a static S3 web hosting bucke t behind a CloudFront web distribution." + ], + "correct": "A. A. Use AWS Lambda and Amazon API Gateway.", + "explanation": "Explanation Explanation/Reference: With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute. Lambda counts a request each time it starts executi ng in response to an event notification or invoke c all, including test invokes from the console. You are ch arged for the total number of requests across all y our functions. 
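As a rough illustration of this pay-per-request model, a minimal Python Lambda handler that could sit behind an API Gateway proxy integration is sketched below; the function logic and response shape are assumptions for illustration only.

```python
import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes the HTTP request details in `event`.
    # You are billed per invocation and per unit of execution time, not per server.
    game_id = (event.get("queryStringParameters") or {}).get("gameId", "unknown")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"gameId": game_id, "status": "ok"}),
    }
```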
Duration is calculated from the time your code begi ns executing until it returns or otherwise terminat es, rounded up to the nearest 1ms. The price depends on the amount of memory you allocate to your function . The Lambda free tier includes 1M free requests per month and over 400,000 GB-seconds of compute time per month. The best possible answer here is to use a combinati on of AWS Lambda and Amazon API Gateway because this solution is both scalable and cost-effective. You will only be charged when you use your Lambda function, unlike having an EC2 instance that always runs even though you don't use it. Hence, the correct answer is: Use AWS Lambda and Am azon API Gateway. Setting up a micro-service architecture with ECS, E CR, and Fargate is incorrect because ECS is mainly used to host Docker applications, and in add ition, using ECS, ECR, and Fargate alone is not scalable and not recommended for this type of scena rio. Hosting the APIs in a static S3 web hosting bucket behind a CloudFront web distribution is not a suitable option as there is no compute capability f or S3 and you can only use it as a static website. Although this solution is scalable since uses Cloud Front, the use of S3 to host the web APIs or the dy namic website is still incorrect. The option that says: Use a Spot Fleet of Amazon EC 2 instances, each with an Elastic Fabric Adapter (EFA) for more consistent latency and higher networ k throughput. Set up an Application Load Balancer to distribute traffic to the instances is incorrect. EC2 alone, without Auto Scaling, is not scalable. Even though you use Spot EC2 instance, it is still more expensive compared to Lambda because you will be charged only when your function is bein g used. An Elastic Fabric Adapter (EFA) is simply a network device that you can attach to your Amazon E C2 instance that enables you to achieve the application performance of an on-premises HPC clust er, with scalability, flexibility, and elasticity p rovided by the AWS Cloud. Although EFA is scalable, the Spo t Fleet configuration of this option doesn't have A uto Scaling involved. References: https://docs.aws.amazon.com/apigateway/latest/devel operguide/getting-started-with-lambda-integration.h tml https://aws.amazon.com/lambda/pricing/ Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/ EC2 Container Service (ECS) vs Lambda: https://tutorialsdojo.com/ec2-container-service-ecs -vs-lambda/", + "references": "" + }, + { + "question": "A computer animation film studio has a web applicat ion running on an Amazon EC2 instance. It uploads 5 GB video objects to an Amazon S3 bucket. Video uplo ads are taking longer than expected, which impacts the performance of your application. Which method will help improve the performance of t he application?", + "options": [ + "A. A. Leverage on Amazon CloudFront and use HTTP POS T method to reduce latency.", + "B. B. Use Amazon S3 Multipart Upload API.", + "C. C. Enable Enhanced Networking with the Elastic Ne twork Adapter (ENA) on your EC2 Instances.", + "D. D. Use Amazon Elastic Block Store Provisioned IOP S and an Amazon EBS-optimized instance." + ], + "correct": "B. B. Use Amazon S3 Multipart Upload API.", + "explanation": "Explanation/Reference: The main issue is the slow upload time of the video objects to Amazon S3. To address this issue, you c an use Multipart upload in S3 to improve the throughpu t. It allows you to upload parts of your object in parallel thus, decreasing the time it takes to uplo ad big objects. 
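As a sketch of how this looks in practice, the boto3 transfer manager below switches to a parallel multipart upload once the configured threshold is crossed; the bucket, key, and part-size values are illustrative assumptions.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Parts of 100 MB uploaded by up to 10 threads in parallel (illustrative values)
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)

# upload_file automatically performs a multipart upload for large files
s3.upload_file(
    Filename="render-output.mp4",    # hypothetical local file
    Bucket="studio-video-bucket",    # hypothetical bucket
    Key="uploads/render-output.mp4",
    Config=config,
)
```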
Each part is a contiguous portion o f the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, y ou can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazo n S3 assembles these parts and creates the object. In ge neral, when your object size reaches 100 MB, you sh ould consider using multipart uploads instead of uploadi ng the object in a single operation. Using multipart upload provides the following advan tages: Improved throughput - You can upload parts in paral lel to improve throughput. Quick recovery from any network issues - Smaller pa rt size minimizes the impact of restarting a failed upload due to a network error. Pause and resume object uploads - You can upload ob ject parts over time. Once you initiate a multipart upload, there is no expiry; you must explicitly com plete or abort the multipart upload. Begin an upload before you know the final object si ze - You can upload an object as you are creating i t. Enabling Enhanced Networking with the Elastic Netwo rk Adapter (ENA) on your EC2 Instances is incorrect. Even though this will improve network pe rformance, the issue will still persist since the p roblem lies in the upload time of the object to Amazon S3. You should use the Multipart upload feature instea d. Leveraging on Amazon CloudFront and using HTTP POST method to reduce latency is incorrect because CloudFront is a CDN service and is not used to expedite the upload process of objects to Amazo n S3. Amazon CloudFront is a fast content delivery ne twork (CDN) service that securely delivers data, videos, applications, and APIs to customers globall y with low latency, high transfer speeds, all withi n a developer-friendly environment. Using Amazon Elastic Block Store Provisioned IOPS a nd an Amazon EBS-optimized instance is incorrect. Although the use of Amazon Elastic Block Store Provisioned IOPS will speed up the I/O performance of the EC2 instance, the root cause is still not resolved since the primary problem here i s the slow video upload to Amazon S3. There is no network contention in the EC 2 instance. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/upl oadobjusingmpu.html http://docs.aws.amazon.com/AmazonS3/latest/dev/qfac ts.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A company deployed a web application to an EC2 inst ance that adds a variety of photo effects to a pict ure uploaded by the users. The application will put the generated photos to an S3 bucket by sending PUT requests to the S3 API. What is the best option for this scenario consideri ng that you need to have API credentials to be able to send a request to the S3 API?", + "options": [ + "A. A. Encrypt the API credentials and store in any d irectory of the EC2 instance.", + "B. B. Store the API credentials in the root web appl ication directory of the EC2 instance.", + "C. C. Store your API credentials in Amazon Glacier.", + "D. D. Create a role in IAM. Afterwards, assign this role to a new EC2 instance." + ], + "correct": "D. D. Create a role in IAM. Afterwards, assign this role to a new EC2 instance.", + "explanation": "Explanation/Reference: The best option is to create a role in IAM. Afterwa rd, assign this role to a new EC2 instance. Applica tions must sign their API requests with AWS credentials. 
Therefore, if you are an application developer, you need a strategy for managing credentials for your a pplications that run on EC2 instances. You can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests while protecting your credentials from other users. However, it's challenging to securely distribute cr edentials to each instance, especially those that A WS creates on your behalf such as Spot Instances or in stances in Auto Scaling groups. You must also be ab le to update the credentials on each instance when you rotate your AWS credentia ls. In this scenario, you have to use IAM roles so that your applications can securely make API requests f rom your instances without requiring you to manage the security credentials that the applications use. Ins tead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles. Hence, the correct answer is: Create a role in IAM. Afterwards, assign this role to a new EC2 instance . The option that says: Encrypt the API credentials a nd storing in any directory of the EC2 instance and Store the API credentials in the root web applicati on directory of the EC2 instance are incorrect. Though you can store and use the API credentials in the EC2 instance, it will be difficult to manage j ust as mentioned above. You have to use IAM Roles. The option that says: Store your API credentials in Amazon S3 Glacier is incorrect as Amazon S3 Glacier is used for data archives and not for manag ing API credentials.", + "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ iam-roles-for-amazon-ec2.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" + }, + { + "question": "A company has an application that uses multiple EC2 instances located in various AWS regions such as U S East (Ohio), US West (N. California), and EU (Irela nd). The manager instructed the Solutions Architect to set up a latency-based routing to route incoming tr affic for www.tutorialsdojo.com to all the EC2 inst ances across all AWS regions. Which of the following options can satisfy the give n requirement? A. A. Use a Network Load Balancer to distribute the load to the multiple EC2 instances across all AWS Regions.", + "options": [ + "B. B. Use AWS DataSync to distribute the load to the multiple EC2 instances across all AWS Regions.", + "C. C. Use an Application Load Balancer to distribute the load to the multiple EC2 instances across all AWS", + "D. D. Use Route 53 to distribute the load to the mul tiple EC2 instances across all AWS Regions." + ], + "correct": "D. D. Use Route 53 to distribute the load to the mul tiple EC2 instances across all AWS Regions.", + "explanation": "Explanation/Reference: If your application is hosted in multiple AWS Regio ns, you can improve performance for your users by serving their requests from the AWS Region that pro vides the lowest latency. You can create latency records for your resources i n multiple AWS Regions by using latency-based routi ng. In the event that Route 53 receives a DNS query for your domain or subdomain such as tutorialsdojo.com or portal.tutorialsdojo.com, it determines which AW S Regions you've created latency records for, determines which region gives the user the lowest l atency and then selects a latency record for that r egion. 
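A rough boto3 sketch of creating latency records is shown below; the hosted zone ID, record values, and regions are placeholders and not part of the original scenario.

```python
import boto3

route53 = boto3.client("route53")

def upsert_latency_record(region, ip_address):
    # One latency record per AWS Region; Route 53 answers with the record
    # from the Region that gives the caller the lowest latency.
    route53.change_resource_record_sets(
        HostedZoneId="Z0HYPOTHETICAL",             # placeholder hosted zone ID
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.tutorialsdojo.com",
                "Type": "A",
                "SetIdentifier": f"www-{region}",  # must be unique per record
                "Region": region,                  # enables latency-based routing
                "TTL": 60,
                "ResourceRecords": [{"Value": ip_address}],
            },
        }]},
    )

for region, ip in [("us-east-2", "203.0.113.10"),
                   ("us-west-1", "203.0.113.20"),
                   ("eu-west-1", "203.0.113.30")]:  # placeholder IP addresses
    upsert_latency_record(region, ip)
```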
Route 53 responds with the value from the selected record which can be the IP address for a web server or the CNAME of your elastic load balancer. Hence, using Route 53 to distribute the load to the multiple EC2 instances across all AWS Regions is the correct answer. Using a Network Load Balancer to distribute the loa d to the multiple EC2 instances across all AWS Regions and using an Application Load Balancer to d istribute the load to the multiple EC2 instances across all AWS Regions are both incorrect because l oad balancers distribute traffic only within their respective regions and not to other AWS regions by default. Although Network Load Balancers support connections from clients to IP-based targets in pee red VPCs across different AWS Regions, the scenario didn't mention that the VPCs are peered with each o ther. It is best to use Route 53 instead to balance the incoming load to two or more AWS regions more effec tively. Using AWS DataSync to distribute the load to the mu ltiple EC2 instances across all AWS Regions is incorrect because the AWS DataSync service simply p rovides a fast way to move large amounts of data online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). References: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-policy.html#routing-policy-latency https://docs.aws.amazon.com/Route53/latest/Develope rGuide/TutorialAddingLBRRegion.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", + "references": "" + }, + { + "question": "A commercial bank has designed its next-generation online banking platform to use a distributed systemarchitecture. As their Software Architect, you have to ensure that their architecture is highly scalab le, yet still cost-effective. Which of the following will p rovide the most suitable solution for this scenario ?", + "options": [ + "A. A. Launch an Auto-Scaling group of EC2 instances to host your application services and an", + "B. B. Launch multiple On-Demand EC2 instances to hos t your application services and an SQS", + "C. C. Launch multiple EC2 instances behind an Applic ation Load Balancer to host your", + "D. D. Launch multiple EC2 instances behind an Applic ation Load Balancer to host your" + ], + "correct": "A. A. Launch an Auto-Scaling group of EC2 instances to host your application services and an", + "explanation": "Explanation/Reference: There are three main parts in a distributed messagi ng system: the components of your distributed syste m which can be hosted on EC2 instance; your queue (di stributed on Amazon SQS servers); and the messages in the queue. To improve the scalability of your distributed syst em, you can add Auto Scaling group to your EC2 instances. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-using-sqs-queue.html https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-basic- architecture.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", + "references": "" + }, + { + "question": "A company plans to host a movie streaming app in AW S. The chief information officer (CIO) wants to ensure that the application is highly available and scalable. The application is deployed to an Auto S caling group of EC2 instances on multiple AZs. A load bala ncer must be configured to distribute incoming requests evenly to all EC2 instances across multipl e Availability Zones. 
Which of the following features should the Solution s Architect use to satisfy these criteria?", + "options": [ + "A. A. AWS Direct Connect SiteLink", + "B. B. Cross-zone load balancing", + "C. C. Amazon VPC IP Address Manager (IPAM)", + "D. D. Path-based Routing" + ], + "correct": "B. B. Cross-zone load balancing", + "explanation": "Explanation/Reference: The nodes for your load balancer distribute request s from clients to registered targets. When cross-zo ne load balancing is enabled, each load balancer node distributes traffic across the registered targets i n all enabled Availability Zones. When cross-zone load ba lancing is disabled, each load balancer node distributes traffic only across the registered targ ets in its Availability Zone. The following diagrams demonstrate the effect of cr oss-zone load balancing. There are two enabled Availability Zones, with two targets in Availabilit y Zone A and eight targets in Availability Zone B. Clients send requests, and Amazon Route 53 responds to each request with the IP address of one of the load balancer nodes. This distributes traffic such that each load balancer node receives 50% of the traffic from the clients. Each load balancer node distributes it s share of the traffic across the registered target s in its scope. If cross-zone load balancing is enabled, each of th e 10 targets receives 10% of the traffic. This is b ecause each load balancer node can route 50% of the client traffic to all 10 targets. If cross-zone load balancing is disabled: Each of the two targets in Availability Zone A rece ives 25% of the traffic. Each of the eight targets in Availability Zone B re ceives 6.25% of the traffic. This is because each load balancer node can route 5 0% of the client traffic only to targets in its Ava ilability Zone. With Application Load Balancers, cross-zone load ba lancing is always enabled. With Network Load Balancers and Gateway Load Balanc ers, cross-zone load balancing is disabled by default. After you create the load balancer, you ca n enable or disable cross-zone load balancing at an y time. When you create a Classic Load Balancer, the defaul t for cross-zone load balancing depends on how you create the load balancer. With the API or CLI, cros s-zone load balancing is disabled by default. With the AWS Management Console, the option to enable cross- zone load balancing is selected by default. After you create a Classic Load Balancer, you can enable or disable cross-zone load balancing at any time Hence, the right answer is to enable cross-zone loa d balancing. Amazon VPC IP Address Manager (IPAM) is incorrect b ecause this is merely a feature in Amazon VPC that provides network administrators with an automa ted IP management workflow. It does not enable your load balancers to distribute incoming requests even ly to all EC2 instances across multiple Availabilit y Zones. Path-based Routing is incorrect because this featur e is based on the paths that are in the URL of the request. It automatically routes traffic to a parti cular target group based on the request URL. This f eature will not set each of the load balancer nodes to dis tribute traffic across the registered targets in al l enabled Availability Zones. AWS Direct Connect SiteLink is incorrect because th is is a feature of AWS Direct Connect connection and not of Amazon Elastic Load Balancing. The AWS D irect Connect SiteLink feature simply lets you create connections between your on-premises network s through the AWS global network backbone. 
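For Network Load Balancers and Gateway Load Balancers, where the feature is off by default, the attribute can be toggled after creation, as in the hedged boto3 sketch below (the load balancer ARN is a placeholder); for Application Load Balancers this step is unnecessary because cross-zone load balancing is always enabled.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable cross-zone load balancing on an existing NLB/GWLB (placeholder ARN)
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "loadbalancer/net/movie-streaming-nlb/1234567890abcdef",
    Attributes=[
        {"Key": "load_balancing.cross_zone.enabled", "Value": "true"},
    ],
)
```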
References: https://docs.aws.amazon.com/elasticloadbalancing/la test/userguide/how-elastic-load-balancing-works.htm l https://aws.amazon.com/elasticloadbalancing/feature s https://aws.amazon.com/blogs/aws/network-address-ma nagement-and-auditing-at-scale-with-amazon-vpc-ip- address-manager/ AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO 8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/", + "references": "" + }, + { + "question": "A software development company needs to connect its on-premises infrastructure to the AWS cloud. Which of the following AWS services can you use to accomplish this? (Select TWO.)", + "options": [ + "A. A. NAT Gateway", + "B. B. VPC Peering", + "C. C. IPsec VPN connection", + "D. D. AWS Direct Connect" + ], + "correct": "", + "explanation": "Explanation/Reference: You can connect your VPC to remote networks by usin g a VPN connection which can be IPsec VPN connection, AWS VPN CloudHub, or a third party soft ware VPN appliance. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity b etween your intranet and Amazon VPC over the Intern et. AWS Direct Connect is a network service that provid es an alternative to using the Internet to connect customer's on-premises sites to AWS. AWS Direct Con nect does not involve the Internet; instead, it use s dedicated, private network connections between your intranet and Amazon VPC. Hence, IPsec VPN connection and AWS Direct Connect are the correct answers. Amazon Connect is incorrect because this is not a V PN connectivity option. It is actually a self-servi ce, cloud-based contact center service in AWS that make s it easy for any business to deliver better custom er service at a lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to pow er millions of customer conversations. VPC Peering is incorrect because this is a networki ng connection between two VPCs only, which enables you to route traffic between them privately. This c an't be used to connect your on-premises network to your VPC. NAT Gateway is incorrect because you only use a net work address translation (NAT) gateway to enable instances in a private subnet to connect to the Int ernet or other AWS services, but prevent the Intern et from initiating a connection with those instances. This is not used to connect to your on-premises network. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpn-connections.html https://aws.amazon.com/directconnect/faqs Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A web application is hosted on a fleet of EC2 insta nces inside an Auto Scaling Group with a couple of Lambda functions for ad hoc processing. Whenever yo u release updates to your application every week, there are inconsistencies where some resources are not updated properly. You need a way to group the resources together and deploy the new version of yo ur code consistently among the groups with minimal downtime. Which among these options should you do to satisfy the given requirement with the least effort? A. A. Use CodeCommit to publish your code quickly in a private repository and push them to your resourc es for fast updates.", + "options": [ + "B. B. Use deployment groups in CodeDeploy to automat e code deployments in a consistent manner.", + "C. C. 
Create CloudFormation templates that have the latest configurations and code in them.", + "D. D. Create OpsWorks recipes that will automaticall y launch resources containing the latest version of the" + ], + "correct": "B. B. Use deployment groups in CodeDeploy to automat e code deployments in a consistent manner.", + "explanation": "Explanation/Reference: CodeDeploy is a deployment service that automates a pplication deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functio ns. It allows you to rapidly release new features, update Lambda function versions, avoid downtime during app lication deployment, and handle the complexity of updating your applications, without many of the ris ks associated with error-prone manual deployments. Creating CloudFormation templates that have the lat est configurations and code in them is incorrect since this is used for provisioning and managing st acks of AWS resources based on templates you create to model your infrastructure architecture. CloudFormat ion is recommended if you want a tool for granular control over the provisioning and management of you r own infrastructure. Using CodeCommit to publish your code quickly in a private repository and pushing them to your resources for fast updates is incorrect as you main ly use CodeCommit for managing a source-control service that hosts private Git repositories. You ca n store anything from code to binaries and work seamlessly with your existing Git- based tools. CodeCommit integrates with CodePipelin e and CodeDeploy to streamline your development and release process. You could also use OpsWorks to deploy your code, ho wever, creating OpsWorks recipes that will automatically launch resources containing the lates t version of the code is still incorrect because yo u don't need to launch new resources containing your new code when you can just update the ones that are already running. References: https://docs.aws.amazon.com/codedeploy/latest/userg uide/deployment-groups.html https://docs.aws.amazon.com/codedeploy/latest/userg uide/welcome.html Check out this AWS CodeDeploy Cheat Sheet: https://tutorialsdojo.com/aws-codedeploy/ AWS CodeDeploy - Primary Components: https://www.youtube.com/watch?v=ClWBJT6k20Q Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-clou dformation-vs-opsworks-vs-codedeploy/", + "references": "" + }, + { + "question": "A global medical research company has a molecular i maging system that provides each client with frequently updated images of what is happening insi de the human body at the molecular and cellular lev els. The system is hosted in AWS and the images are host ed in an S3 bucket behind a CloudFront web distribution. When a fresh batch of images is uploa ded to S3, it is required to keep the previous ones in order to prevent them from being overwritten. Which of the following is the most suitable solutio n to solve this issue?", + "options": [ + "A. A. Use versioned objects", + "B. B. Invalidate the files in your CloudFront web di stribution", + "C. C. Add Cache-Control no-cache, no-store, or priva te directives in the S3 bucket", + "D. D. Add a separate cache behavior path for the con tent and configure a custom object caching with a" + ], + "correct": "A. A. Use versioned objects", + "explanation": "Explanation/Reference: To control the versions of files that are served fr om your distribution, you can either invalidate fil es or give them versioned file names. 
If you want to update yo ur files frequently, AWS recommends that you primarily use file versioning for the following rea sons: - Versioning enables you to control which file a re quest returns even when the user has a version cach ed either locally or behind a corporate caching proxy. If you invalidate the file, the user might continu e to see the old version until it expires from those caches. - CloudFront access logs include the names of your files, so versioning makes it easier to analyze the results of file changes. - Versioning provides a way to serve different vers ions of files to different users. - Versioning simplifies rolling forward and back be tween file revisions. - Versioning is less expensive. You still have to p ay for CloudFront to transfer new versions of your files to edge locations, but you don't have to pay for inval idating files. Invalidating the files in your CloudFront web distr ibution is incorrect because even though using invalidation will solve this issue, this solution i s more expensive as compared to using versioned obj ects. Adding a separate cache behavior path for the conte nt and configuring a custom object caching with a Minimum TTL of 0 is incorrect because this alone is not enough to solve the problem. A cache behavio r is primarily used to configure a variety of CloudFr ont functionality for a given URL path pattern for files on your website. Although this solution may work, i t is still better to use versioned objects where yo u can control which image will be returned by the system even when the user has another version cached eithe r locally or behind a corporate caching proxy. Adding Cache-Control no-cache, no-store, or private directives in the S3 bucket is incorrect because although it is right to configure your origin to ad d the Cache-Control or Expires header field, you sh ould do this to your objects and not on the entire S3 bu cket. References: https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/UpdatingExistingObjects.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/prevent-cloudfront-from-caching-files/ https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/ Invalidation.html#PayingForInvalidation Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/", + "references": "" + }, + { + "question": "A company is using an Amazon RDS for MySQL 5.6 with Multi-AZ deployment enabled and several web servers across two AWS Regions. The database is cur rently experiencing highly dynamic reads due to the growth of the company's website. The Solutions Arch itect tried to test the read performance from the secondary AWS Region and noticed a notable slowdown on the SQL queries. Which of the following options would provide a read replication latency of less than 1 second?", + "options": [ + "A. A. Use Amazon ElastiCache to improve database per formance.", + "B. B. Migrate the existing database to Amazon Aurora and create a cross-region read replica.", + "C. C. Create an Amazon RDS for MySQL read replica in the secondary AWS Region.", + "D. D. Upgrade the MySQL database engine." + ], + "correct": "B. B. Migrate the existing database to Amazon Aurora and create a cross-region read replica.", + "explanation": "Explanation/Reference: Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of tradit ional enterprise databases with the simplicity and cost- effectiveness of open source databases. 
Amazon Auro ra is up to five times faster than standard MySQL databases and three times faster than standard Post greSQL databases. It provides the security, availability, and reliabi lity of commercial databases at 1/10th the cost. Am azon Aurora is fully managed by Amazon RDS, which automa tes time-consuming administration tasks like hardware provisioning, database setup, patching, an d backups. Based on the given scenario, there is a significant slowdown after testing the read performance from t he secondary AWS Region. Since the existing setup is a n Amazon RDS for MySQL, you should migrate the database to Amazon Aurora and create a cross-region read replica. The read replication latency of less than 1 second is only possible if you would use Amazon Aurora replicas. Aurora replicas are independent endpoints in an Aurora DB cluster, best used for scaling rea d operations and increasing availability. You can cre ate up to 15 replicas within an AWS Region. Hence, the correct answer is: Migrate the existing database to Amazon Aurora and create a cross- region read replica. The option that says: Upgrade the MySQL database en gine is incorrect because upgrading the database engine wouldn't improve the read replication latenc y to milliseconds. To achieve the read replication latency of less than 1-second requirement, you need to use Amazon Aurora replicas. The option that says: Use Amazon ElastiCache to imp rove database performance is incorrect. Amazon ElastiCache won't be able to improve the database p erformance because it is experiencing highly dynami c reads. This option would be helpful if the database frequently receives the same queries. The option that says: Create an Amazon RDS for MySQ L read replica in the secondary AWS Region is incorrect because MySQL replicas won't provide y ou a read replication latency of less than 1 second . RDS Read Replicas can only provide asynchronous rep lication in seconds and not in milliseconds. You have to use Amazon Aurora replicas in this scenario . References: https://aws.amazon.com/rds/aurora/faqs/ https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/ AuroraMySQL.Replication.CrossRegion.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", + "references": "" + }, + { + "question": "A construction company has an online system that tr acks all of the status and progress of their projec ts. The system is hosted in AWS and there is a requirement to monitor the read and write IOPs metrics for thei r MySQL RDS instance and send real-time alerts to the ir DevOps team. Which of the following services in AWS can you use to meet the requirements? (Select TWO.)", + "options": [ + "A. A. Amazon Simple Queue Service", + "B. B. CloudWatch C. C. Route 53", + "D. D. SWF" + ], + "correct": "", + "explanation": "Explanation/Reference: In this scenario, you can use CloudWatch to monitor your AWS resources and SNS to provide notification . Hence, the correct answers are CloudWatch and Amazo n Simple Notification Service. Amazon Simple Notification Service (SNS) is a flexi ble, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients. Amazon CloudWatch is a monitoring service for AWS c loud resources and the applications you run on AWS. 
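As a rough sketch of how the two services work together here, the boto3 snippet below creates a CloudWatch alarm on the RDS ReadIOPS metric that publishes to an SNS topic subscribed to by the DevOps team; the instance identifier, topic ARN, threshold, and periods are assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on read IOPS of the MySQL RDS instance; a similar alarm can be
# created for WriteIOPS. The SNS topic ARN and threshold are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="rds-read-iops-high",
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "projects-mysql-db"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:devops-alerts"],
)
```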
You can use Amazon CloudWatch to collect and t rack metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. SWF is incorrect because this is mainly used for ma naging workflows and not for monitoring and notifications. Amazon Simple Queue Service is incorrect because th is is a messaging queue service and not suitable fo r this kind of scenario. Route 53 is incorrect because this is primarily use d for routing and domain name registration and management. References: http://docs.aws.amazon.com/AmazonCloudWatch/latest/ monitoring/CW_Support_For_AWS.html https://aws.amazon.com/sns/ Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/", + "references": "" + }, + { + "question": "A company has several EC2 Reserved Instances in the ir account that need to be decommissioned and shut down since they are no longer used by the developme nt team. However, the data is still required by the audit team for compliance purposes. Which of the following steps can be taken in this s cenario? (Select TWO.)", + "options": [ + "A. A. Stop all the running EC2 instances.", + "B. B. Convert the EC2 instance to On-Demand instance s", + "C. C. Take snapshots of the EBS volumes and terminat e the EC2 instances.", + "D. D. You can opt to sell these EC2 instances on the AWS Reserved Instance Marketplace" + ], + "correct": "", + "explanation": "Explanation/Reference: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-s cale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to ob tain and configure capacity with minimal friction. It provides you with complete control of your computin g resources and lets you run on Amazon's proven computing environment. The first requirement as per the scenario is to dec ommission and shut down several EC2 Reserved Instances. However, it is also mentioned that the a udit team still requires the data for compliance pu rposes. To fulfill the given requirements, you can first cr eate a snapshot of the instance to save its data an d then sell the instance to the Reserved Instance Marketplace. The Reserved Instance Marketplace is a platform tha t supports the sale of third-party and AWS customer s' unused Standard Reserved Instances, which vary in t erms of length and pricing options. For example, yo u may want to sell Reserved Instances after moving in stances to a new AWS region, changing to a new instance type, ending projects before the term expi ration, when your business needs change, or if you have unneeded capacity. Hence, the correct answers are: - You can opt to sell these EC2 instances on the AW S Reserved Instance Marketplace. - Take snapshots of the EBS volumes and terminate t he EC2 instances. The option that says: Convert the EC2 instance to O n-Demand instances is incorrect because it's stated in the scenario that the development team no longer needs several EC2 Reserved Instances. By convertin g it to On-Demand instances, the company will still h ave instances running in their infrastructure and t his will result in additional costs. The option that says: Convert the EC2 instances to Spot instances with a persistent Spot request type is incorrect because the requirement in the scenari o is to terminate or shut down several EC2 Reserved Instances. Converting the existing instances to Spo t instances will not satisfy the given requirement. 
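For reference, a minimal boto3 sketch of the snapshot-before-terminate step recommended above; the volume and instance IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Preserve the data for the audit team, then decommission the instance
snapshot = ec2.create_snapshot(
    VolumeId="vol-0abc1234def567890",                    # placeholder volume ID
    Description="Compliance copy before decommissioning",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
ec2.terminate_instances(InstanceIds=["i-0abc1234def567890"])  # placeholder ID
```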
The option that says: Stop all the running EC2 inst ances is incorrect because doing so will still incu r storage cost. Take note that the requirement in the scenario is to decommission and shut down several EC2 Reserved Instances. Therefore, this approach won't fulfill the given requirement. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ri-market-general.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-creating-snapshot.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ AWS Container Services Overview: https://www.youtube.com/watch?v=5QBgDX7O7pw", + "references": "" + }, + { + "question": "A top university has recently launched its online l earning portal where the students can take e-learni ng courses from the comforts of their homes. The porta l is on a large On-Demand EC2 instance with a singl e Amazon Aurora database. How can you improve the availability of your Aurora database to prevent any unnecessary downtime of th e online portal?", + "options": [ + "A. A. Use an Asynchronous Key Prefetch in Amazon Aur ora to improve the performance of queries that join", + "B. B. Enable Hash Joins to improve the database quer y performance.", + "C. C. Deploy Aurora to two Auto-Scaling groups of EC 2 instances across two Availability Zones with an e lastic load balancer which handles load balancing.", + "D. D. Create Amazon Aurora Replicas." + ], + "correct": "D. D. Create Amazon Aurora Replicas.", + "explanation": "Explanation/Reference: Amazon Aurora MySQL and Amazon Aurora PostgreSQL su pport Amazon Aurora Replicas, which share the same underlying volume as the primary ins tance. Updates made by the primary are visible to a ll Amazon Aurora Replicas. With Amazon Aurora MySQL, y ou can also create MySQL Read Replicas based on MySQL's binlog-based replication engine. In MySQ L Read Replicas, data from your primary instance is replayed on your replica as transactions. For most use cases, including read scaling and high availabi lity, it is recommended using Amazon Aurora Replicas. Read Replicas are primarily used for improving the read performance of the application. The most suita ble solution in this scenario is to use Multi-AZ deploy ments instead but since this option is not availabl e, you can still set up Read Replicas which you can promot e as your primary stand-alone DB cluster in the eve nt of an outage. Hence, the correct answer here is to create Amazon Aurora Replicas. Deploying Aurora to two Auto-Scaling groups of EC2 instances across two Availability Zones with an elastic load balancer which handles load balanci ng is incorrect because Aurora is a managed database engine for RDS and not deployed on typical EC2 instances that you manually provision. Enabling Hash Joins to improve the database query p erformance is incorrect because Hash Joins are mainly used if you need to join a large amount of d ata by using an equijoin and not for improving availability. Using an Asynchronous Key Prefetch in Amazon Aurora to improve the performance of queries that join tables across indexes is incorrect because the Asynchronous Key Prefetch is mainly used to improv e the performance of queries that join tables across indexes. 
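A hedged boto3 sketch of adding an Aurora Replica by creating a reader instance inside the existing Aurora cluster is shown below; the cluster identifier, instance name, and instance class are assumptions.

```python
import boto3

rds = boto3.client("rds")

# Adding another DB instance to an existing Aurora cluster creates an
# Aurora Replica that can be promoted if the writer becomes unavailable.
rds.create_db_instance(
    DBInstanceIdentifier="learning-portal-replica-1",  # hypothetical name
    DBClusterIdentifier="learning-portal-cluster",     # hypothetical cluster
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",                    # illustrative class
)
```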
References: https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/AuroraMySQL.BestPractices.html https://aws.amazon.com/rds/aurora/faqs/ Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", + "references": "" + }, + { + "question": "A global news network created a CloudFront distribu tion for their web application. However, you notice d that the application's origin server is being hit f or each request instead of the AWS Edge locations, which serve the cached objects. The issue occurs even for the commonly requested objects. What could be a possible cause of this issue?", + "options": [ + "A. A. The file sizes of the cached objects are too l arge for CloudFront to handle.", + "B. B. An object is only cached by Cloudfront once a successful request has been made hence, the objects", + "C. C. There are two primary origins configured in yo ur Amazon CloudFront Origin Group.", + "D. D. The Cache-Control max-age directive is set to zero.", + "A. A. Someone has manually deleted the record in Ama zon S3.", + "B. B. Amazon S3 bucket has encountered a data loss.", + "C. C. The access of the Kinesis stream to the S3 buc ket is insufficient.", + "D. D. By default, data records in Kinesis are only a ccessible for 24 hours from the time they are added to a" + ], + "correct": "D. D. By default, data records in Kinesis are only a ccessible for 24 hours from the time they are added to a", + "explanation": "Explanation/Reference: By default, records of a stream in Amazon Kinesis a re accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to up to 7 days by enabling extended data retention. Hence, the correct answer is: By default, data reco rds in Kinesis are only accessible for 24 hours fro m the time they are added to a stream. The option that says: Amazon S3 bucket has encounte red a data loss is incorrect because Amazon S3 rarely experiences data loss. Amazon has an SLA for S3 that it commits to its customers. Amazon S3 Standard, S3 StandardIA, S3 One Zone-IA, and S3 Gla cier are all designed to provide 99.999999999% durability of objects over a given year. This durab ility level corresponds to an average annual expect ed loss of 0.000000001% of objects. Hence, Amazon S3 bucket data loss is highly unlikely. The option that says: Someone has manually deleted the record in Amazon S3 is incorrect because if someone has deleted the data, this should have been visible in CloudTrail. Also, deleting that much da ta manually shouldn't have occurred in the first place if you have put in the appropriate security measur es. The option that says: The access of the Kinesis str eam to the S3 bucket is insufficient is incorrect because having insufficient access is highly unlike ly since you are able to access the bucket and view the contents of the previous day's data collected by Ki nesis.", + "references": "https://aws.amazon.com/kinesis/data-streams/faqs/ https://docs.aws.amazon.com/AmazonS3/latest/dev/Dat aDurability.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" + }, + { + "question": "A Solutions Architect is implementing a new High-Pe rformance Computing (HPC) system in AWS that involves orchestrating several Amazon Elastic Conta iner Service (Amazon ECS) tasks with an EC2 launch type that is part of an Amazon ECS cluster. 
The sys tem will be frequently accessed by users around the globe and it is expected that there would be hundre ds of ECS tasks running most of the time. The Archi tect must ensure that its storage system is optimized fo r high-frequency read and write operations. The out put data of each ECS task is around 10 MB but the obsol ete data will eventually be archived and deleted so the total storage size won't exceed 10 TB. Which of the following is the MOST suitable solutio n that the Architect should recommend?", + "options": [ + "A. A. Launch an Amazon Elastic File System (Amazon E FS) file system with Bursting Throughput mode and", + "B. B. Launch an Amazon Elastic File System (Amazon E FS) with Provisioned Throughput mode and set the", + "C. C. Launch an Amazon DynamoDB table with Amazon Dy namoDB Accelerator (DAX) and DynamoDB", + "D. D. Set up an SMB file share by creating an Amazon FSx File Gateway in Storage Gateway. Set the file" + ], + "correct": "B. B. Launch an Amazon Elastic File System (Amazon E FS) with Provisioned Throughput mode and set the", + "explanation": "Explanation/Reference: Amazon Elastic File System (Amazon EFS) provides si mple, scalable file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files. Your applications can hav e the storage they need when they need it. You can use Amazon EFS file systems with Amazon ECS to access file system data across your fleet of Amazon ECS tasks. That way, your tasks have access to the same persistent storage, no matter the infrastructure or container instance on which they land. When you reference your Amazon EFS file syste m and container mount point in your Amazon ECS task d efinition, Amazon ECS takes care of mounting the file system in your container. To support a wide variety of cloud storage workload s, Amazon EFS offers two performance modes: - General Purpose mode - Max I/O mode. You choose a file system's performance mode when yo u create it, and it cannot be changed. The two performance modes have no additional costs, so your Amazon EFS file system is billed and metered the same, regardless of your performance mode. There are two throughput modes to choose from for y our file system: - Bursting Throughput - Provisioned Throughput With Bursting Throughput mode, a file system's thro ughput scales as the amount of data stored in the E FS Standard or One Zone storage class grows. File-base d workloads are typically spiky, driving high level s of throughput for short periods of time, and low level s of throughput the rest of the time. To accommodat e this, Amazon EFS is designed to burst to high throu ghput levels for periods of time. Provisioned Throughput mode is available for applic ations with high throughput to storage (MiB/s per T iB) ratios, or with requirements greater than those all owed by the Bursting Throughput mode. For example, say you're using Amazon EFS for development tools, web serving, or content management applications where the amount of data in your file system is low relat ive to throughput demands. Your file system can now get the high levels of throughput your applications req uire without having to pad your file system. In the scenario, the file system will be frequently accessed by users around the globe so it is expect ed that there would be hundreds of ECS tasks running most o f the time. The Architect must ensure that its stor age system is optimized for high-frequency read and wri te operations. 
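The task-definition wiring for this kind of setup looks roughly like the boto3 sketch below, where every task mounts the same EFS file system; the family name, image, and file system ID are placeholder assumptions.

```python
import boto3

ecs = boto3.client("ecs")

# Task definition that mounts an EFS file system into the container so all
# tasks in the cluster share the same storage (placeholder values throughout).
ecs.register_task_definition(
    family="hpc-task",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "hpc-worker",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/hpc-worker:latest",
        "memory": 2048,
        "mountPoints": [{
            "sourceVolume": "shared-efs",
            "containerPath": "/mnt/efs",
        }],
    }],
    volumes=[{
        "name": "shared-efs",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",
            "rootDirectory": "/",
            "transitEncryption": "ENABLED",
        },
    }],
)
```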
Hence, the correct answer is: Launch an Amazon Elas tic File System (Amazon EFS) with Provisioned Throughput mode and set the performance mode to Max I/O. Configure the EFS file system as the container mount point in the ECS task definition of the Amazon ECS cluster. The option that says: Set up an SMB file share by c reating an Amazon FSx File Gateway in Storage Gateway. Set the file share as the container mount point in the ECS task definition of the Amazon ECS cluster is incorrect. Although you can use an A mazon FSx for Windows File Server in this situation , it is not appropriate to use this since the applica tion is not connected to an on-premises data center . Take note that the AWS Storage Gateway service is primar ily used to integrate your existing on-premises sto rage to AWS. The option that says: Launch an Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode and set the performance mode to Gen eral Purpose. Configure the EFS file system as the container mount point in the ECS task defini tion of the Amazon ECS cluster is incorrect because using Bursting Throughput mode won't be abl e to sustain the constant demand of the global application. Remember that the application will be frequently accessed by users around the world and t here are hundreds of ECS tasks running most of the time. The option that says: Launch an Amazon DynamoDB tab le with Amazon DynamoDB Accelerator (DAX) and DynamoDB Streams enabled. Configure the t able to be accessible by all Amazon ECS cluster instances. Set the DynamoDB table as the co ntainer mount point in the ECS task definition of the Amazon ECS cluster is incorrect because you can not directly set a DynamoDB table as a container mount point. In the first place, DynamoDB is a data base and not a file system which means that it can' t be \"mounted\" to a server. References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/tutorial-efs-volumes.html https://docs.aws.amazon.com/efs/latest/ug/performan ce.html https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/tutorial-wfsx-volumes.html Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/", + "references": "" + }, + { + "question": "A company has a fleet of running Spot EC2 instances behind an Application Load Balancer. The incoming traffic comes from various users across multiple AW S regions and you would like to have the user's ses sion shared among the fleet of instances. You are requir ed to set up a distributed session management layer that will provide a scalable and shared data storage for the user sessions. Which of the following would be the best choice to meet the requirement while still providing sub- millisecond latency for the users?", + "options": [ + "A. A. Multi-AZ RDS", + "B. B. Elastic ache in-memory caching", + "C. C. Multi-master DynamoDB", + "D. D. ELB sticky sessions" + ], + "correct": "B. B. Elastic ache in-memory caching", + "explanation": "Explanation/Reference: For sub-millisecond latency caching, ElastiCache is the best choice. In order to address scalability a nd to provide a shared data storage for sessions that can be accessed from any individual web server, you ca n abstract the HTTP sessions from the web servers the mselves. A common solution for this is to leverage an In-Memory Key/Value store such as Redis and Memcach ed. ELB sticky sessions is incorrect because the scenar io does not require you to route a user to the part icular web server that is managing that individual user's session. 
Since the session state is shared among th e instances, the use of the ELB sticky sessions featu re is not recommended in this scenario. Multi-master DynamoDB and Multi-AZ RDS are incorrec t. Although you can use DynamoDB and RDS for storing session state, these two are not the be st choices in terms of cost-effectiveness and perfo rmance when compared to ElastiCache. There is a significan t difference in terms of latency if you used Dynamo DB and RDS when you store the session data. References: https://aws.amazon.com/caching/session-management/ https://d0.awsstatic.com/whitepapers/performance-at -scale-with-amazon-elasticache.pdf Check out this Amazon Elasticache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/ Redis (cluster mode enabled vs disabled) vs Memcach ed: https://tutorialsdojo.com/redis-cluster-mode-enable d-vs-disabled-vs-memcached/", + "references": "" + }, + { + "question": "A company is planning to deploy a High Performance Computing (HPC) cluster in its VPC that requires a scalable, high-performance file system. The storage service must be optimized for efficient workload processing, and the data must be accessible via a f ast and scalable file system interface. It should a lso work natively with Amazon S3 that enables you to easily process your S3 data with a high-performance POSIX interface. Which of the following is the MOST suitable service that you should use for this scenario?", + "options": [ + "A. A. Amazon Elastic File System (EFS)", + "B. B. Amazon Elastic Block Storage (EBS)", + "C. C. Amazon FSx for Lustre", + "D. D. Amazon FSx for Windows File Server" + ], + "correct": "C. C. Amazon FSx for Lustre", + "explanation": "Explanation/Reference: Amazon FSx for Lustre provides a high-performance f ile system optimized for fast processing of workloads such as machine learning, high performanc e computing (HPC), video processing, financial modeling, and electronic design automation (EDA). T hese workloads commonly require data to be presented via a fast and scalable file system inter face, and typically have data sets stored on long-t erm data stores like Amazon S3. Operating high-performance file systems typically r equire specialized expertise and administrative overhead, requiring you to provision storage server s and tune complex performance parameters. With Amazon FSx, you can launch and run a file system th at provides sub-millisecond access to your data and allows you to read and write data at speeds of up t o hundreds of gigabytes per second of throughput an d millions of IOPS. Amazon FSx for Lustre works natively with Amazon S3 , making it easy for you to process cloud data sets with high-performance file systems. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allo ws you to write results back to S3. You can also us e FSx for Lustre as a standalone high-performance file sy stem to burst your workloads from on-premises to th e cloud. By copying on-premises data to an FSx for Lu stre file system, you can make that data available for fast processing by compute instances running on AWS . With Amazon FSx, you pay for only the resources you use. There are no minimum commitments, upfront hardware or software costs, or additional fees. 
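The S3 integration described above can be sketched with boto3 as follows; the bucket name, subnet ID, and storage capacity are illustrative assumptions only.

```python
import boto3

fsx = boto3.client("fsx")

# Create an FSx for Lustre file system linked to an S3 bucket (placeholders).
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                         # in GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        # Objects in the linked bucket are presented as files in the file system,
        # and results can be exported back to S3.
        "ImportPath": "s3://hpc-input-data",
        "ExportPath": "s3://hpc-input-data/results",
    },
)
print(response["FileSystem"]["DNSName"])
```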
For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for \"lift-and-shift\" busi ness-critical application workloads including home directories (user shares), media workflows, and ERP applications. It is accessible from Windows and Li nux instances via the SMB protocol. If you have Linux-b ased applications, Amazon EFS is a cloud-native ful ly managed file system that provides simple, scalable, elastic file storage accessible from Linux instanc es via the NFS protocol. For compute-intensive and fast processing workloads , like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre, provides a file system that's optimized fo r performance, with input and output stored on Amazon S3. Hence, the correct answer is: Amazon FSx for Lustre . Amazon Elastic File System (EFS) is incorrect becau se although the EFS service can be used for HPC applications, it doesn't natively work with Amazon S3. It doesn't have the capability to easily proces s your S3 data with a high-performance POSIX interface, un like Amazon FSx for Lustre. Amazon FSx for Windows File Server is incorrect bec ause although this service is a type of Amazon FSx, it does not work natively with Amazon S3. This serv ice is a fully managed native Microsoft Windows fil e system that is primarily used for your Windows-base d applications that require shared file storage to AWS. Amazon Elastic Block Storage (EBS) is incorrect bec ause this service is not a scalable, high-performan ce file system. References: https://aws.amazon.com/fsx/lustre/ https://aws.amazon.com/getting-started/use-cases/hp c/3/ Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", + "references": "" + }, + { + "question": "A Solutions Architect created a brand new IAM User with a default setting using AWS CLI. This is intended to be used to send API requests to Amazon S3, DynamoDB, Lambda, and other AWS resources of the company's cloud infrastructure. Which of the following must be done to allow the us er to make API calls to the AWS resources?", + "options": [ + "A. A. Do nothing as the IAM User is already capable of sending API calls to your AWS resources.", + "B. B. Enable Multi-Factor Authentication for the use r.", + "C. C. Create a set of Access Keys for the user and a ttach the necessary permissions.", + "D. D. Assign an IAM Policy to the user to allow it t o send API calls." + ], + "correct": "C. C. Create a set of Access Keys for the user and a ttach the necessary permissions.", + "explanation": "Explanation/Reference: You can choose the credentials that are right for y our IAM user. When you use the AWS Management Console to create a user, you must choose to at lea st include a console password or access keys. By de fault, a brand new IAM user created using the AWS CLI or A WS API has no credentials of any kind. You must create the type of credentials for an IAM user base d on the needs of your user. Access keys are long-term credentials for an IAM us er or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI o r AWS API (directly or using the AWS SDK). Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or dire ct HTTP calls using the APIs for individual AWS services. 
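As a minimal sketch of the programmatic-access flow being described, the snippet below creates a user, attaches an example managed policy, and generates an access key pair with boto3. The user name and the policy choice are placeholders, not part of the scenario.

```python
import boto3

iam = boto3.client("iam")

# "api-service-user" is a placeholder IAM user name.
iam.create_user(UserName="api-service-user")

# Attach whatever permissions the user actually needs (example managed policy).
iam.attach_user_policy(
    UserName="api-service-user",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# A user created via the CLI/API has no credentials until access keys are created.
keys = iam.create_access_key(UserName="api-service-user")["AccessKey"]
print(keys["AccessKeyId"])   # share securely; the secret key is shown only once
```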
To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret acce ss keys) for IAM users. When you create an access key, IAM r eturns the access key ID and secret access key. You should save these in a secure location and give the m to the user. The option that says: Do nothing as the IAM User is already capable of sending API calls to your AWS resources is incorrect because by default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. Take note that in t he scenario, you created the new IAM user using the AWS CLI and not via the AWS Management Console, whe re you must choose to at least include a console password or access keys when creating a new IAM use r. Enabling Multi-Factor Authentication for the user i s incorrect because this will still not provide the required Access Keys needed to send API calls to yo ur AWS resources. You have to grant the IAM user with Access Keys to meet the requirement. Assigning an IAM Policy to the user to allow it to send API calls is incorrect because adding a new IA M policy to the new user will not grant the needed Ac cess Keys needed to make API calls to the AWS resources. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id _credentials_access-keys.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id _users.html#id_users_creds Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", + "references": "" + }, + { + "question": "A company plans to implement a network monitoring s ystem in AWS. The Solutions Architect launched an EC2 instance to host the monitoring system and used CloudWatch to monitor, store, and access the log f iles of the instance. Which of the following provides an automated way to send log data to CloudWatch Logs from the Amazon EC2 instance?", + "options": [ + "A. A. AWS Transfer for SFTP", + "B. B. CloudTrail with log file validation", + "C. C. CloudWatch Logs agent", + "D. D. CloudTrail Processing Library" + ], + "correct": "", + "explanation": "Explanation/Reference: CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search the m for specific error codes or patterns, filter them based on specific fields, or archive them securely for f uture analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create c ustom computations with a powerful query language, and visualize log data in dashboards. The CloudWatch Logs agent is comprised of the follo wing components: - A plug-in to the AWS CLI that pushes log data to CloudWatch Logs. - A script (daemon) that initiates the process to p ush data to CloudWatch Logs. - A cron job that ensures that the daemon is always running. CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances hence, CloudWatch Logs agent is the c orrect answer. CloudTrail with log file validation is incorrect as this is mainly used for tracking the API calls of your AWS resources and not for sending EC2 logs to Cloud Watch. AWS Transfer for SFTP is incorrect as this is only a fully managed SFTP service for Amazon S3 used for tracking the traffic coming into the VPC and not fo r EC2 instance monitoring. 
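For reference on the CloudWatch Logs agent discussion above: the agent ultimately pushes events through the CloudWatch Logs API. A hedged boto3 sketch of those underlying calls, with made-up log group and stream names, looks like this; in practice the agent (or the newer unified CloudWatch agent) makes these calls for you.

```python
import time
import boto3

logs = boto3.client("logs")

group, stream = "/monitoring/app", "i-0123456789abcdef0"   # placeholder names

# The agent creates the log group/stream and ships events on a schedule;
# the same calls can be made directly when testing.
logs.create_log_group(logGroupName=group)
logs.create_log_stream(logGroupName=group, logStreamName=stream)

logs.put_log_events(
    logGroupName=group,
    logStreamName=stream,
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "instance heartbeat"}],
)
```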
This service enables you to easily move your file transfer workloads that use the Secure Sh ell File Transfer Protocol (SFTP) to AWS without needing to modify your applications or manage any S FTP servers. This can't be used to send log data fr om your EC2 instance to Amazon CloudWatch. CloudTrail Processing Library is incorrect because this is just a Java library that provides an easy w ay to process AWS CloudTrail logs. It cannot send your lo g data to CloudWatch Logs. References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/WhatIsCloudWatchLogs.html https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/AgentReference.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/", + "references": "" + }, + { + "question": "A cryptocurrency company wants to go global with it s international money transfer app. Your project is to make sure that the database of the app is highly av ailable in multiple regions. What are the benefits of adding Multi-AZ deployment s in Amazon RDS? (Select TWO.)", + "options": [ + "A. A. Provides SQL optimization.", + "B. B. Increased database availability in the case of system upgrades like OS patching or DB Instance sc aling.", + "C. C. Provides enhanced database durability in the e vent of a DB instance component failure or an Avail ability", + "D. D. Significantly increases the database performan ce.", + "A. A. Do nothing. The architecture is already secure because the access keys are already in the Amazon", + "B. B. Remove the stored access keys in the AMI. Crea te a new IAM role with permissions to access the", + "C. C. Put the access keys in an Amazon S3 bucket ins tead.", + "D. D. Put the access keys in Amazon Glacier instead." + ], + "correct": "B. B. Remove the stored access keys in the AMI. Crea te a new IAM role with permissions to access the", + "explanation": "Explanation/Reference: You should use an IAM role to manage temporary cred entials for applications that run on an EC2 instanc e. When you use an IAM role, you don't have to distrib ute long-term credentials (such as a user name and password or access keys) to an EC2 instance. Instead, the role supplies temporary permissions th at applications can use when they make calls to oth er AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance . Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests. Hence, the best option here is to remove the stored access keys first in the AMI. Then, create a new I AM role with permissions to access the DynamoDB table and a ssign it to the EC2 instances. Putting the access keys in Amazon Glacier or in an Amazon S3 bucket are incorrect because S3 and Glacier are mainly used as a storage option. It is better to use an IAM role instead of storing access keys in these storage services. The option that says: Do nothing. The architecture is already secure because the access keys are already in the Amazon Machine Image is incorrect be cause you can make the architecture more secure by using IAM.", + "references": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id _roles_use_switch-role-ec2.html Check out this AWS Identity & Access Management (IA M) Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/" + }, + { + "question": "A tech company is having an issue whenever they try to connect to the newly created EC2 instance using a Remote Desktop connection from a computer. 
Upon che cking, the Solutions Architect has verified that th e instance has a public IP and the Internet gateway a nd route tables are in place. What else should he do to resolve this issue?", + "options": [ + "A. A. You should restart the EC2 instance since ther e might be some issue with the instance", + "B. B. Adjust the security group to allow inbound tra ffic on port 3389 from the company's IP address.", + "C. C. Adjust the security group to allow inbound traff ic on port 22 from the company's IP address. D. D. You should create a new instance since there mig ht be some issue with the instance" + ], + "correct": "B. B. Adjust the security group to allow inbound tra ffic on port 3389 from the company's IP address.", + "explanation": "Explanation/Reference: Since you are using a Remote Desktop connection to access your EC2 instance, you have to ensure that t he Remote Desktop Protocol is allowed in the security group. By default, the server listens on TCP port 3 389 and UDP port 3389. Hence, the correct answer is: Adjust the security g roup to allow inbound traffic on port 3389 from the company's IP address. The option that says: Adjust the security group to allow inbound traffic on port 22 from the company's IP address is incorrect as port 22 is use d for SSH connections and not for RDP. The options that say: You should restart the EC2 in stance since there might be some issue with the instance and You should create a new instance since there might be some issue with the instance are incorrect as the EC2 instance is newly created and hence, unlikely to cause the issue. You have to che ck the security group first if it allows the Remote Deskto p Protocol (3389) before investigating if there is indeed an issue on the specific instance.", + "references": "https://docs.aws.amazon.com/AWSEC2/latest/WindowsGu ide/troubleshooting-windows-instances.html#rdp- issues https://docs.aws.amazon.com/vpc/latest/userguide/VP C_SecurityGroups.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" + }, + { + "question": "A Solutions Architect is working for a weather stat ion in Asia with a weather monitoring system that n eeds to be migrated to AWS. Since the monitoring system requires a low network latency and high network throughput, the Architect decided to launch the EC2 instances to a new cluster placement group. The system was working fine for a couple of weeks, howe ver, when they try to add new instances to the placement group that already has running EC2 instan ces, they receive an 'insufficient capacity error'. How will the Architect fix this issue?", + "options": [ + "A. A. Stop and restart the instances in the Placemen t Group and then try the launch again.", + "B. B. Verify all running instances are of the same siz e and type and then try the launch again. C. C. Create another Placement Group and launch the ne w instances in the new group.", + "D. D. Submit a capacity increase request to AWS as y ou are initially limited to only 12 instances" + ], + "correct": "A. A. Stop and restart the instances in the Placemen t Group and then try the launch again.", + "explanation": "Explanation/Reference: A cluster placement group is a logical grouping of instances within a single Availability Zone. A clus ter placement group can span peered VPCs in the same Re gion. Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network. 
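A minimal boto3 sketch of the cluster placement group guidance above, launching all required instances of one type in a single request; the AMI ID, instance type, and count are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the cluster placement group once.
ec2.create_placement_group(GroupName="weather-hpc", Strategy="cluster")

# Launch every instance you need in one request, all of the same type,
# to reduce the chance of an insufficient capacity error.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "weather-hpc"},
)
```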
It is recommended that you launch the number of ins tances that you need in the placement group in a si ngle launch request and that you use the same instance t ype for all instances in the placement group. If yo u try to add more instances to the placement group later, or if you try to launch more than one instance type i n the placement group, you increase your chances of getti ng an insufficient capacity error. If you stop an i nstance in a placement group and then start it again, it st ill runs in the placement group. However, the start fails if there isn't enough capacity for the instance. If you receive a capacity error when launching an i nstance in a placement group that already has runni ng instances, stop and start all of the instances in t he placement group, and try the launch again. Resta rting the instances may migrate them to hardware that has cap acity for all the requested instances. Stop and restart the instances in the Placement gro up and then try the launch again can resolve this i ssue. If the instances are stopped and restarted, AWS may mo ve the instances to a hardware that has the capacit y for all the requested instances. Hence, the correct answer is: Stop and restart the instances in the Placement Group and then try the launch again. The option that says: Create another Placement Grou p and launch the new instances in the new group is incorrect because to benefit from the enhanced n etworking, all the instances should be in the same Placement Group. Launching the new ones in a new Pl acement Group will not work in this case. The option that says: Verify all running instances are of the same size and type and then try the laun ch again is incorrect because the capacity error is no t related to the instance size. The option that says: Submit a capacity increase re quest to AWS as you are initially limited to only 1 2 instances per Placement Group is incorrect because there is no such limit on the number of instances i n a Placement Group. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /placement-groups.html#placement-groups-cluster http://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGu ide/troubleshooting-launch.html#troubleshooting- launch-capacity Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", + "references": "" + }, + { + "question": "There is a technical requirement by a financial fir m that does online credit card processing to have a secure application environment on AWS. They are try ing to decide on whether to use KMS or CloudHSM. Which of the following statements is righ t when it comes to Cloud HSM and KMS?", + "options": [ + "A. A. If you want a managed service for creating and controlling your encryption keys but don't want or need to", + "B. B. AWS Cloud HSM should always be used for any pa yment transactions.", + "C. C. You should consider using AWS Cloud HSM over A WS KMS if you require your keys stored in dedicated ,", + "D. D. No major difference. They both do the same thi ng.", + "A. A. Ingest the data using Amazon Simple Queue Serv ice and create an AWS Lambda function to store the", + "B. B. Ingest the data using Amazon Kinesis Data Fire hose and create an AWS Lambda function to store the", + "C. C. Ingest the data using Amazon Kinesis Data Stre ams and create an AWS Lambda function to store the", + "D. D. Ingest the data using Amazon Kinesis Data Stre ams and create an AWS Lambda function to store the" + ], + "correct": "C. C. 
Ingest the data using Amazon Kinesis Data Stre ams and create an AWS Lambda function to store the", + "explanation": "Explanation/Reference: Amazon Kinesis Data Streams enables you to build cu stom applications that process or analyze streaming data for specialized needs. You can continuously ad d various types of data such as clickstreams, appli cation logs, and social media to an Amazon Kinesis data st ream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream. Based on the given scenario, the key points are \"in gest and analyze the data in real-time\" and \"millis econd response times\". For the first key point and based on the given options, you can use Amazon Kinesis Da ta Streams because it can collect and process large st reams of data records in real-time. For the second key point, you should use Amazon DynamoDB since it supp orts single-digit millisecond response times at any scale. Hence, the correct answer is: Ingest the data using Amazon Kinesis Data Streams and create an AWS Lambda function to store the data in Amazon DynamoD B. The option that says: Ingest the data using Amazon Kinesis Data Streams and create an AWS Lambda function to store the data in Amazon Redshift is in correct because Amazon Redshift only delivers sub- second response times. Take note that as per the sc enario, the solution must have millisecond response times to meet the requirements. Amazon DynamoDB Acc elerator (DAX), which is a fully managed, highly available, in-memory cache for Amazon DynamoDB, can deliver microsecond response times. The option that says: Ingest the data using Amazon Kinesis Data Firehose and create an AWS Lambda function to store the data in Amazon DynamoD B is incorrect. Amazon Kinesis Data Firehose only supports Amazon S3, Amazon Redshift, Amazon El asticsearch, and an HTTP endpoint as the destination. The option that says: Ingest the data using Amazon Simple Queue Service and create an AWS Lambda function to store the data in Amazon Redshif t is incorrect because Amazon SQS can't analyze data in real-time. You have to use an Amazon Kinesi s Data Stream to process the data in near-real-time . References: https://aws.amazon.com/kinesis/data-streams/faqs/ https://aws.amazon.com/dynamodb/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/", + "references": "" + }, + { + "question": "A software development company has hundreds of Amaz on EC2 instances with multiple Application Load Balancers (ALBs) across multiple AWS Regions. The p ublic applications hosted in their EC2 instances ar e accessed on their on-premises network. The company needs to reduce the number of IP addresses that it needs to regularly whitelist on the corporate firew all device. Which of the following approach can be used to fulf ill this requirement?", + "options": [ + "A. A. Create a new Lambda function that tracks the c hanges in the IP addresses of all ALBs across multi ple", + "B. B. Use AWS Global Accelerator and create an endpo int group for each AWS Region. Associate the", + "C. C. Use AWS Global Accelerator and create multiple endpoints for all the available AWS Regions. Assoc iate", + "D. D. Launch a Network Load Balancer with an associa ted Elastic IP address. Set the ALBs in multiple" + ], + "correct": "B. B. Use AWS Global Accelerator and create an endpo int group for each AWS Region. 
Associate the", "explanation": "Explanation/Reference: AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances. When the application usage grows, the number of IP addresses and endpoints that you need to manage also increases. AWS Global Accelerator allows you to scale your network up or down. AWS Global Accelerator lets you associate regional resources, such as load balancers and EC2 instances, to two static IP addresses. You only whitelist these addresses once in your client applications, firewalls, and DNS records. With AWS Global Accelerator, you can add or remove endpoints in the AWS Regions, run blue/green deployments, and A/B test without needing to update the IP addresses in your client applications. This is particularly useful for IoT, retail, media, automotive, and healthcare use cases in which client applications cannot be updated frequently. If you have multiple resources in multiple regions, you can use AWS Global Accelerator to reduce the number of IP addresses. By creating an endpoint group, you can add all of your EC2 instances from a single region in that group. You can add additional endpoint groups for instances in other regions. After that, you can associate the appropriate ALB endpoints to each of your endpoint groups. The created accelerator would have two static IP addresses that you can use to create a security rule in your firewall device. Instead of regularly adding the Amazon EC2 IP addresses in your firewall, you can use the static IP addresses of AWS Global Accelerator to automate the process and eliminate this repetitive task. Hence, the correct answer is: Use AWS Global Accelerator and create an endpoint group for each AWS Region. Associate the Application Load Balancer from each region to the corresponding endpoint group. The option that says: Use AWS Global Accelerator and create multiple endpoints for all the available AWS Regions. Associate all the private IP addresses of the EC2 instances to the corresponding endpoints is incorrect. It is better to create one endpoint group instead of multiple endpoints. Moreover, you have to associate the ALBs in AWS Global Accelerator and not the underlying EC2 instances. The option that says: Create a new Lambda function that tracks the changes in the IP addresses of all ALBs across multiple AWS Regions. Schedule the function to run and update the corporate firewall every hour using Amazon CloudWatch Events is incorrect because this approach entails a lot of administrative overhead and takes a significant amount of time to implement. Using a custom Lambda function is actually not necessary since you can simply use AWS Global Accelerator to achieve this requirement. The option that says: Launch a Network Load Balancer with an associated Elastic IP address. Set the ALBs in multiple Regions as targets is incorrect. Although you can allocate an Elastic IP address to your ELB, it is not suitable to route traffic to your ALBs across multiple Regions. You have to use AWS Global Accelerator instead.
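A hedged boto3 sketch of the accelerator, listener, and per-Region endpoint group described above; the ALB ARN, account ID, and Region are placeholders.

```python
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="public-apps", IpAddressType="IPV4")
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each pointing at that Region's ALB (placeholder ARN).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="ap-southeast-1",
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:ap-southeast-1:111122223333:loadbalancer/app/web/abc123"}
    ],
)

# The two static IP addresses to whitelist on the corporate firewall:
print(accelerator["Accelerator"]["IpSets"][0]["IpAddresses"])
```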
References: https://docs.aws.amazon.com/global-accelerator/late st/dg/about-endpoint-groups.html https://aws.amazon.com/global-accelerator/faqs/ https://docs.aws.amazon.com/global-accelerator/late st/dg/introduction-how-it-works.html Check out this AWS Global Accelerator Cheat Sheet: https://tutorialsdojo.com/aws-global-accelerator/", + "references": "" + }, + { + "question": "A Solutions Architect needs to create a publicly ac cessible EC2 instance by using an Elastic IP (EIP) address and generate a report on how much it will c ost to use that EIP. Which of the following statements is correct regard ing the pricing of EIP?", + "options": [ + "A. A. There is no cost if the instance is running an d it has only one associated EIP.", + "B. B. There is no cost if the instance is terminated and it has only one associated EIP.", + "C. C. There is no cost if the instance is running an d it has at least two associated EIP.", + "D. D. There is no cost if the instance is stopped an d it has only one associated EIP." + ], + "correct": "A. A. There is no cost if the instance is running an d it has only one associated EIP.", + "explanation": "Explanation/Reference: An Elastic IP address doesn't incur charges as long as the following conditions are true: - The Elastic IP address is associated with an Amaz on EC2 instance. - The instance associated with the Elastic IP addre ss is running. - The instance has only one Elastic IP address atta ched to it. If you've stopped or terminated an EC2 instance wit h an associated Elastic IP address and you don't ne ed that Elastic IP address anymore, consider disassoci ating or releasing the Elastic IP address.", + "references": "https://aws.amazon.com/premiumsupport/knowledge-cen ter/elastic-ip-charges/" + }, + { + "question": "A startup is building a microservices architecture in which the software is composed of small independ ent services that communicate over well-defined APIs. I n building large-scale systems, fine-grained decoup ling of microservices is a recommended practice to implemen t. The decoupled services should scale horizontally from each other to improve scalability . What is the difference between Horizontal scaling a nd Vertical scaling?", + "options": [ + "A. A. Vertical scaling means running the same softwa re on a fully serverless architecture using", + "B. B. Horizontal scaling means running the same soft ware on smaller containers such as Docker", + "C. C. Horizontal scaling means running the same soft ware on bigger machines which is limited by", + "D. D. Vertical scaling means running the same softwa re on bigger machines which is limited by" + ], + "correct": "D. D. Vertical scaling means running the same softwa re on bigger machines which is limited by", + "explanation": "Explanation/Reference: Vertical scaling means running the same software on bigger machines which is limited by the capacity o f the individual server. Horizontal scaling is adding more servers to the existing pool and doesn't run into certkingd limitations of individual servers. Fine-grained decoupling of microservices is a best practice for building large-scale systems. It's a p rerequisite for performance optimization since it allows choosi ng the appropriate and optimal technologies for a specific service. Each service can be impleme nted with the appropriate programming languages and frameworks, leverage the optimal data persistence s olution, and be fine-tuned with the best performing service configurations. 
Properly decoupled services can be scaled horizonta lly and independently from each other. Vertical sca ling, which is running the same software on bigger machin es, is limited by the capacity of individual server s and can incur downtime during the scaling process. Hori zontal scaling, which is adding more servers to the existing pool, is highly dynamic and doesn't run in to limitations of individual servers. The scaling p rocess can be completely automated. Furthermore, the resiliency of the application can be improved because failing components can be easil y and automatically replaced. Hence, the correct answ er is the option that says: Vertical scaling means running the same software on bigger machines which is limited by the capacity of the individual server. Horizontal scaling is adding more servers t o the existing pool and doesn't run into limitation s of individual servers. The option that says: Vertical scaling means runnin g the same software on a fully serverless architecture using Lambda. Horizontal scaling means adding more servers to the existing pool and it doesn't run into limitations of individual servers is incorrect because Vertical scaling is not about running the same software on a fully serverless arc hitecture. AWS Lambda is not required for scaling. The option that says: Horizontal scaling means runn ing the same software on bigger machines which is limited by the capacity of individual servers. V ertical scaling is adding more servers to the exist ing pool and doesn't run into limitations of individual servers is incorrect because the definitions for the two co ncepts were switched. Vertical scaling means running the same s oftware on bigger machines which is limited by the capacity of the individual server . Horizontal scaling is adding more servers to the existing pool and doesn't run into limitations of individual serv ers. The option that says: Horizontal scaling means runn ing the same software on smaller containers such as Docker and Kubernetes using ECS or EKS. Vertical scaling is adding more servers to the existing pool and doesn't run into limitations of individual servers is incorrect because Horizontal scaling is not related to using ECS or EKS containers on a smaller instance. Reference: https://docs.aws.amazon.com/aws-technical-content/l atest/microservices-on-aws/microservices-on- aws.pdf#page=8", + "references": "" + }, + { + "question": "A new DevOps engineer has created a CloudFormation template for a web application and she raised a pull request in GIT for you to check a nd review. After checking the template, you immediately told her that the template will not wor k. Which of the following is the reason why this CloudFormation template will fail to deploy the sta ck? { \"AWSTemplateFormatVersion\":\"2010-09-09\", \"Parameters\":{ \"VPCId\":{ \"Type\":\"String\", \"Description\":\"manila\" }, \"SubnetId\":{ \"Type\":\"String\", \"Description\":\"subnet-b46032ec\" } }, \"Outputs\":{ \"InstanceId\":{ \"Value\":{ \"Ref\":\"manilaInstance\" }, \"Description\":\"Instance Id\" } } }", + "options": [ + "A. A. The Resources section is missing.", + "B. B. The Conditions section is missing.", + "C. C. An invalid section named Parameters is present . This will cause the CloudFormation stack to fail.", + "D. D. The value of the AWSTemplateFormatVersion is i ncorrect. It should be 2017-06-06." + ], + "correct": "A. A. 
The Resources section is missing.", + "explanation": "Explanation/Reference: In CloudFormation, a template is a JSON or a YAML-f ormatted text file that describes your AWS infrastructure. Templates include several major sec tions. The Resources section is the only required s ection. Some sections in a template can be in any order. Ho wever, as you build your template, it might be help ful to use the logical ordering of the following list, as values in one section might refer to values from a previous section. Take note that all of the section s here are optional, except for Resources, which is the only one required. - Format Version - Description - Metadata - Parameters - Mappings - Conditions - Transform - Resources (required) - Outputs", + "references": "http://docs.aws.amazon.com/AWSCloudFormation/latest /UserGuide/template-anatomy.html Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/ AWS CloudFormation - Templates, Stacks, Change Sets : https://www.youtube.com/watch?v=9Xpuprxg7aY" + }, + { + "question": "An online shopping platform has been deployed to AW S using Elastic Beanstalk. They simply uploaded their Node.js application, and Elastic Beanstalk au tomatically handles the details of capacity provisi oning, load balancing, scaling, and application health mon itoring. Since the entire deployment process is automated, the DevOps team is not sure where to get the application log files of their shopping platfo rm. In Elastic Beanstalk, where does it store the appli cation files and server log files?", + "options": [ + "A. A. Application files are stored in S3. The server log files can only be stored in the attached EBS v olumes of", + "B. B. Application files are stored in S3. The server log files can also optionally be stored in S3 or i n", + "C. C. Application files are stored in S3. The server log files can be stored directly in Glacier or in CloudWatch", + "D. D. Application files are stored in S3. The server log files can be optionally stored in CloudTrail o r in" + ], + "correct": "B. B. Application files are stored in S3. The server log files can also optionally be stored in S3 or i n", + "explanation": "Explanation/Reference: AWS Elastic Beanstalk stores your application files and optionally, server log files in Amazon S3. If you are using the AWS Management Console, the AWS Toolk it for Visual Studio, or AWS Toolkit for Eclipse, an Amazon S3 bucket will be created in your account and the files you upload will be automatically cop ied from your local client to Amazon S3. Optionally, yo u may configure Elastic Beanstalk to copy your serv er log files every hour to Amazon S3. You do this by e diting the environment configuration settings. Thus, the correct answer is the option that says: A pplication files are stored in S3. The server log f iles can also optionally be stored in S3 or in CloudWatc h Logs. With CloudWatch Logs, you can monitor and archive y our Elastic Beanstalk application, system, and custom log files from Amazon EC2 instances of your environments. You can also configure alarms that make it easier for you to react to specific log str eam events that your metric filters extract. The Cl oudWatch Logs agent installed on each Amazon EC2 instance in your environment publishes metric data points to t he CloudWatch service for each log group you configure . Each log group applies its own filter patterns to determine what log stream events to send to CloudWa tch as data points. 
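Tying this back to the failing template in the previous question, a corrected minimal version with the required Resources section might look like the sketch below; the AMI ID and instance type are placeholder assumptions, and validate_template is just one way to check it before creating a stack.

```python
import json
import boto3

# Minimal corrected template: the required Resources section is present and
# the Outputs "Ref" points at a resource that actually exists.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "VPCId": {"Type": "String", "Description": "manila"},
        "SubnetId": {"Type": "String", "Description": "subnet-b46032ec"},
    },
    "Resources": {
        "manilaInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
                "InstanceType": "t3.micro",
                "SubnetId": {"Ref": "SubnetId"},
            },
        }
    },
    "Outputs": {
        "InstanceId": {"Value": {"Ref": "manilaInstance"}, "Description": "Instance Id"}
    },
}

boto3.client("cloudformation").validate_template(TemplateBody=json.dumps(template))
```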
Log streams that belong to the same log group share the same retention, monitoring , and access control settings. You can configure El astic Beanstalk to automatically stream logs to the Cloud Watch service. The option that says: Application files are stored in S3. The server log files can only be stored in t he attached EBS volumes of the EC2 instances, which were launch ed by AWS Elastic Beanstalk is incorrect because the server log files can also be stored in either S3 or CloudWatch Logs, and not onl y on the EBS volumes of the EC2 instances which are laun ched by AWS Elastic Beanstalk. The option that says: Application files are stored in S3. The server log files can be stored directly in Glacier or in CloudWatch Logs is incorrect because the server log files can optionally be stored in ei ther S3 or CloudWatch Logs, but not directly to Glacier. You can create a lifecycle policy to the S3 bucket to store the server logs and archive it in Glacier, bu t there is no direct way of storing the server logs to Glacier using Elastic Beanstalk unless you do it programmat ically. The option that says: Application files are stored in S3. The server log files can be optionally store d in CloudTrail or in CloudWatch Logs is incorrect becau se the server log files can optionally be stored in either S3 or CloudWatch Logs, but not directly to C loudTrail as this service is primarily used for aud iting API calls.", + "references": "https://aws.amazon.com/elasticbeanstalk/faqs/ AWS Elastic Beanstalk Overview: https://www.youtube.com/watch?v=rx7e7Fej1Oo Check out this AWS Elastic Beanstalk Cheat Sheet: https://tutorialsdojo.com/aws-elastic-beanstalk/" + }, + { + "question": "A Solutions Architect is trying to enable Cross-Reg ion Replication to an S3 bucket but this option is disabled. Which of the following options is a valid reason for this?", + "options": [ + "A. A. In order to use the Cross-Region Replication f eature in S3, you need to first enable versioning o n the", + "B. B. The Cross-Region Replication feature is only ava ilable for Amazon S3 - One Zone-IA C. C. The Cross-Region Replication feature is only ava ilable for Amazon S3 - Infrequent Access.", + "D. D. This is a premium feature which is only for AW S Enterprise accounts." + ], + "correct": "A. A. In order to use the Cross-Region Replication f eature in S3, you need to first enable versioning o n the", + "explanation": "Explanation/Reference: To enable the cross-region replication feature in S 3, the following items should be met: The source and destination buckets must have versio ning enabled. The source and destination buckets must be in diffe rent AWS Regions. Amazon S3 must have permissions to replicate object s from that source bucket to the destination bucket on your behalf. The options that say: The Cross-Region Replication feature is only available for Amazon S3 - One Zone-IA and The Cross-Region Replication feature is only available for Amazon S3 - Infrequent Access are incorrect as this feature is available t o all types of S3 classes. The option that says: This is a premium feature whi ch is only for AWS Enterprise accounts is incorrect as this CRR feature is available to all Support Pla ns. 
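As a rough boto3 sketch of the replication prerequisites above (versioning on both buckets plus a replication configuration); the bucket names and IAM role ARN are placeholders, and the destination bucket is assumed to live in a different Region.

```python
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on BOTH buckets before replication can be configured.
for bucket in ("source-bucket-tdojo", "destination-bucket-tdojo"):
    s3.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
    )

# The role must allow Amazon S3 to replicate objects on your behalf.
s3.put_bucket_replication(
    Bucket="source-bucket-tdojo",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-tdojo"},
            }
        ],
    },
)
```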
References: https://docs.aws.amazon.com/AmazonS3/latest/dev/crr .html https://aws.amazon.com/blogs/aws/new-cross-region-r eplication-for-amazon-s3/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "An online stock trading system is hosted in AWS and uses an Auto Scaling group of EC2 instances, an RDS database, and an Amazon ElastiCache for Redis. You need to improve the data security of your in- memory data store by requiring the user to enter a password before they are granted permission to exec ute Redis commands. Which of the following should you do to meet the ab ove requirement?", + "options": [ + "A. A. Do nothing. This feature is already enabled by default.", + "B. B. Enable the in-transit encryption for Redis rep lication groups.", + "C. C. Create a new Redis replication group and set t he AtRestEncryptionEnabled parameter to true.", + "D. D. None of the above." + ], + "correct": "", + "explanation": "Explanation/Reference: Using Redis AUTH command can improve data security by requiring the user to enter a password before they are granted permission to execute Redis comman ds on a password-protected Redis server. Hence, the correct answer is to authenticate the us ers using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled. To require that users enter a password on a passwor d-protected Redis server, include the parameter --a uth- token with the correct password when you create you r replication group or cluster and on all subsequen t commands to the replication group or cluster. Enabling the in-transit encryption for Redis replic ation groups is incorrect because although in-trans it encryption is part of the solution, it is missing t he most important thing which is the Redis AUTH opt ion. Creating a new Redis replication group and setting the AtRestEncryptionEnabled parameter to true is incorrect because the Redis At-Rest Encryption f eature only secures the data inside the in-memory d ata store. You have to use Redis AUTH option instead. The option that says: Do nothing. This feature is a lready enabled by default is incorrect because the Redis AUTH option is disabled by default. References: https://docs.aws.amazon.com/AmazonElastiCache/lates t/red-ug/auth.html https://docs.aws.amazon.com/AmazonElastiCache/lates t/red-ug/encryption.html Check out this Amazon ElastiCache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/ Redis Append-Only Files vs Redis Replication: https://tutorialsdojo.com/redis-append-only-files-v s-redis-replication/", + "references": "" + }, + { + "question": "A mobile application stores pictures in Amazon Simp le Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for thi s scenario?", + "options": [ + "A. A. SAML-based Identity Federation", + "B. B. Web Identity Federation", + "C. C. Cross-Account Access", + "D. D. AWS Identity and Access Management roles" + ], + "correct": "B. B. Web Identity Federation", + "explanation": "Explanation/Reference: With web identity federation, you don't need to cre ate custom sign-in code or manage your own user identities. 
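Returning to the Redis AUTH answer above, a hedged boto3 sketch of creating a password-protected replication group with in-transit encryption follows; the group ID, node type, and token value are placeholders.

```python
import boto3

elasticache = boto3.client("elasticache")

# AUTH requires in-transit encryption; the token value here is only a placeholder.
elasticache.create_replication_group(
    ReplicationGroupId="trading-sessions",
    ReplicationGroupDescription="Password-protected Redis for the trading app",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,                      # --transit-encryption-enabled
    AuthToken="str0ng-placeholder-token-1234567890",    # --auth-token
)
```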
Instead, users of your app can sign in using a well-known identity provider (IdP) --such a s Login with Amazon, Facebook, Google, or any other O penID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources i n your AWS account. Using an IdP helps you keep you r AWS account secure because you don't have to embed and distribute long-term security credentials with your application.", + "references": "http://docs.aws.amazon.com/IAM/latest/UserGuide/id_ roles_providers_oidc.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/" + }, + { + "question": "A web application, which is hosted in your on-premi ses data center and uses a MySQL database, must be migrated to AWS Cloud. You need to ensure that the network traffic to and from your RDS database instance is encrypted using SSL. For improved secur ity, you have to use the profile credentials specif ic to your EC2 instance to access your database, instead of a password. Which of the following should you do to meet the ab ove requirement?", + "options": [ + "A. A. Launch a new RDS database instance with the Ba cktrack feature enabled.", + "B. B. Set up an RDS database and enable the IAM DB A uthentication.", + "C. C. Configure your RDS database to enable encrypti on.", + "D. D. Launch the mysql client using the --ssl-ca par ameter when connecting to the database." + ], + "correct": "B. B. Set up an RDS database and enable the IAM DB A uthentication.", + "explanation": "Explanation Explanation/Reference: You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works w ith MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you c onnect to a DB instance. Instead, you use an authentication token. An authentication token is a unique string of chara cters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signa ture Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials i n the database, because authentication is managed externally using IAM. You can also still use standa rd database authentication. IAM database authentication provides the following benefits: - Network traffic to and from the database is encry pted using Secure Sockets Layer (SSL). - You can use IAM to centrally manage access to you r database resources, instead of managing access individually on each DB instance. - For applications running on Amazon EC2, you can u se profile credentials specific to your EC2 instanc e to access your database instead of a password, for gre ater security Hence, setting up an RDS database and enable the IA M DB Authentication is the correct answer based on the above reference. Launching a new RDS database instance with the Back track feature enabled is incorrect because the Backtrack feature simply \"rewinds\" the DB cluster t o the time you specify. Backtracking is not a replacement for backing up your DB cluster so that you can restore it to a point in time. However, you can easily undo mistakes using the backtrack feature if you mistakenly perform a destructive action, such as a DELETE without a WHERE clause. 
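A minimal sketch of the token-based connection flow described in this answer, assuming boto3 plus the third-party PyMySQL package, with a placeholder endpoint, user, and CA bundle path.

```python
import boto3
import pymysql   # assumes the PyMySQL package is installed

rds = boto3.client("rds")

# The IAM entity (e.g. the EC2 instance profile role) must be granted
# rds-db:connect for this database user.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abcdefg12345.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
)

# The 15-minute token is used in place of a password; SSL is required for IAM auth.
conn = pymysql.connect(
    host="mydb.abcdefg12345.us-east-1.rds.amazonaws.com",
    port=3306,
    user="app_user",
    password=token,
    ssl={"ca": "/opt/rds-combined-ca-bundle.pem"},   # placeholder CA bundle path
)
```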
Configuring your RDS database to enable encryption is incorrect because this encryption feature in RDS is mainly for securing your Amazon RDS DB instances and snapshots at rest. The data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, Read Replicas, and snapshots. Launching the mysql client using the --ssl-ca parameter when connecting to the database is incorrect because even though using the --ssl-ca parameter can provide an SSL connection to your database, you still need to use IAM database authentication to use the profile credentials specific to your EC2 instance to access your database instead of a password.", "references": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html Check out this Amazon RDS cheat sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/" }, { "question": "A company has several unencrypted EBS snapshots in their VPC. The Solutions Architect must ensure that all of the new EBS volumes restored from the unencrypted snapshots are automatically encrypted. What should be done to accomplish this requirement?", "options": [ "A. A. Enable the EBS Encryption By Default feature for the AWS Region.", "B. B. Enable the EBS Encryption By Default feature for specific EBS volumes.", "C. C. Launch new EBS volumes and encrypt them using an asymmetric customer master key (CMK).", "D. D. Launch new EBS volumes and specify the symmetric customer master key (CMK) for encryption." ], "correct": "A. A. Enable the EBS Encryption By Default feature for the AWS Region.", "explanation": "Explanation/Reference: You can configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create. For example, Amazon EBS encrypts the EBS volumes created when you launch an instance and the snapshots that you copy from an unencrypted snapshot. Encryption by default has no effect on existing EBS volumes or snapshots. The following are important considerations in EBS encryption: - Encryption by default is a Region-specific setting. If you enable it for a Region, you cannot disable it for individual volumes or snapshots in that Region. - When you enable encryption by default, you can launch an instance only if the instance type supports EBS encryption. - Amazon EBS does not support asymmetric CMKs. When migrating servers using AWS Server Migration Service (SMS), do not turn on encryption by default. If encryption by default is already on and you are experiencing delta replication failures, turn off encryption by default. Instead, enable AMI encryption when you create the replication job. You cannot change the CMK that is associated with an existing snapshot or encrypted volume. However, you can associate a different CMK during a snapshot copy operation so that the resulting copied snapshot is encrypted by the new CMK. Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a volume or a snapshot. If you enabled encryption by default, Amazon EBS encrypts the resulting new volume or snapshot using your default key for EBS encryption. Even if you have not enabled encryption by default, you can enable encryption when you create an individual volume or snapshot. Whether you enable encryption by default or in individual creation operations, you can override the default key for EBS encryption and use a symmetric customer-managed CMK.
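As a short boto3 illustration of the Region-level setting discussed above; the KMS key ARN is a placeholder, and specifying a customer managed key is optional.

```python
import boto3

# Encryption by default is a per-Region setting, so enable it in each Region you use.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.enable_ebs_encryption_by_default()

# Optionally point the default at a symmetric customer managed key (placeholder ARN).
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
)

print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])   # True
```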
Hence, the correct answer is: Enable the EBS Encryp tion By Default feature for the AWS Region. The option that says: Launch new EBS volumes and en crypt them using an asymmetric customer master key (CMK) is incorrect because Amazon EBS do es not support asymmetric CMKs. To encrypt an EBS snapshot, you need to use symmetric CMK. The option that says: Launch new EBS volumes and sp ecify the symmetric customer master key (CMK) for encryption is incorrect. Although this so lution will enable data encryption, this process is manual and can potentially cause some unencrypted E BS volumes to be launched. A better solution is to enable the EBS Encryption By Default feature. It is stated in the scenario that all of the new EBS vol umes restored from the unencrypted snapshots must be aut omatically encrypted. The option that says: Enable the EBS Encryption By Default feature for specific EBS volumes is incorrect because the Encryption By Default feature is a Region-specific setting and thus, you can't e nable it to selected EBS volumes only. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSEncryption.html#encryption-by-default https://docs.aws.amazon.com/kms/latest/developergui de/services-ebs.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Comparison of Amazon S3 vs Amazon EBS vs Amazon EFS : https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/", + "references": "" + }, + { + "question": "An application is hosted in an Auto Scaling group o f EC2 instances and a Microsoft SQL Server on Amazon RDS. There is a requirement that all in-flig ht data between your web servers and RDS should be secured. Which of the following options is the MOST suitable solution that you should implement? (Select TWO.)", + "options": [ + "A. A. Force all connections to your DB instance to u se SSL by setting the rds.force_ssl parameter to tr ue.", + "B. B. Download the Amazon RDS Root CA certificate. I mport the certificate to your servers and configure your", + "C. C. Enable the IAM DB authentication in RDS using the AWS Management Console.", + "D. D. Configure the security groups of your EC2 inst ances and RDS to only allow traffic to and from por t 443." + ], + "correct": "", + "explanation": "Explanation/Reference: You can use Secure Sockets Layer (SSL) to encrypt c onnections between your client applications and you r Amazon RDS DB instances running Microsoft SQL Serve r. SSL support is available in all AWS regions for all supported SQL Server editions. When you create an SQL Server DB instance, Amazon R DS creates an SSL certificate for it. The SSL certificate includes the DB instance endpoint as th e Common Name (CN) for the SSL certificate to guard against spoofing attacks. There are 2 ways to use SSL to connect to your SQL Server DB instance: - Force SSL for all connections -- this happens tra nsparently to the client, and the client doesn't ha ve to do any work to use SSL. - Encrypt specific connections -- this sets up an S SL connection from a specific client computer, and you must do work on the client to encrypt connections. You can force all connections to your DB instance t o use SSL, or you can encrypt connections from specific client computers only. To use SSL from a s pecific client, you must obtain certificates for th e client computer, import certificates on the client compute r, and then encrypt the connections from the client computer. If you want to force SSL, use the rds.force_ssl par ameter. By default, the rds.force_ssl parameter is set to false. 
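A hedged boto3 sketch of changing the static rds.force_ssl parameter mentioned above; the parameter group and DB instance identifiers are placeholders, and the instance is assumed to already use that custom parameter group.

```python
import boto3

rds = boto3.client("rds")

# rds.force_ssl is a static parameter, so the change is applied with
# pending-reboot and the instance must be rebooted afterwards.
rds.modify_db_parameter_group(
    DBParameterGroupName="sqlserver-ssl-params",      # placeholder parameter group
    Parameters=[
        {
            "ParameterName": "rds.force_ssl",
            "ParameterValue": "1",
            "ApplyMethod": "pending-reboot",
        }
    ],
)
rds.reboot_db_instance(DBInstanceIdentifier="sqlserver-prod")   # placeholder identifier
```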
Set the rds.force_ssl parameter to true to f orce connections to use SSL. The rds.force_ssl para meter is static, so after you change the value, you must reb oot your DB instance for the change to take effect. Hence, the correct answers for this scenario are th e options that say: - Force all connections to your DB instance to use SSL by setting the rds.force_ssl parameter to true. Once done, reboot your DB instance. - Download the Amazon RDS Root CA certificate. Impo rt the certificate to your servers and configure your application to use SSL to encrypt th e connection to RDS. Specifying the TDE option in an RDS option group th at is associated with that DB instance to enable transparent data encryption (TDE) is incorrect beca use transparent data encryption (TDE) is primarily used to encrypt stored data on your DB instances ru nning Microsoft SQL Server, and not the data that a re in transit. Enabling the IAM DB authentication in RDS using the AWS Management Console is incorrect because IAM database authentication is only support ed in MySQL and PostgreSQL database engines. With IAM database authentication, you don't need to use a password when you connect to a DB instance but instead, you use an authentication token. Configuring the security groups of your EC2 instanc es and RDS to only allow traffic to and from port 443 is incorrect because it is not enough to d o this. You need to either force all connections to your DB instance to use SSL, or you can encrypt connecti ons from specific client computers, just as mention ed above. References: https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/SQLServer.Concepts.General.SSL.Using.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/Appendix.SQLServer.Options.TDE.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/UsingWithRDS.IAMDBAuth.html", + "references": "" + }, + { + "question": "In a tech company that you are working for, there i s a requirement to allow one IAM user to modify the configuration of one of your Elastic Load Balancers (ELB) which is used in a specific project. Each developer in your company has an individual IAM use r and they usually move from one project to another . Which of the following would be the best way to all ow this access?", + "options": [ + "A. A. Provide the user temporary access to the root account for 8 hours only. Afterwards, change the pa ssword", + "B. B. Create a new IAM user that has access to modif y the ELB. Delete that user when the work is comple ted.", + "C. C. Open up the port that ELB uses in a security g roup and then give the user access to that securit y group", + "D. D. Create a new IAM Role which will be assumed by the IAM user. Attach a policy allowing access to m odify", + "A. A. You will receive an email from Amazon SNS info rming you that the object is successfully stored.", + "B. B. Amazon S3 has 99.999999999% durability hence, there is no need to confirm that data was inserted.", + "C. C. You will receive an SMS from Amazon SNS inform ing you that the object is successfully stored.", + "D. D. HTTP 200 result code and MD5 checksum." + ], + "correct": "D. D. HTTP 200 result code and MD5 checksum.", + "explanation": "Explanation/Reference: If you triggered an S3 API call and got HTTP 200 re sult code and MD5 checksum, then it is considered a s a successful upload. The S3 API will return an erro r code in case the upload is unsuccessful. 
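To make the HTTP 200 and MD5 checksum check concrete, here is a minimal boto3 sketch; the bucket and key names are placeholders rather than values from the scenario, and comparing the ETag against a local MD5 digest only holds for single-part uploads that do not use SSE-KMS.

```python
import hashlib

import boto3

s3 = boto3.client("s3")
body = b"hello world"  # payload to upload

# PutObject returns the HTTP status code and the object's ETag
resp = s3.put_object(Bucket="example-bucket", Key="greeting.txt", Body=body)

status = resp["ResponseMetadata"]["HTTPStatusCode"]  # 200 on success
etag = resp["ETag"].strip('"')                       # MD5 hex digest for single-part, non-KMS uploads
local_md5 = hashlib.md5(body).hexdigest()

if status == 200 and etag == local_md5:
    print("Upload confirmed: HTTP 200 result code and matching MD5 checksum")
```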
The option that says: Amazon S3 has 99.999999999% d urability hence, there is no need to confirm that data was inserted is incorrect because althoug h S3 is durable, it is not an assurance that all ob jects uploaded using S3 API calls will be successful. The options that say: You will receive an SMS from Amazon SNS informing you that the object is successfully stored and You will receive an email f rom Amazon SNS informing you that the object is successfully stored are both incorrect because you don't receive an SMS nor an email notification by default, unless you added an event notification.", + "references": "https://docs.aws.amazon.com/AmazonS3/latest/API/RES TObjectPOST.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" + }, + { + "question": "A company has a VPC for its human resource departme nt, and another VPC located in a different region for their finance department. The Solutions Archite ct must redesign the architecture to allow the fina nce department to access all resources that are in the human resource department, and vice versa. Which type of networking connection in AWS should t he Solutions Architect set up to satisfy the above requirement?", + "options": [ + "A. A. VPN Connection", + "B. B. AWS Cloud Map", + "C. C. VPC Endpoint", + "D. D. Inter-Region VPC Peering" + ], + "correct": "D. D. Inter-Region VPC Peering", + "explanation": "Explanation/Reference: Amazon Virtual Private Cloud (Amazon VPC) offers a comprehensive set of virtual networking capabilities that provide AWS customers with many o ptions for designing and implementing networks on the AWS cloud. With Amazon VPC, you can provision l ogically isolated virtual networks to host your AWS resources. You can create multiple VPCs within the same region or in different regions, in the sam e account or in different accounts. This is useful fo r customers who require multiple VPCs for security, billing, regulatory, or other purposes, and want to integrate AWS resources between their VPCs more easily. More often than not, these different VPCs n eed to communicate privately and securely with one another for sharing data or applications. A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connecti on between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Regio n. AWS uses the existing infrastructure of a VPC to cr eate a VPC peering connection; it is neither a gate way nor a VPN connection and does not rely on a separat e piece of physical hardware. There is no single po int of failure for communication or a bandwidth bottlen eck. Hence, the correct answer is: Inter-Region VPC Peer ing. AWS Cloud Map is incorrect because this is simply a cloud resource discovery service. With Cloud Map, you can define custom names for your application re sources, and it maintains the updated location of t hese dynamically changing resources. This increases your application availability because your web service always disco vers the most up-to-date locations of its resources. VPN Connection is incorrect. This is technically po ssible, but since you already have 2 VPCs on AWS, i t is easier to set up a VPC peering connection. 
The b andwidth is also faster for VPC peering since the connection will be going through the AWS backbone n etwork instead of the public Internet when you use a VPN connection. VPC Endpoint is incorrect because this is primarily used to allow you to privately connect your VPC to supported AWS services and VPC endpoint services po wered by PrivateLink, but not to the other VPC itself. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpc-peering.html https://aws.amazon.com/answers/networking/aws-multi ple-region-multi-vpc-connectivity/ Check out these Amazon VPC and VPC Peering Cheat Sh eets: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company plans to launch an application that track s the GPS coordinates of delivery trucks in the cou ntry. The coordinates are transmitted from each delivery truck every five seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. The aggregated data will be analyzed in a separate repo rting application. Which AWS service should you use for this scenario?", + "options": [ + "A. A. Amazon Simple Queue Service", + "B. B. Amazon Kinesis", + "C. C. Amazon AppStream", + "D. D. AWS Data Pipeline" + ], + "correct": "B. B. Amazon Kinesis", + "explanation": "Explanation/Reference: Amazon Kinesis makes it easy to collect, process, a nd analyze real-time, streaming data so you can get timely insights and react quickly to new informatio n. It offers key capabilities to cost-effectively p rocess streaming data at any scale, along with the flexibi lity to choose the tools that best suit the require ments of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine le arning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and responds instantly instead of having to wait until all your data are collected before the p rocessing can begin.", + "references": "https://aws.amazon.com/kinesis/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" + }, + { + "question": "A multinational manufacturing company has multiple accounts in AWS to separate their various departments such as finance, human resources, engin eering and many others. There is a requirement to ensure that certain access to services and actions are properly controlled to comply with the security policy of the company. As the Solutions Architect, which is the most suita ble way to set up the multi-account AWS environment of the company?", + "options": [ + "A. A. Use AWS Organizations and Service Control Poli cies to control services on each account.", + "B. B. Set up a common IAM policy that can be applied across all AWS accounts.", + "C. C. Connect all departments by setting up a cross- account access to each of the AWS accounts of the", + "D. D. Provide access to externally authenticated use rs via Identity Federation. Set up an IAM role to s pecify" + ], + "correct": "A. A. Use AWS Organizations and Service Control Poli cies to control services on each account.", + "explanation": "Explanation/Reference: Using AWS Organizations and Service Control Policie s to control services on each account is the correct answer. Refer to the diagram below: AWS Organizations offers policy-based management fo r multiple AWS accounts. 
With Organizations, you can create groups of accounts, automate account cre ation, apply and manage policies for those groups. Organizations enables you to centrally manage polic ies across multiple accounts, without requiring cus tom scripts and manual processes. It allows you to crea te Service Control Policies (SCPs) that centrally c ontrol AWS service use across multiple AWS accounts. Setting up a common IAM policy that can be applied across all AWS accounts is incorrect because it is not possible to create a common IAM policy for mult iple AWS accounts. The option that says: Connect all departments by se tting up a cross-account access to each of the AWS accounts of the company. Create and attach IAM poli cies to your resources based on their respective departments to control access is incorrect because although you can set up cross-account access to eac h department, this entails a lot of configuration com pared with using AWS Organizations and Service Control Policies (SCPs). Cross-account access would be a more suitable choice if you only have two accounts to manage, but not for multiple accounts. The option that says: Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from e ach department whose identity is federated from your organization or a third-party identity provide r is incorrect as this option is focused on the Ide ntity Federation authentication set up for your AWS accou nts but not the IAM policy management for multiple AWS accounts. A combination of AWS Organizations an d Service Control Policies (SCPs) is a better choice compared to this option.", + "references": "https://aws.amazon.com/organizations/ Check out this AWS Organizations Cheat Sheet: https://tutorialsdojo.com/aws-organizations/ Service Control Policies (SCP) vs IAM Policies: https://tutorialsdojo.com/service-control-policies- scp-vs-iam-policies/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" + }, + { + "question": "A company deployed a fleet of Windows-based EC2 ins tances with IPv4 addresses launched in a private subnet. Several software installed in the EC2 insta nces are required to be updated via the Internet. Which of the following services can provide the fir m a highly available solution to safely allow the i nstances to fetch the software patches from the Internet but pr event outside network from initiating a connection?", + "options": [ + "A. A. VPC Endpoint", + "B. B. NAT Gateway", + "C. C. NAT Instance", + "D. D. Egress-Only Internet Gateway" + ], + "correct": "B. B. NAT Gateway", + "explanation": "Explanation/Reference: AWS offers two kinds of NAT devices -- a NAT gatewa y or a NAT instance. It is recommended to use NAT gateways, as they provide better availability a nd bandwidth over NAT instances. The NAT Gateway service is also a managed service that does not req uire your administration efforts. A NAT instance is launched from a NAT AMI. Just like a NAT instance, you can use a network add ress translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiat ing a connection with those instances. 
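As a rough illustration of the NAT gateway approach (the resource IDs below are placeholders, not values from the scenario), a public NAT gateway is created in a public subnet with an Elastic IP, and the private subnet's route table then sends internet-bound traffic to it:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a *public* subnet
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",   # placeholder: a public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route the private subnet's outbound internet traffic through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0fedcba9876543210",  # placeholder: the private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```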
Here is a diagram showing the differences between N AT gateway and NAT instance: Egress-Only Internet Gateway is incorrect because t his is primarily used for VPCs that use IPv6 to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those in stances, just like what NAT Instance and NAT Gatewa y do. The scenario explicitly says that the EC2 insta nces are using IPv4 addresses which is why Egress-o nly Internet gateway is invalid, even though it can pro vide the required high availability. VPC Endpoint is incorrect because this simply enabl es you to privately connect your VPC to supported AWS services and VPC endpoint services powered by P rivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect c onnection. NAT Instance is incorrect because although this can also enable instances in a private subnet to conne ct to the Internet or other AWS services and prevent the Inte rnet from initiating a connection with those instances, it is not as highly available compared t o a NAT Gateway. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpc-nat-gateway.html https://docs.aws.amazon.com/vpc/latest/userguide/vp c-nat-comparison.html https://docs.aws.amazon.com/vpc/latest/userguide/eg ress-only-internet-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", + "references": "" + }, + { + "question": "A company developed a financial analytics web appli cation hosted in a Docker container using MEAN (MongoDB, Express.js, AngularJS, and Node.js) stack . You want to easily port that web application to AWS Cloud which can automatically handle all the ta sks such as balancing load, auto-scaling, monitorin g, and placing your containers across your cluster. Which of the following services can be used to fulf ill this requirement?", + "options": [ + "A. A. OpsWorks", + "B. B. ECS", + "C. C. AWS Elastic Beanstalk", + "D. D. AWS Code Deploy" + ], + "correct": "C. C. AWS Elastic Beanstalk", + "explanation": "Explanation/Reference: AWS Elastic Beanstalk supports the deployment of we b applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependenc ies (such as package managers or tools), that aren' t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requi res to run. By using Docker with Elastic Beanstalk, you have an infrastructure that automatically handles the deta ils of capacity provisioning, load balancing, scaling, and application health monitoring. You can manage your web application in an environment that supports the range of services that are integrated with Elastic Beanstalk, including but not limited to VPC, RDS, a nd IAM. Hence, the correct answer is: AWS Elastic Beanstalk . ECS is incorrect. Although it also provides Service Auto Scaling, Service Load Balancing, and Monitori ng with CloudWatch, these features are not automatical ly enabled by default unlike with Elastic Beanstalk . Take note that the scenario requires a service that will automatically handle all the tasks such as ba lancing load, auto-scaling, monitoring, and placing your co ntainers across your cluster. You will have to manu ally configure these things if you wish to use ECS. 
With Elastic Beanstalk, you can manage your web application in an environment that supports the ran ge of services easier. OpsWorks and AWS CodeDeploy are incorrect because t hese are primarily used for application deployment and configuration only, without providin g load balancing, auto-scaling, monitoring, or ECS cluster management.", + "references": "https://docs.aws.amazon.com/elasticbeanstalk/latest /dg/create_deploy_docker.html Check out this AWS Elastic Beanstalk Cheat Sheet: https://tutorialsdojo.com/aws-elastic-beanstalk/ AWS Elastic Beanstalk Overview: https://www.youtube.com/watch?v=rx7e7Fej1Oo Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-clou dformation-vs-opsworks-vs-codedeploy/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" + }, + { + "question": "A manufacturing company wants to implement predicti ve maintenance on its machinery equipment. The company will install thousands of IoT sensors that will send data to AWS in real time. A solutions architect is tasked with implementing a s olution that will receive events in an ordered manner for each machinery asset and ensure that dat a is saved for further processing at a later time. Which solution would be MOST efficient?", + "options": [ + "A. Use Amazon Kinesis Data Streams for real-time eve nts with a partition for each equipment asset. Use", + "B. Use Amazon Kinesis Data Streams for real-time eve nts with a shard for each equipment asset. Use", + "C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger a n", + "D. Use an Amazon SQS standard queue for real-time ev ents with one queue for each equipment asset." + ], + "correct": "D. Use an Amazon SQS standard queue for real-time ev ents with one queue for each equipment asset.", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company ?\u20ac\u2122s website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Users around the globe are reporting that the website is slow. Which set of actions will improve website performan ce for users worldwide?", + "options": [ + "A. Create an Amazon CloudFront distribution and conf igure the ALB as an origin. Then update the Amazon", + "B. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger", + "C. Launch new EC2 instances hosting the same web app lication in different Regions closer to the users. Then", + "D. Host the website in an Amazon S3 bucket in the Re gions closest to the users and delete the ALB and E C2" + ], + "correct": "", + "explanation": "Explanation/Reference: What Is Amazon CloudFront? Amazon CloudFront is a web service that speeds up d istribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwid e network of data centers called edge locations. Wh en a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. Routing traffic to an Amazon CloudFront web distrib ution by using your domain name. If you want to speed up delivery of your web conten t, you can use Amazon CloudFront, the AWS content delivery network (CDN). 
CloudFront can deliver your entire website ?\u20ac\" including dynamic, static, streaming, and interactive content ?\u20ac\" by u sing a global network of edge locations. Requests for your content are automatically routed to the edge location that gives your users the lowe st latency. To use CloudFront to distribute your content, you c reate a web distribution and specify settings such as the Amazon S3 bucket or HTTP server that you want CloudFront to get your content from, whether you wa nt only selected users to have access to your conte nt, and whether you want to require users to use HTTPS. When you create a web distribution, CloudFront assi gns a domain name to the distribution, such asd111111abcdef8.cloudfront.net. You can use this domain name in the URLs for your content, for example: [1] Alternatively, you might prefer to use your own dom ain name in URLs, for example: [1] If you want to use your own domain name, use Amazon Route 53 to create an alias record that points to your CloudFront distribution. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record , but you can create an alias record both for the r oot domain, such as example.com, and for subdomains, such aswww.example.com. (You can create CNAME records only for subdomains.) When Route 53 receives a DNS query that matches the name and type of an alias record, Route 53 responds with the domain name that is associate d with your distribution.", + "references": "https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-to-cloudfront-distribution.html https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/Introduction.html" + }, + { + "question": "A company has been storing analytics data in an Ama zon RDS instance for the past few years. The compan y asked a solutions architect to find a solution that allows users to access this data usin g an API. The expectation is that the application w ill experience periods of inactivity but could receive bursts of traffic within seconds. Which solution should the solutions architect sugge st?", + "options": [ + "A. Set up an Amazon API Gateway and use Amazon ECS.", + "B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk.", + "C. Set up an Amazon API Gateway and use AWS Lambda f unctions.", + "D. Set up an Amazon API Gateway and use Amazon EC2 w ith Auto Scaling. Correct Answer: C" + ], + "correct": "", + "explanation": "Explanation/Reference: AWS Lambda - With Lambda, you can run code for virtually any typ e of application or backend service ?\u20ac\" all with ze ro administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You ca n set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. How it works - Amazon API Gateway - Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, ma intain, monitor, and secure APIs at any scale. APIs act as the \"front door\" for application s to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way commun ication applications. API Gateway supports containerized and serverless workloads, as well as web applications. 
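To sketch how the Lambda-behind-API-Gateway pattern looks in code (the handler body is purely illustrative and not part of the scenario), a function wired to a Lambda proxy integration receives the HTTP request in the event argument and returns a status code plus a JSON body; API Gateway and Lambda scale this path from idle to bursts of traffic without any servers to manage:

```python
import json


def lambda_handler(event, context):
    # With the Lambda proxy integration, API Gateway passes the HTTP request
    # details (path, query string, headers, body) in the `event` dictionary.
    params = event.get("queryStringParameters") or {}
    report_id = params.get("report_id", "latest")

    # A real implementation would query the analytics data store here.
    payload = {"report_id": report_id, "status": "ok"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }
```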
API Gateway handles all the tasks involved in accep ting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferre d out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.", + "references": "https://aws.amazon.com/lambda/ https://aws.amazon.com/api-gateway/" + }, + { + "question": "A company must generate sales reports at the beginn ing of every month. The reporting process launches 20 Amazon EC2 instances on the first of the month. The process runs for 7 days and cannot b e interrupted. The company wants to minimize costs. Which pricing model should the company choose?", + "options": [ + "A. Reserved Instances", + "B. Spot Block Instances", + "C. On-Demand Instances D. Scheduled Reserved Instances" + ], + "correct": "", + "explanation": "Explanation - Scheduled Reserved Instances - Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that r ecur on a daily, weekly, or monthly basis, with a specified start time and duration, fo r a one-year term. You reserve the capacity in adva nce, so that you know it is available when you need it. You pay for the time that the instance s are scheduled, even if you do not use them. Scheduled Instances are a good choice for workloads that do not run continuously, but do run on a regu lar schedule. For example, you can use Scheduled Instances for an application that runs during busin ess hours or for batch processing that runs at the end of the week. If you require a capacity reservation on a continuo us basis, Reserved Instances might meet your needs and decrease costs. How Scheduled Instances Work - Amazon EC2 sets aside pools of EC2 instances in eac h Availability Zone for use as Scheduled Instances. Each pool supports a specific combination of instance type, operating system, and network. To get started, you must search for an available sc hedule. You can search across multiple pools or a s ingle pool. After you locate a suitable schedule, purchase it. You must launch your Scheduled Instances during the ir scheduled time periods, using a launch configura tion that matches the following attributes of the schedule that you purchased: inst ance type, Availability Zone, network, and platform . When you do so, Amazon EC2 launches EC2 instances on your behalf, based on the specifie d launch specification. Amazon EC2 must ensure that the EC2 instances have terminated by the end of the current scheduled time period so tha t the capacity is available for any other Scheduled Instances it is reserved for. Therefore, Amazon EC2 terminates the EC2 instances three minut es before the end of the current scheduled time per iod. You can't stop or reboot Scheduled Instances, but y ou can terminate them manually as needed. If you terminate a Scheduled Instance before its current scheduled time period ends, you can launch it again after a few minutes. Otherwise, you must w ait until the next scheduled time period. The following diagram illustrates the lifecycle of a Scheduled Instance.", + "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-scheduled-instances.html" + }, + { + "question": "A company runs an application in a branch office wi thin a small data closet with no virtualized comput e resources. 
The application data is stored on an NFS volume. Compliance standards require a da ily offsite backup of the NFS volume. Which solution meets these requirements?", + "options": [ + "A. Install an AWS Storage Gateway file gateway on pr emises to replicate the data to Amazon S3.", + "B. Install an AWS Storage Gateway file gateway hardw are appliance on premises to replicate the data to", + "C. Install an AWS Storage Gateway volume gateway wit h stored volumes on premises to replicate the data to", + "D. Install an AWS Storage Gateway volume gateway wit h cached volumes on premises to replicate the data to" + ], + "correct": "", + "explanation": "Explanation/Reference: AWS Storage Gateway Hardware Appliance Hardware Appliance - Storage Gateway is available as a hardware applianc e, adding to the existing support for VMware ESXi, Microsoft Hyper-V, and Amazon EC2. This means that you can now make use of Storage Gat eway in situations where you do not have a virtuali zed environment, server-class hardware or IT staff with the specialized skills th at are needed to manage them. You can order applian ces from Amazon.com for delivery to branch offices, warehouses, and ?\u20acoutpost ?\u20ac office s that lack dedicated IT resources. Setup (as you w ill see in a minute) is quick and easy, and gives you access to three storage solutions: File Gateway ?\u20ac\" A file interface to Amazon S3, acc essible via NFS or SMB. The files are stored as S3 objects, allowing you to make use of specialized S3 features such as lifecycle managemen t and cross-region replication. You can trigger AWS Lambda functions, run Amazon Athena queries, and use Amazon Macie to discover an d classify sensitive data.", + "references": "https://aws.amazon.com/blogs/aws/new-aws-storage-ga teway-hardware-appliance/ https://aws.amazon.com/ storagegateway/file/" + }, + { + "question": "A company ?\u20ac\u2122s web application is using multiple Li nux Amazon EC2 instances and storing data on Amazon Elastic Block Store (Amazon EBS) volumes. The company is looking for a solution to i ncrease the resiliency of the application in case o f a failure and to provide storage that complies with atomicity, consistency, isolation, an d durability (ACID). What should a solutions architect do to meet these requirements?", + "options": [ + "A. Launch the application on EC2 instances in each A vailability Zone. Attach EBS volumes to each EC2", + "B. Create an Application Load Balancer with Auto Sca ling groups across multiple Availability Zones. Mou nt an", + "C. Create an Application Load Balancer with Auto Sca ling groups across multiple Availability Zones. Sto re data", + "D. Create an Application Load Balancer with Auto Sca ling groups across multiple Availability Zones. Sto re data" + ], + "correct": "C. Create an Application Load Balancer with Auto Sca ling groups across multiple Availability Zones. Sto re data", + "explanation": "Explanation/Reference: How Amazon EFS Works with Amazon EC2 The following illustration shows an example VPC acc essing an Amazon EFS file system. Here, EC2 instanc es in the VPC have file systems mounted. In this illustration, the VPC has three Availabilit y Zones, and each has one mount target created in i t. We recommend that you access the file system from a mount target within the same Availabi lity Zone. One of the Availability Zones has two su bnets. However, a mount target is created in only one of the subnets. Benefits of Auto Scaling - Better fault tolerance. 
Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it , and launch an instance to replace it. You can also configure Amazon EC2 Auto Scaling to u se multiple Availability Zones. If one Availability Zone becomes unavailable, Amazon EC2 Auto Scaling can launch instances in another one to compensate. Better availability. Amazon EC2 Auto Scaling helps ensure that your application always has the right a mount of capacity to handle the current traffic demand. Better cost management. Amazon EC2 Auto Scaling can dynamically increase and decrease capacity as needed. Because you pay for the EC2 instances you use, you save money by launching inst ances when they are needed and terminating them whe n they aren't.", + "references": "https://docs.aws.amazon.com/efs/latest/ug/how-it-wo rks.html#how-it-works-ec2 https://docs.aws.amazon.com/autoscaling/ec2/usergui de/auto-scaling-benefits.html" + }, + { + "question": "accounts belong to a large organization in AWS Organizations. The solution must be scalable and there must be a s ingle point where permissions can be maintained. What should a solutions architect do to accomplish this?", + "options": [ + "A. Create an ACL to provide access to the services o r actions.", + "B. Create a security group to allow accounts and att ach it to user groups.", + "C. Create cross-account roles in each account to den y access to the services or actions.", + "D. Create a service control policy in the root organ izational unit to deny access to the services or ac tions." + ], + "correct": "D. Create a service control policy in the root organ izational unit to deny access to the services or ac tions.", + "explanation": "Explanation/Reference: Service Control Policy concepts - SCPs offer central access controls for all IAM enti ties in your accounts. You can use them to enforce the permissions you want everyone in your business to follow. Using SCPs, you can give your d evelopers more freedom to manage their own permissi ons because you know they can only operate within the boundaries you define. You create and apply SCPs through AWS Organizations . When you create an organization, AWS Organizations automatically creates a root, which forms the parent container for all the accoun ts in your organization. Inside the root, you can g roup accounts in your organization into organizational units (OUs) to simplify management o f these accounts. You can create multiple OUs withi n a single organization, and you can create OUs within other OUs to form a hierarchical structure. You can attach SCPs to the organization root, OUs, and individual accounts. SCPs attached to the root and OUs apply to all OUs and a ccounts inside of them. SCPs use the AWS Identity and Access Management (IA M) policy language; however, they do not grant permissions. SCPs enable you set permission guardrails by defining the maximum avail able permissions for IAM entities in an account. If a SCP denies an action for an account, none of the entities in the account can take that a ction, even if their IAM permissions allow them to do so. 
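A minimal sketch of this idea, assuming Organizations is already enabled and that the denied service (Amazon SQS is used here only as an arbitrary example) matches the company's security policy: the SCP is created and attached to the root so the guardrail applies to every OU and account beneath it.

```python
import json

import boto3

org = boto3.client("organizations")

# Example guardrail: deny a specific service everywhere in the organization.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRestrictedService",
            "Effect": "Deny",
            "Action": ["sqs:*"],  # placeholder action list per the security policy
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-restricted-services",
    Description="Guardrail required by the corporate security policy",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attaching at the root applies the guardrail to every OU and account below it.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)
```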
The guardrails set in SCPs apply to all IAM entities in the account, which include all user s, roles, and the account root user.", + "references": "https://aws.amazon.com/blogs/security/how-to-use-se rvice-control-policies-to-set-permission-guardrails -across- accounts-in-your-awsorganization/ #:~:text=Central%20security%20administrators%20use% 20service,users%20and%20roles)%20adhere% 20to.&text=Now%2C%20using%20SCPs% 2C%20you% 20can,your%20organization%20or%20organizational%20u nit https://docs.aws.amazon.com/organizations/latest/us erguide/orgs_manage_policies_scp.html" + }, + { + "question": "A data science team requires storage for nightly lo g processing. The size and number of logs is unknow n and will persist for 24 hours only. What is the MOST cost-effective solution?", + "options": [ + "A. Amazon S3 Glacier", + "B. Amazon S3 Standard", + "C. Amazon S3 Intelligent-Tiering", + "D. Amazon S3 One Zone-Infrequent Access (S3 One Zone -IA) Correct Answer: B" + ], + "correct": "", + "explanation": "Explanation/Reference:", + "references": "https://aws.amazon.com/s3/storage-classes/#Unknown_ or_changing_access" + }, + { + "question": "A company has deployed an API in a VPC behind an in ternet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private s ubnets behind a NAT gateway. When requests to the c lient application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal. Which combination of architectural changes will red uce the NAT gateway costs? (Choose two.)", + "options": [ + "A. Configure a VPC peering connection between the tw o VPCs. Access the API using the private address.", + "B. Configure an AWS Direct Connect connection betwee n the two VPCs. Access the API using the private", + "C. Configure a ClassicLink connection for the API in to the client VPC. Access the API using the Classic Link", + "D. Configure a PrivateLink connection for the API in to the client VPC. Access the API using the Private Link" + ], + "correct": "", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A solutions architect is tasked with transferring 7 50 TB of data from an on-premises network-attached file system located at a branch office Amazon S3 Glacier. The migration must not saturate the on-premises 1 M bps internet connection. Which solution will meet these requirements?", + "options": [ + "A. Create an AWS site-to-site VPN tunnel to an Amazo n S3 bucket and transfer the files directly. Transf er the", + "B. Order 10 AWS Snowball Edge Storage Optimized devi ces, and select an S3 Glacier vault as the", + "C. Mount the network-attached file system to an S3 b ucket, and copy the files directly. Create a lifecy cle policy", + "D. Order 10 AWS Snowball Edge Storage Optimized devi ces, and select an Amazon S3 bucket as the", + "A. Create a regular rule in AWS WAF and associate th e web ACL to an Application Load Balancer.", + "B. Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer.", + "C. Create a custom rule in the security group of the Application Load Balancer to block the offending r equests.", + "D. Create a custom network ACL and associate it with the subnet of the Application Load Balancer to blo ck the" + ], + "correct": "B. 
Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer.", + "explanation": "Explanation/Reference: AWS WAF is tightly integrated with Amazon CloudFron t, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync services that AWS custome rs commonly use to deliver content for their websit es and applications. When you use AWS WAF on Amazon Cl oudFront, your rules run in all AWS Edge Locations, located around the world close to your end-users. T his means security doesn't come at the expense of performance. Blocked requests are stopped before th ey reach your web servers. When you use AWS WAF on regional services, such as Application Load Balance r, Amazon API Gateway, and AWS AppSync, your rules run in the region and can be used to protect Intern et-facing resources as well as internal resources. .cm A rate-based rule tracks the rate of requests for e ach originating IP address and triggers the rule ac tion on IPs with rates that go over a limit. You set the li mit as the number of requests per 5-minute time spa n. You can use this type of rule to put a temporary block on requests from an IP address that's sending exces sive requests. Based on the given scenario, the requirement is to limit the number of requests from the illegitimate requests without affecting the genuine requests. To accomplish this requirement, you can use AWS WAF web ACL. There are two types of rules in creating y our own web ACL rule: regular and rate-based rules. You need to select the latter to add a rate limit to your web ACL. After c reating the web ACL, you can associate it with ALB. When the rule action triggers, AWS WAF applies the action to additional requests from the IP address u ntil the request rate falls below the limit. Hence, the correct answer is: Create a rate-based r ule in AWS WAF and associate the web ACL to an Application Load Balancer. The option that says: Create a regular rule in AWS WAF and associate the web ACL to an Application Load Balancer is incorrect because a re gular rule only matches the statement defined in th e rule. If you need to add a rate limit to your rule, you should create a rate-based rule. The option that says: Create a custom network ACL a nd associate it with the subnet of the Application Load Balancer to block the offending requests is in correct. Although NACLs can help you block incoming traffic, this option wouldn't be able to l imit the number of requests from a single IP addres s that is dynamically changing. The option that says: Create a custom rule in the s ecurity group of the Application Load Balancer to block the offending requests is incorrect because t he security group can only allow incoming traffic. Remember that you can't deny traffic using security groups. In addition, it is not capable of limiting the rate of traffic to your application unlike AWS WAF. References: https://docs.aws.amazon.com/waf/latest/developergui de/waf-rule-statement-type-rate-based.html https://aws.amazon.com/waf/faqs/ Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, Cloud HSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", + "references": "" + }, + { + "question": "A company plans to design a highly available archit ecture in AWS. They have two target groups with thr ee EC2 instances each, which are added to an Applicati on Load Balancer. In the security group of the EC2 instance, you have verified that port 80 for HTTP i s allowed. 
However, the instances are still showing out of service from the load balancer. What could be the root cause of this issue?",
    "options": [
      "A. A. The wrong subnet was used in your VPC",
      "B. B. The instances are using the wrong AMI.",
      "C. C. The health check configuration is not properly defined.",
      "D. D. The wrong instance type was used for the EC2 instance."
    ],
    "correct": "C. C. The health check configuration is not properly defined.",
    "explanation": "Explanation/Reference: Since the security group is properly configured, the issue may be caused by a wrong health check configuration in the Target Group.
Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks. Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target group with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.",
    "references": "http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html AWS Elastic Load Balancing Overview: https://www.youtube.com/watch?v=UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/ ELB Health Checks vs Route 53 Health Checks For Target Health Monitoring: https://tutorialsdojo.com/elb-health-checks-vs-route-53-health-checks-for-target-health-monitoring/"
  },
  {
    "question": "A newly hired Solutions Architect is checking all of the security groups and network access control list rules of the company's AWS resources. For security purposes, the MS SQL connection via port 1433 of the database tier should be secured. Below is the security group configuration of their Microsoft SQL Server database: The application tier hosted in an Auto Scaling group of EC2 instances is the only identified resource that needs to connect to the database. The Architect should ensure that the architecture complies with the best practice of granting least privilege. Which of the following changes should be made to the security group configuration?",
    "options": [
      "A. A. For the MS SQL rule, change the Source to the Network ACL ID attached to the application tier.",
      "B. B. For the MS SQL rule, change the Source to the security group ID attached to the application tier.",
      "C. C. For the MS SQL rule, change the Source to the EC2 instance IDs of the underlying instances of the Auto",
      "D. D. For the MS SQL rule, change the Source to the static AnyCast IP address attached to the application tier."
    ],
    "correct": "B. B. For the MS SQL rule, change the Source to the security group ID attached to the application tier.",
    "explanation": "Explanation/Reference: A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups.
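A small boto3 sketch of the recommended change for this scenario (both security group IDs are placeholders): the database tier's inbound MS SQL rule references the application tier's security group instead of 0.0.0.0/0, so only instances that carry that group, including new ones launched by the Auto Scaling group, can reach port 1433.

```python
import boto3

ec2 = boto3.client("ec2")

DB_SG = "sg-0aaa1111bbbb2222c"   # placeholder: database tier security group
APP_SG = "sg-0ddd3333eeee4444f"  # placeholder: application tier security group

# Remove the overly permissive rule that allows MS SQL from anywhere
ec2.revoke_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Allow MS SQL only from instances associated with the application tier's security group
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": APP_SG, "Description": "App tier to MS SQL"}],
    }],
)
```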
If you launch an instance using the Amazon EC2 API or a command line tool and you don't specify a security group, the instance is automatically assig ned to the default security group for the VPC. If y ou launch an instance using the Amazon EC2 console, yo u have an option to create a new security group for the instance. For each security group, you add rules that control the inbound traffic to instances, and a separate s et of rules that control the outbound traffic. This secti on describes the basic things that you need to know about security groups for your VPC and their rules. Amazon security groups and network ACLs don't filte r traffic to or from link-local addresses (169.254.0.0/16) or AWS reserved IPv4 addresses (th ese are the first four IPv4 addresses of the subnet , including the Amazon DNS server address for the VPC ). Similarly, flow logs do not capture IP traffic t o or from these addresses. In the scenario, the security group configuration a llows any server (0.0.0.0/0) from anywhere to estab lish an MS SQL connection to the database via the 1433 p ort. The most suitable solution here is to change t he Source field to the security group ID attached to t he application tier. Hence, the correct answer is the option that says: For the MS SQL rule, change the Source to the security group ID attached to the application tier. The option that says: For the MS SQL rule, change t he Source to the EC2 instance IDs of the underlying instances of the Auto Scaling group is i ncorrect because using the EC2 instance IDs of the underlying instances of the Auto Scaling group as t he source can cause intermittent issues. New instan ces will be added and old instances will be removed fro m the Auto Scaling group over time, which means tha t you have to manually update the security group sett ing once again. A better solution is to use the sec urity group ID of the Auto Scaling group of EC2 instances . The option that says: For the MS SQL rule, change t he Source to the static AnyCast IP address attached to the application tier is incorrect becau se a static AnyCast IP address is primarily used fo r AWS Global Accelerator and not for security group c onfigurations. The option that says: For the MS SQL rule, change t he Source to the Network ACL ID attached to the application tier is incorrect because you have to u se the security group ID instead of the Network ACL ID of the application tier. Take note that the Network ACL covers the entire subnet which means that othe r applications that use the same subnet will also be affected. References: https://docs.aws.amazon.com/vpc/latest/userguide/VP C_SecurityGroups.html https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Security.html", + "references": "" + }, + { + "question": "A company is storing its financial reports and regu latory documents in an Amazon S3 bucket. To comply with the IT audit, they tasked their Solutions Architect to track all new objects added to the bucket as we ll as the removed ones. It should also track whether a versio ned object is permanently deleted. The Architect mu st configure Amazon S3 to publish notifications for th ese events to a queue for post-processing and to an Amazon SNS topic that will notify the Operations te am. Which of the following is the MOST suitable solutio n that the Architect should implement?", + "options": [ + "A. A. Create a new Amazon SNS topic and Amazon SQS q ueue. Add an S3 event notification configuration on", + "B. B. Create a new Amazon SNS topic and Amazon MQ. 
Add an S3 event notification configuration on the",
      "C. C. Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the",
      "D. D. Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on"
    ],
    "correct": "D. D. Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on",
    "explanation": "Explanation/Reference: The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket. Amazon S3 provides an API for you to manage this subresource.
Amazon S3 event notifications typically deliver events in seconds but can sometimes take a minute or longer. If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent. If you want to ensure that an event notification is sent for every successful write, you can enable versioning on your bucket. With versioning, every successful write will create a new version of your object and will also send an event notification.
Amazon S3 can publish notifications for the following events:
1. New object created events
2. Object removal events
3. Restore object events
4. Reduced Redundancy Storage (RRS) object lost events
5. Replication events
Amazon S3 supports the following destinations where it can publish events:
1. Amazon Simple Notification Service (Amazon SNS) topic
2. Amazon Simple Queue Service (Amazon SQS) queue
3. AWS Lambda
If your notification ends up writing to the bucket that triggers the notification, this could cause an execution loop. For example, if the bucket triggers a Lambda function each time an object is uploaded and the function uploads an object to the bucket, then the function indirectly triggers itself. To avoid this, use two buckets, or configure the trigger to only apply to a prefix used for incoming objects.
Hence, the correct answer is: Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on the bucket to publish s3:ObjectCreated:* and s3:ObjectRemoved:Delete event types to SQS and SNS.
The option that says: Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the bucket to publish s3:ObjectAdded:* and s3:ObjectRemoved:* event types to SQS and SNS is incorrect. There is no s3:ObjectAdded:* type in Amazon S3. You should add an S3 event notification configuration on the bucket to publish events of the s3:ObjectCreated:* type instead. Moreover, Amazon S3 does not support Amazon MQ as a destination to publish events.
The option that says: Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on the bucket to publish s3:ObjectCreated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS and SNS is incorrect because the s3:ObjectRemoved:DeleteMarkerCreated type is only triggered when a delete marker is created for a versioned object and not when an object is deleted or a versioned object is permanently deleted.
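A minimal boto3 sketch of the correct option (the bucket name and ARNs are placeholders; the SQS queue policy and SNS topic policy must separately allow s3.amazonaws.com to deliver messages): both event types are wired to both destinations in a single notification configuration.

```python
import boto3

s3 = boto3.client("s3")

events = ["s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"]

s3.put_bucket_notification_configuration(
    Bucket="example-financial-reports",  # placeholder bucket name
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:audit-queue",  # placeholder
            "Events": events,
        }],
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:111122223333:ops-alerts",   # placeholder
            "Events": events,
        }],
    },
)
```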
The option that says: Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the bucket to publish s3:ObjectCreated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS and SNS is incorrect because Amazon S3 does not publish event messages to Amazon MQ. You should use an Amazon SQS queue instead. In addition, the s3:ObjectRemoved:DeleteMarkerCreated type is only triggered when a delete marker is created for a versioned object. Remember that the scenario asked to publish events when an object is deleted or a versioned object is permanently deleted.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/ways-to-add-notification-config-to-bucket.html
https://aws.amazon.com/blogs/aws/s3-event-notification/
Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/amazon-s3/
Amazon SNS Overview:
https://www.youtube.com/watch?v=ft5R45lEUJ8",
    "references": ""
  },
  {
    "question": "To save costs, your manager instructed you to analyze and review the setup of your AWS cloud infrastructure. You should also provide an estimate of how much your company will pay for all of the AWS resources that they are using. In this scenario, which of the following will incur costs? (Select TWO.)",
    "options": [
      "A. A. A stopped On-Demand EC2 Instance",
      "B. B. Public Data Set",
      "C. C. EBS Volumes attached to stopped EC2 Instances",
      "D. D. A running EC2 Instance"
    ],
    "correct": "",
    "explanation": "Explanation/Reference: Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running \"shutdown -h\", or through instance failure. When you stop an instance, AWS shuts it down but doesn't charge hourly usage for a stopped instance or data transfer fees. However, AWS does charge for the storage of any Amazon EBS volumes.
Hence, a running EC2 Instance and EBS Volumes attached to stopped EC2 Instances are the right answers. Conversely, a stopped On-Demand EC2 Instance is incorrect as there is no charge for a stopped EC2 instance that you have shut down.
Using Amazon VPC is incorrect because there are no additional charges for creating and using the VPC itself. Usage charges for other Amazon Web Services, including Amazon EC2, still apply at published rates for those resources, including data transfer charges.
Public Data Set is incorrect due to the fact that Amazon stores the data sets at no charge to the community and, as with all AWS services, you pay only for the compute and storage you use for your own applications.
References:
https://aws.amazon.com/cloudtrail/
https://aws.amazon.com/vpc/faqs
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-public-data-sets.html
Check out this Amazon EC2 Cheat Sheet:
https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/",
    "references": ""
  },
  {
    "question": "The media company that you are working for has a video transcoding application running on Amazon EC2. Each EC2 instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. This application has a large backlog of videos which need to be transcoded. Your manager would like to reduce this backlog by adding more EC2 instances, however, these instances are only needed until the backlog is reduced.
In this scenario, which type of Amazon EC2 instance is the most cost-effective type to use?",
    "options": [
      "A. A. Spot instances",
      "B. B. Reserved instances",
      "C. C. Dedicated instances",
      "D. D. On-demand instances"
    ],
    "correct": "A. A. Spot instances",
    "explanation": "Explanation/Reference: You require an instance that will be used not as a primary server but as a spare compute resource to augment the transcoding process of your application. These instances should also be terminated once the backlog has been significantly reduced. In addition, the scenario mentions that if the current process is interrupted, the video can be transcoded by another instance based on the queuing system. This means that the application can gracefully handle an unexpected termination of an EC2 instance, like in the event of a Spot instance termination when the Spot price is greater than your set maximum price. Hence, an Amazon EC2 Spot instance is the best and most cost-effective option for this scenario.
Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you at steep discounts compared to On-Demand prices. EC2 Spot enables you to optimize your costs on the AWS cloud and scale your application's throughput up to 10X for the same budget. By simply selecting Spot when launching EC2 instances, you can save up to 90% on On-Demand prices. The only difference between On-Demand instances and Spot Instances is that Spot instances can be interrupted by EC2 with two minutes of notification when EC2 needs the capacity back.
You can specify whether Amazon EC2 should hibernate, stop, or terminate Spot Instances when they are interrupted. You can choose the interruption behavior that meets your needs.
Take note that there is no \"bid price\" anymore for Spot EC2 instances since March 2018. You simply have to set your maximum price instead.
Reserved instances and Dedicated instances are incorrect as both do not act as spare compute capacity.
On-demand instances is a valid option but a Spot instance is much cheaper than On-Demand.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-spot-instances-work.html
https://aws.amazon.com/blogs/compute/new-amazon-ec2-spot-pricing
Check out this Amazon EC2 Cheat Sheet:
https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/",
    "references": ""
  },
  {
    "question": "All objects uploaded to an Amazon S3 bucket must be encrypted for security compliance. The bucket will use server-side encryption with Amazon S3-Managed encryption keys (SSE-S3) to encrypt data using 256-bit Advanced Encryption Standard (AES-256) block cipher. Which of the following request headers must be used?",
    "options": [
      "A. A. x-amz-server-side-encryption-customer-key",
      "B. B. x-amz-server-side-encryption",
      "C. C. x-amz-server-side-encryption-customer-algorithm",
      "D. D. x-amz-server-side-encryption-customer-key-MD5"
    ],
    "correct": "B. B. x-amz-server-side-encryption",
    "explanation": "Explanation/Reference: Server-side encryption protects data at rest. If you use Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3), Amazon S3 will encrypt each object with a unique key and as an additional safeguard, it encrypts the key itself with a master key that it rotates regularly.
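As a quick illustration of the header in practice (the bucket and key names are placeholders), SDKs expose x-amz-server-side-encryption as a request parameter; in boto3 it is called ServerSideEncryption, and the response echoes the algorithm that was applied.

```python
import boto3

s3 = boto3.client("s3")

# ServerSideEncryption="AES256" sends the x-amz-server-side-encryption: AES256
# request header, asking S3 to encrypt the object with S3-managed keys (SSE-S3).
resp = s3.put_object(
    Bucket="example-compliance-bucket",  # placeholder bucket
    Key="reports/2023-q1.csv",           # placeholder key
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)

print(resp["ServerSideEncryption"])      # prints "AES256" when SSE-S3 was applied
```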
Amazon S3 server-si de encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES- 256), to encrypt your data. If you need server-side encryption for all of the o bjects that are stored in a bucket, use a bucket po licy. For example, the following bucket policy denies permiss ions to upload an object unless the request include s the x- amz-server-side-encryption header to request server -side encryption: However, if you chose to use server-side encryption with customer-provided encryption keys (SSE-C), yo u must provide encryption key information using the f ollowing request headers: x-amz-server-side-encryption-customer-algorithm x-amz-server-side-encryption-customer-key x-amz-server-side-encryption-customer-key-MD5 Hence, using the x-amz-server-side-encryption heade r is correct as this is the one being used for Amaz on S3-Managed Encryption Keys (SSE-S3). All other options are incorrect since they are used for SSE-C. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ser v-side-encryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngServerSideEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Ser verSideEncryptionCustomerKeys.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", + "references": "" + }, + { + "question": "A company has an On-Demand EC2 instance with an att ached EBS volume. There is a scheduled job that creates a snapshot of this EBS volume every midnigh t at 12 AM when the instance is not used. One night , there has been a production incident where you need to pe rform a change on both the instance and on the EBS volume at the same time when the snapshot is curren tly taking place. Which of the following scenario is true when it com es to the usage of an EBS volume while the snapshot is in progress?", + "options": [ + "A. A. The EBS volume can be used in read-only mode w hile the snapshot is in progress.", + "B. B. The EBS volume cannot be used until the snapsh ot completes.", + "C. C. The EBS volume cannot be detached or attached to an EC2 instance until the snapshot completes", + "D. D. The EBS volume can be used while the snapshot is in progress." + ], + "correct": "D. D. The EBS volume can be used while the snapshot is in progress.", + "explanation": "Explanation/Reference: Snapshots occur asynchronously; the point-in-time s napshot is created immediately, but the status of t he snapshot is pending until the snapshot is complete (when all of the modified blocks have been transfer red to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where man y blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the vol ume hence, you can still use the EBS volume normally. When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapsho t. The replicated volume loads data lazily in the background so that you can begin using it immediate ly. If you access data that hasn't been loaded yet, the volume immediately downloads the requested data fro m Amazon S3, and then continues loading the rest of the volume's data in the background. 
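A short Python (boto3) sketch of the behavior described above, assuming a hypothetical volume ID; the volume stays attached and usable while the snapshot is in the pending state:

import boto3

ec2 = boto3.client("ec2")

# Take a point-in-time snapshot; the instance can keep reading from and
# writing to the volume while the copy to Amazon S3 runs in the background.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume ID
    Description="Nightly backup taken while the volume stays in use",
)

# Poll the snapshot state; it moves from "pending" to "completed" once all
# modified blocks have been transferred to Amazon S3.
status = ec2.describe_snapshots(SnapshotIds=[snap["SnapshotId"]])
print(status["Snapshots"][0]["State"], status["Snapshots"][0]["Progress"])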
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-creating-snapshot.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSSnapshots.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", + "references": "" + }, + { + "question": "A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, the application users reported poor application performance when creating new entr ies. These performance issues were caused by users generating different real-time reports from the application during working hours. Which solution will improve the performance of the application when it is moved to AWS?", + "options": [ + "A. Import the data into an Amazon DynamoDB table wit h provisioned capacity. Refactor the application to use", + "B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed", + "C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the applica tion", + "D. Create an Amazon Aurora MySQL Multi-AZ DB cluster . Configure the application to use the backup" + ], + "correct": "C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the applica tion", + "explanation": "Explanation/Reference: Amazon RDS Read Replicas Now Support Multi-AZ Deplo yments Starting today, Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one o r more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are the n asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed. Amazon RDS Multi-AZ deployments provide enhanced av ailability for database instances within a single A WS Region. With Multi-AZ, your data is synchronously replicated to a standby in a diffe rent Availability Zone (AZ). In the event of an inf rastructure failure, Amazon RDS performs an automatic failover to the standby, minimizing disru ption to your applications. You can now use Read Replicas with Multi-AZ as part of a disaster recovery (DR) strategy for your prod uction databases. A well-designed and tested DR plan is critical for maintaining business continuity after a disaster. A Read Replica in a d ifferent region than the source database can be used as a standby database and promoted to becom e the new production database in case of a regionaldisruption. You can also combine Read Replicas with Multi-AZ fo r your database engine upgrade process. You can cre ate a Read Replica of your production database instance and upgrade it to a new database engine version. When the upgrade is complete, you c an stop applications, promote the Read Replica to a standalone database instance, and switch over your applications. Since the database instance is already a Multi-AZ deployment, no additional steps are needed. Overview of Amazon RDS Read Replicas Deploying one or more read replicas for a given sou rce DB instance might make sense in a variety of scenarios, including the following: Scaling beyond the compute or I/O capacity of a sin gle DB instance for read-heavy database workloads. 
You can direct this excess read traffic to one or more read replicas. Serving read traffic while the source DB instance i s unavailable. In some cases, your source DB instan ce might not be able to take I/O requests, for example due to I/O suspension for backups or sc heduled maintenance. In these cases, you can direct read traffic to your read replicas. For this use case, keep in mind that the data on the re ad replica might be \"stale\" because the source DB i nstance is unavailable. Business reporting or data warehousing scenarios wh ere you might want business reporting queries to ru n against a read replica, rather than your primary, production DB instance. Implementing disaster recovery. You can promote a r ead replica to a standalone instance as a disaster recovery solution if the source DB instance fails.", + "references": "https://aws.amazon.com/about-aws/whats-new/2018/01/ amazon-rds-read-replicas-now-support-multi-az- deployments/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/USER_ReadRepl.html" + }, + { + "question": "The company that you are working for has a highly a vailable architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scal ing in three Availability Zones. You want to monito r your EC2 instances based on a particular metric, which i s not readily available in CloudWatch. Which of the following is a custom metric in CloudW atch which you have to manually set up?", + "options": [ + "A. Network packets out of an EC2 instance", + "B. CPU Utilization of an EC2 instance", + "C. Disk Reads activity of an EC2 instance", + "D. Memory Utilization of an EC2 instance" + ], + "correct": "D. Memory Utilization of an EC2 instance", + "explanation": "Explanation/Reference: CloudWatch has available Amazon EC2 Metrics for you to use for monitoring. CPU Utilization identifies the processing power required to run an application upon a selected instance. Network Utilization iden tifies the volume of incoming and outgoing network traffic to a single instance. Disk Reads metric is used to det ermine the volume of the data the application reads from t he hard disk of the instance. This can be used to d etermine the speed of the application. However, there are ce rtain metrics that are not readily available in Clo udWatch such as memory utilization, disk space utilization, and many others which can be collected by setting up a custom metric. You need to prepare a custom metric using CloudWatc h Monitoring Scripts which is written in Perl. You can also install CloudWatch Agent to collect more s ystem-level metrics from Amazon EC2 instances. Here's the list of custom metrics that you can set up: - Memory utilization - Disk swap utilization - Disk space utilization - Page file utilization - Log co llection CPU Utilization of an EC2 instance, Disk Reads acti vity of an EC2 instance, and Network packets out of an EC2 instance are all incorrect because these metrics are readily available in CloudWatch by defa ult. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /monitoring_ec2.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /mon-scripts.html#using_put_script Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Exam B", + "references": "" + }, + { + "question": "A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. 
The average volume of data that the com pany collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. T he solution must minimize operational complexity. Which solution meets these requirements?", + "options": [ + "A. Turn on S3 Transfer Acceleration on the destinati on S3 bucket. Use multipart uploads to directly upl oad site", + "B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross- Region Replicati on to", + "C. Schedule AWS Snowball Edge Storage Optimized devi ce jobs daily to transfer data from each site to th e", + "D. Upload the data from each site to an Amazon EC2 i nstance in the closest Region. Store the data in an" + ], + "correct": "A. Turn on S3 Transfer Acceleration on the destinati on S3 bucket. Use multipart uploads to directly upl oad site", + "explanation": "Explanation/Reference: http://lavnish.blogspot.com/2017/06/aws-s3-cross-re gion-replication.html", + "references": "" + }, + { + "question": "A company needs the ability to analyze the log file s of its proprietary application. The logs are stor ed in JSON format in an Amazon S3 bucket. Queries will be simp le and will run on- demand. A solutions architect n eeds to perform the analysis with minimal changes to the ex isting architecture. What should the solutions architect do to meet thes e requirements with the LEAST amount of operational overhead?", + "options": [ + "A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.", + "B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon", + "C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.", + "D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL" + ], + "correct": "C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/athena/latest/ug/what-i s.html Amazon Athena is an interactive query servic e that makes it easy to analyze data directly in Amazon Si mple Storage Service (Amazon S3) using standard SQL . With a few actions in the AWS Management Console, y ou can point Athena at your data stored in Amazon S 3 and begin using standard SQL to run ad-hoc queries and get results in seconds.", + "references": "" + }, + { + "question": "A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that con tains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations. Which solution meets these requirements with the LE AST amount of operational overhead?", + "options": [ + "A. Add the aws PrincipalOrgID global condition key w ith a reference to the organization ID to the S3 bu cket policy.", + "B. Create an organizational unit (OU) for each depar tment. Add the aws:PrincipalOrgPaths global conditi on", + "C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and", + "D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to t he S3" + ], + "correct": "A. 
Add the aws PrincipalOrgID global condition key w ith a reference to the organization ID to the S3 bu cket policy.", + "explanation": "Explanation/Reference: aws:PrincipalOrgID Validates if the principal acces sing the resource belongs to an account in your org anization. https://aws.amazon.com/blogs/security/control-acces s-to-aws-resources-by-using-the- aws-organization-o f- iam-principals/", + "references": "" + }, + { + "question": "An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet. Which solution will provide private network connect ivity to Amazon S3?", + "options": [ + "A. Create a gateway VPC endpoint to the S3 bucket.", + "B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.", + "C. Create an instance profile on Amazon EC2 to allow S3 access.", + "D. Create an Amazon API Gateway API with a private l ink to access the S3 endpoint." + ], + "correct": "A. Create a gateway VPC endpoint to the S3 bucket.", + "explanation": "Explanation/Reference: The correct solution that will provide private netw ork connectivity to Amazon S3 is Option A: Create a gateway VPC endpoint to the S3 bucket. ***EXPLANATION*** Option A involves creating a gateway VPC endpoint, which is a network interface in a VPC that allows you to privately connect to a service over the Amazon network. You can create a gateway V PC endpoint for Amazon S3, which will allow the EC2 instance in the VPC to access the S3 bucket without connectivity to the internet.", + "references": "" + }, + { + "question": "A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user- uploaded documents in an Amazon EBS volume. For bet ter scalability and availability, the company dupli cated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placin g both behind an Application Load Balancer. After completi ng this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time. What should a solutions architect propose to ensure users see all of their documents at once?", + "options": [ + "A. Copy the data so both EBS volumes contain all the documents", + "B. Configure the Application Load Balancer to direct a user to the server with the documents", + "C. Copy the data from both EBS volumes to Amazon EFS . Modify the application to save new documents to", + "D. Configure the Application Load Balancer to send t he request to both servers. Return each document fr om" + ], + "correct": "C. Copy the data from both EBS volumes to Amazon EFS . Modify the application to save new documents to", + "explanation": "Explanation/Reference: Amazon Elastic File System (EFS) is a fully managed file storage service that enables users to store a nd access data in the Amazon cloud. EFS is accessible over the network and can be mounted on multiple Ama zon EC2 instances. By copying the data from both EBS vo lumes to EFS and modifying the application to save new documents to EFS, users will be able to access all of their documents at the same time.", + "references": "" + }, + { + "question": "A company uses NFS to store large video files in on -premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 7 0 TB and is no longer growing. 
The company decides to migrate the video files to Amazon S3. The company m ust migrate the video files as soon as possible whi le using the least possible network bandwidth. Which solution will meet these requirements?", + "options": [ + "A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket.", + "B. Create an AWS Snowball Edge job. Receive a Snowba ll Edge device on premises. Use the Snowball Edge", + "C. Deploy an S3 File Gateway on premises. Create a p ublic service endpoint to connect to the S3 File", + "D. Set up an AWS Direct Connect connection between t he on-premises network and AWS.", + "B. On a Snowball Edge device you can copy files wit h a speed of up to 100Gbps. 70TB will take around 5 600" + ], + "correct": "B. Create an AWS Snowball Edge job. Receive a Snowba ll Edge device on premises. Use the Snowball Edge", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second . The company wants to decouple the solution and increase scalability. Which solution meets these requirements?", + "options": [ + "A. Persist the messages to Amazon Kinesis Data Analy tics. Configure the consumer applications to read a nd", + "B. Deploy the ingestion application on Amazon EC2 in stances in an Auto Scaling group to scale the numbe r of", + "C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to", + "D. Publish the messages to an Amazon Simple Notific ation Service (Amazon SNS) topic with multiple Amaz on" + ], + "correct": "", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company is migrating a distributed application to AWS. The application serves variable workloads. Th e legacy platform consists of a primary server that c oordinates jobs across multiple compute nodes. The company wants to modernize the application with a s olution that maximizes resiliency and scalability. How should a solutions architect design the archite cture to meet these requirements?", + "options": [ + "A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement", + "B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement", + "C. Implement the primary server and the compute node s with Amazon EC2 instances that are managed in an", + "D. Implement the primary server and the compute node s with Amazon EC2 instances that are managed in an" + ], + "correct": "B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-using-sqs-queue.html", + "references": "" + }, + { + "question": "A company is running an SMB file server in its data center. The file server stores large files that ar e accessed frequently for the first few days after the files a re created. After 7 days the files are rarely acces sed. The total data size is increasing and is close to t he company's total storage capacity. A solutions ar chitect must increase the company's available storage space with out losing low- latency access to the most recently accessed files. 
The solutions architect must also p rovide file lifecycle management to avoid future st orage issues. Which solution will meet these requirements?", + "options": [ + "A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.", + "B. Create an Amazon S3 File Gateway to extend the co mpany's storage space. Create an S3 Lifecycle polic y", + "C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.", + "D. Install a utility on each user's computer to acce ss Amazon S3. Create an S3 Lifecycle policy to tran sition the" + ], + "correct": "", + "explanation": "Explanation/Reference: Answer directly points towards file gateway with li fecycles, https://docs.aws.amazon.com/filegateway/l atest/ files3/CreatingAnSMBFileShare.html", + "references": "" + }, + { + "question": "A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process . The company wants to ensure that orders are processed in the order that they are received. Which solution will meet these requirements?", + "options": [ + "A. Use an API Gateway integration to publish a messa ge to an Amazon Simple Notification Service (Amazon", + "B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS)", + "C. Use an API Gateway authorizer to block any reques ts while the application processes an order.", + "D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS)" + ], + "correct": "B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS)", + "explanation": "Explanation/Reference: SQS FIFO queue guarantees message order.", + "references": "" + }, + { + "question": "A company has an application that runs on Amazon EC 2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a f ile. The company wants to minimize the operational overh ead of credential management. What should a solutions architect do to accomplish this goal?", + "options": [ + "A. Use AWS Secrets Manager. Turn on automatic rotati on.", + "B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.", + "C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service", + "D. Create an encrypted Amazon Elastic Block Store (A mazon EBS) volume for each EC2 instance. Attach the" + ], + "correct": "A. Use AWS Secrets Manager. Turn on automatic rotati on.", + "explanation": "Explanation/Reference: link https://tutorialsdojo.com/aws-secrets-manager- vs-systems-manager-parameter- store/ for difference s between SSM Parameter Store and AWS Secrets Manager", + "references": "" + }, + { + "question": "(ALB). The web application has static data and dyna mic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name regi stered with Amazon Route 53. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins.", + "B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global", + "C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. 
Create an AWS Global", + "D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global" + ], + "correct": "A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins.", + "explanation": "Explanation - AWS Global Accelerator vs CloudFront \u00b7 They both use the AWS global network and its edge locations around the world \u00b7 Both services integra te with AWS Shield for DDoS protection. \u00b7 CloudFront \u00b7 Improves performance for both cacheable content ( such as images and videos) \u00b7 Dynamic content (such as API acceleration and dynamic site delivery) \u00b7 Conte nt is served at the edge \u00b7 Global Accelerator \u00b7 Improves performance for a w ide range of applications over TCP or UDP \u00b7 Proxying packets at the edge to applications runn ing in one or more AWS Regions. \u00b7 Good fit for non- HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voi ce over IP \u00b7 Good for HTTP use cases that require s tatic IP addresses \u00b7 Good for HTTP use cases that required determinist ic, fast regional failover", + "references": "" + }, + { + "question": "A company performs monthly maintenance on its AWS i nfrastructure. During these maintenance activities, the company needs to rotate the credentials for its Ama zon RDS for MySQL databases across multiple AWS Regions. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Store the credentials as secrets in AWS Secrets M anager. Use multi-Region secret replication for the", + "B. Store the credentials as secrets in AWS Systems M anager by creating a secure string parameter. Use", + "C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amaz on", + "D. Encrypt the credentials as secrets by using AWS K ey Management Service (AWS KMS) multi-Region" + ], + "correct": "A. Store the credentials as secrets in AWS Secrets M anager. Use multi-Region secret replication for the", + "explanation": "Explanation Explanation/Reference: https://aws.amazon.com/blogs/security/how-to-replic ate-secrets-aws-secrets-manager- multiple-regions/", + "references": "" + }, + { + "question": "A company runs an ecommerce application on Amazon E C2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling gro up across multiple Availability Zones. The Auto Sca ling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data i n a MySQL 8.0 database that is hosted on a large EC2 in stance. The database's performance degrades quickly as appl ication load increases. The application handles mor e read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability. Which solution will meet these requirements?", + "options": [ + "A. Use Amazon Redshift with a single node for leader and compute functionality.", + "B. Use Amazon RDS with a Single-AZ deployment Config ure Amazon RDS to add reader instances in a", + "C. Use Amazon Aurora with a Multi-AZ deployment. Con figure Aurora Auto Scaling with Aurora Replicas.", + "D. Use Amazon ElastiCache for Memcached with EC2 Spo t Instances." + ], + "correct": "C. Use Amazon Aurora with a Multi-AZ deployment. 
Con figure Aurora Auto Scaling with Aurora Replicas.", + "explanation": "Explanation/Reference: C, AURORA is 5x performance improvement over MySQL on RDS and handles more read requests than write,; maintaining high availability = Multi-AZ deployment", + "references": "" + }, + { + "question": "A company recently migrated to AWS and wants to imp lement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspe ction server in its on-premises data center. The inspection server performed specific operations suc h as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS C loud. Which solution will meet these requirements?", + "options": [ + "A. Use Amazon GuardDuty for traffic inspection and t raffic filtering in the production VPC.", + "B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering .", + "C. Use AWS Network Firewall to create the required r ules for traffic inspection and traffic filtering f or the", + "D. Use AWS Firewall Manager to create the required r ules for traffic inspection and traffic filtering f or the" + ], + "correct": "C. Use AWS Network Firewall to create the required r ules for traffic inspection and traffic filtering f or the", + "explanation": "Explanation/Reference: **AWS Network Firewall** is a stateful, managed net work firewall and intrusion detection and preventio n service for your virtual private cloud (VPC) that y ou created in Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at th e perimeter of your VPC. This includes filtering tr affic going to and coming from an internet gateway, NAT gateway, o r over VPN or AWS Direct Connect.", + "references": "" + }, + { + "question": "A company hosts a data lake on AWS. The data lake c onsists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all t he data sources within the data lake. Only the company's ma nagement team should have full access to all the visualizations. The rest of the company should have only limited access. Which solution will meet these requirements?", + "options": [ + "A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publi sh", + "B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publi sh", + "C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transfor m,", + "D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to" + ], + "correct": "B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publi sh", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/quicksight/latest/user/ sharing-a-dashboard.html https://docs.aws.amazon.co m/ quicksight/latest/user/share-a-dashboard-grant-acce ss- users.html", + "references": "" + }, + { + "question": "A company is implementing a new business applicatio n. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket. What should the solutions architect do to meet this requirement?", + "options": [ + "A. Create an IAM role that grants access to the S3 b ucket. Attach the role to the EC2 instances.", + "B. 
Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.", + "C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.", + "D. Create an IAM user that grants access to the S3 b ucket. Attach the user account to the EC2 instances ." + ], + "correct": "A. Create an IAM role that grants access to the S3 b ucket. Attach the role to the EC2 instances.", + "explanation": "Explanation/Reference: The correct option to meet this requirement is A: C reate an IAM role that grants access to the S3 buck et and attach the role to the EC2 instances. An IAM role i s an AWS resource that allows you to delegate acces s to AWS resources and services. You can create an IAM r ole that grants access to the S3 bucket and then at tach the role to the EC2 instances. This will allow the EC2 instances to access the S3 bucket and the docum ents stored within it. Option B is incorrect because an IAM policy is used to define permissions for an IAM user or group, no t for an EC2 instance. Option C is incorrect because an IAM group is used to group together IAM users and policies, not to gr ant access to resources. Option D is incorrect because an IAM user is used t o represent a person or service that interacts with AWS resources, not to grant access to resources.", + "references": "" + }, + { + "question": "An application development team is designing a micr oservice that will convert large images to smaller, compressed images. When a user uploads an image thr ough the web interface, the microservice should sto re the image in an Amazon S3 bucket, process and compr ess the image with an AWS Lambda function, and stor e the image in its compressed form in a different S3 bucket. A solutions architect needs to design a solution th at uses durable, stateless components to process th e images automatically. Which combination of actions will meet these requir ements? (Choose two.)", + "options": [ + "A. Create an Amazon Simple Queue Service (Amazon SQS ) queue. Configure the S3 bucket to send a", + "B. Configure the Lambda function to use the Amazon S imple Queue Service (Amazon SQS) queue as the", + "C. Configure the Lambda function to monitor the S3 b ucket for new uploads. When an uploaded image is", + "D. Launch an Amazon EC2 instance to monitor an Amazo n Simple Queue Service (Amazon SQS) queue." + ], + "correct": "", + "explanation": "Explanation/Reference: To design a solution that uses durable, stateless c omponents to process images automatically, a soluti ons architect could consider the following actions: Opt ion A involves creating an SQS queue and configurin g the S3 bucket to send a notification to the queue when an image is uploaded. This allows the application to d ecouple the image upload process from the image processing process and ensures that the image processing proce ss is triggered automatically when a new image is uplo aded. Option B involves configuring the Lambda func tion to use the SQS queue as the invocation source. When th e SQS message is successfully processed, the messag e is deleted from the queue. This ensures that the La mbda function is invoked only once per image and th at the image is not processed multiple times.", + "references": "" + }, + { + "question": "A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and databa se servers are deployed in private subnets in the s ame VPC. 
The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an insp ection VPC. The appliance is configured with an IP interfa ce that can accept IP packets. A solutions architect needs to integrate the web ap plication with the appliance to inspect all traffic to the application before the traffic reaches the web serv er. Which solution will meet these requirements wit h the LEAST operational overhead?", + "options": [ + "A. Create a Network Load Balancer in the public subn et of the application's VPC to route the traffic to the", + "B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traff ic to the", + "C. Deploy a transit gateway in the inspection VPConf igure route tables to route the incoming packets th rough", + "D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to" + ], + "correct": "D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to", + "explanation": "Explanation/Reference: https://aws.amazon.com/blogs/networking-and-content -delivery/scaling-network-traffic- inspection-using -aws- gateway-load-balancer/", + "references": "" + }, + { + "question": "same AWS Region. The data is stored in Amazon EC2 i nstances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The softwar e that accesses this data requires consistently high I/O p erformance. A solutions architect needs to minimize the time th at is required to clone the production data into th e test environment. Which solution will meet these requirements?", + "options": [ + "A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store", + "B. Configure the production EBS volumes to use the E BS Multi-Attach feature. Take EBS snapshots of the", + "C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the", + "D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on t he" + ], + "correct": "D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on t he", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-fast-snapshot-restore.html", + "references": "" + }, + { + "question": "An ecommerce company wants to launch a one-deal-a-d ay website on AWS. Each day will feature exactly on e product on sale for a period of 24 hours. The compa ny wants to be able to handle millions of requests each hour with millisecond latency during peak hours. Wh ich solution will meet these requirements with the LEAST operational overhead?", + "options": [ + "A. Use Amazon S3 to host the full website in differe nt S3 buckets. Add Amazon CloudFront distributions. Set", + "B. Deploy the full website on Amazon EC2 instances t hat run in Auto Scaling groups across multiple", + "C. Migrate the full application to run in containers . Host the containers on Amazon Elastic Kubernetes Service", + "D. Use an Amazon S3 bucket to host the website's sta tic content. Deploy an Amazon CloudFront distributi on." + ], + "correct": "D. Use an Amazon S3 bucket to host the website's sta tic content. 
Deploy an Amazon CloudFront distributi on.", + "explanation": "Explanation/Reference: D because all of the components are infinitely scal able dynamoDB, API Gateway, Lambda, and of course s 3 +cloudfront", + "references": "" + }, + { + "question": "A solutions architect is using Amazon S3 to design the storage architecture of a new digital media app lication. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequen tly while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files. Which storage option meets these requirements? A. S3 Standard", + "options": [ + "B. S3 Intelligent-Tiering", + "C. S3 Standard-Infrequent Access (S3 Standard-IA)", + "D. S3 One Zone-Infrequent Access (S3 One Zone-IA)" + ], + "correct": "B. S3 Intelligent-Tiering", + "explanation": "Explanation/Reference: \"unpredictable pattern\" - always go for Intelligent Tiering of S3 It also meets the resiliency require ment: \"S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S 3 Glacier Instant Retrieval, S3 Glacier Flexible Re trieval, and S3 Glacier Deep Archive redundantly store objects o n multiple devices across a minimum of three Availa bility Zones in an AWS Region\" https://docs.aws.amazon.com /AmazonS3/latest/userguide/DataDurability.html", + "references": "" + }, + { + "question": "A company is storing backup files by using Amazon S 3 Standard storage. The files are accessed frequent ly for 1 month. However, the files are not accessed after 1 month. The company must keep the files indefinite ly. Which storage solution will meet these requirements MOST cost-effectively?", + "options": [ + "A. Configure S3 Intelligent-Tiering to automatically migrate objects.", + "B. Create an S3 Lifecycle configuration to transitio n objects from S3 Standard to S3 Glacier Deep Archi ve after", + "C. Create an S3 Lifecycle configuration to transitio n objects from S3 Standard to S3 Standard-Infrequen t", + "D. Create an S3 Lifecycle configuration to transitio n objects from S3 Standard to S3 One Zone-Infrequen t" + ], + "correct": "B. Create an S3 Lifecycle configuration to transitio n objects from S3 Standard to S3 Glacier Deep Archi ve after", + "explanation": "Explanation/Reference: The storage solution that will meet these requireme nts most cost-effectively is B: Create an S3 Lifecy cle configuration to transition objects from S3 Standar d to S3 Glacier Deep Archive after 1 month. Amazon S3 Glacier Deep Archive is a secure, durable, and extr emely low-cost Amazon S3 storage class for long-ter m retention of data that is rarely accessed and for w hich retrieval times of several hours are acceptabl e. It is the lowest- cost storage option in Amazon S3, making it a cost-effective choice for storing backup files t hat are not accessed after 1 month. You can use an S3 Lifecycle configuration to automatically transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month. This will minimize the storage costs for the backup files that are not accessed frequently.", + "references": "" + }, + { + "question": "A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types for a c ouple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs an d perform an in-depth analysis to identify the root cause of the vertical scaling. 
How should the solutions arch itect generate the information with the LEAST opera tional overhead?", + "options": [ + "A. Use AWS Budgets to create a budget report and com pare EC2 costs based on instance types.", + "B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on", + "C. Use graphs from the AWS Billing and Cost Manageme nt dashboard to compare EC2 costs based on", + "D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source to generate an interactive graph based on instance" + ], + "correct": "B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on", + "explanation": "Explanation/Reference: https://aws.amazon.com/aws-cost-management/aws-cost -explorer/", + "references": "" + }, + { + "question": "A company is designing an application. The applicat ion uses an AWS Lambda function to receive informat ion through Amazon API Gateway and to store the informa tion in an Amazon Aurora PostgreSQL database. During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to hand le the high volumes of data that the company needs to load into the database. A solutions architect must reco mmend a new design to improve scalability and minimize th e configuration effort. Which solution will meet these requirements?", + "options": [ + "A. Refactor the Lambda function code to Apache Tomca t code that runs on Amazon EC2 instances. Connect", + "B. Change the platform from Aurora to Amazon DynamoD Provision a DynamoDB Accelerator (DAX) cluster.", + "C. Set up two Lambda functions. Configure one functi on to receive the information.", + "D. Set up two Lambda functions. Configure one functi on to receive the information." + ], + "correct": "D. Set up two Lambda functions. Configure one functi on to receive the information.", + "explanation": "Explanation/Reference: A - refactoring can be a solution, BUT requires a L OT of effort - not the answer B - DynamoDB is NoSQL and Aurora is SQL, so it requires a DB migration... aga in a LOT of effort, so no the answer C and D are si milar in structure, but... C uses SNS, which would notify th e 2nd Lambda function... provoking the same bottlen eck... not the solution D uses SQS, so the 2nd lambda function can go to th e queue when responsive to keep with the DB load process. Usually the app decoupling helps with the performance improvement by distributing load. In th is case, the bottleneck is solved by uses queues.", + "references": "" + }, + { + "question": "A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes. What should a solutions architect do to accomplish this goal?", + "options": [ + "A. Turn on AWS Config with the appropriate rules.", + "B. Turn on AWS Trusted Advisor with the appropriate checks.", + "C. Turn on Amazon Inspector with the appropriate ass essment template.", + "D. Turn on Amazon S3 server access logging. Configur e Amazon EventBridge (Amazon Cloud Watch Events)." + ], + "correct": "A. Turn on AWS Config with the appropriate rules.", + "explanation": "Explanation/Reference: The solution that will accomplish this goal is A: T urn on AWS Config with the appropriate rules. AWS C onfig is a service that enables you to assess, audit, and ev aluate the configurations of your AWS resources. 
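A minimal Python (boto3) sketch of enabling one such rule, assuming the AWS Config recorder is already running and using the AWS managed rule S3_BUCKET_PUBLIC_READ_PROHIBITED purely as an example of detecting an unwanted S3 bucket configuration:

import boto3

config = boto3.client("config")

# Register an AWS managed rule that flags S3 buckets allowing public reads;
# the rule name here is our own choice, the SourceIdentifier is AWS-managed.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Description": "Detect unauthorized public-read configuration on S3 buckets",
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)

# List buckets that are currently out of compliance with the rule.
result = config.get_compliance_details_by_config_rule(
    ConfigRuleName="s3-bucket-public-read-prohibited",
    ComplianceTypes=["NON_COMPLIANT"],
)
for evaluation in result["EvaluationResults"]:
    qualifier = evaluation["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceId"])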
Yo u can use AWS Config to monitor and record changes to the configuration of your Amazon S3 buckets. By turnin g on AWS Config and enabling the appropriate rules, you can ensure that your S3 buckets do not have unautho rized configuration changes.", + "references": "" + }, + { + "question": "A company is launching a new application and will d isplay application metrics on an Amazon CloudWatch dashboard. The company's product manager needs to a ccess this dashboard periodically. The product manager does not have an AWS account. A solutions a rchitect must provide access to the product manager by following the principle of least privilege. Which solution will meet these requirements?", + "options": [ + "A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and", + "B. Create an IAM user specifically for the product m anager. Attach the CloudWatchReadOnlyAccess AWS", + "C. Create an IAM user for the company's employees. A ttach the ViewOnlyAccess AWS managed policy to the", + "D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboar d," + ], + "correct": "A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest /monitoring/cloudwatch- dashboard-sharing.html", + "references": "" + }, + { + "question": "A company is migrating applications to AWS. The app lications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company's security team needs a single sign-on (SSO) solution across all the compan y's accounts. The company must continue managing th e users and groups in its on-premises self- managed M icrosoft Active Directory. Which solution will meet these requirements?", + "options": [ + "A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a", + "B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to", + "C. Use AWS Directory Service. Create a two-way trust relationship with the company's self- managed", + "D. Deploy an identity provider (IdP) on premises. En able AWS Single Sign-On (AWS SSO) from the AWS" + ], + "correct": "B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to", + "explanation": "Explanation/Reference: In this scenario, AWS applications (Amazon Chime, A mazon Connect, Amazon QuickSight, AWS Single Sign- On, Amazon WorkDocs, Amazon WorkMail, Amazon WorkSp aces, AWS Client VPN, AWS Management Console, and AWS Transfer Family) need to be able t o look up objects from the on-premises domain in or der for them to function. This tells you that authentic ation needs to flow both ways. This scenario requir es a two- way trust between the on-premises and AWS Managed M icrosoft AD domains. It is a requirement of the application Scenario 2: https://aws.amazon.com/es/b logs/security/everything-you- wanted-to-know-about- trusts- with-aws-managed-microsoft-ad/", + "references": "" + }, + { + "question": "A company provides a Voice over Internet Protocol ( VoIP) service that uses UDP connections. The servic e consists of Amazon EC2 instances that run in an Aut o Scaling group. The company has deployments across multiple AWS Regions. The company needs to route us ers to the Region with the lowest latency. The comp any also needs automated failover between Regions. 
Which solution will meet these requirements?", + "options": [ + "A. Deploy a Network Load Balancer (NLB) and an assoc iated target group. Associate the target group with the", + "B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with", + "C. Deploy a Network Load Balancer (NLB) and an assoc iated target group. Associate the target group with the", + "D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with" + ], + "correct": "A. Deploy a Network Load Balancer (NLB) and an assoc iated target group. Associate the target group with the", + "explanation": "Explanation/Reference: Global Accelerator has automatic failover and is pe rfect for this scenario with VoIP https://aws.amazo n.com/ global-accelerator/faqs/", + "references": "" + }, + { + "question": "A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL D B instance with Performance Insights enabled. The tes ting lasts for 48 hours once a month and is the onl y process that uses the database. The team wants to r educe the cost of running the tests without reducin g the compute and memory attributes of the DB instance. Which solution meets these requirements MOST cost-e ffectively?", + "options": [ + "A. Stop the DB instance when tests are completed. Re start the DB instance when required.", + "B. Use an Auto Scaling policy with the DB instance t o automatically scale when tests are completed.", + "C. Create a snapshot when tests are completed. Termi nate the DB instance and restore the snapshot when", + "D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance a gain", + "A. Use AWS Config rules to define and detect resourc es that are not properly tagged.", + "B. Use Cost Explorer to display resources that are n ot properly tagged. Tag those resources manually.", + "C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC 2", + "D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function th rough" + ], + "correct": "A. Use AWS Config rules to define and detect resourc es that are not properly tagged.", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/config/latest/developer guide/tagging.html", + "references": "" + }, + { + "question": "A development team needs to host a website that wil l be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and i mages. Which method is the MOST cost-effective for hosting the website?", + "options": [ + "A. Containerize the website and host it in AWS Farga te.", + "B. Create an Amazon S3 bucket and host the website t here.", + "C. Deploy a web server on an Amazon EC2 instance to host the website.", + "D. Configure an Application Load Balancer with an AW S Lambda target that uses the Express.js framework." + ], + "correct": "B. Create an Amazon S3 bucket and host the website t here.", + "explanation": "Explanation/Reference: The most cost-effective method for hosting a websit e that consists of HTML, CSS, client- side JavaScri pt, and images would be to create an Amazon S3 bucket and h ost the website there. Amazon S3 (Simple Storage Service) is an object storage service that enables you to store and retrieve data over the internet. I t is a highly scalable, reliable, and low-cost storage service th at is well-suited for hosting static websites. 
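A minimal Python (boto3) sketch of that setup, assuming a hypothetical bucket name; the public-access settings and bucket policy that anonymous readers would also need are omitted for brevity:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-dev-team-site"   # hypothetical bucket name

s3.create_bucket(Bucket=bucket)

# Serve index.html by default and error.html for missing objects.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload one page of the static site with the correct content type.
s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<html><body>Team docs</body></html>",
    ContentType="text/html",
)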
You can use Amazon S3 to host a website by creating a bucket, u ploading your website content to the bucket, and th en configuring the bucket as a static website hosting location.", + "references": "" + }, + { + "question": "A company runs an online marketplace web applicatio n on AWS. The application serves hundreds of thousands of users during peak hours. The company n eeds a scalable, near-real- time solution to share the details of millions of financial transactions with several other internal applications. Transactions a lso need to be processed to remove sensitive data before being sto red in a document database for low-latency retrieva l. What should a solutions architect recommend to meet thes e requirements?", + "options": [ + "A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data", + "B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and", + "C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove", + "D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and" + ], + "correct": "C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove", + "explanation": "Explanation/Reference: Kinesis Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk , Datadog, NewRelic, Dynatrace, Sumologic, LogicMonit or, MongoDB, and HTTP End Point as destinations. https://aws.amazon.com/kinesis/data- firehose/faqs/ #:~:text=Kinesis%20Data%20Firehose%20currently% 20supports,HTTP%20E nd%20Point%20as%20destinations.", + "references": "" + }, + { + "question": "A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security , the company must track configuration changes on its AWS resources and record a history of API calls made t o these resources. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.", + "B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.", + "C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.", + "D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls." + ], + "correct": "B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.", + "explanation": "Explanation/Reference: CloudTrail - Track user activity and API call histo ry. Config - Assess, audits, and evaluates the conf iguration and relationships of tag resources.", + "references": "" + }, + { + "question": "A company is preparing to launch a public-facing we b application in the AWS Cloud. The architecture co nsists of Amazon EC2 instances within a VPC behind an Elas tic Load Balancer (ELB). A third-party service is u sed for the DNS. The company's solutions architect must rec ommend a solution to detect and protect against lar ge- scale DDoS attacks. Which solution meets these requirements?", + "options": [ + "A. Enable Amazon GuardDuty on the account.", + "B. Enable Amazon Inspector on the EC2 instances.", + "C. Enable AWS Shield and assign Amazon Route 53 to i t.", + "D. Enable AWS Shield Advanced and assign the ELB to it.", + "A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amaz on", + "B. 
Create a customer managed multi-Region KMS key. C reate an S3 bucket in each Region.", + "C. Create a customer managed KMS key and an S3 bucke t in each Region. Configure the S3 buckets to use", + "D. Create a customer managed KMS key and an S3 bucke t in each Region. Configure the S3 buckets to use" + ], + "correct": "B. Create a customer managed multi-Region KMS key. C reate an S3 bucket in each Region.", + "explanation": "Explanation/Reference: KMS Multi-region keys are required https://docs.aws.amazon.com/kms/latest/developergui de/multi-region-keys- overview.html", + "references": "" + }, + { + "question": "A company recently launched a variety of new worklo ads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and ad minister the instances remotely and securely. The company needs to implement a repeatable process tha t works with native AWS services and follows the AW S Well-Architected Framework. Which solution will mee t these requirements with the LEAST operational overhead?", + "options": [ + "A. Use the EC2 serial console to directly access the terminal interface of each instance for administra tion.", + "B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager", + "C. Create an administrative SSH key pair. Load the p ublic key into each EC2 instance.", + "D. Establish an AWS Site-to-Site VPN connection. Ins truct administrators to use their local on-premises" + ], + "correct": "B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager", + "explanation": "Explanation/Reference: How can Session Manager benefit my organization? An s: No open inbound ports and no need to manage bastion hosts or SSH keys https://docs.aws.amazon.c om/systems- manager/latest/userguide/session- manager.html", + "references": "" + }, + { + "question": "A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website i s experiencing increased demand from around the world . The company must decrease latency for users who access the website. Which solution meets these requirements MOST cost-e ffectively?", + "options": [ + "A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routi ng entries.", + "B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bu cket.", + "C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the", + "D. Enable S3 Transfer Acceleration on the bucket. Ed it the Route 53 entries to point to the new endpoin t." + ], + "correct": "C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the", + "explanation": "Explanation/Reference: The correct answer is C: Add an Amazon CloudFront d istribution in front of the S3 bucket. Edit the Rou te 53 entries to point to the CloudFront distribution. Amazon CloudFront is a content delivery network (CD N) that speeds up the delivery of static and dynami c web content, such as HTML, CSS, JavaScript, and images. It does this by placing cache servers in locations around the world, which store copies of the content and se rve it to users from the location that is nearest t o them. 
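A minimal Python (boto3) sketch of pointing a Route 53 record at an existing CloudFront distribution, assuming hypothetical hosted zone, domain, and distribution values (Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront alias targets use):

import boto3

route53 = boto3.client("route53")

# Upsert an alias A record so the site's DNS name resolves to CloudFront.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",          # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",  # hypothetical distribution domain
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)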
To decrease latency for users who access the static website hosted on Amazon S3, you can add an Amazon CloudFront distribution in front of the S3 bucket a nd edit the Route 53 entries to point to the CloudF ront distribution. This will allow CloudFront to cache t he content of the website at locations around the w orld, which will reduce the time it takes for users to access t he website by serving it from the location that is nearest to them. Answer A, (WRONG) - Replicating the S3 bucket that contains the website to all AWS Regions and adding Route 53 geolocation routing entries would b e more expensive than using CloudFront, as it would require you to pay for the additional storage and d ata transfer costs associated with replicating the bucket to multiple Regions. Answer B, (WRONG) - Provisioning accelerators in AW S Global Accelerator and associating the supplied I P addresses with the S3 bucket would also be more exp ensive than using CloudFront, as it would require y ou to pay for the additional cost of the accelerators. Answer D, (WRONG) - Enabling S3 Transfer Accelerati on on the bucket and editing the Route 53 entries t o point to the new endpoint would not reduce latency for users who access the website from around the wo rld, as it only speeds up the transfer of large files over the public internet and does not have cache servers in multiple locations around the world.", + "references": "" + }, + { + "question": "A company maintains a searchable repository of item s on its website. The data is stored in an Amazon R DS for MySQL database table that contains more than 10 mil lion rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website. The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem. Which solution addresses this performance issue?", + "options": [ + "A. Change the storage type to Provisioned IOPS SSD.", + "B. Change the DB instance to a memory optimized inst ance class.", + "C. Change the DB instance to a burstable performance instance class.", + "D. Enable Multi-AZ RDS read replicas with MySQL nati ve asynchronous replication." + ], + "correct": "A. Change the storage type to Provisioned IOPS SSD.", + "explanation": "Explanation/Reference: A: Made for high levels of I/O opps for consistent, predictable performance. B: Can improve performance of insert opps, but it's a storage performance rather than processing power problem C: for moderate CPU usage D: for scale read-only replicas and doesn't improve performance of insert opps on the primary DB insta nce", + "references": "" + }, + { + "question": "A company has thousands of edge devices that collec tively generate 1 TB of status alerts each day. Eac h alert is approximately 2 KB in size. A solutions architec t needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly ava ilable solution. However, the company needs to mini mize costs and does not want to manage additional infras tructure. Additionally, the company wants to keep 1 4 days of data available for immediate analysis and archiv e any data older than 14 days. What is the MOST operationally efficient solution t hat meets these requirements?", + "options": [ + "A. Create an Amazon Kinesis Data Firehose delivery s tream to ingest the alerts. Configure the Kinesis D ata", + "B. 
Launch Amazon EC2 instances across two Availabili ty Zones and place them behind an Elastic Load", + "C. Create an Amazon Kinesis Data Firehose delivery s tream to ingest the alerts. Configure the Kinesis D ata", + "D. Create an Amazon Simple Queue Service (Amazon SQS ) standard queue to ingest the alerts, and set the" + ], + "correct": "A. Create an Amazon Kinesis Data Firehose delivery s tream to ingest the alerts. Configure the Kinesis D ata", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonS3/latest/usergui de/lifecycle-transition-general- considerations.htm l", + "references": "" + }, + { + "question": "A company's application integrates with multiple so ftware-as-a-service (SaaS) sources for data collect ion. The company runs Amazon EC2 instances to receive the da ta and to upload the data to an Amazon S3 bucket fo r analysis. The same EC2 instance that receives and u ploads the data also sends a notification to the us er when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Create an Auto Scaling group so that EC2 instance s can scale out. Configure an S3 event notification to", + "B. Create an Amazon AppFlow flow to transfer data be tween each SaaS source and the S3 bucket. Configure", + "C. Create an Amazon EventBridge (Amazon CloudWatch E vents) rule for each SaaS source to send output", + "D. Create a Docker container to use instead of an EC 2 instance. Host the containerized application on A mazon" + ], + "correct": "B. Create an Amazon AppFlow flow to transfer data be tween each SaaS source and the S3 bucket. Configure", + "explanation": "Explanation/Reference: https://aws.amazon.com/appflow/", + "references": "" + }, + { + "question": "A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC . The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 insta nces download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. T he company is concerned about data transfer charges . What is the MOST cost-effective way for the company to avoid Regional data transfer charges?", + "options": [ + "A. Launch the NAT gateway in each Availability Zone.", + "B. Replace the NAT gateway with a NAT instance.", + "C. Deploy a gateway VPC endpoint for Amazon S3.", + "D. Provision an EC2 Dedicated Host to run the EC2 in stances." + ], + "correct": "C. Deploy a gateway VPC endpoint for Amazon S3.", + "explanation": "Explanation/Reference: Deploying a gateway VPC endpoint for Amazon S3 is t he most cost-effective way for the company to avoid Regional data transfer charges. A gateway VPC endpo int is a network gateway that allows communication between instances in a VPC and a service, such as A mazon S3, without requiring an Internet gateway or a NAT device. Data transfer between the VPC and the servi ce through a gateway VPC endpoint is free of charge , while data transfer between the VPC and the Interne t through an Internet gateway or NAT device is subj ect to data transfer charges. By using a gateway VPC endpo int, the company can reduce its data transfer costs by eliminating the need to transfer data through the N AT gateway to access Amazon S3. 
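Purely as an illustrative sketch of that approach (the Region, VPC ID, and route table IDs below are placeholder assumptions, not values from the answer), a gateway endpoint for S3 could be created with boto3 like this:

import boto3

# Hypothetical sketch: create a gateway VPC endpoint for S3 so traffic to the
# buckets stays on the AWS network instead of going through the NAT gateway.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    # Associate the endpoint with the private route tables so instances in
    # those subnets reach S3 through the endpoint automatically.
    RouteTableIds=["rtb-0aaa1111bbbb2222c", "rtb-0ddd3333eeee4444f"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])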
This option would provide the required connectivity to Amazon S3 and minimize data transfer charges.", + "references": "" + }, + { + "question": "A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for both timely backups to Amazon S3 and minimal impact on internet connectivity for internal users. Which solution meets these requirements?", + "options": [ + "A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.", + "B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.", + "C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to", + "D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits" + ], + "correct": "B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.", + "explanation": "Explanation/Reference: A: a VPN still rides over the internet and consumes the same bandwidth. C: shipping Snowball devices every day is not a practical long-term solution in terms of cost or efficiency. D: S3 service limits are not the constraint here, so raising them changes nothing. B: a dedicated Direct Connect link moves backup traffic off the internet connection, leaving bandwidth for internal users.", + "references": "" + }, + { + "question": "A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion. Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)", + "options": [ + "A. Enable versioning on the S3 bucket.", + "B. Enable MFA Delete on the S3 bucket.", + "C. Create a bucket policy on the S3 bucket.", + "D. Enable default encryption on the S3 bucket." + ], + "correct": "", + "explanation": "Explanation/Reference: The correct combination is A and B, as you can see here: https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/ It states the following: To prevent or mitigate future accidental deletions, consider the following features: Enable versioning to keep historical versions of an object. Enable Cross-Region Replication of objects. Enable MFA Delete to require multi-factor authentication (MFA) when deleting an object version.", + "references": "" + }, + { + "question": "A company has a data ingestion workflow that consists of the following: \u00b7 An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries \u00b7 An AWS Lambda function to process the data and record metadata The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job. Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Choose two.)", + "options": [ + "A. Deploy the Lambda function in multiple Availability Zones.", + "B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.", + "C. Increase the CPU and memory that are allocated to the Lambda function.", + "D. Increase provisioned throughput for the Lambda function."
+ ], + "correct": "", + "explanation": "Explanation/Reference: Options A, C, and D are out, since Lambda is a fully managed service that provides high availability and scalability on its own. The answers are B and E: with an Amazon SQS queue subscribed to the SNS topic, messages are retained durably and the Lambda function reads from the queue, so nothing gets lost when a network failure occurs.", + "references": "" + }, + { + "question": "A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size. Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation. What should a solutions architect do to meet these requirements with the LEAST development effort?", + "options": [ + "A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the", + "B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the", + "C. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are", + "D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are" + ], + "correct": "B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the", + "explanation": "Explanation/Reference: Amazon Macie is a data security and data privacy service that uses machine learning (ML) and pattern matching to discover and protect your sensitive data.", + "references": "" + }, + { + "question": "A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week. What should the company do to guarantee the EC2 capacity?", + "options": [ + "A. Purchase Reserved Instances that specify the Region needed.", + "B. Create an On-Demand Capacity Reservation that specifies the Region needed.", + "C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.", + "D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed." + ], + "correct": "D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.", + "explanation": "Explanation/Reference: ***CORRECT*** Option D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed. An On-Demand Capacity Reservation is a type of Amazon EC2 reservation that enables you to create and manage reserved capacity on Amazon EC2. With an On-Demand Capacity Reservation, you can specify the Region and Availability Zones where you want to reserve capacity and the number of EC2 instances you want to reserve, with no long-term commitment, so it fits a one-week event. This allows you to guarantee capacity in specific Availability Zones in a specific Region. ***WRONG*** Option A, purchasing Reserved Instances that specify only the Region, would not guarantee capacity in specific Availability Zones.
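To make option D concrete, a minimal boto3 sketch follows; the instance type, instance count, Region, Availability Zone names, and end date are assumptions for illustration, not values from the question:

import datetime

import boto3

# Hypothetical sketch of option D: one On-Demand Capacity Reservation per
# Availability Zone for the one-week event.
ec2 = boto3.client("ec2", region_name="us-east-1")
event_end = datetime.datetime(2024, 6, 8, tzinfo=datetime.timezone.utc)

for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
    ec2.create_capacity_reservation(
        InstanceType="m5.large",
        InstancePlatform="Linux/UNIX",
        AvailabilityZone=az,
        InstanceCount=10,
        # Release the reserved capacity automatically after the event.
        EndDateType="limited",
        EndDate=event_end,
    )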
Option B, creating an On-Demand Capacity Reservation that specifies only the Region, would likewise not guarantee capacity in specific Availability Zones. Option C, purchasing Reserved Instances that specify the Region and three Availability Zones, is not suitable because Reserved Instances require a one-year or three-year commitment, which does not fit a one-week event.", + "references": "" + }, + { + "question": "A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Move the catalog to Amazon ElastiCache for Redis.", + "B. Deploy a larger EC2 instance with a larger instance store.", + "C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.", + "D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system." + ], + "correct": "D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.", + "explanation": "Explanation/Reference: The keyword is a \"durable\" location. A (an in-memory cache) and B (instance store) are not durable, and C (Glacier Deep Archive) takes hours to retrieve, so it is not highly available. That leaves D.", + "references": "" + }, + { + "question": "A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-old as quickly as possible. A delay in retrieving older files is acceptable. Which solution will meet these requirements MOST cost-effectively?", + "options": [ + "A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files", + "B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3", + "C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in", + "D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3" + ], + "correct": "B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3", + "explanation": "Explanation/Reference: Users access the files randomly. S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new applications, and user-generated content. https://aws.amazon.com/fr/s3/storage-classes/intelligent-tiering/", + "references": "" + }, + { + "question": "A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Create an AWS Lambda function to apply the patch to all EC2 instances.", + "B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.", + "C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.", + "D. 
Use AWS Systems Manager Run Command to run a cust om command that applies the patch to all EC2" + ], + "correct": "D. Use AWS Systems Manager Run Command to run a cust om command that applies the patch to all EC2", + "explanation": "Explanation/Reference: AWS Systems Manager Run Command allows the company to run commands or scripts on multiple EC2 instances. By using Run Command, the company can qu ickly and easily apply the patch to all 1,000 EC2 instances to remediate the security vulnerability. Creating an AWS Lambda function to apply the patch to all EC2 instances would not be a suitable solution, as Lambda functions are not designed to run on EC2 instances. Configuring AWS Systems Manager Patch Ma nager to apply the patch to all EC2 instances would not be a suitable solution, as Patch Manager is not designed to apply third-party software patches. Sc heduling an AWS Systems Manager maintenance window to apply the patch to all EC2 instances would not be a suita ble solution, as maintenance windows are not designed t o apply patches to third-party software", + "references": "" + }, + { + "question": "A company is developing an application that provide s order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, o rganize the data into an easy-to-read HTML format, and send the report to several email addresses at the same t ime every morning. Which combination of steps should a solutions archi tect take to meet these requirements? (Choose two.)", + "options": [ + "A. Configure the application to send the data to Ama zon Kinesis Data Firehose.", + "B. Use Amazon Simple Email Service (Amazon SES) to f ormat the data and to send the report by email.", + "C. Create an Amazon EventBridge (Amazon CloudWatch E vents) scheduled event that invokes an AWS Glue", + "D. Create an Amazon EventBridge (Amazon CloudWatch E vents) scheduled event that invokes an AWS" + ], + "correct": "", + "explanation": "Explanation/Reference: You can use SES to format the report in HTML. https://docs.aws.amazon.com/ses/latest/dg/send-emai l-formatted.html", + "references": "" + }, + { + "question": "A company wants to migrate its on-premises applicat ion to AWS. The application produces output files t hat vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically. is highly available, an d requires minimum operational overhead. Which solution will meet these requirements?", + "options": [ + "A. Migrate the application to run as containers on A mazon Elastic Container Service (Amazon ECS). Use", + "B. Migrate the application to run as containers on A mazon Elastic Kubernetes Service (Amazon EKS). Use", + "C. Migrate the application to Amazon EC2 instances i n a Multi-AZ Auto Scaling group. Use Amazon Elastic File", + "D. Migrate the application to Amazon EC2 instances i n a Multi-AZ Auto Scaling group. Use Amazon Elastic" + ], + "correct": "C. Migrate the application to Amazon EC2 instances i n a Multi-AZ Auto Scaling group. Use Amazon Elastic File", + "explanation": "Explanation/Reference: EFS is a standard file system, it scales automatica lly and is highly available.", + "references": "" + }, + { + "question": "A company needs to store its accounting records in Amazon S3. The records must be immediately accessib le for 1 year and then must be archived for an additio nal 9 years. 
No one at the company, including admin istrative users and root users, can be able to delete the rec ords during the entire 10-year period. The records must be stored with maximum resiliency. Which solution will meet these requirements?", + "options": [ + "A. Store the records in S3 Glacier for the entire 10 -year period. Use an access control policy to deny deletion", + "C. Use an S3 Lifecycle policy to transition the reco rds from S3 Standard to S3 Glacier Deep Archive aft er 1", + "D. Use an S3 Lifecycle policy to transition the reco rds from S3 Standard to S3 One Zone- Infrequent Acc ess" + ], + "correct": "C. Use an S3 Lifecycle policy to transition the reco rds from S3 Standard to S3 Glacier Deep Archive aft er 1", + "explanation": "Explanation/Reference: Use S3 Object Lock in compliance mode https://docs.aws.amazon.com/AmazonS3/latest/usergui de/object-lock-overview.html", + "references": "" + }, + { + "question": "A company runs multiple Windows workloads on AWS. T he company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The fi le shares synchronize data between themselves and maintain duplicate copies. The company wants a high ly available and durable storage solution that pres erves how users currently access the files. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Migrate all the data to Amazon S3. Set up IAM aut hentication for users to access files.", + "B. Set up an Amazon S3 File Gateway. Mount the S3 Fi le Gateway on the existing EC2 instances.", + "C. Extend the file share environment to Amazon FSx f or Windows File Server with a Multi- AZ configurati on.", + "D. Extend the file share environment to Amazon Elast ic File System (Amazon EFS) with a Multi-AZ" + ], + "correct": "C. Extend the file share environment to Amazon FSx f or Windows File Server with a Multi- AZ configurati on.", + "explanation": "Explanation/Reference: https://aws.amazon.com/blogs/aws/amazon-fsx-for-win dows-file-server-update-new- enterprise-ready-featu res/", + "references": "" + }, + { + "question": "A solutions architect is developing a VPC architect ure that includes multiple subnets. The architectur e will host applications that use Amazon EC2 instances and Amaz on RDS DB instances. The architecture consists of s ix subnets in two Availability Zones. Each Availabilit y Zone includes a public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the RDS databases. Which solution will meet these requirements?", + "options": [ + "A. Create a new route table that excludes the route to the public subnets' CIDR blocks.", + "B. Create a security group that denies inbound traff ic from the security group that is assigned to inst ances in", + "C. Create a security group that allows inbound traff ic from the security group that is assigned to inst ances in", + "D. Create a new peering connection between the publi c subnets and the private subnets." + ], + "correct": "C. Create a security group that allows inbound traff ic from the security group that is assigned to inst ances in", + "explanation": "Explanation Explanation/Reference: A: doesn't fully configure the traffic flow B: security groups don't have deny rules D: peering is mostly between VPCs, doesn't really h elp here", + "references": "" + }, + { + "question": "A company has registered its domain name with Amazo n Route 53. 
The company uses Amazon API Gateway in the ca-central-1 Region as a public interface fo r its backend microservice APIs. Third-party servic es consume the APIs securely. The company wants to des ign its API Gateway URL with the company's domain name and corresponding certificate so that the thir d-party services can use HTTPS. Which solution will meet these requirements?", + "options": [ + "A. Create stage variables in API Gateway with Name=\" Endpoint-URL\" and Value=\"Company Domain Name\" to", + "B. Create Route 53 DNS records with the company's do main name. Point the alias record to the Regional A PI", + "C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain", + "D. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain" + ], + "correct": "C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/apigateway/latest/devel operguide/apigateway-regional- api-custom-domain- create.html", + "references": "" + }, + { + "question": "A company is running a popular social media website . The website gives users the ability to upload ima ges to share with other users. The company wants to make s ure that the images do not contain inappropriate co ntent. The company needs a solution that minimizes develop ment effort. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Use Amazon Comprehend to detect inappropriate con tent. Use human review for low- confidence", + "B. Use Amazon Rekognition to detect inappropriate co ntent. Use human review for low- confidence", + "C. Use Amazon SageMaker to detect inappropriate cont ent. Use ground truth to label low- confidence", + "D. Use AWS Fargate to deploy a custom machine learni ng model to detect inappropriate content. Use groun d", + "A. Use Amazon EC2 instances, and install Docker on t he instances.", + "B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.", + "C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.", + "D. Use Amazon EC2 instances from an Amazon Elastic C ontainer Service (Amazon ECS)- optimized Amazon" + ], + "correct": "C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.", + "explanation": "Explanation/Reference: AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without having to manage servers. AWS Fargate is co mpatible with Amazon Elastic Container Service (ECS ) and Amazon Elastic Kubernetes Service (EKS). https://aws.amazon.com/fr/fargate/", + "references": "" + }, + { + "question": "A company hosts more than 300 global websites and a pplications. The company requires a platform to ana lyze more than 30 TB of clickstream data each day. What should a solutions architect do to transmit and pro cess the clickstream data?", + "options": [ + "A. Design an AWS Data Pipeline to archive the data t o an Amazon S3 bucket and run an Amazon EMR cluster", + "B. Create an Auto Scaling group of Amazon EC2 instan ces to process the data and send it to an Amazon S3", + "C. Cache the data to Amazon CloudFront. Store the da ta in an Amazon S3 bucket. When an object is added to", + "D. Collect the data from Amazon Kinesis Data Streams . Use Amazon Kinesis Data Firehose to transmit the" + ], + "correct": "D. Collect the data from Amazon Kinesis Data Streams . 
Use Amazon Kinesis Data Firehose to transmit the", + "explanation": "Explanation/Reference: https://aws.amazon.com/es/blogs/big-data/real-time- analytics-with-amazon-redshift- streaming-ingestion /", + "references": "" + }, + { + "question": "A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that i s configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the websi te so that the requests will use HTTPS. What should a solutions architect do to meet this requirement?", + "options": [ + "A. Update the ALB's network ACL to accept only HTTPS traffic.", + "B. Create a rule that replaces the HTTP in the URL w ith HTTPS.", + "C. Create a listener rule on the ALB to redirect HTT P traffic to HTTPS.", + "D. Replace the ALB with a Network Load Balancer conf igured to use Server Name Indication (SNI)." + ], + "correct": "C. Create a listener rule on the ALB to redirect HTT P traffic to HTTPS.", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/fr_fr/elasticloadbalanc ing/latest/application/create-https- listener.html https://aws.amazon.com/fr/premiumsupport/knowledge- center/elb-redirect-http-to- https-using-alb/", + "references": "" + }, + { + "question": "A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2 instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in t he application. The company must also implement a solution to automatically rotate the database crede ntials on a regular basis. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Store the database credentials in the instance me tadata. Use Amazon EventBridge (Amazon CloudWatch", + "B. Store the database credentials in a configuration file in an encrypted Amazon S3 bucket.", + "C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for th e", + "D. Store the database credentials as encrypted param eters in AWS Systems Manager Parameter Store. Turn" + ], + "correct": "C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for th e", + "explanation": "Explanation/Reference: The correct solution is C. Store the database crede ntials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the requi red permission to the EC2 role to grant access to t he secret. AWS Secrets Manager is a service that enables you t o easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. By storing the database credentia ls as a secret in Secrets Manager, you can ensure that they are not hardcoded in the application and that they are automatically rotated on a regular basis. To grant the EC2 instance access to the secret, you can atta ch the required permission to the EC2 role. This will allo w the application to retrieve the secret from Secre ts Manager as needed.", + "references": "" + }, + { + "question": "A company is deploying a new public web application to AWS. The application will run behind an Applica tion Load Balancer (ALB). The application needs to be en crypted at the edge with an SSL/TLS certificate tha t is issued by an external certificate authority (CA). T he certificate must be rotated each year before the certificate expires. 
What should a solutions architect do to me et these requirements?", + "options": [ + "A. Use AWS Certificate Manager (ACM) to issue an SSL /TLS certificate. Apply the certificate to the ALB. Use", + "B. Use AWS Certificate Manager (ACM) to issue an SSL /TLS certificate. Import the key material from the", + "C. Use AWS Certificate Manager (ACM) Private Certifi cate Authority to issue an SSL/TLS certificate from the", + "D. Use AWS Certificate Manager (ACM) to import an SS L/TLS certificate. Apply the certificate to the ALB . Use" + ], + "correct": "", + "explanation": "Explanation/Reference: It's a third-party certificate, hence AWS cannot ma nage renewal automatically. The closest thing you c an do is to send a notification to renew the 3rd party certi ficate.", + "references": "" + }, + { + "question": "A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company intends to crea te a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The com pany needs to store the original files and the conv erted files. A solutions architect must design a scalable soluti on to accommodate demand that will grow rapidly ove r time. Which solution meets these requirements MOST cost-e ffectively?", + "options": [ + "A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to", + "B. Save the .pdf files to Amazon DynamoDUse the Dyna moDB Streams feature to invoke an AWS Lambda", + "C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Am azon", + "D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Am azon" + ], + "correct": "A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/ServiceQuotas.ht ml", + "references": "" + }, + { + "question": "A company has more than 5 TB of file data on Window s file servers that run on premises. Users and applications interact with the data each day. The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access pa tterns. The company uses an AWS Site-to- Site VPN connectio n for connectivity to AWS. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on- premises file data to", + "B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3", + "C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon", + "D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx" + ], + "correct": "", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/filegateway/latest/file fsxw/what-is-file-fsxw.html", + "references": "" + }, + { + "question": "A hospital recently deployed a RESTful API with Ama zon API Gateway and AWS Lambda. The hospital uses API Gateway and Lambda to upload reports that are i n PDF format and JPEG format. 
The hospital needs to modify the Lambda code to identify protected health information (PHI) in the reports. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted", + "B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI f rom", + "C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the", + "D. Use Amazon Rekognition to extract the text from t he reports. Use Amazon Comprehend Medical to identi fy" + ], + "correct": "C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the", + "explanation": "Explanation/Reference: The correct solution is C: Use Amazon Textract to e xtract the text from the reports. Use Amazon Compre hend Medical to identify the PHI from the extracted text . Option C: Using Amazon Textract to extract the te xt from the reports, and Amazon Comprehend Medical to identify the PHI from the extracted text, would be the most efficient solution as it would involve the least op erational overhead. Textract is specifically design ed for extracting text from documents, and Comprehend Medi cal is a fully managed service that can accurately identify PHI in medical text. This solution would r equire minimal maintenance and would not incur any additional costs beyond the usage fees for Textract and Compre hend Medical.", + "references": "" + }, + { + "question": "A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requi res the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files contain critical business data that i s not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are rarel y accessed after the first 30 days. Which storage solution is MOST cost-effective?", + "options": [ + "A. Create an S3 bucket lifecycle policy to move file s from S3 Standard to S3 Glacier 30 days from objec t", + "B. Create an S3 bucket lifecycle policy to move file s from S3 Standard to S3 One Zone- Infrequent Acces s (S3", + "C. Create an S3 bucket lifecycle policy to move file s from S3 Standard to S3 Standard- Infrequent Acces s (S3", + "D. Create an S3 bucket lifecycle policy to move file s from S3 Standard to S3 Standard- Infrequent Acces s (S3" + ], + "correct": "C. Create an S3 bucket lifecycle policy to move file s from S3 Standard to S3 Standard- Infrequent Acces s (S3", + "explanation": "Explanation/Reference: > Immediate accessibility is always required as the files contain critical business data that is not e asy to reproduce If they do not explicitly mention that th ey are using Glacier Instant Retrieval, we should a ssume that Glacier -> takes more time to retrieve and may not meet the requirements", + "references": "" + }, + { + "question": "A company hosts an application on multiple Amazon E C2 instances. The application processes messages fr om an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messag es. What should a solutions architect do to ensure mess ages are being processed once only?", + "options": [ + "A. 
Use the CreateQueue API call to create a new queu e.", + "B. Use the AddPermission API call to add appropriate permissions.", + "C. Use the ReceiveMessage API call to set an appropr iate wait time.", + "D. Use the ChangeMessageVisibility API call to incre ase the visibility timeout." + ], + "correct": "D. Use the ChangeMessageVisibility API call to incre ase the visibility timeout.", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs- visibility-timeout.htm l", + "references": "" + }, + { + "question": "A solutions architect is designing a new hybrid arc hitecture to extend a company's on- premises infras tructure to AWS. The company requires a highly available con nection with consistent low latency to an AWS Regio n. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails. What should the solutions architect do to meet these req uirements?", + "options": [ + "A. Provision an AWS Direct Connect connection to a R egion. Provision a VPN connection as a backup if th e", + "B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunne l for", + "C. Provision an AWS Direct Connect connection to a R egion. Provision a second Direct Connect connection to", + "D. Provision an AWS Direct Connect connection to a R egion. Use the Direct Connect failover attribute fr om the" + ], + "correct": "A. Provision an AWS Direct Connect connection to a R egion. Provision a VPN connection as a backup if th e", + "explanation": "Explanation/Reference: Direct Connect goes throught 1 Gbps, 10 Gbps or 100 Gbps and the VPN goes up to 1.25 Gbps. https:// docs.aws.amazon.com/whitepapers/latest/aws-vpc-conn ectivity- options/aws-direct-connect-vpn.html", + "references": "" + }, + { + "question": "A company is running a business-critical web applic ation on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Av ailability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data. Which solution will meet these requirements with th e LEAST operational effort?", + "options": [ + "A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traf fic.", + "B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi -AZ.", + "D. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to" + ], + "correct": "B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi -AZ.", + "explanation": "Explanation/Reference: RDS Proxy for Aurora https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/rds-proxy.html", + "references": "" + }, + { + "question": "A company's HTTP application is behind a Network Lo ad Balancer (NLB). The NLB's target group is config ured to use an Amazon EC2 Auto Scaling group with multip le EC2 instances that run the web service. The company notices that the NLB is not detecting H TTP errors for the application. These errors requir e a manual restart of the EC2 instances that run the we b service. The company needs to improve the applica tion's availability without writing custom scripts or code . What should a solutions architect do to meet thes e requirements?", + "options": [ + "A. 
Enable HTTP health checks on the NLB, supplying t he URL of the company's application.", + "B. Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors", + "C. Replace the NLB with an Application Load Balancer . Enable HTTP health checks by supplying the URL of", + "D. Create an Amazon Cloud Watch alarm that monitors the UnhealthyHostCount metric for the NLB. Configur e" + ], + "correct": "C. Replace the NLB with an Application Load Balancer . Enable HTTP health checks by supplying the URL of", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company runs a shopping application that uses Ama zon DynamoDB to store customer information. In case of data corruption, a solutions architect needs to des ign a solution that meets a recovery point objectiv e (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour. What should the solutions architect recommend to me et these requirements?", + "options": [ + "A. Configure DynamoDB global tables. For RPO recover y, point the application to a different AWS Region.", + "B. Configure DynamoDB point-in-time recovery. For RP O recovery, restore to the desired point in time.", + "C. Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data fr om", + "D. Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes." + ], + "correct": "B. Configure DynamoDB point-in-time recovery. For RP O recovery, restore to the desired point in time.", + "explanation": "Explanation/Reference: A - DynamoDB global tables provides multi-Region, a nd multi-active database, but it not valid \"in case of data corruption\". In this case, you need a backup. This solutions isn't valid. **B** - Point in Time Recove ry is designed as a continuous backup juts to recover it fast. It covers perfectly the RPO, and probably the RTO. https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/PointInTimeRec overy.html C - A daily export will not cover the RPO of 15min. D - DynamoD B is serverless...", + "references": "" + }, + { + "question": "A company runs a photo processing application that needs to frequently upload and download pictures fr om Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increa sed cost in data transfer fees and needs to implement a solution to reduce these costs. How can the solutions architect meet this requireme nt?", + "options": [ + "A. Deploy Amazon API Gateway into a public subnet an d adjust the route table to route S3 calls through it.", + "B. Deploy a NAT gateway into a public subnet and att ach an endpoint policy that allows access to the S3", + "C. Deploy the application into a public subnet and a llow it to route through an internet gateway to acc ess the", + "D. Deploy an S3 VPC gateway endpoint into the VPC an d attach an endpoint policy that allows access to t he" + ], + "correct": "D. Deploy an S3 VPC gateway endpoint into the VPC an d attach an endpoint policy that allows access to t he", + "explanation": "Explanation/Reference: Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets. By deploying an S3 VPC gateway endpoint, t he application can access the S3 buckets over a pri vate network connection within the VPC, eliminating the need for data transfer over the internet. 
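As a purely illustrative sketch of the endpoint policy mentioned just below (the endpoint ID and bucket names are placeholder assumptions), such a policy could be attached to an existing gateway endpoint with boto3 like this:

import json

import boto3

# Hypothetical sketch: attach an endpoint policy to an existing S3 gateway
# endpoint so only the application's buckets are reachable through it.
ec2 = boto3.client("ec2")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::photo-uploads-bucket",
                "arn:aws:s3:::photo-uploads-bucket/*",
            ],
        }
    ],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PolicyDocument=json.dumps(policy),
)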
This can help reduce data transfer fees as well as improve the pe rformance of the application. The endpoint policy c an be used to specify which S3 buckets the application ha s access to.", + "references": "" + }, + { + "question": "A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC 2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises net work, through the company's internet connection, to the bastion host, and to the application servers. The s olutions architect must make sure that the security groups of all the EC2 instances will allow that access. Which combination of steps should the solutions arc hitect take to meet these requirements? (Choose two .)", + "options": [ + "A. Replace the current security group of the bastion host with one that only allows inbound access from the", + "B. Replace the current security group of the bastion host with one that only allows inbound access from the", + "C. Replace the current security group of the bastion host with one that only allows inbound access from the", + "D. Replace the current security group of the applica tion instances with one that allows inbound SSH acc ess" + ], + "correct": "", + "explanation": "Explanation/Reference: C because from on-prem network to bastion through i nternet (using on-prem resource's public IP), D bec ause bastion and ec2 is in same VPC, meaning bastion can communicate to EC2 via it's private IP address", + "references": "" + }, + { + "question": "hosted on Amazon EC2 in public subnets. The databas e tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company. How should security groups be configured in this si tuation? (Choose two.)", + "options": [ + "A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.", + "B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.", + "C. Configure the security group for the database tie r to allow inbound traffic on port 1433 from the se curity", + "D. Configure the security group for the database tie r to allow outbound traffic on ports 443 and 1433 t o the" + ], + "correct": "", + "explanation": "Explanation/Reference: Web Server Rules: Inbound traffic from 443 (HTTPS) Source 0.0.0.0/0 - Allows inbound HTTPS access from any IPv4 address Database Rules : 1433 (MS SQL)The default port to access a Microsoft SQL Server database, for example, on an Amazon RDS instance ht tps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ security-group-rules- reference.html", + "references": "" + }, + { + "question": "A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application consists of application tiers that communicate with each ot her by way of RESTful services. Transactions are dropped w hen one tier becomes overloaded. A solutions archit ect must design a solution that resolves these issues a nd modernizes the application. Which solution meets these requirements and is the MOST operationally efficient?", + "options": [ + "A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer.", + "B. Use Amazon CloudWatch metrics to analyze the appl ication performance history to determine the server s'", + "C. 
Use Amazon Simple Notification Service (Amazon SN S) to handle the messaging between application", + "D. Use Amazon Simple Queue Service (Amazon SQS) to h andle the messaging between application servers" + ], + "correct": "A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer.", + "explanation": "Explanation/Reference: https://serverlessland.com/patterns/apigw-http-sqs- lambda-sls", + "references": "" + }, + { + "question": "A company receives 10 TB of instrumentation data ea ch day from several machines located at a single fa ctory. The data consists of JSON files stored on a storage area network (SAN) in an on-premises data center l ocated within the factory. The company wants to send this data to Amazon S3 where it can be accessed by sever al additional systems that provide critical near- real -time analytics. A secure transfer is important bec ause the data is considered sensitive. Which solution offers the MOST reliable data transf er?", + "options": [ + "A. AWS DataSync over public internet", + "B. AWS DataSync over AWS Direct Connect", + "C. AWS Database Migration Service (AWS DMS) over pub lic internet", + "D. AWS Database Migration Service (AWS DMS) over AWS Direct Connect" + ], + "correct": "B. AWS DataSync over AWS Direct Connect", + "explanation": "Explanation/Reference: The most reliable solution for transferring the dat a in a secure manner would be option B: AWS DataSync over AWS Direct Connect. AWS DataSy nc is a data transfer service that uses network optimization techniques to transfer data efficientl y and securely between on-premises storage systems and Amazon S3 or other storage targets. When used over AWS Direct Connect, DataSync can provide a dedicate d and secure network connection between your on-premi ses data center and AWS. This can help to ensure a more reliable and secure data transfer compared to using the public internet.", + "references": "" + }, + { + "question": "A company needs to configure a real-time data inges tion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Creat e", + "B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination", + "C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream.", + "D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to" + ], + "correct": "C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream.", + "explanation": "Explanation/Reference: (A) - You don't need to deploy an EC2 instance to h ost an API - Operational overhead (B) - Same as A ( **C**) - Is the answer (D) - AWS Glue gets data from S3, not from API GW. AWS Glue could do ETL by itself, so d on't need lambda. Non sense. https://aws.amazon.com/glue/", + "references": "" + }, + { + "question": "A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years. What is the MOST operationally efficient solution t hat meets these requirements?", + "options": [ + "A. Use DynamoDB point-in-time recovery to back up th e table continuously.", + "B. 
Use AWS Backup to create backup schedules and ret ention policies for the table.", + "C. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon" + ], + "correct": "B. Use AWS Backup to create backup schedules and ret ention policies for the table.", + "explanation": "Explanation/Reference: \"Amazon DynamoDB offers two types of backups: point -in-time recovery (PITR) and on- demand backups. (==> D is not the answer) PITR is used to recover y our table to any point in time in a rolling 35 day window, which is used to help customers mitigate accidental deletes or writes to their tables from bad code, m alicious access, or user error. (==> A isn't the answer) On demand backups are designed for long-term archiving and retention, which is typically used to help customer s meet compliance and regulatory requirements. This is the second of a series of two blog posts about using AW S Backup to set up scheduled on-demand backups for Amazon DynamoDB. Part 1 presents the steps to set u p a scheduled backup for DynamoDB tables from the AWS Management Console.\" (==> Not the DynamoBD cons ole and C isn't the answer either) https:// aws.amazon.com/blogs/database/part-2-set-up-schedul ed-backups-for-amazon- dynamodb-using-aws-backup/", + "references": "" + }, + { + "question": "A company is planning to use an Amazon DynamoDB tab le for data storage. The company is concerned about cost optimization. The table will not be used on mo st mornings. In the evenings, the read and write tr affic will often be unpredictable. When traffic spikes occur, they will happen very quickly. What should a solutions architect recommend?", + "options": [ + "A. Create a DynamoDB table in on-demand capacity mod e.", + "B. Create a DynamoDB table with a global secondary i ndex.", + "C. Create a DynamoDB table with provisioned capacity and auto scaling.", + "D. Create a DynamoDB table in provisioned capacity m ode, and configure it as a global table." + ], + "correct": "A. Create a DynamoDB table in on-demand capacity mod e.", + "explanation": "Explanation/Reference: **A** - On demand is the answer - https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/HowItWorks.Re adWriteCapacityMode.html#HowItWorks.OnDemand B - no t related with the unpredictable traffic C - provisioned capacity is recommended for known patte rns. Not the case here. D - same as C", + "references": "" + }, + { + "question": "A company recently signed a contract with an AWS Ma naged Service Provider (MSP) Partner for help with an application migration initiative. A solutions archi tect needs ta share an Amazon Machine Image (AMI) f rom an existing AWS account with the MSP Partner's AWS acc ount. The AMI is backed by Amazon Elastic Block Sto re (Amazon EBS) and uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt EBS volume snapshots. What is the MOST secure way for the solutions archi tect to share the AMI with the MSP Partner's AWS account?", + "options": [ + "A. Make the encrypted AMI and snapshots publicly ava ilable. Modify the key policy to allow the MSP Part ner's", + "B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account on ly.", + "C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account on ly.", + "D. Export the AMI from the source account to an Amaz on S3 bucket in the MSP Partner's AWS account," + ], + "correct": "B. Modify the launchPermission property of the AMI. 
Share the AMI with the MSP Partner's AWS account on ly.", + "explanation": "Explanation/Reference: Share the existing KMS key with the MSP external ac count because it has already been used to encrypt t he AMI snapshot. https://docs.aws.amazon.com/kms/latest/developergui de/key-policy-modifying-external- accounts.html", + "references": "" + }, + { + "question": "A solutions architect is designing the cloud archit ecture for a new application being deployed on AWS. The process should run in parallel while adding and rem oving application nodes as needed based on the numb er of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items ar e durably stored. Which design should the solutions architect use?", + "options": [ + "A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine", + "B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine", + "C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine", + "D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine" + ], + "correct": "C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine", + "explanation": "Explanation/Reference: decoupled = SQS Launch template = AMI Launch configuration = EC2", + "references": "" + }, + { + "question": "A company hosts its web applications in the AWS Clo ud. The company configures Elastic Load Balancers t o use certificates that are imported into AWS Certifi cate Manager (ACM). The company's security team mus t be notified 30 days before the expiration of each cert ificate. What should a solutions architect recommend to meet this requirement?", + "options": [ + "A. Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS)", + "B. Create an AWS Config rule that checks for certifi cates that will expire within 30 days.", + "C. Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon" + ], + "correct": "B. Create an AWS Config rule that checks for certifi cates that will expire within 30 days.", + "explanation": "Explanation/Reference: AWS Config has a managed rule named acm-certificate -expiration-check to check for expiring certificate s (configurable number of days) https://aws.amazon.com/premiumsupport/knowledge-cen ter/acm-certificate-expiration/", + "references": "" + }, + { + "question": "A company's dynamic website is hosted using on-prem ises servers in the United States. The company is launching its product in Europe, and it wants to op timize site loading times for new European users. T he site's backend must remain in the United States. The produ ct is being launched in a few days, and an immediat e solution is needed. What should the solutions architect recommend?", + "options": [ + "A. Launch an Amazon EC2 instance in us-east-1 and mi grate the site to it.", + "B. Move the website to Amazon S3. Use Cross-Region R eplication between Regions.", + "C. Use Amazon CloudFront with a custom origin pointi ng to the on-premises servers.", + "D. Use an Amazon Route 53 geoproximity routing polic y pointing to on-premises servers.", + "C. Use Amazon CloudFront with a custom origin point ing to the on-premises servers. Amazon CloudFront i s a" + ], + "correct": "C. 
Use Amazon CloudFront with a custom origin pointi ng to the on-premises servers.", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instance s for the development, test, and production environments. The EC2 instances average 30% CPU uti lization during peak hours and 10% CPU utilization during non-peak hours. The production EC2 instances run 24 hours a day. Th e development and test EC2 instances run for at lea st 8 hours each day. The company plans to implement auto mation to stop the development and test EC2 instanc es when they are not in use. Which EC2 instance purcha sing solution will meet the company's requirements MOST cost- effectively?", + "options": [ + "A. Use Spot Instances for the production EC2 instanc es. Use Reserved Instances for the development and", + "B. Use Reserved Instances for the production EC2 ins tances. Use On-Demand Instances for the development", + "C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and tes t", + "D. Use On-Demand Instances for the production EC2 in stances. Use Spot blocks for the development and te st EC2 instances." + ], + "correct": "B. Use Reserved Instances for the production EC2 ins tances. Use On-Demand Instances for the development", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory requireme nt. new documents cannot be modified or deleted aft er they are stored. What should a solutions architect do to meet this r equirement?", + "options": [ + "A. Store the uploaded documents in an Amazon S3 buck et with S3 Versioning and S3 Object Lock enabled.", + "B. Store the uploaded documents in an Amazon S3 buck et. Configure an S3 Lifecycle policy to archive the", + "C. Store the uploaded documents in an Amazon S3 buck et with S3 Versioning enabled.", + "D. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume." + ], + "correct": "A. Store the uploaded documents in an Amazon S3 buck et with S3 Versioning and S3 Object Lock enabled.", + "explanation": "Explanation/Reference: You can use S3 Object Lock to store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or overwrit ten for a fixed amount of time or indefinitely. You can use S3 Object Lock to meet regulatory requirements that re quire WORM storage, or add an extra layer of protec tion against object changes and deletion. Versioning is required and automatically activated as Object Lock is enabled. https://docs.aws.amazon.com/AmazonS3/lates t/userguide/object-lock- overview.html", + "references": "" + }, + { + "question": "A company has several web servers that need to freq uently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to connect to the database while meeting a security requirement to rotate user crede ntials frequently. Which solution meets these requirements?", + "options": [ + "A. Store the database user credentials in AWS Secret s Manager. Grant the necessary IAM permissions to", + "B. Store the database user credentials in AWS System s Manager OpsCenter. Grant the necessary IAM", + "C. 
Store the database user credentials in a secure A mazon S3 bucket. Grant the necessary IAM permission s", + "D. Store the database user credentials in files encr ypted with AWS Key Management Service (AWS KMS) on" + ], + "correct": "A. Store the database user credentials in AWS Secret s Manager. Grant the necessary IAM permissions to", + "explanation": "Explanation/Reference: Secrets Manager enables you to replace hardcoded cr edentials in your code, including passwords, with a n API call to Secrets Manager to retrieve the secret prog rammatically. This helps ensure the secret can't be compromised by someone examining your code, because the secret no longer exists in the code. Also, you can configure Secrets Manager to automatically rotate t he secret for you according to a specified schedule . This enables you to replace long-term secrets with short -term ones, significantly reducing the risk of comp romise. https://docs.aws.amazon.com/secretsmanager/latest/u serguide/intro.html", + "references": "" + }, + { + "question": "A company hosts an application on AWS Lambda functi ons that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazo n Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade i s complete. The result is that customer data is not r ecorded for some of the event. A solutions architect needs to design a solution th at stores customer data that is created during data base upgrades. Which solution will meet these requirements?", + "options": [ + "A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database.", + "B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code t hat", + "C. Persist the customer data to Lambda local storage . Configure new Lambda functions to scan the local", + "D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue." + ], + "correct": "A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database.", + "explanation": "Explanation/Reference: https://aws.amazon.com/rds/proxy/ RDS Proxy minimiz es application disruption from outages affecting th e availability of your database by automatically conn ecting to a new database instance while preserving application connections. When failovers occur, RDS Proxy routes requests directly to the new database instance. This reduces failover times for Aurora an d RDS databases by up to 66%.", + "references": "" + }, + { + "question": "A survey company has gathered data for several year s from areas in the United States. The company host s the data in an Amazon S3 bucket that is 3 TB in size an d growing. The company has started to share the dat a with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer cost s remain as low as possible. Which solution will meet these requirements?", + "options": [ + "A. Configure the Requester Pays feature on the compa ny's S3 bucket.", + "B. Configure S3 Cross-Region Replication from the co mpany's S3 bucket to one of the marketing firm's S3", + "C. Configure cross-account access for the marketing firm so that the marketing firm has access to the", + "D. Configure the company's S3 bucket to use S3 Intel ligent-Tiering. Sync the S3 bucket to one of the", + "A. Enable the versioning and MFA Delete features on the S3 bucket.", + "B. 
Enable multi-factor authentication (MFA) on the I AM user credentials for each audit team IAM user ac count.", + "C. Add an S3 Lifecycle policy to the audit team's IA M user accounts to deny the", + "D. Use AWS Key Management Service (AWS KMS) to encry pt the S3 bucket and restrict audit team IAM user" + ], + "correct": "A. Enable the versioning and MFA Delete features on the S3 bucket.", + "explanation": "Explanation/Reference: The solution architect should do Option A: Enable t he versioning and MFA Delete features on the S3 buc ket. This will secure the audit documents by providing a n additional layer of protection against accidental deletion. With versioning enabled, any deleted or overwritten objects in the S3 bucket will be preserved as prev ious versions, allowing the company to recover them if n eeded. With MFA Delete enabled, any delete request made to the S3 bucket will require the use of an MFA cod e, which provides an additional layer of security.", + "references": "" + }, + { + "question": "A company is using a SQL database to store movie da ta that is publicly accessible. The database runs o n an Amazon RDS Single-AZ DB instance. A script runs que ries at random intervals each day to record the num ber of new movies that have been added to the database. The script must report a final total during busines s hours. The company's development team notices that the dat abase performance is inadequate for development tas ks when the script is running. A solutions architect m ust recommend a solution to resolve this issue. Which solution will meet this requirement with the LEAST operational overhead?", + "options": [ + "A. Modify the DB instance to be a Multi-AZ deploymen t.", + "B. Create a read replica of the database. Configure the script to query only the read replica.", + "C. Instruct the development team to manually export the entries in the database at the end of each day.", + "D. Use Amazon ElastiCache to cache the common querie s that the script runs against the database." + ], + "correct": "B. Create a read replica of the database. Configure the script to query only the read replica.", + "explanation": "Explanation/Reference: The best solution to meet the requirement with the least operational overhead would be to create a rea d replica of the database and configure the script to query o nly the read replica. Option B. A read replica is a fully managed database that is kept in sync with the prim ary database. Read replicas allow you to scale out read- heavy workloads by distributing read queries across multiple databases. This can help improve the performance of the database and reduce the impact o n the primary database. By configuring the script t o query the read replica, the development team can continue to use the primary database for development tasks, while the script's queries will be directed to the read r eplica. This will reduce the load on the primary da tabase and improve its performance.", + "references": "" + }, + { + "question": "A company has applications that run on Amazon EC2 i nstances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. Accord ing to the company's security regulations, no traff ic from the applications is allowed to travel across the in ternet. Which solution will meet these requirements?", + "options": [ + "A. Configure an S3 gateway endpoint.", + "B. Create an S3 bucket in a private subnet.", + "C. Create an S3 bucket in the same AWS Region as the EC2 instances.", + "D. 
Configure a NAT gateway in the same subnet as the EC2 instances." + ], + "correct": "A. Configure an S3 gateway endpoint.", + "explanation": "Explanation/Reference: ***CORRECT*** The correct solution is Option A (Con figure an S3 gateway endpoint.) A gateway endpoint is a VPC endpoint that you can use to connect to Amazon S3 from within your VPC. Traffic between your VPC a nd Amazon S3 never leaves the Amazon network, so it do esn't traverse the internet. This means you can acc ess Amazon S3 without the need to use a NAT gateway or a VPN connection. ***WRONG*** Option B (creating an S3 bucket in a pr ivate subnet) is not a valid solution because S3 bu ckets do not have subnets. Option C (creating an S3 bucke t in the same AWS Region as the EC2 instances) is n ot a requirement for meeting the given security regulati ons. Option D (configuring a NAT gateway in the sam e subnet as the EC2 instances) is not a valid solutio n because it would allow traffic to leave the VPC a nd travel across the Internet.", + "references": "" + }, + { + "question": "A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application t ier running on Amazon EC2 instances inside a VPC. Which combination of steps should a solutions archi tect take to accomplish this? (Choose two.)", + "options": [ + "A. Configure a VPC gateway endpoint for Amazon S3 wi thin the VPC.", + "B. Create a bucket policy to make the objects in the S3 bucket public.", + "C. Create a bucket policy that limits access to only the application tier running in the VPC.", + "D. Create an IAM user with an S3 access policy and c opy the IAM credentials to the EC2 instance." + ], + "correct": "", + "explanation": "Explanation/Reference: To provide secure access to the S3 bucket from the application tier running on Amazon EC2 instances in side the VPC, the solutions architect should take the fo llowing combination of steps: Option A: Configure a VPC gateway endpoint for Amazon S3 within the VPC. Amaz on S3 VPC Endpoints: https://docs.aws.amazon.com/vpc/latest/userguide/vp c-endpoints-s3.html Option C: Create a bucket policy that limits access to only t he application tier running in the VPC. Amazon S3 B ucket Policies: https://docs.aws.amazon.com/AmazonS3/late st/dev/using- iam-policies.html AWS Identity and Ac cess Management (IAM) Policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/ac cess_policies.html", + "references": "" + }, + { + "question": "A company runs an on-premises application that is p owered by a MySQL database. The company is migratin g the application to AWS to increase the application' s elasticity and availability. The current architecture shows heavy read activity on the database during times of normal operation. E very 4 hours, the company's development team pulls a full export of the production database to populate a dat abase in the staging environment. During this period, use rs experience unacceptable application latency. The development team is unable to use the staging envir onment until the procedure completes. A solutions architect must recommend replacement ar chitecture that alleviates the application latency issue. The replacement architecture also must give the dev elopment team the ability to continue using the sta ging environment without delay. Which solution meets these requirements?", + "options": [ + "A. Use Amazon Aurora MySQL with Multi-AZ Aurora Repl icas for production. Populate the staging database by", + "B. 
Use Amazon Aurora MySQL with Multi-AZ Aurora Repl icas for production. Use database cloning to create", + "C. Use Amazon RDS for MySQL with a Multi-AZ deployme nt and read replicas for production. Use the standb y", + "D. Use Amazon RDS for MySQL with a Multi-AZ deployme nt and read replicas for production. Populate the" + ], + "correct": "B. Use Amazon Aurora MySQL with Multi-AZ Aurora Repl icas for production. Use database cloning to create", + "explanation": "Explanation/Reference: The recommended solution is Option B: Use Amazon Au rora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the stag ing database on-demand. To alleviate the applicatio n latency issue, the recommended solution is to use A mazon Aurora MySQL with Multi-AZ Aurora Replicas fo r production, and use database cloning to create the staging database on-demand. This allows the develop ment team to continue using the staging environment with out delay, while also providing elasticity and avai lability for the production application. Therefore, Options A, C , and D are not recommended Option A: Use Amazon Aurora MySQL with Multi-AZ Aur ora Replicas for production. Populating the staging database by implementing a backup and restore proce ss that uses the mysqldump utility is not the recommended solution because it involves taking a f ull export of the production database, which can ca use unacceptable application latency. Option C: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Using the standby instance for the staging database is not th e recommended solution because it does not give the development team the ability to continue using the staging environment without delay. The standby inst ance is used for failover in case of a production instance failure, and it is not intended for use as a stagin g environment. Option D: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populating the staging database by implementing a b ackup and restore process that uses the mysqqldump utility is not the recommended solution because it involves taking a full export of the production dat abase, which can cause unacceptable application latency.", + "references": "" + }, + { + "question": "A company is designing an application where users u pload small files into Amazon S3. After a user uplo ads a file, the file requires one-time simple processing to transform the data and save the data in JSON for mat for later analysis. Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some day s, users will upload a high number of files. On other days, users will upload a few files or no files. Which solution meets these requirements with the LE AST operational overhead?", + "options": [ + "A. Configure Amazon EMR to read text files from Amaz on S3. Run processing scripts to transform the data .", + "B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS)", + "C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS)", + "D. 
Configure Amazon EventBridge (Amazon CloudWatch E vents) to send an event to Amazon Kinesis Data" + ], + "correct": "", + "explanation": "Explanation/Reference: Option C, Configuring Amazon S3 to send an event no tification to an Amazon Simple Queue Service (SQS) queue and using an AWS Lambda function to read from the queue and process the data, would likely be th e solution with the least operational overhead. AWS L ambda is a serverless computing service that allows you to run code without the need to provision or manage in frastructure. When a new file is uploaded to Amazon S3, it can trigger an event notification which sends a mes sage to an SQS queue. The Lambda function can then be set up to be triggered by messages in the queue, an d it can process the data and store the resulting J SON file in Amazon DynamoDB. Using a serverless solution like AWS Lambda can hel p to reduce operational overhead because it automat ically scales to meet demand and does not require you to p rovision and manage infrastructure. Additionally, u sing an SQS queue as a buffer between the S3 event notifica tion and the Lambda function can help to decouple t he processing of the data from the uploading of the da ta, allowing the processing to happen asynchronousl y and improving the overall efficiency of the system.", + "references": "" + }, + { + "question": "An application allows users at a company's headquar ters to access product data. The product data is st ored in an Amazon RDS MySQL DB instance. The operations tea m has isolated an application performance slowdown and wants to separate read traffic from write traff ic. A solutions architect needs to optimize the app lication's performance quickly. What should the solutions architect recommend?", + "options": [ + "A. Change the existing database to a Multi-AZ deploy ment. Serve the read requests from the primary", + "B. Change the existing database to a Multi-AZ deploy ment. Serve the read requests from the secondary", + "C. Create read replicas for the database. Configure the read replicas with half of the compute and stor age", + "D. Create read replicas for the database. Configure the read replicas with the same compute and storage" + ], + "correct": "D. Create read replicas for the database. Configure the read replicas with the same compute and storage", + "explanation": "Explanation/Reference: The solutions architect should recommend option D: Create read replicas for the database. Configure th e read replicas with the same compute and storage resource s as the source database. Creating read replicas al lows the application to offload read traffic from the so urce database, improving its performance. The read replicas should be configured with the same compute and stor age resources as the source database to ensure that they can handle the read workload effectively.", + "references": "" + }, + { + "question": "An Amazon EC2 administrator created the following p olicy associated with an IAM group containing sever al users: What is the effect of this policy?", + "options": [ + "A. Users can terminate an EC2 instance in any AWS Re gion except us-east-1.", + "B. Users can terminate an EC2 instance with the IP a ddress 10.100.100.1 in the us-east-1 Region.", + "C. Users can terminate an EC2 instance in the us-eas t-1 Region when the user's source IP is 10.100.100. 254.", + "D. Users cannot terminate an EC2 instance in the us- east-1 Region when the user's source IP is" + ], + "correct": "C. 
Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.",
+ "explanation": "Explanation/Reference: In a /24 subnet such as 10.100.100.0/24, the following five IP addresses are reserved: .0: Network address. .1: Reserved by AWS for the VPC router. .2: Reserved by AWS; the IP address of the DNS server is the base of the VPC network range plus two. .3: Reserved by AWS for future use. .255: Network broadcast address.",
+ "references": ""
+ },
+ {
+ "question": "A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control. Which solution will satisfy these requirements?",
+ "options": [
+ "A. Configure Amazon EFS storage and set the Active Directory domain for authentication.",
+ "B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.",
+ "C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.",
+ "D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain"
+ ],
+ "correct": "D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain",
+ "explanation": "Explanation/Reference:",
+ "references": ""
+ },
+ {
+ "question": "An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email. Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the Lambda function more than once, resulting in multiple email messages. What should the solutions architect do to resolve this issue with the LEAST operational overhead?",
+ "options": [
+ "A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.",
+ "B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard",
+ "C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function",
+ "D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is",
+ "A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol.",
+ "B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance.",
+ "C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach",
+ "D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server."
+ ],
+ "correct": "D. Create an Amazon FSx for Lustre file system.
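For the duplicate-email question above, option C (raising the queue's visibility timeout above the Lambda function's timeout) can be applied with a call like the following minimal boto3 sketch; the queue URL is hypothetical and 360 seconds is assumed to exceed the function timeout.

import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/image-events"  # hypothetical

# Keep in-flight messages hidden longer than the function can run, so a message
# is not delivered again while it is still being processed.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"VisibilityTimeout": "360"},  # seconds
)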
Atta ch the file system to the origin server.", + "explanation": "Explanation/Reference: Lustre in the question is only available as FSx htt ps://aws.amazon.com/fsx/lustre/", + "references": "" + }, + { + "question": "A company's containerized application runs on an Am azon EC2 instance. The application needs to downloa d security certificates before it can communicate wit h other business applications. The company wants a highly secure solution to encrypt and decrypt the certific ates in near real time. The solution also needs to store data in highly available storage after the data is encrypte d. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Create AWS Secrets Manager secrets for encrypted certificates. Manually update the certificates as", + "B. Create an AWS Lambda function that uses the Pytho n cryptography library to receive and perform", + "C. Create an AWS Key Management Service (AWS KMS) cu stomer managed key. Allow the EC2 role to use", + "D. Create an AWS Key Management Service (AWS KMS) cu stomer managed key. Allow the EC2 role to use" + ], + "correct": "C. Create an AWS Key Management Service (AWS KMS) cu stomer managed key. Allow the EC2 role to use", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A solutions architect is designing a VPC with publi c and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs) fo r high availability. An internet gateway is used to provid e internet access for the public subnets. The priva te subnets require access to the internet to allow Amazon EC2 instances to download software updates. What should the solutions architect do to enable In ternet access for the private subnets?", + "options": [ + "A. Create three NAT gateways, one for each public su bnet in each AZ. Create a private route table for e ach AZ", + "B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each", + "D. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private" + ], + "correct": "A. Create three NAT gateways, one for each public su bnet in each AZ. Create a private route table for e ach AZ", + "explanation": "Explanation/Reference: To enable Internet access for the private subnets, the solutions architect should create three NAT gat eways, one for each public subnet in each Availability Zon e (AZ). NAT gateways allow private instances to ini tiate outbound traffic to the Internet but do not allow i nbound traffic from the Internet to reach the priva te instances. The solutions architect should then create a privat e route table for each AZ that forwards non-VPC tra ffic to the NAT gateway in its AZ. This will allow instances in the private subnets to access the Internet through the NAT gateways in the public subnets.", + "references": "" + }, + { + "question": "A company wants to migrate an on-premises data cent er to AWS. The data center hosts an SFTP server tha t stores its data on an NFS-based file system. The se rver holds 200 GB of data that needs to be transfer red. The server must be hosted on an Amazon EC2 instance tha t uses an Amazon Elastic File System (Amazon EFS) file system. Which combination of steps should a so lutions architect take to automate this task? (Choo se two.)", + "options": [ + "A. 
Launch the EC2 instance into the same Availabilit y Zone as the EFS file system.", + "B. Install an AWS DataSync agent in the on-premises data center.", + "C. Create a secondary Amazon Elastic Block Store (Am azon EBS) volume on the EC2 instance for the data.", + "D. Manually use an operating system copy command to push the data to the EC2 instance." + ], + "correct": "", + "explanation": "Explanation/Reference: **A**. Launch the EC2 instance into the same Availa bility Zone as the EFS file system. Makes sense to have the instance in the same AZ the EFS storage is. **B **. Install an AWS DataSync agent in the on-premise s data center. The DataSync with move the data to the EFS, which already uses the EC2 instance (see the info provided). No more things are required... C. Create a secondary Amazon Elastic Block Store (Amazon EBS ) volume on the EC2 instance for the data. This secon dary EBS volume isn't required... the data should b e move on to EFS... D. Manually use an operating system co py command to push the data to the EC2 instance. Potentially possible (instead of A), BUT the \"autom ate this task\" premise goes against any \"manually\" action. So, we should keep A. E. Use AWS DataSync to create a suitable location configuration for the on-premi ses SFTP server.", + "references": "" + }, + { + "question": "A company has an AWS Glue extract, transform, and l oad (ETL) job that runs every day at the same time. The job processes XML data that is in an Amazon S3 buck et. New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is proces sing all the data during each run. What should the solutions architect do to prevent A WS Glue from reprocessing old data?", + "options": [ + "A. Edit the job to use job bookmarks.", + "B. Edit the job to delete data after the data is pro cessed.", + "C. Edit the job by setting the NumberOfWorkers field to 1.", + "D. Use a FindMatches machine learning (ML) transform ." + ], + "correct": "A. Edit the job to use job bookmarks.", + "explanation": "Explanation/Reference: This is the purpose of bookmarks: \"AWS Glue tracks data that has already been processed during a previ ous run of an ETL job by persisting state information f rom the job run. This persisted state information i s called a job bookmark. Job bookmarks help AWS Glue maintain state information and prevent the reprocessing of o ld data.\" https://docs.aws.amazon.com/glue/latest/dg/m onitor-continuations.html", + "references": "" + }, + { + "question": "A solutions architect must design a highly availabl e infrastructure for a website. The website is powe red by Windows web servers that run on Amazon EC2 instance s. The solutions architect must implement a solutio n that can mitigate a large-scale DDoS attack that or iginates from thousands of IP addresses. Downtime i s not acceptable for the website. Which actions should th e solutions architect take to protect the website f rom such an attack? (Choose two.)", + "options": [ + "A. Use AWS Shield Advanced to stop the DDoS attack.", + "B. Configure Amazon GuardDuty to automatically block the attackers.", + "C. Configure the website to use Amazon CloudFront fo r both static and dynamic content.", + "D. Use an AWS Lambda function to automatically add a ttacker IP addresses to VPC network ACLs." + ], + "correct": "", + "explanation": "Explanation/Reference: Option A. Use AWS Shield Advanced to stop the DDoS attack. 
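The job-bookmark behaviour described in the AWS Glue answer above can also be enabled when a run is started; a minimal boto3 sketch, with a hypothetical job name.

import boto3

glue = boto3.client("glue")

# Enable job bookmarks so this run only picks up data that previous runs
# have not already processed.
glue.start_job_run(
    JobName="daily-xml-etl",  # hypothetical job name
    Arguments={"--job-bookmark-option": "job-bookmark-enable"},
)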
It provides always-on protection for Amazon EC2 instances, Elastic Load Balancers, and Amazon R oute 53 resources. By using AWS Shield Advanced, th e solutions architect can help protect the website fr om large-scale DDoS attacks. Option C. Configure the website to use Amazon Cloud Front for both static and dynamic content. CloudFro nt is a content delivery network (CDN) that integrates wi th other Amazon Web Services products, such as Amaz on S3 and Amazon EC2, to deliver content to users with low latency and high data transfer speeds. By usin g CloudFront, the solutions architect can distribute the website's content across multiple edge location s, which can help absorb the impact of a DDoS attack and red uce the risk of downtime for the website.", + "references": "" + }, + { + "question": "A company is preparing to deploy a new serverless w orkload. A solutions architect must use the princip le of least privilege to configure permissions that will be used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will in voke the function. Which solution meets these requirements?", + "options": [ + "A. Add an execution role to the function with lambda :InvokeFunction as the action and * as the principa l.", + "B. Add an execution role to the function with lambda :InvokeFunction as the action and Service:", + "C. Add a resource-based policy to the function with lambda:* as the action and Service:", + "D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:" + ], + "correct": "D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/eventbridge/latest/user guide/eb-use-resource- based.html#eb-lambda- permissions", + "references": "" + }, + { + "question": "A company is preparing to store confidential data i n Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be log ged for auditing purposes. Keys must be rotated every year. Which solution meets these requirements and is the MOST operationally efficient?", + "options": [ + "A. Server-side encryption with customer-provided key s (SSE-C)", + "B. Server-side encryption with Amazon S3 managed key s (SSE-S3)", + "C. Server-side encryption with AWS KMS keys (SSE-KMS ) with manual rotation", + "D. Server-side encryption with AWS KMS keys (SSE-KMS ) with automatic rotation" + ], + "correct": "D. Server-side encryption with AWS KMS keys (SSE-KMS ) with automatic rotation", + "explanation": "Explanation/Reference: Automating the key rotation is the most efficient. Just to confirm, the A and B options don't allow au tomate the rotation as explained here: https://aws.amazon.com/kms/faqs/#:~:text=You%20can% 20choose%20to%20have%20A WS%20KMS% 20automatically%20rotate%20KMS,KMS%20custom%20key%2 0store%20fea ture", + "references": "" + }, + { + "question": "A bicycle sharing company is developing a multi-tie r architecture to track the location of its bicycle s during peak operating hours. The company wants to use these dat a points in its existing analytics platform. A solu tions architect must determine the most viable multi-tier option to support this architecture. The data poin ts must be accessible from the REST API. Which action meets th ese requirements for storing and retrieving locatio n data?", + "options": [ + "A. Use Amazon Athena with Amazon S3.", + "B. Use Amazon API Gateway with AWS Lambda.", + "C. 
Use Amazon QuickSight with Amazon Redshift.", + "D. Use Amazon API Gateway with Amazon Kinesis Data A nalytics." + ], + "correct": "B. Use Amazon API Gateway with AWS Lambda.", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company has an automobile sales website that stor es its listings in a database on Amazon RDS. When a n automobile is sold, the listing needs to be removed from the website and the data must be sent to mult iple target systems. Which design should a solutions architect recommend ?", + "options": [ + "A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the", + "B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the", + "C. Subscribe to an RDS event notification and send a n Amazon Simple Queue Service (Amazon SQS) queue", + "D. Subscribe to an RDS event notification and send a n Amazon Simple Notification Service (Amazon SNS)" + ], + "correct": "A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the", + "explanation": "Explanation/Reference: Interesting point that Amazon RDS event notificatio n doesn't support any notification when data inside DB is updated. https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/USER_Events.overview.html So subscription to RDS events doesn't give any value for Fanout = SNS => SQS", + "references": "" + }, + { + "question": "A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded to Amazon S3 to remain unchangeable for a nonspecific amount of ti me until the company decides to modify the objects. On ly specific users in the company's AWS account can have the ability 10 delete the objects. What should a so lutions architect do to meet these requirements?", + "options": [ + "A. Create an S3 Glacier vault. Apply a write-once, r ead-many (WORM) vault lock policy to the objects.", + "B. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 ye ars.", + "C. Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon", + "D. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add" + ], + "correct": "D. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add", + "explanation": "Explanation/Reference: A - No as \"specific users can delete\" B - No as \"nonspecific amount of time\" C - No as \"prevent the data from being change\" D - The answer: \"The Object Lock legal hold operati on enables you to place a legal hold on an object v ersion. Like setting a retention period, a legal hold preve nts an object version from being overwritten or del eted. However, a legal hold doesn't have an associated re tention period and remains in effect until removed. \" https:// docs.aws.amazon.com/AmazonS3/latest/userguide/batch -ops-legal-hold.html", + "references": "" + }, + { + "question": "A social media company allows users to upload image s to its website. The website runs on Amazon EC2 instances. During upload requests, the website resi zes the images to a standard size and stores the re sized images in Amazon S3. Users are experiencing slow up load requests to the website. The company needs to reduce coupling within the app lication and improve website performance. 
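The legal hold described in the S3 Object Lock answer above can be placed programmatically; a minimal boto3 sketch, assuming a bucket created with Object Lock enabled and hypothetical names.

import boto3

s3 = boto3.client("s3")

# Place a legal hold on an object; it cannot be overwritten or deleted until a
# user with s3:PutObjectLegalHold permission removes the hold.
s3.put_object_legal_hold(
    Bucket="compliance-documents-bucket",   # hypothetical; Object Lock enabled at creation
    Key="records/2023/report.pdf",          # hypothetical key
    LegalHold={"Status": "ON"},
)

# Removing the hold later is the same call with LegalHold={"Status": "OFF"}.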
A solutio ns architect must design the most operationally effici ent process for image uploads. Which combination of actions should the solutions a rchitect take to meet these requirements? (Choose t wo.)", + "options": [ + "A. Configure the application to upload images to S3 Glacier.", + "B. Configure the web server to upload the original i mages to Amazon S3.", + "C. Configure the application to upload images direct ly from each user's browser to Amazon S3 through th e use", + "D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the", + "C. Configure the application to upload images direc tly from each user's browser to Amazon S3 through t he use", + "D. Configure S3 Event Notifications to invoke an AW S Lambda function when an image is uploaded. Use th e" + ], + "correct": "", + "explanation": "Explanation/Reference: To meet the requirements of reducing coupling withi n the application and improving website performance , the solutions architect should consider taking the foll owing actions:", + "references": "" + }, + { + "question": "A company recently migrated a message processing sy stem to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. M essages are processed by a consumer application running on Amazon EC2. The consumer application pro cesses the messages and writes results to a MySQL database running on Amazon EC2. The company wants t his application to be highly available with low operational complexity. Which architecture offers the HIGHEST availability?", + "options": [ + "A. Add a second ActiveMQ server to another Availabil ity Zone. Add an additional consumer EC2 instance i n", + "B. Use Amazon MQ with active/standby brokers configu red across two Availability Zones.", + "C. Use Amazon MQ with active/standby brokers configu red across two Availability Zones.", + "D. Use Amazon MQ with active/standby brokers configu red across two Availability Zones." + ], + "correct": "D. Use Amazon MQ with active/standby brokers configu red across two Availability Zones.", + "explanation": "Explanation/Reference: Option D offers the highest availability because it addresses all potential points of failure in the s ystem: Amazon MQ with active/standby brokers configured ac ross two Availability Zones ensures that the messag e queue is available even if one Availability Zone ex periences an outage. An Auto Scaling group for the consumer EC2 instance s across two Availability Zones ensures that the consumer application is able to continue processing messages even if one Availability Zone experiences an outage. Amazon RDS for MySQL with Multi-AZ enabled ensures that the database is available even if one Availabi lity Zone experiences an outage.", + "references": "" + }, + { + "question": "A company hosts a containerized web application on a fleet of on-premises servers that process incomin g requests. The number of requests is growing quickly . The on-premises servers cannot handle the increas ed number of requests. The company wants to move the a pplication to AWS with minimum code changes and minimum development effort. Which solution will mee t these requirements with the LEAST operational overhead?", + "options": [ + "A. Use AWS Fargate on Amazon Elastic Container Servi ce (Amazon ECS) to run the containerized web", + "B. Use two Amazon EC2 instances to host the containe rized web application. Use an Application Load", + "C. Use AWS Lambda with a new code that uses one of t he supported languages. 
Create multiple Lambda", + "D. Use a high performance computing (HPC) solution s uch as AWS ParallelCluster to establish an HPC" + ], + "correct": "A. Use AWS Fargate on Amazon Elastic Container Servi ce (Amazon ECS) to run the containerized web", + "explanation": "Explanation/Reference: Less operational overhead means A: Fargate (no EC2) , move the containers on ECS, autoscaling for growt h and ALB to balance consumption. B - requires configure EC2 C - requires add code (developpers) D - seems like the most complex approach, like re-a rchitecting the app to take advantage of an HPC pla tform.", + "references": "" + }, + { + "question": "A company uses 50 TB of data for reporting. The com pany wants to move this data from on premises to AW S. A custom application in the company's data center r uns a weekly data transformation job. The company p lans to pause the application until the data transfer is complete and needs to begin the transfer process a s soon as possible. The data center does not have any availab le network bandwidth for additional workloads. A so lutions architect must transfer the data and must configure the transformation job to continue to run in the A WS Cloud. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Use AWS DataSync to move the data. Create a custo m transformation job by using AWS Glue.", + "B. Order an AWS Snowcone device to move the data. De ploy the transformation application to the device.", + "C. Order an AWS Snowball Edge Storage Optimized devi ce. Copy the data to the device.", + "D. Order an AWS Snowball Edge Storage Optimized devi ce that includes Amazon EC2 compute. Copy the", + "A. Use AWS DataSync to move the data. Create a cust om transformation job by using AWS Glue. - No BW" + ], + "correct": "C. Order an AWS Snowball Edge Storage Optimized devi ce. Copy the data to the device.", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company has created an image analysis application in which users can upload photos and add photo fra mes to their images. The users upload images and metada ta to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata. The application is becoming more popular, and the n umber of users is increasing. The company expects t he number of concurrent users to vary significantly de pending on the time of day and day of week. The com pany must ensure that the application can scale to meet the needs of the growing user base. Which solution meats these requirements?", + "options": [ + "A. Use AWS Lambda to process the photos. Store the p hotos and metadata in DynamoDB.", + "B. Use Amazon Kinesis Data Firehose to process the p hotos and to store the photos and metadata.", + "C. Use AWS Lambda to process the photos. Store the p hotos in Amazon S3. Retain DynamoDB to store the", + "D. Increase the number of EC2 instances to three. Us e Provisioned IOPS SSD (io2) Amazon Elastic Block" + ], + "correct": "C. Use AWS Lambda to process the photos. Store the p hotos in Amazon S3. Retain DynamoDB to store the", + "explanation": "Explanation/Reference: https://www.quora.com/How-can-I-use-DynamoDB-for-st oring-metadata-for-Amazon-S3- objects", + "references": "" + }, + { + "question": "A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on Amazon S3. 
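For the image-analysis answer above (photos stored in Amazon S3, metadata kept in DynamoDB), a minimal Lambda-style handler sketch follows; the event shape, bucket name, and table name are hypothetical.

import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("photo-metadata")  # hypothetical table

def handler(event, context):
    # Hypothetical request body: {"image_b64": "...", "frame": "classic", "user_id": "u-123"}
    body = json.loads(event["body"])
    photo_id = str(uuid.uuid4())
    key = f"uploads/{photo_id}.jpg"

    # Store the image itself in S3, not in DynamoDB.
    s3.put_object(
        Bucket="photo-uploads-bucket",  # hypothetical bucket
        Key=key,
        Body=base64.b64decode(body["image_b64"]),
    )

    # Keep only small, queryable metadata in DynamoDB.
    table.put_item(
        Item={
            "photo_id": photo_id,
            "user_id": body["user_id"],
            "frame": body["frame"],
            "s3_key": key,
        }
    )
    return {"statusCode": 200, "body": json.dumps({"photo_id": photo_id})}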
T he EC2 instances are hosted in public subnets. The EC2 instances access Amazon S3 over the internet, but t hey do not require any other network access. A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet. Which change to the network architecture should a s olutions architect recommend to meet this requireme nt?", + "options": [ + "A. Create a NAT gateway. Configure the route table f or the public subnets to send traffic to Amazon S3", + "B. Configure the security group for the EC2 instance s to restrict outbound traffic so that only traffic to the S3", + "C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoin t to", + "D. Remove the internet gateway from the VPC. Set up an AWS Direct Connect connection, and route traffic to" + ], + "correct": "C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoin t to", + "explanation": "Explanation/Reference: The correct answer is C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S 3, and link the endpoint to the route table for the pr ivate subnets. To meet the new requirement of transferring files o ver a private route, the EC2 instances should be mo ved to private subnets, which do not have direct access to the internet. This ensures that the traffic for fi le transfers does not go over the internet. To enable the EC2 instances to access Amazon S3, a VPC endpoint for Amazon S3 can be created. VPC endpoints allow resources within a VPC to communica te with resources in other services without the tra ffic being sent over the internet. By linking the VPC en dpoint to the route table for the private subnets, the EC2 instances can access Amazon S3 over a private conne ction within the VPC. Option A (Create a NAT gateway) would not work, as a NAT gateway is used to allow resources in private subnets to access the internet, while the requireme nt is to prevent traffic from going over the intern et. Option B (Configure the security group for the EC2 instances to restrict outbound traffic) would not a chieve the goal of routing traffic over a private connection, as the traffic would still be sent over the interne t. Option D (Remove the internet gateway from the VPC and set up an AWS Direct Connect connection) would not be necessary, as the requirement can be met by simply creating a VPC endpoint for Amazon S3 and routing traffic through it.", + "references": "" + }, + { + "question": "A company uses a popular content management system (CMS) for its corporate website. However, the required patching and maintenance are burdensome. T he company is redesigning its website and wants ane w solution. The website will be updated four times a year and does not need to have any dynamic content available. The solution must provide high scalabili ty and enhanced security. Which combination of changes will meet these requir ements with the LEAST operational overhead? (Choose two.)", + "options": [ + "A. Configure Amazon CloudFront in front of the websi te to use HTTPS functionality.", + "B. Deploy an AWS WAF web ACL in front of the website to provide HTTPS functionality.", + "C. Create and deploy an AWS Lambda function to manag e and serve the website content.", + "D. Create the new website and an Amazon S3 bucket. 
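The gateway endpoint recommended in the answer above can be created as in this minimal boto3 sketch; the VPC ID and route table ID are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: traffic from the private subnets reaches S3 over the
# AWS network instead of the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",            # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the private subnets
)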
D eploy the website on the S3 bucket with static webs ite" + ], + "correct": "", + "explanation": "Explanation/Reference: A -> We can configure CloudFront to require HTTPS f rom clients (enhanced security) https:// docs.aws.amazon.com/AmazonCloudFront/latest/Develop erGuide/using-https-viewers-to- cloudfront.html D -> storing static website on S3 provides scalabil ity and less operational overhead, then configurati on of Application LB and EC2 instances (hence E is out) B is out since AWS WAF Web ACL does not to provide HTTPS functionality, but to protect HTTPS only.", + "references": "" + }, + { + "question": "A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires th e company to store all application logs in Amazon Ope nSearch Service (Amazon Elasticsearch Service) in n ear- real time. Which solution will meet this requirement with the LEAST operational overhead?", + "options": [ + "A. Configure a CloudWatch Logs subscription to strea m the logs to Amazon OpenSearch Service (Amazon", + "B. Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).", + "C. Create an Amazon Kinesis Data Firehose delivery s tream. Configure the log group as the delivery stre ams", + "D. Install and configure Amazon Kinesis Agent on eac h application server to deliver the logs to Amazon" + ], + "correct": "A. Configure a CloudWatch Logs subscription to strea m the logs to Amazon OpenSearch Service (Amazon", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/CWL_OpenSearch_Stream. html > You can configure a CloudWatch Logs log group to stream dat a it receives to your Amazon OpenSearch Service clu ster in NEAR REAL-TIME through a CloudWatch Logs subscri ption", + "references": "" + }, + { + "question": "A company is building a web-based application runni ng on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application wi ll experience periods of high demand. A solutions a rchitect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned about the overall cost of the solution. Which stora ge solution meets these requirements MOST cost-effecti vely?", + "options": [ + "A. Amazon Elastic Block Store (Amazon EBS)", + "B. Amazon Elastic File System (Amazon EFS)", + "C. Amazon OpenSearch Service (Amazon Elasticsearch S ervice)", + "D. Amazon S3" + ], + "correct": "D. Amazon S3", + "explanation": "Explanation/Reference: Amazon S3 is an object storage service that is desi gned to store and retrieve large amounts of data fr om anywhere on the web. It is highly scalable, highly available, and cost-effective, making it an ideal c hoice for storing a large repository of text documents that w ill experience periods of high demand. S3 is a stan dalone storage service that can be accessed from anywhere, and it is designed to handle large numbers of obje cts, making it well-suited for storing the 900 TB reposi tory of text documents described in the scenario. 
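For the static-website option in the CMS answer above, the bucket's website configuration can be applied as in this minimal boto3 sketch; the bucket name is hypothetical, and the bucket still needs a bucket policy or CloudFront origin access configuration that permits reads.

import boto3

s3 = boto3.client("s3")

# Serve the redesigned site as a static website directly from S3.
s3.put_bucket_website(
    Bucket="corporate-website-bucket",  # hypothetical
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)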
It is also designed to handle high levels of demand, making it well suited for the periods of high demand that the application will experience.",
+ "references": ""
+ },
+ {
+ "question": "A global company is using Amazon API Gateway to design REST APIs for its loyalty club users in the us-east-1 Region and the ap-southeast-2 Region. A solutions architect must design a solution to protect these API Gateway managed REST APIs across multiple accounts from SQL injection and cross-site scripting attacks. Which solution will meet these requirements with the LEAST amount of administrative effort?",
+ "options": [
+ "A. Set up AWS WAF in both Regions. Associate Regional web ACLs with an API stage.",
+ "B. Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.",
+ "C. Set up AWS Shield in both Regions. Associate Regional web ACLs with an API stage.",
+ "D. Set up AWS Shield in one of the Regions. Associate Regional web ACLs with an API stage."
+ ],
+ "correct": "B. Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.",
+ "explanation": "Explanation/Reference: AWS WAF provides additional protection against web attacks using criteria that you specify, such as the presence of SQL code that is likely to be malicious (SQL injection) or of a script that is likely to be malicious (cross-site scripting). AWS Firewall Manager simplifies administration and maintenance tasks across multiple accounts and resources for a variety of protections, which is why it requires the least administrative effort here. https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html",
+ "references": ""
+ },
+ {
+ "question": "A company has implemented a self-managed DNS solution on three Amazon EC2 instances behind a Network Load Balancer (NLB) in the us-west-2 Region. Most of the company's users are located in the United States and Europe. The company wants to improve the performance and availability of the solution. The company launches and configures three EC2 instances in the eu-west-1 Region and adds the EC2 instances as targets for a new NLB. Which solution can the company use to route traffic to all the EC2 instances?",
+ "options": [
+ "A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an",
+ "B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-",
+ "C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing",
+ "D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency"
+ ],
+ "correct": "B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-",
+ "explanation": "Explanation/Reference: https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html",
+ "references": ""
+ },
+ {
+ "question": "A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in a Multi-AZ deployment. Daily database snapshots are taken from this instance. What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?",
+ "options": [
+ "A. Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted",
+ "B.
Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it.", + "C. Copy the snapshots and enable encryption using AW S Key Management Service (AWS KMS) Restore", + "D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Ke y" + ], + "correct": "A. Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted", + "explanation": "Explanation/Reference: \"You can enable encryption for an Amazon RDS DB ins tance when you create it, but not after it's create d. However, you can add encryption to an unencrypted D B instance by creating a snapshot of your DB instan ce, and then creating an encrypted copy of that snapsho t. You can then restore a DB instance from the encr ypted snapshot to get an encrypted copy of your original DB instance.\" https://docs.aws.amazon.com/prescript ive- guidance/latest/patterns/encrypt-an-existing-amazon -rds-for-postgresql-db- instance.html", + "references": "" + }, + { + "question": "A company wants to build a scalable key management infrastructure to support developers who need to en crypt data in their applications. What should a solutions architect do to reduce the operational burden?", + "options": [ + "A. Use multi-factor authentication (MFA) to protect the encryption keys.", + "B. Use AWS Key Management Service (AWS KMS) to prote ct the encryption keys.", + "C. Use AWS Certificate Manager (ACM) to create, stor e, and assign the encryption keys.", + "D. Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys." + ], + "correct": "B. Use AWS Key Management Service (AWS KMS) to prote ct the encryption keys.", + "explanation": "Explanation/Reference: If you are a developer who needs to digitally sign or verify data using asymmetric keys, you should us e the service to create and manage the private keys you'l l need. If you're looking for a scalable key manage ment infrastructure to support your developers and their growing number of applications, you should use it to reduce your licensing costs and operational burden... https://aws.amazon.com/kms/faqs/#:~:text=If%20you%2 0are%20a%20developer%20who %20needs%20to% 20digitally,a%20broad%20set%20of%20industry%20and%2 0regional% 20compliance%20regimes.", + "references": "" + }, + { + "question": "A company has a dynamic web application hosted on t wo Amazon EC2 instances. The company has its own SSL certificate, which is on each instance to perfo rm SSL termination. There has been an increase in t raffic recently, and the operations team determined that S SL encryption and decryption is causing the compute capacity of the web servers to reach their maximum limit. What should a solutions architect do to increase th e application's performance?", + "options": [ + "A. Create a new SSL certificate using AWS Certificat e Manager (ACM). Install the ACM certificate on eac h", + "B. Create an Amazon S3 bucket Migrate the SSL certif icate to the S3 bucket. Configure the EC2 instances to", + "C. Create another EC2 instance as a proxy server. Mi grate the SSL certificate to the new instance and", + "D. Import the SSL certificate into AWS Certificate M anager (ACM). Create an Application Load Balancer w ith" + ], + "correct": "D. Import the SSL certificate into AWS Certificate M anager (ACM). Create an Application Load Balancer w ith", + "explanation": "Explanation/Reference: This issue is solved by SSL offloading, i.e. 
by mov ing the SSL termination task to the ALB. https:// aws.amazon.com/blogs/aws/elastic-load-balancer-supp ort-for-ssl-termination/", + "references": "" + }, + { + "question": "A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it . The job is stateless in nature, can be started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to comp lete. The company has asked a solutions architect t o design a scalable and cost-effective solution that meets t he requirements of the job. What should the solutions architect recommend?", + "options": [ + "A. Implement EC2 Spot Instances. B. Purchase EC2 Reserved Instances.", + "C. Implement EC2 On-Demand Instances.", + "D. Implement the processing on AWS Lambda." + ], + "correct": "A. Implement EC2 Spot Instances. B. Purchase EC2 Reserved Instances.", + "explanation": "Explanation/Reference: Cant be implemented on Lambda because the timeout f or Lambda is 15mins and the Job takes 60minutes to complete", + "references": "" + }, + { + "question": "A company runs its two-tier ecommerce website on AW S. The web tier consists of a load balancer that se nds traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances a nd the RDS DB instance should not be exposed to the pu blic internet. The EC2 instances require internet a ccess to complete payment processing of orders through a third-party web service. The application must be hi ghly available. Which combination of configuration optio ns will meet these requirements? (Choose two.)", + "options": [ + "A. Use an Auto Scaling group to launch the EC2 insta nces in private subnets. Deploy an RDS Multi-AZ DB", + "B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an", + "C. Use an Auto Scaling group to launch the EC2 insta nces in public subnets across two Availability Zone s.", + "D. Configure a VPC with one public subnet, one priva te subnet, and two NAT gateways across two Availabi lity" + ], + "correct": "", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A solutions architect needs to implement a solution to reduce a company's storage costs. All the compa ny's data is in the Amazon S3 Standard storage class. Th e company must keep all data for at least 25 years. Data from the most recent 2 years must be highly availab le and immediately retrievable. Which solution will meet these requirements?", + "options": [ + "A. Set up an S3 Lifecycle policy to transition objec ts to S3 Glacier Deep Archive immediately.", + "B. Set up an S3 Lifecycle policy to transition objec ts to S3 Glacier Deep Archive after 2 years.", + "C. Use S3 Intelligent-Tiering. Activate the archivin g option to ensure that data is archived in S3 Glac ier Deep", + "D. Set up an S3 Lifecycle policy to transition objec ts to S3 One Zone-Infrequent Access (S3 One Zone-IA )" + ], + "correct": "B. Set up an S3 Lifecycle policy to transition objec ts to S3 Glacier Deep Archive after 2 years.", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonS3/latest/usergui de/intelligent-tiering- overview.html#:~:text=S3% 20Intelligent%2DTiering%20provides%20you,minimum%20 of %2090%20consecutive%20days. 
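
To make the SSL-offloading answer above (importing the company's certificate into ACM and terminating HTTPS on the Application Load Balancer) a little more concrete, here is a minimal boto3 sketch. It is illustrative only; the certificate files, load balancer ARN, and target group ARN are placeholders, not values from the question.

# Sketch: offload TLS from the EC2 instances to the ALB by importing the
# existing certificate into ACM and terminating HTTPS on an ALB listener.
# File names and ARNs below are placeholders.
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Import the company's existing certificate and private key into ACM.
with open("example.crt", "rb") as cert, open("example.key", "rb") as key:
    cert_arn = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
    )["CertificateArn"]

# Terminate TLS on the load balancer instead of on the web servers.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/def456",
    }],
)

With this in place the instances serve plain HTTP behind the ALB, which removes the encryption/decryption load from their CPUs.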
Option B / S3 Glacier Deep Archive seems correct to reduce a comp any's storage costs.", + "references": "" + }, + { + "question": "A media company is evaluating the possibility of mo ving its systems to the AWS Cloud. The company need s at least 10 TB of storage with the maximum possible I/ O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival med ia that is not in use anymore. Which set of services should a solutions architect recommend to meet these requirements?", + "options": [ + "A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for", + "B. Amazon EBS for maximum performance, Amazon EFS fo r durable data storage, and Amazon S3 Glacier", + "C. Amazon EC2 instance store for maximum performance , Amazon EFS for durable data storage, and", + "D. Amazon EC2 instance store for maximum performance , Amazon S3 for durable data storage, and Amazon" + ], + "correct": "D. Amazon EC2 instance store for maximum performance , Amazon S3 for durable data storage, and Amazon", + "explanation": "Explanation/Reference: Max instance store possible at this time is 30TB fo r NVMe which has the higher I/O compared to EBS. is4gen.8xlarge 4 x 7,500 GB (30 TB) NVMe SSD https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /InstanceStorage.html#instance-store-volumes", + "references": "" + }, + { + "question": "A company wants to run applications in containers i n the AWS Cloud. These applications are stateless a nd can tolerate disruptions within the underlying infrastr ucture. The company needs a solution that minimizes cost and operational overhead. What should a solutions archi tect do to meet these requirements?", + "options": [ + "A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.", + "B. Use Spot Instances in an Amazon Elastic Kubernete s Service (Amazon EKS) managed node group.", + "C. Use On-Demand Instances in an Amazon EC2 Auto Sca ling group to run the application containers.", + "D. Use On-Demand Instances in an Amazon Elastic Kube rnetes Service (Amazon EKS) managed node group." + ], + "correct": "B. Use Spot Instances in an Amazon Elastic Kubernete s Service (Amazon EKS) managed node group.", + "explanation": "Explanation/Reference: The correct answer is B. To minimize cost and opera tional overhead, the solutions architect should use Spot Instances in an Amazon Elastic Kubernetes Service ( Amazon EKS) managed node group to run the applicati on containers. Amazon EKS is a fully managed service t hat makes it easy to run Kubernetes on AWS. By usin g a managed node group, the company can take advantage of the operational benefits of Amazon EKS while minimizing the operational overhead of managing the Kubernetes infrastructure. Spot Instances provide a cost- effective way to run stateless, fault-tolerant appl ications in containers, making them a good fit for the company's requirements.", + "references": "" + }, + { + "question": "A company is running a multi-tier web application o n premises. The web application is containerized an d runs on a number of Linux hosts connected to a PostgreSQ L database that contains user records. The operatio nal overhead of maintaining the infrastructure and capa city planning is limiting the company's growth. A s olutions architect must improve the application's infrastruc ture. Which combination of actions should the solutions a rchitect take to accomplish this? 
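
As a rough illustration of the lifecycle answer above (keep data in S3 Standard for 2 years, then transition to S3 Glacier Deep Archive), one possible boto3 configuration is sketched below. The bucket name is a placeholder and the rule intentionally applies to every object.

# Sketch: lifecycle rule that keeps objects immediately retrievable for
# roughly 2 years (730 days) and then moves them to Glacier Deep Archive.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-2-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},        # apply to all objects in the bucket
            "Transitions": [{
                "Days": 730,                 # ~2 years in S3 Standard
                "StorageClass": "DEEP_ARCHIVE",
            }],
        }],
    },
)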
(Choose two.)", + "options": [ + "A. Migrate the PostgreSQL database to Amazon Aurora.", + "B. Migrate the web application to be hosted on Amazo n EC2 instances.", + "C. Set up an Amazon CloudFront distribution for the web application content.", + "D. Set up Amazon ElastiCache between the web applica tion and the PostgreSQL database." + ], + "correct": "", + "explanation": "Explanation/Reference: The correct answers are A and E. To improve the app lication's infrastructure, the solutions architect should migrate the PostgreSQL database to Amazon Aurora an d migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amaz on ECS). Amazon Aurora is a fully managed, scalable, and hig hly available relational database service that is c ompatible with PostgreSQL. Migrating the database to Amazon A urora would reduce the operational overhead of maintaining the database infrastructure and allow t he company to focus on building and scaling the app lication. AWS Fargate is a fully managed container orchestrat ion service that enables users to run containers wi thout the need to manage the underlying EC2 instances. By using AWS Fargate with Amazon Elastic Container Service (Amazon ECS), the solutions architect can i mprove the scalability and efficiency of the web ap plication and reduce the operational overhead of maintaining the underlying infrastructure.", + "references": "" + }, + { + "question": "An application runs on Amazon EC2 instances across multiple Availability Zonas. The instances run in a n Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%. What should a solutions architect do to maintain th e desired performance across all instances in the g roup?", + "options": [ + "A. Use a simple scaling policy to dynamically scale the Auto Scaling group.", + "B. Use a target tracking policy to dynamically scale the Auto Scaling group.", + "C. Use an AWS Lambda function ta update the desired Auto Scaling group capacity.", + "D. Use scheduled scaling actions to scale up and sca le down the Auto Scaling group." + ], + "correct": "B. Use a target tracking policy to dynamically scale the Auto Scaling group.", + "explanation": "Explanation/Reference: The correct answer is B. To maintain the desired pe rformance across all instances in the Amazon EC2 Au to Scaling group, the solutions architect should use a target tracking policy to dynamically scale the Au to Scaling group. A target tracking policy allows the Auto Scaling gr oup to automatically adjust the number of EC2 insta nces in the group based on a target value for a metric. In this case, the target value for the CPU utilization metric could be set to 40% to maintain the desired performance o f the application. The Auto Scaling group would the n automatically scale the number of instances up or d own as needed to maintain the target value for the metric. https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-scaling-simple-step.html", + "references": "" + }, + { + "question": "A company is developing a file-sharing application that will use an Amazon S3 bucket for storage. The company wants to serve all the files through an Amazon Clou dFront distribution. The company does not want the files to be accessible through direct navigation to the S3 U RL. What should a solutions architect do to meet these requirements?", + "options": [ + "A. 
Write individual policies for each S3 bucket to g rant read permission for only CloudFront access.", + "B. Create an IAM user. Grant the user read permissio n to objects in the S3 bucket. Assign the user to", + "C. Write an S3 bucket policy that assigns the CloudF ront distribution ID as the Principal and assigns t he target", + "D. Create an origin access identity (OAI). Assign th e OAI to the CloudFront distribution." + ], + "correct": "D. Create an origin access identity (OAI). Assign th e OAI to the CloudFront distribution.", + "explanation": "Explanation/Reference: The correct answer is D. To meet the requirements, the solutions architect should create an origin acc ess identity (OAI) and assign it to the CloudFront dist ribution. The S3 bucket permissions should be confi gured so that only the OAI has read permission. An OAI is a special CloudFront user that is associa ted with a CloudFront distribution and is used to g ive CloudFront access to the files in an S3 bucket. By using an OAI, the company can serve the files throu gh the CloudFront distribution while preventing direct acc ess to the S3 bucket. https://docs.aws.amazon.com/ AmazonCloudFront/latest/DeveloperGuide/private-cont ent-restricting- access-to-s3.html", + "references": "" + }, + { + "question": "A company's website provides users with downloadabl e historical performance reports. The website needs a solution that will scale to meet the company's webs ite demands globally. The solution should be cost-e ffective, limit the provisioning of infrastructure resources, and provide the fastest possible response time. Which combination should a solutions architect reco mmend to meet these requirements?", + "options": [ + "A. Amazon CloudFront and Amazon S3", + "B. AWS Lambda and Amazon DynamoDB", + "C. Application Load Balancer with Amazon EC2 Auto Sc aling", + "D. Amazon Route 53 with internal Application Load Ba lancers" + ], + "correct": "A. Amazon CloudFront and Amazon S3", + "explanation": "Explanation/Reference: The correct answer is Option A. To meet the require ments, the solutions architect should recommend usi ng Amazon CloudFront and Amazon S3. By combining Amazo n CloudFront and Amazon S3, the solutions architect can provide a scalable and cost- effectiv e solution that limits the provisioning of infrastr ucture resources and provides the fastest possible respons e time. https://aws.amazon.com/cloudfront/", + "references": "" + }, + { + "question": "A company runs an Oracle database on premises. As p art of the company's migration to AWS, the company wants to upgrade the database to the most recent av ailable version. The company also wants to set up d isaster recovery (DR) for the database. The company needs t o minimize the operational overhead for normal operations and DR setup. The company also needs to maintain access to the database's underlying operat ing system. Which solution will meet these requirements?", + "options": [ + "A. Migrate the Oracle database to an Amazon EC2 inst ance. Set up database replication to a different AW S", + "C. Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the database in", + "D. Migrate the Oracle database to Amazon RDS for Ora cle. Create a standby database in another Availabil ity" + ], + "correct": "C. Migrate the Oracle database to Amazon RDS Custom for Oracle. 
Create a read replica for the database in", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/rds-custom.html https://docs.aws.amazon.com/ AmazonRDS/latest/UserGuide/working-with-custom- ora cle.html", + "references": "" + }, + { + "question": "A company wants to move its application to a server less solution. The serverless solution needs to ana lyze existing and new data by using SL. The company stor es the data in an Amazon S3 bucket. The data requir es encryption and must be replicated to a different AW S Region. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Create a new S3 bucket. Load the data into the ne w S3 bucket. Use S3 Cross-Region Replication (CRR) to", + "B. Create a new S3 bucket. Load the data into the ne w S3 bucket. Use S3 Cross-Region Replication (CRR) to", + "C. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encryp ted", + "D. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encryp ted" + ], + "correct": "C. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encryp ted", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company runs workloads on AWS. The company needs to connect to a service from an external provider. The service is hosted in the provider's VPC. Accord ing to the company's security team, the connectivit y must be private and must be restricted to the target ser vice. The connection must be initiated only from th e company's VPC. Which solution will mast these requirements?", + "options": [ + "A. Create a VPC peering connection between the compa ny's VPC and the provider's VPC.", + "B. Ask the provider to create a virtual private gate way in its VPC. Use AWS PrivateLink to connect to t he target", + "C. Create a NAT gateway in a public subnet of the co mpany's VPUpdate the route table to connect to the", + "D. Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to the" + ], + "correct": "", + "explanation": "Explanation/Reference: **AWS PrivateLink provides private connectivity bet ween VPCs, AWS services, and your on-premises networks, without exposing your traffic to the publ ic internet**. AWS PrivateLink makes it easy to con nect services across different accounts and VPCs to sign ificantly simplify your network architecture. Inter face **VPC endpoints**, powered by AWS PrivateLink, connect yo u to services hosted by AWS Partners and supported solutions available in AWS Marketplace. https://aws .amazon.com/privatelink/", + "references": "" + }, + { + "question": "A company is migrating its on-premises PostgreSQL d atabase to Amazon Aurora PostgreSQL. The on- premises database must remain online and accessible during the migration. The Aurora database must rem ain synchronized with the on-premises database. Which c ombination of actions must a solutions architect ta ke to meet these requirements? (Choose two.)", + "options": [ + "A. Create an ongoing replication task.", + "B. Create a database backup of the on-premises datab ase.", + "C. Create an AWS Database Migration Service (AWS DMS ) replication server.", + "D. Convert the database schema by using the AWS Sche ma Conversion Tool (AWS SCT)." 
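
For the AWS PrivateLink answer above (connecting privately to a provider-hosted service from the company's VPC), a minimal sketch of creating the interface VPC endpoint might look like the following. The endpoint service name, VPC, subnet, and security group IDs are placeholders that the provider and network team would supply.

# Sketch: consume a provider's endpoint service over AWS PrivateLink by
# creating an interface VPC endpoint in the consumer VPC.
import boto3

ec2 = boto3.client("ec2")
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=False,   # private DNS only if the provider has a verified private DNS name
)

Traffic to the target service then stays on the AWS network and is initiated only from the company's VPC, which matches the security team's requirement.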
+ ], + "correct": "", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/prescriptive-guidance/l atest/patterns/migrate-an-on- premises-postgresql- database-to-aurora-postgresql.html", + "references": "" + }, + { + "question": "A company uses AWS Organizations to create dedicate d AWS accounts for each business unit to manage each business unit's account independently upon req uest. The root email recipient missed a notificatio n that was sent to the root user email address of one acco unt. The company wants to ensure that all future notifications are not missed. Future notifications must be limited to account administrators. Which solution will meet these requirements?", + "options": [ + "A. Configure the company's email server to forward n otification email messages that are sent to the AWS", + "B. Configure all AWS account root user email address es as distribution lists that go to a few administr ators", + "C. Configure all AWS account root user email message s to be sent to one administrator who is responsibl e for", + "D. Configure all existing AWS accounts and all newly created accounts to use the same root user email", + "A. Migrate the queue to a redundant pair (active/sta ndby) of RabbitMQ instances on Amazon MQ. Create a", + "B. Migrate the queue to a redundant pair (active/sta ndby) of RabbitMQ instances on Amazon MQ. Create a", + "C. Create a Multi-AZ Auto Scaling group for EC2 inst ances that host the RabbitMQ queue.", + "D. Create a Multi-AZ Auto Scaling group for EC2 inst ances that host the RabbitMQ queue." + ], + "correct": "B. Migrate the queue to a redundant pair (active/sta ndby) of RabbitMQ instances on Amazon MQ. Create a", + "explanation": "Explanation/Reference: Migrating to Amazon MQ reduces the overhead on the queue management. C and D are dismissed. Deciding between A and B means deciding to go for an AutoSca ling group for EC2 or an RDS for Postgress (both mu lti- AZ). The RDS option has less operational impact, as provide as a service the tools and software requir ed. Consider for instance, the effort to add an additio nal node like a read replica, to the DB. https://docs.aws.amazon.com/amazon-mq/latest/develo per-guide/active-standby-broker- deployment.html https://aws.amazon.com/rds/postgresql/", + "references": "" + }, + { + "question": "A reporting team receives files each day in an Amaz on S3 bucket. The reporting team manually reviews a nd copies the files from this initial S3 bucket to an analysis S3 bucket each day at the same time to use with Amazon QuickSight. Additional teams are starting to send more files in larger sizes to the initial S3 bucket. The reporting team wants to move the files automati cally analysis S3 bucket as the files enter the ini tial S3 bucket. The reporting team also wants to use AWS La mbda functions to run pattern-matching code on the copied data. In addition, the reporting team wants to send the data files to a pipeline in Amazon Sage Maker Pipelines. What should a solutions architect do to meet these requirements with the LEAST operational overhead?", + "options": [ + "A. Create a Lambda function to copy the files to the analysis S3 bucket. Create an S3 event notificatio n for the", + "B. Create a Lambda function to copy the files to the analysis S3 bucket. Configure the analysis S3 buck et to", + "C. Configure S3 replication between the S3 buckets. Create an S3 event notification for the analysis S3", + "D. Configure S3 replication between the S3 buckets. 
Configure the analysis S3 bucket to send event" + ], + "correct": "", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/u serguide/NotificationHowTo.html", + "references": "" + }, + { + "question": "A solutions architect needs to help a company optim ize the cost of running an application on AWS. The application will use Amazon EC2 instances, AWS Farg ate, and AWS Lambda for compute within the architecture. The EC2 instances will run the data ingestion layer of the application. EC2 usage will be sporadic and unpredictable. Workloads that run on EC2 instances can be interrupted at any time. The application fro nt end will run on Fargate, and Lambda will serve the API layer. The front-end utilization and API layer util ization will be predictable over the course of the next year. Which combination of purchasing options will provid e the MOST cost-effective solution for hosting this application? (Choose two.)", + "options": [ + "A. Use Spot Instances for the data ingestion layer", + "B. Use On-Demand Instances for the data ingestion la yer", + "C. Purchase a 1-year Compute Savings Plan for the fr ont end and API layer.", + "D. Purchase 1-year All Upfront Reserved instances fo r the data ingestion layer." + ], + "correct": "", + "explanation": "Explanation/Reference: To optimize the cost of running this application on AWS, you should consider the following options: A. Use Spot Instances for the data ingestion layer C. Purchase a 1-year Compute Savings Plan for the front-end and API layer Therefore, the most cost-effective solution f or hosting this application would be to use Spot In stances for the data ingestion layer and to purchase either a 1 -year Compute Savings Plan or a 1-year EC2 instance Savings Plan for the front-end and API layer.", + "references": "" + }, + { + "question": "A company runs a web-based portal that provides use rs with global breaking news, local alerts, and wea ther updates. The portal delivers each user a personaliz ed view by using mixture of static and dynamic cont ent. Content is served over HTTPS through an API server running on an Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users acr oss the world as quickly as possible. How should a solutions architect design the applica tion to ensure the LEAST amount of latency for all users?", + "options": [ + "A. Deploy the application stack in a single AWS Regi on. Use Amazon CloudFront to serve all static and", + "B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to se rve", + "C. Deploy the application stack in a single AWS Regi on. Use Amazon CloudFront to serve the static conte nt.", + "D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy t o" + ], + "correct": "A. Deploy the application stack in a single AWS Regi on. Use Amazon CloudFront to serve all static and", + "explanation": "Explanation Explanation/Reference: https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/HowCloudFron tWorks.html#CloudFrontRegionaledgecaches", + "references": "" + }, + { + "question": "A gaming company is designing a highly available ar chitecture. The application runs on a modified Linu x kernel and supports only UDP-based traffic. The company ne eds the front- end tier to provide the best possibl e user experience. 
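
To illustrate the S3 event notification approach discussed above (automatically reacting to new report files instead of copying them manually), here is a hedged boto3 sketch. The bucket name and Lambda function ARN are placeholders, and the function would also need a lambda:InvokeFunction resource-based permission for S3, which is not shown.

# Sketch: invoke a Lambda function whenever a new object lands in the
# initial S3 bucket, so files can be copied/processed without a manual step.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="initial-reports-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "Id": "copy-to-analysis-bucket",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:copy-reports",
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)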
That tier must have low latency, route traffic to the nearest edge location, and provide s tatic IP addresses for entry into the application endpoints. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Configure Amazon Route 53 to forward requests to an Application Load Balancer. Use AWS Lambda for", + "B. Configure Amazon CloudFront to forward requests t o a Network Load Balancer. Use AWS Lambda for the", + "C. Configure AWS Global Accelerator to forward reque sts to a Network Load Balancer. Use Amazon EC2", + "D. Configure Amazon API Gateway to forward requests to an Application Load Balancer." + ], + "correct": "C. Configure AWS Global Accelerator to forward reque sts to a Network Load Balancer. Use Amazon EC2", + "explanation": "Explanation/Reference: AWS Global Accelerator and Amazon CloudFront are se parate services that use the AWS global network and its edge locations around the world. CloudFront imp roves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range o f applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non- HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voi ce over IP, as well as for HTTP use cases that specifically require static IP addresses or determi nistic, fast regional failover. Both services integ rate with AWS Shield for DDoS protection.", + "references": "" + }, + { + "question": "A company wants to migrate its existing on-premises monolithic application to AWS. The company wants t o keep as much of the front-end code and the backend code as possible. However, the company wants to bre ak the application into smaller applications. A differ ent team will manage each application. The company needs a highly scalable solution that minimizes operational overhead. Which solution will meet these requirements?", + "options": [ + "A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.", + "B. Host the application with AWS Amplify. Connect th e application to an Amazon API Gateway API that is", + "C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in", + "D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load" + ], + "correct": "D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load", + "explanation": "Explanation/Reference: The correct answer is Option D. To meet the require ments, the company should host the application on Amazon Elastic Container Service (Amazon ECS) and s et up an Application Load Balancer with Amazon ECS as the target. Option A is not a valid solution bec ause AWS Lambda is not suitable for hosting long-ru nning applications. Option B is not a valid solution beca use AWS Amplify is a framework for building, deploy ing, and managing web applications, not a hosting solution. Option C is not a valid solution because Amazon EC2 instances are not fully managed container orchestra tion services. The company will need to manage the EC2 instances, which will increase operational overhead .", + "references": "" + }, + { + "question": "A company recently started using Amazon Aurora as t he data store for its global ecommerce application. 
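
A short sketch of the Global Accelerator answer above (static anycast IPs and edge routing for a UDP workload, with a Network Load Balancer as the endpoint) is shown below. Ports, names, and the NLB ARN are illustrative placeholders; note that the Global Accelerator API is served from the us-west-2 Region regardless of where the endpoints live.

# Sketch: standard accelerator -> UDP listener -> endpoint group pointing
# at an existing Network Load Balancer.
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-frontend", Enabled=True)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 4000, "ToPort": 4000}],
)["Listener"]

ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/abc123",
    }],
)

The two static IP addresses returned with the accelerator are what the game clients would use as their fixed entry points.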
When large reports are run, developers report that the e commerce application is performing poorly. After re viewing metrics in Amazon CloudWatch, a solutions architect finds that the ReadIOPS and CPUUtilizalion metrics are spiking when monthly reports run. What is the MOST cost-effective solution?", + "options": [ + "A. Migrate the monthly reporting to Amazon Redshift.", + "B. Migrate the monthly reporting to an Aurora Replic a.", + "C. Migrate the Aurora database to a larger instance class.", + "D. Increase the Provisioned IOPS on the Aurora insta nce." + ], + "correct": "B. Migrate the monthly reporting to an Aurora Replic a.", + "explanation": "Explanation/Reference: Option B: Migrating the monthly reporting to an Aur ora Replica may be the most cost- effective solutio n because it involves creating a read-only copy of th e database that can be used specifically for runnin g large reports without impacting the performance of the pr imary database. This solution allows the company to scale the read capacity of the database without incurring additional hardware or I/O costs.", + "references": "" + }, + { + "question": "A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analyti cs software is written in PHP and uses a MySQL databas e. The analytics software, the web server that prov ides PHP, and the database server are all hosted on the EC2 instance. The application is showing signs of performance degradation during busy times and is pr esenting 5xx errors. The company needs to make the application scale seamlessly. Which solution will m eet these requirements MOST cost-effectively?", + "options": [ + "A. Migrate the database to an Amazon RDS for MySQL D B instance. Create an AMI of the web application.", + "B. Migrate the database to an Amazon RDS for MySQL D B instance. Create an AMI of the web application.", + "C. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to stop", + "D. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application." + ], + "correct": "D. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application.", + "explanation": "Explanation/Reference: Option D is the most cost-effective solution becaus e; * it uses an Auto Scaling group with a launch templ ate and a Spot Fleet to automatically scale the num ber of EC2 instances based on the workload. * using a Spot Fleet allows the company to take adv antage of the lower prices of Spot Instances while still providing the required performance and availability for the application. * using an Aurora MySQL database instance allows th e company to take advantage of the scalability and performance of Aurora.", + "references": "" + }, + { + "question": "A company runs a stateless web application in produ ction on a group of Amazon EC2 On- Demand Instances behind an Application Load Balancer. The applicatio n experiences heavy usage during an 8-hour period e ach business day. Application usage is moderate and ste ady overnight. Application usage is low during week ends. The company wants to minimize its EC2 costs without affecting the availability of the application. Which solution will meet these requirements?", + "options": [ + "A. Use Spot Instances for the entire workload.", + "B. Use Reserved Instances for the baseline level of usage. Use Spot instances for any additional capaci ty that", + "C. Use On-Demand Instances for the baseline level of usage. 
Use Spot Instances for any additional capac ity", + "D. Use Dedicated Instances for the baseline level of usage. Use On-Demand Instances for any additional" + ], + "correct": "B. Use Reserved Instances for the baseline level of usage. Use Spot instances for any additional capaci ty that", + "explanation": "Explanation/Reference: Option B is the most cost-effective solution that m eets the requirements. * Using Reserved Instances for the baseline level o f usage will provide a discount on the EC2 costs fo r steady overnight and weekend usage. * Using Spot Instances for any additional capacity that the application needs during peak usage times will allow the company to take advantage of spare capacity in the region at a lower cost than On-Demand Instances .", + "references": "" + }, + { + "question": "A company needs to retain application log files for a critical application for 10 years. The applicati on team regularly accesses logs from the past month for tro ubleshooting, but logs older than 1 month are rarel y accessed. The application generates more than 10 TB of logs per month. Which storage option meets these requirements MOST cost-effectively?", + "options": [ + "A. Store the logs in Amazon S3. Use AWS Backup to mo ve logs more than 1 month old to S3 Glacier Deep", + "B. Store the logs in Amazon S3. Use S3 Lifecycle pol icies to move logs more than 1 month old to S3 Glac ier", + "C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3", + "D. Store the logs in Amazon CloudWatch Logs. Use Ama zon S3 Lifecycle policies to move logs more than 1", + "A. Configure the Lambda function for deployment acro ss multiple Availability Zones.", + "B. Modify the Lambda function's configuration to inc rease the CPU and memory allocations for the functi on.", + "C. Configure the SNS topic's retry strategy to incre ase both the number of retries and the wait time be tween", + "D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify th e" + ], + "correct": "D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify th e", + "explanation": "Explanation/Reference: To ensure that all notifications are eventually pro cessed, the solutions architect can set up an Amazo n SQS queue as the on-failure destination for the Amazon SNS topic. This way, when the Lambda function fails due to network connectivity issues, the notification will be sent to the queue instead of being lost. The Lam bda function can then be modified to process messages in the que ue, ensuring that all notifications are eventually processed.", + "references": "" + }, + { + "question": "A company has a service that produces event data. T he company wants to use AWS to process the event da ta as it is received. The data is written in a specifi c order that must be maintained throughout processi ng. The company wants to implement a solution that minimize s operational overhead. How should a solutions architect accomplish this?", + "options": [ + "A. Create an Amazon Simple Queue Service (Amazon SQS ) FIFO queue to hold messages.", + "B. Create an Amazon Simple Notification Service (Ama zon SNS) topic to deliver notifications containing", + "C. Create an Amazon Simple Queue Service (Amazon SQS ) standard queue to hold messages. Set up an", + "D. Create an Amazon Simple Notification Service (Ama zon SNS) topic to deliver notifications containing" + ], + "correct": "A. 
Create an Amazon Simple Queue Service (Amazon SQS ) FIFO queue to hold messages.", + "explanation": "Explanation/Reference: The correct solution is Option A. Creating an Amazo n Simple Queue Service (Amazon SQS) FIFO queue to hold messages and setting up an AWS Lambda function to process messages from the queue will ensure tha t the event data is processed in the correct order an d minimize operational overhead. Option B is incorrect because using Amazon Simple N otification Service (Amazon SNS) does not guarantee the order in which messages are delivered. Option C is incorrect because using an Amazon SQS s tandard queue does not guarantee the order in which messages are processed. Option D is incorrect because using an Amazon SQS q ueue as a subscriber to an Amazon SNS topic does no t guarantee the order in which messages are processed .", + "references": "" + }, + { + "question": "A company is migrating an application from on-premi ses servers to Amazon EC2 instances. As part of the migration design requirements, a solutions architec t must implement infrastructure metric alarms. The company does not need to take action if CPU utilization inc reases to more than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time, th e company needs to act as soon as possible. The solut ions architect also must reduce false alarms. What should the solutions architect do to meet thes e requirements?", + "options": [ + "A. Create Amazon CloudWatch composite alarms where p ossible.", + "B. Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.", + "C. Create Amazon CloudWatch Synthetics canaries to m onitor the application and raise an alarm.", + "D. Create single Amazon CloudWatch metric alarms wit h multiple metric thresholds where possible." + ], + "correct": "A. Create Amazon CloudWatch composite alarms where p ossible.", + "explanation": "Explanation/Reference: Composite alarms determine their states by monitori ng the states of other alarms. You can **use compos ite alarms to reduce alarm noise**. For example, you ca n create a composite alarm where the underlying met ric alarms go into ALARM when they meet specific condit ions. You then can set up your composite alarm to g o into ALARM and send you notifications when the unde rlying metric alarms go into ALARM by configuring t he underlying metric alarms never to take actions. Cur rently, composite alarms can take the following act ions: https://docs.aws.amazon.com/AmazonCloudWatch/latest /monitoring/Create_Composite_ Alarm.html", + "references": "" + }, + { + "question": "A company wants to migrate its on-premises data cen ter to AWS. According to the company's compliance requirements, the company can use only the ap-north east-3 Region. Company administrators are not permi tted to connect VPCs to the internet. Which solutions will meet these requirements? (Choo se two.)", + "options": [ + "A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access", + "B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast- 3", + "C. Use AWS Organizations to configure service contro l policies (SCPS) that prevent VPCs from gaining", + "D. Create an outbound rule for the network ACL in ea ch VPC to deny all traffic from 0.0.0.0/0. 
Create a n IAM" + ], + "correct": "", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/organizations/latest/us erguide/orgs_manage_policies_scp s_examples_vpc.html#example_vpc_2", + "references": "" + }, + { + "question": "A company uses a three-tier web application to prov ide training to new employees. The application is a ccessed for only 12 hours every day. The company is using a n Amazon RDS for MySQL DB instance to store information and wants to minimize costs. What shoul d a solutions architect do to meet these requiremen ts? A. Configure an IAM policy for AWS Systems Manager S ession Manager. Create an IAM role for the policy. Update the trust relationship of the role. Set up a utomatic start and stop for the DB instance.", + "options": [ + "B. Create an Amazon ElastiCache for Redis cache clus ter that gives users the ability to access the data from", + "C. Launch an Amazon EC2 instance. Create an IAM role that grants access to Amazon RDS.", + "D. Create AWS Lambda functions to start and stop the DB instance. Create Amazon EventBridge (Amazon" + ], + "correct": "D. Create AWS Lambda functions to start and stop the DB instance. Create Amazon EventBridge (Amazon", + "explanation": "Explanation/Reference: https://aws.amazon.com/blogs/database/schedule-amaz on-rds-stop-and-start-using-aws- lambda/", + "references": "" + }, + { + "question": "A company sells ringtones created from clips of pop ular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 128 KB in size. The company has millions of files, but downloads a re infrequent for ringtones older than 90 days. The co mpany needs to save money on storage while keeping the most accessed files readily available for its users . Which action should the company take to meet these requirements MOST cost-effectively?", + "options": [ + "A. Configure S3 Standard-Infrequent Access (S3 Stand ard-IA) storage for the initial storage tier of the objects.", + "B. Move the files to S3 Intelligent-Tiering and conf igure it to move objects to a less expensive storag e tier after", + "C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard -", + "D. Implement an S3 Lifecycle policy that moves the o bjects from S3 Standard to S3 Standard-Infrequent" + ], + "correct": "D. Implement an S3 Lifecycle policy that moves the o bjects from S3 Standard to S3 Standard-Infrequent", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company needs to save the results from a medical trial to an Amazon S3 repository. The repository mu st allow a few scientists to add new files and must re strict all other users to read-only access. No user s can have the ability to modify or delete any files in the re pository. The company must keep every file in the r epository for a minimum of 1 year after its creation date. Which solution will meet these requirements?", + "options": [ + "A. Use S3 Object Lock in governance mode with a lega l hold of 1 year.", + "B. Use S3 Object Lock in compliance mode with a rete ntion period of 365 days.", + "C. Use an IAM role to restrict all users from deleti ng or changing objects in the S3 bucket.", + "D. Configure the S3 bucket to invoke an AWS Lambda f unction every time an object is added. Configure th e" + ], + "correct": "", + "explanation": "Explanation/Reference: Compliance Mode. 
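
For the scheduled start/stop answer above (Lambda functions triggered by EventBridge rules so the training database only runs during its 12-hour window), a minimal sketch of the Lambda side is below. The DB instance identifier and the cron expressions mentioned in the comments are placeholders.

# Sketch: handlers invoked by two EventBridge scheduled rules, one to stop
# the RDS instance after hours and one to start it before the usage window.
import boto3

rds = boto3.client("rds")

def stop_handler(event, context):
    # e.g. triggered by a rule with ScheduleExpression cron(0 20 ? * MON-FRI *)
    rds.stop_db_instance(DBInstanceIdentifier="training-mysql")

def start_handler(event, context):
    # e.g. triggered shortly before the 12-hour training window opens
    rds.start_db_instance(DBInstanceIdentifier="training-mysql")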
The key difference between Complia nce Mode and Governance Mode is that there are NO users that can override the retention periods set o r delete an object, and that also includes your AWS root account which has the highest privileges.", + "references": "" + }, + { + "question": "A large media company hosts a web application on AW S. The company wants to start caching confidential media files so that users around the world will hav e reliable access to the files. The content is stor ed in Amazon S3 buckets. The company must deliver the content qu ickly, regardless of where the requests originate geographically. Which solution will meet these requirements?", + "options": [ + "A. Use AWS DataSync to connect the S3 buckets to the web application.", + "B. Deploy AWS Global Accelerator to connect the S3 b uckets to the web application.", + "C. Deploy Amazon CloudFront to connect the S3 bucket s to CloudFront edge servers.", + "D. Use Amazon Simple Queue Service (Amazon SQS) to c onnect the S3 buckets to the web application." + ], + "correct": "C. Deploy Amazon CloudFront to connect the S3 bucket s to CloudFront edge servers.", + "explanation": "Explanation/Reference: Caching == Edge location == CloudFront", + "references": "" + }, + { + "question": "A company produces batch data that comes from diffe rent databases. The company also produces live stre am data from network sensors and application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to proces s the incoming data and then stage the data in diff erent Amazon S3 buckets. Teams will later run one-time qu eries and import the data into a business intellige nce tool to show key performance indicators (KPIs). Which combination of steps will meet these requirem ents with the LEAST operational overhead? (Choose t wo.)", + "options": [ + "A. Use Amazon Athena for one-time queries. Use Amazo n QuickSight to create dashboards for KPIs.", + "B. Use Amazon Kinesis Data Analytics for one-time qu eries. Use Amazon QuickSight to create dashboards f or", + "C. Create custom AWS Lambda functions to move the in dividual records from the databases to an Amazon", + "D. Use an AWS Glue extract, transform, and load (ETL ) job to convert the data into JSON format. Load th e" + ], + "correct": "", + "explanation": "Explanation/Reference: https://aws.amazon.com/blogs/big-data/enhance-analy tics-with-google-trends-data- using-aws-glue-amazon - athena-and-amazon-quicksight/", + "references": "" + }, + { + "question": "A company stores data in an Amazon Aurora PostgreSQ L DB cluster. The company must store all the data f or 5 years and must delete all the data after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has automated backups configured for Au rora. Which combination of steps should a solutions archi tect take to meet these requirements? (Choose two.)", + "options": [ + "A. Take a manual snapshot of the DB cluster.", + "B. Create a lifecycle policy for the automated backu ps.", + "C. Configure automated backup retention for 5 years.", + "D. Configure an Amazon CloudWatch Logs export for th e DB cluster." 
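
To make the compliance-mode answer above more concrete, here is a sketch of enabling S3 Object Lock with a 365-day default retention so that no user, including the root account, can delete or shorten the retention of an object. The bucket name is a placeholder, and Object Lock is enabled here at bucket creation.

# Sketch: bucket with Object Lock and a default COMPLIANCE retention of 365 days.
import boto3

s3 = boto3.client("s3")

s3.create_bucket(
    Bucket="medical-trial-results",
    ObjectLockEnabledForBucket=True,   # enable Object Lock when the bucket is created
)

s3.put_object_lock_configuration(
    Bucket="medical-trial-results",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)

Read-only access for everyone except the scientists would still be handled separately through bucket policies or IAM, since Object Lock only governs deletion and modification of object versions.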
+ ], + "correct": "", + "explanation": "Explanation/Reference: https://aws.amazon.com/about-aws/whats-new/2020/06/ amazon-aurora-snapshots-can- be-managed-via-aws- backup/?nc1=h_ls AWS Backup adds Amazon Aurora data base cluster snapshots as its latest protected resource", + "references": "" + }, + { + "question": "A solutions architect is optimizing a website for a n upcoming musical event. Videos of the performance s will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience. Which service will improve the performance of both the real-time and on-demand streaming?", + "options": [ + "A. Amazon CloudFront", + "B. AWS Global Accelerator", + "C. Amazon Route 53", + "D. Amazon S3 Transfer Acceleration" + ], + "correct": "A. Amazon CloudFront", + "explanation": "Explanation/Reference: CloudFront offers several options for streaming you r media to global viewers--both pre- recorded files and live events. https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/IntroductionUs eCases.html#IntroductionUseCasesStreaming", + "references": "" + }, + { + "question": "A company is running a publicly accessible serverle ss application that uses Amazon API Gateway and AWS Lambda. The application's traffic recently spiked d ue to fraudulent requests from botnets. Which steps should a solutions architect take to bl ock requests from unauthorized users? (Choose two.)", + "options": [ + "A. Create a usage plan with an API key that is share d with genuine users only.", + "B. Integrate logic within the Lambda function to ign ore the requests from fraudulent IP addresses.", + "C. Implement an AWS WAF rule to target malicious req uests and trigger actions to filter them out.", + "D. Convert the existing public API to a private API. Update the DNS records to redirect users to the ne w API" + ], + "correct": "", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/apigateway/latest/devel operguide/api-gateway-api-usage- plans.html https://medium.com/@tshemku/aws-waf-vs-firewall-man ager-vs-shield-vs-shield- advanced-4c86911e94c6", + "references": "" + }, + { + "question": "An ecommerce company hosts its analytics applicatio n in the AWS Cloud. The application generates about 300 MB of data each month. The data is stored in JSON f ormat. The company is evaluating a disaster recover y solution to back up the data. The data must be acce ssible in milliseconds if it is needed, and the dat a must be kept for 30 days. Which solution meets these requirements MOST cost-e ffectively?", + "options": [ + "A. Amazon OpenSearch Service (Amazon Elasticsearch S ervice)", + "B. Amazon S3 Glacier", + "C. Amazon S3 Standard", + "D. Amazon RDS for PostgreSQL" + ], + "correct": "C. Amazon S3 Standard", + "explanation": "Explanation/Reference: Cost-effective solution with milliseconds of retrie val -> it should be s3 standard", + "references": "" + }, + { + "question": "A company has a small Python application that proce sses JSON documents and outputs the results to an o n- premises SQL database. The application runs thousan ds of times each day. The company wants to move the application to the AWS Cloud. The company needs a h ighly available solution that maximizes scalability and minimizes operational overhead. Which solution will meet these requirements?", + "options": [ + "A. Place the JSON documents in an Amazon S3 bucket. Run the Python code on multiple Amazon EC2", + "B. Place the JSON documents in an Amazon S3 bucket. 
Create an AWS Lambda function that runs the", + "C. Place the JSON documents in an Amazon Elastic Blo ck Store (Amazon EBS) volume.", + "D. Place the JSON documents in an Amazon Simple Queu e Service (Amazon SQS) queue as messages." + ], + "correct": "B. Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the", + "explanation": "Explanation/Reference: https://aws.amazon.com/rds/aurora/", + "references": "" + }, + { + "question": "The company's HPC workloads run on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output files that are ultimately stored in persi stent storage for analytics and long-term future use. The company seeks a cloud storage solution that per mits the copying of on-premises data to long-term persistent storage to make data available for proce ssing by all EC2 instances. The solution should als o be a high performance file system that is integrated wit h persistent storage to read and write datasets and output files. Which combination of AWS services meets these requi rements?", + "options": [ + "A. Amazon FSx for Lustre integrated with Amazon S3", + "B. Amazon FSx for Windows File Server integrated wit h Amazon S3", + "C. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)", + "D. Amazon S3 bucket with a VPC endpoint integrated w ith an Amazon Elastic Block Store (Amazon EBS)" + ], + "correct": "A. Amazon FSx for Lustre integrated with Amazon S3", + "explanation": "Explanation/Reference: Additional keywords: make data available for proces sing by all EC2 instances ==> FSx In absence of EFS , it should be FSx. Amazon FSx For Lustre provides a hig h-performance, parallel file system for hot data", + "references": "" + }, + { + "question": "A company is building a containerized application o n premises and decides to move the application to A WS. The application will have thousands of users soon a fter it is deployed. The company is unsure how to m anage the deployment of containers at scale. The company needs to deploy the containerized application in a highly available architecture that minimizes operational o verhead. Which solution will meet these requirements?", + "options": [ + "A. Store container images in an Amazon Elastic Conta iner Registry (Amazon ECR) repository. Use an Amazo n", + "B. Store container images in an Amazon Elastic Conta iner Registry (Amazon ECR) repository. Use an Amazo n", + "C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC 2", + "D. Create an Amazon EC2 Amazon Machine Image (AMI) t hat contains the container image." + ], + "correct": "A. Store container images in an Amazon Elastic Conta iner Registry (Amazon ECR) repository. Use an Amazo n", + "explanation": "Explanation/Reference: \"minimizes operational overhead\" --> Fargate is ser verless", + "references": "" + }, + { + "question": "A company has two applications: a sender applicatio n that sends messages with payloads to be processed and a processing application intended to receive the me ssages with payloads. The company wants to implemen t an AWS service to handle messages between the two appl ications. The sender application can send about 1,0 00 messages each hour. The messages may take up to 2 d ays to be processed: If the messages fail to proces s, they must be retained so that they do not impact th e processing of any remaining messages. 
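
As a rough sketch of the FSx for Lustre answer above (a high-performance parallel file system whose data repository is the durable S3 bucket), one possible configuration is shown below. The bucket, subnet, and sizing values are placeholders chosen only for illustration.

# Sketch: scratch FSx for Lustre file system linked to the S3 bucket that
# holds the HPC datasets; Spot instances mount it for fast reads/writes
# while results are exported back to S3 for long-term storage.
import boto3

fsx = boto3.client("fsx")
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                        # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://hpc-datasets",        # lazy-load input data from S3
        "ExportPath": "s3://hpc-datasets/output", # write results back to S3
    },
)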
Which solution meets these requirements and is the MOST operationally efficient?", + "options": [ + "A. Set up an Amazon EC2 instance running a Redis dat abase. Configure both applications to use the insta nce.", + "B. Use an Amazon Kinesis data stream to receive the messages from the sender application.", + "C. Integrate the sender and processor applications w ith an Amazon Simple Queue Service (Amazon SQS)", + "D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to" + ], + "correct": "C. Integrate the sender and processor applications w ith an Amazon Simple Queue Service (Amazon SQS)", + "explanation": "Explanation/Reference: Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can't be processed (consumed) success fully. https://docs.aws.amazon.com/ AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs- dead-letter-queues.html", + "references": "" + }, + { + "question": "A solutions architect must design a solution that u ses Amazon CloudFront with an Amazon S3 origin to s tore a static website. The company's security policy requi res that all website traffic be inspected by AWS WA F. How should the solutions architect comply with thes e requirements?", + "options": [ + "A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name", + "B. Configure Amazon CloudFront to forward all incomi ng requests to AWS WAF before requesting content", + "C. Configure a security group that allows Amazon Clo udFront IP addresses to access Amazon S3 only.", + "D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to" + ], + "correct": "D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to", + "explanation": "Explanation/Reference: https://aws.amazon.com/premiumsupport/knowledge-cen ter/cloudfront-access-to- amazon-s3/ confirms use o f OAI (and option D).", + "references": "" + }, + { + "question": "Organizers for a global event want to put daily rep orts online as static HTML pages. The pages are exp ected to generate millions of views from users around the wo rld. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an eff icient and effective solution. Which action should the solutions architect take to accomplish this?", + "options": [ + "A. Generate presigned URLs for the files.", + "B. Use cross-Region replication to all Regions.", + "C. Use the geoproximity feature of Amazon Route 53. D. Use Amazon CloudFront with the S3 bucket as its ori gin." + ], + "correct": "", + "explanation": "Explanation/Reference: The most effective and efficient solution would be Option D (Use Amazon CloudFront with the S3 bucket as its origin.) Amazon CloudFront is a content delivery ne twork (CDN) that speeds up the delivery of static a nd dynamic web content, such as HTML pages, images, an d videos. By using CloudFront, the HTML pages will be served to users from the edge location that is clos est to them, resulting in faster delivery and a bet ter user experience. CloudFront can also handle the high tra ffic and large number of requests expected for the global event, ensuring that the HTML pages are available a nd accessible to users around the world.", + "references": "" + }, + { + "question": "A company runs a production application on a fleet of Amazon EC2 instances. 
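
The dead-letter-queue behavior referenced in the SQS answer above can be sketched as follows. Queue names and the retry threshold are placeholders; retention is simply set comfortably above the 2-day processing window described in the question.

# Sketch: work queue with a dead-letter queue so failed messages are parked
# instead of blocking the rest of the backlog.
import boto3, json

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="payloads-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="payloads",
    Attributes={
        "MessageRetentionPeriod": "345600",   # 4 days, beyond the 2-day processing time
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",           # after 5 failed receives, move to the DLQ
        }),
    },
)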
The application reads the data from an Amazon SQS queue and processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This applicatio n should continually process messages without any d owntime. Which solution meets these requirements MOST cost-e ffectively?", + "options": [ + "A. Use Spot Instances exclusively to handle the maxi mum capacity required.", + "B. Use Reserved Instances exclusively to handle the maximum capacity required.", + "C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacit y.", + "D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional" + ], + "correct": "C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacit y.", + "explanation": "Explanation/Reference: A - it's out because it's not ok to use full spot c overage. B - it's hard to predict how much resources are nee ded to buy ahead, so it's suitable to no have any d own time but not the best from cost perspective C - possible to be correct answer such as use cover baseline with RI and rest with spot that is cheape r. Regarding don't time. there is no downtime because vaseline covered with RI and all communication is v ia SQS (distributed model) D - possible but less cost effective then C", + "references": "" + }, + { + "question": "A security team wants to limit access to specific s ervices or actions in all of the team's AWS account s. All accounts belong to a large organization in AWS Orga nizations. The solution must be scalable and there must be a single point where permissions can be maintain ed. What should a solutions architect do to accomplish this?", + "options": [ + "A. Create an ACL to provide access to the services o r actions.", + "B. Create a security group to allow accounts and att ach it to user groups.", + "C. Create cross-account roles in each account to den y access to the services or actions.", + "D. Create a service control policy in the root organ izational unit to deny access to the services or ac tions.", + "D. Service control policies (SCPs) are one type of policy that you can use to manage your organization . SCPs" + ], + "correct": "D. Create a service control policy in the root organ izational unit to deny access to the services or ac tions.", + "explanation": "Explanation Explanation/Reference:", + "references": "" + }, + { + "question": "A company is concerned about the security of its pu blic web application due to recent web attacks. The application uses an Application Load Balancer (ALB) . A solutions architect must reduce the risk of DDo S attacks against the application. What should the solutions architect do to meet this requirement?", + "options": [ + "A. Add an Amazon Inspector agent to the ALB.", + "B. Configure Amazon Macie to prevent attacks.", + "C. Enable AWS Shield Advanced to prevent attacks.", + "D. Configure Amazon GuardDuty to monitor the ALB." + ], + "correct": "C. Enable AWS Shield Advanced to prevent attacks.", + "explanation": "Explanation/Reference: To reduce the risk of DDoS attacks against the appl ication, the solutions architect should enable AWS Shield Advanced (Option C). AWS Shield is a managed Distri buted Denial of Service (DDoS) protection service t hat helps protect web applications running on AWS from DDoS attacks. 
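
To illustrate the service control policy answer above (a single point of permission management applied to all accounts in the organization), here is a hedged sketch. The denied actions and the root ID are placeholders; a real SCP would name whichever services or actions the security team wants to restrict.

# Sketch: deny-based SCP created in AWS Organizations and attached at the
# organization root so it applies to every member account.
import boto3, json

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:*", "rds:*"],   # example actions to restrict
        "Resource": "*",
    }],
}

policy_id = org.create_policy(
    Name="deny-restricted-services",
    Description="Blocks specific services across all member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)["Policy"]["PolicySummary"]["Id"]

org.attach_policy(PolicyId=policy_id, TargetId="r-examplerootid")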
AWS Shield Advanced is an additional layer of protection that provides enhanced DDoS pro tection capabilities, including proactive monitorin g and automatic inline mitigations, to help protect again st even the largest and most sophisticated DDoS att acks. By enabling AWS Shield Advanced, the solutions archite ct can help protect the application from DDoS attac ks and reduce the risk of disruption to the application.", + "references": "" + }, + { + "question": "A company's web application is running on Amazon EC 2 instances behind an Application Load Balancer. Th e company recently changed its policy, which now requ ires the application to be accessed from one specif ic country only. Which configuration will meet this requirement?", + "options": [ + "A. Configure the security group for the EC2 instance s.", + "B. Configure the security group on the Application L oad Balancer.", + "C. Configure AWS WAF on the Application Load Balance r in a VPC.", + "D. Configure the network ACL for the subnet that con tains the EC2 instances." + ], + "correct": "C. Configure AWS WAF on the Application Load Balance r in a VPC.", + "explanation": "Explanation/Reference: Geographic (Geo) Match Conditions in AWS WAF. This new condition type allows you to use AWS WAF to restrict application access based on the geographic location of your viewers. With geo match condition s you can choose the countries from which AWS WAF should allow access. https://aws.amazon.com/about-aws/ whats-new/2017/10/aws-waf- now-supports-geographic- match/", + "references": "" + }, + { + "question": "A company provides an API to its users that automat es inquiries for tax computations based on item pri ces. The company experiences a larger number of inquirie s during the holiday season only that cause slower response times. A solutions architect needs to desi gn a solution that is scalable and elastic. What should the solutions architect do to accomplis h this?", + "options": [ + "A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations", + "B. Design a REST API using Amazon API Gateway that a ccepts the item names. API Gateway passes item", + "C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances w ill", + "D. Design a REST API using Amazon API Gateway that c onnects with an API hosted on an Amazon EC2" + ], + "correct": "B. Design a REST API using Amazon API Gateway that a ccepts the item names. API Gateway passes item", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A solutions architect is creating a new Amazon Clou dFront distribution for an application. Some of the information submitted by users is sensitive. The ap plication uses HTTPS but needs another layer of sec urity. The sensitive information should.be protected throu ghout the entire application stack, and access to t he information should be restricted to certain applica tions. Which action should the solutions architect take?", + "options": [ + "A. Configure a CloudFront signed URL.", + "B. Configure a CloudFront signed cookie.", + "C. Configure a CloudFront field-level encryption pro file.", + "D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protoco l" + ], + "correct": "C. Configure a CloudFront field-level encryption pro file.", + "explanation": "Explanation/Reference: Field-level encryption allows you to enable your us ers to securely upload sensitive information to you r web servers. 
The sensitive information provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application stack. This encryption ensures that only applications that need the data--and have the credentials to decrypt it--are able to do so.", + "references": "" + }, + { + "question": "A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in Amazon S3. This content is the same for all users. The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users while reducing the load on the origin. Which solution meets these requirements MOST cost-effectively?", + "options": [ + "A. Deploy an AWS Global Accelerator accelerator in front of the web servers.", + "B. Deploy an Amazon CloudFront web distribution in front of the S3 bucket.", + "C. Deploy an Amazon ElastiCache for Redis instance in front of the web servers.", + "D. Deploy an Amazon ElastiCache for Memcached instance in front of the web servers." + ], + "correct": "B. Deploy an Amazon CloudFront web distribution in front of the S3 bucket.", + "explanation": "Explanation/Reference: CloudFront is best for content delivery. Global Accelerator is best for non-HTTP (TCP/UDP) cases and", + "references": "" + }, + { + "question": "A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the application. Which architecture should the solutions architect choose that provides high availability?", + "options": [ + "A. Create an Auto Scaling group that uses three instances across each of two Regions.", + "B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.", + "C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.", + "D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to" + ], + "correct": "B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.", + "explanation": "Explanation/Reference: Option B. Modify the Auto Scaling group to use three instances across each of the two Availability Zones. This option provides high availability by distributing the front-end web servers across multiple Availability Zones. If there is an issue with one Availability Zone, the other Availability Zone would still be available to serve traffic. This ensures that the application remains highly available even if there is a failure in one of the Availability Zones.", + "references": "" + }, + { + "question": "An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some customers experienced timeouts, and the application did not process the orders of those customers. A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open connections. 
The solutions architect needs to prevent the timeout er rors while making the least possible changes to the appl ication. Which solution will meet these requirements?", + "options": [ + "A. Configure provisioned concurrency for the Lambda function. Modify the database to be a global databa se in", + "B. Use Amazon RDS Proxy to create a proxy for the da tabase. Modify the Lambda function to use the RDS", + "C. Create a read replica for the database in a diffe rent AWS Region. Use query string parameters in API", + "D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration" + ], + "correct": "B. Use Amazon RDS Proxy to create a proxy for the da tabase. Modify the Lambda function to use the RDS", + "explanation": "Explanation Explanation/Reference: Many applications, including those built on modern serverless architectures, can have a large number o f open connections to the database server and may open and close database connections at a high rate, exhaust ing database memory and compute resources. Amazon RDS P roxy allows applications to pool and share connections established with the database, improvin g database efficiency and application scalability. https://aws.amazon.com/id/rds/proxy/", + "references": "" + }, + { + "question": "An application runs on Amazon EC2 instances in priv ate subnets. The application needs to access an Ama zon DynamoDB table. What is the MOST secure way to access the table whi le ensuring that the traffic does not leave the AWS network?", + "options": [ + "A. Use a VPC endpoint for DynamoDB.", + "B. Use a NAT gateway in a public subnet.", + "C. Use a NAT instance in a private subnet.", + "D. Use the internet gateway attached to the VPC." + ], + "correct": "A. Use a VPC endpoint for DynamoDB.", + "explanation": "Explanation/Reference: VPC endpoints for service in private subnets https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/vpc-endpoints- dynamodb.html", + "references": "" + }, + { + "question": "An entertainment company is using Amazon DynamoDB t o store media metadata. The application is read intensive and experiencing delays. The company does not have staff to handle additional operational ov erhead and needs to improve the performance efficiency of DynamoDB without reconfiguring the application. What should a solutions architect recommend to meet this requirement?", + "options": [ + "A. Use Amazon ElastiCache for Redis.", + "B. Use Amazon DynamoDB Accelerator (DAX).", + "C. Replicate data by using DynamoDB global tables.", + "D. Use Amazon ElastiCache for Memcached with Auto Di scovery enabled." + ], + "correct": "B. Use Amazon DynamoDB Accelerator (DAX).", + "explanation": "Explanation/Reference: To improve the performance efficiency of DynamoDB w ithout reconfiguring the application, a solutions a rchitect should recommend using Amazon DynamoDB Accelerator (DAX) which is Option B as the correct answer. DAX is a fully managed, in- memory cache that can be us ed to improve the performance of read-intensive wor kloads on DynamoDB. DAX stores frequently accessed data in memory, allowing the application to retrieve data from the cache rather than making a request to DynamoDB. This can significantly reduce the number of read requests made to DynamoDB, improving the performanc e and reducing the latency of the application.", + "references": "" + }, + { + "question": "A company's infrastructure consists of Amazon EC2 i nstances and an Amazon RDS DB instance in a single AWS Region. 
The company wants to back up its data i n a separate Region. Which solution will meet these requirements with th e LEAST operational overhead? A. Use AWS Backup to copy EC2 backups and RDS backup s to the separate Region.", + "options": [ + "B. Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the", + "C. Create Amazon Machine Images (AMIs) of the EC2 in stances. Copy the AMIs to the separate Region.", + "D. Create Amazon Elastic Block Store (Amazon EBS) sn apshots. Copy the EBS snapshots to the separate" + ], + "correct": "", + "explanation": "Explanation/Reference: Cross-Region backup Using AWS Backup, you can copy backups to multiple different AWS Regions on demand or automatically as part of a scheduled back up plan. Cross-Region backup is particularly valuab le if you have business continuity or compliance requirem ents to store backups a minimum distance away from your production data. https://docs.aws.amazon.com/aws-ba ckup/latest/devguide/whatisbackup.html", + "references": "" + }, + { + "question": "A solutions architect needs to securely store a dat abase user name and password that an application us es to access an Amazon RDS DB instance. The application t hat accesses the database runs on an Amazon EC2 instance. The solutions architect wants to create a secure parameter in AWS Systems Manager Parameter Store. What should the solutions architect do to meet this requirement?", + "options": [ + "A. Create an IAM role that has read access to the Pa rameter Store parameter. Allow Decrypt access to an", + "B. Create an IAM policy that allows read access to t he Parameter Store parameter. Allow Decrypt access to an", + "C. Create an IAM trust relationship between the Para meter Store parameter and the EC2 instance. Specify", + "D. Create an IAM trust relationship between the DB i nstance and the EC2 instance. Specify Systems Manag er" + ], + "correct": "A. Create an IAM role that has read access to the Pa rameter Store parameter. Allow Decrypt access to an", + "explanation": "Explanation/Reference: To securely store a database user name and password in AWS Systems Manager Parameter Store and allow an application running on an EC2 instance to access it, the solutions architect should create an IAM r ole that has read access to the Parameter Store parameter an d allow Decrypt access to an AWS KMS key that is us ed to encrypt the parameter. The solutions architect s hould then assign this IAM role to the EC2 instance . This approach allows the EC2 instance to access the para meter in the Parameter Store and decrypt it using t he specified KMS key while enforcing the necessary sec urity controls to ensure that the parameter is only accessible to authorized parties.", + "references": "" + }, + { + "question": "A company is designing a cloud communications platf orm that is driven by APIs. The application is host ed on Amazon EC2 instances behind a Network Load Balancer (NLB). The company uses Amazon API Gateway to provide external users with access to the applicati on through APIs. The company wants to protect the p latform against web exploits like SQL injection and also wa nts to detect and mitigate large, sophisticated DDo S attacks. Which combination of solutions provides the MOST pr otection? (Choose two.)", + "options": [ + "A. Use AWS WAF to protect the NLB.", + "B. Use AWS Shield Advanced with the NLB.", + "C. Use AWS WAF to protect Amazon API Gateway.", + "D. 
Use Amazon GuardDuty with AWS Shield Standard" + ], + "correct": "", + "explanation": "Explanation/Reference: Shield - Load Balancer, CF, Route53 AWF - CF, ALB, API Gateway", + "references": "" + }, + { + "question": "A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processe d sequentially, but the order of results does not mat ter. The application uses a monolithic architecture . The only way that the company can scale the application to m eet increased demand is to increase the size of the instances. The company's developers have decided to rewrite th e application to use a microservices architecture o n Amazon Elastic Container Service (Amazon ECS). What should a solutions architect recommend for com munication between the microservices?", + "options": [ + "A. Create an Amazon Simple Queue Service (Amazon SQS ) queue. Add code to the data producers, and send", + "B. Create an Amazon Simple Notification Service (Ama zon SNS) topic. Add code to the data producers, and", + "C. Create an AWS Lambda function to pass messages. A dd code to the data producers to call the Lambda", + "D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert" + ], + "correct": "A. Create an Amazon Simple Queue Service (Amazon SQS ) queue. Add code to the data producers, and send", + "explanation": "Explanation/Reference: Option B, using Amazon Simple Notification Service (SNS), would not be suitable for this use case, as SNS is a pub/sub messaging service that is designed for one- to-many communication, rather than point-to-point communication between specific microservices. Option C, using an AWS Lambda function to pass mess ages, would not be suitable for this use case, as i t would require the data producers and data consumers to have a direct connection and invoke the Lambda function, rather than being decoupled through a mes sage queue. Option D, using an Amazon DynamoDB table with Dynam oDB Streams, would not be suitable for this use cas e, as it would require the data consumers to continuou sly poll the DynamoDB Streams API to detect new tab le entries, rather than being notified of new data thr ough a message queue.", + "references": "" + }, + { + "question": "A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly im pacted the business. To ensure this does not happen again, the company wants a reliable database solution on A WS that minimizes data loss and stores every transa ction on at least two nodes. Which solution meets these requirements?", + "options": [ + "A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Z ones.", + "B. Create an Amazon RDS MySQL DB instance with Multi -AZ functionality enabled to synchronously replicat e", + "C. Create an Amazon RDS MySQL DB instance and then c reate a read replica in a separate AWS Region that", + "D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to" + ], + "correct": "B. Create an Amazon RDS MySQL DB instance with Multi -AZ functionality enabled to synchronously replicat e", + "explanation": "Explanation/Reference: Amazon RDS MySQL DB instance with Multi-AZ function ality enabled to synchronously replicate the data Standby DB in Multi-AZ- synchronous replication", + "references": "" + }, + { + "question": "A company is building a new dynamic ordering websit e. 
The company wants to minimize server maintenance and patching. The website must be highly available and must scale read and write capacity as quickly a s possible to meet changes in user demand. Which solution will meet these requirements?", + "options": [ + "A. Host static content in Amazon S3. Host dynamic co ntent by using Amazon API Gateway and AWS Lambda.", + "B. Host static content in Amazon S3. Host dynamic co ntent by using Amazon API Gateway and AWS Lambda.", + "C. Host all the website content on Amazon EC2 instan ces. Create an Auto Scaling group to scale the EC2", + "D. Host all the website content on Amazon EC2 instan ces. Create an Auto Scaling group to scale the EC2" + ], + "correct": "A. Host static content in Amazon S3. Host dynamic co ntent by using Amazon API Gateway and AWS Lambda.", + "explanation": "Explanation/Reference: A - is correct, because Dynamodb on-demand scales w rite and read capacity", + "references": "" + }, + { + "question": "A company has an AWS account used for software engi neering. The AWS account has access to the company's on-premises data center through a pair of AWS Direct Connect connections. All non-VPC traffi c routes to the virtual private gateway. A development team recently created an AWS Lambda f unction through the console. The development team needs to allow the function to access a database th at runs in a private subnet in the company's data c enter. Which solution will meet these requirements? A. Configure the Lambda function to run in the VPC w ith the appropriate security group.", + "options": [ + "B. Set up a VPN connection from AWS to the data cent er. Route the traffic from the Lambda function thro ugh", + "C. Update the route tables in the VPC to allow the L ambda function to access the on- premises data cent er", + "D. Create an Elastic IP address. Configure the Lambd a function to send traffic through the Elastic IP a ddress" + ], + "correct": "", + "explanation": "Explanation/Reference: To configure a VPC for an existing function: 1. Open the Functions page of the Lambda console. 2. Choose a function. 3. Choose Configuration and then choose VPC. 4. Under VPC, choose Edit. 5. Choose a VPC, subnets, and security groups. <-- **That's why I believe the answer is A**. Note: If your function needs internet access, use network address translation (NAT). Connecting a function t o a public subnet doesn't give it internet access or a public IP address.", + "references": "" + }, + { + "question": "A company runs an application using Amazon ECS. The application creates resized versions of an origina l image and then makes Amazon S3 API calls to store t he resized images in Amazon S3. How can a solutions architect ensure that the appli cation has permission to access Amazon S3?", + "options": [ + "A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the", + "B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task de finition.", + "C. Create a security group that allows access from A mazon ECS to Amazon S3, and update the launch", + "D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS clust er" + ], + "correct": "B. 
Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task de finition.", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AWSCloudFormation/lates t/UserGuide/aws-resource-ecs- taskdefinition.html", + "references": "" + }, + { + "question": "A company has a Windows-based application that must be migrated to AWS. The application requires the u se of a shared Windows file system attached to multipl e Amazon EC2 Windows instances that are deployed across multiple Availability Zone: What should a solutions architect do to meet this r equirement? A. Configure AWS Storage Gateway in volume gateway m ode. Mount the volume to each Windows instance.", + "options": [ + "B. Configure Amazon FSx for Windows File Server. Mou nt the Amazon FSx file system to each Windows", + "C. Configure a file system by using Amazon Elastic F ile System (Amazon EFS). Mount the EFS file system to", + "D. Configure an Amazon Elastic Block Store (Amazon E BS) volume with the required size." + ], + "correct": "B. Configure Amazon FSx for Windows File Server. Mou nt the Amazon FSx file system to each Windows", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/wfsx-volumes.html", + "references": "" + }, + { + "question": "A company is developing an ecommerce application th at will consist of a load-balanced front end, a con tainer- based application, and a relational database. A sol utions architect needs to create a highly available solution that operates with as little manual intervention as possible. Which solutions meet these requirements? (Choose tw o.)", + "options": [ + "A. Create an Amazon RDS DB instance in Multi-AZ mode .", + "B. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.", + "C. Create an Amazon EC2 instance-based Docker cluste r to handle the dynamic application load.", + "D. Create an Amazon Elastic Container Service (Amazo n ECS) cluster with a Fargate launch type to handle", + "A.(O) multi-az <= 'little intervention'", + "B.(X) read replica <= Promoting a read replica to b e a standalone DB instance", + "C.(X) use Amazon ECS instead of EC2-based docker fo r little human intervention", + "D.(O) Amazon ECS on AWS Fargate : AWS Fargate is a technology that you can use with Amazon ECS to run", + "A. Use AWS Transfer Family to configure an SFTP-enab led server with a publicly accessible endpoint.", + "B. Use Amazon S3 File Gateway as an SFTP server. Exp ose the S3 File Gateway endpoint URL to the new", + "C. Launch an Amazon EC2 instance in a private subnet in a VPInstruct the new partner to upload files to the", + "D. Launch Amazon EC2 instances in a private subnet i n a VPC. Place a Network Load Balancer (NLB) in fro nt" + ], + "correct": "A. Use AWS Transfer Family to configure an SFTP-enab led server with a publicly accessible endpoint.", + "explanation": "Explanation/Reference: AWS Transfer Family securely scales your recurring business-to-business file transfers to AWS Storage services using SFTP, FTPS, FTP, and AS2 protocols. https://aws.amazon.com/aws-transfer-family/", + "references": "" + }, + { + "question": "A company needs to store contract documents. A cont ract lasts for 5 years. During the 5-year period, t he company must ensure that the documents cannot be ov erwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys au tomatically every year. 
Which combination of steps should a solutions archi tect take to meet these requirements with the LEAST operational overhead? (Choose two.)", + "options": [ + "A. Store the documents in Amazon S3. Use S3 Object L ock in governance mode.", + "B. Store the documents in Amazon S3. Use S3 Object L ock in compliance mode.", + "C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3).", + "D. Use server-side encryption with AWS Key Managemen t Service (AWS KMS) customer managed keys." + ], + "correct": "", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonS3/latest/usergui de/UsingServerSideEncryption.ht ml", + "references": "" + }, + { + "question": "A company has a web application that is based on Ja va and PHP. The company plans to move the applicati on from on premises to AWS. The company needs the abil ity to test new site features frequently. The compa ny also needs a highly available and managed solution that requires minimum operational overhead. Which solution will meet these requirements? A. Create an Amazon S3 bucket. Enable static web hos ting on the S3 bucket. Upload the static content to the S3 bucket. Use AWS Lambda to process all dynamic co ntent.", + "options": [ + "B. Deploy the web application to an AWS Elastic Bean stalk environment. Use URL swapping to switch", + "C. Deploy the web application to Amazon EC2 instance s that are configured with Java and PHP. Use Auto", + "D. Containerize the web application. Deploy the web application to Amazon EC2 instances." + ], + "correct": "B. Deploy the web application to an AWS Elastic Bean stalk environment. Use URL swapping to switch", + "explanation": "Explanation/Reference: Elastic Beanstalk is a fully managed service that m akes it easy to deploy and run applications in the AWS; To enable frequent testing of new site features, you c an use URL swapping to switch between multiple Elas tic Beanstalk environments.", + "references": "" + }, + { + "question": "A company has an ordering application that stores c ustomer information in Amazon RDS for MySQL. During regular business hours, employees run one-time quer ies for reporting purposes. Timeouts are occurring during order processing because the reporting queries are taking a long time to run. The company needs to eli minate the timeouts without preventing employees from perf orming queries. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Create a read replica. Move reporting queries to the read replica.", + "B. Create a read replica. Distribute the ordering ap plication to the primary DB instance and the read r eplica.", + "C. Migrate the ordering application to Amazon Dynamo DB with on-demand capacity.", + "D. Schedule the reporting queries for non-peak hours ." + ], + "correct": "A. Create a read replica. Move reporting queries to the read replica.", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A hospital wants to create digital copies for its l arge collection of historical written records. The hospital will continue to add hundreds of new documents each day. The hospital's data team will scan the documents a nd will upload the documents to the AWS Cloud. A solutions architect must implement a solution to analyze the documents, extract the medical informat ion, and store the documents so that an application can run SQL queries on the data. The solution must maximize scalability and operational efficiency. 
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)", + "options": [ + "A. Write the document information to an Amazon EC2 instance that runs a MySQL database.", + "B. Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data.", + "C. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the", + "D. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Rekognition to convert the documents to raw text. Use Amazon Transcribe Medical to detect and extract relevant" + ], + "correct": "", + "explanation": "Explanation/Reference: B > Store the documents in S3 and use Athena to query the data. E > Use Textract to extract text from the files, not Rekognition. N.B. Rekognition is for image identification.", + "references": "" + }, + { + "question": "A company is running a batch application on Amazon EC2 instances. The application consists of a backend with multiple Amazon RDS databases. The application is causing a high number of reads on the databases. A solutions architect must reduce the number of database reads while ensuring high availability. What should the solutions architect do to meet this requirement?", + "options": [ + "A. Add Amazon RDS read replicas.", + "B. Use Amazon ElastiCache for Redis.", + "C. Use Amazon Route 53 DNS caching", + "D. Use Amazon ElastiCache for Memcached." + ], + "correct": "B. Use Amazon ElastiCache for Redis.", + "explanation": "Explanation/Reference: https://aws.amazon.com/getting-started/hands-on/boosting-mysql-database-performance-with-amazon-elasticache-for-redis/", + "references": "" + }, + { + "question": "A company needs to run a critical application on AWS. The company needs to use Amazon EC2 for the application's database. The database must be highly available and must fail over automatically if a disruptive event occurs. Which solution will meet these requirements?", + "options": [ + "A. Launch two EC2 instances, each in a different Availability Zone in the same AWS Region. Install the", + "B. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance.", + "C. Launch two EC2 instances, each in a different AWS Region. Install the database on both EC2 instances.", + "D. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance." + ], + "correct": "A. Launch two EC2 instances, each in a different Availability Zone in the same AWS Region. Install the", + "explanation": "Explanation/Reference: (Configure the EC2 instances as a cluster.) A cluster consists of one or more DB instances and a cluster volume that manages the data for those DB instances. The cluster volume is a VIRTUAL DATABASE storage volume that spans multiple Availability Zones, with each Availability Zone having a copy of the DB cluster data. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.htm", + "references": "" + }, + { + "question": "A company's order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that can process orders automatically if a system outage occurs. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Move the EC2 instances into an Auto Scaling group. 
Create an Amazon EventBridge (Amazon CloudWatch", + "B. Move the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB). Update the", + "C. Move the EC2 instances into an Auto Scaling group . Configure the order system to send messages to an", + "D. Create an Amazon Simple Notification Service (Ama zon SNS) topic. Create an AWS Lambda function, and" + ], + "correct": "C. Move the EC2 instances into an Auto Scaling group . Configure the order system to send messages to an", + "explanation": "Explanation/Reference: To meet the requirements of the company, a solution should be implemented that can automatically proce ss orders if a system outage occurs. Option C meets th ese requirements by using an Auto Scaling group and Amazon Simple Queue Service (SQS) to ensure that or ders can be processed even if a system outage occur s. In this solution, the EC2 instances are placed in a n Auto Scaling group, which ensures that the number of instances can be automatically scaled up or down ba sed on demand. The ordering system is configured to send messages to an SQS queue, which acts as a buff er and stores the messages until they can be proces sed by the EC2 instances. The EC2 instances are configu red to consume messages from the queue and process them. If a system outage occurs, the messages in th e queue will remain available and can be processed once the system is restored.", + "references": "" + }, + { + "question": "A company runs an application on a large fleet of A mazon EC2 instances. The application reads and writ es entries into an Amazon DynamoDB table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a solution that minimizes cost an d development effort. Which solution meets these requirements?", + "options": [ + "A. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stac k", + "B. Use an EC2 instance that runs a monitoring applic ation from AWS Marketplace.", + "C. Configure Amazon DynamoDB Streams to invoke an AW S Lambda function when a new item is created in", + "D. Extend the application to add an attribute that h as a value of the current timestamp plus 30 days to each new item that is created in the table. Configure Dy namoDB to use the attribute as the TTL attribute." + ], + "correct": "D. Extend the application to add an attribute that h as a value of the current timestamp plus 30 days to each new item that is created in the table. Configure Dy namoDB to use the attribute as the TTL attribute.", + "explanation": "Explanation/Reference: The DynamoDB TTL feature allows you to define a per -item timestamp to determine when an item is no lon ger needed. Shortly after the date and time of the spec ified timestamp, DynamoDB deletes the item from you r table without consuming any write throughput.", + "references": "" + }, + { + "question": "A company has a Microsoft .NET application that run s on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Ed ition server. The company is planning a migration t o AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available. Which combi nation of actions should the company take to meet t hese requirements? (Choose two.)", + "options": [ + "A. Refactor the application as serverless with AWS L ambda functions running .NET Core.", + "B. 
Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.", + "C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).", + "D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon" + ], + "correct": "B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead. Which solution meets these requirements?", + "options": [ + "A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and", + "B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon", + "D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon" + ], + "correct": "D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon", + "explanation": "Explanation/Reference: https://containersonaws.com/introduction/ec2-or-aws-fargate/", + "references": "" + }, + { + "question": "A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes. Which solution will meet these requirements?", + "options": [ + "A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use", + "B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.", + "C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use", + "D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use" + ], + "correct": "B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.", + "explanation": "Explanation/Reference: The correct answer is B: Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis. Amazon Transcribe is a service that automatically transcribes spoken language into written text. It can handle multiple speakers and can generate transcript files in real time or asynchronously. These transcript files can be stored in Amazon S3 for long-term storage. Amazon Athena is a query service that allows you to analyze data stored in Amazon S3 using SQL. You can use it to analyze the transcript files and identify patterns in the data. Option A is incorrect because Amazon Rekognition is a service for analyzing images and videos, not transcribing spoken language. Option C is incorrect because Amazon Translate is a service for translating text from one language to another, not transcribing spoken language. 
Option D is incorrect because Amazon Textract is a service for extracting text and data from documents and images, not transcribing spoken language.", + "references": "" + }, + { + "question": "A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company w ants an AWS managed solution that will control acce ss to the REST API to reduce development efforts. Which solution will meet these requirements with th e LEAST operational overhead? A. Configure an AWS Lambda function to be an authori zer in API Gateway to validate which user made the request.", + "options": [ + "B. For each user, create and assign an API key that must be sent with each request. Validate the key by using", + "C. Send the user's email address in the header with every request. Invoke an AWS Lambda function to", + "D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate" + ], + "correct": "D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/apigateway/latest/devel operguide/apigateway-integrate- with-cognito.html", + "references": "" + }, + { + "question": "A company is developing a marketing communications service that targets mobile app users. The company needs to send confirmation messages with Short Mess age Service (SMS) to its users. The users must be a ble to reply to the SMS messages. The company must stor e the responses for a year for analysis. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Create an Amazon Connect contact flow to send the SMS messages. Use AWS Lambda to process the", + "B. Build an Amazon Pinpoint journey. Configure Amazo n Pinpoint to send events to an Amazon Kinesis data", + "C. Use Amazon Simple Queue Service (Amazon SQS) to d istribute the SMS messages. Use AWS Lambda to", + "D. Create an Amazon Simple Notification Service (Ama zon SNS) FIFO topic. Subscribe an Amazon Kinesis" + ], + "correct": "B. Build an Amazon Pinpoint journey. Configure Amazo n Pinpoint to send events to an Amazon Kinesis data", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/pinpoint/latest/develop erguide/event-streams.html", + "references": "" + }, + { + "question": "A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is s tored in the S3 bucket. Additionally, the encryption key must be automatically rotated every year. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Move the data to the S3 bucket. Use server-side e ncryption with Amazon S3 managed encryption keys", + "B. Create an AWS Key Management Service (AWS KMS) cu stomer managed key. Enable automatic key", + "C. Create an AWS Key Management Service (AWS KMS) cu stomer managed key. Set the S3 bucket's default", + "D. Encrypt the data with customer key material befor e moving the data to the S3 bucket." + ], + "correct": "B. Create an AWS Key Management Service (AWS KMS) cu stomer managed key. Enable automatic key", + "explanation": "Explanation/Reference: SSE-S3 - is free and uses AWS owned CMKs (CMK = Cus tomer Master Key). The encryption key is owned and managed by AWS, and is shared among many accoun ts. 
Its rotation is automatic with time that varies as shown in the table here. The time is not explicitly defined. SSE-KMS - has two flavors: AWS managed CMK. This is free CMK generated only fo r your account. You can only view it policies and a udit usage, but not manage it. Rotation is automatic - o nce per 1095 days (3 years), Customer managed CMK. This uses your own key that y ou create and can manage. Rotation is not enabled b y default. But if you enable it, it will be automatic ally rotated every 1 year. This variant can also us e an imported key material by you. If you create such key with an imported material, there is no automated rotation. Only manual rotation. SSE-C - customer provided key. The encryption key i s fully managed by you outside of AWS. AWS will not rotate it.", + "references": "" + }, + { + "question": "The customers of a finance company request appointm ents with financial advisors by sending text messag es. A web application that runs on Amazon EC2 instances accepts the appointment requests. The text message s are published to an Amazon Simple Queue Service (Am azon SQS) queue through the web application. Anothe r application that runs on EC2 instances then sends m eeting invitations and meeting confirmation email messages to the customers. After successful schedul ing, this application stores the meeting informatio n in an Amazon DynamoDB database. As the company expands, customers report that their meeting invitations are taking longer to arrive. What should a solutions architect recommend to reso lve this issue?", + "options": [ + "A. Add a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database.", + "B. Add an Amazon API Gateway API in front of the web application that accepts the appointment requests.", + "C. Add an Amazon CloudFront distribution. Set the or igin as the web application that accepts the appoin tment", + "D. Add an Auto Scaling group for the application tha t sends meeting invitations. Configure the Auto Sca ling" + ], + "correct": "D. Add an Auto Scaling group for the application tha t sends meeting invitations. Configure the Auto Sca ling", + "explanation": "Explanation/Reference: Option D. Add an Auto Scaling group for the applica tion that sends meeting invitations. Configure the Auto Scaling group to scale based on the depth of the SQ S queue. To resolve the issue of longer delivery ti mes for meeting invitations, the solutions architect can re commend adding an Auto Scaling group for the applic ation that sends meeting invitations and configuring the Auto Scaling group to scale based on the depth of t he SQS queue. This will allow the application to scale up as the number of appointment requests increases, im proving the performance and delivery times of the meeting i nvitations.", + "references": "" + }, + { + "question": "An online retail company has more than 50 million a ctive customers and receives more than 25,000 order s each day. The company collects purchase data for cu stomers and stores this data in Amazon S3. Addition al customer data is stored in Amazon RDS. The company wants to make all the data available to various teams so that the teams can perform analyt ics. The solution must provide the ability to manage fin e-grained permissions for the data and must minimiz e operational overhead. Which solution will meet these requirements?", + "options": [ + "A. Migrate the purchase data to write directly to Am azon RDS. Use RDS access controls to limit access.", + "B. 
Schedule an AWS Lambda function to periodically c opy data from Amazon RDS to Amazon S3. Create an", + "C. Create a data lake by using AWS Lake Formation. C reate an AWS Glue JDBC connection to Amazon RDS.", + "D. Create an Amazon Redshift cluster. Schedule an AW S Lambda function to periodically copy data from" + ], + "correct": "C. Create a data lake by using AWS Lake Formation. C reate an AWS Glue JDBC connection to Amazon RDS.", + "explanation": "Explanation/Reference: https://aws.amazon.com/blogs/big-data/manage-fine-g rained-access-control-using-aws- lake-formation/", + "references": "" + }, + { + "question": "A company hosts a marketing website in an on-premis es data center. The website consists of static docu ments and runs on a single server. An administrator updat es the website content infrequently and uses an SFT P client to upload new documents. The company decides to host its website on AWS and to use Amazon CloudFront. The company's solutions architect creates a CloudFront distribution. The so lutions architect must design the most cost-effecti ve and resilient architecture for website hosting to serve as the CloudFront origin. Which solution will meet these requirements?", + "options": [ + "A. Create a virtual server by using Amazon Lightsail . Configure the web server in the Lightsail instanc e. Upload", + "B. Create an AWS Auto Scaling group for Amazon EC2 i nstances. Use an Application Load Balancer. Upload", + "C. Create a private Amazon S3 bucket. Use an S3 buck et policy to allow access from a CloudFront origin", + "D. Create a public Amazon S3 bucket. Configure AWS T ransfer for SFTP. Configure the S3 bucket for websi te" + ], + "correct": "C. Create a private Amazon S3 bucket. Use an S3 buck et policy to allow access from a CloudFront origin", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/cli/latest/reference/tr ansfer/describe-server.html", + "references": "" + }, + { + "question": "A company wants to manage Amazon Machine Images (AM Is). The company currently copies AMIs to the same AWS Region where the AMIs were created. The co mpany needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the company's account. Which solution will meet these requirements with th e LEAST operational overhead? A. Create an AWS Lambda function to query AWS CloudT rail logs and to send an alert when a CreateImage API call is detected.", + "options": [ + "B. Configure AWS CloudTrail with an Amazon Simple No tification Service (Amazon SNS) notification that", + "C. Create an Amazon EventBridge (Amazon CloudWatch E vents) rule for the CreateImage API call. Configure", + "D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail" + ], + "correct": "C. Create an Amazon EventBridge (Amazon CloudWatch E vents) rule for the CreateImage API call. Configure", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGu ide/monitor-ami- events.html#:~:text=For% 20example%2C%20you%20can%20create%20an%20EventBridg e%20rule%20that%20detects%20when% 20the%20AMI%20creation%20process%20has% 20completed %20and%20then%20invokes%20an% 20Amazon%20SNS%20topic%20to%20 send%20an%20email%20 notification%20to%20you.", + "references": "" + }, + { + "question": "A company owns an asynchronous API that is used to ingest user requests and, based on the request type , dispatch requests to the appropriate microservice f or processing. 
The company is using Amazon API Gate way to deploy the API front end, and an AWS Lambda func tion that invokes Amazon DynamoDB to store user requests before dispatching them to the processing microservices. The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests. What should a solutions architect do to address thi s issue without impacting existing users?", + "options": [ + "A. Add throttling on the API Gateway with server-sid e throttling limits.", + "B. Use DynamoDB Accelerator (DAX) and Lambda to buff er writes to DynamoDB.", + "C. Create a secondary index in DynamoDB for the tabl e with the user requests.", + "D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB." + ], + "correct": "D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.", + "explanation": "Explanation/Reference: To address the issue of lost user requests and impr ove the availability of the API, the solutions arch itect should use the Amazon Simple Queue Service (Amazon SQS) qu eue and Lambda to buffer writes to DynamoDB. Option D (correct answer) By using an SQS queue and Lambda, the solutions architect can decouple the A PI front end from the processing microservices and imp rove the overall scalability and availability of th e system. The SQS queue acts as a buffer, allowing the API fr ont end to continue accepting user requests even if the processing microservices are experiencing high work loads or are temporarily unavailable. The Lambda fu nction can then retrieve requests from the SQS queue and w rite them to DynamoDB, ensuring that all user reque sts are stored and processed. This approach allows the company to scale the processing microservices independently from the API front end, ensuring that the API remains available to users even during per iods of high demand.", + "references": "" + }, + { + "question": "A company needs to move data from an Amazon EC2 ins tance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed thr ough public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket. Which solution will meet these requirements?", + "options": [ + "A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Atta ch a", + "B. Create a gateway VPC endpoint for Amazon S3 in th e Availability Zone where the EC2 instance is locat ed.", + "C. Run the nslookup tool from inside the EC2 instanc e to obtain the private IP address of the S3 bucket 's", + "D. Use the AWS provided, publicly available ip-range s.json file to obtain the private IP address of the S3" + ], + "correct": "A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Atta ch a", + "explanation": "Explanation/Reference: https://aws.amazon.com/premiumsupport/knowledge-cen ter/connect-s3-vpc-endpoint/", + "references": "" + }, + { + "question": "A solutions architect is designing the architecture of a new application being deployed to the AWS Clo ud. The application will run on Amazon EC2 On-Demand Instan ces and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Applica tion Load Balancer (ALB) will handle the load distributi on. The architecture needs to support distributed s ession data management. 
The company is willing to make cha nges to code if needed. What should the solutions architect do to ensure th at the architecture supports distributed session da ta management?", + "options": [ + "A. Use Amazon ElastiCache to manage and store sessio n data.", + "B. Use session affinity (sticky sessions) of the ALB to manage session data.", + "C. Use Session Manager from AWS Systems Manager to m anage the session.", + "D. Use the GetSessionToken API operation in AWS Secu rity Token Service (AWS STS) to manage the" + ], + "correct": "A. Use Amazon ElastiCache to manage and store sessio n data.", + "explanation": "Explanation/Reference: The correct answer is A. Use Amazon ElastiCache to manage and store session data. In order to support distributed session data management in this scenari o, it is necessary to use a distributed data store such as Amazon ElastiCache. This will allow the session dat a to be stored and accessed by multiple EC2 instanc es across multiple Availability Zones, which is necess ary for a scalable and highly available architectur e. Option B, using session affinity (sticky sessions) of the ALB , would not be sufficient because this would only a llow the session data to be stored on a single EC2 instance, which would not be able to scale across multiple A vailability Zones. Options C and D, using Session Manager and t he GetSessionToken API operation in AWS STS, are not related to session data management and would no t be appropriate solutions for this scenario.", + "references": "" + }, + { + "question": "A company offers a food delivery service that is gr owing rapidly. Because of the growth, the company's order processing system is experiencing scaling problems during peak traffic hours. The current architecture includes the following: \u00b7 A group of Amazon EC2 instances that run in an Am azon EC2 Auto Scaling group to collect orders from the application \u00b7 Another group of EC2 instances that run in an Ama zon EC2 Auto Scaling group to fulfill orders The order collection process occurs quickly, but th e order fulfillment process can take longer. Data must not be lost because of a scaling event. A solutions architect must ensure that the order co llection process and the order fulfillment process can both scale properly during peak traffic hours. The solut ion must optimize utilization of the company's AWS resources. Which solution meets these requirements?", + "options": [ + "A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups.", + "B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups.", + "C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another", + "D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another" + ], + "correct": "D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company hosts multiple production applications. O ne of the applications consists of resources from A mazon EC2, AWS Lambda, Amazon RDS, Amazon Simple Notifica tion Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regi ons. All company resources are tagged with a tag name of \"application\" and a value that corresponds to each application. 
A solutions architect must provide the quickest solution for identifying all of the tagged components. Which solution meets these requirements?", + "options": [ + "A. Use AWS CloudTrail to generate a list of resources with the application tag.", + "B. Use the AWS CLI to query each service across all Regions to report the tagged components.", + "C. Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag.", + "D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the" + ], + "correct": "D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html", + "references": "" + }, + { + "question": "A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5 GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up to 3 months. The company needs the most cost-effective solution that will not increase retrieval time. Which S3 storage class should the company use to meet these requirements?", + "options": [ + "A. S3 Intelligent-Tiering", + "B. S3 Glacier Instant Retrieval", + "C. S3 Standard", + "D. S3 Standard-Infrequent Access (S3 Standard-IA)" + ], + "correct": "A. S3 Intelligent-Tiering", + "explanation": "Explanation/Reference: \"The S3 access pattern for the data is variable and changes rapidly\" => Use S3 Intelligent-Tiering, which automatically transitions objects to the appropriate storage tier as access patterns change.", + "references": "" + }, + { + "question": "A company is developing a new mobile app. The company must implement proper traffic filtering to protect its Application Load Balancer (ALB) against common application-level attacks, such as cross-site scripting or SQL injection. The company has minimal infrastructure and operational staff. The company needs to reduce its share of the responsibility in managing, updating, and securing servers for its AWS environment. What should a solutions architect recommend to meet these requirements?", + "options": [ + "A. Configure AWS WAF rules and associate them with the ALB.", + "B. Deploy the application using Amazon S3 with public hosting enabled.", + "C. Deploy AWS Shield Advanced and add the ALB as a protected resource.", + "D. Create a new ALB that directs traffic to an Amazon EC2 instance running a third-party firewall, which then" + ], + "correct": "C. Deploy AWS Shield Advanced and add the ALB as a protected resource.", + "explanation": "Explanation/Reference: https://aws.amazon.com/shield/features/", + "references": "" + }, + { + "question": "A company's reporting system delivers hundreds of .csv files to an Amazon S3 bucket each day. The company must convert these files to Apache Parquet format and must store the files in a transformed data bucket. Which solution will meet these requirements with the LEAST development effort?", + "options": [ + "A. Create an Amazon EMR cluster with Apache Spark installed. Write a Spark application to transform the", + "B. Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL)", + "C. Use AWS Batch to create a job definition with Bash syntax to transform the data and output the data to the", + "D. 
Create an AWS Lambda function to transform the da ta and output the data to the transformed data buck et." + ], + "correct": "", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/prescriptive-guidance/l atest/patterns/three-aws-glue-etl- job-types-for-co nverting- data-to-apache-parquet.html", + "references": "" + }, + { + "question": "A company has 700 TB of backup data stored in netwo rk attached storage (NAS) in its data center. This backup data need to be accessible for infrequent re gulatory requests and must be retained 7 years. The company has decided to migrate this backup data fro m its data center to AWS. The migration must be complete within 1 month. The company has 500 Mbps o f dedicated bandwidth on its public internet connec tion available for data transfer. What should a solutions architect do to migrate and store the data at the LOWEST cost?", + "options": [ + "A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to A mazon S3", + "B. Deploy a VPN connection between the data center a nd Amazon VPC. Use the AWS CLI to copy the data", + "C. Provision a 500 Mbps AWS Direct Connect connectio n and transfer the data to Amazon S3. Use a lifecyc le", + "D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task" + ], + "correct": "A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to A mazon S3", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/datasync/latest/usergui de/create-s3-location.html#using- storage-classes", + "references": "" + }, + { + "question": "A company has a serverless website with millions of objects in an Amazon S3 bucket. The company uses t he S3 bucket as the origin for an Amazon CloudFront di stribution. The company did not set encryption on t he S3 bucket before the objects were loaded. A solutions architect needs to enable encryption for all existi ng objects and for all objects that are added to the S3 bucket in the future. Which solution will meet these requirements with th e LEAST amount of effort?", + "options": [ + "A. Create a new S3 bucket. Turn on the default encry ption settings for the new S3 bucket.", + "B. Turn on the default encryption settings for the S 3 bucket. Use the S3 Inventory feature to create a .csv file", + "C. Create a new encryption key by using AWS Key Mana gement Service (AWS KMS).", + "D. Navigate to Amazon S3 in the AWS Management Conso le. Browse the S3 bucket's objects. Sort by the" + ], + "correct": "B. Turn on the default encryption settings for the S 3 bucket. Use the S3 Inventory feature to create a .csv file", + "explanation": "Explanation/Reference: https://aws.amazon.com/blogs/storage/encrypting-obj ects-with-amazon-s3-batch- operations/", + "references": "" + }, + { + "question": "A company runs a global web application on Amazon E C2 instances behind an Application Load Balancer. T he application stores data in Amazon Aurora. The compa ny needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The solution does not need to handle th e load when the primary infrastructure is healthy. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Deploy the application with the required infrastr ucture elements in place. Use Amazon Route 53 to co nfigure", + "B. Host a scaled-down deployment of the application in a second AWS Region. 
Use Amazon Route 53 to", + "C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active -", + "D. Back up data with AWS Backup. Use the backup to c reate the required infrastructure in a second AWS" + ], + "correct": "A. Deploy the application with the required infrastr ucture elements in place. Use Amazon Route 53 to co nfigure", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/dns-failover-types.html", + "references": "" + }, + { + "question": "A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architec t needs to make the web server accessible from ever ywhere on port 443. Which combination of steps will accomplish this tas k? (Choose two.)", + "options": [ + "A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.", + "B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.", + "C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.", + "D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destinati on" + ], + "correct": "", + "explanation": "Explanation/Reference: A, E is perfect the combination. To be more precise , We should add outbound with \"outbound TCP port 32 768- 65535 to destination 0.0.0.0/0.\" as an ephemeral po rt due to the stateless of NACL.", + "references": "" + }, + { + "question": "A company's application is having performance issue s. The application is stateful and needs to complet e in- memory tasks on Amazon EC2 instances. The company u sed AWS CloudFormation to deploy infrastructure and used the M5 EC2 instance family. As traffic inc reased, the application performance degraded. Users are reporting delays when the users attempt to access t he application. Which solution will resolve these issues in the MOS T operationally efficient way?", + "options": [ + "A. Replace the EC2 instances with T3 EC2 instances t hat run in an Auto Scaling group.", + "B. Modify the CloudFormation templates to run the EC 2 instances in an Auto Scaling group.", + "C. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances.", + "D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances." + ], + "correct": "D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances.", + "explanation": "Explanation/Reference: \"in-memory tasks\" => need the \"R\" EC2 instance type to archive memory optimization. So we are concerne d about C & D. Because EC2 instances don't have built -in memory metrics to CW by default. As a result, w e have to install the CW agent to archive the purpose.", + "references": "" + }, + { + "question": "A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but shou ld be completed within a few seconds after a reques t is made. Which compute service should the solutions architec t have the API invoke to deliver the requirements a t the lowest cost?", + "options": [ + "A. An AWS Glue job", + "B. An AWS Lambda function", + "C. 
A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)",
      "D. A containerized service hosted in Amazon ECS with Amazon EC2"
    ],
    "correct": "B. An AWS Lambda function",
    "explanation": "Explanation/Reference: API Gateway + Lambda is the perfect solution for modern applications with serverless architecture. Lambda incurs no cost while the API is idle, which suits a highly variable request volume.",
    "references": ""
  },
  {
    "question": "A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log files for 7 years. The log files will be analyzed by a reporting tool that must be able to access all the files concurrently. Which storage solution meets these requirements MOST cost-effectively?",
    "options": [
      "A. Amazon Elastic Block Store (Amazon EBS)",
      "B. Amazon Elastic File System (Amazon EFS)",
      "C. Amazon EC2 instance store",
      "D. Amazon S3"
    ],
    "correct": "D. Amazon S3",
    "explanation": "Explanation/Reference: Amazon S3 is the most cost-effective option and supports concurrent access to the files by the reporting tool.",
    "references": ""
  },
  {
    "question": "A company has hired an external vendor to perform work in the company's AWS account. The vendor uses an automated tool that is hosted in an AWS account that the vendor owns. The vendor does not have IAM access to the company's AWS account. How should a solutions architect grant this access to the vendor?",
    "options": [
      "A. Create an IAM role in the company's account to delegate access to the vendor's IAM role.",
      "B. Create an IAM user in the company's account with a password that meets the password complexity",
      "C. Create an IAM group in the company's account. Add the tool's IAM user from the vendor account to the",
      "D. Create a new identity provider by choosing \"AWS account\" as the provider type in the IAM console. Supply"
    ],
    "correct": "A. Create an IAM role in the company's account to delegate access to the vendor's IAM role.",
    "explanation": "Explanation/Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html",
    "references": ""
  },
  {
    "question": "A company has deployed a Java Spring Boot application as a pod that runs on Amazon Elastic Kubernetes Service (Amazon EKS) in private subnets. The application needs to write data to an Amazon DynamoDB table. A solutions architect must ensure that the application can interact with the DynamoDB table without exposing traffic to the internet. Which combination of steps should the solutions architect take to accomplish this goal? (Choose two.)",
    "options": [
      "A. Attach an IAM role that has sufficient privileges to the EKS pod.",
      "B. Attach an IAM user that has sufficient privileges to the EKS pod.",
      "C. Allow outbound connectivity to the DynamoDB table through the private subnets' network ACLs.",
      "D. Create a VPC endpoint for DynamoDB."
    ],
    "correct": "",
    "explanation": "Explanation/Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html",
    "references": ""
  },
  {
    "question": "A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances randomly. Which combination of steps should the company take to meet these requirements? (Choose two.)",
    "options": [
      "A. Create an Amazon Route 53 failover routing policy.",
      "B. Create an Amazon Route 53 weighted routing policy.",
      "C. 
Create an Amazon Route 53 multivalue answer routi ng policy.", + "D. Launch three EC2 instances: two instances in one Availability Zone and one instance in another Avail ability" + ], + "correct": "", + "explanation": "Explanation/Reference: https://aws.amazon.com/premiumsupport/knowledge-cen ter/multivalue-versus-simple- policies/", + "references": "" + }, + { + "question": "A media company collects and analyzes user activity data on premises. The company wants to migrate thi s capability to AWS. The user activity data store wil l continue to grow and will be petabytes in size. T he company needs to build a highly available data ingestion so lution that facilitates on-demand analytics of exis ting data and new data with SQL. Which solution will meet these requirements with th e LEAST operational overhead?", + "options": [ + "A. Send activity data to an Amazon Kinesis data stre am. Configure the stream to deliver the data to an", + "B. Send activity data to an Amazon Kinesis Data Fire hose delivery stream. Configure the stream to deliv er the", + "C. Place activity data in an Amazon S3 bucket. Confi gure Amazon S3 to run an AWS Lambda function on the", + "D. Create an ingestion service on Amazon EC2 instanc es that are spread across multiple Availability Zon es." + ], + "correct": "B. Send activity data to an Amazon Kinesis Data Fire hose delivery stream. Configure the stream to deliv er the", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company collects data from thousands of remote de vices by using a RESTful web services application t hat runs on an Amazon EC2 instance. The EC2 instance re ceives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The num ber of remote devices will increase into the millio ns soon. The company needs a highly scalable solution that m inimizes operational overhead. Which combination of steps should a solutions archi tect take to meet these requirements? (Choose two.)", + "options": [ + "A. Use AWS Glue to process the raw data in Amazon S3 .", + "B. Use Amazon Route 53 to route traffic to different EC2 instances.", + "C. Add more EC2 instances to accommodate the increas ing amount of incoming data.", + "D. Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the" + ], + "correct": "", + "explanation": "Explanation/Reference: \"RESTful web services\" => API Gateway. \"EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket\" => GLUE with (Extract - Transform - Load)", + "references": "" + }, + { + "question": "A company needs to retain its AWS CloudTrail logs f or 3 years. The company is enforcing CloudTrail acr oss a set of AWS accounts by using AWS Organizations from the parent account. The CloudTrail target S3 bucke t is configured with S3 Versioning enabled. An S3 Lifecy cle policy is in place to delete current objects af ter 3 years. After the fourth year of use of the S3 bucket, the S3 bucket metrics show that the number of objects h as continued to rise. However, the number of new Cloud Trail logs that are delivered to the S3 bucket has remained consistent. Which solution will delete objects that are older t han 3 years in the MOST cost-effective manner?", + "options": [ + "A. Configure the organization's centralized CloudTra il trail to expire objects after 3 years.", + "B. Configure the S3 Lifecycle policy to delete previ ous versions as well as current versions.", + "C. 
Create an AWS Lambda function to enumerate and de lete objects from Amazon S3 that are older than 3", + "D. Configure the parent account as the owner of all objects that are delivered to the S3 bucket." + ], + "correct": "B. Configure the S3 Lifecycle policy to delete previ ous versions as well as current versions.", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/awscloudtrail/latest/us erguide/best-practices- security.html#:~:text=The% 20CloudTrail%20trail,time%20has%20passed.", + "references": "" + }, + { + "question": "A company has an API that receives real-time data f rom a fleet of monitoring devices. The API stores t his data in an Amazon RDS DB instance for later analysis. Th e amount of data that the monitoring devices send t o the API fluctuates. During periods of heavy traffic, th e API often returns timeout errors. After an inspection of the logs, the company determ ines that the database is not capable of processing the volume of write traffic that comes from the API. A solutions architect must minimize the number of con nections to the database and must ensure that data is not lo st during periods of heavy traffic. Which solution will meet these requirements?", + "options": [ + "A. Increase the size of the DB instance to an instan ce type that has more available memory.", + "B. Modify the DB instance to be a Multi-AZ DB instan ce. Configure the application to write to all activ e RDS DB", + "C. Modify the API to write incoming data to an Amazo n Simple Queue Service (Amazon SQS) queue. Use an", + "D. Modify the API to write incoming data to an Amazo n Simple Notification Service (Amazon SNS) topic. U se", + "C. Modify the API to write incoming data to an Amaz on Simple Queue Service (Amazon SQS) queue. Use an" + ], + "correct": "C. Modify the API to write incoming data to an Amazo n Simple Queue Service (Amazon SQS) queue. Use an", + "explanation": "Explanation Explanation/Reference:", + "references": "" + }, + { + "question": "A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increase s or decreases. The company needs a new solution th at simplifies the process of adding or removing comput e capacity to or from its database tier as needed. The solution also must offer improved performance, scal ing, and durability with minimal effort from operat ions. Which solution meets these requirements?", + "options": [ + "A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.", + "B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.", + "C. Combine the databases into one larger MySQL datab ase. Run the larger database on larger EC2 instance s.", + "D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new" + ], + "correct": "A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.", + "explanation": "Explanation/Reference: https://aws.amazon.com/rds/aurora/serverless/", + "references": "" + }, + { + "question": "A company is concerned that two NAT instances in us e will no longer be able to support the traffic nee ded for the company's application. A solutions architect wa nts to implement a solution that is highly availabl e, fault tolerant, and automatically scalable. What should the solutions architect recommend?", + "options": [ + "A. Remove the two NAT instances and replace them wit h two NAT gateways in the same Availability Zone.", + "B. 
Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones.",
      "C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.",
      "D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network"
    ],
    "correct": "C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.",
    "explanation": "Explanation/Reference: NAT gateways are managed, highly available within their Availability Zone, and scale automatically. If resources in multiple Availability Zones share one NAT gateway and that gateway's Availability Zone goes down, those resources lose internet access, so a NAT gateway should be created in each Availability Zone.",
    "references": ""
  },
  {
    "question": "An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B. Both VPCs are in the same AWS account. Which solution will provide the required access MOST securely?",
    "options": [
      "A. Create a DB instance security group that allows all traffic from the public IP address of the application",
      "B. Configure a VPC peering connection between VPC A and VPC B.",
      "C. Make the DB instance publicly accessible. Assign a public IP address to the DB instance.",
      "D. Launch an EC2 instance with an Elastic IP address into VPC B. Proxy all requests through the new EC2"
    ],
    "correct": "B. Configure a VPC peering connection between VPC A and VPC B.",
    "explanation": "Explanation/Reference:",
    "references": ""
  },
  {
    "question": "A company runs demonstration environments for its customers on Amazon EC2 instances. Each environment is isolated in its own VPC. The company's operations team needs to be notified when RDP or SSH access to an environment has been established.",
    "options": [
      "A. Configure Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP",
      "B. Configure the EC2 instances with an IAM instance profile that has an IAM role with the",
      "C. Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric filters.",
      "D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification."
    ],
    "correct": "C. Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric filters.",
    "explanation": "Explanation/Reference: EC2 Instance State-change Notifications are not the same as RDP or SSH established connection notifications. Use Amazon CloudWatch Logs to monitor SSH access to your Amazon EC2 Linux instances so that you can monitor rejected (or established) SSH connection requests and take action.",
    "references": ""
  },
  {
    "question": "A solutions architect has created a new AWS account and must secure AWS account root user access. Which combination of actions will accomplish this? (Choose two.)",
    "options": [
      "A. Ensure the root user uses a strong password.",
      "B. Enable multi-factor authentication to the root user.",
      "C. Store root user access keys in an encrypted Amazon S3 bucket.",
      "D. Add the root user to a group containing administrative permissions."
    ],
    "correct": "",
    "explanation": "Explanation/Reference: Enabling multi-factor authentication for the root user provides an additional layer of security beyond a strong password.",
    "references": ""
  },
  {
    "question": "A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). 
The appl ication will also use an Amazon Aurora database. Al l data for the application must be encrypted at rest and i n transit. Which solution will meet these requirements?", + "options": [ + "A. Use AWS Key Management Service (AWS KMS) certific ates on the ALB to encrypt data in transit. Use", + "B. Use the AWS root account to log in to the AWS Man agement Console. Upload the company's encryption", + "C. Use AWS Key Management Service (AWS KMS) to encry pt the EBS volumes and Aurora database storage", + "D. Use BitLocker to encrypt all data at rest. Import the company's TLS certificate keys to AWS Key" + ], + "correct": "C. Use AWS Key Management Service (AWS KMS) to encry pt the EBS volumes and Aurora database storage", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the same tables. The applications need to be migrated one by one wi th a month in between each migration. Management has exp ressed concerns that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration. What should a solutions architect recommend?", + "options": [ + "A. Use AWS DataSync for the initial migration. Use A WS Database Migration Service (AWS DMS) to create a", + "B. Use AWS DataSync for the initial migration. Use A WS Database Migration Service (AWS DMS) to create a", + "C. Use the AWS Schema Conversion Tool with AWS Datab ase Migration Service (AWS DMS) using a", + "D. Use the AWS Schema Conversion Tool with AWS Datab ase Migration Service (AWS DMS) using a" + ], + "correct": "C. Use the AWS Schema Conversion Tool with AWS Datab ase Migration Service (AWS DMS) using a", + "explanation": "Explanation Explanation/Reference: https://aws.amazon.com/ko/premiumsupport/knowledge- center/dms-memory- optimization/", + "references": "" + }, + { + "question": "A company has a three-tier application for image sh aring. The application uses an Amazon EC2 instance for the front-end layer, another EC2 instance for the a pplication layer, and a third EC2 instance for a My SQL database. A solutions architect must design a scala ble and highly available solution that requires the least amount of change to the application. Which solution meets these requirements?", + "options": [ + "A. Use Amazon S3 to host the front-end layer. Use AW S Lambda functions for the application layer. Move the", + "B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the applic ation", + "C. Use Amazon S3 to host the front-end layer. Use a fleet of EC2 instances in an Auto Scaling group for the", + "D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the applic ation" + ], + "correct": "D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the applic ation", + "explanation": "Explanation/Reference: for \"Highly available\": Multi-AZ & for \"least amoun t of changes to the application\": Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to appl ication health monitoring", + "references": "" + }, + { + "question": "An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instanc e in VPC-B. Both VPCs are in separate AWS accounts. 
The network administrator needs to design a solution to configure secure access to EC2 instance in VPC-B fr om VPC-A. The connectivity should not have a single point of failure or bandwidth concerns. Which solution will meet these requirements?", + "options": [ + "A. Set up a VPC peering connection between VPC-A and VPC-B.", + "B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.", + "C. Attach a virtual private gateway to VPC-B and set up routing from VPC-A.", + "D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate r outes" + ], + "correct": "A. Set up a VPC peering connection between VPC-A and VPC-B.", + "explanation": "Explanation/Reference: AWS uses the existing infrastructure of a VPC to cr eate a VPC peering connection; it is neither a gate way nor a VPN connection, and does not rely on a separate pie ce of physical hardware. There is no single point o f failure for communication or a bandwidth bottleneck. https: //docs.aws.amazon.com/vpc/latest/peering/what-is-vp c- peering.html", + "references": "" + }, + { + "question": "notified as soon as the Amazon EC2 instance usage f or a given month exceeds a specific threshold for e ach account. What should a solutions architect do to meet this r equirement MOST cost-effectively?", + "options": [ + "A. Use Cost Explorer to create a daily report of cos ts by service. Filter the report by EC2 instances. Configure", + "B. Use Cost Explorer to create a monthly report of c osts by service. Filter the report by EC2 instances .", + "C. Use AWS Budgets to create a cost budget for each account. Set the period to monthly. Set the scope t o", + "D. Use AWS Cost and Usage Reports to create a report with hourly granularity. Integrate the report data with" + ], + "correct": "C. Use AWS Budgets to create a cost budget for each account. Set the period to monthly. Set the scope t o", + "explanation": "Explanation/Reference: https://aws.amazon.com/getting-started/hands-on/con trol-your-costs-free-tier-budgets/", + "references": "" + }, + { + "question": "A solutions architect needs to design a new microse rvice for a company's application. Clients must be able to call an HTTPS endpoint to reach the microservice. T he microservice also must use AWS Identity and Acce ss Management (IAM) to authenticate calls. The solutio ns architect will write the logic for this microser vice by using a single AWS Lambda function that is written in Go 1.x. Which solution will deploy the function in the MOST operationally efficient way?", + "options": [ + "A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM", + "B. Create a Lambda function URL for the function. Sp ecify AWS_IAM as the authentication type.", + "C. Create an Amazon CloudFront distribution. Deploy the function to Lambda@Edge. Integrate IAM", + "D. Create an Amazon CloudFront distribution. Deploy the function to CloudFront Functions.", + "A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM" + ], + "correct": "A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company previously migrated its data warehouse so lution to AWS. The company also has an AWS Direct Connect connection. Corporate office users query th e data warehouse using a visualization tool. 
The av erage size of a query returned by the data warehouse is 5 0 MB and each webpage sent by the visualization too l is approximately 500 KB. Result sets returned by the d ata warehouse are not cached. Which solution provides the LOWEST data transfer eg ress cost for the company?", + "options": [ + "A. Host the visualization tool on premises and query the data warehouse directly over the internet.", + "B. Host the visualization tool in the same AWS Regio n as the data warehouse. Access it over the interne t.", + "C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect", + "D. Host the visualization tool in the same AWS Regio n as the data warehouse and access it over a Direct" + ], + "correct": "D. Host the visualization tool in the same AWS Regio n as the data warehouse and access it over a Direct", + "explanation": "Explanation/Reference: https://aws.amazon.com/directconnect/pricing/ https ://aws.amazon.com/blogs/aws/aws- data-transfer-pric es- reduced/", + "references": "" + }, + { + "question": "An online learning company is migrating to the AWS Cloud. The company maintains its student records in a PostgreSQL database. The company needs a solution i n which its data is available and online across mul tiple AWS Regions at all times. Which solution will meet these requirements with th e LEAST amount of operational overhead?", + "options": [ + "A. Migrate the PostgreSQL database to a PostgreSQL c luster on Amazon EC2 instances.", + "B. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance with the Multi-AZ", + "C. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance.", + "D. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Set up DB snapshots" + ], + "correct": "C. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance.", + "explanation": "Explanation/Reference: https://aws.amazon.com/about-aws/whats-new/2018/01/ amazon-rds-read-replicas-now- support-multi-az- deployments/", + "references": "" + }, + { + "question": "A company hosts its web application on AWS using se ven Amazon EC2 instances. The company requires that the IP addresses of all healthy EC2 instances be re turned in response to DNS queries. Which policy should be used to meet this requiremen t?", + "options": [ + "A. Simple routing policy", + "B. Latency routing policy", + "C. Multivalue routing policy", + "D. Geolocation routing policy" + ], + "correct": "C. Multivalue routing policy", + "explanation": "Explanation Explanation/Reference: Use a multivalue answer routing policy to help dist ribute DNS responses across multiple resources. For example, use multivalue answer routing when you wan t to associate your routing records with a Route 53 health check. For example, use multivalue answer ro uting when you need to return multiple values for a DNS query and route traffic to multiple IP addresses. h ttps://aws.amazon.com/premiumsupport/knowledge- cen ter/ multivalue-versus-simple-policies/", + "references": "" + }, + { + "question": "A medical research lab produces data that is relate d to a new study. The lab wants to make the data av ailable with minimum latency to clinics across the country for their on-premises, file-based applications. The data files are stored in an Amazon S3 bucket that has read-onl y permissions for each clinic. What should a solutions architect recommend to meet these requirements?", + "options": [ + "A. 
Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic",
      "B. Migrate the files to each clinic's on-premises applications by using AWS DataSync for processing.",
      "C. Deploy an AWS Storage Gateway volume gateway as a virtual machine (VM) on premises at each clinic.",
      "D. Attach an Amazon Elastic File System (Amazon EFS) file system to each clinic's on-premises servers."
    ],
    "correct": "A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic",
    "explanation": "Explanation/Reference:",
    "references": ""
  },
  {
    "question": "A company is using a content management system that runs on a single Amazon EC2 instance. The EC2 instance contains both the web server and the database software. The company must make its website platform highly available and must enable the website to scale to meet user demand. What should a solutions architect recommend to meet these requirements?",
    "options": [
      "A. Move the database to Amazon RDS, and enable automatic backups. Manually launch another EC2 instance",
      "B. Migrate the database to an Amazon Aurora instance with a read replica in the same Availability Zone as the",
      "C. Move the database to Amazon Aurora with a read replica in another Availability Zone.",
      "D. Move the database to a separate EC2 instance, and schedule backups to Amazon S3."
    ],
    "correct": "C. Move the database to Amazon Aurora with a read replica in another Availability Zone.",
    "explanation": "Explanation/Reference:",
    "references": ""
  },
  {
    "question": "A company is launching an application on AWS. The application uses an Application Load Balancer (ALB) to direct traffic to at least two Amazon EC2 instances in a single target group. The instances are in an Auto Scaling group for each environment. The company requires a development environment and a production environment. The production environment will have periods of high traffic. Which solution will configure the development environment MOST cost-effectively?",
    "options": [
      "A. Reconfigure the target group in the development environment to have only one EC2 instance as a target.",
      "B. Change the ALB balancing algorithm to least outstanding requests.",
      "C. Reduce the size of the EC2 instances in both environments.",
      "D. Reduce the maximum number of EC2 instances in the development environment's Auto Scaling group."
    ],
    "correct": "D. Reduce the maximum number of EC2 instances in the development environment's Auto Scaling group.",
    "explanation": "Explanation/Reference:",
    "references": ""
  },
  {
    "question": "A company runs a web application on Amazon EC2 instances in multiple Availability Zones. The EC2 instances are in private subnets. A solutions architect implements an internet-facing Application Load Balancer (ALB) and specifies the EC2 instances as the target group. However, the internet traffic is not reaching the EC2 instances. How should the solutions architect reconfigure the architecture to resolve this issue?",
    "options": [
      "A. Replace the ALB with a Network Load Balancer. Configure a NAT gateway in a public subnet to allow",
      "B. 
Move the EC2 instances to public subnets. Add a r ule to the EC2 instances' security groups to allow", + "C. Update the route tables for the EC2 instances' su bnets to send 0.0.0.0/0 traffic through the interne t gateway", + "D. Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update t he route" + ], + "correct": "D. Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update t he route", + "explanation": "Explanation/Reference: D. Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets with a route to the p rivate subnets. This solution will resolve the issu e by allowing the internet traffic to reach the EC2 instances. By creating public subnets in each availability zone and associating them with the ALB, the internet traffic will be directed to the ALB. Updating the route ta bles for the public subnets with a route to the private subnets will allow the traffic to be routed to the private subnets where the EC2 instances reside. This ensures that the tra ffic reaches the correct target group, and the secu rity group of the instances allows inbound traffic from the in ternet.", + "references": "" + }, + { + "question": "A company has deployed a database in Amazon RDS for MySQL. Due to increased transactions, the database support team is reporting slow reads against the DB instance and recommends adding a read replica. Which combination of actions should a solutions arc hitect take before implementing this change? (Choos e two.)", + "options": [ + "A. Enable binlog replication on the RDS primary node .", + "B. Choose a failover priority for the source DB inst ance.", + "C. Allow long-running transactions to complete on th e source DB instance.", + "D. Create a global table and specify the AWS Regions where the table will be available." + ], + "correct": "", + "explanation": "Explanation/Reference: \"An active, long-running transaction can slow the p rocess of creating the read replica. We recommend t hat you wait for long-running transactions to complete befo re creating a read replica. If you create multiple read replicas in parallel from the same source DB instance, Amazo n RDS takes only one snapshot at the start of the f irst create action. When creating a read replica, there are a few things to consider. First, you must enabl e automatic backups on the source DB instance by sett ing the backup retention period to a value other th an 0. This requirement also applies to a read replica tha t is the source DB instance for another read replic a\" https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/USER_ReadRepl.htm", + "references": "" + }, + { + "question": "A company runs analytics software on Amazon EC2 ins tances. The software accepts job requests from user s to process data that has been uploaded to Amazon S3 . Users report that some submitted data is not bein g processed Amazon CloudWatch reveals that the EC2 in stances have a consistent CPU utilization at or nea r 100%. The company wants to improve system performan ce and scale the system based on user load. What should a solutions architect do to meet these requirements?", + "options": [ + "A. Create a copy of the instance. Place all instance s behind an Application Load Balancer.", + "B. Create an S3 VPC endpoint for Amazon S3. Update t he software to reference the endpoint.", + "C. Stop the EC2 instances. 
Modify the instance type to one with a more powerful CPU and more memory.", + "D. Route incoming requests to Amazon Simple Queue Se rvice (Amazon SQS). Configure an EC2 Auto Scaling", + "D. Route incoming requests to Amazon Simple Queue S ervice (Amazon SQS). Configure an EC2 Auto Scaling" + ], + "correct": "D. Route incoming requests to Amazon Simple Queue Se rvice (Amazon SQS). Configure an EC2 Auto Scaling", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access data. The solution must be fully managed. Which AWS solution meets these requirements?", + "options": [ + "A. Create an AWS Storage Gateway volume gateway. Cre ate a file share that uses the required client", + "B. Create an AWS Storage Gateway tape gateway. Confi gure tapes to use Amazon S3.", + "C. Create an Amazon EC2 Windows instance. Install an d configure a Windows file share role on the instan ce.", + "D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin serve r." + ], + "correct": "D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin serve r.", + "explanation": "Explanation/Reference: Amazon FSx has native support for Windows file syst em features and for the industry- standard Server Message Block (SMB) protocol to access file storage over a network. https://docs.aws.amazon.com/fsx/la test/ WindowsGuide/what-is.html", + "references": "" + }, + { + "question": "A company's security team requests that network tra ffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then accessed i ntermittently. What should a solutions architect do to meet these requirements when configuring the logs?", + "options": [ + "A. Use Amazon CloudWatch as the target. Set the Clou dWatch log group with an expiration of 90 days", + "B. Use Amazon Kinesis as the target. Configure the K inesis stream to always retain the logs for 90 days .", + "C. Use AWS CloudTrail as the target. Configure Cloud Trail to save to an Amazon S3 bucket, and enable S3", + "D. Use Amazon S3 as the target. Enable an S3 Lifecyc le policy to transition the logs to S3 Standard-Inf requent" + ], + "correct": "D. Use Amazon S3 as the target. Enable an S3 Lifecyc le policy to transition the logs to S3 Standard-Inf requent", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/CloudWatchLogsConcepts .html", + "references": "" + }, + { + "question": "An Amazon EC2 instance is located in a private subn et in a new VPC. This subnet does not have outbound internet access, but the EC2 instance needs the abi lity to download monthly security updates from an o utside vendor. What should a solutions architect do to meet these requirements? A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table to us e the internet gateway as the default route.", + "options": [ + "B. Create a NAT gateway, and place it in a public su bnet. Configure the private subnet route table to u se the", + "C. Create a NAT instance, and place it in the same s ubnet where the EC2 instance is located.", + "D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the sam e" + ], + "correct": "B. Create a NAT gateway, and place it in a public su bnet. 
Configure the private subnet route table to u se the", + "explanation": "Explanation/Reference: https://medium.com/@tshemku/aws-internet-gateway-vs -nat-gateway-vs-nat-instance- 30523096df22", + "references": "" + }, + { + "question": "A solutions architect needs to design a system to s tore client case files. The files are core company assets and are important. The number of files will grow over t ime. The files must be simultaneously accessible from mu ltiple application servers that run on Amazon EC2 instances. The solution must have built-in redundan cy. Which solution meets these requirements?", + "options": [ + "A. Amazon Elastic File System (Amazon EFS)", + "B. Amazon Elastic Block Store (Amazon EBS)", + "C. Amazon S3 Glacier Deep Archive", + "D. AWS Backup" + ], + "correct": "A. Amazon Elastic File System (Amazon EFS)", + "explanation": "Explanation/Reference: EFS Amazon Elastic File System (EFS) automatically grows and shrinks as you add and remove files with no need for management or provisioning.", + "references": "" + }, + { + "question": "A solutions architect has created two IAM policies: Policy1 and Policy2. Both policies are attached to an IAM group. A cloud engineer is added as an IAM user to the IAM group. Which action will the cloud engineer be abl e to perform?", + "options": [ + "A. Deleting IAM users", + "B. Deleting directories", + "C. Deleting Amazon EC2 instances", + "D. Deleting logs from Amazon CloudWatch Logs" + ], + "correct": "C. Deleting Amazon EC2 instances", + "explanation": "Explanation Explanation/Reference: ec2:* Allows full control of EC2 instances, so C is correct The policy only grants get and list permis sion on IAM users, so not A ds:Delete deny denies delete-direct ory, so not B, see https://awscli.amazonaws.com/v2/ documentation/api/latest/reference/ds/index.html Th e policy only grants get and describe permission on logs, so not D", + "references": "" + }, + { + "question": "A company is reviewing a recent migration of a thre e-tier application to a VPC. The security team disc overs that the principle of least privilege is not being appli ed to Amazon EC2 security group ingress and egress rules between the application tiers. What should a solutions architect do to correct thi s issue?", + "options": [ + "A. Create security group rules using the instance ID as the source or destination.", + "B. Create security group rules using the security gr oup ID as the source or destination.", + "C. Create security group rules using the VPC CIDR bl ocks as the source or destination.", + "D. Create security group rules using the subnet CIDR blocks as the source or destination." + ], + "correct": "B. Create security group rules using the security gr oup ID as the source or destination.", + "explanation": "Explanation/Reference: Security Group Rulesapply to instances https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /security-group-rules.html", + "references": "" + }, + { + "question": "A company has an ecommerce checkout workflow that w rites an order to a database and calls a service to process the payment. Users are experiencing timeout s during the checkout process. When users resubmit the checkout form, multiple unique orders are created f or the same desired transaction. How should a solutions architect refactor this work flow to prevent the creation of multiple orders?", + "options": [ + "A. Configure the web application to send an order me ssage to Amazon Kinesis Data Firehose. Set the", + "B. 
Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application pat h", + "C. Store the order in the database. Send a message t hat includes the order number to Amazon Simple", + "D. Store the order in the database. Send a message t hat includes the order number to an Amazon Simple", + "D. Store the order in the database. Send a message that includes the order number to an Amazon Simple" + ], + "correct": "D. Store the order in the database. Send a message t hat includes the order number to an Amazon Simple", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A solutions architect is implementing a document re view application using an Amazon S3 bucket for stor age. The solution must prevent accidental deletion of th e documents and ensure that all versions of the doc uments are available. Users must be able to download, modi fy, and upload documents. Which combination of actions should be taken to mee t these requirements? (Choose two.)", + "options": [ + "A. Enable a read-only bucket ACL.", + "B. Enable versioning on the bucket.", + "C. Attach an IAM policy to the bucket.", + "D. Enable MFA Delete on the bucket." + ], + "correct": "", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company is building a solution that will report A mazon EC2 Auto Scaling events across all the applic ations in an AWS account. The company needs to use a serverle ss solution to store the EC2 Auto Scaling status da ta in Amazon S3. The company then will use the data in Am azon S3 to provide near-real-time updates in a dashboard. The solution must not affect the speed o f EC2 instance launches. How should the company move the data to Amazon S3 t o meet these requirements?", + "options": [ + "A. Use an Amazon CloudWatch metric stream to send th e EC2 Auto Scaling status data to Amazon Kinesis", + "B. Launch an Amazon EMR cluster to collect the EC2 A uto Scaling status data and send the data to Amazon", + "C. Create an Amazon EventBridge rule to invoke an AW S Lambda function on a schedule.", + "D. Use a bootstrap script during the launch of an EC 2 instance to install Amazon Kinesis Agent. Configu re" + ], + "correct": "A. Use an Amazon CloudWatch metric stream to send th e EC2 Auto Scaling status data to Amazon Kinesis", + "explanation": "Explanation/Reference: You can use metric streams to continually stream Cl oudWatch metrics to a destination of your choice, w ith near-real-time delivery and low latency. One of the use cases is Data Lake: create a metric stream and direct it to an Amazon Kinesis Data Firehose delivery stream that delivers your CloudWatch metrics to a data lak e such as Amazon S3. https://docs.aws.amazon.com/AmazonClo udWatch/latest/monitoring/CloudWatch- Metric- Streams.html", + "references": "" + }, + { + "question": "A company has an application that places hundreds o f .csv files into an Amazon S3 bucket every hour. T he files are 1 GB in size. Each time a file is uploade d, the company needs to convert the file to Apache Parquet format and place the output file into an S3 bucket. Which solution will meet these requirements with th e LEAST operational overhead? A. Create an AWS Lambda function to download the .cs v files, convert the files to Parquet format, and p lace the output files in an S3 bucket. Invoke the Lambda function for each S3 PUT event.", + "options": [ + "B. 
Create an Apache Spark job to read the .csv files , convert the files to Parquet format, and place th e output", + "C. Create an AWS Glue table and an AWS Glue crawler for the S3 bucket where the application places the", + "D. Create an AWS Glue extract, transform, and load ( ETL) job to convert the .csv files to Parquet forma t and" + ], + "correct": "D. Create an AWS Glue extract, transform, and load ( ETL) job to convert the .csv files to Parquet forma t and", + "explanation": "Explanation/Reference: https://docs.aws.amazon.com/prescriptive-guidance/l atest/patterns/three-aws-glue-etl- job-types-for-co nverting- data-to-apache-parquet.html", + "references": "" + }, + { + "question": "A company is implementing new data retention polici es for all databases that run on Amazon RDS DB instances. The company must retain daily backups fo r a minimum period of 2 years. The backups must be consistent and restorable. Which solution should a solutions architect recomme nd to meet these requirements?", + "options": [ + "A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily", + "B. Configure a backup window for the RDS DB instance s for daily snapshots. Assign a snapshot retention", + "C. Configure database transaction logs to be automat ically backed up to Amazon CloudWatch Logs with an", + "D. Configure an AWS Database Migration Service (AWS DMS) replication task. Deploy a replication instanc e," + ], + "correct": "A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily", + "explanation": "Explanation/Reference: Create a backup vault in AWS Backup to retain RDS b ackups. Create a new backup plan with a daily sched ule and an expiration period of 2 years after creation. Assign the RDS DB instances to the backup plan.", + "references": "" + }, + { + "question": "A company's compliance team needs to move its file shares to AWS. The shares run on a Windows Server SMB file share. A self-managed on-premises Active D irectory controls access to the files and folders. The company wants to use Amazon FSx for Windows Fil e Server as part of the solution. The company must ensure that the on-premises Active Directory groups restrict access to the FSx for Windows File Server SMB compliance shares, folders, and files after the mov e to AWS. The company has created an FSx for Window s File Server file system. Which solution will meet these requirements? A. Create an Active Directory Connector to connect t o the Active Directory. Map the Active Directory gr oups to IAM groups to restrict access.", + "options": [ + "B. Assign a tag with a Restrict tag key and a Compli ance tag value. Map the Active Directory groups to IAM", + "C. Create an IAM service-linked role that is linked directly to FSx for Windows File Server to restrict access.", + "D. Join the file system to the Active Directory to r estrict access.", + "D. Join the file system to the Active Directory to restrict access. Joining the FSx for Windows File S erver file" + ], + "correct": "D. Join the file system to the Active Directory to r estrict access.", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company recently announced the deployment of its retail website to a global audience. The website ru ns on multiple Amazon EC2 instances behind an Elastic Loa d Balancer. The instances run in an Auto Scaling gr oup across multiple Availability Zones. 
The company wants to provide its customers with different versions of content based on the devices that the customers use to access the website. Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)",
    "options": [
      "A. Configure Amazon CloudFront to cache multiple versions of the content.",
      "B. Configure a host header in a Network Load Balancer to forward traffic to different instances.",
      "C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.",
      "D. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the"
    ],
    "correct": "",
    "explanation": "Explanation/Reference: https://aws.amazon.com/lambda/edge/",
    "references": ""
  },
  {
    "question": "A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache cluster and an App VPC for the application's Amazon EC2 instances. Both VPCs are in the us-east-1 Region. The solutions architect must implement a solution to provide the application's EC2 instances with access to the ElastiCache cluster. Which solution will meet these requirements MOST cost-effectively?",
    "options": [
      "A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both",
      "C. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both",
      "D. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic"
    ],
    "correct": "A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both",
    "explanation": "Explanation/Reference:",
    "references": ""
  },
  {
    "question": "A company is building an application that consists of several microservices. The company has decided to use container technologies to deploy its software on AWS. The company needs a solution that minimizes the amount of ongoing effort for maintenance and scaling. The company cannot manage additional infrastructure. Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)",
    "options": [
      "A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster.",
      "B. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones.",
      "C. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type.",
      "D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a"
    ],
    "correct": "",
    "explanation": "Explanation/Reference: AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html",
    "references": ""
  },
  {
    "question": "A company has a web application hosted over 10 Amazon EC2 instances with traffic directed by Amazon Route 53. The company occasionally experiences a timeout error when attempting to browse the application. 
Th e networking team finds that some DNS queries return IP addresses of unhealthy instances, resulting in t he timeout error. What should a solutions architect implement to over come these timeout errors? A. Create a Route 53 simple routing policy record fo r each EC2 instance. Associate a health check with each record.", + "options": [ + "B. Create a Route 53 failover routing policy record for each EC2 instance. Associate a health check wit h each", + "C. Create an Amazon CloudFront distribution with EC2 instances as its origin. Associate a health check with", + "D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route t o the", + "D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route to the" + ], + "correct": "D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route t o the", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A solutions architect needs to design a highly avai lable application consisting of web, application, a nd database tiers. HTTPS content delivery should be as close to the edge as possible, with the least delivery time . Which solution meets these requirements and is MOST secure?", + "options": [ + "A. Configure a public Application Load Balancer (ALB ) with multiple redundant Amazon EC2 instances in", + "B. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in private", + "C. Configure a public Application Load Balancer (ALB ) with multiple redundant Amazon EC2 instances in", + "D. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public", + "C. Configure a public Application Load Balancer (AL B) with multiple redundant Amazon EC2 instances in" + ], + "correct": "C. Configure a public Application Load Balancer (ALB ) with multiple redundant Amazon EC2 instances in", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company has a popular gaming platform running on AWS. The application is sensitive to latency becaus e latency can impact the user experience and introduc e unfair advantages to some players. The applicatio n is deployed in every AWS Region. It runs on Amazon EC2 instances that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs) . A solutions architect needs to implement a mechan ism to monitor the health of the application and redire ct traffic to healthy endpoints. Which solution meets these requirements? A. Configure an accelerator in AWS Global Accelerato r. Add a listener for the port that the application listens on, and attach it to a Regional endpoint in each Re gion. Add the ALB as the endpoint.", + "options": [ + "B. Create an Amazon CloudFront distribution and spec ify the ALB as the origin server.", + "C. Create an Amazon CloudFront distribution and spec ify Amazon S3 as the origin server.", + "D. Configure an Amazon DynamoDB database to serve as the data store for the application.", + "A. Configure an accelerator in AWS Global Accelerat or. Add a listener for the port that the applicatio n listens" + ], + "correct": "", + "explanation": "Explanation/Reference:", + "references": "" + }, + { + "question": "A company has one million users that use its mobile app. The company must analyze the data usage in ne ar- real time. 
The company also must encrypt the data in near-real time and must store the data in a centralized location in Apache Parquet format for further processing. Which solution will meet these requirements with the LEAST operational overhead?", + "options": [ + "A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon Kinesis Data", + "B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon EMR cluster to", + "C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3.", + "D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3.", + "D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an", + "A. Use Amazon ElastiCache in front of the database.", + "B. Use RDS Proxy between the application and the database.", + "C. Migrate the application from EC2 instances to AWS Lambda.", + "D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB." + ], + "correct": "A. Use Amazon ElastiCache in front of the database.", + "explanation": "Explanation/Reference: https://aws.amazon.com/caching/", + "references": "" + }, + { + "question": "An ecommerce company has noticed performance degradation of its Amazon RDS based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application. What should the solutions architect recommend?", + "options": [ + "A. Export the data to Amazon DynamoDB and have the business analysts run their queries.", + "B. Load the data into Amazon ElastiCache and have the business analysts run their queries.", + "C. Create a read replica of the primary database and have the business analysts run their queries.", + "D. Copy the data into an Amazon Redshift cluster and have the business analysts run their queries." + ], + "correct": "C. Create a read replica of the primary database and have the business analysts run their queries.", + "explanation": "Explanation/Reference: Creating a read replica lets the business analysts run their read-only queries against the replica, offloading that traffic from the primary database without changes to the existing web application.", + "references": "" + }, + { + "question": "A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit. Which solution meets these requirements?", + "options": [ + "A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.", + "B. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets.", + "C. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys" + ], + "correct": "A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.", + "explanation": "Explanation/Reference: The keyword here is \"before\": the data must already be encrypted at rest before it is uploaded to the S3 buckets, which only client-side encryption can guarantee.", + "references": "" + }, + { + "question": "A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM.
The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete. What should the solutions architect do to meet these requirements?", + "options": [ + "A. Increase the minimum capacity for the Auto Scaling group.", + "B. Increase the maximum capacity for the Auto Scaling group.", + "C. Configure scheduled scaling to scale up to the desired compute level.", + "D. Change the scaling policy to add more EC2 instances during each scaling operation." + ], + "correct": "C. Configure scheduled scaling to scale up to the desired compute level.", + "explanation": "Explanation/Reference: By configuring scheduled scaling, the Auto Scaling group can reach the desired capacity shortly before the batch jobs start at 1 AM and scale down after they complete.", + "references": "" + } +] \ No newline at end of file