- (Topic 4)
An ecommerce company has noticed performance degradation of its Amazon RDS-based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application.
What should the solutions architect recommend?
Correct Answer:
C
Creating a read replica of the primary RDS database offloads the read-only SQL queries from the primary database. Read replicas are copies of the primary database that serve read-only traffic, so they reduce the load on the primary instance and restore the web application's performance. The solution requires minimal changes to the existing web application: the business analysts simply run their queries against the read replica's endpoint, and the application code is not modified.
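As a minimal sketch of how the replica could be created with the AWS SDK for Python (boto3), assuming placeholder instance identifiers such as "webapp-primary" and "webapp-analytics-replica":

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary instance. Business analysts point
# their SQL clients at the replica's endpoint; the web application keeps
# using the primary endpoint, so no application code changes are needed.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-analytics-replica",   # placeholder name
    SourceDBInstanceIdentifier="webapp-primary",       # placeholder name
    DBInstanceClass="db.r6g.large",                    # sized for the analyst workload
)

# Once the replica is available, look up its read-only endpoint.
replica = rds.describe_db_instances(
    DBInstanceIdentifier="webapp-analytics-replica"
)["DBInstances"][0]
print(replica.get("Endpoint", {}).get("Address"))
```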
- (Topic 3)
A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries in an Amazon DynamoDB table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a solution that minimizes cost and development effort.
Which solution meets these requirements?
Correct Answer:
D
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
TTL is useful if you store items that lose relevance after a specific time. The following are example TTL use cases:
Remove user or sensor data after one year of inactivity in an application.
Archive expired items to an Amazon S3 data lake via Amazon DynamoDB Streams and AWS Lambda.
Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
References:
✑ https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
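As a rough sketch of how TTL could be enabled and used from the application, assuming a placeholder table named "app-entries" with a TTL attribute named "expires_at":

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, keyed off the "expires_at" attribute (placeholder name).
dynamodb.update_time_to_live(
    TableName="app-entries",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each write stamps an epoch-seconds expiry 30 days ahead; DynamoDB deletes
# the item after that time without consuming write throughput.
THIRTY_DAYS = 30 * 24 * 60 * 60
now = int(time.time())
dynamodb.put_item(
    TableName="app-entries",
    Item={
        "pk": {"S": "user#123"},                   # placeholder key
        "created_at": {"N": str(now)},
        "expires_at": {"N": str(now + THIRTY_DAYS)},
    },
)
```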
- (Topic 4)
A company runs a microservice-based serverless web application. The application must be able to retrieve data from multiple Amazon DynamoDB tables. A solutions architect needs to give the application the ability to retrieve the data with no impact on the baseline performance of the application.
Which solution will meet these requirements in the MOST operationally efficient way?
Correct Answer:
C
An edge-optimized API Gateway is a way to create RESTful APIs that access multiple DynamoDB tables through AWS Lambda functions. The edge-optimized API Gateway provides low latency and high performance by caching API responses at CloudFront edge locations, so the additional read traffic does not affect the application's baseline performance. The Lambda functions use the AWS SDK to query or scan the DynamoDB tables and return the data to API Gateway (a minimal handler sketch follows the references below). This solution meets all the requirements of the question, while the other options do not. References:
✑ https://aws.amazon.com/blogs/compute/understanding-database-options-for-your-serverless-web-applications/
✑ https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-3/
✑ https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/best-practices.html
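As a minimal sketch of a Lambda handler behind the edge-optimized API Gateway, assuming a proxy integration, a placeholder table name supplied through an ORDERS_TABLE environment variable, and a partition key named "customerId":

```python
import json
import os

import boto3
from boto3.dynamodb.conditions import Key

# The table name is injected through the function's environment variables.
dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table(os.environ.get("ORDERS_TABLE", "orders"))


def lambda_handler(event, context):
    """Handle GET /orders/{customerId} proxied by API Gateway."""
    customer_id = event["pathParameters"]["customerId"]

    # Query one table by partition key; the other tables can be read the same
    # way from additional functions or additional queries in this function.
    result = orders_table.query(
        KeyConditionExpression=Key("customerId").eq(customer_id)
    )

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Items"], default=str),
    }
```

Enabling response caching on the API stage (or at the CloudFront edge) keeps repeated reads from adding load to the tables that back the application, which protects its baseline performance.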
- (Topic 4)
A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company's current network connection allows up to 100 Mbps uploads for this purpose during the night only.
What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
Correct Answer:
B
AWS Snowball is a petabyte-scale data transport service that uses secure physical devices to transfer large amounts of data into and out of the AWS Cloud. Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Each Snowball device can transfer up to 80 TB of data, and multiple devices can be used in parallel to meet the migration deadline. AWS Snowball is more cost-effective than AWS Snowmobile, which is designed for exabyte-scale data transfers, and than Amazon S3 Transfer Acceleration, which speeds up transfers over long distances but is still constrained by the company's 100 Mbps night-only upload window. An Amazon S3 VPC endpoint does not increase upload speed; it only provides a secure, private connection between the VPC and S3.
References: AWS Snowball, AWS Snowmobile, Amazon S3 Transfer Acceleration, Amazon S3 VPC endpoint
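A quick back-of-the-envelope check (assuming roughly 8 usable hours per night on the 100 Mbps link) shows why online transfer cannot meet the one-month deadline:

```python
# Rough transfer-time estimate for 150 TB over a 100 Mbps link used only at night.
DATA_TB = 150
LINK_MBPS = 100
HOURS_PER_NIGHT = 8          # assumption; any reasonable value gives the same conclusion

data_bits = DATA_TB * 1e12 * 8                   # decimal terabytes -> bits
seconds_needed = data_bits / (LINK_MBPS * 1e6)
nights_needed = seconds_needed / (HOURS_PER_NIGHT * 3600)

print(f"{seconds_needed / 86400:.0f} days of continuous transfer")     # ~139 days
print(f"{nights_needed:.0f} nights at {HOURS_PER_NIGHT} h per night")  # ~417 nights
```

Even ignoring protocol overhead, the link would need more than four months of continuous transfer, so shipping Snowball devices is the only option that fits the deadline.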
- (Topic 4)
A company runs an application on AWS. The application receives inconsistent amounts of usage. The application uses AWS Direct Connect to connect to an on-premises MySQL-compatible database. The on-premises database consistently uses a minimum of 2 GiB of memory.
The company wants to migrate the on-premises database to a managed AWS service. The company wants to use auto scaling capabilities to manage unexpected workload increases.
Which solution will meet these requirements with the LEAST administrative overhead?
Correct Answer:
C
Amazon Aurora Serverless v2 lets the company migrate the on-premises database to a managed AWS service with auto scaling capabilities and the least administrative overhead. Aurora Serverless v2 is a configuration of Amazon Aurora that automatically scales compute capacity based on workload demand and can scale from hundreds to hundreds of thousands of transactions in a fraction of a second. It also supports MySQL-compatible engines and connectivity over AWS Direct Connect (a minimal provisioning sketch follows the references below). References:
✑ Amazon Aurora Serverless v2
✑ Connecting to an Amazon Aurora DB Cluster
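As a minimal provisioning sketch with boto3, assuming placeholder identifiers and an illustrative capacity range of 1 to 16 ACUs (1 ACU is roughly 2 GiB of memory, matching the on-premises baseline):

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora MySQL cluster with Serverless v2 scaling limits.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",      # placeholder name
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,                 # let RDS keep the password in Secrets Manager
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 1,                          # ~2 GiB, the on-premises baseline
        "MaxCapacity": 16,                         # headroom for unexpected spikes
    },
)

# Serverless v2 capacity is delivered through an instance of class "db.serverless".
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-instance-1",  # placeholder name
    DBClusterIdentifier="app-aurora-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```

The cluster endpoint is then reachable from on premises over the existing AWS Direct Connect connection, and compute capacity scales automatically between the configured minimum and maximum.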