- (Topic 4)
A company maintains about 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each typically around 50 GB in size and are frequently replaced with multipart uploads by the company's global application. The number and size of the S3 objects remain constant, but the company's S3 storage costs are increasing each month.
How should a solutions architect reduce costs in this situation?
Correct Answer:
B
This option is the most cost-effective way to reduce the S3 storage costs in this situation. When a multipart upload is neither completed nor aborted, the parts that were already uploaded remain in the bucket, where they consume storage space and incur charges until they are deleted. Because the application frequently replaces large objects with multipart uploads, these incomplete uploads can accumulate. An S3 Lifecycle rule that aborts incomplete multipart uploads removes them automatically after a specified period (such as one day) and frees the storage they occupy. This reduces the S3 storage costs and keeps the bucket free of stale upload parts that could otherwise cause unnecessary retries or errors.
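As a concrete illustration, the rule can be applied with the AWS SDK. The following is a minimal boto3 sketch, assuming a hypothetical bucket name and a rule that covers the whole bucket:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: abort any multipart upload that is still incomplete
# one day after it was initiated, freeing the storage its parts consume.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the entire bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)
```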
Option A is not correct because switching from multipart uploads to Amazon S3 Transfer Acceleration will not reduce the S3 storage costs. Amazon S3 Transfer Acceleration speeds up data transfers to and from S3 by routing them through the AWS edge network. It is useful for improving the upload speed of large objects over long distances, but it does not affect the amount of data stored or the storage charges. In fact, it may increase costs, because accelerated transfers incur an additional data transfer fee.
Option C is not correct because configuring S3 Inventory to prevent objects from being archived too quickly will not reduce the S3 storage costs; S3 Inventory cannot control archiving in the first place. Amazon S3 Inventory provides a scheduled report of the objects and their metadata in an S3 bucket. It is useful for managing and auditing S3 objects, but it does not affect the storage space or charges. In fact, it may increase costs by generating additional S3 objects for the inventory reports.
Option D is not correct because configuring Amazon CloudFront to reduce the number of objects stored in Amazon S3 will not reduce the S3 storage costs. Amazon CloudFront is a content delivery network (CDN) that distributes S3 objects to edge locations for faster, lower-latency access. It is useful for improving the download speed and availability of S3 objects, but it does not reduce the number of objects stored or the storage charges. In fact, it may increase costs by adding a data transfer fee for using the service.
References:
✑ Managing your storage lifecycle
✑ Using multipart upload
✑ Amazon S3 Transfer Acceleration
✑ Amazon S3 Inventory
✑ What Is Amazon CloudFront?
- (Topic 4)
A company wants to use artificial intelligence (AI) to determine the quality of its customer service calls. The company currently manages calls in four different languages, including English. The company will offer new languages in the future. The company does not have the resources to regularly maintain machine learning (ML) models.
The company needs to create written sentiment analysis reports from the customer service call recordings. The customer service call recording text must be translated into English.
Which combination of steps will meet these requirements? (Select THREE.)
Correct Answer:
DEF
These answers are correct because they meet the requirements: transcribe the customer service call recordings in any language, translate the text into English, and produce written sentiment analysis reports. Amazon Transcribe uses machine learning to recognize speech in audio files and transcribe it into text; it can convert recordings in any supported language, with the source language either specified or identified automatically, so the company does not have to maintain its own ML models. Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation; it can translate text from any supported language into English given the source and target language codes. Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text; it produces the sentiment analysis reports, determining whether the text is positive, negative, neutral, or mixed.
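A minimal boto3 sketch of this pipeline follows. The job name, bucket, and file URI are hypothetical, and the Transcribe job is asynchronous, so a real application would poll for completion or react to a completion event before the translation step:

```python
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1. Transcribe the call recording; automatic language identification
#    avoids per-language configuration as new languages are added.
transcribe.start_transcription_job(
    TranscriptionJobName="call-1234",  # hypothetical job name
    Media={"MediaFileUri": "s3://example-bucket/calls/call-1234.mp3"},
    MediaFormat="mp3",
    IdentifyLanguage=True,
)
# ... wait for the job to complete and fetch the transcript text ...
transcript_text = "..."  # transcript retrieved from the job output

# 2. Translate the transcript into English; 'auto' lets Amazon Translate
#    detect the source language.
translated = translate.translate_text(
    Text=transcript_text,
    SourceLanguageCode="auto",
    TargetLanguageCode="en",
)

# 3. Run sentiment analysis on the English text.
sentiment = comprehend.detect_sentiment(
    Text=translated["TranslatedText"],
    LanguageCode="en",
)
print(sentiment["Sentiment"], sentiment["SentimentScore"])
```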
References:
✑ https://docs.aws.amazon.com/transcribe/latest/dg/what-is-transcribe.html
✑ https://docs.aws.amazon.com/translate/latest/dg/what-is.html
✑ https://docs.aws.amazon.com/comprehend/latest/dg/how-sentiment.html
- (Topic 4)
A social media company is building a feature for its website. The feature will give users the ability to upload photos. The company expects significant increases in demand during large events and must ensure that the website can handle the upload traffic from users.
Which solution meets these requirements with the MOST scalability?
Correct Answer:
C
This approach allows users to upload files directly to S3 without passing through the application servers, which reduces the load on the application tier and scales naturally with demand. The client handles the file upload itself, and S3 absorbs the upload traffic.
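As an illustration, the application can hand the browser a presigned URL for the upload. A minimal boto3 sketch, assuming a hypothetical bucket, object key, and expiry:

```python
import boto3

s3 = boto3.client("s3")

# Generate a short-lived URL the browser can use to PUT the photo
# straight to S3, bypassing the application servers entirely.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-photo-bucket", "Key": "uploads/photo-123.jpg"},
    ExpiresIn=300,  # URL is valid for 5 minutes
)

# The client then uploads with a plain HTTP PUT, e.g.:
#   requests.put(upload_url, data=photo_bytes)
```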
- (Topic 4)
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance reprocesses the job. The batch jobs run between 12:00 AM and 6:00 AM local time every day.
Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
Correct Answer:
C
This option is the most cost-effective solution because it leverages Spot Instances, which are spare EC2 capacity available at up to a 90% discount compared with On-Demand prices. Spot Instances can be interrupted by AWS when demand for On-Demand capacity rises, but because the batch jobs are fault tolerant and a failed job is simply reprocessed on another instance, interruptions are acceptable. A launch template specifies the configuration of the Spot Instances, such as the instance type, the operating system, and the user data. An Auto Scaling group then automatically scales the number of Spot Instances based on CPU usage, which reflects the load of the batch jobs. This way, the company optimizes both the performance and the cost of the EC2 instances for the nightly batch jobs.
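A minimal boto3 sketch of this setup, assuming hypothetical resource names, AMI ID, and subnet:

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template that requests Spot capacity for the batch nodes.
ec2.create_launch_template(
    LaunchTemplateName="batch-spot-template",  # hypothetical name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",    # hypothetical AMI
        "InstanceType": "c5.xlarge",
        "InstanceMarketOptions": {"MarketType": "spot"},
    },
)

# Auto Scaling group that launches instances from the template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="batch-spot-asg",
    LaunchTemplate={"LaunchTemplateName": "batch-spot-template"},
    MinSize=0,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # hypothetical subnet
)

# Target-tracking policy: add instances as average CPU rises with
# the nightly batch load, and remove them as the load falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-spot-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```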
* A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses. This option is not optimal because it requires a commitment to a consistent amount of compute usage per hour for a one-year term, regardless of whether instances are actually running. Since the batch jobs run only about six hours per night, the company would overpay for committed compute that sits unused the rest of the day. Moreover, Savings Plans do not reserve capacity; a company that needs guaranteed capacity must still create On-Demand Capacity Reservations, to which Savings Plan pricing can then apply.
* B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses. This option is not ideal because it requires a commitment to a specific instance configuration for a one-year term, which reduces the flexibility of the Auto Scaling group, and the reservation is billed around the clock even though the batch jobs run only at night, resulting in paying for unused compute capacity. Moreover, a regional Reserved Instance provides only a billing discount, not a capacity reservation; reserving capacity requires a zonal Reserved Instance or an On-Demand Capacity Reservation.
* D. Create a new launch template for the Auto Scaling group, increase the instance size, and set a policy to scale out based on CPU usage. This option is not cost-effective because it does not take advantage of the lower prices of Spot Instances. A larger instance size may improve the performance of the batch jobs, but it also raises the cost of the On-Demand instances. Moreover, scaling out based on CPU usage can launch more instances than needed, which further increases the cost of the system.
References:
✑ Spot Instances - Amazon Elastic Compute Cloud
✑ Launch templates - Amazon Elastic Compute Cloud
✑ Auto Scaling groups - Amazon EC2 Auto Scaling
✑ Savings Plans - Amazon EC2 Reserved Instances and Other AWS Reservation Models
- (Topic 4)
A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of nodes from sharing the same underlying hardware.
Which networking solution meets these requirements?
Correct Answer:
A
This option allows the company to deploy an application that processes large quantities of data in parallel while preventing groups of nodes from sharing the same underlying hardware. By running the EC2 instances in a spread placement group, the company can launch a small number of instances across distinct underlying hardware to reduce correlated failures. A spread placement group isolates the instances from one another at the rack level; a sketch of the configuration follows the references.
References:
✑ Placement Groups
✑ Spread Placement Groups
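A minimal boto3 sketch of creating a spread placement group and launching instances into it, assuming hypothetical names and AMI ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a spread placement group: each instance in the group is
# placed on distinct underlying hardware (separate racks).
ec2.create_placement_group(
    GroupName="parallel-processing-spread",  # hypothetical name
    Strategy="spread",
)

# Launch instances into the group; no two of them share a rack.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",         # hypothetical AMI
    InstanceType="c5.xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "parallel-processing-spread"},
)
```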