AWS-Solution-Architect-Associate Dumps

AWS-Solution-Architect-Associate Free Practice Test

Amazon AWS-Solution-Architect-Associate: AWS Certified Solutions Architect - Associate

QUESTION 16

- (Topic 4)
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.
Which solution will meet these requirements?

Correct Answer: B
This option is the most secure and straightforward way to encrypt the secrets stored in Amazon EKS. AWS Key Management Service (AWS KMS) lets you create and manage the encryption keys that protect your data, and Amazon EKS KMS secrets encryption uses a KMS key to encrypt the secrets stored in the Kubernetes etcd key-value store, adding a layer of protection for sensitive data such as passwords, tokens, and keys. You can create a new KMS key or use an existing one, enable KMS secrets encryption on the Amazon EKS cluster, and use IAM policies to control who can access or use the key.
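As a minimal boto3 sketch of enabling this on an existing cluster (the cluster name and key ARN are placeholders, not values from the question):

```python
import boto3

# Hypothetical example: enable envelope encryption of Kubernetes secrets
# on an existing EKS cluster using a customer managed KMS key.
eks = boto3.client("eks")

response = eks.associate_encryption_config(
    clusterName="example-cluster",  # placeholder cluster name
    encryptionConfig=[
        {
            "resources": ["secrets"],  # encrypt Kubernetes secrets in etcd
            "provider": {
                "keyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
            },
        }
    ],
)
print(response["update"]["status"])  # e.g. "InProgress" while EKS re-encrypts
```

The same configuration can also be supplied at cluster creation through the encryptionConfig parameter of create_cluster.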
Option A is not correct because using AWS Secrets Manager to manage, rotate, and store all secrets for Amazon EKS is neither necessary nor efficient here. AWS Secrets Manager helps you securely store, retrieve, and rotate secrets such as database credentials, API keys, and passwords, and it is well suited to secrets used by applications or services outside of Amazon EKS, but it is not designed to encrypt the secrets stored in the Kubernetes etcd key-value store. It would also add cost and complexity without leveraging Kubernetes' native secrets management.
Option C is not correct because adding the Amazon EBS Container Storage Interface (CSI) driver as an add-on does not encrypt the secrets stored in Amazon EKS. The EBS CSI driver is a plugin that lets you use Amazon EBS volumes as persistent storage for Kubernetes pods. It provides durable, scalable storage for applications, but it has no effect on the encryption of secrets in the etcd key-value store, and it would require additional configuration and resources without providing the protection a KMS key does.
Option D is not correct because creating a KMS key with the alias aws/ebs and enabling default Amazon EBS volume encryption for the account does not encrypt the secrets stored in Amazon EKS. The alias aws/ebs is reserved for the AWS managed key that encrypts Amazon EBS volumes created in your account unless you specify a different key, and default EBS encryption simply ensures that all new volumes are encrypted. Neither feature affects the secrets stored in the etcd key-value store, and neither offers the control that a customer managed KMS key with Amazon EKS KMS secrets encryption provides. References:
✑ Encrypting secrets used in Amazon EKS
✑ What Is AWS Key Management Service?
✑ What Is AWS Secrets Manager?
✑ Amazon EBS CSI driver
✑ Encryption at rest

QUESTION 17

- (Topic 3)
An ecommerce company is experiencing an increase in user traffic. The company's store is deployed on Amazon EC2 instances as a two-tier web application consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing significant delays in sending timely marketing and order confirmation emails to users. The company wants to reduce the time it spends resolving complex email delivery issues and minimize operational overhead.
What should a solutions architect do to meet these requirements?

Correct Answer: B
Amazon SES is a cost-effective and scalable email service that lets businesses send and receive email using their own addresses and domains. Configuring the web tier to send email through Amazon SES offloads deliverability work such as bounce and complaint handling and sender reputation management to a managed service, which reduces the time spent resolving complex email delivery issues and minimizes operational overhead.
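As a rough sketch of the web tier's sending call with boto3 (all addresses are hypothetical, and the sender identity must first be verified in SES):

```python
import boto3

# Hypothetical example: send an order-confirmation email through Amazon SES.
ses = boto3.client("ses", region_name="us-east-1")

ses.send_email(
    Source="orders@example.com",  # must be a verified SES identity
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order confirmation"},
        "Body": {"Text": {"Data": "Thank you for your order!"}},
    },
)
```

The EC2 instances also need an IAM role that allows ses:SendEmail, and recipients are restricted until the account leaves the SES sandbox.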

QUESTION 18

- (Topic 2)
A company's web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only.
Which configuration will meet this requirement?

Correct Answer: C
AWS WAF supports geographic match conditions, which allow or block web requests based on the country they originate from. A web ACL whose default action is to block, with a geographic match rule that allows only the required country, can be associated with the Application Load Balancer to meet this requirement.
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/
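As a sketch using the current WAFv2 API (the names are illustrative, and US stands in for whichever country the policy requires):

```python
import boto3

# Hypothetical example: a regional web ACL that blocks everything except
# requests originating from one country, for attachment to an ALB.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="allow-single-country",
    Scope="REGIONAL",  # REGIONAL scope is required for an ALB
    DefaultAction={"Block": {}},  # block anything no rule allows
    Rules=[
        {
            "Name": "allow-us-only",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "AllowUSOnly",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowSingleCountry",
    },
)
```

The ACL is then attached to the load balancer, for example with wafv2's associate_web_acl call.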

QUESTION 19

- (Topic 3)
A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on AWS is automatically and securely transferred.
Which solution meets these requirements?

Correct Answer: D
This option is the most efficient because it uses AWS Storage Gateway, a service that connects an on-premises software appliance with cloud-based storage to provide seamless, secure integration between your on-premises IT environment and AWS storage infrastructure. A stored volume gateway keeps your primary data locally and asynchronously backs up point-in-time snapshots of that data to Amazon S3. Running the Storage Gateway software appliance on premises and mapping the gateway storage volumes to on-premises storage lets you use your existing storage hardware and network infrastructure, and mounting the gateway storage volumes provides low-latency local access to the data while it is backed up to AWS. This solution therefore maintains local access to all the data and ensures that the data backed up on AWS is transferred automatically and securely.
Option A is less efficient because AWS Snowball is a physical device for transferring large amounts of data into and out of AWS. It requires manual handling and shipping, so it does not provide a periodic backup solution, and mounting the Snowball S3 endpoint from on-premises systems would add complexity and latency.
Option B is less efficient because AWS Snowball Edge, a physical device with onboard storage and compute for select AWS capabilities, likewise requires manual handling and shipping and so does not provide a periodic backup solution. Using the Snowball Edge file interface for local access would also add complexity and latency.
Option C is less efficient because a cached volume gateway stores your primary data in Amazon S3 and retains only frequently accessed data subsets locally, so it does not provide local access to all the data. Configuring the percentage of data to cache locally would also add cost and complexity compared with a stored volume gateway.
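As an illustration of the stored-volume setup, once the gateway appliance has been deployed and activated on premises, a local disk can be mapped as a stored volume with boto3; every ARN, disk ID, name, and address below is a hypothetical placeholder:

```python
import boto3

# Hypothetical example: expose a local disk as a stored volume whose
# point-in-time snapshots are backed up to AWS in the background.
sgw = boto3.client("storagegateway", region_name="us-east-1")

sgw.create_stored_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    DiskId="pci-0000:03:00.0-scsi-0:0:0:0",  # local disk backing the volume
    PreserveExistingData=True,  # keep the data already on the disk
    TargetName="backup-volume",  # becomes part of the iSCSI target name
    NetworkInterfaceId="10.0.1.25",  # gateway interface that clients mount from
)
```

On-premises servers then mount the resulting iSCSI target for low-latency local access while the gateway uploads snapshots to Amazon S3.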

QUESTION 20

- (Topic 4)
A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances run in an Auto Scaling group for the application tier.
The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical workload trends. The configuration must scale resources appropriately according to both the forecast and live changes in utilization.
Which scaling strategy should a solutions architect recommend to meet these requirements?

Correct Answer: B
This solution meets the requirements because it allows the company to use both predictive scaling and dynamic scaling to optimize the capacity of its Auto Scaling group. Predictive scaling uses machine learning to analyze historical data and forecast future traffic patterns. It then adjusts the desired capacity of the group in advance of the predicted changes. Dynamic scaling uses target tracking to maintain a specified metric (such as CPU utilization) at a target value. It scales the group in or out as needed to keep the metric close to the target. By using both scaling methods, the company can benefit from faster, simpler, and more accurate scaling that responds to both forecasted and live changes in utilization. References:
✑ Predictive scaling for Amazon EC2 Auto Scaling
✑ Target tracking scaling policies for Amazon EC2 Auto Scaling
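As a sketch of attaching both policy types to the application tier's Auto Scaling group with boto3 (the group name, policy names, and the 50% CPU target are assumptions for illustration):

```python
import boto3

# Hypothetical example: combine predictive scaling (forecast-based) with
# target tracking (reacts to live utilization) on one Auto Scaling group.
asg = boto3.client("autoscaling")

# Forecast daily/weekly patterns from history and scale ahead of them.
asg.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# React to live changes by keeping average CPU near the same target.
asg.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="target-tracking-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```

Both policies can coexist on the same group; when they disagree, Amazon EC2 Auto Scaling acts on whichever calls for the greater capacity.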