- (Topic 4)
A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the migration testing phase, a technical team observes that the application takes a long time to launch and to load data into memory before it becomes fully productive.
Which solution will reduce the launch time of the application during the next testing phase?
Correct Answer:
C
The solution that will reduce the launch time of the application during the next testing phase is to launch the EC2 On-Demand Instances with hibernation turned on and configure EC2 Auto Scaling warm pools. This solution allows the application to resume from a hibernated state instead of starting from scratch, which can save time and resources. Hibernation preserves the memory (RAM) state of the EC2 instances to the root EBS volume and then stops the instances. When the instances are resumed, they restore their memory state from the EBS volume and become productive quickly. EC2 Auto Scaling warm pools can be used to maintain a pool of pre-initialized instances that are ready to scale out when needed. Warm pools can also support hibernated instances, which can further reduce the launch time and cost of scaling out.
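As a rough illustration of how hibernation and warm pools fit together, the boto3 sketch below enables hibernation when launching an instance and attaches a hibernated warm pool to an Auto Scaling group. The AMI ID, volume size, group name, and pool size are placeholders, not values from the question.
```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch an On-Demand instance with hibernation enabled.
# Hibernation needs an encrypted root EBS volume large enough to hold the RAM contents.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 30, "Encrypted": True},
    }],
)

# Keep pre-initialized, hibernated instances ready for the Auto Scaling group.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",          # placeholder Auto Scaling group name
    PoolState="Hibernated",
    MinSize=2,
)
```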
The other solutions are not as effective as the first one because they either do not reduce the launch time, do not guarantee availability, or do not use On-Demand Instances as required. Launching two or more EC2 On-Demand Instances with auto scaling features does not reduce the launch time of the application, as each instance still has to go through the initialization process. Launching EC2 Spot Instances does not guarantee availability, as Spot Instances can be interrupted by AWS at any time when there is a higher demand for capacity. Launching EC2 On-Demand Instances with Capacity Reservations does not reduce the launch time of the application, as it only ensures that there is enough capacity available for the instances, but does not pre-initialize them.
References:
✑ Hibernating your instance - Amazon Elastic Compute Cloud
✑ Warm pools for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling
- (Topic 4)
A company is designing the network for an online multi-player game. The game uses the UDP networking protocol and will be deployed in eight AWS Regions. The network architecture needs to minimize latency and packet loss to give end users a high-quality gaming experience.
Which solution will meet these requirements?
Correct Answer:
B
The best solution for this situation is option B: setting up AWS Global Accelerator with UDP listeners and endpoint groups in each Region. AWS Global Accelerator is a networking service that improves the availability and performance of internet applications by routing user requests to the nearest AWS Region [1]. It also improves the performance of UDP applications by providing faster, more reliable data transfers with lower latency and less packet loss. By setting up UDP listeners and endpoint groups in each Region, Global Accelerator routes traffic to the nearest Region for faster response times and a better user experience.
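For illustration only, a minimal boto3 sketch of that setup might look like the following. The accelerator name, UDP port, and the per-Region Network Load Balancer ARN are placeholders, and the endpoint-group call would be repeated for each of the eight Regions.
```python
import boto3

# The Global Accelerator API is served from the us-west-2 Region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="game-accelerator",            # placeholder name
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

# UDP listener for the game traffic (port 4000 is a placeholder).
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 4000, "ToPort": 4000}],
)["Listener"]

# One endpoint group per Region, pointing at that Region's game-server load balancer;
# repeat this call for all eight Regions.
region_endpoints = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game/abc",
}
for region, nlb_arn in region_endpoints.items():
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
    )
```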
- (Topic 4)
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
Correct Answer:
C
This option is the most operationally efficient way to meet the requirements because it allows the company to monitor and analyze the database login activity across all the accounts in the organization. By publishing the Aurora general logs to a log group in Amazon CloudWatch Logs, the company can enable the logging of the database connections, disconnections, and failed authentication attempts. By exporting the log data to a central Amazon S3 bucket, the company can store the log data in a durable and cost-effective way and use other AWS services or tools to perform further analysis or alerting on the log data. For example, the company can use Amazon Athena to query the log data in Amazon S3, or use Amazon SNS to send notifications based on the log data.
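A hedged sketch of the two pieces of plumbing described above, using boto3; the cluster identifier, log group name, time window, and destination bucket are placeholders, and the exported log type shown is the Aurora PostgreSQL log.
```python
import time
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# 1. Publish the database log (connections, disconnections, failed logins)
#    from the Aurora cluster to CloudWatch Logs.
rds.modify_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",   # placeholder cluster name
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
)

# 2. Periodically export the log group to a central S3 bucket for analysis.
now_ms = int(time.time() * 1000)
logs.create_export_task(
    taskName="aurora-login-audit-export",
    logGroupName="/aws/rds/cluster/app-aurora-cluster/postgresql",
    fromTime=now_ms - 24 * 60 * 60 * 1000,       # last 24 hours
    to=now_ms,
    destination="central-db-audit-logs",          # placeholder S3 bucket
    destinationPrefix="aurora/app-aurora-cluster",
)
```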
* A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts. This option is not effective because SCPs are not designed to identify the failed login attempts, but to restrict the actions that the users and roles can perform in the member accounts of the organization. SCPs are applied to the AWS API calls, not to the database login attempts. Moreover, SCPs do not provide any logging or analysis capabilities for the database activity.
* B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization. This option is not optimal because the Amazon RDS Protection feature in Amazon GuardDuty is not available for Aurora PostgreSQL databases, but only for Amazon RDS for MySQL and Amazon RDS for MariaDB databases. Moreover, the Amazon RDS Protection feature does not monitor the database login attempts, but the network and API activity related to the RDS instances.
* D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket. This option is not sufficient because AWS CloudTrail does not capture the database login attempts, but only the AWS API calls made by or on behalf of the Aurora PostgreSQL database. For example, AWS CloudTrail can record events such as creating, modifying, or deleting the database instances, clusters, or snapshots, but not events such as connecting, disconnecting, or failing to authenticate to the database.
References:
✑ Working with Amazon Aurora PostgreSQL - Amazon Aurora
✑ Working with log groups and log streams - Amazon CloudWatch Logs
✑ Exporting Log Data to Amazon S3 - Amazon CloudWatch Logs
✑ Amazon GuardDuty FAQs
✑ Logging Amazon RDS API Calls with AWS CloudTrail - Amazon Relational Database Service
- (Topic 4)
A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all the accounts.
The permissions will be used by multiple IAM users and must be split between the developer and administrator teams. Each team requires different permissions. The solution must also accommodate new users who are hired onto either team.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer:
C
This solution meets the requirements with the least operational overhead because it leverages the features of IAM Identity Center and AWS Control Tower to centrally manage multiple user permissions across all the accounts. By creating new groups and permission sets, the company can assign fine-grained permissions to the developer and administrator teams based on their roles and responsibilities. The permission sets are applied to the groups at the organization level, so they are automatically inherited by all the accounts in the organization. When new users are hired, the company only needs to add them to the appropriate group in IAM Identity Center, and they will automatically get the permissions assigned to that group. This simplifies the user management and reduces the manual effort of assigning permissions to each user individually.
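A rough boto3 sketch of that pattern follows; the IAM Identity Center instance ARN, identity store ID, managed policy, and account ID are placeholders, and the same calls would be repeated for the administrator group and for every account in the organization.
```python
import boto3

sso_admin = boto3.client("sso-admin")
identitystore = boto3.client("identitystore")

INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"   # placeholder instance ARN
IDENTITY_STORE_ID = "d-1234567890"                       # placeholder identity store ID

# Group for the developer team; a second call would create the administrator group.
dev_group = identitystore.create_group(
    IdentityStoreId=IDENTITY_STORE_ID,
    DisplayName="Developers",
)

# Permission set for developers, with an example managed policy attached.
dev_ps = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="DeveloperAccess",
)["PermissionSet"]

sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=dev_ps["PermissionSetArn"],
    ManagedPolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",  # example policy
)

# Assign the group and permission set to each AWS account in the organization.
sso_admin.create_account_assignment(
    InstanceArn=INSTANCE_ARN,
    TargetId="111122223333",            # placeholder account ID; loop over all accounts
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=dev_ps["PermissionSetArn"],
    PrincipalType="GROUP",
    PrincipalId=dev_group["GroupId"],
)
```
New hires then only need to be added to the appropriate group; the account assignments already in place give them the right permissions.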
References:
✑ Managing access to AWS accounts and applications
✑ Managing permission sets
✑ Managing groups
- (Topic 3)
An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table. What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?
Correct Answer:
A
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP addresses, and you don't need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
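To make that concrete, here is a minimal boto3 sketch that creates a gateway endpoint for DynamoDB and restricts it to a few actions on a single table; the Region, VPC ID, route table ID, account ID, and table name are placeholders.
```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder Region

# Endpoint policy limiting access to one table (table name and account are placeholders).
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/app-table",
    }],
}

# Gateway endpoint: routes DynamoDB traffic from the private subnets' route tables
# over the AWS network, with no internet gateway or NAT device required.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                       # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],             # route tables of the private subnets
    PolicyDocument=json.dumps(endpoint_policy),
)
```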