AIF-C01 Dumps

AIF-C01 Free Practice Test

Amazon-Web-Services AIF-C01: AWS Certified AI Practitioner

QUESTION 31

Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?

Correct Answer: B
Ongoing (continued) pre-training during fine-tuning of a foundation model (FM) improves model performance over time: the model keeps learning from new data as it becomes available, rather than remaining fixed at the knowledge captured in its original training run.
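On Amazon Bedrock, continued pre-training is configured as a model-customization job with `customizationType` set to `CONTINUED_PRE_TRAINING`. A minimal sketch of the request parameters follows; all names, ARNs, S3 URIs, and the base-model identifier are placeholders, not values from this question:

```python
# Sketch of a Bedrock continued pre-training (ongoing pre-training) job request.
# Every name, ARN, and S3 URI below is a placeholder.
job_params = {
    "jobName": "continued-pretraining-demo",
    "customModelName": "my-domain-adapted-model",
    "roleArn": "arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    "baseModelIdentifier": "amazon.titan-text-lite-v1",  # placeholder base model
    "customizationType": "CONTINUED_PRE_TRAINING",       # vs. "FINE_TUNING"
    # Continued pre-training consumes new (unlabeled) domain data:
    "trainingDataConfig": {"s3Uri": "s3://my-bucket/new-unlabeled-data/"},
    "outputDataConfig": {"s3Uri": "s3://my-bucket/custom-model-output/"},
}

# With AWS credentials configured, the job would be submitted like this:
# import boto3
# bedrock = boto3.client("bedrock")
# bedrock.create_model_customization_job(**job_params)
```

Re-running such a job periodically as new data accumulates is what lets the model keep improving over time.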

QUESTION 32

A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.
The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.
Which solution will meet these requirements?

Correct Answer: A
Amazon SageMaker Serverless Inference is the correct solution for deploying the ML model to production so that a web application can use it without the company managing any underlying infrastructure.
✑ Amazon SageMaker Serverless Inference provides a fully managed environment for deploying machine learning models. It automatically provisions, scales, and manages the compute required to host the model, removing the need for the company to manage servers or other underlying infrastructure.
Thus, A is the correct answer, as it meets the requirement of hosting the model and serving predictions without managing any underlying infrastructure.
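A serverless endpoint differs from a regular SageMaker endpoint only in its endpoint configuration: the production variant carries a `ServerlessConfig` (memory size and maximum concurrency) instead of an instance type. A sketch of that configuration follows; the model, endpoint, and variant names are placeholders:

```python
# Sketch of a SageMaker Serverless Inference production variant (boto3 shape).
# Model, endpoint, and variant names are placeholders.
serverless_variant = {
    "ModelName": "image-classifier-model",  # an already-created SageMaker model
    "VariantName": "AllTraffic",
    "ServerlessConfig": {
        "MemorySizeInMB": 2048,   # memory allocated per invocation
        "MaxConcurrency": 5,      # concurrent invocations before throttling
    },
}

# With AWS credentials configured, deployment would look like:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(
#     EndpointConfigName="img-clf-serverless",
#     ProductionVariants=[serverless_variant],
# )
# sm.create_endpoint(
#     EndpointName="img-clf-endpoint",
#     EndpointConfigName="img-clf-serverless",
# )
```

The web application then calls the endpoint through the SageMaker runtime API; no instances are provisioned or managed by the company.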

QUESTION 33

A company wants to develop an educational game where users answer questions such as the following: "A jar contains six red, four green, and three yellow marbles. What is the probability of choosing a green marble from the jar?"
Which solution meets these requirements with the LEAST operational overhead?

Correct Answer: C
The problem involves a simple probability calculation that can be handled efficiently by straightforward mathematical rules and computations. Using machine learning techniques would introduce unnecessary complexity and operational overhead.
✑ Option C (Correct): "Use code that will calculate probability by using simple rules and computations": This is the correct answer because it directly solves the problem with minimal overhead, using basic probability rules.
✑ Option A: "Use supervised learning to create a regression model" is incorrect as it overcomplicates the solution for a simple probability problem.
✑ Option B: "Use reinforcement learning to train a model" is incorrect because reinforcement learning is not needed for a simple probability calculation.
✑ Option D: "Use unsupervised learning to create a model" is incorrect as unsupervised learning is not applicable to this task.
AWS AI Practitioner References:
✑ Choosing the Right Solution for AI Tasks: AWS recommends using the simplest and most efficient approach to solve a given problem, avoiding unnecessary machine learning techniques for straightforward tasks.
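The question's example can be answered with a few lines of ordinary code, which illustrates why no ML model is needed. A minimal sketch (the function name and jar representation are illustrative):

```python
from fractions import Fraction

def probability_of_color(counts: dict, color: str) -> Fraction:
    """Probability of drawing one marble of `color` from a jar of counted marbles."""
    total = sum(counts.values())
    return Fraction(counts.get(color, 0), total)

jar = {"red": 6, "green": 4, "yellow": 3}   # 13 marbles in total
p = probability_of_color(jar, "green")
print(p)  # 4/13
```

Simple rules and computations give the exact answer (4/13) with no training, hosting, or model-maintenance overhead.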

QUESTION 34

A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company wants to know how much information can fit into one prompt.
Which consideration will inform the company's decision?

Correct Answer: B
The context window determines how much information can fit into a single prompt when using a large language model (LLM) on Amazon Bedrock. The context window is the maximum number of tokens the model can process at once, so a prompt that exceeds it must be shortened or split.
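In practice this means prompts must be measured in tokens and trimmed to the window. The sketch below uses naive whitespace tokenization purely for illustration; real LLM tokenizers (e.g. BPE-based ones) split text differently, so actual token counts vary by model:

```python
def truncate_to_window(prompt: str, context_window_tokens: int) -> str:
    """Trim a prompt so it fits within a model's context window.

    Whitespace tokenization is a stand-in for illustration only; real LLM
    tokenizers produce different (usually larger) token counts.
    """
    tokens = prompt.split()
    if len(tokens) <= context_window_tokens:
        return prompt
    return " ".join(tokens[:context_window_tokens])

print(truncate_to_window("classify the sentiment of this review", 4))
# -> "classify the sentiment of"
```

For sentiment analysis, the company would check that each review (plus any instructions) fits within the chosen model's context window before sending the prompt.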

QUESTION 35

A company needs to train an ML model to classify images of different types of animals. The company has a large dataset of labeled images and will not label more data. Which type of learning should the company use to train the model?

Correct Answer: A
Supervised learning is appropriate because the dataset is already labeled: the model learns the mapping from each image to its animal-class label and can then classify new images. Unsupervised learning is unsuitable because it works on unlabeled data; reinforcement learning learns from rewards rather than labeled examples; and active learning is designed to select additional data for labeling, which the company will not do. References: AWS Machine Learning Best Practices.
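To make "learning from labeled examples" concrete, here is a toy supervised classifier (nearest centroid) in plain Python. The feature vectors stand in for image features such as embeddings; the data and labels are invented for illustration:

```python
from collections import defaultdict
import math

def train_nearest_centroid(samples):
    """Supervised training: samples is a list of (feature_vector, label) pairs.

    Returns one centroid (mean feature vector) per class label.
    """
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for vec, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, vec):
    """Classify a new feature vector by its closest class centroid."""
    return min(centroids, key=lambda label: math.dist(centroids[label], vec))

# Toy labeled "image feature" data — entirely illustrative.
labeled = [([0.0, 0.0], "cat"), ([0.0, 1.0], "cat"),
           ([5.0, 5.0], "dog"), ([6.0, 5.0], "dog")]
model = train_nearest_centroid(labeled)
print(predict(model, [0.2, 0.5]))  # -> "cat"
```

The key point for the exam: the labels drive the training signal, which is exactly the setting the company is in.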