An AI practitioner is using an Amazon Bedrock base model to summarize session chats from the customer service department. The AI practitioner wants to store invocation logs to monitor model input and output data.
Which strategy should the AI practitioner use?
Correct Answer:
B
Amazon Bedrock provides an option to enable invocation logging to capture and store the input and output data of the models used. This is essential for monitoring and auditing purposes, particularly when handling customer data.
✑ Option B (Correct): "Enable invocation logging in Amazon Bedrock": This is the correct answer as it directly enables the logging of all model invocations, ensuring transparency and traceability.
✑ Option A: "Configure AWS CloudTrail" is incorrect because CloudTrail logs API calls but does not provide specific logging for model inputs and outputs.
✑ Option C: "Configure AWS Audit Manager" is incorrect as Audit Manager is used for compliance reporting, not specific invocation logging for AI models.
✑ Option D: "Configure model invocation logging in Amazon EventBridge" is incorrect as EventBridge is for event-driven architectures, not specifically designed for logging AI model inputs and outputs.
AWS AI Practitioner References:
✑ Amazon Bedrock Logging Capabilities: AWS emphasizes using built-in logging features in Bedrock to maintain data integrity and transparency in model operations.
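For context, invocation logging in Amazon Bedrock is enabled account-wide with a single API call. The sketch below uses the boto3 `put_model_invocation_logging_configuration` operation to deliver prompt and response text to S3; the bucket name and key prefix are placeholders, not values from the question.

```python
def build_logging_config(bucket_name: str, key_prefix: str = "bedrock-logs/") -> dict:
    """Assemble the loggingConfig payload (bucket and prefix are placeholders)."""
    return {
        "s3Config": {"bucketName": bucket_name, "keyPrefix": key_prefix},
        "textDataDeliveryEnabled": True,       # store prompt and completion text
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }


def enable_invocation_logging(bucket_name: str) -> None:
    """Apply the logging configuration (requires AWS credentials and permissions)."""
    import boto3  # imported here so the pure helper above has no dependency

    bedrock = boto3.client("bedrock")
    bedrock.put_model_invocation_logging_configuration(
        loggingConfig=build_logging_config(bucket_name)
    )
```

Logs can also be delivered to CloudWatch Logs via a `cloudWatchConfig` entry; S3 is shown here for brevity.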
An AI practitioner is using a large language model (LLM) to create content for marketing campaigns. The generated content sounds plausible and factual but is incorrect.
Which problem is the LLM having?
Correct Answer:
B
In the context of AI, "hallucination" refers to the phenomenon where a model generates outputs that are plausible-sounding but are not grounded in reality or the training data. This problem often occurs with large language models (LLMs) when they create information that sounds correct but is actually incorrect or fabricated.
✑ Option B (Correct): "Hallucination": This is the correct answer because the problem described involves generating content that sounds factual but is incorrect, which is characteristic of hallucination in generative AI models.
✑ Option A: "Data leakage" is incorrect as it involves the model accidentally learning from data it shouldn't have access to, which does not match the problem of generating incorrect content.
✑ Option C: "Overfitting" is incorrect because overfitting refers to a model that has learned the training data too well, including noise, and performs poorly on new data.
✑ Option D: "Underfitting" is incorrect because underfitting occurs when a model is too simple to capture the underlying patterns in the data, which is not the issue here.
AWS AI Practitioner References:
✑ Large Language Models on AWS: AWS discusses the challenge of hallucination in large language models and emphasizes techniques to mitigate it, such as using guardrails and fine-tuning.
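One common mitigation mentioned above is grounding: constraining the model to answer only from supplied context so it cannot invent facts. A minimal sketch of such a grounding prompt follows; the wording is illustrative, not a prescribed AWS template.

```python
def build_grounded_prompt(context: str, question: str) -> str:
    """Wrap a question in instructions that restrict the model to the given context."""
    return (
        "Answer using only the facts in the context below. "
        'If the answer is not in the context, say "I don\'t know."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The explicit fallback instruction gives the model a sanctioned way to decline instead of fabricating an answer.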
A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and use an AI model responsibly to minimize bias that could negatively affect some customers.
Which actions should the company take to meet these requirements? (Select TWO.)
Correct Answer:
AC
To build and use an AI model responsibly, especially in sensitive applications like loan approvals, it's crucial to address potential biases and ensure transparency:
✑ Detect imbalances or disparities in the data (Option A): Analyzing the training data for imbalances or disparities is essential. Imbalanced data can lead to models that are biased toward the majority class, potentially disadvantaging certain groups. By identifying and mitigating these imbalances, the company can reduce the risk of biased predictions.
✑ Evaluate the model's behavior to provide transparency to stakeholders (Option C): Regularly assessing the model's outputs and decision-making processes allows the company to understand how decisions are made. This evaluation fosters transparency, enabling the company to explain model behavior to stakeholders and to ensure that the model operates as intended without unintended biases.
Options B, D, and E, while relevant to model performance and evaluation, do not directly address the responsible use of AI with respect to bias and transparency.
Reference: AWS Certified AI Practitioner Exam Guide
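Detecting an imbalance of the kind Option A describes can start with a simple per-group outcome-rate check. The sketch below is a generic illustration, not an AWS tool; the group labels and any disparity threshold are choices left to the analyst.

```python
from collections import Counter


def group_rates(records):
    """records: iterable of (group, approved) pairs. Returns approval rate per group."""
    totals, approved = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}


def disparity(rates):
    """Largest gap in approval rates across groups; near 0 suggests parity."""
    values = list(rates.values())
    return max(values) - min(values)
```

A large gap does not prove bias by itself, but it flags where the data or model behavior deserves a closer look.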
An AI company periodically evaluates its systems and processes with the help of independent software vendors (ISVs). The company needs to receive email message notifications when an ISV's compliance reports become available.
Which AWS service can the company use to meet this requirement?
Correct Answer:
D
AWS Data Exchange is a service that allows companies to securely exchange data with third parties, such as independent software vendors (ISVs). AWS Data Exchange can be configured to provide notifications, including email notifications, when new datasets or compliance reports become available.
✑ Option D (Correct): "AWS Data Exchange": This is the correct answer because it enables the company to receive notifications, including email messages, when ISVs' compliance reports become available.
✑ Option A: "AWS Audit Manager" is incorrect because it focuses on assessing an organization's own compliance, not receiving third-party compliance reports.
✑ Option B: "AWS Artifact" is incorrect as it provides access to AWS's compliance reports, not ISVs'.
✑ Option C: "AWS Trusted Advisor" is incorrect as it offers optimization and best-practices guidance, not compliance report notifications.
AWS AI Practitioner References:
✑ AWS Data Exchange Documentation: AWS explains how Data Exchange allows organizations to subscribe to third-party data and receive notifications when updates are available.
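A typical wiring for such notifications routes Data Exchange events through Amazon EventBridge to an email-subscribed SNS topic. The sketch below assumes the documented `aws.dataexchange` event source and an existing SNS topic ARN; the rule name and target ID are placeholders, and the event pattern should be narrowed by detail-type once the provider's exact event shape is confirmed.

```python
import json


def build_event_pattern() -> str:
    """EventBridge pattern matching events emitted by AWS Data Exchange."""
    # Matching on source alone is broad; add "detail-type" filters as needed.
    return json.dumps({"source": ["aws.dataexchange"]})


def wire_email_notification(topic_arn: str, email: str) -> None:
    """Subscribe an address to SNS and route Data Exchange events to the topic."""
    import boto3  # imported here so the pattern builder stays dependency-free

    boto3.client("sns").subscribe(
        TopicArn=topic_arn, Protocol="email", Endpoint=email
    )
    events = boto3.client("events")
    events.put_rule(Name="dataexchange-updates", EventPattern=build_event_pattern())
    events.put_targets(
        Rule="dataexchange-updates",
        Targets=[{"Id": "sns-email", "Arn": topic_arn}],
    )
```

The email subscription must be confirmed by the recipient before SNS delivers messages.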
A company wants to make a chatbot to help customers. The chatbot will help solve technical problems without human intervention. The company chose a foundation model (FM) for the chatbot. The chatbot needs to produce responses that adhere to company tone.
Which solution meets these requirements?
Correct Answer:
C
Experimenting and refining the prompt is the best approach to ensure that the chatbot using a foundation model (FM) produces responses that adhere to the company's tone.
✑ Prompt Engineering: Iteratively adjusting the prompt, for example by adding explicit tone instructions and sample responses, steers the FM's output style without retraining the model.
✑ Why Option C is Correct: Experimenting with and refining the prompt is the most direct and lowest-cost way to make the chatbot's responses adhere to the company's tone.
✑ Why Other Options are Incorrect:
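To illustrate the prompt-refinement approach, the sketch below passes a company tone guide as a system prompt to a Bedrock FM via the boto3 `converse` API. The tone text and model ID are placeholders; refining the tone guide and re-testing is the experimentation loop the answer describes.

```python
# Placeholder tone guide -- refined iteratively as responses are reviewed.
COMPANY_TONE = (
    "You are a friendly, concise technical support assistant. "
    "Use plain language and never blame the customer."
)


def build_converse_request(model_id: str, user_text: str) -> dict:
    """Assemble a Converse API request with the tone guide as the system prompt."""
    return {
        "modelId": model_id,
        "system": [{"text": COMPANY_TONE}],
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
    }


def ask(model_id: str, user_text: str) -> str:
    """Invoke the model (requires AWS credentials and Bedrock model access)."""
    import boto3  # imported here so the request builder stays dependency-free

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(model_id, user_text))
    return response["output"]["message"]["content"][0]["text"]
```

Because the tone lives in the prompt rather than in model weights, it can be revised and A/B-tested without any fine-tuning.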