What is a potential outcome of using poor-quality data in AI applications?
Correct Answer:
B
“A potential outcome of using poor-quality data in AI applications is that AI models may produce biased or erroneous results. Poor-quality data is data that is inaccurate, incomplete, inconsistent, irrelevant, or outdated for the AI task. It can degrade the performance and reliability of AI models, which may lack enough correct information to learn from or to make accurate predictions. Poor-quality data can also introduce or exacerbate biases in AI models, such as human bias, societal bias, or confirmation bias, and can lead to overfitting or underfitting.”
Which statement exemplifies Salesforce's honesty guideline when training AI models?
Correct Answer:
B
“Ensuring appropriate consent and transparency when using AI-generated responses is a statement that exemplifies Salesforce’s honesty guideline when training AI models. Salesforce’s honesty guideline is one of the Trusted AI Principles that states that AI systems should be designed and developed with respect for honesty and integrity in how they work and what they produce. Ensuring appropriate consent and transparency means respecting and honoring the choices and preferences of users regarding how their data is used or generated by AI systems. Ensuring appropriate consent and transparency also means providing clear and accurate information and documentation about the AI systems and their outputs.”
What should be done to prevent bias from entering an AI system when training it?
Correct Answer:
B
“Using diverse training data is what should be done to prevent bias from entering an AI system when training it. Diverse training data means that the data covers a wide range of features and patterns that are relevant for the AI task. Diverse training data can help prevent bias by ensuring that the AI system learns from a balanced and representative sample of the target population or domain. Diverse training data can also help improve the accuracy and generalization of the AI system by capturing more variations and scenarios in the data.”
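One practical first step toward diverse training data is simply measuring how balanced the labels are before training. The sketch below is an illustrative example (the `label_balance` helper and the "approved"/"denied" labels are hypothetical, not from any Salesforce API):

```python
from collections import Counter

def label_balance(labels):
    """Return each label's share of the dataset, to spot imbalance before training."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical example: a skewed dataset where one class dominates.
labels = ["approved"] * 90 + ["denied"] * 10
shares = label_balance(labels)
print(shares)  # one class holds 90% of the samples, a warning sign for bias
```

A heavily skewed result like this would prompt collecting more underrepresented examples or using stratified sampling so the model sees a representative mix.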
Which type of bias results from data being labeled according to stereotypes?
Correct Answer:
B
“Societal bias results from data being labeled according to stereotypes. Societal bias is a type of bias that reflects the assumptions, norms, or values of a specific society or culture. For example, societal bias can occur when data is labeled based on gender, race, ethnicity, or religion stereotypes.”
Cloud Kicks wants to improve the quality of its AI model's predictions with the use of a large amount of data.
Which data quality element should the company focus on?
Correct Answer:
A
To improve the quality of AI model predictions, Cloud Kicks should focus on the accuracy of its data. Accurate data ensures that the insights and predictions generated by AI models are reliable and valid. Improving data accuracy involves correcting errors, filling missing values, and verifying data sources before the information is fed into AI systems. Focusing on accuracy minimizes prediction errors and strengthens decision-making based on AI insights. Salesforce provides extensive guidance on the importance of data quality in AI models in its documentation, under Data Quality and AI.
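The accuracy steps named above (correcting errors, filling missing values, verifying sources) can be sketched as a small cleaning pass. This is a minimal illustration with hypothetical record fields (`source`, `price`) and a hypothetical mean-fill rule, not a Salesforce-prescribed procedure:

```python
def clean_records(records, valid_sources):
    """Illustrative data-accuracy pass:
    1. Verify sources: keep only records from trusted sources.
    2. Fill missing values: replace a missing numeric field with the field's mean.
    """
    verified = [r for r in records if r.get("source") in valid_sources]
    prices = [r["price"] for r in verified if r.get("price") is not None]
    mean_price = sum(prices) / len(prices) if prices else 0.0
    for r in verified:
        if r.get("price") is None:
            r["price"] = mean_price  # simple mean imputation (assumed policy)
    return verified

records = [
    {"source": "crm", "price": 10.0},
    {"source": "crm", "price": None},       # missing value to fill
    {"source": "unknown", "price": 5.0},    # unverified source to drop
]
cleaned = clean_records(records, valid_sources={"crm"})
```

Real pipelines would use richer validation rules and imputation strategies, but the shape is the same: verify first, then repair, so the model trains only on trustworthy values.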