What should be done to prevent bias from entering an AI system when training it?
Correct Answer:
B
“Using diverse training data is what should be done to prevent bias from entering an AI system when training it. Diverse training data covers a wide range of features and patterns relevant to the AI task, and helps prevent bias by ensuring the AI system learns from a balanced, representative sample of the target population or domain. It can also improve the accuracy and generalization of the AI system by capturing more variations and scenarios in the data.”
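The idea of a “balanced and representative sample” can be sketched as a quick check on group proportions before training. This is only an illustration; the records and the `region` attribute are hypothetical, not part of any Salesforce API:

```python
from collections import Counter

def representation_ratios(records, group_key):
    """Return each group's share of the dataset, so under-represented
    groups can be spotted before the model is trained."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training records tagged with a demographic attribute.
data = [
    {"region": "north", "label": 1},
    {"region": "north", "label": 0},
    {"region": "north", "label": 1},
    {"region": "south", "label": 0},
]

ratios = representation_ratios(data, "region")
# Here "south" supplies only a quarter of the records, a signal to
# collect more southern examples before training.
```

A skewed ratio like this is one concrete symptom of the imbalance the answer warns about.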
Which type of bias results from data being labeled according to stereotypes?
Correct Answer:
B
“Societal bias results from data being labeled according to stereotypes. Societal bias is a type of bias that reflects the assumptions, norms, or values of a specific society or culture. For example, societal bias can occur when data is labeled based on stereotypes about gender, race, ethnicity, or religion.”
Cloud Kicks wants to improve the quality of its AI model's predictions by using a large amount of data.
Which data quality element should the company focus on?
Correct Answer:
A
To improve the quality of AI model predictions, Cloud Kicks should focus on the accuracy of the data. Accurate data ensures that the insights and predictions generated by AI models are reliable and valid. Achieving data accuracy involves correcting errors, filling missing values, and verifying data sources to enhance the quality of information fed into the AI systems. Focusing on data accuracy helps minimize prediction errors and improves decision-making based on AI insights. For more details on the importance of data quality in AI models, Salesforce provides extensive guidance in its documentation, which can be found at Data Quality and AI.
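The “correcting errors, filling missing values” step can be sketched as a simple field-level audit that counts missing or empty values. This is a hedged illustration only; the record shape and field names are assumptions, not a Salesforce API:

```python
def audit_records(records, required_fields):
    """Count missing or empty values per required field -- a first pass
    at the data-accuracy cleanup described above."""
    issues = {field: 0 for field in required_fields}
    for record in records:
        for field in required_fields:
            value = record.get(field)
            if value is None or value == "":
                issues[field] += 1
    return issues

# Hypothetical customer records feeding an AI model.
records = [
    {"email": "a@example.com", "country": "US"},
    {"email": "", "country": "US"},          # empty email
    {"email": "b@example.com"},              # country missing entirely
]

report = audit_records(records, ["email", "country"])
# report flags one problem per field, pointing at where to clean up.
```

Running such an audit before training surfaces exactly the kind of gaps that degrade prediction quality.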
Which type of bias imposes a system's values on others?
Correct Answer:
A
“Societal bias is the type of bias that imposes a system's values on others. Societal bias is a type of bias that reflects the assumptions, norms, or values of a specific society or culture. It can undermine the fairness and ethics of AI systems, as it shapes how different groups or domains are perceived, treated, or represented by those systems. For example, societal bias can occur when an AI system imposes one system's values on others, such as using Western standards of beauty or success to judge or rank people from other cultures.”
A consultant conducts a series of Consequence Scanning workshops to support testing diverse datasets.
Which Salesforce Trusted AI Principle is being practiced?
Correct Answer:
B
“Conducting a series of Consequence Scanning workshops to support testing diverse datasets practices Salesforce's Trusted AI Principle of Inclusivity. Inclusivity is the Trusted AI Principle stating that AI systems should be designed and developed with respect for diversity and for the inclusion of different perspectives, backgrounds, and experiences. Consequence Scanning workshops engage various stakeholders to identify and assess the potential impacts and implications of AI systems on different groups or domains; they support Inclusivity by ensuring that diverse datasets are used to test and evaluate those systems.”