What is one technique to mitigate bias and ensure fairness in AI applications?
Correct Answer:
A
A technique to mitigate bias and ensure fairness in AI applications is ongoing auditing and monitoring of the data used in those applications. Regular audits help identify and address biases that may exist in the data, so that AI models function fairly and without prejudice. Monitoring involves continuously checking the performance of AI systems to safeguard against discriminatory outcomes. Salesforce emphasizes ethical AI practices, including transparency and fairness, which can be explored further in Salesforce's AI ethics guidelines.
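One common form of bias audit is comparing positive-outcome rates across groups. The sketch below is a hypothetical illustration, not a Salesforce feature: the group names, record format, and the 80% ("four-fifths rule") threshold are all assumptions for the example.

```python
# Hypothetical bias audit: compare positive-outcome rates across groups.
# The 0.8 threshold reflects the informal "four-fifths rule" of thumb.

def audit_selection_rates(records, threshold=0.8):
    """records: list of (group, outcome) pairs, where outcome is 0 or 1.

    Returns (per-group positive rates, whether the audit passes).
    The audit passes when every group's rate is at least `threshold`
    times the highest group's rate.
    """
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    lowest, highest = min(rates.values()), max(rates.values())
    return rates, lowest >= threshold * highest

# Example: group B receives positive outcomes far less often than group A,
# so the audit flags a disparity.
records = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7
rates, fair = audit_selection_rates(records)
# rates -> {"A": 0.8, "B": 0.3}; fair -> False
```

Running such a check on every model refresh, rather than once at launch, is what turns a one-off audit into the ongoing monitoring the answer describes.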
What is the key difference between generative and predictive AI?
Correct Answer:
A
“The key difference between generative and predictive AI is that generative AI creates new content based on existing data, while predictive AI analyzes existing data. Generative AI is a type of AI that can generate novel content such as images, text, music, or video based on existing data or inputs. Predictive AI is a type of AI that analyzes existing data or inputs and makes predictions or recommendations based on patterns or trends.”
What is the role of Salesforce Trust AI principles in the context of CRM systems?
Correct Answer:
A
“The role of Salesforce Trust AI principles in the context of CRM systems is guiding ethical and responsible use of AI. Salesforce Trust AI principles are a set of guidelines and best practices for developing and using AI systems in a responsible and ethical way. The principles include Accountability, Fairness & Equality, Transparency & Explainability, Privacy & Security, Reliability & Safety, Inclusivity & Diversity, Empowerment & Education. The principles aim to ensure that AI systems are aligned with the values and interests of customers, partners, and society.”
What is a benefit of a diverse, balanced, and large dataset?
Correct Answer:
C
“Model accuracy is a benefit of a diverse, balanced, and large dataset. A diverse dataset can capture a variety of features and patterns relevant to the AI task. A balanced dataset helps prevent the model from overfitting or underfitting to a specific subset of the data. A large dataset provides enough information for the model to learn from and generalize well to new data.”
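A quick way to see the "balanced" part in practice is to count labels before training. This is a hypothetical sketch: the label names and the 2:1 imbalance cutoff are assumptions chosen for illustration, not a standard rule.

```python
# Hypothetical pre-training check: count class labels and flag imbalance
# when the most common label outnumbers the least common by more than
# max_ratio (2:1 here is an assumed rule of thumb).
from collections import Counter

def label_balance(labels, max_ratio=2.0):
    """Return (per-label counts, True if the dataset is roughly balanced)."""
    counts = Counter(labels)
    most, least = max(counts.values()), min(counts.values())
    return dict(counts), (most / least) <= max_ratio

# A 90:10 split is a 9:1 ratio, far beyond the 2:1 cutoff.
counts, balanced = label_balance(["churn"] * 90 + ["stay"] * 10)
# counts -> {"churn": 90, "stay": 10}; balanced -> False
```

A model trained on the skewed dataset above could score 90% accuracy by always predicting "churn", which is exactly the overfitting-to-a-subset problem the answer warns about.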
Cloud Kicks uses Einstein to generate predictions but is not seeing accurate results. What is a potential reason for this?
Correct Answer:
A
“Poor data quality is a potential reason for not seeing accurate results from an AI model. Poor data quality means that the data is inaccurate, incomplete, inconsistent, irrelevant, or outdated for the AI task. Poor data quality can affect the performance and reliability of AI models, as they may not have enough or correct information to learn from or make accurate predictions.”
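Each of those quality dimensions (incomplete, outdated, etc.) can be screened for before records reach a model. The sketch below is illustrative only: the field names and the 365-day staleness window are assumptions, not part of Einstein or any Salesforce schema.

```python
# Hypothetical data-quality screen: flag records with missing or stale
# fields before they feed a prediction model. Field names and the
# 365-day staleness window are illustrative assumptions.
from datetime import date

REQUIRED = ("email", "industry", "last_activity")

def quality_issues(record, today=date(2024, 1, 1), max_age_days=365):
    """Return a list of human-readable problems found in one record."""
    issues = []
    for field in REQUIRED:
        if not record.get(field):          # catches absent, None, and ""
            issues.append(f"missing {field}")
    last = record.get("last_activity")
    if last and (today - last).days > max_age_days:
        issues.append("stale last_activity")
    return issues

record = {"email": "a@b.com", "industry": "", "last_activity": date(2022, 1, 1)}
# quality_issues(record) -> ["missing industry", "stale last_activity"]
```

Records that fail such a screen can be corrected or excluded, which addresses the inaccurate, incomplete, and outdated data problems the answer lists.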