AIGP Dumps

AIGP Free Practice Test

IAPP AIGP: Artificial Intelligence Governance Professional

QUESTION 6

- (Topic 2)
What is the main purpose of accountability structures under the Govern function of the NIST AI Risk Management Framework?

Correct Answer: A
The NIST AI Risk Management Framework’s Govern function emphasizes the importance of establishing accountability structures that empower and train cross-functional teams. This is crucial because cross-functional teams bring diverse perspectives and expertise, which are essential for effective AI governance and risk management. Training these teams ensures that they are well-equipped to handle their responsibilities and can make informed decisions that align with the organization’s AI principles and ethical standards. Reference: NIST AI Risk Management Framework documentation, Govern function section.

QUESTION 7

- (Topic 2)
What is the primary purpose of conducting ethical red-teaming on an AI system?

Correct Answer: B
The primary purpose of conducting ethical red-teaming on an AI system is to simulate model risk scenarios. Ethical red-teaming involves rigorously testing the AI system to identify potential weaknesses, biases, and vulnerabilities by simulating real-world attack or failure scenarios. This helps in proactively addressing issues that could compromise the system's reliability, fairness, and security. Reference: AIGP Body of Knowledge on AI Risk Management and Ethical AI Practices.

QUESTION 8

- (Topic 2)
After completing model testing and validation, which of the following is the most important step that an organization takes prior to deploying the model into production?

Correct Answer: A
After completing model testing and validation, the most important step prior to deploying the model into production is to perform a readiness assessment. This assessment ensures that the model is fully prepared for deployment, addressing any potential issues related to infrastructure, performance, security, and compliance. It verifies that the model meets all necessary criteria for a successful launch. Other steps, such as defining a model-validation methodology, documenting maintenance teams and processes, and identifying known edge cases, are also important but are secondary to confirming overall readiness. Reference: AIGP Body of Knowledge on Deployment Readiness.

QUESTION 9

- (Topic 2)
During the development of semi-autonomous vehicles, various failures occurred as a result of the sensors misinterpreting environmental surroundings, such as sunlight.
These failures are an example of what?

Correct Answer: B
The failures in semi-autonomous vehicles due to sensors misinterpreting environmental surroundings, such as sunlight, are examples of brittleness. Brittleness in AI systems refers to their inability to handle variations in input data or unexpected conditions, leading to failures when the system encounters situations that were not adequately covered during training. These systems perform well under specific conditions but fail when those conditions change. Reference: AIGP Body of Knowledge on AI System Robustness and Failures.

QUESTION 10

- (Topic 2)
CASE STUDY
Please use the following to answer the next question:
A local police department in the United States procured an AI system to monitor and analyze social media feeds, online marketplaces and other sources of public information to detect evidence of illegal activities (e.g., sale of drugs or stolen goods). The AI system works by surveilling the public sites in order to identify individuals who are likely to have committed a crime. It cross-references the individuals against data maintained by law enforcement and then assigns a percentage score of the likelihood of criminal activity based on certain factors like previous criminal history, location, time, race and gender.
The police department retained a third-party consultant to assist in the procurement process, specifically to evaluate two finalists. Each of the vendors provided information about their system's accuracy rates, the diversity of their training data and how their system works. The consultant determined that the first vendor's system has a higher accuracy rate and, based on this information, recommended this vendor to the police department.
The police department chose the first vendor and implemented its AI system. As part of the implementation, the department and consultant created a usage policy for the system, which includes training police officers on how the system works and how to incorporate it into their investigation process.
The police department has now been using the AI system for a year. An internal review has found that every time the system scored a likelihood of criminal activity at or above 90%, the police investigation subsequently confirmed that the individual had, in fact, committed a crime. Based on these results, the police department wants to forego investigations for cases where the AI system gives a score of at least 90% and proceed directly with an arrest.
Which AI risk would NOT have been identified during the procurement process based on the categories of information requested by the third-party consultant?

Correct Answer: A
The AI risk that would not have been identified during the procurement process based on the categories of information requested by the third-party consultant is security. The consultant focused on accuracy rates, diversity of training data, and system functionality, which pertain to performance and fairness but do not directly address the security aspects of the AI system. Security risks involve ensuring that the system is protected against unauthorized access, data breaches, and other vulnerabilities that could compromise its integrity. Reference: AIGP Body of Knowledge on AI Security and Risk Management.