Product Manager Quality Plus Engineering, United States
Accountability, assurance, and explainability of AI systems have become issues of national security, personal identity, equity and safety, and public policy across the globe. In October 2022, the White House released the Blueprint for an AI Bill of Rights. The common element among the blueprint's five principles is trust. Similarly, the foundational element of all cybersecurity is trust. Courts, legislatures, and rulemaking bodies are now regulating the assurance of AI systems so they do not contribute to harmful misinformation, disinformation, and other forms of cyber distortion. Discover how to assure high-risk, public-facing, and decision-making AI systems using multiple risk management frameworks.
Learning Objectives:
Explain the current state of AI regulations, standards, and guidelines in the EU, U.S., Canada, and Australia.
Describe risk assurance requirements and potential regulations for automated, public-facing, decision-making tools.
Discuss risk assurance and auditing methodologies, including the NIST AI RMF, ISO/IEC 42001, and ISO/IEC 23894.
Learn how to address the three key questions of AI risk assurance.
Discuss the future of AI risk accountability and assurance with emphasis on tips and tools.