Risk Assessment as a Tool for Reducing AI-Related Harm
Trust in AI begins with understanding its risks
The present and potential far-reaching risks that some AI systems, technologies, and practices pose to human rights and the public interest make decisions about frameworks for governing AI especially consequential.[1] To minimize (ideally, avoid) those risks, comprehensive risk assessment (RA), a process of estimating the probability and/or the severity of adverse consequences (injuries or other harms) flowing from an event, could be incorporated into AI governance frameworks to give stakeholders the information they need to evaluate planned uses of AI. Done properly, ex ante RA produces useful information about potential harms to individuals and to society at large, taking into account relevant benchmarks, standards, criteria, and community norms, as well as inherent uncertainties, among other factors.
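To make the definition concrete, one common (though by no means mandated) convention treats a risk score as the product of an ordinal likelihood rating and an ordinal severity rating. The sketch below illustrates that convention in Python; the scale names, point values, and the multiplicative rule are illustrative assumptions, not drawn from any statute, standard, or framework discussed here.

```python
from enum import IntEnum

class Likelihood(IntEnum):
    """Illustrative three-point ordinal scale for probability of harm."""
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3

class Severity(IntEnum):
    """Illustrative three-point ordinal scale for significance of harm."""
    MINOR = 1
    MODERATE = 2
    SEVERE = 3

def risk_score(likelihood: Likelihood, severity: Severity) -> int:
    """Classic risk-matrix scoring: risk = likelihood x severity.

    The ordinal scales and the multiplicative rule are conventions,
    chosen here only to show the shape of the calculation.
    """
    return int(likelihood) * int(severity)

# Example: a hypothetical AI deployment judged likely to cause moderate harm.
print(risk_score(Likelihood.LIKELY, Severity.MODERATE))  # 6 on a 1-9 scale
```

Even this toy version shows why benchmarks and consensus norms matter: the score is only as meaningful as the shared definitions behind "likely" and "moderate."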
AI-specific RA has broad applicability, including as part of the deployment and use of facial recognition and information recommender systems, large language models and other generative technologies, algorithmic decision systems, and many other data-centric technologies.
According to the Organisation for Economic Co-operation and Development (OECD), as of June 2021, approximately 60 countries had developed or were considering AI-related strategies and policies.[2] Among them, governing authorities in Canada, the European Union (EU), New Zealand, Germany, the United Kingdom (UK), and elsewhere have proposed or implemented RA (or some aspect of RA) as part of their strategies.[3]
In the United States, the National Institute of Standards and Technology (NIST) issued a framework for conducting AI-specific RA in 2023,[4] and the National Security Commission on AI (NSCAI) recommended in its 2021 Final Report that certain federal agencies employ risk management.[5] Meanwhile, US lawmakers have introduced legislative measures mandating RA or RA-like requirements for certain AI systems and use cases (Table I), and lawmakers in a handful of states have adopted risk analysis as part of data privacy protection and fairness legal frameworks that are seen as directly regulating AI systems that harvest and use personal data (Table I).
Even so, no consensus legal construct for AI-specific RA exists, nor is there widespread agreement on what RA is or how it should be conducted, documented, and reviewed when an AI technology or practice is at issue.[8]
The result is an uneven and incomplete legal landscape in which the potential for harm may be lower in some jurisdictions than in others. It also invites a race to the bottom: jurisdictions that fail to impose legal or regulatory impediments, often for political reasons, become attractive to companies looking to reduce their regulatory burden, potentially exposing individuals in those jurisdictions to unacceptable harms.
Although AI-specific RA is not a panacea for AI-related risks, lawmakers can ensure RA becomes a widely used, viable tool for reducing risk by adopting a few key legal elements in future AI governance frameworks. These include:
(1) AI-specific RA should be mandatory at least for companies developing, deploying, or using AI systems known to potentially increase the risk of harmful impacts. The requirement should also extend to those engaging in AI practices known to lead to potential harm and to those whose AI activities potentially affect a significant number of individuals. Many companies already conduct risk analysis in their daily operations, so the burden of implementing a formal AI-focused RA program may not be onerous.
(2) Federal funding and authorization should be provided for ongoing work in crafting standards, benchmarks, criteria, and consensus norms for risk assessors to use in reaching conclusions about AI systems. Numerical estimates of risk may be unsuitable for some AI systems (for example, how does one numerically assess an acceptable degree of privacy degradation?), but without at least some standards and other measures, risk assessors may rely too heavily on narrative analysis, which can lack granularity and introduce bias (see the illustrative sketch after this list).
(3) A governance framework for AI should provide for independent review of RA reports. Among the reviewers should be subject matter experts in the relevant technology who have access to company records, complemented by ethicists and lawyers, among others. Those reviewers could report to a centralized authority, such as a governing agency or a commission (the Federal Trade Commission, for example), or be certified by that agency or commission.[9] Independent review, combined with reasonable enforcement efforts, may promote transparency around AI, lead to greater trust in AI systems and technologies and, in turn, help propagate the benefits of AI to more people.
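To illustrate what element (2) envisions, the sketch below shows how published benchmarks and thresholds might anchor an assessor's conclusions in measurements rather than narrative alone. The criterion names, threshold values, and rating labels are hypothetical, invented purely for illustration; real rubrics would come from the standards work described above.

```python
# Hypothetical rubric (illustrative only): each criterion maps a measurable
# property of an AI system to an ordinal rating, so different assessors
# reach comparable, benchmark-grounded conclusions.
RUBRIC = {
    # criterion: (upper bound for "acceptable", upper bound for "marginal")
    "false_match_rate": (0.001, 0.01),
    "demographic_error_gap": (0.02, 0.05),
}

def rate(criterion: str, measured: float) -> str:
    """Rate one measured value against the rubric's thresholds."""
    acceptable, marginal = RUBRIC[criterion]
    if measured <= acceptable:
        return "acceptable"
    if measured <= marginal:
        return "marginal"
    return "unacceptable"

# Example assessment of a fictional system's measured performance.
measurements = {"false_match_rate": 0.004, "demographic_error_gap": 0.06}
report = {criterion: rate(criterion, value)
          for criterion, value in measurements.items()}
print(report)
# {'false_match_rate': 'marginal', 'demographic_error_gap': 'unacceptable'}
```

A rubric like this does not eliminate judgment; it simply gives independent reviewers (element (3)) a documented, reproducible basis for questioning an assessor's conclusions.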
[1] The kinds of harms produced by AI technologies are well-documented. See, e.g., N.A. Smuha, Beyond the Individual: Governing AI’s Societal Harm, Internet Policy Review, 10(3) at 3 (2021), https://policyreview.info/pdf/policyreview-2021-3-1574.pdf.
[2] Organisation for Economic Co-operation and Development (OECD), State of implementation of the OECD AI Principles: Insights from national AI policies, OECD Digital Economy Papers No. 311, at 10 (2021), https://doi.org/10.1787/1cd40c44-en.
[3] Id.
[4] National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (hereinafter “NIST AI RMF”) (2023), https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf, mandated per the National Artificial Intelligence Initiative Act of 2020 (Division E of Public Law 116-283, the National Defense Authorization Act for Fiscal Year 2021), 15 U.S.C. § 278h–1(c) (2021).
[5] National Security Commission on Artificial Intelligence (NSCAI), The Final Report, at 115 (2021).
[8] See generally G. Ezeani et al., A survey of artificial intelligence risk assessment methodologies: The global state of play and leading practices identified, EY & Trilateral Research (2021); L.A. Yeung, Guidance for the Development of AI Risk and Impact Assessments, Center for Long Term Cybersecurity, Univ. of Calif., Berkeley (2021); I.D. Raji et al., Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing, Conference on Fairness, Accountability, and Transparency (FAT* ’20), January 27–30, 2020, Barcelona, Spain, ACM (2020).
[9] I.D. Raji, Radical Proposal: Third-Party Auditor Access for AI Accountability (slide presentation), Stanford 2021 Human-Centric AI Conference (2021).
Other source: A. Circiumara, Five key issues about the regulation of AI, The Platform Law Blog (July 5, 2022), https://theplatformlaw.blog/2022/07/05/five-key-issues-about-the-regulation-of-ai/.