Eliza Holland

 

Artificial Intelligence (AI) has the potential to drive economic growth,[1] produce higher living standards, and perhaps even prolong life.[2] However, the advent of AI comes with significant risks. These include not only risks to human safety[3] and labor markets,[4] but also discriminatory outcomes from algorithmic decision-making.[5] As cognizance of these and other risks has become mainstream,[6] national governments and other stakeholder groups have responded by rapidly promulgating guidelines for ethical AI use.[7] Although actual AI regulation remains in its nascent stages, there is now a developing consensus on best practices for safe and ethical AI.[8] This post examines two principles that national governments most commonly adopt: transparency and robustness.

  1. Transparency

The principle of transparency counsels forthrightness regarding AI’s presence (consumers should know when they are interacting with AI rather than with a human) and its results (service providers should be able to explain why the AI came to a certain result).[9] Computer scientist Ge Wang succinctly explains why transparency is vital for ethical AI. He describes a common misconception of AI as a “big red button”—while part of AI’s allure is its ability to spot patterns in complex phenomena that defy rule-based descriptions, this allure creates the temptation to inadvisably conceive of AI as “a technology that reliably delivers the right answers while hiding the process that leads to them.”[10] So, although discrimination in criminal justice, lending, and other areas predates algorithmic decision-making, the gloss of legitimacy contributed by the “big red button” effect makes a lack of transparency in algorithmic decision-making especially concerning.

National frameworks fairly frequently include action items for implementing the principle of transparency, though with varying degrees of specificity. The European Union’s Draft AI Act would require that certain AI systems be accompanied by disclosure as to, inter alia, the level of accuracy, robustness, and cybersecurity against which the AI system was tested and any foreseeable circumstance which may lead to risks to health and safety or fundamental rights.[11] The United States Federal Trade Commission advises that companies use transparency frameworks and independent standards, conduct and publish the results of independent audits, and consider opening the company’s data or source code to outside inspection.[12] Singapore’s model includes illustrations of companies that have, in the government’s opinion, implemented the principle of transparency well.[13] One practical suggestion from the Singapore framework is that companies use tools such as the Fry readability graph or the Flesch-Kincaid readability test[14] to ensure that consumers can actually understand communications about how the company uses AI.[15]
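To make the readability suggestion concrete, below is a minimal sketch of the Flesch-Kincaid grade-level calculation that such a check relies on. The formula itself is standard; the syllable counter is a crude vowel-group heuristic, and the sample disclosure text is invented for illustration rather than drawn from the Singapore framework.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Hypothetical AI-use disclosure a company might test before publishing it.
notice = ("We use an automated system to help review your application. "
          "You can ask us to explain any decision it makes about you.")
print(f"Approximate US grade level: {flesch_kincaid_grade(notice):.1f}")
```

A company could run every consumer-facing AI disclosure through a check like this and flag any text scoring above the grade level of its intended audience.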

  2. Robustness

A natural corollary of transparency, robustness—also referred to as accuracy—is likewise often cited in national guidelines. This principle addresses the quality of the datasets on which AI systems train. First, it is important that a dataset be as complete and representative as possible. A 2018 study of gender classification software by computer scientists Joy Buolamwini and Timnit Gebru illustrates this point. For the classification systems studied, darker-skinned females were the most misclassified group, with error rates of up to 34.7% (in contrast to 0.8% for lighter-skinned males).[16] The datasets on which these systems trained were “overwhelmingly composed of lighter-skinned subjects,” suggesting that the systems’ failure to recognize darker-skinned females was at least in part due to that group not being adequately represented in the training data.[17]
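The representativeness problem the study identifies can be made concrete with a simple composition check. The sketch below flags demographic groups that fall well below an equal share of a training set; the counts are invented to loosely echo the skew the study describes and are not Buolamwini and Gebru’s actual data.

```python
from collections import Counter

# Hypothetical demographic annotations for a face dataset; a real audit
# would read these from dataset metadata, as in the Gender Shades study.
labels = (["lighter_male"] * 790 + ["lighter_female"] * 590
          + ["darker_male"] * 370 + ["darker_female"] * 200)

counts = Counter(labels)
total = sum(counts.values())
equal_share = 1 / len(counts)

for group, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    share = n / total
    # Flag any group at less than half of an equal share of the data.
    flag = "  <-- underrepresented" if share < equal_share / 2 else ""
    print(f"{group:15s} {n:4d}  ({share:5.1%}){flag}")
```

A check like this catches only missing representation; as the next paragraph explains, it says nothing about whether the data itself encodes historical bias.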

A commitment to eradicating discriminatory AI requires more than just obtaining complete and representative datasets. Even the most complete datasets may be inherently biased, reflecting inequities in society. The output of an AI system using these datasets will therefore still be discriminatory. For example, researchers Genevieve Smith and Ishita Rustagi from the Center for Equity, Gender, and Leadership at UC Berkeley discuss gender bias in the consumer credit industry: “Early processes used marital status and gender to determine creditworthiness. Eventually, these discriminatory practices were replaced by ones considered more neutral. But by then, women had less formal financial history and suffered from discrimination, impacting their ability to get credit.”[18]

At this juncture it may be appropriate to consider why the burden to correct this inherent bias should fall on AI developers, given that bias predates their systems and is present despite their best intentions. The answer hearkens back to the “big red button” concept discussed under the principle of transparency—as Professor Michael Sandel states, “AI not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status.”[19] Professor Ruha Benjamin terms this phenomenon “the New Jim Code”—“the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective and progressive than the discriminatory systems of a previous era.”[20]

An encouraging proportion of national ethics guidelines and policy proposals state the importance of robust and accurate datasets, in terms of both representativeness and the avoidance of inherent bias.[21] Dubai’s guidelines, for example, not only state that organizations should refrain from training AI systems on data that is unrepresentative or inaccurate, but also that both developer and operator organizations should undertake exploration to identify potentially prejudicial decision-making tendencies in AI systems arising from biases in the data.[22] The FTC urges companies not only to “start with the right foundation,” referring to ensuring that datasets aren’t missing information from a particular population, but also to “watch out for discriminatory outcomes,” reducing the risk that a well-intentioned algorithm perpetuates racial inequity.[23] These statements lack guidance for next steps when a necessary dataset is in fact found to contain inherent bias, but they do at least reflect an understanding under this principle that using a “complete” dataset does not ensure a system will be bias-free.
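What the “exploration” Dubai recommends might look like in code is sketched below: it computes approval and false-denial rates per demographic group from a decision log. The figures and field names are invented for illustration; neither the FTC nor the Dubai guidelines prescribes any particular implementation.

```python
from collections import defaultdict

# Hypothetical decision log: (group, approved, actually_creditworthy).
decisions = [
    ("A", True, True), ("A", True, True), ("A", True, False),
    ("A", False, False), ("A", True, True), ("A", True, True),
    ("B", True, True), ("B", False, True), ("B", False, False),
    ("B", False, True), ("B", True, True), ("B", False, True),
]

stats = defaultdict(lambda: {"n": 0, "approved": 0,
                             "creditworthy": 0, "false_denials": 0})
for group, approved, creditworthy in decisions:
    s = stats[group]
    s["n"] += 1
    s["approved"] += approved
    s["creditworthy"] += creditworthy
    # A false denial: the applicant was creditworthy but was not approved.
    s["false_denials"] += (not approved) and creditworthy

for group, s in sorted(stats.items()):
    denial_rate = (s["false_denials"] / s["creditworthy"]
                   if s["creditworthy"] else 0.0)
    print(f"group {group}: approval rate {s['approved'] / s['n']:.0%}, "
          f"false-denial rate among the creditworthy {denial_rate:.0%}")
```

Disparities surfaced this way do not answer the harder question the guidelines leave open (what to do once inherent bias is found), but they at least make the instruction to “watch out for discriminatory outcomes” operational.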

  3. . . . Or What?

Critics have attacked ethical AI frameworks for lacking meaningful enforcement mechanisms.[24] While the underdeveloped regulatory landscape is worthy of criticism given the magnitude of potential risks that AI poses, these guiding principles may still play a significant role in AI regulation. The United Nations Educational, Scientific and Cultural Organization (UNESCO) points out that while ethical values and principles are not necessarily legal norms in and of themselves, they “can powerfully shape the development and implementation of policy measures and legal norms, by providing guidance where the ambit of norms is unclear or where such norms are not yet in place due to the fast pace of technological development combined with the relatively slower pace of policy responses.”[25]  

Recent developments in the United States suggest that critics may not have to wait long for the “bite” they crave from ethical frameworks,[26] and global consensus on guiding principles including transparency and robustness is a strong place to start.

 

_______________________

[1] AI is forecasted to provide $15.7 trillion in global economic growth by 2030. PwC, “Artificial Intelligence everywhere,” https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence.html (last visited Nov. 3, 2021).

[2] See University of Surrey, Extending Human Lifespans: Using Artificial Intelligence to Find Anti-Aging Chemical Compounds, SciTech Daily (July 24, 2021), https://scitechdaily.com/extending-human-lifespans-ai-built-to-find-anti-aging-chemical-compounds/.

[3] For example, since Tesla introduced its self-driving feature “Autopilot” in 2015, there have been at least ten deaths involving Autopilot in the United States. Neal Boudette, Tesla Says Autopilot Makes Its Cars Safer. Crash Victims Say It Kills, New York Times (July 5, 2021), https://www.nytimes.com/2021/07/05/business/tesla-autopilot-lawsuits-safety.html.

[4] See, e.g., Alex Salkever, What if AI is coming for jobs faster than we thought?, World Economic Forum (Sept. 10, 2018), https://www.weforum.org/agenda/2018/09/what-if-ai-is-coming-for-jobs-faster-than-we-thought; Devashish Shrestha, Disruption in the Labor Market due to the AI Revolution, Fusemachines (Mar. 15, 2019), https://fusemachines.medium.com/disruption-in-the-labor-market-due-to-the-ai-revolution-4e9349e52637.

[5] See Kori Hale, A.I. Bias Caused 80% of Black Mortgage Applicants to be Denied, Forbes (Sept. 2, 2021), https://www.forbes.com/sites/korihale/2021/09/02/ai-bias-caused-80-of-black-mortgage-applicants-to-be-denied/?sh=3157e9f436fe (discussing racially discriminatory algorithms used in lending decisions for home loans); Genevieve Smith & Ishita Rustagi, When Good Algorithms Go Sexist: Why and How to Advance AI Gender Equity, Stanford Social Innovation Review (Mar. 31, 2021), https://ssir.org/articles/entry/when_good_algorithms_go_sexist_why_and_how_to_advance_ai_gender_equity (discussing gender-based discrimination in algorithmic decision-making in the consumer credit industry).

[6] See Jessica Fjeld et al., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center Research Publication No. 2020-1, at 4 (Jan. 15, 2020), https://dash.harvard.edu/handle/1/42160420; Karen Hao, In 2020, let’s stop AI ethics-washing and actually do something, MIT Technology Review (Dec. 27, 2019), https://www.technologyreview.com/2019/12/27/57/ai-ethics-washing-time-to-act/.

[7] These outputs are given various names, including but not limited to “ethical frameworks,” “ethical guidelines,” “ethical principles,” “ethics for AI,” “national strategy for ethical AI,” and so on. These and similar terms are used interchangeably for purposes of this post.

[8] See Amit Choudhury, A closer look at Singapore’s AI Governance framework: insights for other governments, Global Government Forum (Jun. 5, 2021), https://www.globalgovernmentforum.com/singapores-ai-governance-framework-insights-governments/.

[9] See, e.g., Fact Sheet: Digital Charter Implementation Act, 2020, https://www.ic.gc.ca/eic/site/062.nsf/eng/00119.html (“The CPPA contains new transparency requirements that apply to automated decision-making systems like algorithms and artificial intelligence. Businesses would have to be transparent about how they use such systems to make significant predictions, recommendations or decisions about individuals. Individuals would also have the right to request that businesses explain how a prediction, recommendation or decision was made by an automated decision-making system and explain how the information was obtained.”); European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts Article 52 (Apr. 21, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206 (“Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.”).

[10] Ge Wang, Humans in the Loop: The Design of Interactive AI Systems, Stanford University Human-Centered Artificial Intelligence (Oct. 20, 2019), https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems.

[11] European Commission, supra note 9, Article 13.

[12] Elisa Jillson, Aiming for truth, fairness, and equity in your company’s use of AI, FTC Business Blog (Apr. 19, 2021), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

[13] See Model Artificial Intelligence Governance Framework, Second Edition 60-62 (Jan. 21, 2020), https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf. 

[14] These and other tests use formulas to calculate the US grade level required to understand a piece of text.

[15] Id. at 57. 

[16] Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 81 Proceedings of Machine Learning Research 1 (2018), http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

[17] Id.

[18] Smith & Rustagi, supra note 5.

[19] Christina Pazzanese, Ethical concerns mount as AI takes bigger decision-making role in more industries, Harvard Gazette (Oct. 26, 2020), https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.

[20] Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code 5 (2019).

[21] See, e.g., Ignacio Cofone, Policy Proposals for PIPEDA Reform to Address Artificial Intelligence (Nov. 12, 2020), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3740059 (discussing the challenge of retaining relevancy in the quickly-developing field of AI); Model Artificial Intelligence Governance Framework, supra note 13, at 36-40 (recommending a series of steps to ensure representativeness and to avoid biases in datasets); Ethical Framework for Artificial Intelligence in Colombia 35 (Aug. 2020), https://dapre.presidencia.gov.co/dapre/SiteAssets/documentos/ETHICAL%20FRAMEWORK%20FOR%20ARTIFICIAL%20INTELLIGENCE%20IN%20COLOMBIA.pdf.

[22] Smart Dubai, AI Ethics Principles & Guidelines 20 (2019), https://www.digitaldubai.ae/docs/default-source/ai-principles-resources/ai-ethics.pdf.  

[23] Jillson, supra note 12.

[24] See Sakiko Fukuda-Parr & Elizabeth Gibbons, Emerging Consensus on ‘Ethical AI’: Human Rights Critique of Stakeholder Guidelines, 12 Global Policy, Supp. 6, at 32 (July 2021), https://onlinelibrary.wiley.com/doi/epdf/10.1111/1758-5899.12965 (“Emerging consensus on ‘ethical AI’ is problematic for its lack of grounding in international human rights law and weak emphasis on accountability and participation.”); see also Jon Stokes, No, the FTC is not about to wade into the AI bias wars, jonstokes.com (Apr. 20, 2021), https://www.jonstokes.com/p/no-the-ftc-is-not-about-to-wade-into (discussing the FTC’s statements regarding AI ethics and, with others, questioning whether the FTC will actually hold businesses accountable for unethical use of AI).

[25] UNESCO Ad Hoc Expert Group, First Draft of the Recommendation on the Ethics of Artificial Intelligence 2 (Sept. 7, 2020), https://unesdoc.unesco.org/in/rest/annotationSVC/Attachment/attach_upload_feb9258a-9458-4535-9920-fca53c95a424.

[26] See Ryan McKenney et al., U.S. Artificial Intelligence Regulation Takes Shape, Orrick (Nov. 19, 2021), https://www.jdsupra.com/legalnews/u-s-artificial-intelligence-regulation-1161759/.