Managing the risks of inevitably biased visual artificial intelligence systems

Scientists have long been developing machines that attempt to imitate the human brain. Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that bias is not only reflected in the patterns of language, but also in the image datasets used to train computer vision models. As a result, widely used computer vision models such as iGPT and DALL-E 2 generate new explicit and implicit characterizations and stereotypes that perpetuate existing biases about social groups, which further shape human cognition.

Such computer vision models are used in downstream applications for security, surveillance, job candidate assessment, border control, and information retrieval. Implicit biases also manifest in the decision-making processes of machines, creating lasting impacts on people’s dignity and opportunities. Moreover, nefarious actors may use readily available pre-trained models to impersonate public figures, blackmail, deceive, plagiarize, cause cognitive distortion, and sway public opinion. Such machine-generated data pose a significant threat to information integrity in the public sphere. Even though machines have been rapidly advancing and can offer some opportunities for public interest use, their application in societal contexts without proper regulation, scientific understanding, and public awareness of their safety and societal implications raises serious ethical concerns.
Biased gender associations
A telling example of such biases appears in gender associations. To understand how gender associations manifest in downstream tasks, we prompted iGPT to complete an image given a woman's face. iGPT is a self-supervised model trained on a large set of images to predict the next pixel value, allowing for image generation. Fifty-two percent of the autocompleted images had bikinis or low-cut tops. In comparison, faces of men were autocompleted with suits or career-related attire 42 percent of the time; only seven percent of male autocompleted images featured revealing clothing. To provide a comprehensive analysis of bias in self-supervised computer vision models, we also developed the image embedding association test to quantify the implicit associations of the model that might lead to biased outcomes (a simplified sketch of this kind of test appears below). Our findings reveal that the model contains innocuous associations, such as flowers and musical instruments being more pleasant than insects and weapons. However, the model also embeds biased and potentially harmful social group associations related to age, gender, body weight, and race or ethnicity. The biases at the intersection of race and gender align with theories on intersectionality, reflecting emergent biases not explained by the sum of biases toward either race or gender identity alone.
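For readers interested in the mechanics, an embedding association test works much like its word-embedding predecessor: it compares how strongly two sets of target images associate with two sets of attribute images in the model's embedding space, summarized as a standardized effect size. The sketch below is a minimal illustration of that idea, not the published test implementation; it assumes image embeddings have already been extracted from a pre-trained vision encoder, and the randomly generated arrays stand in for real embeddings.

```python
# Minimal sketch of an embedding association test (WEAT/iEAT-style).
# Assumes image embeddings were already extracted from a pre-trained
# vision model; the random arrays below are illustrative placeholders.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of embedding w to attribute set A minus attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Differential association of target sets X and Y with attribute
    sets A and B, expressed in pooled standard-deviation units."""
    assoc_x = [association(x, A, B) for x in X]
    assoc_y = [association(y, A, B) for y in Y]
    pooled_std = np.std(assoc_x + assoc_y, ddof=1)
    return (np.mean(assoc_x) - np.mean(assoc_y)) / pooled_std

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 512  # embedding width; illustrative value
    # Hypothetical stand-ins for embeddings of flower vs. insect images
    # (targets) and pleasant vs. unpleasant images (attributes).
    flowers = rng.normal(0.05, 1.0, (8, dim))
    insects = rng.normal(-0.05, 1.0, (8, dim))
    pleasant = rng.normal(0.05, 1.0, (8, dim))
    unpleasant = rng.normal(-0.05, 1.0, (8, dim))
    print(f"effect size d = {effect_size(flowers, insects, pleasant, unpleasant):.2f}")
```

A positive effect size would indicate that the first target set (here, flower images) is more strongly associated with the first attribute set (pleasant images) than the second target set is; in real evaluations, statistical significance is typically assessed with a permutation test over the target sets.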
These models' perpetuation of biases that have been maintained through structural and historical inequalities has significant societal implications. For example, biased job candidate assessment tools perpetuate discrimination against members of historically disadvantaged groups and predetermine applicants' economic opportunities. When the administration of justice and policing rely on models that associate certain skin tones, races, or ethnicities with negative valence, people of color wrongfully suffer life-changing consequences. When computer vision applications directly or indirectly process information related to protected attributes, they contribute to these biases, exacerbating the problem by creating a vicious bias cycle that will continue unless technical, social, and policy-level bias mitigation strategies are implemented.
State-of-the-art pre-trained computer vision models like iGPT are incorporated into consequential decision-making in complex artificial intelligence (AI) systems. Recent advances in multi-modal AI effectively combine language and vision models. The integration of various modalities in an AI system further complicates the safety implications of cutting-edge technology. Although pre-trained AI is highly costly to build and operate, models made available to the public are freely deployed in commercial and critical decision-making settings and facilitate decisions made in well-regulated domains, such as the administration of justice, education, the workforce, and healthcare. However, due to the proprietary nature of commercial AI systems and the lack of regulatory oversight of AI and data, no standardized transparency mechanism exists that officially documents when, where, and how AI is deployed. Consequently, the unintentional harmful side effects of AI can live on long after the models that produced them have been updated or deleted.
Establishing unacceptable uses of AI, requiring extra checks and safety for high-risk products (such as those in the European Union's draft Artificial Intelligence Act), and standardizing the model improvement process for each modality and multi-modal combination to issue safety updates and recalls are all promising approaches to tackle some of the challenges that might lead to irreparable harm. Standards can also help guide developers. For example, the National Institute of Standards and Technology (NIST) released the special publication "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence" in 2022 and a draft AI Risk Management Framework summarizing many of these risks and suggesting standards for trustworthiness, fairness, accountability, and transparency.
Third-party audits and impact assessments could also play a major role in holding deployers accountable. For example, the Algorithmic Accountability Act of 2022, a House bill currently in subcommittee, would require impact assessments of automated decision systems. Yet third-party audits with a real expectation of accountability remain rare. The bottom line is that researchers in AI ethics have called for public audits, harm incident reporting systems, stakeholder involvement in system development, and notice to individuals when they are subject to automated decision-making.
Regulating bias and discrimination in the U.S. has been an ongoing effort for decades. Policy-level bias mitigation strategies have been slowly but effectively reducing bias in the system, and consequently in people's minds. Both humans and vision systems inevitably learn bias from the large-scale sociocultural data they are exposed to, so future efforts to improve equity and redress historical injustice will also depend on increasingly influential AI systems. Developing bias measurement and analysis methods for AI systems trained on sociocultural data would shed light on the biases in both social and automated processes. Accordingly, actionable mitigation strategies can be developed by better understanding the evolution and characteristics of bias. Although some vision applications can be used for good (for example, assistive and accessibility technologies designed to aid individuals with disabilities), we have to be cautious about the known and foreseeable risks of AI.
As scientists and researchers continue developing methods and appropriate metrics to analyze AI's risks and benefits, collaborations with policymakers and federal agencies can inform evidence-driven AI policymaking. Introducing the required standards for trustworthy AI would affect how the industry implements and deploys AI systems. Meanwhile, communicating the properties and impact of AI to direct and indirect stakeholders will raise awareness of how AI affects every aspect of our lives, society, world, and the law. Preventing a techno-dystopian reality requires managing the risks of this sociotechnical problem through ethical, scientific, humanistic, and regulatory approaches.
Source: https://www.brookings.edu/blog/techtank/2022/09/26/managing-the-risks-of-inevitably-biased-visual-artificial-intelligence-systems/