
[Project Description] Mitigating AI/ML Bias in Context: Establishing Practices for Testing, Evaluation, Verification, and Validation of AI Systems (Draft)

Date Published: August 18, 2022
Comments Due: September 16, 2022

Author(s)

Apostol Vassilev (NIST), Harold Booth (NIST), Murugiah Souppaya (NIST)

Announcement
The NCCoE has released a new draft project description, Mitigating AI/ML Bias in Context: Establishing Practices for Testing, Evaluation, Verification, and Validation of AI Systems. Publication of this project description begins a process to solicit public comments on the project's requirements, scope, and the hardware and software components for use in a laboratory environment.

To tackle the complex problem of mitigating AI bias, this project will adopt a comprehensive socio-technical approach to testing, evaluation, verification, and validation (TEVV) of AI systems in context. This approach will connect the technology to societal values in order to develop guidance for recommended practices in deploying automated decision-making supported by AI/ML systems. A small but novel part of this project will examine the interplay between bias and cybersecurity.

The initial phase of the project will focus on a proof-of-concept implementation for credit underwriting decisions in the financial services sector. We intend to consider other application use cases, such as hiring and school admissions, in the future. This project will result in a freely available NIST AI/ML Practice Guide.

Earlier this month, we announced a hybrid workshop on Mitigating AI Bias in Context on Wednesday, August 31, 2022. The workshop will now be held virtually via WebEx and will provide an opportunity to discuss this topic and work towards finalizing this project description. You can register through the workshop page. We hope to see you there!

Review the project description and submit comments online on or before September 16, 2022.

You can also help shape and contribute to this project by joining the NCCoE’s AI Bias Mitigation Community of Interest. Send an email to [email protected] detailing your interest.

Abstract

Managing bias in an AI system is critical to establishing and maintaining trust in its operation. Despite its importance, bias in AI systems remains endemic across many application domains and can lead to harmful impacts regardless of intent. Bias is also context-dependent. To tackle this complex problem, we adopt a comprehensive socio-technical approach to testing, evaluation, verification, and validation (TEVV) of AI systems in context. This approach connects the technology to societal values in order to develop guidance for recommended practices in deploying automated decision-making supported by AI/ML systems in a sector of industry. A small but novel part of this project will examine the interplay between bias and cybersecurity. The project will leverage existing commercial and open-source technology in conjunction with NIST Dioptra, an experimentation test platform for ML datasets and models. The initial phase of the project will focus on a proof-of-concept implementation for credit underwriting decisions in the financial services sector. We intend to consider other application use cases, such as hiring and school admissions, in the future. This project will result in a freely available NIST AI/ML Practice Guide.
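To make the kind of bias measurement this project will evaluate more concrete, here is a minimal illustrative sketch of two common group-fairness checks, demographic parity difference and disparate impact ratio, applied to hypothetical credit-underwriting decisions. All data, group labels, and rates below are invented for illustration; this sketch does not use Dioptra's API and is not the project's recommended methodology.

import numpy as np

def demographic_parity_difference(approved, group):
    # Difference in approval rates between the two groups (0 indicates parity).
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    return float(rate_a - rate_b)

def disparate_impact_ratio(approved, group):
    # Ratio of the lower to the higher approval rate; values below ~0.8
    # are often flagged under the informal "four-fifths rule."
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    return float(min(rate_a, rate_b) / max(rate_a, rate_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical model outputs: 1 = credit approved, 0 = denied,
    # for 1,000 applicants split across two protected-attribute groups.
    group = rng.integers(0, 2, size=1000)
    approved = (rng.random(1000) < np.where(group == 0, 0.55, 0.45)).astype(int)
    print(f"demographic parity difference: {demographic_parity_difference(approved, group):+.3f}")
    print(f"disparate impact ratio: {disparate_impact_ratio(approved, group):.3f}")

In practice, a TEVV workflow would compute checks like these, among many others (including individual-fairness and robustness tests), against a model's decisions on held-out data and track them across retraining cycles.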


Keywords

AI-assisted human decision-making; AI bias; AI fairness; artificial intelligence (AI); bias detection; bias mitigation; credit underwriting; human-computer interaction; machine learning (ML); machine learning model


Source: https://csrc.nist.gov/publications/detail/white-paper/2022/08/18/mitigating-ai-ml-bias-in-context/draft