Breaking News

“Explaining” machine learning reveals policy challenges – Science Magazine

Summary: There is a growing demand to be able to “explain” machine learning (ML) systems’ decisions and actions to human users, particularly when used in contexts where decisions have substantial implications for those affected and where there is a requirement for political accountability or legal compliance (1). Explainability is often discussed as a technical challenge in designing ML systems and decision procedures, to improve understanding of what is typically a “black box” phenomenon. But some of the most difficult challenges are nontechnical and raise questions about the broader accountability of organizations using ML in their decision-making.

One reason for this is that many decisions by ML systems may exhibit bias, as systemic biases in society lead to biases in data used by the systems (2). But there is another reason, less widely appreciated. Because the quantities that ML systems seek to optimize have to be specified by their users, explainable ML will force policy-makers to be more explicit about their objectives, and thus about their values and political choices, exposing policy trade-offs that may have previously only been implicit and obscured.

As the use of ML in policy spreads, there may have to be public debate that makes explicit the value judgments or weights to be used. Merely technical approaches to “explaining” ML will often only be effective if the systems are deployed by trustworthy and accountable organizations.
Source: https://science.sciencemag.org/content/368/6498/1433
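To make the point about explicit objectives concrete, here is a minimal illustrative sketch (not from the article, and all names and weights are hypothetical) of how a decision system's objective combines competing policy goals through weights that someone must choose up front. The point is that any explanation of the system's output ultimately rests on those declared weights, i.e. on value judgments.

```python
# Hypothetical sketch: a decision score that trades off two policy objectives
# via explicit weights. The weights encode value judgments that must be
# specified before the system runs, and that an explanation would expose.

from dataclasses import dataclass


@dataclass
class Case:
    expected_benefit: float   # predicted policy benefit, scaled to [0, 1]
    group_disparity: float    # estimated disparate impact, scaled to [0, 1]


def decision_score(c: Case, w_benefit: float, w_equity: float) -> float:
    """Weighted objective: choosing w_benefit vs. w_equity is itself a
    policy trade-off, not a purely technical parameter."""
    return w_benefit * c.expected_benefit - w_equity * c.group_disparity


if __name__ == "__main__":
    c = Case(expected_benefit=0.7, group_disparity=0.4)
    # The same case scores differently under different value weightings:
    print(decision_score(c, w_benefit=0.8, w_equity=0.2))  # ~0.48
    print(decision_score(c, w_benefit=0.5, w_equity=0.5))  # ~0.15
```

Under one weighting the case clears a plausible approval threshold; under another it does not. Nothing in the model changed, only the weights, which is why the article argues that explainability pushes the debate toward the values and political choices behind those weights.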