
6 Ways to Combat Bias in Machine Learning

Just like humans, data is deeply susceptible to bias. Humans create data, so it inevitably reflects our own biases, assumptions and blind spots. It's unavoidable, then, that social biases exist both in company data and in the background data that feeds modern natural language processing (NLP) algorithms. Despite this, there are ways to identify and counter biased decision-making by models. Although none of these methods is a silver bullet, and all of them should be used in conjunction with human input, they can help address potential issues as they arise.
 
Sources of Bias in an ML Task
From corpus issues to decision-making problems, bias presents itself in numerous ways in the field of machine learning (ML). All of the following are common sources of bias.
Background Bias. Language-model-based NLP approaches consume web-scale quantities of text to give the NLP systems background knowledge about how language works. Although the benefit of this method is that a small amount of training data can produce excellent results, social biases leak in through the pre-training corpus. One example is a tendency to associate European-American names with positive sentiment and African-American names with negative sentiment.
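As a rough illustration of how such associations can be measured, the sketch below probes pre-trained word vectors in the spirit of the WEAT association test: it compares how close two sets of first names sit to pleasant versus unpleasant words. The embeddings file and the word lists are placeholders, not a validated benchmark.

```python
import numpy as np
from gensim.models import KeyedVectors  # assumes gensim is installed

# Hypothetical path; any word2vec-format embedding file works here.
vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def mean_association(targets, attributes):
    """Average similarity between each target word and each attribute word."""
    sims = [cosine(vectors[t], vectors[a])
            for t in targets for a in attributes
            if t in vectors and a in vectors]
    return float(np.mean(sims))

# Placeholder word lists in the spirit of the WEAT experiments.
group_a_names = ["emily", "matthew"]
group_b_names = ["lakisha", "jamal"]
pleasant = ["joy", "love", "peace"]
unpleasant = ["agony", "terrible", "failure"]

# A positive gap suggests group A names sit closer to pleasant words
# than group B names do -- a sign of inherited background bias.
gap = (mean_association(group_a_names, pleasant)
       - mean_association(group_a_names, unpleasant)) \
    - (mean_association(group_b_names, pleasant)
       - mean_association(group_b_names, unpleasant))
print(f"association gap: {gap:.3f}")
```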
Perceptive Bias. Many ML training tasks seek to replicate human judgments, and those judgments may be based on existing biases, either conscious or unconscious. For example, one study found a strong tendency for white athletes to be described as hardworking and intelligent whereas Black athletes were labeled as physically powerful and athletic. Any training data coming from human judgment is very likely to contain social biases.
Outcome Bias. Data points not obviously derived from human judgment can also reflect existing social prejudices. A loan default is a factual event that either did or did not happen. The event may still be rooted in uneven opportunities, however. For example, people of color have suffered more job losses during the recent downturn and have been slower to regain their jobs. It is important to understand that there is no clear divide between “factual” versus “biased” datasets: Social biases can affect any measurable aspect of an individual’s life.
Availability Bias. Machine learning performs best with clear, frequently repeated patterns. Those who do not fit neatly into such patterns are more likely to be overlooked by ML systems. For example, a company hiring primarily from the U.S. may fail to consider attendees of foreign universities due to a lack of data. The use of different degree names and titles globally could also affect an algorithm’s decision-making.
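A first check for this kind of gap is simply counting how well each group is represented in the training data. The sketch below assumes a hypothetical applicants.csv with an education_country column and an arbitrary minimum-example threshold.

```python
import pandas as pd

# Hypothetical applicant training data; column names are placeholders.
df = pd.read_csv("applicants.csv")

# Count how many training examples exist per country of education.
coverage = df["education_country"].value_counts()

# Flag groups with too few examples for the model to learn reliable patterns.
MIN_EXAMPLES = 50
sparse_groups = coverage[coverage < MIN_EXAMPLES]
print("Under-represented groups the model may overlook:")
print(sparse_groups)
```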
 
Best Practices in Debiasing ML
Given all these issues, we should view machine learning with some suspicion, just as we should view human processes. To make strides in debiasing, we must actively and continually look for signs of bias, build in review processes for outlier cases and stay up to date with advances in the machine learning field.
Below are some of the techniques and processes that we can implement to address bias in ML.

6 Ways to Combat Bias

Anonymization and Direct Calibration
Linear Models
Adversarial Learning
Data Cleaning
Audits and KPIs
Human Exploration

Anonymization and Direct Calibration. Removing names and gendered pronouns from documents as they’re processed is a good first step, as is excluding clear markers of protected classes. Although this is an important start, it’s not a complete solution. These signals still show up in many places that are impossible to wholly disentangle. Research has shown that the bias remains in second-order associations between words: For example, “female” and “male” associated words still cluster and can still form undesired signals for the algorithms as a result. Nevertheless, randomizing names as we feed data into a model prevents the algorithm from using preconceptions about the name in its decision making. This is also a good practice in initial resume screening even when done by humans.
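As a rough sketch of the anonymization step, the snippet below uses spaCy's pretrained English pipeline to replace person-name tokens with a placeholder and neutralize common gendered pronouns before text reaches a model. The pipeline name and the pronoun map are assumptions; a production redaction step would need broader coverage.

```python
import spacy

# Assumes the small English pipeline is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

GENDERED = {"he": "they", "she": "they", "him": "them", "her": "them",
            "his": "their", "hers": "theirs"}

def anonymize(text: str) -> str:
    """Replace each person-name token with [NAME] and neutralize gendered pronouns."""
    doc = nlp(text)
    out = []
    for token in doc:
        if token.ent_type_ == "PERSON":
            out.append("[NAME]" + token.whitespace_)
        elif token.lower_ in GENDERED:
            out.append(GENDERED[token.lower_] + token.whitespace_)
        else:
            out.append(token.text_with_ws)
    return "".join(out)

print(anonymize("Maria Chen led the project; she exceeded her targets."))
```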
Linear Models. Deep models and decision trees can more easily hide their biases than linear models, which provide direct weights for each feature under consideration. For some tasks, then, it may be appropriate to trade the accuracy of more modern methods for the simple explanations of traditional approaches. In other cases, we can use deep learning as a “teacher” algorithm for linear classifiers: a small amount of annotated data is used to train a deep network, which then generates predictions for many more documents. These then train the linear classifier. This can approach deep learning accuracy, but allows a human to view the reasons for a classification, flagging potentially biased features in use.
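One possible shape of that teacher-student setup is sketched below: a deep "teacher" model, represented here by the hypothetical helpers load_unlabeled_docs and teacher_predict, labels a large pool of documents, and an interpretable logistic-regression "student" is trained on those labels so its feature weights can be inspected for suspicious signals.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical helpers: a document pool and an already-trained deep teacher.
unlabeled_docs = load_unlabeled_docs()
teacher_labels = teacher_predict(unlabeled_docs)

# Train an interpretable "student" on the teacher's predictions.
vectorizer = TfidfVectorizer(max_features=20000)
X = vectorizer.fit_transform(unlabeled_docs)
student = LogisticRegression(max_iter=1000)
student.fit(X, teacher_labels)

# Inspect the strongest features so a human can spot suspicious signals
# (e.g. names or gendered terms) driving the classification.
features = vectorizer.get_feature_names_out()
weights = student.coef_[0]
top = np.argsort(np.abs(weights))[::-1][:20]
for i in top:
    print(f"{features[i]:20s} {weights[i]:+.3f}")
```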
Adversarial Learning. If a model can’t reliably determine gender or race, it’s difficult for it to perform in a biased manner. Adversarial learning shares the weights in a deep network between two classifiers: One solves the problem of interest, and the other tries to determine some fact about the input, such as the author’s gender. The main classifier is trained as usual, but the adversarial classifier penalizes the shared weights whenever it correctly predicts the protected attribute, until it consistently fails. If the internal representation of the document contained a gender signal, we’d expect the adversarial classifier to eventually discover it. Since it can’t, we can assume the main classifier isn’t making use of this information.
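A minimal sketch of this idea, assuming PyTorch: a gradient-reversal layer lets the adversary train normally on the shared representation while pushing the encoder to strip out the protected-attribute signal. Network sizes and the toy batch are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # shared representation
task_head = nn.Linear(128, 2)        # the classifier we actually care about
adversary_head = nn.Linear(128, 2)   # tries to recover a protected attribute

def losses(x, task_y, protected_y):
    h = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(h), task_y)
    # The adversary sees the representation through the gradient-reversal layer,
    # so improving its accuracy pushes the encoder to remove the signal.
    adv_loss = nn.functional.cross_entropy(
        adversary_head(GradReverse.apply(h)), protected_y)
    return task_loss + adv_loss

# One training step on a random toy batch.
params = list(encoder.parameters()) + list(task_head.parameters()) + list(adversary_head.parameters())
optimizer = torch.optim.Adam(params)
x = torch.randn(16, 300)
loss = losses(x, torch.randint(0, 2, (16,)), torch.randint(0, 2, (16,)))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```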
Data Cleaning. In many ways, the best way to reduce bias in our models is to reduce bias in our businesses. The datasets used in language models are too large for manual inspection, but cleaning them is worthwhile. Additionally, training humans to make less biased decisions and observations will help create data that does the same. Employee training in tandem with review of historic data is a great way to improve models while also indirectly addressing other workplace issues. Employees are taught about common biases and their sources, then review training data looking for examples of biased decisions or language. The training examples can be pruned or corrected, while the employees hopefully become more careful in their own work in the future.
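One lightweight way to support such a review is to flag, rather than silently remove, training examples that contain terms from a reviewer-maintained list. The term list below is a placeholder; what belongs on it is itself a human judgment.

```python
import re

# Placeholder review list: terms whose presence warrants a second look,
# not automatic deletion.
REVIEW_TERMS = ["aggressive", "articulate", "emotional", "bossy"]
pattern = re.compile(r"\b(" + "|".join(REVIEW_TERMS) + r")\b", re.IGNORECASE)

def flag_for_review(examples):
    """Return (index, matched terms, text) for examples containing review terms."""
    flagged = []
    for i, text in enumerate(examples):
        hits = sorted(set(m.group(0).lower() for m in pattern.finditer(text)))
        if hits:
            flagged.append((i, hits, text))
    return flagged

reviews = ["She was bossy in meetings.", "Delivered the project ahead of schedule."]
for idx, terms, text in flag_for_review(reviews):
    print(idx, terms, text)
```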
Audits and KPIs. Machine learning is complicated, and these models exist within larger human processes that have their own complexities and challenges. Each piece in a business process may look acceptable, yet the aggregate still displays bias. An audit is an occasional deeper examination of either an aspect of a business or how an example moves through the whole process, actively looking for issues. Key performance indicators, or KPIs, are values that can be monitored to observe whether things are trending in the right direction, such as the percentage of women promoted each year. Audited examples may be atypical, and KPIs fluctuate naturally, but looking for possible issues is the first step towards solving them, however that may end up being accomplished.
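A KPI like the one mentioned above can be tracked with a few lines of analysis code. The sketch below assumes a hypothetical promotions.csv with year, gender and promoted columns, and an arbitrary five-point alert threshold.

```python
import pandas as pd

# Hypothetical promotion records; column names are placeholders.
df = pd.read_csv("promotions.csv")  # columns: year, gender, promoted (0/1)

# KPI: promotion rate by gender, tracked per year.
kpi = df.groupby(["year", "gender"])["promoted"].mean().unstack()
print(kpi)

# Simple alert when the gap between groups exceeds a chosen threshold.
latest = kpi.iloc[-1]
if latest.max() - latest.min() > 0.05:
    print("KPI alert: promotion-rate gap above 5 points; trigger an audit.")
```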
Human Exploration. This is one of the best ways of finding issues in real systems. Studies that sent employers identical resumes with stereotypically white and Black names as their only variable have demonstrated hiring biases in corporate America. Allowing people to play with inputs or search through aggregate statistics is a valuable way to discover unknown systemic issues. For particularly important tasks, you can even implement “bias bounties,” offering cash rewards to people who find clear flaws in an implementation. Depending on data sensitivity, this could mean access to the results of models in production, so researchers can statistically demonstrate ways in which the models have behaved in a biased manner, or an opportunity to create synthetic inputs in an effort to show that some pattern of use causes undesired behavior by the model.
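A simple form of this exploration is a counterfactual probe: feed the system inputs that are identical except for the name and compare the outputs. The score_resume call below is a stand-in for whatever model or service is under examination, and the names and template are illustrative only.

```python
# Counterfactual probe: identical inputs that differ only in the applicant's
# name should receive (nearly) identical scores.
resume_template = "{name} has five years of Python experience and an MBA."

name_pairs = [("Emily Walsh", "Lakisha Washington"),
              ("Greg Baker", "Jamal Robinson")]

for name_a, name_b in name_pairs:
    score_a = score_resume(resume_template.format(name=name_a))  # hypothetical model call
    score_b = score_resume(resume_template.format(name=name_b))
    print(f"{name_a} vs {name_b}: {score_a:.3f} vs {score_b:.3f} "
          f"(gap {abs(score_a - score_b):.3f})")
```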
 
Takeaway: Get Serious About Combating Bias
Clean and plentiful data, appropriate algorithms and human oversight are critical for the successful implementation of artificial intelligence applications in business processes. It may not be appropriate to apply these techniques to every problem, especially when data is insufficient. It’s also important to recognize the biases in human processes, however; eschewing AI does not make the issues go away.
A commitment to debiasing as an ongoing process, both with regard to ML and human agents, is therefore vital. Doing so can help diversify organizations, mitigate the impact of hidden systemic biases, and promote fairness across an organization, fostering positive outcomes in recruitment, retention and brand awareness efforts.

Source: https://builtin.com/machine-learning/bias-machine-learning