
Why AI Is The Sharpest Weapon In Industry’s Battle Against Discrimination

Strategic advisor at Blue Yonder.

“Fairness” is a complicated concept to address in the world of business. Artificial intelligence (AI), however, represents a sharp weapon that could be used for unprecedented levels of good as companies look to become discrimination-free.

The significance of this subject hit home last year after I was invited to deliver a keynote presentation on two subjects at AIXIA, the first German-French conference on artificial intelligence. The first subject was sustainability, a long-established application of artificial intelligence.

Hand in hand with this topic was the issue of discrimination-free AI. And it didn’t take long to realize why two of Europe’s biggest economies made this issue a top priority as machine learning (ML) begins to take hold of industry.

Historical Bias

Whenever you look at data, you quickly find that human decisions are inherently unfair. That’s not to say they are right or wrong, because that’s often subjective, but they are quite literally unfair. This is because the world in which we live is unbalanced; therefore, the data reflecting that world is biased by nature.

In society, we see this every day. Should the rich pay higher taxes? It’s not balanced, but many would say it’s fair. Yet you wouldn’t charge that same rich person more for a loaf of bread. Already, the idea of expenditure in proportion to income becomes inconsistent, and maybe that’s unfair in itself.

Now apply that notion to AI: a system in which data is fed into a machine so that an algorithm can derive decisions for humans to act upon, with better accuracy than they could ever achieve themselves.

If the data being fed is tainted by historical bias, that’s what will be relied upon to make future decisions. That cycle will not only continue, but be exacerbated to an unprecedented degree.

Take hiring patterns, for example. Gender and race are the classic parameters that most companies are looking to balance and make “fair.” If historical biases in the data throw off the intended aim of AI, then a human decision needs to be made as to what information to feed into the algorithm.

Without guidance toward fairness, ML may still unearth the best candidates, but there’s no guarantee it will adhere to a quota or address discrimination if it’s only relying on discriminatory evidence to make its decisions.
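
How would a company even notice such a failure? One simple audit is to compare selection rates across groups, in the spirit of the well-known “four-fifths rule” used in U.S. hiring audits. The minimal Python sketch below is my own illustration; the candidate data and the 0.8 threshold are assumptions, not a definitive implementation:

    # A minimal sketch of an adverse-impact audit on hiring outcomes.
    # The data and the 0.8 "four-fifths" threshold are illustrative.

    def selection_rates(outcomes):
        """outcomes: list of (group, hired) pairs."""
        totals, hires = {}, {}
        for group, hired in outcomes:
            totals[group] = totals.get(group, 0) + 1
            hires[group] = hires.get(group, 0) + int(hired)
        return {g: hires[g] / totals[g] for g in totals}

    def adverse_impact_ratio(outcomes):
        """Ratio of the lowest group selection rate to the highest."""
        rates = selection_rates(outcomes)
        return min(rates.values()) / max(rates.values())

    outcomes = [("men", True), ("men", True), ("men", False),
                ("women", True), ("women", False), ("women", False)]
    ratio = adverse_impact_ratio(outcomes)
    print(f"Adverse impact ratio: {ratio:.2f}")  # worth flagging if below 0.8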

Some high-profile examples where AI was left to its own devices emphasize the ethical battle at hand:

Facial recognition technology has been found to work far better for white people than for other races (and better for men than for women) because it was trained and tested primarily on white Americans. In China, the opposite is often found with locally developed products, where European faces prove more challenging to identify.

On the gender side of things, Amazon was challenged over a recruiting tool whose selections were found to skew toward men as a consequence of relying on historical samples that already contained unfair decisions.

These results were initially surprising, but at least the problem was identified. The AI community took it as an opportunity to develop better algorithms that can learn fair decisions from unfair data, and many companies now take care to use more balanced training data. It’s a step in the right direction, although work still needs to be done.
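
What does “more balanced training data” look like in practice? One common technique, sketched below as my own illustration rather than any specific company’s method, is to reweight examples so that each group carries equal total weight during training:

    # A minimal sketch of rebalancing by reweighting: each example is
    # weighted inversely to the size of its group, so under-represented
    # groups count equally in training. Group names and counts are
    # illustrative assumptions.
    from collections import Counter

    def balance_weights(groups):
        counts = Counter(groups)
        n_groups, n_total = len(counts), len(groups)
        return [n_total / (n_groups * counts[g]) for g in groups]

    groups = ["A"] * 80 + ["B"] * 20  # an imbalanced training set
    weights = balance_weights(groups)
    # Group A examples get weight 0.625 and group B examples 2.5, so
    # each group contributes a total weight of 50.0 to the training run.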

Who Decides What Is Fair?

Companies need to arm AI with the correct data. It needs the most unprejudiced and ethical set of instructions, in order to remove not only human bias but also historical inadequacies, so that balance can be pursued to the fullest degree.

The good news is that the algorithms and “instructions” are relatively easy to adapt. The technology is there, ready and waiting to remove discrimination.

But who decides what is fair?

Not only do you have to establish what’s “right,” but you then have to quantify that in a way that can be calculated by the kinds of systems we create.
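
To show what “quantify” can mean here, one common formalization is demographic parity: the model’s positive-decision rate should be (nearly) equal across groups. The sketch below, with made-up predictions and an arbitrary tolerance, is one illustrative way to turn that definition into a number a system can check:

    # A minimal sketch of demographic parity, one of several competing
    # computable definitions of "fair." The predictions and the 0.1
    # tolerance are assumptions for illustration.

    def parity_gap(predictions):
        """predictions: list of (group, predicted_positive) pairs."""
        totals, positives = {}, {}
        for group, positive in predictions:
            totals[group] = totals.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + int(positive)
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    preds = [("men", True), ("men", False), ("women", False), ("women", False)]
    if parity_gap(preds) > 0.1:
        print("Fails this fairness definition; revisit the data or constraints.")

Demographic parity is only one candidate definition, and it can conflict with others. Choosing among them is exactly the human judgment call this question is about.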

Should there be the same number of male and female IT professors, even if five times as many men as women study IT? Can you go further back, deduce that women were not given as much opportunity or encouragement to pursue IT, and decide it actually is fair to redress that balance further down the line?

By contrast, medicine in Germany is two-thirds women, while electrical engineering splits 72%-28% in the other direction. Is it then fair to mandate 50-50 hiring opportunities in both fields, or should the AI’s instructions reflect those potentially societal differences?

When it comes to the enterprise, social fairness is a web of complexity, and you first have to establish what you assess as ethical in order to realize those goals.

A Sharp Weapon To Wield

Once ethical guidelines have been drawn up, AI is set to be a game-changer. That is the reason two European countries brought the topic to the table.

Making the decision as to what is fair, and then feeding it into a machine that will work accurately against those definitions of fairness, is significant. It could be a huge leveler for some of the more blatant discrimination seen in areas such as HR and wage gaps.

Of course, if a contentious or even unfair notion is loaded into ML, then that discriminatory or unethical status quo would be exacerbated. But when used for good, we have a sharp weapon in our hands, one made of data and algorithms, that can undo cultural, historic and social imbalances across industries far faster than we could through enforcement alone.

The reason for its potential is simple: It removes the same human biases and influences that created such inequalities in the first place.

AI works from data. And as long as the data being lent to these algorithms is sharpened with good intentions, we have a very powerful weapon to wield in the fight against discrimination. The recent discriminatory incidents and the worldwide protests against them show how necessary such a weapon is.

