The ethical use of data in artificial intelligence

Just as we humans are biased, so is the technology we create.

Howard J. Ross, a leading expert on diversity, leadership, and organizations, said, “We have created this sort of ‘bias equals badness’ equation. We hear the word ‘bias,’ and we say it’s bad or wrong. In reality, bias can be bad or wrong…but it can also be tremendously helpful.” Unfortunately, the former is something we have seen far too often, with devastating consequences. The latter, however, is also true, especially when individuals and organizations recognize bias’s existence, understand its effects, and manage it.

The AI and machine-learning systems that underpin so much of businesses’ digital transformations today are designed to serve millions of customers, yet they are built by a relatively small, homogeneous group. Even with the best of intentions, these individuals cannot readily pinpoint their own biases.

My colleague, Ed Jay, recently shared that “Bias is an inherent part of the human experience. It’s the silent filter created by our lived experiences, a lens through which our everyday decisions pass. It shapes us. And often, we’re not even aware of it.” Everyone has biases. That is a fact—and it does not make us bad people. It is failing to recognize those biases that can lead to bad decisions in life, at work, and in relationships.

As our reliance on data grows, a number of bias-based incidents have come to light, such as facial recognition systems struggling to identify people of certain ethnicities and recruitment algorithms favoring men over women. The problem is that we cannot fully prevent bias from seeping into the systems we build; as in other realms of life, many of our biases are unconscious or unaccounted for.

So what can we do about it?

KNOW THE PROMISES—AND THE PERILS—OF AI

Businesses are in a bit of a bind when it comes to balancing the duality of AI. It can help companies scale, streamlining and optimizing key processes. It can also introduce new risks born of a lack of experience with what is, for all intents and purposes, still a relatively new technology.

The risks of AI run the gamut, and more and more companies are using AI-driven insights to inform business decisions. How can they know that the data is truly representative when it may emanate from potentially flawed sources or be harvested by algorithms imbued with bias?

This quote from Harvard Business Review hits the nail on the head: “There is often an assumption that technology is neutral, but the reality is far from it. Machine-learning algorithms are created by people, who all have biases. They are never fully ‘objective’; rather they reflect the world view of those who build them and the data they’re fed.”

We may not be able to eliminate bias completely, but we can limit its effects. Data ethics are an important first step.
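
In practice, that first step can be as concrete as auditing a training set’s demographic mix before a model ever sees it. Below is a minimal Python sketch of such a check; the record schema, field names, and the 80% under-representation threshold are illustrative assumptions, not an established standard.

```python
from collections import Counter

def representation_report(records, group_key, reference):
    """Compare a dataset's demographic mix against a reference population.

    records:   iterable of dicts (hypothetical training-set rows)
    group_key: the demographic field to audit (illustrative schema)
    reference: dict mapping group -> expected share of the population
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            # Flag groups far below their expected share (assumed threshold).
            "under_represented": observed < 0.8 * expected,
        }
    return report

# Toy example: a training set skewed 70/30 against a 50/50 population.
rows = [{"group": "A"}] * 70 + [{"group": "B"}] * 30
print(representation_report(rows, "group", {"A": 0.5, "B": 0.5}))
```

A report like this does not remove bias, but it surfaces skew early enough to correct the sampling before training begins.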

AN APPEAL FOR GREATER DATA AND AI ETHICS

The Alan Turing Institute advocates for a macro-ethics approach as a way to encompass all the complexities of data and AI ethics. The institute defines it as “an overall framework that avoids narrow, ad-hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework.” This, it continues, is what will maximize the value of data science in society.

Unlike micro-ethics, which is primarily concerned with individuals, macro-ethics considers the societal impact and responsibilities of technology. This thinking has led to calls for industry-wide ethical frameworks.

MAKE YOUR OWN CHANGE

Changing the system feels “big.” But there are small ways organizations can contribute to progress. Here are three concrete changes organizations can make themselves:

1. Have a multidisciplinary approach to technology decision-making.

The C-suite needs the consistent and considered input of all its teams, including IT, programmers and developers, data scientists, lawyers, and customer service. That last one is more critical than it may seem at first blush; your customer support team members are a direct line to your end-users, and they have invaluable insights into the effects of your products.

2. Constantly improve the diversity of your teams.

According to the U.S. Equal Employment Opportunity Commission, the high-tech sector, when compared to all private industry, still hires more white men than anyone else. Here is how representation breaks down:

  • 64% Men
  • 36% Women (26% in AI specifically, according to IBM)
  • 68.5% White
  • 14% Asian American
  • 8% Latinx
  • 7.4% African American

Improving diversity in AI and high tech by hiring, training, and upskilling more women, LGBTQ2+ people, people of color, people with disabilities, and people from a variety of life-experience backgrounds will enable your organization to make better-informed data decisions. It will also make your organization more resilient and adaptable to change. As IBM recently reported, 85% of AI professionals think their industry is becoming more diverse, and 91% of those people say it’s having a positive impact.
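
Representation statistics like these can also be turned into running checks on the hiring systems themselves. One long-standing heuristic is the EEOC’s four-fifths rule: when any group’s selection rate falls below 80% of the highest group’s rate, the process warrants scrutiny for adverse impact. The Python sketch below applies that rule to hypothetical outcomes from a screening model; the numbers and structure are illustrative, not real data.

```python
def adverse_impact_check(selection_rates):
    """Apply the EEOC four-fifths rule to per-group selection rates.

    selection_rates: dict mapping group -> fraction of applicants selected
    (hypothetical outcomes from a screening model).
    """
    benchmark = max(selection_rates.values())
    return {
        group: {
            "rate": rate,
            "ratio_to_top": round(rate / benchmark, 2),
            # Below 80% of the best-off group's rate -> adverse-impact flag.
            "flagged": rate < 0.8 * benchmark,
        }
        for group, rate in selection_rates.items()
    }

# Example: a model advances 30% of male applicants but only 18% of female ones.
print(adverse_impact_check({"men": 0.30, "women": 0.18}))
# -> women's ratio is 0.6, below the four-fifths threshold, so it is flagged.
```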

3. Adopt data annotation.

AI and its various subsets need to be fed information to create learning models. This is where data annotation comes in. Proper labeling of data makes algorithms and learning models more reliable and representative.
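
A common quality practice, sketched below in Python, is to collect labels from several annotators per item, consolidate them by majority vote, and route low-consensus items back for human review. The schema and the 75% consensus cutoff are illustrative assumptions, not any particular vendor’s method.

```python
from collections import Counter

def consolidate_labels(annotations):
    """Merge labels from multiple annotators into one label per item.

    annotations: dict mapping item_id -> list of labels, one per annotator
    (a hypothetical schema).
    """
    consolidated = {}
    for item_id, labels in annotations.items():
        label, votes = Counter(labels).most_common(1)[0]
        agreement = votes / len(labels)
        consolidated[item_id] = {
            "label": label,
            "agreement": round(agreement, 2),
            # Low consensus suggests an ambiguous or contested item
            # that a diverse review team should revisit (assumed cutoff).
            "needs_review": agreement < 0.75,
        }
    return consolidated

# Example: three annotators label two images.
print(consolidate_labels({
    "img_001": ["cat", "cat", "cat"],  # unanimous
    "img_002": ["cat", "dog", "dog"],  # split -> flagged for review
}))
```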

Of course, manually annotating every single piece of data and preparing it for integration into AI systems is a massively demanding task. It is becoming increasingly common for businesses to hire external partners to do this work, as ethically responsible data demands a highly diverse group of annotators, something that is difficult to achieve in-house. This is where TELUS International’s AI Data Solutions can help, via our global crowdsourced AI community of over one million data annotators and linguists.

A MORE ETHICAL FUTURE FOR AI

AI has incredible power, and as the saying attributed to Voltaire goes (although I prefer citing Peter Parker’s Uncle Ben), “With great power comes great responsibility.” Today we cannot fathom what AI’s potential will be even a year from now; cutting-edge applications quickly become outdated as the field expands and evolves at such a rapid pace. This means that we as individuals, leaders, and organizations must evolve in step, continually deepening our understanding of AI’s flaws, complexities, and potentially harmful ethical ramifications.

Ultimately, businesses are responsible for creating diverse workforces and ethical frameworks for how we use technology, especially technology that has outward consequences for customers and society at large. AI may never be human, but it needs to be humanly accountable.

Jeff Puritt is the president and CEO of TELUS International, a leading digital customer experience innovator that designs, builds, and delivers next-gen digital solutions for global and disruptive brands. TELUS International provides data collection & creation, data annotation, data relevance & validation, and linguistics services.
