Beat the Bots: Employer Risk in Delegating Hiring Practices to Artificial Intelligence—A Dilemma Worth Solving

As the hiring market surges in the post-COVID-19 world, companies may be tempted to automate hiring by adopting artificial intelligence (AI), freeing human resources professionals to engage with candidates at a higher level. After the European Commission's April 21, 2021 proposed legal framework for AI, it is time for U.S. labor and employment and data privacy lawyers to address the looming issues AI poses domestically when used for hiring.

Advantages of using AI in hiring include the speed with which such tools can sort promising candidates from irrelevant applicants at a time when many companies are looking to fill multiple positions quickly. However, the same sorting methods can create unintended liability if the algorithms screen out applicants because of their protected characteristics.

Before You Begin Using AI in HR, Consider…

The two main criticisms lodged against “hiretech” relate to (1) ethical concerns and (2) legal restrictions.

Ethical Concerns

The use of AI software in hiring is controversial largely because the makers of such software promise that their algorithms can reduce the human biases embedded in the hiring process. In theory, using AI software could eliminate biased hiring behavior, allowing for a diverse candidate pool and leading to a diverse workforce. However, the opposite has also been alleged—that these algorithms are only as good as their creators, and many rely on companies' prior workforce data, further entrenching bias in the hiring process and, consequently, the workforce.

HireVue provides a prime example of this debate. In 2019, the Electronic Privacy Information Center, a nonprofit, filed a complaint against HireVue with the Federal Trade Commission, claiming that the company's algorithm and AI software amounted to "unfair and deceptive trade practices." HireVue shot back that it had not violated any laws. The dispute centered on which factors HireVue included in its algorithm for scoring potential candidates. For example, the algorithm scores a candidate's gestures, posture, voice tone and cadence, and the content of their responses, producing an "employability score." Employers can use that score to decide whether the candidate advances to the next round of interviews.
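To make the dispute concrete, the sketch below shows one way a pipeline of this kind might combine interview features into a single composite score. It is purely illustrative—the feature names and weights are invented, and HireVue has not published its actual model:

```python
# Purely hypothetical sketch of a composite "employability score."
# Feature names and weights are invented for illustration; they do not
# reflect HireVue's (unpublished) model.
WEIGHTS = {
    "voice_tone": 0.20,
    "speech_cadence": 0.20,
    "gesture_score": 0.10,
    "response_content": 0.50,
}

def employability_score(features: dict[str, float]) -> float:
    """Combine per-feature scores (each 0.0-1.0) into one number.

    The ethical concern: inputs such as tone, cadence, and gestures can
    correlate with protected characteristics (disability, race, national
    origin), so the composite can encode bias even though no protected
    attribute appears explicitly anywhere in the formula.
    """
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

candidate = {"voice_tone": 0.8, "speech_cadence": 0.6,
             "gesture_score": 0.7, "response_content": 0.9}
print(f"Employability score: {employability_score(candidate):.2f}")  # 0.80
```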

However, since the complaint, HireVue has changed its policies to remove facial expressions as a scoring factor in its algorithm and has hired a third party to audit its algorithms. Its Master Service Agreement is publicly available on its website. HireVue is not the only artificial intelligence tool to face allegations of discriminatory tactics; it has simply received more publicity.

AI algorithms build on existing biases when they treat current employees as the model against which candidates are scored. Studies have shown that the facial expressions of people of color may score lower or be harder for such systems to interpret. Similarly, applicants with physical or cognitive disabilities or speech impediments may score lower because of response delays and the software's difficulty interpreting their speech, regardless of the content of what is said.
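A toy example, using entirely synthetic data, shows how "score candidates by resemblance to our current employees" mechanically downranks anyone who differs from the incumbent workforce on a biased axis:

```python
import statistics

# Synthetic data for illustration only. Each tuple is
# (years_experience, "communication style" score assigned by the old,
# potentially biased human process).
current_employees = [
    (5.0, 0.90), (7.0, 0.85), (6.0, 0.92), (4.0, 0.88),
]

# The "ideal candidate" is defined as the average existing employee.
centroid = tuple(statistics.mean(dim) for dim in zip(*current_employees))

def similarity_score(candidate: tuple[float, float]) -> float:
    """Higher = closer (Euclidean distance) to the average incumbent."""
    dist = sum((c, m) == () or (c - m) ** 2 for c, m in zip(candidate, centroid)) ** 0.5
    return 1.0 / (1.0 + dist)

# Two candidates with identical experience; the second was rated low on
# the biased "communication style" axis (e.g. due to a speech impediment).
print(similarity_score((5.5, 0.90)))  # resembles incumbents -> ~0.99
print(similarity_score((5.5, 0.40)))  # differs on the biased axis -> ~0.67
```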

Legal Restrictions

Businesses remain liable for discriminatory hiring practices even if an algorithm ultimately makes the discriminatory choices. Liability is possible under multiple anti-discrimination laws, including the Americans with Disabilities Act, enforced by the Equal Employment Opportunity Commission (EEOC), and potentially the Employee Polygraph Protection Act and the Genetic Information Nondiscrimination Act. These acts could apply under the argument that employers are using software to ask questions or reach conclusions in hiring that they are legally prohibited from asking or reaching themselves.

A more nuanced concern is whether the use of such tools violates applicants' privacy rights. Algorithms score candidates on a broad range of personal, physical, cognitive, and verbal factors. Some of these factors may segment an applicant into a group based on a disability the algorithm believes exists—one the candidate would not otherwise have been forced to reveal. Labeling or dividing applicants based on potentially private information may be illegal, or at a minimum unethical, without the applicant's explicit informed consent.

Some states have taken a more proactive approach to this practice, following the lead of the European Union (EU), whose proposed AI regulation treats use in hiring as high-risk and subjects it to heightened scrutiny. So far, Illinois, Texas, and Washington have state biometric laws limiting and regulating what biometric identifiers can be collected. New York, California, Washington, and Arkansas have additional protections for personal information that encompass biometric data.

Back to 10-Second Human Eye Resume Scans?

With so many ethical and legal issues at play, some employers may decide that the risks of using AI in hiring outweigh the rewards. For some businesses that may be true, but there are still advantages to using AI in hiring so long as employers are well aware of the risks. In fact, many companies already use AI in other steps of the hiring process and may consider it worth expanding further. In 2018, 67% of hiring managers and recruiters said that AI was already saving them time, and that number has likely increased. It is also worth noting that 43% of the same professionals surveyed believed that AI was helping them remove human bias. It is not a matter of avoiding AI because of its risks and costs, but of embracing it cautiously and with informed expectations.

The following steps should be addressed before a business enters into a contract with an AI software provider for its HR needs. Likewise, these are the same steps an AI software provider should anticipate and proactively answer with potential clients to cultivate trust in its people and processes:

Necessary Next Steps

Due Diligence. Look into your chosen software provider and determine whether its tools have faced prior or pending lawsuits or complaints and, if so, whether those issues have been resolved.

Examine EU Requirements. If relevant, assess the feasibility of meeting each of the EU's proposed requirements for high-risk AI systems, a category that includes hiring tools.

Legal Research. Review the federal Uniform Guidelines on Employee Selection Procedures; Title VII; the Age Discrimination in Employment Act (ADEA); EEOC guidance; state biometric laws (Illinois, Texas, and Washington); and state privacy laws covering biometric data (New York, California, Washington, and Arkansas) to ensure the software meets the applicable guidance and legal requirements.

Investigate Interference. Assess the likelihood that the algorithm could be manipulated to purposefully exclude people based on protected characteristics, and determine who would be liable if it were.

Require that the software provider:

Fully explains the algorithm—how it addresses bias, exclusion, security, consent, and coverage (the training data behind its models). Specifically ask about transparency and validity.

Provides bias audit reports and validation studies—and ask for permission to share them publicly. (A minimal example of the kind of adverse-impact check such an audit might include appears after this list.)

Ensures that there is a human element—a report, audit, or similar tool prepared internally for employers to review. Determine how the system selects which items are "flagged" for human review, and whether a human can review more than what the system selects.

Permits regular, extensive, and accurate auditing, and allows the algorithm to be revised if it implicitly learns to discriminate.
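As a concrete illustration of what such an audit can check, the sketch below applies the "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures referenced above: if one group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact. The applicant and selection numbers are invented:

```python
def adverse_impact_ratios(selected: dict[str, int],
                          applicants: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.

    Under the Uniform Guidelines' four-fifths rule, a ratio below 0.8 is
    generally treated as evidence of adverse impact worth investigating.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit numbers, invented for illustration.
applicants = {"group_a": 200, "group_b": 180}
advanced   = {"group_a": 100, "group_b": 54}   # passed the AI screen

for group, ratio in adverse_impact_ratios(advanced, applicants).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
# group_a: impact ratio 1.00 -> ok
# group_b: impact ratio 0.60 -> POTENTIAL ADVERSE IMPACT
```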

Source: https://www.jdsupra.com/legalnews/beat-the-bots-employer-risk-in-2400540/