EEOC Issues Guidance on the Use of Artificial Intelligence and Employee Selection

The Equal Employment Opportunity Commission (“EEOC”) recently released new guidance on how Artificial Intelligence (“AI”) software used in the employment selection process could violate Title VII of the Civil Rights Act of 1964 (“Title VII”). The guidance comes one year after the EEOC released similar guidance on AI software and the risk of violating the Americans with Disabilities Act (the “ADA”). The EEOC recognizes that companies want to use AI software in a range of employee selection decisions, and in these publications it discusses how a company can monitor its AI software so that it does not violate the ADA or Title VII.

What is Artificial Intelligence Software?

Congress, in the National Artificial Intelligence Initiative Act of 2020, defines AI to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

How does AI work in a Human Resources Context?

Software and applications that incorporate AI are used at various stages of employment, including hiring, performance evaluation, promotion, and termination. These tools include, but are not limited to: resume scanners; employee monitoring software that rates employees on the number or accuracy of their keystrokes or other factors; and “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
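For illustration only, here is a minimal Python sketch of the kind of keyword-based resume scanner described above. The required keywords and sample resumes are invented for the example; commercial tools are far more sophisticated.

```python
import re

# Hypothetical pre-defined requirements; invented for illustration.
REQUIRED_KEYWORDS = {"python", "sql"}

def passes_screen(resume_text: str) -> bool:
    """Advance the resume only if it mentions every required keyword."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return REQUIRED_KEYWORDS <= words

# Invented sample resumes.
resumes = {
    "candidate_a": "Built reporting pipelines in Python and SQL.",
    "candidate_b": "Managed a retail team and customer relations.",
}

for name, text in resumes.items():
    print(name, "advances" if passes_screen(text) else "rejected")
```

Even a rule this simple embeds judgment calls (which keywords, which threshold) that can disadvantage candidates who describe the same skills in different words.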

Potential Title VII Violations

Companies risk violating Title VII if they use AI tests or selection procedures that appear neutral but in practice exclude people based on race, color, religion, sex, or national origin. If a company uses AI tests or selection procedures to assist in hiring, it is important that the measures or filters be job-related and consistent with business necessity.

As an example of how a seemingly neutral algorithm can have a disparate impact on a protected class: in 2015, a large tech company discovered that the AI software it was using to filter resumes was heavily biased against women. The algorithm had been trained on resumes submitted to the company over the previous ten years and attempted to match the qualities it registered as “top talent” in those resumes. Because most of those applicants were men, the algorithm taught itself to prefer men. It penalized resumes that contained the word “women’s” and downgraded graduates of two all-women’s colleges.

Potential Americans with Disabilities Act Issues

AI software can also “screen out” an individual because of a disability if the disability causes that individual to receive a lower score or an assessment result that is less acceptable to the employer, and the individual loses a job opportunity as a result.

For example, a chatbot might be programmed with an algorithm that rejects all applicants who disclose significant gaps in their employment history. If an applicant’s employment gap was due to receiving treatment for a disability, the chatbot may screen out that person because of their disability, which would violate the ADA.
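As an illustration of that screening logic, the Python sketch below shows a blanket employment-gap rule of the kind described; the six-month cutoff and applicant data are invented for the example.

```python
# Assumed cutoff; invented for illustration.
MAX_GAP_MONTHS = 6

def screens_out(longest_gap_months: int) -> bool:
    # A blanket rule like this cannot distinguish a disability-related gap
    # (e.g., time off for medical treatment) from any other gap, which is
    # how it can "screen out" an applicant because of a disability.
    return longest_gap_months > MAX_GAP_MONTHS

# Invented applicants: longest employment gap, in months.
applicants = {"applicant_1": 2, "applicant_2": 14}

for name, gap in applicants.items():
    print(name, "rejected" if screens_out(gap) else "advances")
```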

What Should Companies Do?

If a company wants to use AI software, it needs to monitor whether the software is having an adverse effect on people afforded protections under the ADA and Title VII. One practical step is to periodically audit the software and review which types of candidates have been filtered out. If a disproportionate number of individuals in a protected class have been filtered out, it is important to learn why and, if necessary, adjust the AI software or algorithm. To help avoid an ADA violation, a company may tell applicants or employees what steps an evaluation process includes and ask them whether they will need reasonable accommodations to complete it.
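The EEOC’s Title VII Q&A describes the “four-fifths rule,” a general rule of thumb (not a definitive legal test) under which a selection rate for one group that is less than 80% of the rate for the group with the highest selection rate may indicate adverse impact. The Python sketch below shows what that comparison could look like in a periodic audit; the group labels and counts are invented.

```python
from typing import Dict

def selection_rates(applied: Dict[str, int], selected: Dict[str, int]) -> Dict[str, float]:
    """Selection rate per group: number selected divided by number who applied."""
    return {group: selected[group] / applied[group] for group in applied}

def flag_adverse_impact(rates: Dict[str, float], threshold: float = 0.8) -> Dict[str, bool]:
    """Flag any group whose rate is below `threshold` times the highest
    group's rate (the four-fifths rule of thumb)."""
    highest = max(rates.values())
    return {group: rate / highest < threshold for group, rate in rates.items()}

# Invented counts for illustration.
applied = {"group_a": 200, "group_b": 180}
selected = {"group_a": 60, "group_b": 25}

rates = selection_rates(applied, selected)   # group_a: 0.30, group_b: ~0.14
flags = flag_adverse_impact(rates)           # group_b flagged: 0.14 / 0.30 < 0.8
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, flagged: {flags[group]}")
```

A flag in an audit like this is a prompt to investigate why candidates were filtered out, not a legal conclusion.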

A company should take similar steps if it would like to use an outside vendor whose software relies on AI for these decisions. In that case, the company may want to ask the vendor what steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII or the ADA. If the vendor has not taken those steps, the company may want to consider alternative software.

Important Links:

Link to Q&A regarding Title VII: https://www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used

Link to Guidance regarding the ADA: https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence

Please contact Heather Hammond (hhammond@gravelshea.com) at
Gravel & Shea PC if you have questions or would like assistance.