Employing AI: Artificial Intelligence in the US Recruitment and Hiring Process
Is federal regulation on the horizon?
In recent years, a number of AI products have been developed that promise to streamline the candidate recruitment and screening process for employers. Given the existing framework of laws prohibiting discriminatory hiring practices, these technologies warrant additional scrutiny to ensure that they do not perpetuate biases that effectively discriminate against individuals on the basis of a protected characteristic. In addition to existing employment laws, a number of states and localities have passed legislation that regulates the use of AI in the hiring context.
There are a variety of AI applications that have been developed to assist companies in managing their recruitment and hiring processes. These products take a number of forms, including chatbots that liaise with candidates, search functions that allow employers to run keyword searches across large pools of resumes, and filtering functions that aim to identify candidates based on job requirements. In one notable application, a company purports that its algorithm can analyze the facial movements, word choice, and speaking voice of candidates and then assign them an "employability" score.
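By way of illustration only, the following minimal Python sketch shows the kind of keyword filtering described above; the required keywords, cutoff, and resume snippets are invented and do not reflect any particular vendor's product.

```python
# Minimal, hypothetical sketch of a keyword-based resume filter.
# The keywords, cutoff, and resume text below are invented for illustration.

REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found in the resume."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

resumes = {
    "candidate_a": "Experienced in Python and SQL; led the project management office.",
    "candidate_b": "Background in data entry and customer service.",
}

# A facially neutral cutoff like this is precisely the kind of practice
# that can produce a disparate impact if keyword usage correlates with
# a protected characteristic.
shortlist = [name for name, text in resumes.items() if keyword_score(text) >= 0.5]
print(shortlist)  # ['candidate_a']
```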
State and local law developments
In December 2021, the New York City Council enacted a law that aims to limit the discriminatory use of AI technology in the hiring process. The new law imposes obligations on employers and employment agencies that use automated employment decision tools to screen candidates for employment.
Automated employment decision tools are defined broadly and include "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons."
Employers that use such systems will be subject to the following requirements:
- The tool must have been subject to a bias audit conducted no more than one year prior to the use of the tool (see the illustrative sketch after this list); and
- A summary of the results of the most recent bias audit and the distribution date of the tool must be made publicly available on the website of the employer prior to use.
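The law does not specify how a bias audit must be conducted. Purely as an illustration, the sketch below computes one metric an auditor might plausibly report: selection rates by group, with impact ratios benchmarked against the EEOC's longstanding four-fifths guideline, under which a rate below 80% of the highest group's rate is generally regarded as evidence of adverse impact. The group labels and counts are invented.

```python
# Illustrative sketch of one metric a bias audit might report:
# selection rates by group and each group's impact ratio relative
# to the most-selected group. Group names and counts are invented.

audit_data = {
    # group: (candidates screened in, candidates assessed)
    "group_1": (48, 120),
    "group_2": (27, 110),
}

rates = {g: selected / assessed for g, (selected, assessed) in audit_data.items()}
best = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

Run on these invented numbers, group_2's selection rate of roughly 0.25 yields an impact ratio of about 0.61 against group_1, which would be flagged under the four-fifths benchmark.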
Moreover, employers must notify candidates of the following:
- That an AI tool will be used, with notice given at least ten days prior to use, and that the candidate may request an alternative selection process or accommodation;
- The "qualifications and characteristics" that the AI tool will use to assess the candidate; and
- If not disclosed on the employer's website, the type and source of the data collected for the AI tool and the employer's data retention policy.
The law will go into effect on January 1, 2023, and violations may result in civil penalties of $500 for the first violation and $1,500 for each subsequent violation, with each day on which an AI tool is used in violation of the law counted as a separate violation. These amounts accrue quickly: for example, thirty days of non-compliant use of a single tool could yield $500 + (29 × $1,500) = $44,000 in penalties.
Several states have also enacted, or are considering, legislation that regulates the use of AI in the hiring process. For example, Illinois passed the Artificial Intelligence Video Interview Act, which became effective on January 1, 2020. Under the Act, employers who use AI to analyze candidate video interviews are required to:
a) notify applicants that AI will be used in their video interviews;
b) obtain consent to use AI in each candidate's evaluation;
c) explain to applicants how the AI works and what characteristics the AI will track in relation to their fitness for the position;
d) limit sharing of the video interview to those who have the requisite expertise to evaluate the candidate; and
e) comply with an applicant's request to destroy his or her video within 30 days.
Existing federal laws
The use of AI in the hiring context may implicate a number of existing regulatory regimes. First, using AI to screen candidates may give rise to claims under Title VII of the Civil Rights Act of 1964 (Title VII), a federal law that protects applicants against discrimination based on protected characteristics, including race, color, national origin, sex, and religion. A number of other federal laws also prohibit discrimination in hiring, such as the Americans with Disabilities Act and the Age Discrimination in Employment Act.
These federal laws prohibit discrimination under two theories: disparate treatment and disparate impact. Disparate treatment claims generally arise from intentional discrimination, which is unlikely to be at issue when AI is used in the hiring process. Disparate impact claims, by contrast, target facially neutral practices that have the effect of harming a protected class. AI systems that perpetuate existing biases against protected classes may therefore give rise to disparate impact claims, which are often assessed through selection-rate comparisons of the kind sketched above.
It has been reported that the Equal Employment Opportunity Commission (EEOC) has opened two investigations into the use of AI in hiring practices, and in late 2020, 10 Senators wrote a letter to the EEOC requesting information on the commission's enforcement actions relating to discrimination resulting from the use of hiring technologies. In late October 2021, the EEOC Chair announced the launch of an initiative to ensure that AI tools used in the hiring process comply with federal civil rights laws.
The Fair Credit Reporting Act (FCRA) may also create obligations for companies that engage third parties to assess candidates. Under the FCRA, a third-party vendor that assembles consumer information and uses AI to make decisions about an individual's eligibility for employment may be considered a Consumer Reporting Agency. Using a report from a Consumer Reporting Agency to make employment decisions triggers notification obligations for the hiring company. Specifically, if the company takes adverse action against a potential hire based on information contained in the report, the company must send an adverse action notice to the affected individual. These notices must inform the candidate of the source of the information, of their right to view the information, and of their right to dispute incomplete or inaccurate information.
Looking ahead
New York City's law marks an interesting first move in the regulation of AI for hiring purposes in the United States. Other state and federal regulators are certainly aware of the potential risks of AI, but it remains to be seen whether its regulation will follow the piecemeal, sectoral approach the U.S. has taken with data privacy.
Interestingly, on December 10, 2021, the Federal Trade Commission announced that it was prepared to commence a new rulemaking process in order to ensure, among other things, "that algorithmic decision-making does not result in unlawful discrimination," which gives a strong indication that federal regulation is on the horizon for 2022.