Employers’ use of artificial intelligence in assessing job applicants and employees has increased rapidly over the past decade. These tools are used in a variety of contexts, such as making hiring decisions, determining promotions, and evaluating employee performance. However, as more employers implement forms of artificial intelligence in their hiring processes, a demand for regulation has emerged to combat resulting bias.
In December 2021, the Federal Trade Commission (“FTC”) issued an advance notice of proposed rulemaking titled “Trade Regulation in Commercial Surveillance,” which stated that the “Commission is considering initiating a rulemaking under section 18 of the FTC Act to . . . ensure that algorithmic decision-making does not result in unlawful discrimination.” This followed an October 2021 statement from FTC Chair Lina M. Khan that the FTC “must explore using its rulemaking tools to codify baseline [privacy] protections,” reasoning, in part, “that greater adoption of workplace surveillance technologies and facial recognition tools is expanding data collection in newly invasive and potentially discriminatory ways.”
Last month, both the Equal Employment Opportunity Commission (“EEOC”) and Department of Justice (“DOJ”) issued guidance on the use of artificial intelligence in employment processes to prevent violations of the Americans with Disabilities Act (“ADA”). The DOJ warned that “[e]ven where an employer does not mean to discriminate, its use of a hiring technology may still lead to unlawful discrimination.” The EEOC explained that steps an employer may take to avoid discrimination on the basis of race and sex “are typically distinct from the steps needed to address the problem of disability bias.” Both the EEOC and DOJ provided suggestions for avoiding ADA violations, including training staff to recognize and process requests for reasonable accommodation quickly, using an accessible test that measures an applicant’s job skills rather than disability, and ensuring that an employer is not unlawfully seeking medical or disability-related information.
Although federal regulation is still in the early phases, some states have already begun the process of implementing restrictions on employers’ use of artificial intelligence or automated decision-making.
For example, in March, the California Fair Employment and Housing Council (“FEHC”) issued “Draft Modifications to Employment Regulations Regarding Automated-Decision Systems.” Therein, the FEHC proposes to make it “unlawful for an employer or a covered entity to use qualification standards, employment tests, automated-decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee or a class of applicants or employees on the basis of a characteristic protected by this Act, unless the standards, tests, or other selection criteria, as used by the covered entity, are shown to be job-related for the position in question and are consistent with business necessity.” These changes, which would impose significant recordkeeping requirements, have not yet been fully implemented.
Reacting to the increased use of artificial intelligence and automated decision-making, the New York City Council passed a bill restricting employers’ use of “automated employment decision tools.” The law – which will take effect on January 1, 2023 – prohibits employers or employment agencies from using automated employment decision tools to screen a candidate for employment or promotion unless: 1) the tool has been the subject of a bias audit within one year of the tool’s use, and 2) a summary of the results of the most recent bias audit (and the distribution date of the tool to which such audit applies) has been made publicly available on the website of the employer or employment agency prior to the use of the tool. Candidates also have the right to request an alternative selection process or accommodation. The law not only includes notice requirements, but also imposes significant monetary penalties for violations.
What, if anything, should employers do? All employers should: 1) evaluate their computer-assisted employment processes to determine if any tools might be violating the ADA, 2) consider whether any of the DOJ or EEOC suggestions should be adopted, and 3) stay apprised of applicable law changes. New York City employers should determine if they use, or will use, automated employment decision tools. Employers using such technology should: 1) find an independent auditor to conduct the required bias audit of these tools, 2) develop an alternative selection process, and 3) draft notices that will be required under the law.
As with any new legislation affecting employment policy and regulations, it is important to review any longstanding employment policies to ensure compliance with the new law.
Thanks to Damaris Hernandez, 2021 Summer Associate and Recipient of the Honorable Walter R. Stone Diversity Fellowship, for her significant contributions to this blog post.
 Trade Regulation Rule on Commercial Surveillance, Office of Information and Regulatory Affairs (Dec. 10, 2021), https://www.reginfo.gov/public/do/eAgendaViewRule?pubId=202110&RIN=3084-AB69.
 Statement of Chair Lina M. Khan Regarding the Report to Congress on Privacy and Security, Commission File No. P06540, Federal Trade Commission (Oct. 1, 2021), https://www.ftc.gov/system/files/documents/public_statements/1597024/statement_of_chair_lina_m_khan_regarding_the_report_to_congress_on_privacy_and_security_-_final.pdf.
 The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, EEOC (May 12, 2022), https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.
 Draft Modifications to Employment Regulations Regarding Automated-Decision Systems, FEHC (Mar. 15, 2022), https://www.dfeh.ca.gov/wp-content/uploads/sites/32/2022/03/AttachB-ModtoEmployRegAutomated-DecisionSystems.pdf.
 N.Y.C. Admin. Code § 20-871 (2022).