Disclosures and Audits of AI Hiring Tools are Now Required in New York City, With Other Locations Sure to Follow

May 10, 2023

Topics: Hiring and Recruiting

The discussions around Artificial Intelligence (“AI”) – the pros, the cons, the concerns – have heated up over the last year. It is therefore not surprising that the conversation has turned to the use of AI in employment decisions.

Employers and employment agencies are increasingly using AI tools to help screen massive numbers of applications. These tools can swiftly search resumes for keywords and specific experience or skills. AI has begun to be used to manage existing employees too, with companies deploying the technology to help evaluate workers for promotions and to build out internal teams.

While it may seem at first that distancing recruiting selection from human decision makers would help eliminate bias, several studies have shown that AI hiring tools can actually amplify prejudicial or biased hiring. Laws are now being developed across the country to respond to the potential negative effects of AI recruiting tools, including facial recognition restrictions for interviews in Illinois and Maryland, AI recruiting rules in Washington, D.C., and proposed laws in California and at the federal level. A prime example of how these new AI regulations will impact employers recently went into effect in New York City.

The New York City Artificial Intelligence Law – the first of its kind in the nation – applies to employers and employment agencies that use AI tools to make hiring, recruiting, or promotion decisions. The law imposes requirements that are somewhat onerous for employers, including:

  1. Requiring that any AI tool undergo an annual “bias audit” by an independent auditor, and publishing the results; and
  2. Notifying candidates and employees, at least 10 days before an automated tool is used, that the tool will be used, and detailing what job qualifications and characteristics the tool evaluates.

The required notice can be included on an employer’s website, and we suggest that is likely the best and easiest way to comply. In addition, candidates must have a way of requesting that an alternative, non-AI selection process be used. Currently, it is not clear whether employers must actually provide an alternative to AI or only a means to request one, but we expect more guidance on that soon. Recruiters and hiring professionals should be prepared for the potential extra work of adjusting their process after receiving opt-out requests, and should make sure their managers and recruitment teams are aware of these requirements and ready, willing, and able to switch gears if necessary.

The NYC Department of Consumer and Worker Protection (DCWP), which is responsible for enforcing the new law, has indicated that it will not begin enforcement until July 5, 2023. However, the law gives individuals the right to bring claims for violations, with penalties that could be as high as $500.00 per day that the AI process is in use without proper auditing and notice.

If your company is operating or hiring for roles in New York City, and you use or intend to use AI in your recruitment or promotion process, we strongly suggest discussing with your employment counsel what resources are available for you, how best to inform individuals of your use of AI, and what your obligations are.

That said, all companies should pay close attention to these issues. As mentioned above, new regulation of recruiting practices is on the rise. But since companies could also face claims under existing law for practices that have a discriminatory impact, it is worthwhile for employers to go the extra step to be sure all their hiring processes, whether human or technology-based, are carefully vetted, fair, and equitable.

For more information about these or other employment law topics, please contact the authors Brooke Schneider and Timothy Eavenson, or your personal Greenwald Doherty attorney contact.