NYC Aims to Be First to Rein in AI Hiring Tools
Job candidates rarely know when hidden artificial intelligence tools are rejecting their resumes or analyzing their video interviews. But New York City residents could soon get more say over the computers making behind-the-scenes decisions about their careers.
A bill passed by the city council in early November would ban employers from using automated hiring tools unless a yearly bias audit can show they won’t discriminate based on an applicant’s race or gender. It would also force makers of those AI tools to disclose more about their opaque workings and give candidates the option of choosing an alternative process — such as a human — to review their application.
Proponents liken it to another pioneering New York City rule that became a national standard-bearer earlier this century — one that required chain restaurants to slap a calorie count on their menu items.
Instead of measuring hamburger health, though, this measure aims to open a window into the complex algorithms that rank the skills and personalities of job applicants based on how they speak or what they write. More employers, from fast food chains to Wall Street banks, are relying on such tools to speed up recruitment, hiring and workplace evaluations.
“I believe this technology is incredibly positive but it can produce a lot of harms if there isn’t more transparency,” said Frida Polli, co-founder and CEO of New York startup Pymetrics, which uses AI to assess job skills through game-like online assessments. Her company lobbied for the legislation, which favors firms like Pymetrics that already publish fairness audits.
But some AI experts and digital rights activists are concerned that it doesn’t go far enough to curb bias, and say it could set a weak precedent for federal regulators and lawmakers, who are examining ways to rein in harmful AI applications that exacerbate inequities in society.
“The approach of auditing for bias is a good one. The problem is New York City took a very weak and vague standard for what that looks like,” said Alexandra Givens, president of the Center for Democracy & Technology. She said the audits could end up giving AI vendors a “fig leaf” for building risky products with the city’s imprimatur.
Givens said it’s also a problem that the proposal only aims to protect against racial or gender bias, leaving out the trickier-to-detect bias based on disability or age. She said the bill was recently watered down so that it effectively just asks employers to meet existing requirements under U.S. civil rights laws prohibiting hiring practices that have a disparate impact based on race, ethnicity or gender. The legislation would impose fines of up to $1,500 per violation on employers or employment agencies, though it will be left up to the vendors to conduct the audits and show employers that their tools meet the city’s requirements.
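The bill itself does not prescribe how a bias audit must be calculated. A common reference point in existing U.S. hiring law, however, is the federal "four-fifths rule" for disparate impact: if one group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact. The sketch below illustrates that kind of check; the group labels and screening outcomes are hypothetical, and nothing in the legislation requires this specific computation.

```python
# Illustrative disparate-impact check (the "four-fifths rule").
# The NYC bill does not mandate this calculation; the data here is hypothetical.

def selection_rates(outcomes):
    """outcomes maps each group label to a list of 1 (advanced) or 0 (rejected)."""
    return {group: sum(results) / len(results) for group, results in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical outcomes from an automated resume-screening tool.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% advanced
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% advanced
}

for group, ratio in impact_ratios(outcomes).items():
    flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```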
The City Council voted 38-4 to pass the bill on Nov. 10, giving a month for outgoing Mayor Bill de Blasio to sign it, veto it or let it go into law unsigned. De Blasio’s office says he supports the bill but hasn’t said if he will sign it. If enacted, it would take effect in 2023 under the administration of Mayor-elect Eric Adams.
Julia Stoyanovich, an associate professor of computer science who directs New York University’s Center for Responsible AI, said the best parts of the proposal are its disclosure requirements to let people know they’re being evaluated by a computer and where their data is going.
“This will shine a light on the features that these tools are using,” she said.
But Stoyanovich said she was also concerned about the effectiveness of bias audits of high-risk AI tools — a concept that’s also being examined by the White House, federal agencies such as the Equal Employment Opportunity Commission and lawmakers in Congress and the European Parliament.
“The burden of these audits falls on the vendors of the tools to show that they comply with some rudimentary set of requirements that are very easy to meet,” she said.
The audits likely won’t affect in-house hiring tools used by tech giants like Amazon. The company abandoned its resume-scanning tool several years ago after finding it favored men for technical roles, in part because it was comparing job candidates against the company’s own male-dominated tech workforce.
There’s been little vocal opposition to the bill from the AI hiring vendors most commonly used by employers. One of those, HireVue, a platform for video-based job interviews, said in a statement this week that it welcomed legislation that “demands that all vendors meet the high standards that HireVue has supported since the beginning.”
The Greater New York Chamber of Commerce said the city’s employers are also unlikely to see the new rules as a burden.
“It’s all about transparency and employers should know that hiring firms are using these algorithms and software, and employees should also be aware of it,” said Helana Natt, the chamber’s executive director.