In a bid to tackle the persistent issue of AI bias in hiring, New York City enacted Local Law 144 (LL144) in 2021, requiring employers to audit automated employment decision tools (AEDTs) annually for race and gender bias. But a recent study by researchers at Cornell University, Consumer Reports, and the Data & Society Research Institute has uncovered major flaws in the law’s effectiveness. The study, not yet officially published but shared with The Register, reveals a stark lack of compliance among employers subject to LL144.
The reality of LL144’s ineffectiveness for AI bias
A closer look at LL144 and the study’s findings makes it evident that, despite its good intentions, the law lacks teeth to compel employers to address AI bias. Its broad wording gives employers considerable discretion in deciding whether their systems fall under its purview. Of the 391 employers sampled, only 18 had published the required audit reports, and a mere 13 had included transparency notices in their job postings.
The study’s co-author, Jacob Metcalf of Data & Society, pointed to LL144’s shortcomings, emphasizing that it grants employers too much discretion. Notably, the law does not mandate corrective action when audits uncover discriminatory outcomes, while publishing unfavorable results can expose employers to expensive civil lawsuits. The researchers also examined the experience of auditors reviewing AEDTs used by NYC companies, uncovering instances where employers chose not to publicize audit results with unfavorable numbers out of legal concern.
Global ramifications
The failure of New York’s attempt to curb AI bias has ripple effects on similar initiatives worldwide. Proposed legislation in other jurisdictions, including California, Washington D.C., and New York state, has faced setbacks as lawmakers reconsider their approaches in light of LL144’s shortcomings. Even the European Union’s AI Act, while provisionally agreed upon, faces challenges in defining rules for AI used in recruiting.
Notably, HR software firm Workday’s legal troubles underscore the urgency of addressing AI bias. Despite LL144’s limited success, the researchers argue that the law serves as a valuable learning experience and a crucial first step toward better regulation. Yet they stress the need to broaden the scope of future legislation, urging lawmakers to abandon abstract language and provide clearer definitions, particularly around what constitutes covered use of AEDT software.
A call for clearer legislation
A critical question about the future of AI bias regulation emerges: given the shortcomings of LL144, what lessons can be drawn to shape more effective and comprehensive legislation globally? As discussions around AI bias continue, the need for clearer definitions, broader scope, and stringent enforcement becomes increasingly evident. The path to regulating AI in hiring decisions may be fraught with challenges, but learning from LL144’s mistakes could pave the way for more robust and impactful regulation in the future.