The New York City Automated Employment Decision Tool law introduces important considerations regarding the use of AI and machine learning in hiring processes. How can companies ensure transparency in their AI hiring systems to avoid biases and demonstrate compliance with the law?

Fellow at CodeX, The Stanford Center for Legal Informatics & Generative AI Editor at law.MIT · 2 years ago

Companies can take several concrete measures to ensure transparency in their AI hiring systems and demonstrate compliance with the law. For example:

Documentation and explanation. Maintain comprehensive documentation about the design, training, and functioning of the AI system. This should include the data sources, algorithms used, and the rationale behind decisions.

Bias audits. Regularly conduct audits to identify and rectify any biases in the AI system, covering both evident and subtle biases that could disadvantage certain groups of applicants. A sketch of one metric commonly used in such audits follows this list.
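To make the bias-audit idea concrete, here is a minimal sketch of one metric often reported in audits of automated hiring tools: the impact ratio, i.e. each group's selection rate divided by the selection rate of the most-selected group. The data and column names below are hypothetical, and a real audit under the law has specific requirements (such as an independent auditor and defined demographic categories) that this sketch does not attempt to cover.

```python
import pandas as pd

# Hypothetical screening outcomes: one row per applicant, with the
# applicant's group label and whether the tool advanced them.
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "selected": [1,   1,   0,   1,   1,   0,   0,   1,   0,   1],
})

# Selection rate per group: share of applicants the tool advanced.
selection_rates = applicants.groupby("group")["selected"].mean()

# Impact ratio: each group's rate relative to the highest-rate group.
impact_ratios = selection_rates / selection_rates.max()

report = pd.DataFrame({
    "selection_rate": selection_rates.round(2),
    "impact_ratio": impact_ratios.round(2),
})
print(report)
```

Low impact ratios for a group would flag the tool for closer review; the point of running this kind of calculation regularly is to catch and document such disparities before they affect applicants.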

Director of Legal in Software · 2 years ago

The first step is education. Learn about current AI tools: how they work, what data they have been trained on, and for how long. The second step is being open and transparent about the company's use of such systems. The third step is relying on a multitude of sources for hiring information, not just one. This helps avoid the bias, prejudice, and incomplete or inaccurate data a tool may be relying on to produce its outputs.

