The New York City Automated Employment Decision Tool law introduces important considerations regarding the use of AI and machine learning in hiring processes. How can companies ensure transparency in their AI hiring systems to avoid biases and demonstrate compliance with the law?

Fellow at CodeX, The Stanford Center for Legal Informatics & Generative AI Editor at law.MIT · 2 years ago

Companies can take numerous measures to ensure transparency in their AI hiring systems and demonstrate compliance with the law. For example:

Documentation and explanation. Maintain comprehensive documentation of the AI system's design, training, and operation, including its data sources, the algorithms used, and the rationale behind its decisions.

Bias audits. Conduct regular audits to identify and rectify biases in the AI system, covering both evident and subtle biases that could disadvantage particular groups of applicants.
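To make the bias-audit point concrete: the NYC law's required audits center on impact ratios, computed as each category's selection rate divided by the selection rate of the most-selected category. The sketch below illustrates that calculation; the category names and counts are illustrative assumptions, not real audit data, and a real audit has additional requirements (e.g. independence of the auditor and intersectional categories).

```python
def impact_ratios(outcomes):
    """Compute impact ratios per category.

    outcomes: {category: (selected_count, total_applicants)}
    Returns {category: selection_rate / highest_selection_rate}.
    """
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    top = max(rates.values())  # selection rate of the most-selected category
    return {cat: rate / top for cat, rate in rates.items()}

# Illustrative (hypothetical) applicant data:
example = {
    "group_a": (40, 100),  # 40% selection rate
    "group_b": (25, 100),  # 25% selection rate
}
print(impact_ratios(example))  # group_a -> 1.0, group_b -> 0.625
```

A ratio well below 1.0 for a category (0.625 for group_b here) is the kind of disparity an audit flags for further review.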

Director of Legal in Software · 2 years ago

The first step is education: learn about current AI tools, how they work, what data they were trained on, and over what period. The second step is being open and transparent about the use of such systems. The third step is relying on multiple sources of hiring information rather than a single one, to avoid the bias, prejudice, or incomplete and inaccurate data that any one tool may be relying on to produce its outputs.

