The New York City Automated Employment Decision Tool law introduces important considerations regarding the use of AI and machine learning in hiring processes. How can companies ensure transparency in their AI hiring systems to avoid biases and demonstrate compliance with the law?

Fellow at CodeX, The Stanford Center for Legal Informatics & Generative AI Editor at law.MIT · 2 years ago

Companies can take several measures to ensure transparency in their AI hiring systems and to demonstrate compliance with the law. For example:

Documentation and explanation. Maintain comprehensive documentation about the design, training, and functioning of the AI system. This should include the data sources, algorithms used, and the rationale behind decisions.

Bias audits. Regularly conduct audits to identify and rectify any biases in the AI system. This includes both evident and subtle biases that could disadvantage certain groups of applicants.
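To make the bias-audit step concrete, the sketch below computes selection rates and impact ratios per demographic category, the core metrics reported in bias audits under NYC Local Law 144. The function name, category labels, and applicant counts are all illustrative, not real audit data or a prescribed methodology.

```python
# Minimal sketch of a disparate-impact check along the lines of the
# bias audits contemplated by the NYC AEDT law (Local Law 144).

def impact_ratios(selections):
    """selections maps each category to (selected, total) applicant counts.

    Returns each category's selection rate divided by the highest
    selection rate across categories (the "impact ratio").
    """
    rates = {group: sel / total for group, (sel, total) in selections.items()}
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical applicant counts per demographic category.
audit = impact_ratios({
    "group_a": (40, 100),  # 40% selection rate
    "group_b": (30, 100),  # 30% selection rate
    "group_c": (20, 100),  # 20% selection rate
})

for group, ratio in audit.items():
    # The EEOC's four-fifths rule of thumb treats ratios below 0.8
    # as a signal that the tool's outcomes warrant closer review.
    print(group, round(ratio, 2), "flag" if ratio < 0.8 else "ok")
```

Running an analysis like this on each category the law covers, and publishing the results, is the kind of evidence of regular auditing the answer above recommends.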

Director of Legal in Software · 2 years ago

The first step is education: learn about current AI tools, how they work, what data they were trained on, and over what period. The second step is being open and transparent about the use of such systems. The third step is relying on multiple sources of hiring information rather than a single one, to avoid the bias, prejudice, and incomplete or inaccurate data that a tool may be relying on to produce its outputs.

