Should Artificial Intelligence be tested by other Artificial Intelligence?

234 views · 1 Upvote · 2 Comments
CTO · 4 years ago

Currently, Twitter's image-cropping algorithm handles Black people in photographs badly: it crops them out or otherwise fails to recognize them. Is the problem the code, or is it the data? The algorithm is being trained on bad data, mostly images of white people, and changes need to be made. I imagine the Twitter QA team was all white, or the data used was all white, so everything looked fine during testing.

The problem is that when you put the algorithm into real use, it's not fine. There are multiple challenges that come down to the specification of use cases. We have to be more agile in our response: less anxious to declare that our technology meets all its required use or test cases, and more keen to learn from experience and change quickly. Bias can sit in both the data and the code, and the big change for us as IT people is understanding that implicit bias exists in training datasets independently of the algorithm, because you can have a great algorithm, but if your data is bad, you're in trouble. Walking this boundary, the ethical composition of training data and training appropriately, is really important.
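
The point about data versus code can be checked mechanically. Below is a minimal Python sketch (not from this discussion) of disaggregated evaluation: computing a model's error rate per demographic group in a labelled test set, so a dataset or QA process that looks fine in aggregate still surfaces subgroup failures. The CSV path and the column names ("group", "label", "prediction") are illustrative assumptions.

    # Minimal sketch of disaggregated evaluation: compare error rates across
    # demographic groups instead of reporting a single aggregate number.
    # The file name and column names are hypothetical placeholders.
    from collections import defaultdict
    import csv

    def error_rates_by_group(rows):
        """Return {group: error rate} so skew hidden by the overall average shows up."""
        totals = defaultdict(int)
        errors = defaultdict(int)
        for row in rows:
            group = row["group"]
            totals[group] += 1
            if row["prediction"] != row["label"]:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    if __name__ == "__main__":
        # Hypothetical export of test-set predictions from a QA pipeline.
        with open("test_predictions.csv", newline="") as f:
            rates = error_rates_by_group(list(csv.DictReader(f)))
        baseline = sum(rates.values()) / len(rates)
        for group, rate in sorted(rates.items()):
            print(f"{group}: error rate {rate:.1%} (group-average baseline {baseline:.1%})")

A large gap between groups in this kind of report is exactly what an aggregate-only test result, or a test set that under-represents a group, will hide.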

Executive Coach / Global Chief Information Officer & CISO in Education · 4 years ago

You need human beings asking the questions, just as you would when testing for implicit/explicit bias or anything else. Until you can capture every possible pattern of brain synapses and put it against every question, computers aren't going to get there. With quantum computers and maybe 50 years of that analysis it might become possible, but for the next 50-100 years, unless there is a major investment in quantum computing or something like it, and that synaptic analysis is done by those computers at massive scale, you're just not going to be able to reproduce the diverse questions that people will ask.

