How can CISOs/CIOs codify a set of ethical principles around AI?
At my previous company, we codified a code of conduct. Because the firefighters are just going to run towards the fire, so we had to put the rules of the road in place so that they didn't run over people unconsciously and cause other issues.
I agree with everything you're saying up to a point. Let's get a little theoretical here. Your assumption that injecting the human is my starting point for potential AI-driven challenges presumes uniform implementation, enforcement, and decisioning around incidents and events across departments and geographies. But when that is not uniform, or when all of a sudden you're seeing more, that changes things. Say the AI reports, "I'm seeing more cases of DOP incidents coming out of India," because I've got 15 call centers in India and the level of discipline there is not necessarily where it needs to be. Is the AI making an informed decision about tightening regulations and escalating more, or is it reflecting an unconscious bias that we have to be aware of and program through? You don't want to be answering those questions after the fact, when India is in an uproar. So what we're saying from where we are now makes perfect sense, but there are longer-term issues, ramifications, and principles to consider.
I think you're right. But I've heard from almost everybody that they're just now starting to contemplate it. If we can have a dialogue, there might be ways for people to borrow from each other when building their own principles. They might say, "Oh, crap, I get that I need that, but here's what you're missing in yours," and then use the community to help fill that need faster.
I agree. The regulatory stuff is critical, but it's only one component of the broader set of issues, because the regulatory component (CCPA, GDPR, etc.) is not going to address a potential racial bias or disability bias. Their focus is on the confidentiality of the data, not necessarily the inferences drawn from it. There's a great paper that the EU did on artificial intelligence and its ethical implications.