How can CISOs/CIOs codify a set of ethical principles around AI?



CISO, 1,001 - 5,000 employees
We are building a sense of a normative "should" when it comes to artificial intelligence around the regulatory imperative. So the framing discussion around "should" is attestation to various regulatory frameworks (CCPA and GDPR foremost amongst them, as the golden or rising bars for responsible handling of data). One of the challenges in evolving that conversation beyond strong regulatory frameworks is that when you're trying to build a critical-thinking approach to some of the new technologies and skills that are desperately needed, you have to look beyond that "should". There's a business-enabling and board-facing win in the regulatory narrative, because it will always have a place in highly regulated sectors. But when evolving the conversation around AI, you have to consider the regulatory piece as only a part of it.

The second point I would make is that we've all been in those meetings where we're reviewing the latest EDR platform or SOAR solution and there's a claim of AI/ML. It reminds me of the push towards detective black-box analytics five or six years ago. These pre-packaged solutions promise, "don't worry about what's happening down here, but there's protection." To the credit of the information security community, that's no longer good enough today. There's a need amongst us to drive towards validation of those claims. And then it goes back to the skilling piece: do we have the right people to ask those questions, and are we preparing them for that? That's a difficult thing.
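One way to make that validation push concrete is a short backtest of a product's verdicts against analyst-labeled historical events. The sketch below is a minimal illustration in Python; `vendor_detect` is a hypothetical stand-in for whatever verdict interface a real product exposes, and the sample events are invented.

```python
# Minimal sketch: backtest a vendor's "AI/ML detection" claim against
# analyst-labeled historical events. vendor_detect is a hypothetical
# placeholder, not a real product API.

def vendor_detect(event):
    # Placeholder verdict: pretend the product flags anything invoking PowerShell.
    return "powershell" in event["cmdline"].lower()

def validate(events):
    """Compare product verdicts to analyst ground truth; report precision/recall."""
    tp = fp = fn = 0
    for e in events:
        flagged = vendor_detect(e)
        malicious = e["label"] == "malicious"
        if flagged and malicious:
            tp += 1
        elif flagged:
            fp += 1
        elif malicious:
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

events = [
    {"cmdline": "powershell -enc AAAA", "label": "malicious"},
    {"cmdline": "notepad.exe",          "label": "benign"},
    {"cmdline": "powershell Get-Date",  "label": "benign"},
]
print(validate(events))  # {'precision': 0.5, 'recall': 1.0}
```

Numbers like these, run against your own historical data, give the board conversation a more defensible basis than the vendor's claim alone.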
Board Member, Advisor, Executive Coach in Software, Self-employed

I agree. The regulatory piece is critical, but it's only one component of the broader set, because the regulatory frameworks (CCPA, GDPR, etc.) are not going to address potential racial bias or disability bias. Their focus is on the confidentiality of the data, not necessarily the inferences drawn from it. There's a great paper the EU did on artificial intelligence and its ethical implications.

VP, Director of Cyber Incident Response in Finance (non-banking), 10,001+ employees
From a cybersecurity and incident response capability perspective, I've always been concerned with the event itself that led to whatever the investigation is. Invariably, almost everything we do in incident response starts with an event of some type, not with a person. At the start, who cares about the person? There's an event that started this whole thing, and eventually you're going to get down to: it was this computer, and it was this person behind the keyboard who did whatever the action was. So it's always event-before-person for me. There is that human component, where even analysts will go, "Geez, that says it was Jeff, but Jeff wouldn't normally do that kind of thing, would he?" Or, "Oh, Jeff's my friend and we'll just sweep this under the rug." So even though an AI component might come up, once it gets to the humans, the picture changes a lot in terms of what the response action is going to be.
Board Member, Advisor, Executive Coach in Software, Self-employed

At my previous company, we codified a code of conduct. The firefighters are just going to run towards the fire, so we had to put rules of the road in place so that they didn't unconsciously run people over and cause other issues.

Director of Security Operations in Finance (non-banking), 5,001 - 10,000 employees

I agree with everything you're saying, up to a point. Let's get a little theoretical here. Your assumption that the injection of the human is the first checkpoint for potential AI-driven challenges presumes uniform implementation, enforcement, and decisioning around incidents and events across departments and geographies. But when you get to a point where that is not uniform, or you get to a point where all of a sudden you're seeing more, that changes. If the AI says, "I'm seeing more DLP incidents coming out of India," because I've got 15 call centers in India and the level of discipline there is not necessarily where it needs to be, is the AI making an informed decision in terms of tightening controls and escalating more, or is it reflecting an unconscious bias that we have to be aware of and program around? You don't want to be answering those questions after the fact, when India is in an uproar. So what we're saying from where we are now makes perfect sense, but there are longer-term issues, ramifications, and principles to consider.
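To make that concern concrete, here is a minimal Python sketch of the kind of audit that could surface a skewed escalation pattern before the fact rather than after. The regions, counts, and the 0.8-1.25 review band are all invented for illustration.

```python
from collections import Counter

def escalation_rates(incidents):
    """Per-region escalation rate from (region, was_escalated) pairs."""
    totals, escalated = Counter(), Counter()
    for region, was_escalated in incidents:
        totals[region] += 1
        if was_escalated:
            escalated[region] += 1
    return {r: escalated[r] / totals[r] for r in totals}

def disparity_vs_reference(rates, reference):
    """Ratio of each region's escalation rate to a reference region's rate.
    Ratios far outside ~[0.8, 1.25] are a common rule-of-thumb trigger for
    human review of the model, not proof of bias by themselves."""
    ref = rates[reference]
    return {r: rate / ref for r, rate in rates.items()}

# Invented audit data: (region, model_escalated_the_incident)
incidents = ([("India", True)] * 120 + [("India", False)] * 280 +
             [("US", True)] * 40 + [("US", False)] * 260)

rates = escalation_rates(incidents)
print(rates)                                # {'India': 0.3, 'US': 0.133...}
print(disparity_vs_reference(rates, "US"))  # India at ~2.25x the US rate
```

A ratio like 2.25x doesn't tell you whether the model is right about discipline levels or encoding a bias; it tells you the question needs to go to humans before the uproar, not after.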

Director of Security Operations in Finance (non-banking), 5,001 - 10,000 employees
The assumption is that we, as large companies, are all looking at what our principles for AI are. Where I am, we're just looking at ours now, and it's been an eight-month endeavor to figure out what our principles are, what we want to advocate for, and what we want to do, and we're still working on it. From what I understand, there aren't a lot of companies out there right now having the same conversations.
Board Member, Advisor, Executive Coach in Software, Self-employed

I think you're right. But I've heard from almost everybody that they're just now starting to contemplate it. If we can have a dialogue, there might be ways in which people can grab and go from each other when building their own principles. They might say, "Oh, crap, I get that I need that, but here's what you're missing in yours," and then use the community to help fill that need faster.

CISO, 1,001 - 5,000 employees
I am not an AI-focused practitioner by any means, but with my limited experience, something that really occurs to me about AI, just as with more complex, advanced, inherently challenging information security disciplines, is that there's a real conversation to have around inequity. At the very core technical level (is your data structured? Is it unstructured? Do you even know where your data is?) there's a wide array of maturity in that space. I am not shirking the collective responsibility to think about privacy and aspire to its codification, but I also look at some of these smaller entities, and I think there's an element that can and should be done, yet there's also a fundamental problem of skilling and environmental maturation that has to set the floor for ML and AI to be effective atop it.
