Should the ethical issues arising from AI/ML be owned by the CISO?

1.3k views · 1 Upvote · 4 Comments

Board Member, Advisor, Executive Coach in Software, Self-employed
With machine learning and artificial intelligence, I had to write codes of conduct for data scientists, figure out where we were getting the training data, and determine whether it was ethically acquired or just "found on the internet." At Cylance, we were developing a persona product that used mouse movements and keystrokes to figure out whether the right person was at the keyboard. At Cymatic, we do that in the browser. But how do you do that without creating a ton of other privacy implications or false inferences? So you have all these slippery slopes with AI and security, particularly when it's used for identification, because there's a whole lot of other inferences that could be made. One of the things I learned, and it frustrated me with many peers over the past several years, is that a lot of my security peers said it's not their responsibility, even if there's a bias built into it indirectly. They'd say, "Yeah, but it's not a cybersecurity attack." And I say: build it into your SDLC, build it into your privacy by design; you own it. I look at it from a regular risk perspective, but I also look at the ethical and moral implications, because I want to create the principles and the compass so that we don't go down the slippery slope without realizing it and then say, "Oh shit," later.
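The behavioral signal such a persona product typically relies on can be sketched in a few lines. This is a purely illustrative example of the classic keystroke-dynamics features (per-key dwell times and inter-key flight times), not code from any actual Cylance or Cymatic product; the function name and event format are assumptions.

```python
# Illustrative sketch of keystroke-dynamics features, the kind of
# timing signal a "persona" identification product might use.
# All names and formats here are hypothetical.

def keystroke_features(events):
    """events: list of (key, press_ms, release_ms), ordered by press time.

    Returns (dwells, flights):
      dwells  - how long each key was held down
      flights - gap between releasing one key and pressing the next
    """
    dwells = [release - press for _, press, release in events]
    flights = [
        events[i + 1][1] - events[i][2]  # next press minus current release
        for i in range(len(events) - 1)
    ]
    return dwells, flights

# Example: typing "cat"
sample = [("c", 0, 90), ("a", 130, 210), ("t", 260, 340)]
dwells, flights = keystroke_features(sample)
# dwells = [90, 80, 80]; flights = [40, 50]
```

These timing vectors are exactly the kind of data that raises the privacy and false-inference concerns described above: the same features that distinguish users can also leak information the user never intended to share.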
SVP, CISO, 10,001+ employees

I don't run into that many CSOs any longer who say, "That's not my business," but you still have engineers who say, "Well, why are you looking at this?" I see fewer and fewer CSOs who say that; however, they often don't have the expertise or the structure to support it. Thinking from a CIA perspective, we have always over-rotated on confidentiality and not on integrity and availability. Another aspect of this is that it's nothing different from having good protective practices around any piece of software or engineering. For example, we use AI very extensively inside the network, not for anything other than optimization: really learning about the massive rise in that work, how we need to train the data, and how we can learn from it. I've found that over the past few years, good toolkits have come to market that can help data scientists and engineers test for some of these threat scenarios. If you think about it the same way you think about any product security, you just tool it a little differently for AI; nevertheless, it is part of your scope and you need to govern it properly.
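The threat-scenario testing mentioned above can be illustrated with a minimal, hand-rolled example (no specific toolkit's API is assumed): an FGSM-style check that a small, bounded perturbation of the input does not flip a model's decision. The linear model, weights, and epsilon values are hypothetical.

```python
# Hedged sketch of an adversarial-robustness check of the kind such
# toolkits automate. The model and parameters are illustrative only.
import numpy as np

def predict(w, b, x):
    """Linear scorer: positive score means class 1."""
    return float(np.dot(w, x) + b)

def fgsm_flip(w, b, x, eps):
    """Perturb x by eps in the direction that most reduces the score
    margin (for a linear model, the gradient sign is just sign(w));
    return True if the decision flips under that perturbation."""
    before = predict(w, b, x) > 0
    direction = 1.0 if before else -1.0
    x_adv = x - eps * direction * np.sign(w)
    after = predict(w, b, x_adv) > 0
    return before != after

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.1])            # score = 0.3 -> class 1
print(fgsm_flip(w, b, x, 0.05))     # small eps: decision holds
print(fgsm_flip(w, b, x, 0.2))      # larger eps: decision flips
```

In practice a test suite would run checks like this across a sample of real inputs and fail the build if the model flips under perturbations smaller than an agreed threshold, which is exactly the "treat it like product security" framing described above.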

Chief Information Officer in Manufacturing, 10,001+ employees
I think it's a joint effort with the CISO, Engineering, Architecture, and Operations to maintain and keep AI/ML in check when it comes to security. Each area plays a role in ensuring the security of machine learning and artificial intelligence.
Director of Security Operations in Finance (non-banking), 5,001 - 10,000 employees
I think, as practitioners in the profession, there are a couple of things we need to bring to the mix as potential roadblocks. One is the collective professionalization of CSOs across the board, driven by the skills gap and the jobs gap. We've got a lot of people coming in chasing dollars who are not fully, holistically skilled, and who don't understand that there's a hard and a soft portion to doing this sort of work. Many of them either grew up in the soft portion and don't have the hard portion, or grew up in the hard portion and don't care about the soft portion, which makes it difficult for them to truly be advisors and partners. So that's one challenge we're seeing collectively as we look at ethical implications, and it's exacerbated by the second roadblock: in less mature, less senior, and smaller organizations we are still looked at as a business roadblock rather than a business enabler. So when you start asking, "Yes, I know we can, but should we, and what are the principles that will guide us?" we run into the same ready-fire-aim business focus that went into outsourcing, offshoring, wireless, cloud, and everything else over the past 30 years. Those of us who've been around the block are just getting to the point where that level of operational maturity and understanding exists in these harder organizations. So as we look at these ethical issues and inject ourselves into them, so that we can properly secure them in our role to protect the company and, more importantly, provide shareholder value, that's the level of up-skilling and focus I would submit needs to happen in the up-and-coming crop of CSOs.
