Should the ethical issues arising from AI/ML be owned by the CISO?
Chief Information Officer in Manufacturing, 10,001+ employees
I think it's a joint effort among the CISO, Engineering, Architecture, and Operations to keep AI/ML in check when it comes to security. Each area plays a role in ensuring the security of machine learning and artificial intelligence.

Director of Security Operations in Finance (non-banking), 5,001 - 10,000 employees
I think as practitioners in the profession, there are a couple of things we need to bring to the mix as potential roadblocks. One is the collective professionalization of CSOs across the board, driven by the skills gap and the jobs gap. We have a lot of people coming in chasing dollars who are not fully, holistically skilled, and who don't understand that there is a hard and a soft portion to doing this sort of work. Many of them either grew up in the soft portion and lack the hard portion, or grew up in the hard portion and don't care about the soft portion, which makes it difficult for them to truly be advisors and partners. That's one challenge we're seeing collectively as we look at ethical implications, and it's exacerbated by the second roadblock: in less mature, less senior, and smaller organizations, we are still looked at as a business roadblock rather than a business enabler. So when you start asking, "Yes, I know we can, but should we, and what are the principles that will guide us?" we run into the same ready-fire-aim business focus that went into outsourcing, offshoring, wireless, cloud, and everything else over the past 30 years. Those of us who've been around the block are just getting to the point where that level of operation, maturity, and understanding exists in these organizations. So as we look at these ethical issues and inject ourselves into them, so that we can properly secure them in our role to protect the company and, more importantly, provide shareholder value, that is the level of up-skilling and focus I would submit needs to happen in the up-and-coming crop of CSOs.
I don't run into that many CSOs any longer who say, "That's not my business," but you still have engineers who ask, "Well, why are you looking at this?" I see fewer and fewer CSOs taking that position; however, they often don't have the expertise and the structure to support it. Thinking from a CIA perspective, we have always over-rotated on confidentiality and not on integrity and availability.

Another aspect of this is that it's no different from having good protection practices around any piece of software or engineering. For example, we use AI very extensively inside the network, and not for any purpose other than optimization: really learning about the massive rise in that work, how we need to train the data, and how we can learn from it. I've found that over the past few years, good toolkits have come to market that can help data scientists and engineers test for some of these threat scenarios. If you think about it the same way you think about any product security, you just tool it a little differently for AI; nevertheless, it is part of your scope and you need to govern it properly.