When it comes to AI, is the industry evolving as fast as the technology?


Director of Security Operations in Finance (non-banking), 5,001 - 10,000 employees
I applaud the progress that's been made, but we have to keep learning. If the software that's produced devalues your worth based on your identity, you would consider that a significant emotional event. 

The challenge I have is that I often see that approach without an understanding of the guiding principles we're going to use: What are the do's and don'ts we'll follow until we have those principles, and how will we create those checks and balances? I see a lot of eagerness to put out AI because it exists, and not much understanding of the potential impacts or willingness to examine them afterwards. That used to be the model many less mature organizations had around security. That's changed, but we need to make sure the same change happens with AI.

Let's keep learning and failing fast, but let's also understand the principles regarding use, as well as the checks and balances we'll put in place while we learn. I'm not hearing much discussion about that from the same companies that have been working on AI principles for the better part of a year, yet already have projects in flight for utilizing AI in their products. We need to get ahead of this with principle-based usage, or we'll end up both slowing AI from a reactionary standpoint—driven by people who don't understand it—and causing problems from which we won't be able to recover until people or organizations have been victimized. That concerns me.
President and National Managing Principal in Software, 501 - 1,000 employees
You've got to fail fast and learn from it, but I don't see anybody doing that. That's the problem. Companies invest in AI, hire some programmers and data analysts, and think it's great. But the very few that have been successful are the ones running it with daily reviews to check: Is this correct? Are we doing it the right way? Can we adjust the knobs? 

All of the academic literature that's out there forcefully says that human involvement is a necessity for AI to be successful because we need to have the ability to turn the knobs up or down to recognize when there's bias, etc. But that's the problem we've been living with in tech for eons: If something looks cool then we'll just buy it and put it out there, as if it's just going to work without us having to work at it.
CEO and Co-Founder in Software, 51 - 200 employees
Today, 99% of ML and AI models are black boxes. Even the researcher doesn't truly know what happens in the actual transformation. All they know is: this is my input, these are my tuning parameters, and this is the desired output. 

We do heuristics: we keep running thousands of simulations until we get the desired output, and then freeze that model. If there's even a small change to the model, we have to redo the whole thing. That's why ML and AI are so expensive. When you talk to the researchers, they can say that it's continuous, adaptive, etc., all they want. But the folks who really do it will never speak out, because the reality is that it's a black box and we do it by heuristics, which doesn't sound fancy by any stretch.
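The tuning loop described above—run many trials, keep the best configuration, then freeze it—can be sketched roughly as a random search. This is a minimal illustration, not any specific team's pipeline; the `evaluate` function is a hypothetical stand-in for a real train-and-validate run:

```python
import random

def evaluate(params):
    """Hypothetical stand-in for training a model and scoring it on
    validation data. A real run would be far more expensive, which is
    why thousands of trials add up."""
    lr, depth = params["lr"], params["depth"]
    # Pretend the "desired output" is best near lr=0.1, depth=6.
    return -((lr - 0.1) ** 2 + 0.01 * (depth - 6) ** 2)

def tune(n_trials=1000, seed=0):
    """Heuristic search: try many parameter settings, keep the best."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"lr": rng.uniform(0.001, 1.0), "depth": rng.randint(1, 12)}
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    # The winning configuration is what gets "frozen"; any change to the
    # model or data means rerunning the whole search from scratch.
    return best_params, best_score

frozen, score = tune()
```

Nothing here adapts continuously: the search is brute-force trial and error, and the frozen result is only as good as the trials that produced it.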
Board Member, Advisor, Executive Coach in Software, Self-employed
Despite all the marketing hype, the reality of AI in the global security market is that it accounts for about 10% of spending, regardless of what people say they're doing. In 2020, Forbes reported total cybersecurity spending of $123 billion. But studies of the global AI-in-cybersecurity marketplace put total AI-related spending in late 2019 and early 2020 at about $9 billion.  

If you look at all the marketing, 90% of people say they're doing AI and ML. But when you look at the financial numbers, it's less than 10%. So there's a lot of hype there and it might be that they have some models in the early stages, or maybe they're not really doing AI and ML. They may just have an expert system that they're labeling as something else. That's where the context matters.
