What are your biggest concerns as AI capabilities become a standard industry offering?
The volume and cleanliness of the data required for these capabilities to succeed. I normally estimate that 70% of any AI project goes into sorting out data rather than adding value.
Artificial intelligence is not the same kind of intelligence as human intelligence and never will be. We can create all kinds of algorithms and apply machine learning -- allowing AI to make decisions much more quickly and speed up systems. However, computers can never develop intuition, which relies on gut feelings and deep emotional responses. We can model other kinds of emotions and interactions, but intuition -- often referred to as the "sixth sense" -- cannot be programmed. Taking it out of the equation in decision making (or worse, attempting to reduce it to a mathematical model) is a path we should not take.
When you start looking at the commercial world, the models are heavily biased by their data. Enough studies have demonstrated that, and not only in cyber; the best example was the RELIEF study.
When you're trying to do short, secure administration models, Palantir completely goofed up. Its model was racially biased, and that prompted MIT, together with IBM, to come out and say, "We've got to be really careful. Do we have the right sample size?" And when you look at security, we all know that we don't have large enough sample sizes. We want to talk about AI, but we don't even have good ML models yet. We stretch statistical models into ML and claim we have some sort of AI. I don't want to downplay the progress, because I've published plenty of papers in this area. But the key is drawing that fine line, and people are not willing to tell you where the data is coming from.
Understanding that you can't put Pandora back in the box is a necessity for what we do. AI is not good, bad, right, or wrong; it just is, and we have to figure out how to deal with it. All of us security and technology professionals need to understand that. But we also have to understand that AI has human consequences.
Is there a way to train models on ethics and morality?
Using this powerful technology in an ethically and morally correct way still depends on individual humans. Therein lies the danger.