What are your biggest concerns as AI capabilities become a standard industry offering? 

Anonymous Author
Artificial intelligence is not the same kind of intelligence as human intelligence and never will be. We can create all kinds of algorithms and apply machine learning, allowing AI to make decisions much more quickly and to speed up systems. However, computers can never develop intuition, which relies on gut feelings, a deep emotional response. We can model other kinds of emotions and interactions, but intuition, often referred to as the "sixth sense," cannot be programmed. Taking it out of the equation in decision making (or worse, attempting to reduce it to a mathematical model) is a path we should not attempt.
2 upvotes
Anonymous Author
The volume and cleanliness of data required for these capabilities to succeed. I normally estimate that 70% of any AI project is spent sorting out data rather than adding value.
2 upvotes
Anonymous Author
If AI is done right, it's a huge opportunity for technology; if we get it wrong, it's a huge slippery slope. There's already been evidence, in some cases, of the consequences of messing up broad aspects of AI: a few years ago, Microsoft put out a chatbot that didn't have the right monitoring, and people seeded it with harassment and hostile language. That trained the chatbot to say inappropriate things back to people, because it had been flooded with all this negative material. There have been instances of AI that have already had severe consequences, like the criminal risk assessment algorithms used by some courts to predict whether people will reoffend. A research study by a group of PhD students found that one such system was substantially discriminatory against African-Americans because of the way the data was fed into it. It used machine learning to determine somebody's likelihood of recommitting crimes anchored to a historical bias, which the machine then learned to carry forward. I've worried a ton about that at Cylance and even during my time at Intel.
1 upvote
Anonymous Author
Understanding that you can't put Pandora back in the box is a necessity for what we do. AI is not good, bad, right, or wrong; it just is, and we have to figure out how to deal with it. All of us in the security and technology professions need to understand that. But we also have to understand that AI has human consequences.
1 upvote
Anonymous Author
Is there a way to train models on ethics and morality? Using this powerful technology in an ethically and morally correct way still depends on the individual human. Therein lies the danger.
1 upvote
Anonymous Author
When you start looking at the commercial world, the models are very biased based on the data. There are enough studies that have demonstrated that, and not only in cyber; the best example was the RELIEF study. When you're trying to do short, secure administration models, Palantir completely goofed up. It was absolutely racially biased, and that prompted MIT to come out with IBM and say, "We've got to be really careful. Do we have the right sample size?" And when you look at security, we all know that we don't have large enough sample sizes. We want to talk about AI, but we truly don't even have good ML models. We stretch statistical models into ML and say we have some sort of AI. I don't want to downplay the progress, because I've published tons of papers in that area. But the key is trying to draw that fine line, and people are not willing to tell you where the data is coming from.
0 upvotes