
What are the biggest security pitfalls when it comes to AI?

Anonymous Author
This is the discussion that everybody should have. The security and risk aspects of AI are twofold. One is the set of risks associated with AI itself: the ethical, privacy and security implications that could be negative. Then there's AI used in the context of security, which has its own set of not only risks but also benefits. The potential dangers with AI in security lie in how it's used.

For example, if you're using AI to authenticate somebody, is there a bias in the authentication that discriminates against people? If you're using facial recognition, is there a color palette issue, where I appear reddish because there's an orange line in the frame and I've got an orange shirt in the back lighting, that might discriminate against or fail to recognize me? Those are things we need to consider, along with what can be inferred from the data in question.

That's one of the things I worried about at Cylance: there was a capability being developed that used mouse movements and keystrokes to determine whether the right person was on the other end of the machine. But there were other things you could infer from the typing and mouse movements: has this person had a couple of glasses of wine; do they have Parkinson's; are they impaired in some way? So there's another side to the collection and use of that data if you don't bar yourself from using it for other purposes. There have to be rules and boundaries.
1 upvote
Anonymous Author
For security purposes, I think AI is necessary to handle the sheer volume of security data out there so that you're able to combat threats. The technology is already in the wrong hands, so how do we ensure that the good guys have the same capabilities as the bad guys?
0 upvotes