What can we do to ensure protection against generative AI-created imposters posing as a company official and providing directives and guidance to staff and vendors?
Fractional CIO in Services (non-Government), 2 years ago
Same things you would do to ensure protection from human imposters - train your staff on what to look out for and give them a means to report issues.
This is not a new risk; generative AI is simply a new tool that reduces the effort required by scammers.
CIO in Telecommunication, 2 years ago
I've only seen this occurring through email (so far), so a good start is a strong policy requiring employees at all levels to use only internal email for company business, combined with secure email defenses that automatically block messages with impersonated credentials.
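One layer of that automated defense can be implemented with standard email authentication checks. Below is a minimal, hedged sketch in Python that flags a message as a suspected impersonation when its From address claims the company's domain but the receiving server's Authentication-Results header does not show a DMARC pass. The domain name and header contents are illustrative assumptions; a production gateway would rely on its mail platform's built-in SPF/DKIM/DMARC enforcement rather than hand-rolled parsing.

```python
import email
from email.utils import parseaddr

# Assumption: "example.com" stands in for the company's own domain.
INTERNAL_DOMAIN = "example.com"

def is_suspected_impersonation(raw_message: str) -> bool:
    """Return True when the From address claims the internal domain
    but the Authentication-Results header lacks a DMARC pass."""
    msg = email.message_from_string(raw_message)
    _, addr = parseaddr(msg.get("From", ""))
    domain = addr.rpartition("@")[2].lower()
    if domain != INTERNAL_DOMAIN:
        # External sender: not claiming to be an internal official.
        return False
    auth_results = msg.get("Authentication-Results", "").lower()
    return "dmarc=pass" not in auth_results
```

A message from `ceo@example.com` whose headers show `dmarc=fail` (or no authentication results at all) would be flagged, while an authenticated internal message would pass through.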
There is a lot to learn from the military, but fundamentally it comes down to training, training, training. We have to assume that this sort of phishing, including realistic video or phone calls, will continue; yet just as with today's state-of-the-art email phishing, there will always be small signs that employees should be trained to detect.
Releasing a payment, disclosing confidential payment details, granting access to confidential information, and similar high-impact transactions should be protected by specific, advanced training. For some organisations it may be necessary to add rotating keywords as a human safeguard on impactful transactions, since it is highly unlikely that an AI phisher could stay on top of such data sets - unless a more significant penetration has already occurred.
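The rotating-keyword idea can be sketched along the lines of a time-based one-time code, but using memorable words instead of digits so a human approver can speak it on a call. The sketch below is an illustrative assumption, not a described implementation: the word list, shared secret, and daily rotation window are all hypothetical, and in practice the secret would be distributed out of band and the word list kept private.

```python
import hashlib
import hmac
import struct
import time

# Hypothetical word list and shared secret; both are assumptions for
# illustration and would be kept confidential in any real deployment.
WORDLIST = ["orchid", "granite", "falcon", "copper", "juniper", "harbor"]
SHARED_SECRET = b"distributed-offline-to-approvers"
STEP_SECONDS = 86400  # keyword rotates once per day

def current_keyword(secret=SHARED_SECRET, at=None):
    """Derive the keyword for the current time step, TOTP-style:
    HMAC the step counter with the shared secret, then map the
    digest onto the word list."""
    now = time.time() if at is None else at
    counter = int(now // STEP_SECONDS)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(WORDLIST)
    return WORDLIST[index]
```

Both parties can compute the same keyword independently, so a caller asking for a payment release can be challenged for it; an AI phisher working only from scraped public data would have nothing to answer with.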
So generally, many layers of monitoring and zero-trust policies (which address people factors as well as technology ones) for anything that truly needs protection.
For the stuff that doesn't really matter in terms of damages... well, it's like other things in life: when someone goofs, we are simply going to have to get used to a little more embarrassment and a little less privacy in the 21st century.