What can we do to ensure protection against generative AI-created imposters posing as a company official and providing directives and guidance to staff and vendors?

257 views · 3 Comments
Head of Transformation in Government · 2 years ago

There is a lot to learn from the military, but fundamentally it comes down to training, training, training. We are going to have to assume that this sort of phishing, including realistic video and phone calls, will continue; but just like today's state-of-the-art email phishing, there will always be small signs that employees should be trained to detect.
Releasing a payment, disclosing confidential payment details, allowing access to confidential information, and similar high-impact transactions should be protected by specific, advanced training. For some organisations it may be necessary to include rotating keywords as human safeguards on impactful transactions (a sketch of one possible scheme follows this comment), as it is highly unlikely that the AI phisher will be able to stay on top of such data sets - unless more significant penetration has already occurred.
So, generally: many layers of monitoring and zero-trust policies (which address people factors as well as technology ones) for anything that truly needs protection.
For the stuff that doesn't really matter in terms of damages... well, it's kind of like other things in life: we are just going to have to get used to a little more embarrassment and a little less privacy in the 21st century when someone goofs.
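A minimal sketch of what a rotating-keyword safeguard could look like, assuming a shared secret distributed out of band and a daily rotation. The wordlist, secret value, and rotation window below are illustrative assumptions, not part of the commenter's proposal.

# Illustrative only: derive a human-readable verification word for today's date
# from a shared secret, so both parties can confirm it on a call before a
# high-impact transaction is approved. Wordlist and secret are hypothetical.
import datetime
import hashlib
import hmac

WORDLIST = ["harbour", "quartz", "meadow", "falcon", "ember", "lattice", "orchid", "granite"]

def keyword_for_date(shared_secret: bytes, day: datetime.date) -> str:
    """Deterministically map (secret, date) to a word both parties can check."""
    digest = hmac.new(shared_secret, day.isoformat().encode(), hashlib.sha256).digest()
    return WORDLIST[digest[0] % len(WORDLIST)]

if __name__ == "__main__":
    secret = b"rotate-me-out-of-band"   # distributed offline, never by email
    print(keyword_for_date(secret, datetime.date.today()))

Both the approver and the requester compute the day's word independently and confirm it verbally, so an imposter relying only on intercepted email or a cloned voice would not know it.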

Fractional CIO in Services (non-Government) · 2 years ago

The same things you would do to ensure protection from human imposters: train your staff on what to look out for and give them a means to report issues.

This is not a new risk; generative AI is simply a new tool that reduces the effort for scammers.

CIO in Telecommunication · 2 years ago

I've only seen this occurring through email (so far), so strong policies requiring employees at all levels to use only internal email for company business, combined with secure email defenses that automatically block messages with impersonated sender credentials, are a good start.
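To illustrate the kind of rule such email defenses apply, here is a minimal sketch that flags messages whose display name matches a known executive but whose sending address is outside the company domain. The domain, executive list, and sample message are hypothetical, and a real secure email gateway would also evaluate SPF, DKIM, and DMARC results.

# Illustrative sketch: flag display-name impersonation of known executives.
# The executive names and internal domain are hypothetical placeholders.
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}   # names attackers commonly spoof

def looks_like_impersonation(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    claims_exec = display_name.strip().lower() in EXECUTIVE_NAMES
    is_internal = address.lower().endswith("@" + INTERNAL_DOMAIN)
    return claims_exec and not is_internal

sample = "From: Jane Doe <jane.doe@freemail.example>\nSubject: Urgent wire\n\nPlease pay now."
print(looks_like_impersonation(sample))   # True: executive name, external address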

