As AI-generated social engineering (Deepfakes/vishing) scales, which area of your defense strategy feels most vulnerable right now?

Employee Awareness: Traditional training can't keep up with AI realism. (50%)

Detection Latency: We identify scams after the breach, not before. (64%)

Cross-Sector Intelligence: We lack real-time signals from outside our industry. (29%)

Vendor Risk: Our third-party partners are the "weakest link." (4%)

28 PARTICIPANTS
283 views · 1 comment
Founder and Strategist in Travel and Hospitality · 2 days ago

64% of respondents say their biggest vulnerability is detection latency, meaning scams are being caught after the damage is done.
This aligns with what we’re seeing across sectors: attackers are moving faster than traditional controls can respond.

Other signals from the poll:
50% flagged employee awareness as a growing gap, especially as deepfakes become more realistic.
29% highlighted the lack of cross-sector intelligence, showing how isolated threat data leaves organizations blind to emerging patterns.
4% pointed to vendor risk, which remains a low-frequency but high-impact concern.
These insights reinforce a broader shift: defense strategies need real-time, adaptive, and community-driven intelligence, not just annual training or post-incident analysis.

I’d be interested to hear from others:
What’s one change you’ve made (or want to make) to stay ahead of AI‑powered deception?
