If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks?

Extremely concerned — it’s a major risk: 17%

Somewhat concerned — it’s a potential risk: 68%

Mildly concerned — it’s on my radar: 11%

Not particularly concerned — I doubt we’ll be impacted: 1%

338 PARTICIPANTS
3.7k views, 2 Comments
Chief Data Scientist in IT Services, a year ago

This has been a risk for as long as IT systems have been around. I feel like we’re using the word “prompt” to talk about Generative AI solutions, but there are a lot of solutions based on Conversational AI. Any solution that has access to your back end and integrations is at risk of attack.

Board Member, Advisor, Executive Coach in Software, 3 years ago

What many don’t realize is that AI models, or more accurately ML models, are not themselves protected — whether it’s a model used as a virtual assistant, an ML model used in a trading platform in the financial industry, or an ML model embedded in an application like a CRM or even your security tools. So we should be asking a much broader question about the risks any ML model poses to our organizations and our customers.
