If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks?

Extremely concerned — it’s a major risk19%

Somewhat concerned — it's a potential risk66%

Mildly concerned — it’s on my radar13%

Not particularly concerned — I doubt we’ll be impacted2%

344 PARTICIPANTS
3.7k viewscircle icon2 Comments
Sort by:
Chief Data Scientist in IT Servicesa year ago

This has been a risk for as long as IT systems have been around. I feel like were using the word prompt to talk about Generative AI solutions, but there is alot of solutions that based on Conversational AI. But any solutions that has access to your back end and integration is a risk of attack.

Board Member, Advisor, Executive Coach in Software2 years ago

What many dont realize is that AI or more accurately ML models themselves are not protected - whether it be a model used as a virtual assistant or an ML model used in a trading platform in the financial industry or an ML model embedded in an application like a CRM or even your security tools.  So we should be asking a a much broader question on the risks any ML model poses to our organizations and our customers

Content you might like

Lack of security18%

Inaccuracy36%

Bias22%

Job losses5%

Negative cultural impact6%

Lack of IP protection5%

Widespread knowledge gaps5%

Economic volatility

Another threat

View Results

Composite AI6%

AI governance34%

AI orchestration and automation platform26%

Generative AI18%

Human-centered AI10%

Synthetic data3%

Other (Comment below)

View Results