What are the most pressing AI risks software teams are encountering today? What strategies do you find most effective in helping staff mitigate them?

289 views · 2 Comments
VP of Engineering · a month ago

Organization-wide AI use guidelines help. For example, we instruct developers on how to turn off data sharing in the tools they use (a hedged sketch of that kind of opt-out follows below). Security experts should educate the team, and that guidance should be backed by clear documentation.
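
To make that kind of guidance concrete, here is a minimal, hedged sketch of an opt-out script an organization might distribute. It assumes VS Code on Linux and uses its documented `telemetry.telemetryLevel` setting; the file path, and the equivalent switch for any other tool, are assumptions you would need to verify per product.

```python
import json
from pathlib import Path

# Illustrative only: this path and setting apply to VS Code on Linux; other
# tools (IDEs, CLIs, browser extensions) each have their own opt-out mechanism.
SETTINGS_PATH = Path.home() / ".config" / "Code" / "User" / "settings.json"

def disable_telemetry(settings_path: Path = SETTINGS_PATH) -> None:
    """Merge an org-mandated data-sharing opt-out into a developer's settings."""
    settings = {}
    if settings_path.exists():
        # Note: real settings files may contain comments (JSONC), which a
        # strict JSON parser rejects; a tolerant parser would be needed there.
        settings = json.loads(settings_path.read_text())
    # "telemetry.telemetryLevel" is VS Code's documented telemetry switch;
    # confirm the equivalent key for any other tool before rolling this out.
    settings["telemetry.telemetryLevel"] = "off"
    settings_path.parent.mkdir(parents=True, exist_ok=True)
    settings_path.write_text(json.dumps(settings, indent=2))

if __name__ == "__main__":
    disable_telemetry()
```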

Sr. Software Principal Engineer (Gen AI and ML Security) in Hardware · a month ago

The biggest risk is using AI without fully understanding it, especially since many AI systems lack transparency. If you build AI in-house, you know how it reasons; with external systems, you don't have that level of visibility. We focus on AI security and advise against using tools that are not fully understood. Education is our main strategy: we analyze every tool before implementation and train our team thoroughly. We avoid rushing adoption, prioritize understanding, and prefer building our own AI systems.

Another mitigation strategy is limiting the AI's context to only what is necessary for the task, rather than exposing it to all available information. This reduces the amount of sensitive data that can leak through the tool and limits the damage if the model or its provider is compromised; a minimal sketch of this kind of context minimization follows below.
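
As one illustration of that idea, here is a minimal Python sketch. `call_model` is a hypothetical placeholder for whatever in-house or vendor client a team actually uses, and the allow-list, redaction patterns, and character budget are illustrative assumptions rather than a production filter.

```python
import re

# Hypothetical stand-in for whatever LLM client the team actually uses.
def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with your vendor's or in-house client")

# Crude, illustrative patterns for secrets that should never reach the model.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
]

def build_minimal_context(task: str, documents: dict[str, str],
                          allowed_docs: set[str], max_chars: int = 8000) -> str:
    """Include only explicitly allowed documents, redact obvious secrets,
    and cap total size instead of dumping everything into the prompt."""
    parts = [task]
    budget = max_chars - len(task)
    for name in sorted(allowed_docs & documents.keys()):
        text = documents[name]
        for pattern in SECRET_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        snippet = text[:budget]
        parts.append(f"--- {name} ---\n{snippet}")
        budget -= len(snippet)
        if budget <= 0:
            break
    return "\n\n".join(parts)

# Example: only the design doc is exposed to the model, not the whole corpus.
prompt = build_minimal_context(
    task="Summarize the open risks in this design.",
    documents={"design.md": "...", "customer_db_dump.csv": "..."},
    allowed_docs={"design.md"},
)
```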
