What are the most pressing AI risks software teams are encountering today? What strategies do you find most effective in helping staff mitigate them?

VP of Engineering, 4 months ago

Organization-wide AI use guidelines help. For example, developers are instructed on how to turn off data sharing in the various tools they use. Security experts should educate the team and provide supporting documentation.

Sr. Software Principal Engineer (Gen AI and ML Security) in Hardware, 4 months ago

The biggest risk is using AI without fully understanding it, especially since many AI systems lack transparency. If you build AI in-house, you know how it reasons; with external systems, you don't have that level of visibility. We focus on AI security and advise against using tools that are not fully understood. Education is our main strategy: we analyze every tool before implementation and train our team thoroughly. We avoid rushing adoption, prioritizing understanding and building our own AI systems.

Another mitigation strategy is limiting the AI's context window to only what the task requires, rather than exposing it to all available information. Restricting context this way reduces what sensitive data can leak through prompts, logs, or model outputs if something goes wrong.
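A minimal sketch of that least-privilege idea: before sending a record to a model, keep only a whitelist of fields the task actually needs. The field names, the `ALLOWED_FIELDS` set, and the `build_prompt` helper below are all hypothetical illustrations, not part of any specific tool.

```python
# Least-privilege context construction: only whitelisted fields reach the model.
# All names here (ALLOWED_FIELDS, build_prompt, the record keys) are illustrative.

ALLOWED_FIELDS = {"ticket_id", "title", "error_message"}  # only what the task needs

def build_prompt(record: dict, question: str) -> str:
    """Build a prompt containing only the whitelisted fields of a record."""
    context = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    lines = [f"{k}: {v}" for k, v in sorted(context.items())]
    return "Context:\n" + "\n".join(lines) + f"\n\nTask: {question}"

record = {
    "ticket_id": "T-1042",
    "title": "Login fails intermittently",
    "error_message": "401 Unauthorized",
    "customer_email": "alice@example.com",  # sensitive: never sent to the model
    "internal_notes": "VIP account",        # sensitive: never sent to the model
}

prompt = build_prompt(record, "Suggest likely causes.")
```

The same pattern applies to retrieval pipelines: filter or redact documents before they enter the context, rather than handing the model everything and hoping it ignores what it should not see.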
