If your org has adopted agentic AI (or plans to), what are your top concerns when it comes to managing potential security risks?
Sort by:
If your organization is adopting or planning to adopt agentic AI (AI that can make decisions and take actions autonomously), there are several important security risks to keep an eye on. Here are some key concerns:
1. Data Privacy and Integrity
• Risk: Agentic AI systems need a lot of data to function, and this data needs to be kept safe and accurate.
• Concern: How do we protect sensitive information from being accessed, leaked, or misused, especially when the AI might be interacting with that data without human oversight?
2. Adversarial Attacks
• Risk: AI systems, especially those that act on their own, can be tricked or manipulated through attacks that change the way they make decisions.
• Concern: How do we prevent the AI from being tricked or misled into making harmful decisions by bad actors?
3. Lack of Transparency and Accountability
• Risk: AI decision-making can be complex and sometimes hard to understand, making it difficult to know why a certain decision was made.
• Concern: If the AI makes a mistake or causes harm, how do we figure out who’s responsible? How do we make sure AI is used responsibly?
4. AI Updates and Maintenance
• Risk: Over time, AI systems may need updates or improvements, but these can also introduce new vulnerabilities or change the AI’s behaviour in unexpected ways.
• Concern: How do we ensure updates are secure and that the AI’s behaviour is still under control after the changes?
6. Interacting with Legacy Systems
• Risk: If the AI has to interact with older systems or third-party tech, those systems might not have the same level of security.
• Concern: How do we make sure the AI and other systems are properly integrated, especially when some of those systems might not be as secure as we’d like?
7. Misuse of AI
• Risk: There’s always the chance that AI could be repurposed for malicious purposes, either by insiders or external hackers.
• Concern: How can we monitor the AI’s use to make sure it’s not being exploited for harmful purposes, like cyberattacks?
8. Monitoring and Auditing Challenges
• Risk: Once AI starts making decisions without human involvement, it’s harder to keep track of what it’s doing and ensure it’s operating as expected.
• Concern: How can we set up systems that effectively track and audit the AI’s actions to make sure it’s following security protocols?
9. Ethical and Legal Issues
• Risk: AI might make unethical decisions or cause legal problems, especially if it acts in ways we didn’t anticipate.
• Concern: How do we ensure the AI behaves ethically and follows all necessary laws, like privacy regulations or road traffic laws?
How to Address These Risks:
• Conduct regular security audits to catch potential vulnerabilities early.
• Implement strict access controls to prevent unauthorized changes to the AI.
• Use AI explainability tools to make sure we can understand and trust its decision-making.
• Incorporate human oversight where necessary, even with autonomous systems, to ensure decisions align with our standards.
• Adopt AI-specific security practices to address risks that are unique to autonomous systems.
If you plan to enable AI to perform changes to your configuration, you might prepare pre-authorized ones with lower risk and let it be performed by AI.
I remember the introduction of SNORT:
On the first tests I was able to achieve a DoS by blocking the network from accessing DNS and AD, by making snort believe that these are the source of an attack.
This is a valid threat as long as you do not actively prevent this scenario.
So, all changes that are not proven low risk have to be checked and approved by an qualified carbon-unit.
Our biggest concern is the calibration of the fully autonomous AI for, in our case, dynamic security configuration; for instance which actions can it take, can they be undone, how will be monitor or be notified of the actions for loop back and validation.
Data Protection & Sovereignty:
Securing ingested data and ensuring compliance with jurisdictional laws based on third-party processing locations.
Anonymization:
When possible, use of tokenization/synthetic data to minimize exposure of sensitive information.
AI Integrity Risks:
Mitigating hallucinations (fact-checking, human validation) and blocking prompt injections (input sanitization).
Third-Party Vulnerabilities:
Auditing vendors for secure AI development practices and API/integration security.