How to Make AI Trustworthy

June 12, 2021

Contributor: Susan Moore

Repressive regimes are working hard on artificial intelligence (AI) systems that control populations and suppress dissent. If these efforts succeed, political protests will be a sentimental relic of the past, squashed before they ever get to the streets.

Less dramatic but nonetheless serious risks also exist for AI in the enterprise. What if incorrect AI credit scoring stops consumers from securing loans? What if an attack on the AI model of a self-driving vehicle leads to a fatal accident? What if data poisoning biases a bank's home loan approvals against a certain group?


In a Gartner survey, companies deploying AI cited security and privacy as their top barriers to implementing it. Security threats against AI are not new, but they have been insufficiently addressed by enterprise users.

“Business trust is the key to enterprise AI success,” says Avivah Litan, Distinguished VP Analyst, Gartner. “However, security and privacy standards and tools that protect organizations better are still being developed. This means most organizations have been left largely on their own in terms of threat defense.”

New types of threat demand a new response

Most attacks against conventional software can also be applied against AI. As a result, the same security and risk management solutions that mitigate damage from malicious hackers also mitigate damage caused by benign users who introduce mistakes into AI environments. Likewise, solutions that protect sensitive data from compromise or bias also help guard against AI ethics issues caused by biased model training.

AI also introduces new threats of its own. Data "poisoned" intentionally or by mistake, either at the AI training stage or while the AI model is running, can manipulate the outcome. Query attacks probe the AI model to reverse-engineer its logic or manipulate its rules. Theft of the training data, or of the AI model itself, is another risk.
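To make the data-poisoning threat concrete, here is a minimal, illustrative sketch of one common sanity check: flagging training samples whose label disagrees with their nearest neighbors, a crude signal of label-flipping poisoning or mislabeling. The function name, data, and threshold are hypothetical, not part of any specific product or framework, and real deployments use far more sophisticated defenses.

```python
# Illustrative sketch: flag training samples whose label disagrees with
# the majority label of their k nearest neighbors. A flipped label inside
# an otherwise consistent cluster is one crude sign of data poisoning.

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def flag_suspicious(samples, labels, k=3):
    """Return indices of samples whose label disagrees with the
    majority label among their k nearest neighbors."""
    suspicious = []
    for i, (sample, label) in enumerate(zip(samples, labels)):
        # Rank every other sample by distance and keep the k closest.
        neighbors = sorted(
            (j for j in range(len(samples)) if j != i),
            key=lambda j: euclidean(sample, samples[j]),
        )[:k]
        votes = [labels[j] for j in neighbors]
        majority = max(set(votes), key=votes.count)
        if majority != label:
            suspicious.append(i)
    return suspicious

# Toy loan-approval data: two tight clusters, plus one sample (index 6)
# sitting in the "deny" cluster but carrying a flipped "approve" label.
samples = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
           (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.05, 5.05)]
labels = ["approve", "approve", "approve",
          "deny", "deny", "deny", "approve"]

print(flag_suspicious(samples, labels))  # → [6]
```

The point of the sketch is simply that poisoned data is often detectable as an inconsistency between a sample and its context; enterprise-grade defenses extend this idea with statistical tests, provenance tracking and model monitoring.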

Gartner recommends implementing two new risk management pillars on top of the existing measures used to mitigate threats and protect AI investments.

Figure: Measures to protect and secure AI to make it trustworthy

Secure your AI today

Retrofitting security into any system is much more costly than building it in from the outset. This is no less true with AI systems.

“Don’t wait until the inevitable breach, compromise or mistake damages or undermines your company’s business, reputation or performance,” says Litan. “This will keep AI models performing well, ensure that your data is protected, and support ‘responsible AI’ that weeds out model biases, unethical practices and bad decision making.”
