How to Make AI Trustworthy

New risk mitigation measures must be implemented to protect AI in the enterprise and ensure that it’s perceived as trustworthy.

Repressive regimes are working hard on artificial intelligence (AI) systems that control populations and suppress dissent. If these efforts succeed, political protests will be a sentimental relic of the past, squashed before they ever get to the streets.

Less dramatic but nonetheless serious risks also exist for AI in the enterprise. What if incorrect AI credit scoring stops consumers from securing loans? What if an attack on the AI model of a self-driving vehicle leads to a fatal accident? What if data poisoning biases a bank’s home-loan approvals against a certain group?


In a Gartner survey, companies deploying AI cited security and privacy as their top barriers to implementing it. Security threats against AI are not new, but they have been insufficiently addressed by enterprise users.

“Business trust is the key to enterprise AI success,” says Avivah Litan, Distinguished VP Analyst, Gartner. “However, security and privacy standards and tools that protect organizations better are still being developed. This means most organizations have been left largely on their own in terms of threat defense.”

New types of threat demand a new response

Most attacks against conventional software can also be mounted against AI. As a result, the same security and risk management solutions that mitigate damage from malicious hackers also mitigate damage caused by benign users who introduce mistakes into AI environments. Likewise, solutions that protect sensitive data from being compromised or biased also help guard against the AI ethics issues that arise from biased model training.

AI also introduces new types of threats. Data “poisoned” intentionally or by mistake, either at the AI training stage or while the model is running, can manipulate the outcome. Query attacks may probe the AI model to reverse-engineer its logic and change its rules. The training data, or the AI model itself, can also be stolen.
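To make the poisoning threat concrete, here is a minimal, self-contained sketch (our own toy example, not a Gartner method): a trivial nearest-centroid classifier for credit risk is trained once on clean data and once on data where an attacker has injected a few mislabeled points near the decision boundary, flipping the prediction for a borderline applicant.

```python
# Toy illustration of label-flipping "data poisoning".
# All names and data here are hypothetical; real AI pipelines and
# attacks are far more complex.

def centroid(points):
    """Average of a list of equal-length coordinate tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of ((x, y), label); returns per-class centroids."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label whose centroid is closest (squared distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], point))

clean = [((0, 0), "low_risk"), ((1, 0), "low_risk"),
         ((10, 10), "high_risk"), ((11, 10), "high_risk")]

# Attacker injects a few mislabeled points near the boundary,
# dragging the "high_risk" centroid toward the low-risk region.
poisoned = clean + [((3, 2), "high_risk"), ((4, 3), "high_risk"),
                    ((3, 3), "high_risk")]

query = (4, 3)  # a borderline applicant
print(predict(train(clean), query))     # low_risk
print(predict(train(poisoned), query))  # high_risk
```

The point of the sketch is that the attacker never touches the model or the query; corrupting a handful of training labels is enough to change the decision, which is why Gartner flags training-data integrity as a distinct risk pillar.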

Gartner recommends implementing two new risk management pillars on top of the existing measures used to mitigate threats and protect AI investments.

[Figure: Measures to protect and secure AI to make it trustworthy]

Secure your AI today

Retrofitting security into any system is much more costly than building it in from the outset. This is no less true with AI systems.

“Don’t wait until the inevitable breach, compromise or mistake damages or undermines your company’s business, reputation or performance,” says Litan. “This will keep AI models performing well, ensure that your data is protected, and support ‘responsible AI’ that weeds out model biases, unethical practices and bad decision making.”

Recommended reading for Gartner clients*: AI Security: How to Make AI Trustworthy by Avivah Litan et al.

*Note: Some documents may not be available to all Gartner clients.
