Simple Questions to Assess AI Risks and Benefits

December 13, 2017
Contributor: Chris Pemberton

Three questions marketers must ask about decisions, transparency and monitoring.

Marketers are in a quandary when it comes to navigating the rapid rise in hype and promise surrounding artificial intelligence (AI) applications for marketing. Be too aggressive and you risk unleashing unpredictable forces and unreliable outcomes. Be too conservative and you risk being left behind in a fast-moving revolution that may transform the competitive landscape.

“Marketers need guidelines to accelerate decision making and manage risk without crushing innovation,” says Andrew Frank, vice president and distinguished analyst, Gartner for Marketers.

[Illustration: Common AI Marketing Applications]

Three key questions help marketers assess the risk associated with proposed AI-based automation systems.


Classify automated decisions

Classify automated decisions about offers and experiences by the magnitude of risks posed by advanced profiling and unpredictable customer interactions. Start by asking “Does the system decide which offers or experiences individuals receive based on information about them that they haven’t consented to share for this purpose?”

Three of today’s popular AI marketing applications have different risk/value profiles.

  • Marketer-facing analytics: These often duplicate some of the explanatory services that data scientists provide and, as a result, their adoption is less transformational than that of other applications. They are relatively low risk and easily contained.
  • Untargeted conversational agents: These agents give consumers and business customers access to natural-language dialogue with a brand, a new mode of interaction that significantly shapes customer experience.
  • Real-time personalization: These systems automate decisions about which content or offers each customer sees, based on individual, unique profiles. Isolate this class of AI systems for further evaluation because of the potential risks of using individual data to drive algorithmic decisions.

Distinguish AI algorithms based on transparency

Determine whether the system's decisions can be explained. Ask “Is the system transparent or opaque in its ability to explain decisions?” Whenever possible, explanations of profiling decisions should be available on demand to staff and customers as a condition of AI deployment. Offer consumers choices to opt out of, delete, and correct personal data wherever they’re exposed to profile-based personalization.


Evaluate feasibility of human monitoring

Evaluate the possibility and costs of monitoring AI decisions with humans in real time to address the risks of full automation. Ask “Does it make sense to subject the system’s decisions to human review to approve or override and provide explanations where needed?” If you’re considering an opaque algorithm that makes substantive decisions about customer experience, your final question must be whether you can cost-effectively validate it with human review.

“Identifying an AI proposal as potentially risky is not necessarily grounds to reject it,” says Frank. “Instead, it indicates a need to establish confidence that the system will behave within the bounds of acceptable risk.”
