Three key questions help marketers assess the risk associated with proposed AI-based automation systems.
Classify automated decisions
Classify automated decisions about offers and experiences by the magnitude of the risks posed by advanced profiling and unpredictable customer interactions. Start by asking "Does the system decide which offers or experiences individuals receive based on information about them that they haven't consented to share for this purpose?"
Three of today’s popular AI marketing applications have different risk/value profiles.
- Marketer-facing analytics: These tools often duplicate explanatory services that data scientists already provide, so their adoption is less transformational than that of other applications. They are relatively low risk and easily contained.
- Untargeted conversational agents: These agents significantly change the customer experience by giving consumers or business customers a new way to interact with a brand through natural-language dialogue.
- Real-time personalization: These systems automate decisions about which content or offers each customer sees, based on unique individual profiles. Isolate this class of AI system for further evaluation because of the risks involved in using individual data to drive algorithmic decisions.
Distinguish AI algorithms based on transparency
Determine whether the decisions the system makes can be explained. Ask "Is the system transparent or opaque in its ability to explain decisions?" Whenever possible, make explanations of profiling decisions available on demand to staff and customers as a condition of AI deployment, and offer consumers choices to opt out of, delete and correct personal data wherever they are exposed to profile-based personalization.
Evaluate feasibility of human monitoring
Evaluate the possibility and costs of monitoring AI decisions with humans in real time to address the risks of full automation. Ask “Does it make sense to subject the system’s decisions to human review to approve or override and provide explanations where needed?” If you’re considering an opaque algorithm that makes substantive decisions about customer experience, your final question must be whether you can cost-effectively validate it with human review.
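The three screening questions above form a simple decision procedure, which can be sketched in code. This is an illustrative outline only, not a prescribed implementation; the `AIProposal` fields and `triage` function, along with the risk labels they return, are hypothetical names chosen to mirror the three questions.

```python
from dataclasses import dataclass

@dataclass
class AIProposal:
    """Hypothetical summary of a proposed AI marketing system (illustrative only)."""
    uses_unconsented_profiles: bool  # Q1: decides offers from data not consented for this purpose?
    explains_decisions: bool         # Q2: transparent rather than opaque about its decisions?
    human_review_feasible: bool      # Q3: can decisions be cost-effectively reviewed by humans?

def triage(p: AIProposal) -> str:
    """Apply the three screening questions in order and return a risk assessment."""
    if not p.uses_unconsented_profiles:
        return "low risk: no profiling beyond consented use"
    if p.explains_decisions:
        return "moderate risk: transparent profiling; offer opt-out and correction choices"
    if p.human_review_feasible:
        return "elevated risk: opaque profiling, mitigated by real-time human review"
    return "high risk: opaque, unmonitored profiling; establish confidence before deployment"
```

For example, a marketer-facing analytics tool that uses no unconsented profiles would be triaged as low risk, while an opaque real-time personalization engine with no feasible human review would land in the highest-risk category.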
“Identifying an AI proposal as potentially risky is not necessarily grounds to reject it,” says Frank. “Instead, it indicates a need to establish confidence that the system will behave within the bounds of acceptable risk.”