As we move toward more augmented analytics, including autogenerated insights and models, the explainability of these insights and models will become critical to trust, to regulatory compliance, and to brand reputation management. Explainable AI is the set of capabilities that describes a model, highlights its strengths and weaknesses, predicts its likely behavior, and identifies any potential biases. It articulates the decisions of a descriptive, predictive or prescriptive model to enable accuracy, fairness, accountability, stability and transparency in algorithmic decision making.

By 2023, over 75% of large organizations will hire artificial intelligence specialists in behavior forensics, privacy and customer trust to reduce brand and reputation risk.

Gartner Predicts

What Does Explainable AI Enable?

  • Explainable AI enables broader adoption of AI by increasing the transparency and trustworthiness of AI solutions and outcomes. It also reduces the risks associated with regulatory and reputational accountability for safety and fairness.

  • Increasingly, these solutions not only show data scientists a model's inputs and outputs, but also explain why the system selected particular models and which techniques augmented data science and ML applied.

  • Bias has been a long-standing risk in training AI models. Bias can be based on race, gender, age or location. There is also temporal bias, bias toward a specific structure of data, and even bias in selecting the problem to solve. Explainable AI solutions are beginning to identify these and other potential sources of bias (a minimal fairness check is sketched after this list).

  • Explainable AI technologies may also identify privacy violation risks, with mitigation options such as privacy-aware machine learning (PAML), multiparty computation and variants of homomorphic encryption (one PAML primitive is sketched below).
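
To make the bias discussion concrete, here is a minimal sketch of a demographic-parity check in Python. The column names, the toy data and the "80% rule" threshold are illustrative assumptions, not part of any specific vendor tool:

```python
import pandas as pd

def demographic_parity_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Print per-group selection rates and return the disparate-impact ratio."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()  # the "80% rule" compares this to 0.8
    print(rates)
    print(f"Disparate-impact ratio: {ratio:.2f}")
    return ratio

# Hypothetical scored output: 'approved' is the model's decision.
scored = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "approved": [1, 1, 1, 0, 1, 0],
})
demographic_parity_report(scored, "age_band", "approved")
```

A ratio well below 1.0 between the least- and most-favored groups is a common first signal that a model's decisions warrant closer bias review.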
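
Privacy-aware techniques rest on similarly small building blocks. As one hedged example, the Laplace mechanism from differential privacy, a common PAML primitive not named explicitly above, releases an aggregate statistic with calibrated noise; the epsilon and sensitivity values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise scaled to sensitivity/epsilon,
    making the published statistic epsilon-differentially private."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query changes by at most 1 per individual, so sensitivity is 1.
exact_count = 412
print(laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5))
```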

How Does This Impact Your Organization and Skills?

Data and analytics leaders should invest in training and education to develop the skills needed to mitigate risks in black-box models.

This should include:

  • How to make data science and ML models interpretable by design, and how to select the right level of model transparency from a range of models, from least to most transparent.

  • How to select the right level of model accuracy when required, and methods of validating and explaining these models.

  • Various methods, such as generative explainability and combining simple but explainable models with more complex but less explainable ones (see the surrogate-model sketch after this list).

  • Exploring the latest explainability techniques, such as those tracked by DARPA or coming from commercial vendors.

  • Visualization approaches for seeing and understanding the data in the context of training and interpreting machine learning algorithms.

  • Techniques for understanding and validating the most complex types of predictive models (a permutation-importance sketch follows this list).

  • Communication and empathy skills that help data scientists detect users' attitudes toward, and needs for, explainability, supporting successful AI adoption.

  • Establishing AI ethics boards and other groups responsible for AI safety, fairness and ethics. These boards should include internal and external individuals known for their high reputation and integrity.
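
As a concrete illustration of combining a simple, explainable model with a complex one, a global surrogate fits an interpretable model to the black box's own predictions. This is a minimal sketch using scikit-learn; the synthetic dataset and model choices are assumptions for illustration, not a recommended pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# Complex, less explainable model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Simple surrogate trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
print("Fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))
```

The fidelity score shows how faithfully the printed decision rules track the black box; a low score means the explanation should not be trusted.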
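
For validating complex predictive models, permutation importance is one widely used, model-agnostic check: shuffle a feature and measure how much the held-out score drops. Again a minimal sketch on synthetic data, with all dataset and model choices assumed for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out score;
# large drops mark the features the model actually depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature_{i}: {mean:.3f} +/- {std:.3f}")
```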

We've got you covered!

Relevant Sessions

  • The Foundation of Data Science and Machine Learning: Delivering Value in the Age of AI
  • Augmented Data Management Forges a New Alliance Between Human and Artificial Intelligence
  • AI Talent: Recruiting, Hiring, Organizing, Training and Retaining
  • Myths and Pitfalls of Artificial Intelligence and How to Navigate Them
  • Storytelling for AI Leads, Data Scientists & Machine Learning Engineers

Want to stay informed?

Get conference email updates.