Understanding Explainable AI Concepts

As explainability has become a hot topic in the artificial intelligence (AI) community, confusion has grown around what the term actually covers. Right now, “explainable AI” is an umbrella term for several different types of explainability. Below are some of the sub-categories we are seeing offered today:

Training Data Quality Management
We’ve recently seen companies that do training data quality management position themselves as explainable AI. These firms check that the data used to train a system is not biased. You can then train the system on that data, observe how it behaves, and gain confidence that the decisions it produces are unbiased.
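One simple check of this kind is comparing outcome rates across groups in the training data before any model is trained. The sketch below is a minimal illustration with made-up data; the "group" and "label" column names are assumptions, not any vendor's actual schema.

```python
import pandas as pd

# Hypothetical training set; "group" and "label" are illustrative column names.
training_data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 0, 1, 0],
})

# Compare positive-label rates across groups; a large gap suggests the
# training data itself may push the trained system toward biased decisions.
rates = training_data.groupby("group")["label"].mean()
print(rates)
print("max rate gap:", rates.max() - rates.min())
```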

Focus/Influence
You also see some folks starting to position themselves as explainable AI when they can highlight the nodes of influence within a deep learning network. Some vendors use a second deep learning network to analyze the first, which lets them trace that influence through the layers above and below a node and ultimately back down to the underlying data. In these cases, the explanation is an area of focus within the network, so you can see whether the system is triggering on aspects of the underlying data that could bias your results.
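The vendors described above analyze one network with another; a much simpler, commonly used stand-in for the same idea is input-gradient saliency, sketched below. The toy model and input values are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Toy stand-in for the deep learning network being explained.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# One illustrative input record; requires_grad lets us trace influence
# from the output back down to the underlying data.
x = torch.tensor([[0.2, 1.5, -0.3, 0.7]], requires_grad=True)
model(x).sum().backward()

# Larger gradient magnitudes mark the input features the prediction
# "focuses" on, flagging aspects of the data that could bias results.
for i, g in enumerate(x.grad.abs().squeeze().tolist()):
    print(f"feature {i}: influence {g:.4f}")
```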

Transparency
Vendors are talking about transparency as explainable AI, as well. An example of transparency would be “this algorithm was trained on this data, by this person, at this time.” It’s an account of which algorithm was used, who built the system, what data it was trained on, and when it was done, so that the user can understand how the system was ultimately built.
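A transparency record of this kind can be as simple as a small, structured "model card" attached to the trained system. The field names and values below are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelCard:
    """Minimal transparency record: what was trained, on what, by whom, when."""
    algorithm: str
    training_data: str
    trained_by: str
    trained_at: str

card = ModelCard(
    algorithm="gradient-boosted trees",
    training_data="claims_2019_q3.csv",
    trained_by="data science team",
    trained_at=datetime.now(timezone.utc).isoformat(),
)
print(asdict(card))
```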

Visualization
We’re also seeing companies with visualization capabilities, whether those are visualizations of the network itself or dashboard-style views of model behavior. They, too, are describing themselves as a type of explainable AI.

Provenance
Vendors who provide provenance are also positioning themselves as explainable AI. An example of provenance would be a natural language generation system that writes a report and cites the underlying data at the end of each sentence. You can click into that data, or at least look up where the system sourced each fact, to judge whether the underlying data is reliable.
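One way to picture this is a report whose sentences each carry a key back to the source record they were derived from. The sketch below is a minimal illustration; the source names, keys, and sentences are all made up.

```python
# Hypothetical source records the generated report can cite.
sources = {
    "S1": {"text": "Q3 revenue was $4.2M", "origin": "finance_db.quarterly"},
    "S2": {"text": "Churn fell from 6% to 4%", "origin": "crm_export_2019-10.csv"},
}

# Each generated sentence keeps a pointer to the data it was derived from,
# so a reader can look up whether the underlying data is reliable.
report = [
    ("Revenue reached $4.2M in Q3.", "S1"),
    ("Customer churn improved by two percentage points.", "S2"),
]

for sentence, key in report:
    print(f"{sentence} [{key}: {sources[key]['origin']}]")
```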

Proof/Causal
And then there’s what we believe to be the highest bar of explainability, which is proof and, ultimately, causal models. At this level, rather than pointing to a parameter as an explanation, a system can explain its reasoning in natural language to the average user. This is the concept we target at Kyndi by combining machine learning and symbolic AI approaches.
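To make the distinction concrete (and without implying anything about Kyndi's actual implementation), here is a toy sketch of how a symbolic, rule-based decision can be rendered as a plain-language explanation rather than a pointer to a parameter. The rules, field names, and thresholds are invented for illustration.

```python
# Toy symbolic rules; each pairs a condition on the input with a human-readable reason.
rules = [
    (lambda r: r["income"] < 30000, "the applicant's income is below $30,000"),
    (lambda r: r["missed_payments"] > 2, "there are more than two missed payments on record"),
]

def decide_and_explain(record):
    """Return a decision plus the reasons for it, stated in natural language."""
    reasons = [reason for condition, reason in rules if condition(record)]
    if reasons:
        return "Declined because " + " and ".join(reasons) + "."
    return "Approved: no decline rules were triggered."

print(decide_and_explain({"income": 25000, "missed_payments": 3}))
```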

For a deeper dive on Explainable AI, watch our latest webinar.