Explainable AI Isn’t Just About Explanations

By Ryan Welsh, Founder & CEO

We created the Explainable AI category several years ago, and it has only grown in importance since. In early 2018, The Economist proclaimed, “For AI to thrive, it must explain itself.” In December 2018, a Bloomberg article cited an IBM survey showing that a lack of explainability was the most significant roadblock to enterprise adoption of AI. Just last week, Wired published an article on deep learning pioneer Yoshua Bengio in which he stated that “deep learning won’t realize its full potential, and won’t deliver a true AI revolution, until it…start[s] asking why things happen.” Yet Explainable AI has also become increasingly misunderstood. Explainable AI isn’t just about explanations. It is a holistic approach to building AI that is not only humanly understandable but also smarter and faster.

Today’s AI is not very smart. A hallmark of human intelligence is the ability to re-use learned knowledge and transfer it to new problems. Today’s AI does not do this; it is prone to fail disastrously when exposed to data that differs from what it was trained on. The comedian Demetri Martin has a joke where he asks:

How bad does a guess have to be for it to be an uneducated guess?
#1: “Do you know the temperature outside?”
#2: “Uh, carrots?”
#1: “Did you say carrots?”
#2: “Yeah, I was just guessing. I don’t know, carrots?”
#1: “Are you educated?”
#2: “No. No, I’m not.”
#1: “Okay. Well, that makes sense because it’s never been carrots outside and never will be carrots. So, you need to get an education, go to school, then come back to me, maybe you can make an educated guess.”

The joke works because “carrots” is a ridiculous answer. It is an excellent example of what it means to fail disastrously, and today’s deep learning systems do this all the time. DARPA calls it being “statistically impressive, but individually unreliable.” While this may not matter much for consumer companies classifying cat pictures, in the enterprise mistakes carry real costs: a single error can run to hundreds of thousands or even millions of dollars, or prove fatal.

Machine learning alone lacks an essential element of learning. Learning is not just prediction; it is also generalization, and generalization is precisely what deep learning lacks. Explainable AI revisits the question of knowledge representation (i.e., structuring knowledge in a form amenable to human understanding) and uses machine learning to acquire that knowledge for generalization. Being smart, or knowledgeable, means never failing disastrously when operationalized.
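To make the contrast concrete, here is a minimal sketch in Python using scikit-learn (an illustrative choice on our part, not a description of any particular product). A decision tree stands in for a humanly readable knowledge representation acquired through machine learning: unlike the weights of a deep network, its learned rules can be printed, audited, and questioned.

```python
# Minimal sketch: machine learning acquiring knowledge in a form a human can
# read. The decision tree here is a deliberately simple stand-in for a
# "knowledge representation" -- not any vendor's actual method.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# The acquired "knowledge" is a set of explicit if/then rules that a person
# can inspect and re-use -- the structure the paragraph above calls
# amenable to human understanding.
print(export_text(model, feature_names=list(iris.feature_names)))
```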

Today’s AI is also not very fast at delivering real business value. Seven years into this AI cycle, only 20% of companies aware of its potential have incorporated machine learning into their core business. An important reason for the slow uptake is the lack of directly relevant, tagged data for training machine learning systems. Industry analysts estimate that as much as 80% of the effort in an AI project goes to aggregating, cleaning, labeling, and augmenting data. A $5B data-labeling industry has sprung up to do this grunt work for enterprises.

By treating learning as an exercise in knowledge acquisition, Explainable AI systems can re-use the knowledge they acquire across multiple tasks. This re-use promotes data efficiency and reduces the amount of data needed to train an AI system, which means faster time to value for enterprise use cases. Imagine you were hiring for an entry-level position at your company. Would you hire a newborn child and spend the next two decades teaching it the job, or would you hire a recent college graduate who has already accumulated twenty years of knowledge? An enterprise AI project that doesn’t combine machine learning and knowledge representation is like hiring the newborn.
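One familiar form of this re-use is transfer learning. The sketch below (PyTorch/torchvision; again an illustrative choice, with a hypothetical 5-class enterprise task) shows knowledge learned on one problem being carried over to another, so the new task needs far less labeled data: the “college graduate” rather than the “newborn.”

```python
# Minimal transfer-learning sketch: re-use knowledge learned on ImageNet
# so a new task needs far fewer labeled examples.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network that has already "accumulated knowledge" on ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the re-used knowledge; we do not relearn it from scratch.
for param in backbone.parameters():
    param.requires_grad = False

# Replace only the final layer for the new task (a hypothetical
# 5-class enterprise classification problem).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the small new head is trained on the new task's data.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

Because only the small new head is trained, the new task typically needs a fraction of the labeled data that training the whole network from scratch would demand, which is the data efficiency described above.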

The running joke in the AI space is that AI conferences generate more than half of the industry’s revenue, a nod to how few AI projects ever make it from pilot to production deployment. Explainable AI overcomes the common challenges enterprises face when operationalizing AI because it focuses on building systems that are smarter, faster, and explainable.