‘Explainability’ Makes AI More Intelligent and Successful

The explainability debate is picking up steam.

In a thought-provoking article published recently in Wired, David Weinberger argues: “Don’t Make AI Artificially Stupid in the Name of Transparency.” Instead, he argues for the technocratic approach of optimization. Dave Gershgorn at Quartz picked the piece up and provided an excellent synthesis in his article yesterday.

I disagree with a few things in Dr. Weinberger’s article but think he has put forth an interesting idea that is worth further discussion as a near-term solution for achieving what society wants from opaque AI systems.

First, artificial intelligence is not only machine learning. Machine learning is one method of artificial intelligence. It is a “black box,” but not all AI methods are.
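
To make that distinction concrete, here is a minimal sketch of a symbolic, rule-based approach in which every conclusion carries its own justification. The rules, facts, and the `infer` helper are hypothetical, invented purely for illustration; the point is only that some AI methods are explainable by construction.

```python
# A minimal forward-chaining rule engine. The rules and facts below are
# hypothetical, chosen only to illustrate that every conclusion such a
# system reaches can be traced back to the rules that produced it.

RULES = [
    # (premises, conclusion)
    ({"has_prior_offense", "offense_is_violent"}, "high_risk"),
    ({"high_risk"}, "recommend_review_by_judge"),
]

def infer(facts):
    """Derive new facts, recording which premises justified each one."""
    facts = set(facts)
    explanations = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanations[conclusion] = premises
                changed = True
    return facts, explanations

facts, why = infer({"has_prior_offense", "offense_is_violent"})
for conclusion, premises in why.items():
    # Unlike a trained network's weights, this trace *is* the explanation.
    print(f"{conclusion} because {' and '.join(sorted(premises))}")
```

A learned model arrives at its answer through millions of tuned weights; this toy system arrives at its answer through rules a person can read and audit.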

Second, I disagree with his statement that “demanding explicability…may require making artificial intelligence artificially stupid.” As I learned on my high school tests, if you cannot explain your answer, then you are not intelligent. The same goes for algorithms in non-trivial uses. Demanding explicability is asking AI to be more intelligent, not less.

Third, I disagree with the overall language describing AI as more sophisticated than humans. It’s as if AI has to lower itself to our level so that we mere mortals can understand it. I look at it differently: AI needs to raise itself to our level and communicate with us at a sufficient level of abstraction (i.e., in language). We use language quite efficiently to describe complex things. Why can’t a machine?

Lastly, I disagree with Dr. Weinberger and Yann LeCun’s suggestion that, instead of demanding explainability, we look “at trends in what decisions the machines are making to rebuild them in the way we want.” Considering that we are talking about non-trivial uses (e.g., autonomous vehicles and jail sentences), treating impacts on people’s lives as a trend to be optimized lacks compassion. Can you imagine being on the other side of a biased algorithm (e.g., getting a longer prison sentence than someone else for the same crime) only to be told, “Sorry, we are still tuning it”?

With all that said, I find his recommendation thought-provoking and worthy of further discussion as a hack for machine learning’s “black box” problem. As The Economist notes, technocratic approaches (like Dr. Weinberger’s optimization) “do best when blitzing the mess made by incompetent and squabbling politicians.” The AI community is not incompetent, but we do squabble.

Ryan Welsh
Founder and CEO
