Growing Demand for AI Solutions That Produce ‘Explainable,’ Auditable Results

As artificial intelligence (AI) becomes an increasingly essential part of how organizations of all types and sizes operate, there is growing recognition that the “black box” approach used by many AI providers is neither sufficient nor appropriate. Many companies doing business in highly regulated sectors, as well as government entities that operate under constant oversight, need to be able to explain the “how” and “why” of AI-generated results. In many cases, the law mandates this level of transparency.

The MIT Technology Review recently published an article on this topic, highlighting the growing demand for AI solutions whose results are “explainable” and include an audit trail. A quote from that article sums up the issue:

“Adam Wenchel, vice president of machine learning and data innovation at Capital One, says the company would like to use deep learning for all sorts of functions, including deciding who is granted a credit card. But it cannot do that because the law requires companies to explain the reason for any such decision to a prospective customer. Late last year Capital One created a research team, led by Wenchel, dedicated to finding ways of making these computer techniques more explainable.”
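To make the requirement concrete: when a lender must state the reason for a decision, each factor’s contribution to the outcome has to be traceable. The sketch below illustrates the general idea with a simple linear scoring model; all weights, feature names, and applicant values are hypothetical examples, not Capital One’s (or any lender’s) actual model.

```python
# Illustrative "reason codes" for an automated credit decision.
# The weights, features, and threshold here are invented for the example.
WEIGHTS = {
    "credit_history_years": 0.8,   # longer history raises the score
    "debt_to_income": -2.5,        # higher ratio lowers the score
    "recent_delinquencies": -1.5,  # each delinquency lowers the score
}
BIAS = 1.0
THRESHOLD = 0.0  # approve when total score is at or above this

def score(applicant):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions

def explain(applicant):
    """Return the decision plus the factors that pushed the score down,
    ordered from most to least negative -- the audit trail a regulator
    or customer could be shown."""
    total, contributions = score(applicant)
    approved = total >= THRESHOLD
    reasons = sorted(
        (f for f in contributions if contributions[f] < 0),
        key=lambda f: contributions[f],
    )
    return approved, reasons

applicant = {"credit_history_years": 2, "debt_to_income": 0.6, "recent_delinquencies": 1}
approved, reasons = explain(applicant)
# For this applicant the score is negative, so the application is declined,
# and 'reasons' lists the specific factors responsible.
```

With a linear model every decision decomposes cleanly into per-feature contributions; the difficulty the article describes is that deep learning models offer no such direct decomposition, which is why dedicated research into explainability is needed before they can be used for decisions like this.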

According to the MIT article, the need to understand and explain the decision-making of deep learning algorithms is something the Defense Advanced Research Projects Agency (DARPA) is also experiencing. DARPA is funding a variety of new approaches to make deep learning more transparent, as the Defense Department understandably cannot put its blind trust in “black boxes.”

Ryan Welsh, Founder and CEO of Kyndi, believes that the technology industry must step up its efforts to embrace “explainable AI” and make its results more transparent and auditable. “Unlike many of our competitors, Kyndi’s products and solutions do not function as a ‘black box,’” said Welsh. “We help customers to create fully auditable and explainable AI knowledge and intelligence assets, which is especially critical for heavily regulated industries like financial services.”