Explainable AI and Beyond: Fujitsu Panel Session Recap

Despite amazing strides in recent years, artificial intelligence (AI) continues to struggle with a “black box” problem. As AI adoption grows in regulated industries, so too does the need for the technology to be transparent and explainable. Last week’s Fujitsu Advanced Technology Symposium focused on this issue under the theme “Make AI Trustworthy! Explainable and Ethical AI for Everyone.” One panel session featured DARPA Program Manager David Gunning, Fujitsu AI Director Ajay Chander, and Kyndi CEO Ryan Welsh, with Electronic Frontier Foundation Chairman Brad Templeton moderating. Among the topics covered in the session:

Explainable AI Techniques
Multiple technical approaches are being used to attack the “black box” problem. “Some of the deep learning people say the way out of this is more deep learning,” said David Gunning, who leads DARPA’s Explainable AI (XAI) program. “So they’re going to take one deep net that’s been trained to make a decision, and train a second deep net to generate the explanation.” Alternatively, Kyndi™ uses a technique that combines logical models with machine learning to develop Explainable AI systems.
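
For readers curious what the “second net” idea Gunning describes might look like, here is a minimal, hypothetical sketch in PyTorch. One network makes the decision; a second “explainer” network is trained to produce per-feature importance weights that, passed through a simple linear head, reproduce the first network’s decisions. The architecture, toy data, and training setup are all illustrative assumptions on our part, not the actual method of DARPA, Kyndi, or any panelist.

```python
# Sketch only: a second net trained to explain a "black box" classifier.
# Everything here (sizes, data, loss) is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES = 8

# The "black box" decision network (assumed already trained).
classifier = nn.Sequential(
    nn.Linear(N_FEATURES, 16), nn.ReLU(), nn.Linear(16, 2)
)

# The explainer network: outputs one importance weight per input
# feature. It is trained so that the importance-weighted input alone,
# through a simple linear head, reproduces the classifier's decision.
explainer = nn.Sequential(
    nn.Linear(N_FEATURES, 16), nn.ReLU(),
    nn.Linear(16, N_FEATURES), nn.Softmax(dim=-1)
)
surrogate = nn.Linear(N_FEATURES, 2)  # simple, interpretable head

opt = torch.optim.Adam(
    list(explainer.parameters()) + list(surrogate.parameters()), lr=1e-2
)

x = torch.randn(256, N_FEATURES)  # toy input data
with torch.no_grad():
    target = classifier(x).argmax(dim=-1)  # decisions to be explained

for step in range(200):
    weights = explainer(x)           # per-feature importances
    logits = surrogate(weights * x)  # decision from weighted input only
    loss = nn.functional.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The importance weights act as the explanation: they indicate which
# input features were sufficient to account for each decision.
print(explainer(x[:1]))
```

The design choice in this sketch is that the explanation is a by-product of a constraint: the explainer must identify features that, on their own, recover the original decision, so its weights cannot be arbitrary.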

Why Explainable AI?
Some argue that making an AI system explainable comes with too much of a performance tradeoff. However, if AI is to be used in regulated industries such as financial services and healthcare, explainability is a fundamental requirement. “I hear over and over that people are not using these tools if they don’t get explainability,” said Gunning. “If it’s finding cat videos on Facebook, it’s not a big deal. But if they’re giving financial advice to someone or giving advice to someone making a serious decision, [AI companies] are going to find out their sales are going to drop if they don’t have it.” Welsh mentioned the disconnect he sees between venture capitalists and the users of the technology, noting that investment is concentrated in AI technologies that customers may be reluctant to adopt because they are “black boxes.”

Who Is Explainability For?
The panel also discussed where the value of Explainable AI is realized. Is it for engineers? Regulators? Customers? “For us, it’s our end user,” said Welsh. “And our end user is your general analyst or researcher and then up from there.” Gunning echoed Welsh’s comment, adding, “But assume that if you can do something to help explainability for the end user, that probably as a side benefit is going to help the developer and everybody else in the pipeline.”

Watch the full panel session
