Ethics and AI: SXSW Panel Recap

As adoption of artificial intelligence (AI) technologies has grown in recent years, so too have ethical concerns. AI today is too often a “black box,” unable to explain the reasoning behind its decision-making. If AI is to be used in critical situations such as criminal sentencing, loan approval, and defense, it must be transparent and explainable. Earlier this month, SXSW hosted a panel session focusing on the issue of ethical AI. Among the topics discussed in the session:

Defining Explainability
Explainable AI is currently one of the hottest topics in the artificial intelligence community. But what makes an AI system explainable? According to Kyndi Founder & CEO Ryan Welsh, “Explainability is a system’s ability to explain itself in natural language to the average user. If a system can say ‘I generate this output because of x, y, z’ in natural language to the average user, I call that explainability.” Welsh also touched on interpretability and provenance, which he views as lower levels of explainability.

AI in the DoD
Adoption of artificial intelligence in the defense sector has drawn particular scrutiny. The idea of ceding decision-making power to machines in mission-critical situations makes many uneasy, and for good reason. During the panel session, however, Defense Innovation Board Executive Director Josh Marcuse noted, “I have to emphasize something that I think has been really misunderstood, which is that AI does not equal autonomy, and autonomy does not equal AI. We have autonomous systems today that don’t rely on AI, and most of the systems we’re contemplating won’t actually be autonomous.” Instead, Marcuse sees artificial intelligence as a tool to augment human intelligence analysts, going on to say, “If you take a human-centered approach to [AI], I believe it takes a lot of the fear out.”

Steps Toward Transparent and Explainable AI
When asked about progress in cracking the explainable AI code, Welsh pointed to the need to fuse machine learning and symbolic logic, two paradigms of artificial intelligence whose staunch supporters are often at odds with each other. Regarding Kyndi’s approach, he explained, “For us, it was taking a lot of the machine learning methods and infusing them with symbolic AI. Symbolic AI has these abstractions and this representation that is based on logic, which is more humanly comprehensible.” And while Welsh has been working on this fusion for the last four years, he has noticed other companies focusing on explainability as of late, stating, “Recently you’re seeing folks at DeepMind and other places publishing papers saying ‘now is the time to fuse these paradigms together because you overcome the limitations of deep learning methods,’ and the key is explainability.”
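Neither Welsh nor the panel went into implementation detail, but a toy sketch can make the idea concrete. The snippet below is purely illustrative and not Kyndi’s method: every name, rule, and threshold in it is hypothetical. It pairs a stand-in machine learning score with a small symbolic rule base so that the system can report a decision together with the rules that fired, in the “I generate this output because of x, y, z” style Welsh describes.

```python
# Illustrative sketch only -- not Kyndi's implementation. All names,
# rules, and thresholds here are hypothetical. The point is the shape
# of the fusion: a statistical score paired with a symbolic rule base
# whose fired rules can be read back as a natural-language explanation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                      # symbolic predicate, e.g. "stable_income"
    test: Callable[[dict], bool]   # checks the predicate against a record
    rationale: str                 # plain-language reason tied to the rule

# A tiny symbolic layer standing in for a logic-based knowledge base.
RULES = [
    Rule("stable_income", lambda r: r["years_employed"] >= 2,
         "the applicant has held steady employment for two or more years"),
    Rule("low_debt_ratio", lambda r: r["debt_to_income"] < 0.35,
         "the applicant's debt-to-income ratio is below 35%"),
]

def model_score(record: dict) -> float:
    """Stand-in for a trained ML model's confidence score."""
    fired = sum(rule.test(record) for rule in RULES)
    return 0.5 + 0.25 * fired / len(RULES)

def explain(record: dict) -> str:
    """Render the decision plus the fired symbolic rules in natural language."""
    reasons = [rule.rationale for rule in RULES if rule.test(record)]
    score = model_score(record)
    decision = "approve" if score >= 0.6 else "decline"
    because = "; ".join(reasons) if reasons else "no supporting rules fired"
    return f"Decision: {decision} (score {score:.2f}) because {because}."

print(explain({"years_employed": 3, "debt_to_income": 0.28}))
# Decision: approve (score 0.75) because the applicant has held steady
# employment for two or more years; the applicant's debt-to-income ratio
# is below 35%.
```

The symbolic layer is what carries the explanation here: because each rule is an explicit, named predicate with a plain-language rationale, the output can be read by an average user without inspecting the model’s internals.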

Listen to the full panel session
