By Martijn Rasser, Director of Analysis, Kyndi
Last week, Kyndi participated in the American Council for Technology and Industry Advisory Council’s (ACT-IAC) Government and Artificial Intelligence Forum in Washington, D.C. The event brought together leaders in government and industry to discuss the promise and potential of artificial intelligence (AI), pioneering uses of AI in numerous government agencies, and barriers to broader adoption. I want to focus on the latter.
Many attendees cited organizational culture as the biggest hurdle to acceptance of AI in the public sector. While there are notable examples of early adoption in parts of the government—particularly in the intelligence community and the Department of Defense—most agencies are just beginning to think about how they can use AI to perform their missions smarter, faster, and cheaper.
Dr. Marilyn Miller of the National Institutes of Health shared a poignant example from her efforts to fight Alzheimer’s disease. As head of the Alzheimer’s Disease Sequencing Project, she pushed to apply AI resources to quickly and effectively screen the human genome, helping to identify genomic variants that contribute to increased risk of developing the disease as well as variants that help protect against it. She noted that many colleagues resisted her initiative; she considers the general NIH culture to be stodgy and believes that scientists fear being scooped by a machine.
At Kyndi, we have heard similar stories in discussions with customers and colleagues in the public and private sectors. During my panel discussion, I offered advice on how to address resistance to the change that widespread use of AI will bring. I see three main drivers of this resistance: distrust of the decisions and actions of an autonomous system, concern over major disruption on the job, and fear of being displaced by a robot.
First, to garner trust, Kyndi espouses the concept of ‘Explainable AI.’ Simply put, one has to understand how and why an AI system produced an output to have confidence in it. AI systems will increasingly be used to aid critical decision-making. As such, a system needs to provide human users with the rationale for its actions in a manner that is understandable, appropriately qualified, and consistent. No system should be a black box.
Second, as with the introduction of any new technology, how jobs are structured will change as AI systems become more commonplace. This change need not be disruptive; it can and should be positive and energizing. We are designing our products to be adaptable and flexible. An AI system should fit easily into a customer’s existing workflow to promote efficiency and productivity, instead of imposing disruptive processes. Kyndi also makes its products adjustable to an individual user’s preferences, habits, and creativity. We believe AI will be far more effective and impactful if it encourages innovation in analytics by its users, rather than being constrained by design choices made by the software’s developers.
Finally, at Kyndi we see AI as augmenting, not replacing, the human knowledge worker. The fear of large-scale job losses is overblown. While some jobs will certainly disappear—taxi drivers as we shift to autonomous vehicles, for example—the widespread use of AI will create new opportunities across the economic spectrum. Importantly, AI will be a boon for the bulk of the current workforce: human analysts coping with ever-growing amounts of data. There simply is no time to process, read, and understand it all. AI is the means to boost productivity and help human analysts find actionable insight and knowledge in minimal time.
We invite you to contact us to learn more about our vision for the role of AI in the future knowledge economy.