Managing the tension between opposing effects of explainability of artificial intelligence: a contingency theory perspective
【Author】Abedin, Babak
【Source】Internet Research
【Impact Factor】6.353
【Abstract】
Purpose: Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space: it aims to identify the opposing effects of AI explainability and the tensions between them, and to propose how to manage these tensions to optimize AI system performance and trustworthiness.
Design/methodology/approach: The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.
Findings: The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (the 5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).
Research limitations/implications: As in other systematic literature review studies, the results are limited by the content of the selected papers.
Practical implications: The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the "social goodness" of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.
Originality/value: This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability; instead, the co-existence of enabling and constraining effects must be managed.
【Keywords】Contingency theory; Systematic literature review; Explainable artificial intelligence; Interpretable analytics; Mitigating strategies; Opposing effects
【Publication Year】2021
【Date Indexed】2022-01-02
【Document Type】Journal article
【Subject Category】
--
Comments