Górski, Łukasz; Ramakrishna, Shashishekar
Abstract: Explainable artificial intelligence (XAI) is a research direction that has already come under scrutiny, in particular in the AI\&Law community. Whilst there have been notable developments in the area of general (not necessarily legal) XAI, user experience studies of such methods, as well as broader studies of how users understand explainability, are still lagging behind: fewer than 1\% of research outcomes in the area include user evaluations at all. This paper, firstly, assesses the performance of different explainability methods (Grad-CAM, LIME, SHAP) in explaining the predictions for a legal text classification problem; those explanations were then judged by legal professionals according to their accuracy. Secondly, the same respondents were asked to give their opinion on the desired qualities of an (explainable) artificial intelligence (AI) legal decision system and to present their general understanding of the term XAI. This part was treated as a pilot study for a more comprehensive one regarding lawyers' position on AI, and on XAI in particular.
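To make concrete what "explaining the predictions for a legal text classification problem" involves, the following is a minimal sketch of one of the three methods named above (LIME) applied to a text classifier. It assumes a scikit-learn pipeline and the `lime` package; the toy corpus, class names, and labels are hypothetical placeholders, not the paper's actual data or model.

```python
# A minimal sketch (not the paper's actual pipeline): LIME word-level
# attributions for a toy binary legal-text classifier.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

# Hypothetical toy corpus standing in for the legal classification data.
texts = [
    "The lessee shall pay rent on the first day of each month.",
    "The parties agree to binding arbitration of all disputes.",
    "Either party may terminate this agreement with 30 days notice.",
    "The lessor shall maintain the premises in habitable condition.",
]
labels = [0, 1, 1, 0]  # placeholder class ids, e.g. 0 = lease, 1 = other

# Train a simple TF-IDF + logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model;
# the resulting weights indicate which words drove the prediction.
explainer = LimeTextExplainer(class_names=["lease", "other"])
explanation = explainer.explain_instance(
    texts[0], model.predict_proba, num_features=5
)
for word, weight in explanation.as_list():
    print(f"{word}: {weight:+.3f}")
```

The printed word/weight pairs are the kind of explanation artefact that, in the study described above, legal professionals were asked to judge for accuracy; SHAP produces analogous per-token attributions, while Grad-CAM requires a gradient-based (neural) classifier rather than the linear model sketched here.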