XAI: Nothing certain (with a probability score)
Yana Chekan
The “X” that has recently appeared before the familiar AI abbreviation is not intended to revolutionize the field, nor does it stand for anything specific beyond its literal meaning. Rather, it emphasizes the critical aspect of explainability. eXplainable Artificial Intelligence (XAI) develops methods to tackle some of the field's longstanding issues while introducing its own set of intriguing research questions. Why XAI is worth drawing your attention (vector ;)) to is explained in the text below.
Overview and motivation
Comprehensive discussions of XAI often highlight the following core issues:
The black box nature of AI
Imagine we trained a well-performing model on a given dataset (not yet a personal one; we'll come to that later). However, in an almost Frankenstein-like manner, the inner workings of the model remain a mystery even to its creator: we do not know precisely why it produces certain outputs. For those unfamiliar with the Husky vs. Wolf example, a classifier that appeared to tell huskies from wolves but had in fact learned to detect snow in the image background, we recommend looking it up. Explaining a model involves either understanding what motivated a specific output (“local explanations”) or describing the internals of the system (“global explanations”); the sketch below illustrates the latter.
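To make the distinction concrete, here is a minimal sketch of a global explanation: permutation feature importance summarizes which inputs a model relies on overall. The dataset, model, and library choices are our own illustrative assumptions, not part of any work cited in this post.

```python
# A minimal sketch of a *global* explanation (illustrative assumptions:
# scikit-learn, a public dataset, and a random forest as the "black box").
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# The "black box" whose overall behaviour we want to summarize.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features, i.e. a coarse global explanation.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

A local explanation, by contrast, would answer why the model produced one particular prediction; the LIME sketch in the next section is an example of that.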
EU’s right to explanation
The EU General Data Protection Regulation (GDPR), specifically Articles 13-15, grants citizens the right to ‘meaningful information about the logic involved’ in automated decisions made regarding a natural person. According to this regulation, if a decision is made algorithmically based on your personal data, you have the right to demand an explanation of why that decision was made: for example, if something is advertised to you on social media or a credit risk assessment categorizes you in a particular way.
The machine learning community, along with researchers from the humanities, has not yet established a standard that a good explanation must fulfill. For most people, simply having access to the underlying code would not suffice as an explanation (Goodman and Flaxman, 2017), not to mention that such an explanation could reveal a company’s trade secrets. Various methods have been developed, from graphical representations of the regions most important to the model’s decision (such as heatmaps) to methods that assign importance scores to parts of the input with respect to the model’s outcome (such as LIME); a small sketch of the latter follows below. The fact that this field is more human-oriented rather than solely performance-focused makes XAI research questions particularly exciting.
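To give a flavour of what such an importance-score explanation looks like in practice, here is a minimal sketch using the LIME package; the dataset and classifier are illustrative stand-ins rather than anything prescribed by the works cited above.

```python
# A minimal sketch of a *local* explanation with LIME (illustrative
# assumptions: the `lime` and `scikit-learn` packages, a public dataset,
# and a random forest standing in for the black box).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# The black box we want to explain for a single prediction.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME perturbs the chosen instance and fits a simple surrogate model
# around it, so the importance scores are only valid locally.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X[0],                 # the single prediction we want explained
    model.predict_proba,  # the black box's probability function
    num_features=5,       # keep the explanation short and readable
)

# Each pair is (human-readable feature condition, signed importance score).
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The point is not the particular library but the shape of the answer: a handful of weighted conditions that a person can actually read.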
Explanations: ethics
Several points need to be addressed if a machine learning model is to be made explainable, and the questions of an explanation’s content, form, and target audience are interrelated. Explanations produced by modern systems are neither standardized nor systematically assessed. Below are some notable ethical questions that arise when choosing one form of explanation over another.
Fairwashing
A paper by Aïvodji et al. explored how a knowingly biased machine learning model can be “explained” so as to justify its decisions. Their proxy model was designed to produce counterfactual explanations of a decision-making model that were untrue and aimed at hiding the real reason behind its decisions. People exercising their right to explanation often lack the expertise or time to verify whether an explanation is legitimate and must trust the information provided. This kind of manipulation is called fairwashing (Aïvodji et al., 2019).
Persuasive vs. transparent explanations
This issue is relevant to human-to-human and human-machine interactions alike and is discussed in various philosophical papers. The recipient of any explanation, a human being, is biased towards simpler descriptions. In both human communication and model explanation, the information provided can be manipulated (sometimes even unknowingly) to persuade someone of its correctness rather than to present the full range of facts. According to the authors of “Explaining Explanations: An Overview of Interpretability of Machine Learning”, “...it is fundamentally unethical to present a simplified description of a complex system to increase trust in the system if the limitations of the simplified description cannot be understood by users, and worse if the explanation is optimized to hide undesirable attributes of the system” (Gilpin et al., 2018).
Tradeoff between completeness and interpretability
Naturally, a high level of transparency can be achieved when all accessible information is provided. For example, if we were to output every decision path a tree model can follow to reach its conclusions, the explanation would indeed be as complete as possible. On the other hand, as mentioned in “A Survey of Methods for Explaining Black Box Models”, “...the goodness of the explanation could be invalidated by its size and complexity. For example, when the linear model is high-dimensional, the explanation may be overwhelming. Moreover, if a too large set of rules, or a too deep and wide tree are returned, they could not be humanly manageable even though they are perfectly capturing the internal logic of the black box for the classification” (Guidotti et al., 2018). This tradeoff between completeness and interpretability is widely discussed in the scientific literature, and the solution often depends on the audience that will receive the explanation.
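As a toy illustration of this tradeoff, the sketch below dumps the full rule set of a decision tree at two depths; the dataset and depth choices are our own illustrative assumptions. A shallow tree yields a handful of readable rules, while the fully grown tree is “complete” yet far too long to be humanly manageable.

```python
# A minimal sketch of the completeness vs. interpretability tradeoff
# (illustrative assumptions: scikit-learn, a public dataset, two tree depths).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

for depth in (2, None):  # a shallow tree vs. a fully grown one
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    # export_text prints every learned rule, i.e. a "complete" explanation.
    rules = export_text(tree, feature_names=list(data.feature_names))
    print(f"max_depth={depth}: {len(rules.splitlines())} lines of rules")
```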
Explanations: target audience
According to a survey of explanation methods, “the goal of interpretability is to describe the internals of a system in a way that is understandable to humans. The success of this goal is tied to the cognition, knowledge, and biases of the user: for a system to be interpretable, it must produce descriptions that are simple enough for a person to understand using a vocabulary that is meaningful to the user” (Gilpin et al., 2018). In other words, who are we explaining to? Does an explanation need to take a different form depending on the end user? The authors of “Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective” and “Solving the Explainable AI Conundrum by Bridging Clinicians’ Needs and Developers’ Goals” studied artificial intelligence in the medical sphere, where explainable outcomes are essential, and asked themselves exactly these questions. Here are some excerpts from these studies:
How do we start?
Medical workers need explainability because of the highly sensitive nature of their job and the risks that come with it. Random errors must be easily identifiable. What is less obvious is that patients also need the model to be explainable. “For obtaining informed consent for diagnostic procedures or interventions, the law requires individual and comprehensive information about and understanding of these processes. In the case of AI-based decision support, the underlying processes and algorithms have therefore to be explained to the individual patient… For example, AI systems primed for “survival” as the outcome might not be aligned with the value of patients for whom a “reduction of suffering” is more important” (Tonekaboni et al., 2019). This brings us directly to the next point.
Double-edged sword
Will both sides be satisfied with the same explainability method? Not necessarily. If a hypothetical model were to suggest a diagnosis based on a blood test, a healthcare provider would want to know which indicators were most relevant to that suggestion, rather than the exact principles of the model, as they would not have the ability or resources to evaluate them. A patient, on the other hand, might require additional information on how the model computed the result and what its core principles and risks are, which the doctor may not be able to deliver.
Third party
The second study conducted a number of interviews with clinicians and developers, revealing several themes, among them “Opposing Goals for XAI”. The researchers discovered a drastic difference between the developers’ and the clinicians’ understanding of the fundamental purpose of XAI. Developers believed that the aim of XAI is to make the model’s actions understandable: “They tried to achieve this, for example, by introducing Shapley values of static and dynamic risk contributors to help explain the local logic of the model at each point in time”, whereas clinicians found this information unhelpful for increasing their understanding of and trust in the system. For them, “XAI was related to a system’s ability to demonstrate the plausibility of results within the clinical context”. They preferred the system to provide information on which tests and/or actions conducted that day led to the given risk assessment (Rudin, 2019).
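For readers unfamiliar with the Shapley-value explanations the developers mention, here is a minimal sketch using the shap package; the dataset and model are illustrative stand-ins, not the clinical system discussed in the study.

```python
# A minimal sketch of a Shapley-value explanation: signed per-feature
# contributions to a single model output (illustrative assumptions:
# the `shap` package, a public dataset, and a regressor standing in
# for a model that outputs a numeric risk score).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y = data.data, data.target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain one prediction

# Each value is a feature's signed contribution to this prediction,
# relative to the model's average output.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

As the interviews suggest, producing such numbers is the easy part; whether they answer the clinician’s actual question is a different matter.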
Although this covers only a small portion of the questions arising in XAI applications, we believe it provides a good sense of why the goal of an explanation can be ambiguous and requires cooperation between the producing and receiving sides of the algorithmic decision-maker, with some consultation from social science experts. Currently, there are multiple use cases where explainability is crucial. Apart from medicine, they include systems in public and governmental institutions (e.g., to detect generated text or fake news), in the justice sector (e.g., to provide sources for automated decision support systems), and in the financial sector for loan application risk assessment (quick reminder: this is where the right to explanation plays its role).
Conclusion
With this blog post, we aimed to introduce XAI as a more human-oriented field of ML. Although we focused less on state-of-the-art methods, we explained why none of them can be considered perfect. The quality of an explanation cannot be quantified by accuracy alone; instead, the focus must shift to the particular demands of each user category, something that should be considered for any emerging technology.
The emerging field of eXplainable Artificial Intelligence (XAI) thus presents a nuanced landscape where the quest for transparency intersects with ethical considerations and the diverse needs of its target audience. As highlighted throughout this post, the black box nature of AI systems has made explanations a necessity, driven not only by regulatory demands such as the EU’s right to explanation but also by the fundamental principles of accountability and trust in automated decision-making.
In healthcare, where the stakes are particularly high, XAI assumes a critical role in not only providing insights into algorithmic decisions but also fostering a deeper understanding among medical professionals and patients alike. Yet, as evidenced by divergent perspectives between developers and clinicians, the challenge lies not merely in rendering AI comprehensible but also in aligning its explanations with the specific informational needs of each constituency.