“Stability” (another important element in TRUST) compares explanations between similar instances for a fixed model (while “Consistency”, see AI_9, compares explanations between models). High stability means that slight variations in an instance's feature values do not substantially change the explanation (unless those variations also strongly change the prediction).
See Du et al. (2022) and Gyllenhammar et al. (2020).
Main Question
Is high stability achieved?
Sub-Questions:
- Are the explanations similar for similar instances?
- Is the explanation strongly affected by slight changes to the feature values of the instance being explained?
- Is the explanation strongly affected by the non-deterministic components of the explanation method?
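The sub-questions above can be probed empirically: perturb the instance slightly, recompute the explanation, and measure how similar the perturbed explanations are to the original one. The sketch below is a minimal illustration, not a prescribed method; the logistic model, the gradient-times-input attribution, and the cosine-similarity score are all assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: logistic regression f(x) = sigmoid(w . x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def explain(x):
    # Gradient-times-input attribution (one simple explanation method).
    p = predict(x)
    return p * (1.0 - p) * w * x

def stability(x, n_perturbations=100, noise_scale=0.01):
    # Mean cosine similarity between the explanation of x and the
    # explanations of slightly perturbed copies of x.
    e0 = explain(x)
    sims = []
    for _ in range(n_perturbations):
        xp = x + rng.normal(scale=noise_scale, size=x.shape)
        ep = explain(xp)
        sims.append(e0 @ ep / (np.linalg.norm(e0) * np.linalg.norm(ep)))
    return float(np.mean(sims))

x = np.array([0.8, 0.3, -1.2])
print(f"stability score: {stability(x):.3f}")  # a value near 1 indicates high stability
```

For a non-deterministic explanation method (the third sub-question), the same score can be computed by re-running the explainer with different random seeds on the *same* instance instead of perturbing the features.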
References
- Du, X., Wang, X., Gozum, G. and Li, Y. (2022) ‘Unknown-Aware Object Detection: Learning What You Don’t Know from Videos in the Wild’. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13678–13688. Available at: https://openaccess.thecvf.com/content/CVPR2022/papers/Du_Unknown-Aware_Object_Detection_Learning_What_You_Dont_Know_From_Videos_CVPR_2022_paper.pdf (Accessed: 22 May 2024).
- Gyllenhammar, M., Johansson, R., Warg, F., Chen, D., Heyn, H.-M., Sanfridson, M., Söderberg, J., Thorsén, A. and Ursing, S. (2020) ‘Towards an Operational Design Domain That Supports the Safety Argumentation of an Automated Driving System’. 10th European Congress on Embedded Real Time Systems (ERTS 2020). Available at: https://hal.science/hal-02456077 (Accessed: 22 May 2024).