View Categories

AI – Explainable AI (XAI) – Accuracy of Explanation

“Explainable AI” (XAI), a crucial aspect of TAI, aims to ensure the correct behaviour of an ML component through approaches that learn not only how to assign classification/regression labels, but also additional explanations of the model. The questions relate to the ability of AI-based solutions to support humans, not only from a technical perspective, but also by improving their knowledge of a given event or phenomenon. The reader may consult the reports “Bias in AI systems and AI aided decision making” (ISO/IEC JTC 1/SC 42, 2020) and “Assessment of the robustness of neural networks” (ISO/IEC JTC 1/SC 42, 2019).

Main Question

Is an explanation accurate enough? (Seshia et al., 2022)

Sub-Questions:

  1. Can the explanation be applied to model outputs on training/known data?
  2. Can the explanation be applied to model outputs on unseen/new data?
  3. Are there requirements in the model design that guarantee the explainability of its outputs?
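Sub-questions 1 and 2 can be probed empirically by measuring explanation *fidelity*: how closely an interpretable surrogate of the black-box model reproduces its outputs on known versus unseen data. The sketch below is a minimal illustration of this idea using scikit-learn; the models, dataset, and depth limit are hypothetical choices for the example, not prescribed by the framework.

```python
# Sketch: fidelity of a surrogate "explanation" model on known vs unseen data.
# Assumes scikit-learn; the specific models and dataset are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Interpretable surrogate trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity = agreement between surrogate and black box (sub-questions 1 and 2).
fidelity_known = accuracy_score(black_box.predict(X_train),
                                surrogate.predict(X_train))
fidelity_unseen = accuracy_score(black_box.predict(X_test),
                                 surrogate.predict(X_test))
print(f"fidelity on known data:  {fidelity_known:.2f}")
print(f"fidelity on unseen data: {fidelity_unseen:.2f}")
```

A large gap between the two fidelity scores would suggest the explanation holds only for data the model has already seen, which bears directly on whether the explanation is accurate enough.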

References

  • Seshia, S.A., Sadigh, D. and Sastry, S.S. (2022) ‘Toward verified artificial intelligence’, Communications of the ACM, 65(7), pp. 46–55. doi: 10.1145/3503914