AI – Safety Argumentation and Transparency in AI Decision-making

This question is along the same lines as “Uncertainty Estimation (for DNN)”: a complete safety argumentation can be supported by methods that help humans understand and analyse why an AI system took a specific decision. Moreover, insight into the inner operations of an AI system (a DNN in particular) can increase trust in the DNN. In fact, the human's understanding of the semantics of the data feeds into a semantic understanding of the DNN, and refining this understanding is an iterative process.

Main Question:

Have the relevant dimensions that affect and influence the performance of the AI system been found and selected? (Burton et al., 2022)

Sub-Questions:

  1. Has the use of a specific tool been considered? (Haedecke et al., 2022)
  2. Can this tool support the user’s visual analysis of large amounts of data in such a way as to reveal the nature of the underlying data and the DNN’s performance?
  3. Based on that, can the user make hypotheses about the relevance of semantic and non-semantic dimensions, which then can be checked or refined interactively?
  4. Will this give humans a greater understanding and deeper knowledge of the AI system’s behaviour?
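The hypothesis-checking loop in sub-questions 2 and 3 can be illustrated with a minimal sketch: slice the DNN's prediction results along a candidate semantic dimension and compare performance per slice. All data, attribute names (e.g. "occlusion"), and values below are illustrative assumptions, not part of any specific tool.

```python
# Minimal sketch: checking whether a semantic dimension is relevant
# to DNN performance by slicing results along it.
# Records, attribute names, and values are hypothetical.
from collections import defaultdict

# Each record: (value of the semantic dimension, prediction correct?)
results = [
    ("low occlusion", True), ("low occlusion", True), ("low occlusion", False),
    ("high occlusion", False), ("high occlusion", True), ("high occlusion", False),
]

# Aggregate correctness per slice of the semantic dimension.
correct = defaultdict(int)
total = defaultdict(int)
for slice_value, is_correct in results:
    total[slice_value] += 1
    correct[slice_value] += int(is_correct)

accuracy = {s: correct[s] / total[s] for s in total}

# A large accuracy gap between slices supports the hypothesis that
# this dimension influences the DNN's behaviour; a small gap suggests
# refining the hypothesis and trying another dimension.
gap = abs(accuracy["low occlusion"] - accuracy["high occlusion"])
print(accuracy, round(gap, 2))
```

In an interactive visual-analytics setting, this per-slice comparison would be repeated for many candidate dimensions, with the user refining hypotheses based on each result.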

References