This question is along the same lines as “Uncertainty Estimation (for DNN)”: a complete safety argumentation can be supported by methods that help humans understand and analyse why an AI system took a specific decision. Moreover, insights into the inner operations of an AI system (a DNN in particular) can increase trust in DNNs in general; in fact, human knowledge about semantics supports a semantic understanding of the DNN, and building this understanding is an iterative process.
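As an illustration of such methods, the following is a minimal sketch of gradient-based saliency, one common way to inspect which input pixels influenced a decision; the pretrained ResNet-18 and the random placeholder input are assumptions for the example, not part of the cited works.

```python
import torch
import torchvision.models as models

# Minimal gradient-based saliency sketch: which input pixels most
# influenced the classifier's decision? (Placeholder input; any image
# tensor of shape (1, 3, 224, 224) would do.)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
predicted_class = int(logits.argmax(dim=1))

# Backpropagate the predicted class score to the input pixels.
logits[0, predicted_class].backward()

# Saliency map: gradient magnitude per pixel, maximised over colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```

Regions with large saliency values are the ones the model was most sensitive to, which gives a human analyst a concrete starting point for the iterative semantic understanding described above.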
Main Question:
Have the relevant dimensions that affect and influence the performance of the AI system been identified and selected? (Burton et al., 2022)
Sub-Questions:
- Has the use of a specific tool, such as a visual analytics tool, been considered? (Haedecke et al., 2022)
- Can this tool support and enable the user’s visual analysis of large amounts of data, so that the nature of the underlying data and the DNN’s performance can be understood?
- Based on that, can the user form hypotheses about the relevance of semantic and non-semantic dimensions, which can then be checked or refined interactively? (A sketch of such a check follows this list.)
- Will this result in a greater human understanding of, and deeper knowledge about, the AI system’s behaviour?
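ScrutinAI itself is an interactive visual analytics tool; purely as an illustration of the slice-based hypothesis check named in the sub-questions, the following sketch uses pandas with hypothetical column names (weather, object_size, detected) to compare detection rates across candidate semantic dimensions.

```python
import pandas as pd

# Hypothetical per-prediction log: each row is one ground-truth object
# with the model's outcome and candidate semantic dimensions.
predictions = pd.DataFrame({
    "weather":     ["clear", "rain", "rain", "fog", "clear", "fog"],
    "object_size": ["large", "small", "large", "small", "large", "small"],
    "detected":    [True, False, True, False, True, False],
})

# Hypothesis check: does performance vary along a semantic dimension?
# Detection rate (recall) per slice of each candidate dimension.
for dimension in ["weather", "object_size"]:
    recall_per_slice = predictions.groupby(dimension)["detected"].mean()
    print(f"\nDetection rate by {dimension}:")
    print(recall_per_slice)
```

A marked drop in one slice (e.g. small objects in fog) would support the hypothesis that this dimension is relevant to the DNN’s performance and worth refining interactively.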
References
- Burton, S., Hellert, C., Hüger, F., Mock, M. and Rohatschek, A. (2022) ‘Safety Assurance of Machine Learning for Perception Functions’, in Deep Neural Networks and Data for Automated Driving, pp. 335–358. DOI: https://doi.org/10.1007/978-3-031-01233-4_12.
- Haedecke, E., Mock, M., Houben, S. and Akila, M. (2022) ‘ScrutinAI: Visual Analytics Approach for the Semantic Analysis of Deep Neural Network Predictions’, EuroVis Workshop on Visual Analytics. Available at: https://publica-rest.fraunhofer.de/server/api/core/bitstreams/f7d5aca7-5080-4153-b1e1-aa113032a4f3/content (Accessed: 27 May 2024).