AI/ML-based solutions, especially Deep Neural Networks (DNNs), are largely opaque to humans; their outputs are difficult to interpret and explain. This is a significant limitation for testing and formal verification. In addition, perturbations or real-world corruptions can induce changes in the data the AI system receives (e.g., adversarial examples, sensor noise, weather influences, or colour and contrast shifts caused by sensor degradation). Finally, using an AI system in another domain (transfer learning) or in another context (e.g., training in summer, execution in winter) can sometimes reduce functional quality dramatically (Willers et al., 2020; Schwalbe et al., 2020).
Main Question
Have specific metrics or related requirements been provided to assure the robustness of AI systems (in particular DNNs)?
Sub-Questions
- Has the required performance been achieved under reasonable perturbations?
- Has the system been tested under all required conditions (based on the ODD, features and configurations, and so on)?
- Has the robustness of the system been evaluated under real-world circumstances? (Burton et al., 2022)
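The first sub-question implies a concrete, measurable check: apply perturbations of increasing strength to the test inputs and record how much accuracy the system retains. A minimal sketch of such a metric is given below, using additive Gaussian noise as a stand-in for sensor corruption and a hypothetical toy classifier (`model`, `robustness_under_noise`, and the synthetic data are illustrative assumptions, not prescribed by any standard):

```python
import numpy as np

def accuracy(model, X, y):
    """Fraction of inputs the model classifies correctly."""
    return float(np.mean(model(X) == y))

def robustness_under_noise(model, X, y, sigmas, seed=0):
    """Accuracy retained under additive Gaussian noise of varying strength.

    Illustrative metric: for each noise level sigma, perturb the inputs
    and re-measure accuracy. A robustness requirement could then be
    phrased as, e.g., "accuracy must stay above T for sigma <= S".
    """
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in sigmas:
        X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
        results[sigma] = accuracy(model, X_noisy, y)
    return results

# Hypothetical toy model: classify points by the sign of their feature sum.
model = lambda X: (X.sum(axis=1) > 0).astype(int)

# Synthetic, well-separated data, so clean accuracy is 1.0 by construction.
rng = np.random.default_rng(42)
X = rng.normal(0.0, 1.0, size=(200, 4)) + rng.choice([-2.0, 2.0], size=(200, 1))
y = (X.sum(axis=1) > 0).astype(int)

print(robustness_under_noise(model, X, y, sigmas=[0.0, 0.5, 2.0]))
```

In practice the perturbation set would be derived from the ODD (weather models, sensor degradation profiles, adversarial attacks) rather than plain Gaussian noise, but the structure of the metric, accuracy as a function of perturbation strength, stays the same.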
References
- Willers, O., Sudholt, S., Raafatnia, S. and Abrecht, S. (2020) ‘Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks’. Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops. Available at: https://www.springerprofessional.de/safety-concerns-and-mitigation-approaches-regarding-the-use-of-d/18306378 (Accessed 22 May 2024).
- Schwalbe, G., Knie, B., Sämann, T., Dobberphul, T., Gauerhof, L., Raafatnia, S. and Rocco, V. (2020) ‘Structuring the Safety Argumentation for Deep Neural Network Based Perception’. Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops. Available at: https://www.springerprofessional.de/structuring-the-safety-argumentation-for-deep-neural-network-bas/18306386 (Accessed 22 May 2024).
- Burton, S., Hellert, C., Hüger, F., Mock, M. and Rohatschek, A. (2022) ‘Safety Assurance of Machine Learning for Perception Functions’. Deep Neural Networks and Data for Automated Driving, pp.335–358. doi:https://doi.org/10.1007/978-3-031-01233-4_12.