AI – Requirements for AI System Robustness

AI/ML-based solutions, especially Deep Neural Networks (DNNs), are largely opaque to humans; their outputs are difficult to interpret and explain. This is a major limitation for testing and formal verification. In addition, perturbations or real-world corruptions can induce changes in the input data of an AI system (e.g., adversarial examples, sensor noise, weather influences, or shifts in colour and contrast caused by sensor degradation). Finally, using an AI system in another domain (transfer learning) or in another context (e.g., training in summer, execution in winter) can dramatically reduce its functional quality (Willers et al., 2020; Schwalbe et al., 2020).
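To make the notion of "real-world corruptions" concrete, the following is a minimal sketch of two illustrative corruption functions on a grayscale image (represented simply as a list of pixel intensities in [0, 1]); the function names and parameters are hypothetical, not part of any cited work:

```python
import random

random.seed(1)

def add_sensor_noise(pixels, std=0.05):
    """Additive Gaussian noise, as produced by a degrading sensor;
    results are clipped back into the valid intensity range [0, 1]."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, std))) for p in pixels]

def reduce_contrast(pixels, factor=0.5):
    """Compress intensities toward mid-grey, mimicking fog or haze."""
    return [0.5 + factor * (p - 0.5) for p in pixels]

image = [0.0, 0.25, 0.5, 0.75, 1.0]
noisy = add_sensor_noise(image)
low_contrast = reduce_contrast(image)
# reduce_contrast maps [0.0, 0.25, 0.5, 0.75, 1.0]
# to [0.25, 0.375, 0.5, 0.625, 0.75]
```

A robustness evaluation would apply such corruptions, at increasing severities, to a validation set and measure how the model's performance degrades.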

Main Question

Have specific metrics or related requirements been provided to assure the robustness of AI systems (especially DNNs)?

Sub-Questions

  1. Has the required performance been achieved under reasonable perturbations?
  2. Has the system been tested in all required conditions (based on the ODD, features and configurations, and so on)?
  3. Has the robustness of the system been evaluated under real-world circumstances? (Burton et al., 2022)
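The first sub-question can be sketched as an executable check: compare accuracy on clean inputs with accuracy under input perturbations and verify the drop stays within a stated requirement. This is a minimal sketch with a toy 1-D threshold "model" standing in for a DNN; the data, noise level, and 20-point threshold are all illustrative assumptions, not values from the cited sources:

```python
import random

random.seed(0)

# Hypothetical stand-in for a trained DNN: a 1-D threshold classifier.
def model(x):
    return 1 if x > 0.5 else 0

# Synthetic labelled data drawn away from the decision boundary.
data = [(random.uniform(0.0, 0.4), 0) for _ in range(200)] + \
       [(random.uniform(0.6, 1.0), 1) for _ in range(200)]

def accuracy(noise_std=0.0):
    """Accuracy under additive Gaussian input noise of the given std."""
    correct = sum(model(x + random.gauss(0.0, noise_std)) == y
                  for x, y in data)
    return correct / len(data)

clean = accuracy(0.0)
perturbed = accuracy(0.15)

# Illustrative robustness requirement: accuracy under perturbation
# must stay within 20 percentage points of clean accuracy.
requirement_met = (clean - perturbed) <= 0.20
```

In practice the same pattern applies with the actual model, ODD-derived perturbation models (noise, weather, contrast), and requirement thresholds agreed with the safety case.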

References