In order to integrate AI models (including modules and components) into products and industrial processes, it is necessary to have requirements for rigorous software (SW) verification. The objective is to ensure both the safety and the performance of the SW in all parts of the system (ISO 26262-1:2018), taking into account that many methods for validating non-AI systems are mostly not directly applicable to AI systems, above all to neural networks; see Dietterich (2017) and Zielke (2020).
Traditional ML models with few parameters (shallow models) are better suited to meeting robustness goals than complex (deep) models, which can nevertheless be more effective in some contexts, especially in perception (Belkin et al., 2019).
Main Question
Has standardization been regarded as a way to manage this risk, so as to enable industry to use deep models without compromising safety or other aspects related to robustness?
Sub-Questions
- Has the complexity of DNNs been considered in terms of risks to robustness within standardization work? (For example, has “Assessment of the Robustness of Neural Networks” by WG3 been considered?) (ISO/IEC JTC 1/SC 42 Artificial Intelligence, 2019)
- Are there guardrails in the model to mitigate the risks for minority/vulnerable groups?
- Can a negative impact be detected and mitigated by the model itself?
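To make the first sub-question more concrete, the kind of robustness assessment discussed in standardization work can be illustrated with a minimal, hypothetical sketch: a toy classifier whose prediction stability is measured under small input perturbations. The model, parameter values, and the `flip_rate` metric below are illustrative assumptions for this document, not a procedure taken from any ISO/IEC publication.

```python
import random

def linear_classifier(x, w=(0.8, -0.5), b=0.1):
    """Toy linear model: returns class 1 if w.x + b > 0, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def flip_rate(model, inputs, noise_std=0.05, trials=100, seed=0):
    """Fraction of perturbed predictions that differ from the clean
    prediction -- a simple empirical local-robustness score."""
    rng = random.Random(seed)
    flips = total = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            # Add small Gaussian noise to each input feature.
            noisy = [xi + rng.gauss(0.0, noise_std) for xi in x]
            flips += model(noisy) != baseline
            total += 1
    return flips / total

if __name__ == "__main__":
    test_points = [(0.5, 0.2), (-0.3, 0.9), (0.05, 0.1)]
    print(f"flip rate under noise: {flip_rate(linear_classifier, test_points):.3f}")
```

A lower flip rate indicates more stable decisions under perturbation; for a deep model the same idea applies, but the score alone does not certify robustness, which is why the sub-questions above ask whether standardized assessment methods have been adopted.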
References
- Dietterich, T.G. (2017) ‘Steps Toward Robust Artificial Intelligence’, AI Magazine, 38(3), pp. 3–24. doi: https://doi.org/10.1609/aimag.v38i3.2756.
- Zielke, T. (2020) ‘Is Artificial Intelligence Ready for Standardization?’, Communications in Computer and Information Science, pp. 259–274. doi: https://doi.org/10.1007/978-3-030-56441-4_19.
- Belkin, M., Hsu, D., Ma, S. and Mandal, S. (2019) ‘Reconciling modern machine-learning practice and the classical bias–variance trade-off’, Proceedings of the National Academy of Sciences, 116(32), pp. 15849–15854. doi: https://doi.org/10.1073/pnas.1903070116.