Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance

Safety Assurance
DOI: 10.1145/3570918 Publication Date: 2022-11-17T15:07:32Z
ABSTRACT
The increasing use of Machine Learning (ML) components embedded in autonomous systems -- so-called Learning-Enabled Systems (LESs) -- has resulted in the pressing need to assure their functional safety. As for traditional functional safety, the emerging consensus within both industry and academia is to use assurance cases for this purpose. Typically, in support of claims about safety and reliability, an assurance case can be viewed as a structured way of organising arguments and evidence generated from safety analysis and reliability modelling activities. While such activities are traditionally guided by consensus-based standards developed from vast engineering experience, LESs pose new challenges in safety-critical applications due to the characteristics and design of ML models. In this article, we first present an overall assurance framework for LESs, with an emphasis on quantitative aspects, e.g., breaking down system-level safety targets into component-level requirements and supporting claims stated in reliability metrics. We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers that utilises the operational profile and robustness verification evidence. We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM, and propose solutions for practical use. Probabilistic safety argument templates at the lower ML component level are also developed based on the RAM. Finally, to evaluate and demonstrate our methods, we not only conduct experiments on synthetic/benchmark datasets but also scope our methods with case studies on simulated Autonomous Underwater Vehicles and physical Unmanned Ground Vehicles.
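The RAM described above combines two kinds of evidence: how often each region of the input space is exercised in operation (the operational profile) and how likely the classifier is to fail within that region (e.g., from robustness verification). A minimal, hypothetical sketch of that weighting is given below; the function name, cell partitioning, and all numbers are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: estimate a classifier's probability of
# misclassification per input (pmi) by partitioning the input space into
# cells and weighting each cell's failure probability by its
# operational-profile probability. Illustrative only.

def estimate_pmi(op_probs, cell_failure_probs):
    """pmi = sum over cells of Op(cell) * lambda(cell),
    where Op is the operational profile and lambda the per-cell
    misclassification probability (e.g., from robustness evidence)."""
    assert abs(sum(op_probs) - 1.0) < 1e-9, "operational profile must sum to 1"
    return sum(p * lam for p, lam in zip(op_probs, cell_failure_probs))

# Three illustrative cells of the input space (numbers are made up):
op = [0.7, 0.2, 0.1]       # how often each cell is encountered in operation
lam = [0.001, 0.05, 0.2]   # verified per-cell misclassification probability

pmi = estimate_pmi(op, lam)
reliability = 1.0 - pmi
```

The key point the sketch illustrates is model-agnosticism: the estimator needs only per-cell failure probabilities and the operational profile, not access to the classifier's internals.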