Advanced technologies such as artificial intelligence, machine learning, and advanced statistical methods are rapidly becoming prevalent in everyday life. These complex systems integrate hardware, software, algorithms, and human interactions. However, adequately assessing potential safety and security concerns early in design (concept development) using traditional reliability, hazard, and safety methods and tools can be problematic. The result is fielded systems whose failures defy necessary causal attribution, prolonging potential safety risks and consuming resources ineffectively and inefficiently.
It is imperative to improve preventive methodologies and causal investigative measures to address this growing issue. The Safety and Security in Complex Systems initiative, launched by the ASQ Reliability Division, aims to address these challenges and support any interested parties facing them. This initiative will benefit the community in several ways. First and foremost, it will help ensure the safety and security of users and others who interact with, or are potentially affected by, adverse consequences of failures in these systems. Second, it will reduce the expense and resource consumption that accompany safety-related consequences and security breaches.
To accomplish these goals, a Safety/Security in Complex Systems Initiative Committee has been formed, with membership open to any relevant party or stakeholder. Interested in learning more? Send us an email at firstname.lastname@example.org or join our mailing list below.
ASQ Automotive Division Webinar, 20 May 2016: Safety-Related Issue Discovery in Early Stages of Product Development. With competitive automotive industry initiatives driving ever more features and functions, with highly likely moves from traditional mechanical/hydraulic/pneumatic implementations to 'by wire' functions, and with a consequent reduction in the operator's cognitive operational effort (e.g., autonomous cars), safety becomes a paramount concern. Dealing with these issues after release from manufacturing is extremely difficult because of the system's complexity, and traditional baseline RCA/problem-solving methods are too often inadequate. These problems, with their blurred software/hardware boundaries, are no longer deterministic but now fall into the category of phenomenological faults. This Perspective Piece examines a path forward for developing early detection and discovery of potential safety issues, using methods that address 'condition'-based failures rather than the common 'event'-based approach.