Blog Archives

Dynamic Fault Tree Method (Part 1 of 3)


Introduction
One of the most important goals of reliability analysis is “Predicting the reliability of the system for a specified mission time” [1].
Many techniques are available for reaching this goal.
In order to predict the reliability of a system, a proper reliability model must be selected.
Fault Tree Analysis (FTA) is one of the most developed and dominant techniques in reliability studies.
FTA techniques were first developed in 1962 at Bell Telephone Laboratories [2].
Nowadays, FTA is widely used for quantitative reliability analysis and safety assessment of complex and critical engineering systems [3].
In fact, FTA is a logical tree demonstrating the ways in which a system fails.
The tree starts with an undesired event (the top event), and all conceivable paths by which the top event can occur are shown.
For this logic tree, the leaves are basic events (BEs), which model component failures [4] and are generally linked to the failure of individual components [5].
The BEs represent the root causes of the undesired event.
Each BE has an appropriate failure distribution (most commonly Weibull or exponential), whose suitability is verified by goodness-of-fit techniques [4].
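As a brief illustration of this step (a sketch added here, not part of the original article), the following Python code fits exponential and Weibull models to a hypothetical set of failure times and checks their suitability with a Kolmogorov-Smirnov test; the data and parameter values are assumptions for demonstration only.

```python
# Sketch: fit candidate failure distributions to (hypothetical) failure times
# of a basic event and check their suitability with a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
failure_times = 1000.0 * rng.weibull(1.5, size=50)  # hypothetical data, hours

# Fit exponential and Weibull models (location fixed at zero).
expon_params = stats.expon.fit(failure_times, floc=0)
weibull_params = stats.weibull_min.fit(failure_times, floc=0)

# Goodness of fit: a larger p-value indicates no evidence against the model.
ks_expon = stats.kstest(failure_times, 'expon', args=expon_params)
ks_weibull = stats.kstest(failure_times, 'weibull_min', args=weibull_params)

print("Exponential:", expon_params, ks_expon)
print("Weibull:    ", weibull_params, ks_weibull)
```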
Nowadays, the FTA method is the most widely used quantitative technique for accident scenario assessment in industry [6]; however, it is often applied in its static form, which is not suitable for analyzing complex systems.

Static Fault Tree (SFT)
The main assumptions for the use of the SFTs are [6,7]:
i) binary BEs;
ii) statistically independent BEs;
iii) instantaneous transition between the working and the failed state;
iv) restoration of components to an as-good-as-new condition by maintenance; if the failure of a component influences events at higher levels, its repair restores those events to the normal operating condition.
The way in which events combine to produce system failure is represented by means of Boolean logic gates (AND, OR, Voting).
[Fig. 1: AND (a), OR (b), and Voting (c) gate symbols]
The AND gate (Fig. 1-a) produces a failed output when all of its inputs fail, the OR gate (Fig. 1-b) fails if at least one of its inputs fails, and the Voting gate (Fig. 1-c) fails if at least k out of n inputs fail [4].
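Assuming statistically independent BEs whose failure probabilities at the mission time are known, these gate definitions can be evaluated directly. The short Python sketch below is an illustration added here (the probability values are hypothetical), not material from the original article.

```python
# Sketch: output failure probability of static gates, assuming statistically
# independent basic events with known failure probabilities at mission time t.
from itertools import combinations
from math import prod

def and_gate(p):             # fails only if all inputs fail
    return prod(p)

def or_gate(p):              # fails if at least one input fails
    return 1.0 - prod(1.0 - pi for pi in p)

def voting_gate(p, k):       # k-out-of-n: fails if at least k of n inputs fail
    n = len(p)
    total = 0.0
    for m in range(k, n + 1):
        for failed in combinations(range(n), m):
            total += prod(p[i] if i in failed else 1.0 - p[i] for i in range(n))
    return total

p = [0.05, 0.10, 0.02]       # hypothetical BE failure probabilities
print(and_gate(p), or_gate(p), voting_gate(p, 2))
```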
SFTs with AND, OR, and Voting (k of n) gates cannot encompass the dynamic behavior of system failure mechanisms [8].
To overcome this problem, Dynamic Fault Tree (DFT) analysis is suggested in recent research.

Dynamic Fault Tree
Most reliability modeling techniques are based on statistical methods.
Typical examples are the reliability block diagram (RBD), FTA, and Markov chains [9].
These methods are not able to encompass the dynamic behavior of complex systems.
Dynamic reliability assessment methods were developed on the common basis of static reliability analysis in order to capture dynamic behavior such as sequence-, spare-, or time-dependent actions and failures in complex systems.
The key parameter separating dynamic behavior from static behavior is time.
Dynamic reliability approaches are powerful formalisms and enable more realistic modeling of complex systems [10].
Among these new formalisms proposed for reliability studies (DFT analysis, dynamic RBDs, Boolean logic driven Markov processes, etc.), DFT analysis has been the most widely used and practical. As with the SFT, the DFT is a graphical model for reliability studies that combines the ways in which an undesired event (top event) can occur.
However, in a DFT, the top event is time dependent.
The DFT improves on the traditional FT by including this time dependency [11].
Like an SFT, the DFT is a tree in which the leaves are BEs; however, in this approach, the BEs are modeled more realistically and in more detail than in the SFT technique.
The main assumptions for the use of the DFTs are [12]:
i) binary BEs;
ii) Non-repairable components (recently, some efforts have been made to consider repair in DFT [5]).
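To make the contrast with static logic concrete, the following Monte Carlo sketch (an illustration added here, not taken from the article) estimates mission unreliability for a primary unit backed by a cold spare, a sequence-dependent behavior that static AND/OR gates cannot represent; exponential failure times, perfect switching, and non-repairable components are assumed.

```python
# Sketch: Monte Carlo estimate of unreliability for a primary unit with a
# cold spare (the spare only starts aging after the primary fails).
# Assumptions: exponential failure times, perfect switching, non-repairable.
import numpy as np

rng = np.random.default_rng(0)
lam_primary, lam_spare = 1e-3, 1e-3   # hypothetical failure rates, per hour
t_mission = 1000.0                    # mission time, hours
n_trials = 100_000

primary = rng.exponential(1.0 / lam_primary, n_trials)
spare = rng.exponential(1.0 / lam_spare, n_trials)  # clock starts at switch-over

# The system fails within the mission only if the primary fails AND the spare,
# which begins operating at the moment of switch-over, also fails in time.
unreliability_cold_spare = np.mean(primary + spare <= t_mission)

# For comparison: a static AND of two always-active (hot) units.
unreliability_hot_and = np.mean(np.maximum(primary, spare) <= t_mission)

print(unreliability_cold_spare, unreliability_hot_and)
```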

By: Mohammad Pourgol-Mohammad, Ph.D, P.E, CRE, mpourgol@gmail.com

Previously published in the June 2015 Volume 6, Issue 2 ASQ Reliability Division Newsletter

Picture © B. Poncelet https://bennyponcelet.wordpress.com

Posted in General

RAMS BEST PAPER AWARD 2016

We congratulate Vladimir Babishin and Sharareh Taghipour on winning the 2016 RAMS Best Paper Award for the paper “Joint Maintenance and Inspection Optimization of a k-out-of-n System”.


Posted in General

RAMS 2017 – Visit ASQ Reliability Division

RAMS is currently ongoing in Orlando.
Please visit the ASQ Reliability Division booth, talk to the reliability experts, and see what the Reliability Division has to offer.


Posted in General

2015-2016 Quality Engineering Best Reliability Paper Award

On behalf of the ASQ Reliability Division (ASQ RD) and the QE Best Reliability Paper Award committee, we congratulate Michael Scott Hamada on being selected as the winner of the 2015-2016 QE Best Reliability Paper Award.

The award, given for the paper “Bayesian Analysis of Step-Stress Accelerated Life Tests and Its Use in Planning,” includes a plaque presented at the annual ASQ RD dinner banquet in Orlando, FL, on Tuesday, January 24, 2017.


Posted in General

Rabia Muammar spoke at the 2016 Industrial Engineering Forum at Hashemite University

In September, Mr. Rabia Muammar spoke about the ASQ Reliability Division at the 2016 Industrial Engineering Forum at Hashemite University.

It was a useful conference, and he received excellent feedback from the participants.

The presentation is included below.

 

Audience

Mr. Rabia Muammar

Posted in General

Interested in volunteer opportunities in reliability engineering?

Interested in volunteer opportunities in reliability engineering?

Our ASQ division is very active in engineering education, conferences and publications both in the US and abroad.

If you are interested in learning more about our group and how to get involved, I invite you to join our yearly planning meeting via WebEx.

Our meeting will be held on 1 October and you can volunteer from any geographic region.

Please email me (marc@asqrd.org) if you are interested!

Please also have a look at our website to learn more.

www.asqrd.org


Posted in General

Keynote Speakers on ASTR 2016

This year at ASTR, Dr. William Meeker, Dr. Andre Kleyner, and Dr. Elisabetta Jerome are the keynote speakers.

They are known for many things, but certainly as co-authors of “Statistical Methods for Reliability Data”, “Statistical Intervals”, and “Practical Reliability Engineering”.


Posted in General

ARS North America 2016 Award Winners

TUCSON, AZ (July 20, 2016) – ReliaSoft announced the winners of the Excellent Presentation Awards of the International Applied Reliability Symposium (ARS) North America 2016. The peer-selected winners were revealed at a hosted awards dinner that closed out the 3-day event in San Diego, California, which ran June 21 – 23. Winners were recognized with certificates and cash prizes.

Congratulations to the Excellent Presentation Award Winners at ARS North America 2016:

  • Gold – Vishal Mhaske of Tesla Motors
    Tailoring the DFMEA Process to Fit a Fast Paced Company Culture
  • Silver – Rachel Stanford of Schlumberger
    Reliability Centered Maintenance: Applying an Aviation Philosophy to Oil & Gas
  • Bronze – Ray Gibson of Philips – Respironics Sleep Therapy Product Group
    Combining Reliability Tools in the Pursuit of Root Cause from Field Data

More than 160 reliability and maintainability professionals from over 100 cities attended this year’s event. Both new and experienced presenters contributed to the diverse range of topics discussed, from FMEA and safety & risk analysis to maintenance strategy and design for reliability.

ReliaSoft VP of Business Development Adamantios Mettas said, “On behalf of ReliaSoft, I want to thank all the presenters who led the conversations at ARS by sharing their experiences, challenges and solutions to the reliability and maintainability engineering community. We invite past presenters to continue enriching the community, as well as new presenters to bring their unique voice to the forum. We look forward to next year’s event.”

For more details about the ARS North America 2016 event, visit http://www.arsymposium.org/2016/index.htm

###

About ReliaSoft Corporation

ReliaSoft Corporation is the industry leader in reliability engineering software, training and services that combine the latest theoretical advances with essential tools for the practitioner in the field. Founded in 1992, ReliaSoft has evolved into a total reliability solutions company, offering software and expertise focused primarily on the reliability engineering and quality needs of product manufacturers and maintenance organizations. For more information, visit www.reliasoft.com.

For More Information

To view event details on ARS North America 2016, visit http://www.arsymposium.org/2016/index.htm. To purchase digital copies of the ARS proceedings, visit the ReliaSoft Online Store at https://store.reliasoft.com/store/home.php?cat=29. To view pictures from the event, visit http://www.arsymposium.org/2016/pictures/.

For questions, please contact Nikki Helms, Marketing Specialist at nikki.helms@hbmprenscia.com.

Posted in General

ASTR Conference 2016, 28-30 September

“Finding the Balance between Testing, Modeling and Analysis to ensure Product Reliability and Safety”

Panelists:

  • Dr. William Meeker – Distinguished Professor, Iowa State University
  • Dr. Andre Kleyner – Global Reliability Engineering Leader, Delphi
  • Dr. Elisabetta Jerome – Technical Advisor, Armament & Weapons Test and Evaluation, United States Air Force (USAF)

Exceeding customer expectations and designing highly reliable products and systems is a complex task that should balance modeling and testing. Failure modes must be identified, modeled and mitigated in order to reduce risk while growing reliability.

  • However, what type of testing should be conducted?
  • When should modeling be used instead or in conjunction with testing?
  • How much testing should be performed?
  • What type of evidence should vendors provide in order to ensure reliability requirements are met?

These are tough questions engineers, analysts and managers face during the design and manufacturing process.

The ASTR 2016 panelists will discuss these important questions.

http://www.ieee-astr.org/

Have a question for the panel? Submit it today to marc@asqrd.org!

Keynote Speaker

Dr. William Meeker | Professor of Statistics and Distinguished Professor of Liberal Arts and Sciences, Iowa State University
Dr. William Meeker is Professor of Statistics and Distinguished Professor of Liberal Arts and Sciences at Iowa State University. He is a Fellow of the American Statistical Association (ASA), the American Society for Quality (ASQ), and the American Association for the Advancement of Science, and a past Editor of Technometrics. He is co-author of the books Statistical Methods for Reliability Data with Luis Escobar (1998) and Statistical Intervals with Gerald Hahn (1991), 14 book chapters, and numerous publications in the engineering and statistical literature. He has won the ASQ Youden Prize five times and the ASQ Wilcoxon Prize three times. He was recognized by the ASA with their Best Practical Application Award in 2001 and by the ASQ Statistics Division with their W.G. Hunter Award in 2003. In 2007 he was awarded the ASQ Shewhart Medal. He won the 2012 Jerome Sacks Award for Cross-Disciplinary Research and the 2014 ASQ Brumbaugh Award. He has done research and consulted extensively on problems in reliability data analysis, warranty analysis, accelerated testing, nondestructive evaluation, and statistical computing.

Featured Speakers

Dr. Elisabetta Jerome | Technical Advisor, Armament and Weapons Test and Evaluation, Eglin Air Force Base, Florida
Dr. Elisabetta L. “Betta” Jerome, a member of the Senior Executive Service, is Technical Advisor, Armament and Weapons Test and Evaluation, Eglin Air Force Base, Florida. She is the senior technical advisor to the Air Force Test Center Commander and serves as the senior Air Force technical advisor and national/international authority in armament and weapons test and evaluation. Dr. Jerome also has responsibility for providing technical advice and guidance to the highest levels of the Air Force and the DoD, as well as to nationally important conventional weapon system development efforts, with regard to test infrastructure, test capabilities and best practices, modeling and simulation, and interpretation of test results. Dr. Jerome holds a Doctor of Philosophy in Aerospace Engineering from the University of Florida, and Master of Science and Bachelor of Science degrees in Mechanical Engineering from Ohio State University.
Dr. Andre Kleyner | Global Reliability Engineering Leader, Delphi Electronics & Safety
Andre Kleyner has 30 years of engineering, research, consulting, and managerial experience specializing in the reliability of electronic and mechanical systems designed to operate in severe environments. He received his doctorate in Mechanical Engineering from the University of Maryland and a Master of Business Administration from Ball State University. Dr. Kleyner is a Global Reliability Engineering Leader with Delphi Electronics & Safety and an adjunct professor at Purdue University. He is a Fellow of the American Society for Quality (ASQ), a Certified Reliability Engineer, a Certified Quality Engineer, and a Six Sigma Black Belt. He also holds several US and foreign patents and has authored multiple professional publications, including three books on the topics of reliability, statistics, warranty management, and lifecycle cost analysis. Andre Kleyner is also the editor of the Wiley Series in Quality and Reliability Engineering (John Wiley & Sons). For more information please visit www.andre-kleyner.com.

 

 

Posted in General

Risk-Based Approaches To Establishing Sample Sizes For Process Validation

By Mark Durivage, ASQ Fellow

Using confidence, reliability, and acceptance quality limits (AQLs) to determine sample sizes for process validation is a proven method to ensure validation activities will yield valid results based upon an organization’s risk acceptance determination threshold, industry practice, guidance documents, and regulatory requirements. Figure 1 shows the relationship between risk and sample size: as the level of risk increases, the sample size increases accordingly.

Working with companies in FDA-regulated industries, I frequently see validations with inadequate sample sizes or otherwise without satisfactory statistical justification. This is due, in part, to engineers being thrown into the quality function without proper training or being told that “this is the way we have always done it”. This article is intended to provide background and guidance for people writing, executing, and summarizing validation protocols and reports.

[Figure 1: Relationship between risk and sample size]

 

Rooted in the Regulations and Standards

The importance of validating using accepted statistical techniques with rationale for sample sizes is readily apparent in FDA and ISO requirements.

The definition of process validation, according to 21 CFR 820, the FDA’s Quality System Regulation (QSR) for medical devices, is “establishing by objective evidence that a process consistently produces a result or product meeting its predetermined specifications.” The QSR requires that:

Where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures.

The QSR also requires that:

Where appropriate, each manufacturer shall establish and maintain procedures for identifying valid statistical techniques required for establishing, controlling, and verifying the acceptability of process capability and product characteristics. Sampling plans, when used, shall be written and based on a valid statistical rationale.

In the pharmaceutical world, 21 CFR 211, FDA’s Good Manufacturing Practices (GMP), requires that “appropriate written procedures, designed to prevent microbiological contamination of drug products purporting to be sterile, shall be established and followed. Such procedures shall include validation of all aseptic and sterilization processes.” As you may have noticed, 21 CFR 211 is “silent” in regard to sample size justification for process validation, but it uses wording such as “application of suitable statistical procedures where appropriate” in reference to product release and stability programs. However, experience has shown there is an expectation of using valid statistical techniques and rationale for all aspects of pharmaceutical process validation.

ISO 13485:2016, the standard for medical device quality system requirements, has similar language requiring the organization to “validate any processes for production and service provision where the resulting output cannot be or is not verified by subsequent monitoring or measurement and, as a consequence, deficiencies become apparent only after the product is in use or the service has been delivered.” — using, “as appropriate, statistical techniques with rationale for sample sizes.”

The ISO 9001:2015 quality management system requirements stipulate that an organization “implement production and service provision under controlled conditions.” Controlled conditions shall include, as applicable, “the validation, and periodic revalidation, of the ability to achieve planned results of the processes for production and service provision, where the resulting output cannot be verified by subsequent monitoring or measurement.”

Statistical Methods for Determining Sample Size

Deciding which statistical techniques to use depends on the type of data for which validation is required. There are two types of data: variable and attribute. Variable data are measurements made on a continuous scale, such as weight, length, and strength. Attribute data have two possible outcomes (for example, good/bad or pass/fail) or are discrete counts. Due to its specific nature, variable data yields much more information than attribute data.

Risk is defined as the combination of the probability of occurrence of harm and the severity of that harm. It is essential that risk levels be defined and applied uniformly throughout the organization. Table 1 shows an example of risk level definitions with accompanying defect classifications. These definitions can and will vary based upon the product(s) produced and their intended and unintended uses.

[Table 1: Example risk level definitions with accompanying defect classifications]

So how exactly are confidence and reliability related to attribute data? Confidence is the amount of uncertainty about the estimate of a probability, whereas reliability is the probability that a product will be functional at specified conditions for a specified amount of time. Confidence can be increased by increasing the sample size. In other words, the larger the sample, the more is actually known, at a given confidence, about the reliability of the process.
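As a worked illustration of the attribute-data approach (a sketch added here, based on the standard zero-failure “success-run” relationship n = ln(1 - C) / ln(R) rather than on any table in this article), the sample size follows directly from the chosen confidence and reliability levels:

```python
# Sketch: minimum sample size for a zero-failure (success-run) attribute
# demonstration at confidence C and reliability R: n = ln(1 - C) / ln(R).
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative risk-based tiers (example values, not this article's Table 2):
for c, r in [(0.95, 0.99), (0.95, 0.95), (0.90, 0.90)]:
    print(f"C={c:.0%}, R={r:.0%} -> n={success_run_sample_size(c, r)}")
```

For example, demonstrating 95% reliability with 95% confidence and zero allowed failures requires ln(0.05)/ln(0.95) ≈ 58.4, rounded up to 59 samples.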

Confidence and reliability have slightly different, yet similar, meanings when referring to variable data. In this context, confidence is the degree of certainty that the interval contains a certain percentage of the individual measurements in the population, and reliability is the fraction of the population the interval contains. The use of confidence and reliability for variable data assumes normality of the data. Other methods are also available that use non-parametric techniques for data that are not normally distributed.
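For variable data, a confidence/reliability statement of this kind corresponds to a normal-distribution tolerance limit. The sketch below (added here for illustration; the sample statistics are hypothetical) computes the exact one-sided k-factor from the noncentral t distribution, under the normality assumption noted above:

```python
# Sketch: one-sided tolerance limit for variable data, assuming normality.
# The k-factor comes from the noncentral t distribution:
#   k = t'_{C, n-1}(delta) / sqrt(n),  with  delta = z_R * sqrt(n)
import math
from scipy import stats

def one_sided_k_factor(n: int, confidence: float, reliability: float) -> float:
    delta = stats.norm.ppf(reliability) * math.sqrt(n)
    return stats.nct.ppf(confidence, df=n - 1, nc=delta) / math.sqrt(n)

# Hypothetical example: lower tolerance limit at 95% confidence / 95%
# reliability for a sample of n = 30 measurements.
n, xbar, s = 30, 102.4, 1.8
k = one_sided_k_factor(n, 0.95, 0.95)
print(round(k, 3), round(xbar - k * s, 2))
```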

Table 2 depicts example confidence and reliability levels based upon risk. Of course, different confidence and reliability levels can and should be utilized based upon an organization’s risk acceptance determination threshold, industry practice, guidance documents, and regulatory requirements.

[Table 2: Example confidence and reliability levels based upon risk]

There is also a third method for determining statistically valid techniques and rationale for sample sizes. This method uses sampling tables with an appropriate AQL. According to ANSI/ASQ Z1.4-2008, the AQL is “the maximum defective percent … that, for purpose of sampling inspection, can be considered satisfactory as a process average.” This method relies on assigning AQLs based upon risk acceptance.

Table 3 depicts example AQL levels based upon risk. Different AQL levels can and should be utilized based upon the organization’s risk acceptance determination threshold, industry practice, guidance documents, and regulatory requirements. A note of caution when using this method: lot sizes used for validation activities should be consistent with the lot sizes anticipated for production.

[Table 3: Example AQL levels based upon risk]
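The Z1.4 tables themselves are not reproduced here, but the behavior of any single sampling plan near a chosen AQL can be checked with the binomial distribution. The sketch below is illustrative only; the plan (n, c) and the AQL value are assumptions, not values taken from ANSI/ASQ Z1.4.

```python
# Sketch: probability of accepting a lot under a single sampling plan (n, c),
# evaluated near a chosen AQL. Plan values are illustrative only and are NOT
# taken from the ANSI/ASQ Z1.4 tables.
from scipy import stats

def prob_accept(n: int, c: int, p_defective: float) -> float:
    # Accept the lot if the number of defectives found is at most c.
    return float(stats.binom.cdf(c, n, p_defective))

n, c, aql = 80, 2, 0.01   # hypothetical plan and an AQL of 1.0%
for p in [aql / 2, aql, 2 * aql, 5 * aql]:
    print(f"p={p:.3f}  Pa={prob_accept(n, c, p):.3f}")
```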

Regardless of the method used to determine a statistically valid sample size and the appropriate rationale (confidence, reliability, or AQL), the method should be based on predefined definitions of risk associated with the product, the costs associated with producing the product, and the costs associated with inspection, measuring, and testing, and it should be applied consistently.

Subsequent articles in this series will provide how-to advice on applying these techniques in your organization.

About the Author

Mark Allen Durivage is the managing principal consultant at Quality Systems Compliance LLC and an author of several quality-related books. He earned a B.A.S. in computer aided machining from Siena Heights University and an M.S. in quality management from Eastern Michigan University. Durivage is an ASQ Fellow and holds several ASQ certifications, including CQM/OE, CRE, CQE, CQA, CHA, CBA, CPGP, and CSSBB. He also is a Certified Tissue Bank Specialist (CTBS) and holds a Global Regulatory Affairs Certification (RAC). Durivage resides in Lambertville, Michigan. Please feel free to email him at mark.durivage@qscompliance.com with any questions or comments, or connect with him on LinkedIn.
References:

  1. Durivage, M.A., 2014, Practical Engineering, Process, and Reliability Statistics, Milwaukee, ASQ Quality Press
  2. Durivage, M.A. and Mehta B., 2016, Practical Process Validation, Milwaukee, ASQ Quality Press
  3. Durivage, M.A., 2016, The Certified Pharmaceutical GMP Professional Handbook, 2nd Ed., Milwaukee, ASQ Quality Press.
  4. ISO 9001:2015 Quality management systems—Requirements
  5. ISO 13485:2016 Medical devices—Quality management systems—Requirements for regulatory purposes
  6. United States Code of Federal Regulations 21 CFR:
    1. Part 211 Current good manufacturing practice for finished pharmaceuticals
    2. Part 820 Quality system regulation
  7. ANSI/ASQ Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes

Link: http://www.outsourcedpharma.com/doc/risk-based-approaches-to-establishing-sample-sizes-for-process-validation-0001

Posted in General
Networking

Provide a global forum for networking among practitioners of reliability engineering, management and related topics.

Growth

Facilitate growth and development of division members.


Provide Resources

Promote reliability engineering principles and serve as a technical resource on reliability engineering for ASQ, standards agencies, industry, government, academia and related disciplines

Training

Sponsor, present and promote reliability, maintainability, and related training materials for courses, symposia, and conferences.