“An approximate answer to the right question is worth a good deal more than the exact answer to an approximate problem.”

—John W. Tukey (1915–2000)

Picture © B. Poncelet https://bennyponcelet.wordpress.com

Posted in General

1. Always assume the worst will eventually happen. This applies especially to critical parts and assemblies. Know what is critical, especially to the end use and to the customers. What are the critical parts? How will the customer abuse the assembly?

2. Always check for tolerance stack-up problems. Parts in tolerance today may not be in the future. Don’t assume stability from the suppliers, or that wear cannot occur. Most of the time we do not know the relationship between being in specification and the ultimate reliability. A DOE (Design of Experiments) would help here.

3. Metal inserts in plastic parts are hard to mold well. This may lead to problems in use because of residual stress and will eventually cause problems through tool wear and/or part cracking.

4. Always maximize the radii that are present. Small radii lead to high stress concentrations and failure-prone places. Harden these areas or use harder metals where possible when the radii cannot be increased.

5. Use as few connections as possible. This includes connectors; wire connections such as welds, solder joints and crimps; material connections; and seals. Remember that all connections are potentially weak points that will fail given time and stress.

6. All seals fail given time and stress. You need at least two levels of sealing to ensure the product will last as long as the customer expects. Remember that some materials diffuse through others. Perhaps three levels of seal are required.

7. Threads on bolts and screws shouldn’t carry shear loads. Remember that they need preload and/or stretch to ensure proper loading initially. Under tension in use, metal stretches, fractures and corrodes, and develops high stress concentrations. Be sure to allow for this.

8. Use as few nuts, bolts and screws as possible. While these are convenient temporary connection methods, they are 100-year-old technology. Lock washers and Loctite have been developed to slow the rate of loosening. All will eventually come loose anyway when stress, temperature or vibration is present.

9. Belts and chains will stretch and/or slip when used to deliver power. Remember that these types of parts need constant-tension devices to aid their reliability. Again, this is old technology that can be made reliable by careful application. (Note: this is one of the biggest field problems with snow throwers.)

10. Avoid set screws, as these easily come loose because of their small size. Even when used on a flat, set screws are only “temporary connection” mechanisms. Loctite only makes “the temporary” last a little longer in the presence of stress.

11. Watch the use of metal arms to carry loads. They often deflect in an imperceptible manner. This is especially true when loads are dynamic.

12. Integrate as many mechanical functions as possible. Use as few separate and distinct mechanical parts that are joined as possible. Joints are usually unreliable.

Each of these common mechanical design problems arises in everyday situations where 10% failures per year might be acceptable or near the limit of the technology (washing machines, other appliances, many instruments and even some cars). The same standard designs will not work well in high-reliability applications where only 1 or 2% failures per year are desired or acceptable (aerospace, military, medical devices, etc.). Remember the difference between the two kinds of applications when designing.

By: James McLinn, CRE, Fellow ASQ, JMREL2@aol.com

Published in Mechanical Design Reliability Handbook: Simplified Approaches and Techniques ISBN 0277-9633 February 2010 (available as free download for ASQ Reliability Division Members)


William Sealy Gosset, alias “Student,” was an immensely talented scientist of diverse interests, but he will be remembered primarily

for his contributions to the development of modern statistics.

Born in Canterbury in 1876, he was educated at Winchester and New College, Oxford, where he studied chemistry and mathematics.

Toward the end of the 19th century, Arthur Guinness, Son & Co. became interested in hiring scientists to analyze data concerned with various aspects of its brewing process.

Gosset was to be one of the first of these scientists, and so it was that in 1899 he moved to Dublin to take up a job as a brewer at St. James’s Gate.

In 1935 he left Dublin to become head brewer at the new Guinness Park Royal brewery in London, but he died soon thereafter at the young age of 61 in 1937.

After initially finding his feet at the brewery in Dublin, Gosset wrote a report for Guinness in 1904 called “The Application of the Law of Error to Work of the Brewery.”

The report emphasized the importance of probability theory in setting an exact value on the results of brewery experiments, many of which were probable but not certain.

Most of the report was the classic theory of errors (Airy and Merriman) being applied to brewery analysis, but it also showed signs of a curious mind at work exploring new statistical horizons.

The report concluded that a mathematician should be consulted about special problems with small samples in the brewery.

Taken from: Philip J. Boland (1984): “A Biographical Glimpse of William Sealy Gosset”, The American Statistician, 38:3, 179-183.

Previously published in the June 2013 Volume 4, Issue 2 ASQ Reliability Division Newsletter


Simulation-based methods, especially Monte Carlo simulation techniques, can solve these problems.

According to the literature, complex systems that may be difficult to solve with analytical methods can be solved readily with the Monte Carlo simulation approach [3,4,7,12].

Reliability methods based on the Monte Carlo simulation approach, because of their ability to model real operating conditions and the stochastic behavior of the system, can reduce the uncertainty in reliability modeling [7].

The use of this approach for calculating and estimating the reliability of dynamic systems is increasing.

**DFT Versus SFT**

Although there are many sound reasons to use dynamic methods in industry, their usage is not yet very common.

Perhaps the main reason lies with the model owners: they do not bother to modernize the existing static methods, such as RBD and SFT, which are used extensively in industry.

This comes from two main causes [5]. First, static approaches are simpler and have been tested extensively.

In addition, dynamic approaches are still too vague to apply to industrial applications.

Also, from a technical point of view, an SFT can be translated into an RBD, but the corresponding conversion for a DFT has yet to be worked out.

**A Simple Example of DFT and SFT **

Fig. 3 presents a simple example of an SFT (on the left) and a DFT for a similar system.

Let us consider a failure rate of 0.01 (1/hr) for all BEs.

In this example, the Top Event (TE) of the SFT occurs once all of the BEs have occurred, that is, once A, B and C have all occurred, regardless of their sequence.

Now let us consider the DFT for this case.

For this DFT as well, the TE occurs once all of the BEs have occurred, but the way in which this configuration is reached matters.

In this case, due to the presence of the PAND gate, the sequence of events is important: for the DFT’s TE to occur, A and B must occur before C.

At a mission time of 1000 hours, the unreliability values for the SFT and the DFT are 9.99E-16 and 3.33E-16, respectively (the numerical analysis was done with the “PTC Windchill” software).
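As a cross-check on the gate semantics, the example can be sketched with a short Monte Carlo simulation. This is a hypothetical illustration, not the PTC Windchill analysis: with the 0.01/hr rate and 1000-hour mission the absolute unreliabilities come out far larger than the figures quoted above (which appear to correspond to much rarer events), but the factor-of-3 ratio between the AND and PAND results holds for any three i.i.d. exponential BEs.

```python
import random

def unreliability(lam=0.01, mission=1000.0, n=200_000, seed=1):
    """Estimate AND-gate (SFT) and PAND-gate (DFT) unreliability by
    sampling exponential failure times for the three basic events."""
    rng = random.Random(seed)
    sft = pand = 0
    for _ in range(n):
        t_a = rng.expovariate(lam)
        t_b = rng.expovariate(lam)
        t_c = rng.expovariate(lam)
        if max(t_a, t_b, t_c) <= mission:   # AND: all three events occurred
            sft += 1
            if max(t_a, t_b) <= t_c:        # PAND: A and B occurred before C
                pand += 1
    return sft / n, pand / n

u_sft, u_pand = unreliability()
print(u_sft, u_pand)  # the PAND estimate is roughly one third of the AND estimate
```

Analytically, for three i.i.d. BEs each with unreliability F(T), the AND gate gives F(T)³ while the PAND gate gives F(T)³/3, since only one of the three equally likely positions for the largest failure time (C failing last) satisfies the ordering constraint. That fixed 3:1 ratio matches the two values reported above.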

**References**

[1] Bechta Dugan, J., Bavuso, Salvatore J., Boyd, M.A., 1992, “Dynamic Fault-Tree Models For Fault-Tolerant Computer Systems,” IEEE Transactions on Reliability, Vol. 41, pp. 363 – 377.

[2] Xing, L., Amari, S. V., 2008, “Handbook of Performability Engineering,” Fault Tree Analysis, London, Springer London, pp. 595-620.

[3] Durga Rao, K., et al., 2009, “Dynamic Fault Tree Analysis Using Monte Carlo Simulation in Probabilistic Safety Assessment,” Reliability Engineering & System Safety, Vol. 94, pp. 872–883.

[4] Berg, G.V., “Monte Carlo Sampling of Dynamic Fault Trees for Reliability Prediction,”

[5] Manno, G., et al., 2014, “Conception of Repairable Dynamic Fault Trees and Resolution by the Use of RAATSS, a Matlab Toolbox Based on the ATS Formalism,” Reliability Engineering & System Safety, Vol. 121, pp. 250–262.

[6] Chiacchio, F., et al., 2011, “Dynamic Fault Trees Resolution: A Conscious Trade-Off between Analytical and Simulative Approaches,” Reliability Engineering & System Safety, Vol. 96, pp. 1515–1526.

[7] Faulin Fajardo, J., et al., 2010, “Simulation Methods for Reliability and Availability of Complex Systems,” British Library Cataloguing in Publication Data, pp. 41-64.

[8] Amari, S., Dill, G., Howald, E., 2003, “A New Approach To Solve Dynamic Fault Trees,” Annual Reliability and Maintainability Symposium, IEEE Publisher., pp. 374 – 379.

[9] Rausand, M., Hoyland, A., 2003, “System Reliability Theory: Models, Statistical Methods, and Applications,” 2nd Edition, New York, USA, Wiley-Interscience

By: Mohammad Pourgol-Mohammad, Ph.D., P.E., CRE, mpourgol@gmail.com

Previously published in the December 2015 Volume 6, Issue 4 ASQ Reliability Division Newsletter


**Dynamic Gates**

DFTs were developed from SFTs by introducing new types of gates, called dynamic gates: the Priority-AND (PAND) gate, the Functional Dependency (FDEP) gate, the Spare gate and the Sequence Enforcing (SEQ) gate.

The use of these new dynamic gates makes it feasible to include time dependencies and cross-dependencies in the calculations.

As mentioned, the DFT introduces four new dynamic gates. The PAND gate (Fig. 2-a) fails if all of its inputs fail in a pre-defined order (left to right in the graphical presentation of the gate). The FDEP gate (Fig. 2-b) forces its secondary (dependent) inputs to fail as soon as its primary input (the trigger) occurs. The SPARE gate (Fig. 2-c) has one primary input and a number of spare inputs, and fails when the primary input and all spares have failed. The SEQ gate (Fig. 2-d) constrains its inputs to occur in a pre-defined order (left to right in the graphical presentation of the gate).
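These ordering rules can be made concrete by operating on failure times. The sketch below is a hypothetical illustration (the helper names are not from the article): each input is represented by its failure time, with `float('inf')` standing for an event that never occurs.

```python
INF = float('inf')  # failure time of an event that never occurs

def static_and(*times):
    # Static AND gate: fires when the last input fails; order is irrelevant.
    return max(times)

def pand(*times):
    # PAND gate: fires with the last input only if the inputs failed
    # in left-to-right order; otherwise the gate never fires.
    in_order = all(a <= b for a, b in zip(times, times[1:]))
    return max(times) if in_order else INF

# First input fails at 10 h, second at 25 h, third at 40 h:
print(static_and(10, 25, 40))  # 40
print(pand(10, 25, 40))        # 40  (order respected)
print(pand(25, 10, 40))        # inf (second input failed before the first)
```

The same failure-time representation extends naturally to FDEP (copy the trigger’s time onto its dependents) and SEQ (discard samples that violate the order), which is essentially how Monte Carlo solvers evaluate DFTs.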

**DFT Solution Methods **

Much research has been conducted on adapting older approaches to solve DFTs.

Existing methods for solving DFTs are generally based on mapping the DFT into a different model [5].

In general, the approaches for solving a DFT can be classified into four types: analytical, simulation-based, diagram-representation and hybrid methods.

There are three classes of quantitative analytical models, which are used to solve a DFT [6]:

**combinatorial approaches**, which are unable to handle dynamic dependencies among the system components [13];

**state-space approaches**, which improve on static models for modeling complex systems, although the state-space model of a system can become too large and may require too much computation time [12]; and

**modular approaches**, which are combinations of the previous approaches and are the ones mostly used for DFT analysis [1,8].

Most of these methods are tailored to a particular case, and it is difficult to extend such a solution method to other cases.

In addition, the complexity of real systems requires that their reliability be modeled with realistic considerations, which makes the use of analytical methods laborious [7].

Simulation-based methods, especially Monte Carlo simulation techniques, can solve these problems.

According to the literature, complex systems that may be difficult to solve with analytical methods can be solved readily with the Monte Carlo simulation approach [3,4,7,12].

Reliability methods based on the Monte Carlo simulation approach, because of their ability to model real operating conditions and the stochastic behavior of the system, can reduce the uncertainty in reliability modeling [7].

By: Mohammad Pourgol-Mohammad, Ph.D., P.E., CRE, mpourgol@gmail.com

Previously published in the September 2015 Volume 6, Issue 3 ASQ Reliability Division Newsletter


**Introduction**

One of the most important goals for the reliability analysis is “Predicting the reliability of the system for a specified mission time” [1].

There are plenty of techniques available for reaching this goal.

In order to predict the reliability of a system, a proper reliability model must be selected.

Fault Tree Analysis (FTA) is one of the most developed and dominant techniques in reliability studies.

FTA techniques were first created in 1962 at Bell Telephone Laboratories [2].

Nowadays, FTA is widely used for quantitative reliability analysis and safety assessment of complex and critical engineering systems [3].

In fact, FTA is a logical tree demonstrating the ways in which a system fails.

The tree starts with an undesired event (the top event), and all conceivable paths by which the top event can occur are shown.

For this logic tree, the leaves are basic events (BEs), which model component failures [4] and are generally linked to the failure of components [5].

The BEs represent the root causes of the undesired event.

Each BE has an appropriate failure distribution (mostly Weibull and exponential distributions), whose suitability is verified by goodness-of-fit techniques [4].

Nowadays, the FTA method is the most used quantitative technique for accident-scenario assessment in industry [6]; however, it is often applied in its static form, which is not suitable for analyzing complex systems.

**Static Fault Tree (SFT) **

The main assumptions for the use of the SFTs are [6,7]:

i) binary BEs;

ii) statistically independent BEs;

iii) instantaneous transition between the working and the failed state;

iv) restoration of components to an as-good-as-new condition by maintenance; if the failure of a component influences events at higher levels, its repair restores those events to the normal operating condition.

The way in which events are combined to produce system failure is represented by means of logical Boolean gates (AND, OR, Voting).

The AND gate (Fig. 1-a) fails when all of its inputs fail, the OR gate (Fig. 1-b) fails if at least one of its inputs fails, and the Voting gate (Fig. 1-c) fails if at least k out of its n inputs fail [4].
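Under assumption ii) above (statistically independent BEs), each static gate’s unreliability follows directly from the unreliabilities of its inputs: the AND gate multiplies them, the OR gate complements the product of the input reliabilities, and the Voting gate is a binomial tail sum. A minimal sketch (the function names are hypothetical):

```python
from math import comb

def u_and(fs):
    # AND gate: the system fails only if every input has failed.
    p = 1.0
    for f in fs:
        p *= f
    return p

def u_or(fs):
    # OR gate: the system fails if at least one input has failed.
    p = 1.0
    for f in fs:
        p *= 1.0 - f
    return 1.0 - p

def u_voting(f, k, n):
    # k-out-of-n Voting gate with n identical, independent inputs
    # of unreliability f: probability that at least k inputs fail.
    return sum(comb(n, i) * f**i * (1 - f) ** (n - i) for i in range(k, n + 1))

# Three inputs, each with unreliability 0.1:
print(round(u_and([0.1] * 3), 6))     # 0.001
print(round(u_or([0.1] * 3), 6))      # 0.271
print(round(u_voting(0.1, 2, 3), 6))  # 0.028
```

Note that these closed forms rely entirely on the independence assumption; as soon as order or functional dependencies matter, no such per-gate formula exists, which is exactly the limitation the next paragraph describes.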

SFTs with AND, OR, and Voting (k of n) gates cannot encompass the dynamic behavior of system failure mechanisms [8].

To overcome this problem, Dynamic Fault Tree (DFT) analysis is suggested in recent research.

**Dynamic Fault Tree**

Most reliability modeling techniques are based on statistical methods.

Typical examples are the reliability block diagram (RBD), FTA and Markov chains [9].

These methods are not able to encompass the dynamic behavior of complex systems.

Dynamic reliability assessment methods were developed on the common basis of static reliability analysis in order to encompass the dynamic behavior of sequence-, spare- or time-dependent actions and failures in complex systems.

The key parameter separating dynamic behavior from static behavior is time.

Dynamic reliability approaches are powerful formalisms and allow a more realistic modeling of complex systems [10].

Among the new formalisms proposed for reliability calculation studies (DFT analysis, Dynamic RBDs, the Boolean-logic Driven Markov Process, etc.), DFT analysis has been the most used and most practical one. Like the SFT, the DFT is a graphical model for reliability studies that combines the ways in which an undesired event (top event) can occur.

However, in a DFT, top event is a time dependent event.

DFT represents a better estimation of the traditional FT by including the time dependency [11].

Like an SFT, the DFT is a tree in which the leaves are BEs; however, in this approach the BEs are more realistic and detailed than in the SFT technique.

The main assumptions for the use of the DFTs are [12]:

i) binary BEs;

ii) Non-repairable components (recently, some efforts have been made to consider repair in DFT [5]).

By: Mohammad Pourgol-Mohammad, Ph.D., P.E., CRE, mpourgol@gmail.com

Previously published in the June 2015 Volume 6, Issue 2 ASQ Reliability Division Newsletter


We congratulate Vladimir Babishin and Sharareh Taghipour on receiving the 2016 RAMS Best Paper Award for the paper “Joint Maintenance and Inspection Optimization of a k-out-of-n System.”


RAMS is currently under way in Orlando.

Please visit the ASQ Reliability Division booth, talk to the reliability experts, and see what the Reliability Division has to offer.


On behalf of the ASQ Reliability Division (ASQ RD) and the QE Best Reliability Paper Award committee, we congratulate Michael Scott Hamada on being selected as the winner of the 2015-2016 QE Best Reliability Paper Award for the paper “Bayesian Analysis of Step-Stress Accelerated Life Tests and Its Use in Planning.”

The award includes a plaque, presented at the annual ASQ RD dinner banquet in Orlando, FL, on Tuesday, January 24, 2017.


In September, Mr. Rabia Muammar spoke about the ASQ Reliability Division at the 2016 Industrial Engineering Forum at Hashemite University.

It was a useful conference, and he received excellent feedback from the participants.

The presentation is included.
