Blog Archives

ASQ RRD SERIES: Testing Techniques – Making Evidence Based Decisions

Thu, Feb 14, 12:00 PM – 1:00 PM EST

by Professional Analysis and Consulting, Inc.: Timothy M. Hicks, PE and Roch J. Shipley, PhD, PE, FASM



Testing is a very broad topic. Various types of tests are performed throughout the life cycle of a product or component. These tests include:

  • Pre-production testing
  • Audit testing to verify production intent
  • Reliability testing once a product is in production

This presentation will focus on reliability testing, providing an overview of the types of tests that are available, what specific tests are used for, and examples of successfully implemented tests.

When a product does not perform as expected, the process of finding the reason why is commonly referred to as failure analysis. Materials and components do not really “fail”; they may fracture when overloaded, or corrode in an aggressive environment. These failures can be attributed either to design-related issues or to inappropriate application or use. Therefore, the only “failure” is a failure to meet expectations. Materials characterization and testing are a critical element of the failure analysis process. Categories of the materials characterization testing techniques that will be discussed include:

  • Plastics / Polymer Analysis
  • Metals Analysis
  • Coatings / Surface Analysis
  • Corrosion Analysis

Timothy M. Hicks, PE
Mechanical Engineer
BS – Michigan Technological University
MS – Rensselaer Polytechnic Institute
Industry – 35 years’ experience
27 years in design, testing, and manufacturing
8 years in engineering consulting

Roch J. Shipley, PhD, PE, FASM
Materials Engineer
BS and PhD – Illinois Institute of Technology
Industry – 38 years’ experience
10 years in manufacturing and corporate research
28 years in engineering consulting

Posted in General

ASQ RRD Series: An Introduction to Uncertainty Quantification for Reliability & Risk Assessments

Thu, Jan 10, 2019 12:00 PM – 1:00 PM EST

by Mark Andrews, Ph.D. 


Numerical simulation has become the approach of choice for performing analytics in many industrial sectors. With the phenomenal growth in computational power and significant advancements made in Computer-Aided Engineering (CAE) software, computer experiments on complex systems are now capable of reducing the dependence on, and costs of, conducting physical experiments. While the prevalence of simulation tools offers unique potential to generate expedient analytics, simulation modeling of complex systems requires Uncertainty Quantification, an advanced analytical methodology capable of generating actionable results.

Uncertainty Quantification is a multi-disciplinary field that brings together statistics, applied mathematics, and computer science to quantify uncertainties in numerical simulations. Like Six Sigma, Uncertainty Quantification makes use of statistical models to find feasible solutions to problems involving variability. However, the two methodologies seek to meet different objectives.

This webinar will begin by introducing the topic of Uncertainty Quantification along with the basic methods and processes used to quantify uncertainties. Illustrative examples will be used to highlight how UQ can enhance Six Sigma.
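To make the basic idea concrete, here is a minimal sketch of Monte Carlo uncertainty propagation, the simplest UQ workflow: sample uncertain inputs, run them through a model, and summarize the output distribution. The model, input distributions, and all numerical values are illustrative assumptions, not SmartUQ's methodology; a real CAE model would replace the one-line function.

```python
import random
import statistics

# Hypothetical "simulation": stress in a bar, sigma = F / A (MPa = N/mm^2).
# In practice this stands in for an expensive CAE run.
def model(force_n: float, area_mm2: float) -> float:
    return force_n / area_mm2

random.seed(42)

# Propagate assumed input uncertainty by simple Monte Carlo sampling.
samples = []
for _ in range(10_000):
    force = random.gauss(1000.0, 50.0)  # applied load, N (illustrative)
    area = random.gauss(20.0, 0.5)      # cross-section, mm^2 (illustrative)
    samples.append(model(force, area))

samples.sort()
mean = statistics.mean(samples)
p95 = samples[int(0.95 * len(samples))]  # 95th-percentile stress
print(f"mean stress = {mean:.1f} MPa, 95th percentile = {p95:.1f} MPa")
```

The output distribution, rather than a single deterministic answer, is what distinguishes a UQ study from an ordinary simulation run.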

Mark Andrews, Ph.D. 
Technology Steward

Dr. Mark Andrews, UQ Technology Steward, is responsible for advising SmartUQ on the industry’s UQ needs and challenges and is the principal investigator for SmartUQ’s project with the Probabilistic Analysis Consortium for Engines (PACE), developed and managed by the Ohio Aerospace Institute (OAI). He recently received the award for best training at the 2018 Conference on Advancing Analysis & Simulation in Engineering (CAASE). Before SmartUQ, Dr. Andrews spent 15 years at Caterpillar, where he worked as a Senior Research Engineer, an Engineering Specialist in Corporate Reliability, and a Senior Engineering Specialist in Virtual Product Development. He holds a Ph.D. in Mechanical Engineering from New Mexico State University.

Posted in General

November 2017 TCC Transformation Resolution

The latest “TCC Transformation Resolution” can be found here:

TCC Transformation Resolution

Posted in General

New ASQ Member unit Operating Agreement

The new “Member Unit Operating Agreement” can be found here:

Posted in General

Best Reliability Paper Award by ASQ RRD 2018 – Dr. Stevens and Dr. Anderson-Cook – “Quantifying similarity in reliability surfaces using the probability of agreement”

We are proud to announce that Dr. Stevens and Dr. Anderson-Cook’s paper, “Quantifying similarity in reliability surfaces using the probability of agreement”, published in Quality Engineering, 2017, vol. 29, no. 3, was selected for the Best Reliability Paper Award by the ASQ RRD paper award committee.

They will receive the award plaque, along with the monetary gift that accompanies this award, at the RRD dinner banquet at the upcoming RAMS conference in January.

We thank them for their excellent contributions to the reliability engineering community, and we look forward to seeing more of their work in Quality Engineering in the future.


Posted in General

Free Webinar: ASQ RRD Series: Big Data Analytics – Telematics Data Analysis by Dennis Craggs


Thu, Nov 8, 2018 12:00 PM – 1:00 PM EST


Engineers conduct tests to verify that products meet engineering standards. These standards were developed from customer surveys, by duplicating past standards, or by relying on expert opinion. Modern technology provides a new tool to validate standards: measuring product usage by the customer. An automotive example will be discussed, but the methods have much broader application. Automobile companies installed telematics modules on fleet vehicles, with the consent of the owners, to collect and store usage and environmental data. The volume of data was enormous. Different analytic methods were required for different data types, but there were only a few data types to consider. The raw data needed to be standardized for different vehicle miles or times. Counting data, like trips per day, were then analyzed with simple probability distributions. State data, like switch settings, needed to be analyzed for transitions and time spent in each state. Continuous measurements are more difficult: engineers frequently use bar histograms, but these quickly become unreadable when two or more data sets are combined in the same graphic. Methods were developed that allowed many histograms to be combined, so that fleet usage patterns could be analyzed and the 5th, 50th, and 95th customer percentiles developed. The standards can then be validated against the customer data.
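The percentile summary described in the abstract can be sketched in a few lines. The data below are simulated stand-ins (an assumed lognormal spread of annual mileage), not the actual fleet data; real values would come from the telematics modules.

```python
import random
import statistics

# Simulated stand-in for telematics usage data: annual mileage per vehicle.
random.seed(1)
annual_miles = [random.lognormvariate(9.4, 0.4) for _ in range(5000)]

# The 5th, 50th, and 95th customer percentiles -- the summary used to
# validate engineering standards against measured customer usage.
q = statistics.quantiles(annual_miles, n=100)  # 99 cut points
p05, p50, p95 = q[4], q[49], q[94]
print(f"5th: {p05:.0f} mi, 50th: {p50:.0f} mi, 95th: {p95:.0f} mi")
```

A test standard set at or above the 95th customer percentile would then cover all but the most severe users, which is the kind of validation argument the talk describes.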

Dennis Craggs attended the University of Detroit and Wayne State University, earning Master’s degrees in Engineering Mechanics and Operations Research; he is a licensed Professional Engineer and a Quality and Reliability Engineer. In aerospace, he worked at NASA and Teledyne CAE; in automotive, at Ford and Chrysler. He applied the disciplines of fluid mechanics, heat transfer, mechanical and electrical design, testing, and development. He learned several programming languages and developed significant software. As a statistician, he assisted managers and engineers in the development, review, and approval of validation standards, analyzed warranty and test data, and developed methods to statistically analyze vehicle lifetime usage. He was a member of a joint USA SAE and German ZVEI task force that developed SAE-J1879, “Handbook for Robustness Validation of Semiconductor Devices in Automotive Applications”, and helped to develop the automotive lead-free electronics validation standard USCAR40, “Lead Free Solder Validation Test Plan”. He taught graduate-level statistics and reliability at Wayne State University and, as an independent trainer, presented “An Introduction to Minitab” seminars to corporate clients. Dennis has presented at Society of Automotive Engineers, American Society for Quality, Automotive Electronics Council, and ISSAT conferences, and has published SAE and ISSAT technical papers.

Posted in General

ASQ Houston Regional Quality Conference 2018 – Friday, November 2nd, 2018


ASQ Houston Regional Quality Conference 2018

✓ Our main event of the year. Over 1000 attendees in our previous 5 conferences!
✓ Earn CEUs or RUs attending the Conference!
✓ Network with top-notch Quality Professionals
✓ Speakers with ASQ’s World Conference experience


Kimberly Watson-Hemphill, President, Firefly Consulting
Mark Galley, President, Think Reliability
Eric Helgeson, Quality Director, Pure Safety Group/SNC-Lavalin/Chrysler
Rajdeep Golecha, CEO, Zdaly


Friday, November 2nd, 2018
United Way of Greater Houston
50 Waugh Drive

More info:

2018 Conference Booklet

Posted in General


Answers to CRE questions in Sept 2018 Newsletter

Which of the following statements is NOT true?
o Designing for reliability requires risk assessment of the product
o Project schedule should not include tasks for risk assessment
o Cost estimates for reliability testing should be included in the project budget
o Designing for reliability should include user hazard assessment

In which of the following product life cycle phases should product disposal issues be addressed?
o Design/Development
o Concept/Planning
o Operation/Repair
o Wearout/Disposal

During which phase of the product life cycle should testing begin to validate the design?
o Design/Development
o Production/Manufacturing
o Concept/Planning
o Operation/Repair

Which of the following is a metric to describe robust functionality?
o Mean Time To Failure
o Signal to Noise Ratio
o Cost to Value Ratio
o Mean Time To Repair

MTBF and MTTF are two reliability terms that are:
o Synonymous with each other
o Based on the use of the same lifetime distribution
o Are applied differently. MTBF is applicable for maintainable / repairable systems. MTTF is only applicable for non‐maintainable / nonrepairable systems
o Are calculated using the second moments of the lifetime distribution.

Which of the following expressions is best used to describe the pattern of failures over time for repairable systems?
o Rate of Occurrence Of Failures
o Hazard rate
o Mean Time To Failure
o Mean Time To Repair

What is the most appropriate definition of Mean Time Between Failures (MTBF)?
o The longest time period that a piece of repairable equipment failed in the record.
o The average time period that a piece of repairable equipment takes to be repaired.
o The average time period that a piece of repairable equipment is operational in the past 6 months.
o The average time period that a piece of repairable equipment is operational.

Which element(s) make a complete reliability goal statement?
o Function, probability, duration and environment.
o Only need MTBF
o Probability, useful life and wear out mechanism
o Function, probability, shipping and environment

The equation, R(t) = e^(−λt), is applicable during a product’s:
o infant mortality stage.
o useful life stage.
o wear‐out stage.
o break‐in, useful life, and wear‐out stages.

Which of the following best describes availability?
o The ability of a product, when used under given conditions, to perform satisfactorily when called upon
o The probability that a product will perform the intended function, in the manner specified, for a particular period of time
o The probability that a failed system can be made operable in a specified interval of downtime
o The degree to which a product is operable and capable of performing its required function at any randomly chosen  time during its specified operating time, provided that the product is available at the start of that period
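As a quick numerical check of the exponential model in the questions above, here is a small sketch with assumed (illustrative) values for the failure rate and operating time; for the exponential distribution, MTTF is simply 1/λ.

```python
import math

# Exponential reliability R(t) = exp(-lambda * t) applies during the
# useful-life (constant failure rate) stage of the bathtub curve.
failure_rate = 1e-4  # assumed failures per hour (illustrative)
t = 5000.0           # operating hours (illustrative)

reliability = math.exp(-failure_rate * t)
mttf = 1.0 / failure_rate  # for the exponential model, MTTF = 1/lambda
print(f"R({t:.0f} h) = {reliability:.3f}, MTTF = {mttf:.0f} h")
```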


Posted in General

ASQ RRD courses on RAMS 2019

The Role of Reliability in Risk-Based Decision-Making

This 8-hour course answers the questions:

  1. What is Risk & Risk Management?
  2. What is the connection between Risk Management and Reliability?
  3. What type of data do I need for making a Decision under Risk?
  4. What’s the difference between Quantitative and Qualitative tools in deciding Risk?
  5. What should I expect from Managers & Customers when I present my analysis?

ISO 9001:2015 is a risk-based standard. In addition to the Quality Management System that must recognize risks and opportunities in all aspects of a business (Sections 4.1 & 4.2 of ISO 9001:2015), Section 6 states that the organization shall “determine risks and opportunities that need to be addressed.” Thus we have arrived at the need for Risk-Based thinking and Risk Management.

But since there are typically too many risks, and not enough money to address all of them, how and what do we do? First you have to set a Risk “goal” (in terms of Reliability and possibly Safety, depending on the product). Allocate this top-level Risk goal among the sub-systems (and lower, if that makes sense). This will set the Design Reliability (and Safety) goals.

Of the Risk Management processes, this presentation will concentrate on the areas of

  • Qualitative risk analysis
  • Quantitative risk analysis

Examples using various Reliability & Statistical tools (FMEA, Weibull Analysis, Monte-Carlo Simulation, and others) will illustrate “calculating” risk and how to prioritize risks against a “standard” — even when your data are sparse (or possibly non-existent).

In addition, you’ll see some methods to help in telling the Boss bad news: “We can’t do this project as quickly as you want” (a budget and schedule risk to the company).

While a knowledge of some elementary statistics is assumed, the presentation will briefly review Reliability and Statistical concepts before they are used. EXCEL, MINITAB, and Crystal Ball will also be used to illustrate parts of, or all of, some examples.
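The kind of Weibull/Monte-Carlo risk calculation the course describes can be sketched briefly. The shape and scale parameters, design life, and reliability goal below are all assumed illustrative values (the course itself uses EXCEL, MINITAB, and Crystal Ball rather than code).

```python
import math
import random

random.seed(7)
beta, eta = 2.0, 1500.0  # assumed Weibull shape and scale (hours)
design_life = 1000.0     # hours (illustrative)
goal = 0.60              # required reliability at design life (illustrative)

# Weibull reliability: R(t) = exp(-(t/eta)^beta), evaluated analytically.
r_analytic = math.exp(-(design_life / eta) ** beta)

# Monte Carlo estimate of the same quantity from simulated failure times,
# drawn via the Weibull inverse CDF: t = eta * (-ln U)^(1/beta).
n = 100_000
survivors = sum(
    1 for _ in range(n)
    if eta * (-math.log(random.random())) ** (1.0 / beta) > design_life
)
r_mc = survivors / n
print(f"R({design_life:.0f} h): analytic {r_analytic:.3f}, Monte Carlo {r_mc:.3f}")
print("meets goal" if r_mc >= goal else "risk: goal not met")
```

Comparing the simulated reliability against the allocated goal is one simple way to “calculate” risk against a standard, as the course abstract puts it.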

Communicating Reliability, Risk, and Resiliency to Decision Makers

Communication of concepts related to reliability, risk, and resiliency is frequently cited by technical professionals as one of the most challenging and overlooked aspects of their work.  Texts and guidance documents frequently reference the importance of better communication and education; however, few practical examples and little practical guidance are provided. Getting the boss’s boss to understand remains one of the most elusive aspects of serving as a reliability professional.

This workshop will fill many of the gaps between the technical analysis and the decision maker.  The workshop will be provided from the perspective of an individual who serves on decision-making bodies and also provides reliability, risk, and resiliency analysis to decision makers.

A comprehensive list of references will be provided, covering tools, techniques, and approaches.  These will include communications best practices from a wide range of international sources, as well as the book on the subject written by the facilitator.  However, the workshop argues that the presentation of reliability and risk information differs from the manipulative practices frequently championed by marketing and political professionals.  Reliability and risk professionals must be able to truthfully, ethically, and effectively communicate what the data and analysis conclude, while at the same time avoiding being demoted or terminated.  The role of reliability and risk professionals as trusted advisors to executive management that does not understand, or does not care to understand, is indeed a tricky balance.

The learning objectives include:

  1. Practical approaches for communicating reliability, risk, and resiliency to subordinates, peers, senior management, and decision makers
  2. A basic understanding of the definitions of reliability, risk, resiliency, decision making, and communications
  3. The major types of decisions and how communication approaches change with decision type (strategic, tactical, and rare events)
  4. Personality profiles and their impact on communication and decision making
  5. The role of ethics in communication of technical information
  6. Options and best practices for visual communication of reliability and risk information
  7. The impact of innumeracy, biases, and the general population’s ability to understand probability
  8. Tips and best practices for building rapport and verbal communication
  9. Techniques for better communicating reliability, risk, and resiliency information in emails, conference calls, and Q&A sessions
  10. Tips and best practices for communicating to groups that advise decision makers

The interactive workshop will utilize case examples from the facilitator.  Participants will also be asked to bring a real-world example to which to apply the workshop objectives when they return home.  An audience response system will be used to help solicit input and participation.  A role play will be utilized at the end of the workshop to demonstrate each participant’s improved ability to better communicate reliability, risk, and resiliency to decision makers. 

Posted in General

Essential Competencies for Improving Software Development – Webinar Slides

On Thu, Jul 12, 2018, Linda Westfall gave a webinar on Essential Competencies for Improving Software Development.

Essential Competencies for Improving Software Development

Recorded webinar will be uploaded later.

Posted in General