Design of Experiments is a tool for gaining deeper insight into how input factors influence outcomes. Confounding factors can create a false impression of causation, or exaggerate or mask the true effect of the primary variables being studied. Asset managers, reliability engineers, data analysts, and other technologists are well served by understanding this technique. Sorting through randomness, and not taking "best" at face value, are important outcomes.
The Great Fishing Competition
Andy and I had been fishing together for nearly 20 years, including an annual weeklong fishing trip. The debate over who was the better fisherman had surfaced countless times in bars and over beers. We designed a plan based on my experience with Design of Experiments: we would fish twice per day for a week on the annual fishing trip and isolate one or more confounding factors each day.
What is the Purpose of Design of Experiments?
Design of Experiments (DOE) aims to systematically plan, conduct, and analyze experiments to understand the relationships between inputs and outputs. It helps researchers and practitioners optimize processes, products, and systems.
The goals of DOE are to:
Identify Cause-and-Effect Relationships
Optimize Processes and Products
Control Variation
Test Multiple Factors Simultaneously
Minimize Confounding Factors
Improve Decision Making
DOE achieves isolation of factors through:
Randomization
Blocking
Factorial Designs
Replication
Control Groups
By systematically planning and analyzing experiments, DOE can identify and mitigate the impact of confounding factors, leading to more reliable and valid conclusions.
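The techniques above can be sketched in code. Below is a minimal, hypothetical example of building a randomized, replicated full-factorial run plan; the factor names and levels (bait, time of day, location) are illustrative assumptions inspired by the fishing story, not part of a formal study design.

```python
import itertools
import random

# Hypothetical factors and levels for a small fishing experiment
# (illustrative assumptions, not from an actual study).
factors = {
    "bait": ["live", "artificial"],
    "time_of_day": ["morning", "afternoon"],
    "location": ["reef", "open water"],
}

def full_factorial(factors, replicates=2, seed=42):
    """Build a randomized, replicated full-factorial run order."""
    names = list(factors)
    # Factorial design: every combination of factor levels is tested,
    # so multiple factors are examined simultaneously.
    combos = list(itertools.product(*factors.values()))
    # Replication: repeat each combination to estimate natural variation.
    runs = [dict(zip(names, c)) for c in combos for _ in range(replicates)]
    # Randomization: shuffle the run order so lurking variables
    # (weather, tide, fatigue) are spread across all conditions.
    random.Random(seed).shuffle(runs)
    return runs

runs = full_factorial(factors)
print(len(runs))  # 2 x 2 x 2 combinations, 2 replicates each = 16 runs
```

Blocking and control groups would layer on top of this sketch, for example by grouping runs by day so that day-to-day differences do not masquerade as factor effects.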
What Professions Use Design of Experiments the Most?
Engineering, food science, pharmaceuticals, marketing & consumer research, and data analytics are areas where you will commonly find forms of Design of Experiments.
It Is Not Always “Statistical” In Practice
The Great Fishing Competition is a good example of how Design of Experiments does not have to be statistically rigorous to provide useful insights. In practice, isolating different variables through a few structured tests can provide enough insight to show that one thing is not necessarily better than another.
We Are Both Good Fishermen
In the end, there was no clear winner of the Great Fishing Competition. After a tough seven-day battle, a single fish on the last cast of the last day proved to be the difference maker. Andy won the competition on total weight (pounds) of fish, but I caught a few more fish and the biggest one.
Our informal Design of Experiments provided enough information to sort through the randomness and our biases. We are both pretty good fishermen.
With all its unpredictability, the ocean has its own way of deciding who the better fisherman is on any given day.
JD Solomon Inc. provides solutions for program development, asset management, and facilitation at the nexus of facilities, infrastructure, and the environment. Sign up for monthly updates related to our firm.
JD Solomon is the founder of JD Solomon, Inc., the creator of the FINESSE fishbone diagram®, and the co-creator of the SOAP criticality method©. He is the author of Communicating Reliability, Risk & Resiliency to Decision Makers: How to Get Your Boss’s Boss to Understand and Facilitating with FINESSE: A Guide to Successful Business Solutions.