Why PdM Programs Fail: Misuse of Technology
by Alan Friedman
Originally published here: http://reliabilityweb.com/index.php/articles/why_pdm_programs_fail_misuse_of_technology/
A very good mechanic knows that you need the right tool for the job, but a common problem with PdM programs is that sometimes people acquire the tool before fully understanding what problem needs to be fixed. Of course, when you have a hammer all of your problems look like nails, and what follows from this mistaken view is a whole list of reasons why PdM programs fail. The biggest lesson I learned from engineering school is that the solution to a problem is most often found in its correct definition. That is, solutions become obvious when you really understand what the problem is.
We laugh when we read the exchange between the tech support person and the new computer owner who calls to say his wireless Internet is not working. After the tech support person laboriously goes through all of the steps to verify that the hardware and software are all installed and functioning, she asks who the person’s Internet service provider is – and, in the pregnant pause that follows, we suddenly know what the real problem is!
One reason PdM programs fail is that the goals of the program are not well defined or well understood. A company purchases a technology like a vibration analysis system or an infrared camera and then trains its people to use the tool, but not in what to use it for. What the company often fails to do is change processes and procedures in the plant to take advantage of the information this new tool provides. In other words, you buy a screwdriver and you learn how to loosen and tighten screws, but you somehow fail to see how this does or doesn’t relate to the plant’s overall operation.
So, what are the goals of a successful program? Depending on your background, experience or role in your organization, you may have differing ideas about this, but how you view this will have a large impact on how you employ the technology and on the sorts of benefits you will receive. It will also ultimately dictate your view of what is the best tool for the job. To reiterate, I believe that the failure of many PdM programs can be traced back directly to confusion or disagreement on this core question: what is the goal of the program? Why are we purchasing this tool (or service), how will we use it and how will we measure our success? In many cases, the tools are purchased before these questions are answered, if they are ever answered. In other cases, the benefits one hopes to achieve are not in line with how the technology is actually being employed.
Let’s consider two common viewpoints regarding the goals of a vibration analysis program. One typical view is that vibration analysis is one of the best non-destructive technologies available to detect and diagnose mechanical faults and degradation in rotating machinery. The goal of using the technology is to detect and diagnose faults in rotating machinery – period.
Another common view is that because vibration analysis can be used to detect wear in rotating machines, one can utilize this machinery condition information to better plan maintenance actions. This leads to an increase in uptime, quality and plant performance and a decrease in unplanned maintenance, catastrophic failures and accidents. These benefits, loosely defined as Overall Equipment Effectiveness (OEE), lead to higher profitability. In this view, the lofty goal of the vibration analysis program is higher plant profitability.
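Since the article leans on OEE as shorthand for these benefits, it may help to see how the standard figure is computed: OEE is the product of availability, performance and quality. Here is a minimal sketch; the function name and all input figures are hypothetical examples, not plant data.

```python
# Standard OEE calculation: OEE = Availability x Performance x Quality.
# All figures below are invented for illustration.

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as a fraction (0.0 to 1.0)."""
    return availability * performance * quality

# Example: 90% uptime, 95% of ideal run rate, 99% good parts
score = oee(0.90, 0.95, 0.99)
print(f"OEE: {score:.1%}")  # roughly 84.6%
```

Note how the multiplication punishes weakness in any one factor: three seemingly healthy percentages still combine into a mid-80s score.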
This is the crux of many failed programs. Perhaps a manager agrees to purchase a vibration monitoring system or a monitoring service. In his mind, he imagines a 30:1 return on his investment. Maybe he hasn’t thought it completely through, but when he considers the benefits of such a system, his mind leans towards the goal of higher profitability. He has read plenty of articles about condition monitoring and profitability and he is sold on the idea of it. Now, a product has been purchased, some technicians and engineers have been given some training, but they understand the goal differently. They use the equipment to detect problems in their rotating machinery; perhaps they even become quite skilled at it. But beyond this, no organizational changes have been implemented to schedule maintenance based on vibration test results, nor have metrics been introduced to calculate and measure the impact of the technology on uptime and spare parts and, ultimately, its impact on the bottom line.
From the point of view of the engineers and technicians using the system, it appears successful. They are able to troubleshoot machines and diagnose problems. But imagine what happens when a recession hits and upper management goes around looking for programs to cut. How will these technicians make the case that their vibration program should be preserved? Where is the 30:1 ROI? This is one major cause of terminated PdM programs. The original idea was to impact the bottom line, but the technology was actually used in a more limited fashion. The organizational and procedural changes required to utilize machine condition information to meet the goal of higher profitability were never implemented.
Another issue is the tool itself, the actual equipment or service that one purchases. If we consider the two separate goals mentioned above, it will soon be obvious that the equipment we purchase, and how we use the equipment, will vary based on our goal. Again, I will reiterate that most people purchase the equipment first and never fully reconcile the goal.
Here is a common scenario that describes a plant using vibration analysis to troubleshoot machines and determine what is wrong with them. The plant either has a vibration expert on-site or uses an outside consultant. Typically, someone hears a weird noise coming from a machine or they feel that the machine is vibrating too much. Maybe the machine keeps failing unexpectedly or seems to have more problems than a similar unit. Whatever it is, someone in the maintenance department believes there is a problem, and so they call the vibe guy to troubleshoot it.
The on-site expert or consultant will require customizable high tech equipment that allows him to set up a variety of special tests to troubleshoot the machine. The data collection equipment may have a big screen because the analyst will do a lot of his analysis on the plant floor. The equipment may also have many channels, and it will likely be complex and difficult to use. Because there is no historical data, the focus will not be on trending or looking for changes over time; therefore, his equipment will not require any advanced alarming or trending capabilities. It would not be uncommon for the analyst to spend multiple hours, or in some cases multiple days, diagnosing the problem and submitting his report. This would most likely be a costly but, hopefully, infrequent expense.
Summary Scenario #1
Data collector needs:
• Big screen
• Many test types
• Customizable, multi-channel, magnet mounted sensors
• Intelligence in the analyzer
Does not need:
• Intelligent software
Analyst:
• Highly trained
• Highly paid
Not much program management required
Now let’s consider that the goal of the program is to use the technology to better plan maintenance, ultimately leading to a measurable impact on plant profitability. What type of equipment will be best suited to meet this goal?
In this next scenario, the emphasis is placed on trending because the goal is to look for changes in machine condition and then base maintenance decisions on this information. Time is spent up front defining standard test conditions and organizing the program. This scenario calls for a low cost, efficient worker to collect data in exactly the same way, day in and day out, year after year on the same equipment. The data collection equipment would be “idiot proof,” with limited or controlled options for the user, or it may be an online system. Test points on the machine would be screw type sensor pads or installed targets for magnet mounts to ensure repeatability. Initiation of a standard test should take no more than a button press. Because the data collection tasks, including the required equipment, have been defined in such a way as to ensure repeatable, relevant and historical data, there is no reason for the person collecting the data to look at or analyze the data on the plant floor. This eliminates the need for the data collector’s big screen.
The software will have to be very good at looking at trend data in an efficient way because this scenario also calls for testing most of the plant’s machines frequently, not only machines with known problems. Therefore, the analysis software will require the sophistication, not the data collector. There won’t be time (or need) for an analyst to spend multiple hours looking at data from each machine; a couple of minutes will be enough to see if the condition has changed, a couple more will be needed to understand how it’s changed and to update the status and add a recommendation in the software. Additionally, because trends based on good data should provide enough information to meet the goals of this scenario, the data collector will not require the capability to perform advanced customized tests, nor will the technician collecting the data require much training.
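The kind of trend screening described above can be sketched in a few lines: flag a machine only when its latest reading departs significantly from its own history. This is a hypothetical illustration, not the logic of any real analyzer; the function name, the three-sigma threshold and the readings are all invented for the example.

```python
# Hypothetical trend screen: flag a machine when the latest overall
# vibration reading falls well outside its historical baseline.
# Threshold and data are illustrative only.
from statistics import mean, stdev

def condition_changed(history: list[float], latest: float,
                      n_sigmas: float = 3.0) -> bool:
    """Return True if the latest reading departs from the historical trend."""
    baseline = mean(history)
    spread = stdev(history)
    return abs(latest - baseline) > n_sigmas * spread

# Twelve months of repeatable readings (mm/s RMS), then two new readings
history = [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 2.2, 2.3, 2.2, 2.1, 2.2]
print(condition_changed(history, 2.2))  # steady -> False
print(condition_changed(history, 4.8))  # large jump -> True
```

The point of the sketch is that the intelligence lives in software comparing today against yesterday, which only works if the data is collected the same way every time.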
Lastly, since this scenario is concerned with improving maintenance decisions and relating them to the bottom line, the software should be part of a larger CMMS package or Plant Asset Management program. Linking results to business goals such as improvements in uptime, quality and plant performance allows maintenance managers to accurately quantify their impact on profitability.
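One way to make the earlier "30:1 ROI" claim measurable is the kind of metric this scenario calls for: documenting the avoided cost of each catch and comparing it against what the program costs to run. A minimal sketch, with every figure invented for illustration:

```python
# Hypothetical program ROI metric: documented avoided costs vs. the
# cost of running the monitoring program. All numbers are examples.

def program_roi(avoided_failure_costs: list[float],
                program_cost: float) -> float:
    """Ratio of documented avoided costs to the program's annual cost."""
    return sum(avoided_failure_costs) / program_cost

# Avoided cost per catch (downtime + repairs + scrap), in dollars
catches = [120_000.0, 45_000.0, 210_000.0]
roi = program_roi(catches, program_cost=25_000.0)
print(f"ROI: {roi:.0f}:1")  # 15:1 on these example numbers
```

A metric like this only exists if someone records what each early catch avoided, which is exactly the organizational change the failed program in the story never made.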
Summary Scenario #2
Data collector needs:
• Easy to use
• Human error proof
• Simple, standard tests or online system
Data collector doesn’t need:
• Big screen
• Complex customized tests
• Triaxial sensor and stud mount
Software needs:
• Intelligent software
• Good alarming
• Trending and reporting features
• Links to CMMS and asset management software
• Metrics calculated from maintenance decisions up to plant profitability
Data collection technician:
• Low skill
• Low wage
Analyst:
• High skill
• High wage
As you can see, the way we define the goal has a big impact on the type of equipment we will purchase and how this equipment is used. It also points to a common reason why PdM programs fail. People often buy the equipment with the most bells and whistles first, with little to no attention paid to the software and no idea how the monitoring program will be organized. That is to say, they buy the equipment defined in the first scenario with a vague idea that they will receive the rewards described in the second scenario. They focus more on the tool than on program management. When they receive training from the equipment vendor, it is often training in how to use the tool, not what to use the tool for. People who fall into this trap will typically say that they only test “critical” machines, not understanding that they do so because they bought equipment that was not designed to test large numbers of machines efficiently.
Now let’s return to the original question: Why do PdM programs fail? One reason that I hope is clear by now is the possible confusion between condition monitoring tools and their accompanying goals. The most common stumbling blocks are in understanding what the business goals are, employing the right tools, people and processes to meet those goals and establishing metrics to show how effective the program is in reaching the goals. Oftentimes, plants employ highly trained individuals to use complex equipment solely to troubleshoot machines that are already known to be problematic. This may be a valid use of the technology, but it is not PdM and does not bring the same rewards or ROI. If you begin with the stated goal of increasing profitability and work down the ladder from there, equipment purchases and the way these tools are employed will be very different and the profitability goal will be better realized.