After consulting with more than fifty companies, I have concluded that distrust of the data quality in a maintenance management system is grounded in reality. At the same time, it can also serve as an excuse to delay seemingly time-consuming improvements or to avoid using the data for analysis. I haven't seen a single maintenance team in the energy sector claim that its data quality is perfect, free of anomalies and errors. Yet whenever I hear such concerns about data quality, I always ask the question: "How good does the data quality need to be?"
The answer I most often hear is that the quality needs to be good enough for the intended application. In practice, that means the data must support analyzing equipment performance effectively, identifying specific underperforming equipment, and determining the failure modes, that is, how and why specific equipment is failing. If the data can explain that much, that's the first victory. The second step is to set up a system that can automatically analyze the "sufficiently-good" data and answer all of the above at the click of a button.
What if my data quality isn’t “sufficiently-good”?
If your data quality isn't the best, first set criteria for what you consider sufficiently-good data. Once you quantify the measures and take feelings and subjectivity out of the analysis, you can determine the areas where quality is sufficiently good and the areas where it needs to improve. Some companies use a scoring structure that accounts for all the data fields required to calculate 15 asset performance metrics measuring maintenance cost, asset reliability and availability. With this scoring structure, it's possible to rate how accurately a company can calculate each metric.
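A scoring structure like this can be sketched in a few lines. The code below is a minimal, illustrative example, assuming work orders are simple records; the field names, metric names and rating thresholds are assumptions for the sake of the sketch, not the schema of any particular maintenance management system.

```python
# Hypothetical mapping: each metric and the work-order fields it requires.
# Real scoring structures cover 15 metrics; three are shown for brevity.
REQUIRED_FIELDS = {
    "average_corrective_cost": ["asset_id", "work_type", "actual_cost"],
    "mean_time_between_failures": ["asset_id", "failure_date"],
    "schedule_compliance": ["planned_date", "completion_date"],
}

def metric_readiness(work_orders):
    """For each metric, return the share of work orders with all required fields populated."""
    scores = {}
    for metric, fields in REQUIRED_FIELDS.items():
        complete = sum(
            1 for wo in work_orders
            if all(wo.get(f) not in (None, "") for f in fields)
        )
        scores[metric] = complete / len(work_orders) if work_orders else 0.0
    return scores

# Illustrative sample data: one well-filled record, one with gaps.
work_orders = [
    {"asset_id": "P-101", "work_type": "CM", "actual_cost": 1200.0,
     "failure_date": "2023-04-02", "planned_date": None, "completion_date": "2023-04-03"},
    {"asset_id": "P-102", "work_type": "PM", "actual_cost": None,
     "failure_date": None, "planned_date": "2023-04-10", "completion_date": "2023-04-10"},
]

for metric, score in metric_readiness(work_orders).items():
    # Illustrative thresholds for translating a completeness score into a rating.
    rating = "high" if score >= 0.9 else "medium" if score >= 0.6 else "low"
    print(f"{metric}: {score:.0%} complete -> {rating} accuracy")
```

A completeness score like this replaces "our data feels bad" with a per-metric number, which is exactly what lets you separate the areas that are already sufficiently-good from those that need work.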
After determining which metrics can be calculated with high accuracy, identify the areas where the company's data is sufficiently-good and can support high-quality analyses. Areas where accuracy is medium or low come with a recommendation for what the company can do to improve. When cost is not recorded on the correct work order and asset, for example, insight into maintenance spend through metrics like average corrective cost is obscured. Companies need a quantified analysis and specific recommendations to develop an action plan, address specific issues and reach the sufficiently-good data state.
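To make the average corrective cost example concrete, here is a minimal sketch of that metric, assuming each closed work order carries an asset id, a work type code ("CM" for corrective maintenance) and an actual cost. The field names and the "GENERAL" blanket work order are illustrative assumptions, not a real CMMS schema.

```python
from collections import defaultdict

def average_corrective_cost(work_orders):
    """Average cost of corrective ("CM") work orders, grouped by asset."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for wo in work_orders:
        if wo["work_type"] == "CM":
            totals[wo["asset_id"]] += wo["actual_cost"]
            counts[wo["asset_id"]] += 1
    return {asset: totals[asset] / counts[asset] for asset in totals}

# When technicians lump cost onto a blanket work order ("GENERAL" below)
# instead of charging the asset they actually repaired, the per-asset
# averages no longer reflect where the money went.
work_orders = [
    {"asset_id": "P-101", "work_type": "CM", "actual_cost": 800.0},
    {"asset_id": "P-101", "work_type": "CM", "actual_cost": 1200.0},
    {"asset_id": "GENERAL", "work_type": "CM", "actual_cost": 5000.0},
]
print(average_corrective_cost(work_orders))
```

The calculation itself is trivial; the point is that it is only as good as the cost allocation behind it, which is why misbooked work orders show up as a low-accuracy rating for this metric.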
Do I need to fix my historical data?
The effort required to reopen and fix historical work orders may not be worth the investment; improvements to the data entry process deliver the most value. In the example above, the company's next step should be to investigate its time and materials system and determine whether the workforce logs time against individual work orders or lumps all of its time onto a single work order. Once the root of the problem is understood, easy-to-use workflows need to be implemented. Maintenance processes should be as simple and accountable as possible, which is achievable today with the latest technological advances and easy access to software systems.
Data quality won’t happen without end-user buy-in
If employees are not held accountable for data, it will not be properly managed. Everyone must provide the necessary information about what is wrong with a piece of equipment and how much it costs to fix. Hosting end-user training sessions that show how entering more information ultimately saves time and reduces workload will help drive buy-in.
For energy organizations with thousands of assets in use, it is also difficult to identify underperforming assets based on human expertise alone. If the data is sufficiently-good, an automated analysis can not only quickly pinpoint underperformance by unit or equipment type, but also generate a prioritized list of these issues. Analysts can then spend 90% of their time analyzing data instead of gathering, cleansing and processing it.
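A prioritized list like this is straightforward to automate once the data is sufficiently-good. The sketch below ranks assets by total corrective maintenance cost as one possible ranking key; failure count or downtime would work the same way. As before, the field names and sample records are illustrative assumptions.

```python
from collections import Counter

def prioritize_underperformers(work_orders, top_n=5):
    """Rank assets by total corrective ("CM") maintenance cost, worst first."""
    cost_by_asset = Counter()
    for wo in work_orders:
        if wo["work_type"] == "CM":
            cost_by_asset[wo["asset_id"]] += wo["actual_cost"]
    return cost_by_asset.most_common(top_n)

# Illustrative sample data mixing corrective (CM) and preventive (PM) work.
work_orders = [
    {"asset_id": "C-201", "work_type": "CM", "actual_cost": 900.0},
    {"asset_id": "P-101", "work_type": "CM", "actual_cost": 4200.0},
    {"asset_id": "P-101", "work_type": "PM", "actual_cost": 300.0},
    {"asset_id": "V-330", "work_type": "CM", "actual_cost": 1500.0},
]
for asset, cost in prioritize_underperformers(work_orders, top_n=3):
    print(asset, cost)
```

Run daily against the full work order history, a ranking like this hands analysts a worklist instead of a data extract, which is where the shift toward spending time on analysis rather than preparation comes from.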
Use high-accuracy metrics and sufficiently-good data to analyze asset performance as far as they will allow, while you keep improving your processes and the quality of the rest of your data until it, too, is sufficiently good.