In the run-up to our half-day seminar on 17th March, Reimagining Benchmarking, Ed Merrow of IPA Global has offered his own take on the subject.


Like so many consultant-speak terms, benchmarking has meant many things to many people over the roughly four decades since the term was coined. In the paragraphs below, I will describe the different forms of benchmarking of capital projects that I have encountered over my career of studying project performance.


I will suggest that the critical deficiency in most benchmarking approaches is devastatingly simple: an insufficient depth of understanding of what drives project excellence. Without deep understanding, benchmarking becomes a trivial pursuit, a narrow exercise, or at worst, a positively damaging activity.

I have seen four distinct types of benchmarking in the capital projects arena, listed here in ascending order of difficulty and value:

1. Visitations
2. Competitive performance benchmarking
3. Practices benchmarking
4. Systematic quality control and improvement


Visitations

The most basic form of benchmarking is what I call “visitations.” It has sometimes been known by the more pejorative term “industrial tourism.” This is a company-to-company/organization-to-organization meeting to discuss some aspect of project performance. The normal procedure is that Company A has heard that Company B is particularly good at some aspect of projects and requests a meeting to discuss it. Company A is thereby obligated to reciprocate at a later date on a topic to be selected by Company B. Visitations are most useful when the subject is reasonably narrow and the two companies involved are quite similar in terms of industrial sector, size, and outlook. Only very strong companies are able to benefit much from visits to companies in different sectors because any learnings will require a good deal of translation, adjustment, and adaptation before being useful. Weak players cannot manage that.

If the meeting agenda is well-structured, and Company B is deeply knowledgeable on the subject, and Company B is forthcoming, such a meeting can be quite useful to Company A. Often, however, even if all the conditions above are met, Company A struggles to make good use of the information provided unless they are already a reasonably strong project system. The usual result is a manifestation of the old saw that the rich get richer and the poor get whatever. Companies with strong project systems are very adept at learning from other players. Strong companies often benchmark with weaker companies and come away with very valuable insights from the weaker company. The benchmarking leader of one strong company once put it to me this way: “Every company that manages to stay in business does something well. Our goal is to understand what that is and implement it. We feel that we can learn from a company, adapt the learning to our situation, and implement it before they have gotten their effort off the ground. That’s why we are happy to talk to everyone.” In short, visitations tend to benefit most those who need them least.

Competitive Performance Benchmarking

The next level in benchmarking seeks to understand one’s competitive position. “Are we top quartile?” is the usual question being posed. Often, project systems that engage in this type of benchmarking are doing so to fend off internal critics, usually in their business ranks, who are constantly grousing about project performance (often while failing to register their own role in project problems).
Benchmarking to prove one’s prowess is fraught with danger. IPA once had the engineering organization of a prominent energy company ask us to back their contention that they executed projects at “about” 82 percent of industry average. They even shared their methodology for that conclusion. We took a sample of 35 of their projects from around the world and discovered that they had measured their deviation from industry average with remarkable precision; the difficulty was that they were 18 percent higher than industry average. It took several years to get through the grieving process.

When a projects organization is ready to hear bad news, competitive performance benchmarking can be very valuable if used to create a sense of urgency in the organization to change. Bad news, especially quantified bad news, can be a very effective springboard for improvement. The bad news when openly shared can enlist the cooperation of non-project organization actors as well. The deficiency in competitive performance benchmarking is that it presupposes that the organization knows what to fix and how to drive an improvement process.

Executing competitive performance benchmarking well is actually quite difficult. It requires extensive data collection and very thorough data normalization. If comparisons are being made on a global basis, rather than within a narrow region, locational differences must be normalized or we risk false comparisons. Alternatively, the differences in performance may be real, but if they are driven by locational differences there may be nothing whatever that can be done to ameliorate the differences. In any case, one cannot execute competitive performance benchmarking without substantial investment. When competitive performance benchmarking is done “on the cheap” it can easily devolve into a “dash for cash” contest amongst those vying for capital.

Practices Benchmarking

The biggest drawback of competitive performance benchmarking is that it does not tell you what to fix. That is where practices benchmarking comes into play. The combination of competitive performance benchmarking and practices benchmarking is the most common form of benchmarking by sophisticated mature project systems.

Over the past two decades, the global projects community has achieved some degree of consensus about the importance of a set of practices to achieve better project results. Foremost amongst these practices is owner-led project definition prior to final investment decision. We call this process “front-end loading” (FEL). However, there are many other important practices as well, such as sensible contracting strategies, strong controls, change management, risk management, information management, and various “value-improving practices” such as constructability and value engineering. All of these practices can be benchmarked, that is, systematically compared with peers and best practice. Deficiencies in some of these practices can be enough to trigger preventable project failures.

There are several problems with practices benchmarking. First, how reliable are the putative relationships between a practice and results? Project management has always been subject to periodic fads. Separating fad from sound practice is not always easy. Second, how should the various practices be ranked in the scheme of things? Not all practices are equally important, and some practices are essential to the effectiveness of others. For example, we find constructability to be an effective practice only when coupled with good front-end loading. Almost all sound practices have points in project development when they should be employed, so timing is an issue. Indeed, a practice such as value engineering, when practiced too late in FEL, actually results in poorer project outcomes. Finally, and perhaps most importantly, practices benchmarking can easily devolve into a “check-the-box” exercise. Practices are followed so as to say that they were followed, not because they made sense in the particular case.

Finally, practices are practiced by projects organizations. But what happens if the key underlying pathology in a company's or agency's performance is the behavior of others? In those cases, practices will not be followed, not because project personnel are incapable of doing their jobs well, but because others preclude them from doing so. Too often, incentives do not align with excellent projects.

Systematic Quality Control and Improvement

A few organizations amongst the 125 or so industrial companies and government agencies that are IPA clients employ benchmarking in its most powerful manifestation: as an integral part of statistical quality control of their capital project systems. In this use, benchmarking provides the essential data with which to detect defects in the process by which an organization is generating its capital projects. It is incredibly powerful and is the only way to produce excellence over a sustained period of time. Intellectually, this form of benchmarking is the heir to W. Edwards Deming. It provides early detection of problems and a way to correct problems systematically.

So why is this use of benchmarking so rare? I suppose there are many possible reasons, but I believe the most common is that use of benchmarking for quality control requires a very deep and holistic understanding of the project development and execution process. In my experience, such understanding is rare. Perhaps even more onerous, that holistic understanding must be shared across an entire organization. Business management, operations management, and corporate management must all grasp the way the project system should function and genuinely buy into supporting the system. Even in highly capital-intensive companies where project effectiveness is essential to corporate success, a shared framework of understanding is often difficult to maintain. At a minimum, corporate management must understand that the protection of the capital project system from the predations of businesses and operations managers looking for short-term advantage is a corporate remit. If corporate shirks that responsibility, even for a short while, the project system degrades rapidly.

The figure below shows a simplified version of a holistic vision of the project development process. Each aspect of the process is subject to quantitative benchmarking. Those benchmarks in turn can be used to understand whether the system is operating as intended, and if not, corrective measures can be taken quickly. For example, if the scope development team is struggling to understand the priorities amongst project objectives (which is easily measured), it suggests a defect in the business/engineering interface. The quality of front-end loading is measured in a straightforward and reliable manner, but what about a “strong (owner) team”? Team quality is more difficult to measure than FEL, but IPA (and I am sure others) have developed objective team quality measures that correlate very strongly with better FEL and with better business outcomes.

[Figure: Ed Merrow Benchmarking]
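The quality-control loop described above can be sketched in miniature. The snippet below is a simple illustration in the spirit of Deming-style statistical process control, not IPA's actual method: each completed project's normalized cost index (actual cost divided by the industry-average prediction, so 1.0 equals industry average) is checked against control limits derived from the system's own historical performance. All project names and figures are hypothetical.

```python
# Minimal sketch of Shewhart-style control limits applied to project
# benchmark data. An out-of-limits result signals a possible defect in
# the project *system*, not ordinary project-to-project variation.
# All data below are invented for illustration.
from statistics import mean, stdev

# Historical cost indices from a period when the system was stable.
historical = [0.97, 1.02, 0.95, 1.04, 0.99, 1.01, 0.98, 1.03]
centre = mean(historical)
sigma = stdev(historical)
upper, lower = centre + 3 * sigma, centre - 3 * sigma  # 3-sigma limits

def check(project: str, cost_index: float) -> str:
    """Classify a new project's benchmark against the control limits."""
    if cost_index > upper:
        return f"{project}: out of control (high) - investigate the system"
    if cost_index < lower:
        return f"{project}: out of control (low) - understand and lock in the gain"
    return f"{project}: in control"

for name, idx in [("Plant A", 1.01), ("Plant B", 1.24)]:
    print(check(name, idx))
```

The point of the sketch is the early-detection logic: a single 24-percent overrun against a stable baseline is treated as a process signal to investigate, rather than written off as one unlucky project.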

Summing Up

A strong benchmarking program always accompanies a strong and effective owner project system. A strong project system encompasses work-process, organizational, and governance elements that must mesh and support each other to achieve the project objectives. Benchmarking measures how well a project system is performing, and in its more powerful forms, it explains why the system is performing the way it is. In its most powerful manifestation, benchmarking provides a real-time guide to effective project system management and lays out the roles of all parts of the organization that touch the effectiveness of capital projects.

Like most forms of excellence, excellent benchmarking requires commitment, investment, and a willingness to forego short-term fixes of long-term problems. As one of my colleagues noted, the unwillingness to invest in the long-term may be the biggest barrier to benchmarking excellence.

Ed Merrow,
Chief Executive, IPA Global

Association Half-day Seminar: Reimagining Benchmarking

Join Ed Merrow on 17th March to reimagine benchmarking through the medium of three great case studies from Exxon Mobil, EDF and Nuclear New Build, and Highways England.

Register for the seminar


Listen to Ed Merrow, interviewed by Andrew Crudgington for the Association’s Podcast