Reliable...but Unsafe?
A series on the role of reliability in patient safety
Introduction
A focus on system- and process-level reliability, typically pursued through simplification, standardisation, and minimising variability (via compliance measures and performance monitoring), is a common organising principle for many patient safety assurance and improvement activities.
Stemming from mid-twentieth-century industrial process engineering and quality management concepts, these reliability-oriented methods undoubtedly have their place in healthcare improvement, but how suitable are they really when the focus is safety (rather than efficiency)?
This issue of The Human Stream marks the start of a short series on the role and limits of these ideas as they are typically incorporated into patient safety work - sometimes detached from the knowledge base they emerged from, and often applied more widely than is warranted.
Over the next few issues we will interrogate several assumptions that underpin reliability thinking in the context of patient safety.
What do we mean by ‘reliability’ anyway?
Colloquially, saying that something is reliable is much the same as saying that it can be depended upon - that it possesses the quality of being ‘dependable’.
In the realm of quality improvement, reliability takes on a more specific meaning: it denotes ‘failure-free’ operations, or a low rate of process-level ‘defects’. This framing of quality (as reliability) emerged from industrial manufacturing settings, as did many of the tools that are often deployed to improve process reliability. This way of thinking, along with its affiliated methods, was introduced to healthcare by the clinical quality movement and quickly absorbed into the patient safety toolkit in the early 2000s, owing to core similarities between how we thought about incidents at the time (as linear chains of events) and the linear process orientation of reliability engineering methods. It was easy to view process-level safety as a product of ‘defect-free’ clinical processes, and to view overall operational safety as an aggregation of many such ‘defect-free’ processes.
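As a simple illustration of this framing (the figures here are hypothetical and serve only to show the arithmetic), process reliability in the quality improvement literature is typically expressed as the proportion of defect-free executions of a process:

$$
\text{reliability} \;=\; \frac{\text{defect-free executions}}{\text{total executions}}, \qquad \text{e.g.} \quad \frac{950}{1000} = 0.95,
$$

which corresponds to a defect rate of $5 \times 10^{-2}$, or roughly one failure in every twenty attempts. On this framing, ‘improving reliability’ is usually described as driving that defect rate down by successive orders of magnitude.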
The above logic (even if rarely stated as baldly as that) underpins a vast amount of patient safety governance, assurance and improvement work today. Look closely and you will see manifestations of it in many clinical procedures, policy documents, and organisational strategic plans. Overtones of such thinking are also visible in the patient safety research literature and several textbooks, many of which situate high-reliability process engineering methods adjacent to the concept of High Reliability Organisations (HROs)1, suggesting that the two intersect.
But is that suggestion warranted? We look to two sources to unpack this - the HRO literature itself and a contemporary source of thinking in systems safety: Nancy Leveson’s 2011 book, Engineering a Safer World.
Differentiating HRO from process engineering methods
Many healthcare papers, a multitude of strategy documents, and more than a few commercial patient safety improvement programs tend to conflate reliability-focused optimisation methods (which come from industrial engineering disciplines) with the idea of High Reliability Organisations (HROs), which came from sociological research on high-performing safety-critical organisations. Practitioners and leaders are not immune to this conflation either.
While understandable to an extent, given the shared terminology, it’s important to recognise what the foundational HRO research actually found. The work of Karlene Roberts, Denise Rousseau, Karl Weick and Kathleen Sutcliffe - researchers central to developing the HRO concept, which originated with a research group at UC Berkeley - examined organisations with exceptionally safe track records despite a high potential for catastrophic failure (air traffic management, power generation and naval aircraft carriers) and surfaced a multitude of contributing factors, none of which were remotely connected to process-level reliability2,3.
In fact, Roberts’ six actions for managers and Weick & Sutcliffe’s five principles of HROs are all about managing risk and the capacity to succeed under uncertainty (closer to present-day ideas about system resilience and managing complexity) rather than process-level reliability.
We might look at HRO theory in more detail in a subsequent issue, but for now let’s stay close to the question of process reliability and system safety.