Chapter 5: Human Factors and the Ethics of Explaining Failure

by Roel van Winsen and Sidney W.A. Dekker

Practitioner summary

The idea that human performance is systematically connected to the features of people’s tools and tasks effectively constitutes the birth of human factors. However, accidents are often still seen as the result of ‘human error’, either at the sharp operational end or the blunt organisational end. This chapter aims to point out some practical and ethical implications of ‘human error’ (and its subcategories) as an explanation for why socio-technological systems sometimes fail. In briefly discussing some ‘costs’ of relying on this reductionist approach to explaining and dealing with failure, we argue that as a human factors community we need to engage in (ethical) discussions about, and take responsibility for, the effects of the practices that we promote.

Chapter quotes

“Human factors has embraced ‘situation awareness’ (SA) as a construct to aid our understanding of human decision making in complex dynamic systems and to help with the design of human-machine interfaces. However, now SA – and the loss of it – functions as a causal and normative construct to explain and judge human performance…”

“It is here – in ‘human error’ explanations of accidents – that the tension between systems and individuals creates some paradoxes that need both practical and ethical consideration…”

“…it is now commonly accepted that 70% to 90% of accidents can be attributed to ‘human error’ (Hollnagel, 2006). This seems at odds with the human factors principle that human performance is systematically connected to the system in which it takes place.”

“…by disregarding the context and local rationality, the entire explanatory load of the accident – as well as the moral responsibility – is placed on the individual (human) component, which takes away all the necessity and opportunity to learn from the event. The systemic conditions that gave rise to the error are left in the system for the next person to run into.”

“The manner in which the juridical discourse takes and uses human factors constructs – such as ‘(a loss of) SA’ – to ask (and answer) questions about error, culpability, and criminality is an ethical problem that we as a human factors community need to take responsibility for.”


Practitioner reflections

Reflection by Daniel Hummerdal (co-author of Chapter 26)

Human Factors ideas and theories could be grouped into those that make analysis easier and those that make analysis more difficult. Van Winsen and Dekker’s chapter would belong to the latter. But they do not take this position to be annoying or to play epistemological police. Instead, they offer a case for why we need to embrace difficulties. They point out the benefits of opening up and looking at the many external ramifications of a particular way of understanding the world. As such, their chapter is an invitation and a challenge to look beyond an explanation (‘human error’) that many have come to take for granted.

As sometimes echoed within the Appreciative Inquiry field, ‘Systems grow in the direction of the questions that are asked’. In contrast, the human error category (and its subcategories) provides answers and stopping points for further inquiry. As the authors point out, this may have a soothing effect on those tasked to manage systems. But it does not offer many improvement opportunities.

There is something dreadfully artificial and negative about the human error discussion. Of course people and organisations want to stop bad events from happening. But starting with the problem or outcome frames the problem as a deviation, as if it is a bastard event that is somehow unrelated and different to normal work or normal operations. Moving from a focus on parts to functions puts the error back in the context in which it occurred, where for all analytical purposes it belongs. This perspective can inspire a different set of questions with more analytical yield: What are the functions involved? How are they connected? When and where do they become fragile? In what other ways can the function or sub-functions be enabled? How can people, technology and organisations be supported in achieving reliable outcomes? If the HF/E field were better at doing this, it would not only help stop bad events, but also make workplaces more efficient, more productive and nicer places to work.

We could blame the ones who use HF/E tools to blame operators, but they are merely using the tools that the field has provided. While human factors is sometimes talked about as a unified field, it is clearly made up of various and sometimes conflicting viewpoints. The differences may help us to articulate what we believe in, what the impact of different perspectives is, and what the limitations of various viewpoints are. But there is an overarching question of how the field can evaluate the value of these sometimes opposing ideas. For example, the human error explanation models and taxonomies may originate in the purest of needs and the most scientific of intentions. But, as illustrated by the authors, Human Factors concepts and categories cannot and should not be evaluated in relation to themselves. This chapter (uncomfortably) highlights that what we as HF/E practitioners believe in and pursue always has a range of social, technical, political and cultural ramifications. The concepts and categories of Human Factors should not (only) be evaluated for their scientific accuracy or internal validity, but through the impact they have on the many relations they interact with once released ‘into the wild’.

Reflection by Ron Gantt (co-author of Chapter 6)

One of the lessons I’ve learned, and constantly have to remind myself of, is that what I intended sometimes matters less than what the client hears. Following an assessment of an organization that had a spike in incidents, we delivered a summary report that highlighted how the accidents were not a result of deficient personnel, but rather of a system stretched past its adaptive capacity through a combination of increased production, initiatives to increase efficiency, and an aging infrastructure. All the recommendations we provided were system-level, designed to help the organization cope with the changing operational environment and fix some of the infrastructure issues.

The first action they took after receiving the report – sack the safety manager.

Even though we did not recommend such an action, I couldn’t help but feel that we contributed to this. We explained the findings and recommendations of our assessment in a way that made sense to us, but we failed to keep in mind the lens through which the client would interpret our report. They were looking for people to blame from the start, and perhaps we could have made it clearer that these issues were not a result of broken components but emergent properties of a complex system, as van Winsen and Dekker point out. Perhaps if we had tried to understand the locally rational world of the client, we could have avoided this frustrating and ineffective result.



One Response to Chapter 5: Human Factors and the Ethics of Explaining Failure

  1. Mick Roberts says:

    It is clear from Ron’s example that the core issue with the client is their belief that if something has failed in safety, it must be the fault of a safety manager. This is a trait of an organisation with an inhibited culture and a tough thing to change, from a consultant’s perspective.
    Some good and relevant info here:

    https://safetyrisk.net/safety-manager-an-ultimate-scapegoat/

