Justin Hughes writes on issues relating to team and organisational performance. A former Red Arrows pilot, he is now Managing Director of Mission Excellence, a consultancy focused on improving clients’ execution – their ability to close the gap between what gets talked about and planned, and what gets done. Justin previously spent 12 years as an RAF fighter pilot and is a renowned speaker on performance and risk, having presented alongside Richard Branson and Kofi Annan.

I am writing this on an aircraft waiting for a maintenance problem to be fixed, about to fly from London to Munich.

If you’re reading this, I guess that the engineers did a good job. The reason I know that there has been a technical problem is that we have already tried to depart once and the take-off was aborted. We taxied back to the terminal so that the problem could be fixed – by the captain’s description, an issue with one of the computer systems.

It is perhaps relevant to point out that the airline in question is British Airways, which has a strong safety record. A regular traveller may or may not rate its service amongst the best, but most would agree that it is a professionally run airline with high standards.

The interesting thing about this situation is that one of the passengers has asked to get off and be put on a later flight. He explained to the crew that he was not happy to fly on an aircraft with a known technical problem.

Whilst sitting here, I can’t help wondering about the rationality of that decision. If you’re now thinking that the long winter nights would really fly by hanging out with me, then in my defence, I am presenting on decision-making to an audience of German MBA alumni tomorrow, so being in a hyper-rational frame of mind is perhaps no bad thing.

One can make a case that the quitter’s behaviour (can we call him that? I kind of feel that he’s let the team down here) is perfectly reasonable. This aircraft definitely has something wrong with it. However, as with many analyses and decisions, one doesn’t have to probe too far to expose some hidden assumptions which make the conclusion at best irrelevant, and often actually illogical.

The most obvious assumption in the quitter’s decision is that the next flight will be safer. What would need to be true for that to even have a chance of being the case? (A rough sketch of the comparison follows the list.)

  1. The next aircraft, an enormously complex machine, has no other problem that is any worse than this one.  Bear in mind that this aircraft has been the subject of engineering focus and that a named individual now has to sign off that the problem is fixed.  On what basis does the quitter think that the next aircraft is ‘healthier’?  Because nobody tells him otherwise?
  2. If it does have a problem, it will be one which triggers some sort of alert for the pilots and which they can do something about, rather than one which is impossible to influence or resolve (it’s self-evident that the pilots and engineers believe that the current problem can be fixed).
  3. The pilots of the next aircraft are as competent as these ones. This is actually quite a relevant issue. Aborting take-off in a commercial airliner is a big deal. I am strangely reassured that the current pilot is so risk-averse that he would rather abort the take-off than just carry on – in many ways the path of least resistance once the take-off roll has started.
  4. The fact that this aircraft had a minor technical problem is of any relevance whatsoever to its chances of ever being in an accident, over and above those of any other aircraft.
  5. I could go on.
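
For the hyper-rational, here is that rough sketch of the comparison the quitter is implicitly making. Every number below is an invented assumption, chosen purely for illustration rather than taken from any airline data; the point is only that once the known fault has been inspected, fixed and signed off, both aircraft are dominated by the same background risk of unknown faults.

```python
# A rough, illustrative sketch – all numbers are invented assumptions,
# not airline statistics.
p_unknown_fault = 1e-3        # assumed chance any given aircraft carries an undetected fault
p_serious_if_unknown = 1e-2   # assumed chance an undetected fault becomes a serious problem
p_residual_known = 1e-6       # assumed residual risk from the diagnosed, fixed, signed-off fault

# Both aircraft share the same background risk from faults nobody knows about.
background = p_unknown_fault * p_serious_if_unknown

risk_stay = background + p_residual_known   # original aircraft, fault fixed and signed off
risk_switch = background                    # replacement aircraft, about which we know nothing extra

print(f"Stay:   {risk_stay:.2e}")
print(f"Switch: {risk_switch:.2e}")
print(f"Under these assumptions the known, fixed fault adds "
      f"{p_residual_known / background:.0%} to the background risk.")
```

Under these (made-up) assumptions the difference is marginal either way, and the quitter has no information telling him which aircraft actually carries the larger unknown risk.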

Item 4 is more significant than it might first appear. The quitter is making his decision based on one known data point – the aircraft has a problem. I heard his conversation with the crew and observed no evidence that he had any deeper insight than that. A known problem can of course be mitigated and managed.

Aircraft accidents rarely happen because of known problems, and no two accidents are ever the same. It’s what we don’t know that we should be worried about. In spite of the abuse from the Daily Mail (and many other commentators, in fairness), Donald Rumsfeld may have been on to something with his ‘unknown unknowns’.

Even a more careful analysis of known data can lead to completely wrong conclusions (and therefore decisions).

In WWII, there was a requirement to increase the survivability of bomber aircraft. A study was done to identify the areas of consistent damage on returning aircraft, and a plan was made to fit additional armour in those areas – makes sense. Does it?

A statistician called Abraham Wald didn’t think so. For those aircraft which made it back, the damage in those areas was evidently survivable. Wald’s suggestion was to armour the areas where the survivors had no damage; he inferred that the lost aircraft were being hit in areas different from those seen on the survivors. Wald’s plan was implemented and survival rates increased. Admittedly, it’s not possible to attribute the improvement entirely to the change, but it seemed to work.
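
A toy simulation makes the logic easy to see. Everything in it – the damage zones, the hit rates, the loss probabilities – is invented for illustration only; the point is simply that the damage visible on survivors can be close to the inverse of where the real danger lies.

```python
import random
from collections import Counter

random.seed(2)

ZONES = ["fuselage", "wings", "tail", "engines"]
# Assumed probability that a single hit in each zone brings the aircraft down.
P_LOSS = {"fuselage": 0.05, "wings": 0.05, "tail": 0.05, "engines": 0.6}

hits_on_survivors = Counter()  # what a study of returning aircraft could observe
hits_on_everyone = Counter()   # what actually happened, survivors and losses alike

for _ in range(10_000):
    hits = [random.choice(ZONES) for _ in range(random.randint(1, 5))]
    hits_on_everyone.update(hits)
    if all(random.random() > P_LOSS[zone] for zone in hits):  # aircraft made it back
        hits_on_survivors.update(hits)

print("Damage seen on returning aircraft:", dict(hits_on_survivors))
print("Damage across the whole force:    ", dict(hits_on_everyone))
# Engine hits occur as often as hits anywhere else, but are scarce among
# the survivors – which is exactly the gap Wald proposed armouring.
```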

What’s the lesson? Try to unpick the assumptions in your decision-making. It’s often the case that decisions are made based on limited or irrelevant data which is then incorrectly interpreted (this was not intentionally a description of the managed funds and financial advisory industries). You’re reading this. I made it to Munich. Did I prove my thesis or not? What hidden assumptions did I make?