In his brilliant book ‘The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty’ Dr Sam Savage shows us why we need to connect the ‘seat of the intellect to the seat of the pants’.
Savage’s message is that making decisions under uncertainty requires probabilistic methods rather than averages in everything from movie portfolios to the height of flood water*. A simpler message is that we can fly intuitively by the seat of our pants for some types of decisions but that failing to engage the brain for others could mean our pants end up around our ankles.
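To make the flaw of averages concrete, here’s a minimal Python sketch (my own illustration, not an example from Savage’s book): two parallel project tasks each average six days, so a plan built on the averages says six days, yet the project only finishes when the slower task does.

```python
import random

random.seed(42)

# Two parallel project tasks, each uniformly distributed between
# 3 and 9 days (average: 6 days). The project finishes when BOTH
# tasks are done, i.e. at the maximum of the two durations.
N = 100_000
finishes = [max(random.uniform(3, 9), random.uniform(3, 9))
            for _ in range(N)]

plan_on_averages = max(6, 6)           # plugging in the averages: 6 days
simulated_average = sum(finishes) / N  # what actually happens on average

print(f"Plan based on average durations: {plan_on_averages:.1f} days")
print(f"Monte Carlo average finish:      {simulated_average:.1f} days")
```

The simulated average finish comes out around seven days, a day later than the ‘average’ plan promises: the average of the maximum is not the maximum of the averages.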
Does your IT organisation make decisions by the seat of the pants or the seat of the intellect? Does the word ‘pants’ make you smile? Be honest now.
I suspect that decisions and behaviours in IT organisations are distorted by intuition and judgement more than we’d like to admit. We all fall prey to these cognitive biases, however rational we like to think we are. Don’t worry, it’s human. So is our denial. However, when the effects of cognitive bias aren’t understood, recognised and mitigated, the stage is set for some questionable decision-making. There are great examples of how this plot unfolds in ‘Why Plans Fail – Cognitive Bias and Decision Making’ by Jim Benson. IT is a regular cast member.
If you’re still not convinced, read ‘Thinking, Fast and Slow’ by Daniel Kahneman and his earlier work with Amos Tversky. Kahneman won a Nobel Prize for his work in behavioural economics, which describes how our intuition, the ‘gut feel’, makes irrational decisions in the presence of uncertainty. Combine the ‘halo effect’ with the ‘illusion of knowledge’, throw in some ‘cognitive dissonance’, and we can also see why IT decision-makers are held up as (and may even believe themselves to be) immune to these effects.
How do we know when decisions have been based on good evidence, bad evidence or faulty judgement, and how do we know if they were successful? If IT is investing in a vendor’s technology, letting an outsourcing contract, reorganising, shedding headcount or making service improvements, then it’s making bets on some outcome. Perhaps the outcome itself is a vague articulation of some flawed reasoning (but that’s another story). Our natural over-confidence and biases make success seem inevitable in our heads when there’s actually some probability of a negative outcome. If no-one is asking “Where’s the evidence?” then alarm bells should be ringing.
Making IT decisions using selective evidence or using biased sources such as vendors and suppliers can be a problem too, whether unintentional or deliberate. Decisions can also be distorted by a culture of fear and blame, where evidence is suppressed or sanitised to duck punishment for ‘bad’ decisions whilst the organisation loses the ability to experiment, learn and celebrate ‘good’ ones.
There are also dangers lurking in the things which might be considered self-evident in some IT departments, as Pfeffer & Sutton describe in their book ‘Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-based Management’. Doing things which are believed to have worked elsewhere (but proof is scarce) or following deeply held (but untested) ideologies are just two examples. Does that remind you of a new technology hype, a ‘best practice’ framework, a new boss keen to make an impact? If unchallenged or unsupported by evidence, these decisions could either seep into the collective subconscious or breed resentment and even sabotage.
Evidence-based Management might sound like a new-fangled fad but it isn’t. Not where decisions really matter, in health for example. Take Stacey Barr’s recent post about why we should trust scientific evidence in spite of personal bias or even the media’s pursuit of a story. Ben Goldacre has done a lot to expose this kind of ‘Bad Science’ in the media, and yet the IT media and their marketing paymasters have been jumping, unsceptically, from one bandwagon to another for decades.
Making good IT decisions needs evidence and evidence needs measurement. This is measurement in the broadest sense:
‘A quantitatively expressed reduction of uncertainty based on one or more observations’
according to Doug Hubbard’s ‘How to Measure Anything: Finding the Value of Intangibles in Business’. Hubbard, like Savage, is an exponent of probabilistic methods and creator of Applied Information Economics for measuring the value of information itself and reducing uncertainty in the variables most likely to have an economic impact.
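Hubbard’s book includes a neat demonstration of how cheaply uncertainty can be reduced, his ‘Rule of Five’: take just five random samples from any population and there is a 93.75% chance the population median lies between the smallest and largest of them. A short sketch (the sample figures are invented for illustration):

```python
# Hubbard's "Rule of Five": the population median lies between the
# smallest and largest of 5 random samples unless all 5 fall on the
# same side of the median, which happens with probability 2 * 0.5**5.
p_miss = 2 * 0.5 ** 5
print(f"Chance the median is inside the sample range: {1 - p_miss:.2%}")

# Hypothetical example: five randomly sampled ticket-resolution
# times (hours) already bound the median with 93.75% confidence.
samples = [4.2, 7.5, 3.1, 9.8, 5.6]
print(f"93.75% interval for the median resolution time: "
      f"{min(samples)} to {max(samples)} hours")
```

Five observations won’t give you precision, but by Hubbard’s definition they are already a measurement: a quantified reduction of uncertainty.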
Maybe IT has something to learn from other parts of the business about how to use Monte Carlo models and other statistical tools (from Lean and Six Sigma, for example) to focus on the best improvement returns rather than relying on perception and intuition.
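As a sketch of what that might look like (all figures invented for illustration), a Monte Carlo comparison of two candidate improvements shows how point estimates hide risk: the option with the higher average return can also carry a real probability of destroying value.

```python
import random

random.seed(1)

# Two hypothetical service improvements with uncertain annual returns:
#   A: modest but reliable -> normal(mean=50,000, sd=10,000)
#   B: higher average, but much riskier -> normal(mean=60,000, sd=40,000)
N = 100_000
a = [random.gauss(50_000, 10_000) for _ in range(N)]
b = [random.gauss(60_000, 40_000) for _ in range(N)]

mean_a, mean_b = sum(a) / N, sum(b) / N
loss_a = sum(x < 0 for x in a) / N  # probability of a negative return
loss_b = sum(x < 0 for x in b) / N

print(f"A: mean return {mean_a:10,.0f}, P(loss) {loss_a:.1%}")
print(f"B: mean return {mean_b:10,.0f}, P(loss) {loss_b:.1%}")
```

On averages alone B wins, but the simulation reveals it loses money in several percent of scenarios while A almost never does; which bet is ‘best’ now depends on the organisation’s appetite for risk, a question intuition rarely even asks.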
As shareholders, taxpayers and employers, shouldn’t we expect IT leaders to make decisions using more robust methods and then prove that they’ve created, not destroyed, value? I’d like to think so.
*Savage’s dad Leonard Jimmie Savage was a giant of Bayesian decision theory in the 1950s so you might expect him to know a thing or two about probability.