My favorite example of decision making under uncertainty is making a purchase and paying for it by check (cheque!). I want some degree of certainty that the check will clear, and I have several strategies at my disposal.
1. Call the bank, tell them I have a particular purchase to make, give the intended cheque number and amount, and get some kind of guarantee that the check will clear (even if my expected paycheck deposit bounces).
2. Call the bank, go to an ATM, or whatever, get the current (available) balance, find out which payments and deposits have already cleared, and satisfy myself that there is enough for the check I am about to write.
3. Call the bank, go to the ATM, or whatever, get the current balance, and ensure it is enough to cover the check I am about to write.
4. Look in the check ledger (or home accounting system) and satisfy myself that the balance is sufficient.
5. Assume that because I have checks left, I must have money.
Well, I was just kidding about #5. These choices all carry some degree of risk, from very little in #1 to significant in #5. But which one do we choose, and why? I suspect it has a great deal to do with how we perceive the risks. So, for example, if I were working for a large company with a great history of making payroll, the likelihood of a paycheck bouncing is very low, so I will act confidently and perhaps take any of choices 2, 3, or 4. Working for an underfunded startup, however, I may well choose 1. There are lots of other variables in here too: how much the bank charges for bouncing a check versus the value of the check, how much reputation (aka credit report) damage I might suffer, and so on. So I make a decision. But except in case 1, the decision is somewhat uncertain.
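That perception-of-risk trade-off can be sketched as simple expected-cost arithmetic. All of the probabilities and dollar amounts below are hypothetical, invented purely for illustration; the point is only that the cheapest strategy shifts as the likelihood and cost of a bounce change:

```python
# A toy expected-cost comparison of the check-writing strategies above.
# Every number here is made up for illustration.

def expected_cost(p_bounce, bounce_fee, reputation_cost, effort_cost):
    """Expected cost = effort spent up front + risk-weighted downside."""
    return effort_cost + p_bounce * (bounce_fee + reputation_cost)

# Strategy 1 (bank guarantee): high effort, zero bounce risk.
# Strategy 5 ("I still have checks"): no effort, full exposure.
strategies = {
    "1: bank guarantee":      dict(p_bounce=0.00, effort_cost=20),
    "3: check the balance":   dict(p_bounce=0.02, effort_cost=2),
    "5: assume money exists": dict(p_bounce=0.20, effort_cost=0),
}

for name, s in strategies.items():
    cost = expected_cost(s["p_bounce"], bounce_fee=35,
                         reputation_cost=100, effort_cost=s["effort_cost"])
    print(f"{name}: expected cost ${cost:.2f}")
```

With these invented numbers the quick balance check wins; raise the bounce probability (the underfunded startup) and the guarantee starts to pay for itself.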
In our daily lives, we constantly act on imperfect data, using our experience to guide us. I am pretty sure that my socks will be in the sock drawer, so I will go there first. If they turn out to be in the laundry, I might have to take a corrective action; the risk is low and the actual impact is low, so the drawer is still the right first stop. Oh, and no, I don’t consciously do a risk analysis of every small thing – that would be very counterproductive. That’s why we have experience! We can act without consciously managing trivial risk.
Now, moving into information systems, things look a little different – but should they? We attempt to limit the risk of doing the wrong things (including being gamed by clever risk exploitation) by using tight centralized control (aka databases of record, single points of truth, etc.). However, these kinds of systems are inherently fragile. They are also more unreliable than we think. Any piece of data we read is valid only at the moment we request it, unless we wrap the request in explicit transaction boundaries. But if we had to serialize access to data every time we wanted to use any of it, everything would gum up and come to a grinding halt. So we always use approximations.
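A few lines of code make the staleness point concrete. The account, the amounts, and the interleaving here are all invented for illustration:

```python
# A toy "bank" showing why a balance read outside a transaction is
# only an approximation: another writer can change it between our
# read and our action.

balance = {"acct": 500}

def read_balance(acct):
    # A snapshot: true only at the instant it is taken.
    return balance[acct]

def withdraw(acct, amount):
    balance[acct] -= amount

# I check the balance and decide $400 is safe to spend...
snapshot = read_balance("acct")
assert snapshot >= 400  # the decision is made against the snapshot

# ...but before my check clears, an automatic payment posts.
withdraw("acct", 300)

# The decision was sound against the snapshot, yet acting on it
# now overdraws the account: 500 - 300 - 400 = -200.
withdraw("acct", 400)
print(balance["acct"])  # -200
```

Serializing every read inside a transaction would prevent this, at exactly the throughput cost described above.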
The “trick” is to realize that you are always retrieving an approximation, that much decision making can be relegated to a system of reference, but that when making changes you must go through the system of record. And when changing the system of record we do need transactions! Transactions give us the “truth” at that moment. The decision to change the “truth,” however, is very often made on approximations. This isn’t bad – it’s reality. Of course we can contrive situations where we must have “truth,” just as in my example 1 above, but those cases are rarer than we imagine.
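One common way a system of record handles changes made from approximations is optimistic concurrency: each record carries a version, and an update is rejected (and can be retried) if the version has moved since the read. This is a minimal sketch with invented names, not any particular product’s API:

```python
# Optimistic concurrency sketch: reads return an approximation plus a
# version; updates succeed only if the caller's version is still current.

class StaleUpdateError(Exception):
    pass

class SystemOfRecord:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        # An approximation: (value, version) as of this instant.
        return self.value, self.version

    def update(self, new_value, expected_version):
        # The transactional part: apply the change only if the
        # caller's view of the record is still current.
        if expected_version != self.version:
            raise StaleUpdateError("record changed since it was read")
        self.value = new_value
        self.version += 1

record = SystemOfRecord(value=100)

# Two readers each take an approximation of the "truth".
a_val, a_ver = record.read()
b_val, b_ver = record.read()

record.update(a_val + 50, a_ver)       # A's update applies; value is 150

try:
    record.update(b_val - 25, b_ver)   # B's read is now stale; rejected
except StaleUpdateError:
    b_val, b_ver = record.read()       # re-read the truth and retry
    record.update(b_val - 25, b_ver)

print(record.value)  # 125
```

Decisions are made on approximations, but the record itself never accepts a change based on a view it knows is out of date.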
It then becomes an architectural and design principle: what source do we use in order to make decisions? How does the system of record react when attempts are made to update it based on out-of-date information? That all plays into the Values and Trust axes relevant to that decision making.