Revisiting Hansson on Great Uncertainty

When making decisions in the real world, we tend to go in blind. Either there is not enough information about the options to make a reasonable assessment of the expectations, or there is so much information that we cannot pick out the relevant bits. Most of the time the situation seems to be a combination of the two.

Making a choice in such a messy situation can hardly be considered rational in the sense of any classical decision theory. One could only say that the rational choice is not to choose at all, that is, to refrain from any action regarding the problem at hand. But such rationality is rarely the privilege of anyone but the skeptics, the hermits, and the insane. There are, however, methods and principles to apply to this irrationality (if not madness) that help us make as sustainable a choice as we can in the tricky world we live in.

In his paper “Decision Making under Great Uncertainty” (Philosophy of the Social Sciences, vol. 26, no. 3, pp. 369-386, September 1996), S. O. Hansson gives us a typology for situations where the uncertainty faced by decision makers falls outside the familiar schoolbook examples. He lists four major types:

  1. Lack of knowledge of the available options (“uncertainty of demarcation”): either a) there is no known list of all the options (“unfinished list of options”) or b) there is a lack of understanding of the problem that is meant to be solved by the decision (“indeterminate decision horizon”)
  2. Consequences are opaque for some or all options (“uncertainty of consequences”)
  3. Lack of trust in the available information and/or experts (“uncertainty of reliance”): either a) the recognised experts disagree with each other or b) there is no agreement on who counts as an expert on the issue
  4. Lack of knowledge of the values that should weigh on the decision (“uncertainty of values”)

Hansson claims that these “untypical” cases are more common than people are willing to admit. He notes that these kinds of uncertainty have failed to gain attention because they resist mathematical modelling. As I said above, there are principles for decision making when we face cases that fall outside the classical decision-making situation. For the rest of this paper, I will present and assess what Hansson proposed almost 20 years ago.

Hansson suggests three available strategies for case 1.a), where it is known that the list of options is unfinished (and thus the decision maker is uncertain of the final relative expectations of the available options, should the list later be extended). First, one can act upon the given options as if the list were a finished one (“closure”). Secondly, one may make a partial decision to take one of the options while committing to search for new options to turn to later on (“semi-closure”), given that the option chosen for the time being can be reversed later. Thirdly, one may postpone the selection until either more options are found or postponing is no longer an option (“active postponement”).

These second-order strategic choices can have their own calculus of expectations. For instance, closure should not be chosen if all the available alternatives are defective or have known, gravely undesirable consequences. Active postponement should be avoided if the problem has a tendency to worsen seriously over time. Semi-closure or active postponement should be avoided if the search for alternatives is either too costly or unlikely to succeed.
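These rules of thumb are simple enough to spell out mechanically. Below is a minimal sketch in Python, assuming we encode the three warning conditions as boolean flags; the flag names and the preference ordering, which favours keeping options open, are my own illustration rather than anything Hansson formalises.

```python
# A toy sketch of the second-order rules of thumb for case 1.a).
# The flags and the preference order are my own encoding, not Hansson's.

def choose_strategy(all_options_defective: bool,
                    problem_worsens_over_time: bool,
                    search_costly_or_futile: bool) -> str:
    """Pick among closure, semi-closure and active postponement."""
    closure_ok = not all_options_defective           # closure is blocked by defective options
    postponement_ok = not problem_worsens_over_time  # waiting is blocked by a worsening problem
    search_ok = not search_costly_or_futile          # semi-closure/postponement need a viable search

    if search_ok and postponement_ok:
        return "active postponement"  # keeps the most options open
    if search_ok:
        return "semi-closure"         # act provisionally, keep searching
    if closure_ok:
        return "closure"              # no viable search: decide on what we have
    return "semi-closure"             # least bad when closure is also blocked

# Example: a worsening problem with a viable search favours semi-closure.
# choose_strategy(False, True, False) -> "semi-closure"
```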

What is interesting about semi-closure and active postponement is that they are conservative principles: they attempt to keep options open as long as possible, so that we may make a new decision later on with better information and understanding. Closure, on the other hand, has cost efficiency on its side. But it carries far higher second-order risks, by possibly closing off new decision-making options later on.

Hansson suggests two strategies for case 1.b), where there is no obvious way to delimit the problem into a decision in which all the relevant issues are weighed in. In other words, in these cases there are several equally sensible “decision horizons”, each equipped with a sensible set of expectations for the options, but each set of expectations incompatible with the others. The first strategy is to divide the original problem into subproblems, if possible. This division should be such that each subproblem has fewer sensible horizons to consider than the original problem. (Hansson’s example here is the safe disposal of nuclear waste: the problem may be divided into what should be done with the waste already produced, and what with the potential future waste.)

The second strategy is, if possible, to combine the horizons and produce a “fusion” of them. That is, all the issues are brought to the table and taken into consideration. Since this latter strategy is far harder to handle, with its higher level of complexity and interconnectivity, it should be attempted only if the first strategy fails.

But I have some worries about the division strategy as well. They relate to the next case, i.e., uncertainty of consequences. There is, surely, less complexity after subdividing the problem. But if we are dealing with a problem within a complex system (say, an ecosystem), the subdivision might hide some system complexity related to the overall problem, and the piecemeal solutions might produce some unforeseen, nasty consequences. Thus the decision-horizon cases are not always independent of the uncertainty-of-consequences cases.

Case 2 is still far harder to handle than the previous ones. Here the problem is that it is impossible to rule out unforeseen consequences for any of the decision options. How should we weigh the risks and the benefits when there might be a fat tail, i.e., a low-probability but high-impact event, looming outside our considerations? Or when the system we are tinkering with possesses a high level of complexity and interconnectivity? Thus, Hansson concludes, the first item on the list is to figure out whether we can make the decision with our present knowledge while disregarding the unforeseen. To do so, he suggests four simple signs to look out for (a toy encoding of these signs follows the list):

  1. Is the uncertainty symmetric? If there seem to be as many unforeseen factors behind all the options, you might be able to disregard the unforeseen. If unpredictability is more likely behind some options than others, you should take it into account.
  2. Is the phenomenon new? If the phenomenon and the problem are old, and the options have been tested before, you might be able to disregard the unforeseen; otherwise not.
  3. How do the problem and the options relate to time and space? If they are clearly and discretely limited spatio-temporally, you may disregard the unforeseen; if not, you should not.
  4. How do the problem and the options relate to system complexity? If a complex system (e.g. an ecosystem or an economy) is involved, you should take the unforeseen into consideration. If not, you might be able to disregard it.
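Since the four signs amount to a checklist, they can be sketched as one. The following toy encoding assumes a strict all-clear rule; the field names and the conjunction are my own, as Hansson poses the signs as informal questions, not as a formula.

```python
# A toy encoding of the four signs for case 2. All names and the
# all-clear rule are my own illustration, not Hansson's formalism.

from dataclasses import dataclass

@dataclass
class UnforeseenSigns:
    symmetric_uncertainty: bool     # 1. roughly equal unknowns behind every option?
    old_tested_phenomenon: bool     # 2. phenomenon old, options tested before?
    spatiotemporally_limited: bool  # 3. clearly limited in time and space?
    no_complex_system: bool         # 4. no ecosystem- or economy-like system involved?

    def may_disregard_unforeseen(self) -> bool:
        # Only when all four signs point the same way may the unforeseen
        # (tentatively) be set aside.
        return (self.symmetric_uncertainty
                and self.old_tested_phenomenon
                and self.spatiotemporally_limited
                and self.no_complex_system)

# Example: a novel intervention into an ecosystem fails the check.
# UnforeseenSigns(True, False, True, False).may_disregard_unforeseen() -> False
```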

Hansson does not provide us with a ready-made solution for the cases where you should take the unforeseen into account. He says that we should look for asymmetries and choose the option with less open-ended consequences. And in the cases where the unforeseen is looming, we should avoid interventions whenever possible. Thus, here too, Hansson promotes a conservative attitude when facing great uncertainty.

Hansson suggests two approaches to case 3.a), where there is expert disagreement on the issue to be decided (the experts at least provide differing estimations of the expectations, but they might also disagree on a larger scale). First, he says that we should follow the majority, unless the minority option carries smaller risks and is thus the more cautious option (i.e., the option that is best relative to the worst case presented). Secondly, he suggests a “representative hedges method”, in which the experts are asked to assess their own conviction in their opinion. The level of conviction may then be used to re-evaluate the expert conflict.
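Both rules are mechanical enough to sketch. Here is a minimal Python rendering, assuming a simple data layout for votes and worst cases; the function names, the severity scale and the conviction weighting are my own assumptions, since Hansson describes the rules only informally.

```python
# A minimal sketch of the two rules for case 3.a). The data layout and
# the tie-breaking are my own assumptions, not Hansson's.

from collections import Counter

def majority_unless_more_cautious(votes, worst_case):
    """votes: one option name per expert.
    worst_case: option -> severity of its worst outcome (higher = worse).
    Follow the majority, unless some option has a strictly milder worst
    case, i.e. is the more cautious choice."""
    majority = Counter(votes).most_common(1)[0][0]
    cautious = min(worst_case, key=worst_case.get)
    return cautious if worst_case[cautious] < worst_case[majority] else majority

def hedged_support(opinions):
    """opinions: (option, conviction) pairs, with conviction in [0, 1]
    being each expert's self-assessed confidence. Returns conviction-
    weighted support per option, in the spirit of 'representative hedges'."""
    support = Counter()
    for option, conviction in opinions:
        support[option] += conviction
    return support

# Example: the minority option wins when its worst case is milder.
# majority_unless_more_cautious(["A", "A", "B"], {"A": 9, "B": 3}) -> "B"
```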

Case 3.b) is far messier. Here we have no agreement on who may be regarded as an expert. To counter this situation, Hansson suggests several steps. One is to subdivide the problem in such a way that we may agree on the experts for the subproblems. Another is to apply peer-review boards to decide on the expertise, if a subproblem falls under an established field of study. A third is to develop the communication between the decision makers and the experts by making the expert message more intelligible and understandable to the decision makers. One way to do this, according to Hansson (an unsurprising but all too often overlooked tool), is to run trial studies that test the expert opinion in a way that provides empirical, evidential information on it.

The principles Hansson presents for both types of uncertainty of reliance (i.e., 3.a) and 3.b)) seem highly sensible ways to deal with expert disagreement and disagreement on expertise. There is again a sensible bias towards conservative measures. But I think these principles could also be combined with considerations of the other kinds of great uncertainty: for instance, one could take closure, semi-closure or active postponement as part of the reliance considerations, or take the values of the experts into account when hedging.

Hansson argues that case 4 turns out trickier than it at first appears. We may be uncertain of, or in disagreement about, the values that we have to consider for the decision. And this affects the expectations of the options, since the risks and benefits are always relative to the set of values we hold. But when a decision affects other people, or even future generations, should we make the decision based on our values or on theirs? And if the latter, how can we find out the relevant values? In the case of presently living people, we can poll for opinions and take other measures that might give us a decent hint. But in the case of the future, we know only that the past century alone has seen some radical changes in people’s values, and we may well suppose that our present set of values will not be the values of the future.

Hansson suggests, however, that we may, firstly, take the basic human needs (food, health, the continuation of life) as the base values for the decision. Secondly, we may look at the trends in the history of values and try to detect what has been permanent and resistant to change. Thirdly, we may be sensitive to future values by opting for options that do not produce permanent changes, or that are reversible. These are, as Hansson himself notes, conservation-prone, conservative principles.

What we learn from Hansson is that, when faced with a decision where the expectations are unclear and non-measurable, we still have some higher-order principles that might get us out of the mess in a sustainable way. Many of the principles attempt to lessen the uncertainty (for instance, the uncertainty of reliance) wherever possible. The rest of Hansson’s principles tend to be conservative and conservation-minded, in that they aim at leaving open as many future choices as possible. They thus echo the words of Arne Naess, who said that in the face of unknown risks the decision should always be “rather not.”

However, being on the conservation side of an issue, avoiding intervention in complex systems, might not be politically sexy; taking action and making decisions is. Nevertheless, I have not yet seen a good argument for a non-conservative bias in decision making when one faces what Hansson calls great uncertainty. Hansson’s 20-year-old paper, at least, should be enough to show that, unless there are strong and well-presented arguments to the contrary, the conservative option, i.e., the one that leaves as many future options open as possible with a minimum of gravely nasty foreseeable consequences, must be the default decision when facing great uncertainty.

