
Hypotheses, method and recursion

Link: http://weblog.tetradian.com/2014/05/27/hypotheses-method-and-recursion/

What methods do we need for experiment and research in enterprise-architecture and the like? Should we, and must we, declare exactly what our hypothesis will be before we start?

Those are some of the questions that Stephen Bounds triggered off for me with this tweet, a couple of days back:

I duly read the article, which, yes, all seemed fair enough on the surface. But I do have some severe reservations about how well those kinds of ideas actually work in real-world practice, as summarised in the post that I referenced in my reply-tweet:

Which triggered off a really nice back-and-forth on Twitter:

  • smbounds: A good read & the uncertainty of a single event is worth emphasizing. But playing the odds still pays off in the long run. // so scientific method is still important for EA and KM, even if it is just making predictions in the aggregate.
  • tetradian: strong agree scientific-method important in #entarch – yet must also beware of its limits – eg. see also ‘Enterprise-architect – applied-scientist, or alchemist?‘  // relationship b/w repeatable (‘scientific method’) and non-repeatable (not suited to method) is fractal & complex in #entarch etc // not using scientific-method where it fits is unwise; (mis)using it where it doesn’t fit is often catastrophic (e.g BPR) #entarch
  • smbounds: scientific method is not about repeatability or cause & effect. It is about testing hypotheses in a way that minimises our bias.
  • tetradian: oops… scientific-method is exactly about repeatability and predictability – that’s its entire basis and raison-d’être… :-) // confirmation-bias and other cognitive-errors are real concerns – yet ultimately it’s all about repeatability // re science limits, recommend ‘Art Of Scientific Investigation‘ and ‘Against Method
  • smbounds: disagree: the *method* needs to be repeatable, but the hypothesis could be one of non-repeatability
  • tetradian: “the *method* needs to be repeatable” – this is where it gets tricky :-) – method vs meta-method, fractal-recursion in method etc // probably best I blog on this (method and meta-method in #entarch etc)

I asked for permission to quote the tweets above, which Stephen kindly gave – hence this post.

The first part of my response would be to quote from the Wikipedia page on Paul Feyerabend‘s book ‘Against Method‘:

The abstract critique is a reductio ad absurdum of methodological monism (the belief that a single methodology can produce scientific progress). Feyerabend goes on to identify four features of methodological monism: the principle of falsification, a demand for increased empirical content, the forbidding of ad hoc hypotheses and the consistency condition. He then demonstrates that these features imply that science could not progress, hence an absurdity for proponents of the scientific method.

(See the Wikipedia page for the respective page-numbers in the original printed book.)

The relevant point here is that one of the key assertions that’s made in that Guardian article is a requirement for “the forbidding of ad-hoc hypotheses” – and yet, as in the quote above, that is explicitly one of the four foundation-stones of ‘scientific method’ whose validity Feyerabend demolishes in Against Method. Beveridge, in The Art of Scientific Investigation, perhaps isn’t quite so extreme as Feyerabend, but not far off: there’s a whole chapter on hypothesis, anyway, including a section on ‘Precautions in the use of hypothesis’ (pp. 48-52). In short, it’s nothing like as clear-cut as the Guardian article makes it out to be – and a lot more problematic than it looks, too.

One of the things that makes this problematic is a fundamental trap I’ve mentioned here a few times before now, known as Gooch’s Paradox: that “things not only have to be seen to be believed, but also have to be believed to be seen“. That’s where those cognitive-errors such as confirmation-bias arise: our beliefs prime us to see certain things as ‘signal’, and dismiss everything else as ‘noise’. Defining an a priori hypothesis is, by definition, a belief: that there is something to test, and that this chosen method and context of experimentation is a (or the) way to test it. Which, automatically, drops us straight into the trap of Gooch’s Paradox: and if we don’t deliberately compensate for the all-too-natural ‘filtering’, we won’t be able to see anything that doesn’t fit our hypothesis. Oops…

And, yes, I’m going to throw in a SCAN frame at this point, because it’s directly relevant to the next part of the critique:

In practice, predefined a priori hypotheses will only make sense with ‘tame-problems’ that stay the same long enough to test an hypothesis against them. Which, in SCAN terms, pretty much places us well over to the left-hand side of SCAN’s ‘boundary of effective-certainty’. There can be quite a bit of variation – in fact the variation is exactly what an hypothesis should test, and test for – yet even that variation itself has to be ‘tame’ enough to test.

In the mid-range of the hard-sciences – between the near-infinitesimally-small and the near-infinitely-large – things do tend to stay tame enough for a priori hypotheses to work, or at least be useful. Yet outside of that mid-range, and even more so in the ‘soft sciences’ such as psychology and large parts of knowledge-management – the two areas mentioned in Stephen’s initial tweet – we’re much more likely to be dealing with wild-problems rather than tame-problems: in other words, more often over to the right of that ‘boundary of effective-certainty’. Where, courtesy of Gooch’s Paradox, those a priori hypotheses are more likely to lead to exactly the kind of circular self-referential ‘junk science’ that this ‘registration revolution’ is intended to prevent…

The other real trap here is fractal recursion – that, to use SCAN terminology again, we have elements of the Simple, Complicated, Ambiguous and Not-known themselves within things that might seem to ‘be’ only Simple, Complicated, Ambiguous or Not-known. (See my post ‘Using recursion in sensemaking‘ for a worked-example of this, using the domain-boundaries of the SCAN frame itself.) No real surprise there, I’d guess: to my mind, it should be obvious and self-evident once we understand the real implications of experiential concepts such as ‘every point contains (hints of) every other point’. And the reason why this is such a trap is that a priori hypotheses will sometimes work even in the most extreme of the Not-known, but then suddenly make no sense – which is what we’d more likely expect out in that kind of context, but confuses the heck out of someone who does assume that it’d continue working ‘as expected’.
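
To make that fractal-recursion point a little more concrete, here’s a minimal sketch in Python. The domain-names come from SCAN itself, but everything else – the tree-structure, the domains_present() helper and the payroll example – is purely an illustrative assumption of mine, not part of the frame:

```python
from dataclasses import dataclass, field
from enum import Enum

class SCANDomain(Enum):
    SIMPLE = "Simple"
    COMPLICATED = "Complicated"
    AMBIGUOUS = "Ambiguous"
    NOT_KNOWN = "Not-known"

@dataclass
class Element:
    """An element of a context, which may itself contain sub-elements."""
    name: str
    domain: SCANDomain
    parts: list["Element"] = field(default_factory=list)

def domains_present(element: Element) -> set[SCANDomain]:
    """Walk the whole tree: the domains actually present usually exceed
    the single domain the top-level element 'seems' to be."""
    found = {element.domain}
    for part in element.parts:
        found |= domains_present(part)
    return found

# An 'obviously Complicated' payroll process that, examined recursively,
# also turns out to contain Simple, Ambiguous and Not-known elements.
payroll = Element("payroll-run", SCANDomain.COMPLICATED, [
    Element("tax-table lookup", SCANDomain.SIMPLE),
    Element("edge-case interpretation", SCANDomain.AMBIGUOUS, [
        Element("never-seen-before contract type", SCANDomain.NOT_KNOWN),
    ]),
])

print({d.value for d in domains_present(payroll)})  # all four domains, not just 'Complicated'
```

The only point of the sketch is that an hypothesis framed for the ‘top-level’ domain will sometimes hit one of those nested elements from a quite different domain – which is exactly where it stops working ‘as expected’.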

I’m glad to see that there is some hint of awareness of this in the Guardian article:

The reasoning behind [the ‘registration’ initiative] is simply this: that by having scientists state at least part of what they’re going to do before they do it, registration gently compels us to stick to the scientific method.

It does at least acknowledge some of the uncertainties, in that point about “at least part of”, rather than an assertion of ‘must’ or ‘always’. Yet what still doesn’t seem to be there, in that quote, is enough awareness of the very real limits of the ‘scientific method’…

As Stephen says, there is a real need for consistent method. Yet given that there are real limitations to the validity and usefulness of the classic scientific-method, what’s more needed instead is meta-method: generic or abstract methods for creating other context-specific methods dynamically, according to the nature of the context. For example, if we again use the SCAN frame as a base-reference, we can use one of its cross-maps, of typical ‘themes’ in each of the SCAN domains, to suggest appropriate techniques to work with and use for tests in each type of context:

As described in the posts ‘Sensemaking and the swamp-metaphor‘ and ‘Sensemaking – modes and disciplines‘, the ‘swamp metaphor’ provides another worked-example:

Which in turn links up with the classic scientific-method sequence of ‘idea, hypothesis, theory, law’:

The crucial part in both of these examples is that we use the respective cues to select methods that match up with the underlying criteria of the respective domain: don’t mix them up! Which, unfortunately, is very easy to do, particularly when we’re dealing with fractal-recursion…
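
For those who’d prefer it in code-form, here’s a minimal sketch of what ‘meta-method’ means in practice, again in Python. The assess() heuristics, the ‘horizon’ parameter and the technique-lists in the cross-map are illustrative assumptions only – a sketch of the idea, not a definitive reading of SCAN:

```python
from dataclasses import dataclass

@dataclass
class Context:
    repeatable: bool     # does this sit left of the 'boundary of effective-certainty'?
    time_to_act: float   # how long (arbitrary units) before we must commit to action

# Illustrative cross-map: which family of techniques tends to fit which domain.
TECHNIQUES = {
    "Simple":      ["rules", "checklists", "work-instructions"],
    "Complicated": ["formal hypothesis", "analysis", "controlled experiment"],
    "Ambiguous":   ["guidelines", "patterns", "parallel probes"],
    "Not-known":   ["principles", "improvisation", "record-and-review"],
}

def assess(ctx: Context, horizon: float = 10.0) -> str:
    """Crude placement of a context into a SCAN-like domain (illustrative only)."""
    if ctx.repeatable:
        return "Simple" if ctx.time_to_act < horizon else "Complicated"
    return "Not-known" if ctx.time_to_act < horizon else "Ambiguous"

def select_methods(ctx: Context) -> list[str]:
    """The 'meta-method': derive a context-specific method-set from the assessment,
    rather than assuming that one method fits every context."""
    return TECHNIQUES[assess(ctx)]

# A repeatable, plannable context gets hypothesis-and-experiment;
# a unique, time-pressed one gets principles and improvisation instead.
print(select_methods(Context(repeatable=True, time_to_act=100)))   # Complicated-style methods
print(select_methods(Context(repeatable=False, time_to_act=1)))    # Not-known-style methods
```

The design-point is simply that the selection itself is part of the method: we first assess where the context sits, and only then pick the techniques – rather than reaching for ‘the’ scientific method (or any other single method) by default.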

To get another view on how meta-methods work, perhaps take a look at some of the posts here on metaframeworks, such as ‘On metaframeworks in enterprise-architecture‘, and the series of posts starting with ‘Metaframeworks in practice – an introduction‘.

Overall, though, the key point here is that a single method will not be sufficient to cover all of the different types of context that we deal with in enterprise-architecture and the like – perhaps especially for the ‘soft-sciences’ elements of the practice. And yes, that ‘no single method’ constraint does apply to the fabled ‘scientific method’ too: trying to use it where its assertions and assumptions inherently do not and cannot apply is not a wise idea!

Leave it there for now: over to you for comments, perhaps?