EA metamodel – the big picture (and the small picture too)

Link: http://weblog.tetradian.com/2011/09/06/ea-metamodel-big-picture/

In the various previous posts about EA metamodels, we’ve been exploring some of the detailed structures for toolsets and the like at a very, very low-level. But what’s the big-picture here? What’s the point?

So let’s step back for a moment, and look at real-world EA practice.

Much of our work consists of conversations with people, and getting people to talk together, so as to get the various things and processes and activities and everything else in the organisation and shared-enterprise to work better together.

To support those conversations, and to help sense-making and decision-making, we create models. Lots and lots of models. Different models, in different notations, for different stakeholders, and different contexts.

Lots and lots of different ways to describe things that are often essentially the same, but happen to be seen from different directions.

Yet keeping track of the samenesses and togethernesses and relationships and dependencies of everything in all those different portrayals is often a real nightmare.

Finding a way to resolve that nightmare is what this is all about.

A bit of context

Look around at your own context. If you’re doing any kind of architecture-work, or even just explaining things to other people, you’ll be doing lots of diagrams and drawings and models. Some will be hand-drawn, some will be done on a drawing-package such as Visio or Dia or LucidChart, and some may be done within a purpose-built modelling tool such as ArgoUML or Agilian or Sparx Enterprise Architect or Troux Metis. Lots of different ways of doing the same sort of thing with different levels of formal-rigour.

But if we look at it with a more abstract eye, what we’re using is different types of notation. Some will be too freeform to describe as a ‘notation’ as such, though the point is that it’s still used for sense-making and decision-making. Once we get to a certain level, we tend to use some fairly standard notation such as UML or BPMN or Porter Value-Chain or Business Motivation Model or Business Model Canvas, simply because it’s easier to develop shared understanding with a shared model-notation.

And again with an abstract eye, each notation consists of the following:

  • a bunch of ‘things’ – the entities of the notation
  • a bunch of connections-between-things – the relations
  • a bunch of rules or guidelines about how and when and why and with-what things may be related to other things – the semantics that identify the meanings of things and their relations and interdependencies
  • often, a graphic backplane, parts of which may be semantically significant as ‘containers’ for things (such as the ‘building blocks’ in Business Model Canvas)

(Often a notation will also be linked to various methods of how to use the notation, or to change-management processes that relate to or guide the use of the notation. That’s something we’ll need to note and include as we go deeper into the usage of metamodels within EA and the like.)
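To make those four elements a little more concrete, here’s a minimal sketch in code of how a notation might be described in these terms. It’s purely illustrative: the names (Notation, EntityType, RelationType and so on) are my own assumptions, not drawn from any formal specification.

    from dataclasses import dataclass, field

    @dataclass
    class EntityType:
        name: str                   # a 'thing' in the notation, e.g. 'Actor'

    @dataclass
    class RelationType:
        name: str                   # a connection-between-things, e.g. 'flows-to'
        source: str                 # entity-type permitted at the 'from' end
        target: str                 # entity-type permitted at the 'to' end

    @dataclass
    class Notation:
        name: str
        entities: list = field(default_factory=list)    # the 'things'
        relations: list = field(default_factory=list)   # the connections
        containers: list = field(default_factory=list)  # the graphic backplane

        def permits(self, rel: str, src: str, tgt: str) -> bool:
            # The 'semantics': may these entity-types be joined by this relation?
            return any(r.name == rel and r.source == src and r.target == tgt
                       for r in self.relations)

So, for example, Business Model Canvas would be a Notation whose containers list holds its nine building-blocks.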

Each notation describes entities and relations and semantics in a different way. But often they’re actually the same entities that we see in another notation. What we need, then, is a way to keep track of entities (and some of the relations and semantics) as we switch between different notations.

UML (Unified Modelling Language) does this already for software-modelling and software-architecture: a bunch of different ways to look at the ‘areas of concern’ for software-architecture and the like. That’s a very good example here: entities that we develop in a Structure Diagram can be made available (‘re-used’) in any of the other dozen or so diagram-types (notations) within the overall UML.

Yet UML only deals with the software-development aspects of the context. For example, there’s no direct means to link it to an Archimate model, to show how it maps to business processes in an architectural sense. There’s certainly no means to link it to Business Motivation Model, to show dependencies on business-drivers; there’s no means to link it to Business Model Canvas, to rethink the overall business-model (and what part software might or might not play in a revised business-model). Those may not be of much concern to software-architecture – but they are of very real concern to enterprise-architecture or any other architecture that needs to intersect with software-architecture and place it within the overall business or enterprise context.

Hence what we’re talking about here is a much-larger-scale equivalent of UML. It needs to accept that every notation is different – in other words there’s no possibility of “One Notation To Rule Them All”, a single notation that would cover every possible need at every level. Instead, like UML, it would aim to be able to maintain and update entities and relations as they move between different notations – in other words, something quite close to “One Metamodel To Link Them All”.

The catch is that we need to go a long way below the surface to make it work. For UML, that underlying support is provided by MOF (OMG Meta-Object Facility), which is also shared with other OMG specifications such as BPMN (and perhaps also OMG BMM – Business Motivation Model?). Yet MOF only applies to the OMG (Object Management Group) specifications: what about all the other models and notations and everything else that’s defined and maintained by everyone else, often not even in a formal metamodel format? To link across that huge scope of ‘the everything’, we need something that goes at least one level deeper again: and that’s what this is all about.

The simplest start-point is that pair of questions from Graeme Burnett, which would apply to any entity or relation or almost anything:

  • Tell me about yourself?
  • Tell me what you’re associated with, and why?

I’d suggest that that’s where we need to start: with something that is integral enough to be described as ‘yourself’, and to which we could apply those questions. That’s our root-level – a kind of UML for UML-and-everything-else.

Yet that really is at a very low level – deeper than MOF, which is deeper than UML, which is deeper than the UML Structure Diagram, which is deeper than any class-structure model that we might build on that metamodel for UML Structure Diagram.

As we go down through all of those layers, it needs to get simpler and simpler, in order to identify and support the commonality between all those disparate surface-elements. At the kind of level we’re talking about here – what’s known as the ‘M3/M4 metamodel layer’ – it needs to be very simple indeed – which makes it a bit difficult to describe what’s going on, especially by the time we’ve worked our way back up all the way to the surface again.

And, yes, this is where it gets a bit technical…

Building on motes

At this point I’ll link back to a reply to a comment by Peter Bakker on the previous post, ‘EA metamodel – a possible structure’.

Some while back, someone said in a comment to one of the previous posts that the metamodel I’ve been describing is “like going back to assembly language”. Which, in a sense, it is – I’ll freely admit that.

The commenter didn’t mean it politely – he evidently thought that this was a fatal design flaw, one that invalidated the whole exercise.

But he’d actually missed the point: down at the root – in software, at least – everything is assembly-language. (Or machine-code, to be pedantic – but the same point applies there too, of course.)

For this layer we’re talking about here – a layer even deeper than MOF, a layer that is shared across every possible notation – it is the equivalent of assembly-language. It has to be: there’s no choice about that.

And the point here is that if we don’t get the thinking right at this level, we’re not going to get the interoperability and versatility that we need at the surface-level. But this ‘M3/M4 layer’ really is the equivalent of assembly-language. M2/M3 is the equivalent of the compiler, M1/M2 (the surface-ish metamodel) is the high-level language, and M0/M1 (the modelling that we actually work with) is the equivalent of the software-application. It’s essential not to get confused about the layering here: this is a very deep level indeed, far deeper than most people would usually need to go. But we need to get it right here.

Peter said in his comment:

I pretend that I want to start my own Tubemapping Promotion Enterprise :-)
Therefore I will sketch three models:
1. a mindmap for the brainstorm part
2. a business model canvas for the business model derived/extrapolated from the mindmap
3. a tubemap to visualize the routes from here (the idea) to there (a running business)
I keep them as simple as possible of course.

Next step is to see if I can metamodel/model all three models together with motes following my perception of your guidelines above.

So when we’re developing a multi-notation model – such as Peter’s tubemapping ‘app’, in this case – we need to consider all four of those inter-layers at once:

  • at the surface layer (M0/M1) we need a notation for each of the distinct models (such as, in Peter’s example, for mind-map, Business Model Canvas, and tubemap)
  • at the metamodel (M1/M2) we need a metamodel that would describe the components and interactions (entities and relations, plus the rules and parameters that define them) that make up each of those notations
  • at the metametamodel layer (M2/M3) we need to identify how to move entities (and, in a few cases, relations) between each of the metamodels, without losing or damaging anything from what’s already been done at the individual (meta)model layers
  • at the ‘atomic’ layer (M3/M4, the focus of this post) we need to identify the ‘assembly-language’ fine-detail of exactly what happens at each stage and with each change to each of the underlying motes, so as to verify that everything actually works and nothing is missed.
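As a very rough sketch of that M2/M3 concern – moving an entity between metamodels without losing or damaging anything – consider Peter’s example. All of the names and structures below are my own assumptions, purely for illustration:

    # One shared underlying entity, referenced (not copied) by each notation:
    entity = {'id': 'e42', 'label': 'Tubemapping promotion'}

    # Each notation's metamodel wraps the same entity in its own terms:
    mindmap_view = {'entity': entity['id'], 'role': 'branch', 'parent': 'root'}
    canvas_view  = {'entity': entity['id'], 'role': 'value-proposition'}

    # Because both views point at the one entity, a rename made while
    # working in the mind-map shows up in the Business Model Canvas too:
    entity['label'] = 'Tubemapping Promotion Enterprise'

The point of the deeper layers is to guarantee that this kind of re-use works for any pairing of notations, not just for two that happen to have been designed together.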

Note that for most work we shouldn’t ever need to look at the ‘atomic’ layer. We’re only doing it now because we’re roadtesting the ideas for this ‘mote’ concept, deliberately running everything backwards from the surface layer to check out the interoperability and the rest.

(In other words, yes, we’re doing a deep-dive into the ‘assembly-language’ layer, and it isn’t what we’d normally do. But if you’re a CompSci student, you’d usually expect to have to build a compiler at some stage, to prove that you know what’s going on ‘under the hood’. You’d typically start from hand-sketches that become UML diagrams that become language-specifications for the compiler that works with assembly-language. Most everyday users aren’t CompSci students mucking around with compilers – but someone has to do it if we’re to get usable surface-layer applications. Same here, really. :-) )

What I’ve been suggesting is that we can support all of those requirements at this layer with a single underlying entity, which I’ve nicknamed the ‘mote’. This has a very simple structure:

  • unique identifier
  • role-identifier (text-string) – specifies what role this mote plays in the (slightly) larger scheme of things
  • parameter (raw-data) – to be interpreted as number, pointer, text, date or anything, according to context
  • variable-length list of pointers to ‘related-motes’
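In code, that structure might be sketched as something like this – a minimal illustration only, with field names that are my own assumptions rather than a fixed design:

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class Mote:
        uid: str                  # unique identifier
        role: str = ''            # role-identifier: what part this mote plays
        parameter: Any = None     # raw-data: number, pointer, text, date, etc.
        related: list = field(default_factory=list)  # pointers to related-motes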

In graphic form, it looks a bit like a bacillus.

But we need to remember that it’s tiny – again, much like a bacillus. It’s the ways that they all link together that provide the overall power. The mote’s parameter carries just a tiny bit of information; but when that information is placed in context with that from other motes, we can build up towards a much bigger picture.

In effect, what we have in the EA-toolset’s repository is a kind of ‘mote-cloud’.

The ways in which the motes link together I’ve summarised in the previous post, ‘EA metamodel – a possible structure’. Many always remain as tiny fragments of information, re-used all over the place in simple many-to-one relationships. But some motes do get large enough to notice – and that’s what we start to see further up the scale, as Entity, Relationship, Model and so on.
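Using the Mote sketch from above, one small corner of such a mote-cloud might look like this (the roles here are again my own illustrative assumptions):

    # A tiny mote-cloud, indexed by unique identifier:
    cloud = {
        'm1': Mote('m1', role='entity'),                      # grows towards an Entity
        'm2': Mote('m2', role='name', parameter='Customer'),  # one fragment of data
        'm3': Mote('m3', role='tag', parameter='business'),   # another tiny fragment
        'm4': Mote('m4', role='entity'),
    }
    cloud['m1'].related += ['m2', 'm3']  # the Entity is built up from fragments
    cloud['m4'].related += ['m3']        # the 'tag' mote is re-used: many-to-one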

Interestingly, there is one context in which we do work directly at the mote-level. It’s when we ask those two questions: “tell me about yourself?” and “tell me what you’re associated with, and why?”. The information to answer those questions is carried directly within the mote, in its embedded role and parameter and via its related-mote list. The ‘yourself’-mote in this case would only be an Entity, a Relation or a Model.
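Continuing the same illustrative sketch, both questions could be answered from nothing more than the mote’s own fields and its related-motes list:

    def tell_me_about_yourself(mote):
        # Answered entirely from the mote's embedded role and parameter:
        return f'{mote.uid}: role={mote.role}, parameter={mote.parameter!r}'

    def tell_me_what_you_are_associated_with(mote, cloud):
        # Follow the related-mote pointers; each neighbour's role is the 'why':
        return [(uid, cloud[uid].role) for uid in mote.related]

    tell_me_about_yourself(cloud['m1'])
    # -> 'm1: role=entity, parameter=None'
    tell_me_what_you_are_associated_with(cloud['m1'], cloud)
    # -> [('m2', 'name'), ('m3', 'tag')]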

Hope this helps to clarify things a bit further? Comments / suggestions requested as usual, anyway.