I have been involved with modeling and modeling tools for a long time, dating back to the CASE tool days. My mentor from the beginning of my career, when teaching me the notion of modeling and what it was used for, once said: “the primary purpose of any model is to communicate.” In those days, but for a few very specialized tools, most modeling tools simply automated the building of diagrammatic depictions, replacing what people had been doing by hand. Communication was achieved because the user followed a standard method and the tool built the model according to that specification (graphic standards and meta-model) and presented it in an easily readable way.
As time moved on and concepts such as TQM, BPR, and much later EA became something people were interested in using modeling tools for, the simple capture and rendering of model data was no longer sufficient. People began to want to use their model data – the fruits of all their hard work – for something besides just printing in documents or presenting in PowerPoint slides.
Data Must Be Complete for Analysis to Work
One of the first areas I was involved in was building a tool for using activity models as the basis for Activity Based Costing; shortly after that, another project focused on using process models as the basis for process simulations. In both of these cases, this involved the use of the model data to analyze the part of the enterprise that particular model represented. And in both these cases, and the many cases since, the one fact that is always true is what every programmer learns in year 1 of college – garbage in, garbage out. The data used for these analyses must be complete for the analysis to work – unfortunately, often the only way to check for completeness was either to build reports and read through them looking for blanks, or to attempt to run the desired analysis and then troubleshoot why it failed.
Of course, the worst case scenario is if the analysis runs and appears to work…but does so using incomplete data…possibly resulting in a decision based on that flawed analysis!
Over the decades since, the names of the practices around which models find use have changed, as have the methods (such as the introduction of Object Oriented methods) and frameworks. Additionally, many of the modeling tools have come and gone, though notably System Architect has been flexible and powerful enough to evolve as times have changed. One thing that has remained consistent, however, is that people want to use their data for something more than a nice printout – often the words they use to describe that desired use are “analysis” and “decision support”.
By way of example, DoDAF 2 – Volume 1 specifically discusses the use of EA data and states “Architecture-based analytics includes all of the processes that transform architectural data into useful information in support of the decision making process.”
Volume 1 goes on to discuss, in section 10.5, the “Principles of Architecture Analytics” and states that the “five key foundational principles of architecture analytics are”:
1) Information Consistency
Dealing with data being collected in accordance with an overarching metadata structure (e.g., a meta-model).
2) Data Completeness
Which refers to “the requirement that all required attributes of data elements are specified”; and continues on to mention examples of why this is important for architecture based analytics and decision support.
5) Lack of Ambiguity
Lou has done a great job of discussing meta-models and System Architect in previous articles and Martin has introduced another meta-model for consideration in Archimate for System Architect. For the purposes of this article I am going to focus on item 2 on the list. Why? Because, regardless of the meta-model involved (DoDAF2, Archimate, UML, etc.), and regardless of the name given to the practice in which various models are used (EA, BPR, etc.), or even the tool used – the one thing that is absolutely necessary for the collected model data to be of any use beyond a graphical depiction is that the data be complete enough to serve the intended purpose.
Those of you familiar with the properties language of System Architect are aware that any given property can be made “Required” – and thoughtful use of that capability can go a long way toward ensuring that the resulting architecture is complete enough for the stated purpose. However, the way data is collected in an architecture often means you do not want to block the entry of an item when the user lacks all the required attributes at creation time, which is exactly what the “Required” keyword will do. And of course, many larger EA projects involve more than one team member, so there will be varying levels of knowledge of the subject matter and thus necessary follow-up research. Also, it is unlikely any single person will know “how far along are we” – though that question will certainly arise.
It is for these reasons – the real-world way data is collected and the requirement that the data be complete for any analysis of it to be valid – that EA Frameworks has created a macro add-on to System Architect for the specific purpose of analyzing the completeness of the data contained in the encyclopedia. The macro allows the user to specify the “required” properties of a definition type (which properties are “required” is certain to change based on the intended use [purpose] of that definition in analysis) and then queries the content of the encyclopedia to determine the level (percent) of completion of the definitions. The result of this macro is presented in several ways:
1) Charts in Excel that show percent complete for each property
2) Sheets in Excel that show the definitions with “blank” properties and which properties are blank.
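The calculation behind those outputs can be sketched in a few lines. The following is a hypothetical illustration in Python – the definition layout, property names, and `completeness_report` function are all assumptions for the sketch; the actual macro works against the System Architect encyclopedia, not Python dicts:

```python
# Hypothetical sketch of a completeness check: definitions are modeled as
# dicts of property name -> value; a property counts as "blank" if it is
# None or an empty string.

def completeness_report(definitions, required_props):
    """Return (percent_complete_by_property, incomplete_definitions)."""
    percent = {}
    for prop in required_props:
        filled = sum(1 for d in definitions if d.get(prop) not in (None, ""))
        percent[prop] = 100.0 * filled / len(definitions) if definitions else 0.0

    incomplete = []
    for d in definitions:
        blanks = [p for p in required_props if d.get(p) in (None, "")]
        if blanks:
            incomplete.append((d.get("Name", "<unnamed>"), blanks))
    return percent, incomplete

# Illustrative activity definitions (invented for the sketch)
activities = [
    {"Name": "Assess Damage", "Cost": 1200, "Description": "Survey site"},
    {"Name": "Dispatch Crew", "Cost": None, "Description": ""},
]
percent, incomplete = completeness_report(activities, ["Cost", "Description"])
print(percent)      # {'Cost': 50.0, 'Description': 50.0}
print(incomplete)   # [('Dispatch Crew', ['Cost', 'Description'])]
```

The per-property percentages correspond to the Excel charts; the `incomplete` list corresponds to the sheets that show which definitions have blanks and where.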
Performer Dashboard
The results of the macro can be used for several purposes:
- Compliance/conformance checking – if standards for the collection of EA data impose completeness requirements, this macro can save a great deal of time that would otherwise be spent looking for blanks in reports.
- Pre-analysis completeness checks – as mentioned above
- Tool bridge analysis – for instance, does the model data contain everything necessary to generate complete code or output a complete XML file?
- Project status check – how far along are we toward building a complete architecture?
And others I am sure.
Not to oversell the idea – completeness and correctness are related but separate concepts. Just because a value exists in a property does not mean it is meaningful or correct. Also related is consistency – for instance: is the total set of data and stated relationships internally consistent, such that one part of the EA is not in disagreement with another? These other concepts will serve as the basis for both future articles and upgrades to the metrics macro for System Architect.
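To make the consistency idea concrete, one simple check is to flag relationships whose endpoints are not defined anywhere in the collected data. This is a hypothetical sketch only – the function name and data shapes are invented, and this is not what the metrics macro currently does:

```python
# Hypothetical sketch: a relationship is internally inconsistent if either
# endpoint does not correspond to a defined element in the collected data.

def dangling_relationships(definitions, relationships):
    """Return relationships referencing names with no matching definition."""
    names = {d["Name"] for d in definitions}
    return [(src, tgt) for (src, tgt) in relationships
            if src not in names or tgt not in names]

# Illustrative data (invented for the sketch)
defs = [{"Name": "Assess Damage"}, {"Name": "Dispatch Crew"}]
rels = [("Assess Damage", "Dispatch Crew"),
        ("Dispatch Crew", "File Report")]   # "File Report" is never defined
print(dangling_relationships(defs, rels))
# [('Dispatch Crew', 'File Report')]
```

A dataset can be 100% complete by the property-level check and still fail a check like this, which is why completeness and consistency are worth treating as separate metrics.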
That said, completeness checking is an important but often overlooked check of any set of EA data. This macro is intended to make that check as simple as possible.
For a running (Shockwave) demo of the metrics macro please go to:
Where you can see this and other macro add-ons for System Architect.