2 months, 10 days ago

An Architecture for Creating an Ultra-secure Network and Datastore

The Problem
According to United States records, cyber attacks (crimes, intelligence gathering, and warfare) increased 1,300 percent from 2006 to 2016.  Other reports cited in Forbes Magazine indicate that between 2015 and 2016 there was a 200 to 450 percent increase in attacks.  I suspect, though, that these numbers vastly underestimate the total number of attacks.  I know that in the late 1980s, one company was averaging 10,000 attacks per day on its website and access points to the Internet, of which 4,000 originated in Russia (then the USSR), China, North Korea, and the like.
There are two goals for these attacks: to disrupt the entire IT infrastructure, or to gather or change protected data for various nefarious purposes.  There is a multiplicity of reasons for these attacks (monetary gain, political change, and so on); the “so on” is too long to enumerate.
The cost of preventing and mitigating the effects of these attacks has spawned a new multi-billion-dollar industry.  Consequently, what is needed is an entirely new system (network and datastore) that completely defeats all attack vectors.  That is what I’m proposing here.

The Solution: A Disruptive Architecture, the Once and Future System

The Goal

The goal of the architecture presented here is to define a highly secure system for the transmission and storage of data.
The architecture is for a fundamentally different “new” network and datastore.  I put “new” in quotes because I based the architecture on a number of concepts and standards from the late 1970s to the mid-1990s.  For reasons of economics and business politics, these concepts and standards were abandoned.  When I submitted the architecture for a patent, I was told that, even though it uses these old concepts and standards in a new way, it is unpatentable because it is based on well-known concepts and standards.
Consequently, I’m presenting it in this post in the hope that someone will take a serious look at it and communicate with me, so that I can present the details and we can build a secure network and datastore.

The Architecture

My fundamental idea is to create a separate, “data only” network and datastore.  Initially, having a worldwide network for the storage and transmission of data, separate from the Internet “of everything,” may seem a ludicrous idea to those looking at the short-term costs for an organization; but what would the cost of having data stolen, corrupted, or destroyed be for an organization?  And remember that there are both initial and recurring costs for data security on a cloud or across the Internet.
This new architecture has five components.  One of them has evolved over the past twenty years.  One of them was declared obsolete thirty years ago.  One of them is based on petrified standards of the 1980s.  And one uses a new twist on current hardware and software.  The fifth is a particular form of governance.

New User Interface Security

The base technology of the new user interface has been evolving over the past twenty years at least.  It is a combination of three functional technologies.  The first is biometric recognition.  Any secure system requires some form of authentication: proof that you are who you say you are.  Various forms of biometric authentication (facial recognition, fingerprint identification, retinal pattern recognition, and so on) are currently the forms of identification least likely to be broken by cyber attacks.
The second security technology is a version of the smartcard: a credit-card-like device with a data-storage computer chip embedded.  Under this new function, the card reader would communicate the location, time of day, and date, whereupon the card would generate a pass code based on those parameters.
At the same time, the reader would generate a pass code based on the same parameters.  The system would accept the identification if and only if the two codes matched.  Since any secure system requires at least two-factor authentication, a user would need both the smart card (which could additionally store the biometric data) and their own body.
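The pass-code scheme above can be sketched as a simple challenge-response computation.  This is only an illustration, assuming a shared secret provisioned on both the card and the reader; the actual algorithm, parameters, and key handling are not specified in the architecture:

```python
import hmac
import hashlib

def pass_code(shared_secret: bytes, location: str, timestamp: str) -> str:
    """Derive a short pass code from the session parameters.

    Both the card and the reader hold the same shared secret and
    compute this independently; sign-on succeeds only if the two
    codes match.
    """
    message = f"{location}|{timestamp}".encode()
    digest = hmac.new(shared_secret, message, hashlib.sha256).hexdigest()
    return digest[:8]  # truncate to a short code

# Card and reader each compute the code from the same parameters.
secret = b"provisioned-at-enrollment"   # illustrative shared secret
card_code = pass_code(secret, "Terminal-17", "2018-06-01T09:30")
reader_code = pass_code(secret, "Terminal-17", "2018-06-01T09:30")
assert card_code == reader_code  # identification accepted iff they match
```

Because the code depends on the location and time of day, a code captured at one reader is useless at another reader, or even at the same reader a short while later.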
Finally, authorization and access control are both static for a given user interface to the system.  This means that the user of a given device (be it a terminal, PC, smart phone, etc.) can only gain access to the set of data, records, or summaries to which they are entitled.
So a contract specialist has no access, or only limited access, to the engineering data for a contract.  And if the contract specialist attempted to sign on to another device for which he was not preapproved, he could not get to the data to which he is entitled, because an individual must be preapproved for every terminal that individual wants to use.
Likewise, a doctor may not see a patient’s complete medical history without the patient’s permission.  This would be a two-step process.  The doctor would have to sign in on his or her device using the two-factor authentication described above.  Then the patient would have to sign on to the same device using the same two-factor authentication to give the doctor permission to access his or her medical record.
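The combined device-preapproval and patient-consent checks might look like the following sketch.  All names and the meta-data layout are hypothetical; the real security meta-data would live on the network and change only through governance:

```python
# Hypothetical static access-control meta-data: which users are
# preapproved for which devices.
APPROVED_DEVICES = {
    "dr_lee":    {"clinic-pc-3"},
    "pat_jones": {"clinic-pc-3"},
}

def can_view_record(doctor: str, patient: str, device: str,
                    signed_on: set) -> bool:
    """Both parties must be preapproved for this device, and both
    must have completed two-factor sign-on on it."""
    return (device in APPROVED_DEVICES.get(doctor, set())
            and device in APPROVED_DEVICES.get(patient, set())
            and doctor in signed_on
            and patient in signed_on)
```

For example, `can_view_record("dr_lee", "pat_jones", "clinic-pc-3", {"dr_lee"})` fails because the patient has not yet signed on to grant permission.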
The security meta-data and parameters are stored on the ultra-secure data network (USDN).  Any updates or changes must be made and approved through the system’s security governance function.  No dynamic changes can be made until the changes are approved.  In a political/cultural context, this governance process will be the most difficult to enforce, since users expect changes to be made “NOW” and the process doesn’t allow “NOW” to happen.

The Bridge

The second architectural component is the bridge from the Internet to the USDN.  This is really the key component securing the USDN from attacks.  And this is the component that was declared obsolete thirty years ago.  In the early 1980s there were many proprietary data networks.  To communicate data from one network to another required a network bridge.
The following diagram is from the patent that I applied for.  It shows an example of how changing the protocol layers or stacks creates a portcullis in the bridge that provides the ultra-security.  On the left side of the bridge are the standard Internet protocols.  Other than the top layer (called the Application Layer in the OSI model) and the bottom layer (the Physical Layer in the OSI model), all layers link and guide the communications between the sender and receiver.
Notice that the functional protocols on each side of the bridge, with the exception of the physical layer, are different.  On the left side, all protocols are current Internet standards.  On the right side, however, the bridge uses protocols from the Open Systems Interconnection (OSI) suite.  These protocols were abandoned in the 1990s in favor of the earlier TCP/IP suite, which at the time was less expensive and much less capable.  [Sidebar: “The first example of a superior principle is always less capable than a mature example of an inferior principle.”]
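To make the asymmetry concrete, here is an illustrative rendering of the two stacks.  The layer names follow the OSI reference model, and the specific OSI-suite protocols shown (FTAM/X.400, TP4, CLNP) are common examples, not necessarily the ones in the patent diagram:

```python
# Simplified: a few representative layers from each side of the bridge.
INTERNET_SIDE = {
    "Application": "HTTP/SMTP",
    "Transport":   "TCP/UDP",
    "Network":     "IP",
    "Physical":    "Ethernet PHY",
}
OSI_SIDE = {
    "Application": "FTAM/X.400",
    "Transport":   "TP4",
    "Network":     "CLNP",
    "Physical":    "Ethernet PHY",   # only the physical layer is shared
}

# Every layer except the physical one differs across the bridge.
shared = [layer for layer in INTERNET_SIDE
          if INTERNET_SIDE[layer] == OSI_SIDE[layer]]
```

Here `shared` contains only the physical layer, which is the point: attack tooling written against the left-hand stack has nothing in common with the right-hand stack above the wire.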


What this means is that the entire USDN will use these OSI protocols.  Any cyber attack software developed for Internet protocols would have to be redesigned for the OSI protocols.
Even if the hackers of whatever stripe did develop software capable of exploiting vulnerabilities in the OSI protocol stack they would still need to get it onto the network.  But the design of the bridge includes a portcullis in the middle of the bridge.
This portcullis is designed to allow only data and records in well-defined formats to pass.  This means that no documents can move across the bridge.  In this case, “documents” includes e-mail, word-processing documents, unformatted text, files, and other unformatted data.
This stringent requirement eliminates nearly every attack vector available to hackers.  For example, there is no way a Trojan horse attachment can get into the system, because e-mail, let alone e-mail with attachments, is not allowed access across the bridge.
As shown in the diagram, only data in specific and static XML formats is allowed to move through the portcullis.  The XML data structures are installed in the portcullis only after approval using one of the governance processes.
So, for example, medical data would use an XML version of the international medical standards, engineering data would use an XML version of STEP, and so on.  Only data exactly following those standards, and to which the user is entitled, would get through the portcullis.  This would initially impose a very large overhead in meta-security and access-control data about all individuals.
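A minimal sketch of the portcullis check, assuming the approved record formats reduce to an allowlist of XML element names.  A real implementation would validate against the full governance-installed schemas; the tag names here are invented:

```python
import xml.etree.ElementTree as ET

# Illustrative allowlist: only element names from the approved,
# governance-installed record formats may cross the portcullis.
APPROVED_TAGS = {"PatientRecord", "Name", "DateOfBirth", "Diagnosis"}

def portcullis_pass(payload: str) -> bool:
    """Admit the payload only if it is well-formed XML and every
    element belongs to an approved record format; anything else
    (e-mail, documents, free text) is rejected."""
    try:
        root = ET.fromstring(payload)
    except ET.ParseError:
        return False  # not well-formed XML: rejected outright
    return all(elem.tag in APPROVED_TAGS for elem in root.iter())

assert portcullis_pass("<PatientRecord><Name>A</Name></PatientRecord>")
assert not portcullis_pass("Subject: hello")          # free text
assert not portcullis_pass("<Email><Body/></Email>")  # unapproved format
```

Note that rejection is the default: anything not positively matching an installed format never reaches the network.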

The Network

The third architectural component is the network.  The network is based on petrified standards of the 1980s.  Inside the portcullis-bridge data would be free to move among the various nodes of the network using the same OSI protocol stack that is used on the right side of the portcullis-bridge shown in the diagram.
Additionally, it would use improved versions of the Directory Service (X.500) standard.  This would include using static routing meta-data (which many network analysts would say is not an improvement).  However, static routing meta-data means that if an unauthorized node magically appeared on the USDN (because some hacker tapped one of the USDN lines), the node would be recognized as a threat immediately.  Consequently, any attempt to breach the security imposed by the portcullis-bridge by directly attacking the network would fail, as long as good governance is in place.
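The effect of static routing meta-data can be illustrated with a trivial allowlist check.  The directory layout is hypothetical; the point is that between governance-approved updates the set of legitimate nodes is fixed, so anything new is suspect by definition:

```python
# Static, governance-approved directory of legitimate network nodes.
STATIC_DIRECTORY = {
    "node-a": "10.1.0.1",
    "node-b": "10.1.0.2",
}

def check_node(node_id: str, address: str) -> str:
    """Flag any node that is absent from the static directory, or
    that appears at an address other than its registered one."""
    expected = STATIC_DIRECTORY.get(node_id)
    if expected is None or expected != address:
        return "THREAT"   # unknown node or address mismatch
    return "OK"
```

A hacker who taps a line and introduces `node-x` is flagged immediately, because no approval process ever added `node-x` to the directory.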

Datastores

The last technical function is data storage.  This datastore function uses a new twist on current hardware and software design for the storage of data and information.  The twist is that only specific data and records are stored, never files from outside the network.
An organization using a USDN-like system would have its data file structures created by authorized personnel inside the USDN.  These file structures would follow the various authorized XML data structures.  No freeform data like e-mail or documents would be allowed.  [Sidebar: remember, it’s much, much simpler to create documents from data than to glean data from documents.]
The only applications that are authorized to run on the USDN and its datastore computers are those that create, read, update, or delete records or data elements.  Reading data would include reading for transfer, and for summarization. 
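A toy sketch of a datastore interface restricted to those four operations.  The names are invented; the record-only rule is enforced here by rejecting anything that is not a structured record:

```python
class RecordStore:
    """Datastore limited to create/read/update/delete on typed
    records; files and blobs from outside the network are refused."""

    def __init__(self):
        self._records = {}

    def create(self, key, record: dict):
        if not isinstance(record, dict):
            raise TypeError("only structured records, not files or blobs")
        self._records[key] = record

    def read(self, key) -> dict:
        # "Read" covers both transfer and summarization.
        return self._records[key]

    def update(self, key, fields: dict):
        self._records[key].update(fields)

    def delete(self, key):
        del self._records[key]
```

There is deliberately no `store_file` or `attach` operation; the interface itself is the policy.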
For example, suppose the medical profession of a state, or of the United States, adopts the USDN to protect patients’ medical records.  A medical researcher may be granted access to summaries of certain data elements from the records of patients that have a particular medical problem.  This access would be granted through an approval process (part of governance) prior to obtaining the summaries.
The advantage is that the medical researcher has access to a complete set of data for the population of an area.  The downside for the researcher is the need for a well-formulated and defensible hypothesis in order to obtain the data, and the fact that the governance processes take time.
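The summary-only grant might behave like this sketch, in which the researcher’s code can obtain aggregates but never the underlying records.  The field names are invented for illustration:

```python
from statistics import mean

def approved_summary(records, field, condition):
    """Return an aggregate over patients matching the
    governance-approved condition; individual records are never
    exposed to the researcher."""
    values = [r[field] for r in records
              if r.get("condition") == condition]
    return {"count": len(values),
            "mean": mean(values) if values else None}
```

The researcher receives only `{"count": ..., "mean": ...}`; the records themselves never cross the grant boundary.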

Governance

The governance function of the system’s architecture is the most critically important of the five because it is the only one where humans are involved (Big Time).  As discussed above, many security functions are static and require administrative action to change their parameters and meta-data.  While I expect that actually changing the meta-data and parameters will be automated, the various decision-making processes will not be.
One obvious example is in banking.  Some financial data must be secure within a financial institution and only shared with a client.  Other data, in the form of transactions must be shared between and among banks and other financial institutions.
The USDN security meta-data would determine which data could be sent to another financial organization and the other characteristics of the transaction.  The transfer would occur entirely within the USDN, not across any portion of the Internet, and the data could then be retrieved by the destination organization.
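As a sketch, that meta-data could reduce to a per-destination list of shareable fields, applied before a record is placed on the USDN for the destination to retrieve.  The field names are illustrative:

```python
# Illustrative security meta-data: which fields of a transaction
# may leave the originating institution.
SHAREABLE_FIELDS = {"transaction_id", "amount", "currency", "date"}

def outbound_view(transaction: dict) -> dict:
    """Strip every field not approved for sharing before the record
    is placed on the USDN for the destination organization."""
    return {k: v for k, v in transaction.items()
            if k in SHAREABLE_FIELDS}

txn = {"transaction_id": "T1", "amount": 100, "currency": "USD",
       "date": "2018-06-01", "client_ssn": "xxx-xx-xxxx"}
assert "client_ssn" not in outbound_view(txn)
```

Changing `SHAREABLE_FIELDS` would itself be a governance action, approved before the new meta-data takes effect.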
For example, if all defense contractors were on the USDN, then when teams formed to respond to a DoD Request for Proposal (RFP), the various teams of contractors and subcontractors could share requirements and other data within their team.  When the DoD chose the winning team, program/project, risk, and design data could be shared among the team and with the customer, without fear of a cyber attack on one of the subcontractors leading to the capture or corruption of program- or mission-critical data.  [Sidebar: frequently a third- or fourth-tier subcontractor has more vulnerabilities than the prime contractor.]

Issues

Again, “The first instance of a superior principle is always inferior to a mature example of an inferior principle.”
There are three issues with the creation of such a system. 
The first is cost: creating an entire nationwide or worldwide network is very expensive in the startup phase.  Creating (or, in many cases, really resurrecting) the software to support the functions of the USDN will be very expensive.  There is the cost of implementing software services to interface with existing organizational applications.  Acquiring the physical cabling for the system will be expensive.
Modifying routers to use the new protocols will be expensive.  Designing, constructing, and testing the new portcullis-bridge will be very expensive.  Most of this investment will need to be made before one data element is protected.
The cost is more than a straightforward financial issue of building the system: the USDN will threaten much of the multi-billion-dollar cyber security industry’s income stream.  This industry will market and lobby against building out the system.
The second issue may be used by that industry as an argument against the USDN: the system protects only data, not other types of information like e-mail and documents.  This is true.  However, the core of any organization is its data.  Documents can easily be constructed from data, but not the other way around.
The third issue, at least initially, is the response time of the system.  Currently, applications and users have come to expect near-instantaneous responses to dynamic requests.  Initially, at least, I predict that the response time to requests will be on the order of seconds, maybe many.  I saw the same pattern with Microsoft DOS (until version 3.1 it was bad), with other products from Microsoft, Apple, and Oracle [Sidebar: I worked with Oracle 4.1], and with many other hardware and software products.  So it will be a rocky start, but ultimately it will cost much less than the recover, rebuild, patch, upgrade, and get-hacked-again systems of today.

Summary

While the USDN does not protect an organization from all cyber attacks, it does make an organization’s mission-critical data nearly invulnerable.  An organization will be able to recover from an attack, and it will be nearly impossible for terrorists, cyber criminals, and the like to capture personal data or mission-critical data.

For anyone who is interested, please comment on this post.  I have much more knowledge of the processes, technology, and construction involved than I can put in a post, and would be happy to discuss it.

6 years, 3 months ago

Transformation Benefits Measurement, the Political and Technical Hard Part of Mission Alignment and Enterprise Architecture

Pre-Ramble
This post will sound argumentative (and a bit of a rant; in fact, I will denote the rants explicitly.  Some will agree, some will laugh, and Management and Finance Engineering may become defensive), and it probably shows my experiences with management and finance engineering (Business Management Incorporated, which owns all businesses) in attempting benefits measurement.  However, I’m trying to point out the PC landmines (especially in the Rants) that I stepped on, so that other Systems Engineers, System Architects, and Enterprise Architects don’t step on these particular landmines.  There are still plenty of others, so find your own, then let me know.

A good many of the issues result from a poor understanding by economists and Finance Engineers of the underlying organizational economic model embodied in Adam Smith’s work, which is the foundation of Capitalism.  The result of this poor understanding is an incomplete model, as I describe in Organizational Economics: The Formation of Wealth.

Transformation Benefits Measurement Issues
As Adam Smith discussed in Chapter 1, Book 1, of his magnum opus, commonly called The Wealth of Nations, a transformation of process and the insertion of tools transforms productivity.  Adam Smith called this process transformation “the division of labour,” or, more commonly today, the assembly line.  At the time, 1776, when all industry was “cottage industry,” this transformation was revolutionary.  He demonstrated it using the example of straight-pin production.  Further, he discussed the concept that tooling makes the process even more effective, since tools are process multipliers.  In the military, tools (weapons) are “force multipliers,” and for the military, weapons are a major part of the process.  Therefore, both the transformation of processes and the transformation of tooling should increase the productivity of an organization.  Productivity here means increasing the effectiveness of the processes of an organization in achieving its Vision or meeting the requirements of the various Missions supporting that Vision.
The current global business culture, especially finance, from Wall St. to individual CFOs and other “finance engineers,” militates against reasonable benefits measurement of the transformation of processes and the insertion and maintenance of tools.  The problem is that finance engineers do not believe in either increased process effectiveness or cost avoidance (to increase the cost efficiency of a process).
Issue #1: The GFI Process
Part of the problem is the way most organizations decide on IT investments in processes and tooling.  The traditional method is the GFI (Go For It) methodology, which involves two functions: a “beauty contest” and “backroom political dickering.”  That is, every function within an organization has its own pet projects to make that function better (and thereby its management’s bonuses larger).  The GFI decision support process is usually served up with strong dashes of NIH (Not Invented Here) and LSI (Last Salesman In) syndromes.
This is like every station on an assembly line dickering for funding to better perform its function.  The more PC functions would get an air-conditioned room in which to watch automated tooling perform the task, while those less PC would have their personnel chained to the workstation, using hand tools to perform their function; and not just any hand tools, but the ones management thought they needed, useful or not.  Contrast this with the way the Manufacturing Engineering units of most manufacturing companies work.  And please don’t think I’m using hyperbole, because I can cite chapter and verse where I’ve seen it; and in after-hours discussions, cohorts from other organizations have told me the same story.
As I’ve discussed in A Model of an Organization’s Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture, the Enterprise Architect and System Architect can serve in the “Manufacturing Engineer” role for many types of investment decisions.  However, this is still culturally unpalatable in many organizations since it gives less wiggle room to finance engineers and managers.
Issue #2: Poorly Formalized Procedures for Measuring Increased Process Effectiveness
One key reason (or at least rationale) why management, and especially finance engineers, find wiggle room is that organizations (management and finance engineering) are unable (unwilling) to fund the procedures and tooling needed to accurately determine pre- and post-transformation process effectiveness, because performing the procedures and maintaining the tools uses resources while providing no ROI this quarter.  [Better to use the money for Management Incentives than for measuring the decisions management makes.]
To demonstrate how poorly the finance engineering religion understands the concept of Increased Process Effectiveness, I will use the example of Cost Avoidance, which is not necessarily even Process Effectiveness but usually Cost Efficiency.  Typically, Cost Avoidance is investing in training, process design, or tooling now to reduce the cost of operating or maintaining the processes and tooling later.
[Rant 1: a good basic academic definition and explanation cost avoidance is found at http://www.esourcingwiki.com/index.php/Cost_Reduction_and_Avoidance.  It includes this definition:

“Cost avoidance is a cost reduction that results from a spend that is lower then the spend that would have otherwise been required if the cost avoidance exercise had not been undertaken.” ]
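A worked example of that definition, with invented figures:

```python
# Cost avoidance = projected spend without the exercise - actual spend.
# All figures are illustrative.
projected_maintenance = 120_000   # expected yearly spend without training
actual_maintenance = 85_000       # actual spend after the investment
training_investment = 15_000      # the cost-avoidance exercise itself

cost_avoidance = projected_maintenance - actual_maintenance   # 35,000
net_benefit = cost_avoidance - training_investment            # 20,000
```

Note that the projected figure is a forecast, not a historical actual, which is exactly why finance engineers label the resulting number “soft.”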

As discussed in the article just cited, in the religion of Finance Engineering cost avoidance is considered “soft” or “intangible.”  The reason finance engineers cite for not believing cost avoidance numbers is that the “savings classified as avoidance (are suspect) due to a lack of historical comparison.”
[Rant 2: Likewise, the savings from avoiding a risk (an unknown) by changing the design are considered invalid (see my post The Risk Management Process), because the risk never turned into an issue (a problem).]
This is as opposed to cost reduction, where the Finance Engineer can measure the results in ROI.  That makes cost reduction efforts much more palatable to Finance Engineers, managers, and Wall St. traders.  Consequently, increased cost efficiency is much more highly valued by this group than Increased Process Effectiveness.  Yet, as discussed above, the reason for tools (and process transformations) is to Increase Process Effectiveness.  So Finance Engineering puts the “emphassus on the wrong salobul.”
They are aided and abetted by (transactional and other non-leader) management.  As discussed recently on CNBC’s Squawk Box, the reason the CEOs of major corporations cite for their obscenely high salaries is that they make decisions that avoid risk.
[Rant 3: Of course, this ignores the fact that going into and operating a business is, by definition, risky; any company that avoids risk is on the “going out of business curve.”  So most executives in US companies today are paid seven-figure salaries to put their companies on the going-out-of-business curve.  Interesting.]
However, Cost Avoidance supports one of the two ways to grow a business.  The first is to invent a new product or innovate on an existing product (e.g., the iPad) such that the company generates new business.  The second is to Increase Process Effectiveness.
Management, especially mid- and upper-level management, does not want to acknowledge the role of process transformation, or of the addition or upgrade of tooling, in increasing the effectiveness of a process, procedure, method, or function.  The reason is simple: it undermines their ability to claim that it was their own skill in managing their assets (read: employees) that did it, and therefore to “earn” a bonus or promotion.  Consequently, this leaves Enterprise and System Architects always attempting to “prove their worth” without using the metrics that would irrefutably prove the point.
These are the key cultural issues (problems) in selling real Enterprise Architecture and System Architecture.  And frankly, the only organizations that will accept this type of cultural change are entrepreneurial ones, and those large organizations in a panic or desperation.  These are the only ones willing to change their culture.
Benefits Measurement within the OODA Loop
Being an Enterprise and an Organizational Process Architect, as well as a Systems Engineer and System Architect, I know well that measuring the benefits of a transformation (i.e., cost avoidance) is technically difficult at best, and especially so if the only metrics “management” considers are financial.
Measuring Increased Process Effectiveness
In an internal paper I wrote in 2008, Measuring the Process Effectiveness of the Deliverables of a Program [Rant 4: ignored with dignity by at least two organizations when I proposed R&D to create a benefits measurement procedure], I cited a paper: John Ward, Peter Murray, and Elizabeth Daniel, Benefits Management Best Practice Guidelines (2004, Document Number: ISRC-BM-200401, Information Systems Research Centre, Cranfield School of Management), which posits four types of metrics that can be used to measure benefits (a very good paper, by the way).
  1. Financial–Obviously
  2. Quantifiable–Metrics that the organization is currently using to measure its process(es) performance and dependability, and that will predictably change with the development or transformation; the metrics will demonstrate the benefits (or lack thereof).  This type of metric provides hard, but not financial, evidence that the transformation has benefits.  Typically, the organization knows both the minimum and maximum for the metric (e.g., 0% to 100%).
  3. Measurable–Metrics that the organization is not currently using to measure its performance, but that should measurably demonstrate the benefits of the development or transformation.  Typically, these metrics have a minimum, like 0, but no obvious maximum.  For example, I’m currently tracking the number of pages accessed per day.  I know that if no one reads a page the metric will be zero.  However, I have no idea of the potential readership of any one post, because most of the ideas presented here are concepts that will be of utility in the future.  [Rant 5: I had one VP, who was letting me know he was going to lay me off from an organization that claimed to be an advanced technology integrator, tell me that “he was beginning to understand what I had been talking about two years before.”  That’s from a VP of an organization claiming to be advanced in its thinking about technology integration.  Huh….]  Still, from the data I have a good idea of the readership of each post, what the readership is interested in, and what falls flat on its face.  Measurable metrics will show or demonstrate the benefits, but cannot be used to forecast those benefits.  Another example is a RAD process I created in 2000.  This was the first RAD process that I know of that the SEI considered Conformant; that is, found in conformance by an SEI auditor.  At the time, I had no way to measure its success except by project adoption rate (0 being no projects using it).  By 2004, within the organization I worked for, which ran several hundred small, medium, and large efforts per year, over half of the efforts were using the process.
I wanted to move from measurable to quantifiable metrics, like defects per rollout, customer satisfaction, additional customer funding, effort spent per requirement (use case), and so on.  But management considered collecting, analyzing, and storing this data to be an expense, not an investment; and since the organization was only CMMI Level 3, not Level 4, this proved infeasible.  [Rant 6: It seems to me that weather forecasters and Wall St. market analysts are the only ones who can be paid to use measurable metrics to forecast, whether they are right, wrong, or indifferent; and the Wall St. analysts are paid a great deal even when they are wrong.]
  4. Observable–Observable is the least quantitative, which is to say the most qualitative, of the metric types.  These are metrics with no definite minimum or maximum.  Instead, they are metrics that the participants agree on ahead of time (requirements?  See my post Types of Requirements).  These metrics are really little more than any positive change that occurs after the transformation.  At worst, they are anecdotal evidence.  Unfortunately, because Financial Engineers and Managers (for reasons discussed above) are not willing to invest in procedures and tooling for better metrics like those above, unless forced into it by customers (e.g., a requirement for CMMI Level 5), Enterprise Architects, System Architects, and Systems Engineers must rely on anecdotal evidence, the weakest kind, to validate the benefits of a transformation.
Metric Context Dimensions
Having metrics to measure the benefits is good if, and only if, the metrics are in context.  In my internal paper, Measuring the Process Effectiveness of the Deliverables of a Program, cited above, I identified four contextual dimensions, and I have since discovered a fifth.  I describe two here to illustrate what I mean.
In several previous posts I’ve used the IDEF0 pattern as a model of the organization (see Figure 1 in A Model of an Organization’s Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture in particular).  One context for a metric is whether it is measuring improvement in the process, in the mechanisms (tooling), or in the control functions; a transformation may affect all three.  If it affects two of the pattern’s abstract components, or all three, it may increase or decrease the benefit in each.  Then the Enterprise Architect must determine the “net benefit.”
The key to this “net benefit” is to determine how well the metric(s) of each component measure the organization’s movement, or change in velocity of movement, toward achieving its Vision and/or Mission.  This is a second context.  As I said, there are at least three more.
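As an illustration of these first two contexts, a “net benefit” could be computed by weighting each component’s metric change by its contribution toward the Vision/Mission.  The deltas and weights below are invented purely for illustration:

```python
# Per-component change in the relevant metric after the transformation
# (positive = improvement), one entry per IDEF0 abstract component.
component_delta = {"process": +0.15, "mechanisms": +0.05, "control": -0.03}

# How strongly each component's metric reflects movement toward the
# organization's Vision/Mission (the second context); weights sum to 1.
mission_weight = {"process": 0.5, "mechanisms": 0.3, "control": 0.2}

net_benefit = sum(component_delta[c] * mission_weight[c]
                  for c in component_delta)
```

Here the transformation helps the process and mechanisms but slightly degrades control; the weighted sum is what the Enterprise Architect would report as the net benefit.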
Measuring Increased Cost Efficiency
While measuring the benefits that accrue from a transformation is difficult (just plain hard), measuring increased cost efficiency is relatively simple and easy, because it is based on cost reduction, not cost avoidance.  The operative word is “relatively,” since management and others will claim that their skill and knowledge reduced the cost, not the effort of the transformation team or of the Enterprise Architecture team that analyzed, discovered, and recommended the transformation.  [Rant 7: More times than I can count, I have had and seen efforts where management did everything possible to kill off a transformation effort, and then, when it was obvious to all that the effort was producing results, “piled on” to garner as much credit for the effort as possible.  One very minor example from my experience: in 2000, my boss at the time told me that I should not be “wasting so much time on creating a CMMI Level 3 RAD process,” but instead should be doing real work.  I call this behavior the “Al Gore” or “Project Credit Piling On” Syndrome (in his election bid, Al Gore attempted to take all the credit for the Internet; having participated in its development for years prior, I and all of my cohorts resented the attempt).  Sir Arthur Clarke captured this syndrome in his Law of Revolutionary Development.

“Every revolutionary idea evokes three stages of reaction. They can be summed up as:
–It is Impossible—don’t Waste My Time!
–It is Possible, but not Worth Doing!
–I Said it was a Good Idea All Along!”]

Consequently, “proving” that it was the engineering and implementation of the transformation that actually reduced the cost, and not the “manager’s superior management abilities,” is difficult at best.  If it weren’t the manager’s ability, then why pay him or her the “management bonus”?  [Rant 8: which is where the Management Protective Association kicks in to protect its own.]

The Benefits Measurement Process

The two hardest activities of the Mission Alignment and Implementation Process are Observe and Orient, as defined within the OODA Loop (see A Model of an Organization’s Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture for the definitions of these tasks or functions of the OODA Loop).  To really observe the results and effects of a process transformation requires an organizational process as described, in part, by the CMMI Level 4 Key Practices or some of the requirements of the ISO 9001 standards.

As usual, I will submit to the reader that the key (culturally and in a business sense) to getting the organization to measure the success (benefits) of its investment decisions and its policy and management decisions is twofold.  The first high-level activity is a quick (and therefore necessarily incomplete) inventory of its Mission(s), Strategies, Processes, and tooling assets.  As I describe in Initially implementing an Asset and Enterprise Architecture Process and an AEAR, this might consist of documenting and inserting the data of the final configuration of each new transformation effort into an AEAR as it is rolled out during an initial 3-month period, and additionally inserting current Policies and Standards (with their associated Business Rules) into the AEAR.  Second, analyze the requirements of each effort (really, the metrics associated with the requirements) to determine the effort's success metrics.  Using the Benefits Context Matrix, determine where these metrics are incomplete (in some cases), over-defined (in others), obtuse and opaque, or conflicting among themselves.  The Enterprise Architect would present the results of these analyses to management, together with recommendations for better metrics and more Process Effective transformation efforts (projects and programs).
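To make that second step concrete, the metric gap analysis can be sketched in a few lines of code.  Everything here is illustrative: the `Metric` fields and the classification rules are my own simplifications of a Benefits Context Matrix check, not part of any real AEAR tooling.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    has_target: bool   # a numeric target value is defined
    has_measure: bool  # a procedure exists to actually collect the data
    owners: int        # number of groups claiming/defining the metric

def classify(m: Metric) -> str:
    """Roughly bin a success metric the way a Benefits Context Matrix
    analysis would: incomplete, over-defined, or usable."""
    if not (m.has_target and m.has_measure):
        return "incomplete"
    if m.owners > 1:
        return "over-defined"  # multiple owners -> likely conflicting definitions
    return "usable"

# A tiny, invented inventory of one effort's success metrics.
metrics = [
    Metric("connector installs", True, True, 3),
    Metric("customer satisfaction", True, False, 1),
    Metric("cost per link per month", True, True, 1),
]
report = {m.name: classify(m) for m in metrics}
```

The report is what the Enterprise Architect would walk into the management presentation with: which metrics to keep, which to complete, and which to reconcile.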

The second high-level activity is to implement procedures and tooling to more effectively and efficiently observe and orient on the benefits through the metrics (as well as through the rest of the Mission Alignment/Mission Implementation Cycles).  Both of these activities should have demonstrable results (an Initial Operating Capability, or IOC) by the end of the first 3-month Mission Alignment cycle.  The IOC need not be much, but it must be implemented, not merely a notional or conceptual design.  This forces the organization to invest resources in the measurement of benefits and, perhaps, in identifying the component in which the benefits exist: control, process, or mechanisms.

Initially, expect the results from the benefits metrics to be lousy, for at least three reasons.  First, the AEAR is skeletal at best.  Second, the organization and all the participants, including the Enterprise Architect, have a learning curve with respect to the process.  Third, the initial set of benefits metrics will not really measure the benefits, or at least will not measure them effectively.

For example, I have been told, and believe to be true, that several years ago the management of a Fortune 500 company chose IBM's MQSeries as middleware to interlink many of the "standalone" systems in its fragmented architecture.  This was a good-to-excellent decision in the age before SOA, since the average maintenance cost for a business-critical custom link was about $100 per link per month and the company had several hundred business-critical links.  The IBM solution standardized the procedure for inter-linkage in a central communications hub using an IBM standard protocol.  Using the MQSeries communications solution required standardized messaging connectors.  Each new installation of a connector was a cost to the organization.  But since connectors could be reused, IBM could rightly claim that the Total Cost of Ownership (TCO) for the inter-linkage would be significantly reduced.

However, since the "benefit" of migrating to the IBM solution was "Cost Reduction", not Increased Process Effectiveness [Rant 9: "Cost Avoidance" in Finance Engineering parlance], Management and Finance Engineering (yes, both had to agree) directed that the company would migrate its systems.  That was good, until they identified the "Benefit Metric" on which the managers would get their bonuses.  That benefit metric was "the number of new connectors installed".  While it sounds reasonable, the result was that hundreds of new connectors were installed, but few connectors were reused, because management was rewarded only for new connectors, not for reuse.  Finance Engineering took one look at the IBM invoice and had apoplexy!  It cost more in a situation where they had a guarantee from the supplier that it would cost less [Rant 10: And an IBM guarantee reduced risk to zero].  The result was that the benefit (Increased Cost Efficiency) metric was changed to "the number of interfaces reusing existing connectors, or, where not possible, new connectors".  Since clear identification and delineation of metrics is difficult even for Increased Cost Efficiency (Cost Reduction), it will be more so for Increased Process Effectiveness (Cost Avoidance).
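The incentive flip in this story is easy to show with back-of-the-envelope arithmetic.  The per-install fee and the link and system counts below are invented for illustration (the post gives only the $100-per-link maintenance figure); what matters is the ratio, not the exact numbers.

```python
# Illustrative assumptions, not actual IBM or customer figures.
INSTALL_FEE = 5_000       # one-time cost per new connector installed
LINKS_NEEDED = 300        # business-critical links to migrate
DISTINCT_SYSTEMS = 40     # distinct endpoints, i.e., connectors actually required

def invoice(new_connectors: int) -> int:
    """The supplier invoice is driven solely by new connector installs."""
    return new_connectors * INSTALL_FEE

# Metric 1: "number of new connectors installed" -> every link gets a new one.
cost_rewarding_installs = invoice(LINKS_NEEDED)

# Metric 2: "interfaces reusing existing connectors" -> install one per system.
cost_rewarding_reuse = invoice(DISTINCT_SYSTEMS)
```

Under the first metric the invoice is 7.5 times larger, even though both scenarios deliver the same number of working interfaces; the metric, not the technology, drove the cost.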

Having effectively rained on everyone's parade, I still maintain that, with the support of the organization's leadership, the Enterprise Architect can create a Transformation Benefits Measurement procedure with good benefit (Increased Process Effectiveness) metrics in 3 to 4 cycles of the Mission Alignment Process.  And customers requiring their suppliers to follow the CMMI Level 5 Key Practices, SOA as an architectural pattern or functional design, Business Process Modeling, and Business Activity Monitoring and Management (BAMM) tooling will all help drive the effort.

For example, BAMM used in conjunction with SOA-based Services will enable the Enterprise Architect to determine such prosaic metrics as Process Throughput (in addition to locating bottlenecks) before and after a transformation. [Rant 11: Management and Finance Engineering are nearly psychologically incapable of allowing a team to measure a Process, System, or Service after it's been put into production, let alone measuring these before the transformation.  This is the reason I recommend that Enterprise Architecture processes, like Mission Alignment, be short cycles instead of straight-through, one-off processes like the waterfall process; each cycle allows the Enterprise Architect to measure the results and correct defects in the transformation and in the metrics.  It's also the reason I recommend that the Enterprise Architect be on the CEO's staff, rather than a hired consulting firm.] Other BAMM-derived metrics might be the cost and time used per unit produced across the process, the increase in quality (decreased defects), the up-time of functions of the process, customer satisfaction, employee satisfaction (employee morale increases with successful processes), and so on.  These all help the Enterprise Architect Observe and Orient on the changes in the process due to the transformation, as part of the OODA Loop-based Mission Alignment/Mission Implementation process.
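As a sketch of the throughput measurement itself, suppose the BAMM tooling exposes nothing more than a log of completion timestamps per process instance.  A before/after comparison could then look like this (the timestamps and the `throughput_per_hour` helper are illustrative, not a real BAMM API):

```python
from datetime import datetime

def throughput_per_hour(completions: list[str]) -> float:
    """Units completed per hour over the observed window of a
    BAMM-style event log (ISO-8601 completion timestamps)."""
    times = sorted(datetime.fromisoformat(t) for t in completions)
    hours = (times[-1] - times[0]).total_seconds() / 3600
    return (len(times) - 1) / hours  # completions per hour over the window

# Invented logs: one unit per hour before, one per half hour after.
before = ["2024-01-01T09:00", "2024-01-01T10:00",
          "2024-01-01T11:00", "2024-01-01T12:00"]
after = ["2024-02-01T09:00", "2024-02-01T09:30", "2024-02-01T10:00",
         "2024-02-01T10:30", "2024-02-01T11:00"]

gain = throughput_per_hour(after) / throughput_per_hour(before)
```

The same pattern, applied to cost, defect, or up-time event streams, yields the other metrics listed above; the point is that each is computed from observed production data on both sides of the transformation, which is exactly the Observe step of the OODA Loop.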