How Does Strategic Planning Relate to Enterprise Architecture?

TOGAF often refers to Strategic Planning without specifying the details of what it consists of. This document explains why the two disciplines fit together so well.

Strategic Planning means different things to different people. The one constant is its reference to Business Planning which usually occurs annually in most companies. One of the activities of this exercise is the consideration of the portfolio of projects for the following financial year, also referred to as Project Portfolio Management (PPM). This activity may also be triggered when a company modifies its strategy or the priority of its current developments.

Drivers for Strategic Planning may include:

· New products or services

· A need for greater Business flexibility and agility

· Mergers and acquisitions

· Company reorganization

· Consolidation of manufacturing plants, lines of business, partners, information systems

· Cost reduction

· Risk mitigation

· Business Process Management initiatives

· Business Process Outsourcing

· Facilities outsourcing or insourcing

· Offshoring

Strategic Planning as a process may include activities such as:

1. The definition of the mission and objectives of the enterprise

Most companies have a mission statement depicting the business vision, the purpose and value of the company and the visionary goals to address future opportunities. With that business vision, the board of the company defines the strategic (e.g. reputation, market share) and financial objectives (e.g. earnings growth, sales targets).

2. Environmental analysis

The environmental analysis may include the following activities:

· Internal analysis of the enterprise

· Analysis of the enterprise’s industry

· A PEST analysis (Political, Economic, Social, and Technological factors). An organization must consider its environment before beginning the marketing process; in fact, environmental analysis should be continuous, feed all aspects of planning, and identify the strengths, weaknesses, opportunities, and threats (SWOT).
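As a rough sketch, the SWOT output of environmental analysis can be paired into candidate strategy quadrants (a simplified TOWS-style matching); every factor listed here is illustrative, not drawn from any real analysis.

```python
# Pairing internal SWOT factors with external ones (a simplified
# TOWS-style matching); every entry here is illustrative.
from itertools import product

swot = {
    "strengths": ["strong brand"],
    "weaknesses": ["legacy systems"],
    "opportunities": ["emerging market"],
    "threats": ["new regulation"],
}

def tows_pairs(swot):
    """Pair each internal factor with each external factor to seed
    candidate strategies in the SO, ST, WO, and WT quadrants."""
    pairs = {}
    for internal, external in product(("strengths", "weaknesses"),
                                      ("opportunities", "threats")):
        quadrant = internal[0].upper() + external[0].upper()  # e.g. "SO"
        pairs[quadrant] = list(product(swot[internal], swot[external]))
    return pairs

pairs = tows_pairs(swot)
# pairs["SO"] pairs "strong brand" with "emerging market"
```

Each quadrant then prompts a question for strategy definition, e.g. how a strength can exploit an opportunity (SO) or how a weakness exposed to a threat should be mitigated (WT).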

3. Strategy definition

Based on the previous activities, the enterprise matches its strengths to opportunities, addresses its weaknesses and external threats, and elaborates a strategic plan. This plan may then be refined at different levels in the enterprise. Below is a diagram explaining the various levels of plans.

image

To build that strategy, an Enterprise Strategy Model may be used to represent the Enterprise situation accurately and realistically for both past and future views. This can be based on Business Motivation Modeling (BMM), which supports developing, communicating, and managing a Strategic Plan. Another possibility is the Business Model Canvas, which allows the company to develop and sketch out new or existing business models (refer to the work of Alexander Osterwalder, http://alexosterwalder.com/).

The model’s analyses should consider important strategic variables such as customer demand expectations, pricing and elasticity, competitor behavior, emissions regulations, and future input and labor costs.

These variables are then mapped to the most important business processes (capacity, business capabilities, constraints) and to economic performance to determine the best decision for each scenario. The strategic model can be based on business processes such as customer, operational, or background processes. Scenarios can then be segmented and analyzed by customer, product portfolio, network redesign, long-term recruiting and capacity, and mergers and acquisitions to describe Segment Business Plans.
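A minimal sketch of the scenario comparison this describes, using one economic-performance figure (payback period); the scenario names and numbers are purely invented for illustration.

```python
# Comparing two invented scenarios on one economic-performance figure
# (payback period); the variables and numbers are purely illustrative.
scenarios = {
    "expand_capacity": {"capex_m": 5.0, "annual_margin_m": 1.8},
    "outsource":       {"capex_m": 1.0, "annual_margin_m": 1.2},
}

def payback_years(s):
    """Years of margin needed to recover the capital expenditure."""
    return s["capex_m"] / s["annual_margin_m"]

# The "best decision for each scenario" here is simply the fastest payback.
best = min(scenarios, key=lambda name: payback_years(scenarios[name]))
```

In practice each Segment Business Plan would score scenarios on several such variables at once, not a single ratio.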

4. Strategy Implementation

The selected strategy is implemented by means of programs, projects, budgets, processes and procedures. The way in which the strategy is implemented can have a significant impact on whether it will be successful, and this is where Enterprise Architecture may have a significant role to play. Often, the people formulating the strategy are different from those implementing it. The way the strategy is communicated is a key element of the success and should be clearly explained to the different layers of management including the Enterprise Architecture team.

To support that strategy, different levels of architecture can be considered, such as strategic, segment, or capability architectures.

image

Figure 20-1: Summary Classification Model for Architecture Landscapes

The diagram below illustrates different examples of new business capabilities linked to a Strategic Architecture.

image

It also illustrates how Strategic Architecture supports the enterprise’s vision and the strategic plan communicated to an Enterprise Architecture team.

Going to the next level makes it possible to detail the various deliverables and the associated new business capabilities. The segment architecture maps perfectly to the Segment Business Plan.

image

5. Evaluation and monitoring

The implementation of the strategy must be monitored and adjustments made as required.

Evaluation and monitoring consists of the following steps:

1. Definition of KPIs, measurement and metrics

2. Definition of target values for these KPIs

3. Perform measurements

4. Compare measured results to the pre-defined standard

5. Make necessary changes
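The five steps above can be sketched as a small monitoring loop; the KPI names, target values, and measurements are illustrative, not prescribed by TOGAF.

```python
# Steps 1-2: define KPIs and their target values (illustrative).
targets = {"on_time_delivery_pct": 95.0, "defects_per_release": 5.0}
higher_is_better = {"on_time_delivery_pct": True, "defects_per_release": False}

def evaluate(measured):
    """Steps 3-5: compare measurements to the pre-defined targets and
    flag the KPIs where changes are necessary."""
    needs_change = []
    for kpi, target in targets.items():
        actual = measured[kpi]
        ok = actual >= target if higher_is_better[kpi] else actual <= target
        if not ok:
            needs_change.append(kpi)
    return needs_change

# One measurement cycle: delivery misses its target, defects do not.
flagged = evaluate({"on_time_delivery_pct": 91.0, "defects_per_release": 3.0})
```

The loop would be re-run each monitoring cycle, with adjustments made for whatever `evaluate` flags.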

Strategic Planning and Enterprise Architecture should ensure that information systems do not operate in a vacuum. At its core, TOGAF 9 retains the strong set of guidelines promoted in the previous version and surrounds them with guidance on how to adopt and apply TOGAF to the enterprise for Strategic Planning initiatives. The ADM diagram below clearly indicates the integration between the two processes.

The company’s mission and vision must be communicated to the Enterprise Architecture team, which then maps Business Capabilities to the different levels of Business Plans.

image

Many Enterprise Architecture projects are focused at low levels but should be aligned with Strategic Corporate Planning. Enterprise Architecture is a critical discipline and one Strategic Planning mechanism for structuring an enterprise. TOGAF 9 is without doubt an effective framework for working with stakeholders through Strategic Planning and architecture work, especially for organizations that are actively transforming themselves.

The Battle of Our Times: Capabilities vs. Process

I’m a little ashamed to admit I’ve spent far too much of my career debating colleagues on the merits of capability versus process. In the worst example, I engaged in an intense debate …

Transformation Benefits Measurement, the Political and Technical Hard Part of Mission Alignment and Enterprise Architecture

Pre-Ramble
This post will sound argumentative (and a bit of a rant–in fact, I will denote the rants in color; some will agree, some will laugh, and Management and Finance Engineering may become defensive), and it probably shows my experiences with management and finance engineering (Business Management Incorporated, that owns all businesses) in attempting benefits measurement.  However, I’m trying to point out the PC landmines (especially in the Rants) that I stepped on, so that other Systems Engineers, System Architects, and Enterprise Architects don’t step on these particular landmines–there are still plenty of others, so find your own, then let me know.

A good many of the issues result from a poor understanding by economists and Finance Engineers of the underlying organizational economic model embodied in Adam Smith’s work, which is the foundation of Capitalism.  The result of this poor understanding is an incomplete model, as I describe in Organizational Economics: The Formation of Wealth.

Transformation Benefits Measurement Issues
As Adam Smith discussed in Chapter 1, Book 1, of his magnum opus, commonly called The Wealth of Nations, a transformation of process and the insertion of tools transforms productivity.  Adam Smith called the process transformation “the division of labour“, or more commonly today, the assembly line.  At the time, 1776, when all industry was “cottage industry”, this transformation was revolutionary.  He illustrated it using the example of straight pin production.  Further, he discussed the concept that tooling makes the process even more effective, since tools are process multipliers. In the military, their tools, weapons, are “force multipliers”, which for the military is a major part of their process. Therefore, both transformation of processes and transforming tooling should increase the productivity of an organization.  Productivity is increasing the effectiveness of the processes of an organization to achieve its Vision or meet the requirements of its various Missions supporting that Vision.
The current global business culture, especially finance from Wall St. to the individual CFOs and other “finance engineers”, militates against reasonable benefits measurement of the transformation of processes and the insertion and maintenance of tools.  The problem is that finance engineers do not believe in either increased process effectiveness or cost avoidance (to increase the cost efficiency of a process).
Issue #1: The GFI Process
Part of the problem is the way most organizations decide on IT investments in processes and tooling.  The traditional method is the GFI (Go For It) methodology that involves two functions, a “beauty contest” and “backroom political dickering”.  That is, every function within an organization has its own pet projects to make its function better (and thereby its management’s bonuses larger).  The GFI decision support process is usually served up with strong dashes of NIH (Not Invented Here) and LSI (Last Salesman In) syndromes.
This is like every station on an assembly line dickering for funding to better perform its function.  The more PC functions would have an air conditioned room to watch the automated tooling perform the task, while those less PC would have their personnel chained to the workstation, while they used hand tools to perform their function; and not any hand tools, but the ones management thought they needed–useful or not.  Contrast this with the way the Manufacturing Engineering units of most manufacturing companies work.  And please don’t think I’m using hyperbole because I can cite chapter and verse where I’ve seen it, and in after hours discussions with cohorts from other organizations, they’ve told me the same story.
As I’ve discussed in A Model of an Organization’s Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture, the Enterprise Architect and System Architect can serve in the “Manufacturing Engineer” role for many types of investment decisions.  However, this is still culturally unpalatable in many organizations since it gives less wiggle room to finance engineers and managers.
Issue #2: Poorly Formalized Procedures for Measuring Increased Process Effectiveness
One key reason (or at least rationale) why management and especially finance engineers find wiggle room is that organizations (management and finance engineering) are unable (unwilling) to fund the procedures and tooling to accurately determine pre- and post-transformation process effectiveness, because performing the procedures and maintaining the tools uses resources while providing no ROI–this quarter. [Better to use the money for Management Incentives, rather than measuring the decisions management makes.]
To demonstrate how poorly the finance engineering religion understands the concept of Increased Process Effectiveness, I will use the example of Cost Avoidance, which is not necessarily even Process Effectiveness, but is usually Cost Efficiency.  Typically, Cost Avoidance is investing in training, process design, or tooling now to reduce the cost of operating or maintaining the processes and tooling later. 
[Rant 1: a good basic academic definition and explanation of cost avoidance is found at http://www.esourcingwiki.com/index.php/Cost_Reduction_and_Avoidance.  It includes this definition:

“Cost avoidance is a cost reduction that results from a spend that is lower than the spend that would have otherwise been required if the cost avoidance exercise had not been undertaken.” ]
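As arithmetic, that definition reduces to a subtraction between the "would-have-been" spend and the actual spend; the figures below are invented for illustration.

```python
# Cost avoidance per the cited definition: the spend that would have
# been required minus the spend actually incurred. Figures invented.
def cost_avoidance(baseline_spend, actual_spend):
    return baseline_spend - actual_spend

# e.g. maintenance projected at $120k/year without a tooling investment,
# $85k/year with it:
avoided = cost_avoidance(120_000, 85_000)  # $35k avoided per year
```

The hard part, as the rest of this section argues, is not the subtraction but agreeing on a credible baseline figure.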

As discussed in the article just cited, in the religion of Finance Engineering, cost avoidance is considered “soft” or “intangible”.  The reason finance engineers cite for not believing cost avoidance numbers is that the “savings classified as avoidance (are suspect) due to a lack of historical comparison.”
[Rant 2: Of course, a saving like that of avoiding a risk (an unknown) by changing the design is held not to be valid (see my post The Risk Management Process), because the risk never turned into an issue (a problem).]
This is as opposed to cost reduction, where the Finance Engineer can measure the results in ROI.  This makes cost reduction efforts much more palatable to Finance Engineers, managers, and Wall St. traders.  Consequently, increased cost efficiency is much more highly valued by this group than Increased Process Effectiveness.  Yet, as discussed above, the reason for tools (and process transformations) is to Increase Process Effectiveness.   So, Finance Engineering puts the “emphassus on the wrong salobul“.
They are aided and abetted by (transactional and other non-leader) management.  As discussed recently on CNBC Squawk Box, the reason the CEOs of major corporations cite for their obscenely high salaries is that they make decisions that avoid risk.
[Rant 3: Of course this is ignoring the fact that going into and operating a business is risky, by definition; and any company that avoids risk is on the “going out of business curve”.  So most executives in US Companies today are paid 7 figure salaries to put their companies on “the going out of business curve”–interesting]
However, Cost Avoidance is one of two ways to grow a business.  The first is to invent a new product or innovate on an existing product (e.g., the iPad) such that the company generates new business.  The second is to Increase Process Effectiveness.
Management, especially mid- and upper-level management, does not want to acknowledge the role of process transformation, or of the addition or upgrade of tooling, in increasing the effectiveness of a process, procedure, method, or function.  The reason is simple: it undermines their claim that it was their own ability to manage their assets (read: employees) better that produced the gain, and therefore their claim to have “earned” a bonus or promotion.  Consequently, this leaves Enterprise and System Architects always attempting to “prove their worth” without using the metrics that irrefutably prove the point.
These are the key cultural issues (problems) in selling real Enterprise Architecture and System Architecture.  And frankly, the only organizations that will accept this type of cultural change are entrepreneurial ones, and those large organizations in a panic or desperation.  These are the only ones that are willing to change their culture.
Benefits Measurement within the OODA Loop
Being an Enterprise and an Organizational Process Architect, as well as a Systems Engineer and System Architect, I know well that measuring the benefits of a transformation (i.e., cost avoidance) is technically difficult at best, and especially so if the only metrics “management” considers are financial.
Measuring Increased Process Effectiveness
In an internal paper I wrote in 2008, Measuring the Process Effectiveness of Deliverable of a Program [Rant 4: ignored with dignity by at least two organizations when I proposed R&D to create a benefits measurement procedure], I cited a paper: John Ward, Peter Murray and Elizabeth Daniel, Benefits Management Best Practice Guidelines (2004, Document Number: ISRC-BM-200401, Information Systems Research Centre, Cranfield School of Management), which posits four types of metric that can be used to measure benefits (a very good paper, by the way).
  1. Financial–Obviously
  2. Quantifiable–Metrics that the organization is currently using to measure its process(es) performance and dependability, and that will predictably change with the development or transformation; the metrics will demonstrate the benefits (or lack thereof).  This type of metric provides hard, but not financial, evidence that the transformation has benefits.  Typically, the organization knows both the minimum and maximum for the metric (e.g., 0% to 100%).
  3. Measurable–Metrics that the organization is not currently using to measure its performance, but that should measurably demonstrate the benefits of the development or transformation.  Typically, these metrics have a minimum, like 0, but no obvious maximum.  For example, I’m currently tracking the number of pages accessed per day.  I know that if no one reads a page the metric will be zero.  However, I have no idea of the potential readership for any one post, because most of the ideas presented here are concepts that will be of utility in the future. [Rant 5: I had one VP who, while letting me know he was going to lay me off from an organization that claimed it was an advanced technology integrator, said he “was beginning to understand what I had been talking about two years before”–that’s from a VP of an organization claiming to be advanced in their thinking about technology integration–Huh….]  Still, from the data I have a good idea of the readership of each post, what the readership is interested in, and what falls flat on its face.  Measurable metrics will show or demonstrate the benefits, but cannot be used to forecast those benefits.  Another example is a RAD process I created in 2000.  This process was the first RAD process that I know of that the SEI considered Conformant; that is, found in conformance by an SEI Auditor.  At the time, I had no way to measure its success except by project adoption rate (0 being no projects used it).  By 2004, within the organization I worked for, which did several hundred small, medium, and large efforts per year, over half of them were using the process.
I wanted to move from measurable to quantitative, using metrics like defects per rollout, customer satisfaction, additional customer funding, effort spent per requirement (use case), and so on, but management considered collecting, analyzing, and storing this data to be an expense, not an investment, and since the organization was only CMMI Level 3 and not Level 4, this proved infeasible.   [Rant 6: It seems to me that weather forecasters and Wall St. market analysts are the only ones that can be paid to use measurable metrics to forecast, whether they are right, wrong, or indifferent–and the Wall St. analysts are paid a great deal even when they are wrong.]
  4. Observable–Observable is the least quantitative, which is to say the most qualitative, of the metric types.  These are metrics with no definite minimum or maximum.  Instead, they are metrics that the participants agree on ahead of time–requirements? (see my post Types of Requirements).  These metrics are really little more than any positive change that occurs after the transformation.  At worst, they are anecdotal evidence.  Unfortunately, because Financial Engineers and managers (for reasons discussed above) are not willing to invest in procedures and tooling for better metrics like those above unless they are forced into it by customers (e.g., requiring CMMI Level 5), Enterprise Architects, System Architects, and Systems Engineers must rely on anecdotal evidence, the weakest kind, to validate the benefits of a transformation.
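One rough way to encode the four metric types above is by what is known about their bounds; the metric names and values here are illustrative placeholders, not drawn from the Ward, Murray, and Daniel paper.

```python
# A rough encoding of the four benefit-metric types, distinguished by
# what is known about their bounds. All example metrics are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BenefitMetric:
    name: str
    metric_type: str   # "financial", "quantifiable", "measurable", "observable"
    minimum: Optional[float] = None   # None means no known bound
    maximum: Optional[float] = None

metrics = [
    BenefitMetric("cost reduction ($)", "financial", 0.0, None),
    BenefitMetric("process yield (%)", "quantifiable", 0.0, 100.0),  # both bounds known
    BenefitMetric("pages read per day", "measurable", 0.0, None),    # no known maximum
    BenefitMetric("team morale", "observable"),                      # agreed on, not bounded
]

# Only the first two types give hard (forecastable) evidence.
hard_evidence = [m.name for m in metrics
                 if m.metric_type in ("financial", "quantifiable")]
```

The encoding makes the post's point mechanical: the further down the list a metric type sits, the less it can be used to forecast benefits in advance.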
Metric Context Dimensions
Having metrics to measure the benefits is good if, and only if, the metrics are in context.  In my internal paper, Measuring the Process Effectiveness of Deliverable of a Program, cited above, I found a total of four contextual dimensions, and since then I have discovered a fifth.  I describe two here to illustrate what I mean.
In several previous posts I’ve used the IDEF0 pattern as a model of the organization (see Figure 1 in A Model of an Organization’s Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture in particular).  One context for the metrics is whether the particular metric measures improvement in the process, the mechanisms (tooling), or the control functions; a transformation may affect all three.  If it affects two of the pattern’s abstract components, or all three, the transformation may affect each either by increasing or decreasing the benefit.  The Enterprise Architect must then determine the “net benefit.”
The key to this “net benefit” is to determine how well the metric(s) of each component measures the organization’s movement or change in velocity of movement toward achieving its Vision and/or Mission.  This is a second context.  As I said, there are at least three more.
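As a toy illustration of the "net benefit" roll-up across the three IDEF0 components, with invented per-component deltas on some common benefit scale:

```python
# Invented per-component benefit deltas from a transformation that
# improves the process and tooling but slightly burdens control.
component_delta = {"process": 3.0, "mechanisms": 1.5, "control": -0.5}

net_benefit = sum(component_delta.values())  # gains outweigh the loss
```

The real difficulty, as the text notes, is not the sum but establishing that each component's metric actually tracks movement toward the Vision or Mission.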
Measuring Increased Cost Efficiency
While measuring the Benefits that accrue from a transformation is difficult (just plain hard), measuring the increased cost efficiency is simple and easy–relatively–because it is based on cost reduction, not cost avoidance.  The operative word is “relatively”, since management and others will claim that their skill and knowledge reduced the cost, not the effort of the transformation team or the Enterprise Architecture team that analyzed, discovered, and recommended the transformation.  [Rant 7: More times than I can count, I have had and seen efforts where management did everything possible to kill off a transformation effort, then, when it was obvious to all that the effort was producing results, “piled on” to attempt to garner as much credit for the effort as possible.  One very minor example from my experience: in 2000, my boss at the time told me that I should not be “wasting so much time on creating a CMMI Level 3 RAD process, but instead should be doing real work.”  I call this behavior the “Al Gore” or “Project Credit Piling On” Syndrome (in his election bid, Al Gore attempted to take all the credit for the Internet; having participated in its development for years prior, I and all of my cohorts resented the attempt).  Sir Arthur Clarke captured this syndrome in his Law of Revolutionary Development.

“Every revolutionary idea evokes three stages of reaction. They can be summed up as:
–It is Impossible—don’t Waste My Time!
–It is Possible, but not Worth Doing!
–I Said it was a Good Idea All Along!”]

Consequently, “proving” that the engineering and implementation of the transformation actually reduced the cost, and not the “manager’s superior management abilities”, is difficult at best–if it weren’t the manager’s ability, then why pay him or her the “management bonus”? [Rant 8: which is where the Management Protective Association kicks in to protect their own].

The Benefits Measurement Process

The two hardest activities of the Mission Alignment and Implementation Process are Observe and Orient, as defined within the OODA Loop (see A Model of an Organization’s Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture for the definitions of these tasks or functions of the OODA Loop).  To really observe the results and effects of a process transformation requires an organizational process as described, in part, by the CMMI Level 4 Key Practices or some of the requirements of the ISO 9001 standards.

As usual, I will submit to the reader that the key (culturally and in a business sense) to getting the organization to measure the success (benefits) of its investment decisions and its policy and management decisions is twofold.  The first high-level activity is a quick (and therefore necessarily incomplete) inventory of its Mission(s), Strategies, Processes, and tooling assets.  As I describe in Initially implementing an Asset and Enterprise Architecture Process and an AEAR, this might consist of documenting and inserting the data of the final configuration of each new transformation effort as it is rolled out into an AEAR during an initial 3-month period, and additionally inserting current Policies and Standards (with their associated Business Rules) into the AEAR.  Second, analyze the requirements of each effort (really, the metrics associated with the requirements) to determine the effort’s success metrics.  Using the Benefits Context Matrix, determine where these metrics are incomplete (in some cases), over-defined (in others), obtuse and opaque, or conflicting among themselves.  The Enterprise Architect would present the results of these analyses to management, together with recommendations for better metrics and more Process Effective transformation efforts (projects and programs).

The second high-level activity is to implement procedures and tooling to more effectively and efficiently observe and orient the benefits through the metrics (as well as the rest of the Mission Alignment/Mission Implementation Cycles).  Both of these activities should have demonstrable results (an Initial Operating Capability, IOC) by the end of the first 3-month Mission Alignment cycle.  The IOC need not be much, but it must be implemented, not some notional or conceptual design.  This forces the organization to invest resources in measurements of benefits and, perhaps, in identifying in which component the benefits exist: control, process, or mechanisms.

Initially, expect the results from the Benefits Metrics to be lousy, for at least three reasons.  First, the AEAR is skeletal at best.  Second, the organization and all the participants, including the Enterprise Architect, have a learning curve with respect to the process.  Third, the initial set of benefits metrics will not really measure the benefits, or at least will not measure them effectively.

For example, I have been told, and believe to be true, that several years ago the management of a Fortune 500 company chose IBM’s MQSeries as middleware to interlink many of its “standalone” systems in its fragmented architecture.  This was a good-to-excellent decision in the age before SOA, since the average maintenance cost for a business-critical custom link was about $100 per link per month and the company had several hundred business-critical links.  The IBM solution standardized the procedure for inter-linkage in a central communications hub using an IBM standard protocol.  Using the MQSeries communications solution required standardized messaging connectors.  Each new installation of a connector was a cost to the organization.  But, since connectors could be reused, IBM could rightly claim that the Total Cost of Ownership (TCO) for the inter-linkage would be significantly reduced.

However, since the “benefit” of migrating to the IBM solution was “Cost Reduction“, not Increased Process Effectiveness [RANT 9: Cost Avoidance in Finance Engineering parlance], Management and Finance Engineering (yes, both had to agree) directed that the company would migrate its systems.  That was good, until they identified the “Benefit Metric” on which the management would get their bonuses.  That benefit metric was “the number of new connectors installed“.  While it sounds reasonable, the result was that hundreds of new connectors were installed but few connectors were reused, because management was not rewarded for reuse, just for new connectors.  Finance Engineering took a look at the IBM invoice and had apoplexy!  It cost more in a situation where they had a guarantee from the supplier that it would cost less [RANT 10: And an IBM guarantee reduced risk to zero].  The result was that the benefit (increased cost efficiency) metric was changed to “the number of interfaces reusing existing connectors, or where not possible, new connectors”.  Since clear identification and delineation of metrics is difficult even for Increased Cost Efficiency (Cost Reduction), it will be more so for Increased Process Effectiveness (Cost Avoidance).
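The incentive problem in this story can be made concrete with a toy calculation; the interface and connector names are invented, and the two quantities correspond to the original and revised bonus metrics.

```python
# Invented interface-to-connector assignments after the migration.
connector_used = {"billing": "conn_A", "crm": "conn_A", "orders": "conn_B",
                  "shipping": "conn_A", "inventory": "conn_B"}

# Original bonus metric: count of distinct new connectors installed
# (rewards installing more connectors, not reusing them).
new_connectors = len(set(connector_used.values()))

# Revised metric: interfaces served by reusing an already-installed
# connector (rewards exactly what reduces TCO).
reused_interfaces = len(connector_used) - new_connectors
```

Under the original metric, splitting every interface onto its own new connector would maximize the bonus while maximizing the IBM invoice; the revised metric inverts that incentive.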

Having effectively rained on everyone’s parade, I still maintain that, with the support of the organization’s leadership, the Enterprise Architect can create a Transformation Benefits Measurement procedure with good benefit (Increased Process Effectiveness) metrics in 3 to 4 cycles of the Mission Alignment Process.  And customers requiring their suppliers to follow CMMI Level 5 Key Practices, SOA as an architectural pattern or functional design, together with Business Process Modeling and Business Activity Monitoring and Management (BAMM) tooling, will all help drive the effort.

For example, BAMM used in conjunction with SOA-based Services will enable the Enterprise Architect to determine such prosaic metrics as Process Throughput (in addition to determining bottlenecks) before and after a transformation. [RANT 11: Management and Finance Engineering are nearly psychologically incapable of allowing a team to measure a Process, System, or Service after it’s been put into production, let alone measuring these before the transformation.  This is the reason I recommend that Enterprise Architecture processes, like Mission Alignment, be short cycles instead of straight-through, one-off processes like the waterfall process–each cycle allows the Enterprise Architect to measure the results and correct defects in the transformation and in the metrics.  It’s also the reason I recommend that the Enterprise Architect be on the CEO’s staff, rather than a hired consulting firm.] Other BAMM-derived metrics might be the cost and time used per unit produced across the process, the increase in quality (decreased defects), up-time of functions of the process, customer satisfaction, employee satisfaction (employee morale increases with successful processes), and so on.  These all help the Enterprise Architect Observe and Orient the changes in the process due to the transformation, as part of the OODA Loop-based Mission Alignment/Mission Implementation process.
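A minimal sketch of the before/after throughput comparison that such tooling would automate; all of the observation numbers are invented.

```python
def throughput(completed_units, hours):
    """Units completed per hour over an observation window."""
    return completed_units / hours

# Invented observations over equal 40-hour windows:
before = throughput(400, 40)   # pre-transformation rate
after = throughput(540, 40)    # post-transformation rate
gain_pct = (after - before) / before * 100  # roughly a 35% gain
```

In a real BAMM deployment the unit counts would come from monitored service events rather than hand-entered figures, which is exactly the measurement infrastructure the rants above say rarely gets funded.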

Types of Requirements

Definition of a Requirement: A Requirement is a measurable expression of what a customer wants and for which the customer is willing to pay.  Therefore, a requirement has three attributes: it has a description of what the customer wants or …

Governance, and Policy Management Processes: the Linkage with SOA

Business Rules and Process Flow: In a recent post, A Model of an Organization’s Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture, I briefly described the Governance and the Policy Management processes within the context of t…

The hype cycle vs legacy…

I am going to talk about a consequence of the hype cycle that seems to be missed by many.  I will use an anecdote to illustrate the point…

About 10 years ago, I was engaged on a short assignment to review a new technology organization’s customer service programs.  The company had grown rapidly in a new market that was now maturing.  They had 3 customer service systems.  They had 4 main customer groups served through 4 sales channels.  Each channel used different business processes to execute the same activities and accessed all three systems.  The IT solutions had grown organically with the business and were a mess!  But now, with the market maturing, there were mainstream solutions from major suppliers that could replace these systems that were starting to constrain the business.  However, the cost of resolving this, $15M, was seen as too expensive.

A few years later, the organization had lost its competitive position, had moved from number 1 to number 2 and was taken over by a foreign competitor entering the market.  With a more complex product portfolio, more customer groups, a more complex sales model, the customer services systems were now seen as a major constraint to business growth.  I happened to be engaged through another consultancy to look at the problem again.  This time the cost of sorting it out had grown to $80M.  Again the executive board decided that this was too expensive.

Recently, the organization merged with a major competitor.  I heard that they had embarked yet again on a program to replace their legacy customer service systems.  The market is now much more complex with many more products, it is also more competitive with tighter margins.  The systems have grown in complexity since the last attempt to sort them out.  I suspect the cost this time will be $150M or more.  A ten times increase in cost in ten years. More importantly, the organization was the market leader 10 years ago but now it is in 3rd position with a likely drop to 4th.
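Taking the anecdote's figures at face value ($15M then, a suspected $150M now), the ten-times increase over ten years implies roughly 26% compound annual growth in the cost of deferring the fix:

```python
# $15M -> ~$150M over roughly 10 years: the implied compound annual
# growth rate in the cost of the deferred legacy replacement.
initial_cost_m, final_cost_m, years = 15.0, 150.0, 10
annual_growth = (final_cost_m / initial_cost_m) ** (1 / years) - 1
# about 0.26, i.e. the remediation cost grew roughly 26% per year
```

That rate far exceeds any plausible cost of capital, which is the quantitative version of the lesson that deferring legacy replacement is the expensive option.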

The key point is that everything you build before good practice emerges is likely to be poorly designed and poorly built. It should be thrown away and you should start over. If you don’t, you will inevitably perpetuate bad practice. Future development will be compromised by the time pressures of delivering tactical business change and by the constraints of the legacy. And the cost of replacing it to deliver strategic business change will grow over time.

Sometimes I wonder why there is so much legacy. The answer is obvious if you overlay the adoption cycle with the hype cycle…

So what are the lessons:

  • It is never too late to sort out your legacy
  • Don’t build on bad practice
  • The so-called first-mover advantage can be a handicap
  • Build knowledge before building solutions

Managing Requirements from a Business Analyst or an Enterprise Architect perspective using BABOK 2.0 and/or TOGAF 9

Many Business Analysts use the IIBA’s BABOK 2.0 (Business Analysis Body of Knowledge), which describes a Requirements Management process running from identifying the organizational situations that give rise to a project, through the requirements gathering process, to delivering a solution to the business or a client. TOGAF 9, from an Enterprise Architecture viewpoint, likewise provides techniques for gathering requirements and delivering business solutions. This paper illustrates the two processes, defines the mapping between the two approaches, and identifies gaps in each.

BABOK 2.0 Knowledge Area (KA) 4 covers Requirements Management and Communication, which “describes the activities and considerations for managing and expressing Requirements to a broad and diverse audience” (the other areas, such as Plan Requirements Management Process and Requirements Analysis, will not be covered here).

The tasks in this KA “are performed to identify business needs (the why of the project, whereas requirements are the how), state the scope of the business solutions, ensure that all stakeholders have a shared understanding of the nature of these solutions, and that those stakeholders with approval authority are in agreement as to the requirements that the business solution shall meet”.

It manages a baseline, tracks different versions of requirements documents, and traces requirements from origin to implementation.

This area includes five steps described below.

1. Manage Solution Scope and Requirements

In this step, we “obtain and maintain consensus among stakeholders regarding the overall solution scope and the requirements that will be implemented”. Requirements may be baselined following approval and signoff. This means that all future changes are recorded and tracked, and that the current state may be compared to the baselined state. Subsequent changes to the requirements must follow a Change Management process and will require additional approval. As changes are approved, a Requirements Management Plan may require that the baselined version of the requirements be maintained in addition to the changed requirement. Additional information is often maintained as well, such as a description of the change, the person who made it, and the reason for it. As requirements are refined or changed in the light of new information, those changes are tracked too.
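As a concrete illustration, the baseline-and-track mechanics described above can be sketched in a few lines. This is a minimal sketch only; the class and field names are my own invention, not BABOK terminology:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Change:
    description: str   # what changed
    author: str        # who made the change
    reason: str        # why it was made
    when: date

@dataclass
class Requirement:
    req_id: str
    text: str
    baselined_text: Optional[str] = None        # snapshot frozen at signoff
    history: List[Change] = field(default_factory=list)

    def baseline(self) -> None:
        """Freeze the approved wording so later versions can be compared to it."""
        self.baselined_text = self.text

    def revise(self, new_text: str, author: str, reason: str) -> None:
        """Apply an approved change and record who changed what, and why."""
        self.history.append(
            Change(f"{self.text!r} -> {new_text!r}", author, reason, date.today())
        )
        self.text = new_text

req = Requirement("REQ-042", "Orders are confirmed within 24 hours")
req.baseline()                                   # approval and signoff
req.revise("Orders are confirmed within 4 hours",
           author="J. Doe", reason="New SLA agreed with sales")
print(req.baselined_text)  # the baselined state, kept alongside the change
print(req.text)            # the current state
```

The point of the sketch is simply that the baselined wording and the change log live beside the current text, so the comparison the BABOK calls for is always possible.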

A signoff formalises acceptance by all stakeholders that the content and presentation of the documented requirements are accurate and complete. This can be done in a face-to-face meeting.

2. Manage Requirements Traceability

Traceability consists of understanding the relationships between business objectives, requirements, stakeholders, and other deliverables and components, in order to support business analysis among other activities. It also allows us to document “the lineage of each requirement, its backward and forward traceability, and its relationship to other requirements”. The reasons for creating relationships are “Impact Analysis” and “Requirements coverage and allocation”. A coverage matrix may be used to manage tracing.
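One lightweight way to realise backward and forward traceability is to store trace links as pairs, from which a coverage matrix can be derived. The identifiers below are made up for illustration:

```python
# Trace links as (source, target) pairs; a coverage matrix falls out of them.
links = {
    ("OBJ-1", "REQ-010"),    # business objective -> requirement
    ("REQ-010", "TEST-07"),  # requirement -> verifying deliverable
    ("REQ-011", "TEST-08"),
}

def covered_by(item):
    """Forward traceability: everything directly derived from this item."""
    return {dst for src, dst in links if src == item}

def derived_from(item):
    """Backward traceability: everything this item directly traces back to."""
    return {src for src, dst in links if dst == item}

print(covered_by("OBJ-1"))      # {'REQ-010'}
print(derived_from("TEST-07"))  # {'REQ-010'}
```

Impact analysis then amounts to following `covered_by` transitively from a changed item, and coverage checking to verifying that every requirement has at least one forward link.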

3. Maintain Requirements for re-use

Requirements re-use is another important aspect of the process: there is a need to manage knowledge of requirements following their implementation and to identify those requirements that are candidates for long-term usage by the organisation. “These may include requirements that an organisation must meet on an ongoing basis, as well as requirements that are implemented as part of a solution” (e.g. regulatory or contractual obligations, quality standards, service level requirements, etc.). Each will have to be clearly named, defined, and available to all analysts.
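The “named, defined, and available to all analysts” criterion amounts to a small shared repository keyed by name and searchable by category. A minimal sketch, with entirely hypothetical entries:

```python
# A tiny re-use repository: requirements keyed by name, with tags for retrieval.
repository = {}

def register(name, definition, tags):
    """Store a reusable requirement under a clear name, with its definition."""
    repository[name] = {"definition": definition, "tags": set(tags)}

def find(tag):
    """Let any analyst retrieve candidate requirements by category."""
    return sorted(name for name, req in repository.items() if tag in req["tags"])

register("Data retention", "Customer records are kept for 7 years", {"regulatory"})
register("Uptime", "Service is available 99.9% of the time", {"service-level"})
print(find("regulatory"))  # ['Data retention']
```

In practice this would live in a proper requirements repository tool, but the contract is the same: a stable name, a definition, and an index that makes the entry findable.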

4. Prepare Requirements Package

This step consists of selecting and structuring a set of requirements “in an appropriate fashion to ensure that the requirements are effectively communicated to, understood and usable” by the various stakeholders. This Requirements Package could take different forms, such as documentation (which can be managed in a Requirements Repository), presentations, templates, etc.

5. Communicate Requirements

This step covers the communication of requirements to the various stakeholders in order to reach a common understanding. New requirements may emerge during this communication and have to be considered.

The BABOK bundles Requirements Communication together with Requirements Management.

Requirements Analysis is another KA, which describes “how we progressively elaborate the solution definition in order to enable the project team to design and build a solution that will meet the needs of the business and stakeholders. In order to do that, we have to analyze the stated requirements of our stakeholders to ensure that they are correct, assess the current state of the business to identify and recommend improvements, and ultimately verify and validate the results”. Since BABOK 2.0 Requirements Analysis is not really covered within TOGAF 9, no comparison is made for it at this stage.

Within TOGAF 9, the objective of the Requirements Management activity is to define a process whereby all kinds of requirements, including most notably business drivers, concerns, and new functionality and change requests for Enterprise Architecture are identified, stored, and fed into and out of the relevant Architecture Development Method (ADM) phases. As such it forms part of the activities and steps carried out in each of the ADM Phases. Architecture requirements are subject to constant change, and requirements management happens throughout the entire Enterprise Architecture implementation lifecycle.

It is important to note that the Requirements Management circle denotes not a static set of requirements, but a dynamic process.

As indicated by the Requirements Management circle at the centre of the ADM graphic, the ADM is continuously driven by the Requirements Management process.

Enterprise Architecture has specific techniques to gather requirements. TOGAF as a framework uses a method based on what we call a “Business Scenario” which is used heavily in the initial phases A & B of the ADM to define the relevant business requirements and build consensus with business management and other stakeholders.

A Business Scenario ensures that there is a complete description of the business problem in both business and architectural terms: individual requirements are viewed in relation to one another in the context of the overall problem; the architecture is based on a complete set of requirements that add up to a whole problem description; and both the business value of solving the problem and the relevance of potential solutions are clear.

Below is a mapping between the two approaches.

BABOK 2.0 sets up a framework for requirements development and management, and it has emerged as a standard used by many organizations around the world. Between TOGAF 9 and BABOK 2.0 there is almost a 1:1 correspondence, although TOGAF 9 may offer more detail and more activities. TOGAF is a methodology whereas BABOK is methodology-agnostic, so it can be tricky to translate between the two, but nothing prevents an Enterprise Architecture team from using these analogous techniques.

If an organization follows the TOGAF methodology and its Business Analysts use BABOK, the latter will provide a lot of useful information as a reference; BABOK, however, won’t give you direction for an Enterprise Architecture.

Sources: Chapter 4 IIBA’s BABOK 2.0, TOGAF 9

Rhizome: On Dilemmas in Enterprise Architecture Planning

In the field of IS and management we often put forward a certain conception of the organisation, the social. In contemporary business consulting and management academia, the organisation is often conceptualised as a hierarchical open system with a certain body of knowledge supplying the management system with rational decision making. Other alternative, academic approaches are influenced by literature studies and Gadamer’s hermeneutics (Gadamer, 1975), promoting the need for understanding and context, the particular, rather than the universal and manageable. Emerging from these two spectra, each fighting for their own conception of subjectivity and objectivity, Anglo-Saxon and continental philosophy, each defining the criteria for truth and meaning, one uncovers systems theory and cybernetics which proposes a model for generalising structures and properties between different phenomena in the world: Bertalanffy (Bertalanffy, 1969) suggests a unified cybernetic model for living, mechanic, and social systems, whereas von Foerster (Von Foerster, 2003) and Luhmann (Luhmann, 1995) suggest a second order model based on constructed observation and interpretation. In organisation studies, first and second order systems theory each postulate their own conception or construction of social reality: Parsons defines the social as actions or events referring to each other within a structural organising of social functions, whereas Luhmann flips the tin can with a functional organisation of social structures based on communication, reproducing and sustaining itself through Maturana and Varela’s concept of autopoiesis (Maturana & Varela, 1980).

Amidst IS management’s–and thus EA’s–attempt to establish a common, trans-disciplinary foundation for research, there appears to be an ontological schism of what the social is and organisations really are. Is it a collective intelligence or logic of rational decision making? Is it a reactive, intersubjective collective attempting to make sense of the world in hindsight through history and culture? Or is it a system or a construction of a system that organises, structures, or communicates through constant adaption and recursive reproduction only by reference to its own recursion and reproductivity? The latter approach dissolves the former two boundaries by creating a boundary of distinction even more important than the understanding subject itself at the edge of every possible system. It is the distinction between system and environment that generates or fabricates meaning and truth, but it comes at the cost of reducing our very own processes of cognition and sensemaking to a set of vibrating antennas or satellites mounted at the fragile surface of every human system.  

An ontology of the social is thus far from complete. Enterprise Architecture (EA) seeks to address this by building layers of abstraction and control, thereby assuming that static systems models of socio-technical relations yield manageability and transparency. Accountability is achieved by linking formal role descriptions to process models and system landscapes, often positioned in a well-defined hierarchy and stored in a database repository for later reference and reuse. In order to reuse ‘best practices’ and assure a certain level of maturity in framework and methodology, enterprises often implement their architecture practice against existing reference frameworks and enterprise meta-models. Frameworks such as FEAF even include a CMMI-like maturity model for EA, which assesses the success of an architecture program by measures such as completeness and integration. The OMB, the US Federal Office of Management and Budget, has furthermore published a set of measures of architectural completeness for evaluating US Federal agencies. The highest achievement, level 5, is the architectural utopia in which the organisation practicing EA corrects its own business failures by architectural inspection. Architecture is here synonymous with optimising an organisation.


Given the above reflections on what the social really is, is it really philosophically reasonable to suggest that a stable, decomposable, hierarchical model, which most enterprise meta-models really are, is capable of building a comprehensive model of the social? Is it really meaningful to stretch virtually any organisation, be it government or private, along a five-level diagram and measure it by how well described architectural elements are? And what happens when the Federal agency hits the ceiling after 5? Those are clever and important questions that information and organisation science ought to ask. Unfortunately, that is seldom the case. Maturity models, in the classic form of a five-step ladder, are an inherent part of any contemporary management/IS theory: process maturity, architecture maturity, service maturity, integration maturity. The five-step Capability Maturity Model (Paulk, 1995) has its roots in systems engineering carried out by engineers building space shuttles for NASA. As universal as it may be, the problems, issues, and solutions faced by modern organisations are far more muddy, messy, and ill-defined than those originally faced by defence contractors and DoD bureaucrats. Such fast-paced, deep problems are also characterised as wicked problems (Rittel & Webber, 1973):
  1. Wicked problems have no definitive formulation. One can infer that the problem exists, but will never be able to fully document the problem.
  2. The solution to a wicked problem is “good or bad” not “true or false”.
  3. Every possible solution is a one-shot operation, as every solution attempt will leave a trace which cannot be undone.
  4. Each wicked problem is unique and may eventually be the symptom of another, underlying wicked problem.
Through my previous research, I have suggested a systems-theoretical approach towards understanding and explaining EA. Systems theory is helpful for describing the messy complexity of social and communicative structures. Second-order systems theory adds a rich, dynamic theory for understanding communication inside and outside the organisation by describing the exchange of utterances between human actors in search of meaning (Jensen, 2010). I believe, however, that these two key conceptions of enterprise planning and governance can furthermore be extended into a general theory of EA by including Deleuze’s theory of the rhizome.

Deleuze (Deleuze & Guattari, 1988) describes the rhizome structure (Deleuze & Guattari, 1976) as a meaningful alternative to uncovering complex structures, be they social or biological. Western society, Deleuze explains, has built its historicity and philosophy on the basis of binary structures: true-false, yes-no, top-bottom, maturity-immaturity. Contemporary EA frameworks are, in fact, highly binary: layers separated by clear boundaries, processes with a start and end, structured organisation charts and capability maps with a top and bottom. The rhizome is a viable alternative since it assumes an inherent complexity of what it is intended to describe. The rhizome is constantly transforming and morphing itself, making it virtually impossible to map out its structure completely at any point in time. This is exactly how wicked problems occur. Wicked, messy problems could, in fact, be described as rhizomatic structures. The rhizome structure applies well to the socio-technical nature of organisations as well, as the dissipative relationships between humans, technology, and organisation structures form a complex, dynamic, and transforming entity with no clear, formal, or necessarily logical order. This rhizomatic relationship is probably best explained in the field of technology adoption and diffusion in private enterprises where traditional positivist approaches to management and innovation struggle to explain how and why technology trends emerge and behave. This reflection on Deleuze leads to the following important claim:

Organisations are complex, dissipative structures constantly transforming complex human knowledge and social relationships. A rhizomatic systems model satisfies such a conception of organisational reality. Hence, Enterprise Architecture, in its search for whole-of-enterprise views, should adopt rhizomatic theory for uncovering and understanding the true messiness of organisations as socio-political habitats.


Understanding Enterprise Architecture as a rhizomatic systems practice, however, must come at the cost of killing certain darlings. The first darling is the idea of organisations as stable structures operating on explicit, verifiable knowledge, which in turn can be divided into clear architectural layers and segments. The second darling is the conception of a universal maturity model explaining the natural progression towards “EA nirvana”. There is no such thing.
  1. Layers, segments, and hierarchical models depart from a Westernised, binary view of the world. Layering suggests decomposability and abstraction of organisational complexity. A rhizome does not have such properties. The messy, social facets of organisational life cannot be decomposed or functionally abstracted. The social does not have a single function and thus cannot be functional. Wicked problems, as they emerge from social interactions and organisational problems, are rhizomatic and cannot be explained fully through rationalist models.
  2. Maturity models are inherently binary. They suggest a natural progression towards the optimal stage 5 somewhat similar to a tree as it stretches its branches towards the rising sun. The rhizome is the exact opposite of a tree structure as its roots and shoots grow and form in any direction, shrouding and shifting its original structure. Social structures, apart from the general statistical patterns uncovered by social psychology, do not follow universal laws of transformation or branching—and hence it is impossible and meaningless to suggest a generic, universalistic maturity model of social behaviour in EA adoption and planning. There is no such nirvana of Enterprise Architecture—and if there ever were, it would be constantly shifting and transforming depending on the current managerial climate, problems of planning, and struggle for control inside the organisation. Exactly this relationship of management, planning, and control is rhizomatic as well.

For Enterprise Architecture to fully accept these sacrifices, it must adopt a view of the enterprise as a non-linear, interconnected multiplicity, whose structures can only be meaningfully traced and described in hindsight. Traces always remain interpretations. Enterprise modelling involves tracing organisation structures, but as these structures are traced and interpreted, they suddenly shift and transform into a different multiplicity. Enterprise Architecture is thus a semiotic practice of tracing and interpreting organisations as complex signs. Its outputs, the long-term plans, roadmaps, and meta-models, are merely simplified pictures of these dissipative signs. Only by accepting these aspects of enterprise reality can Enterprise Architecture truly characterise the challenges and solutions in strategic planning and enterprise management.

References:
Bertalanffy, L. v. (1969). General System Theory: Foundations, Development, Applications. New York: G. Braziller.
Deleuze, G., & Guattari, F. (1976). Rhizome: Introduction. Paris: Éditions de Minuit.
Deleuze, G., & Guattari, F. (1988). A Thousand Plateaus: Capitalism and Schizophrenia. London: Athlone Press.
Gadamer, H.-G. (1975). Truth and Method. London: Sheed & Ward.
Jensen, A. O. (2010). Government Enterprise Architecture Adoption: A Systemic-Discursive Critique and Reconceptualisation. Copenhagen Business School.
Luhmann, N. (1995). Social Systems. Stanford, Calif.: Stanford University Press.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. Dordrecht; Boston: D. Reidel Pub. Co.
Paulk, M. C. (1995). The Capability Maturity Model: Guidelines for Improving the Software Process. Reading, Mass.: Addison-Wesley Pub. Co.
Rittel, H., & Webber, M. (1973). ‘Dilemmas in a General Theory of Planning’, Policy Sciences 4.
Von Foerster, H. (2003). Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer.