8 years, 4 months ago

Are Business Process Management and Business Architecture a perfect match?

Whenever I suggest collaboration between these two worlds, I always observe some astonishment from my interlocutors. Many Enterprise Architects or Business Architects do not realise there may be synergies. Business Process Management (BPM) teams have not understood what Enterprise Architecture is all about, and the other way around. There is no single definition of Business Process Management; it often means different things to different people. To keep it generic, BPM relates to any activities an organization undertakes to support its process efforts.

There are many activities which can be included in such efforts:
· The use of an industry Business Reference Model (or Business Process Reference Model), a reference for the operational activities of an organization and a framework describing its functional lines of business, such as

o The Federal Enterprise Architecture Business Reference Model of the US Federal Government
o The DoD Business Reference Model
o The Open Group Exploration and Mining Business Reference Model (https://www.opengroup.org/emmmv/uploads/40/22706/Getting_started_with_the_EM_Business_Model_v_01.00.pdf)
o Frameworx (eTOM) for Telco companies
o The Supply Chain Operations Reference (SCOR®) model
o The SAP R/3 Reference Model
o The Oracle Business Models: the Oracle Industry Reference Model for Banking (IRM) and the Oracle Retail Reference Model
o And others…

· The use of organization specific Business Reference models
· The use of Business process improvement methodologies

o Lean, a methodology focused on eliminating waste and maximizing the value delivered to the customer, based on process understanding and process control
o Six Sigma, a quantitative, data-driven methodology that focuses on reducing defects in products or services delivered to clients through statistical evaluation

· Business Process Reengineering, which in reality is a facet of BPM
· The understanding of Business Change Management, the process that empowers staff to accept changes that will improve performance and productivity
· The understanding of Business Transformation, the continuous process, essential to any organization in implementing its business strategy and achieving its vision
· The use of Business Rules Management which enables organizations to manage business rules for decision automation
· The understanding of Business Process Outsourcing (BPO) services to reduce costs and increase efficiency
· The support of Business Process modeling and design, the illustrated description of business processes, usually created with flow diagrams. The model contains the relationships between activities, processes, sub-processes, and information, as well as roles, the organization, and resources. This can be done with many notations, such as flow charts, functional flow block diagrams, control flow diagrams, Gantt charts, PERT diagrams, and IDEF, and nowadays with de facto standard notations such as UML and BPMN
· The support of BPM tools and suites implementation. With the right tooling, process models can be simulated, used to drive workflow or BPMS systems, and used as the basis for an automated process monitoring system (BAM)
· The support of Business Activity Monitoring (BAM), the ability to have end-to-end visibility and control over all parts of a process or transaction that spans multiple applications and people in one or even more companies.

Combining Business Process Management and Enterprise Architecture for better business outcomes is definitely the way forward: BPM provides the business context, understanding, and metrics, while Enterprise Architecture provides the discipline to translate business vision and strategy into architectural changes. Both are needed for sustainable continuous improvement. When referring to Enterprise Architecture here, we mainly mean Business Architecture. Business Architecture involves more than just the structure of business processes. It also entails the organization of departments, roles, documents, assets, and all other process-related information.

Business Architects may be defining and implementing the Business Process framework and, in parallel, influencing the strategic direction for Business Process Management and improvement methodologies (e.g. Lean, Six Sigma). The business process owners and Business Analysts work within their guidelines at multiple levels throughout the organization’s business processes. They have roles and responsibilities to manage, monitor, and control their processes.
An important tool in developing Business Architecture is a Business Reference Model. These types of models are enormously beneficial. They can be developed in the organization to build and extend the information architecture. The shared vocabulary (verbal and visual) that emerges from these efforts promotes clear and effective communication.

To illustrate the touch points between Enterprise Architecture and Business Process Management, the table below shows the synergies between the two approaches using TOGAF® 9.


In this table, we observe that there is a perfect match between Business Process Management and the use of an Enterprise Architecture framework such as TOGAF. BPM is often project based, and the Business Architect (or Enterprise Architect) may be responsible for identifying cross-project and cross-process capabilities. BPM can be considered the backbone of an Enterprise Architecture program. We can also add that Service Oriented Architecture provides the core operational or transactional capability, while BPM does the coordination and integration into business processes.

When using BPM tools and suites, you should also consider the following functionalities: workflow, enterprise application integration, content management, and business activity monitoring. These four components are traditionally provided by vendors as separate applications, which BPM merges into a single application with high levels of integration. The implementation of a BPM solution should theoretically eliminate the maintenance and support costs of these four applications, thereby reducing the total cost of ownership.

Business Architecture provides the governance, alignment, and transformational context for BPM across business units and silos. Enterprise Architects, Business Architects, and Business Analysts should work together with BPM teams when approaching the topic of Business Process Management. BPM efforts need structure and appropriate methodologies. They need a structure to guide efforts at different levels of abstraction, separating “the what” (the hierarchical structure of business functions) from “the how” (how the desired results are achieved), and a documented approach for navigating among the business processes of the organization, i.e. a Business Architecture. They also need a methodology, such as an Enterprise Architecture framework, to retain and leverage what they have learned about managing and conducting BPM projects.

8 years, 4 months ago

Agility is Sensible 2011-08-26 19:02:00

First posted on Built-In-Chicago I ran into a VC from NYC at a conference yesterday. He said he wanted to see why the Chicago VC market hardly registers on the map. While I wasn’t sure what he was talking about yesterday, I went to WSJ VentureSo…

8 years, 5 months ago

Social Media and CRM

I am pretty late to the blogosphere about the differences between social media and CRM. But in customer meetings I see this kind of confusion all the time. So here goes. Oh, and since I mainly work in the travel industry, my examples come from there.So…

8 years, 5 months ago

Microphone ready events

I was working with some customers this week, and the topic of “how do you generate events?” comes up. Not how as in the dirty mechanics, but how when something is done manually today, getting that systematized. So, please bear with me on the following …

8 years, 5 months ago

The Cost of Rockets Built by NASA: Waterfall Process vs Short-cycle and Agile Processes

Shortcomings: This post is really not about the shortcomings of NASA; it’s more about the inevitably poor, high-cost deliverables when a) an organization loses its focus because of a constantly changing Vision and Mission; b) a development process fo…

8 years, 5 months ago

Towards Next Generation Process Execution

A considerable part of my professional time is spent on advising people on discovering, redesigning, and — if possible and feasible — automating their business processes. Often, trivial, cumbersome, and predictable processes can be formalised and executed using a process engine (also known as BPMS, Business Process Management Suites). Process execution is usually combined with an enterprise service bus (ESB) or middleware layer, which supplies data sources and exposes business transactions to the process layer in an open, reusable, and interoperable fashion.

BPMS is by no means a new concept. Most contemporary BPMS platforms began as workflow engines and CASE (Computer-Aided Software Engineering) tools, which subsequently found valuable use in the rising Java EE and enterprise application integration (EAI) markets of the 1990s and onwards. This, combined with an increasing interest in information management and enterprise integration, spawned today’s plethora of repository-based modelling tools, sophisticated middleware technology, and process execution platforms.

Looking into the crystal ball of process automation, what automation technologies can enterprises expect to see in the years to come? Now, here, any average IT analyst would probably come up with three all-too-often-adopted shrinkwrapped concepts:
  1. Cloud computing
  2. X-as-a-service
  3. Agile
To actually come up with something original or different, I have deliberately omitted these three terms from this blog entry. That is not to say that these trends aren’t influential or important, but they have already been covered elsewhere in thousands of blog posts, whitepapers, and academic papers. In the following sections I will present my stance towards aspects of next generation process automation.
Closed-Loop Roundtrip Engineering
Several toolchains support the so-called ‘roundtrip’ between repository-based enterprise modelling tools and implementation level process development tools. Too often this is a one-way exercise in which business processes are modelled by the business analyst, approved by the process owner and then exported to execution by the solution architect. However, once the process model hits the implementation floor, governance, roundtrip, and traceability are cut off. The process model is now materialised as source code rather than a visual model.

An improved, closed-loop roundtrip approach would bridge this gap by making the process integration point work both ways. This calls for an improved bridging strategy between the two worlds so that they ultimately merge into one. That is not to say that the designed process model must equal the executable process model; the enterprise repository should still provide different, role-based views onto the same processes. The point I am arguing is that full traceability from model to execution demands two-way traceability and unified, single-interface version control of all artefacts.
Light-weight Executable Process Models
Modelling standards such as BPMN 2.0 (Business Process Model and Notation) claim to provide a single, uniform language for modelling manual, semi-automated, and fully automated business processes. However, as several process practitioners have already emphasised, the notation is still far too rich and complex for non-technical professionals and businesspeople to fully comprehend. It is as if the notation, indulging in its own ambitions and adoption, has overreached and suddenly struggles to articulate all possible aspects of a process.

SOA practitioners struggled with the same complexity problems in the early 2000s. Everywhere, new and half-baked service standards emerged, and some “standards” even offered duplicate functionality. SOAP, meant to be a simple protocol for exchanging messages, morphed into a wilderness of WS-* standards, policy documents, and pseudo recommendations. As a counter-reaction, REST (Representational State Transfer) was adopted as a viable, lightweight, and easy-to-implement alternative to the WS-* conglomerate. REST’s elegance was its simplicity, very similar to how the simplicity of the TCP and IP network protocols defeated complex, proprietary network protocols such as DECnet and Tymnet. Useful, open standards are simple and easy to understand and communicate. WS-* was by no means a lightweight stack, just as BPMN 2.0 is too rich to be truly elegant.

What BPMS needs is a process modelling notation that is just as elegant as REST and TCP. The simpler the notation, the easier it is for business analysts to pick it up and understand a particular model. Fewer moving parts and modelling exceptions also imply that the designed process is easier to execute. Consider the source code necessary for parsing a WSDL schema with surrounding WS-Security artefacts compared to the lines of code necessary to retrieve and parse a JSON data structure across a TLS-encrypted wire. For execution, a lightweight process model format with different role-based process architecture views is necessary to accommodate easy-to-communicate and easy-to-execute process models.
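To make the comparison concrete, the JSON side of it really does fit in a handful of lines. The sketch below uses only Python’s standard library; the endpoint URL and the sample payload are hypothetical stand-ins, not a real service.

```python
import json
import urllib.request

def fetch_resource(url: str) -> dict:
    """Retrieve and parse a JSON resource over a TLS-encrypted wire.

    The whole client fits in a few lines: no WSDL parsing, no
    WS-Security header handling, no generated stubs.
    """
    with urllib.request.urlopen(url) as resp:  # an https:// URL gives TLS
        return json.load(resp)

# The hypothetical payload below stands in for what such an endpoint
# might return; parsing it is a single call.
payload = '{"process": "billing", "state": "completed", "items": 3}'
doc = json.loads(payload)
print(doc["state"])  # -> completed
```

The WSDL/WS-Security equivalent would involve schema parsing, envelope construction, and security-header plumbing before a single byte of business data is read.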

Process Variations
My third idea is the notion of modelling and execution of process variants. Several enterprise modelling tools (such as ARIS) support the idea of variant artefacts, which allows for a configuration item to be traced back to its reference artefact. This is particularly useful when mapping a process model or architectural layer against a set of reference architectures, which in turn allows for quick discovery and gap analysis of compliance requirements.

However, for some reason this idea has not yet made its way to process execution land. The majority of BPMS platforms treat process models as isolated, transactional entities. References are made through related events or by drilling into sub-process models. Process layering and variation are completely unknown concepts in the world of execution, despite their inherent adoption in enterprise modelling. Many enterprises struggle with the need to select and execute a particular process variant depending on a set of pre-conditions, whilst still being able to reflect that the instance belongs to a particular group of variants. An executable billing process might vary slightly depending on the type of the client currently being billed, but the process is still the billing process. Integrating process variations into BPMS theory adds depth and context to the executable process models, as opposed to pure, isolated workflows.
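As a sketch of what variant-aware execution could look like, the snippet below dispatches a billing request to one of two hypothetical variants based on a pre-condition (the client type), while each variant remains explicitly traceable to the single reference billing process. All names here are illustrative and not taken from any BPMS product.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProcessVariant:
    name: str
    reference: str              # the reference process this variant derives from
    run: Callable[[dict], str]  # the executable body of the variant

def bill_consumer(order: dict) -> str:
    return f"consumer invoice for {order['client']}"

def bill_corporate(order: dict) -> str:
    return f"corporate invoice with PO {order['po']}"

# Registry of variants, each traceable back to the "billing" reference process.
VARIANTS = {
    "consumer": ProcessVariant("billing/consumer", "billing", bill_consumer),
    "corporate": ProcessVariant("billing/corporate", "billing", bill_corporate),
}

def execute_billing(order: dict) -> str:
    variant = VARIANTS[order["client_type"]]  # the pre-condition selects a variant
    assert variant.reference == "billing"     # the instance is still the billing process
    return variant.run(order)

print(execute_billing({"client_type": "consumer", "client": "ACME"}))
```

The point of the sketch is the `reference` field: every executed instance can report both which variant ran and which reference process it belongs to, which is exactly what most execution engines fail to record.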

Process Regulation and Self-Reference 
The research field of control theory and cybernetics has long explored the properties of self-organising systems, which respond in a meaningful way to outside stimuli. Examples of cybernetic systems range from simple thermostats to complex jet-fighter engines, which monitor and regulate their current state depending on the environment (such as temperature or altitude). Similarly, researchers in business process management (BPM) and process engineering have explored the idea of self-regulating processes: business processes that monitor, adjust, and control their own state, activity, and performance based on the general condition of the overall enterprise. In manufacturing this would be a process that adjusts its production throughput automatically based on recent market trends received from the business intelligence system. Sales processes adjust their current inventory data based on market forecasts triggered by an external supplier. Car manufacturing robots make just-in-time adjustments to assembly-line activity after observing a major slump in the stock market five minutes ago. The modern enterprise is event-driven, interconnected, and immediately responsive.

However, in order for business processes to exploit this opportunity they need to become self-regulating or “self-aware.” Executable processes must be able to adjust their own complex states based on listeners and triggers from external events. This demands sophisticated complex event processing and a meta-process environment that allows easy and dynamic reconfiguration of process model layout, design, and performance based on external data. The change in state should not be limited to a pre-configured set of process patterns. Process models and metadata should automatically infer new possible process designs and subsequently select the most plausible design based on previous design choices, feedback, and execution data.
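A minimal sketch of such a self-regulating process, assuming a toy event format and a deliberately trivial adjustment rule standing in for real complex event processing:

```python
class SelfRegulatingProcess:
    """A manufacturing process that adjusts its own throughput when
    external market-trend events arrive (a toy model, not a real CEP engine)."""

    def __init__(self, throughput: float):
        self.throughput = throughput  # units produced per time period

    def on_event(self, event: dict) -> None:
        # A trivial stand-in for complex event processing: the process
        # observes an external signal and adjusts its own state.
        if event["type"] == "market_trend":
            self.throughput *= 1.0 + event["delta"]

proc = SelfRegulatingProcess(throughput=100.0)
proc.on_event({"type": "market_trend", "delta": -0.2})  # a 20% market slump
print(proc.throughput)
```

A production-grade version would replace the `if` statement with an event-processing layer correlating many streams, and the multiplication with a reconfiguration of the process model itself.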

To Be Continued
These considerations are only a small part of the ideas I have been collecting for next generation process automation, which could very well evolve into a general research programme on the future of BPMS. It is my opinion that we have reached a solid state of enterprise integration tools and middleware platforms. However, BPMS theory and practice is still in a state of flux: shiny new tools emerge every day, but the fact is that we have very little experience with designing, deploying, and maintaining complex, large-scale process applications. Granted, the general principles of software engineering and IS development theory still apply: effectively, most process applications are enterprise systems in the large, with a vast number of moving parts and integration points. However, in order to respond successfully to increasingly rapid changes in markets and requirements, we need faster, simpler, context-aware, and interconnected BPMS platforms driven by self-regulation and complex event processing. In my upcoming blog posts I will write more on this topic.

8 years, 5 months ago

Announcing Upcoming Book: Systems Thinking in Enterprise Architecture

Dr. John Gøtze and I have announced the forthcoming publication of a new book titled: Systems Thinking in Enterprise Architecture. The book, which targets the intersection of practitioners and academics, explores the important, notional relationship between Enterprise Architecture (EA), systems thinking, and cybernetics. A wide array of authors have been invited to contribute to the book resulting in a total of 20 chapters on the topic. 

Systems Thinking in Enterprise Architecture is still the working title of the book, and it is expected to change once all chapters have been reconciled. The book will be published in the Systems Thinking and Systems Engineering Series by College Publications, thus marking the successor to the remarkable volume 1, A Journey Through the Systems Landscape by Harold “Bud” Lawson, who is also one of the contributors to our book.
In order to read more about the book, please refer to the ITU Enterprise web site for deadlines, list of contributors, and publication guidelines. If you are interested in contributing, please do not hesitate to send us a draft manuscript!
8 years, 5 months ago

Short Cycle, Agile, Level of Effort efforts, and Changes in Roles and Responsibilities

In a recent post I briefly discussed the changes in roles and emphasis when a development or transformation effort changes from a waterfall (Big Bang) effort to a short cycle-agile effort.  This post will discuss the topic in more detail in terms …

8 years, 6 months ago

Product Architecture Thinking Versus System Architecture Thinking

Cultural Thinking about Architecture
Until the early 1960s, the discipline of architecture (or functional design) focused on the creation, design, development, and implementation of products like buildings, cars, ships, aircraft, and so on.  Actually, other than in buildings, most of the architects were called “functional designers”, or some such term, to differentiate them from detailed designers and engineers/design analysts.  This is part of the reason that most people associate architecture and architects with the design of homes, skyscrapers, and other buildings, but not with products, systems, or services.  In fact, architects themselves have a hard time identifying their role.

In the late 1990s, the US Congress mandated that all Federal Departments must have an Enterprise Architecture in order to purchase new IT equipment and software.  The thrust of the reasoning was that a Department should have an overall plan, which makes a good deal of sense.  I suspect the term “Enterprise Architecture” was chosen to denote the unification of the supporting tooling, though they could have used “Enterprise IT Engineering” in the manner of Manufacturing Engineering, which unifies the processes, procedures, functions, and methods of the assembly line.  And yet Enterprise Architecture means something more, as embodied in the Federal Enterprise Architecture Framework (FEAF).  The architecture team that created this framework recognized that processes, systems, and other tooling must support the organization’s Vision and Mission.  However, it is up to the organization and the Enterprise Architect to implement processes that can populate and use the data in the framework effectively.  And that’s the rub.

Functions vs Processes and Products vs Systems
In the late 1990s and early 2000s the DoD referred to armed drones as Unmanned Combat Air Vehicles (UCAVs), then in the later 2000s, they changed the name of the concept to Unmanned Combat Air Systems (UCAS).  Why?

There are three reasons, all having to do with changes in Western culture, the most difficult changes for any organization.  These are: 1) a change from linear process understanding to linear and cyclic understanding, 2) a change from thinking about a set of functions to understanding a function as part of a process, and 3) a change in thinking from product to system.

Linear vs Cyclic Temporal Thinking
Product thinking is creating something in a temporally linear fashion; that is, creating a product has a start and an end.  D. Boorstin, in the first section of his book The Discoverers, discusses the evolution of the concept of time, from its cyclic origins through the creation of a calendar to the numbering of years, to the concept of history as a sequence of events.  To paraphrase Boorstin, for millennia all human thinking and human society was ruled by the yearly and monthly cycles of nature.  Gradually, likely starting with the advent of clans and villages, a vague concept of a linear series of events formed.  Still, the cycles of life remain at the core of most societies (e.g., in the East, the Hindu cycles and the Chinese year, and in the West, Christmas and New Year and various national holidays).

The concept of history changed cultural thinking from cycles to a progression through a series of linear temporal events (events in time that don’t repeat and that cause other events to occur).  Over several centuries this concept of history permeated Western culture, breaking and flattening the temporal cycles into a flat line of events.  With this concept, and with data, information, and knowledge in the form of books, Western culture now had the ability to fully understand the concept of progress.  Adam Smith applied this concept to manufacturing in the form of a process, which divided the work into functions (events) and ended up producing many more products from the same inputs of raw materials, labor, and tooling.

Function vs Process

In Chapter 1 of Book I of An Inquiry into the Nature and Causes of the Wealth of Nations (commonly called The Wealth of Nations), Adam Smith discussed the concept of the “Division of Labour”.  This is the most important chapter of his book, and the Division of Labor is its most important concept, far more important than “the invisible hand” or any of the others.  This is because the concept of a process made from discrete functions is the basis for all of the manufacturing transformation of the Industrial Revolution.  Prior to this, the division of labor was an immature and informal concept; after, many cottage industrialists adopted the concept or were put out of business by those that did.

Adam Smith did this by using a very simple example, the making of straight pins.  In this example he showed that ten men, each serving in a specialized function, could together make vastly more pins in a day, upwards of 48,000, than they could if each man performed all the functions himself.  He called it the division of labor; we call it “functional specialization”.

Functional specialization of skills and tooling permeates Western culture and has led to greater wealth production than any prior concept.  Consequently, as Western civilization accreted knowledge, researchers, engineers, and skilled workers became more expert in their specialized functions and increasingly less aware of the rest of the process.

Currently, most organizations are structured by function: HR, accounting, contracts, finance, marketing or business development, and so on.  In manufacturing there are designers (detailed design engineers), engineers (analysts of the design), manufacturing engineers, and other Subject Matter Experts (SMEs).  Each of these functions vies with the others for funding to better optimize its particular function.  And most organizations allocate funding to these functions (or sometimes groups of functions) for this type of optimization.

Unfortunately, allocating funds by function is a very poor way to allocate funds.  There is a principle in Systems Engineering that “Optimizing the sub-systems sub-optimizes the system”.  J.B. Quinn, in “Managing Innovation: Controlled Chaos” (Harvard Business Review, May–June 1985), demonstrated this principle, as shown in Figure 1.

Figure 1–Function vs Process Funding
As shown in Figure 1 (at the bottom, where it is hard to see), for every unit of money invested in a single function, the organization will get, at best, one unit of improvement in the total process.  However, an investment that affects more than one function yields 2(N-1)-1 units of total improvement in the process.  So focusing investment on the process yields much better results than focusing on the function.  This is the role of the Enterprise Architect and of the organization’s process and systems engineers using the Mission Alignment process.  While this point was intuitively understood in manufacturing (e.g., assembly-line manufacturing engineering) for well over 150 years, and was demonstrated in 1985, functional management is still not willing to give up its investment-decision prerogative.
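The arithmetic above can be sketched as follows; note that reading N as the number of functions a single investment touches is my assumption, since the text does not define N explicitly.

```python
# Sketch of the investment arithmetic above. Assumption: N is the number
# of functions that a single unit of investment affects (my reading of
# Quinn's figure, not stated explicitly in the text).
def process_improvement(n_functions: int) -> int:
    """Units of total process improvement per unit of money invested."""
    if n_functions <= 1:
        return 1                        # function-local investment: at best 1 unit
    return 2 * (n_functions - 1) - 1    # cross-functional investment

for n in (1, 2, 3, 5, 8):
    print(n, process_improvement(n))    # improvement grows with functions touched
```

On this reading the payoff grows roughly linearly with the number of functions an investment spans, which is the argument for funding processes rather than functions.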
Product vs System

Influenced by The Wealth of Nations, from about 1800 on, industries, first in Britain, then across the Western world, and finally globally, used Adam Smith’s concept of a process as an assembly line of functions to create more real value than humankind had ever produced before.  But this value was in the form of products: things.  Developing new “things” is a linear process.  It starts with an idea, an invention, or an innovation.  It continues with product development through initial production and marketing.  Finally, if successful, there is a ramp-up of production, which continues until the product is superseded by a new one.  This is the Waterfall Process Model.

The organization that manufactured the product had only the obligation to ensure that the product would meet the specifications the organization advertised at the time the customer purchased it, and, in a very few cases, for a short period early in the product’s life cycle.  Generally, these specifications were so general, so non-specific, and so opaque that the manufacturing company could not be held responsible.  In fact, a good many companies that are over 100 years old exist only because they actually supported their products and specifications.  Their customers turned into their advertising agency.

This model is good for development (what some call product realization) and transformation projects, but it has two fatal flaws in the long term.  The first (as I discuss in my post Systems Engineering, Product/System/Service Implementing, and Program Management) is that the waterfall process is based on the assumption that “all of the requirements have been identified up front”; a heroic assumption to say the least, and generally completely invalid.  The second has equal impact and was caused by the transportation and communications systems of the 1700s to the 1950s.  This flaw is that “once the product leaves the factory it is no longer the concern of the manufacturer.”

This second flaw in historical/straight-line/waterfall thinking affects both the customer and the supplier.  The customer had, and has, a hard time keeping the product maintained.  For example, most automobile companies in the 1890s did not have dealerships with service departments; in fact, they did not have dealerships as such.  Instead, most automobiles were purchased by going to the factory or ordering by mail.  And even today, most automobile manufacturers don’t fully consider the implications of disposal when designing a vehicle.  They are thinking of an automobile as a product, not a system or system of systems (which would include the road system and the fuel production and distribution systems).  The flavor of this in the United States is its disposable economic thinking, in everything from diapers to houses (yes, houses: many times people purchase houses in the US housing slump and knock them down to build larger, much more expensive housing, at least in some major metropolitan areas).  Consequently, nothing is built to last; everything is a consumable product.

Systems Thinking and The Wheel of Progress
Since the 1960s, there has been a very slow but growing trend toward cyclic thinking within organizations.  Some of this is due to the impact of the environmental movement and ecosystem models.  More of this change in thinking is due to the realization that there really is a “wheel of progress”.  Like a wheel on a cart, the wheel of progress goes through cycles to move forward.
The “cycle” of the “wheel of progress” is the OODA Loop, that is, the Observe, Orient, Decide, Act (OODA) loop.  The actual development or transformation of a system occurs during the “Act” function.  This can be either a straight-line, “waterfall-like” process or a short-cycle, “RAD-like” process.  However, the loop only closes when the customer observes the transformed system in operation, orients the results of that observation against the organization’s Vision and Mission to determine whether the system is effective and cost efficient, and then decides whether to act during the rest of the cycle.  The key difference between product and systems thinking is that each “Act” function is followed by an “Observe” function.  In other words, there is a feedback loop to ensure that the output from the process creates the benefits required and that any defects in the final product are caught and rectified in the next cycle before they cause harm.  For example, Ford treated its Explorer SUV as a product rather than a system.  “Suddenly”, tire blowouts on the SUV contributed to accidents, in some of which the passengers were killed.  If Ford had treated the Explorer as a system, rather than a product, and kept metrics on problems that the dealers found, then it might have caught the problem much earlier.  Again, last year, Toyota, also treating its cars as products rather than systems, found a whole series of problems.

OODA Loop velocity
USAF Col. John Boyd, creator of the OODA Loop, felt that the key to success in both aerial duels and on the battlefield is moving through the OODA Loop cycle faster than your opponent.  Others have found that this works for businesses and other organizations as well.  This is the seminal reason to move to short-cycle development and transformation.  Short cycle in this case means 1 to 3 months, rather than the “yearly planning cycle” of most organizations.  Consequently, all observing, orienting, and deciding should aim for good enough, not optimal; there is no optimal.  [This follows the military axiom that Grant, Lee, Jackson, and even Patton followed: “Doing something now is always better than doing the right thing later.”]  Expect change, because not all of the requirements are known, and even if they were known, the technological and organizational (business) environment will change within one to three months.  But remember that the organization’s Mission, and especially its Vision, change little over time; therefore the performance metrics, the metrics that measure how optimal the current systems and proposed changes are, will change little.  These metrics are the guides in this environment of continuous change.  Plan and implement for upgrade and change, not stability: this is the essence of an agile system.

This is true of hardware systems as well as software.  For example, in 1954, Haworth Office Furniture started building movable wall partitions to create offices.  Steelcase and Herman Miller followed suit in the early 1960s.  From that point, businesses and other organizations could lease all or part of a floor of an office building, and as their needs changed these partitions could be reconfigured.  This made for agile office space, or office systems (and the bane of most office workers, the cubicle), and allowed the organization to make the most effective and cost-efficient use of the space it had available.

The Role of the Systems Engineering Disciplines
There are significant consequences for the structure of an organization that is attempting to be highly responsive to the challenges and opportunities presented to it while pursuing its Mission and Vision in a continuously changing operational and technical environment.  It has to operate and transform itself in an environment that is much more like basketball (continuous play) than American football (discrete plays from the scrimmage line, with its downs); apologies to any international readers for this analogy.  This requires continuous cyclic transformation (system transformation) as opposed to straight-line transformation (product development).

Treating Process in Product Thinking Terms
Starting in the 1980s, after the publication of Quality is Free by Phil Crosby in 1979, the quality movement, quality circles, and the concept of Integrated Product Teams (IPTs, which some changed to Integrated Product and Process Teams, IPPTs) have all been attempts to move organizations from a focus on product thinking toward a focus on systems thinking.  Part of this was in response to the Japanese lean process methods, stemming in part from the work of W. Edwards Deming and others.  The first international attempt, ISO 9000 (starting in 2002), is still quality in product-thinking terms, though in transition to systems thinking, since it is a one-time, straight-through (Six Sigma) methodology, starting with identifying a process or functional problem and ending with a change in the process, function, or supporting system.

Other attempts at systems thinking were an outgrowth of this emphasis on producing quality products (product thinking).  For example, the Balanced Scorecard (BSC) approach, conceptualized in 1987, attempts to look at all dimensions of an organization: it uses four dimensions to measure the performance of an organization and its management, rather than measuring performance on the financial dimension alone.  The Software Engineering Institute (SEI) built measurement into level four of the Capability Maturity Model for the same purpose.

In 1990, Michael Hammer began to create the discipline of Business Process Reengineering (BPR), followed by others like Tom Peters and Peter Drucker.  This discipline treats the process as a process rather than as a series of functions.  It is more like the Manufacturing Engineering discipline, which seeks to optimize processes with respect to cost efficiency per unit produced.  For example, Michael Hammer would say that no matter the size of an organization, its books can be closed at the end of each day, rather than by spending two weeks at the end of the business or fiscal year “closing the books”.  As another example, you can tell whether an organization is focused on functions or processes by its budgeting model: either a process budgeting model or a functional budgeting model.

Like the Lean concept and, to some degree, ISO 9000, ITIL, and other standards, BPR does little to link to the organization’s Vision and Mission, as Jim Collins discusses in Built to Last (2002), or as he puts it, BHAGs: Big Hairy Audacious Goals.  Instead, it focuses on cost efficiency (cost reduction through reducing both waste and organizational friction, one type of waste) within the business processes.

System Architecture Thinking and the Enterprise Architect
In 1999, work started on the Federal Enterprise Architecture Framework (FEAF) with a very traditional four-layer architecture: business process, application, data, and technology.  In 2001, a new version was released that included a fifth layer, the Performance Reference Model.  For the first time, the FEAF linked all of the organization’s processes and enabling and supporting technology to its Vision and Mission.  Further, if properly implemented, it can do this in a measurable manner (see my post Transformation Benefits Measurement, the Political and Technical Hard Part of Mission Alignment and Enterprise Architecture).  This enables the Enterprise Architect to perform in the role that I have discussed in several of my posts and in comments in some of the groups on LinkedIn: decision support for the investment decision-making process and support for the governance and policy management processes (additionally, I see the Enterprise Architect as responsible for the Technology Change Management process, for reasons that I discuss in Technology Change Management: An Activity of the Enterprise Architect).   Further, successful organizations will use a short-cycle investment decision-making (Mission Alignment) and implementing (Mission Implementation) process, for the reasons discussed above. [Sidebar: there may be a limited number of successful projects that need multiple years to complete.  For example, large buildings, new aircraft airframe designs, and large ships are all very large construction efforts, while some efforts, like the construction or reconstruction of highways, can be short-cycle efforts, much to the joy of the motoring public.]   The Enterprise Architect (EA), using the OODA Loop pattern, has continuous measured feedback as the change operates.
Although there will be a learning curve for all changes in operation, the Enterprise Architect is still in the best position to provide guidance as to what worked and what other changes are needed to further optimize the organization’s processes and tooling to support its Mission and Vision.  Additionally, because the EA is accountable for the Enterprise Architecture, he or she has the perspective of the entire organization’s processes and tooling, rather than just a portion, and is in the position to make recommendations on investments and governance.

System Architecture Thinking and the Systems Engineer and System Architect
One consequence of the short-cycle process is that all short-cycle efforts are “level of effort” based.  Level of Effort means a development or transformation effort is executed using a set level of resources over the entire period of the effort.  Whereas in a waterfall-like “Big Bang” process, scheduling the resources to support the effort is a key responsibility of the effort (and the PM), with the short cycle the work must fit into the cycles.  With the waterfall, the PM could schedule all of the work by adding resources or lengthening the time required to design, develop, implement, and verify; now the work must fit into a given time and level of resources, and the PM can vary neither, because both are held constant.
If, in order to make the process agile, we use the axiom that “not all of the requirements are known at the start of the effort”, rather than the other way around, then any scheduling of work beyond the current cycle is an exercise in futility, because as the number of known requirements increases, some of the previously unknown requirements will be of higher priority for the customer than any of the known requirements.  Since a supplier’s Mission is to satisfy the needs of the customer, each cycle will work on the highest-priority requirements, which means that some or many of the known requirements will be “below the line” on each cycle.  The final consequence of this is that some of the originally known requirements will not be met by the final product.  Instead, the customer will get the organization’s highest-priority requirements fulfilled.  I have found that when this is the case, the customer is more delighted with the product, takes greater ownership of the product, and finds resources to continue with the lower-priority requirements.
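The per-cycle prioritization just described can be sketched in a few lines of code. This is a toy illustration, assuming an invented backlog structure (requirement names, priorities, and effort figures are all made up for the sketch, not taken from the article):

```python
# Illustrative short-cycle planning: each cycle selects the highest-priority
# requirements that fit a fixed capacity (level of effort). Newly discovered
# requirements can outrank originally known ones, pushing them "below the line".

def plan_cycle(backlog, capacity):
    """Pick the highest-priority requirements that fit this cycle's capacity."""
    selected, used = [], 0
    for req in sorted(backlog, key=lambda r: r["priority"]):  # 1 = highest
        if used + req["effort"] <= capacity:
            selected.append(req)
            used += req["effort"]
    return selected

# Cycle 1: only the initially known requirements exist.
backlog = [
    {"name": "known-A", "priority": 3, "effort": 5},
    {"name": "known-B", "priority": 2, "effort": 4},
]
cycle1 = plan_cycle(backlog, capacity=8)   # selects "known-B"; "known-A" misses the cut

# Before cycle 2, a newly discovered requirement outranks the known ones,
# pushing "known-A" below the line again.
backlog = [r for r in backlog if r not in cycle1]
backlog.append({"name": "discovered-C", "priority": 1, "effort": 6})
cycle2 = plan_cycle(backlog, capacity=8)   # selects "discovered-C"
```

Note how “known-A” is deferred twice, not through any scheduling failure but because each cycle’s fixed capacity goes to whatever the customer currently ranks highest.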

On the other hand, not fulfilling all of the initially known requirements (some of which were not real requirements, and some of which contradicted other requirements) gives PMs, the contracts department, accountants, lawyers, and other finance engineers the pip!  Culturally, they are generally incapable of dealing in this manner; their functions are not built to handle it when the process is introduced.  Fundamentally, assuming that “not all the requirements are known up front” makes the short-cycle development process Systems Requirements-based instead of Programmatic Requirements-based.  This is the major stumbling block to the introduction of this type of process, because it emphasizes the roles of the Systems Engineer and System Architect and de-emphasizes the role of the PM.

The customer, too, must become accustomed to the concept, though in my experience on many efforts, once the customer understands his or her role in this process, the customer becomes delighted.  I had one very high-level customer who said, after the second iteration through one project, “I would never do any IT effort again that does not use this process.”