5 years, 8 months ago

The Digital Future: Services Oriented Architecture and Mass Customization, Part 3 B

From Previous PartsPart 1 discussed the four ages of mankind.  The first was the Age of Speech; for the first time humans could “learn by listening” rather than “learn by doing”; that is, data could be accumulated, communicated, and stored by ve…

6 years, 7 months ago

Hurricanes and an Architecture for Process

The idea for this post came from a previous post on this blog regarding enterprise architecture and religious organizations.

Definitions or Definitional Taxonomies

Three definitions are needed to understand this post: the definition of Architecture, process using the OODA Loop, and the Hierarchy of Knowledge.  These are somewhat personal definitions, and you may interpolate them into your own model universe.

Architecture

Architecture—A functional design of a product, system, or service incorporating all the “must-do” and “must-meet” requirements for that product, system, or service.

OODA Loop

All processes supporting the mission, strategies, or infrastructure of any organization fall into the OODA model of Col. John Boyd.  The OODA Loop, as discussed in several previous posts, includes four steps: Observe, Orient, Decide, and Act.  I will discuss this shortly (below) using the example of the prediction of a hurricane.

A Taxonomy for the Hierarchy of Knowledge

I defined the following taxonomy of knowledge based on much reading and some thinking.  I will demonstrate what I mean for each definition by using it in the appropriate place in my example of the hurricane prediction.
Datum—some observation about the Universe at a particular point in four dimensions

Data—a consistent set of datums

Information—patterns abstracted from the data.

Knowledge—identified or abstracted patterns in the information.

Wisdom—the understanding of the consequences of the application of knowledge

Process Architecture and Forecasting Hurricanes

Since the process for predicting or forecasting hurricanes is most likely to be familiar to anyone watching weather on TV, it is the best example I can think of to illustrate how the OODA Loop process architecture works with the taxonomy of knowledge.

Observe

Initially, data is gathered by observing some aspect of the current state of the Universe.  This includes data about the results of the observer’s previous actions.  In point of fact:

Datum—some observation about the Universe at a particular point in four dimensions

Data—a consistent set of datums

Obviously, observations of current temperature, pressure, humidity, and so on are basic to making weather forecasts, like the prediction of a hurricane.  And each observation requires latitude, longitude, height above sea level, and the exact time to be useful.  So one datum, or data point, would include temperature, latitude, longitude, height, and time.

When the temperature is measured at many latitudes and longitudes concurrently, it produces a set of temperature data.  If other weather measurements, like pressure and humidity, are taken at the same locations at the same time, then you have several data sets with which to make a forecast (or a composite weather data set for a particular time).  Because this is a recurrent measurement process, the weather folk continue to build a successively larger composite weather data set.  But large data sets don’t make a forecast of a hurricane.
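As a rough sketch, the datum and data-set definitions above might be modeled like this.  The class, field names, and sample values are my own illustrative assumptions, not anything from a real weather service:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Datum:
    """One observation about the Universe at a particular point in four dimensions."""
    quantity: str    # what was measured, e.g. "temperature"
    value: float
    latitude: float
    longitude: float
    height_m: float  # height above sea level, in metres
    time_utc: str    # observation time, ISO 8601

def make_data_set(datums: List[Datum]) -> List[Datum]:
    """Data: a consistent set of datums (same quantity, same time)."""
    if not datums:
        return []
    first = datums[0]
    return [d for d in datums
            if d.quantity == first.quantity and d.time_utc == first.time_utc]

readings = [
    Datum("temperature", 28.4, 25.8, -80.2, 2.0, "2017-09-08T12:00Z"),
    Datum("temperature", 29.1, 24.5, -81.7, 3.0, "2017-09-08T12:00Z"),
    Datum("pressure", 990.0, 25.8, -80.2, 2.0, "2017-09-08T12:00Z"),
]
temps = make_data_set(readings)  # the pressure datum is filtered out
```

Filtering on quantity and time captures the "consistent" part of the definition: a data set is only useful for forecasting if its members describe the same measurement at the same moment.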

Orient

The Orient step in the process is inserting the new data into an individual’s model of how the world (Universe) works (descriptive) or should work (prescriptive).  These models are sometimes called paradigms.  Rules from Governance enable and support the Orient step by structuring the data within the individual’s or organization’s model.  Sometimes these models or paradigms stick around long after they have been demonstrated to be defective, wrong, or in conflict with most data and other information.  An example would be the Flat Earth Society’s model of the Universe.

Information—patterns abstracted from the data.  This is the start of orienting the observations, the data and information.  Pattern analysis, based on the current model, converts data into information; the model itself is derived from the organization’s model of its environment or Universe.  For hurricane forecasting this would mean looking for centers of low pressure within the data.  It would also include identifying areas with high water temperatures in the temperature data, areas of high humidity, and so on.  This and other abstractions from the data provide the information on which to base a forecast.  But it is still not the prediction of a hurricane.
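A minimal sketch of this piece of the Orient step: scanning pressure data for a candidate center of low pressure.  The toy grid and the 1,000 hPa cutoff are illustrative assumptions, not real forecasting practice:

```python
def find_low_pressure_center(grid):
    """grid: dict mapping (lat, lon) -> pressure in hPa.
    Returns (location, pressure) of the minimum if it looks like a low,
    else None."""
    location, pressure = min(grid.items(), key=lambda kv: kv[1])
    if pressure < 1000.0:  # assumed cutoff for a "center of low pressure"
        return location, pressure
    return None

grid = {(25.0, -80.0): 1012.0, (25.5, -80.5): 988.0, (26.0, -81.0): 1005.0}
print(find_low_pressure_center(grid))  # ((25.5, -80.5), 988.0)
```

The point is the shape of the step, not the meteorology: a pattern (a low) is abstracted from raw data (the grid), producing information a forecaster can reason about.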

Knowledge—identified, combined, or abstracted patterns in the information.  Using the same paradigm, environmental model, or model Universe, people analyze and abstract patterns within the information.  This is their knowledge within the paradigm.  In weather forecasting, the weather personnel use the current paradigms (or information structures) to combine and abstract the knowledge that a hurricane is developing.

When they can’t fit information into their model, they often discard it as aberrant, an outlier, or an anomaly.  When enough information doesn’t conveniently fit into their model, the adherents have a crisis.  In science, at least, this is the point of a paradigm shift.  And this is what has happened to weather forecasting models over the past one hundred and fifty years.  The result is that the forecasting has gotten much better, though people still complain when a storm moves off its track by fifty miles and causes unforecast wind, rain, and snow events.

Decide

Once the organization or individual has the knowledge, he or she applies that knowledge within their models of the Universe to make decisions.

Wisdom—the understanding of the consequences of the application of knowledge.

This is the hard part of the OODA Loop because it’s difficult to understand both the consequences and the unintended consequences of a decision.  If your paradigm, environment, or Universe model is good, or relatively complete, then you’re more likely to make a good decision.  More frequently than not people make decisions that are “Short term smart and long term dumb.” 

Part of the reason is that they are working with a poor, incomplete, or just plain wrong paradigm (view of the world or universe).  This is where the Risk/Reward balance comes in.  When choosing a path forward, what are the risks and rewards of each path?  [Sidebar:  A risk is an unknown, and it is wise to understand that “you don’t know what you don’t know”.]  For weather forecasters, it’s whether or not to issue a forecast for a hurricane.  To ameliorate the risk that they are wrong, weather forecasters have invented a two-part forecast: a hurricane watch and a hurricane warning. [Sidebar: It’s split further by giving a tropical storm watch and warning.]

The reason for this warning hierarchy is that the weather forecasters and services are wise enough to know the public’s reaction to warnings that don’t pan out and lack of warnings that do; they understand the risks.  So when they give a hurricane warning, they are fairly sure that the area they have given the warning for will be the area affected by the hurricane.

Act

Once the decision is made people act on those decisions by planning a mission, strategies, and so on within their paradigm.

For governments and utilities in the U.S. this means putting preplanned hurricane preparations into effect.  For people this generally means boarding up the house, stocking up on food, water, and other essentials, or packing and leaving.  And for the drunk beachcomber this means grabbing the beers out of the fridge and either heading to the hills or back to the beach to watch the show.

Programmed and Unprogrammed Processes

In the 1980s I wrote a number of articles on process types.  After doing much research and much observation, I came to the obvious conclusion that there are two types of processes: Programmed, and those that I will call Unprogrammed, which include design, discovery, and creative processes. [Sidebar: These three sub-types differ in that design processes start from a set of implicit or explicit customer requirements, discovery processes start from inconsistencies in data or in thinking about and modeling the data, and creative processes really have no clear foundation and tend to be intuitive, chaotic, and apparently random.]

Programmed Processes

Programmed processes are a set of repeatable activities leading to a nearly certain outcome.  Almost all processes are of this type.  The classic example is automobile manufacturing and assembly.  Since they are repeatable, how does the process architecture model apply?
The short answer is to keep them repeatable and to increase their cost efficiency (that is, making them less expensive in terms of time and money to create the same outcome).

Keeping Programmed Processes Rolling

To keep your truck or car operating requires maintenance.  How much and when is an OODA Loop process.  For example, you really should check your tires, both for wear and pressure regularly; otherwise the process of driving can become far too exciting.  So you are in the “observe” step.
As the tires wear out or lose pressure too often, the data set becomes information triggering the “orient” step in the process.  At this point a driver starts to gather data on which tires are wearing fastest and where on the tire they are wearing.  This last point may indicate that tire rotation is all that’s needed.
Based on this data, our driver models the information (using the computer between his or her ears) to determine how much longer it’s safe to drive on the tires.  Based on the results of that model, the driver will “decide” to buy tires, rotate tires, or wait a little longer.  [Sidebar: Actually, many drivers will go through the buy/wait decision based on additional data, the cost of new tires, and their budget.]  The driver will then “act” on the decision, either getting new tires or waiting (which is really an action).
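A toy sketch of that “decide” step.  The tread-depth thresholds are illustrative assumptions (1.6 mm is a common legal minimum tread depth, but check your own jurisdiction), not a maintenance standard:

```python
def tire_decision(tread_depths_mm, budget_ok=True):
    """Decide among buy / rotate / wait, given tread depth per tire (mm)."""
    worst = min(tread_depths_mm)
    uneven = max(tread_depths_mm) - worst > 2.0  # assumed wear-spread cutoff
    if worst < 1.6:              # at or near the legal minimum: replace
        return "buy" if budget_ok else "wait"
    if uneven:                   # uneven wear: rotation may be all that's needed
        return "rotate"
    return "wait"

print(tire_decision([5.0, 4.8, 2.2, 5.1]))  # uneven wear -> "rotate"
print(tire_decision([1.2, 1.4, 1.3, 1.5]))  # worn out -> "buy"
```

The `budget_ok` flag is the sidebar’s point in miniature: the same knowledge can yield different decisions once additional data (the budget) enters the model.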
The point here is that like everything else in our Universe, over time, everything breaks down; so repeatable processes require an OODA Loop process to maintain them.

More Cost Efficient

However, the OODA Loop process architecture elucidated here is important for programmed processes for another reason.  For repeatable processes, the key OODA process is always how to make them faster or cheaper, or how to make a higher-quality product at the same or lower price, in the same or less time.
This is famously what Adam Smith described in his pin example in the first chapter of “The Wealth of Nations”. By creating an assembly-line process that he called “the division of labour”, he demonstrated that the same number of workers could produce hundreds of times more pins.  He went on to describe that each worker might now invent (or innovate, that is, refine or enhance) tools for that worker’s job. [Sidebar: Henry Ford went on to prove this in spades, or should I say Model-Ts.]  [Sidebar: To me, the workers’ inventing or innovating tools to help them in their jobs is quite interesting and often missed.  Tools are process or productivity multipliers.  In the military they are called force multipliers; you always want your enemy to bring a knife to your gunfight, because a rifle multiplies your force.  Likewise, the tooling in manufacturing can increase the quality of the product and the uniformity of what is produced, while reducing the cost and time to produce…fairly obvious.]
Both the “division of labour” and the creation of tooling require the use of the Architectural OODA Loop, which means that increasing the cost efficiency of manufacturing uses the OODA Loop with the knowledge taxonomy.

Unprogrammed Process

Unprogrammed processes have stochastic (creative and unplanned) activities within them.  There are really three sub-types of unprogrammed processes: design, discovery, and creative.  The key differences between the design, discovery, and creative processes are whether the process is driven by customer requirements and what those requirements are.

Design

The design process has customer requirements.  [Sidebar: As I’m using it, the design process also includes custom development, implementation, and manufacturing.]  It uses a semi-planned process; that is, program or project planning creates a plan to meet the objectives, but with latitude for alternative activities because there is significant risk.  The actual design activities within the programmed process are stochastic from a program management perspective.  That is, the creative element of these activities makes them less predictable, and therefore introduces programmatic risk with respect to cost and schedule.  Therefore, program management must use (some form of) the OODA architecture to manage the program or project.
The stochastic activities are themselves OODA Loop processes.  The designers have to identify (observe) detailed data on the functions they are attempting to create (the “must do” requirements), while working to the specifications (the “must meet” requirements) for the product, system, or service.  The designers then have to “orient” these (creating a functional design or architecture for the function) and “decide” which of several alternate proposed designs is “the best”.  Finally, they “act” on the decision.

Discovery

The requirements for research, risk reduction, and root cause analysis are generally unclear and may be close to non-existent.  One of my favorite research examples, because it’s well known and because it so clearly proves the point, is the Wright brothers’ research into and development of the aircraft.  In 1898 the brothers started their research efforts.  Starting with the data and information then publicly available, they built a series of kites.  With each kite, they collected additional data and new types of data.  They then used this to reorient their thinking and their potential design for the next kite.  They found so many problems with the existing data that they built a wind tunnel to create their own data sets on which to base their next set of designs.  By the end of 1902 they had created a glider capable of controllable manned flight, and by 1903 they had created the powered glider known as the Wright Flyer.  It took them at least two more cycles of the OODA Loop to develop a commercially useful aircraft.
Risk reduction also uses the OODA Loop.  A risk is an unknown, and it requires some type of research to convert the unknown into a known.  There are four alternatives.  First, the project or research team must decide whether or not to simply “accept” the risk; many times the team orients the observed risk in a risk/reward model and accepts it.  Second, the team can orient by determining whether there is knowledge, or knowledgeable people, that can convert the unknown into a known; “transferring” the risk in this way is another method of reducing it.  A third method is to “avoid” the risk, which means redesigning the product, system, or service, or changing the method for achieving the goal or target.  The final way is to “mitigate” the risk, which is nothing more nor less than creating a research project (see above) to convert the unknown into a known.
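The four alternatives, accept, transfer, avoid, and mitigate, could be sketched as a simple chooser.  The exposure formula, thresholds, and inputs below are illustrative assumptions, not a standard risk model:

```python
def risk_response(probability, impact, expert_available, can_redesign):
    """Pick one of the four risk responses for a single identified risk.
    probability and impact are normalized to [0, 1]."""
    exposure = probability * impact
    if exposure < 0.1:
        return "accept"    # low exposure: live with the unknown
    if expert_available:
        return "transfer"  # someone already knows how to make it a known
    if can_redesign:
        return "avoid"     # change the design so the risk never arises
    return "mitigate"      # research project: convert the unknown into a known

print(risk_response(0.2, 0.3, False, False))  # low exposure -> "accept"
print(risk_response(0.8, 0.9, False, True))   # -> "avoid"
```

Even this toy version shows the OODA shape: the exposure calculation is the orient step (placing the observed risk in a risk/reward model), and the returned string is the decision to act on.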
Likewise, root cause analysis is a research effort.  However, the target of this analysis is to identify why some function or component is not working the way it is supposed to work.  Again, it’s observing the problem or issue (that is, gathering data), orienting it through modeling the functions based on the data, deciding what the cause or causes of the problem are and how to “fix” it, and then acting on the decision.  Sounds a whole lot like the OODA Loop.

Creative

Creative processes, like theory building (both through extrapolation and interpolation) and those of the “fine arts” that come from emotions, also use the Architecture of Process defined in this post.  For some theories and all fine arts, “the requirements” come from emotion, based on an intuitive belief structure [Sidebar: a religious structure, see my post on the architecture of religion].  This intuitive structure provides “the data, information, and knowledge” on which the creativity builds.
However, scientific theories, at least, are sometimes based instead on inconsistencies in the results of the current paradigm.  The theorist attempts to define a new structure that will account for these aberrant data.

Why make a Deal about the Obvious?

To many it might seem that I’m making a big deal about what is obvious.  If that is the case, why hasn’t it been inculcated into enterprise architecture and used in the construction and refinement of processes and tooling to support the charter, mission, objectives, strategies, and tactics of all organizations?  Using formal architectural models, like the architecture for process presented in this post, will enable enterprise architects to “orient” their data and information more clearly, making Enterprise Architecture that much more valuable.

6 years, 11 months ago

Agility is not Speed

Agility is the ability to change direction quickly.

The paradox of agility is that the slower you are moving, the faster you can change direction.

Just going really fast, in the belief that speed is agility, can lead to a nasty, sudden stop.

image

Photo by Dan Masa

https://www.flickr.com/photos/danmasa/27160870210/in/dateposted/

creativecommons.org 2.0

7 years, 11 months ago

Agile Development And Data Management Do Coexist

A frequent question I get from data management and governance teams is how to stay ahead of or on top of the Agile development process that app dev pros swear by. New capabilities are spinning out faster and faster, with little adherence to ensuring compliance with data standards and policies.

Well, if you can’t beat them, join them . . . and that’s what your data management pros are doing, jumping into Agile development for data.

Forrester’s survey of 118 organizations shows that just a little over half of organizations have implemented Agile development in some manner, shape, or form to deliver on data capabilities. While they lag about one to two years behind app dev’s adoption, the results are already beginning to show in terms of getting a better handle on their design and architectural decisions, improved data management collaboration, and better alignment of developer skills to tasks at hand.

But we have a long way to go. The first reason to adopt Agile development is to speed up the release of data capabilities. And the problem is that, in the interest of speed, the other key value of Agile development, quality, gets shortchanged. So, while data management is getting it done, it may be sacrificing the value new capabilities bring to the business.

Let’s take an example. Where Agile makes sense to start is where teams can quickly spin up data models and integration points in support of analytics. Unfortunately, this capability delivery may be restricted to a small group of analysts that need access to data. Score “1” for moving a request off the list, score “0” for scaling insights widely to where action will be taken quickly.

Read more


9 years, 11 months ago

The Next Phase of Agile Development

Guest post by Alan Morrison It’s human nature for people to build sturdy structures to shield themselves from the unpredictability of the elements. But, if you are too sheltered for too long, you weaken your ability to continuously confront change. That’s the dilemma facing IT departments. Change is raining down on them, and they are having trouble continuously adapting. A term is gaining momentum in the IT community to describe an ideal state for IT […]

10 years, 3 months ago

Placing Architecture Properly into Scrum Processes

As I’m about to complete my share of a longer engagement on using Lean principles to improve the processes at an online services firm, it occurred to me that the efforts we undertook to properly embed Architecture practices into their Scrum process were novel.  I haven’t seen much written about how to do this in practice, and I imagine others may benefit from understanding the key connection points as well.  Hence this post.

First off, let me be clear: Agile software development practices are not at all averse to software architecture.  But let’s be clear about what I mean by software architecture.  In an agile team, most decisions are left to the team itself.  The team has a fairly short period of time to add a very narrow feature (described as a user story) to a working base of code and demonstrate that the story works.  The notion of taking a couple of months and detailing out a document full of diagrams that explains the architecture of the system: pretty silly. 

The value of software architecture is that key decisions are made about the core infrastructure of the system itself: where will generalizations lie?  Will a layered paradigm be used, and if so, what are the responsibilities of each layer?  What modules will exist in each layer and why will they be created? How will the responsibilities of the system be divided up among the layers and components?  How will the modules be deployed at scale?  How will information flow among the modules and between the system and those around it?

The way these questions are answered will indicate what the architecture of the system is.  There are many choices here, and the “correctness” of any choice is a balance between competing demands: simplicity, security, cost, flexibility, availability, reliability, usability, correctness, and many more.  (These are called “System Quality Attributes”).  Balancing between the system quality attributes takes thought and careful planning.

So when does this happen in an agile process?

Let’s consider the architect’s thinking process a little.  In fact, let’s break the software architecture process into layers, so that we can divide up the architectural responsibility a little.  You have three layers of software architectural accountabilities.  (Repeat: I’m talking about Software Architecture, not Enterprise Architecture.  Please don’t be confused.  Nothing in this post is specific to the practice of Enterprise Architecture.)  All this is illustrated in the diagram below.  (Click on the diagram to get something a little more readable.)

image

At the top, you have the Aligning processes of software architecture.  These processes consider the higher levels of enterprise architecture (specifically the business and information architecture) to create To-Be Software Models of the entire (or relevant) software ecosystem.  If you’ve ever seen a wall chart illustrating two dozen or more software systems with connectors illustrating things like flow of data or dependencies, you’ve seen a model of the type I’m referring to.  Creating and updating these kinds of diagrams is a quarterly or semi-annual process and reflects the gradual changes in the strategy of the enterprise. 

In the middle, you have the Balancing processes of software architecture.  These processes consider the needs of a single system but only from the level of deciding why the software will be divided up into modules, layers, and components, how that division of responsibility will occur, and what the resulting system will look like when deployed in specific technologies in a specific environment.  All of this can be conveyed in a fairly short document that is rich in diagrams with a small amount of text explaining the choices.  This occurs once when a system is moving forward, and the software architecture can be developed alongside the first couple of sprints as input to the third and subsequent sprints.

At the bottom, you have the Realization processes of software architecture.  This is where the architecture becomes software, and this is where decisions are made about the choice of specific design patterns, the appropriate level of configurability vs. simplicity, and the ability to demonstrate whether the actual intent of the architecture occurs in practice.  In agile, this layer occurs within the team itself.  The software architect can offer advice about what patterns to use, but it is up to the team to realize that advice and/or decide not to implement it.  The team will very likely implement the software architecture as described, but may choose to improve upon it.

What does the process look like

There are many visualizations of scrum running around.  Some are described in books, others in papers or blog posts.  Most share some common elements.  There is a product backlog from which, through the magic of sprint planning, the team extracts a subset for the sprint.  This becomes the sprint backlog.  The illustrations then tend to show the various rhythms of Sprint as cycles (sprint cycles and daily cycles), ending with a demonstration and retrospective.

In order to illustrate and train a team on all elements, including the business analysis elements, we chose to be a bit more explicit about the steps PRIOR to sprint planning, including the processes of creating and improving the stories prior to the start of a sprint.  (as above, click on the image to enlarge it).

image

Astute observers will notice that we added a step that we are calling “pre-sprint story review.”  This is a meeting that occurs one week prior to the start of a sprint.  It is called by the product owner, and he or she invites “senior chickens” (architects, user experience leads, development and test leads, and any other “non-team” members that want a say in the sprint).

In that week prior to sprint planning, those folks, working with the product owner, can improve the stories, add constraints, refine the description and acceptance criteria.  And here’s where the architects get to play.  Architects fulfilling the role of “Balancing” in the model above will have (or can create) an outline document describing the architecture of the software system, and can “link” that document to the SPECIFIC STORIES that are impacted by that design. 

(Note: assuming you are using a tool like Microsoft’s Team Foundation Server, that fully enables Scrum in multiple forms, this is a nearly trivial activity since a document can be easily linked to any story.  Enough advertising.)
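A sketch of that linking step, independent of any particular tool.  The class and field names here are my own illustrative assumptions, not TFS’s API:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Story:
    title: str
    acceptance_criteria: List[str] = field(default_factory=list)
    linked_docs: List[str] = field(default_factory=list)

def link_architecture(stories: List[Story], doc_path: str,
                      impacted_titles: Set[str]) -> None:
    """Attach the architecture outline only to the stories it impacts."""
    for s in stories:
        if s.title in impacted_titles:
            s.linked_docs.append(doc_path)

backlog = [Story("Add payment retry"), Story("Change button color")]
link_architecture(backlog, "docs/system-architecture.md", {"Add payment retry"})
print([s.linked_docs for s in backlog])
```

The design point is selectivity: the architecture document is not attached to the whole backlog, only to the specific stories it constrains, which is exactly what makes the pre-sprint review useful to the team.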

So is an architect a chicken or a pig?  Answer: it depends on what “layer” the architecture is at.  The top two layers are chickens.  The third layer, realization, is done by the team itself.  The person on the team may or may not have the title of “designer”.  (I’d prefer that they did not, but that’s just because I believe that ALL team members should be trained to fulfill that role.  In reality, the skill may not be widespread among team members.)  Therefore, the third layer is done by the pigs.

I hope this provides some insight into how a team can embed software architecture into their scrum cycles.  As always, I’m interested in any feedback you may wish to share.