Should We Kill The Architecture Review Board?

OK… I’ll say it.  The whole idea of an Architecture Review Board may be wrong-headed.  That officially puts me at odds with industry standards like CobiT, ongoing practices in IT architecture, and a litany of senior leaders that I respect and admire.  So, why say it?  I have my reasons, which I will share here.

CobiT recommends an ARB?  Really?

The CobiT governance framework requires that an IT team create an IT Architecture Board (PO3.5).  In addition, CobiT suggests that an IT division create an IT Strategy Committee at the board level (PO4.2) and an IT Steering Committee (PO4.3).  So what, you ask?

The first thing to note about these recommendations is that CobiT doesn’t normally answer the question “How.”  CobiT is a measurement and controls framework.  It sets a framework for defining and measuring performance.  Most of the advice is focused on “what” to look for, and not “how” to do it.  (There are a couple of other directive suggestions as well, but I’m focusing on these).

Yet CobiT recommends that three boards exist in a governance model for IT, and specifically these three boards.

But what is wrong with an ARB?

I have been a supporter of ARBs for years.  I led the charge to set up the IT ARB in MSIT and successfully got it up and running.  I’m involved in helping to set up a governance framework right now as we reorganize our IT division.  So why would I suggest that the ARB should be killed?

Because it is an Architecture board.  Architecture is not special.  Architecture is ONE of the many constraints that a project has to be aligned with.  Projects and Services have to deliver their value in a timely, secure, compliant, and cost effective manner.  Architecture has a voice in making that promise real.  But if we put architecture into an architecture board, and separate it from the “IT Steering Committee” which prioritizes the investments across IT, sets scope, approves budgets, and oversees delivery, then we are setting architecture up for failure.

Power follows the golden rule: the guy with the gold makes the rules.  If the IT Steering committee (to use the CobiT term) has the purse strings, then architecture, by definition, has no power.  If the ARB says “change your scope to address this architectural requirement,” they have to add the phrase “pretty please” at the end of the request.

So what should we do instead of an ARB?

The replacement: The IT Governance Board

I’m suggesting a different kind of model, based on the idea of an IT Governance Board.  The IT Governance Board is chaired by the CIO, like the IT Steering committee, but is a balanced board containing one person who represents each of the core areas of governance: Strategic Alignment, Value Delivery, Resource Management, Risk Management, and Performance Measurement.  Under the IT Governance Board are two, or three, or four, “working committees” that review program concerns from any of a number of perspectives.  Those perspectives are aligned to IT Goals, so the number of working committees will vary from one organization to the next.

The key here is that escalation to the “IT Governance Board” means a simultaneous review of the project by any number of working committees, but the decisions are ALL made at the IT Governance Board level.  The ARB decides nothing.  It recommends.  (That’s normal.)  But the old IT Steering Committee goes away as well, replaced by an IT Steering working committee that also decides nothing.  It recommends.  Both of these former boards become working committees.  You can also have a Security and Risk committee, and even a Customer Experience committee.  You can have as many as you need, because Escalation to One is Escalation to All.
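To make the escalation mechanics concrete, here is a minimal sketch of the model; the committee names and the shape of the review are my own illustration, not a prescribed design:

```python
# Illustrative sketch: working committees review an escalated project,
# but only the IT Governance Board decides. Committee names are examples.

class WorkingCommittee:
    """Reviews an escalated project from one perspective; it only recommends."""
    def __init__(self, name):
        self.name = name

    def review(self, project):
        # A real committee would apply its own criteria; here we just
        # record a recommendation keyed by perspective.
        return (self.name, f"recommendation on {project!r}")

class ITGovernanceBoard:
    """Balanced board: escalation to one committee means review by all."""
    def __init__(self, committees):
        self.committees = committees

    def escalate(self, project):
        # Simultaneous review by every working committee...
        recommendations = [c.review(project) for c in self.committees]
        # ...but the decision is made only at the board level.
        return {
            "project": project,
            "recommendations": recommendations,
            "decision": "made by the IT Governance Board",
        }

board = ITGovernanceBoard([
    WorkingCommittee("Architecture"),      # the former ARB
    WorkingCommittee("Steering"),          # the former IT Steering Committee
    WorkingCommittee("Security and Risk"),
])
result = board.escalate("CRM replatform")
```

The point of the structure is visible in the return value: every perspective is recorded, but the decision field belongs to the board alone.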

The IT Governance board is not the same as the CIO and his or her direct reports.  Typically IT functions can be organized into many different structures.  Some are functional (a development leader, an operations leader, an engagement leader, a support leader, etc.).  Others are business relationship focused (with a leader supporting one area of the business and another leader supporting a different area of the business, etc.).  In MSIT, it is process focused (with each leader supporting a section of the value chain).  Regardless, it would be a rare CIO who could afford to set up his leadership team to follow the exact same structure as needed to create a balanced governance model.

In fact, the CIO doesn’t have to actually sit on the IT Governance board.  It is quite possible for this board to be a series of delegates, one for each of the core governance areas, that are trusted by the CIO and his or her leadership team. 

Decisions by the IT Governance board can, of course, be escalated for review (and override) by a steering committee that is business-led.  CobiT calls this the IT Strategy Committee and that board is chaired by the CEO with the CIO responsible.  That effectively SKIPS the CIO’s own leadership team when making governance decisions.

And that is valuable because, honestly, business benefits from architecture.  IT often doesn’t.

So let’s consider the idea that maybe, just maybe, the whole idea of an ARB is flawed.  Architecture is a cross-cutting concern.  It exists in all areas.  But when the final decision is made, it should be made by a balanced board that cares about each of the areas that architecture impacts… not in a fight between the guys with the vision and the guys with the money.  Money will win, every time.

Standardization is a 15 Letter Word

My old friend Dan French penned an interesting blog last month. He was thinking about how the common answer today to the question of how to drive efficiency (and thus profit) in big companies revolves around transformation and standardization of common processes, a shift to shared services, and common, single-instance global ERP systems. Dan pushes back against this orthodoxy using a metaphor from a sailing friend, suggesting that the very interesting question to ask of all these initiatives is “will it make the boat go faster?” A leading indicator of success in ocean racing is to have a fast, and consequently light, boat. The corollary in business is the question of whether independence and individual business-unit responsibility actually delivers more than enterprise-level process standardization.

Dan doesn’t actually answer the question; he merely asks how many global leaders in their industries take this particular “road less travelled.”

I discussed the metaphor with one of my customers the other day, and he said that in his enterprise, individual business-unit responsibility has been a disaster in IT terms. The huge number of legacy applications represents years of local decisions, of reinventing the wheel over and over again. He used another interesting metaphor, suggesting his enterprise was like a tractor pull: the more successful they were, the greater the weight they had to drag behind them, and slowly but surely that ever-increasing weight is killing them.

And of course this question is relevant to pretty much every large enterprise in the world. As you get big and successful, the agile operating models of the early years need to mature. But frequently these enterprise-level transformation initiatives fail, or take excessive time and cost to deliver. The promise of standard COTS products frequently turns into a multi-year nightmare of lowered expectations, massively increased costs, and under-performing business processes.

So enterprises engaged in tractor-pull-like efforts need transformation, but would probably prefer a fleet of individual racing yachts to a single Titanic. But surely there are other options? Is there not a middle course?

At CBDI we advocate a service architecture supporting a hierarchy of services, with more stable, standardized services in the lower (business-state-managing) layers and more agile, adaptable services in the higher (capability and process) layers. In this model, variations across business units are managed by patterns that vary behaviors depending upon context and business rules. And there are numerous highly successful examples of this strategy. But is this sufficient? Is this model sufficiently lightweight and agile, like a racing yacht?
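One way to picture that layering in code. The layer roles follow the paragraph above, while the order domain, rule table, and business units are invented purely for illustration:

```python
# Sketch of the layered model: a stable lower-layer service wrapped by a
# higher-layer capability whose behavior varies by business-unit context.
# The approval rules and business units here are hypothetical.

def update_order_state(order, status):
    """Lower layer: stable, standardized business-state management."""
    order["status"] = status
    return order

# Higher layer: context-driven variation expressed as business rules,
# rather than as forked copies of the underlying service.
APPROVAL_RULES = {
    "retail":    lambda order: order["value"] < 10_000,    # auto-approve small orders
    "wholesale": lambda order: order["value"] < 250_000,
}

def process_order(order, business_unit):
    """Capability layer: same underlying service, behavior varies by context."""
    rule = APPROVAL_RULES.get(business_unit, lambda o: False)
    status = "approved" if rule(order) else "needs_review"
    return update_order_state(order, status)

retail_order = process_order({"value": 5_000}, "retail")
wholesale_order = process_order({"value": 500_000}, "wholesale")
```

Both business units share one state-managing service; only the rule table differs, which is the essence of managing variation by pattern rather than by duplication.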

What I see emerging is an evolution of SOA in which everything, at all levels, is componentized and managed in a framework that coordinates concurrent standardization and differentiation.  The figure below shows that the key to standardization and differentiation is the configuration catalog that governs the use of components as configuration items.

Probably the most important element of the framework is the Reference Model. Frankly, most companies invest insufficient effort in this area. A good reference model is NOT a clone of the OASIS, CBDI, or TOGAF meta-models; it’s an enterprise-specific model that defines how a specific enterprise works. By all means use existing models for the basics, but they must be customized to reflect the specific enterprise. Good reference models are hard work, but they have incalculable value, because you are doing the heavy lifting at the very earliest stage. The reference architecture then clarifies the types of component and their relationship to delivered business processes and solutions.

The central part of the framework is the mechanism by which standardization and differentiation are managed. A Configuration Catalog is effectively a logical version of the CMDB. It records the Configuration Items available for a Capability (and it’s a long list, including business processes, workflows, services, implementation components, portlets, templates, policies, and so on). Each Configuration Item is attributed with flexibility options and, crucially, constraints (policies). The Configuration Catalog is effectively a Software Factory Schema, except that its scope is much broader than software development, covering the entire range of artifacts, including COTS products and internal and external services as well as custom applications. The Configuration Catalog is the core driver of architectural integrity, providing active governance over the configuration and reconfiguration of components. If a business unit believes it must diverge from the standard configuration, the question will be: is this purely localized behaviour, or is the localized requirement actually part of a broader demand signal? Can the localized behaviour be accommodated within the enterprise reference model, in which case further configuration is not compromised, or is this a genuine one-off?
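A minimal sketch of what a catalog entry might record, assuming invented field and item names; a real catalog (or CMDB) would carry far richer metadata:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a catalog entry: each Configuration Item carries
# its declared flexibility options and its constraining policies.
@dataclass
class ConfigurationItem:
    name: str
    item_type: str                       # service, workflow, policy, portlet, ...
    flexibility_options: list = field(default_factory=list)
    constraints: list = field(default_factory=list)  # policies that may not be violated

    def divergence_allowed(self, requested_option):
        """A local variation is acceptable only if it is a declared option."""
        return requested_option in self.flexibility_options

catalog = {
    "shipment-tracking": ConfigurationItem(
        name="shipment-tracking",
        item_type="service",
        flexibility_options=["regional-carrier-plugin"],
        constraints=["must-log-to-central-audit"],
    ),
}

item = catalog["shipment-tracking"]
# Localized behaviour already accommodated by the reference model: allowed.
ok = item.divergence_allowed("regional-carrier-plugin")
# A request outside the declared options: a genuine one-off, so escalate.
one_off = item.divergence_allowed("bypass-audit-log")
```

The divergence check is the "active governance" step: a business unit's local requirement is tested against the declared options before anyone forks a component.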

Over the years I have advised many companies on setting up asset repositories in pursuit of asset management and reuse. While the asset repository is important in recording capability and dependency, it is typically not used as an integral part of active governance right throughout the life cycle, commencing at the planning stage.

For large companies with extended, federated operations, the Configuration Catalog capability represents a significant capability-maturity improvement over “basic SOA”. In fact, it’s an implementation of a broader component-based model that orchestrates a wider range of assets. It allows active management of standardization and variation across business units and geographies: encouraging freedom of action where it’s appropriate, and exerting mandatory standardization where necessary.

Managing standardization is not a trivial task; it is a 15-letter word, after all. But mandating de facto industry-standard COTS packages for strategic business operations is unlikely to drive competitive advantage. Active governance over localization and standardization will allow a concurrent loose/tight enterprise that facilitates a heterogeneous portfolio incorporating, if you will permit the metaphor, fast ocean-racing yachts alongside super tankers to accommodate a range of business requirements. And it will make the overall boat go faster!

Ref: Dan’s Blog

Time-to-Release – the missing System Quality Attribute

I’ve been looking at different ways to implement the ATAM method these past few weeks.  Why?  Because I’m looking at different ways to evaluate software architecture and I’m a fan of the ATAM method pioneered at the Software Engineering Institute at Carnegie Mellon University.  Along the way, I’ve realized that there is a flaw that seems difficult to address. 

Different lists of criteria

The ATAM method is not a difficult thing to understand.  At its core, it is quite simple: create a list of “quality attributes” and sort them, highest to lowest, into the priority order that the business wants.  Get the business stakeholders to sign off.  Then evaluate the ability of the architecture to perform according to that priority.  An architecture that places a high priority on Throughput and a low priority on Robustness may look quite different from an architecture that places a high priority on Robustness and a low priority on Throughput.

So where do we get these lists of attributes?

A couple of years ago, my colleague Gabriel Morgan posted a good article on his blog called “Implementing System Quality Attributes.”  I’ve referred to it from time to time myself, just to remind myself of a good core set of System Quality Attributes that we can use for evaluating system-level architecture, as required by the ATAM method.  Gabriel got his list of attributes from “Software Requirements” by Karl Wiegers.

Of course, there are other possible lists of attributes.  The ISO defined a set of system quality attributes in the standards ISO 25010 and ISO 25012.  They use different terms.  Instead of System Quality Attributes, there are three high-level “quality models,” each of which presents “quality characteristics.”  For each quality characteristic, there are different quality metrics.

Both the list of attributes from Wiegers and the list of “quality characteristics” from the ISO are missing a key point: “time to release” (or time to market).

The missing criterion

One of the old sayings from the early days of Microsoft is: “Ship date is a feature of the product.”  The intent of this statement is fairly simple: you can only fit a certain number of features into a product in a specific period of time.  If your time is shorter, the number of features is shorter. 

I’d like to suggest that the need to ship your software on a schedule may be more important than some of the quality attributes as well.  In other words, “time-to-release” needs to be on the list of system quality attributes, prioritized with the other attributes.

How is that quality?

I kind of expect to get flamed for making the suggestion that “time to release” should be on the list, prioritized with the likes of reliability, reusability, portability, and security.  After all, shouldn’t we measure the quality of the product independently of the date on which it ships? 

In a perfect world, perhaps.  But look at the method that ATAM proposes.  The method suggests that we should create a stack-ranked list of quality attributes and get the business to sign off.  In other words, the business has to decide whether “Flexibility” is more, or less, important than “Maintainability.”  Try explaining the difference to your business customer!  I can’t.

However, if we create a list of attributes and put “Time to Release” on the list, we are empowering the development team in a critical way.  We are empowering them to MISS their deadlines if there is a quality attribute higher on the list that needs attention.

For example: let’s say that your business wants you to implement an eCommerce solution.  In eCommerce, security is very important.  Not only can the credit card companies shut you down if you don’t meet strict PCI compliance requirements, but your reputation can be torpedoed if a hacker gets access to your customer’s credit card data and uses that information for identity theft.  Security matters.  In fact, I’d say that security matters more than “going live” does. 

So your priority may be, in this example:

  • Security,
  • Usability,
  • Time-to-Release,
  • Flexibility,
  • Reliability,
  • Scalability,
  • Performance,
  • Maintainability,
  • Testability, and
  • Interoperability.
     

This means that the business is saying something very specific: “if you cannot get security or usability right, we’d rather you delay the release than ship something that is not secure or not usable.  On the other hand, if the code is not particularly maintainable, we will ship anyway.”

Now, that’s something I can sink my teeth into.  Basically, the “Time to Release” attribute is a dividing line.  Everything above the line is critical to quality.  Everything below the line is good practice.

As an architect sitting in the “reviewer’s chair,” I cannot imagine a more important dividing line than this one.  Not only can I tell if an architecture is any good based on the criteria that rises “above” the line, but I can also argue that the business is taking an unacceptable sacrifice for any attribute that actually falls “below” the line.
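The dividing-line idea is easy to mechanize. A small sketch using the example ranking above; the attribute names come from the post, while the function itself is my own illustration:

```python
# Split a stack-ranked attribute list at the "Time-to-Release" dividing line:
# everything above it is critical to quality (ship-blocking); everything
# below it is good practice (ship anyway, improve later).

def split_at_release(ranked_attributes, divider="Time-to-Release"):
    idx = ranked_attributes.index(divider)
    return ranked_attributes[:idx], ranked_attributes[idx + 1:]

# The example ranking from the eCommerce scenario above.
ranking = ["Security", "Usability", "Time-to-Release",
           "Flexibility", "Reliability", "Scalability", "Performance",
           "Maintainability", "Testability", "Interoperability"]

critical, good_practice = split_at_release(ranking)
# critical: attributes the business would delay the release for.
# good_practice: attributes the business will ship without, if it must.
```

Moving "Time-to-Release" up or down the list is then a precise statement of business intent: it redraws the line between ship-blocking quality and best practice.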

So, when you are considering the different ways to stack-rank the quality attributes, consider adding the attribute of “time to release” into the list.  It may offer insight into the mind, and expectations, of your customer and improve your odds of success.

How Enterprise Architects can cope with Opportunistic Failure

You may not think that Failure is a desired outcome, and on the surface, there are some negative connotations to failure.  It just sounds “bad” for a team to fail.  However, there are times when a manager will INTENTIONALLY fail in a goal.  Let’s look at the scenario where a manager may choose to fail, and see whether EA has a role in either preventing that failure, or facilitating it.

What is Opportunistic Failure?

Does your organization manage by objectives and scorecards?  Scorecards and metrics are frequently used management tools, especially in medium and large organizations.  In a Manage-By-Objective (MBO) organization, a senior leader is not told “how” to do something; rather, a negotiation takes place that results in the development of a measurable objective.  The term “measurable objective,” used here, is a well-defined idea.  It is specific, measurable, actionable, realistic, and time-bound (SMART).  A measurable objective is a description of the results that a senior manager is expected to achieve.  In BMM terms, the objective is the “ends” while the senior leader is expected to figure out the “means.”  In business architecture parlance, the objective describes the “what” while the senior leader is expected to figure out the “how.”

Now, in many situations, a senior leader has to rely on other groups to perform, in some way, in order to achieve his or her measurable objectives.  This is quite common.  In fact, I’d say that the vast majority of senior-level objectives require some kind of collaboration between the leader’s people and the people who work in other parts of the organization (or other organizations).

For a small percentage of those dependencies, there may be competition between the senior leader’s organization and some other group, and that is where opportunistic failure comes in.

The scenario works like this: 

A senior leader has an empowered team.  They can deliver on 30 capabilities.  Someone from outside his or her organization, perhaps an enterprise architect, points out that one of those capabilities overlaps with another group’s.  Let’s say it is the “Product Shipment Tracking” capability.  The EA instructs the senior leader to “take a dependency” on another department (logistics, in this case) for that capability.  The senior leader disagrees in principle, because he has people who do a good job of that capability and he doesn’t want to take the dependency.  However, he cannot convince other leaders that he should continue to perform the “product shipment tracking” capability in his own team.

So he contacts the other department (logistics) and sets up an intentionally dysfunctional relationship.  After some time, the relationship fails.  The senior leader goes to his manager and says “we tried that, and it didn’t work, so I’m going to do it my way.”  No one objects, and his team gets to keep the capability.

In effect, the senior leader felt it was in his own best interest to fight the rationale for “taking a dependency,” but instead of fighting head-on, he pretended to cooperate, sabotaged the relationship, and then got the desired result when the effort failed.  This is called “opportunistic failure.”

Thoughts on Opportunistic Failure

Opportunistic failure may work in the favor of anyone, even an Enterprise Architect.  However, as an EA, I’ve most often seen it used by business leaders to ensure that they are NOT going to be asked to comply with the advice of Enterprise Architecture, even when that advice makes sense from an organizational and/or financial standpoint.

In addition, one of the basic assumptions of EA is that we can apply a small amount of pressure to key points of change to induce large impacts.  That assumption collapses in the face of opportunistic failure.  Organizations can be very resistant to change, and this is one of the ways in which that resistance manifests. 

Therefore, while anyone, including an EA, could benefit from opportunistic failure, I’ll primarily discuss ways to address the potential for a business leader to use opportunistic failure to work against the best interests of the enterprise.

  1. Get senior visibility. – If a business leader is tempted to use opportunistic failure to resist good advice, get someone who is two or more levels higher than that leader to buy in to the recommended approach.  This radically reduces the possibility that the business leader will take the risk to his or her career that this kind of failure may pose.
  2. Get the underlying managers in that senior manager’s team on board, and even better, get them to agree to the specific measures of progress that demonstrate success.  This creates a kind of “organizational momentum” that even senior leaders have a difficult time resisting.
  3. Work to ensure that EA maintains a good relationship with the business party that will come up short if the initiative does fail.  That way, they feel that you’ve remained on their side, and they will call on you to escalate the issue if it arises.
  4. Play the game – look for things to trade off with.  If the senior manager is willing to risk opportunistic failure, you may be able to swing them over to supporting the initiative if you can trade off something else that they want… perhaps letting another, less important, concern slide for a year.  

 

Conclusion

For non-EAs reading this post: EA is not always political… but sometimes it is.  Failing to recognize the politics will make you into an ineffective EA.  On the other hand, being prepared for the politics will not make you effective either… it will just remove an obstacle to effectiveness. 

Pat’s SOA Governance Prescription


Just recently, I was asked to provide some advice to a customer on how to adopt SOA Governance, specifically the Oracle Enterprise Repository (OER), in a step-wise and rational way.  It seemed like sage enough advice to publish here.

Here is what they were trying to do, which is similar to what other customers are doing:

  • Establish a single source of truth in a SOA Repository
  • Support on-shore/off-shore distributed teams from a single repository
  • Manage service artifacts (e.g., projects, service design documents, policy definitions, …)
  • Enable SOA program managers to manage the service portfolio and service demand
  • Enable SOA program managers with related reports (e.g., demand, reuse, compliance and exceptions, dependency/impact analysis, …)

So it can be successful, but you don’t want to boil the Governance Ocean, at least not all at once.  In short, I’d advise getting a firm understanding of which services you want to govern (probably not all of them) and the types of things you want Governance to do for you.  Once you have that, you can move forward in a stepwise approach that reduces the effort AND the complication.  Realize that installing OER is only a small part of the puzzle.  You need to have the right org structure (official or unofficial) in place, and the right incentives and rewards to help motivate people to “do the right thing,” such as reusing services instead of writing their own.  Then you need the right processes for people to follow.  It’s the notion that:

 Governance = PEOPLE + TECHNOLOGY + PROCESSES


Let’s say, for discussion purposes, there are 50 key services to manage.  Here is what I’d do at a super-high level:

  • Add Project, Policy, and Classification Asset Types as needed (JUDICIOUSLY: keep it simple at first)
  • Add users in different roles
  • Get your top 50 key existing web services in OER using the Harvester if possible.  Otherwise just take some time to add them manually.  Make sure these are the relatively static PROXY services from OSB.
    • Be sure to assign one or more people to OER as administrators/architects to help keep things in order
  • Add the correct lifecycle stage to them
  • Add the right classifications/taxonomy to them
  • Add documentation such that developers know how to use the service (i.e. can download a doc or visio or whatever that explains it)
  • Add any and all XSD, WSDL and other files that people would need to download to actually use service
  • Add a section to the OER home page that explains the WL Gore SOA program, schedules, and contacts to users.  Make it a place people go for some critical PROGRAM-level information.  SELL what you are doing here…
  • Get developers used to using the tool through an in-house training
  • Use the reports to get a management view into the SOA program and help fund/support what you are doing
  • Then – start entering future state services to track as they go from inception to deployment in the life-cycle
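The first few steps above amount to capturing a small amount of metadata per service. The sketch below models that metadata only; the field names and service names are illustrative, not OER's actual schema or API:

```python
# Illustrative record of what the initial OER population captures per
# service. This is NOT the OER API, just a sketch of the metadata involved.

def register_service(registry, name, lifecycle_stage, classifications, artifacts):
    """Record one service with its lifecycle stage, taxonomy, and downloads."""
    registry[name] = {
        "lifecycle_stage": lifecycle_stage,   # e.g. "Release"
        "classifications": classifications,   # taxonomy terms
        "artifacts": artifacts,               # WSDL, XSD, usage docs, ...
    }
    return registry[name]

registry = {}
register_service(
    registry,
    name="CustomerLookupProxy",               # a relatively static OSB proxy service
    lifecycle_stage="Release",
    classifications=["customer-domain", "approved-for-reuse"],
    artifacts=["CustomerLookup.wsdl", "CustomerLookup.xsd", "usage-guide.docx"],
)

# A developer checks what is needed to actually consume the service:
needed = registry["CustomerLookupProxy"]["artifacts"]
```

The payoff of the manual steps is exactly this: a developer deciding whether to reuse a service can find its lifecycle stage, its classification, and every artifact needed to consume it in one place.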

 

Things that add complexity that you can add later IF they add value to what you are trying to do:

  • Install OSR / set up
  • Enable publishing to OSR
  • Set up harvesting of SOA/BPEL projects
  • Set up/enable automated approval workflows
  • Synch up performance metrics from OSB or OEM back into OER
  • Assign CAS (custom access settings) settings to individual assets

And so on.  But add these later after the basics are down.

So, I hope this helps anyone else who wants to begin a SOA Governance effort using OER (with OSR, OSB, and OEM as secondary stages after initial success).