An Actionable Common Approach to Federal Enterprise Architecture

The recent “Common Approach to Federal Enterprise Architecture” (US Executive Office of the President, May 2, 2012) is timely, well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be beneficiaries of, or otherwise impacted by, the investment. The FEA Common Approach builds on the rapidly maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes – for example, the OMB’s Exhibit 300 (i.e. the Business Case justification for IT investments).

A particularly interesting element of this Approach is its much-needed guidance for actually using an Enterprise Architecture (EA) and/or its collateral – good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are helpful frameworks for organizing, understanding, communicating and standardizing vocabularies, architecture patterns and technology standards across agencies. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles hasn’t always been clear, however, particularly during the first interactions of a Program’s technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for an allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example, the introduction of new technologies that deliver significant cost savings).

The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300s or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. “Actionable” is the key word – too many Public Service entity EAs (including the FEA) have for too long been used simply as a highly abstracted standards reference, duly maintained and nominally enforced by an Enterprise or System Architect’s office.

Refreshing elements of this recent FEA Common Approach include one of the first Federally documented acknowledgements of the “Solution Architect” (the “problem-solving” role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual “EA Roadmap” documents. These are roadmaps grounded in real cost, technical and functional details, fully aligned with both contextual expectations (for example, the new “Digital Government Strategy” and its required roadmap deliverables) and the rapidly increasing complexity of today’s more portable and transparent IT solutions. We also expect some critical synergies to develop in early IT investment cycles between this new breed of “Federal Enterprise Solution Architect” and the first waves of the newly formal “Federal IT Program Manager” roles operating under more standardized “critical competency” expectations (including EA), which are likely already influencing the quality of annual CPIC (Capital Planning and Investment Control) processes.

Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain – Federal, State, Local, Tribal and otherwise.

AGILE & CMM : The Marilyn Monroe Connection (Part 3) : Some Like it Hot!

Here we are at the final part of our hot discussion – will Agile and CMM get hitched and become a potent combination? If you have followed the earlier parts of this story, you will share my excitement – for the … Continue reading

AGILE & CMM : The Marilyn Monroe Connection (Part 2) : Something’s Got To Give

In the first part of the discussion, drawing upon the genesis of Agile and CMM, we were left wondering whether the two are “The Misfits”, most likely to disappoint the pacifists, or whether they can become soul-mates after all. The … Continue reading

AGILE & CMM : The Marilyn Monroe Connection (Part 1) : The Misfits

If you have begun reading this, you probably have drawn two correct conclusions already. One, that I am a fan of Norma Jeane Mortensen Baker, professionally recognized as Marilyn Monroe. And two, having worked with both, Agile and CMM, I … Continue reading

Should We Kill The Architecture Review Board?

OK… I’ll say it.  The whole idea of an Architecture Review Board may be wrong-headed.  That officially puts me at odds with industry standards like CobiT, ongoing practices in IT architecture, and a litany of senior leaders that I respect and admire.  So, why say it?  I have my reasons, which I will share here.

CobiT recommends an ARB?  Really?

The CobiT governance framework requires that an IT team create an IT Architecture Board (PO3.5).  In addition, CobiT suggests that an IT division should create an IT Strategy Committee at the board level (PO4.2) and an IT Steering Committee (PO4.3).  So what, you ask?

The first thing to note about these recommendations is that CobiT doesn’t normally answer the question “How.”  CobiT is a measurement and controls framework.  It sets a framework for defining and measuring performance.  Most of the advice is focused on “what” to look for, and not “how” to do it.  (There are a couple of other directive suggestions as well, but I’m focusing on these).

Yet CobiT recommends that three boards exist in a governance model for IT – specifically, these three.

But what is wrong with an ARB?

I have been a supporter of ARBs for years.  I led the charge to set up the IT ARB in MSIT and successfully got it up and running.  I’m involved in helping to set up a governance framework right now as we reorganize our IT division.  So why would I suggest that the ARB should be killed?

Because it is an Architecture board.  Architecture is not special.  Architecture is ONE of the many constraints that a project has to be aligned with.  Projects and Services have to deliver their value in a timely, secure, compliant, and cost effective manner.  Architecture has a voice in making that promise real.  But if we put architecture into an architecture board, and separate it from the “IT Steering Committee” which prioritizes the investments across IT, sets scope, approves budgets, and oversees delivery, then we are setting architecture up for failure.

Power follows the golden rule: the guy with the gold makes the rules.  If the IT Steering committee (to use the CobiT term) has the purse strings, then architecture, by definition, has no power.  If the ARB says “change your scope to address this architectural requirement,” they have to add the phrase “pretty please” at the end of the request.

So what should we do instead of an ARB?

The replacement: The IT Governance Board

I’m suggesting a different kind of model, based on the idea of an IT Governance Board.  The IT Governance Board is chaired by the CIO, like the IT Steering committee, but is a balanced board containing one person who represents each of the core areas of governance: Strategic Alignment, Value Delivery, Resource Management, Risk Management, and Performance Measurement.  Under the IT Governance Board are two, or three, or four, “working committees” that review program concerns from any of a number of perspectives.  Those perspectives are aligned to IT Goals, so the number of working committees will vary from one organization to the next.

The key here is that escalation to the “IT Governance Board” means a simultaneous review of the project by any number of working committees, but the decisions are ALL made at the IT Governance Board level.  The ARB decides nothing.  It recommends.  (That’s normal.)  But the IT Steering committee goes away as well, replaced by an IT Steering working committee that also decides nothing.  It recommends.  Both of these former boards become working committees.  You can also have a Security and Risk committee, and even a Customer Experience committee.  You can have as many as you need, because Escalation to One is Escalation to All.
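The escalation model described above can be sketched in code. This is purely an illustration of the “committees recommend, the board decides” structure – the class names and strings are my own invention, not part of CobiT or any governance tool:

```python
# Sketch of "Escalation to One is Escalation to All": working committees
# only recommend; a single governance board makes every decision.
# All names here (WorkingCommittee, GovernanceBoard) are hypothetical.

class WorkingCommittee:
    """Reviews an escalated item and returns a recommendation only."""
    def __init__(self, name):
        self.name = name

    def review(self, item):
        # A real committee would deliberate; here we just tag the item.
        return f"{self.name}: recommendation on '{item}'"

class GovernanceBoard:
    """The single decision-making body; committees have no veto."""
    def __init__(self, committees):
        self.committees = committees

    def escalate(self, item):
        # Escalation to one committee triggers review by all of them...
        recommendations = [c.review(item) for c in self.committees]
        # ...but the decision is made only here, at the board level.
        decision = f"Board decision on '{item}' after {len(recommendations)} reviews"
        return recommendations, decision

board = GovernanceBoard([
    WorkingCommittee("Architecture Review"),
    WorkingCommittee("IT Steering"),
    WorkingCommittee("Security and Risk"),
])
recs, decision = board.escalate("Project X scope change")
```

Note that the former ARB and IT Steering committee appear here only as peer working committees – neither holds the purse strings, so neither can outrank the other.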

The IT Governance board is not the same as the CIO and his or her direct reports.  Typically, IT functions can be organized into many different structures.  Some are functional (a development leader, an operations leader, an engagement leader, a support leader, etc.).  Others are business-relationship focused (with a leader supporting one area of the business and another leader supporting a different area of the business, etc.).  In MSIT, it is process focused (with each leader supporting a section of the value chain).  Regardless, it would be a rare CIO who could afford to set up his or her leadership team to follow the exact same structure as needed to create a balanced governance model.

In fact, the CIO doesn’t have to actually sit on the IT Governance board.  It is quite possible for this board to be a series of delegates, one for each of the core governance areas, that are trusted by the CIO and his or her leadership team. 

Decisions by the IT Governance board can, of course, be escalated for review (and override) by a steering committee that is business-led.  CobiT calls this the IT Strategy Committee and that board is chaired by the CEO with the CIO responsible.  That effectively SKIPS the CIO’s own leadership team when making governance decisions.

And that is valuable because, honestly, business benefits from architecture.  IT often doesn’t.

So let’s consider the idea that maybe, just maybe, the whole idea of an ARB is flawed.  Architecture is a cross-cutting concern.  It exists in all areas.  But when the final decision is made, it should be made by a balanced board that cares about each of the areas that architecture impacts… not in a fight between the guys with the vision and the guys with the money.  Money will win, every time.

Standardization is a 15 Letter Word

My old friend Dan French penned an interesting blog last month. He was thinking about how the common answer today to the question of how to drive efficiency (and thus profit) in big companies revolves around transformation and standardization of common processes, a shift to shared services, and common, single-instance global ERP systems. Dan pushes back against this orthodoxy using a metaphor from a sailing friend, suggesting that the interesting question to ask of all these initiatives is “will it make the boat go faster?” A leading indicator of success in ocean racing is having a fast, and consequently light, boat; the corollary in business is the question of whether independence and individual business-unit responsibility actually deliver more than enterprise-level process standardization.

Dan doesn’t actually answer the question; he merely asks how many global leaders in their industries take this particular “road less travelled”.

I discussed the metaphor with one of my customers the other day and he said that in his enterprise, individual business unit responsibility has been a disaster in IT terms. The huge number of legacy applications represent years of local decisions, of reinventing the wheel over and over again. He used another interesting metaphor – suggesting his enterprise was like a tractor pull – where the more successful they were, the greater the weight they had to drag behind them, and slowly and surely that ever-increasing weight is killing them.

And of course this question is relevant to pretty much every large enterprise in the world. As you get big and successful, the agile operating models of the early years need to mature. But frequently these enterprise-level transformation initiatives fail, or take excessive time and cost to deliver. The promise of standard COTS products frequently turns into a multi-year nightmare of lowered expectations, massive increases in costs and under-performing business processes.

So enterprises engaged in tractor pull like efforts need transformation, but would probably prefer a fleet of individual racing yachts to a single Titanic. But surely there are other options? Is there not a middle course?

At CBDI we ourselves advocate a service architecture supporting a hierarchy of services with more stable, standardized services in the lower (business state managing) layers and more agile, adaptable services in the higher (capability and process) layers. In this model, variations across business units are managed by patterns that vary behaviors depending upon context and business rules. And there are numerous highly successful examples of this strategy. But is this sufficient? Is this model sufficiently lightweight and agile like a racing yacht?

What I see emerging is an evolution of SOA, in which everything, at all levels, is componentized and managed in a framework that coordinates concurrent standardization and differentiation. The figure below shows that the key to standardization and differentiation is the configuration catalog, which governs the use of components as configuration items.

Probably the most important element of the framework is the Reference Model. Frankly, most companies invest insufficient effort in this area. A good reference model is NOT a clone of the OASIS, CBDI or TOGAF meta-models; it is an enterprise-specific model that defines how a specific enterprise works. By all means use existing models for the basics, but they must be customized to reflect the specific enterprise. Good reference models are hard work, but they have incalculable value – because you are doing the heavy lifting at the very earliest stage. The reference architecture then clarifies the types of component and their relationships to delivered business processes and solutions.

The central part of the framework is the mechanism by which standardization and differentiation are managed. A Configuration Catalog is effectively a logical version of the CMDB. It records the Configuration Items available for a Capability (and it’s a long list, including business processes, workflows, services, implementation components, portlets, templates, policies and so on). Each Configuration Item is attributed with flexibility options and, crucially, constraints (policies). The Configuration Catalog is effectively a Software Factory Schema, except that its scope is much broader than software development, covering the entire range of artifacts including COTS products, internal and external services as well as custom applications. The Configuration Catalog is the core driver of architectural integrity, providing active governance over the configuration and reconfiguration of components. If a business unit believes it must diverge from the standard configuration, the question will be: is this purely localized behaviour, or is the localized requirement actually part of a broader demand signal? Can the localized behaviour be accommodated within the enterprise reference model, in which case further configuration is not compromised, or is this a genuine one-off?
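To make the mechanism concrete, here is a minimal sketch of a Configuration Catalog holding Configuration Items with flexibility options and constraints, and answering the “localized variation or escalation?” question. The class names, fields and example item are illustrative assumptions, not CBDI-defined structures:

```python
# Minimal sketch of a Configuration Catalog: configuration items carry
# flexibility options (allowed local variations) and constraints
# (mandatory policies). Names and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    kind: str                                        # e.g. "service", "workflow", "policy"
    flexibility: list = field(default_factory=list)  # allowed local variations
    constraints: list = field(default_factory=list)  # mandatory policies

class ConfigurationCatalog:
    def __init__(self):
        self.items = {}

    def register(self, item):
        self.items[item.name] = item

    def request_variation(self, name, variation):
        """Active governance: is a proposed local variation allowed?"""
        item = self.items[name]
        if variation in item.flexibility:
            return "approved: within the enterprise reference model"
        return "escalate: broader demand signal or genuine one-off?"

catalog = ConfigurationCatalog()
catalog.register(ConfigurationItem(
    name="OrderCapture",
    kind="business process",
    flexibility=["regional tax rules", "local language"],
    constraints=["must use enterprise customer master"],
))
```

A business unit requesting a variation that is already modeled (say, regional tax rules) is approved without compromising further configuration; anything else is escalated for the demand-signal question above.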

Over the years I have advised many companies on setting up asset repositories in pursuit of asset management and reuse. While the asset repository is important in recording capability and dependency, it is typically not used as an integral part of active governance right throughout the life cycle, commencing at the planning stage.

For large companies with extended, federated operations, the Configuration Catalog capability represents a significant capability maturity improvement over “basic SOA”. In fact it’s an implementation of a broader component based model that orchestrates a broader range of assets. It allows active management of standardization and variation across business units and geographies; to encourage freedom of action where it’s appropriate and to exert mandatory standardization where necessary.

Managing standardization is not a trivial task – it is a 15 letter word after all. But mandating de facto industry standard COTS packages for strategic business operations is unlikely to drive competitive advantage. Active governance over localization and standardization will allow a concurrent loose / tight enterprise which facilitates a heterogeneous portfolio incorporating, if you will permit the metaphor, fast ocean racing yachts alongside super tankers to accommodate a range of business requirements. And it will make the overall boat go faster!

Ref: Dan’s Blog

Time-to-Release – the missing System Quality Attribute

I’ve been looking at different ways to implement the ATAM method these past few weeks.  Why?  Because I’m looking at different ways to evaluate software architecture, and I’m a fan of the ATAM method pioneered at the Software Engineering Institute at Carnegie Mellon University.  Along the way, I’ve realized that there is a flaw that seems difficult to address.

Different lists of criteria

The ATAM method is not a difficult thing to understand.  At its core, it is quite simple: create a list of “quality attributes” and sort them into order, highest to lowest, by the priority that the business wants.  Get the business stakeholders to sign off.  Then evaluate the ability of the architecture to perform according to that priority.  An architecture that places a high priority on Throughput and a low priority on Robustness may look quite different from an architecture that places a high priority on Robustness and a low priority on Throughput.
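The Throughput-versus-Robustness point can be shown with a toy calculation. This is not the SEI’s actual ATAM tooling – just an invented rank-weighting scheme and made-up scores to show how the same two architectures trade places when the business reorders the list:

```python
# Illustrative sketch (not SEI tooling): a stack-ranked attribute list
# turned into weights, applied to invented per-attribute scores.

def rank_weights(priorities):
    # Highest-priority attribute gets the largest weight.
    n = len(priorities)
    return {attr: n - i for i, attr in enumerate(priorities)}

def evaluate(architecture_scores, priorities):
    # Weighted sum of the architecture's scores under the given ranking.
    weights = rank_weights(priorities)
    return sum(weights[a] * architecture_scores.get(a, 0) for a in weights)

arch_a = {"Throughput": 9, "Robustness": 3}   # tuned for throughput
arch_b = {"Throughput": 3, "Robustness": 9}   # tuned for robustness

throughput_first = ["Throughput", "Robustness"]
robustness_first = ["Robustness", "Throughput"]
```

Under `throughput_first`, `arch_a` scores higher; flip the ranking to `robustness_first` and `arch_b` wins – which is exactly why getting the business to sign off on the ordering matters.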

So where do we get these lists of attributes?

A couple of years ago, my colleague Gabriel Morgan posted a good article on his blog called “Implementing System Quality Attributes.”  I’ve referred to it from time to time myself, just to remind myself of a good core set of System Quality Attributes that we could use for evaluating system-level architecture as is required by the ATAM method.  Gabriel got his list of attributes from “Software Requirements” by Karl Wiegers.

Of course, there are other possible lists of attributes.  The ISO defined sets of system quality attributes in the standards ISO 25010 and ISO 25012.  They use different terms: instead of System Quality Attributes, there are three high-level “quality models,” each of which presents “quality characteristics.”  For each quality characteristic, there are different quality metrics.

Both the list of attributes from Wiegers and the list of “quality characteristics” from the ISO are missing a key point: “time to release” (or time to market).

The missing criterion

One of the old sayings from the early days of Microsoft is: “Ship date is a feature of the product.”  The intent of this statement is fairly simple: you can only fit a certain number of features into a product in a specific period of time.  If your time is shorter, the number of features is shorter. 

I’d like to suggest that the need to ship your software on a schedule may be more important than some of the quality attributes as well.  In other words, “time-to-release” needs to be on the list of system quality attributes, prioritized with the other attributes.

How is that quality?

I kind of expect to get flamed for making the suggestion that “time to release” should be on the list, prioritized with the likes of reliability, reusability, portability, and security.  After all, shouldn’t we measure the quality of the product independently of the date on which it ships? 

In a perfect world, perhaps.  But look at the method that ATAM proposes.  The method suggests that we should create a stack-ranked list of quality attributes and get the business to sign off.  In other words, the business has to decide whether “Flexibility” is more, or less, important than “Maintainability.”  Try explaining the difference to your business customer!  I can’t.

However, if we create a list of attributes and put “Time to Release” on the list, we are empowering the development team in a critical way.  We are empowering them to MISS their deadlines if there is a quality attribute higher on the list that needs attention.

For example: let’s say that your business wants you to implement an eCommerce solution.  In eCommerce, security is very important.  Not only can the credit card companies shut you down if you don’t meet strict PCI compliance requirements, but your reputation can be torpedoed if a hacker gets access to your customers’ credit card data and uses that information for identity theft.  Security matters.  In fact, I’d say that security matters more than “going live” does.

So your priority may be, in this example:

  • Security,
  • Usability,
  • Time-to-Release,
  • Flexibility,
  • Reliability,
  • Scalability,
  • Performance,
  • Maintainability,
  • Testability, and
  • Interoperability.

This means that the business is saying something very specific: “if you cannot get security or usability right, we’d rather you delay the release than ship something that is not secure or not usable.  On the other hand, if the code is not particularly maintainable, we will ship anyway.”

Now, that’s something I can sink my teeth into.  Basically, the “Time to Release” attribute is a dividing line.  Everything above the line is critical to quality.  Everything below the line is good practice.
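The dividing-line reading is mechanical enough to express directly. In this sketch (my own illustration, using the example priority list from above), everything before “Time-to-Release” is a ship-blocker and everything after it is good practice:

```python
# Sketch of the "dividing line": attributes ranked above Time-to-Release
# block the release; attributes below it do not. The list is the example
# ordering from the text; the helper functions are my own illustration.

PRIORITIES = [
    "Security", "Usability", "Time-to-Release",
    "Flexibility", "Reliability", "Scalability",
    "Performance", "Maintainability", "Testability", "Interoperability",
]

def split_on_release(priorities, marker="Time-to-Release"):
    cut = priorities.index(marker)
    blockers = priorities[:cut]           # must be right before shipping
    good_practice = priorities[cut + 1:]  # ship anyway if imperfect
    return blockers, good_practice

def may_ship(unmet_attributes, priorities):
    """Ship unless an unmet attribute sits above the dividing line."""
    blockers, _ = split_on_release(priorities)
    return not any(a in blockers for a in unmet_attributes)
```

With the example ordering, unmet Maintainability does not hold the release, but unmet Security does – precisely the trade-off the business signed off on.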

As an architect sitting in the “reviewer’s chair,” I cannot imagine a more important dividing line than this one.  Not only can I tell if an architecture is any good based on the criteria that rises “above” the line, but I can also argue that the business is taking an unacceptable sacrifice for any attribute that actually falls “below” the line.

So, when you are considering the different ways to stack-rank the quality attributes, consider adding the attribute of “time to release” into the list.  It may offer insight into the mind, and expectations, of your customer and improve your odds of success.