DSM: A useful tool for process improvement efforts

Design Structure Matrix, or DSM for short, is a tool I frequently encountered during my year here at MIT.  It is often mentioned in “Systems Thinking” talks, as well as in talks on designing complex systems.  While the DSM’s uses are many, my project teammates and I focused on process improvement and successfully used the DSM to help our sponsor company improve its processes.  Our sponsor company is an aircraft maintenance, repair and overhaul business, and the process we focused on was the aircraft upgrading process, which spans from defining requirements for the upgrade, to drawing out the design, to implementation, to testing, and finally to delivery.

How the DSM helped

1. Reduce long rework cycles – imagine the nightmare scenario where a project gets almost to completion, and then has to loop back to the beginning because of, say, a major design error.  It is like a game of chutes and ladders.  The DSM helps to reduce long rework cycles by reordering the tasks.  For example, in our project, there were initially four rework cycles that would each set the project back by more than 20 tasks.  After the reordering, there were none.

2. Challenge the status quo task ordering, while respecting task dependencies – in the reordered task list for our project, we noticed that a number of documentation tasks had been pushed to the bottom of the list.  These tasks dealt with developing internal test reports, the flight manual and the maintenance manual.

Initially, we thought that meant the company should do those documentation tasks last.  On investigating further, however, we realized it was because few or no other tasks depended on them.  We thought about that further, and a revelation hit us: if no other tasks are waiting on those documents, could employees develop them late and to the lowest quality without affecting the rest of the process at all?  How can the company ensure the timeliness and quality of those documents?  Our sponsor validated that concern, and in the end we recommended adding sign-offs for those documents to resolve the issue.

This is just one of many examples.  The DSM challenged the status quo task ordering in a way that respected the task dependencies, so the new ordering still made sense and provided ideas for how the current process might be improved.

3. Facilitate understanding of the current process
The DSM creation process required us to get our hands dirty understanding the process.  We needed to think about what level of granularity to go down to, and what kinds of dependencies to capture.  These activities made us think harder about the process, and thus added to our understanding.

In addition, the DSM provided a visual map of the process.  At a glance, we could see which tasks have many dependencies.

Additional Information about the DSM

For readers interested in finding out more about the DSM, I have included some basic information here.

The DSM might seem rather technical and intimidating at first glance.  That fear is understandable, as DSMs are matrices, mathematical artifacts whose mere mention strikes fear into people’s hearts.  However, if you spend 10-15 minutes understanding how it works, you will find the DSM a useful tool to include in your toolbox for process improvement projects.

Creating the DSM

To use the DSM (for the purpose described in this article), we have to provide three main pieces of information:
1. The list of tasks in the process.  For example, in the design phase, creating the high-level design would be one possible task, and conducting the preliminary design review another.
2. The dependencies between tasks.  This information specifies which tasks must be completed before a given task can start.  For example, the preliminary design review needs to be completed before detailed design can commence.
3. Possible loopbacks.  These indicate scenarios where rework needs to happen.  For example, after the preliminary design review, the proposed design might be deemed unsuitable, and the project will need to go back to high-level design.
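To make the encoding concrete, here is a minimal sketch in Python of how these three pieces of information translate into a binary matrix.  The task names are illustrative (not from our project), and the convention used, where row i marks the tasks that task i receives input from, is one common choice, not the only one:

```python
tasks = [
    "High-level design",
    "Preliminary design review",
    "Detailed design",
]

# dependencies[t] lists the tasks that must finish before t can start
dependencies = {
    "Preliminary design review": ["High-level design"],
    "Detailed design": ["Preliminary design review"],
}

# loopbacks[t] lists the earlier tasks that t may send the project back to
loopbacks = {
    "Preliminary design review": ["High-level design"],
}

index = {name: i for i, name in enumerate(tasks)}
n = len(tasks)

# dsm[i][j] == 1 means task i receives input from task j.  With tasks listed
# in execution order, marks below the diagonal are feed-forward dependencies
# and marks above the diagonal are loopbacks (rework).
dsm = [[0] * n for _ in range(n)]
for task, deps in dependencies.items():
    for dep in deps:
        dsm[index[task]][index[dep]] = 1
for task, targets in loopbacks.items():
    for target in targets:
        dsm[index[target]][index[task]] = 1  # rework flows back to the target

for name, row in zip(tasks, dsm):
    print(f"{name:30s} {row}")
```

Printed this way, the single mark above the diagonal immediately flags the one rework loop in the example.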

These three pieces of information can be visualized in a graph like the following:

Here, the tasks are represented by the boxes, the task dependencies by the black arrows, and the loopbacks by the red arrows.  This graph is not needed for DSM creation, but is included here to illustrate the information needed to create a DSM.

With the required information, a DSM can easily be created.  Using the same example, the DSM will look like this:

The DSM contains the same information as the graph, but putting it in a matrix allows us to apply some tools that help with our process improvement task.  To do the task reordering, you just need a tool to “partition” the matrix.  Tools such as PSM 32 and a DSM Excel macro can be found at DSMweb.org.

The partitioned DSM will have the new ordering, like the one described in this article.  Analysis and recommendations can then be made based on the partitioned DSM.
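For readers curious about the mechanics behind those tools, partitioning can be sketched in a few lines of Python.  This is a toy illustration with hypothetical tasks A–D, not a replacement for PSM 32 or the Excel macro: tasks that can each reach the other through dependency marks are locked in a rework loop and form one coupled block, and the blocks are then ordered so each appears only after the blocks it needs input from:

```python
# dep[t] = tasks whose output task t needs (loopback marks included)
dep = {
    "A": set(),
    "B": {"A", "C"},   # B and C depend on each other: a rework loop
    "C": {"B"},
    "D": {"C"},
}

def reachable(start):
    """Every task whose output (directly or indirectly) feeds `start`."""
    seen, stack = set(), list(dep[start])
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(dep[t])
    return seen

reach = {t: reachable(t) for t in dep}

# Tasks that can each reach the other must stay together as one coupled block.
blocks = []
for t in dep:
    if not any(t in b for b in blocks):
        blocks.append(frozenset({t} | {u for u in dep
                                       if u in reach[t] and t in reach[u]}))

# Order the blocks so each appears only after the blocks it needs input from
# (a topological sort of the condensed graph).
ordered, remaining = [], list(blocks)
while remaining:
    ready = [b for b in remaining
             if all(d in b or any(d in done for done in ordered)
                    for t in b for d in dep[t])]
    if not ready:  # should not happen for a well-formed DSM
        raise ValueError("unresolvable dependencies")
    ordered.extend(ready)
    remaining = [b for b in remaining if b not in ready]

print([sorted(b) for b in ordered])  # [['A'], ['B', 'C'], ['D']]
```

In the partitioned matrix, the B–C block would appear as a small square of marks straddling the diagonal, telling the team exactly which tasks must be managed together as an iteration loop.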

Further Readings

  • “Complex Concurrent Engineering and the DSM Method” by Yassine and Braha 
  • “The Model Based Method for Organizing Product Development” by Steven D. Eppinger, Daniel E. Whitney, Robert P. Smith and David E. Gebala. 
  • “Generalized Model of Design Iterations Using Signal Flow Graphs” by Steven D. Eppinger, Murphy V Nukala, and Daniel E. Whitney.

Credits

This work is not solely my own.  Much credit goes to my teammates Haibo Wang, Davit Tadevosyan and Kai Siang Teo.

Basic is Best

Fellow foodies will recognize the recent movement towards “farm-to-table” restaurants. These venues simplify their menus and source ingredients as close to the farm as possible. I had the opportunity to dine at such a restaurant the other evening. I was gushing about the appetizer to my server when she described the preparation for the item and then punctuated her comments with “basic is best”. I reminded my fellow enterprise architect diners that there was an architecture lesson in that statement. They rolled their eyes and chuckled. But they also knew I was right.

I’m reminded of Frederick Brooks’ book The Mythical Man-Month and his latest, The Design of Design. The former, a must-read, talks about complexity, but Brooks refrains from damning all complexity. The world we live in and the enterprises we strive to transform with enterprise architecture are complicated organisms, much like the human body. But sometimes a simple solution is the best approach. Fewer applications (think: portfolio rationalization). Fewer components. Fewer lines of code. Whatever level of abstraction you are working at, less is more.

I’m reminded of the enterprise architecture principle “Control Technical Diversity”. At one firm I created pithy catch phrases for each principle. I named this one “Less is More”. But perhaps another variation is what my server said the other night: “Basic is Best”.

Selling Federal Enterprise Architecture (EA)

A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents.

Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available.
“Selling” the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard to decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article.

Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov) – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. “EA isn’t getting deep enough penetration into programs, components, sub-agencies, etc.”, observed a panelist at the most recent EA Government Conference in DC.

Newer guidance from OMB may be especially difficult to handle, where bottom-up input can’t be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D “Agency IT Reductions and Reinvestments” and the information required for “Cloud Computing Alternatives Evaluation” (supporting the new Exhibit 53C, “Agency Cloud Computing Portfolio”).

Therefore, EA must be “sold” directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration.

Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren’t likely to be realized in the short-term. This means there’s probably little current, allocated budget to work with; ergo the challenge of trying to sell an “unfunded mandate”.

Also, the concept of “Enterprise” in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that “…organizational boundaries still trump functional similarities. Most people understand what we’re trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you’re going to be touching them…there’s always that fear factor,” Spires said.

It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO’s recent report “Enterprise Architecture Value Needs To Be Measured and Reported”.

What’s not clear is the method or guidance to sell this value. In fact, the current GAO “Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)”, a.k.a. the “EAMMF”, does not include words like “sell”, “persuade”, “market”, etc., except in reference (within “Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development”) to a brief section in the CIO Council’s 2001 “Practical Guide to Federal Enterprise Architecture”, entitled “3.3.1. Develop an EA Marketing Strategy and Communications Plan.” Furthermore, Core Element 19 of the EAMMF is advised to be applied in “Stage 3: Developing Initial EA Versions”. This kind of EA sales campaign truly should start much earlier in the maturity progression, i.e. in Stages 0 or 1.

So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold?

Following is a brief taxonomy (it’s a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial “engagement plan” for evangelizing EA “from within”. An EA “Sales Taxonomy” of sorts. We’re not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape.

Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, “Fit-for-Purpose” (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF).

The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet – as in:

“Let’s talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)”.
Once the initial discussion targets and subjects (ones that can be measured and reported) are approved, a “marketing and communications plan” can be created.
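The three-part template above is really just a Cartesian product over the taxonomy. As a toy sketch (the community, benefit and facet entries below are abbreviated from the taxonomy for illustration; the template wording is mine), draft messages could be generated like so:

```python
from itertools import product

# Abbreviated picks from each branch of the taxonomy
communities = ["Program/System Owners Facing Strategic Change"]
benefits = ["Cost Avoidance", "Reuse"]
facets = ["Architecture Models", "Traceability"]

template = ("Let's talk to {community} about how and why {facet} "
            "in the EA program can help with {benefit}.")

# One draft message per (Community, Benefit, Facet) combination
messages = [template.format(community=c, benefit=b, facet=f)
            for c, b, f in product(communities, benefits, facets)]

for m in messages:
    print(m)
```

Even this trivial enumeration makes the planning point visible: a handful of entries per branch yields dozens of candidate conversations, which is why prioritizing the targets before building the plan matters.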

A working example follows the Taxonomy.

Enterprise Architecture Sales Taxonomy
Draft, Summary Version
1. Community

1.1. Budgeted Programs or Portfolios
Communities of Purpose (CoPR)
1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans

1.1.2. Program/System Owners Facing Strategic Change
1.1.2.1. Mandated
1.1.2.2. Expected/Anticipated

1.1.3. Program Managers – Creating Employee Performance Plans
1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP)

1.2. Governance & Communications
Communities of Practice (CoP)

1.2.1. Policy Owners
1.2.1.1. OCFO
1.2.1.1.1. Budget/Procurement Office
1.2.1.1.2. Strategic Planning

1.2.1.2. OCIO
1.2.1.2.1. IT Management
1.2.1.2.2. IT Operations
1.2.1.2.3. Information Assurance (Cyber Security)
1.2.1.2.4. IT Innovation

1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements)

1.2.2. Governing IT Council/SME Peers (i.e. an “Architects Council”)
1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren’t buried solely within the CIO shop)
1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a “shared services” EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended

1.2.2.3. External Oversight/Constraints
1.2.2.3.1. GAO/OIG & Legal
1.2.2.3.2. Industry Standards
1.2.2.3.3. Official public notification, response

1.2.3. Mission Constituents
Participant & Analyst Community of Interest (CoI)

1.2.3.1. Mission Operators/Users
1.2.3.2. Public Constituents
1.2.3.3. Industry Advisory Groups, Stakeholders
1.2.3.4. Media

2. Benefit/Value
(Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.)

2.1. Program Costs – EA enables sound decisions regarding…
2.1.1. Cost Avoidance – a TCO theme
2.1.2. Sequencing – alignment of capability delivery
2.1.3. Budget Instability – a Federal reality

2.2. Investment Capital – EA illuminates new investment resources via…
2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral
2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication
2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage

2.3. Contextual Knowledge – EA enables informed decisions by revealing…
2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context
2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT
2.3.3. Influence – the impact of politics and relationships can be examined
2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways
2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures

2.4. Mission Performance – EA enables beneficial decision results regarding…
2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization
2.4.2. IT Stability – towards 100%, real-time uptime
2.4.3. Agility – responding to rapid changes in mission
2.4.4. Outcomes – measures of mission success, KPIs – vs. only “Outputs”
2.4.5. Constraints – appropriate response to constraints
2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome

2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of…
2.5.1. Compliance – all the right boxes are checked
2.5.2. Dependencies – cross-agency, segment, government
2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively
2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles

2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues
2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to “future-proof”
2.5.5.2. Anticipated – discovering the level of impact that matters

3. EA Program Facet
(What parts of the EA can and should be communicated, using business or mission terms?)

3.1. Architecture Models – the visual tools to be created and used
3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels

3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What’s the relationship of these models, with existing system models?

3.1.3. Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful?

3.2. Traceability – the maturity, status, completeness of the tools
3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it?

3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome?

3.3. Governance – what’s the interaction, participation method; how are the tools used?
3.3.1. Contributions – how is the EA program informed? How does it accept submissions and collect data? Who are the experts?

3.3.2. Review – how is the EA validated, against what criteria?

Taxonomy Usage Example:

1. To speak with:
a. …a particular set of System Owners Facing Strategic Change, via mandate (like the “Cloud First” mandate); about…
b. …how the EA program’s visible and easily accessible Infrastructure Reference Model (i.e. “IRM” or “TRM”), if updated more completely with current system data, can…
c. …help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise…
2. ….the following Marketing & Communications (Sales) Plan can be constructed:
a. Create an easy-to-read “Consequence Model” that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and
b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the “Consequence Model” for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and
c. Schedule a series of short “Discovery” meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the “As-Is” models and frame the “To Be” models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems.
– end example –

Note that communications with the intended audience should take a page out of the standard “Search Engine Optimization” (SEO) playbook, using keywords and phrases relating to “value” and “outcome” vs. “compliance” and “output”. Searches in email boxes, internal and external search engines for phrases like “cost avoidance strategies”, “mission performance metrics” and “innovation funding” should yield messages and content from the EA team.

This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick “proof points” (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community.

In short, if sold effectively, the EA will perform and be recognized. EA won’t therefore be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes.

The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

Goodbye Baked Ham

Zig Ziglar died today. The title may make instant sense to his followers, or to those of you who have heard me tell his story about baked ham. It’s about his wife sawing the end off of a roast before baking it because she thinks it creates a better roast. But when they call the originator […]

“Making Pianos” or “Being an Artist”

I saw a story on CBS about Wally Boot who has worked at the Steinway factory for 50 years. He was born on Steinway Street and has learned how to make every part in a Steinway, but what he makes is so much more. At the end of the story, Charlie Rose says “there is […]

Enterprise Architecture – A Perfect Tool for Operating Model Management

On this blog I have covered the discipline of Enterprise Architecture from a number of perspectives. Enterprise Architecture (EA) can be effectively leveraged as a foundation for Industry Reference Architectures, e.g. the Retail Reference Architecture. Equally effectively, EA can be leveraged as the mechanism for Business and Technology Governance, as well as for Technology Performance Monitoring. In this article I would like to propose that Enterprise Architecture is also an effective tool for Operating Model management, both for the definition and for the ongoing lifecycle management.
It may be worthwhile visiting some industry definitions of the Operating Model before we explore how Enterprise Architecture can be effective here. The definition of Operating Model varies based on the organisational and operational context in which it is applied, so one definition probably will not fit all Operating Model scenarios. However, if I had to choose one, I would refer to IBM’s definition of the Operating Model (see the picture below).
IBM Target Operating Model (TOM)

IBM proposes that a Target Operating Model (TOM) helps determine the best design and deployment of resources to achieve an organization’s business goals. It provides a current operational maturity assessment and a roadmap for defining and/or improving the organisation’s Operations Strategy. Key deliverables include a business review, a current operating model assessment, the desired future state and a change management plan roadmap.
The TOM is essentially seen here as the mechanism linking the business goals and strategy of the organisation with the roadmap for change to achieve those goals. The TOM then holds together various organisational concerns, such as processes, technology, capabilities, customer view, governance and partners, in a single cohesive fashion.
 
Now that we have briefly summarised an illustrative Operating Model definition, let us explore how Enterprise Architecture as a discipline or practice can be leveraged as a tool for its management. There are a number of good Enterprise Architecture frameworks available for this purpose, and recent revisions of certain frameworks have further established them as leading candidates. I do not advocate a specific Enterprise Architecture framework on this blog; however, for illustration purposes I am going to use TOGAF 9 as the tool for Operating Model management. I would also like to mention the Zachman EA framework as the other leading framework, which may be equally effective or, in some application scenarios, a better fit.
The purpose of this article is not to explain or define TOGAF 9, and I would highly recommend visiting the Open Group website for the relevant documentation. However, for ease of reference, I am going to share the TOGAF ADM, which is the process for Enterprise Architecture management in TOGAF.
The process links the vision and strategy of the organisation and its business functions with a portfolio of change programs that realise this strategy. TOGAF uses various architecture disciplines, such as Business Architecture, Information Architecture (Data and Application) and Technology Architecture, as the mechanism for linking the strategy with the implementation and governance of the change programs that deliver on it.
The central argument I am now going to make is that such a process of Enterprise Architecture can be seamlessly deployed and leveraged to manage the organisation’s Operating Model. A number of Enterprise Architecture frameworks, and especially Zachman, categorically state that the application of Enterprise Architecture should not be restricted to Information Technology systems; it is a true framework for organisation and business management. For instance, applying TOGAF to manage the IBM TOM results in the following steps/mapping. The key here is to use the tools, processes, approach, templates and constructs from each TOGAF ADM stage to define and develop the TOM stages, as seen in Figure 1.
  1. The business goals and strategy can be defined in the Preliminary Phase, while the vision underpinning them is defined in Phase A (Architecture Vision).
  2. The assets and locations of the TOM, along with key processes, can be captured and defined during Phase B (Business Architecture).
  3. Certain aspects of skills, capabilities, culture and processes can also be captured in Phase B.
  4. The technology, processes and performance metrics can be captured through Phases C and D, while defining the Information and Technology Architectures.
  5. The sourcing options and alliances can be identified and shortlisted in Phase E (Opportunities and Solutions).
  6. Phase F (Migration Planning) can be used to identify the roadmap for change, through what TOGAF calls transition architectures.
  7. Finally, culture, which is central to the TOM, needs to constantly be both a driving force and a recipient of the requirements for change.
I would like to highlight again that this is simply an illustration of managing one view of the Operating Model with a particular EA approach; a number of other variations can be managed equally effectively with a similar approach. It will probably make sense to present an illustration and mapping using another EA framework such as Zachman…maybe a topic for the next post on this blog!

References:

Strategy and transformation for a complex world, IBM Global Services, Mar 2011

The TOGAF Architecture Development Method (ADM)

The Zachman Framework

Business Performance Management, the next big thing…again

We all possess a gene that makes us want to solve problems when faced with them. It’s human nature. People form organizations, and this gene sometimes manifests itself in organizational titles and roles invented to address organizational challenges. This is natural too. For example, when support organizations face the challenge of aligning to business organizations, we see…

New Publication: A Systemic-Discursive Framework for Enterprise Architecture

John Gotze and I have published a new peer-reviewed paper in the Journal of Enterprise Architecture with the title: A Systemic-Discursive Framework for Enterprise Architecture. You can download the paper from here (note: this requires AEA membership). If you are not a member of AEA let me know and I will send you a PDF copy of the paper.

This article examines, through a case study of an Australian government agency, the systemic and discursive properties of Enterprise Architecture adoption in a government enterprise. Through the lens of Luhmann’s generalised systems theory of communication, the authors argue that the manner in which organisational communication is organised throughout the Enterprise Architecture adoption process has a noticeable impact on successful implementation. Two important conclusions are made: Firstly, successful Enterprise Architecture adoption demands sustainable resonance of Enterprise Architecture as a discourse communicated in the enterprise. Secondly, misunderstanding and reshaping Enterprise Architecture as a management discourse is an inherent premise for high quality adoption. The authors propose a new theoretical model, the Enterprise Communication Ecology, as a metaphor for the communicative processes that precede, constrain, and shape Enterprise Architecture implementations. As a result, Enterprise Architecture as a discipline must adopt a systemic-discursive framework in order to fully understand and improve the quality of Enterprise Architecture management programs.

Enjoy!