How to build a Roadmap – Gap Analysis

This post discusses how to develop a robust gap analysis that identifies significant shortcomings between the current state and the desired end state. We use these findings to begin developing strategy alternatives (and related initiatives) that address what has been uncovered. The intent is to identify the difference (the delta) between where we are and what we aspire to become. This exercise is critical to identifying what needs to be accomplished.
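As a minimal illustration of computing that delta (the capability names and maturity scores below are hypothetical, not from any real assessment), a gap analysis can be sketched as a comparison of current versus desired capability maturity:

```python
# Minimal gap-analysis sketch: compare current vs. desired capability
# maturity (1-5 scale) and report the delta for each capability.
# All capability names and scores are invented examples.

capabilities = {
    # name: (current maturity, desired maturity)
    "Customer Onboarding": (2, 4),
    "Order Fulfilment": (3, 3),
    "Claims Processing": (1, 4),
}

def gap_analysis(caps):
    """Return capabilities with a positive delta, largest gap first."""
    gaps = {name: desired - current
            for name, (current, desired) in caps.items()
            if desired > current}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for name, delta in gap_analysis(capabilities):
    print(f"{name}: gap of {delta}")
```

The sorted output is what feeds the next step: the largest deltas are the candidates for strategy alternatives and initiatives.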

NOTES – actors, agents and extras in the enterprise

If the enterprise is a story, who are the actors in that story? What are their drivers and needs? How do we model and manage the relationships between those actors in the story? (This is part of an overview and

Smart Agile Delivery

I was interested to read the recent McKinsey report on disruptive technologies. McKinsey identifies twelve potentially economically disruptive technologies, including the Mobile Internet, automation of knowledge work, the Internet of Things, advanced robotics, next-generation genomics and so on. The report also calls out general purpose technologies – those that propel steep growth trajectories (think the steam engine or the Internet), can be applied across economies, and are leveraged in many more specific disruptive technologies. Not surprisingly, they don’t include software development in either list. The closest they come is with the automation of knowledge work, but this is restricted to artificial intelligence, machine learning and natural interfaces like voice recognition that automate many knowledge worker tasks long regarded as impossible or impracticable for machines to perform.

Is this omission something we should be concerned about, I wonder? While software development “practices” have been developing very rapidly with the adoption of Agile methods, it is a reasonable conclusion that software development “technologies” are not undergoing dramatic changes that might qualify as disruptive. Yes, there’s a lot going on; in fact there’s a profusion of new languages, frameworks and databases, and many open source initiatives, that are progressively specializing development technology. In addition there are significant advances in lifecycle management and test technologies. But there is no indication that these new technologies will have high economic impact in terms of dramatic improvements in productivity or quality, or a significant impact on the vast economic problem inherent in the world’s legacy systems. Rather, there’s a huge proliferation of development diversity and, some might say, complexity.

Don’t get me wrong; I am not looking for a problem to solve. It’s clear that while smaller Agile projects are fine for tightly targeted problems, most organizations have struggled to scale Agile to larger or enterprise-class projects. The increase in dependencies and complexity becomes overwhelming, and the probability of failure increases proportionately.

What’s needed by larger projects is not process automation but automation of the deliverable, which allows the project to manage dependencies at both the model and the deliverable level. This raises the level of abstraction and can deliver dramatic productivity and quality improvements. As it happens, there is a technology that can do this, but strangely it seems to be something many people have already consigned to the trash heap of “been there, done that”. I’m talking about Model Driven Development (MDD). There are many reasons why MDD has not gained widespread acceptance. It is genuinely complex and requires considerable investment to establish. And, in fairness, it has been promoted primarily as a deliverable-transformation and code-generation tool. Many people will say, “Oh no! That’s just reinventing CASE tools all over again, and we don’t want to go there.”

But before we consign this technology to the trashcan of yesterday’s technologies, we need to take a hard look at what you can do if:
A. you have leaf-node detail models in the asset repository that are tightly bound to execution deliverables;
B. you use best-practice modern architecture, with all functionality delivered as service-bearing capabilities that minimize dependencies;
C. you can automate, to a significant extent, the population of the repository with harvested knowledge about legacy applications at that same leaf-node level of detail;
D. you can run large-scale, full-lifecycle projects with full iteration of business, architecture, design and development models. (Note that this doesn’t mean fully integrated and transformed models; we have gotten a lot cleverer over the years.)
In an Agile context this allows you to iterate functionality at extremely low cost, in both the delivery and the evolution stages of the life cycle. In fact, experience shows it transforms the development project into an evolutionary approach in which you can architect and build what you know, and evolve to the optimal solution.
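To make point A less abstract, here is a deliberately toy sketch of the code-generation side of MDD (the model format and the generated class are invented for illustration, not any real MDD tool): a leaf-node model is transformed directly into an execution deliverable, so the two can never drift apart.

```python
# Toy illustration of model-driven code generation: a leaf-node
# "model" (a plain dict here) is transformed into executable source.
# The model schema and the generated class are invented examples.

MODEL = {
    "entity": "Customer",
    "fields": [("name", "str"), ("credit_limit", "float")],
}

def generate_class(model):
    """Generate Python source for a data class from a leaf-node model."""
    lines = [f"class {model['entity']}:"]
    params = ", ".join(f"{n}: {t}" for n, t in model["fields"])
    lines.append(f"    def __init__(self, {params}):")
    for name, _ in model["fields"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

source = generate_class(MODEL)
namespace = {}
exec(source, namespace)           # turn the model into a live class
customer = namespace["Customer"]("Acme Ltd", 5000.0)
print(customer.name)              # -> Acme Ltd
```

Real MDD tooling operates on far richer models and targets, but the binding shown here – change the model, regenerate the deliverable – is the mechanism that keeps iteration cheap.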

Model-driven as a concept has been around a long time. Most developers (tell me they) don’t like model-driven approaches because they won’t handle complexity, because they make developers’ jobs more mundane, because they produce poor code, and so on and so forth. But the McKinsey report speaks to the inexorable progress of technology and the inevitability that, as technology changes, people’s jobs change or disappear. You are either on the train or under it.

Right now, Agile MDD is probably only justifiable for very large, complex projects. But as case studies start showing higher success rates, with dramatic increases in productivity and quality, and as the level of up-front investment falls because the capability is productized, we can expect the MDD project footprint to expand dramatically. Again, the McKinsey report is incredibly bullish on the economic outlook for technology, and for information technology in particular as the key general purpose enabling technology, and it’s clear that Agile processes alone are inadequate to support the ever-increasing demand.

Being disciplined is for school kids; it’s time we got smart about how we deliver complex services and systems at scale.

McKinsey & Company: Disruptive technologies: Advances that will transform life, business, and the global economy. “Not every emerging technology will alter the business or social landscape—but some truly do have the potential to disrupt the status quo, alter the way people live and work, and rearrange value pools.”

An Integrated Electronic Health Record Needs Enterprise Architecture for Communicating Separation of Concerns

Achieving true progress in creating integrated AND interoperable electronic healthcare management and information systems is very much a real-world, current-day Enterprise Architecture (EA) challenge – and it starts with “separating the business and technical concerns” using standardized EA methods, vocabularies and reusable assets. Improving the manner in which the challenges are communicated, in particular, would benefit all stakeholders and acquisition managers.

A lot of activity and progress is underway around the world right now, and has been for some time, around integrating and sharing health data for healthcare management and delivery purposes. Many standards, reference models and authorities have arisen to guide the implementation and use of IT for these purposes, for example the health information exchange standards driven by the Office of the National Coordinator for Health Information Technology (ONC – http://www.healthit.gov/). Many very new and modern health IT capabilities and products are available now, alongside systems and data that may have been first created over 30 years ago (particularly in the Federal Government).

In the media and within procurement activity, the swirl of misused phrases and definitions isn’t clarifying many approaches.  Records vs. Data vs. Information. Interoperability vs. Integration. Standards vs. Policies. Systems vs. Software vs. Products or Solutions. COTS vs. Services vs. Modules vs. Applications. Open Source vs. Open Standards. Modern vs. Legacy vs. Current.

In Enterprise Architecture (EA) terms, the messages regarding Integrated Healthcare IT requirements aren’t commonly being presented at a consistent level of abstraction, according to a consistent architecture model and vocabulary. As well, the audience or consumers of this information aren’t being addressed in ways most impactful to their specific needs and concerns.

What are the audience concerns? IT system owners need to maintain data security and system performance within technology and investment constraints. Doctors need consistent, instant, reliable and comprehensive visualization of data at the point of care. Government oversight bodies need recurring validation that money is spent wisely and that results meet both mission and legislative requirements. Veterans, soldiers and their families need absolutely private, accurate, real-time information about their healthcare status – wherever they are. The pharmaceutical and medical device industries need timely, useful data regarding outcomes and utilization, to drive product improvement and cost-effectiveness. Hospitals, clinics and transport services need utilization and clinical workflow measurements to manage personnel and equipment resources.

The highest separation of concerns can be segmented by standard Enterprise Architecture domains or “views”.  A very generic, traditional model is the “BAIT” model – i.e. Business, Application, Information and Technology. Note that this is very similar to the widely-known “ISO Reference Model for Open Distributed Processing” (RM-ODP) Viewpoints – which underpin evolving healthcare standards including the “HL7 Services Aware Interoperability Framework” (SAIF).

The “Business Domain” encompasses the discussion about business processes, financials, resources and logistics, organization and roles.  Who does what, under what circumstances or authority, and how outcomes are evaluated and purchased.  The business drivers and enablers of successful healthcare delivery, one might say.  

The “Application Domain” concerns automating the “practice of healthcare”. Automated systems (and their user interfaces) are very helpful in planning, monitoring and managing workflow, resources and facility environments, and of course in processing data for clinical care, surveillance, and health data management and reporting purposes. This is where healthcare expertise is codified in software and device configurations, where medical intelligence and knowledge meets computer-enabled automation. This domain is the chief concern of clinical practitioners and patients – it is where they can most helpfully provide requirements and evaluate results. Software built to process healthcare data comes in many shapes and sizes; it can be owned or rented, proprietary or completely transparent.

The “Information Domain” is in essence the “fuel” for the Application Domain.  Healthcare practitioners and patients care that this fuel is reliable, protected and of the highest quality – but aren’t too invested in how this is achieved, beyond required or trained procedures.  It’s like filling the car with gas – there’s some choice and control, but fundamentally a lot of trust that the gas will do the job.  For those whose concern is actually delivering gas – from undersea oil deposits all the way to the pump – this domain is an industry unto itself. Likewise, collecting, repurposing, sharing, analyzing information about patient and provider healthcare status is a required platform on which successful healthcare user applications and interfaces are built. This is what “Chief Medical Information Officers” are concerned with, as are “Medical Informatics Professionals”. They are also concerned with the difference between healthcare “records”, “archives” and “information” – but that’s a discussion for another day.

It is critical to note that “Information” is composed of data; core or “raw” data is packaged, assembled, standardized, illustrated, modeled and summarized as information more easily consumed and understood by users. Pictures, sound bites and brief notes taken by an officer at an accident scene are data (as are “Big Data” signals from public social media and traffic sensors); the information packages include the accident report, the newspaper article, the insurance claim and the emergency room evaluation.  These days, with the proliferation of data-generating devices and sensors, along with the rapid data replication and distribution channels available over the Internet, the “Data Domain” itself can be a nearly independent concern of some – providing the raw fuel to the information fire, oil for refined gas.

The “Technology Domain” is essentially all of the electronic computing and data storage elements needed to manage data and resulting information, operate software and deliver the software results to user interfaces (like browsers, video screens, medical devices).  Things like servers, mobile phones, physical sensors, telecommunications networks, storage repositories – this includes the machine-specific software embedded into medical equipment.

Sidebar: Data Domain Standards

Quite a bit of work and investment is required to collect, filter, store, protect and make available raw data across the clinical care lifecycle, in order that the right kind of information is then available to be utilized by users or software. Most importantly, reusable, open standards and Reference Implementation Models (RIMs) concerned with the Data Management domain are foundation requirements for any effective healthcare information system that participates in the global healthcare ecosystem.

A RIM is basically working software or implementation patterns for testing and confirming compliance with standards, thereby promoting creation of software products that incorporate and maintain the standards.  It’s a reusable, implementable, working set of code with documentation – focused on a common concern, decoupled from implementation policies or constraints. RIMs are useful for facilitating standards adoption across collaborative software development communities at every layer of the Enterprise Architecture.
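To give a feel for the compliance-testing role a RIM plays, here is a loose illustration (the record format and rules below are invented for this sketch, not any actual HL7 structure): a reference implementation might expose a check that reports exactly how a candidate record falls short of the standard.

```python
# Toy sketch of a reference-implementation compliance check: validate
# that a health record carries the fields a (hypothetical) standard
# requires. The field names and rules are invented for illustration.

REQUIRED_FIELDS = {"patient_id", "record_type", "created"}

def check_compliance(record):
    """Return a list of compliance problems; an empty list means compliant."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    for name in sorted(missing):
        problems.append(f"missing required field: {name}")
    if "patient_id" in record and not str(record["patient_id"]).strip():
        problems.append("patient_id must not be empty")
    return problems

record = {"patient_id": "P-001", "record_type": "encounter"}
print(check_compliance(record))  # -> ['missing required field: created']
```

Because the check is working code rather than a prose specification, vendors can run it against their own output and get an unambiguous pass/fail – which is precisely what makes a RIM useful for standards adoption.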

For example, a data-domain RIM developed several years ago by Oracle Health Sciences (in a clinical research setting) focused on maintaining role-based access security requirements when two existing sets of research and patient care data were merged for querying.  The design of the single RIM merged the HL7 Clinical Research Model (BRIDG) with an HL7 EHR Model (Care Record) to support a working proof-of-concept – that others could adopt as relevant.  The “concern” here was data security – separate from the information and application-level concerns of enabling multi-repository information visualization methods for researchers.
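The security concern in that example can be caricatured in a few lines (the roles, field names and records below are invented; the actual Oracle/HL7 work is far richer): when two repositories are merged for querying, each result is filtered by the requester’s role before it ever leaves the data layer.

```python
# Caricature of role-based access control over merged repositories:
# the fields visible in query results depend on the requester's role.
# Roles, field names, and records are invented for illustration.

ROLE_VISIBLE_FIELDS = {
    "researcher": {"study_id", "outcome"},               # de-identified view
    "clinician": {"study_id", "outcome", "patient_id"},  # full care view
}

def merged_query(care_records, research_records, role):
    """Merge two repositories and strip fields the role may not see."""
    visible = ROLE_VISIBLE_FIELDS[role]
    merged = care_records + research_records
    return [{k: v for k, v in rec.items() if k in visible} for rec in merged]

care = [{"patient_id": "P-001", "study_id": "S-9", "outcome": "stable"}]
research = [{"patient_id": "P-002", "study_id": "S-9", "outcome": "improved"}]

print(merged_query(care, research, "researcher"))
# each record now contains only study_id and outcome
```

Keeping this filtering in the data domain, rather than in each application, is what lets the security concern stay separate from the visualization concerns mentioned above.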

The point of this discussion on EA-driven separation of concerns is illustrated as follows. When a spokesman (or RFP author) says “the system will be interoperable”, it is likely that by “system” they mean some segment of the “Application Domain” being able to exchange objects from the “Information Domain”. A better phrase might be “the software application will be able to share standardized healthcare information with other applications”. This keeps the principal discussion at the level of applications and information-sharing software, and doesn’t make detailed assumptions or predictions about concerns in the Business, Data or Technology Domains. Those are different, but related, discussions, and may already be addressed by reusable, standard offerings, RIMs or acquisition strategies.

Taking this approach to broadly interpret the recent announcement that the DoD will seek a competitive procurement for “Healthcare Management Software Modernization” – it appears the focus of this need is the Application Domain – i.e. software packages and/or services that generate and use healthcare information while managing healthcare processes and interactions.

To support these new software application features, separate but related activity is required to address “modernization” concerns among the other EA domains – concerns relating to datacenter infrastructure, data management and security services, end-user devices and interfaces, etc. Some of this activity may not be dedicated to healthcare management, but instead be shared and supported for enterprise use, for other missions. That’s why the use of current, relevant EA frameworks (such as DoDAF v2.02 and the OMB “Common Approach”) is so important for managing shared capabilities and investments.

Using standard EA viewpoints to separate concerns will also expose reuse opportunities (and possibly consolidate or reduce acquisition needs), i.e. leveraging existing investments that are practical enablers. Some examples might include the developing iEHR structured health-record message translation and sharing services, plus HHS/ONC initiatives including Health Information Exchange networks and the “VA Blue Button” personal health record service.

Social BPM and Smart Process Apps Are Improving Productivity for All Process Participants

OT: OpenText recently contributed a chapter “How Social Technologies Enhance the BPM Experience for all Participants” to the book “Social BPM: Work, Planning and Collaboration Under the Impact of Social Technology”. What prompted OpenText to write about Social BPM?

DW: We are excited about exploring the technology and usage models found in social applications and the results that can be achieved by mapping them to the unique characteristics, diverse participants and emerging opportunities in Business Process Management (BPM).


A practical set of EA deliverables

A question on LinkedIn recently reminded me that, as the team leader for Segment Architecture in my former EA team, I was accountable for identifying a core set of deliverables for the team.  The idea was that we could focus on defining standard formats and contents for these deliverables and, in doing so, we could start to measure both our output and our quality.

We only created pre-canned templates for a few of them. This is partly because the team was not mature enough in its practices to achieve consistency, and partly because Enterprise Architecture itself is not mature, or accepted, enough to have stakeholders who would notice whether our deliverables met an objective standard. Also, this list is not intended to be comprehensive. The goal was to describe deliverables where it might make sense to aim for some level of consistency. Any EA could (and often did) create deliverables that were not on the list.

Perhaps it is time to share what we came up with. 

Note that this list is the result of a single team doing its work, and is not representative of any “standards” effort across other EA groups. That said, I stand behind this list. I think it is a useful start. Note that many of the deliverables are technical in nature. I did not, in making this list, differentiate between BA and EITA deliverables. So if you are someone who believes that EA = BA + EITA, you will see both sets of deliverables, intermixed, in the list below. If you are someone offended by the inclusion of technical architecture deliverables in an EA list… tough. I was working with reality.

For each deliverable: its name, why to create it, and a description.

Architectural Point of View (or Technical Policy)
Why create it: Provide clear input to business or IT leadership on issues relevant to Enterprise Architecture.
Description: Short document describing a problem that requires attention, and the opinion of EA on how to solve it.

Architectural Reference Model (or Architectural Pattern)
Why create it: Provide clear input to IT project SMEs on optimal or preferable design options.
Description: Short document describing a set of concerns and a proven approach for addressing them.

<Segment> Current State Model and Analysis
Why create it: Demonstrate and communicate challenges inherent in current processes, systems and information.
Description: A collection of architectural models, including a context model, process models and information models, as understood to currently exist, plus an analysis of issues and risks.

<Segment> Future State Vision and Model
Why create it: Demonstrate the design of the future processes, systems and information required by strategic intent.
Description: A collection of architectural models that reflect a specific set of engineered changes.

Governance Model and Analysis
Why create it: Clarify roles, responsibilities and decision-making processes for planning and oversight of initiatives.
Description: Process model, description of roles and responsibilities, and description of deliverables needed for planning, oversight and governance, along with implementation ROI and plans.

M&A Business Case and Analysis
Why create it: Provide a rationale for the acquisition of a company for the purpose of improving operational effectiveness (M&A).
Description: Document containing the rationale, including competitive analysis, SWOT/TOWS analysis, and strategic alternatives analysis.

System Integration Recommendations Document
Why create it: Set a vision for how key processes and systems shall be integrated into enterprise infrastructure (primarily M&A).
Description: End-to-end business scenarios, process and system integration points, risks and issues for each integration concern, and an analysis of alternatives and recommendations.

Value Chain and Operating Model Analysis
Why create it: Clearly address gaps and strategic requirements for integrating or divesting a set of processes and/or systems (primarily M&A).
Description: Target value chain and operating model for the post-M&A future state; mappings of key processes to or from the enterprise core diagram, and analysis of changes with the intent of composing key initiatives.

Enterprise Core Diagram
Why create it: Clearly declare the processes and systems that are NOT core to the operations of the enterprise.
Description: A list of systems and processes grouped into “ecosystems”, clearly indicated as “core” or “edge”, with an analysis of governance.

EARB Engagement Package
Why create it: Demonstrate project-level architectural quality to the EA Review Board.
Description: A pre-defined collection of project architectural models and artifacts.

Capability Model and Assessment
Why create it: Provide a clear basis for data collection for a segment.
Description: List of capabilities for a segment, with an assessment of capability maturity, etc.

Capability Gap Analysis
Why create it: Highlight underperforming capabilities to focus investment.
Description: Map of capabilities needed by strategies, highlighting those needing investment, and listing relative and absolute program spend against each.

<Segment> Roadmap (a.k.a. Transition Plan)
Why create it: Clarify the scope, timing, and dependencies between initiatives needed to deliver on a strategy.
Description: List of proposed initiatives, and the dependencies between them, to deliver on strategic intent.

Strategy Map and/or Balanced Scorecard
Why create it: Clarify the strategies, goals and objectives of a segment, and allow for measurement and alignment.
Description: Categorized strategies, measures and metrics for a specific timeframe and business scope.

<Segment> Process Model and Analysis
Why create it: Clarify and build consensus on the business processes (as-is or to-be), and serve as input to process improvement and measurement.
Description: Models of processes, activities, information assets and system interaction points, with an analysis of opportunities to improve.

Enterprise Scenario and Analysis
Why create it: Get clarity on the experience of a key stakeholder (often a customer or partner).
Description: Textual and diagrammatic description of an experience, often with analysis indicating opportunities.

<Segment> Information Model and Analysis
Why create it: Improve understanding of requirements and the rationalization of design.
Description: Well-constructed information model, at one or more well-defined levels of abstraction, covering all aspects of a segment, aligned with the EDM, along with an analysis of risks and issues.

Platform Assessment
Why create it: Capture the ability of an app or platform to meet strategic needs.
Description: Collection of measurements, attributes and mappings for an app or platform.

Proof of Concept (POC) Delivery
Why create it: Create a design that demonstrates, and proves, an approach for solving difficult issues.
Description: A software deliverable and an architectural reference model (see above).

Record of Architectural Tradeoffs
Why create it: Clearly communicate the tradeoffs made by architects on the customer’s behalf.
Description: Textual description of architectural decisions and their implications for the owner of the process or tool.
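One of these deliverables, the Roadmap, has a part that can be computed mechanically: ordering initiatives so that each comes after the initiatives it depends on. As a sketch (the initiative names and dependencies below are hypothetical), a topological sort yields a feasible delivery sequence:

```python
# Hypothetical roadmap sequencing: order initiatives so that every
# initiative appears after the initiatives it depends on.
# Initiative names and dependencies are invented for illustration.

from graphlib import TopologicalSorter  # Python 3.9+

# initiative -> set of initiatives it depends on
dependencies = {
    "Retire legacy CRM": {"Deploy new CRM", "Migrate customer data"},
    "Migrate customer data": {"Define canonical customer model"},
    "Deploy new CRM": {"Define canonical customer model"},
    "Define canonical customer model": set(),
}

plan = list(TopologicalSorter(dependencies).static_order())
print(plan)
# "Define canonical customer model" comes first;
# "Retire legacy CRM" comes last.
```

The ordering is only a starting point; timing, funding and resourcing decisions still have to be layered on top by the architects and planners.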


Towards Next Practice EA

A few weeks ago, @Cybersal and I met with @snowded to talk about enterprise architecture. He showed us a graph of the Complex Space, whose two dimensions were Evidence and Consensus. Dave has since posted a version of this graph on his blog. Source: D…

Data Management 2: Subject Areas and Objects

This is the second blog post in the Data Management series, and we dive straight in with a discussion of subject areas and (information) objects, often called Entities. We start with a high-level overview of the theory and then move on to ArchiMate modeling.

[Figure: Subject Areas and Objects – Data Management blog series]

Theory

A subject area model is used to distinguish areas of interest in the organization from a data/information perspective. These are called Subject Areas; examples are customer, product, and supplier. This is not a new concept: it appears to have been introduced by James Martin in his books on Information Engineering in the late 1970s and early 1980s.

A Subject Area typically consists of 12-20 Business Subjects, which are the key areas of interest in the business domain. Simplifying slightly from the DMBOK (http://www.amazon.com/Guide-Management-Knowledge-DAMA-DMBOK-Edition/dp/1935504029/) best practice, we use the concept of an Entity as a synonym for a Subject. Note that we are in fact talking about Entity Types rather than Entities (see the work of Kent, http://www.amazon.com/Data-Reality-Perspective-Perceiving-Information/dp/1935504215/, for an extensive discussion of why this distinction matters): we focus on the type level, not the instance level. For readability, we will consistently use the term Entity rather than Entity Type.

The following ORM diagram illustrates this point:

[Figure: Two Entity Types (Person and Company) related by means of a fact type ("works for", with the inverse reading "employs").]

Here we see two Entity Types (Person and Company), each identified by their name. They are related by means of a fact type ("works for", with the inverse reading "employs"). The purple dot signifies the constraint that each Company employs at least one Person.

The population of this schema in terms of Entities is visualized by the supporting table. Here we see, for example, that the label "Bas" identifies an Entity of the Entity Type "Person".

Business Entities are the 'things' we talk about in a business context. For example, the Subject Area 'Customer' would be decomposed into Entities such as customer, address, and purchase history. One would typically draw an ERD to show the relations between entities as well as the constraints on them (for each order, the customer supplies a shipping address and a billing address).
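The fact type and mandatory-role constraint from the ORM example can be sketched in a few lines of Python. This is purely illustrative and not from the post: the function name `violates_mandatory_role` and the sample population are ours.

```python
# Illustrative sketch (not from the post): two entity types, Person and
# Company, linked by the fact type "works for" (inverse: "employs"),
# plus a check of the constraint "each Company employs at least one Person".

works_for = [                 # population of the fact type: (person, company)
    ("Bas", "BiZZdesign"),
    ("Marc", "BiZZdesign"),
    ("Sven", "Acme"),
]

companies = {"BiZZdesign", "Acme"}

def violates_mandatory_role(companies, works_for):
    """Return the companies that employ no Person (constraint violations)."""
    employers = {company for _person, company in works_for}
    return companies - employers

print(violates_mandatory_role(companies, works_for))  # set() -> constraint holds
```

The type-level schema (the fact type) and its instance-level population are deliberately kept separate here, mirroring the type/instance distinction the post borrows from Kent.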
This may be a bit too much detail at the architecture/ArchiMate level, but we should at least be able to tie in to an ERD.

Modeling

In summary, we introduced and defined two concepts for information modeling that we will use throughout this blog post series:

- Subject Areas, which structure areas of interest from an information perspective, for example customer or product; and
- Business Entities (or Entities for short), which are the key concepts or "things" that are part of a Subject Area, e.g. for the Subject Area customer: address or purchase history.

In this part we describe how these concepts could be modeled in ArchiMate. This makes it possible to integrate model support for Data Management into the overall Enterprise Architecture expressed in the ArchiMate standard. Also, tool support such as BiZZdesign's modeling and analysis tool for Enterprise Architecture, BiZZdesign Architect, can be leveraged for Data Management.

It seems to make sense to model the Subject Area concept with the BusinessObject concept. In BiZZdesign Architect, we could add a profile to this concept so that Subject Areas can be distinguished from regular BusinessObjects, for example by giving them a different graphical shape. Along the same lines, Entities should be modeled using the BusinessObject concept. This requires no further profiles, except where the model also captures other types of business objects (i.e., non-informational objects) and we want to distinguish between the two. Two challenges remain:

1. Entities have Attributes. It may be very important to represent the fact that we distinguish between "first name" and "last name" rather than simply using "name". Since ArchiMate has no concept for Attribute, we propose to use a CompositionRelation to decompose a BusinessObject into its attributes. Domains, cardinality, and optionality are not modeled at the ArchiMate level.
2. Many organizations choose to explicitly model meta-data about Entities: the definition of the concept, relevant business rules and legislation, stewardship (as we will describe in a later post), and so on. It seems to make sense to use the Meaning concept for this. However, this quickly becomes tedious and crowds the model. Also, some meta-data is represented by the fact that BusinessObjects are linked to other ArchiMate concepts in the model. We will return to the meta-data discussion in a separate post!

To provide a link with the design level, it makes sense to relate Subject Areas to more detailed ERD models. To do this, we have recreated the basic Chen ER diagramming notation as a separate meta model in Architect, which allows us to express the following:

- a Subject Area as modeled in the ArchiMate world is refined in an ERD view; and
- an Entity as modeled in the ArchiMate world is equal to an entity in an ERD view.

[Figure: A Subject Area modeled in ArchiMate is refined in an ERD view; an Entity is equal to an entity in an ERD view.]

In the next post in this series we will describe how entities are realized in applications. As always: if you have questions or suggestions, please drop us a note. Stay tuned!
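The proposed mapping — Subject Areas and Entities both as BusinessObjects, with a profile marking the Subject Area and attributes attached via composition relations — can be sketched as a small data structure. This is a hypothetical illustration of the idea, not BiZZdesign Architect's actual API; all names below are ours.

```python
# Hypothetical sketch of the mapping described above: Subject Areas and
# Entities are both BusinessObjects (the Subject Area carries a profile),
# and attributes hang off an Entity via CompositionRelations, since
# ArchiMate has no Attribute concept. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BusinessObject:
    name: str
    profile: Optional[str] = None          # e.g. "SubjectArea" to mark the area
    compositions: List["BusinessObject"] = field(default_factory=list)

    def compose(self, part: "BusinessObject") -> "BusinessObject":
        """Model a CompositionRelation from this whole to a part."""
        self.compositions.append(part)
        return part

customer_area = BusinessObject("Customer", profile="SubjectArea")
customer = customer_area.compose(BusinessObject("Customer"))
address = customer_area.compose(BusinessObject("Address"))
# Attributes are modeled as further compositions on the Entity:
customer.compose(BusinessObject("first name"))
customer.compose(BusinessObject("last name"))

print([e.name for e in customer_area.compositions])  # ['Customer', 'Address']
```

Note that, as in the post, domains, cardinality, and optionality are deliberately absent: those would live in the linked ERD view, not at the ArchiMate level.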
