Whence, Angels?

As you’ve read over the past couple of years, we’ve started investing in a hybrid Angel/VC model. Lots of risk, lots of upside, and lots of fun new things to learn. Applying Capability Driven Methods to management from the start has been both f…

Smart Agile Delivery

I was interested to read the recent McKinsey report on disruptive technologies. McKinsey identifies twelve potentially economically disruptive technologies, including the mobile Internet, automation of knowledge work, the Internet of Things, advanced robotics, next-generation genomics, and so on. The report also calls out general-purpose technologies (think steam or the Internet) as ones that propel steep growth trajectories: technologies that can be applied across economies and leveraged in many more specific disruptive technologies. Not surprisingly, they don’t include software development in either list. The closest they come is with the automation of knowledge work, but this is restricted to artificial intelligence, machine learning and natural interfaces like voice recognition that automate many knowledge-worker tasks long regarded as impossible or impracticable for machines to perform.

Is this omission something we should be concerned about, I wonder? While software development “practices” have been developing very rapidly with the adoption of Agile methods, it is a reasonable conclusion that software development “technologies” are not undergoing dramatic changes that might qualify as disruptive. Yes, there’s lots going on; in fact there’s a profusion of new languages, frameworks and databases, many of them open source initiatives, that are progressively specializing development technology. In addition there are significant advances in life-cycle management and test technologies. But there is no indication that these new technologies will have high economic impact in terms of dramatic improvement in productivity or quality, or a significant impact on the vast economic problem inherent in the world’s legacy systems. Rather, there’s a huge proliferation of development diversity and, some might say, complexity.

Don’t get me wrong; I am not looking for a problem to solve. It’s clear that while smaller Agile projects are fine for tightly targeted problems, most organizations have struggled to scale Agile to larger or enterprise-class projects. The increase in dependencies and complexity becomes overwhelming, and the probability of failure rises accordingly.

What larger projects need is not process automation but automation of the deliverable, allowing the project to manage dependencies at both the model AND the deliverable level. This raises the level of abstraction and can deliver dramatic productivity and quality improvements. As it happens, there is a technology that can do this, but strangely it seems to be something many people have already consigned to the trash heap of “been there, done that”. I’m talking about Model Driven Development (MDD). There are many reasons why MDD has not gained widespread acceptance. It is genuinely complex and requires considerable investment to establish. And in fairness, it has been promoted primarily as a deliverable-transformation and code-generation tool. And many people will say, Oh NO!!! That’s just reinventing CASE tools all over again, and we don’t want to go there.

But before we consign this technology to the trashcan of yesterday’s technologies, we need to take a hard look at what you can do if:
A. you have leaf-node detail models in the asset repository that are tightly bound to execution deliverables.
B. you use best-practice modern architecture, with all functionality delivered as service-bearing capabilities that minimize dependencies.
C. you can automate, to a significant extent, the population of the repository with harvested knowledge about legacy applications at that same leaf-node level of detail.
D. you can run large-scale, full-lifecycle projects with full iteration of business, architecture, design and development models. (Note that this doesn’t mean fully integrated and transformed models; we have gotten a lot cleverer over the years.)
In an Agile context this allows you to iterate functionality at extremely low cost, in both the delivery and evolution stages of the life cycle. In fact, experience shows it transforms the development project into an evolutionary approach in which you can architect and build what you know, and evolve toward the optimal solution.
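To make condition A concrete, here is a minimal sketch in Python of a leaf-node model tightly bound to an execution deliverable: the model lives in the repository as structured data, and the deliverable is generated mechanically from it, so a change to the model regenerates the deliverable and dependencies stay visible at the model level. The model format and all names here are illustrative assumptions, not any particular MDD product.

```python
from dataclasses import dataclass, field

@dataclass
class LeafModel:
    """A leaf-node detail model: the smallest repository asset,
    bound one-to-one to an execution deliverable."""
    capability: str                                   # service-bearing capability name
    operations: list = field(default_factory=list)    # operations the capability exposes
    depends_on: list = field(default_factory=list)    # other capabilities it needs

def generate_service(model: LeafModel) -> str:
    """Generate the execution deliverable (here, Python source) from the model.
    Because the mapping is mechanical, regenerating after a model change keeps
    model and deliverable in step -- the 'tight binding' of condition A."""
    lines = [f"class {model.capability}Service:"]
    lines.append(f'    """Depends on: {", ".join(model.depends_on) or "nothing"}"""')
    for op in model.operations:
        lines.append(f"    def {op}(self, request):")
        lines.append(f"        raise NotImplementedError('{op}')")
    return "\n".join(lines)

# A tiny repository of two capabilities with one dependency between them.
repository = [
    LeafModel("Billing", ["create_invoice", "void_invoice"], depends_on=["Ledger"]),
    LeafModel("Ledger", ["post_entry"]),
]
for model in repository:
    print(generate_service(model), end="\n\n")
```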

Model driven as a concept has been around a long time. Most developers (tell me they) don’t like model driven because it won’t handle complexity, because it reduces developers’ jobs to something more mundane, because it produces poor code, and so on and so forth. But the McKinsey report speaks to the inexorable progress of technology and the inevitability that as technology changes, people’s jobs change or disappear. You are either on the train or under it.

Right now Agile MDD is probably only justifiable for very large, complex projects. But as case studies start showing higher success rates, with dramatic increases in productivity and quality, and as the up-front investment shrinks with the capability being productized, we can expect the MDD project footprint to expand dramatically. Again, the McKinsey report is incredibly bullish on the economic outlook for technology, and for information technology in particular as the key general-purpose enabling technology, and it’s clear that Agile processes alone are inadequate to support the ever-increasing demand.

Being disciplined is for school kids; it’s time we got smart about how we deliver complex services and systems at scale.

McKinsey & Company: Disruptive technologies: Advances that will transform life, business, and the global economy. “Not every emerging technology will alter the business or social landscape—but some truly do have the potential to disrupt the status quo, alter the way people live and work, and rearrange value pools.”

An Integrated Electronic Health Record Needs Enterprise Architecture for Communicating …

A lot of activity and progress is underway around the world right now, and has been for some time, regarding integrating and sharing health data for healthcare management and delivery purposes. Many standards, reference models and authorities have arisen to guide implementation and use of IT for these purposes, for example health information exchange standards driven by the Office of the National Coordinator for Health Information Technology (ONC – http://www.healthit.gov/).  Many very new and modern health IT capabilities and products are available now, alongside systems and data that may have been first created over 30 years ago (particularly in the Federal Government).

In the media and within procurement activity, the swirl of misused phrases and definitions is muddying approaches rather than clarifying them. Records vs. Data vs. Information. Interoperability vs. Integration. Standards vs. Policies. Systems vs. Software vs. Products or Solutions. COTS vs. Services vs. Modules vs. Applications. Open Source vs. Open Standards. Modern vs. Legacy vs. Current.

In Enterprise Architecture (EA) terms, the messages regarding Integrated Healthcare IT requirements aren’t commonly being presented at a consistent level of abstraction, according to a consistent architecture model and vocabulary. As well, the audience or consumers of this information aren’t being addressed in ways most impactful to their specific needs and concerns.

What are the audience concerns? IT system owners need to maintain data security and system performance, within technology and investment constraints. Doctors need consistent, instant, reliable and comprehensive visualization of data at the point of care. Government oversight bodies need recurring validation that money is spent wisely and results meet both mission and legislative requirements. Veterans, soldiers and their families need absolutely private, accurate, real-time information about their healthcare status – wherever they are. The pharmaceutical and medical device industries need timely, useful data regarding outcomes and utilization – to drive product improvement and cost-effectiveness. Hospitals, clinics and transport services need utilization and clinical workflow measurements to manage personnel and equipment resources.

The highest-level separation of concerns can be drawn along standard Enterprise Architecture domains or “views”. A very generic, traditional model is the “BAIT” model – i.e. Business, Application, Information and Technology. Note that this is very similar to the widely known “ISO Reference Model for Open Distributed Processing” (RM-ODP) viewpoints – which underpin evolving healthcare standards including the “HL7 Services Aware Interoperability Framework” (SAIF).

The “Business Domain” encompasses the discussion about business processes, financials, resources and logistics, organization and roles.  Who does what, under what circumstances or authority, and how outcomes are evaluated and purchased.  The business drivers and enablers of successful healthcare delivery, one might say.  

The “Application Domain” concerns automating the “practice of healthcare”. Automated systems (and their user interfaces) are very helpful in planning, monitoring and managing workflow, resources and facility environments, and of course in processing data for clinical care, surveillance, and health data management and reporting purposes. This is where healthcare expertise is codified in software and device configurations, where medical intelligence and knowledge meet computer-enabled automation. This domain is the chief concern of clinical practitioners and patients – where they can most helpfully provide requirements and evaluate results. Software built to process healthcare data comes in many shapes and sizes; it can be owned or rented, proprietary or completely transparent.

The “Information Domain” is in essence the “fuel” for the Application Domain. Healthcare practitioners and patients care that this fuel is reliable, protected and of the highest quality – but they aren’t too invested in how this is achieved, beyond required or trained procedures. It’s like filling the car with gas – there’s some choice and control, but fundamentally a lot of trust that the gas will do the job. For those whose concern is actually delivering gas – from undersea oil deposits all the way to the pump – this domain is an industry unto itself. Likewise, collecting, repurposing, sharing and analyzing information about patient and provider healthcare status is a required platform on which successful healthcare user applications and interfaces are built. This is what “Chief Medical Information Officers” are concerned with, as are “Medical Informatics Professionals”. They are also concerned with the difference between healthcare “records”, “archives” and “information” – but that’s a discussion for another day.

It is critical to note that “Information” is composed of data; core or “raw” data is packaged, assembled, standardized, illustrated, modeled and summarized as information more easily consumed and understood by users. Pictures, sound bites and brief notes taken by an officer at an accident scene are data (as are “Big Data” signals from public social media and traffic sensors); the information packages include the accident report, the newspaper article, the insurance claim and the emergency room evaluation.  These days, with the proliferation of data-generating devices and sensors, along with the rapid data replication and distribution channels available over the Internet, the “Data Domain” itself can be a nearly independent concern of some – providing the raw fuel to the information fire, oil for refined gas.
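As an illustrative sketch only (the record shapes and names are invented for this example, not taken from any standard), the data-to-information packaging described above might look like this in code: raw, heterogeneous data items are assembled into one information product that a consumer such as a claims adjuster can actually act on.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DataItem:
    """Raw data: a photo, an officer's note, a sensor signal."""
    source: str
    kind: str            # e.g. "photo", "note", "sensor"
    payload: str
    captured_at: datetime

def package_accident_report(items: list[DataItem]) -> dict:
    """Assemble raw data items into an information package --
    the report a claims adjuster or ER clinician consumes."""
    return {
        "title": "Accident report",
        "summary": f"{len(items)} data items from "
                   f"{len({i.source for i in items})} sources",
        "notes": [i.payload for i in items if i.kind == "note"],
        "attachments": [i.payload for i in items if i.kind != "note"],
    }

items = [
    DataItem("officer-badge-4411", "note", "Two vehicles, minor injuries", datetime.now()),
    DataItem("traffic-cam-17", "photo", "frame_0042.jpg", datetime.now()),
]
print(package_accident_report(items))
```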

The “Technology Domain” is essentially all of the electronic computing and data storage elements needed to manage data and resulting information, operate software and deliver the software results to user interfaces (like browsers, video screens, medical devices).  Things like servers, mobile phones, physical sensors, telecommunications networks, storage repositories – this includes the machine-specific software embedded into medical equipment.
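To tie the four domains together, here is a small illustrative mapping (my own reading of the discussion above, not a formal taxonomy) of example stakeholder concerns onto BAIT domains:

```python
# Illustrative mapping of example concerns onto BAIT domains
# (my reading of the discussion above, not a formal taxonomy).
BAIT = {
    "Business":    ["clinical workflow measurement", "oversight and funding validation"],
    "Application": ["point-of-care data visualization", "surveillance reporting"],
    "Information": ["record-sharing standards", "data quality and provenance"],
    "Technology":  ["servers and storage", "networks", "embedded device software"],
}

def domains_for(concern: str) -> list[str]:
    """Return the BAIT domain(s) an expressed concern falls into."""
    return [domain for domain, concerns in BAIT.items() if concern in concerns]

print(domains_for("record-sharing standards"))  # -> ['Information']
```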

Sidebar: Data Domain Standards

Quite a bit of work and investment is required to collect, filter, store, protect and make available raw data across the clinical care lifecycle, so that the right kind of information is then available to users or software. Most importantly, reusable open standards and Reference Implementation Models (RIMs) concerned with the Data Management domain are foundational requirements for any effective healthcare information system that participates in the global healthcare ecosystem.

A RIM is basically working software or implementation patterns for testing and confirming compliance with standards, thereby promoting creation of software products that incorporate and maintain the standards.  It’s a reusable, implementable, working set of code with documentation – focused on a common concern, decoupled from implementation policies or constraints. RIMs are useful for facilitating standards adoption across collaborative software development communities at every layer of the Enterprise Architecture.

For example, a data-domain RIM developed several years ago by Oracle Health Sciences (in a clinical research setting) focused on maintaining role-based access security requirements when two existing sets of research and patient-care data were merged for querying. The design of the single RIM merged the HL7 Clinical Research Model (BRIDG) with an HL7 EHR Model (Care Record) to support a working proof-of-concept that others could adopt as relevant. The “concern” here was data security – separate from the information- and application-level concerns of enabling multi-repository information visualization methods for researchers.
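Purely to illustrate the kind of concern that RIM addressed (this is a toy sketch with invented field names, roles and records, not the actual Oracle or HL7 design), role-based access over a merged research/care dataset might look like this: each role sees only the fields its policy allows.

```python
# Toy illustration of role-based access over merged clinical data.
# Field names, roles and records are invented for this sketch.
MERGED_RECORDS = [
    {"patient_id": "P001", "diagnosis": "type 2 diabetes", "trial_arm": "B"},
    {"patient_id": "P002", "diagnosis": "hypertension", "trial_arm": "A"},
]

# Researchers see de-identified trial data; clinicians see identified
# care data but not trial assignments.
ROLE_VISIBLE_FIELDS = {
    "researcher": {"trial_arm", "diagnosis"},
    "clinician": {"patient_id", "diagnosis"},
}

def query(role: str, records: list[dict]) -> list[dict]:
    """Return only the fields the caller's role is allowed to see."""
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return [{k: v for k, v in rec.items() if k in allowed} for rec in records]

print(query("researcher", MERGED_RECORDS))  # no patient identifiers
print(query("clinician", MERGED_RECORDS))   # no trial arms
```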

The point of this discussion on EA-driven separation of concerns can be illustrated as follows. When a spokesman (or RFP author) says “the system will be interoperable”, it’s likely that by “system” they mean some segment of the “Application Domain” being able to exchange objects from the “Information Domain”. Instead, a better phrase might be “the software application will be able to share standardized healthcare information with other applications”. This keeps the principal discussion at the application and information-sharing software level, and doesn’t make detailed assumptions or predictions about concerns in the Business, Data or Technology Domains. Those are different but related discussions, and may already be addressed by reusable, standard offerings, RIMs or acquisition strategies.

Taking this approach to broadly interpret the recent announcement that the DoD will seek a competitive procurement for “Healthcare Management Software Modernization”, it appears the focus of this need is the Application Domain – i.e. software packages and/or services that generate and use healthcare information while managing healthcare processes and interactions.

To support these new software application features, separate but related activity is required to address “modernization” concerns among the other EA domains – concerns relating to datacenter infrastructure, data management and security services, end-user devices and interfaces, etc. Some of this activity may not be dedicated to healthcare management, but shared and supported for enterprise use, for other missions. That’s why the use of current, relevant EA frameworks (such as DoDAF v2.02 and the OMB “Common Approach”) is so important for managing shared capabilities and investments.

Using standard EA viewpoints to separate concerns will also expose reuse opportunities (and possibly consolidate or reduce acquisition needs), i.e. leveraging existing investments that are practical enablers. Some examples might include the iEHR structured health-record message translation and sharing services now in development, plus HHS/ONC initiatives including Health Information Exchange networks and the “VA Blue Button” personal health record service.

What Happened to the Fine Art of Business Analysis? – Revisited 2013

Link: http://taotwits-too-big-to-tweet.blogspot.com/2013/05/what-happened-to-fine-art-of-business.html

From Taotwit's Too-Big-To-Tweet

Back in 2008, I wrote a paper on Business Analysis. Recently, I’ve been revisiting this subject in my day job, which made me realise how little had changed and inspired me to write this post – which is, basically, the original article rewritten, with additional thoughts (in italics).


The role of the Business Analyst has never been more important, but it needs to refocus on Information Systems, not the technical solution. Many of us can recall a time when a distinction was made between the hardware and software supporting the business and the information used by the business – there was a clear difference between IT, to describe the former, and IS to describe the latter.

IS stood for Information Systems:

IS: The landscape of business information used by people within an organisation, and how they use information to deliver business outcomes.

IT, in contrast, meant:

IT: The hardware and software technology that automates or otherwise supports information processing.

The distinction between these two concepts is all but lost, and the disciplines associated with Information Systems (such as Business Analysis and IS Architecture) have become too obsessed with IT.

Read more

SCRUM at the center of Enterprise Architecture

A couple of days ago a tweet from John Gøtze caught my attention:

EA can be agile and scrummilicious, says @soerenstaun in guest lecture at the IT University of Copenhagen.
— John Gøtze (@gotze) 29 April 2013

And my reaction to it was: it should.

“@gotze: EA can be agile and scrummilicious, says @soerenstaun in guest lecture at the IT University of Copenhagen.” I say it should!
— Kai Schlüter (@ChBrain) 29 April 2013




[Image: SCAN framework diagram – (c) Tom Graves]

This blog post, finally, explores that a bit further. When I am tasked with implementing Enterprise Architecture for the first time, or with improving an existing capability, I put an agile approach at the core, preferably SCRUM. There are various reasons for this. To explain the concept I borrow the SCAN framework from Tom Graves, in particular the post Sensemaking – modes and disciplines.


In my implementations of Enterprise Architecture activities I focus on the Ambiguous and Not-known problem spaces. Ambiguous problems can be solved quite well with SCRUM, where “the whole is greater than the sum of its parts” (Aristotle). Agile approaches that do not put the team at the centre of their methodology do not seem to work as well in this problem space. Identifying which quadrant a problem and its corresponding solution belong to is, in my mind, itself always an ambiguous problem, due to the scope of Enterprise Architecture trying to cover the whole [which is more than the sum of its parts].

Problems that belong to Simple or Complicated I usually hand over as fast as possible to better-suited teams or individuals, while the ambiguous problems I keep inside Enterprise Architecture. The Not-known space is totally different: there I typically focus on finding the innovation that emerges rather than trying to force it. I believe innovation is more easily found at the periphery than by looking for it centrally.
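As a playful sketch of that triage (the quadrant names come from SCAN; the routing rules are just my reading of the paragraph above):

```python
from enum import Enum

class Quadrant(Enum):
    SIMPLE = "Simple"
    COMPLICATED = "Complicated"
    AMBIGUOUS = "Ambiguous"
    NOT_KNOWN = "Not-known"

def route(problem: str, quadrant: Quadrant) -> str:
    """Triage a problem by SCAN quadrant, as described above."""
    if quadrant in (Quadrant.SIMPLE, Quadrant.COMPLICATED):
        return f"hand '{problem}' over to a better-suited team or individual"
    if quadrant is Quadrant.AMBIGUOUS:
        return f"keep '{problem}' inside Enterprise Architecture, run it with SCRUM"
    return f"don't force '{problem}'; watch the periphery for emerging innovation"

print(route("application portfolio rationalisation", Quadrant.AMBIGUOUS))
```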

When implementing SCRUM at the centre, some key elements need to be in place to succeed. One of the most crucial is the Chief Architect, be it the officially announced Chief Architect or a manager (e.g. the CIO) who fills that role. The Chief Architect is the one who is assigned the SCRUM role of Product Owner. And here some effort and attention is typically needed to ensure that the Chief Architect focuses on delivering in the Product Owner role instead of doing the actual work. The work should be done by the SCRUM team (the “Pigs”, in the old SCRUM metaphor).

The most important element here is to create an environment in which the team members utilize each other’s strengths. And here also lies one of the biggest challenges, because most Enterprise Architects have grown out of technical roles and have survived quite a few selection filters on their way to becoming an Enterprise Architect. Statistically I observe a high proportion of heroes and divas, quite convinced that Enterprise Architects in general, and they themselves in particular, are the crown of evolution. Concepts like the Peter Principle reinforce that thinking even more. 🙂

The only role that is fairly easy to fill is the SCRUM Master. Just take any good SCRUM Master who does not know much about Enterprise Architecture (preferred), or who deliberately stays out of the content (sometimes hard, if the SCRUM Master believes he or she knows Enterprise Architecture better). Literally, someone who focuses only on making sure the process runs.

This is of course not always easy to implement, but it is my main target, and I keep developing a team towards it until it is achieved. Once achieved, the speed can be increased even further, because the environmental problems are solved and the focus can be on delivering good Enterprise Architecture services – a post I also plan to write. SCRUM helps me deliver on my main objective: Enterprise Architecture, and especially my approach GLUE, is about people first.


Comments, as always, are more than welcome.