8 years, 3 months ago

Design Patterns for Event-driven Distributed Intelligence

Following discussions with @seabird20 and @darachennis, among others, last year, I decided to publish a rough idea of an event-driven Distributed Intelligence architecture for the Smart Grid (it could also apply to the broader IoT). It is loosely based on the microservices concepts & principles described here, and builds on Duke Energy’s collaboration on interoperability and its distributed intelligence initiative. The purpose of this post is to generate ideas and aid the ongoing dialogues. As far as I’m aware, the additional concepts I discuss here – SEDA, microservices and distributed agent patterns – are not called out in the Duke Energy work (although the authors might reasonably claim they’re implied). My aim, however, is to make the ‘software and event data’ conceptual architecture much more explicit in this discussion.
Having said that, the first diagram looks pretty physical! It serves as a simple IoT-ish context: a metering sensor talks to an edge processor on a ‘Field Area Network’, and thereafter data is relayed back to the Data Centre for further processing.

In the 2nd diagram we open up the Edge Processor to reveal a relatively simple view of the software architecture. This is based on the microservices pattern described by Fred George, and on my own experience as VP Product Development/Architect at VI Agents.




I found the concept of small, autonomous agents worked very well for us at VI. Moreover, I spotted a lot of parallels with Fred George’s description of microservices:

Publish anything of interest – don’t wait to be asked; if your microservice thinks it has some information that might be of use to the microservices ecosystem, then publish-and-be-damned.



Amplify success & attenuate failure – microservices that publish useful information thrive, while those left unsubscribed wither on the vine. Information subscribers determine value, and value adjusts over time and changing circumstances.



Adaptive ecosystem – versions of microservices are encouraged – a may-the-best-service-win mentality introduces variety, which leads to evolution.



Asynchronous & encapsulated – everything is as asynchronous as possible – microservices manage their own data independently and then share it in event messages over an asynchronous publish-subscribe bus.



Think events not entities – no grand BDUF data model, just a cloud of ever-changing event messages – more like Twitter than a DBMS. Events have a “use-by-date” that indicates the freshness of data.



Events are immutable – time-series snapshots, no updates allowed.


Designed for failure – microservices must expect problems, tell the world when they encounter one, and send out “I’m alive” heartbeats.



Self-organizing & self-monitoring – a self-organizing system-of-systems that needs no orchestration. Health monitoring and other administration features are established through a class of microservices.
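
To make a few of these principles concrete, here’s a minimal, self-contained Python sketch of publish-and-be-damned events with use-by dates and an “I’m alive” heartbeat. The in-memory bus, topic names and service names are all hypothetical stand-ins – a real deployment would sit on an asynchronous message broker.

    import time
    import uuid
    from collections import defaultdict

    class Bus:
        """In-memory stand-in for an asynchronous publish-subscribe broker."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, event):
            for handler in self.subscribers[topic]:
                handler(event)

    def make_event(source, payload, ttl_seconds):
        # Events are immutable, time-stamped snapshots with a use-by date.
        return {
            "id": str(uuid.uuid4()),
            "source": source,
            "at": time.time(),
            "use_by": time.time() + ttl_seconds,
            "payload": payload,
        }

    class VoltageSensorService:
        """Publishes anything of interest - it doesn't wait to be asked."""
        def __init__(self, bus):
            self.bus = bus

        def read_and_publish(self, volts):
            self.bus.publish("grid/voltage",
                             make_event("voltage-sensor-1", {"volts": volts}, ttl_seconds=30))
            # Designed for failure: an "I'm alive" heartbeat alongside the data.
            self.bus.publish("heartbeat",
                             make_event("voltage-sensor-1", {"status": "alive"}, ttl_seconds=5))

    def sag_detector(event):
        # Subscribers determine value; stale (past use-by) events are ignored.
        if time.time() > event["use_by"]:
            return
        if event["payload"]["volts"] < 220:
            print("voltage sag detected:", event["payload"])

    bus = Bus()
    bus.subscribe("grid/voltage", sag_detector)
    VoltageSensorService(bus).read_and_publish(volts=214)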



Rapids, Rivers & Ponds
I also particularly liked Fred’s use of the labels Rapids and Rivers to describe two separate instances of message brokers, and Ponds to describe persisted data. This is again very similar to the VI SixD architecture, where we collected signals on a ‘Rapids’ bus, business event & document messages flowed over a ‘Rivers’ bus, and Event Logs & Document Queues were our ‘Ponds’. But that was back in 2002; I do think the microservices pattern is much more elegant and more extensible.
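
Here’s a rough, self-contained Python sketch of the Rapids/Rivers/Ponds separation – two broker instances and an append-only log. The topic names, the 220 V threshold and the log file are hypothetical; the point is only that most raw signals die in the rapids, while business-meaningful events flow on to the river and settle in a pond.

    from collections import defaultdict

    class Bus:
        """Minimal stand-in for one message broker instance."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, event):
            for handler in self.subscribers[topic]:
                handler(event)

    rapids, rivers = Bus(), Bus()  # two separate broker instances

    def promote_if_meaningful(event):
        # Most raw signals die in the rapids; only business-meaningful
        # readings are promoted to the rivers bus.
        if event["volts"] < 220:
            rivers.publish("events/voltage-sag", event)

    def pond(event):
        # A pond is persisted, immutable history: an append-only event log.
        with open("voltage_events.log", "a") as log:
            log.write(repr(event) + "\n")

    rapids.subscribe("grid/voltage", promote_if_meaningful)
    rivers.subscribe("events/voltage-sag", pond)

    for volts in (230, 231, 214, 229):
        rapids.publish("grid/voltage", {"source": "sensor-1", "volts": volts})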


Staged Event Driven Architecture
The 3rd diagram overlays what I see as the SEDA Stages in our Smart Grid architecture:

Stage 1: Raw signal filtering, simple aggregation & correlation, negative-event detection, etc.

Stage 2: Goal- and role-based functionality via an ‘Edge Agent’ that does complex aggregation and correlation, sense-making and alerting. These might be implemented as microservices, or at least share many of the attributes described by Fred.


Stage 3: More aggregation & correlation, routing, alerting and broadcasting are done here. The junction of Information Technology (IT) and Operational Technology (OT) probably sits here too. These are the software components back at the Data Centre that receive and process the event messages produced by the ‘Edge Agents’. There’s also a nod to some sort of management console and the publishing of commands and requests to the ‘Edge Processing’ domain. This diagram is very sketchy right now, with loads of components missing. I just want to keep things very simple at this stage.

Stage 4: This is where the ‘Big Data’ heavy lifting of historical data and predictive analytics is done.

We used the SEDA pattern to define an Events Network for the Royal Mail that was capable of processing 4bn events per day without investing in massive network bandwidth and huge back-end processing capability – I believe the U.S. Postal Service didn’t implement a SEDA and instead threw a huge amount of cash at the problem. I love the elegance of a series of cascading queues that gradually whittles the tsunami of raw signals, in real time, down to a ‘right-time’ stream, or tap, of business-meaningful messages.
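
A toy Python illustration of that whittling effect, with each SEDA stage consuming the output of the stage before it. The stage boundaries, thresholds and window size are invented for the example:

    import random

    def stage1_filter(raw_signals):
        # Stage 1: drop in-band noise; pass only anomalous readings downstream.
        for volts in raw_signals:
            if volts < 220 or volts > 240:
                yield volts

    def stage2_correlate(filtered, window=10):
        # Stage 2: an 'Edge Agent' aggregates a window of anomalies into a
        # single business-meaningful alert.
        buffer = []
        for volts in filtered:
            buffer.append(volts)
            if len(buffer) == window:
                yield {"alert": "sustained voltage anomaly",
                       "mean_volts": sum(buffer) / window}
                buffer = []

    raw = (random.gauss(230, 8) for _ in range(100_000))  # tsunami of raw signals
    alerts = list(stage2_correlate(stage1_filter(raw)))
    print(f"100,000 raw signals whittled down to {len(alerts)} alerts")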



This is early-stage thinking and part of a much bigger Smart Grid, and ultimately Smart City, architecture. Please feel free to tweet about it or, better still, comment below. Thx.
8 years, 6 months ago

A Wiggly Path To Transformation

According to Sunday Times journalist, Carly Chynoweth, “Managers must learn how to adapt so they can solve problems they haven’t faced before.”

And:

“The fundamental impression they give is that the future can be organised and managed to achieve what you set out to achieve, and as long as you do it right it will come out as planned. But the reality is much more complex. Organisations are wiggly. They don’t operate in the neat, straight-line way conventional management thinking assumes”.

“It is not possible to predict outcomes — they emerge from people’s actions. You might have plans about what you are doing, but so does everyone else.”

****
The culture and traditions of a 100-year-old Utility business make behavioural change hard. Power-plant engineers’ detailed and precise planning techniques don’t work for this sort of long-term change. In our case, over 70% of the business processes will change. The main hurdle for us was the lack of certainty and predictability over 5, 10 and 20 years. It was impossible to answer the basic questions: when, how and how much? There was discomfort over sanctioning any plans without the ‘details’. This resulted in a period of ‘decision-making block’.



This is when we introduced ‘Transition State’ planning. It borrows ideas from Complexity Theory: specifically, the characteristics of Complex Adaptive Systems. We started with the premise that we cannot predict the long-range future. We also accepted that outcomes will emerge at various points along the way.


Much like ‘Scenario Planning’, we first developed a few hypothetical ideas. These were refined in workshops with subject experts until we reached a reasonable consensus on:
  • the principles we will apply to different aspects of the transformation ahead of us,
  • an initial guess of when certain outcomes are needed,
  • a list of the main influencing events/decisions assessed as ‘Known’, ‘Unknown’ or ‘Unknowable’ now – along with an assessment of risk.

Armed with this, we plotted out a series of ‘Way-points’ over time. We call these ‘Transition States’. Each has a few goals (expected outcomes) that we expect to complete. At this stage, Transition States are not pinned to hard dates. Rather, they describe the sequence of outcomes and roughly when we think they’ll happen. Transition States are re-planning points: a time to reassess and re-estimate the next phase of work. Each Transition State is an opportunity to amplify value-adding aspects and extinguish value-detracting ones.


After a few iterations, a more precise view of the early Transition States emerged, and a traditional approach to planning and deliverables could be applied.


At first, we were quite concerned that this approach would be rejected by the culture: a plan must be very detailed and accurate before work can start. This, however, wasn’t the case; decision-makers could see the merits of a more agile and iterative approach to complex change. They liked the way ‘doable’ increments became clearer after a few iterations. They also liked the explicit opportunities for re-think and course-correction.



****
Is a Complex Adaptive approach the only way to manage large-scale, long-running change? I’d like to hear how others approach designing and managing business transformations, and whether they see the merits of Complexity Theory that we’ve seen.
9 years, 3 months ago

Microservices and the Internet of Things – First impressions

I must say I was sceptical when I first heard the term “microservices”. It sounded like yet another wash-rinse-repeat cycle of earlier incarnations of SOA. It appears I was wrong – this architectural pattern has some interesting characteristics that, in my opinion, offer real potential for the event-driven, edge-processing systems that are prevalent in the Internet of Things.
After watching Fred George’s video, I realised what he described was an event-driven, agent-based systems model, rather than how many of us see SOA implementations today (often way off the original notion of SOA). At a conceptual level, the pattern describes a ‘Complex Adaptive’ system. The essential principles of the architecture, however, appear teasingly elegant and simple. Few of these design principles are unique to microservices, but in combination they make a compelling story:
Publish anything of interest – don’t wait to be asked; if your microservice thinks it has some information that might be of use to the microservices ecosystem, then publish-and-be-damned.



Amplify success & attenuate failure – microservices that publish useful information thrive, while those left unsubscribed wither on the vine. Information subscribers determine value, and value adjusts over time and changing circumstances.



Adaptive ecosystem – versions of microservices are encouraged – a may-the-best-service-win mentality introduces variety, which leads to evolution.



Asynchronous & encapsulated – everything is as asynchronous as possible – microservices manage their own data independently and then share it in event messages over an asynchronous publish-subscribe bus.



Think events not entities – no grand BDUF data model, just a cloud of ever-changing event messages – more like Twitter than a DBMS. Events have a “use-by-date” that indicates the freshness of data.



Events are immutable – time-series snapshots, no updates allowed.


Designed for failure – microservices must expect problems, tell the world when they encounter one, and send out “I’m alive” heartbeats.



Self-organizing & self-monitoring – a self-organizing system-of-systems that needs no orchestration. Health monitoring and other administration features are established through a class of microservices.



Disposable Code – microservices are very, very small (typically under 1000 lines of code). They can be developed in any language.



Ultra-rapid deployment – new microservices can be written and deployed within hours with a zero-test SDLC.
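
In the spirit of the ‘disposable code’ principle, here’s what such a throw-away microservice might look like in Python – subscribe, filter, republish, in a couple of dozen lines. This is a hedged sketch only: it assumes an MQTT broker on localhost and the paho-mqtt 1.x client API, and the topic names and 220 V threshold are invented.

    # A disposable microservice: subscribe to raw readings, republish anomalies.
    # Assumes an MQTT broker on localhost and the paho-mqtt 1.x client API;
    # topic names and the threshold are hypothetical.
    import json

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)
        # Publish anything of interest: promote anomalous readings to an alert topic.
        if reading.get("volts", 230) < 220:
            client.publish("grid/alerts/voltage-sag", json.dumps(reading))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("grid/raw/#")
    client.loop_forever()  # kill and redeploy at will - the service is disposable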

It struck me that many of these design principles could apply, in part, to a 2020 Smart Grid architecture I’m working on, and to the much broader ‘Internet of Things’ ecosystem.

The microservices pattern does seem to lend itself to the notion of highly autonomous, location-independent s/w agents that could reside at the centre, mid-point or edge of an environment. I can imagine that the fundamental simplicity of the model would help, rather than hinder, data privacy and protection, by allowing high-level system contexts, policies and protocols (e.g. encryption and redaction) to be applied to the event streams (see the sketch after the list below). This pattern, of course, won’t be the ‘right fit’ for all situations, but it does seem to offer interesting opportunities in:

  • Agility – very small disposable services are deployable within hours
  • Resilience – withstands service failures and supports service evolution
  • Robustness – it’s hard to break due to its simplicity, in-built failure handling and lack of centralized orchestration
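
On the privacy point above, here’s a minimal Python sketch of a redaction policy applied to events before they are published. The policy table, topic and field names are entirely hypothetical:

    import copy

    # Hypothetical policy: which payload fields must never leave this domain.
    REDACTION_POLICY = {
        "grid/meter-readings": {"customer_name", "address"},
    }

    def apply_policy(topic, event):
        """Return a copy of the event with policy-restricted fields removed."""
        redacted = copy.deepcopy(event)
        for field in REDACTION_POLICY.get(topic, set()):
            redacted["payload"].pop(field, None)
        return redacted

    event = {"payload": {"meter_id": "M-42", "kwh": 3.2,
                         "customer_name": "J. Smith", "address": "10 High St"}}
    print(apply_policy("grid/meter-readings", event))
    # -> {'payload': {'meter_id': 'M-42', 'kwh': 3.2}}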

It may be that the microservices pattern can only be applied to operational decision-support and behaviour-profiling situations. But if that’s the case, I still see great potential in a world where many trillions of sensor-generated events will be published, consumed, filtered, aggregated and correlated. I’m no longer a developer, but as an architect I’m always on the look-out for patterns that could either apply to future vendors’ products and services or act as a guide for in-house software development practice.

As always, I’d be keen to hear your views, examples and opinions about microservices and their potential application to the IoT. Have you come across examples of microservices pattern in an IoT context – deployed or in the labs?

I whole-heartedly recommend setting aside an hour to watch the video of Fred George’s presentation on microservices:


Fred George’s talk, recorded 2013-11-08 at the Øredev Conference, on Vimeo.

Post-post:
  • Another great post about microservices – including downsides.
  • More here, including “The 8 fallacies of distributed computing”.
  • Duke Energy are doing some interesting things in the Edge Processing space.

Here’s a video on microservices in the context of the IoT (worth ignoring the references to Cloud/Azure):

http://www.microsoftvirtualacademy.com/training-courses/exploring-microservices-in-docker-and-microsoft-azure

I’d like to talk to anyone who’s implementing, or thinking about, a Staged Event Driven Architecture using microservices for Edge Processing.

Phil Wills on the experience of deploying microservices at The Guardian.
9 years, 6 months ago

Whole-Brained Business Analysis – New Metaphor Required


I’ve been guilty of using the much-debated ‘Left vs Right brain’ metaphor to explain what I believe is needed. By way of example, Alec Sharp (@alecsharp), Sally Bean (@Cybersal), Roy Grubb (@roygrubb) and I have been Tweeting about Concept Modeling vs Concept Mapping. Alec is keen to get Data Modelers to abstract their thinking up from physical Data Models by thinking conceptually, and I have been encouraging Business Analysts to think similarly when gathering requirements. This has meant that we both find we need to introduce a different mindset: one that encourages more creative & inclusive discussion at the initial discovery and play-back stage of the Requirements-Solution Design journey. I expect the Agile/XP community will declare this to be their philosophy (and nothing new), and they’re probably right. But rather than get caught up in ‘IT-centric’ methods, I’d rather think of it as a way to better understand any requirements for change – regardless of the Software Development Life-Cycle. I’d rather see such thinking applied to all aspects of business change – people, process, practice, policy and … technology.


Tried-and-tested analytical techniques should not be abandoned; they just need to be augmented with others that, in my experience, help expand ideas and produce resilient, coherent and business-value-creating solutions. Both sides of the equation are equally important. However, I’m finding (through experiment) that the more creative techniques are more engaging – simply more fun and inclusive – and this alone, in my recent experience, can dramatically improve business outcomes.

In attempts to explain the need for a more ‘whole-brained’ approach, I’ve been following the lead of the ‘Design Thinking’ community in referring to both Theory X and Theory Y from MIT Sloan and the Left-brain Right-brain metaphor. This, however, is fraught with problems due, in large part, to the findings of the University of Utah researchers who debunked such binary thinking (as I was reminded by Rob England – @theitskeptic).

So I’m in a quandary: on the one hand, I find that an X-Y, Left-Right metaphor is a simple way to convey the difference between, say, Analysis vs. Synthesis; on the other hand, I run the risk of aligning with outdated concepts being fundamentally reconsidered by neuroscientists.

I guess the Complexity Science community might say that I’m talking about the difference between ‘Complex Adaptive’  vs. ‘Complicated’ systems, but, again, academic debate makes coming up with a simple metaphor next to impossible.

Has anyone found an alternative metaphor for a more balanced approach to Business Analysis and Enterprise Architecture?

Importantly, I’m keen to avoid the impression that people are to be seen as fundamentally one way or another. My observation is that it is the practice of Business Analysis/Enterprise Architecture that needs to be more ‘Whole-brained’ – not the individuals per se.

To get the discussion rolling, I’d like to hear views on:
  • A good Business Analyst or Enterprise Architect must balance Left-X (Reliability – doing things right) and Right-Y (Validity – doing the right thing)
  • We’ve spent too much time on methods that attempt to industrialise EA (the TOGAF 9.0 manual runs to around 800 pages in the attempt), and BAs are too often focused on methods that target an ‘IT solution’ rather than the Whys and Whats of the current or desired business behavior
  • We need to spend more time on developing pattern-based storytelling skills in BAs and EAs to deliver break-through changes and allow for innovation in TO-BE models.
  • Economic churn and environmental challenges warrant more Y-minded thinking (with appropriate X-controls)
  • The world can’t be fully explained or governed algorithmically (thank god!) – not while values and trust dominate the way organisations function.


 

10 years, 2 months ago

6 IT Trends & 15 New Habits for CIOs & Their Teams

The CIO/ITD In Crisis.

Harvard Business Review blogger Jim Stikeleather recently posted The CIO in Crisis: What You Told Us – a few particular points caught my attention:

“The best executives I have met have had a great understanding of how to use technology to gain competitive advantage and improve operations. They also worked with the CIO to help them to understand the business. They worked together to identify the technologies that could improve the company’s competitive advantage versus technologies that were needed to support the business. Once this was done, the executive leadership and CIO focused on implementing technologies that improve the company’s competitive advantage”.

“All the parts of the organization have to come together and build a common language to discuss their markets and their enterprise. They need to have a common appreciation of each other’s purpose. The CIO must step up and mentor the C-suite on the potentials, possibilities, threats and opportunities of information technology…”

“If IT and the CIO come to the party talking like engineers, only offer convergent lines of thought (analytical, rational, quantitative, sequential, constraint-driven, objective and detailed focus) and don’t offer a more holistic, shaded divergent-thinking point of view (creative, intuitive, qualitative, subjective, possibility-driven, holistic with conceptual abstractions), then they have missed the point.”

“The CEOs were actively aware, concerned, looking at alternatives such as chief digital officers, or creating ‘not-so-shadow’ IT organizations under the CMO.”

“For existing CIOs, ask yourself a few questions. Are you generating customer value? Are you (or do you have the potential to be) the best in the world at what you are doing? Are you required to do what you are doing? Using the answers to those questions, what do you need to stop doing, start doing or do differently?…” [see the ‘15 ways to change the IT Department’s habits’ table later in this post].

In a similar vein, at a recent CIO event run by Forrester Research it was suggested that “The IT department of 2020 could disappear as a separate entity and become embedded in departments throughout the entire organization”.
This post posits that the need for change is now undeniable, and that CIOs are looking for practical steps for creating new habits in their teams. These new habits, developed now, will help prove the continuing need for a central Enterprise IT Department.


History & Trends.

The demise of the IT Department is not a new prediction. It was first suggested in 2004 by Nicholas Carr in his book ‘Does IT Matter?‘, and again in 2007 when Chris Anderson published his ‘Black Wire – White Wire’ article, which described how corporate IT was being overtaken by consumer IT. Later, in January 2008, Nicholas Carr famously pronounced “The IT department is dead”, referring to the uptake of utility computing since his 2004 prediction.

Since then, others have made further observations about emerging IT trends that appear to strengthen those predictions. Today, around six hard trends are well established. They sit within an umbrella trend we described as ‘Externalization’ back in 2007. Later, in ‘Flash Foresight‘, Daniel Burrus explains how he identified many of the established technology trends and why they are ‘Hard’ trends rather than passing fads. More recently, in his book ‘Agile Architecture Revolution‘, Jason Bloomberg talks about understanding the enterprise as a Complex System – a System-of-Systems. His book is an architectural guide to help IT Departments respond to the Externalization trend and, at the same time, it highlights the need for a change in mindset within the IT community.

In parallel, John R. Rymer of Forrester Research coined the phrase ‘Business Technology’ (BT) to describe the ever-increasing reliance on information technology by businesses of all types to handle and optimize their business processes, and the need for a more integrated & holistic approach to the use of business-embedded information technology. Here’s what Wikipedia says about BT:

“The increasing use of the term business technology in IT forums and publications is due to an emerging school of thought that the potential of information technology, its industries and experts, has now moved beyond the meaning of the term. Specifically, information is seen by some as a descriptor not broad enough to cover the contribution that technology can make to the success of a business or organization.”

Focus on Externalization and BT.


Acceptance of the Externalization trend, and a deep appreciation of the ‘Business Technology’ theme, provide the canvas on which we can sketch out the ways in which the IT Department must change to survive. Probably most importantly, the CIO needs to find the time to think strategically: to move from ‘Whac-A-Mole’ IT management to strategic Business-Technology leadership. Thinking strategically means the CIO needs to develop a deep appreciation of the various ‘markets’ his or her team serves, as both a supplier and a broker of services to those markets. Such markets exist within and outside the enterprise and are made up of customers, suppliers, intermediaries and other stakeholders, all with differing values and requiring different sensitivities to protect and enhance trust relationships.

How to prepare for the inevitable change.

At my current company, we use the ‘BT’ label to help position our five-year vision & strategy. It helps frame the discussion about the many areas of change required: cultural, technological, procedural, organizational & regulatory. BT is not, however, a new name-tag for the IT department – it represents the new thinking required across the whole business. It might seem ironic, given the predictions, that it was our CIO who initiated the discussion. I suspect, however, this will often be the case: the CIO is frequently the only C-level executive who has a holistic understanding of both the breadth and depth of the business.

Back in May this year, I posted about the work we were doing to establish a BT Vision. This has since been developing gradually and is gaining acceptance across the IT senior leadership team and, more importantly, with C-Level executives.

Recently, I was invited to share, with a large multinational conglomerate, some of the more tangible changes we’re implementing. Our vision & journey towards ‘BT’, and our response to the ‘Externalization‘ trend, set the context for the discussion. Here’s the list of ‘contrasting behaviors‘ I shared:

15 ways to change the IT Department’s habits

Old Habit → New Habit
1. The department of ‘No’ → The department of qualified ‘Yes’
2. Products focus → Services focus
3. Internal SLAs → Internal/external services ecosystem – SLA-chains
4. IT strategy → Integrated BT strategy
5. Cyber security tooling → Cyber security culture
6. CAPEX-first mentality → Balanced, outcome-focused investment
7. Solution-focused technology architecture → Adaptive, value-focused Enterprise Architecture
8. Product-standardized IT portfolio management → Principle-led architecture & standards-based integration
9. Governance of large IT projects → Company-wide, joined-up BT governance
10. IT Cost Centre management → BT services broker, innovation lead and advisory
11. Internal procedures & methods → Internal & external engagement
12. ‘Family’ of IT vendors → Consumer-driven ecosystem of suppliers
13. Gadget-focused innovation → Customer-story-based innovation
14. Periodic, internally-focused measurement → Constant, external & internal feedback loops
15. Technology focus → Focus on information value & risk

We’ve made good progress on many of the 15 points, but I’d say the most compelling for the business are: 1) the department of qualified ‘Yes’, 4) integrated BT strategy, 5) cyber security culture, and 13) customer-story-based innovation. I’m pleased to see these seem to resonate with the observations made in the HBR article mentioned above.


Will the IT department be dead by 2020?

Will the need for a central IT department go away by 2020? No, not in our case at least, but it does need to rapidly adapt and evolve, and we believe those that don’t will become side-lined. We are seeing, however, other businesses taking a different view: there does seem to be a dangerous, frustration-with-the-ITD pattern emerging, where IT departments are being split up into LOB sub-teams without considering the need for holistic, enterprise-wide thinking.

Maybe the IT Department label won’t exist by 2020, but many organizations will still require a team that focuses on the value of the digitally enabled world and balances agility, resilience, security and cost across the whole enterprise. For these companies, dispersed and unbridled IT (the use of consumer-led technologies and commoditized services) would lead to unprecedented levels of business risk: operational, financial, commercial, reputational and regulatory. [post addendum: FUD alert! See my response to Nick Gall’s comment].

My hunch is that, once the hype has died down, the Externalization trend will actually strengthen the need for a strategic, less operationally-focused ‘Office of the CIO’ within organizations. I’m sure, however, that by 2020 such an entity will be unlike today’s operationally-focused IT shop.

Addendum

Since posting, I was asked where VPEC-T fits in the context of the move towards BT. VPEC-T is a tool for the sense-making of complex systems-of-systems. It deals with the complexities of plurality (e.g. multiple value systems and multiple types of event). Moreover, it is used for sharing stories about such systems, which helps: reach common understanding, ensure completeness and make trust explicit. These considerations will be increasingly important in the diverse and emergent world of BT. It’s most applicable to ‘New Habits’ 5, 7 & 12–15.
Here’s an example of the preparation for a VPEC-T workshop, based on a real session I ran earlier this year – it might help explain the plurality need.
