On McKinsey’s Two-Speed IT Architecture and Digital Business Models

In “A two-speed IT architecture for the digital enterprise”, McKinsey’s Oliver Bossert, Chris Ip, and Jürgen Laartz stated that “delivering an enriched customer experience requires a new digital architecture running alongside…

The Cloud Is Disrupting Hadoop

Forrester has seen unprecedented adoption of Hadoop in the last three years. We estimate that firms will spend $800 million on Hadoop software and related services in 2017. Not surprisingly, Hadoop vendors have capitalized on this: Cloudera, Hortonworks, and MapR have gone from “Who?” to household brands in the same period.

But like all good runs, this one won’t last forever, and the major force exerting pressure on Hadoop is the cloud. In a recent report, The Cloudy Future Of Hadoop, Mike Gualtieri and I examine the impact the cloud is having on Hadoop. Here are a few highlights:

Firms want to use more public cloud for big data, and Hadoop seems like a natural fit. We cover the reasons in the report, but it looks like a match made in heaven. Until you look deeper…

Hadoop wasn’t designed for the cloud, so vendors are scurrying to make it relevant there. In the words of one insider, “Had we really understood cloud, we would not have designed Hadoop the way we did.” As a result, all the Hadoop vendors have strategies, and very different ones, for the cloud, where object stores and abstract “services” rule.

Cloud vendors are hiding or replacing Hadoop altogether. AWS Athena lets you run SQL queries against big data in S3 without worrying about server instances. It’s part of a trend toward “serverless” offerings; Google Cloud Functions is another example. Databricks runs Spark directly against S3. IBM’s platform uses Spark against Cleversafe object storage. See the pattern?
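To make that pattern concrete, here is a minimal PySpark sketch of querying data in an object store directly, with no HDFS cluster in the data path. This is an illustration of the pattern, not any vendor’s implementation; the bucket name, file layout, and column names are hypothetical, and it assumes the hadoop-aws (S3A) connector is on the classpath and AWS credentials are configured.

    # Minimal sketch: SQL over data sitting in S3, with no HDFS in the data path.
    # Assumes the hadoop-aws (S3A) connector and AWS credentials are set up;
    # the bucket and schema below are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("s3-direct-query").getOrCreate()

    # Read Parquet files straight from the object store.
    events = spark.read.parquet("s3a://example-bucket/events/")  # hypothetical path

    # Query the object-store data with plain SQL, Athena-style.
    events.createOrReplaceTempView("events")
    spark.sql("""
        SELECT event_date, COUNT(*) AS event_count
        FROM events
        GROUP BY event_date
        ORDER BY event_date
    """).show()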

As more firms tire of Hadoop’s on-premises complexity and move to the public cloud, they will look to take their Hadoop stacks with them. This means that the Hadoop vendors will start to see their revenue shift from on-premises to the cloud.


The Open Group TOGAF® User Group Meeting Summary

The Open Group TOGAF® User Group meeting, held in San Francisco on January 30, 2017, focused on “Create vs. Reuse Architectures.” It addressed whether enterprise architects should focus more on reusing existing architecture models than on creating new ones to meet their needs.

Consolidation in Data Governance Tooling Emphasizes Its Importance for Future Data Usage

While data governance has been a business need for years, it is now moving to center stage as a business concern. Driving this shift are new regulations and new requirements around consumer data ownership and privacy, as well as business interest in data monetization. Two of the most important regulations are the EU’s General Data Protection Regulation (GDPR) and BCBS 239 (Basel Committee on Banking Supervision regulation 239). Forrester recognized this change three years ago when it described the evolution of data governance from ‘data input quality’ to ‘data usage’, which we call Data Governance 2.0. Some emerging data governance vendors, like Collibra and GDE, moved aggressively to address the new requirements of Data Governance 2.0. But the larger established vendors (IBM, Informatica, SAS, and SAP) were moving more slowly as they prioritized investments in platforms supporting Systems of Insight.

Two just-announced acquisitions demonstrate that the larger established vendors now recognize the need for renewed data governance offerings:

· Informatica’s purchase of the Diaku Axon platform. The acquisition, announced on February 22, adds to Informatica’s current data governance execution capabilities (data quality, MDM, security/masking) more business-oriented capabilities, such as vertical knowledge (e.g., finance) and support for regulations such as GDPR and BCBS 239.
