Achieving agility with data virtualization (2/2)

This posting is the follow up from a previous post where we described the need for agility as well as a setting where we believe that data virtualization techniques can help.

Following Rick van der Lans’s definition, we see data virtualization as a group of technologies that makes a heterogeneous set of databases and files look like one integrated database, which has some commonality with how many people see the concept of a federated database. As we will see shortly, though, data virtualization picks up where “traditional” data federation stops and provides organizations with a rich set of techniques for tackling data integration issues:

Starting at the bottom, we see a series of source systems (or at least, the data part of them). Their data structures are replicated and wrapped in the data virtualization server: the virtualization software discovers the data structures in the source systems and makes them available as virtual tables. This achieves the notion of federation mentioned earlier. If desired, the actual content of the source systems may be (partially) cached, which has the advantage that queries can be handled largely within the virtualization environment, preventing heavy workloads on the source systems.
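The wrap-and-cache idea above can be sketched in a few lines of Python. This is a deliberately minimal illustration, not a real virtualization server: two invented “source systems” (a CRM and a billing database, simulated with in-memory SQLite) are exposed as virtual tables behind one catalog, with an optional cache that spares the source from repeated queries.

```python
import sqlite3

class VirtualTable:
    """A query wrapped around a source system, with optional caching."""
    def __init__(self, conn, query, cached=False):
        self.conn, self.query, self.cached = conn, query, cached
        self._cache = None

    def rows(self):
        if self.cached:
            if self._cache is None:              # populate cache on first access
                self._cache = self.conn.execute(self.query).fetchall()
            return self._cache                   # later reads never hit the source
        return self.conn.execute(self.query).fetchall()

# Two simulated source systems (names and data invented):
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")

billing = sqlite3.connect(":memory:")
billing.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
billing.execute("INSERT INTO invoices VALUES (1, 100.0), (1, 50.0), (2, 75.0)")

# The 'virtualization server': one catalog of virtual tables over both sources.
catalog = {
    "customers": VirtualTable(crm, "SELECT id, name FROM customers"),
    "invoices":  VirtualTable(billing, "SELECT customer_id, amount FROM invoices",
                              cached=True),      # cache to protect the source system
}

print(catalog["customers"].rows())
print(catalog["invoices"].rows())
```

A consumer only ever talks to the catalog; whether rows come live from the source or from the cache is invisible to it, which is exactly the decoupling the text describes.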

Based on this virtualized ‘foundation’ layer, it is fairly straightforward to build new layers of virtual tables on top. This allows for building data structures that are close to the needs of end users (e.g. star schemas), and it makes it easy to integrate data and apply transformation and integration rules. In practice we increasingly see virtualized data warehouses, master data management hubs, and so on.
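The layering can be illustrated with a minimal sketch (all names and data invented): two ‘foundation’ virtual tables, represented here as plain functions standing in for wrapped source tables, are composed into a derived, report-ready structure without copying any data into a new physical database.

```python
def customers():
    """Foundation layer: mirrors a CRM table (customer_id, name, country)."""
    return [(1, "Acme", "DE"), (2, "Globex", "NL")]

def orders():
    """Foundation layer: mirrors an order system (customer_id, amount)."""
    return [(1, 100.0), (1, 50.0), (2, 75.0)]

def sales_by_country():
    """Derived layer: integration and transformation rules, evaluated on demand."""
    country = {cid: c for cid, _, c in customers()}
    totals = {}
    for cid, amount in orders():
        key = country[cid]
        totals[key] = totals.get(key, 0.0) + amount
    return sorted(totals.items())

print(sales_by_country())   # aggregated view, computed live from the sources
```

Changing the derived layer means editing a definition, not migrating a database, which is where much of the agility discussed below comes from.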

One aspect of agility should be obvious from this discussion: development and data integration within the virtualized environment can be considerably more agile than in a traditional setting. Requirements and specifications (e.g. metadata management) can still be used, but rather than a long build-and-deploy cycle, results are available immediately as virtual table structures. As a result, it is easy to learn while doing in quick, highly interactive cycles with end users: short sprints deliver a working prototype, and later adjustments are easily made without wasting valuable development hours.

This also shows that such a system is itself agile:

  • It is fairly easy – and, most of all, fast – to adapt to ever-changing business needs for information.

  • Deploying changes to a virtualized data model is easier than changing the data structure of a physical database, which can entail all kinds of data conversion issues.

  • Dealing with the impact of changes is easier, since no software is adapted, lowering the risk of disruptions and keeping the impact localized.

  • Integration of the solution is simple, since existing interfaces remain stable.

Built-in features for security, auditing, logging, and monitoring (e.g., detecting when things change in the source systems) give the organization the means to stay in control of its data. In short:

  • Data virtualization decouples access to data from the source systems. This allows further manipulation of data without impacting the original systems.

  • Virtualized access to structured and unstructured data allows for uniform querying. Caching avoids heavy workloads on the original transaction systems.

  • Data access can be optimized for various stakeholders with different needs, concept definitions, permissions etc.

  • Virtualization allows for rapid, incremental development & delivery of information with minimal impact on source systems.

This mechanism can be considered a key resource for agility, supporting key activities in the organization. A virtual data warehouse with rapid, agile development of new data structures makes it easier to accommodate management that increasingly seeks data-based, rationalized decision making to complement creative strategic skill. Suppose, for example, that there is a feeling that international markets can be conquered with a cross-selling strategy: offer one product at a discount to generate interest in high-end services that will generate revenue. Running the numbers based on historic sales in countries where the organization is already active must be swift. Moreover, when executing this strategy, the system should be flexible enough to monitor actual investments and revenues in near real time.
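To make the “running the numbers” step concrete, here is a hypothetical sketch: with entry-product and service sales exposed as virtual tables (here invented, in-memory lists), estimating the cross-sell conversion rate per country is a few lines of analysis rather than a new ETL project.

```python
# Invented sample data standing in for two virtual tables.
entry_sales = [          # (customer_id, country) - buyers of the discounted product
    (1, "DE"), (2, "DE"), (3, "NL"), (4, "NL"), (5, "NL"),
]
service_sales = [        # (customer_id, revenue) - buyers of the high-end service
    (1, 900.0), (3, 1200.0), (4, 800.0),
]

converted = {cid for cid, _ in service_sales}

# Per country: (customers who also bought the service, total entry-product buyers)
by_country = {}
for cid, country in entry_sales:
    bought, total = by_country.get(country, (0, 0))
    by_country[country] = (bought + (cid in converted), total + 1)

for country, (bought, total) in sorted(by_country.items()):
    print(f"{country}: {bought}/{total} cross-sell conversion")
```

The same derived table could then be refreshed continuously against the live sources to monitor the strategy in near real time.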

The other obvious need for system agility in the field of data lies with compliance and regulation. Many industries are heavily regulated, for example finance and healthcare, and rules for compliance reporting change all the time. In and of itself this need not be an issue. However, we often see that concept definitions change slightly, derivations and key calculations become more complex, other types of information are required, and so on. Here, too, the rapid development cycles and flexibility of a virtualized environment pay off.

If you have any questions or suggestions: either drop a note or get in touch via email!



Scaling Agile for the Enterprise

Guest post by Tim Mattix, Mario Gouvea and Vikram Purohit. With the ever-evolving software development landscape, large enterprises are increasingly “going Agile.” Agile is applicable to many scenarios: Extreme Programming (XP), for example, zeroes in on software engineering while wrapping in novel approaches to boost quality, and Scrum is the most widely adopted agile method. While both of these frameworks work well for software development teams, Agile is even suitable for less obvious initiatives, such […]

The Value of Enterprise Architecture in Managing Risk, Compliance and Security

In my first blog post of 2014, I described how enterprise architecture delivers value in its relationship with other disciplines within the enterprise. I showed you the picture below, outlining this context of EA, and described the main focus areas of BiZZdesign’s EA service line in 2014:

  1. Realizing the enterprise strategy.

  2. Supporting strategic investment decisions.

  3. Fostering enterprise agility.

  4. Leveraging technological opportunities.

  5. Controlling risk and ensuring compliance.

 

Figure 1. Enterprise Architecture in Context

In a subsequent blog post on value-driven enterprise architecture, I focused on the right-hand side of this picture, zooming in on the first three of these topics, and addressed how EA provides business value by connecting the dots between strategy, capability-based planning, portfolio management, program management, and operational delivery and change processes.

Let us now have a look at the left-hand side of the figure, in particular the value of EA in managing risk, compliance and security in the enterprise (nr. 5 in the figure).

Strategic insight into risk

To be in control of the risks you run, the first thing you need is strategic insight into your organization from a risk management perspective. This requires a consistent and up-to-date overview of your current landscape of products, processes, applications, and infrastructure, and all related risk & security aspects. Without such an overview, you are flying blind. C-level management cannot fulfill its responsibilities without knowing what the main risk-related issues are.

Having an understanding of these relationships also helps you assess the effects of business decisions. It gives the business clear insight into the enterprise risks related to, for example, introducing new products and initiatives, outsourcing business processes or IT systems, or assimilating another organization after a merger. Thus, they can weigh the risk appetite of the enterprise against the potential consequences.

Moreover, the propagation of risks throughout the enterprise is of great concern to executives and operational management. Risks in one area may entail risks in another. For example, what are the potential ripple effects of a system failure, break-in, power outage, fraud or other mishap on critical business processes, services, clients, partners, markets, …? Enterprise architecture helps you gain insight into these relations and dependencies, and thus avoid or mitigate potential disasters.
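As a sketch of such dependency analysis, assume the EA model has been reduced to a simple “depends on” graph (all element names below are invented). The transitive ripple effect of a single failure is then an ordinary graph traversal:

```python
from collections import deque

# "X depends on Y" edges: if Y fails, X is impacted.
depends_on = {
    "claims-process":  ["claims-app"],
    "claims-app":      ["core-db", "auth-service"],
    "customer-portal": ["auth-service"],
    "auth-service":    ["core-db"],
}

def impacted_by(failed):
    """Return every element transitively affected when `failed` goes down."""
    # Invert the edges: failure flows from a dependency to its dependants.
    dependants = {}
    for element, deps in depends_on.items():
        for dep in deps:
            dependants.setdefault(dep, []).append(element)
    hit, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for dependant in dependants.get(node, []):
            if dependant not in hit:
                hit.add(dependant)
                queue.append(dependant)
    return sorted(hit)

print(impacted_by("core-db"))   # everything reachable from the failing database
```

In practice the graph would be extracted from an architecture model (e.g. an ArchiMate repository) rather than hand-written, but the analysis is the same.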

Business-driven security and risk management

A related area in which EA provides tangible business value is in aligning security and risk management with business goals and objectives. Many organizations find it difficult to decide on the right level of security measures, and business managers often see this as a technical issue that is left to the IT people. They, in turn, don’t want to take any risk and create gold-plated solutions that are quite secure but also very expensive (and often rather unfriendly towards users).

Better alignment between business goals, architectural decisions and technical implementation helps the organization to spend its security budget wisely, focused on business-relevant risks. This may even lead to both cost savings and lower risks at the same time, because you do not invest in overly strong security measures for unimportant stuff, leaving more budget to protect the things your enterprise really cares about.

Moreover, security is not something that can be ‘tacked on’ afterwards. Inherently insecure architectures and systems are very difficult to fix later on. Rather, security and risk management should be designed in from the start, using the business goals of the enterprise to decide on appropriate measures.

Regulatory compliance and auditing

Another common reason for having a mature EA practice, especially in heavily regulated sectors such as banking and insurance, is regulatory compliance. Central banks and other regulatory bodies mandate or at least strongly recommend that financial institutions have a well-established EA practice, to ensure they are in control of their operations. They may even audit these architectures or use them in other ways to assess the risks the organization runs. Of course, internal auditors, CISOs, and risk managers benefit from using EA artifacts as well. The insights into enterprise-wide relations and dependencies that these provide are important inputs for their tasks.

Implementing standards and policies such as SEPA, Solvency II, Basel III and others requires enterprise-wide coordination, visibility and traceability from boardroom-level decisions on e.g. risk appetite of the organization, down to the implementation of measures and controls in business processes and IT systems. Enterprise architecture as a practice, and enterprise architecture models that capture these relations, are indispensable to manage the wide-ranging impact of such developments.

Next steps

To benefit fully from the use of enterprise architecture in the context of security, compliance and risk management, we suggest that you focus on the following:

  • Align security and risk management with business strategy. Always view security and risk measures from the perspective of the business value they add.

  • Capture and visualize risk and security aspects of your organization. Visualize hazards, risks and mitigation measures in relation to the overall architecture and business strategy.

  • Measure and visualize the impact of risks and use these insights for decision making. Visualize data from e.g. penetration tests and use this to decide at the business level about necessary IT measures.

  • Prioritize security projects. Calculate the business value and impact of security projects and use this to make a prioritization of IT measures.

  • Use effective tool support. Software for fast, clear modeling, analysis and visualization provides the necessary insights; BiZZdesign Architect, for example, is our easy-to-use, powerful tool for enterprise architecture and enterprise risk & security management.
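The prioritization step above can be illustrated with a deliberately simplified sketch (all project names and figures invented): rank candidate security projects by estimated risk reduction per unit of cost, so the budget flows first to the measures protecting what the enterprise really cares about.

```python
projects = [
    # (name, estimated annual risk reduction in EUR, cost in EUR)
    ("patch-legacy-portal", 400_000,  50_000),
    ("encrypt-backups",     150_000,  30_000),
    ("mfa-rollout",         600_000, 120_000),
]

# Rank by risk reduction per euro spent, highest first.
ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)

for name, benefit, cost in ranked:
    print(f"{name}: {benefit / cost:.1f}x return")
```

Real prioritization would of course weigh more than a single ratio (dependencies between measures, regulatory deadlines, residual risk), but making the business value explicit is the essential step.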


The Value of Reference Architectures

In my previous blog post I wrote about value-driven enterprise architecture and its relations to different disciplines within the enterprise. In this blog, I want to focus on the added value of using reference architectures.

What are reference architectures?

Reference architectures are standardized architectures that provide a frame of reference for a particular domain, sector or field of interest. Reference models or architectures provide a common vocabulary, reusable designs and industry best practices. They are not solution architectures, i.e., they are not implemented directly. Rather, they are used as a constraint for more concrete architectures. Typically, a reference architecture includes common architecture principles, patterns, building blocks and standards.

Many domains have defined their own reference architectures. Well-known examples include:

  • the BIAN service landscape for the banking industry;

  • the ACORD Framework for the insurance industry;

  • the eTOM business process framework for the telecommunications industry from the TMforum;

  • various government reference architectures, for example the Dutch NORA and its ‘daughters’, the US FEAF or the Australian AGA;

  • defense architecture frameworks such as NAF, DoDAF and MODAF;

  • reference architectures for manufacturing and supply chains such as ISA-95 and SCOR.

Most of these architectures include the common business functions/capabilities and business processes in a domain. Next to that, they may include for example common data models, communication standards and exchange formats, and sometimes even common software building blocks and other reusable assets.

eTOM framework in ArchiMate

Why use reference architectures?

So what is the value of using such reference architectures, and why and when should you employ them?

First of all, reference architectures provide a frame of reference that helps you to get an overview of a particular domain and they provide a starting point for your own enterprise architecture effort. They provide you with basic structures so you do not have to reinvent the wheel (which often turns out to be square anyway). Reference architectures are most valuable for those aspects and elements of your organization on which you do not compete with others.

For example, the business functions of a typical insurance company are largely similar to those of its competitors, as are many of its business processes. Competitive differences will most likely be in its products, pricing, customer segments, and customer relationships. Reusing industry best practices provided by reference architectures ensures that you are not behind the curve on these non-competitive aspects. We also see this in the implementation of many IT systems, where vendors such as SAP provide reference processes for large parts of an organization. Your accounting process, for example, is seldom a competitive advantage.

A second reason for using reference architectures is interoperability. In our increasingly networked world, organizations need to connect and cooperate with all manner of other parties. The standards and building blocks provided by reference architectures facilitate these connections. A related benefit is that using standards improves flexibility, because it is easier to exchange building blocks that connect via standardized interfaces; vice versa, it is much easier to develop standards if the building blocks themselves are standardized. LEGO is a perfect example, as my colleague Bas van Gils described in his blog recently.

This then brings us to a third reason for using reference architectures: mergers & acquisitions and outsourcing. If two parties speak the same language, use the same standards, and recognize the same boundaries between functions, processes and/or systems, it will be much easier to recombine their elements in new ways.

A fourth reason for using reference architectures is to facilitate benchmarking within your industry. Often, the differences between companies are not in the design of e.g. their business processes, but in their execution. Using reference designs makes it much easier to compare those execution results.

Benchmarking leads us to a fifth reason why reference architectures are important: regulatory compliance. Often, reference architectures are prescribed (or at least strongly recommended) by regulators. Accounting principles, practices and processes, for example, are ever more standardized and mandated. This leads to business reporting standards, even down to the level of exchange standards such as XBRL.

Another example is given by the Wijffels committee on the structure of Dutch banks, installed by the Ministry of Finance and the Dutch National Bank at the request of the Dutch Parliament. It has published a report that explicitly recommends that banks use industry standards such as BIAN. The context of this report is the need for decommissioning and breaking up banks in case of financial disaster (the so-called ‘living wills’). Structuring a bank along the lines of a reference architecture such as BIAN’s may certainly help in such a case. These issues are also addressed by the Dodd-Frank Act in the US and the new ECB Resolution Mechanism in the EU, so we may expect similar guidance from those sources.

How to use reference architectures?

Before you decide to use a reference architecture, some conditions should be fulfilled. First of all, a reference architecture should be community-based. Users, not vendors, should decide on best practices, and the architecture should be actively maintained by the user community. The world changes, and so should your reference architectures.

Such an active and open community process is ideally complemented by the use of open standards in describing the architectures. For example, the descriptions of the Dutch government reference architectures are largely based on the ArchiMate standard. BiZZdesign can provide many such reference architecture models out of the box.

The use of a reference architecture in an organization also requires governance: the organization should really commit to its use and this should be ‘enforced’ in some way. Reference architectures are only of value if people are really using them as intended and actually follow their guidance, otherwise the whole idea of reusing industry best practices breaks down.

Finally, your reference architecture of choice should provide true, actionable guidance. General architecture principles are not enough. Actual structure, for example in terms of business functions or processes, building blocks and standards, is needed to provide you with a useful backbone for your own architecture efforts.

Using reference architectures does not imply that you lose all your design freedom. Rather, you focus that freedom on those aspects of your enterprise where you make a real difference. That is where you as an architect can add the most value!


Aligning the Infrastructure Architecture with the Systems Application Architecture

Earlier this year we launched a new team within IT Services: the Technical Architecture Virtual Team. I lead this team, reporting via the Assistant Director of IT Services for Services Development (the team’s Sponsor) to the Senior Management Team. The team is made up of a small number of technical experts within the department, representing […]
