1 year, 5 months ago

Big Data & Analytics in Northern Virginia, DC Area

Big Data, Analytics & Data Science are taking off as regional economic development catalysts – and outcomes – around the world, and particularly so here (DC/MD/Northern Virginia) in what some call the “Big Data Capital” of the US (given the proximity and engagement of so many commercial, federal/state government, nonprofit and startup organizations in this field). A proliferation of local “Meetup” groups attests to this, as do the events taking place in the area – with full support and sponsorship by big and small companies.

Two quick examples:

1) The Northern Virginia Technology Council’s “Big Data & Analytics” Committee is sponsoring an upcoming meeting about “How Walmart and local Virginia companies use Big Data & Analytics for Business Growth, Increased Revenues”. Find out more and register here – and while you’re there, check out all the Northern Virginia Big Data Committee events and information (Oracle is a member).

The program will begin with keynote remarks from the Senior Director of Walmart Technology, who will discuss Walmart’s recent expansion in Northern Virginia, how Walmart Technology uses Big Data to support the company and its goals, and what he sees as the future of Big Data in our region. The program will continue with a panel discussion featuring four local Virginia companies (Logi Analytics, CustomInk, Zoomph and Neustar) discussing what they see as the opportunities and challenges in using #bigdata and #analytics to grow their businesses. Sponsors range from large companies to startups, as well as local economic development agencies and universities.

2) Oracle and the George Mason University Volgenau School of Engineering recently held a “Big Data Symposium” this January, presenting a day filled with speakers, students and data scientists sharing their knowledge, their research, and their perspectives regarding “Breakthroughs in Big Data Analytics in the Public Sector”. Follow this link to the video presentations.

1 year, 11 months ago

DATA Act IT Infrastructure – Platform Consolidation, Virtualization & Collaborative …

Momentum and activity regarding the DATA Act are gathering steam, and implementation is off to a great start. The DATA Act directs the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) to establish government-wide financial reporting data standards by May 2015. The act also requires agencies to begin reporting financial spending data using these standards by May 2017 and to post spending data on USASpending.gov or an alternate system by May 2018.

According to many reports, including a recent GAO testimony, “OMB and Treasury have taken several significant steps towards meeting these requirements including the release of 27 discrete data standards, draft technical specifications, and implementation guidance intended to help federal agencies meet their responsibilities under the act. However, given the government-wide scope of the technical and cultural reforms required by the DATA Act, much more remains to be done…OMB and Treasury have proposed standardizing 57 data elements for reporting under the act. They released 15 elements in May 2015, a year after the passage of the act, and have since released 12 more. Eight of these were new elements required under the DATA Act; the balance of the first 15 data elements were required under the Federal Funding Accountability and Transparency Act of 2006 (FFATA). Officials told us that they expect to complete the process by the end of the summer.”

Reaching the 2017/2018 milestones, however, will require IT infrastructure change. Some change may be simple or take advantage of existing modernization efforts; much change will be very difficult, complex and/or costly. Strategies to prepare for this change, and catalyze it, are not yet part of the government-led discussion – but they are now part of the industry-led discussion, per this new Executive Report from ACT-IAC, co-authored by Oracle: “The DATA Act – IT Infrastructure Guidance Change Facilitation for IT Departments”.

At this time, considerable effort and oversight is focused, and rightly so, on the requirements themselves and the governance around those requirements, i.e. (as the GAO report emphasizes) things like:

  • Data Standards Governance
  • Identification of Data Programs
  • Stakeholder Collaboration

Yet this effort only addresses the first few steps towards solution strategy, design and implementation (and therefore information technology infrastructure), as Treasury recommends in its DATA Act Playbook: “To assist agencies with implementation, Treasury recommends eight key steps that can help agencies fulfill the requirements of the DATA Act by leveraging existing capabilities and streamlining implementation efforts.”

The eight key steps are:

(The first two steps are well underway at many entities)

1. Organize Your Team: Create an agency DATA Act work group including impacted communities within your agency and designate a Senior Accountability Officer (SAO);
2. Review Elements: Review the list of DATA Act elements and participate in data definitions standardization;

(The following are not yet commonly underway)

3. Inventory Data: Perform an inventory of agency data and associated business processes and systems;
4. Design & Strategize: Plan changes to systems and business processes to capture financial, procurement, and financial assistance data;
5. Execute Broker: Implement a “broker” at the agency. The broker is a virtual data layer at the agency that maps, ingests, transforms, validates, and submits agency data into a format consistent with the DATA Act Schema (i.e., data exchange standards); a minimal sketch of such a pipeline follows this list.
6. Test Broker Implementation: Test broker outputs to ensure data is accurate and reliable;
7. Update Systems: Implement other system changes (e.g., establish linkages between program and financial data, and capture any new data); and
8. Submit Data: Update and refine the process (repeat steps 5-7 as needed).
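
To make the “broker” in step 5 concrete, here is a minimal, hypothetical sketch of such a pipeline: map legacy fields to standardized element names, validate, and submit. The element names, file layout and validation rules are illustrative assumptions, not the actual DATA Act Schema.

```python
# Hypothetical DATA Act "broker" sketch: map -> transform -> validate -> submit.
# Element names and rules are illustrative, not the real DATA Act Schema.
import csv
import json

# Map legacy agency column names to standardized data element names (assumed).
FIELD_MAP = {"award_id": "AwardIdentifier", "amt": "ObligationAmount", "vendor": "AwardeeName"}
REQUIRED = set(FIELD_MAP.values())

def transform(row):
    """Rename legacy fields into the standardized vocabulary."""
    return {std: (row.get(legacy) or "").strip() for legacy, std in FIELD_MAP.items()}

def validate(record):
    """Reject records missing any required standardized element."""
    missing = [f for f in REQUIRED if not record.get(f)]
    if missing:
        raise ValueError(f"missing required elements: {missing}")
    return record

def run_broker(csv_path, out_path="submission.json"):
    """Ingest a legacy extract and emit a submission-ready file."""
    with open(csv_path, newline="") as f:
        records = [validate(transform(row)) for row in csv.DictReader(f)]
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    return len(records)
```

A real broker layer involves much more – schema versioning, error reporting, submission protocols – but this map/validate/submit shape is the core of step 5.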

Acknowledgement of the change ahead for IT departments is out there, though…”Creating the linkages for these data (i.e. the mapping among data across various systems) is going to be one of the biggest challenges for many federal agencies. While it might seem like a relatively straight-forward task, the volume of data and the complexity of systems make it a significant challenge.” – Statement of David A. Lebryk, Fiscal Assistant Secretary, U.S. Department of the Treasury before the House Committee on Oversight and Government Reform Subcommittee on Information Technology and Subcommittee on Government Operations, United States House of Representatives, July 29, 2015

The ACT-IAC paper lays out three fundamental tenets for addressing this change – to help reduce, or more easily process, the volumes of data involved, and most importantly, to reduce the complexity of the system changes required.

Consolidation: the DATA Act mandate includes no new budget, so agencies must instigate or take advantage of the wealth of shared services and data management improvement or modernization programs already underway, in both government and industry, to reduce duplication and unnecessary IT management and integration complexity. Message: “standardization and consolidation initiatives are a priority, aligned via enterprise architecture tenets.”

Engaged governance: most public sector agencies are faced with generational data management change drivers already, from big data to secure mobile analytics requirements. This federal-led initiative provides a top-down, organizational imperative for actionable, cost-effective data governance across the entire community of data users and stewards. Message: “let’s get committed, transparent, and hands-on with data governance.”

Virtualization: the variety of data standards and processing maturity across all the stakeholders – from federal to state, local, and private recipients – is so great that the elements of a solution will require a great deal of abstraction from the legacy data stores, systems and acquisition plans that can’t easily be changed. Virtualization introduces a dynamic, agile layer between the existing IT infrastructure and new users with high, consumer-driven expectations. Message: “do no harm, but expose tangible value quickly.”
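
To illustrate the virtualization tenet, the sketch below exposes a legacy table through a standardized view, so new consumers query standard names while the legacy store stays untouched (“do no harm”). SQLite is used purely as a stand-in; the table and column names are illustrative assumptions.

```python
# Hypothetical "virtual data layer": a view exposing legacy data under
# standardized names, without changing the legacy store itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Stand-in for a legacy system of record that can't easily be changed.
    CREATE TABLE legacy_awards (awd_no TEXT, oblg_amt REAL, vndr_nm TEXT);
    INSERT INTO legacy_awards VALUES ('A-001', 125000.0, 'Acme Corp');

    -- The virtualization layer: standardized names over the legacy schema.
    CREATE VIEW standardized_awards AS
    SELECT awd_no   AS AwardIdentifier,
           oblg_amt AS ObligationAmount,
           vndr_nm  AS AwardeeName
    FROM legacy_awards;
""")

# New consumers see only the standardized view, never the legacy columns.
for row in conn.execute("SELECT * FROM standardized_awards"):
    print(row)  # ('A-001', 125000.0, 'Acme Corp')
```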

Contact me or anyone at Oracle for more information about challenges regarding IT Consolidation (including Data Integration, Master Data & Data Integration Services, i.e. “Broker” approaches), Collaborative & Engaged IT Governance, or IT Virtualization Strategies (from platform technologies to cloud services) – whether to prepare your agency for DATA Act compliance, or to simply advance and accelerate IT modernization altogether.

2 years, 4 months ago

All Data as a Service (DaaS/BDaaS) – Who’s Your D-a-a-S Enabler?

That’s where we’re headed, inexorably – you’d like to know what’s going on with your systems, what your customers or constituents need, or perhaps the latest metrics concerning device utilization trends during business events. And, you’d like this information (all of it, or lots of it) right now, in an easily consumable, visual, semantically-relevant way – to share with your community and to be automatically (or easily) ingested by your other systems or analysis tools. Secure & compliant, fast, portable, standardized if necessary, high quality.

But most of all, you’d like to pay only for the data and the way it’s delivered to you – not for a bunch of information technology products and services, hardware and software. You want data-as-a-service, as a consumer; i.e. explicit data units delivered via affordable service units. (Note the service deployment method might include Database-as-a-Service, i.e. DBaaS).

Or – you’re on the other side – you want to actually build the DaaS capability, to offer DaaS (or, perhaps a better term is a “Data Sharing Service” ) to your constituents or customers – as a provider.

There are three primary and distinct roles to consider, whether you’re building or buying DaaS – regardless of the type or characteristics of the data being exchanged: big data, open data, fast data, IoT/IoE data, metadata, microdata, multimedia content, structured, unstructured, semi-structured…ALL DATA.

  • The DaaS Consumer – who needs to acquire data from somewhere (in a way that shields them from the underlying technology concerns), and may then use it to develop information apps and services, or repackage the data to share further with others. The consumer assigns and realizes value from the service. (A minimal consumer sketch follows this list.)
  • The DaaS Provider – who actually builds, markets and operates the business service and categorized storefront (or catalog), and brokers or stewards the data quality & availability, data rights, licenses and usage agreements between the consumers and the original data owners.  The provider creates, shapes and deploys the opportunities for value-enablement of specific data assets.
  • IT Services Management  – who design, implement and operate the information and data management infrastructure the DaaS Provider relies upon – and manage the IT component and services portfolio this infrastructure includes. For example the databases, virtualization technologies, data access services, storage and middleware capabilities. (Note that “IT Services Management” may be a wholly 3rd-party role, as well as a role within the DaaS Consumer or Provider organizations – there may be 3 or more IT Services Management domains).
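
Here is a minimal sketch of the consumer role in practice: requesting discrete data units from a provider’s endpoint, fully shielded from the provider’s infrastructure. The URL, token and response fields are hypothetical assumptions, not a real DaaS API.

```python
# Hypothetical DaaS consumer: pay for data units delivered as a service,
# not for the provider's hardware, software or storage.
import json
import urllib.request

DAAS_ENDPOINT = "https://daas.example.com/api/v1/datasets/device-utilization"
API_TOKEN = "replace-with-issued-token"  # governed by the provider's usage agreement

def fetch_data_units(limit=100):
    """Request a batch of data units from the provider's catalog service."""
    req = urllib.request.Request(
        f"{DAAS_ENDPOINT}?limit={limit}",
        headers={"Authorization": f"Bearer {API_TOKEN}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for unit in fetch_data_units(limit=10):
        print(unit)  # consume, repackage, or feed into analysis tools
```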

There’s also a less distinct, more broadly relevant role – the DaaS Enabler, a.k.a. the “Enterprise Architect”, which can be a person, a role, or an organizational capability. The EA scope includes a heavy focus on enterprise “universal” information management and governance, infused (particularly in the Public Sector) with the currently in-vogue philosophies of SOA, Open Data, Mobility, Privacy-by-Design (PbD) and Cloud Computing. (Note that DaaS does not have to be delivered via a “cloud” deployment model – it’s equally applicable when delivered as a private data services virtualization platform, for example.)

Information management includes the entire lifecycle of “information as an asset” capabilities in an enterprise, and into the stakeholder ecosystem – from the data sources, their ingest and “staging/data quality”, to storage in various repositories and access via information & data services, user interfaces and ultimately information-sharing and digital engagement services.  (See more of Oracle’s “Enterprise Information Architecture” ).

The DaaS Enabler (as a person) might be known by other titles, like Chief Data Officer, Chief Information Officer, DaaS Architect, Information Architect – maybe even Chief Innovation Officer (focusing on data assets); regardless of the title, the experience and scope of attention are as described above, coordinated across all three service roles. EA skills are essential, because DaaS enablement includes people, processes, technology and information concerns.

Each service role (Consumer, Provider, IT Management) benefits from the DaaS Enabler, particularly because the maximum value each role can realize from its investment of effort and resources depends collaboratively on the others – and on acknowledgement of proven, trusted, pragmatic enterprise architecture principles.

Oracle is an example of a DaaS Provider  – empowering businesses and public sector organizations (i.e. DaaS Consumers) to “use data as a standalone asset and connect with partner data to make smarter decisions. Oracle DaaS is a service in Oracle Cloud that offers the most variety, scale, and connectivity in the industry, including cross-channel, cross-device, and known and anonymous data.” 

Oracle is also a DaaS Enabler – as an organizational capability, for DaaS Consumers, Providers and IT Services Management.  This includes people (Enterprise Architects, supporting organizations and communities), processes (DaaS engineering, deployment and operations models, case studies, tools and business services), technology (DaaS information and device technologies, tools and platforms, hardware and software) and information (data assets, reference architectures, knowledge capital).

Creating or using Data-as-a-Service (DaaS), Big Data-as-a-Service (BDaaS), or any other DaaS initiative, exposed to the public or entirely within your enterprise?  Identify your DaaS Enabler(s).

2 years, 6 months ago

Public Sector Digital Strategy Meets Public Safety – in Northern Virginia, Fairfax County

The Northern Virginia Technology Council’s (NVTC) Digital Strategy Committee (#nvtcdigstrat) recently held an event on Digital Strategy and Public Safety, featuring Richard R. Bowers – Chief, Fairfax Fire Department – which revealed several very interesting and useful challenges for the NOVA business community. Not least of these were the current challenges around focused, resourced digital strategy planning across the County constituent agencies, and among local jurisdictions.

Many targeted capabilities and improvements in “front-end” digital tools, outreach and engagement, plus initiatives on the “back-end” to handle system-specific data and information management, are certainly underway. But information-sharing among the public safety stakeholders – businesses, government and the public – remains a strategic planning, governance and education hurdle to address. In other words, a B2G2C digital strategy challenge.

[Photo: NVTC Digital Strategy with Fairfax Fire Chief Richard Bowers; L-R, Patrick Smaldore, David Yang, Shilo Thomas, Chief Richard Bowers, Ted McLaughlan]

“Simplicity” was a key concept – one that seems hard to maintain in first responder settings, particularly with the profusion of both new technology equipment and situational data. Chief Bowers illustrated the challenge with local EMS responders – en route or on scene – having to quickly use and interact with at least five separate kinds of equipment:

  • EPCR (Electronic Patient Care Reporting)
  • CAD (Computer Aided Dispatch)
  • MDC (Mobile Data Computers)
  • NCR (National Capital Region) Patient Tracking System
  • Mobile Phones, iPads and Radios

The variety of interfaces, variety of data granularity, and variety of authentication methods all add up to what can be a burdensome expectation on responders, which creates higher risk in areas of data quality and security, process coordination and mission efficiency. This hinders the ability of the entire responder community to deliver optimal outcomes – in spite of the number and types of technologies available and in use.

Furthermore, as the technologies available to both responders and the public become more pervasive and easier to operate and use – for collecting or contributing incident reporting, sensory feedback and overall situational awareness data – it’s simply too difficult to add these inputs to the mix in a way that avoids information overload, or worse, information degradation or errors. There’s no common information architecture that anticipates a proliferation of device inputs, mobile and social channels.

A standard “dashboard” visualization service for use in the field, to quickly access the various systems and growing information sources, was also mentioned as a highly desirable capability – particularly a dashboard to sensitive systems and protected information in a BYOD environment, i.e. on personal cellphones or tablets. A related need surfaced beyond the actual dashboards of the response vehicles and fire engines: a “heads-up” display of incident information on the windshield, particularly GPS and route data.

Fairfax 2015 Police and Fire Games

The Committee was also briefed on the upcoming World Police and Fire Games, coming to Fairfax County at the end of June this year (2015). It’s anticipated that over 12,000 athletes and family/guests (over 30,000 in all) will attend the games, and that Fairfax County will experience tremendous global attention, regional pride and local economic benefit from hosting the event. Over 2,000 volunteer slots remain open, along with many sponsorship opportunities for businesses, organizations or individuals. The Fairfax 2015 Games website ( http://fairfax2015.com/ ) maintains all information for athletes and all other participants, from local accommodations and event venues to a robust social community and online marketplace.

The NVTC Digital Strategy Committee looks forward to more collaboration sessions with the Northern Virginia public safety and First Responder community, and will continue to support information-sharing about B2G2C digital strategies.

Thanks to the NVTC event sponsors, speakers, coordinators and volunteers.


3 years, 4 months ago

Public Sector Open Data via Information Sharing and Enterprise Architecture

The title of this article is quite a mouthful – three very complex and broadly-scoped disciplines mashed together. But that’s what’s happening all over, isn’t it, driven by consumer demand on their iPhones – mashing and manipulating information that’s managed to leak through the risk-averse, highly-regulated mantle of the government’s secure data cocoon, and instantly sharing it for further rendering, visualization or actual, productive use. Mostly a “pull” style information flow, at best constrained or abstracted by public sector EA methods and models – at worst, simply denied.

This demand for open data, however, is rapidly exposing both opportunities and challenges within government information-sharing environments, behind the firewall – in turn a fantastic opportunity and challenge for the Enterprise Architects and Data Management organizations.

The recent “Open Data Policy” compels US Federal agencies to make as much non-sensitive, government-generated data as possible available to the public, via open standards in data structures (human- and machine-readable), APIs (application programming interfaces) and browser-accessible functions. The public (including commercial entities) can in turn use this data to create new information packages and applications for all kinds of interesting and sometimes critical uses – from monitoring the health of public parks to predicting the arrival of city buses, or the failure of city lights.
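
To make the bus-arrival example concrete, here is a minimal sketch of the pattern the policy enables: a public, machine-readable endpoint that anyone can query and build upon. The URL and JSON fields are illustrative assumptions, not a real agency API.

```python
# Hypothetical open data consumer: a public transit feed anyone can remix.
import json
import urllib.request

OPEN_DATA_API = "https://data.example.gov/api/transit/bus-positions.json"

def next_arrivals(stop_id):
    """Fetch live vehicle data and filter it to a single stop."""
    with urllib.request.urlopen(f"{OPEN_DATA_API}?stop={stop_id}") as resp:
        feed = json.load(resp)
    return [(bus["route"], bus["eta_minutes"]) for bus in feed["vehicles"]]

if __name__ == "__main__":
    for route, eta in next_arrivals("STOP-1234"):
        print(f"Route {route}: arriving in {eta} min")
```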

But there isn’t an “easy” button. And, given the highly-regulated and tremendously complex nature of integrated, older government systems and their maintenance contracts, significant internal change is very difficult – especially to meet what amounts to a “suggested” and unfunded (if long-term-ROI) mandate, without much in the way of clear and measurable value objectives.

That doesn’t mean there aren’t whole bunches of citizens and government employees ready, willing and enthusiastic about sharing information and ideas that clearly deliver tangible, touchable public benefit. Witness the recent “Open Data Day DC”, a yearly hackathon in the District of Columbia for collaborating on using open data to solve local DC issues, world poverty, and other open government challenges. Simply sharing information in ways that weren’t part of the original systems integration requirements or objectives has become a very popular – and in fact expected – behavior of the more progressive and (by necessity) collaborative agencies, such as the Department of Homeland Security (DHS).

The Information Sharing Environment is the nation’s most prominent and perhaps active federal information sharing model – though its mission really generates “open data” products for a closed community (vs. the anonymous public) – i.e. those that deal with sensitive national security challenges. For information sharing purposes, however, it’s a very successful and well-documented, replicable model for any context that includes multiple government entities and stakeholders (whether one agency or department, or a whole city or state). A pragmatic Information Sharing Environment – with enthusiastic, knowledgeable and authoritative champions – is also the first, most important leg of the stool that supports successful Open Data initiatives.

The second leg is Enterprise Architecture. Thinking of “open data” as the “demand” side of the equation, and “information sharing” as the conduit and source of “authorities” (i.e. policies, rules, governance, roles; internal and external) – EA can represent the “supply”. “Represent” the supply, not “be” the supply; the supply comprises the actual agency assets, including data, budget, contracts, personnel, etc. EA can inform regarding what data is available where and when, with what constraints, in what format or representations, via which IT interfaces, and via which business or technology resources. What can or needs to be changed, or what will be impacted, for the supply to meet the demand? Perhaps reusable IT exists that can be fully leveraged to meet the requirements – perhaps existing Oracle SOA, BPM and WebCenter assets?

The third leg of course is the inventory of data assets available – data assets include not only the raw data, but the metadata and registries, data access functions and APIs, data models and schemas, and the information technologies and systems that produce, manipulate, manage, protect and store the data. Plus really neat, useful commercial and open-source open data tools to help – whether they exist already or need to be created.

So it conceptually works as follows, very abstractly mirroring the well-known “People, Process, Technology” business model:

  1. People – An information-sharing environment and culture develops, enabling productive dialogue and guidance about proactively or reactively creating “open data” from enterprise assets to share with the public;
  2. Process – An Enterprise Architecture method and framework is leveraged, to define and scope the “art of the possible” in leveraging enterprise data assets, in terms that enable compliant program and engineering planning; and
  3. Information Technology – Useful, standards-based data products are cataloged and exposed to the public (better with some initial prototyping), meeting requirements and expectations, appropriately constrained by law, policy, regulations and investment controls.

Significant open data and open government initiatives can’t succeed and persist without all three perspectives – all three domains of organizational expertise.

3 years, 9 months ago

Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of “applied traction”. By this, I mean the EA activities – whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn’t work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project.

More and more organizations are now exploring and considering some degree of IT outsourcing – buying and using external services and solutions to deliver their IT and business requirements, vs. building and operating in-house, in their own data centers. The rapid growth and success of “Cloud” services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for “Software as a Service” (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution – are actually, ultimately the most cost-effective approach truly depends on the organization’s business and IT investment strategy.

This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a “Hybrid IT” solution) is well-served by leveraging EA methods. If an EA framework doesn’t exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA.

Why is this? For starters, the success of any complex IT integration project – spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership. Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”.

Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts.

Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice.

Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

4 years, 2 days ago

The Chief Marketing Technology Officer – CMTO – and the EA

Admittedly, it’s a bit of a leap – addressing the converging roles of the CIO and CMO (Chief Marketing Officer) with an Enterprise Architecture perspective, particularly when a CMO’s “Enterprise” ranges far and wide beyond the actual organization they serve. The Internet does extend now into outer space a bit, after all.

The classic scope of the Enterprise is that which is contained within both an operating and investment budget (OPEX and CAPEX) – the assets and resources that are produced, consumed and used under a common business (or mission) strategy.  Perhaps a company or agency, a department or line-of-business, or some other facility or organization segment. Enterprise Architects (EAs) most often influence these sorts of enterprise contexts.

A CMO certainly runs a business segment, investing in people, assets and consumable resources – most of which can be touched, inventoried or governed in some way to align with the segment’s business strategy (make revenue, deliver goods or services, be a public steward, contain costs, mitigate risk). A CMO’s “Enterprise”, particularly in this digital age, is also that of the online, networked audience. Social media profiles, data feed providers, branded communications channels, publisher networks and web app platforms – these also are part of the CMO’s “Enterprise”, and require some degree of monitoring, governance, investment control, integrated standardization.  Digital marketing campaign assets and advertisements aren’t usually just thrown to the wilds of the Interwebs (unless they are) – they’re carefully planned, tested, optimized, controlled, monitored and analyzed – both their original forms and any derivations.

Note that, for purposes of this blog, the “CMO” is readily compared to the “Government Services PR Lead” or “Constituent Relationship Communications Lead” – or basically any other leadership position in charge of outreach, communications and basically marketing of Public Sector capabilities or services.

“Traditional” EA doesn’t seem to address the Internet of things, stuff and services as something to be modeled, or deemed compliant, or aligned with standard reference frameworks. This isn’t unlike trying to apply one EA’s influence across an SOA interface boundary – while there are certainly very useful, open standards for both to leverage in delivering SOA success, one organization’s EA model compliance and content isn’t necessarily usable or useful to another organization.  

Can or should one’s Enterprise Architecture scope and framework be applied to all those 3rd-party Internet-hosted products and services a CMO relies upon? Why not – particularly if this “External Interactive Marketing” business domain is scoped according to some kind of “services taxonomy” (which may well have a parallel definition back within the organization). For example, “Data Publishing” services (like Equifax), “Search” services (like Google), “Information Sharing” and “Community Management” services (like Facebook and LinkedIn). While these Internet capabilities aren’t owned by the organization, how they’re used can certainly be modeled and approached from the same architectural principles, standards and experience as already found within the organization. A minimal taxonomy sketch follows.
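
The sketch below catalogs those same service domains as a simple data structure; the structure and entries are illustrative assumptions about how an EA might model and govern services the organization uses but doesn’t own.

```python
# Hypothetical "External Interactive Marketing" services taxonomy:
# 3rd-party Internet services grouped by service domain.
from dataclasses import dataclass, field

@dataclass
class ExternalService:
    name: str
    provider: str
    governance_notes: list = field(default_factory=list)

EXTERNAL_MARKETING_TAXONOMY = {
    "Data Publishing": [ExternalService("Consumer data feeds", "Equifax")],
    "Search": [ExternalService("Organic & paid search", "Google")],
    "Information Sharing": [ExternalService("Brand pages & campaigns", "Facebook")],
    "Community Management": [ExternalService("Professional communities", "LinkedIn")],
}

# The organization doesn't own these services, but how they're used can be
# modeled and governed like any internal architecture domain.
for domain, services in EXTERNAL_MARKETING_TAXONOMY.items():
    for svc in services:
        print(f"{domain}: {svc.name} ({svc.provider})")
```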

Enter the “Chief Marketing Technology Officer” (CMTO), a role that combines digital marketing practice and Internet services technology knowledge, with the classic IT investment, management and operations knowledge of a CIO (or CTO). The CMTO not only understands what’s necessary to secure and control information within his organization, but also understands what does or can happen to this information on the public Internet – planned or not.  
Below is a proposed standard “Domain Reference Architecture” for the CMTO role, depicting also the intersection (and expansion) of the traditional EA role.

[Figure: Chief Marketing Technology Officer Domain Reference Architecture]

Helping the CMTO apply architectural principles, governance and repeatable methods for the information lifecycle external to the organization – that’s a worthwhile and appropriate role for the Enterprise Architect…and may become all the more relevant as programs and lines-of-business holistically outsource their information management capabilities to 3rd-party providers and cloud services. Full-scope alignment of the EA practice to the CMO/CMTO’s domain is probably inevitable, as more industry analysts point to marketing departments’ rapidly growing, often dominant demand on their organizations’ IT investment portfolios.

As written on the Oracle Social Spotlight Blog, “CMOs must see the science behind the art. CIOs must see the art behind the science”.  EAs must align the art and science to meet the business case.

4 years, 1 month ago

An Integrated Electronic Health Record Needs Enterprise Architecture for Communicating …

A lot of activity and progress is underway around the world right now, and has been for some time, regarding integrating and sharing health data for healthcare management and delivery purposes. Many standards, reference models and authorities have arisen to guide implementation and use of IT for these purposes, for example health information exchange standards driven by the Office of the National Coordinator for Health Information Technology (ONC – http://www.healthit.gov/).  Many very new and modern health IT capabilities and products are available now, alongside systems and data that may have been first created over 30 years ago (particularly in the Federal Government).

In the media and within procurement activity, the swirl of misused phrases and definitions isn’t clarifying many approaches.  Records vs. Data vs. Information. Interoperability vs. Integration. Standards vs. Policies. Systems vs. Software vs. Products or Solutions. COTS vs. Services vs. Modules vs. Applications. Open Source vs. Open Standards. Modern vs. Legacy vs. Current.

In Enterprise Architecture (EA) terms, the messages regarding Integrated Healthcare IT requirements aren’t commonly being presented at a consistent level of abstraction, according to a consistent architecture model and vocabulary. As well, the audience or consumers of this information aren’t being addressed in ways most impactful to their specific needs and concerns.

What are the audience concerns? IT system owners need to maintain data security and system performance, within technology and investment constraints. Doctors need consistent, instant, reliable and comprehensive visualization of data at the point of care. Government oversight bodies need recurring validation that money is spent wisely and results meet both mission and legislative requirements. Veterans, soldiers and their families need absolutely private, accurate, real-time information about their healthcare status – wherever they are. The pharmaceutical and medical device industries need timely, useful data regarding outcomes and utilization – to drive product improvement and cost-effectiveness. Hospitals, clinics and transport services need utilization and clinical workflow measurements to manage personnel and equipment resources.

The highest separation of concerns can be segmented by standard Enterprise Architecture domains or “views”.  A very generic, traditional model is the “BAIT” model – i.e. Business, Application, Information and Technology. Note that this is very similar to the widely-known “ISO Reference Model for Open Distributed Processing” (RM-ODP) Viewpoints – which underpin evolving healthcare standards including the “HL7 Services Aware Interoperability Framework” (SAIF).

The “Business Domain” encompasses the discussion about business processes, financials, resources and logistics, organization and roles.  Who does what, under what circumstances or authority, and how outcomes are evaluated and purchased.  The business drivers and enablers of successful healthcare delivery, one might say.  

The “Application Domain” concerns automating the “practice of healthcare”. Automated systems (and their user interfaces) are very helpful in planning, monitoring and managing the workflow, resources and facility environments, and of course processing data for clinical care, surveillance and health data management and reporting purposes. This is where healthcare expertise is codified in software and device configurations, where medical intelligence and knowledge meets computer-enabled automation. This domain is the chief concern of clinical practitioners and patients – where they can most helpfully provide requirements and evaluate results. Software that’s built to process healthcare data comes in many shapes and sizes, can be owned or rented, and can be proprietary or completely transparent.

The “Information Domain” is in essence the “fuel” for the Application Domain.  Healthcare practitioners and patients care that this fuel is reliable, protected and of the highest quality – but aren’t too invested in how this is achieved, beyond required or trained procedures.  It’s like filling the car with gas – there’s some choice and control, but fundamentally a lot of trust that the gas will do the job.  For those whose concern is actually delivering gas – from undersea oil deposits all the way to the pump – this domain is an industry unto itself. Likewise, collecting, repurposing, sharing, analyzing information about patient and provider healthcare status is a required platform on which successful healthcare user applications and interfaces are built. This is what “Chief Medical Information Officers” are concerned with, as are “Medical Informatics Professionals”. They are also concerned with the difference between healthcare “records”, “archives” and “information” – but that’s a discussion for another day.

It is critical to note that “Information” is composed of data; core or “raw” data is packaged, assembled, standardized, illustrated, modeled and summarized as information more easily consumed and understood by users. Pictures, sound bites and brief notes taken by an officer at an accident scene are data (as are “Big Data” signals from public social media and traffic sensors); the information packages include the accident report, the newspaper article, the insurance claim and the emergency room evaluation.  These days, with the proliferation of data-generating devices and sensors, along with the rapid data replication and distribution channels available over the Internet, the “Data Domain” itself can be a nearly independent concern of some – providing the raw fuel to the information fire, oil for refined gas.

The “Technology Domain” is essentially all of the electronic computing and data storage elements needed to manage data and resulting information, operate software and deliver the software results to user interfaces (like browsers, video screens, medical devices).  Things like servers, mobile phones, physical sensors, telecommunications networks, storage repositories – this includes the machine-specific software embedded into medical equipment.

Sidebar: Data Domain Standards

Quite a bit of work and investment is required to collect, filter, store, protect and make available raw data across the clinical care lifecycle, in order that the right kind of information is then available to be utilized by users or software. Most importantly, reusable, open standards and Reference Implementation Models (RIMs) concerned with the Data Management domain are foundation requirements for any effective healthcare information system that participates in the global healthcare ecosystem.

A RIM is basically working software or implementation patterns for testing and confirming compliance with standards, thereby promoting creation of software products that incorporate and maintain the standards.  It’s a reusable, implementable, working set of code with documentation – focused on a common concern, decoupled from implementation policies or constraints. RIMs are useful for facilitating standards adoption across collaborative software development communities at every layer of the Enterprise Architecture.

For example, a data-domain RIM developed several years ago by Oracle Health Sciences (in a clinical research setting) focused on maintaining role-based access security requirements when two existing sets of research and patient care data were merged for querying.  The design of the single RIM merged the HL7 Clinical Research Model (BRIDG) with an HL7 EHR Model (Care Record) to support a working proof-of-concept – that others could adopt as relevant.  The “concern” here was data security – separate from the information and application-level concerns of enabling multi-repository information visualization methods for researchers.
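
As a minimal illustration of the data-domain concern that RIM addressed (not the actual BRIDG/Care Record models, or the Oracle RIM itself), the sketch below shows role-based filtering applied when research and patient-care records are merged for querying; the roles, labels and records are assumptions.

```python
# Illustrative sketch: preserve role-based access rules when research and
# patient-care data are merged for querying. Not the actual Oracle RIM.
RESEARCH_DATA = [{"subject": "S-01", "lab_value": 4.2, "sensitivity": "research"}]
CARE_DATA = [{"subject": "S-01", "diagnosis": "E11.9", "sensitivity": "clinical"}]

# Which sensitivity labels each role may see in the merged repository.
ROLE_ACCESS = {
    "researcher": {"research"},
    "clinician": {"research", "clinical"},
}

def merged_query(role):
    """Query the merged data sets, filtered by the caller's role."""
    allowed = ROLE_ACCESS.get(role, set())
    return [rec for rec in RESEARCH_DATA + CARE_DATA
            if rec["sensitivity"] in allowed]

print(merged_query("researcher"))  # research records only
print(merged_query("clinician"))   # both research and clinical records
```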

The point of this discussion on EA-driven separation of concerns is illustrated as follows. When a spokesman (or RFP author) says “the system will be interoperable” – it’s likely that by “system” the meaning is some segment of the “Application Domain” being able to exchange objects from the “Information Domain”.  Instead, a better phrase might be “the software application will be able to share standardized healthcare information with other applications”. This keeps the principle discussion at the application and information-sharing software level, and doesn’t make detailed assumptions or predictions regarding concerns in the Business, Data or Technology Domains.  Those are different, but related discussions, and may already be addressed by reusable, standard offerings, RIMs or acquisition strategies.   

Taking this approach to broadly interpret the recent announcement that the DoD will seek a competitive procurement for “Healthcare Management Software Modernization” – it appears the focus of this need is the Application Domain – i.e. software packages and/or services that generate and use healthcare information while managing healthcare processes and interactions.

To support these new software application features, separate but related activity is required to address “modernization” concerns among the other EA domains – concerns relating to datacenter infrastructure, data management and security services, end-user devices and interfaces, etc. Some of this activity may not be dedicated to healthcare management, but instead be shared and supported for enterprise use, for other missions. That’s why use of current, relevant EA frameworks (such as DODAF v2.02 and the OMB “Common Approach”) is so important for managing shared capabilities and investments.

Using standard EA viewpoints to separate concerns will also expose reuse opportunities (and possibly consolidate or reduce acquisition needs), i.e. leveraging existing investments that are practical enablers. Some examples might include the developing iEHR health record structured message translation and sharing services, plus HHS/ONC initiatives including Health Information Exchange Networks and the “VA Blue Button” personal health record service.

4 years, 3 months ago

Launching an Enterprise Architecture Program within State, Local, Municipal Organizations

By Gloria Chou

When launching a formal EA program, Government organizations often begin by socializing the overall benefits of EA and developing an EA Charter and Plan.  However, while both of these are valuable, they are more useful as part of after-the-fact documentation and communication plans.  Having worked with a broad spectrum of government organizations across the US and Canada, our team, Oracle’s Public Sector Enterprise Strategy Team (EST), has found that the first and primary focus in launching an EA program should be on how to meaningfully engage top business leaders and other stakeholders to discover their needs, identify what would bring the most value to the organization, and obtain their buy-in and support for EA as a key enabler in helping the organization achieve its mission objectives. 

Why is launching (or re-launching) an EA Program relevant in the government space today? Although state and local agencies may have had an EA team for years, many are just getting started on formalizing their practice and creating awareness of the team’s capabilities and purpose within the organization as a whole – though some are in fact successfully delivering agency-wide Enterprise Architecture value. Additionally, while a majority of Federal agencies have necessarily had established EA programs for over a decade in response to Clinger-Cohen mandates, some are beginning to reshape their programs as they perceive the need to go beyond checkmark/compliance-based EA and demonstrate additional value to their respective organizations. Governmental budget pressures are increasing the scrutiny on all resource allocation and deployment, such that EA programs must stay relevant and drive acknowledged and desired benefits or else risk being cut.

I believe discovery and dialogue with executive leadership about their goals, objectives, strategies, and current planning processes has to come first; only after this is known can the team understand what is particularly valuable to the specific organization. Too many Government EA programs seek to provide generic value and benefits, such as standardization and integration, that, while good aims in and of themselves, are not necessarily prioritized by the organization nor sometimes even compatible with its operating model and culture. As a case in point, when working with a very large municipality in the West, the EST began discussing EA with a new, forward-thinking CIO who was three months into establishing an EA program to change the way IT was viewed across the organization. We had an initial meeting with the lead EA and found that the new EA team had been doing the expected: technology architecture current-state analysis and building IT standards documents. After three months, the team was well on their way to spending another year or more on documentation! The question we posed was: how does this change the way IT is viewed across the organization? The answer was clear – it didn’t. Understanding specific needs, gaps, and opportunities that the executives care about is essential to ensure EA is relevant and focuses on what the business needs to successfully execute on its strategies.

Based on this understanding of the organization’s priorities and what would bring the most value, the EA team should analyze what needs to be done and propose how they can be a part of the solution.  In the example of the large municipality mentioned above, the EST helped the organization’s EA team identify areas of opportunity to engage with business leaders across the organization and facilitate meetings to better understand strategies, goals, capabilities and high-level value streams.  By starting here, we were able to get the EA team on a path to make better decisions on where they would invest their time to provide the most value to the enterprise.  As a part of this, the team needs to assess their own capabilities and competencies as well as that of other teams within the organization against what is needed and propose options as to how they might best help the organization and what other changes might be needed to achieve the organization’s goals.  In actuality, an EA approach would help facilitate this analysis and assessment of how EA itself could benefit the organization.  The team should consider developing the vision for change as well as current state and future state views of operations, analyzing the gaps, and developing recommendations and a roadmap for the successful introduction of EA into the organization.

Only after the recommendations have been presented, vetted, and selected by leadership should the team document the EA purpose, application, and approach. While this information can be captured in the EA Charter and Plan, it only represents a part of the needed content. The rest, especially the plan, can only be developed after seeking input from other stakeholders in the organization. Even though the executives have weighed in with their input, direction, and approval, it is still often difficult to get an EA initiative started because so many other stakeholders also need to be convinced of the value. For example, LOB leaders, business managers, and functional SMEs all have to be convinced of the value of EA or else they will not allocate the time and resources required to participate in facilitated sessions and verify/validate the architecture. Executives and LOB leaders are critical in setting the vision for the future and describing the general goals for operations, as well as communicating their overall investment and technology strategies. However, even if the executives and leaders buy in, the lower levels also have to perceive value/benefit or else they will put in minimum effort when you really need them to be fully engaged – to provide detail as to the reality of operations, challenge the status quo of how things are done today, and ultimately take ownership of the architecture and support the transition to the future state. Without the business fully on board, the EA recommendations and transition look good on paper but will never be executed.

Similarly, other stakeholders including Corporate Strategy, Portfolio Management, Project Management, Lean/Six Sigma, and IT also need to fully support EA, as they are also critical in the development, execution, and enforcement of EA. The stakeholders in these other disciplines sometimes feel that EA encroaches on what they do and do not understand why it is necessary. For example, Lean/Six Sigma practitioners and some business analysts already have great relationships with the business and have already documented and analyzed processes; they believe they have already “modeled the business”, making EA business architecture development and analysis seem extraneous. IT organizations often point to their UML diagrams, systems engineering drawings, and infrastructure server drawings and say that they are already doing EA. In seeking buy-in from these other groups, it is very important to first seek to understand and acknowledge the current state of operations – existing skills, processes, and assets – before proposing a future state of how EA enhances and complements them. Formal stakeholder analysis and RASCIs can be helpful, but I believe an attitude of respect for what others do and a collaborative approach is also critical, as there are many organizational change issues and related sensitivities associated with introducing EA as a discipline, as with any other transformation. Once general buy-in and support for EA is established, the disciplines need to work out details around overall processes, governance, timing, inputs and outputs to understand synergies, cooperation, etc. Again, this is something that can be documented via EA, further decomposing views that were used in the overall analysis for the introduction of EA.

4 years, 4 months ago

An Agile Enterprise Architecture (EA) Delivers Critical Business and Mission Agility

While working with a recent partner, the question came up; “What changes are made to the EA approach if agile methods are required, or otherwise heavily encouraged?” The initial answer at the time was “Not many – we already have an agile approach to EA embedded in our Oracle Enterprise Architecture Development Process (OADP), and our Oracle Enterprise Architecture Framework (OEAF) is independent of project management and project development approaches.”

Our OADP has always been agile and therefore supportive of business and government agility – particularly in the current context of severely constrained budgeting cycles. We firmly believe in a “just enough, just in time” philosophy, with collaborative insight and contribution across teams and leadership, and delivery of EA artifacts or guidance tuned directly to prioritized results. This means strategic, useful and reusable guidance modeled and delivered in a manner that supports both longer-term initiatives and near-term objectives.

EA delivered as an agile approach, however, does require continual line-of-sight traceability back to the IT investment strategy – which in turn is aligned to the business strategy.  

In other words, a Sprint iteration approach might be justified (e.g. using the “Scrum” strategy), from all relevant perspectives, to quickly establish a reusable process and metadata model for a common agency function – like “Document Routing and Approval” (DRA). The output might be required to inform a software solicitation (i.e. to explain the requirements). The output might be to establish a reference model and basic governance (business rules) for identifying and improving process efficiencies around the agency wherever DRA is occurring. (A minimal sketch of such a metadata model follows.)
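
Here is a minimal sketch of what such a reusable DRA process and metadata model might capture; the states, roles and rules are illustrative assumptions an EA sprint could produce, not a mandated model.

```python
# Hypothetical DRA metadata model: document states, plus governance rules
# stating which role may trigger each state transition.
from dataclasses import dataclass

DRA_TRANSITIONS = {
    ("Drafted", "Submitted"): "Author",
    ("Submitted", "Reviewed"): "Reviewer",
    ("Reviewed", "Approved"): "Approving Official",
    ("Reviewed", "Returned"): "Approving Official",
}

@dataclass
class Document:
    doc_id: str
    state: str = "Drafted"

def advance(doc, new_state, actor_role):
    """Apply a transition only if the business rules allow it."""
    required = DRA_TRANSITIONS.get((doc.state, new_state))
    if required != actor_role:
        raise PermissionError(f"{actor_role} may not move {doc.state} -> {new_state}")
    doc.state = new_state
    return doc

doc = Document("MEMO-42")
advance(doc, "Submitted", "Author")
advance(doc, "Reviewed", "Reviewer")
print(doc.state)  # Reviewed
```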

The actual need for this EA artifact (or “Product”, in Agile terms) may be driven from an unanticipated mandate or regulatory change, and therefore require rapid response.  The need may also be limited in scope to only a portion of the agency’s business (i.e. those who actually know they need it).

So, an EA Sprint will work, and deliver what’s needed quickly and effectively to the target audience.  The highest return on investment (ROI) in this exercise, however, only exists if actual Enterprise traceability and impact assessment occurs. In other words, an agile EA output with a strategic Enterprise outcome.

Note this is a common misunderstanding of Agile software development; Agile programming and project management may deliver useful, rapid and cost-effective “features” from a Backlog of priorities, but much of the supporting infrastructure, integrations and organizational change isn’t delivered using Agile methods – it must evolve in a more strategic, methodical manner. Preferably with EA guidance.

Here’s what should happen. The common DRA process, metamodel and business rules begin to take shape, in a somewhat parochial “requirements-driven” context, heavily leveraging the impacted SMEs for a short period of time. As this occurs, the Enterprise Architect and stakeholders begin mapping and comparing the DRA process design (at appropriately coarse levels of abstraction) to any similar processes that may exist within the agency, or among agency partners or stakeholders. This may require some additional outreach and communication. The EA may find additional SMEs, risk factors, standards, COTS DRA solution accelerators, overlapping data management projects, etc. – essentially other activities or resources that can be used or might be impacted.

The Enterprise Architect is the Scrum Master!

Strategic oversight and influence is therefore brought to bear on the EA sprint, and by leveraging EA methods, the impacts to the rest of the organization plus any modifications to the focus EA artifact can be addressed – entirely within standard and expected IT Governance. The EA artifact development is a Sprint, but actually leverages our lifecycle methodology – from Business Context through Current and Future States, and then Roadmap (i.e. Transitional Architecture) and Governance.  The EA Sprint may actually kick off or modify a more holistic EA maintenance process.

We are therefore avoiding an “agile everything” philosophy, though we’re delivering agile results.   We contribute over-arching guidance and process for both the DRA project and the organization as a whole, to make sure that all projects underway are still aligned to meet the needs of the business and IT investment constraints.

This is essentially what we believe in when applying our EA process, over time or during more Agile response cycles: always raise and maintain focus on the business strategy and drivers to guide the investment of IT budget into those areas that affect the business most – or that are the most immediate priority, such as described above.

Thanks to Oracle Public Sector Enterprise Architect Ted McLaughlan and Director Bryan Miller for contributing to this article!
