Use of Reason
Filed under: Enterprise Architecture
Aggregated enterprise architecture wisdom
Enterprises, in their quest to reduce labor costs, are applying RPA technologies. Yet they lack a well-defined set of principles and best practices, including how to position RPA alongside other process tools and initiatives. Today it may have become a bit clearer. Pega is the first tech provider, and the only BPM market participant of substance, to purchase an RPA provider (OpenSpan). The combination brings robotics, analytics, and case management together, and that makes sense. Think of Pega’s process/rules capability firing off a set of RPA scripts.
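To make that last idea concrete, here is a minimal sketch of a rules layer dispatching RPA scripts. All names and structures are illustrative assumptions, not Pega’s or OpenSpan’s actual API; the "bots" simply simulate rekeying work between systems:

```python
# Hypothetical sketch: a rules layer firing RPA "scripts" (bots).
# Names are illustrative; this is not Pega's or OpenSpan's actual API.

def copy_address_bot(case):
    # Simulate an RPA script that rekeys data from a legacy system to a CRM.
    case["crm_address"] = case["legacy_address"]
    return "address synced"

def close_account_bot(case):
    # Simulate an RPA script that closes out a dormant, zero-balance account.
    case["status"] = "closed"
    return "account closed"

# Rule table: a predicate on the case, and the bot script it fires.
RULES = [
    (lambda c: c["crm_address"] != c["legacy_address"], copy_address_bot),
    (lambda c: c["balance"] == 0 and c["dormant"], close_account_bot),
]

def run_rules(case):
    """Evaluate each rule in order; fire the bot for every rule that matches."""
    results = []
    for predicate, bot in RULES:
        if predicate(case):
            results.append(bot(case))
    return results

case = {"legacy_address": "12 High St", "crm_address": "",
        "balance": 0, "dormant": True, "status": "open"}
print(run_rules(case))  # both rules match this case, so both bots fire
```

The point of the sketch is the division of labor: the rules layer decides *when* to act, while each bot encapsulates a repetitive "as is" task that would otherwise be done by hand.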
RPA is in many respects an alternative, some would say the polar opposite, of Pega’s current business model, which feasts on the transformative “big IT spend” for BPM, case management, automation, and customer service projects. RPA does not require invasive integration. It is a quick hit for automation, a “low touch” approach to process improvement for brittle legacy systems. The bottom line: enterprises that employ labor on a large scale for process work can gain efficiencies by simply automating repetitive human tasks for the “as is” process.
OpenSpan is a nice pickup for Pega that will help with back-office BPM work, but more so in contact center environments, where the agent must multi-task between human and machine work that often spans multiple windows and web applications, few of which are integrated with each other. Cumbersome process flows, rekeying of data, and lack of integration add up to lengthy call times, reduced accuracy, and an overall increase in customer frustration. Pega/OpenSpan will give Jacada and NICE a run for their money, and the future integration with Pega’s analytics tracks where the RPA space is heading.
Bioingine.com: a platform for comprehensive statistical and probability studies for big-data-driven medicine and public health. Importantly, it helps redefine data-driven medicine as ontology (semantics) driven medicine, in a comprehensive platform that covers descriptive statistics and inferential probabilities. Beta platform on the anvil… Continue Reading →
Abstract: If you address the question of how to scale Agile projects by considering which framework to use, you are only looking at one aspect of the problem. Scaling is all about coordination – managing enterprise considerations and cross-program dependencies – and the de facto frameworks (SAFe, LeSS and DAD) focus on the people and process dimensions. However, in combination with a factory approach you may be able to automate many of the compliance and dependency management issues.
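The kind of automation the abstract hints at can be sketched as checks over a service registry. Everything here is a hypothetical illustration (the registry layout, team names and the version standard are assumptions), showing how cross-team dependencies and compliance violations could be flagged mechanically rather than managed through ceremonies:

```python
# Illustrative sketch (all names hypothetical): automating the dependency
# and compliance checks that scaled-Agile frameworks handle with process.

SERVICES = {
    "billing":  {"team": "payments", "depends_on": ["customer"], "api_version": 2},
    "customer": {"team": "core",     "depends_on": [],           "api_version": 2},
    "reports":  {"team": "bi",       "depends_on": ["billing"],  "api_version": 1},
}

MIN_API_VERSION = 2  # an enterprise standard the "factory" enforces

def cross_team_dependencies(services):
    """Dependencies that cross team boundaries need explicit coordination."""
    issues = []
    for name, svc in services.items():
        for dep in svc["depends_on"]:
            if services[dep]["team"] != svc["team"]:
                issues.append(f"{name} ({svc['team']}) depends on "
                              f"{dep} ({services[dep]['team']})")
    return issues

def compliance_violations(services):
    """Flag services that fall below the mandated API version."""
    return [n for n, s in services.items()
            if s["api_version"] < MIN_API_VERSION]

print(cross_team_dependencies(SERVICES))
print(compliance_violations(SERVICES))
```

Run in a build pipeline, checks like these surface coordination needs automatically, which is the sense in which a factory approach can substitute for some of the framework overhead.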
The question of how to scale Agile development has been around a while. In January 2015 I commented [1] on a clear trend in which my customers were voicing concerns about loss of consistency, inability to govern, lack of coordination and increasing time to market. Since then I have observed many large organizations adopting SAFe (Scaled Agile Framework), perhaps because it appears to be the only game in town, or perhaps because there’s good marketing or strength in numbers. But the criticisms of SAFe [2] haven’t gone away. The central and continuing concern is that SAFe compromises core Agile principles of self-organizing, cross-functional teams that have full responsibility for the delivery of potentially shippable increments. A related concern is that SAFe looks suspiciously like WaterScrumFall, because there’s a high overhead of portfolio, value stream and project management, and in all probability intentional architecture. Now I have mixed opinions on SAFe; I see positives in value streams and product management, which are important. And for organizations that have large programs, highly organized, structured approaches will be seen as lower risk, and indeed as something that takes them some way down the Agile path without straying too far from conventional management comfort zones. But the outcomes are unlikely to be inherently “agile”.
What SAFe does is provide a rather conventional approach to a complex problem that most enterprises have: delivering high-integrity solutions in an enterprise context that demands cross-project dependency management, consistent data and reference architecture, accommodation of existing portfolio complexities, compliance with enterprise standards and governance, etc.
There are alternatives. Craig Larman and Bas Vodde in their books [3] and more recently with their LeSS initiative [4] have pursued a very different path that starts with Scrum and scales by understanding the needs for coordination while adhering to core Scrum principles. In their forthcoming book [5] they say, “LeSS is (1) lightweight, (2) simple to understand, and (3) difficult to master—due to essential complexity”. And this allows us to contrast the different approaches; while SAFe clearly works at some level, it has its roots in conventional large scale project management, whereas LeSS is lightweight, but requires much deeper understanding of the systems dynamics because of the inherent complexity of all the coordination requirements.
So the real question underlying Scaling Agile should be, “can we address some of the coordination requirements in a manner that reduces complexity and eliminates some of the need for additional layers of management or events?”
In my post Service Factory 2.0 [6] I describe the conceptual background of the Software Factory, ideas pioneered by my old friend Keith Short and Jack Greenfield while they were at Microsoft. Today these have evolved and become specialized around a framework of tools, repeatable processes and patterns for creation and assembly of services – manifest as first-order components with formal interfaces. If you consider that all core business functionality is increasingly composed of services and their operations, this provides us with a reference architecture that by design implements separation of concerns. Figure 1 below is a conceptual view of the scope of the service factory in terms of managed objects.
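The phrase “first-order components with formal interfaces” can be sketched in code. This is a minimal illustration under assumed names (not the Service Factory tooling itself): consumers program against a formal contract, and implementations can be assembled or swapped behind it, which is where the separation of concerns comes from:

```python
# A minimal sketch of "services as first-order components with formal
# interfaces". Names are illustrative, not from the Service Factory tooling.
from abc import ABC, abstractmethod

class OrderService(ABC):
    """A formal interface: consumers depend only on this contract."""
    @abstractmethod
    def place_order(self, customer_id: str, sku: str) -> str: ...

class InMemoryOrderService(OrderService):
    """One assembled implementation; others can be swapped in freely."""
    def __init__(self):
        self._orders = []

    def place_order(self, customer_id, sku):
        order_id = f"ORD-{len(self._orders) + 1}"
        self._orders.append((order_id, customer_id, sku))
        return order_id

def checkout(service: OrderService, customer_id: str, sku: str) -> str:
    # Composition happens against the interface, not the implementation,
    # which is what gives the architecture its separation of concerns.
    return service.place_order(customer_id, sku)

svc = InMemoryOrderService()
print(checkout(svc, "C42", "SKU-9"))  # ORD-1
```

Because `checkout` sees only the `OrderService` contract, a factory can generate, validate and assemble implementations independently of the consumers that compose them.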
Organisations often promote themselves by expounding on their capability to deliver goods and services to particular market segments. Organisational capability is only one side of a many-sided coin. In an individual it is accepted that an ability is the … Continue reading →
For a viable enterprise-architecture [EA], now and into the future, we need frameworks, methods and tools that can support the EA discipline’s needs. Yet there’s one element common to most of the current mainstream EA-frameworks and notations – such as…
This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 389, features Tom’s essay on Agile acceptance testing, Kim Pries talking about soft skills, and a Form Follows Function installment on sense-making and decision-making in the practice of software architecture. Tom and I discuss my post “OODA vs PDCA – What’s the […]
Kin Lane, the API Evangelist, had a really good post on maturing an API program, with the not-so-brief title of “I Have An API Deployed, And A Base Presence Established, What Can I Do To Help Me Get The Word Out?” You should definitely go read that because there’s some really good advice there. […]
As humans we usually like as little change as possible in our lives (the only change we want is in others, e.g. in a TV drama). Additionally, we are very risk-averse, so we only really want proven options. This tendency, which increases with age, is often called the conservative tendency. As our and almost all … Continue reading Productivity as the only driver on architecture →
All organisations, whether large or small, are influenced by both internal and external drivers. How the organisation responds to them can affect its overall success. The following touches on only some of the generic drivers that an organisation is likely to … Continue reading →