What kinds of automation are there, and is there a natural progression from basic to advanced? Do the terms intelligent automation and cognitive automation actually mean anything useful, or are they merely vendor hype? In this blog post, I shall attempt an answer.
The simplest form of automation is known as robotic automation or robotic process automation (RPA). The word robot (from the Czech word for forced labour, robota) implies a pre-programmed response to a set of incoming events. The incoming events are represented as structured data, and may be held in a traditional database. The RPA tools also include the connectivity and workflow technology to receive incoming data, interrogate databases and drive action, based on a set of rules.
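To make this concrete, here is a minimal sketch of the rule-based pattern that RPA tools implement. The invoice-routing rules, event fields and action names are illustrative inventions, not taken from any particular product – the point is simply that structured data goes in and a pre-programmed action comes out.

```python
def route_invoice(event: dict) -> str:
    """Apply fixed business rules to a structured incoming event."""
    if event["amount"] <= 0:
        return "reject"            # malformed invoice
    if event["amount"] < 1000 and event["vendor_approved"]:
        return "auto_pay"          # low-value, trusted vendor
    return "queue_for_review"      # everything else needs a human

events = [
    {"amount": 250, "vendor_approved": True},
    {"amount": 5000, "vendor_approved": True},
    {"amount": -10, "vendor_approved": False},
]
print([route_invoice(e) for e in events])
# → ['auto_pay', 'queue_for_review', 'reject']
```

Note that nothing here adapts: the same event always produces the same action, which is exactly what distinguishes RPA from the intelligent automation discussed below.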
People talk about cognitive technology or cognitive computing, but what exactly does this mean? In its marketing material, IBM uses these terms to describe whatever features of IBM Watson they want to draw our attention to – including adaptability, interactivity and persistence – but IBM’s usage of these terms is not universally accepted.
I understand cognition to be all about perceiving and making sense of the world, and we are now seeing man-made components that can achieve some degree of this, sometimes called Cognitive Agents.
Cognitive agents can also be used to detect patterns in vast volumes of structured and unstructured data and interpret their meaning. This is known as Cognitive Insight, which Thomas Davenport and Rajeev Ronanki refer to as “analytics on steroids”. The general form of the cognitive agent is as follows.
Cognitive agents can be wrapped as a service and presented via an API, in which case they are known as Cognitive Services. The major cloud platforms (AWS, Google Cloud, Microsoft Azure) provide a range of these services, including textual sentiment analysis.
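The real cloud services are called over HTTPS with provider-specific SDKs, but the shape of the contract is easy to sketch: text in, label and score out. The toy lexicon-based scorer below is a stand-in of my own, not the API of any actual provider.

```python
# Toy stand-in for a cloud sentiment-analysis service: it mimics the
# input/output shape of such a service, not any vendor's real API.
POSITIVE = {"good", "great", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "unhappy"}

def analyse_sentiment(text: str) -> dict:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    label = "POSITIVE" if score > 0 else "NEGATIVE" if score < 0 else "NEUTRAL"
    return {"label": label, "score": score}

print(analyse_sentiment("great service, very happy"))
# → {'label': 'POSITIVE', 'score': 2}
```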
At the current state of the art, cognitive services may be of variable quality. Image recognition may be misled by shadows, and even old-fashioned OCR may struggle to generate meaningful text from poor-resolution images – but of course human cognition is also fallible.
Meanwhile, one of the key characteristics of intelligence is adaptability – being able to respond flexibly to different conditions. Intelligence is developed and sustained by feedback loops – detecting outcomes and adjusting behaviour to achieve goals. Intelligent automation therefore includes a feedback loop, typically involving some kind of machine learning.
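The feedback loop can be sketched in a few lines. In this illustrative example, the automation holds a single adjustable parameter (an estimate) and the "detect outcome, adjust behaviour" cycle nudges it toward what is actually observed; the learning rate and data are made up for illustration.

```python
# Minimal feedback loop: detect the outcome, adjust behaviour.
def run_loop(outcomes, learning_rate=0.5):
    estimate = 0.0
    for observed in outcomes:
        error = observed - estimate          # detect: how wrong were we?
        estimate += learning_rate * error    # adjust: move toward the outcome
    return estimate

print(round(run_loop([10, 10, 10, 10]), 2))
# → 9.38
```

This is the crucial difference from the RPA pattern: the system's behaviour on the next event depends on the outcomes of previous ones.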
Complex systems and processes may require multiple feedback loops (Double-Loop or Triple-Loop Learning).
If we embed this automation into the Internet of Things, we can use sensors to perform the information gathering, and actuators to carry out the actions.
Now what happens if we put all these elements together?
This fits into a more general framework of human-computer intelligence, in which intelligence is broken down into six interoperating capabilities.
I know that some people will disagree with me as to which parts of this framework are called “cognitive” and which parts “intelligent”. Ultimately, this is just a matter of semantics. The real point is to understand how all the pieces of cognitive-intelligent automation work together.
The Limits of Machine Intelligence
There are clear limits to what machines can do – but this doesn’t stop us getting them to perform useful work, in collaboration with humans where necessary. (Collaborative robots are sometimes called cobots.) A well-designed collaboration between human and machine can achieve higher levels of productivity and quality than either human or machine alone. Our framework allows us to identify several areas where human abilities and artificial intelligence can usefully combine.
In the area of perception and cognition, there are big differences in the way that humans and machines view things, and therefore significant differences in the kinds of cognitive mistakes they are prone to. Machines may spot or interpret things that humans might miss, and vice versa. There is good evidence for this effect in medical diagnosis, where a collaboration between human medic and AI can often produce higher accuracy than either can achieve alone.
In the area of decision-making, robots can make simple decisions much faster, but may be unreliable with more complex or borderline decisions, so a hybrid “human-in-the-loop” solution may be appropriate.
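One common way to implement this hybrid pattern is a confidence gate: the machine decides automatically only when it is sufficiently sure, and escalates borderline cases to a person. The threshold and function names below are illustrative assumptions, not a standard API.

```python
# Sketch of a "human-in-the-loop" decision gate: automate the easy
# decisions, escalate the borderline ones.
def decide(confidence: float, prediction: str,
           auto_threshold: float = 0.9) -> str:
    if confidence >= auto_threshold:
        return prediction             # fast automated decision
    return "ESCALATE_TO_HUMAN"        # borderline: route to a person

print(decide(0.97, "approve"))   # → approve
print(decide(0.62, "approve"))   # → ESCALATE_TO_HUMAN
```

Tuning `auto_threshold` is itself a design decision: a higher value trades throughput for reliability by sending more cases to humans.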
Decisions that affect real people are subject to particular concern – GDPR specifically regulates any automated decision-making or profiling that is made without human intervention, because of the potential impact on people's rights and freedoms. In such cases, the "human-in-the-loop" solution reduces the perceived privacy risk.

In the area of communication and collaboration, robots can help orchestrate complex interactions between multiple human experts, and allow human observations to be combined with automatic data gathering. Meanwhile, sophisticated chatbots are enabling more complex interactions between people and machines.
Finally there is the core capability of intelligence – learning. Machines learn by processing vast quantities of historical data – but that dependence on historical data is also their limitation. So learning may involve fast corrective action by the robot (using machine learning), with a slower cycle of adjustment and recalibration by human operators (such as Data Scientists). This would be an example of Double-Loop Learning.
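The two loops running at different speeds can be sketched as follows. The inner loop is the machine's fast correction after every observation; the outer loop is the slower human review, modelled here (purely for illustration) as damping the learning rate once the system has settled. All the numbers are made up.

```python
# Illustrative Double-Loop Learning: fast machine correction inside,
# slow human recalibration outside.
def inner_loop(estimate, observations, lr):
    for obs in observations:
        estimate += lr * (obs - estimate)   # fast machine correction
    return estimate

def outer_loop(batches):
    estimate, lr = 0.0, 0.8
    for batch in batches:
        estimate = inner_loop(estimate, batch, lr)
        # slow human review between batches: recalibrate the learning rate
        lr = max(0.1, lr * 0.5)
    return estimate, lr

est, lr = outer_loop([[10, 10], [10, 12], [11, 11]])
print(round(est, 2), lr)
# → 10.78 0.1
```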
Some of the elements of this automation framework are already fairly well developed, with cost-effective components available from the technology vendors. So there are some modes of automation that are available for rapid deployment. Other elements are technologically immature, and may require a more cautious or experimental approach.
Your roadmap will need to align the growing maturity of your organization with the growing maturity of the technology, exploiting quick wins today while preparing the groundwork to be in a position to take advantage of emerging tools and techniques in the medium term.
Thomas Davenport and Rajeev Ronanki, "Artificial Intelligence for the Real World", Harvard Business Review (January–February 2018)