What Is Hyperautomation?

Hyperautomation may only be a buzzword, but automating business systems with AI is an important trend.

By Mike Loukides
October 11, 2022
The Daya Bay Antineutrino Detector (source: Energy.gov on Flickr)

Gartner has anointed “Hyperautomation” one of the top 10 trends for 2022. Should it be? Is it a real trend, or just a collection of buzzwords? As a trend, it’s not performing well on Google; it shows little long-term growth, if any, and gets nowhere near as many searches as terms like “Observability” and “Generative Adversarial Networks.” And it’s never bubbled up far enough into our consciousness to make it into our monthly Trends piece. So, as a trend at least, we’re openly skeptical about Hyperautomation.

However, that skeptical conclusion is too simplistic. Hyperautomation may just be another ploy in the game of buzzword bingo, but we need to look behind the game to discover what’s important. There seems to be broad agreement that hyperautomation is the combination of Robotic Process Automation with AI. Natural language generation and natural language understanding are frequently mentioned, too, but they’re subsumed under AI. So is optical character recognition (OCR)–something that’s old hat now, but was one of the first successful applications of AI. Using AI to discover tasks that can be automated also comes up frequently. While we don’t find the multiplication of buzzwords endearing, it’s hard to argue that adding AI to anything is uninteresting–and specifically adding AI to automation.


It’s also hard to argue against the idea that we’ll see more automation in the future than we see now.  We’ll see it in the processing of the thousands of documents businesses handle every day. We’ll see it in customer service. We’ll see it in compliance. We’ll see it in healthcare. We’ll see it in banking. Several years ago, the “Automate all the things!” meme originated in IT’s transformation from manual system administration to automated configuration management and software deployment. That may be the first instance of what’s now been christened Hyperautomation. We can certainly apply the slogan to many, if not all, clerical tasks–and even to the automation process itself. “Automate all the things” is itself a thing. And yes, the meme was always partially ironic–so we should be on the lookout for promises that are easily made but hard to keep. Some tasks should not be automated; some tasks could be automated, but the company has insufficient data to do a good job; some tasks can be automated easily, but would benefit from being redesigned first.

So we’re skeptical about the term Hyperautomation, but we’re not skeptical about the desire to automate. A new buzzword may put automation on executives’ radar–or it may be little more than a technique for rebranding older products. The difference is focusing on your business needs, rather than the sales pitch. Automating routine office tasks is an important and worthwhile project–and redesigning routine tasks so that they can be integrated into a larger workflow that can be automated more effectively is even more important. Setting aside the buzzword, we can start by asking what a successful automation project requires. In the long run, the buzzword is unimportant; getting the job done is what matters.

Automating Office Processes

It’s easy to observe that in most companies, there are many processes that can be automated but aren’t. Processing invoices, managing inventory, customer service, handling loan applications, taking orders, billing customers: these are all processes that are largely routine and open to automation. At some companies, these tasks are already automated, at least in part. But I don’t want to trivialize the thinking that goes into automating a process. What’s required?

Office staff usually perform tasks like invoice processing by filling in a web form. Automating this process is simple. Selenium, the first tool for automated browser testing (2004), could be programmed to find fields on a web page, click on them or insert text, click “submit,” scrape the resulting web page, and collect results. Robotic process automation (RPA) has a fancier name, but that’s really all it is. This kind of automation predates modern AI. It’s purely rules-based: click here, add a name there, use some fairly simple logic to fill in the other fields, and click submit. It’s possible to augment this basic process with OCR so the application can find data on paper forms, or to use natural language processing to gather information through a chat server. But the core of the process is simple, and hasn’t changed much since the early days of web testing. We could see it as an example of 1980s-style “expert systems,” based on deterministic business rules.
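
Rules-based RPA at this level is easy to picture in code. Here’s a minimal sketch in the Selenium style described above, assuming a Chrome driver is installed; the URL and form field names (invoice_number, vendor, amount, submit) are hypothetical placeholders, not a real application.

```python
# A minimal, rules-based RPA sketch: find fields, fill them, submit, scrape.
# The URL and field names are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://intranet.example.com/invoices/new")  # hypothetical form

driver.find_element(By.NAME, "invoice_number").send_keys("INV-0042")
driver.find_element(By.NAME, "vendor").send_keys("Acme Corp")
driver.find_element(By.NAME, "amount").send_keys("129.95")
driver.find_element(By.ID, "submit").click()

# Scrape the confirmation page and collect the result.
confirmation = driver.find_element(By.ID, "confirmation-number").text
print(confirmation)
driver.quit()
```

Everything here is deterministic; there’s no AI in sight, which is the point.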

That simple scenario doesn’t hold up for more complex tasks. Consider an application for filling a prescription at a pharmacy. That application has to:

  • look up when the prescription was last filled
  • look up patient data to see whether there are any refills left
  • look up the prescriber and generate a message, if there are no refills left
  • look up the patient’s other medications to determine whether there are any drug interactions
  • look up regulations about restricted substances, in which case other rules apply (for example, requiring ID when the patient picks up the medication)
  • look up the pharmacy’s stock to see whether the medication is in stock (and order it if it isn’t)
  • look up the patient’s insurance to generate charges for the insurance company 
  • look up the patient’s credit card information to generate a charge for the co-pay

There are probably even more steps (I am not a pharmacist) and variations: new prescriptions, expired prescriptions, uninsured patients, expired credit cards, and no doubt many more corner cases. None of these steps is particularly difficult by itself, and each could be viewed as a separate task for automation, giving you a web of interconnected tasks–more complex, but not necessarily a bad result. However, one thing should be obvious: to fill a prescription, you need to access many different kinds of data, in many different databases. Some of these data sources will be owned by the pharmacy; others aren’t. Most are subject to privacy regulations. They are all likely to exist in some kind of silo that’s difficult to access from outside the group that created the silo–and the reason for that difficulty may be political as well as technological. So from the start, we have a data integration problem compounded with a compliance problem. Data integration and regulatory compliance are particularly tough in healthcare and medicine, but don’t kid yourself: if you’re working with data, you will face integration problems, and if you’re working with personal data, you need to think about compliance. An AI project that doesn’t address data integration and governance (including compliance) is bound to fail, regardless of how good your AI technology might be. Buzzword or not, Hyperautomation is doing a service if it focuses attention on these issues.
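
To make the integration problem concrete, here’s a minimal sketch of the prescription workflow as a chain of lookups. The dictionaries and field names below are invented stand-ins for what would really be separate databases behind separate APIs, each with its own owner and its own compliance constraints.

```python
# A toy prescription-filling flow; each dictionary stands in for a
# different system the pharmacy would have to integrate with.
patient_db = {"p1": {"refills": {"rx9": 1}, "meds": ["warfarin"], "insurer": "AcmeHealth"}}
interactions_db = {("warfarin", "ibuprofen"): "bleeding risk"}
restricted_drugs = {"oxycodone"}              # regulatory lookup
inventory = {"ibuprofen": 12, "oxycodone": 3}

def fill_prescription(patient_id, drug, rx_id):
    patient = patient_db[patient_id]                      # pharmacy's own records
    if patient["refills"].get(rx_id, 0) == 0:
        return "refused: no refills; message the prescriber"
    for current in patient["meds"]:                       # drug-interaction check
        if (current, drug) in interactions_db:
            return f"hold: interaction with {current}"
    needs_id = drug in restricted_drugs
    if inventory.get(drug, 0) == 0:
        return "pending: medication ordered"
    # Insurance claim and co-pay charge would be calls to yet more systems.
    return f"filled (ID required at pickup: {needs_id})"

print(fill_prescription("p1", "ibuprofen", "rx9"))  # hold: interaction with warfarin
```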

Data integration problems aren’t pretty; they’re boring, uninteresting, the “killing field of any modeling project,” as Lorien Pratt has said. So we really can’t talk about automating any significant task without seeing it as a non-trivial data integration project: matching IDs, reconciling slightly different definitions of database columns, de-duping, named entity recognition, all of that fun stuff. Some of these tasks have been automated, but many aren’t. Andrew Ng, Christopher Ré, and others have pointed out that in the past decade, we’ve made a lot of progress with algorithms and hardware for running AI. Our current AI algorithms are good enough, as is our hardware; the hard problems are all about data. That’s the cutting edge for AI research: automating ways to find quality data, clean it, label it, and merge it with data from other sources. While that research is only starting to filter into practice, and much remains to be done, “automating all the things” will require confronting data problems from the beginning.
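
As one small, concrete example of that “fun stuff,” here’s a minimal sketch of matching customer records across two systems whose IDs don’t line up. The records and the matching rule are invented; real record linkage uses more signals, phonetic matching, and far more careful normalization.

```python
# A toy record-matching pass: link a CRM record to a billing record using
# email and a fuzzy name comparison. Fields and thresholds are illustrative.
from difflib import SequenceMatcher

crm = [{"id": "C-17", "name": "Jonathan Smith", "email": "jon.smith@example.com"}]
billing = [{"acct": 9942, "name": "Smith, Jon", "email": "JON.SMITH@EXAMPLE.COM"}]

def normalize(name):
    parts = [p.strip().lower() for p in name.split(",")]
    return " ".join(reversed(parts)) if len(parts) == 2 else name.lower()

def is_match(a, b, threshold=0.6):
    if a["email"].lower() == b["email"].lower():          # strongest signal
        return True
    score = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    return score >= threshold

links = [(c["id"], b["acct"]) for c in crm for b in billing if is_match(c, b)]
print(links)  # [('C-17', 9942)]
```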

Another sad reality is that a company’s data is less rich than they’d like to think. We don’t need to look any further than O’Reilly for an example. Like any online company, we have good visibility into what happens on the O’Reilly Learning Platform. We can see what books and courses our customers are using, and for how long. We know if customers only read the first chapter of some book, and can think about how to improve it. The data available to our retail business is much more limited. We know we’ve sold X books to Amazon, and Y books to wholesalers, but we never know anything about the customers who buy those books, when they buy them, or even if they buy them. Books can sit on shelves or in warehouses for a long time before they come back as returns. The online business is information-rich; the retail business is information-poor. Most real-world businesses lie somewhere between those extremes.

That’s the bad news. The good news is that we’re talking about building something exciting. We’re talking about applications that use APIs to pull data from many different sources, and deliver better results than humans can. We’re talking about applications that integrate all of those sources into a single course of action, and can do so seamlessly. There are resonances between this and what, in other application domains, is being called a “metaverse.” While we’re skeptical about how the term “Hyperautomation” has been used, we also wonder: is Hyperautomation, considered properly, the business version of the metaverse? One component of a business metaverse would certainly be seamless access to data wherever it resides; the metaverse would be populated by bots that automate routine tasks. Hold that thought; we’ll return to it.

Making Good Business Decisions

Finding processes to automate is called process discovery. We have to be careful about process discovery because automating the wrong processes, or automating them in inappropriate ways, wastes resources at best; at worst, it can make a business uncompetitive. There are products that use AI to discover which processes can be automated, but in real life, process discovery will rely heavily on people: your knowledge of the business, the knowledge of subject matter experts, and the knowledge of staff members who are actually doing the work, and whose input is often ignored.  I’m reminded of a friend who was hired to build a new application to check in patients at a doctor’s office. The receptionists hated the old app. No one knew why, until my friend insisted on sitting down at the receptionist’s desk. Then it was painfully obvious why the staff hated the old application–and the problem was easy to correct.

Over the past decade, one problem with data science and its successors has been the assumption that all you need is data, and lots of it; analyzing that data will lead you to new products, new processes, new strategies: just follow the data and let it transform your business. But we also know that most AI projects fail, just as most IT projects fail. If you don’t want your projects to be among the failures, you can’t make naive assumptions about what data can do. All businesses like “up and to the right,” and data is good at revealing trends that look “up and to the right.” However, growth always ends: nothing grows exponentially forever, not even Facebook and Google. You’ll eventually run out of potential new customers, raw material, credit at the bank–something will get in the way. The historical trends revealed by data will eventually end. Data isn’t very good at telling you where the growth curve will flatten out, and for an executive, that’s probably the most important information. What will cause those trends to end, and what strategies will the business need to adopt? It is difficult to answer that kind of question with nothing but data.

Lorien Pratt outlines a four-step process for using data effectively to make business decisions:

  • Understand the business outcomes that you want to achieve.
  • Understand the actions that you can take in your current business situation.
  • Map out the paths between actions and outcomes. If you take some action, what changes? Most actions have multiple effects. 
  • Decide where data fits in. What data do you have? How can you use it to analyze your current situation, and measure the results of any actions you take?

These four steps are the heart of decision intelligence. It is a good process for any business decision, but it’s particularly important when you’re implementing automation. If you start from the data, rather than the business outcomes and the levers you can use to change the situation, you are likely to miss important possibilities. No dataset tells you the structure of the world; that requires human expertise and experience. You’ll find small, local optimizations, but you’re likely to miss important use cases if you don’t look at the larger picture. This leads to a “knowledge decision gap.” Pratt mentions the use of satellite imagery to analyze data relevant to climate change: predicting fires, floods, and other events. The models exist, and are potentially very useful; but on the ground, firefighters and others who respond to emergencies still use paper maps. They don’t have access to up-to-date maps and forecasts, which can show which roads can be used safely, and where severe damage has occurred. Data needs to become the means, a tool for making good decisions. It is not an end in itself.
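
For illustration, here’s a minimal sketch of what writing those four steps down might look like before any modeling starts. The outcomes, actions, and links are invented; the point is that the structure comes from human expertise, and data enters only at the last step.

```python
# A toy decision map: outcomes and actions first, causal links third,
# data last. Nothing here is a model; it's the frame a model would serve.
decision_map = {
    "outcomes": ["reduce churn", "grow revenue"],
    "actions": ["discount renewals", "improve onboarding", "expand support hours"],
    # Step 3: which actions are believed to influence which outcomes.
    "links": {
        "discount renewals": {"reduce churn": "+", "grow revenue": "-"},
        "improve onboarding": {"reduce churn": "+", "grow revenue": "+"},
        "expand support hours": {"reduce churn": "+", "grow revenue": "-"},
    },
    # Step 4: only now ask what data could measure each link.
    "data": {
        ("discount renewals", "reduce churn"): "renewal rate by discount tier",
        ("improve onboarding", "reduce churn"): "90-day retention by cohort",
    },
}

for action, effects in decision_map["links"].items():
    print(action, "->", effects)
```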

Donald Farmer says something similar. It’s easy to look at some process (for example, invoice processing, or checking in patients) and decide to automate it. You analyze what your staff does to process an invoice, and then design a system to perform that process. You may use some process discovery tools to help. If the process you are automating requires making some simple decisions, AI can probably be used to automate those decisions. You will probably succeed, but this approach overlooks two big problems. First, many business processes are failing processes. They’re inefficient, poorly designed, and perhaps even wholly inappropriate for the task. Don’t assume that businesses are well run, or that their processes represent some sort of “best practice.” If you automate a poor process, then all you have is a faster poor process. That may be an improvement, but even if it’s an improvement, it’s sure to be far from optimal.

Farmer’s second point is related, but goes much deeper. Business processes never exist in isolation. They connect to other processes in a complex web. That web of connected processes is really what makes the business work. Invoice processing has tendrils into accounting. Manufacturing affects quality control, customer support, finance, shipping and receiving, accounts receivable, and more. HR processes have effects throughout the organization. Redesigning one process might give you a local improvement, but rethinking how the business works is a much bigger opportunity.  Farmer points to Blackline, a company that does process automation for financial services. They don’t automate a single process: they automate all of a client’s financial processes, with the result that all actions are processed immediately; the books are always closed. This kind of automation has huge consequences. You don’t have to wait for a few weeks after the end of a month (or quarter or year) to close the books and find out your results; you know the results continuously. As a result, your relationship to many important financial metrics changes. You always know your cash flow; you always know your credit line. Audits take on a completely different meaning because the business is always auditing itself. New strategies are possible because you have information that you’ve never had before.

Other areas of a company could be treated similarly. What would supply chain management be like if a company had constant, up-to-date information about inventory, manufacturing, new orders, and shipping? What would happen to product design, sales, and engineering if a constant digest of issues from customer service were available to them?

These changes sound like something that we’ve often talked about in software development: continuous integration and continuous delivery. Just as CI/CD requires IT departments to automate software deployment pipelines, continuous business processes come from automating–together–all of the processes that make businesses work. Rethinking the entirety of a business’s processes in order to gain new insights about the nature of the business, to change your relationship to critical measures like cash flow, and to automate the business’s core to make it more effective is indeed Hyperautomation. It’s all about integrating processes that couldn’t be integrated back when the processes were done by hand; that pattern recurs repeatedly as businesses transform themselves into digital businesses. Again, does this sound like a business Metaverse? After all, the consumer Metaverse is all about sharing immersive experience. While automating business processes doesn’t require VR goggles, for an executive I can’t imagine anything more immersive than immediate, accurate knowledge of every aspect of a company’s business. That’s surely more important than taking a meeting with your bank’s 3D avatars.

This kind of automation doesn’t come from a superficial application of AI to some isolated business tasks. It’s all about deep integration of technology, people, and processes. Integration starts with a thorough understanding of a business’s goals, continues with an understanding of the actions you can take to change your situation, and ends with the development of data-driven tools to effect the changes you want to see. While AI tools can help discover processes that can be automated, AI tools can’t do this job alone. It can’t happen without subject matter experts. It requires collaboration between people who know your business well, the people who are actually performing those tasks, and the stakeholders–none of whom has the entire picture. Nor can it be undertaken without addressing data integration problems head-on. For some problems, like the pharmacy prescription application we’ve already touched on, data integration isn’t just another problem; it is the problem that dwarfs all other problems.

We also need to be aware of the dangers. On one hand, automating all of a company’s processes to make a single coherent whole sounds like a great idea. On the other hand, it sounds like the kind of massive boil-the-ocean IT project that’s almost certainly bound to fail, or remain forever unfinished. Is there a happy medium between automating a single process and embarking on an endless task? There has to be. Understand your business’s goals, understand what levers can affect your performance, understand where you can use data–and then start with a single process, but a process that you have understood in the broader context. Then don’t just build applications. Build services, and applications that work by using those services. Build an API that can integrate with other processes that you automate. When you build services, you make it easier to automate your other tasks, including tasks that involve customers and suppliers. This is how Jeff Bezos built Amazon’s business empire.
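
As a sketch of what “build services” might mean in practice, here’s a minimal example of putting one automated process behind an API so that other processes, and other automations, can call it. FastAPI is used only as an example framework; the endpoint, fields, and approval rule are hypothetical.

```python
# A toy invoice-processing service: the same logic a web form would trigger,
# exposed as an API that other automated processes can call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Invoice(BaseModel):
    vendor: str
    amount: float
    purchase_order: str | None = None

@app.post("/invoices")
def submit_invoice(invoice: Invoice):
    # A stand-in business rule; real approval logic would live here.
    approved = invoice.amount < 10_000 and invoice.purchase_order is not None
    return {"status": "approved" if approved else "needs_review"}
```

Run under any ASGI server (for example, uvicorn), the same endpoint can serve the internal web form, the procurement system, and a supplier portal alike.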

The Humans in the Loop

Developers who are automating business systems have to determine where humans belong in the loop. This is a sensitive issue: many employees will be afraid of losing their jobs, being “replaced by a machine.” Despite talk about making jobs more interesting and challenging, it would be unrealistic to deny that many executives look at process automation and think about reducing headcount. Employees’ fears are real. Still, as of mid-2022, we remain in a job market where hiring is difficult, at any level, and if a business is going to grow, it needs the human resources to grow. Automating processes to make decisions in routine situations can be a way to do more without adding staff: if pharmacy employees can rely on an automated process to look up drug interactions, regulations, and medical records, in addition to managing the insurance process, they are free to take on more important or more difficult tasks.

Making jobs more challenging (or difficult) can be a double-edged sword. While many people in the automation industry talk about “relieving staff of boring, routine tasks,” they often aren’t familiar with the realities of clerical work. Boring, routine tasks are indeed boring and routine, but few people want to spend all their time wrestling with difficult, complex tasks. Everybody likes an “easy win,” and few people want an environment where they’re constantly challenged and facing difficulties–if nothing else, they’ll end up approaching every new task tired and mentally exhausted. Tired and overstressed employees are less likely to make good decisions, and more likely to think “what’s the easiest way to get this decision off of my desk.” The question of how to balance employees’ work experience, giving them the “easy wins” while still enabling them to handle the more challenging cases, hasn’t been resolved. We haven’t seen an answer to this question–for the time being, it’s important to recognize that it’s a real issue that can’t be ignored.

It’s also very easy to talk about “human in the loop” without talking about where, exactly, the human fits in the loop. Designing the loop needs to be part of the automation plan. Do we want humans evaluating and approving all the AI system’s decisions? That raises the question of exactly what, or why, we’re automating. That kind of loop might be somewhat more efficient, because software would look up information and fill in forms automatically. But the gain in efficiency would be relatively small. Even if they didn’t need to spend time looking up information, an office worker would still need to understand each case. We want systems that implement end-to-end automation, as much as possible. We need employees to remain in the loop, but their role may not be making individual decisions. Human employees need to monitor the system’s behavior to ensure that it is working effectively. For some decisions, AI may only play an advisory role: a human may use AI to run a number of simulations, look at possible outcomes, and then set a policy or execute some action. Humans aren’t managed by the machine; it’s the other way around. Humans need to understand the context of decisions, and improve the system’s ability to make good decisions.
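
Here’s a minimal sketch of that advisory pattern: the software simulates a few candidate staffing policies and summarizes the risk of each, and a human decides which trade-off is acceptable. The demand model and the numbers are invented for illustration.

```python
# Simulate how often a service backlog becomes unacceptable under
# different staffing policies; a manager, not the model, picks the policy.
import random

def simulate_staffing(staff, days=1000, seed=42):
    rng = random.Random(seed)
    backlog, bad_days = 0.0, 0
    for _ in range(days):
        demand = rng.gauss(100, 20)        # requests arriving that day
        capacity = staff * 12              # each person handles ~12 per day
        backlog = max(0.0, backlog + demand - capacity)
        bad_days += backlog > 50           # days with an unacceptable backlog
    return bad_days / days

for staff in (7, 8, 9, 10):
    print(f"{staff} staff: backlog unacceptable on {simulate_staffing(staff):.0%} of days")
```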

If we want to leave as many decisions as possible to the system, what roles do we want humans to have? Why do we want humans in the loop? What should they be doing?

  • Humans need to manage and improve the system
  • Humans need to investigate and rectify bad decisions

Neither role is trivial or simple. “Managing and improving the system” encompasses a lot, ranging from automating new tasks to improving the system’s performance on current tasks. All AI models have a finite lifetime; at some point, their behavior won’t reflect the “real world,” possibly because the system itself has changed the way the real world behaves. Models are also subject to bias; they are built from historical data, and historical data almost never reflects our ideals of fairness and justice.  Therefore, managing and improving the system includes careful monitoring, understanding and evaluating data sources, and handling the data integration problems that result. We’re talking about a job that’s much more technical than a typical clerical position.

This understanding of the “human in the loop” suggests a user interface that’s more like a dashboard than a web form. People in this role will need to know how the system is operating on many levels, ranging from basic performance (which could be measured in actions per second, or the time taken to generate and communicate an action), to aggregate statistics about decisions (how many users are clicking on recommended products), to real-time auditing of the quality of the decisions (are they fair or biased, and if biased, in what way).
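
A minimal sketch of the kind of aggregates such a dashboard might compute from a log of automated decisions follows; the log fields and the groups are hypothetical.

```python
# Toy dashboard metrics: latency, plus approval and error rates by group,
# computed from a (made-up) decision log.
from collections import defaultdict

decision_log = [
    {"latency_ms": 42, "decision": "approve", "group": "A", "correct": True},
    {"latency_ms": 55, "decision": "deny",    "group": "B", "correct": False},
    {"latency_ms": 38, "decision": "approve", "group": "B", "correct": True},
    {"latency_ms": 61, "decision": "deny",    "group": "A", "correct": True},
]

avg_latency = sum(d["latency_ms"] for d in decision_log) / len(decision_log)
print(f"avg latency: {avg_latency:.0f} ms")

by_group = defaultdict(list)
for d in decision_log:
    by_group[d["group"]].append(d)

# Comparing these rates across groups is also the simple bias check discussed
# below: persistently higher error rates for one group are a red flag.
for group, rows in sorted(by_group.items()):
    approval = sum(r["decision"] == "approve" for r in rows) / len(rows)
    errors = sum(not r["correct"] for r in rows) / len(rows)
    print(f"group {group}: approval {approval:.0%}, error rate {errors:.0%}")
```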

Likewise, all decision-making processes are going to produce bad decisions from time to time. For better or for worse, that’s baked into the foundations of AI. (And as humans, we can’t claim that we don’t also make bad decisions.) Those bad decisions will range from simple misdiagnoses, poor recommendations, and errors to subtle examples of bias. We can’t make the mistake of assuming that an automated decision will always be correct. It’s possible that automated decision-making will be an improvement over human decision-making; but bad decisions will still be made. The good news is that, at least in principle, AI systems are auditable: we know exactly what decisions were made, and we know what data the system used.

We can also ask an AI system to explain itself, although explainability is still an area of active research. We need explanations for two reasons. Staff will need to explain decisions to customers: people have never liked the feeling that they are interacting with a machine, and while that preference might change, “that’s what the computer said” will never be a satisfactory explanation. The system’s explanation of its decisions needs to be concise and intelligible. Saying that a loan applicant was on the wrong side of some abstract boundary in a high-dimensional space won’t do it; a list of three or four factors that affected the decision will satisfy many users. A loan applicant needs to know that they don’t have sufficient income, that they have a poor credit history, or that the item they want to purchase is overpriced. Once that reasoning is on the table, it’s possible to move forward and ask whether the automated system was incorrect, and from there, to change the decision. We can’t let automation become another way for management to “blame the computer” and avoid accountability.
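
As a minimal sketch of how a short list of factors can fall out of a simple scoring model, here’s a toy linear credit model whose per-feature contributions become the explanation. The weights and applicant values are invented, and real systems use richer models and explanation methods; the shape of the output is the point.

```python
# Toy linear credit score: each feature's contribution is weight * value,
# and the three most negative contributions become the explanation.
weights = {"income": 0.8, "credit_history_years": 0.5,
           "existing_debt": -1.2, "loan_to_value": -0.9}
applicant = {"income": -0.4, "credit_history_years": -1.1,   # standardized values
             "existing_debt": 1.3, "loan_to_value": 0.2}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

top_factors = sorted(contributions, key=contributions.get)[:3]
print("decision:", "approve" if score > 0 else "deny")
print("main factors:", top_factors)
# decision: deny
# main factors: ['existing_debt', 'credit_history_years', 'income']
```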

Improving the system so that it gives better results requires a more technical explanation. Is the system too sensitive to certain factors? Was it trained using biased, unfair data? Is it inferring qualities like gender or ethnicity from other data? Relatively simple tests, like higher error rates for minority groups, are often a sign of bias. Data is always historical, and history doesn’t score very well on fairness. Fairness is almost always aspirational: something we want to characterize the decisions we’re making now and in the future. Generating fair results from biased data is still a subject for research, but again, we have an important advantage: decisions made by machines are auditable.

To override an automated decision, we need to consider interfaces for performing two different tasks: correcting the action, and preventing the incorrect action from being taken again. The first might be a simple web form that overrides the original decision–no matter how hard we try to automate “simple web forms” out of existence, they have a way of returning. The second needs to feed back into the metrics and dashboards for monitoring the system’s behavior. Is retraining needed? Is special-purpose training to fine-tune a model’s behavior an option?
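
Here’s a minimal sketch of what the second half of that might capture: an override record structured so that a corrected decision also becomes a labeled example for monitoring and retraining. The fields are hypothetical.

```python
# A toy override record: the correction itself, plus the metadata the
# monitoring dashboards and a retraining pipeline would need.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Override:
    case_id: str
    original_decision: str
    corrected_decision: str
    reason: str
    reviewer: str
    timestamp: str

overrides: list[Override] = []   # in practice, a table the dashboards read

def override_decision(case_id, original, corrected, reason, reviewer):
    record = Override(case_id, original, corrected, reason, reviewer,
                      datetime.now(timezone.utc).isoformat())
    overrides.append(record)     # each override is also a labeled example
    return asdict(record)

print(override_decision("case-881", "deny", "approve",
                        "income verified manually", "j.alvarez"))
```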

Re-training an AI system can be expensive, and auditing training data is a big project, but both are necessary and have to be part of the plan. Even when there are no egregious errors, models need to be retrained to remain relevant. For example, fashion recommendations from a model that hasn’t been retrained in a year are not likely to be relevant.

Another problem with interfaces between humans and AI systems arises when we position the system as an “oracle”: a voice of truth that provides “the right answer.” We haven’t yet developed user interfaces that allow users to discuss or argue with a computer; users can’t question authority.  (Such interfaces might grow out of the work on large language models that’s being done by Google, Facebook, OpenAI, HuggingFace, and others.) Think about a diagnostic system in a doctor’s office. The system might look at a photo of a patient’s rash and say “That’s poison ivy.” So can a doctor or a nurse, and they’re likely to say “I didn’t need an expensive machine to tell me that,” even if the machine allows them to treat more patients in an hour. But there’s a deeper problem: what happens if that diagnosis (whether human or automated) is wrong? What if, after treatment, the patient returns with the same rash? You can’t give the same diagnosis again.

Shortly after IBM’s Watson won Jeopardy, I was invited to a demonstration at their lab. It included a short game (played against IBM employees), but what interested me the most was when they showed what happened when Watson gave an incorrect answer. They showed the last five alternatives, from which Watson chose its answer. That layer wasn’t just a list: it included pros and cons for each answer under consideration, along with the estimated probability that each answer was correct. Choose the highest probability and you have an “oracle.” But if the oracle is wrong, the most useful information will be on the layer with the rejected answers: the other answers that might have been correct. That information could help the doctor whose patient returns because their poison ivy was actually a strange food allergy: a list of other possibilities, along with questions to ask that might lead to a resolution. Our insistence on AI systems as oracles, rather than knowledgeable assistants, has prevented us from developing user interfaces that support collaboration and exploration between a computer and a human.
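
Surfacing ranked alternatives is technically straightforward; the hard part is the interface and the habit of treating the top answer as the only answer. Here’s a minimal sketch, assuming scikit-learn is available, of returning ranked hypotheses with probabilities instead of a single verdict; the toy data and labels are stand-ins.

```python
# Rank all candidate "diagnoses" with probabilities instead of returning
# only the argmax. The features and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1, 0.9], [0.2, 0.8], [0.8, 0.2],
              [0.9, 0.1], [0.5, 0.5], [0.6, 0.4]])
y = np.array(["poison ivy", "poison ivy", "food allergy",
              "food allergy", "contact dermatitis", "contact dermatitis"])

model = LogisticRegression(max_iter=1000).fit(X, y)

case = np.array([[0.55, 0.45]])
probs = model.predict_proba(case)[0]
ranked = sorted(zip(model.classes_, probs), key=lambda pair: pair[1], reverse=True)

for label, p in ranked:          # show the runners-up, not just the winner
    print(f"{label}: {p:.2f}")
```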

Automation isn’t about replacing humans; it’s about collaboration between humans and machines. One important area of research for the “office metaverse” will be rethinking user interface designs for AI systems. We will need better dashboards for monitoring the performance of our automation systems; we’ll need interfaces that help workers research and explore ambiguous areas; and we probably won’t get away from filling in web forms, though if automation can handle all the simple cases, that may be all right.

Putting It All Together

Hyperautomation may or may not be the biggest technology trend of 2022. That game of buzzword bingo is unimportant. But “automating all the things”–that’s sure to be on every senior manager’s mind. As you head in this direction, here are some things to keep in mind:

  • Businesses are complex systems. While you should start with some simple automation tasks, remember that these simple tasks are components of these larger systems. Don’t just automate poor processes; take the opportunity to understand what you are doing and why you are doing it, and redesign your business accordingly.
  • Humans must always be in the loop. Their (our) primary role shouldn’t be to accept or reject automated decisions, but to understand where the system is succeeding and failing, and to help it to improve. 
  • The most important function of the “human in the loop” is accountability. If a machine makes a bad decision, who is accountable and who has the authority to rectify it?
  • Answers and decisions don’t arise magically out of the data. Start by understanding the business problems you are trying to solve, the actions that will have an influence on those problems, and then look at the data you can bring to bear.
  • Companies marketing AI solutions focus on the technology.  But the technology is useless without good data–and most businesses aren’t as data-rich as they think they are.

If you keep these ideas in mind, you’ll be in good shape. AI isn’t magic. Automation isn’t magic. They’re tools, means to an end–but that end can be reinventing your business. The industry has talked about digital transformation for a long time, but few companies have really done it. This is your opportunity to start.


Special thanks to Jennifer Stirrup, Lorien Pratt, and Donald Farmer, for conversations about Hyperautomation, Decision Intelligence, and automating business decisions. Without them, this article wouldn’t have been possible. All three have upcoming books from O’Reilly. Donald Farmer’s Embedded Analytics is currently available in Early Release, and Lorien Pratt has a preview of The Decision Intelligence Handbook on her website.
