RPA Spotting: The Efficiency of Consistency

Lloyd Dugan:
The low-hanging fruit here has been the first of those, which is the time and cost savings that the use of the robot represents, because it essentially replaces the human worker. I'm sure that comes across as a bit cold, but that is the cost calculus here: a person in the seat carries with it a lot of allocated overhead costs for the facility, a lot of allocated fringe benefits, not to mention direct salary, whereas the robot largely works for free. So it's too compelling not to at least see a blending of workers with the robots. And the challenge, as you described it, I think can also be said this way: we need to figure out which is the high-value work that we want the human being to do, right? The complex work that requires cognitive evaluation moments, that has the personal touch, particularly if there's a need to reach out to an advocate or a consumer, because these people are not going to deal well with a voice that sounds like HAL, right? That's what we want to get this reduced down to, or right-sized down to, maybe, is a better word, and let the rest of it be the robot. And then we can say: this is our best, most cost-effective, most cost-efficient configuration of our resources.

Cam Wilkinson:
I think it's important to bear in mind, outside of cost, the efficiency of consistency. If you've got a bot doing something and it's making the right decisions every time, whereas a human has an error rate that is perhaps different from the bot's, then that's a really good example of where you've got to look at the human-centered aspects of that.

Because as a consumer, would you want the bot to be making the right decision for you, even if it means you have to deal with a bot? Or do you want to run the risk of working with a human who might make a mistake on your behalf, so that you potentially miss out on an entitlement that you are due? That's the conundrum sometimes, isn't it?

Lloyd Dugan:
Yeah, absolutely. And it's a very undervalued, undersold business use case for this. Because particularly if you're operating under one or more service-level agreements, SLAs, or quality-of-service restrictions, this is exactly the kind of thing you want to do, because it puts a floor on just how wrong you can be. As you point out, humans can be wrong.

The robots don't really make mistakes. They encounter conditions that they're not programmed to respond to. There's a difference, right? And while you can train a human to do something better, it's ultimately a hope that that's what they'll do. Whereas if you program a robot to do more things that it couldn't do before, it will do them, and there's not much risk that it will get them wrong; you will presumably weed that out during testing before you deploy it. Yeah, I think that's a very undervalued, undersold use case for the business case.

Cam Wilkinson:
And the reason AI is becoming a predominant talking point and differentiator for organizations is the sheer scale and power of decision-making when you've got a predictive model, say, that has been well designed, and you've got a good base of historic data from which to create your model. The bot can refer to that model to make a better decision based on all the information at hand, which a human simply couldn't do, because there's no way you can consume all that historic data to make the optimum decision. So I think that's a classic case where a good blend of RPA and AI combined helps organizations increase accuracy and increase the level of intimacy, understanding, and care for their customers and consumers.

Lloyd Dugan:
What kind of skills are needed here?

So you and I are both architects, so there's clearly a role for the architect to provide the visioning and technical feasibility and integration points for this kind of technology. But let's talk about who actually sits down and creates the RPA design, whether that's an unattended or attended moment, whether it's a bot or a virtual session, any of these combinations. What skillsets should the people who do this have, particularly if this is going to be a technology space with jobs opening up for people to fill?

Cam Wilkinson:
I think about the approach that I like to take, which is design thinking, human-centered design, where you look at the organization and ask what people are achieving and what they are doing in the function.

And you work with the actual people on the ground. You create a team with someone like you as an architect, but also someone who would end up doing the coding; someone who's used to doing macros in Excel, for instance, is a perfect kind of person to pick up the tools. Then you want your consumer, your human-in-the-loop person, to be there to help guide and structure and avoid those gotchas. And then you want someone from the product team, or from the ownership of that business function. So yeah, to me it really is that team approach of putting together the right unit. And you don't want a big unit. You just want a team of realistic people who have the impetus and the understanding of their own domains, but collectively the whole is so much more powerful.

Lloyd Dugan:
I agree down the line. I would simply add functional testers to that list.

I'm not seeing anybody do that. But if I were starting an RPA analyst and design unit from scratch, I would, because there just aren't that many out there yet; there's not much of a critical mass. I would look at people who have experience with automated functional testing tools and ask them: can they imagine applying the same mentality they use when they script a user interface behavior to something for the robot to emulate?

And if they can make that leap, I think I'm starting with somebody who already has that kind of human-centric perspective of what's going on. The other thing is, and I don't think this applies to Australia, but it applies to the States: there's a requirement called Section 508, which requires that government sites, at least, be structured and encoded with things that make an assistive technology tool work. Things like being able to read hidden text, or interpret a diagram if there's a description of it, and then give it voice as audio if the person is blind, or give it to them as printed braille. And there's a lot in between; it could just be that the text is large or the graphic is bigger, simply because the assistive technology is working on top of the browser or whatever the underlying application platform is.

And in this country, most vendors, if they're going to sell into the government space, which is pretty profitable for them, have to provide proof that they've got the ability to do that. If you go into Microsoft, for example, in any of the Microsoft products there are places where you can leave these little things that assistive technology will pick up.

But there is a sort of subclass of analyst/designer out there that specializes in making sure these websites, in particular web-based applications running on behalf of government agencies, meet these requirements. And I think those people bring this human-centric orientation, and I'll add to that a kind of user-interface-based understanding of flow, that is largely missing from most developers' skillsets, because that's unfortunately what I have seen.

They try to take a developer, like a SQL developer or a middleware developer, and say, hey, we want you to learn RPA. Some of them can make that transition, but some of them cannot, because programming a Java object to do X is nowhere near the same as programming the RPA to interpret a value on display and then select the appropriate value from a drop list before proceeding. Two totally different skillsets, in my mind.

Cam Wilkinson:
I think you've added a couple there that I overlooked. The user experience designer: I think UX design is pretty critical, because whilst you are automating things, you still need to manage them, and there will be aspects of interplay between humans and bots. You have to ensure that's a smooth and seamless aspect.

The overarching goal is that we're trying to simplify and make decisions quicker for the organization, and we don't really want to be creating more tech debt. That can be one of the pitfalls of a big, messy deployment of RPA bots going everywhere: you have to manage not only the underlying systems, but also the orchestration engine that you've built on top, and the complications of version control and everything else that comes with the system.

So it's pretty important to have that perspective. And that's why, when you create these units or these teams, there needs to be a level of integration and oversight with the IT deployment schedules, and an understanding of the systems environment that all these bots are working in: the interplay, and the necessity for communicating which bot is talking to what system, and what screen and version level it is expecting. You change one thing and it can break pretty easily.

Lloyd Dugan:
I'll tell you what: if I were a user experience designer, and I have nothing but the utmost respect for that skillset and the people who still apply it, I would be running toward trying to become an RPA designer. In my opinion, application development platforms nowadays have gone so far in the direction of simplifying screen design that they've taken, honestly, all the creativity out of it. There was a time when you could actually talk about the rules of screen design, of user experience design, and it had meaning; it had some value in directing and normalizing the designs of screens and their respective behaviors.

But the platforms nowadays have largely taken all of that away from the designer. The no-code/low-code approach has basically made the UI something that your grandmother could eventually figure out how to do. Now, I'm exaggerating to make a point, but the old classic exercise of going to a user experience designer and saying, hey, we think the screen may be a bit too dense, what do you think we should do? And he or she gives some suggestions on design, or better yet is part of the design process in the first place and prevents that from happening? By and large, that really doesn't happen now. It's more likely the developers just ran with their application development platform.

And if it does get subjected to a user experience review, it's usually after the fact and perfunctory: yeah, okay, we'll get around to that in some future release. It's nowhere near as impactful as I recall it being just 10 or 15 years ago. So like I said, if I were a UX person, I would just be running to extend my credentials in this space, and I would be employable for another 10 or 15 years, I think.

All right, let's transition to the five steps that we called out in the deck. So again, the point of RPA spotting, cribbing off of the idea of trainspotting, is: what are the distinctive characteristics that we're looking for, so that when we pursue RPA opportunities, we're pursuing the right ones?

Because like any technology, RPA is something that you can misuse or use badly, and then everybody gets upset and bad things happen. So how do we avoid that? In the first step, without going too deep into this, we talked about wanting to find the external and internal perspectives of the work.

I'll talk a little bit about the external; I'd like to ask you about your perspective on the internal. For me, the external perspective, particularly if I'm dealing with a citizen- or customer-facing set of moments, is: what does the customer journey look like? These touchpoints that exist in customer journey mapping are moments of interaction between the customer and the back-office process or system that is answering a call, provisioning a request, or just taking data and maybe doing something else with it. The customer journey is a progression through such moments, branching sometimes in one direction or another based upon the emotional state of the consumer. If they're happy, they're going to keep going. If they're pissed off, then they're going to be looking for ways to get out, or to get even, in your experience. And so if we know where those moments are, then we can have at least some external perspective on how these touchpoints can be made more effective through RPA.

And one of the things that I know you and I have looked at on the Australia side is this idea of injecting bots into what we're going to call the dossier a little bit later, which you've already mentioned once as a tracker, as a means of making the call center's job of supporting a citizen request easier to do.

Because that's a frustrating touchpoint. You call in; maybe the call goes through, maybe it doesn't. If you actually get to the first tier, maybe you've been waiting so long that you're upset, and now it's a strained conversation. All of these things happen, right? This is sort of standard stuff with call centers, whether they're servicing a government situation or not, but especially with a government situation, because everybody starts from the presumption that they hate what the government is doing, whether there's a reason for that or not. So by the time they talk to a person who is, quote unquote, representing the government, they're ready to unload.

So that's an example of a touchpoint moment that we can identify specifically in a customer journey map, and say: let's look at that more deeply and see what the real problems are. Let's do root cause analysis on this thing and see where it leads us. And then, in particular, we're going to flip the mirror around.

So if that's the external perspective, using the customer journey map as an example artifact by which to identify these touchpoints where perhaps RPA can help, let's turn it now inward. What are we looking for?

Cam Wilkinson:
So some of the things that we saw with these federal government platforms is the lack of integration. In fact, there were three versions of the same application using pretty much the same data, but all with totally different interfaces and different technologies, because they were developed this decade, last decade, or maybe the decade before.

And based on the type of transaction that you're going to perform, one of those three systems might well be the right vehicle to do it with. So it's really hard when you're operating in such a diverse environment and constantly flipping and changing screens. From an internal perspective, then, the opportunity is to surface the right data to the operator, so that when they're on the call, they're actually working on the thing that needs to be worked on for that person.

So I guess that's an opportunity for AI to come to the fore, because if we know the ID of the person who's calling in, then we can build a model that determines the most likely reason they're calling. Have they just lodged an application for a claim? And if so, what's the status of it? So there are some really basic things that we can use to analyze the status and history of particular cases, make a reasoned judgment on what might be most important to them, and surface that information to the help desk operator.

So that was a simple one that came to mind. And I guess knowing what people are calling about takes a bit of time and effort to figure out. But by and large, a lot of these call center operators, as soon as they saw the status of one file or another, would pretty quickly understand the person's needs and requirements.

So it's not as if you have to reinvent anything here. You can pretty quickly automate and cut to the chase, even just presenting a list: one of these three things is most likely. And that doesn't require a whole lot of really advanced data science skills; it's just assembling the right data at that point in time, and you can use a bot to trigger all of these things rather than write custom code.
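The idea above, ranking a short list of likely call reasons from a caller's recent case history, can be sketched with plain rules, no advanced data science required. This is a minimal illustration; the case types, statuses, and scoring weights are invented for the example, not taken from any real system.

```python
# Hypothetical sketch: rank the most likely reasons for a call from a
# caller's recent case events, so a bot can surface the top candidates
# to the help desk operator. All names and weights are illustrative.
from collections import Counter
from datetime import date

def likely_call_reasons(case_events, today, top_n=3):
    """case_events: list of (event_date, case_type, status) tuples.
    Returns up to top_n candidate reasons, most likely first."""
    scores = Counter()
    for event_date, case_type, status in case_events:
        days_old = (today - event_date).days
        recency = max(0, 30 - days_old)  # recent events weigh more
        if status == "lodged":
            scores[f"status of {case_type} claim"] += 2 * recency
        elif status == "rejected":
            scores[f"appeal of {case_type} decision"] += 3 * recency
        elif status == "payment_due":
            scores[f"missing {case_type} payment"] += 2 * recency
    return [reason for reason, _ in scores.most_common(top_n)]

events = [
    (date(2021, 3, 1), "disability", "lodged"),
    (date(2021, 2, 25), "housing", "rejected"),
    (date(2021, 1, 5), "pension", "payment_due"),
]
print(likely_call_reasons(events, today=date(2021, 3, 5)))
# The recent rejection outranks the recent lodgement; the stale
# payment event scores lowest.
```

In practice the bot would pull these events from the case systems at call time and push the ranked list onto the operator's screen.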

Lloyd Dugan:
To support that kind of analysis, you could do some process modeling, or some event modeling, or some system interaction modeling. There's a variety of artifacts that you can create to document the internal perspective.

But I wanted to ask you about something that I know is of keen interest to each of us, though I think you've probably done more thinking about it than I have: process mining, or, as it's now being coined in some places, task mining. The idea is to say: here are paths of work that are low complexity and high volume, and are good, juicy RPA candidates because of that.

But the mining term is going to throw many people off. So maybe you can give a description of that as part of answering my question about how you see process or task mining working in support of this.

Cam Wilkinson:
So definitely, looking at data and history, logs and event logs, is critical as you build out your understanding, and the plethora of tools that have been developed in the last five or ten years is pretty fantastic in this space. I think a lot of them are very good at displaying the results and helping you simulate new paths, or surface new insights around the pathways that a journey would take, like a call journey, or a claim journey, or whatever item it is that you're looking at.

But the challenge that we've found is this age-old problem of stitching the data together: gathering the correct logs, matching the timestamps, and finding that unique identifier key that you can use as a link to thread through all the operations. So it's a very data-intensive effort, and it's very worthwhile, in my view, because you get the inside running and the inside track without all the emotional baggage. You don't have a view of someone's subjective understanding of what the process is; you look at the raw data, and it will inform you directly what each of those processes is. Yeah, it's a fantastic capability that is going to be very practical for any RPA team.
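The stitching step described above can be sketched quite simply: thread events from separate system logs through a shared identifier, order them by timestamp, and count the distinct paths that emerge. The log contents and field layout here are assumptions made up for illustration; real event logs need far messier timestamp and key reconciliation.

```python
# Minimal sketch of stitching two hypothetical system logs on a shared
# case identifier and counting the distinct paths (trace variants).
from collections import Counter, defaultdict

call_log = [  # (case_id, timestamp, activity)
    ("C1", "2021-03-01T09:00", "call received"),
    ("C2", "2021-03-01T09:05", "call received"),
]
claims_log = [
    ("C1", "2021-03-01T09:02", "claim looked up"),
    ("C1", "2021-03-01T09:10", "claim updated"),
    ("C2", "2021-03-01T09:06", "claim looked up"),
    ("C2", "2021-03-01T09:07", "call escalated"),
]

def trace_variants(*logs):
    """Merge events from all logs by case_id, sort each case's events
    by timestamp, and return how often each ordered path occurs."""
    traces = defaultdict(list)
    for log in logs:
        for case_id, ts, activity in log:
            traces[case_id].append((ts, activity))
    variants = Counter()
    for events in traces.values():
        path = tuple(activity for _, activity in sorted(events))
        variants[path] += 1
    return variants

for path, count in trace_variants(call_log, claims_log).items():
    print(count, " -> ".join(path))
```

High-count variants with short, repetitive paths are exactly the low-complexity, high-volume candidates Lloyd describes.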

Lloyd Dugan:
Yeah, a decade or so ago, process mining was pretty much a science experiment done in a few startups coming out of academia, and there might have been one or two players on the market. Now, just about every application development platform that has been relabeled from a BPM suite or a case management suite has some kind of process mining component, usually through acquisition.

They bought them. So, not naming names, but it's shifted from not being part of the feature set of the platform to certainly being an order differentiator, if you will, at purchase. But I would argue that nowadays it's so ubiquitous that it's really an order qualifier, meaning you had better have an answer in that space, because buyers can go somewhere else to a different platform that has that answer.

And what I have observed as well, particularly in the last year or two, again not naming names, because I don't want to get caught in the endorsement trap here, is that many of the RPA vendors have picked it up, recognizing that the identification of use cases for RPA emerges out of this kind of analysis.

It's not something that's in the shadows anymore. It's out there as part of the platform space, not just for the application development stuff, but now also for the RPA. And I think we're going to see more of that in coming years, as people figure out how to squeeze even more efficiency and effectiveness of outcomes out of the work streams by throwing process mining or task mining at them and then seeing where the RPA opportunities emerge.

Cam Wilkinson:
And I think it's going to be great to keep as an ongoing assessment tool, just to see what works. You'd be able to tag certain activities that are bot-centric or bot-only, and others that are human-run, and then there's the concept of conformance and consistency that you can report on using a process mining tool. Rather than looking only at the historic data, you can look at it almost in real time to see how we are performing, what the error rates are, and how much rework is being required.

Yeah, that can really help you fine-tune either the routines and the decision points that the bots are using, or, if it's humans, what kind of additional training, or restructuring of applications, screen layouts, and decisions, we should implement.
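The conformance-and-rework reporting Cam describes can be illustrated in a few lines: tag each trace as bot-run or human-run, compare it against the expected path, and flag rework as any activity repeated within a trace. The expected path and trace data below are invented for the example; real conformance checking in process mining tools is considerably more sophisticated (token replay, alignments, and so on).

```python
# Rough sketch: per-actor conformance rate and rework count, comparing
# observed traces against one expected path. Data is illustrative only.
EXPECTED = ("receive", "validate", "decide", "notify")

traces = {
    "bot":   [("receive", "validate", "decide", "notify"),
              ("receive", "validate", "decide", "notify")],
    "human": [("receive", "validate", "validate", "decide", "notify"),
              ("receive", "decide", "notify")],
}

def assess(traces_by_actor, expected):
    """Return, per actor, the share of traces that match the expected
    path exactly, and how many repeated (rework) steps were observed."""
    report = {}
    for actor, actor_traces in traces_by_actor.items():
        conforming = sum(1 for t in actor_traces if t == expected)
        rework = sum(len(t) - len(set(t)) for t in actor_traces)
        report[actor] = {
            "conformance": conforming / len(actor_traces),
            "rework_steps": rework,
        }
    return report

print(assess(traces, EXPECTED))
```

Run continuously over fresh logs, a report like this shows whether deviations cluster around the bots' decision points or around the human-run steps, which is exactly what tells you whether to retune the bot or retrain the team.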


Lloyd Dugan

Lloyd Dugan is a widely recognized thought leader in the development and use of leading modeling languages, methodologies, and tools, covering from the level of Enterprise Architecture (EA) and Business Architecture (BA) down through Business Process Management (BPM), Adaptive Case Management (ACM), and Service-Oriented Architecture (SOA). He specializes in the use of standard languages for describing business processes and services, particularly the Business Process Model & Notation (BPMN) from the Object Management Group (OMG). He developed and delivered BPMN training to staff from the Department of Defense (DoD) and many system integrators, presented on it at national and international conferences, and co-authored the seminal BPMN 2.0 Handbook (http://store.futstrat.com/servlet/Detail?no=85, chapter on “Making a BPMN 2.0 Model Executable”) sponsored by the Workflow Management Coalition (WfMC, www.wfmc.org).