RPA Spotting: The Final Steps


Lloyd Dugan:
So let's move on to the second step, which was to choose the right type of RPA to apply. We've talked about some of the variations: attended RPA, where you're blending it in with the human user in some capacity, with shared responsibility for getting the work done, or unattended RPA, which means that there's no human being involved at all. Attended RPA can be a with-the-user moment or an in-the-user-screen moment. Unattended RPA can be a bot that's doing routine stuff in the background, or a virtual session. In the case of unattended RPA, it's almost certain that you're going to have to have some user credentials for these bad boys. And we'll talk in a moment about how to think about that, because it's a common problem across RPA users. So we'll talk about some general design patterns. You can imagine a four-quadrant intersection between the kind of interaction with the human being versus the purpose of the work, and you're going to get these variations of flavor of RPA. Again, the first-gen RPA tools out there will do all of these.

Cam Wilkinson:
And just touching on some of the security issues, you know, especially in the government space, where there's an expectation that a human is logging in to perform a task. One of the approaches to solve that is to actually have the human log in. The human is sitting there and observes the activities of the bot in a semi-attended mode, where it's performing tasks under the supervision of a human. And so it's a kind of a gray line, I guess you might say. But the human is still responsible and has the capability to override the activities that are going on. So, yeah, I think with the distinction between the two, you know, attended and unattended, in a practical sense even the unattended one could still be performing quite complex tasks in conjunction with the user, you know, having moments where they interact on the screen, where the human makes a choice and then the bot continues. So I like the interplay there.

Lloyd Dugan:
Yeah, this is really more of a spectrum of flavors, as well as a recognition, as you point out, that you can mix and match with this stuff. And that's the point of it; that should be the point of it. Again, you're trying to find the right answer, not just decide what the answer is and make everything fit it. So the third step, as we call it out, is defining the scope of the work and the reporting of its success, what I call the yield recording. We probably had a little bit more of this in the North America space, so let me speak to that a bit. Basically, it's the scope of the RPA work versus the yield of the work that it actually does. So from a design standpoint, you have this kind of a priori statement that says, here's what the robot's going to do. And then from a yield-recording standpoint, you have this perspective of what it actually did. In my experience, these are two sides of the same coin. So, you know, I should be prepared to go from something I think is presumptively eligible for the robot to do, to seeing it as actually eligible for the robot to have done. And if that's not the case, then I've got to diagnose why that happened. As a consequence, the design has things inside it that the robot can do and things outside the design that the robot is not permitted to do. I'll have an a priori statement that says that, and then, in terms of the yield reporting, I'll have an a posteriori statement, you know, after the fact, that says this is what it did. And this is really a lead-in to the dossier, for which I give full faith and credit to you, but I also confess that I completely stole your idea. We'll talk a little bit about how we made it work here in North America. But tell me what your inspiration for the RPA dossier was as part of, again, supporting this measure of what the robot's doing versus what you thought it was doing.
You have this prescriptive sense of it in the design, then you have the actual sense of it that you can report.
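The two-sided check Lloyd describes, an a priori statement of what the bot may do versus an a posteriori record of what it actually did, could be sketched roughly like this. All names here are illustrative assumptions, not taken from any real RPA product:

```python
# Hypothetical sketch: compare a bot's designed scope (a priori)
# against the recorded yield of a run (a posteriori).

# Actions the design permits the robot to perform (illustrative).
DESIGNED_SCOPE = {"open_claim", "validate_fields", "update_status"}

def yield_report(actions_performed):
    """Split recorded actions into in-scope, out-of-scope, and unreached work."""
    performed = set(actions_performed)
    return {
        "in_scope": sorted(performed & DESIGNED_SCOPE),
        "out_of_scope": sorted(performed - DESIGNED_SCOPE),  # should be empty
        "not_reached": sorted(DESIGNED_SCOPE - performed),   # designed but never done
    }

# One run's recorded actions (illustrative).
report = yield_report(["open_claim", "validate_fields", "send_email"])
```

Anything landing in `out_of_scope` or `not_reached` is exactly the diagnosis trigger Lloyd mentions: the robot either did something the design never permitted, or never got to something it was designed to do.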

Cam Wilkinson:
I mentioned before the insurance claim scenario, and it was one that I worked on in my time at IBM in the Watson team, where we were using a framework for decision making, and it wasn't an RPA platform at all. It was basically collating information and making assessments on the suitability of an application. The biggest benefit that the client shared with me after they went live was this value of quality assurance and understanding what actually happened: what were all the artifacts that led us to make this decision of whether we paid the claim or didn't pay the claim? That was quite an insightful outcome for me, because it didn't really register when we were building the business case initially.

Cam Wilkinson:
Fast forward a few years to the government service contracts there, where we were trying to help them automate some of their back-office function and seeing the complications of accessing data from multiple systems and trying to make a decision on whether we would accept a claim or not. It was pretty much the same concept. But because I knew the variety of systems, and the integration of those didn't exist at all, we had to build something. We actually had to create something like a temporary status, because a claim would come in and might be processed by Team A, and then nothing would happen for weeks, and then Team B might pick it up. So we needed a method of storing the temporary status that could be applied at any point in time as the claim progressed, whether, you know, the applicant called in again or another team picked it up and was able to spend time processing it. We wanted to deal with the aspect of a single point of failure, and we didn't want to be duplicating data. So we devised the concept that we would store a temporary status, and it would reflect the actual systems of record. We weren't creating a duplicate status; we were just reflecting what the system of record had, melding that with the other systems of record, and storing it in one spot so that the bot or the human could access it. So, yeah, that's how it came together.
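The "reflected status" pattern Cam describes, one lookup spot that mirrors what each system of record currently says rather than owning a duplicate copy, could be sketched like this. The system names and data are illustrative stand-ins, not real government APIs:

```python
# Hypothetical sketch of a reflected temporary status: collate the current
# status from each system of record on demand, so nothing is duplicated.

class ReflectedStatus:
    def __init__(self, systems_of_record):
        # systems_of_record: dict of name -> callable(claim_id) -> status string
        self.systems = systems_of_record

    def snapshot(self, claim_id):
        """One spot where a bot or a human can read every system's status."""
        return {name: fetch(claim_id) for name, fetch in self.systems.items()}

# Illustrative stand-ins for the real record systems (assumed, not real).
payments = {"C-1": "pending"}
casework = {"C-1": "assigned-team-a"}

store = ReflectedStatus({
    "payments": lambda cid: payments.get(cid, "unknown"),
    "casework": lambda cid: casework.get(cid, "unknown"),
})

snap = store.snapshot("C-1")
```

Because the snapshot is computed from the systems of record at read time, whichever team (or bot) picks the claim up weeks later sees the current state, and there is no second copy to drift out of sync.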

Lloyd Dugan:
I think you're still underselling it, because this is one of those Captain Obvious moments where it's like, oh yeah, that makes sense. This is true in a lot of customer-facing situations, especially with government services on either side of the planet: there are going to be moments where you have to capture notes. Right now, some of these notes are free-form text from the agent working the call. But, you know, sometimes it's just to record something about what was discussed, and you want to do it in a very prescriptive way if your business rules dictate that. For example, the business rule says that after a call is finished with the customer, you will record the topics that were covered and the resolution, something to that effect. If these things can be reduced, or maybe the word is standardized, around a set of standard text phrases or values or elements, then basically the robot or the dossier can do this note taking for you, without typos and without having to figure out what it is that you need to record. And I think that's something that's also not well understood, but it should be, because there is variation that's going to happen if you leave it to the human being to record all that stuff. It's just inevitable. But if there are parts of it, even if it's not all of it, that you can standardize the text for, then you can use a bot to standardize the note taking as part of what the dossier is doing.
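The standardized note-taking Lloyd is describing might look something like the sketch below: the bot assembles the note from a fixed vocabulary, so typos and ad-hoc phrasing never enter the record. The topic and resolution values are invented for illustration:

```python
# Hypothetical sketch: build a call note only from standard values,
# rejecting anything outside the agreed vocabulary.

ALLOWED_TOPICS = {"billing", "eligibility", "status-inquiry"}
ALLOWED_RESOLUTIONS = {"resolved", "escalated", "callback-scheduled"}

def standard_note(topics, resolution):
    """Return a uniform note string; raise on any non-standard entry."""
    bad = [t for t in topics if t not in ALLOWED_TOPICS]
    if bad or resolution not in ALLOWED_RESOLUTIONS:
        raise ValueError(f"non-standard entries: {bad or [resolution]}")
    return f"Topics: {', '.join(sorted(topics))}. Resolution: {resolution}."

note = standard_note(["billing", "status-inquiry"], "resolved")
```

Every note produced this way is identical in structure, which is what makes the records mineable later, the problem Cam raises next about free-form shorthand.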

Cam Wilkinson:
One of the aspects that we found in trying to apply machine learning to the notes taken by contact center operators is, I guess, the diversity of quality of recording. You get people using shorthand because they're, you know, sometimes incentivized to minimize the amount of out-of-call time. So they're trying to rapidly take whatever notes they can so they can accept the next inbound call. The quality of those notes is often pretty useless in terms of mining them for insights. So, yeah, if you can simplify and speed up that process of accurately capturing what a conversation has been about, that's going to have multiplying value down the line, too.

Lloyd Dugan:
What we've actually done is we are basically running the dossier as a bot within the virtual RPA user session, and it's recording what the RPA virtual user is doing, not at every step but at major junctions, and pairing that up with task-level data at the end. So now we have not just a sense of what was done, but also why. And then, you and I have talked about this, and I'm just going to put it out there: there's absolutely no reason why this dossier shouldn't be running for the human user as well. Because you actually want to understand why a human being made a search, took a certain action, or made a certain decision that resulted in a particular action on the screen, just as much as you want to know why the RPA did that.
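A dossier that records major junctions only, pairing each action with the reason it was taken, could be sketched as below. The same structure works whether the worker is a bot or a human; the worker ID and entries are purely illustrative:

```python
# Hypothetical sketch of a dossier: record what was done AND why,
# at major junctions rather than at every step.

import json

class Dossier:
    def __init__(self, worker_id):
        self.worker_id = worker_id   # bot or human; treated the same
        self.entries = []

    def junction(self, action, reason):
        """Record a decision point: the action taken and the reason for it."""
        self.entries.append({"action": action, "reason": reason})

    def report(self):
        """Serialize the trail for the end-of-task pairing with yield data."""
        return json.dumps({"worker": self.worker_id, "trail": self.entries})

d = Dossier("rpa-virtual-user-7")
d.junction("searched claimant", "claim lacked a linked account")
d.junction("routed to Team B", "payment rule matched")
```

Keeping the schema identical for bots and humans is what makes the comparison Lloyd wants possible: the same "why did this worker do that" question can be asked of either.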

Cam Wilkinson:
Yeah, yeah, for sure. I mean, you can learn some best practice and say, well, these people are making some really good decisions here; we want to understand that better.

Lloyd Dugan:
So I think we're going to come back to this theme of please don't treat the robot any worse or better than you treat the human being. They're all workers.

Cam Wilkinson:
Yeah. Well, as you say, you know, we're going to welcome the bot into the family of workers, and we'll hold them to the same standards.

Lloyd Dugan:
Held to the same standard, and that should be the mantra. So the fourth step, and again we've talked about a lot of this, is where do we insert it? In all the examples that we've talked about, these are largely server-hosted moments, where the RPA user is essentially an overlay to what's going on; it's run out of the RPA server. In that sense, we'll likely have to treat these things as real user sessions, in the same sense that you have to define a refresh of user credentials, or their password expires like everybody else's, and stuff like that. You can really think through a number of basic design patterns out there that hold true in any sense, given that if what's going on is session emulation, you have a bunch of responsibilities that go along with that. But you can also have agents running on the client devices. And I have not had any experience with that. Have you?

Cam Wilkinson:
Don't think so.

Lloyd Dugan:
I'd be interested to see what that's like, right? I mean, it is like the hack bots that you read about, where there's a centralized controller who's got 10,000 bots on 10,000 different machines and can do whatever he wants by instructing them to do something, or, more likely, what they do is automatically check in for instructions. In effect, the hack bots out there are a kind of RPA deployment, a distributed deployment of agents operating at the client device level. I would be interested to see how that might work in the context that we've been discussing so far, as part of a broader workstream or a bit of work that's been turned over to a robot to do as opposed to a human being.

Cam Wilkinson:
One of the other projects for the Australian government, and I talked about it just earlier, needed to address the security issues, so we had to have a user log in. In effect, we have the agent bot running on the user's workstation. The bot kicks off and does a whole bunch of tasks, then pauses while the user logs in. And then the bot will continue, but under the supervision of the user, of the human. In that scenario, they actually had a dual screen: on one screen the human could work on their normal activities, and the bot would be continuing on the other screen. So there was kind of this dual role that they were both playing. I guess that's an effective scenario of that agent combination.

Lloyd Dugan:
These blended scenarios are elegant, but can be very tricky to pull off architecturally.

Lloyd Dugan:
All right. The last step was how to maintain and expand the reach of the RPA. We have to introduce a framing device, which used to be known as the Gartner Hype Cycle, although I think you can actually find it online without dropping the Gartner name. The hype cycle has a weird-looking curve that rises exponentially until it hits what is known as the peak of inflated expectations, then drops as actual experience fails to match what was expected. It hits the trough of disillusionment as it descends, and then rebounds as people start to right-size their expectations and get smarter about how to use the technology; that's called the slope of enlightenment. Once the maturity has settled in, it's then called the plateau of productivity. In some sense, this is a bit of a cheeky way of describing it, but I think by and large I've seen it hold up. Technologies, particularly new technologies, and even old technologies that have been refurbished to be new like this stuff has, you know, have those moments where they just soar like everybody wants them to. Even without the industry analyst groups pushing them, and that's generally their market: their business model is to label the next wave and then actually help generate the wave. And it goes too far. It ends up being overused, wrong interpretations are attached to it, and then people get pissed off. Bad things happen, projects get stalled, but then eventually people figure out how to use it. And usually it's the people who were not there at the beginning, because those people have moved on to other jobs.

Cam Wilkinson:
Shiny new toys that they can play with, exactly. Yeah, I really like that concept, that graph, because of the challenges that you face when you're trying to play with multiple systems. And, you know, systems management and systems theory concepts really hold true in any RPA project, I think, because you're typically tackling things that connect across multiple applications or platforms. It's not easy sometimes. So you want those low-hanging fruit to start with, because you need to keep things simple. You need to dumb it right down. And you don't need the high risks that are always present when you're tackling a multiple-system environment. So, yeah, the concept of just doing some automation and just improving the time taken and improving the quality of the results, that's a win.

Lloyd Dugan:
So where do you think we are? As a general market, not personal-experience-wise, but as a general market, where are we with RPA? Have we hit the peak of inflated expectations, are we at the trough of disillusionment, or are we on the slope of enlightenment? I'm pretty sure we're not at the plateau of productivity; it's still too new for that. I mean, RPA still has that new car smell about it. Where are we with respect to expectations, and how much has been realized? Where do you think we are?

Cam Wilkinson:
Yeah, I'd say we're down at the bottom, in that trough of disillusionment, in some areas. Well, in Australia we had the Robodebt scandal, or not a scandal, but a government-initiated activity to try and claw back some perceived over-payments of government subsidies. And robotic automation got a really bad name. In fact, there was a massive court case, and the government lost; they were found to be unlawfully trying to extract money from citizens. So, you know, that's a really telling tale of the risks that can occur when you kind of hand over responsibility to a machine, if you like. There's been a withdrawal, I think a massive withdrawal, from relying on it too much. And, you know, in the world of AI, at least five out of six projects end up delivering less than expected and often end in failure. So the challenges are real in any kind of tech deployment and tech project. And hence, you know, the concept of dumbing it down, keeping it simple, and just delivering on the easy stuff, that low-hanging fruit, so that you can, you know, build up your confidence, get some speed, take off your training wheels, graduate away from the L plates, the learner plates. And, yeah, I reckon we're definitely climbing out of that trough and hopefully up that slope of enlightenment.

Lloyd Dugan:
Yeah, I agree. I think we're more towards the trough than the peak at this point. Whether we've really bottomed out yet is something I'm not certain about myself, but I do have hope for the future, because I do think in particular that second-gen RPA, which I assume will pick up a better blending with AI and machine learning, will revitalize it, give it another oomph. I mean, these cycles are cyclical, if that makes any sense whatsoever. Right. We're bound to see this hype cycle repeat itself again for the next injection of second-gen RPA. Yeah, for sure. That's just the nature of the beast, I think. You and I have been around long enough to know that.

Lloyd Dugan:
Well, I think we've covered, you know, everything but the remaining stuff. We've covered topically what I was hoping to cover, and thank you so much for that. But we've got two or three things still left to go. One is a shameless plug moment: let's hear about Fission Digital. What is it that you've got going there? What can you tell us about what it is and where it's going?

Cam Wilkinson:
Fission Digital was started a few years ago by myself and one of my good colleagues whom I had the fortune of working with whilst at IBM. Fundamentally, we got into business because we realized there were a lot of organizations that had invested in some great tools from IBM or others and were struggling to take advantage of them. It's the combination, I think, of knowing where and how to apply AI and machine learning and natural language processing, and utilizing orchestration engines like RPA to make better decisions and better support systems. So we work with a number of federal agencies, like defense, and other bureaus across different verticals and horizontals. Within the human resources world, we're doing analysis on the longevity of staff in employment and helping them with their workforce planning. We also do really interesting projects on heavy equipment, whether it's industrial equipment or rotational or mobile vehicles. By analyzing all the information and building something like a 360-degree view of an asset, we're able to come up with models that predict behavior. That applies to humans too, but in the world of industrial assets it's really useful for prescriptive maintenance: which equipment is most likely to fail, and when? In what kind of sequence should we arrange our repairs or our field visits, and what kind of equipment and spare parts should we be carrying? These are all really useful bits of information when you're talking about large systems and, you know, high-cost assets. So, yeah, that's the kind of world that we've been working in, and it's fantastic. Helping these organizations with choices of software and deployment models and architecture, and really, you know, designing systems that are going to improve their decision-support platforms.

Lloyd Dugan:
Much appreciated. Cam. Thank you so much for taking the time out of your morning to devote to this conversation.

Cam Wilkinson:
Well, I've really enjoyed our chat, Lloyd. Thank you so much and appreciate all the time and the opportunity to share some ideas and thoughts with you.



Lloyd Dugan

Lloyd Dugan is a widely recognized thought leader in the development and use of leading modeling languages, methodologies, and tools, covering from the level of Enterprise Architecture (EA) and Business Architecture (BA) down through Business Process Management (BPM), Adaptive Case Management (ACM), and Service-Oriented Architecture (SOA). He specializes in the use of standard languages for describing business processes and services, particularly the Business Process Model & Notation (BPMN) from the Object Management Group (OMG). He developed and delivered BPMN training to staff from the Department of Defense (DoD) and many system integrators, presented on it at national and international conferences, and co-authored the seminal BPMN 2.0 Handbook (http://store.futstrat.com/servlet/Detail?no=85, chapter on “Making a BPMN 2.0 Model Executable”) sponsored by the Workflow Management Coalition (WfMC, www.wfmc.org).