The RPA Culture Wars

 

Lloyd Dugan:
One of the things that's certainly been my experience, and you should give voice to yours, but I'm pretty sure we've talked about this before and it's very similar, I expect, is what is the resistance that we encounter with the robot? We've talked about unrealistic expectations being set, standards to which the robot is held that the human is not. Measuring what the robot's doing when you're not doing that at the same level for the human, even though, measurably, you could pretty much count on the robot because it can't make a mistake in the technical sense. It is probably raising your quality scores, and yet you're not staring at the human being with the same glare. I mean, it's all of us. And I will say, what I'm experiencing now is people want the RPA to do something for them. So, you know, in a weird way, while there's this resistance on the one hand, there's also this unqualified embrace on the other that's paired up with it in this weird dance. So they're either saying, we don't trust what the robot's doing, so we want to do all this other stuff. But then they're also saying, but I want the robot to do all this extra work. And they're just kind of saying, well, hey, you could do this. And no one's really stopping to think about whether that's a good thing or whether it's in conflict with somebody else's request. And at the moment we're doing these projects as separate efforts. And I've screamed, in as loud a voice as I can, that that's just a nutty thing to do. We need to consolidate the RPA work under one project or program effort, if you will. But I haven't made much headway on that. A little bit of headway recently, but not anywhere near where it needs to be.

 

Cam Wilkinson:
Not screaming loud enough, Lloyd. Is that right?

Lloyd Dugan:
You can't scream that loud over Teams. So that's what I call the RPA Culture Wars, because I've probably had the same RPA explanation conversation with leadership three and a half years running. And like I said, in parallel with that, I have to keep repeating the explanation of what it does and what it doesn't do. They keep asking for it to do more, without necessarily appreciating whether that's a good thing or a bad thing. And so I just feel caught in this weird crossfire between two parts of the same mentality that says, I want the robot to do more, but I don't really trust what it's doing. And so I call it the RPA Culture War, because this is largely about the culture of the organization, which, in the case of Serco in particular, is a business process outsourcer that has been focused on providing services through human beings, for the most part, through most of its existence. And yet, if we're going to truly embrace what the circle of senior leadership coming out of the U.K. said, that has to change, right? So the top leadership understands what needs to happen. But I don't know if it's really filtered down to the regional director level or not, and from there down to the program or project level. I'm not sure I sense that within the organization. I don't mean that as a slam on Serco, but I am saying I think this is a mindset shift for them: to no longer see the outsourcing of this work as an exclusively human-centric thing, and to envision it as now this blended model.

Cam Wilkinson:
Yeah, for an outsourcer that can be quite challenging because of the business model that they've set up with their clients on the contracts. But outside of that, I think there's the Terminator effect, you know, where you've got a robot going rogue and making decisions that ruin people's lives. And there's that constant fear of AI taking over and, you know, controlling the world. And you talked just before about trust. And I think that's the fundamental thing here: if you don't understand, and you haven't built the system, or you don't know what's under the hood, so to speak, then it can be very difficult to trust what it's going to do. Whereas with a human, if you've worked with people, you know what to expect from them because you've seen them behave in certain ways. So I think it comes down to understanding and profiling what it is exactly that these bots and these RPA platforms will do and will not do. There just seems to be a constant need for reassurance for those who are uninitiated and uncertain. And it's always a fear game. So, yeah, the culture war, it's real. I don't think it's going to go away.

Lloyd Dugan:
So what you're saying is that decades' worth of science fiction movies and books, from the HAL 9000 to Skynet, have all made us basically afraid of the A.I. singularity, to the point that we see it around the corner whether it's there or not.

Cam Wilkinson:
Pretty much. It's kind of like that horror movie that you can't turn off because you want to see what's coming next. You're enthralled, because, you know, it could be interesting. It's just like RPA: it can be good, you're just a little bit uncertain. Is it going to ruin me? Is it going to ruin my career? I think there are so many aspects of fear related to the adoption.

Lloyd Dugan:
And that's unfortunate, because even in the example that you just cited, which I think is a great line, by the way: so the robot did something that ruined somebody. You know, that happens with human beings doing it all the time. So why this special focus? That's, to me, again, an example of the logic of it, right? Because it didn't involve a person, it's seen as somehow more impersonal and more evil. But a person doing the exact same thing did the exact same thing. Yeah, you can sue them, I guess, or you can punish them in some respect. But you can't really punish the software robot, unless someone's figured out a way to do that. I don't know.

Cam Wilkinson:
Yeah. What do they say? You know, life is stranger than fiction. Humans can do unimaginable things, whereas a robot pretty much can't.



Lloyd Dugan

Lloyd Dugan is a widely recognized thought leader in the development and use of leading modeling languages, methodologies, and tools, spanning from the level of Enterprise Architecture (EA) and Business Architecture (BA) down through Business Process Management (BPM), Adaptive Case Management (ACM), and Service-Oriented Architecture (SOA). He specializes in the use of standard languages for describing business processes and services, particularly the Business Process Model & Notation (BPMN) from the Object Management Group (OMG). He developed and delivered BPMN training to staff from the Department of Defense (DoD) and many system integrators, presented on it at national and international conferences, and co-authored the seminal BPMN 2.0 Handbook (http://store.futstrat.com/servlet/Detail?no=85, chapter on “Making a BPMN 2.0 Model Executable”) sponsored by the Workflow Management Coalition (WfMC, www.wfmc.org).