BPM.com
  1. Peter Schooff
  2. BPM Discussions
  3. Thursday, 05 July 2018
As automation continues to pick up pace, how do you make sure that when you automate a process it doesn't result in a worse outcome?
Karl Walter Keirstead
Basics for Improved Outcomes in four well-delineated steps:

1. Management on board
2. Roll-out of an inventory of mapped/improved/compiled processes (In-line processes, not on-line, not offline)
3. Real-time Case Management platform w/ connectivity to/from local and remote apps and APIs
4. UIs that make it easy to on-board staff and keep them on board.
Comment
Jonathan Yarmis

  1. Measure, measure, measure
  2. Make sure what you're measuring has actual relevance to what you're trying to accomplish
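Jonathan's "measure, measure, measure" can be made concrete with a minimal sketch: derive a metric (case cycle time) from event data, so it can then be checked against the outcome you are actually trying to improve. The event records, field layout and function name below are invented for illustration.

```python
from datetime import datetime

# Hypothetical event records: (case_id, activity, timestamp)
events = [
    ("c1", "start", datetime(2018, 7, 1, 9, 0)),
    ("c1", "end",   datetime(2018, 7, 1, 17, 0)),
    ("c2", "start", datetime(2018, 7, 2, 9, 0)),
    ("c2", "end",   datetime(2018, 7, 3, 9, 0)),
]

def cycle_times(log):
    """Elapsed hours between each case's start and end events."""
    starts, ends = {}, {}
    for case_id, activity, ts in log:
        if activity == "start":
            starts[case_id] = ts
        elif activity == "end":
            ends[case_id] = ts
    return {c: (ends[c] - starts[c]).total_seconds() / 3600
            for c in starts if c in ends}

print(cycle_times(events))  # {'c1': 8.0, 'c2': 24.0}
```

The second bullet is the harder part: a number like cycle time only matters if it is actually linked to the outcome the process exists to deliver; measuring something easy but irrelevant is worse than not measuring at all.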
Comment
Max Young
Blog Writer
Measure & re-measure :-)
Comment
Jon Spira
Patient and careful analysis of the problem is essential, as are clear, measurable definitions of what constitutes a successful execution of the process for the process's customers. Start by figuring out what the process should be, and how to achieve the desired business result(s), without any automation. Then move on to studying how (or even whether) automation will lead to differences that the business needs and values. Computers aren't smart; they're just fast. Speeding across the wrong finish line more quickly doesn't mean you've won the race.
Comment
"Patient..." "start by figuring out..." "(or even whether)..."

Oh dear, while you were busy making sure you weren't crossing the wrong finish line, your competition crossed the right one.
  1. E Scott Menter
  2. 2 months ago
Neil Ward-Dutton
With apologies for being annoying, there's a problem with the question.
There's an implicit assumption that automation of work is a 'one and done' exercise. These days any sane team running an automation project will do it iteratively, measuring all the time.
Analysis to the nth degree up front is perhaps a seductive way to minimise the risk, but with today's tools the far better approach is to work iteratively and measure iteratively.
Comment
There is a lot of merit to iteration - we do this a lot.

But, in enterprise development, in the absence of an agile architecture, you will - sooner than later - find yourself iterating fast towards ossification.
  1. Bogdan Nafornita
  2. 2 months ago
Karl Walter Keirstead
@Neil. I suppose by "work iteratively and measure iteratively", you mean

a) map out a process
b) improve it to an extent
c) compile it, roll it out
d) allow test users to generate test instances where they put the process through its paces, make sure the logic is right and covers all eventualities (including skip, jump . . .)
e) allow end users to generate real Case instances where the team continues to make sure the process is accomplishing what it should be accomplishing.

Where there are any signs of issues at d) and e) go back to b)
  1. Karl Walter Keirstead
  2. 2 months ago
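Karl's steps b) through e), with the return to b) on any sign of trouble, amount to a simple loop. A rough sketch, where `improve`, `test_users_ok` and `end_users_ok` are hypothetical callables standing in for the human activities he describes:

```python
def iterate_process(process, improve, test_users_ok, end_users_ok, max_rounds=10):
    """Improve and roll out a process until neither test users (step d)
    nor end users (step e) report issues, looping back to step b) otherwise."""
    for _ in range(max_rounds):
        process = improve(process)      # b) improve it to an extent
        # c) compile and roll out would happen here
        if not test_users_ok(process):  # d) test instances surface issues?
            continue                    #    -> back to b)
        if not end_users_ok(process):   # e) real Case instances surface issues?
            continue                    #    -> back to b)
        return process                  # stable: no issues at d) or e)
    return process                      # stop after max_rounds regardless

# Toy usage: each round improves a numeric "maturity" score by one.
print(iterate_process(0, lambda p: p + 1,
                      lambda p: p >= 2, lambda p: p >= 3))  # 3
```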
Bogdan Nafornita
Make it better, first.

If you automate a crappy process, you'll get a crappy automated process.
CEO, Co-founder, profluo.com
Comment
Yes, but even in the worst case (not recommended of course) you'll get a crappy automated process that is instrumented - so you can see opportunities for further improvement by analysing 'digital exhaust'. I'm taking a kind of extreme position here, but there's value in the idea of a 'minimum viable automation' that gives you a platform for rapid, data-driven improvement - rather than analysing ad infinitum up front.
  1. Neil Ward-Dutton
  2. 2 months ago
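Neil's point about an instrumented process producing 'digital exhaust' can be sketched minimally: even a crude log of step durations is enough to surface a first candidate for improvement. The log format and activity names below are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical 'digital exhaust': (activity, duration in minutes) per executed step
exhaust = [
    ("register", 5), ("approve", 120), ("register", 7),
    ("approve", 90), ("notify", 2),
]

def bottleneck(log):
    """Activity with the highest mean duration -- a first candidate
    for data-driven improvement once the process is instrumented."""
    by_activity = defaultdict(list)
    for activity, minutes in log:
        by_activity[activity].append(minutes)
    return max(by_activity, key=lambda a: mean(by_activity[a]))

print(bottleneck(exhaust))  # approve
```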
Sure, easy to get to a state of diminishing returns.

Better to run with automation you know than to have each intervention at a Case be a random intervention. The core requirement is "measure", as several contributors here have indicated.

The reason why QA is separate from production in industry is to get an independent assessment from folks who know how to do independent assessments.

In process design/rollout we need a round of testing by the designers, followed by testing by end users with feedback, then ongoing use by users with continuous sampling/testing by the designers.

No point spending much time staring at paper process maps.
  1. Karl Walter Keirstead
  2. 2 months ago
+1 to Neil's comment. Maybe it's because I'm a programmer from way back, but all this over-analysis (leading inevitably to over-specification, one of the biggest business technology challenges we face today) just makes me fidget.
  1. E Scott Menter
  2. 2 months ago
Since when (and in what world) does "make it better" mean "over-analysis"?
If anything, my guys are blaming me for under-analysis - I don't have the patience to lay out specs from customers who can barely articulate their visible problems. So yeah, my team iterates over incomplete process and data model designs all the time.
Yet I always stop my customers from doing a dumb thing, when I see it. A classic example: an HR team reconciling monthly timesheet data from 3 systems - I will always recommend that they merge the data into one source, rather than just slapping on an RPA project that does said reconciliation much faster. In this case, the RPA "exhaust" is just plain garbage.

And BTW, if you were still a programmer today, you'd know that, in a complex enterprise stack, iteration can be VERY expensive.
A balance is always needed - iterate, but keep your head up.
  1. Bogdan Nafornita
  2. 2 months ago
David Chassels
Real-time feedback or measurement will quickly identify areas needing to be addressed. Iteration in the build with user feedback will ensure good outcomes, but where there could be unexpected results you should be able to build in automated notifications, escalations etc. as required. As ever, support for the inevitable change needed to keep outcomes good as circumstances change should be readily accessible to the business.
Comment
Dr Alexander Samarin
1. Yes, Bogdan, an architecture is a must.

2. No, Bogdan, automating a crappy process is, sometimes, a way to radically improve it. (I have a proof).

Thanks,
AS
Comment
2. I have proofs of the contrary, too.
  1. Bogdan Nafornita
  2. 2 months ago
Ian Gotts
Simplification comes before automation (except in the dictionary).

Only an automation software vendor would suggest you automate to identify the areas to simplify!!! A simple mapping workshop will quickly enable you to remove the waste BEFORE you automate. On a whiteboard, piece of paper or mapping app.
I don't care. But do not automate until you have a shared and agreed understanding of the process to be automated.

We have 1,000s of examples of this.

Then automate, measure and iterate.
Comment
@Ian, agree

I, too, have 1,000s of examples.

You can map a process where each step features nothing more than an image of a paper form you may be hoping to convert from paper to electronic; you can encode all workflow steps to the same skill level and later update with proper routings; you can add decision branching boxes where you describe the options via narratives and later put in place rules.
  1. Karl Walter Keirstead
  2. 2 months ago
"Only an automation software vendor would suggest you automate to identify the areas to simplify!!! " - not at all. A trade-off analysis (as an integral part of the systems approach) can do this as well. BTW, I gave a real example and the client was very happy.
  1. Dr Alexander Samarin
  2. 2 months ago
E Scott Menter
Blog Writer
I'm with Neil. If your main purpose is avoiding the shoals, you'll never find your way to port. Navigate towards home, paying due attention to the hazards as you go. Respect the threats but focus on getting where you're going.

In other words: you're not trying to avoid a “worse outcome”; you're trying to find a better one.
-Scott
Comment
tata! oh no, kata!
  1. Emiel Kelly
  2. 2 months ago
John Morris
It's a commonplace, especially on BPM.com, to note that "all models are wrong, some models are useful". And this is often applied to process models; all BPM process automation deployments are based around some model. Which is by definition "a simplification" of the world.

What could go wrong? (Other than the list of entertaining errors listed by team BPM.com members above?)

Any management process relies in some degree on tacit workplace knowledge. Process automation captures the "big picture" of this tacit knowledge. And for some business domains, tacit knowledge is not critical to executive process management. The push to automate comes from stakeholders, stockholders and incentive-driven executives. And is often very successful.

But process automation outcomes can sometimes "go south", i.e. go wrong, if one tries to automate the type of process which is (a) very dependent on tacit knowledge, and (b) where the tacit knowledge is of the type that is difficult to automate. If the process automation programme is respectful of important tacit knowledge, tacit knowledge which at the same time can't yet or easily be incorporated into automation, then the programme can be successful. But if the automation programme ends up sidelining accumulated work-cultural capital, or tacit knowledge, the probability of programme failure is high. Respect tradition if you depend on tradition -- and have nothing better to offer. And know that in many fields, the volume of tacit knowledge is very substantial.

On the other hand! The question of automation versus traditional practice is a multi-dimensional problem, but especially a problem of economics. In many cases, a standardized product can be an order of magnitude less expensive than a product dependent on craft or artisanal work processes. Much of the time we're willing to give up a little quality for a lot of affordability. At least one should approach the problem with "eyes open".

Examples can be found in almost any field (underwriting, healthcare, business services, logistics, manufacturing). All of these fields are subject to increasing automation, for sure. The point is not that any given field should not be automated because tacit knowledge is somehow sacred. It is rather that, as you automate, you have to take into consideration how tacit knowledge is part of your road map.
Comment
Fantastic riposte @Alexander! Your article is well worth reading - especially the more we rely on "models" and "digital twins". Casual references to concepts related to technology stacks and systems can mislead decision-makers and planners.

Concerning my point about tacit knowledge and model sufficiency, the risk of "under-fitting" (which I think could be another way of describing the problem) is still there ...
  1. John Morris
  2. 2 months ago
Kay Winkler
As part of a process optimization effort, we always establish a standardized baseline before defining, simulating and implementing the next Irritation of the process to-be. A detailed narrative, a BPMN flowchart and, where possible, process models that can be simulated usually provide us with a clearer "as-is" input from which the different ROI metrics can then be established. This practice is not a warranty for BPM success, but it does assure a clear foundation from which we want to improve. It also preserves existing competitive advantages a company may have and needs to preserve and leverage. Once the first couple of BPM initiatives have taken place in such a structured manner, existing process repositories (applying process portfolio management) should easily provide said "as-is" baselines in a continued fashion. In short, most will come back to looking at BPM as a methodology, rather than a technology.
NSI Soluciones - ABPMP PTY
Comment
...next Irritation of the process to-be... That's how your customers see it? ;-))
  1. Emiel Kelly
  2. 2 months ago
Emiel Kelly
Is it annoying to ask what an 'automated process' means in this case?

Just asking because my goal is better performing processes, not automated processes.
Sharing my adventures in Process World via Procesje.nl
Comment
Fair enough question, Emiel. How about this: "better performing" is the outcome, "automation" is the how. And they are both inextricably linked and yet distinct. And when executives are only interested in the outcome, without taking responsibility for the work of their business, inside the "black box of work", the "how", then they will be at a disadvantage compared to those that step up. Many examples: US auto manufacturers versus Japanese auto manufacturers in the 1960s.
  1. John Morris
  2. 2 months ago
Boris Zinchenko
A crucial and often missed success factor for an automation initiative is careful evaluation of the process's interaction with other processes. As a rule, processes that have excessive dependencies and external communications are especially difficult to automate. The complexity of automation may grow exponentially with the increase in process linking.

The overall degree of link density for processes inside a business model is an essential but rarely discussed measure of the integration and stability of the business as a whole. Cascading process links with progressive nested dependencies further complicate and potentially delay positive process outcomes.

To correctly estimate the true potential of process improvement, business analysts should never consider any process standalone, but only in the context of its interaction with other parts of the business model. Automating a process alone is as artificial as running a single department without the whole company in place. When preparing for enterprise automation, always begin with a study of business dependencies.
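Boris's "degree of link density" can be made concrete with a small sketch. The dependency map from each process to the processes it feeds is hypothetical; density here is the number of actual directed links over the n*(n-1) possible ones.

```python
# Hypothetical process dependency map: process -> processes it feeds
links = {
    "order_intake": ["credit_check", "fulfilment"],
    "credit_check": ["fulfilment"],
    "fulfilment":   ["invoicing"],
    "invoicing":    [],
}

def link_density(deps):
    """Directed link density: actual links / maximum possible n*(n-1)."""
    n = len(deps)
    edges = sum(len(targets) for targets in deps.values())
    return edges / (n * (n - 1))

print(link_density(links))  # 4 links among 4 processes -> 0.333...
```

On this measure, a process with many incoming and outgoing links is exactly the hard-to-automate case described above, and comparing the density of two candidate designs gives a rough, quantitative way to discuss the trade-off.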
Comment
+1 the phrase "degree of link density".

Consider two business organizations in the same industry and at the same scale. What can we say about relative link density? It's quite possible that one organization has a much higher number of links -- for the same business. Enterprise Architecture guru John Zachman advocates "normalization of business functions". In other words, what can be done about reducing redundancy in the organization? Logically, this is the same as reducing total number of links. So, the organization that has seen leadership for better business architecture, resulting in a greater degree of functional normalization, and correspondingly a lower number of links, will be in a much better position to evolve and adapt business processes. So, "link density" never "just is", but is a legacy of past business architectural leadership. I suspect there would be a high correlation between F1000 failure and this phenomenon.

(The above argument is based on an apples-to-apples comparison of two organizations at the same scale, in the same sector. However, it would be quite possible for one organization to increase the semantic content of its products and services (e.g. to pursue mass customization), in which case the number of entities-to-be-linked would increase.)

(Also "link density" is not the same thing as "number of links". The above argument concerns the total number of links, and the assumption that, all things being equal, a greater absolute number of links will correspond to greater link density. However, @Boris's point may be slightly different re: comparing two sets of automation scenarios, where in one scenario processes have a higher average number of links just due to the way the processes are designed. This may be optimal or sub-optimal.)
  1. John Morris
  2. 2 months ago
@John, thank you. Density of links in a model can be objective and inherent to the business. In this case it is not so easy to change voluntarily. Density of links can also be a distinguishing factor expressing business identity. By analogy with chemistry, there can exist two entirely different substances composed of exactly the same base elements. For instance, coal and diamond are just different arrangements of one and the same element, carbon. The drastic difference in properties is just a result of different links between the atoms inside. In the same way, the structure and quality of links inside an organization may well make it coal or a diamond.
  1. Boris Zinchenko
  2. 2 months ago