From Anatoly Belaychuk, who recently wrote this article: "Many organizations adopting BPM aim to have all their processes defined; others prefer to get the full control of a few first. There are methodologies supporting each approach. Which path would you recommend?"
Top-down and bottom-up: as usual, I'll go with both. "Full control of a few first" delivers quick hits that are visible and garner both advocacy and engagement. "All processes defined" is the long tail and an ongoing effort. A process ontology, like a content taxonomy, is a living, breathing organism: if you do it right, it is constantly examined (exhumed?), refined and improved as you build up a library, a catalogue of assets on "how we do process at this org."
Just my tuppence.
I always start from the fact that a process is a means: a means to deliver results. So by making clear what useful results you have to deliver, you know what processes you need (and probably already have).
So preferably I would start with a clear overview of all the results customers can call you for. Then you also have your primary process landscape. BTW, this is not the same as having the processes mapped; it's being clear about what processes you need/have.
But if that's a bridge too far and my customer would like to start fast with 'doing something with process', I want them to make very clear what process we are talking about: what process result we are talking about, and whether it is worth taking a look at that process.
All to prevent good old sub-optimization (being good at useless things) or trying to control too large a piece of work.
"Get control of a few important processes first" is where I would start; however, this does imply measurement, and that you know which ones are most important. This approach enables the team to deliver some value to the business much earlier and avoid being accused of 'mapping everything' for the sake of it.
Generally, it is very difficult for firms we work with to justify mapping out all their processes first as they are primarily buying a solution for a specific process which has extensive risk associated with it. Once past that process project, firms generally will start looking at other process pain points to address and how to budget for those. This also allows for contextual and capability education for internal staff so they understand what is possible with the tools when mapping and defining requirements for the next process project.
There are several approaches:
3. Pyramidal – see slide 4 in ref1
4. Center of crystallization
Usually, I use #3 (the top of a particular pyramid) and then #3 through #6 at the same time.
Do not (ever):
It's not even that hard actually... :-). Or is it?
Remember, the end game in BPM is to "digitize", so always start with one project that matters. No doubt there will be much cynicism from users at the start, but once they see their ideas coming to life they will quickly become enthusiastic! Best to keep IT out until the legacy links are required and you are getting ready to deploy. Remember, the build should come directly from their input.
This approach will quickly prove the validity (or otherwise) of the claims made by the software vendor. As confidence grows, so will the spread to other areas needing help. It will also give confidence that you do not need a final detailed spec, as change should be readily supported; all of which saves time = money = happy CFO!
Anyone remember the Enterprise Data Models in the 80s? They hung on a wall... And, that was all you could do with them.
Quick Wins or Die.
Start small. Fail fast. Learn. Repeat.
Rework isn't the enemy: paralysis is the enemy. If you do a bunch of small things quickly, and some of it has to be redone later in order for things to fit better together—great: do it. While your competitors are restlessly tapping their feet, waiting for the steam bubbles to start rising in the ocean, you'll be enjoying the benefits of what you've already accomplished.
Mr. Belaychuk's original article is worth a visit. The article emphasizes an aspect of BPM programme success that needs highlighting: the importance of management bandwidth as a limiting factor for a BPM programme.
Here's the argument, as I understand the article:
Any newly automated process requires lots of management attention during the initial deployment transitional period ("too expensive in terms of management"). Without this attention, and before additional process controls are in place, the probability of new process failure is high.
We are referring here to "managerial labour" (my term): the significant cognitive load on managers typically associated with operationalizing a new process, likely for an extended period of time. This transitional-phase cognitive load is separate from all the labour that goes into constructing a BPM-based automated process in the first place.
If one accepts this argument, then inductively, given a fixed supply of management labour, the larger the number of new BPM processes initiated, the higher the probability of failure of any individual process and even the programme as a whole. (And this phenomenon is exacerbated as processes evolve, i.e. given that new process models are likely unstable, they are constantly in need of management attention.)
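The bandwidth argument above can be sketched as a toy back-of-envelope model. Everything here is an assumption for illustration only: a fixed supply of managerial attention is split evenly across all newly initiated processes, each transition demands some minimum attention, and failure risk rises with the shortfall. The function name and all numbers are invented, not from the article.

```python
# Toy illustration of the management-bandwidth argument (all numbers invented).
# Assume a fixed pool of managerial attention, split evenly across every
# newly initiated process; a process's failure probability rises as its
# share of attention falls below what the transition actually demands.

def failure_probability(n_processes, total_attention=10.0, attention_needed=2.0):
    """Chance any single new process fails, given evenly split attention."""
    attention_each = total_attention / n_processes
    shortfall = max(0.0, attention_needed - attention_each)
    # Scale the shortfall into a probability, capped at 1.
    return min(1.0, shortfall / attention_needed)

for n in (2, 5, 10, 20):
    p_one = failure_probability(n)
    p_programme = 1 - (1 - p_one) ** n   # chance at least one process fails
    print(f"{n:>2} concurrent processes: "
          f"per-process failure {p_one:.0%}, "
          f"programme-wide failure {p_programme:.0%}")
```

Under these made-up numbers, two or five concurrent processes get all the attention they need, while ten or twenty see per-process risk climb and programme-wide risk approach certainty, which is the shape of the inductive argument above.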
There's lots of rhetoric about why BPM is good for you. Mr. Belaychuk has opened up the "black box of BPM" and found work inside. It's a great antidote to the magical thinking that inevitably leads to disappointment. There's work to be done -- just make sure you budget for it.
From a video series I am producing. . .
"Corporate infrastructure includes working capital, access to capital, land, plant, equipment, premises, tools, software, access to the internet, patent rights, best practices [read processes], staff, outside consultants, suppliers, products, services, marketing, customers and distributors".
There is no reason why processes should receive special treatment - i.e. inventory these along with all of the above in a free-form-search hierarchical knowledgebase.
Nowhere does it say what the required level of detail needs to be: "process name / general description / start node / end node" is better than nothing.
As and when ROI assessments or annual budget allocations earmark process mapping, simulation, roll out and process monitoring initiatives, best practices can be moved from "pending" to "In progress" to "in production".
The advantages of a free-form-search hierarchical Kbase for corporate infrastructure (all of the listed categories) are that you can organize/view knowledge in as many ways as you like, all on one screen. It's not uncommon to have 10,000+ documents in a small-medium corporation.
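A minimal sketch of such an inventory entry, assuming only what the comment specifies: the name / description / start node / end node fields, the pending-to-in-progress-to-in-production status path, and free-form search. The class, helper names, tag field, and sample entries are all hypothetical, not any particular Kbase product's schema.

```python
# Sketch of a free-form-search process inventory (names are assumptions).
from dataclasses import dataclass, field

STATUSES = ("pending", "in progress", "in production")

@dataclass
class ProcessEntry:
    name: str
    description: str
    start_node: str
    end_node: str
    status: str = "pending"
    tags: list = field(default_factory=list)  # hierarchical placement, e.g. ["Sales"]

    def advance(self):
        """Move one step along pending -> in progress -> in production."""
        i = STATUSES.index(self.status)
        if i < len(STATUSES) - 1:
            self.status = STATUSES[i + 1]

def search(inventory, term):
    """Free-form substring search across the text fields of every entry."""
    term = term.lower()
    return [e for e in inventory
            if term in e.name.lower()
            or term in e.description.lower()
            or any(term in t.lower() for t in e.tags)]

inventory = [
    ProcessEntry("Order-to-cash", "Customer order through invoicing",
                 "order received", "payment posted", tags=["Sales"]),
    ProcessEntry("Onboarding", "New-hire setup", "offer accepted",
                 "first day complete", tags=["HR"]),
]
inventory[0].advance()  # budget earmarked: pending -> in progress
print([e.name for e in search(inventory, "order")])
```

Because search is free-form across every field, the same 10,000-entry inventory can be sliced by name, description, or hierarchical tag without committing to a single fixed taxonomy up front.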
As for how to approach inventorying processes, top-down simultaneous with bottom-up seems to work across different organizations/corporate cultures.
Thanks to Roger Burlton we know that the maturity curve has two parts: "structure maturity" and "culture maturity". The former can be imposed; the latter cannot. (See the second illustration in ref 1.)
It seems that the '"Go Deep" first' recommendation demonstrates a preference for a technological way of addressing business challenges, while we know that people are the most complex part of an enterprise as a system.