In accordance with the 5th law of BPM (see ref1), "The vast majority of unique business processes can be constructed from a limited set of business-centric patterns." We therefore need a repository (a catalogue, animations, examples, etc.) of practical process patterns (see ref2).
Because they are not:
1/ granular - people who build these repositories believe a best-in-class S2P cycle is the same for everyone
2/ specific - vertical domains should be used as accelerators, instead of the current horizontal approach
Micro-process-as-a-pattern should be the way to go, but this requires an uncanny ability to abstract the whole business model of an enterprise and then translate every piece of it back into a friendly domain-specific language for regular laypeople. I'm not sure it's entirely possible, but some focused nutcase could make some progress here.
In fewer words: a "PCF" approach is great for consultants, not for their implementing customers.
I'm sure many of my current, and past, colleagues will argue that they rarely see this happen. I like the term model, or map, because it helps actors within a process understand that what they are looking at is an abstraction not the reality of the process. It's important, in some cases but not all, that these actors have a simple and clean way to talk about how stuff works today so they can improve it tomorrow.
In areas where we see a lot of change, having a common reference model that everyone understands is very useful in helping the team understand and overcome unexpected changes. This is where I see these repositories being used today.
Multiple reasons, slight variances on Dr. S's and Bogdan's weigh-ins above:
That being said I like Dr. S' "practical" patterns a lot and have used them at my most recent client. The "uncanny ability" part comes with reps, more "weight on the bar."
Just my tuppence.
Well... Ok, here it goes:
It's absolutely not about storing process models (that already happens many times, for better or for worse...); it's about having a single source of (governed) hierarchical truth, presented in a way the business can easily understand. There, I said it.
Any process-related information (roles, people, tools, detailed technical blueprints, procedures, KPIs, compliance references, cookies and coffee, ashtrays, etc.) should, in principle, be represented in the context of this truth. And if an artefact doesn't fit, what exactly does it contribute to your business in the first place?! So far the theory...
9 out of 10 customers I visit can tell me (basically) WHAT they produce. However, HOW they do it (let alone WHY they do it) often remains one big foggy swamp. It's an enormous waste that process-related information constantly needs to be created or chased down from scratch, project after project. I do understand that you cannot have a one-to-one blueprint of every single activity in your company, but you should at least have a level 1-3 representation acting as the umbrella context, and extend it where most needed. Not as isolated documents in a SharePoint site.
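To make the idea concrete, here is a minimal sketch of such a governed hierarchy, where every artefact hangs off a node in the level 1-3 tree rather than floating around as an isolated document. All names and levels are illustrative assumptions, not anyone's actual repository:

```python
# Toy level 1-3 process hierarchy; every artefact is attached to a node,
# so it always carries its business context with it.
hierarchy = {
    "L1: Order to Cash": {
        "L2: Invoicing": {
            "L3: Issue invoice": {
                "artefacts": ["role: billing clerk",
                              "KPI: days sales outstanding"],
            },
        },
    },
}

def artefacts_in_context(tree, path=()):
    """Yield (path, artefact) pairs so each artefact keeps its context."""
    for name, node in tree.items():
        if name == "artefacts":
            for artefact in node:
                yield path, artefact
        else:
            yield from artefacts_in_context(node, path + (name,))

for path, artefact in artefacts_in_context(hierarchy):
    print(" > ".join(path), "|", artefact)
```

The point of the sketch is the traversal: an artefact that cannot be reached this way has no context, which is exactly the "what does it contribute?" test above.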
Obviously, technologies such as process mining can create a realistic AS-IS picture of your business. But these, too, need (guess what) to be placed in the context of the whole.
And when you do not get the above, well, that's why it won't take off IMO. :-)
Great article by @PeterHilton.
He suggests that, to be successful, a repository needs the following characteristics:
1) Public Online Content
2) Public Editing
3) Modeling Discussion
4) Model Variation
5) Open Licensing
6) Multiple Languages
I couldn't agree with this more, with one major caveat. There are two main models for "public" editing, viz. the Wikipedia model and the GitHub model. (For those not familiar with GitHub, it is THE open source software repository on the Internet.)
In the Wikipedia model, literally anyone can directly edit the page. In the GitHub model, there is the notion of a "pull request": a person other than the owning team can clone the code, make changes, and issue a "pull request", i.e. request that the owner merge their changes in. For semantically interrelated artifacts like code (or processes), a direct public-edit model is not really viable.
So, a pull-request model requires that:
a) Anyone can clone a public process
b) The contributor can make changes to the cloned process
c) The contributor can issue a pull request
d) The owner can merge it in, creating a new version of the public process
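The four steps above can be sketched in a toy in-memory form; the process name, steps, and merge policy are all illustrative assumptions, not any real repository's API:

```python
# Toy pull-request flow for a process model:
# clone -> modify -> pull request -> owner merges into a new version.
import copy

public_process = {"name": "invoice-approval", "version": 1,
                  "steps": ["submit", "approve", "pay"]}

# a) anyone clones the public process
clone = copy.deepcopy(public_process)

# b) the contributor changes the cloned process
clone["steps"].insert(1, "legal-review")

# c) a pull request packages the proposed change for the owner to review
pull_request = {"base": public_process["name"], "proposed": clone}

def merge(process, pr):
    """d) the owner accepts the proposal, bumping the public version."""
    merged = copy.deepcopy(pr["proposed"])
    merged["version"] = process["version"] + 1
    return merged

public_process = merge(public_process, pull_request)
print(public_process["version"], public_process["steps"])
```

The key property is that the owner stays in control of the published version, which is what makes the model workable for semantically interrelated artifacts.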
Similarly, a forking model (also supported by GitHub), where the code is forked and never pull-requested, is also critical to the innovation seen in open source software.
One additional point I would like to make relative to the referenced article:
Peter Hilton Wrote:
"Currently, process model repositories generally don’t allow public access, and are written by a single source, instead of allowing public contributions. This is how encyclopaedias used to be produced, before they inevitably gave way to Wikipedia."
At TMail21 we do support public repositories (and the characteristics 1-6 above) with a couple of caveats
a) For #1, the processes are public (we also support organization-private processes) but are behind a free registration 'wall'.
We plan to remove the registration wall for public processes shortly so they can be directly indexed by Google.
b) For #3, the modeling discussion is not currently public but specific to the "owning" organization. But this is a great enhancement to consider.
and the following related points
i) For #2, we use a GitHub-style pull-request model for public editing rather than a Wikipedia-style direct-edit model (which I believe is the appropriate approach).
ii) We also support the GitHub-style forking model in addition to the pull-request model.
It's a very good question...
We have modelled and optimised the same business processes for different clients in the same industries such as Automotive, Consumer Goods, Financial Services, Government, etc. etc. At the AS-IS stage we typically find almost identical processes, with the only differences being cosmetic or down to local terminology or cultural tradition. Occasionally we find a 'smart' process that is saving sufficient time, money and waste to create a competitive advantage. Quite often the client is oblivious to this and, before we embark on TO-BE, we have to spell out to the Exec team the potential they already have.
In the commercial sector, process differentiation should rightly be seen as one of the key means of competitive advantage, but should that view also apply in the public sector? Isn't there a clear need to identify and share best practice for all citizen services in order to ensure consistency and optimise value for money?
In UK Local Government there are 470 organisations each separately providing tens if not hundreds of citizen services which are almost identical. From collecting local taxes, handling waste collection, issuing building permits, the list is endless, but so are the local variations that we have found. From time to time we offer up the library of local government processes that we have built up over the years, but it generates little interest.
Contrast that with Switzerland. A country I worked in for a number of years and with a total population about the same as London; but with 26 Cantons and 2400 Municipal Authorities. For some years, the Swiss Federal Government has been actively promoting the identification, modelling and sharing of business processes and advocating the adoption of 'best practice' throughout local administrative centres of all types. The advantages are seen as considerable: with some Municipalities serving only 1000 citizens, affordability means there can only be 2 or 3 local administrators.
In other countries this might be seen as all the justification necessary for merging, eliminating and/or outsourcing local councils but the Swiss nation is fiercely proud of its local democracies and strives to find a better way through the sharing of its business processes.
I started thinking about this question because it occurred to me that all of the likely objections to the feasibility of the idea are things that Wikipedia figured out how to solve. That’s why my blog post about The process model repository of the future is essentially an outline of What would Wikipedia do?
I suspect that the difficulty of getting started with process modelling is a barrier to more widespread BPM adoption. While developing Effektif we can make much of the software support easier to use by simplifying the traditional BPMS approach, but we still present beginners with a blank BPMN diagram. We can provide some process examples, so people don't have to start from a blank page, but we're never going to be able to provide examples for everything.
We have customers who are new to BPM and using Effektif to help coordinate a process that was previously based on Excel and email. They would benefit from a process repository that has multiple examples of processes similar to their own, ideally in the same industry, that they could use for inspiration. These process models are obviously not going to be usable as-is. Our business users would still have to understand the example processes, understand how they relate to work in their own company, and adapt them to their own needs to make them usable. These hard parts of process modelling don't go away. But people wouldn't have to start from scratch.
Maybe people have an incomplete interpretation of what a repository is. I refer to it as "The View of the Company from 50,000 Feet," and, as such, should include much more than just processes. It should include business components (functions, jobs, people, machines, objectives, projects, and requirements), system components (systems, sub-systems (business processes), procedures, steps/tasks, programs, modules), and data components (data elements, records, files, inputs, outputs). All, of course, are cross-referenced. As such, it is the focal point for all development activities, not just BPM. The Repository here is used as a Bill of Material Processor which is invaluable for playing "what if" in changing a component (measuring the impact of change).
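The "Bill of Material Processor" idea above can be sketched as a cross-reference graph plus an impact query; the component names and dependency edges are invented for illustration:

```python
# Components cross-referenced as "who depends on me?" edges, so the
# repository can answer "what is impacted if this component changes?".
from collections import deque

DEPENDENTS = {
    "data: customer_record": ["system: crm", "process: order_entry"],
    "system: crm": ["process: order_entry", "job: account_manager"],
    "process: order_entry": ["objective: faster_fulfilment"],
}

def impact_of_change(component):
    """Return every component transitively affected by changing `component`."""
    seen, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for dependent in DEPENDENTS.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

print(impact_of_change("data: customer_record"))
```

This is the "what if" query in its simplest form: a breadth-first walk over the cross-references, which is essentially what a bill-of-material explosion does.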
I have written about this in the past. Below are two articles. Hope this helps.
All the Best,
Is a modeling repository like an accounting chart of accounts? There is some similarity. But we don't worry about whether charts of accounts are "going to take off". They "just are".
And part of "just being" is that process model repositories (even more so than charts of accounts) are many steps removed from value creation:
1. fund model curation in the repository, likely a very fraught process;
2. fund model selection from the repository for use;
3. fund model customization for specific requirements;
4. fund model deployment into production;
5. trigger creation of a process instance from the model;
6. and finally use the process instance to support the work of value creation.
This all-overhead value chain is hard to sell!
And we haven't even explored whether a model is executable, or only supports analysis. Or how one keeps a deployed model in sync with central models.
So rather than the implied Soviet-style repository business model, a more entrepreneurial model built around edge-oriented micro-processes-as-patterns (per @Bogdan) has promise in multiple dimensions: technically, financially and socially.
When I collected my "practical process patterns" I kept in mind the famous "Design Patterns: Elements of Reusable Object-Oriented Software" book from the Gang of Four (see ref1) and, of course, cooking recipes. Their usage is simple:
1. follow it as-is to understand it, and
2. adapt it (step-by-step) for your needs/taste.
Easy, it works and even worked in the Soviet Union. Of course, process patterns must be treated as a "public good" in the BPM community.
Judging from this thread, I'd say at least one reason is that people do not necessarily share the same idea of what a "process model repository" is. Wikipedia is the opposite: it is reasonably well structured, and there is widespread agreement about what constitutes an "article" (notwithstanding the fact that there are periodic discussions whether two articles should be merged, or one split).
Those of us on the technology side assume that we should be able to create reusable libraries that can be picked up and plugged in where needed. But BPM doesn't actually work like that. Code libraries have extremely well defined inputs and outputs, and are specifically designed to avoid causing side effects (that is, changes to state or other information that is not contained within the library itself). They work best when they are simply dropped into place and used through a well defined interface.
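To illustrate what "well defined inputs and outputs, no side effects" means in the code world, here is a small example of the kind of function that makes libraries safely reusable (the function itself is just an illustration):

```python
# A pure library function: a well-defined interface, no side effects.
# It never mutates its input, so it can be dropped into any codebase.
def deduplicate(items):
    """Return items with duplicates removed, preserving order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

original = [3, 1, 3, 2, 1]
print(deduplicate(original))  # [3, 1, 2]
print(original)               # unchanged: [3, 1, 3, 2, 1]
```

A "legal review" process snippet has no equivalent of this contract: its inputs, outputs and side effects depend on the surrounding process, which is the point being made below.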
BPM configurations rarely work that way. You could keep a snippet in your repository for "legal review", for example, but the odds are poor that you'll be able to just drop that into your process without modification or side effects.
Not coincidentally, the value of reuse is also much higher in coding than in BPM. Having to re-implement your sort function each and every time you need one is costly. Because you don't modify reusable libraries, it's always going to be cheaper to just drop in an existing one. But BPM snippets, as noted, very frequently need to be adapted to their particular use case. Although there are certainly exceptions, as an implementer, I have generally found it to be faster and easier to build new processes from scratch.
The seductive nature of BPM Software is that a business can model its own proprietary business processes so that the software fits the business instead of having to change the business to fit the software. This has long been one of the sexiest aspects of BPM, and it is certainly something that BPM Sales people exploit to maximum advantage. This is one of the main reasons why templates, process, and form libraries end up having so little value. Although they seem like a great idea to newbies to BPM companies (every new person we have hired into a senior sales or marketing management role comes up with that as their first brilliant idea), it really doesn't have much of an impact.
Besides the workflows themselves differing, the other thorny aspect of the template library is a) the connectors to third-party systems, and b) the way those connectors actually get used. This is where the number of possible combinations of how the same process might look for different businesses starts to expand exponentially. It is one thing to connect to Salesforce to pull the name of a lead. It is quite another to pull this data in concert with product data from an AS/400 and then feed a copy of an uploaded document to a very old version of FileNet.
You see, BPM is, as we all know, all about connecting systems and people. So, although a purchase process sounds like it should be the same for everyone, it becomes quite different once you add in different ERP, CMS, ECM, and a few legacy systems. Oh, and then you sprinkle on top of all that the fact that every company will be using a slightly different version of each of these systems in a slightly different network configuration, etc. The result is that the simple purchase request process starts looking quite different for each business when modeled and automated in BPM Software. Yeah, I know, REST is supposed to offer a universally accepted contract between parties. It will happen....just hasn't yet.
So perhaps the best tactic for publishing a process repository that succeeds in the same way as Wikipedia is to do it inside Wikipedia, following Wikipedia’s rules. Now where’s that List of business processes page…?
It is worth pausing to think about why Wikipedia works as a user-curated, user-managed content store:
- there is an infrastructure users don't need to worry about in order to get access
- there is a predefined structure for the information
- there is already content in there, so you can see what you are aiming for
And most critically, people actually care about what is in their "area" and are able to make or suggest changes and get them implemented very, very quickly.
Wikipedia's strengths are summarised at https://en.wikipedia.org/wiki/Wikipedia:About.
Build a process repository along these lines and it can and does work.
I propose that the problem is related to an ambiguity as to what the benefit is supposed to be.
We have a repository in place at each customer, but it contains a lot more than processes. Without reusable processes and templates we would never complete a project in time. Corporate users could never maintain their processes without a repository, and business people would not be able to work with the repository without simple reusable patterns and rules. But one can't just take that public.

Wikipedia is a full-text repository and GitHub is a source code repository. One is description, the other implementation. They could not be further apart. Herein lies the problem: typical process repositories are neither, because flow diagrams are at best just 20% of a usable process. The integration problems have been pointed out, as has the lack of when, what, who, how and why descriptions in businesses.

Another issue is that hardly anyone in a large organisation actually wants transparency. At best they want numbers, which, if they actually get them, mean very little due to a lack of common descriptors. Why? Businesses are social constructs and defy standardisation, regardless of how often BPM experts proclaim its benefits. Yes, some process descriptions are useful, but they don't really enable automation. When automation is enforced it kills people's knowledge, which over time kills the business. A business is not a factory floor. Hence a global BPM pattern repository does little to nothing to solve actual business problems. It might solve some implementation problems, but like programmers, process designers prefer to start from scratch rather than meddle with someone else's source code.
To overcome some of these issues, we implemented an ontology layer in our repository a few years back. Describing the WHY in a more usable way than free text requires term definitions first. Its first use was to enable the writing of rules in business language, already including the data variables. It was expanded to enable user interaction, and we are currently working to enable the business to both describe AND interact with an application. It needs to support multiple languages and synonyms as well.
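As a rough illustration of what such an ontology layer does (all terms, labels and variable names here are invented, not the actual implementation), the core is a mapping from business phrases, including synonyms and translations, to underlying data variables:

```python
# Toy ontology layer: business terms with definitions, synonyms and
# per-language labels, resolved onto the data variables rules refer to.
ONTOLOGY = {
    "claim_amount": {
        "definition": "Monetary value claimed by the policyholder.",
        "variable": "claim.amount",
        "labels": {
            "en": ["claim amount", "claimed sum"],
            "de": ["Schadenssumme"],
        },
    },
}

def resolve_term(phrase, language="en"):
    """Map a business phrase to its underlying data variable, if known."""
    phrase = phrase.lower().strip()
    for entry in ONTOLOGY.values():
        labels = entry["labels"].get(language, [])
        if phrase in (label.lower() for label in labels):
            return entry["variable"]
    return None

print(resolve_term("Claimed sum"))          # claim.amount
print(resolve_term("Schadenssumme", "de"))  # claim.amount
```

With this lookup in place, a rule written as "if the claim amount exceeds..." can be bound to `claim.amount` without the business user ever seeing the technical variable.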
For example, we imported the ACORD insurance model, which is a kind of public repository, only to find that it is 80% technical and can't be understood by the business. That is the main use of an ontology in our book. We needed to import the descriptive texts as well, not just the definitions. We then needed to provide a business filter to remove all the technical content from view. The ACORD model contains no processes and nothing in terms of WHY either. But now business people can use the ontology to describe the WHY and HOW in goal-oriented processes. Is that reusable? Yes, theoretically. But as businesses are unique, they want to use their own terminology and their own processes. At least we have brought it all closer to the business. Do they now benefit from a public repository? No.
I think a comparison to a Wikipedia digital asset model is far-fetched. On top of the reasons outlined above by various commenters, I would add:
1/ role of the crowd
the main value proposition of crowdsourcing is that "in large enough numbers, the crowd is right, even given a minuscule marginal value of the individual contributions".
I have a hard time picturing the BPM people as a large enough crowd, for the purpose of contributing meaningful, valuable and universally accepted process models.
2/ asset criticality
Wikipedia provides hyperlinked definitions and references. These assets have marginal value as information. Hence they are not mission-critical.
On the other hand, executable public process models, if edited by amateur contributors and (highly likely) left unscrutinised and untested, can derail an entire business.
3/ asset dimension
Linked to 2/ above, it is far more demanding to review and amend a complex, heavy asset such as a process model towards a global optimum. We can't compare this to grammar corrections or the removal of broken hyperlinks.
Our group is building a worldwide Kbase for a client in a narrow area of medicine where there will be 100 "evidence-based best practices" comprising 20-50 steps each.
I would agree that no more than something like 13 constructs, perhaps even fewer, will be needed/used.
Our model has zero starting constructs - the end users drag and drop to a blank sheet and then, same as David, effect one click. End users do need help with rule set building.
The users want to see bibliographic references, results of trials, research papers, grant sources, plus feedback from the field, before they agree to use a best practice.
Bottom line, the Kbase has 100 protocols plus 4,000 related documents and because the users have no time to try to interconnect these in the many different ways that suit their changing needs, they expect to see/access everything at one graphic free-form search screen.