BPM.com
Peter Schooff
BPM Discussions
Thursday, March 09 2017, 09:53 AM
How many customers have you worked with who just want a descriptive diagram of the decision, without ever being able to run the model with inputs and outputs? Can you estimate the percentage of customers who want executable DMN models?
Emiel Kelly Accepted Answer
Many times. Because I have the intention to execute the process. Not the model.

And still, the supporting gadgets (whose execution can be designed with a model) are only one part of an executable process.

That's also why I get easily annoyed by people who say "process" but mean "process model". (I am in therapy for this)
Sharing my adventures in Process World via Procesje.nl
Yes, I think I mean that.

But most of the time "Model = Execution of a process" is not completely true. Take a good old WfM system. Then the model is some kind of blueprint for how cases are routed through the workflow and maybe what data is needed in each step. But it's still not an executable process. People need to enter the data, take decisions, do some monitoring, take action based on that, etc.

Only when a model leads to a fully automated process (one that searches for its own input, executes all the steps, takes all the decisions and sends the output somewhere) could the statement model = execution be true.
  1. Emiel Kelly
  2. 2 months ago
Do you mean that processes are always (or finally) executable by humans and/or machines, and models are executable exclusively by machines?
  1. Dr Alexander Samarin
  2. 2 months ago
Mostly, when I read about executable models, it means that the model, which is persisted in XML files in the case of DMN and BPMN, can be interpreted by a BPMS or BRMS. Interpreted in this case means, e.g. in the case of DMN, producing output data based on the decision logic in the diagram and the given input (which in the end comes from user actions). This is opposed to using models/diagrams only for requirements/documentation, to be translated into hard-coded rules by a programmer (which IMO is clearly not the preferred way of handling the model).
  1. Stefan Houtzager
  2. 2 months ago
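To make that distinction concrete, here is a minimal sketch, in Python, of what "interpreting" a decision model means: the model is data that an engine evaluates against inputs to produce outputs, rather than a picture a programmer translates into hard-coded rules. The rule structure, first-hit policy and discount example are illustrative assumptions, not any real DMN or BRMS API.

```python
# A decision table (here hand-coded; a real BRMS would load it from DMN XML)
# that takes input data and produces output data. All names are illustrative.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over the input data
    output: dict                        # output data if the rule matches

def evaluate(rules: list[Rule], inputs: dict) -> Optional[dict]:
    """First-hit policy: return the output of the first matching rule."""
    for rule in rules:
        if rule.condition(inputs):
            return rule.output
    return None  # no rule matched

# Decision: order discount, based on customer category and order amount.
discount_rules = [
    Rule(lambda d: d["category"] == "gold" and d["amount"] > 1000, {"discount": 0.15}),
    Rule(lambda d: d["category"] == "gold",                        {"discount": 0.10}),
    Rule(lambda d: d["amount"] > 1000,                             {"discount": 0.05}),
    Rule(lambda d: True,                                           {"discount": 0.0}),
]

print(evaluate(discount_rules, {"category": "gold", "amount": 1500}))   # {'discount': 0.15}
print(evaluate(discount_rules, {"category": "basic", "amount": 200}))   # {'discount': 0.0}
```

The same table used only as documentation would be redrawn as a diagram and re-coded by hand; used as an executable model, it is simply data the engine evaluates.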
Patrick Lujan Accepted Answer
Blog Writer
Never done it myself, though I've modified models extensively over the years as discovery progressed, to the point of having something needed, wanted, and that worked. Conversely, I've seen many an SI who has spent years, literally, making giant models printed out on big color plotters to hang on walls and in meeting rooms, all of which came to naught. Then again, I don't think those exercises were actually about any tangible deliverables.

As regards DMN, that's still nascent; you usually have to discern from the task at hand that decisioning is called for, then explain and mentor the client on what it is, what it does and its utility, and then incorporate that into the project as well. Right now I'd put the percentage of clients who want decisioning, know what that means, and know that DMN is an aid to doing so at around 5-10% tops.
Boris Zinchenko Accepted Answer
A model is generally not intended for execution but rather for implementation. There is a definite difference between a model and a process, which is normally expected to be executable. A process is a special type of model; a model is a more generic term than a process.

It is incorrect to think that BPM deals exclusively with processes. Processes are an important part of business process modeling (BPM). However, BPM is much wider than process modeling alone. Non-executable models are an essential part of BPM.

As the most evident example, organizational diagrams are not executable and are not processes. More generally, every EA diagram, and enterprise architecture in general, is a structural rather than an executable field. There are many BPM tools which focus primarily on architecture and modeling aspects rather than on execution. This modeling is equally or more important than execution as such.

An unjustified bias towards executable models is often observed in BPM. Such an approach substantially undermines the generic appeal and significance of BPM, neglecting EA, MDM and many more purely modeling aspects. An ideal BPM implementation should achieve harmony between its modeling and execution dimensions.
Juan J Moreno Accepted Answer
Many, many times. A model is just an abstraction of reality, so it is useful for understanding, discussing and aligning people around the process, not only for automating it.
Decision models are a subset of generic models, so they inherit these main characteristics.

I like to see models as a knowledge management tool. They are useful to extract knowledge from people's minds and formalize it (known as "externalization"), then share, discuss and adjust it (known as "socialization"), and once it has been improved (or just changed), it can be distributed to and learnt by the rest of the company (known as "internalization").

After that comes the "execution" of those models. Of course, it is better to have the model, then execute, then measure, adjust and execute again. But if the company is not mature enough to complete this full cycle, the modelling stage alone can add great value :)
CEO at Flokzu Cloud BPM Suite
John Morris Accepted Answer
This question (concerning modeling-as-analysis versus modeling-as-executable-artefact) is very interesting. It is a question of automation. Concerning just the technology, automation requires hard thinking, which has a cost. That cost includes the fact that your automation is invariably based on a simplified model of reality. But there are also questions of sociology and governance. And even psychology -- automation requires "letting go". Automation beyond analysis uncovers economics, politics and fear. And perhaps also delight and progress.

I also like this question for personal reasons: my main business domain name is "decisionmodels.org".:D
"This question (concerning modeling-as-analysis versus modeling-as-executable-artefact) is very interesting. It is a question of automation." Bingo. Sometimes it's one, sometimes it's the other, sometimes it's both. There's more than one way to skin a cat.
  1. Patrick Lujan
  2. 2 months ago
Karl Walter Keirstead Accepted Answer
We model all new software apps to explore different ways customers might relate to a particular layout of buttons/icons, how they might navigate within an app, etc.

Because the models have functionality gaps we mock up states and then make a video that gives a viewer the impression of seeing a not-yet-developed app running.

I suppose that in the area of BPM we initially model all of our processes, all of the time, in that, in our obsession to map out processes during very brief (i.e. one-hour) GoToMeeting sessions, we ask as "homework" that the customer give us, in advance, images of all forms that their organization uses.

We "park" the images in a listbox so that during initial mapping, as we drag and drop process steps onto a canvas/sheet, we can quickly/easily attach one of more form images to each step.

We then compile and roll out the "process" to stakeholders. They execute the process steps and look at the form images; they cannot record any data at steps, but they can record objections (wrong forms, bad forms, wrong sequencing of steps, too many or not enough steps).

These "models" rarely have "executable auto-decision boxes" (like real forms they take too long to build) so what the users find are narratives of what we/they feel will end up being rules.

Only once we have general agreement re the "model" do we go back and build a real process map.
References
  1. http://www.kwkeirstead.wordpress.com
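One way to picture the data behind this kind of mapping session is sketched below: steps with form images attached during mapping, and stakeholder objections recorded against each step during review. The structure and names are illustrative assumptions only and do not reflect any particular vendor's product.

```python
# Process steps with "parked" form images and free-text objections from
# stakeholders. Purely illustrative data model.

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    form_images: list = field(default_factory=list)   # e.g. file names from the listbox
    objections: list = field(default_factory=list)    # comments recorded during review

intake = Step("Record referral", form_images=["referral_form.png"])
intake.objections.append("Wrong form - we stopped using this version last year")

process_model = [intake, Step("Schedule assessment"), Step("Mail letter")]
for step in process_model:
    print(step.name, "| forms:", step.form_images, "| objections:", step.objections)
```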
Dr Alexander Samarin Accepted Answer
It is normal that, to elaborate a coherent set of machine-executable artefacts (which correspond to a domain to be automated), it is mandatory to have several analytical artefacts. For example, to develop any program (a machine-executable artefact) we need to develop requirements as analytical (thanks to John) artefacts, which are not machine-executable. We may also develop some structural (thanks to Boris) artefacts, which are configurations for, in theory, machine-executable artefacts.

Value streams and macro-processes are typical analytical artefacts because, unfortunately, they cannot be handled by, as far as I know, modern BPM-suite tools.

Rather often, process-related explicit analytical artefacts are "transformed" into implicit executable artefacts such as Java programs. That is also a kind of automation.

Of course, the word "model" brings a lot of confusion into BPM (Business Process Management, neither modelling nor measurement nor monitoring), especially with the well-known aphorism "All models are wrong". The Oxford dictionary definition of a model as "A simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions" does not help either.

Considering that we deal with man-made systems (e.g. enterprises) and not with natural systems or phenomena, I prefer to use the word “plan” (a detailed proposal for doing or achieving something) instead of “model”. Thus, any flow-chart is actually a plan of work.

Thanks,
AS
+1 "plan" -- and the clarification of the semantics of work, process and automation. No doubt many business people will roll their eyes about such clarifications and "just want to get on with the work" . . . but such exasperation with terminology merely ensures that unexamined default terminology is used.
Default terminology is very likely polluted with bias, prejudice, lack of perspective, assumption and error -- even though default terminology (a.k.a. business or domain folk language) may also be a vehicle for expressing accumulated tacit knowledge.
The modern way is to go beyond the limitations of common language, with management science; here, the language of business process management. As such, well-defined BPM terminology is revolutionary, by definition.
Of course, as with all modernisms, BPM terminology has its own failings. One ignores the tacit and the traditional at great peril.
  1. John Morris
  2. 2 months ago
David Chassels Accepted Answer
In the early 1970s I was introduced to modelling processes as an auditor. We had to walk through process activity, talking to the people doing the job. The purpose was to identify weaknesses in the processes, to focus the audit and make recommendations to fix such weaknesses. It was both interesting and a bit of an eye-opener, as I discovered a huge systemic fraud which had gone unreported for many years. Sadly, this approach to audit was discontinued as IT systems spread with all their false promises, and frankly my profession lost the plot, failing to focus on the source creation of information and just believing what the computers pumped out!

In the early 90s I came across a business person who claimed he had discovered how to build systems reflecting how business really worked, with a focus on supporting people as they created information, and without the need to code! I did some research, and it was clear this was the dream and vision of many (and still is!). I became involved, and it was decided to display the capability as a graphical process design, a model, with the ability to click a button and have the custom configuration automatically built as an application ready to run. Early adopters pre-2000 were on board, and we all went through a steep learning curve understanding what we had built.

Analysts such as Gartner just did not get it, or did not want to get it. With hindsight, we were decades ahead of the market being ready, and of course many vested interests resisted.... I have the scars, but we survived.... So we have proven that it is possible to build and execute from the model, handling all process needs.... As a CEO and CFO once said, looking at each other, why do it any other way... then they remembered IT... but that is now changing, and others are beginning to emerge with the same message. Quite why one would build a theoretical model when such capability now exists is a legitimate question. Maybe accountants need to get back to basics to understand how new information is actually created... and help business regain control of their operational processes?
Fascinating description @David concerning "auditing by walking around and talking to people" -- and how that stopped with the arrival of computers. It's very similar to what has happened in policing (in Canada and the U.S.; not sure about Europe). Whereas there used to be "beat cops", i.e. peace officers who knew a neighbourhood intimately and patrolled on foot -- talking to everyone -- since the 1970s police have been pulled off the sidewalks and into squad cars. The distance between officer (or auditor) and citizens (or employees) became enormous. And we just trusted "the data". In retrospect, a lot was lost.
  1. John Morris
  2. 2 months ago
Ian Gotts Accepted Answer
We built a great business - Nimbus - offering a product that didn't execute processes. It simply modelled operational processes and allowed end users to access them as the "Operations Manual". We're doing the same again - Elements.cloud - but this time the core app is free for ever, for everyone.

Why? Because many companies already have apps that execute their processes. The challenge is getting every member of staff "on the same page" so they all use those apps consistently.

Of course, once every process in every company is AI-enabled automation, we will be irrelevant. Until then we have an important part to play in the BPM ecosystem. And who doesn't love free?
RE "Operations Manual" - this exactly the problem my users identified about 20 yeas ago. With a Quality Management System in the organisation (certified for the ISO 9001), they had to document any modification twice - one time in programs (which executed their processes) and second time in their Quality Manual. With machine-executable processes (properly styled, of course), the same flow-chart was a program and a piece of documentation at the time. And no problem with auditors, by definition.
  1. Dr Alexander Samarin
  2. 2 months ago
@Ian.. Nice strategy.. Civerex also makes mapping available for free, but only to consultants. Our customers do NOT have apps that are able to execute processes they may have. We make our money licensing execution platforms.
  1. Karl Walter Keirstead
  2. 2 months ago
Bravo @Boris on the need for transparency and models. In other words, "taking responsibility". You have described AI -- whatever its undeniable merits -- as the subject of a lot of magical thinking.
  1. John Morris
  2. 2 months ago
@Ian, Nimbus is a fantastic tool. I had the pleasure of working with it on some projects.

I am surprised that "AI-enabled automation" is seen as a threat to process modeling. I see the situation as exactly the opposite. AI is shaky and extremely risky ground, especially when paired with insufficient modeling.

Take, e.g., a neural network: several layers, complex training rules, unknown stability with respect to inputs that fall outside the training set. In essence, nobody can know for sure how it will behave under stress conditions, not even the authors of the technology.

Now, imagine this black box, which nobody actually understands or can reliably predict, taking control of a production chain, a car, a plane, an atomic reactor. It is only a matter of time before something crashes or explodes. An increasing flood of disasters is inevitable in an AI-driven world without proper models and control.

With the present explosive spread of AI, we should definitely see growing demand precisely for modeling tools which can clearly explain how the AI works. The alternative is a total loss of control by management, which blindly relies on AI without any understanding of, or insight into, the underlying technology.
  1. Boris Zinchenko
  2. 2 months ago
Bogdan Nafornita Accepted Answer
In 95% of cases I model processes with an execution end in mind. And I do end up executing them! (The remaining 5% are just ideas that will eventually evolve into an executable process model / reusable process pattern that then gets plugged into my architecture.)

As for decision models: yes, DMN is young, but the idea of transparent, self-editable decision rules gets excellent traction with small customers. The execution is still a bit outside customer reach, but it will get there this year.
Managing Founder, profluo.com
Yes Alex - using DMN and BPMN to design a pluggable, configurable decisioning service is the task at hand for us.
  1. Bogdan Nafornita
  2. 2 months ago
I think that stopping the re-implementation of the same rule in various applications (thus avoiding a maintenance nightmare through the externalisation of rules) is yet another very important feature of DMN. As this artefact is externalised, its life-cycle must be properly managed.
  1. Dr Alexander Samarin
  2. 2 months ago
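A minimal sketch of that externalisation point: one decision maintained in one place and invoked by several applications, so a rule change never has to be re-implemented per caller. The decision, the callers and all names are illustrative assumptions, not a real DMN or BRMS interface.

```python
# The shipping rule lives in one place (a plain function standing in for an
# externally managed DMN decision); every application calls it instead of
# re-implementing the same logic.

def shipping_decision(weight_kg: float, express: bool) -> str:
    """Single, externally maintained rule: change it here, and every caller changes."""
    if express:
        return "courier"
    return "parcel" if weight_kg <= 20 else "freight"

def web_shop_checkout(order: dict) -> dict:
    # One consuming application...
    return {"order": order["id"], "carrier": shipping_decision(order["weight"], order["express"])}

def warehouse_batch_job(orders: list) -> list:
    # ...and another, sharing the same externalised decision.
    return [shipping_decision(o["weight"], o["express"]) for o in orders]

print(web_shop_checkout({"id": 1, "weight": 3.0, "express": False}))
print(warehouse_batch_job([{"weight": 25.0, "express": False}, {"weight": 2.0, "express": True}]))
```

Because the rule is externalised, its versioning and life-cycle can be managed in one place, which is exactly the governance point made above.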
@Bogdan - Interesting !

Where decision rules get "parked" (locally or centrally) impacts their user-accessibility but not self-editing (the software is quite capable of remembering where the rules are stored).

Most of our rules are very local (no navigation) - this is for the convenience of super users only. We absolutely do not want any other class of user going near/touching rules. We really don't want them reading these rules - better to have an instruction sheet that describes the behavior in narrative terms if it is important for users to understand what is happening, or will happen, to the data they input or fail to input.

I say we are not even close to general building/maintenance of rule sets by business users - IT can sleep at night.

I am not an across-the-board fan of centralized rules, BUT we link to these from gatekeeper process steps because many go/no-go decisions are made on the basis of a "wave" of data values where, early on, you can get away with minor defects whereas, later on, things are made more difficult (e.g. a process step wants an address block to be filled in - if the user skips one data element, they get a warning followed by "do you want to provide the missing data now or skip?"; downstream, at a step called "Mail Letter", the user, of course, gets a hard stop in lieu of a warning).

I suppose by "self-editing" you mean, e.g., we are making parts to a certain tolerance level but, on assembly, we are seeing fails due, we suspect, to some parts being at opposite ends of their tolerance, so the software might selectively want to tighten up the part tolerances to reduce the fails at the assembly level?

Very important, IMO, for any user editing of rule sets, or system editing, that we have, in the operations log, a record of any changes to rules, in case problems are detected in QA or arise in the field.
  1. Karl Walter Keirstead
  2. 2 months ago
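A rough sketch of the "warn early, hard stop later" gatekeeper idea described above: the same completeness rule is linked from several steps, and only the step's policy decides whether a gap produces a warning or a hard stop. Step names, fields and the policy flag are illustrative assumptions.

```python
# One shared completeness check; each gatekeeper step decides whether a gap
# is a warning or a hard stop.

REQUIRED_ADDRESS_FIELDS = ["street", "city", "postal_code"]

def check_address(data: dict, step: str, hard_stop: bool) -> bool:
    missing = [f for f in REQUIRED_ADDRESS_FIELDS if not data.get(f)]
    if not missing:
        return True
    if hard_stop:
        print(f"[{step}] STOP: cannot continue, missing {missing}")
        return False
    print(f"[{step}] Warning: missing {missing} - provide now or skip?")
    return True  # user may continue for now

case_data = {"street": "12 Main St", "city": ""}
check_address(case_data, "Record referral", hard_stop=False)  # early step: warning only
check_address(case_data, "Mail Letter", hard_stop=True)       # downstream step: hard stop
```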
Business leaders, please pay attention to Bogdan's statement of opportunity: "it will get there this year" . . .
Dig in, understand what Bogdan means -- and get ready to act.
  1. John Morris
  2. 2 months ago

I looked up Decision Model and Notation at Wikipedia.

Various snippets leap off the page

1. It’s complementary to the BPMN standard
2. “ BPMN and UML do not handle decision making”
3. “growth of projects using business rule management systems or BRMS which allow faster changes" [kwk: faster than other approaches they have NOT looked at?]

I am not a fan of "standard" notations, so anything complementary to a notation is even less appealing to me.

Consider this . . .

We have, in the area of industrial controls, for decades, been relying on remote devices to reject incoming requests that these “engines” find to be deficient. The reason we send requests to these devices is to get a "clean" response to such requests. We see error messages but are not typically aware of many of the "rules" that are internal to these devices.

e.g. a pressure control valve has in-line rules that allow the valve to output an "open up" signal when the pressure is too high and a "throttle back" signal when the pressure is too low. The in-line rule set includes a 'dead zone' so that the valve does not hunt.

Users are aware of the two output signals, but, unless they read the specs, they never know/care about the internal rules. They rely on the valve to do what it is supposed to do.

Why not, in business processes, view steps the same way we view remote devices?

Each step can have a pre-processor (with rules) to prevent unwanted engagement of processing; it can have internal rules to govern the processing; and it can have a post-processor (with rules) where we make sure the output is appropriate to the context/situation.

. . . . . Black box processing, right...

Any comments for / against?
  1. Karl Walter Keirstead
  2. 2 months ago
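Here is a minimal sketch of the black-box step idea from the comment above: a pre-processor rejects deficient requests, internal rules (including the dead zone) do the work, and a post-processor checks the output before it leaves the step. The setpoint, dead-zone width and maintenance check are illustrative assumptions, not taken from any real control system.

```python
# A process step treated like the valve described above: pre-processor,
# internal rules, post-processor. Callers see only the output signals.

def valve_step(request: dict) -> dict:
    # Pre-processor: refuse engagement if the request is deficient.
    if "pressure" not in request:
        return {"signal": "reject", "reason": "no pressure reading"}

    # Internal rules: setpoint 100 with a dead zone of +/- 5 so the valve does not hunt.
    pressure, setpoint, dead_zone = request["pressure"], 100.0, 5.0
    if pressure > setpoint + dead_zone:
        signal = "open up"
    elif pressure < setpoint - dead_zone:
        signal = "throttle back"
    else:
        signal = "hold"

    # Post-processor: make sure the output is appropriate for the situation.
    if request.get("maintenance_mode") and signal != "hold":
        return {"signal": "reject", "reason": "valve locked out for maintenance"}
    return {"signal": signal}

print(valve_step({"pressure": 112.0}))                           # {'signal': 'open up'}
print(valve_step({"pressure": 92.0}))                            # {'signal': 'throttle back'}
print(valve_step({"pressure": 101.0}))                           # {'signal': 'hold'}
print(valve_step({"pressure": 120.0, "maintenance_mode": True})) # rejected by post-processor
```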
In practice it is far easier: almost nobody has really implemented decisions in their ERP.
  1. Bogdan Nafornita
  2. 2 months ago
Not so easy, Bogdan. Decisions in DMN usually replace existing decisions in existing ERPs, house-developed apps, etc. It is not enough to implement the former; it is also mandatory to remove/replace the latter.
  1. Dr Alexander Samarin
  2. 2 months ago
Bogdan, RE "...almost nobody really implemented decisions in their ERP" - are you saying that those decisions are implicit?
  1. Dr Alexander Samarin
  2. 2 months ago
I actually am a fan of standard notations. I believe BPMN/CMMN/DMN are not strong enough as standards, which is why we are currently in this cacophony of half-assed BPM tech implementations, where everybody does whatever they think is cool and digs a moat around it. Of course, any customer that talks to at least two vendors feels this and keeps his hand in his pocket for a little longer. And then we wonder why BPM tech has not picked up for so many years.
DMN as a complementary notation to BPMN is actually a feature. In fact, DMN is the simplest of all such business modelling standards (who doesn't understand tables?).
  1. Bogdan Nafornita
  2. 2 months ago
David Chassels Accepted Answer
A comment on process decision making. We experimented with the idea that it could be possible to automate decision making based upon previous decisions. This was based upon the fact that we are completely data-centric, including tasks, links, rules and state. Our objective was to deliver to users a set of options based upon their declared preferences. It worked, and allowed users to change their preferences so that the options changed accordingly. It also ensured there was always an available outcome choice. This was picked up by an early adopter who created the intelligent questionnaire. We think there is more to come, and agree such a capability is part of the future.
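A small sketch of the preference-driven option mechanism described above: options are filtered by declared preferences, the offered set changes when preferences change, and a fallback guarantees there is always an available outcome. All option data, preference keys and the fallback are illustrative assumptions.

```python
# Options filtered by the user's declared preferences, with a default that
# guarantees there is always at least one available outcome choice.

def available_options(options: list, preferences: dict, default: dict) -> list:
    matching = [o for o in options
                if all(o.get(k) == v for k, v in preferences.items())]
    return matching or [default]   # never return an empty choice set

options = [
    {"name": "phone follow-up", "channel": "phone", "urgent": True},
    {"name": "email follow-up", "channel": "email", "urgent": False},
    {"name": "letter",          "channel": "post",  "urgent": False},
]
default = {"name": "refer to supervisor"}

print(available_options(options, {"channel": "email"}, default))  # email follow-up
print(available_options(options, {"channel": "fax"}, default))    # fallback: refer to supervisor
```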