Resolved
4 votes

From Patrick Lujan: How do you make sure BPM doesn't get overwhelmed by too much data?

Tuesday, August 23 2016, 09:50 AM
Responses (12)
  • Accepted Answer

    Tuesday, August 23 2016, 10:03 AM - #Permalink
    Resolved
    4 votes

    A starting point here is how many steps we routinely find in one process - would 1,000 be a practical limit?

    If you then say each step is likely to have 50 data points across a few forms but that only a few of the 1,000 steps are current at the same time, we have manageable data flow.

    However, one process can have hundreds/thousands of active instances, but no one user wants more than 50-100 pending tasks in their "today" InTray.

    But, a hospital or an insurance claim processing or a manufacturing entity can easily have 200 processes.

    No matter . . . - with a good run-time environment, data appends to an MS SQL database are fast, and an IIS front end with reasonable horsepower can handle many simultaneous users.

    In addition to logged-in users, you can have hundreds of local and remote systems and apps posting data to a data exchanger, with an engine posting the incoming data to the right relational DBMS records.

    No problem.

    We are handling business transactions here, right, not live data from a satellite launch?

    Does anyone have better numbers than the above?
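    The arithmetic above can be sketched as a back-of-envelope estimate. Every number below is an illustrative assumption taken from this post, not a benchmark:

    ```python
    # Back-of-envelope estimate of the data flow described above.
    # All constants are illustrative assumptions from the post, not benchmarks.

    STEPS_PER_PROCESS = 1_000       # practical upper bound on steps per process
    DATA_POINTS_PER_STEP = 50       # form fields captured at each step
    ACTIVE_STEP_FRACTION = 0.01     # only a few of the 1,000 steps are current
    INSTANCES_PER_PROCESS = 1_000   # active instances of a single process
    PROCESSES = 200                 # e.g. a hospital or insurance claims shop

    data_points_in_flight = int(
        STEPS_PER_PROCESS * ACTIVE_STEP_FRACTION
        * DATA_POINTS_PER_STEP * INSTANCES_PER_PROCESS * PROCESSES
    )
    print(f"{data_points_in_flight:,} data points in flight")  # 100,000,000
    ```

    Even with these generous assumptions, the in-flight working set is in the hundreds of millions of small values - large, but well within what a tuned relational database handles.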

     

    • Patrick Lujan
      more than a month ago
      Have done multiple implementations on large infra (WebsFear, WebLogic farms, Oracle, DB2 clusters) where there are hundreds of millions of work objects in flight at any given time and billions of work item steps recorded daily for global 24x7 systems. My question's bent was more towards the analytics side of things, or data at large, given how much can be captured these days. Operational data, I don't think, has been a problem for a while.
    • karl walter keirstead
      more than a month ago
      @Patrick

      Very helpful - what I get out of this is that there is a huge difference between the operational environment I described (where much of the work only gets done when users open tasks and perform them) and your scenarios.

      I suspect your "work objects in flight" have a heavy component of machine-machine processing.

      So, with "not so much data" or with "a lot of data", the end game to me is the same, i.e. mine the data so you can provide better real-time decision support and inch closer to real-time outcome predictions.

      Clearly there are options in between - I had a Japanese colleague who built up an extensive vocabulary for various states of excess.

      He used "much", "many much", "very many much" and "too many much". Over time, I got the impression I understood what he was talking about.

      Real-time outcome prediction is a real challenge in b2b because, in theory, no two Cases are the same.

      Each builds a history of a mix of structured and unstructured interventions, such that there are few plan-side "end-to-end processes"; what we have at run time is process fragments (structured task sequences) plus ad hoc interventions, threaded together by users, robots and software.

      About the only thing we can say about any one Case is "it ain't over till it's over" and that is why we have folks called Case Managers.
  • Accepted Answer

    Tuesday, August 23 2016, 10:12 AM - #Permalink
    Resolved
    6 votes

    If BPM is implemented following a strategic approach (what objectives? what processes? which data?), then data and other elements will be neither too much nor too little, but exactly what the system requires. Otherwise, if BPM is implemented at the operational level without proper alignment with the business strategy, this will probably lead to redundant and unnecessary information.

     

  • Accepted Answer

    Tuesday, August 23 2016, 10:29 AM - #Permalink
    Resolved
    7 votes

    Start by focusing on effectiveness first (doing the right things) and only thereafter on efficiency (doing things right). That way you avoid creating stuff you shouldn't create in the first place... On top of that, think twice about what is *really* needed; it is nowadays just too easy to incorporate everything (which in itself is actually a cause of broken processes in the first place; KPIs, roles/means and structure, for instance, should always be derived from processes and not the other way around).

  • Accepted Answer

    Tuesday, August 23 2016, 10:41 AM - #Permalink
    Resolved
    7 votes

    Well, scope is one way to control it, but there is a bigger issue here. Like IoT, which will get overwhelmed as well, there is so much data needed to figure out patterns, contexts, proper decisions and proper actions. Without embracing cognitive assists, both people and software (in this case BPM software) will get overwhelmed. Just my two cents

    Jim

    • Patrick Lujan
      more than a month ago
      Agreed, but the "cognitive assists" are only as good as you tell them to be. And that, I think, goes back to scope and/or clearly defining objectives, as per Jose and Walter's comments.
    • John Morris
      more than a month ago
      Also agreed. Interesting comparison, though, to so-called "expert systems", which for a while in the '80s got a lot of press or "hype". The technology of the time wasn't up to the task set by expectations (today's analytics, ML, business rules and AI software are the children of that time). But a bigger issue was the trouble finding expertise! And then encoding that expertise, which was often tacit knowledge even for practitioners! The challenge of translating knowledge into machines is, I think, greater than we imagine, and machine learning (ML) can only go so far on its own.
  • Accepted Answer

    Tuesday, August 23 2016, 10:50 AM - #Permalink
    Resolved
    5 votes

    The problem of process users being overwhelmed by too much data is both a technical problem and an economic problem.

    cheap sensors => cheap data => information overload

    The answer is to fund and perform the work of business analysis required to define important events and to filter signals, all for the purpose of driving decisions.
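    As a hypothetical sketch of that filtering work, the function below reduces a raw sensor stream to the few events a process should act on, with a simple suppression window as an alarm-fatigue guard. All names and thresholds are invented for illustration:

    ```python
    # Hypothetical sketch: reduce a raw reading stream to the few business
    # events a process should act on. Names and thresholds are invented.

    def significant_events(readings, threshold=100.0, min_gap=5):
        """Yield (index, value) when a reading exceeds the threshold,
        suppressing repeats within `min_gap` samples (alarm-fatigue guard)."""
        last_fired = -min_gap
        for i, value in enumerate(readings):
            if value > threshold and i - last_fired >= min_gap:
                last_fired = i
                yield i, value

    stream = [90, 105, 110, 95, 120, 98, 97, 130]
    print(list(significant_events(stream, threshold=100, min_gap=3)))
    # [(1, 105), (4, 120), (7, 130)] - the reading at index 2 is suppressed
    ```

    The design choice being illustrated is that the filtering rule (threshold, suppression window) is a deliberate business-analysis artifact, not something the downstream process should have to discover in a flood of raw readings.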

    There is a huge governance issue here though; look at the persistence of the alarm fatigue problem, especially in health care. There's a use case for managing alarm fatigue, but the business case is harder to make -- and no one wants to make an investment.

    [Note: the focus here is primarily on the IoT ("Internet of Things"). And it's true that BPM technology-managed processes are just part of the whole information overload ecosystem. But insofar as BPM + rules + analytics are the primary tools of instantiation of the work of business analysis, the juxtaposition of BPM and information overload is appropriate.]

    Here are previous BPM.com items on this topic:

    New business process and the Internet of Things -- Alarm fatigue, healthcare, IoT, decisioning

    Mistake to avoid when automating with BPM -- Economics of information overload

    Impact of Internet of Things on BPM -- BPM anti-patterns

    Technologies making biggest impact on BPM in future -- IoT and semantic overload

    Priority of data in BPM process management -- Data as foundation for processes

    Importance of process for Internet of Things -- Resources on alarm fatigue

    --------------------------------------

    And my LinkedIn post on alarm fatigue, from 2014:

    Alarm Fatigue & IoT: What Healthcare Teaches Manufacturing

     

    • Patrick Lujan
      more than a month ago
      "The answer is to fund and perform the work of business analysis required to define important events and to filter signals all for the purpose of driving decisions." #TouchNose

      Unfortunately, most don't take that proactive, up-front approach. They think a big bucket of data will tell them something they've never seen or thought of before.
    • John Morris
      more than a month ago
      Thanks for the comment Patrick. And your mention of "tell them something they've never seen, thought of before" is the perfect description of magical thinking.
  • Accepted Answer

    Tuesday, August 23 2016, 01:55 PM - #Permalink
    Resolved
    6 votes

    I like Walter's take - it's about the goals we set ourselves during the process. This is what we should be measuring and refining.

    I guess by now everyone knows how much I believe in the cognitive impairment theory, and how this is a big limit on everything we humans do, speaking from an evolutionary perspective:

    1/ our neocortex is a small, very young area of our brain. It can only handle small buffers of information, and short-term memory is usually under considerable strain when processing info. Multi-tasking is a myth; it is actually single-tasking done relatively fast.

    2/ our archaic limbic amygdala is in charge of paying attention to bad news all the time, readily interrupting our concentration and focus.

    The combination of the two cannot make for a very consistent rational set of behaviors in humans.

    I'd actually like to extend my cognitive impairment examples to analytics visuals (a current area of concern, as we're debating internally to what extent we should grow our in-house analytics engine). I look at all the embeddable BI suites out there and I literally see hundreds of possible graphs readily available. Seriously? Who uses all this crap? Who has the time to even understand their data preparation requirements?

    I honestly cannot think of business scenarios that can be more conveniently served than by line charts, scatterplots, histograms, bullet charts and... that's just it. No pie charts, no gauge charts, no stock hi-los, etc. The purpose of a graph is to quickly make a point - if you first have to work out what the hell you have just represented there... you've lost.

    I think that, before we even start thinking about using cognitive assists, we should focus on making clear points in our existing graphs and tables.

    Another point that maybe we should discuss (but I'm way over my quota already) is what we do with irrelevant, obsolete data. This is directly related to fixing our cognitive limitations: what is relevant to the decision-making process? Who gets to call the "D" in CRUD? Why that data, why then?

  • Accepted Answer

    Tuesday, August 23 2016, 02:43 PM - #Permalink
    Resolved
    5 votes

    I believe the best way to not be overwhelmed by the amount of data when thinking about process is to embrace "separation of concerns" and decouple the two concepts.

    When you decouple the definition of your data from the definition of your process, you can more easily adapt them. Modeling the data (defining the attributes about whatever you are trying to process) becomes very easy when you are not bogged down by the process that will be applied to it.

    Think about how you could track your Customers in a process. The customer attributes (name, address, company, etc.) have little to do with the actual process that you want to apply to the Customer. You may have a correspondence process to manage Customers that are a part of your sales pipeline. If that process is decoupled, you are able to refine it without affecting the attributes on the Customer. With that level of separation, you could even have multiple processes linked to the same Customer Model.

    So, my recommendation is to decouple the definitions of your data and process and it won't matter if you have 10 records or 10 million records. That allows you to stay focused on building the most efficient process possible for your business.
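    A minimal sketch of that decoupling, with all class, field and process names invented for illustration: the Customer data model knows nothing about processes, and several processes can reference the same model instance.

    ```python
    # Sketch of decoupling data definition from process definition.
    # All names (Customer, ProcessInstance, step names) are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Customer:                 # pure data model: no process knowledge
        name: str
        address: str
        company: str

    @dataclass
    class ProcessInstance:          # process linked to data only by reference
        name: str
        steps: list
        subject: Customer
        completed: list = field(default_factory=list)

        def advance(self):
            """Complete the next pending step, if any."""
            if len(self.completed) < len(self.steps):
                self.completed.append(self.steps[len(self.completed)])

    acme = Customer("Jane Doe", "1 Main St", "Acme")
    # Two independent processes attached to the same Customer model:
    correspondence = ProcessInstance("correspondence", ["draft", "review", "send"], acme)
    onboarding = ProcessInstance("onboarding", ["verify", "provision"], acme)
    correspondence.advance()        # refining one process never touches the other
    ```

    Because the `Customer` attributes and the process definitions evolve independently, either side can be refined without a change rippling into the other - which is the point the post is making.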

  • Accepted Answer

    Tuesday, August 23 2016, 03:00 PM - #Permalink
    Resolved
    4 votes
    Data and processes are connected on different levels, as I wrote here: http://procesje.blogspot.nl/2016/05/does-new-gold-work-for-your-processes.html?m=1

    But as Patrick said, the question is about analysis. Processes + Analysis + Data immediately made me think "Process Mining". Depending on the tool, you can analyse tons of data and turn it into all kinds of process pictures, graphs and animations. You can easily get amazed by "the insights" it gives you.

    But isn't it weird that it surprises you? If you are really in control of your processes, you are never amazed. You know what your processes have to deliver and you keep track of that. And I don't think so much data is needed for that. And Process Mining? That's for companies who are not so good at BPM.
  • Accepted Answer

    Tuesday, August 23 2016, 03:21 PM - #Permalink
    Resolved
    6 votes

    Down here in the depths of the forum page, where the accumulated weight of wisdom and punditry of the earlier posts produces a sustained ambient pressure that can be hard on the ears, we sometimes think a little differently. To wit:

    If BPM is overloaded with data, it's because BPM's purpose is not to process that data, but rather to act on it.

    IoT, Big Data, whatever: all of it boils down to waves of data threatening to swamp your application. I respectfully submit that the problem of analyzing, correlating, and interpreting that data is, in fact, distinct from the set of problems that BPM is meant to address. Lots of smart people are creating lots of amazing tools for channeling torrents of data into manageable streams of actionable information.

    BPM is at its best when provided with meaningful and timely information, which it then uses to drive decisions based on the rules and goals set by your organization. Perhaps it would be best if we left the job of distilling that information to a technology better suited for that task.

    • Patrick Lujan
      more than a month ago
      Lololol. Touche, you win. Second para? #SnailedIt. Too bad most don't.
    • E Scott Menter
      more than a month ago
      As a wise man is oft heard to say: thx, dood.
    • John Morris
      more than a month ago
      LOL the "depths of the forum page" etc . . .
  • Accepted Answer

    Tuesday, August 23 2016, 04:10 PM - #Permalink
    Resolved
    5 votes

    I am with Jim: "Without embracing cognitive assists, both people and software (in this case BPM software) will get overwhelmed." Fortunately, the solution is straightforward - the process template (which is a formal plan for the coordination of activities) is such a "cognitive assist". It should be possible to change the template, use the collected data to simulate the behavior of the process (i.e. the achievement of its goals), and take an objective decision about the usefulness of that change.

    Any experiment in high-energy physics is based on a model (an approximation of the laws of nature) which is checked against the data collected by the experiment. Sometimes the model fits the data, sometimes the model can't describe the data, and sometimes the collected data are wrong (e.g. because of errors in the acquisition system).

    Fortunately, in BPM, the model (i.e. the process template) is explicit and thus can be simulated and optimized without rocket science.

    Thanks,
    AS

    • John Morris
      more than a month ago
      Process template as cognitive assist! The power of BPM is revealed in such insights. A process template (or process model) is therefore a "specification of a conceptualization" -- i.e. the definition of ontology. We are thus concerned with "models of the world". Or in more practical terms "BPM is the technology where the concepts of work are first class citizens of the technology of work automation". I know, I know -- the term "ontology" should never, ever be used in polite company! BPM is the revolutionary technology that no one likes to admit is revolutionary. : )
    • Dr Alexander Samarin
      more than a month ago
      RE 'A process template (or process model) is therefore a "specification of a conceptualization"' - yes, because a process template must be considered and defined as a formalised specification of why this process works like that today. "Thus changing of those processes to follow quickly-changing business becomes rather straightforward." (see the previous BPM.COM discussion)

      RE '...the term "ontology" should never, ever be used in polite company!' I heard recently that the European Commission loves it - their IT department is going to deliver ontology-based applications.

      RE "BPM is the revolutionary technology that no one likes to admit is revolutionary. " very insightful!
    • Dr Alexander Samarin
      more than a month ago
      John, may I add your "BPM is the revolutionary technology that no one likes to admit is revolutionary." as the 18th law of BPM, please?

      And thanks - "Process template as cognitive assist!" is a corollary of the newly added 17th law of BPM.

      See all the laws of #BPM at http://improving-bpm-systems.blogspot.ch/2015/07/laws-of-bpm-business-process-management.html
    • John Morris
      more than a month ago
      Alexander, re: an 18th law, by all means -- it would be a thrill! We are now into the "rhetorical wrapper" around the technology, i.e. the "discourse" in which the technology of BPM lives in our minds and our discussions.

      And no surprise that the EU likes ontology . . . let's hope that the results are practical.
    • Dr Alexander Samarin
      more than a month ago
      Thanks John. Now we have 18 laws of BPM.
    • John Morris
      more than a month ago
      BTW, in case anyone is interested, the phrase [explicit] "specification of a conceptualization" is the original computer-science definition of ontology, by Gruber in 1993. Here's a reference:

      http://iaoa.org/isc2012/docs/Guarino2009_What_is_an_Ontology.pdf

      Many people recoil from the idea of ontology, and from a business perspective this is probably just an act of self-preservation. However, if one thinks of ontology as simply a "scientific way of doing conceptual models", with craft-based data and process modeling as the alternative, then it's possible to see ontology as useful. Ontology then becomes the basis of better model engineering.
    • Dr Alexander Samarin
      more than a month ago
      Thanks for the ref. I consider that ontology is mandatory for better models, but not sufficient. As usual, a combination of technologies should be used.
    • Patrick Lujan
      more than a month ago
      Content taxonomy + process ontology = #winning
    • Dr Alexander Samarin
      more than a month ago
      Maybe "subject field concepts ontology + content taxonomy + process architecture" ?
  • Accepted Answer

    Wednesday, August 24 2016, 03:01 AM - #Permalink
    Resolved
    2 votes

    The focus of BPM will result in delivery of one version of the truth, with elimination of duplication, as applied to day-to-day work. However, as machines create large amounts of new data, there is a need to structure it into something usable by the business. I would suggest that BPM could be applied to this issue by using automation to segment the data as required. Supporting software for BPM must include the application of rules and the ability to manipulate data, including algorithms if required. The game is to ensure delivery of the right data to users, for their specific purpose, in a usable format. Where data falls outside expected parameters, alerts, escalation, etc. will trigger automatically. Such an approach should avoid data overwhelming the system....?
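    A hedged sketch of that rule-driven segmentation with automatic escalation - the field names and parameter ranges below are assumptions, not anything from a real system:

    ```python
    # Hypothetical sketch of rule-driven segmentation with escalation.
    # Field names and (min, max) expected parameters are assumptions.

    RULES = {
        "temperature": (10.0, 80.0),
        "pressure": (1.0, 5.0),
    }

    def route(record):
        """Segment a record: deliver in-range data, escalate the rest."""
        alerts = [k for k, (lo, hi) in RULES.items()
                  if not (lo <= record.get(k, lo) <= hi)]
        return ("escalate", alerts) if alerts else ("deliver", [])

    print(route({"temperature": 25.0, "pressure": 3.2}))  # ('deliver', [])
    print(route({"temperature": 95.0, "pressure": 3.2}))  # ('escalate', ['temperature'])
    ```

    The rules table is the part a business analyst owns; the engine's only job is to apply it uniformly, which is how the right data reaches users in a usable form while out-of-parameter data triggers escalation automatically.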

  • Accepted Answer

    Wednesday, August 24 2016, 10:54 AM - #Permalink
    Resolved
    3 votes

    I somehow take issue with the accredited opinion that process and data should be modeled with different tools, by differently qualified people. And yes, everybody still does that, including me. Although I try as much as possible to bring a friendly data model in front of the customer, this is painful. People get tears in their eyes when they start hearing about cardinality relationships.

    Aren't we supposed to solve business problems? Is any normal business person (i.e. not Fortune 500 CIO's with tens of years of technology background) able to separate the data-driven problems from workflow-driven problems?

    IMHO, this pedantic separation is one of the key roadblocks to BPM adoption. I believe further adoption of BPM lies at the fringes of the business world, in the small, unassuming companies that cannot afford to even think architecturally but still want to be involved in the design of their operations. Will there be a robust hybrid architectural approach that blurs this religious divide between data and process?

    Maybe the Bene Gesserits of the BPM Empire have realized their full potential and we must turn our attention to the Honored Matres lurking out there in the BPM Scattering?

     

    • Dr Alexander Samarin
      more than a month ago
      This is a very important topic. Agree that "pedantic separation" does not work in this and many other situations. In my series of blog posts "Beauty of microservices" I plan to address this problem. Of course, "hearing about cardinality relationships" is not for business people, nor for some business analysts. Can you share other concerns?
    • Bogdan Nafornita
      more than a month ago
      This is a big enough concern for a guy like me, Alex :-)

      I do tend to envision a lot of value in a microservices approach, but we need to start with: how do we carve out the microservices boundaries? How do we identify the "seam" in a business construct that can be sustainably represented by a business microservice interface? My blurry idea right now is that this is a combination of data and activities. As we start explicitly dividing our process-driven apps into business microservices (and we're one week away from incorporating Docker into our process-driven architecture), we have very interesting discussions around these issues in Profluo.

      From a cognitive impairment pov :-), this is too much to think about... I believe I will start with a business sub-domain and a small app and grow from there.
    • Dr Alexander Samarin
      more than a month ago
      RE " we're one week away from incorporating Docker into our process-driven architecture" - impressive! Did you my post https://www.linkedin.com/pulse/beauty-microservices-part-3-employ-bpm-tame-service-beast-samarin ? I will be interested to hear your comments and comparison with your use of dockers.
    • Bogdan Nafornita
      more than a month ago
      Ha! Didn't see your post there... I read it now... fascinating to see the same kind of questions emerging (okay, you have much more of an architectural mindset!)...

      I will comment there, not to derail the discussion here!