Give it some context - supporting information; video, procedure, metrics, related processes, people/experts, a rating of how good it is.
In the same way that data has moved to information and now to insights, process needs to move beyond being just isolated documentation and workflow transactions.
There are various aspects of process intelligence that help processes, and I have included some blog posts to read. Over time I've boiled it down to this: processes are intelligent if they do one or more of the following:
Sense emerging patterns to notify process managers or participants (carbon based or not)
Decide to take different actions on their own based on a process analytic (usually big data and compound analytics)
Take actions independent of human intervention (usually actions are within established constraints)
Chase changing goals automatically (usually assisted by scenarios, policies and constraints)
Interact and bid for work (usually collaborating with humans, but growing interactions with the IoT)
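The first three behaviours in the list above form a sense/decide/act loop. Here is a minimal sketch of that loop (all function names, thresholds, and the constraints dictionary are invented for illustration): a process watches a metric stream, detects an emerging pattern, and takes a constrained action on its own.

```python
# Hypothetical sense/decide/act loop for an "intelligent" process step.

def sense(readings, window=3, threshold=100):
    """Return True if the moving average of the last `window` readings
    exceeds `threshold` (an 'emerging pattern')."""
    if len(readings) < window:
        return False
    return sum(readings[-window:]) / window > threshold

def decide(pattern_detected, constraints):
    """Pick an action based on the analytic, staying within constraints."""
    if pattern_detected and constraints.get("auto_escalate", False):
        return "escalate"
    return "monitor"

def act(action, notify):
    """Take the action independently; here, notify the process manager."""
    if action == "escalate":
        notify("Process manager: emerging pattern detected, escalating.")
    return action

# A load spike triggers an autonomous, constrained escalation.
messages = []
readings = [80, 90, 120, 130, 140]
action = act(decide(sense(readings), {"auto_escalate": True}), messages.append)
```

The point of the sketch is that the action stays inside established constraints: remove `auto_escalate` from the constraints and the process falls back to monitoring.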
A couple of years ago, I came up with a process IQ mechanism that helped me figure out what smarts a process could exhibit. Here is a set of blogs that could help answer this question. I tried the IQ against a couple of real world smart processes to test it and it seemed to work.
On top of Ian's key input (give it some context), I would add:
1. make it listen and react to events;
2. embed operational support measures (e.g. make it anticipate activities, make it forecast the outcome);
3. delegate rules modelling and execution to a rules engine.
and of course:
4. populate the process with intelligent players.
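Points 1 and 3 above can be sketched together (all rule conditions, event shapes, and names here are invented): the process only listens for events and dispatches; the decision logic lives in a small rules "engine" that the business can change without touching the flow.

```python
# Hypothetical event listener delegating decisions to a rules engine.

RULES = [
    # (condition, action) pairs kept outside the process definition,
    # so rules can change without redeploying the flow itself.
    (lambda e: e["type"] == "order" and e["amount"] > 1000, "manual_review"),
    (lambda e: e["type"] == "order", "auto_approve"),
]

def rules_engine(event):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(event):
            return action
    return "ignore"

def on_event(event, audit_log):
    """The process listens and reacts; the rules decide."""
    action = rules_engine(event)
    audit_log.append((event["id"], action))
    return action

log = []
on_event({"id": 1, "type": "order", "amount": 2500}, log)
on_event({"id": 2, "type": "order", "amount": 50}, log)
```

In a real system the lambdas would be rules authored in a dedicated rules engine, but the separation of flow from decision is the same.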
Fully dynamic and context-aware: interaction metadata is used to interpret policies and translate events/requests into personalized, targeted Tasks and Flows.
It's well understood that data needs context to be relevant (data + context = information). Likewise, process requires context (process + context = optimization). Process without context is proceduralism / bureaucracy.
Us. Intelligent processes, carbon-based or not, are only as smart and adaptive as we are. If one of the ideas is the notion of "intelligent agents," those agents are bound by the parameters and thresholds we define, evaluate and constantly refine (based upon measuring and monitoring) as we work on making the process better, smarter, faster.
When talking about the ability to learn and to improve based upon that learning, what's really at the core is persisting that corpus of knowledge and improving and expanding upon it. And that takes us. Computers and software do exactly what we tell them to, not necessarily what we want; no more, no less. To make processes "intelligent" we have to provide the learning, and when pairing the business with the technology, what that invariably boils down to is the decisioning. And the decisioning, the ongoing tweaking of the parameters and thresholds for what happens when and under what circumstances, is us.
I confess that I'm not an enthusiast of intelligent processes in the sense of systems that can do everything without people to analyze and make the decisions. Maybe at a very low level, for very structured routines, it can work. I know many cases of BPMS implementations in which clients preferred to block the dynamic behavior precisely to maintain control of their own business processes. Anyway, in the sense of the question, I suggest process intelligence first and then the intelligent process.
Use process intelligence to get real data that is meaningful in the context of the process, then analyze it and decide what level of dynamic behavior can be implemented as an intelligent process, with benefits to the business.
Jim is a hard act to follow on this topic, for sure. But I'd highlight context as the key differentiator. A process that knows only about itself isn't very intelligent. One that collects and acts on data throughout the organizational ecosystem—that is, data gathered from customers, applications, and databases acting asynchronously with respect to the process itself—can be considered intelligent.
A question to root out the dinosaurs amongst us, Peter!
In the pre-computer era, data was hard to obtain and its only use was to laboriously create graphs etc. from which people could make decisions.
So we created the guru methodology - to hire someone who knew all about processes, get them to design ours and install it.
No need for data - we just trusted him/her.
This wasn't new, by the way - back in the 1700s William Blake attributed unshakeable weariness to the implementation of systems. "They are built on logic as a means to seize truth of reality, thus they are all, essentially, the same". He saw them as dangerous because they fossilize so quickly, becoming a paralytic instead of an accelerant. "The pursuit of convention led to a dullness which prevented any kind of appreciation, even in the form of tradition".
That "paralytic instead of an accelerant" persists right up to the present day.
But the truly interesting thing is what happens when you convert processes into pixels.
Suddenly every step along the path has invaluable data - how long it took, how fast, what gaps between steps, what data was accessed, what percentage of resource was used etc. This can be turned into insight to create clever routing, better exception handling, resilience testing and improve the process at every step. A Six Sigma approach could take the biggest problem and reduce it every day, speedily re-iterating a better process.
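The "processes as pixels" point above can be made concrete with a tiny event-log calculation (the timestamps, step names, and log format are invented): once each step is digital, per-step durations and the idle gaps between steps fall straight out of the log, ready to feed routing and improvement decisions.

```python
# Hypothetical event log for one case: (case_id, step, start, end),
# times in minutes from the start of the day.
events = [
    ("A", "receive", 0, 5),
    ("A", "check", 20, 30),    # 15-minute idle gap after "receive"
    ("A", "approve", 30, 32),
]

def step_metrics(log):
    """Per-step duration plus the idle gap since the previous step ended."""
    metrics, prev_end = [], None
    for _case, step, start, end in log:
        gap = 0 if prev_end is None else start - prev_end
        metrics.append({"step": step, "duration": end - start, "gap": gap})
        prev_end = end
    return metrics

metrics = step_metrics(events)
```

Even this toy log immediately exposes the biggest problem (the 15-minute wait before "check"), which is exactly the kind of finding a Six Sigma iteration would attack first.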
But generally this data is left on the cutting room floor by gurus. Perhaps because they've never been trained to use it. Or perhaps, because it would show up their "perfect processes" as anything but and undermine their reason for being.
Peter's question asked for the key...
An intelligent process doesn't just take the data and throw it up to people as graphs, charts and figures for them to argue over. It makes the changes as it goes along, intelligently routing, pre-loading likely data, flagging up previous history where that may be a problem and dynamically improving the process every time it happens.
It takes the guru out of the equation entirely. Dismantles the whole "process improvement as a project" industry through creating systems which don't need fixing. That's the key.
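The in-flight behaviour described above, routing on live data and flagging prior history instead of handing people charts to argue over, can be sketched like this (the history store, queue names, and thresholds are all invented):

```python
# Hypothetical prior-history store: customer id -> past problem counts.
history = {"cust-42": {"complaints": 3}}

def route(case, queue_loads):
    """Send the case to the least-loaded queue and flag customers whose
    previous history may be a problem, as the process runs."""
    flagged = history.get(case["customer"], {}).get("complaints", 0) >= 2
    queue = min(queue_loads, key=queue_loads.get)
    return {"queue": queue, "flag_history": flagged}

decision = route({"customer": "cust-42"}, {"team-a": 7, "team-b": 2})
```

No analyst in the loop: the routing decision and the history flag are made per case, every time the process runs.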
From a Knowledge Management point of view, we could see two approaches:
These two approaches require a solid, trained Artificial Intelligence mechanism, integrated with the BPMS, to take advantage of the knowledge embedded in the process-instances database.
Below are 2 URLs with more information (a complete book in Spanish, and a chapter in the 2006 Workflow and BPM Handbook called "APPLYING KNOWLEDGE MANAGEMENT TO EXPLOIT THE POTENTIAL OF INFORMATION STORED IN A BUSINESS PROCESS MANAGEMENT SYSTEM").
For processes to be intelligent they must be able to adapt to, and perhaps anticipate, business conditions. I once saw a demonstration by Lombardi where the process automatically identified that an approval was given to a certain supplier 80% of the time under certain conditions. The process then asked you if you wanted to make this a permanent rule; if you said yes, the process changed itself. I’ve never seen it work in production, but this must represent process design nirvana for many an operations director.
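The Lombardi-style behaviour described above can be sketched in a few lines (the decision log, supplier names, and thresholds are invented): mine past approval decisions, and when a supplier is approved above a threshold share of the time across enough cases, propose turning that pattern into a permanent rule for a human to accept.

```python
# Hypothetical rule-mining step: suggest auto-approval rules from history.

def propose_rules(decisions, threshold=0.8, min_cases=5):
    """decisions: list of (supplier, approved: bool).
    Return suggested rules for suppliers approved >= threshold of the time."""
    counts = {}
    for supplier, approved in decisions:
        total, yes = counts.get(supplier, (0, 0))
        counts[supplier] = (total + 1, yes + int(approved))
    return [
        f"auto-approve {supplier}"
        for supplier, (total, yes) in counts.items()
        if total >= min_cases and yes / total >= threshold
    ]

past = [("Acme", True)] * 8 + [("Acme", False)] * 2 + [("Bolt", True)] * 2
suggestions = propose_rules(past)
```

The `min_cases` guard matters: "Bolt" is approved 100% of the time but over only two cases, so no rule is proposed for it.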
Meanwhile back on planet earth we are approaching this level of intelligence, and there are a few tools and techniques that are helping us get there.
Adaptive case management is where we let the human brain create order out of chaos where no established or formal process exists. Although the process is manually created by the process expert, we are still managing it within a controlled environment and will benefit from audit and reporting. In time the goal will be to monitor this new way of working and make it a standard flow.
The next level of intelligence is business rules, where again humans currently write those rules, ideally in natural language, into a rules engine to be called by the process at appropriate times, giving us consistent application of rules and eliminating in many cases the need for human intervention.
The next level of intelligence is pattern-based prediction, where the system suggests to an agent what to do next based on the customer's previous behaviour. Process optimisation engines sift through the underlying process data and either suggest the next best action based on past history or suggest how to improve the process, as in the example above. Next best action has been used to great effect in telesales and call centres; we are now seeing this cognitive ability used in a wider context, with computers able to use “big data” to spot trends, answer questions and suggest alternative courses of action.
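A minimal next-best-action sketch of the pattern-based prediction just described (the journeys and action names are invented): given the last action in a customer's journey, suggest whatever most often followed that action across all past journeys.

```python
from collections import Counter

def next_best_action(histories, last_action):
    """Count what followed `last_action` in past journeys and suggest the
    most frequent successor, or None if the action was never seen."""
    followers = Counter()
    for journey in histories:
        for a, b in zip(journey, journey[1:]):
            if a == last_action:
                followers[b] += 1
    return followers.most_common(1)[0][0] if followers else None

journeys = [
    ["browse", "quote", "buy"],
    ["browse", "quote", "abandon"],
    ["browse", "quote", "buy"],
]
suggestion = next_best_action(journeys, "quote")
```

Real next-best-action engines weigh far more context than a single preceding step, but the core idea, suggesting from observed successor frequencies, is the same.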
Cognitive systems like IBM’s Watson use big data analytics to adapt and learn; based on the latest human input they can interpolate and suggest what might happen next. E.g. based on the number of influenza patients diagnosed over the last 3 days, they can predict the likely number for today. Cognitive agents are already with us, interpreting natural language and answering our queries: Apple’s Siri, and open source tools like SolrSherlock or OpenCog.
By adding this suite of tools to regular process design, we are slowly building intelligence and flexibility into our once rigid and inflexible processes.
There is obviously no agreement as to what constitutes an intelligent process. Product marketing clearly tries to sell a lot of stuff as SMART or INTELLIGENT when it is not! Intelligence is not a hardcoded flow or a rule set, no matter how complex you make it. If you do process mining or process intelligence to collect data and then have an expert program it, that is not an intelligent process. Turning any kind of statistical finding into a rule isn't either. Intelligence has to do with the ability to learn autonomously. I wrote a blog post on the subject recently and have blogged about it often over the last seven years.
Facebook, DeepMind or Watson (which I mention in my post) can do nothing in the area of process management. If you try to slap any current AI toolset on top of some BPM environment you get nothing. What should it learn, and from where? A BPM system is rigid by pure design. What would it learn, and from whom? You need performers and customers to be free to do what they think is the right action for them, and to learn from that. So it needs an ACM environment and goal-oriented processes to map the actions and the outcomes to goal-achieving process patterns.
I find it interesting how everyone dodges the fact that we at ISIS Papyrus have been offering a machine learning agent for ACM/BPM for many years. My approach was patented in 2007. We did present and demo our User-Trained Agent at various BPM and ACM conferences and it is in use at a number of our customers. So we are not talking about the future but about available technology.
Best Next Action and process intelligence can't be done right with BIG DATA, as there is no intelligence to be found in Gauss distributions: they do not tell you the right action for the current goals and the current customer. It is also way too complex to use. The trick is to use LESS DATA that is relevant to the current context and learn from it in real time, without the need for data analysts and AI experts. We found that the best way is to organize or model the process structure and data in user terminology (an ontology) and try to discover customer and user action and interaction within it. The performers and customers can therefore tell the agent which suggestions are good ones and which are not, rather than just a process or data analyst who is utterly disconnected from the real world.
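The "less data, learned in context" idea above can be sketched as a small agent (the class, context keys, and action names are invented for illustration, not the patented User-Trained Agent itself): suggestions are keyed on a small context rather than a big-data distribution, and performers score them up or down in real time.

```python
# Hypothetical user-trained suggestion agent: learns from direct feedback.

class SuggestionAgent:
    def __init__(self):
        self.scores = {}  # (context, action) -> cumulative feedback score

    def suggest(self, context, candidates):
        """Return the candidate action with the best learned score."""
        return max(candidates, key=lambda a: self.scores.get((context, a), 0))

    def feedback(self, context, action, good):
        """Performers and customers tell the agent which suggestions
        were good ones and which were not."""
        key = (context, action)
        self.scores[key] = self.scores.get(key, 0) + (1 if good else -1)

agent = SuggestionAgent()
agent.feedback("late-invoice", "send_reminder", good=True)
agent.feedback("late-invoice", "escalate", good=False)
choice = agent.suggest("late-invoice", ["send_reminder", "escalate"])
```

Note that no analyst or offline model fit is involved: the scores change the moment a performer gives feedback, in the terminology of the context itself.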
Let's not forget: You are discussing this as a maybe future thing. But I did this already SEVEN years ago.
I'd even venture to postulate the following:
The very moment when Artificial Intelligence will be superior to the most basic human intelligence by a factor of 1.0000000000000000000000000000000000000000000000000001, due to the fantastic iteration ability that this AI will have, we humans will know very soon...
But until then let's just see the reality as it is today: performance-wise, Artificial Intelligence is vastly inferior to any Mechanical Turk.
I was just going to reply "Hire great people" because really, that's the best way to make a process intelligent.
Lots of good fodder for discussion, and I have to come down on the side of Bodgan.
Example from the '90s: lots of people in the software configuration space were using AI techniques to configure complex products for sale, using fun stuff like Franz Lisp and Scheme and Smalltalk :) And then this little company in Austin put together a total hack of language constructs that allowed each masters- or PhD-level consultant to design an algorithm tailored to the customer's situation. Our solutions would go so fast that we had to slow them down so the customer would believe it was computing something (on an 8 MB Windows laptop), versus competition that took 30 minutes or more to process on a Sun SPARCstation, with much less likelihood of even producing a correct result.
Hype will precede reality by many many years.
Guys, you are obviously very knowledgeable, but I would have enjoyed more answers like "hire great people", "make processes adaptable via business rules and KPIs", etc., answers that can be sold to business people! Wondering why you don't go mainstream? Here's the test for you: can you explain it to a 6-year-old? If you cannot, you don't understand it yourself.
This is the kind of discussion where system vendors try to push their product pitches, ending in the typical argument of "my system is better than yours".
My point of view is more oriented toward the architecture necessary to make intelligent processes come true.
At this point in time, it is possible to have intelligence in structured, rigid, predefined process definitions. For that purpose, algorithms that are able to describe models, like the ones used in process mining or machine learning, can predict variables such as how much time the process is going to take to come to an end, or, based on correlations in the data, can infer process outcomes; for example, the second time a patient is admitted to a hospital with some type of illness, most of the time the treatment will follow a predefined path.
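The structured-process prediction just described can be sketched very simply (the case data, path names, and durations are invented; a real model would be far richer): estimate a running case's total duration from completed cases that followed the same path so far.

```python
# Hypothetical duration prediction from completed cases sharing a path prefix.

def predict_total(completed, prefix):
    """completed: list of (path, total_minutes). Average the total duration
    of cases whose path starts with the running case's steps so far."""
    matches = [
        total for path, total in completed if path[: len(prefix)] == prefix
    ]
    return sum(matches) / len(matches) if matches else None

cases = [
    (["admit", "triage", "treat"], 120),
    (["admit", "triage", "treat"], 100),
    (["admit", "discharge"], 30),
]
estimate = predict_total(cases, ["admit", "triage"])
```

This only works because the process is predefined: the path prefix is a meaningful feature precisely when cases follow known routes.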
The challenge is when the process is data-oriented and unstructured, without any kind of predefined model. In these cases, it is necessary to use algorithms to discover process behaviour and to access all the data that describe it (most of which is outside corporate boundaries); data must have a meaning (what is the difference between a customer and a customer?); models must be evaluated for their fitness (overfit, underfit); they must be able to deal with concept drift (try to predict the oil price); and you must have specialised people with deep knowledge of how to build them (today they are called data scientists).
Having said that, there is a very long way to go to achieve real intelligence. Probably it will be achieved. The last frontier will be how to deal with randomness.