BPM.com
Peter Schooff
BPM Discussions
Thursday, October 12 2017, 09:55 AM
What do you think are the most important questions to have answered when considering moving a process to the cloud?
David Chassels
Top of the requirements list is the capability to own not just the data but also the processes, so that you can transfer to another host supplier. This eliminates any "lock-in" and should ensure you always have ownership of, and access to, historical activity.
Then ensure the ability to implement inevitable changes quickly.
Also, the ability to support secure infrastructure and shared services around the world for global operators.

Best to own your "cloud" and subcontract management to a reliable infrastructure supplier?
John Reynolds
"Why didn't you do this sooner? ;-)
Proprietor and Product Craftsman at John Reynolds' Venture LLC
Juan J Moreno
Moving your processes to the cloud is, in general, a good idea. But there are several aspects to consider. A few examples of questions to answer before moving your processes to the cloud:

  1. Does your country have any law restricting where your data is located? Do you need to keep it on servers in your country?
  2. Do you need strong integration with your ERP or other locally hosted systems? That can be complex, expensive, and slow if you have plenty of web services to deploy, maintain, and use.
  3. Does your process manage large files, for example attached PDFs of 20 MB or more? Uploading and downloading them can be cumbersome (see the streaming sketch after this list).
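One common way to soften point 3 is to stream large attachments in chunks rather than loading the whole file into memory. A minimal Python sketch, assuming the `requests` library and a purely hypothetical upload endpoint (no specific BPM vendor's API is implied):

    import requests

    def upload_large_attachment(path, url, chunk_size=1024 * 1024):
        """Stream a file to `url` one chunk at a time (1 MB per chunk)."""
        def chunks():
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk_size)
                    if not block:
                        break
                    yield block
        # requests accepts a generator body and sends it with chunked
        # encoding, so a 20 MB+ PDF never sits in memory all at once.
        response = requests.post(
            url,
            data=chunks(),
            headers={"Content-Type": "application/octet-stream"},
        )
        response.raise_for_status()
        return response

    # Hypothetical usage:
    # upload_large_attachment("case-file.pdf", "https://example.com/api/attachments")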


We now have thousands of organizations subscribed to our cloud BPM solution (1), and I can tell you that these questions come up. Not every day, and not with every customer, but we have seen them several times, and in some cases they prevent a move to the cloud.

Best regards!
References
  1. https://www.flokzu.com
CEO at Flokzu Cloud BPM Suite
Patrick Lujan
Blog Writer
1. Security, security, security.
2. How big's your pipe to the cloud and what are you moving over it? Re: Juan's point 3. (A quick arithmetic sketch follows this list.)
3. And, 'yes,' integration. What's going on with the data and app in the cloud versus what you have on the ground? What exchange(s) will occur, and do you know how that will work? This last is where all the heavy lifting is.
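As a rough back-of-the-envelope illustration of point 2 (all figures here are invented for the example, not measurements):

    def transfer_hours(data_gb, link_mbps, efficiency=0.7):
        """Hours to move data_gb over a link_mbps connection.

        `efficiency` discounts protocol overhead and contention; the ~70%
        default is an assumption, not a benchmark.
        """
        bits = data_gb * 8 * 1000**3              # decimal GB to bits
        seconds = bits / (link_mbps * 1000**2 * efficiency)
        return seconds / 3600

    # Moving 500 GB of process data over a 100 Mbps pipe:
    print(f"{transfer_hours(500, 100):.1f} hours")   # ~15.9 hours

If the answer comes out in days rather than hours, the migration plan (and possibly the architecture) needs to change.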

Just my tuppence.
Due diligence on cloud vendors is even more important: the good ones will have better security than you do.
-- Ian Gotts, 1 week ago
Jonathan Yarmis
Tell me again, what's the cloud?
Bogdan Nafornita
I agree with the following:
1. Security - transfer- and persistence-wise.
2. Privacy - mostly GDPR compliance.

Integration has always been on the table for enterprise systems, but the cloud adds a few twists (a retry sketch follows this list):
- are your integration patterns loose enough to accommodate the messy internet?
- is your integration massively fault-tolerant, to accommodate persistent threats and failures?
- how are you treating eventual consistency of data?
- how do you distribute transactions where it makes the most business sense (i.e. which is the lead system for any piece of master data? Product Lifecycle may be managed in an on-premise ERP, while Customer Lifecycle may start in a cloud CRM)?
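On the fault-tolerance twist, a minimal Python sketch of retries with exponential backoff plus an idempotency key, two building blocks that let an integration shrug off the messy internet. `post_order` and its parameters are hypothetical placeholders:

    import random
    import time

    def call_with_retries(fn, attempts=5, base_delay=0.5):
        """Call fn(), retrying transient failures with exponential backoff."""
        for attempt in range(attempts):
            try:
                return fn()
            except ConnectionError:
                if attempt == attempts - 1:
                    raise                      # give up after the last attempt
                # Full jitter: sleep a random fraction of the growing backoff.
                time.sleep(random.uniform(0, base_delay * 2 ** attempt))

    # A client-generated idempotency key lets the receiving system deduplicate
    # a retried request instead of applying it twice (hypothetical call):
    # call_with_retries(lambda: post_order(order, idempotency_key=order.uuid))

The same key, persisted on the receiving side, is also what makes eventual consistency tolerable: a message that arrives twice changes nothing the second time.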

I believe that avoidance of vendor lock-in is a mirage, much more now than in the past. Any architectural decision will have you invested in certain vendors and technologies. Does anyone have an example of a company that has a great business tech stack and changes all the vendors every couple of years?
Managing Founder, profluo.com
Karl Walter Keirstead
The main benefit of cloud hosting is improved scalability, provided you are not setting up a private cloud.

Make sure you have access to sufficient bandwidth.

Be prepared for sleepless nights worrying about security.
Kay Winkler
Patrick and Juan summarized the most important points. In addition, I would suggest preparing a cost/benefit projection and comparing it against your current on-premises setup. Not having to worry about maintaining a physical server and communication infrastructure at some 2-to-6-sigma level of stability and uptime can be an important cost-saving factor. Also, make sure to factor in that annual maintenance fees on licenses usually disappear as well when "going cloud".
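A minimal sketch of such a projection; every figure below is a placeholder assumption to be replaced with your own quotes and invoices:

    years = 5
    on_prem = {
        "licenses": 80_000,               # one-time license purchase
        "annual_maintenance": 16_000,     # often ~20% of license cost per year
        "servers_and_network": 40_000,    # hardware refresh over the period
        "ops_staff_per_year": 30_000,     # share of sysadmin time
    }
    cloud = {
        "subscription_per_year": 45_000,  # replaces licenses + maintenance
        "migration_one_time": 25_000,
    }

    on_prem_total = (on_prem["licenses"] + on_prem["servers_and_network"]
                     + years * (on_prem["annual_maintenance"]
                                + on_prem["ops_staff_per_year"]))
    cloud_total = cloud["migration_one_time"] + years * cloud["subscription_per_year"]

    print(f"On-premises, {years}-year total: ${on_prem_total:,}")  # $350,000
    print(f"Cloud,       {years}-year total: ${cloud_total:,}")    # $250,000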
NSI Soluciones - ABPMP PTY
Adeel Javed
Security, of course, is a big concern, but we need to accept that the security requirements of a cloud host and cloud platform are far more stringent than those of any single customer.

For example, a 1,000-person company (customer) will only have 10-20 people on its security team.

A cloud host such as AWS, Azure, or Bluemix, on the other hand, will have hundreds of people on its security teams. In addition, cloud hosts work across many different industries, and they hold far more security certifications than any single customer could ever attain.
--
Adeel Javed
Digital Process Automation Specialist (BPM, RPA, Rules & Integrations)
Ian Gotts
Moving to the cloud is a BIG project, so use it as a catalyst. The question then is:

Can we simplify / eliminate / reinvent the core operational processes BEFORE we move them to the cloud?

Then we have less to move, and what we do move is more efficient.
Dr Alexander Samarin
- decide between IaaS vs SaaS vs PaaS vs APaaS
- will your integration survive cloud-to-cloud connectivity?
- security and privacy (e.g. is that cloud node in Europe, in Switzerland, or somewhere else?)
- check all the exceptions: SLAs, monitoring, recovery procedures, log analysis

Thanks,
AS
Karl Walter Keirstead
David raises a very important question "Best to own your own cloud?"

I take this to mean owning your own server at a hosting organization.

We have had customers who used Hurricane Electric over a number of years, with apparently good success - they had their own private server and, as I highlighted in a post on this forum, the usual reason customers outsource is ease of scalability.

Ownership of data is easy (the contract between the customer and the hosting organization details who owns the data).

Possession (i.e. gaining access to live data), on the other hand, is by no means guaranteed unless the data is mirrored to an independent hosting organization (and that needs to be an organization other than an affiliate).

Getting to historical activity is even more problematic if the site just goes off the air without notice. Again, mirroring would save you here.

Next, we have the platform and the ancillary modules the platform talks to - if the customer organization and the proposed new hosting organization between them do not have licenses to all of the modules, this could take time.

If the old hosting organization is running SQL Server 2008 and you need to migrate to SQL Server 2016, there could be more hurdles.

Even a move from SQL Server 2008 to SQL Server 2008 could prove problematic for systems that have been configured for optimum performance on a particular app.
John Morris
Consider "semantic scaleability" (but don't use this term in front of management).

Why semantic scaleability? Because that's the only interesting question about "moving to the cloud". Otherwise migrating a process to the cloud is purely a cost/benefit decision on deployment (and as per notes above, let's not forget the cost of downside risk associated with security). ("Interesting" means "interesting to company leadership".)

So, if you are going to move to the cloud for more than cost savings, what is your ability to take advantage of cloud scale? Semantic scaleability implies richer customer experiences, better journeys, more fine-grained services (with higher margins), etc. etc. Do you have the business analysis and technical capabilities to build and evolve larger process automation artefact inventories -- which now happen to be deployed "in the cloud"? Lots of implications: For example, there's a cost to realizing better business semantics which must be budgeted for; and there's a risk of disappointment if you fund a migration without acknowledging the implicit expectation of "better experiences" etc. etc.

TWEET: Migrating #process to #cloud? #CostBenefit nice - but #SemanticScaleability better - http://bit.ly/2gJk88L - @PSchooff @BPMdotcom #BPM #CX
+1 for "don't use this term in front of management".
-- E Scott Menter, 5 days ago
Karl Walter Keirstead
I would be interested in feedback on "security" options for the cloud.

Healthcare is one area of focus for my group - the penalties for inadvertent disclosure of Protected Health Information are severe (fines typically work out to tens of millions of dollars).

Our approach has involved setting up a back-end server (on-site or hosted) that only authorized staff can get to.

"Customers" include patients, relatives of patients (parents of children) and various contractors i.e. temp on-contract staff.

Customers log into a portal where they can receive responses to questions they have asked or submit questions asking for responses.

They are only able to talk to an engine. The engine checks to see if the questions/responses are within the range of expectations and refers anything out of the ordinary to a human arbitrator.

Otherwise the engine logs into the back-end server on the way in and on the way out. It seems best to have the engine on a separate server.
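A minimal Python sketch of that gateway pattern: the portal talks only to an engine, the engine range-checks each request, and anything out of the ordinary goes to a human arbitrator. The topics, limits, and back-end call are illustrative placeholders, not our production rules:

    EXPECTED_TOPICS = {"appointment", "prescription", "billing"}

    def handle_portal_request(request):
        topic = request.get("topic", "")
        body = request.get("body", "")
        # Range-of-expectations checks; real rule sets would be much richer.
        if topic not in EXPECTED_TOPICS or len(body) > 2000:
            return refer_to_arbitrator(request)   # human reviews the outlier
        return forward_to_backend(request)        # engine handles the log-in

    def refer_to_arbitrator(request):
        return {"status": "held", "reason": "outside range of expectations"}

    def forward_to_backend(request):
        # Placeholder for the engine's authenticated call to the back-end server.
        return {"status": "accepted"}

    print(handle_portal_request({"topic": "billing", "body": "Invoice question"}))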

We have had little success on the "Customer" side with dual-factor. Too complicated, they say.

Dual-factor, in any case, is not what it is made out to be - if you go through an airport and someone steals both your laptop and your phone, and the laptop has "password" as its password and the phone has "0000" as its PIN, that technology fails.

Across the many possible security options, the one I like best is a USB stick that you wear around your neck (too geeky, it seems).

My brother-in-law enabled fingerprint log-in. Bad choice, he discovered, when his laptop broke. He had to go to the repair shop multiple times to let the repair techs in.

Anyone in favor of embedded chips?
Great question, Walter -- and in healthcare especially an urgent one. And technically the "what you have" USB key (for extra strength, add a passphrase) is the right solution. But -- apparently too geeky.
Consider, though, how the discussion might change if we move into the marketing and human-factors realm. We all wear watches -- or we used to. It was/is an accepted thing. A watch is not geeky. And hospital staff are used to wearing special things on their bodies -- how about stethoscopes? My point is that "geeky" loses, but "social-status signalling" plus "cool professional item, almost like jewellery" plus "convenience" (if it's so important not to lose these things, and given the cost of implementing security, why not embed an inexpensive signalling device with the key that tells you where the thing is when you lose it) all adds up to a win.
Or let's go even further: the whole alarm-fatigue thing in hospitals is, as you know, a persistent, important, and unsolved problem. Is it possible that an active key would contribute in some way to the amelioration of this problem?
Just some thoughts about turning the correct technical solution from an unwanted imposition into something acceptable, even attractive . . . and then when you do this, you will have a thousand institutions that want to buy a solution from you! You're welcome. ; )
-- John Morris, 5 days ago
The automobile industry seems to be doing good things with their smart keys (if you have the key in your pocket/purse, it lets you in).

The alarm fatigue thing is very easy to solve - all you need is "normal", "important", "very important". You cannot have "off" in healthcare as some events are life-threatening.

We had folks way back agonizing over which options to pick in our apps.

One very large app has 11 pages of definable/re-definable settings. We re-programmed some of our apps to offer at each button a default option called "don't know/don't care" - the customers love it. They can be almost as negligent as they like - rules make sure that processing is able to move forward.

The auto keys are somewhat the same as the wearable USB key (except that you have to plug the USB key in - best not to take it off, as that leaves open the possibility of walking away with the key still plugged in).

I think Fitbit and Garmin have turned the watch industry upside down - I have several watches I no longer wear, ever.
At the risk of contributing to the entropy of this BPM.com discussion, but on the topic of alarm fatigue, here is a 2014 post by me on the subject:
https://www.linkedin.com/pulse/20140730013817-1524359-what-manufacturing-iot-project-leaders-can-learn-from-healthcare/
What Manufacturing Leaders Can Learn From Healthcare
The scary thing is the reference to 500 or more alarms per bed per day -- and since then not much has changed -- if anything, it's gotten worse, with every new machine "featuring" new alarms.
My excuse for mentioning this is that the whole "key / security" thing is part of both alarm fatigue and security solutions.
-- John Morris, 5 days ago
@John. I read the article - the key message for me was ". . . medical devices are each generating their own characteristic alert in the form of beeps or other sounds, flashing lights, text messages and more"

The thing is, each device has its own set of alerts, except that, especially in medicine, five alerts from five devices might reduce to one required action.

Therefore, why not consolidate the signals, use rule sets to analyze/interpret the data streams, and then 1) automate actions where no human input is needed and 2) refer the consolidated/interpreted "problem" to a human at one "command and control" center?

We see this in infrastructure protection, where you have drone detection for inbound air traffic, sonar for inbound threats on/under the water, proximity detectors, and vibration detectors - each of these has its own command center and, as you point out in your article, the alarms and alerts can indeed be overwhelming. (I.e. a boat lands on the beach; as the perpetrators advance, the vibration detector goes off, then the fence alarm, then the inner perimeter alarm.) If the duty technician dispatches a response team at the first alarm, it may be useful to see how quickly the perpetrators are moving, but otherwise all of the alarms except the first become secondary alarms.

Not trivial, of course, to reduce 500 alarms to a few, but as we go forward with IoT, it seems to me that many of the display facilities at individual command and control centers could be eliminated by exporting what comes into each device, exporting the calculations/interpretations generated at the device, and consolidating everything into one central smart command-and-control center.
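As a minimal Python sketch of that consolidation idea: collapse a burst of related alarms into one incident, treating everything that follows the first alarm in a correlation window as secondary. The window length and the fields are assumptions for illustration:

    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)
    open_incidents = {}   # zone/bed id -> time of the primary alarm

    def consolidate(alarm):
        """Return 'primary' for a new incident, 'secondary' if it joins one."""
        key, now = alarm["zone"], alarm["time"]
        first = open_incidents.get(key)
        if first is not None and now - first <= WINDOW:
            return "secondary"      # annotate the open incident, no new alert
        open_incidents[key] = now
        return "primary"            # escalate to the command-and-control center

    print(consolidate({"zone": "beach-1", "time": datetime(2017, 10, 12, 9, 0)}))
    print(consolidate({"zone": "beach-1", "time": datetime(2017, 10, 12, 9, 2)}))
    # -> primary, then secondary: the fence and perimeter alarms ride on the first.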

Many device manufacturers understandably do not want to publish their data formats/analytics.
Boris Zinchenko
From a purely technical viewpoint, the most important question to ask is: how many links (between processes, and to external systems) does your present enterprise model have? The second, closely related question is: do you have a tool in place to automate migration?

As experts specifically in process migration, we have witnessed the enormous effort companies expend when doing such migrations in practice. While there are relatively easy ways to move individual processes in the most popular notations, such as BPMN, significant problems remain when moving the underlying business semantics between different process engines. It is not a problem (or at least not a big problem) to move a single process manually, by simply re-drawing it in a cloud tool from an existing diagram in another tool. However, if you have to move an average enterprise model of 100,000+ diagrams, it becomes an immense labor and cost, which is simply impossible without some form of automation.

The complexity of migration relates directly to model topology. When you have a single process not linked to other processes, you can move it and then test-run it individually. When you have hundreds of closely interconnected processes, the complexity of their migration grows at least exponentially with the number of process links, and is further complicated by the inability to evaluate processes individually until the whole enterprise model has been properly moved. Automated migration tools, an expert team experienced specifically in such migration tasks, and a detailed migration plan are essential to the success of such a challenging mission.
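As a small illustration of the topology point: count the links between processes and find the clusters that have to migrate (and be regression-tested) together. The three-link model below is invented for the example:

    from collections import defaultdict

    links = [("Order", "Billing"), ("Billing", "Ledger"), ("HR", "Payroll")]

    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)

    def clusters(graph):
        """Connected components: processes that cannot be moved one by one."""
        seen, out = set(), []
        for node in list(graph):
            if node in seen:
                continue
            stack, group = [node], set()
            while stack:
                n = stack.pop()
                if n not in group:
                    group.add(n)
                    stack.extend(graph[n])
            seen |= group
            out.append(group)
        return out

    print(len(links), clusters(graph))
    # -> 3 [{'Order', 'Billing', 'Ledger'}, {'HR', 'Payroll'}]

The cluster sizes, far more than the raw diagram count, drive the migration and test plan.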