Running a Successful Business Process Management Capability

What is the key difference between an effective Business Process Management capability and one that is costing the organisation money? This is a question I’ve had to answer a number of times, hence the subject of this blog post.

This question usually comes from those familiar with the early enterprise workflow products sold for mega bucks to early adopters before the technology was understood or mature. Things have changed over the years and Business Process Management Systems (BPMS) are a different technology to those early workflows, but it’s still a very important question, because successfully leveraging a technology like a BPMS still requires certain factors to be present in the organisation and a number of key considerations to be addressed.

Firstly, there are two types of failure I want to address:

  • Failure to achieve enough value
  • Failure to achieve any value

Failure one – not enough value for money

A typical low-return situation is an organisation moving to leverage BPM when it’s not prepared to document its processes or to change them, and not prepared to invest time in selecting systems and putting in place a well-planned capability around BPM.

Whilst it’s possible to add value by documenting processes, and I usually see this as the first step in the journey, introducing a technology to automate processes demands a level of comfort with changing processes to fit the new approach and to allow for continuous improvement. Whilst you could get decent value by simply shifting manual forms to digitised forms, the so-called request/approve problem space, this can only succeed if certain key criteria are met, and it doesn’t deliver the level of value a BPMS is designed to add. I also tend to find that request/approve forms are not so simple once you get into the detail.

Failure two – no return on investment

In this case we see either no adoption of the technology, adoption by one customer only, or the practice costing about the same as other tools or general programming. Total failure scenarios hurt the most when there is a large up-front investment, of course.

To avoid both scenarios, these are the key areas to focus on.

Establish a practice around process discovery

Step one: get the process analysis practice sorted out first. This doesn’t have to be a dedicated team of process experts or business analysts; I’ve used SMEs for this as long as they are willing to learn a few things. The key to success is using a standard language and tools, and adopting a consistent approach. To be clear, a wiki or a drawing tool is not a standard; standards mean languages like BPMN 2.0, plus at least a basic classification system to work out which processes are high or low value, which should get full automation, which should get swivel-chair automation, which should just be documented for training, and so on.
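To make the classification idea concrete, below is a minimal sketch of a triage helper. The field names, thresholds and treatment labels are my own illustrations (not part of BPMN or any particular methodology), so treat it as a starting point only.

    from dataclasses import dataclass

    # Hypothetical record captured for each process during discovery.
    # Field names and thresholds are illustrative only.
    @dataclass
    class DiscoveredProcess:
        name: str
        annual_volume: int      # how many times the process runs per year
        value_per_run: float    # estimated benefit of automating one run
        variability: str        # "low", "medium" or "high"

    def triage(p: DiscoveredProcess) -> str:
        """Suggest a treatment: full automation, swivel-chair or document only."""
        annual_value = p.annual_volume * p.value_per_run
        if annual_value > 100_000 and p.variability != "high":
            return "full automation"
        if annual_value > 20_000:
            return "swivel-chair automation"
        return "document for training only"

    print(triage(DiscoveredProcess("employee onboarding", 400, 300.0, "low")))

In practice the scoring would be richer (risk, compliance, customer impact and so on), but even a crude rule like this forces consistent decisions across the discovery backlog.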

If you have the wrong person managing this you will fail before you have really started.

Tools like Blueworks Live are an easy way to get collaboration happening around a simple process language standard. They can help shape a team quickly, particularly if the team thinks in agile terms.

Some formal training is good here too; bringing on a BPM coach is a good idea. Note that information modelling is also vital here, and the process staff might need help from an information architect to do this well.

Pick the right class of tools

Step two: decide if you need to automate and, if you do, what type of tool you need. There are four general technologies in this space: workflows, transactional engines, document centric engines and human centric process engines (a rough sketch of how to choose between them follows the list below).

  • Workflows are the lowest class of product and are good for enhancing an existing service, such as building on an ITSM to support ITIL processes better or adding simple additional functionality to a document manager. Workflows, by my definition, should not be used for business process problems. The rule I tend to apply is to use the workflows in ERPs to extend the ERP’s functionality: if a system has a workflow function, use it, provided that is possible without customisation. Some workflows can bolt onto other systems, but care must be taken here.
  • Transactional engines are usually used to handle integration problems only.
  • Document centric engines are, you guessed it, for handling document processing; some are also embedded into ECMS products and the like.
  • Human centric engines handle both processes and forms for human input well (hence being human centric). If there is a range of process problems to solve (even some transactional), the solution ends up being human centric to cover the broadest problem base.
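For illustration only, here is a rough sketch of how that choice could be expressed. The attribute names are my own shorthand for the characteristics described above, not a formal taxonomy, and real selection needs far more nuance.

    # Illustrative mapping from problem characteristics to a tool class.
    # The flags are my own shorthand, not a formal selection method.
    def suggest_tool_class(human_tasks: bool, document_heavy: bool,
                           integration_only: bool, extends_existing_system: bool) -> str:
        if extends_existing_system:
            return "workflow built into the existing ERP/ITSM (no customisation)"
        if integration_only:
            return "transactional engine"
        if document_heavy and not human_tasks:
            return "document centric engine"
        return "human centric engine"

    print(suggest_tool_class(human_tasks=True, document_heavy=False,
                             integration_only=False, extends_existing_system=False))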

The cost of implementing with the wrong tool is probably the biggest and most immediate failure point. Use, say, an ITSM workflow to implement a human centric process and you could be looking at a cost increase by a factor of 10 to 100 over the right tool, plus irreversible damage to your ITSM landscape from customisations that then impact the upgrade lifecycle and support model for your ITSM.

Match to the right product(s)

Step three: pick which tool or tools you will use, which translates to making sure you have the right functional features plus all the non-functional requirements covered. I’ll publish a guide to this later; however, most process problems can be broken into patterns, and non-functionals can be broken down the same way.

A key functional consideration is how many people are needed to implement a process end to end, and how fast it can be done. This can range from one person being able to build processes end to end through to a large team with developers. Implementation time can vary from hours to months per process, so this aspect of the tool on its own could determine the success or failure of the capability.

Companies under heavy regulatory requirements will need to focus hard on non-functionals, as many solutions simply do not meet key requirements around accessibility, data recovery and privacy. This all depends on the type of company, so not every solution will be appropriate even if it is functionally sound.
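One simple way to keep functional and non-functional requirements visible side by side is a weighted scoring sheet. The criteria, weights and scores below are placeholders I have invented to show the mechanics; substitute your own.

    # Hypothetical weighted scoring of candidate products.
    # Criteria, weights and scores are placeholders, not recommendations.
    criteria = {
        "one person can build end to end": 3,
        "time to implement a typical process": 3,
        "accessibility compliance": 2,
        "data recovery / backup": 2,
        "privacy / data residency": 2,
    }

    candidates = {
        "Product A": {"one person can build end to end": 4,
                      "time to implement a typical process": 3,
                      "accessibility compliance": 2,
                      "data recovery / backup": 4,
                      "privacy / data residency": 3},
        "Product B": {"one person can build end to end": 2,
                      "time to implement a typical process": 4,
                      "accessibility compliance": 4,
                      "data recovery / backup": 3,
                      "privacy / data residency": 4},
    }

    for name, scores in candidates.items():
        total = sum(weight * scores[criterion] for criterion, weight in criteria.items())
        print(name, total)

A sheet like this also makes it obvious when a functionally strong product fails a mandatory non-functional and has to be excluded regardless of its total score.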

Functionality-wise, think ahead and explore whether patterns such as expert collaboration on processes may add value. It would be a shame to discover you need this after you’ve purchased a system that doesn’t support such a pattern of use.

Note that BPM products are built on a set of standards, usually BPEL, BPMN, DMN or CMMN, or all of these. Some don’t use standards at all. Be aware that each standard comes with limitations; for example, BPMN is not good at representing a case: it can be done, but it isn’t pretty. A product supporting both BPMN and CMMN might provide a way to get both capabilities.

A few (and I really mean a few) products add non-standard capabilities to bridge these gaps, but I warn you this is dangerous territory. Picking an acceptable non-standard modelling tool with low risk is very hard, because you’re guessing whether the “non-standard” approach will add value and stand the test of time. Despite its limitations, BPMN has become a standard because it’s easy enough to learn and expressive enough: a happy middle ground. Non-standard tools, if they fail, give you no way to port what you’ve developed to another tool, plus you will have to train everyone on the custom notation rather than hiring people who already know the standards.

Setup the BPM team and delivery approach

Step four: form the team. By now the process people should be in place, but there is usually a need for an integration expert, a product/tool specialist and a process lead to pull the team together and make sure work is prioritised, done to the right level of quality (not perfect!) and delivering value.

The key is showing progressive wins, not necessarily quick wins. Gradual benefits help to solidify the work as a “practice”, not a project. Don’t try to optimise too early: the metrics in the tool are initially needed to establish the current state, and optimisation comes later. Introducing a new BPM system while trying to optimise a process is risky, so focus on one or the other at a time.
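As a minimal sketch of what establishing the current state can look like, cycle times can be baselined from exported start and completion timestamps before any optimisation is attempted. The instance data here is invented; real engines expose it through their own reporting or audit features.

    from datetime import datetime
    from statistics import mean, median

    # Invented example: (started, completed) timestamps per process instance,
    # as they might be exported from a process engine's audit log.
    instances = [
        (datetime(2023, 5, 1, 9, 0),   datetime(2023, 5, 3, 16, 30)),
        (datetime(2023, 5, 2, 10, 15), datetime(2023, 5, 2, 17, 0)),
        (datetime(2023, 5, 4, 8, 45),  datetime(2023, 5, 9, 12, 0)),
    ]

    cycle_hours = [(end - start).total_seconds() / 3600 for start, end in instances]
    print(f"mean cycle time:   {mean(cycle_hours):.1f} h")
    print(f"median cycle time: {median(cycle_hours):.1f} h")

Only once figures like these are stable and trusted is it worth arguing about where a process should be improved.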

Running the practice

Finally, once some successful BPM projects have been delivered, the work is about maintaining the practice. Process improvement becomes possible if the right metrics were embedded into the processes and the interest in changing them is there.

The hardest part happens now: on-selling the capability to other areas of the business. Unless there is executive sponsorship or EA leadership, you may very well need a salesperson to go out and promote the value of BPM to the organisation. Whatever you do here, do not start by focusing on low-value, low-frequency processes. These should be last on the list; unless you must do them for stakeholder commitment, you want to focus on returning value so the initial and operating costs are fully recovered before worrying about “nice to have” processes around the organisation.

With on-premise systems, upgrades and maintenance can become a burden, so ensure you understand this cost and effort and have it planned in as BAU activity, not project activity. The last thing you need is the system locked to an old version because funding for an upgrade can’t be found, and then having the security team raise concerns.

Failure to upgrade regularly is a real issue. I’ve seen a system installed less than two years earlier thrown out because it wasn’t upgraded, so don’t underestimate this cost, and don’t assume cloud systems automatically solve it unless their architecture and support model are designed for zero-downtime upgrades with no effort on your shoulders.

Conclusion

There are many things to consider with BPM technology, too many to cover in one blog post, but in summary: get the team and its leader right, don’t reuse something just because it’s there and known, and pick the right tool holistically. Finally, run the practice with even more focus than you had setting it up!