September 1, 2011
Big news this week at Nimbus Partners, a company I joined exactly 4 years ago today. We were acquired by TIBCO, a larger company with a diverse portfolio of BPM products. I’ve known about TIBCO for about ten years now, as they are pioneers in the development of middleware, messaging and enterprise application integration, what is now a core capability within Service-Oriented Architecture, or SOA. Now, SOA is by no means new, and its maturity is well advanced in large enterprises. Many organizations have spent, and continue to spend, substantial amounts on its promise, and a fair number of challenges remain. Still, like most revolutionary technologies, realizing the value of something that radically shifts what is possible takes time. TIBCO has been at the forefront of SOA, BPM and BI with technology that alters how information flows between systems and how quickly business users can get to answers. The potential that lies before me and my company is exciting, with the promise of connecting advanced infrastructure capabilities with Nimbus’ cutting-edge business process management platform. Now, I’m not going to delve into the intricacies of what is possible or which bits fit with which widgets, as I’m sure whatever I imagine will evolve into something quite different. What I will tell you is that this acquisition is an exciting event, one that will likely impact a wide variety of global enterprises.
The Complexity of Events
On the topic of events, I’m reminded of a book I read years ago, aptly called “The Power of Events,” written by David Luckham. I was fortunate enough to hear him speak at a Gartner conference not long after I read his book, when it was a groundbreaking topic. Most interesting was his ability to illuminate the capabilities of Complex Event Processing (CEP). This capability matters primarily to organizations that need to minimize risk to their systems and the underlying assets those systems control, and the need to minimize risk is evident in just about every organization I’ve ever entered.
I remember being inspired by Luckham’s book and speech, and it had a big influence when I founded AVIVA Consulting. One of the first opportunities to realize some of the key capabilities within CEP came with a partner company. We took a simple technology for tracking real-time data flowing from any web service and integrated their software with our Microsoft-focused collaboration stack of SharePoint, SQL, InfoPath, Office and K2, a third-party workflow product. We were mildly successful with it, but quickly became focused on risk and compliance requirements and never fully developed the real-time collaboration solution. Later, after I sold our flagship product, ACES, to a Microsoft service provider, Neudesic, I spent a year working to take a Microsoft-platform Enterprise Service Bus (ESB) to market. I named it Neuron (yes, I’m still proud of how clever a name it is), and we launched it shortly before I left to work for Nimbus. So, even though I was on the product management and product marketing side of the ESB product, I became intimately familiar with the core capabilities and potential of middleware messaging and SOA in general.
Now, coming full circle to the present… this event is the joining of TIBCO, the most innovative company in the middleware space, with Nimbus, the most innovative company in the BPM content governance space. I couldn’t be more excited to be at the nexus of this formation (you can kick me later if you get this). Personally, I’m excited to see what will happen when we sit with financial services and pharmaceutical companies to look at how risk is managed within quality systems or compliance initiatives. How well can most of these organizations manage real-time events, and how well designed are their processes to deal with adverse or opportunistic circumstances? This is where the opportunity lies. As I point out in my posting on business agility and the need to minimize risk through agile processes, organizations need to design processes that allow rapid response to unexpected conditions as well as to the known possibilities. The events that tend to be earth-shattering are not the anticipated events, so how well we have modeled the organization to respond is critical. Also, as I detail in my posting on checklists, at a minimum we must address the known risks with clear process-handling instructions to ensure quality execution.
Rapid Response to Events = Reduced Operational Risk
So, imagine, if you will, a situation where fraudulent phishing attacks attempt to lure bank customers into providing their login credentials to make a change to their account. Rather than connecting to the real bank, customers are connecting to a fraudulent system that grabs their login ID and password. The fraudsters then log in to the real system, change the password and begin making transactions to pull money out of the victim’s real account. With CEP technology, banks can see in real time how much activity is occurring, and when irregular volumes occur on a given function (such as 10X the usual number of password changes during the past minute), the system disables the password change function and alerts the appropriate administrators. Cool stuff, right? Now, tie in the ability to provide clear instruction on the manual handling that the administrator needs to perform. This outlier password change event is rare, and the steps required of the administrator may be exacting. That’s where Nimbus comes in. The admin will have clear steps to take, ensuring fast and accurate handling with quick access to all necessary resources and reference materials. End game? Very few, if any, customers are impacted. Very little, if any, financial damage is done to the bank. Preventable adverse events are prevented. And we can imagine, in reverse, how opportunistic events can also be quickly acted upon, with decision-makers having clear instruction on execution.
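To make the detection side of that scenario concrete, here is a minimal sketch of the kind of rate-based rule a CEP engine might evaluate. This is illustrative Python, not TIBCO’s actual CEP product or API; the baseline, multiplier and window size are all hypothetical parameters I chose for the example.

```python
from collections import deque


class RateSpikeDetector:
    """Flag when events in a sliding time window exceed a multiple of baseline.

    Hypothetical sketch: class name, parameters and thresholds are
    illustrative, not any specific CEP product's interface.
    """

    def __init__(self, baseline_per_minute, multiplier=10, window_secs=60):
        # e.g. 10X the usual number of password changes per minute
        self.threshold = baseline_per_minute * multiplier
        self.window = window_secs
        self.events = deque()  # timestamps of recent events

    def record(self, timestamp):
        """Record one event; return True if the spike threshold is breached."""
        self.events.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold


# Simulate a burst of password-change events, one per second for 40 seconds,
# against a baseline of 3 changes per minute (threshold = 30 per minute).
detector = RateSpikeDetector(baseline_per_minute=3)
alerts = [detector.record(t) for t in range(40)]
print(any(alerts))  # prints True: the burst trips the 30-event threshold
```

In a real CEP engine this rule would typically be expressed declaratively over an event stream, with the alert wired both to the automated control (disabling the password-change function) and to the human escalation path the post describes.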
Understanding Events in Context
How an organization processes and responds to a large volume of diverse events is at the core of what BPM is about. It’s not just process definition for the sake of checking a box that the auditor approves. It’s about improving the decision-making ability of management and other operational decision makers. It’s about reducing operational risk. And it’s about continually tweaking and improving those processes as we learn what is working and what is not. Gaining real-time event information can be hugely beneficial, but its value increases when we understand these events in the context of precise operational activities.
Those of you who follow my blog already know I’m not in the habit of reporting news or projecting the future, so consider this post a rare exception. Given the personal nature of this event and the impact it will likely have on the future of BPM technology, I felt compelled to comment. In a future post, I will explore technology specifics, including how governance, risk and compliance requirements are handled with the variety of technologies available, and the specific categories of capabilities including automation, content management, master data management, SOA, enterprise architecture, social networks, collaboration, search and reporting. There are a variety of analysts and prognosticators jumping to conclusions about what this merging of technical capabilities will mean to the market. I can tell you that this newly joined organization looks extremely promising, but the proof will be in how we make it happen with our customers. It’s how Nimbus has always proven its advantage in the market: through real execution and value creation in real customer environments. With the added strength and reach of capability that TIBCO brings, we should be proving what is possible very soon.
August 12, 2011
Measurement and Action – how do we improve performance through a cycle of measurement and action?
Measuring the performance of operations is one of the most challenging of all management disciplines. Financial performance tends to draw the most attention, with investors and management having stakes that depend on the cash flow, income statement and balance sheet results each and every measurable period. The quarter-end release of financials dominates most business news cycles, and companies commit a flurry of resources and activity to getting numbers to align with pre-set goals, objectives and related expectations. Sales teams are pounded hourly to get POs in before the quarter’s deadline. Back-office operations work frenetically to meet reporting deadlines and get all accounts reconciled in time. And of course, executives prepare summary statements, conference interviews and advance guidance statements to investors in an effort to set a level of confidence in the direction of the company and how the broader market is impacting its results.
But what I’ve always found troubling with this nearly universal cycle of chaos during each financial reporting cycle is that the financials that are measured represent just a small picture of the overall health of the business. While it’s an important part, it’s by no means even the majority of what investors, board members, partners and employees should be concerned with. There are so many other factors that need to be measured, trended, compared and ultimately weighed into the analysis and assessment of each organization.
A Business Services Example
Recently, I was working with a Shared Services division of a global corporation. This business services division manages all operational financial services including payroll, accounts payable, and receivables as well as human resource related operations. To give you an idea of how complex their existing processes are, consider these conditions:
- Many processes are “black box” in nature; managed by two separate third party global services firms. The company does not know or see what the third parties do, only the results.
- Other processes are managed by the shared services division which serves many businesses, but not all.
- The company had recently acquired another set of businesses worth several billion USD, and those organizations would have to be integrated into the shared services division as well as into the outsourced third-party operations.
The challenge to this organization was focused on the need to consolidate processes where possible and ensure all processes were designed in a way that the varying participants could understand. As any of you who work in process management and process improvement know, just getting a common framework for communication is a major challenge.
Documentation existed everywhere, across all parts of all entities. But every bit of the process documentation was disjointed: written sometimes in Microsoft Word or Visio or PowerPoint, embedded in SAP documentation, or even kept in printed notebooks for which no one could find the electronic versions. “OMG!”, my teenage daughter would say. “A complete mess”, my client would admit. Sound familiar? This is a common condition that I encounter at nearly all of my clients.

But that wasn’t the only problem. Even if we can get all process definition content in a single place and in a single language that everyone can agree on, how do we manage it on an ongoing basis? Further, how do we know it’s right? Is it really what people are doing, or just what they say they should be doing? And finally, how can we start measuring processes, ultimately holding people accountable at the process level for measurable results? These are big challenges for even small organizations, let alone an organization that operates dozens of businesses across dozens of countries. To solve complex issues, it’s often easiest to compartmentalize them and solve them one at a time. The key challenges with this scenario include:
- Complexity of Process
- Understanding Accountability
- Sustainability of Content
- Compliance with Process Standards
- Sustainability of Performance
- Defining measures that align with process design
As we break down these topics, one thing becomes apparent: the last category, “defining measures…”, depends on most of the items above. While the organization did have performance measures in place, there was very little accountability, mostly because it was very difficult to know which roles and individuals really contributed to the factors behind the final figure. To understand what I mean by this statement, consider the scenario of Time and Expenses.
Time and Expenses (T&E) is a common process in most organizations: employees record their time and expenses, and the organization reimburses them. A key measure that was tracked was the percentage of T&E submissions paid on time. These statistics were tracked and summarized monthly and reported through a sophisticated business intelligence system, as one of dozens of metrics. Who is responsible for ensuring T&E submissions are paid on time? When the percentage of on-time payments in July dropped below 90%, it was an unacceptable “red-flag” alert. Why did this happen during this month, and who can ensure it is corrected? These may seem like pretty straightforward challenges, with the obvious solution being that the organization just needs to be structured such that T&E has a single process owner and that all participants in the process are managed under that owner. Right? HA! No way. It’s far more complex in this global organization. To start with, we have five separate high-level process steps:
- Set Policy
- Arrange T&E Information
- Submit T&E Form
- Process T&E
- Pay Submitter
Each of these steps is managed separately, with the policy (step 1) owned by a governance board with input from Audit, step 2 owned by the individual, step 4 depending on the region and the part of the organization, and step 5 handled by a third-party outsourced organization that, again, varied depending on which organization and region the submitter was from.
So, how the heck do you know where the process is breaking down and why some submittals are paid beyond the required deadline? I won’t go into the full analytics and forensics involved in identifying the “choke” points, but suffice it to say that a small minority of data points were throwing the average way out of range. It wasn’t that the entire process was broken; it was that, under certain circumstances, payments were taking 2-3 times the allotted timeframe.
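The arithmetic behind that finding is worth seeing: a handful of extreme values can drag an aggregate metric below its red-flag line even when most of the process performs well. The sketch below uses made-up cycle times, invented region names and a hypothetical 10-day SLA purely for illustration.

```python
from statistics import mean

# Hypothetical payment cycle times in days, grouped by processing segment;
# the segment names, values and the 10-day SLA are all invented for this example.
cycle_times = {
    "EMEA": [5, 6, 5, 7, 6, 5, 6],
    "AMER": [6, 5, 7, 6, 5, 6, 7],
    "APAC": [5, 6, 25, 28, 30, 6, 5],  # a few payments at 2-3x the allotted time
}

sla_days = 10
all_times = [t for times in cycle_times.values() for t in times]
on_time_pct = 100 * sum(t <= sla_days for t in all_times) / len(all_times)

# The aggregate figure trips the red flag...
print(f"on-time: {on_time_pct:.0f}%")  # prints "on-time: 86%", below the 90% line

# ...but only a segment-level breakdown exposes the choke point.
for segment, times in cycle_times.items():
    print(segment, f"mean={mean(times):.1f}d", f"worst={max(times)}d")
```

Two of the three segments are perfectly healthy; the whole miss is driven by three outlier payments in one segment, which is exactly the pattern that makes an unsegmented average so misleading.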
At the heart of successful process management and performance management is a platform for designing, capturing, maintaining and refining process definition. To perform this exact analysis and ultimately define measurements that can be actively managed, organizations must fully document and manage processes in a common visual framework that clearly defines ownership, accountability and associations with each related process activity.
So, as we look at the next reporting cycle to review the “performance” data for the organization, take a step back and ask the following:
- How well does the organization understand their own processes and those of their dependent outsourced and supply chain partners?
- How actively are those processes managed? Meaning, how often are they reviewed, improved, updated?
- How adept is the organization at responding to drastic market shifts?
Managing process information, and treating it as a highly valued asset, is the mindset that must exist at the heart of nimble, forward-looking enterprises. Without such rigor in business process management, organizations pose a high degree of risk to investors and to their own overall health. Immobile organizations are more susceptible to rapid market shifts and less able to innovate where necessary. As I will explore in later postings, sustainability of process content is what separates highly agile organizations from laggards.
Arms Around Complexity
So, how did my client get their arms around the complexity of process documentation they were confronted with? They took a number of steps, including converting all existing process content within the business services purview from static Visio files into Nimbus’ BPM platform, Control. Further, a BPM strategy was developed that included a process improvement methodology and process sustainability using Nimbus, including collaboration on process information across the global organization.
Globally, the organization has invested heavily in a program to drive continuous excellence methods throughout its wide scope of businesses. This is a massive undertaking given the number of businesses and the number of countries in which they operate. A core component of the continuous excellence (CE) program is cultural, with some degree of best-practice standards, reporting and auditing of implementation. Another key element that falls under continuous excellence is the quality management system. This “system” is not an IT system, but rather another set of methods and standards that includes reporting and auditing to ensure implementation.
It’s most impressive to see how mature and visionary the executive team has been, fully committing to an enterprise emphasis on quality and continuous process improvement. But even with the executive vision, the level of complexity makes the challenge a tough one. At the core of the objectives that include quality, continuous excellence, process improvement, performance management and compliance management is one common denominator: PROCESS. Understanding process activities enables the core elements of accountability, sustainability and agility.
Associating KPIs with Process Activities and Owners
At a local level, this Business Services division developed a vision for process improvement that included the same core capabilities envisioned by the global Continuous Excellence program. Their objective was to actively manage Key Performance Indicators (KPIs), not just report on them. Once their processes were established, KPIs were attached at the appropriate process level, and process ownership now meant not only ownership of the process definition but also ownership of that exact performance metric. Again, these relationships, established on the process software platform, enable us to understand performance in a far more meaningful and accountable way. No longer is a metric just a number made up of lots of calculations with no clear method of identifying the process failure. With KPIs associated with key process areas, every element that feeds an indicator, and the owner of every activity within that process area, is easily identified.
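As a rough sketch of what “a KPI attached at the process level with a named owner” looks like as data, consider the following. The activity names, owners, targets and breach convention here are all hypothetical, and a platform like Control models these relationships far more richly; the point is simply that once the link exists, a breached metric resolves directly to an accountable owner.

```python
from dataclasses import dataclass, field


@dataclass
class ProcessActivity:
    """One activity in the process map, with its owner and attached KPIs."""
    name: str
    owner: str
    kpis: dict = field(default_factory=dict)  # kpi name -> (value, target)


# Hypothetical slice of the T&E process map; owners and targets are invented.
activities = [
    ProcessActivity("Process T&E", "Regional Shared Services",
                    {"approval cycle days": (4.2, 3.0)}),
    ProcessActivity("Pay Submitter", "Outsourced Payments Provider",
                    {"% paid on time": (86.0, 90.0)}),
]


def breaches(acts):
    """Yield (activity, kpi, owner) for every KPI outside its target.

    Illustrative convention: 'days' metrics should be <= target,
    percentage metrics should be >= target.
    """
    for a in acts:
        for kpi, (value, target) in a.kpis.items():
            bad = value > target if "days" in kpi else value < target
            if bad:
                yield a.name, kpi, a.owner


for activity, kpi, owner in breaches(activities):
    print(f"{kpi} breached at '{activity}' -> accountable owner: {owner}")
```

With this linkage in place, the question “who can ensure it is corrected?” has a mechanical answer: follow the KPI back through its process activity to its owner.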
Measurement Triggers Action
Great. We now can understand the KPI in a way that clearly identifies where the process is failing and who is responsible. So, what do we do next? Send an email? Set up a meeting? Paint the wall red? Don’t tell me that isn’t what’s going on in most organizations; it absolutely is. What do you do to manage dozens of KPIs and dozens of alerts on performance that are “out of range”? How can key management have consistent visibility into the state of action that is taking place on each of these issues? Yup, you guessed it, this is a teaser for a following post. Later, I’ll highlight how this is being done and how the full cycle of process improvement is effectively managed through your process management platform.
Note that there are many ways to approach process improvement and performance management, and I’m not proselytizing for any specific method such as Six Sigma, Lean, Kaizen or variations on quality management programs. One area that is transforming process improvement and performance management methods is the advent of social media and social BPM capabilities within enterprises. For some interesting insight, read this recent post on BPM For Real: http://bit.ly/qFxVtz. Also, for much greater insight into Business Services and developments in BPM capabilities, please check out the latest post on Sourcing Shangri-la: http://bit.ly/nqxdnD. For some solid insight into process excellence methods, check the Process Excellence Network: http://bit.ly/gw5kSG.