New Process Adoption: How do we get people to change behavior?

August 29, 2011

Earlier in my career, circa 1994, I was working for Lotus Development and I was having lunch with my boss.  We were sharing a bit of small talk and I remember telling him about a new personal accounting application called Quicken.  I explained all the cool things it did and how I could track my expenses, put line items into categories, build charts and graphs and see where my money went.  He listened as I went on and on about how empowered I felt that I knew what I was spending my money on.  He then asked, “Are you really changing your spending behavior now that you use it?”  And I thought about that for a moment and then had to admit, “Well, no.  Not yet anyway.”  That was all he needed to ask.  I got his point.

There is a lot of this same phenomenon in the areas of governance, risk and compliance, as well as in operational excellence and quality initiatives. It can be exciting to start thinking about improving process execution: understanding exactly what is happening with end-to-end processes, who is responsible, and how activities should be measured. For large risk and compliance efforts, new control methods, systems and activities are designed to address areas identified as having high risk and high impact potential. Documentation efforts are extensive, and personnel are trained to ensure understanding of these new processes. The question that should be asked, the one my previous boss asked me, is, "Are you really changing people's behavior?" It's one thing to implement a method and provide training, but not everyone is going to adopt the new application, system, method or rule. There are those in the organization who have been doing their job, practicing their specialized skill, for a very long time. Simply publishing a new process diagram, a new policy, or a best practice document is not going to ensure adoption of a new process. Further, when process change or compliance regulations affect a wide variety of process areas, with dozens or hundreds of roles impacted, how do we ensure adherence to the newly stated "way of working"? Is it okay to have 80% adoption? 90%? 99%? How do we know where the newly defined processes are not being followed? And if we are not at 100%, what is at stake?

Pharmaceutical Quality Management

Let me cite a recent example. I spent several months working with a global pharmaceutical company on SOPs, or Standard Operating Procedures. SOPs are at the heart of managing quality throughout product development, starting with R&D, moving through clinical trials and then, ultimately, manufacturing and distribution. SOPs become the definition of how to execute every process step in order to complete an end-to-end process. The criticality of execution is perhaps magnified when you're dealing with medications that will ultimately be marketed, administered by physicians and taken by large numbers of patients. The stakes are extremely high, with enormous investment by the company in each effort and an extremely high risk of failure, including scrutiny by the FDA as well as auditors.

Quality Management is a fundamental discipline within pharmaceuticals. So much so that companies provide a governance function purely for the sake of managing SOPs and ensuring operational participants have read and understood the process before they can execute any activity on each and every project. That was a bit of a mouthful, so let me simplify it from the end-user perspective. If you are a clinical technician and a new trial is starting, a new SOP will be published and you will be asked to read the document (usually 40 or so pages long), take a quiz online and then "sign off" that you have read and understood (R&U) the procedure. From an administrative perspective, it's also quite a job managing not only the content that needs to be gathered to define the SOP, but also the SOP R&U tasks themselves. This organization conducts dozens of trials per year, many running simultaneously, with hundreds of participants just within the clinical trials team. So you can imagine the complexity. Now, at the heart of the issue is the SOP itself. Over most of the past twenty years, SOPs have been large documents created in Microsoft Word, reviewed, approved and then converted to a read-only Adobe PDF. The document is then stored in an EMC Documentum document management system (DMS). The DMS captures necessary metadata about the SOP and ensures that it was "published". This distinction is important for compliance with FDA 21 CFR Part 11, a regulatory standard to which all participants must adhere.

Now, my client had a number of problems that many other pharmaceutical companies share. The big ones were:

  1. How do we ensure that people on the project really understand the procedure?
  2. When participants are unsure of a procedural step, the existing PDF documentation within the DMS is unwieldy, and answers are difficult to find.
  3. Many SOP documents contain procedural details that overlap other SOPs and there are often inconsistencies between them.
  4. Gaps may exist between SOPs and the exact steps and responsibilities become unclear.
  5. The effectiveness of the SOP documents to impact behavior is widely believed to be lacking.

So, back to my parable about using Quicken. Good material, lots of investment, but does the material actually change how work gets done? The short answer is "not well enough". When you give a broad audience a massive document locked away in a complicated environment, you don't get the intended results. You don't get adoption of, and adherence to, the stated process.

The Power of Simplicity

Atul Gawande, a general surgeon, is also the author of several books, including a recent release entitled "The Checklist Manifesto". In it, Dr. Gawande details how medical surgery, as well as other very complex undertakings such as constructing skyscrapers and flying airplanes, shares a common tool that greatly impacts the quality of execution: a simple checklist. Recently, Dr. Gawande spearheaded efforts to educate hospitals and deploy checklists for surgical procedures across a variety of environments; many in developing countries, but also in inner-city hospitals in the US and other developed countries. The results are astounding: the checklist program greatly reduces problems from surgical errors such as postoperative infections, bleeding, unsafe anesthesia and operating room fires. Incidents of these common problems dropped 36 percent after the introduction of checklists, and deaths fell by 47 percent. After the study, staff were surveyed and asked whether they would want this checklist used if they were being operated on. 93 percent said "yes".

How does the concept of a checklist apply to the clinical trials process within a pharmaceutical operation? A checklist can serve exactly the purpose the existing SOP is intended to serve. People on the trial team are responsible for understanding the whole process as well as their individual roles. But as we now understand, massive documents have inherent problems:

  1. maintaining integrity of the information
  2. readability by the audience
  3. providing guidance at the point of need

The net objective of ensuring execution of each process step is not being achieved with SOP documents.

That's where a checklist comes in. The work I've been a part of involved a major paradigm shift away from traditional large-volume, free-form SOP documents and toward a new model that takes advantage of cutting-edge BPM technology. The client is using the Nimbus Control enterprise BPM application. This model involves the following core BPM principles:

  • documenting end-to-end process in a universal process notation
  • linking all relevant electronic documentation at the activity levels
  • assigning ownership for every activity
  • building a Nimbus Storyboard (checklist) to correspond to each SOP
  • establishing end-user views to provide pre-populated lists of SOPs
  • providing end users with Search capabilities to get process diagrams, related documents and Storyboards in a structured Keyword taxonomy

A Revolutionary Tool

From an operational execution perspective, the use of Storyboards is revolutionary. A Clinical Operations Director can now open a Storyboard, which is essentially a list of activities relevant to their project; jump to the exact activity at the point of need; and see all the guidance, references and regulations necessary to perform that step. A large team of participants contributed to defining this new way of working within the Clinical Trials operational team. Contributors came from the operational team, the governance office, IT, and the Quality Assurance organization. The result is a solution that promises an ROI of over 30X within three years.

Improving operations that impact the quality of how we develop medicines is important not only for the companies that invest in that work; it also matters to doctors, patients and all those who care about those patients. The importance of process performance, process execution and quality cannot be overstated. These initiatives are not driven purely by regulatory requirements and audit findings. The investment in these technologies improves all aspects of the organization, the work experience of all participants and the pride that the organization can take in reducing errors and improving quality execution.

Simplicity Takes Work

I'm reminded of a quote from Steve Jobs, who just this past week resigned as CEO of Apple: "That's been one of my mantras — focus and simplicity. Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it's worth it in the end because once you get there, you can move mountains."

The Checklist, or the Storyboard, helps make massive amounts of complexity and detail quite simple.  I applaud the ambitious courage of my client for taking bold action to transform the SOP process.  As Mr. Jobs noted, “you have to work hard to get your thinking clean and make it simple.”  My client has worked incredibly hard to design a simpler way for people to perform exacting work.


Operational Risk = Investment Risk

August 17, 2011

Investment Risk

The historical focus of securities analysis, valuation and prediction has centered on the tracking of historic prices. Investment risk has been measured based on price volatility, particularly historic price volatility. Price valuations (stock or bond values) have historically been based on the financial performance, cash position and cash flow of the issuing corporation. Most measures review comparable financial ratios and industry benchmarks to form an assessment of proper pricing. This practice of value pricing is flawed. Future prices have little to do with past price behavior, and risk is in no way correlated with price volatility. Risk has everything to do with specific operational exposure, legal exposure, regulatory risk, as well as financial risk. Price risk is the tail, not the dog, and we are wasting our time measuring the price trading patterns of any particular security.

Risk is not something any company, operational unit, financial arm or investment firm can ever fully manage. Risk is part of existence. Risk is part of living, for individuals as well as companies. Minimizing operational risk, while noble in its cause, is completely inadequate for addressing investor risk. Investors' risk profiles are best served when companies handle adverse events in the most adaptable, expedient and agile manner; companies that know how to respond to the unknown unknowns, not only the expected risk scenarios. Those are the companies that know how to deal with risk, and those are the companies that survive and thrive for decades or centuries. This operational maturity for handling adverse events is where risk truly lies, and it is what investors most need to understand and measure. As with everything we measure, operational risk is relative, and how one organization measures up against comparable companies in the appropriate industry sector should determine the risk premium investors apply to prices and price-change expectations.

Market Shifts: Could Blockbuster See the Netflix Threat?

So, how is there ever a solid method of valuing stocks: understanding the risk inherent in the security and making a value judgment on its price relative to alternative securities within a similar risk class? I believe the analysis needs to be based on the management of operational, financial and legal risk, along with understanding the operational objectives, measures, tracking ability and business agility that each relevant comparable company displays. How could the valuation of Blockbuster drop so precipitously? Could we have measured their market position in 2003? Of course! Could we have measured their operational and financial controls? Absolutely! Could we also see how unprepared they were to alter their market model to deal with alternative competitive models? Yes, we could have seen this risk, as Blockbuster had very little ability to manage their processes and alter them as needed. They could not move when the market shifted, and they took years to try to compete with Netflix. Why? Because they were NOT agile. They couldn't just change their model, even though they knew they needed to. By the time they implemented a service to compete with Netflix's, their lunch had been eaten.

Fraudulent Schemes

What about Barclays, circa 2003? A phishing attack, wherein a hacker stole email information from Barclays. The hacker then sent emails to Barclays' customers asking them to change their passwords. The hacker stole the real passwords and immediately changed them, locking out customers. Finally, the hacker transferred funds from each account, effectively stealing tens of millions of dollars over a two-day span. What's astonishing isn't that Barclays or any other bank could be fooled by this scheme. In the early 2000s, this type of phishing attack was a new method of fraud, and they couldn't have seen it coming, as obvious as it might seem today. What is astonishing is that Barclays didn't have system parameters on their password-change function to prevent such outlier events. When thousands of users changed their passwords in a few hours, there was no automated trigger to shut down the service. If the normal rate of password changes is only 100 per day, an event multiple standard deviations away from the average usually means something is wrong and the service needs to be halted and evaluated. But Barclays did not have such processes in place, and they were not protecting their assets in a prudent manner in 2003. As a shareholder, the most important questions relative to the risk of your investment in Barclays must be: "Does Barclays have their arms around their processes? Do they consistently look to improve and tighten their risk and control structure? How do they govern their processes? How visible are their processes? How much risk is embedded in individual or small-department knowledge domains, wherein a rogue trader can bring down the entire company, such as what occurred at Barings Bank in 1995?" We don't have to look far for examples. How about the latest with News Corp's wiretapping scandal? How well did Rupert Murdoch and the rest of the leadership team understand the risks they were taking?
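The outlier trigger described here is simple enough to sketch. A minimal Python example, assuming a hypothetical daily count feed and a three-standard-deviation threshold (both assumptions for illustration, not anything Barclays actually ran):

```python
import statistics

def should_halt(history, current_count, k=3):
    """Flag when the current rate is more than k standard deviations
    above the historical mean -- the outlier event described above."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return current_count > mean + k * stdev

# Typical days see ~100 password changes; a phishing wave produces thousands.
normal_days = [95, 102, 98, 110, 100, 97, 105]
print(should_halt(normal_days, 104))   # ordinary day: False
print(should_halt(normal_days, 3500))  # halt the service and investigate: True
```

The point is not the statistics; it is that a few lines of monitoring on any sensitive process can convert an "unknown unknown" into an automated stop.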

Where traditional investment analysis becomes moot is when we consider these outlier events: market shifts, fraud attacks, internal fraud, legal rulings, reputational loss, etc. While Sarbanes-Oxley put some basic, prudent rules in place for public companies in the US, this regulation does nothing to reveal the true risk position of the investment. And it's risk that is the issue here. Every investment presents a risk/reward proposition. If I'm going to incur greater risk, I'd better have the opportunity for greater rewards. And vice versa: I may choose to limit my risk exposure, knowing full well that my return opportunity is modest. US Treasuries are considered among the least risky securities to hold, and consequently they yield very small returns relative to other bonds with identical coupon and duration. You are virtually guaranteed your 3.25% return on a 10-year note, with "virtually" no risk of default. Okay, if we use the US Treasury as a benchmark for no risk, then what measure above this baseline illustrates relative risk, and how much should investors be compensated for each 1% of additional risk?

Starting in the 1970s, financial scholars embraced the idea that a stock's risk was associated with price volatility. Further, they treated past volatility as a likely indicator of future volatility, and thus of the inherent risk. Barr Rosenberg's consulting firm, Barra, would popularize "Beta", a quantified measure that represents a stock's sensitivity to movements in the overall market. A stock with a Beta of 1 has the same price volatility as the broader market; more than 1 means more volatile, and less than 1, less volatile. Suddenly, the risk factor of a stock could be calculated, quantified and estimated. Another economist, Robert Engle, would win a Nobel prize in 2003 for his work modeling this same idea of time-varying volatility. Amazing. There you have it. Investment risk is all about past pricing. Nonsense, I say.
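To see how mechanical this measure is, beta is just the covariance of a stock's returns with the market's, divided by the variance of the market's returns. A short sketch (the return series are made up for illustration):

```python
def beta(stock_returns, market_returns):
    """Beta = covariance(stock, market) / variance(market).
    1 tracks the market; >1 is more volatile, <1 less."""
    n = len(market_returns)
    ms = sum(stock_returns) / n
    mm = sum(market_returns) / n
    cov = sum((s - ms) * (m - mm)
              for s, m in zip(stock_returns, market_returns)) / (n - 1)
    var = sum((m - mm) ** 2 for m in market_returns) / (n - 1)
    return cov / var

# A stock that moves exactly twice as much as the market has a beta of 2.
market = [0.01, -0.02, 0.03, 0.00]
stock = [2 * r for r in market]
print(beta(stock, market))  # 2.0
```

Note that nothing in this calculation looks at the company's operations, controls or agility, which is precisely the author's complaint.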

The High Impact of  Unlikely Events

In 2007, Nassim Taleb published what would become a Wall Street favorite, "The Black Swan: The Impact of the Highly Improbable" (not a lot of ballet in this one). The important core theory that Mr. Taleb establishes is that unexpected events happen, and the impact that some of these events have is astronomical; whether we're talking about 9/11, the capital markets collapse of 2008, BP's oil catastrophe in the Gulf of Mexico, or Barings Bank's rogue trader. Micro-level or macro-level events, either of which may be completely unpredictable, have the potential to be game-changing. The big question becomes: how well positioned are we to deal with such events? Forget about trying to capture, measure and plan for each exact event. That's all fine and good. The real question is how capable, how agile, the organization is in responding when the big one hits.

When a competitor exploits a new technology that undermines our traditional sales and delivery models, how well can we analyze the issues, develop strategic approaches, institute new models and underlying processes to maintain our market position?  These are the questions that Blockbuster could not answer adequately.  Their event wasn’t even an overnight impact.  They had many months to adjust, but like a big aircraft carrier, they just couldn’t change course quickly enough.

Only The Nimble Survive

In a recent blog post, Torben Rick asserts that "Business history is punctuated by seismic shifts that alter the competitive landscape. These mega trends create inescapable threats and game-changing opportunities. They require businesses to adapt and innovate or be swept aside." So, when we return to the topic of risk and how well organizations are managing it, let's not focus on stock price volatility. Rather, let's look at how organizations approach their business models in an agile manner. Just how well are they positioned to change processes quickly and respond to "Black Swan" events? Because whether there will be such events is not in question. There will be. Another fine recent blog post comes from Norman Marks, who notes that "an organization needs not only to understand and assess its risks, but it needs to have a culture that embraces the active consideration of risk…". It's this consideration that I suggest includes active response to the unexpected.

If you are dealing with management that is not capable of rapid change, you are dealing with high risk.  Now, my definition of risk might not be quite as quantifiable and as easily comparable as a risk quotient like Barra’s “beta”.  But for my money, it’s what really matters.

Process Performance, Complexity and How I Learned to Stop Hating Traffic Lights

August 12, 2011


Measurement and Action – how do we improve performance through a cycle of measurement and action?

Measuring the performance of operations is one of the most challenging of all management disciplines. Financial performance tends to draw the most attention, with investors and management holding stakes that depend on the cash flow, income statement and balance sheet results of each and every measurable period. Quarter-end financial releases dominate most business news cycles, and companies commit a flurry of resources and activity to getting numbers to align with pre-set goals, objectives and related expectations. Sales teams are pounded hourly to get POs in before the quarter's deadline. Back-office operations work frenetically to meet reporting deadlines and get all accounts reconciled in time. And of course, executives prepare summary statements, conference interviews and forward guidance to investors in an effort to set a level of confidence in the direction of the company and in how the broader market is impacting its results.

But what I’ve always found troubling with this nearly universal cycle of chaos during each financial reporting cycle is that the financials that are measured represent just a small picture of the overall health of the business.  While it’s an important part, it’s by no means even the majority of what investors, board members, partners and employees should be concerned with.  There are so many other factors that need to be measured, trended, compared and ultimately weighed into the analysis and assessment of each organization. 

A Business Services Example

Recently, I was working with a Shared Services division of a global corporation.  This business services division manages all operational financial services including payroll, accounts payable, and receivables as well as human resource related operations.  To give you an idea of how complex their existing processes are, consider these conditions:

  • Many processes are "black box" in nature, managed by two separate third-party global services firms. The company does not know or see what the third parties do, only the results.
  • Other processes are managed by the shared services division, which serves many of the businesses, but not all.
  • The company had recently acquired another set of businesses worth several billion USD, and those organizations would have to be integrated into the shared services division as well as the outsourced third-party operations.

The challenge to this organization was focused on the need to consolidate processes where possible and ensure all processes were designed in a way that the varying participants could understand.   As any of you who work in process management and process improvement know, just getting a common framework for communication is a major challenge. 

Documentation existed everywhere, across all parts of all entities. But every bit of the process documentation was disjointed; written sometimes in Microsoft Word or Visio or PowerPoint, or embedded in SAP documentation, or even in printed notebooks whose electronic versions no one could find. "OMG!", my teenage daughter would say. "A complete mess", my client would admit. Sound familiar? This is a common condition that I encounter at nearly all of my clients. But that wasn't the only problem. Even if we can get all process definition content into a single place and into a single language that everyone can agree on, how do we manage it on an ongoing basis? Further, how do we know it's right? Is it really what people are doing, or just what they say they should be doing? And finally, how can we start measuring processes, ultimately holding people accountable at the process level for measurable results? These are big challenges for even small organizations, let alone an organization that operates dozens of businesses across dozens of countries. To solve complex issues, it's often easiest to compartmentalize them and solve them one at a time. The key challenges in this scenario include:

  1. Complexity of Process
  2. Understanding Accountability
  3. Sustainability of Content
  4. Compliance with Process Standards
  5. Sustainability of Performance
  6. Defining measures that align with process design

As we break down these topics, one thing becomes apparent: the last category, "defining measures…", depends on most of the items above it. While the organization did have performance measures in place, there was very little accountability, mostly because it was very difficult to know which roles and individuals really contributed to the factors behind the final figure. To understand what I mean by this, consider the scenario of Time and Expenses.

Time and Expenses (T&E) is a common process in most organizations, in which employees record their time and expenses for reimbursement. A key measure that was tracked was the percentage of T&E submissions paid on time. These statistics were tracked and summarized monthly and reported through a sophisticated business intelligence system. This was one of dozens of metrics. Who is responsible for ensuring T&E submissions are paid on time? When the percentage of on-time payments in July dropped below 90%, it was an unacceptable "red-flag" alert. Why did this happen during this month, and who can ensure it is corrected? These may seem like pretty straightforward challenges, with the solution being that the organization just needs to be structured so that T&E has a single process owner and all participants in the process are managed under that owner. Right? HA! No way. It's far more complex in this global organization. To start with, we have 5 separate high-level process steps:

  1. Set Policy
  2. Arrange T&E Information
  3. Submit T&E Form
  4. Process T&E
  5. Pay Submitter

Each of these steps is managed separately: the policy (#1) is owned by a governance board with input from Audit; steps #2 and #3 are owned by the individual; #4 depends on the region and the part of the organization; and #5 is handled by a third-party outsourced organization that again varies depending on the submitter's organization and region.

So, how the heck do you know where the process is breaking down and why some submittals are paid beyond the required deadline? I won't go into the full analytics and forensics involved in identifying the "choke" points, but suffice it to say that a small minority of data points were throwing the average way out of range. It wasn't that the entire process was broken; it was that, under certain circumstances, payments were taking 2-3 times the allotted timeframe.
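The effect of a few choke-point cases on an average is easy to demonstrate. A toy example in Python (the cycle times and the 10-day deadline are invented for illustration, not the client's actual data):

```python
# Hypothetical payment-cycle times in days; the SLA here is assumed to be 10 days.
times = [5, 6, 7, 6, 8, 5, 7, 6, 28, 30]  # two choke-point cases at the end

on_time = sum(t <= 10 for t in times) / len(times)
mean = sum(times) / len(times)
median = sorted(times)[len(times) // 2]  # upper-middle value for even n

print(f"on-time: {on_time:.0%}, mean: {mean} days, median: {median} days")
# on-time: 80%, mean: 10.8 days, median: 7 days
```

Eight of ten payments clear in about a week, yet the mean sits above the deadline; only by separating the outliers from the typical case can you tell whether the whole process is broken or just one path through it.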

At the heart of successful process management and performance management is a platform for designing, capturing, maintaining and refining process definition.  To perform this exact analysis and ultimately define measurements that can be actively managed, organizations must fully document and manage processes in a common visual framework that clearly defines ownership, accountability and associations with each related process activity.

So, as we look to the next reporting cycle to review the "performance" data for the organization, take a step back and ask the following:

  1. How well does the organization understand their own processes and those of their dependent outsourced and supply chain partners?
  2. How actively are those processes managed?  Meaning, how often are they reviewed, improved, updated?
  3. How adept is the organization at responding to drastic market shifts?

Managing process information, and treating it as a highly valued asset, is the mindset that must exist at the heart of nimble, forward-looking enterprises. Without such rigor in business process management, organizations pose a high degree of risk to investors and to their own overall health. Immobile organizations are more susceptible to rapid market shifts and less able to innovate where necessary. As I will explore in later postings, sustainability of process content is what separates highly agile organizations from laggards.

Arms Around Complexity

So, how did my client get their arms around the complexity of process documentation they were confronted with? They took a number of steps, including converting all existing process content within the business services purview from static Visio files into Nimbus' BPM platform, Control. Further, a BPM strategy was developed that included a process improvement methodology and process sustainability using Nimbus, including collaboration on process information across the global organization.

Globally, the organization has heavily invested in a program to drive continuous excellence methods throughout its wide scope of businesses. This is a massive undertaking given the number of businesses and the number of countries in which it operates. A core component of the continuous excellence (CE) program is cultural, with some degree of best-practice standards, reporting and auditing of implementation. Another key element that falls under continuous excellence is the quality management system. This "system" is not an IT system, but rather another set of methods and standards that includes reporting and auditing to ensure implementation.

It's most impressive to see how mature and visionary the executive team has been, fully committing to an enterprise emphasis on quality and continuous process improvement. But even with the executive vision, the level of complexity makes the challenge a tough one. At the core of the objectives that include quality, continuous excellence, process improvement, performance management, and compliance management is one common denominator: PROCESS. Understanding process activities enables the core elements of accountability, sustainability, and agility.

Associating KPIs with Process Activities and Owners

At a local level, this Business Services division developed a vision for process improvement that included the same core capabilities envisioned by the global Continuous Excellence program. Their objective was to actively manage Key Performance Indicators (KPIs), not just report on them. Once their processes were established, KPIs were attached at the appropriate process level, and process ownership now meant not only ownership of the process definition but also ownership of that exact performance metric. Again, the relationships established on the process software platform make it possible to understand performance in a far more meaningful and accountable way. No longer is a metric just a number made up of lots of calculations with no clear method of identifying the process failure. With KPIs associated with key process areas, every element that feeds an indicator, and the owner of every activity within that process area, can be easily identified.
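The relationship the platform establishes, each KPI attached to a process activity with a named owner, can be pictured as a simple data structure. A sketch (the activities, owners and KPI names below are hypothetical, not the client's actual model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Activity:
    name: str
    owner: str                 # role accountable for this process step
    kpi: Optional[str] = None  # metric attached at this process level

# Hypothetical slice of the T&E process described earlier.
process = [
    Activity("Set Policy", "Governance Board"),
    Activity("Process T&E", "Regional Ops", kpi="% paid on time"),
    Activity("Pay Submitter", "Outsource Partner", kpi="% paid on time"),
]

# When the "% paid on time" KPI goes red, the accountable owners fall straight out:
accountable = [a.owner for a in process if a.kpi == "% paid on time"]
print(accountable)  # ['Regional Ops', 'Outsource Partner']
```

The design point is that ownership is a property of the activity the KPI hangs on, not something reconstructed after the fact from a spreadsheet.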

Measurement Triggers Action

Great. We can now understand the KPI in a way that clearly identifies where the process is failing and who is responsible. So, what do we do next? Send an email? Set up a meeting? Paint the wall red? Don't tell me that isn't what's going on in most organizations; it absolutely is. What do you do to manage dozens of KPIs and dozens of alerts on performance that is "out of range"? How can key management have consistent visibility into the state of action being taken on each of these issues? Yup, you guessed it: this is a teaser for a following post. Later, I'll highlight how this is being done and how the full cycle of process improvement is effectively managed through your process management platform.


Note that there are many ways to approach process improvement and performance management, and I’m not proselytizing around any specific method such as Six Sigma, Lean, Kaizen or variations on quality management programs.  One area that is transforming process improvement and performance management methods is the advent of social media and social BPM capabilities within enterprises.  For some interesting insight, read the recent post on BPM For Real.  For much greater insight into Business Services and developments in BPM capabilities, please check out the latest post on Sourcing Shangri-la.  And for some solid insight into process excellence methods, check the Process Excellence Network.

Compliance: Headache or Windfall?

August 8, 2011 § Leave a comment

Forming a Process Centric Model

Regulatory bodies and compliance rules are as old as civilization.  Early Egyptians, Greeks, Romans and Indians created standards and rules for business.  These rules were centered on weights and measures as well as currency, but today regulations come from many sources.

When we look at a regulatory construct, we are effectively looking at rules, laws, guidelines and best practices that are dictated by a governing body.  I say dictated, but these rules are generally a set of statements that have been developed, reviewed and ultimately enacted through a governing board within a corporation.  Regulations may also be established by industry-related governing boards, and many regulations are based on governmental laws (federal, state and local agencies).  The volume of these regulatory books, and the volume of statements contained in each, can be enormous depending on the industry and corporate size.  Public companies have the Securities and Exchange Commission to deal with.  Companies with global operations have to comply with the varying laws of each operating country: health and safety standards, hiring and firing requirements, social responsibility requirements, etc.  If you’re a financial services firm, a plethora of regulations guide how you account, record, trade and settle.  If you’re a pharmaceutical company, strict standards dictate how you run your clinical trials, record your findings, label your products, etc.  Now, add in all of the internal standards that govern your best practices related to your unique products, partnerships, and contract types.  As we can easily surmise, the resulting complications are immense.

The SOX Phenomenon

In 2003, after starting my own software services firm, I sat with the head of compliance for a Fortune 100 construction company to review their requirements for Sarbanes-Oxley.  Within a day of information gathering it became clear that their main objective was to manage a set of “controls” by recording who was responsible for each and whether it was working or not.  To be clear, a “control” is simply a process step, with an owner, that is in place to mitigate a risk to the organization.  So, what this company was doing was creating a “matrix” of relationships between identified risks, controls (mitigation steps), owners, and the process areas they relate to.  As I discovered in the weeks following these meetings, almost every company scrambling to comply with SOX was doing this exact thing, and almost all of them were keeping risk/control matrices in spreadsheets.  The problems with that approach were universal, and the need for a collaborative, relational data storage solution was obvious. 
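The spreadsheet "matrix" described above is really a set of related records. A minimal sketch of that relational view, with hypothetical names and fields (no actual client schema is implied), might look like this:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Control:
    control_id: str
    description: str
    owner: str                      # the person who attests to it
    process_area: str               # e.g. a Procure-to-Pay step
    operating: bool                 # is the control working?

@dataclass(frozen=True)
class Risk:
    risk_id: str
    description: str
    mitigated_by: Tuple[str, ...]   # ids of the controls meant to mitigate it

def unmitigated(risks: List[Risk], controls: List[Control]) -> List[str]:
    """Risk ids with no operating control behind them -- the audit red flags."""
    working = {c.control_id for c in controls if c.operating}
    return [r.risk_id for r in risks if not set(r.mitigated_by) & working]
```

The point of the relational form is exactly the query above: a spreadsheet row can record that a control failed, but only linked records let you ask, in one pass, which risks are now exposed and who owns the fix.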

Process is the common denominator

While I grew my business by developing software to address this requirement, other interesting similarities emerged from my client base.  Companies were not only interested in passing an audit or dealing with the SOX regulations.  They had a dozen or more other pressing regulations that required the same type of solution.  In each case, whether it was FDA regulation 21 CFR Part 11, ISO 9000, Basel II, or the variety of internal standards being addressed, the same basic needs existed.  Companies needed to understand the regulations; identify the risks, controls, gaps, remediation steps, owners, and process areas; and manage all of that information somewhere.  Most commonly, that meant in an independent spreadsheet.  And what was the one thread that formed the backbone of all compliance management?  Process.

Another Fire Drill?

What I found was that each process area (i.e., HR, Finance, Manufacturing Ops, etc.) was being hounded by internal audit teams, compliance directors, external auditors and quality managers to document their processes; document their controls; document their risks; document their issues; document remediation tasks, and on and on.  It’s amazing that anyone was ever actually doing their day job.  During the nearly ten years that I’ve worked with organizations on regulatory requirements, very little has changed in this regard.  I have yet to encounter a company that manages all of its compliance and regulatory requirements from a single platform.  Some organizations have made strides with managing process details in a more coordinated fashion, but most still deal with each compliance requirement as a separate challenge involving separate projects.

The issues with this condition are perhaps obvious: each time one of these regulatory initiatives is executed, operational leaders relive the exact same nightmare.  It’s Groundhog Day!  I’ve had leaders within pharmaceutical clients tell me that rarely does a year pass before they have to execute another fire drill of process capture, internal review, internal audit, and external audit.  Invariably, it’s a short-sighted exercise to check a bunch of boxes and get a rubber stamp so everyone can get back to normal operations.

The Single Platform Vision

Now for the good news: things are changing.  During my four-year tenure at Nimbus I’ve seen an awakening within highly regulated industries to stop the nonsense.  It all begins with proper process management, wherein organizations do the following:

  1. Define end-to-end processes using a simple notation for business end users.
  2. Govern process definitions and all related reference materials in support of process execution.
  3. Manage regulatory and internal standards within structured, governed statements.
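The third step above, holding regulatory statements as structured, governed records rather than prose in a binder, can be sketched as follows. The field names and the `coverage_gaps` helper are illustrative assumptions, not any product's actual API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Statement:
    ref: str                                    # e.g. a clause number in the regulation
    text: str                                   # the governed statement itself
    satisfied_by: List[str] = field(default_factory=list)  # ids of mapped process steps

def coverage_gaps(statements: List[Statement]) -> List[str]:
    """Statement refs with no process step mapped to them -- remediation candidates."""
    return [s.ref for s in statements if not s.satisfied_by]
```

Once every statement is traceable to the process steps that satisfy it, the next audit becomes a query against maintained content rather than a fresh documentation fire drill.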

Process management should not be the result of fire drill exercises to satisfy auditors; rather, BPM should be an integral part of knowledge capture, process improvement, compliance management and business agility.  As one executive summarized while heading into a board meeting after meeting with me, “We can’t improve what we can’t understand.”  As I’ll discuss in later postings, there is both a mechanical nature to BPM and a cultural one.  Very few cultures are used to maintaining a high level of accountability and continuous management of process content.  Just putting systems in place is not a cure-all, and as we’ll explore, organizational culture plays a huge role.

Process Content = Secret Sauce

August 4, 2011 § Leave a comment

As I discuss in my posting on Active Governance, I find a great deal of reluctance by executives to invest in managing and controlling process information.  A few factors play into this reluctance.  The first fundamental fact is that leaders hate spending time, resources and money on things that do not obviously make them money.  No company is in the business of governance.  Included in this category are other supporting operations such as Information Technology, Finance, Human Resources and Legal.  Most organizations do not view investments in those parts of the organization as value drivers.  They are viewed as infrastructure: necessary for holding up the theme park, but not for generating revenue and profits.  As long as these non-value-driver (NVD) processes serve the most basic needs, keeping costs minimized is the objective. 

BPM treated as the “necessary evil”

The approach to NVD operations is the same as the approach to compliance and governance.  They are necessary evils.  They must be done, but the objective is to minimize costs while providing only the essential requirements.  Spending beyond the minimum provides no additional value to the organization or its investors.  If I spend more money to ensure a higher degree of IT support, it’s not likely to translate into more sales this quarter.  In fact, the cost of new systems or additional resources will shrink margins and drain profitability.

While there is some truth to this common condition, I contend that organizations must appreciate the value of ALL processes.  They need to see help desk support processes as equally valuable as sales processes.  Further, they must harvest and protect these unique processes as much as they value and protect the secret sauce locked in the impenetrable safe.  Where does Coca-Cola or Heinz keep its secret recipes?  You bet they are secure and carefully managed. 

An IT Transformation Case

One of my clients has over 3,000 IT professionals in the US supporting an organization of about 300,000.  How well this IT organization operates has a huge impact on the overall health of the company.  But how are these processes managed?  How are they understood?  When I embarked on a major transformation effort with them, several issues were well understood. 

  • Costs could be reduced if they could consolidate overlapping processes across four major business units.
  • Quality could be improved if the global organization could identify the best performing processes and then standardize on those high performing processes.
  • A technology platform for managing process content could allow them to sustain all changes and improve processes on an ongoing basis.
  • Future change considerations could be achieved far quicker and with a greater degree of certainty if process content was maintained and understood.
  • Process content within a management platform could be leveraged for governance and compliance initiatives, eliminating future process documentation projects.


What Makes Us Tick?

From this experience and others, I’ve found that it’s the unique processes that organizations possess that can make massive differences in profit margins as well as massive differences in revenue generation.  A sales approach is a process.  A pricing approach is a process.  Partner channel development is a process and success in those approaches needs to be harvested as repeatable processes.  It’s these successes that are some of the most important assets of an organization.  But how are they harvested?  How are they repeated?  Who actually understands them?  If a key individual leaves or a group of key people leave, does our ability to tap into that market disappear?  Do we lose the ability to form specific partner relationships?

And within IT or HR or Compliance… what about those areas of the business?  Are they just as important as the secret sauce?  A colleague of mine, Chris Taylor, recently highlighted the “secret sauce” issue in his posting on end goals.  He surmises that the “end goal of BPM is creating revenue for your company”.  As I will detail in later postings on this blog, BPM impacts top line revenue, cost containment, bottom line results, compliance management, risk management, business agility and investor confidence among other key business benefits.

I find a varying degree of understanding and appreciation for protecting the “secret sauce” of the organization.  Some organizations are highly protective of their processes and understand that their unique way of managing provides higher margins, quality products, quality service, customer experience and competitive advantage.  Process management is the critical foundation; what is too often viewed as mundane infrastructure is, in fact, the secret sauce.  It may well be that new product development, marketing, and sales truly deserve the accolades, but again we must ask: how well has the organization captured that secret sauce and protected it?

Active Governance

August 1, 2011 § 2 Comments

What is true oversight?  How much oversight is prudent?

Governance, much like Business Process Management, is a term that is thrown around in a variety of contexts but rarely understood.  The term often refers to a structure that enforces rules.  The most even-handed definition I could find states that governance is: the set of processes, customs, policies, laws, and institutions affecting the way a corporation (or company) is directed, administered or controlled.  I like this definition because it tries to encompass “processes, customs, policies, laws and institutions”, but the most telling word is perhaps “affecting”.  Governance may provide some sense of structure, but only so far as it attempts to “affect” the behavior of the organization.  Also, what is key to understanding what I will call “active governance” is not just putting a structure in place, but putting an enforcement structure in place.  Governance cannot do much to “affect” behavior without a complete cycle of structure and enforcement.  Further, governance is not an end state.  For any company to claim it is well governed is a relative judgment that is meaningless when proclaimed by someone within that organization.  Organizations can establish thorough and sophisticated methods for sustaining specific levels of governance, but the degree to which such governance is adequate for employees, management and investors is variable. 

It’s similar to stating a risk management position.  Organizations, as well as individuals, may assess risk and make decisions to take specific risks based on value judgments.  It’s not the fact that organizations take risks that is an issue.  The challenge for risk management is how much effort goes into understanding and mitigating known risks and how much investment in mitigation is needed.  Further, an organization’s ability to address the unknown unknowns and to plan for unknown events is an important part of what active governance is. 

In 2003, the vast majority of US public companies moved at a furious pace to “comply” with regulations that were enacted to ensure executives were accountable for the financial disclosures of their respective companies.  With sections 302 and 404 of the Sarbanes-Oxley Act of 2002 (named for its congressional sponsors and known as SOX), companies throughout the US, and most global organizations with significant operations in the US, suddenly found they did not have an adequate governance structure and could not reliably demonstrate compliance with SOX.  Part of the challenge companies faced had to do with the vagueness of the law itself, but regardless, companies throughout the US did not have adequate governance to reliably and confidently verify the numbers and the statements on operational controls in their organizations.  As I began work in this area, each week I was ushered into large organizations, most with global operations, that were treating SOX as a major headache imposed upon them, something they needed to “get through”.  In most cases, the task of dealing with SOX was managed under the chief financial officer, and a “director of compliance” was either tasked or newly established to address SOX compliance.  In every case I worked, spanning industries from construction to financial services, consumer products, energy, and healthcare, organizations universally saw compliance as a problem imposed upon them by government.  It was a box to be checked, a hurdle to be cleared.  It was not seen as something that should be necessary within the organization; in fact, it was widely reviled and criticized as a huge waste of time, resources and money.  Millions were being spent within each organization to accomplish SOX compliance, and to nearly every executive it was viewed as a major waste and a major imposition.

Now, there are quite a few papers and books published detailing the variety of frauds and scandals that led to the enactment of SOX, so I won’t attempt to rehash the events within Enron, Arthur Andersen, WorldCom, Tyco and others.  These events also contributed to a view that slack governance undermines investor confidence.  And if investors cannot be sure that management’s disclosure of financials and reports can be trusted, then investors will turn away from holding equity or debt stakes in public companies.  This is logical and reasonable.  So, why should corporate executives take such issue with improving their governance models?  A key point I will draw out in a following post discusses how important process governance is and how it serves as a foundation of the organization.  In other words, the formula for success, what I call the “secret sauce”. 

Meanwhile, with little appreciation for the value of governance, the need to rush a formal reporting structure into place was not trivial.  Not a single organization I encountered had a governance structure that allowed process owners to confidently attest to the performance of their financial controls.  Beyond risk-control structures, organizations also could not reliably attest to the financial reporting within every operating unit.  Given the nature of my work and the confidentiality of my relationships, I am not disclosing the names of the organizations I’ve worked with, but the issues were universal.  With such governance immaturity, the level of investment required to approach the requirements of SOX reporting was massive: advice from consultants; software systems to aid attestation and reporting; internal resources to spend time dealing with such requirements; and, ironically, external auditors to provide additional advice and services.  But rather than look at this regulatory challenge as an opportunity to improve risk management, operational effectiveness and investor confidence, executives became mostly defensive.

Now, that was 2003 and this is 2011.  A lot of maturing has occurred during this stretch of time, and I’m encouraged by the understanding that now exists about corporate governance.  There is still, however, a failure of public companies to fully appreciate the value of governance in leveraging process information.  The ability of executives to fully appreciate the value of harvesting process information and controlling those assets is at the core of establishing a successful BPM strategy.  BPM is about harvesting process assets and fully leveraging them as key organizational assets.

Where Am I?

You are currently viewing the archives for August, 2011 at Process Maximus.