The Interconnectedness of Things

September 7, 2011 § 6 Comments

This past week the company I work for, Nimbus Partners, was purchased by a larger software company, TIBCO.  I can't comment on the due diligence process behind the deal, but when any large acquisition is considered, a great amount of analysis must be performed.  To value a software company, the acquirer must assess the product technology, its position in the market, how the product fits within the acquirer's existing family of assets, and the company's current financial state as well as its projected earnings potential.

Minimizing Risk

This acquisition is one of many major decisions that executives at TIBCO and other corporations make every year.  Some investment options require incredibly in-depth analysis, while others may be decided quickly with far less due diligence.  There are plenty of reasons for performing an analysis on an investment to a given level and not to a finer one.  When purchasing a stock or making a trade on an existing holding, how much information is driving your decision?  Did you read the prospectus or the latest 10-Q?  Did you attend the recent investor conference calls with management?  Did you get answers to your concerns about the latest one-time charge to net income?  The odds are you didn't.  The odds are you're trading on a gut feel for the situation, or on some limited understanding, and you accept that risk because you simply don't have the time to do all of the research you would have liked.  Now, you might also put your trust in money managers or fund managers, expecting that they are doing all the analysis required to make good value judgments in line with your risk profile and your investment objectives.  Again, are you sure they are going down to a depth of analysis that ensures risk is minimized?

A Hedge Fund Legend

Recently, I read about a very successful investor named Michael Burry.  For those of you who haven't heard of Mr. Burry, he gained a degree of notoriety for wisely betting against banks' mortgage holdings and cashing in massive returns for his hedge fund when the credit crisis hit full tilt in 2007.  His brilliance wasn't just that he recognized a good bubble when he saw one; it was the way he figured out how to capitalize on his realization that a spectacular number of mortgages were doomed to fail.  The fact is, when Mr. Burry first became convinced that the type of lending banks were engaged in was destined to result in large numbers of defaults, there was no real instrument for wagering against the performance of these notes.  The various tranches of subprime mortgage bonds could not be sold short.  Even with his conviction that the subprime mortgage bond market was doomed, he could not capitalize on it.

Then came Mr. Burry's discovery of the credit-default swap.  It was basically an insurance policy that could be purchased against corporate debt, but that was only useful for betting against companies likely to default, such as home builders.  Ultimately, he convinced a number of big Wall Street firms, including Deutsche Bank and Goldman Sachs, to create credit-default swaps on subprime mortgage bonds themselves.  Now, what made his work absolutely brilliant was the fact that he would spend untold hours poring over each bond prospectus, only betting against the riskiest of those assets.  He was performing due diligence on the individual loans: analyzing loan-to-value ratios, which loans carried second liens, location, absence of income documentation, etc.  Within each bond, he could sort out the riskiest of the lots, and incredibly enough, Deutsche and the other banks didn't care which bonds he took positions against.  He essentially cherry-picked the absolute worst loans (best for him) and found the bonds that backed them.
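To make that kind of screening concrete, here is a minimal sketch in Python of scoring loan pools by the risk factors mentioned above.  The field names, weights and data are entirely invented for illustration; Mr. Burry's actual analysis was far deeper than this.

```python
# Hypothetical loan records pulled from a bond prospectus.
loans = [
    {"ltv": 0.99, "second_lien": True,  "income_docs": False, "state": "CA"},
    {"ltv": 0.80, "second_lien": False, "income_docs": True,  "state": "TX"},
    {"ltv": 0.95, "second_lien": True,  "income_docs": False, "state": "FL"},
]

def risk_score(loan):
    """Crude additive score: higher means more likely to default."""
    score = loan["ltv"]                                # high loan-to-value = thin equity cushion
    score += 0.5 if loan["second_lien"] else 0.0       # second liens stack more debt on the house
    score += 0.5 if not loan["income_docs"] else 0.0   # no-doc loans
    return score

# Sort a pool worst-first: the "best" bonds to bet against back the worst loans.
for loan in sorted(loans, key=risk_score, reverse=True):
    print(round(risk_score(loan), 2), loan["state"])
```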

Mr. Burry would ultimately bring his investors and himself astronomical returns at a time when the vast majority of investors lost roughly 50% during the crisis.  If you read about Mr. Burry, you'll find there is much more to his story, as he is unique in many ways, but one key point that separates him from the pack is that he does his homework.  Details matter.  How these loans were structured mattered to everyone connected to them.  In these bonds were real loans that represented real value.  Understanding the risk factors would immediately point to a very low valuation on these bonds.

I'm not going to delve into the full issue of responsibility relative to loan originators, banks, Fannie Mae, borrowers, etc., but suffice it to say that solid due diligence reduces the risk of any transaction.  The more you understand about the asset under consideration, the better you can predict its performance.  It's as simple as that.

Oneness

So, what's with my title, "The Interconnectedness of Things?"  Well, it got me thinking about just how interconnected we all are.  Without getting all Jean-Paul Sartre on you, let me point out the most common difficulty in all of management: interconnectedness.  That's right, interconnectedness.  The fact is, executives hate it.  But it exists.  We tend to measure the performance of an exact metric, an exact process step, or an exact person.  We like to think that sorting out the specific items of measurement can enable us to understand what is strong and what is weak.  Fix the weak bits, keep the strong bits, and voila, you have Lean.  But from the work I've been involved in, it's not so simple.  Just as it is difficult to sort out all the bits that separate a good loan from a bad loan, or a good mortgage bond from a bad one, business processes can be extremely complex and highly interdependent.

How do we get our arms around the complexity of process?  Mostly, in very distinct ways.  How many of us love to look at organizational charts, value chain analysis diagrams, and system architecture diagrams?  If you are nodding "yes," I'm deeply sorry.  The fact is, we are trying to ensure we understand the interconnectedness of things, but we often do that work in silos.  Efforts to diagram processes, the entity relationships of systems, or the relationships between people are most often one-off attempts with a singular purpose or project in mind.  They are not done to ensure a wider scope of understanding is gained and maintained.  And therein lies a serious shortcoming of those efforts.  With islands of understanding, there may be some level of interconnected understanding, but the silos remain silos, and whenever we look at those groupings within a map or chart or diagram, too much information is lost.  The value of what you have is just as quickly defined by what it does not have.  (Perhaps some Camus?)

Devil’s in the Details

So, how do we connect all these silos, and how do we know when we have enough detail?  These are big questions for which there are no silver bullets.  During a recent engagement, I was working with a global IT organization that brought together four business units to define standard global processes.  Ultimately, the idea was to consolidate where possible, but initially they needed to capture how each unit was operating.  I've done this type of work a number of times, and what still amazes me each time is how often we find gaps in processes and areas that are not understood, as well as overlaps where steps are replicated and no one knows what the others are doing.  As we embarked on the journey of process design, the key question this team asked of me was, "How many levels down do we need to go?"  My answer was pretty simple: go down to the level of detail at which someone from outside the process area can read and understand what is happening without any ambiguity.

Imagine, if you will, an organization that has documented down to that level in a consistent way across the whole enterprise.  Further, imagine a singular map with diagrams that connect to all appropriate related process steps and to all related electronic content, within a platform that provides instant feedback from the personnel who perform the operations.  Now, that's getting your arms around complexity, and it tells the story of the interconnectedness of things.

Finally, once we gain perspective on this interconnectivity we can truly understand what is working and where risk lies.  For it is risk that we are constantly managing.  The banks that held large amounts of mortgage credit were blind to what was in the big bag of bonds that contained smaller bags of loans that contained all kinds of facts, some of which were never gathered (such as income verification).  Did they completely understand the interconnectedness of things?  Did they get down to a low enough level of detail to really understand the assets that so much was riding on?  To reduce operational risk, the devil’s in the details.  Get your arms around process, get your arms around the details and know what you’re buying into.

The Power of Events: Nimbus Acquired by TIBCO

September 1, 2011 § Leave a comment

Big news this week at Nimbus Partners, a company I joined exactly 4 years ago today.  We were acquired by TIBCO, a larger company with a diverse portfolio of BPM products.  I've known about TIBCO for about ten years now, as they are pioneers in the development of middleware, messaging and enterprise application integration – what is now a core capability within Service-Oriented Architecture, or SOA.  Now, SOA is by no means new, and its maturity is well advanced in large enterprises.  Many organizations have spent and continue to spend substantial amounts on its promise, and a fair number of challenges remain.  Still, as with most revolutionary technologies, realizing the value of something that radically shifts what is possible takes time.  TIBCO has been at the forefront of SOA, BPM and BI with technology that alters how information flows between systems and how quickly business users can get to answers.  The potential that lies before me and my company is exciting, with the promise of connecting advanced infrastructure capabilities with Nimbus' cutting-edge business process management platform.  Now, I'm not going to delve into the intricacies of what is possible or which bits fit with which widgets, as I'm sure whatever I imagine will evolve into something quite different.  What I will tell you is that this acquisition is an exciting event, one that will likely impact a wide variety of global enterprises.

The Complexity of Events

On the topic of events, I'm reminded of a book I read years ago called, aptly enough, "The Power of Events," written by a gentleman named David Luckham.  I was fortunate enough to hear him speak at a Gartner conference not long after I read his book, when it was a groundbreaking topic.  Most interesting was his ability to illuminate the capabilities of Complex Event Processing (CEP).  This capability matters primarily for organizations that need to minimize risk to their systems and the underlying assets that those systems control.  The need to minimize risk is evident in just about every organization I've ever entered.

I remember being inspired by Luckham's book and speech, and it had a big influence when I founded AVIVA Consulting.  One of the first opportunities to realize some of the key capabilities within CEP came with a partner company.  We took a simple technology for tracking real-time data flowing from any web service and integrated their software with our Microsoft-focused collaboration stack of SharePoint, SQL, InfoPath, Office and K2, a third-party workflow product.  We were mildly successful with it, but quickly became focused on risk and compliance requirements and never fully developed the real-time collaboration solution.  Later, after I sold our flagship product, ACES, to a Microsoft service provider, Neudesic, I spent a year working to take a Microsoft-platform Enterprise Service Bus (ESB) to market.  I named it Neuron (yes, I'm still proud of how clever a name it is), and we launched it shortly before I left to work for Nimbus.  So, even though I was on the product management and product marketing side of the ESB product, I became intimately familiar with the core capabilities and potential of middleware messaging and SOA in general.

Now, to come full circle back to the present: this event is the joining of TIBCO, the most innovative company in the middleware space, with Nimbus, the most innovative company in the BPM content governance space.  I couldn't be more excited to be at the nexus of this formation (you can kick me later if you get this).  Personally, I'm excited to see what will happen when we sit with financial services and pharmaceutical companies to look at how risk is managed within quality systems or compliance initiatives.  How well can most of these organizations manage real-time events, and how well designed are their processes to deal with adverse or opportunistic circumstances?  This is where the opportunity lies.  As I point out in my posting on business agility and the need to minimize risk through agile processes, organizations need to design processes that allow rapid response to unexpected conditions as well as the known possibilities.  The events that tend to be earth-shattering are not the anticipated ones, so how well we have modeled the organization to respond is critical.  Also, as I detail in my posting on checklists, at a minimum we must address the known risks with clear process handling instructions to ensure quality execution.

Rapid Response to Events = Reduced Operational Risk

So, imagine, if you will, a situation where fraudulent phishing attacks attempt to lure bank customers into providing their login credentials to make a change to their account.  Rather than connecting to the real bank, customers are connecting to a fraudulent system that grabs their login ID and password.  The fraudsters then log in to the real system, change the password and begin making transactions to pull money out of the victim's real account.  With CEP technology, banks can see in real time how much activity is occurring, and when irregular volumes occur on a given function (such as 10X the usual number of password changes during the past minute), the system disables the password change function and alerts the appropriate administrators.  Cool stuff, right?  Now, tie in the ability to provide clear instruction on the manual handling that the administrator needs to perform.  This outlier password change event is rare, and the steps required of the administrator may be exacting.  That's where Nimbus comes in.  The admin will have clear steps to take, ensuring fast and accurate handling with quick access to all necessary resources and reference materials.  End game?  Very few, if any, customers are impacted.  Very little, if any, financial damage is done to the bank.  Preventable adverse events are prevented.  And we can imagine in reverse how opportunistic events can also be quickly acted upon, with decision-makers having clear instruction on execution.
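To make the rule concrete, here is a minimal sketch of that kind of rate trigger in plain Python rather than any particular CEP engine.  The baseline, multiplier and window are invented numbers; a real CEP deployment would express this declaratively over event streams.

```python
from collections import deque
import time

class PasswordChangeMonitor:
    """Toy CEP-style rule: disable a function when its event rate
    spikes far above the expected baseline (e.g., 10X per minute)."""

    def __init__(self, baseline_per_minute=10, multiplier=10, window_seconds=60):
        self.threshold = baseline_per_minute * multiplier
        self.window = window_seconds
        self.events = deque()   # timestamps of recent password changes
        self.enabled = True

    def record_event(self, now=None):
        now = time.time() if now is None else now
        self.events.append(now)
        # Slide the window: drop events older than `window` seconds.
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        if self.enabled and len(self.events) > self.threshold:
            self.enabled = False   # shut down the password change function
            self.alert_administrators(len(self.events))

    def alert_administrators(self, count):
        # A real system would page the on-call admin and link straight
        # to the manual-handling procedure (the Nimbus checklist).
        print(f"ALERT: {count} password changes in {self.window}s; function disabled")
```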

Understanding Events in Context

The key to how an organization processes and responds to a large volume of diverse events is at the core of what BPM is about.  It's not just process definition for the sake of checking a box that the auditor approves.  It's about improving the decision-making ability of management and other operational decision-makers.  It's about reducing operational risk.  And it's about continually tweaking and improving those processes as we learn what is working and what is not.  Gaining real-time event information can be hugely beneficial, but its value increases when we understand those events in the context of precise operational activities.

Those of you who follow my blog already know that I'm not in the habit of reporting news or projecting the future, so consider this post a rare exception.  Given the personal nature of this event and the impact it will likely have on the future of BPM technology, I felt compelled to comment.  In a future post, I will explore technology specifics, including how governance, risk and compliance requirements are handled with the variety of technologies available, and the specific categories of capabilities including automation, content management, master data management, SOA, enterprise architecture, social networks, collaboration, search and reporting.  There are a variety of analysts and prognosticators jumping to conclusions about what this merging of technical capabilities will mean to the market.  I can tell you that this newly joined organization looks extremely promising, but the proof will be in how we make it happen with our customers.  It's how Nimbus has always proven its advantage in the market: through real execution and value creation in real customer environments.  With the added strength and reach of capability that TIBCO brings, we should be proving what is possible very soon.

New Process Adoption: How do we get people to change behavior?

August 29, 2011 § 4 Comments

Earlier in my career, circa 1994, I was working for Lotus Development and I was having lunch with my boss.  We were sharing a bit of small talk and I remember telling him about a new personal accounting application called Quicken.  I explained all the cool things it did and how I could track my expenses, put line items into categories, build charts and graphs and see where my money went.  He listened as I went on and on about how empowered I felt that I knew what I was spending my money on.  He then asked, “Are you really changing your spending behavior now that you use it?”  And I thought about that for a moment and then had to admit, “Well, no.  Not yet anyway.”  That was all he needed to ask.  I got his point.

There is a lot of this same phenomenon in the areas of governance, risk and compliance, as well as with operational excellence and quality initiatives.  It can be exciting to start thinking about improving process execution: understanding exactly what is happening with end-to-end processes, who is responsible, and how activities should be measured.  For large risk and compliance efforts, new control methods, systems and activities are designed to address areas identified as high-risk with high impact potential.  Documentation efforts are extensive, and training of personnel occurs to ensure understanding of these new processes.  The question that should be asked, the same one my previous boss asked me, is, "Are you really changing people's behavior?"  It's one thing to implement a method and provide training, but not everyone is going to adopt the new application, system, method or rule.  There are those in the organization who have been doing their job, their specialized skill, for a very long time.  Simply publishing a new process diagram, a new policy, or a best practice document is not going to ensure adoption of a new process.  Further, when process change or compliance regulations affect a wide variety of process areas with dozens or hundreds of roles impacted, how do we ensure adherence to the newly stated "way of working"?  Is it okay to have 80% adoption?  90%?  99%?  How do we know where the newly defined processes are not being followed?  And if we are not at 100%, what is at stake?

Pharmaceutical Quality Management

Let me cite a recent example.  I spent several months working with a global pharmaceutical company on SOPs, or Standard Operating Procedures.  SOPs are at the heart of managing quality throughout product development, starting with R&D, through clinical trials, and ultimately manufacturing and distribution.  SOPs become the definition of how to execute each process step in order to complete an end-to-end process.  The criticality of execution is magnified when you're dealing with medications that will ultimately be marketed, administered by physicians and taken by large numbers of patients.  The stakes are extremely high, with enormous investment by the company in each effort and an extremely high risk of failure, including scrutiny by the FDA as well as auditors.

Quality Management is a fundamental discipline within pharmaceuticals.  So much so that these companies maintain a governance function purely for the sake of managing SOPs and ensuring operational participants have read and understood the process before they can execute any activity on each and every project.  That was a bit of a mouthful, so let me simplify it from the end-user perspective.  If you are a clinical technician and a new trial is starting, a new SOP will be published and you will be asked to read the document (usually 40 or so pages long), take a quiz online and then "sign off" that you have read and understood (R&U) the procedure.  Now, from an administrative perspective, it's also quite a job managing not only the content that needs to be gathered for defining the SOP, but then administering the SOP R&U tasks themselves.  This organization conducts dozens of trials per year, many running simultaneously, with hundreds of participants just within the clinical trials team.  So, you can imagine the complexity.  Now, at the heart of the issue is the SOP itself.  For most of the past twenty years, SOPs have been large documents that are created in Microsoft Word, reviewed, approved and then converted to a read-only Adobe PDF.  The document is then stored in an EMC Documentum document management system (DMS).  The DMS captures the necessary metadata about the SOP and ensures that it was "published".  This distinction is important for compliance with FDA 21 CFR Part 11, a regulatory standard that all participants must adhere to.
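Just to illustrate the bookkeeping burden behind R&U administration, here is a minimal sketch in Python.  The record shapes and names are hypothetical, not how any particular DMS models this, but they show why tracking sign-offs against SOP versions across hundreds of participants is a real job.

```python
from dataclasses import dataclass

@dataclass
class SOP:
    sop_id: str
    version: int
    required_roles: frozenset   # roles that must read-and-understand it

@dataclass
class SignOff:
    """One participant's R&U sign-off against one SOP version."""
    sop_id: str
    version: int
    user: str
    quiz_passed: bool

def ru_completion(sop, roster, signoffs):
    """Percent of required participants signed off on the *current* version."""
    required = {user for user, role in roster.items() if role in sop.required_roles}
    done = {s.user for s in signoffs
            if (s.sop_id, s.version) == (sop.sop_id, sop.version) and s.quiz_passed}
    return 100.0 * len(done & required) / len(required) if required else 100.0

sop = SOP("SOP-123", 4, frozenset({"clinical technician"}))
roster = {"alice": "clinical technician", "bob": "clinical technician"}
signoffs = [SignOff("SOP-123", 4, "alice", True),
            SignOff("SOP-123", 3, "bob", True)]   # bob signed off an old version
print(ru_completion(sop, roster, signoffs))       # 50.0
```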

Now, my client had a number of problems that many other pharmaceutical companies share.  The big ones were:

  1. How do we ensure that people on the project really understand the procedure?
  2. When participants are unsure of a procedural step, the existing PDF documentation within the DMS is unwieldy, and finding answers is difficult.
  3. Many SOP documents contain procedural details that overlap other SOPs and there are often inconsistencies between them.
  4. Gaps may exist between SOPs and the exact steps and responsibilities become unclear.
  5. The effectiveness of the SOP documents in changing behavior is widely believed to be lacking.

So, back to my parable on using Quicken.  Good material, lots of investment, but does the material actually impact how work gets done?  The short answer is “not well enough”.  When you give a broad audience a massive document locked away in a complicated environment, you don’t get the intended results.  You don’t get adoption and adherence to the stated process.

The Power of Simplicity

Atul Gawande, a general surgeon, is also the author of several books, including a recent release entitled "The Checklist Manifesto".  In "Checklist", Dr. Gawande details how medical surgery, as well as other very complex undertakings such as constructing skyscrapers and flying airplanes, shares a common tool that greatly impacts the quality of execution: a simple checklist.  Recently, Dr. Gawande spearheaded efforts to educate and deploy the use of checklists for surgical procedures in hospitals across a variety of environments; many in developing countries, but also in inner-city hospitals in the US and other developed countries.  The results are astounding, as the checklist program was able to greatly reduce problems from surgical errors such as post-operative infections, bleeding, unsafe anesthesia and operating room fires.  Incidents of these common problems dropped 36 percent after the introduction of checklists, and deaths fell by 47 percent.  After the study, staff were surveyed and asked if they would want the checklist used if they were being operated on.  93 percent said "yes".

How does the concept of a checklist apply to the clinical trials process within a pharmaceutical operation?  A checklist can serve just as the existing SOP is intended.  People on the trial team are responsible for understanding the whole process as well as their individual role.  But as we now understand, massive documents have inherent problems:

  1. maintaining integrity of the information
  2. readability by the audience
  3. providing guidance at the point of need

The net objective of ensuring execution of each process step is not being achieved with SOP documents.

That's where a checklist comes in.  The work I've been a part of involved a major paradigm shift away from traditional large-volume, free-form SOP documents and toward a new model that takes advantage of cutting-edge BPM technology.  The client is using the Nimbus Control enterprise BPM application.  This model involves the following core BPM principles (a minimal data sketch follows the list):

  • documenting end-to-end process in a universal process notation
  • linking all relevant electronic documentation at the activity levels
  • assigning ownership for every activity
  • building a Nimbus Storyboard (checklist) to correspond to each SOP
  • establishing end-user views to provide pre-populated lists of SOPs
  • providing end users with search capabilities to get process diagrams, related documents and Storyboards in a structured keyword taxonomy
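Here is a minimal sketch of how those relationships might hang together as data.  This is my own illustration, not the Nimbus Control data model; the class and field names are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    name: str
    owner: str                                           # every activity has an accountable owner
    references: List[str] = field(default_factory=list)  # linked documents, regulations

@dataclass
class ProcessMap:
    """End-to-end process captured in one universal notation."""
    name: str
    activities: List[Activity] = field(default_factory=list)

@dataclass
class Storyboard:
    """Checklist view over a process: the ordered subset of activities
    relevant to one SOP, with all guidance attached at each step."""
    sop_id: str
    steps: List[Activity] = field(default_factory=list)

consent = Activity("Obtain informed consent", "Clinical Ops Director",
                   ["SOP-123 section 4.2", "21 CFR Part 11 reference"])
trial = ProcessMap("Run clinical trial", [consent])
checklist = Storyboard("SOP-123", [consent])   # same activity object, so one update
                                               # propagates to every linked view
```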

A Revolutionary Tool

From an operational execution perspective, the use of Storyboards is revolutionary.  A Clinical Operations Director can now open a Storyboard, which is essentially a list of activities relevant to their project, jump to the exact activity at the point of need, and see all the guidance, references and regulations necessary to perform that step.  A large team of participants contributed to defining this new way of working within the clinical trials operational team.  Contributors came from the operational team, the governance office, IT, and the Quality Assurance organization.  The result is a solution that promises an ROI of over 30X within three years.

Improving operations that impact the quality of how we develop medicines is important not only for the companies that invest in that work; it also impacts doctors, patients and all those who care about those patients.  The impact of process performance, process execution and quality cannot be overstated.  These initiatives are not driven purely by regulatory requirements and audit findings.  The investment in these technologies improves all aspects of the organization, the work experience of all participants and the pride that the organization can take in reducing errors and improving quality execution.

Simplicity Takes Work

I'm reminded of a quote from Steve Jobs, who just this past week resigned as CEO of Apple: "That's been one of my mantras — focus and simplicity.  Simple can be harder than complex:  You have to work hard to get your thinking clean to make it simple.  But it's worth it in the end because once you get there, you can move mountains."

The Checklist, or the Storyboard, helps make massive amounts of complexity and detail quite simple.  I applaud the ambitious courage of my client for taking bold action to transform the SOP process.  As Mr. Jobs noted, “you have to work hard to get your thinking clean and make it simple.”  My client has worked incredibly hard to design a simpler way for people to perform exacting work.

Operational Risk = Investment Risk

August 17, 2011 § 3 Comments

Investment Risk

The historical focus of securities analysis, valuation and prediction has centered on the tracking of historic prices.  Investment risk has been measured based on price volatility, particularly historic price volatility.  Price valuations (stock or bond values) have historically been based on the financial performance, cash position and cash flow of the issuing corporation.  Most measures review comparable financial ratios and industry benchmarks to form an assessment of proper pricing.  This practice of value pricing is flawed.  Future prices have little to do with past price behavior, and risk is in no way correlated to price volatility.  Risk has everything to do with specific operational exposure, legal exposure, regulatory risk, as well as financial risk.  Price risk is the tail, not the dog, and we are wasting our time measuring the price trading patterns of any particular security.

Risk is not something any company, operational unit, financial arm or investment firm can ever fully manage.  Risk is part of existence.  Risk is part of living, for individuals as well as companies.  Minimizing operational risk, while noble in its cause, is completely inadequate for satisfying investors' risk concerns.  Investors' risk profiles are best served when companies handle adverse events in the most adaptable, expedient and agile manner.  Companies that know how to respond to the unknown unknowns, not only the expected risk scenarios, are the companies that know how to deal with risk, and those are the companies that survive and thrive for decades or centuries.  This operational maturity for handling adverse events is where risk truly lies and what measurements are most relevant for investors to understand.  Like everything we measure, operational risk is relative, and how one organization measures up against comparable companies in the appropriate industry sector should determine the risk premium investors apply to prices and price change expectations.

Market Shifts: Could Blockbuster See the Netflix Threat?

So, is there ever a solid method of valuing stocks – understanding the risk inherent in the security and making a value judgment on its price relative to alternative securities within a similar risk class?  I believe the analysis needs to be based on the management of operational, financial and legal risk, along with understanding the operational objectives, measures, tracking ability and business agility that each relevant comparable company displays.  How could the valuation of Blockbuster drop so precipitously?  Could we have measured their market position in 2003?  Of course!  Could we have measured their operational and financial controls?  Absolutely!  Could we also have seen how unprepared they were to alter their market model to deal with alternative competitive models?  Yes, we could have seen this risk, as Blockbuster had very little ability to manage their processes and alter them as needed.  They could not move when the market shifted, and they took years to try to compete with Netflix.  Why?  Because they were NOT agile.  They couldn't just change their model, even though they knew they needed to.  By the time they implemented a competing service to Netflix's, their lunch had been eaten.

Fraudulent Schemes

What about Barclays circa 2003?  A phishing attack, wherein a hacker steals email information from Barclays.  The hacker then sends emails to Barclays' customers asking them to change their passwords.  The hacker steals the real passwords and immediately changes them, locking out customers.  Finally, the hacker transfers funds from each account, effectively stealing tens of millions of dollars in a two-day span.  What's astonishing isn't that Barclays or any other bank could be fooled by this scheme.  In the early 2000s, this type of phishing attack was a new method of fraud, and they couldn't have foreseen it coming, as obvious as it might seem today.  What is astonishing is that Barclays didn't have system parameters on their password change function to prevent such outlier events from occurring.  When thousands of users were changing their passwords in a few hours, there was no automated trigger to shut down the service.  If the normal rate of password changes is only 100 per day, an event multiple standard deviations away from the average usually means something is wrong and the service needs to be halted and evaluated.  But Barclays did not have such processes in place, and they were not protecting their assets in a prudent manner in 2003.  As a shareholder, the most important questions relative to the risk of your investment in Barclays must be: "Does Barclays have their arms around their processes?  Do they consistently look to improve and tighten their risk and control structure?  How do they govern their processes?  How visible are their processes?  How much risk is embedded in individual or small-department knowledge domains, wherein a rogue trader can bring down the entire company, as occurred at Barings Bank in 1995?"  We don't have to look far for examples.  How about the latest with News Corp's wiretapping scandal?  How well did Rupert Murdoch and the rest of the leadership team understand the risks they were taking?
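For what it's worth, the statistical trigger described above fits in a few lines.  This is an illustrative sketch with made-up numbers, not how any bank actually implements it:

```python
import statistics

def is_outlier(count_today, history, max_sigmas=3):
    """Flag an event volume many standard deviations above the norm.
    `history` holds recent daily password-change counts (~100/day here)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return count_today > mean + max_sigmas * stdev

history = [95, 102, 98, 110, 97, 105, 99]   # a normal week
print(is_outlier(110, history))              # False: ordinary variation
print(is_outlier(4000, history))             # True: halt and evaluate
```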

Traditional investment analysis becomes moot when we consider these outlier events (market shifts, fraud attacks, internal fraud, legal rulings, reputational loss, etc.).  While Sarbanes-Oxley put some basic, prudent rules in place for public companies in the US, this regulation does nothing to reveal the true risk position of the investment.  And it's risk that is the issue here.  Every investment presents a risk/reward proposition.  If I'm going to incur greater risk, I'd better have the opportunity for greater rewards.  And vice versa: I may choose to limit my risk exposure, knowing full well that my return opportunity is modest.  US Treasuries are considered among the least risky securities to hold, and consequently they yield very small returns relative to other notes with identical coupon and duration.  You are virtually guaranteed your 3.25% return on a 10-year note, with "virtually" no risk of default.  Okay, if we use the US Treasury as a benchmark for no risk, then what premium above this risk-free rate illustrates relative risk, and how much should investors be compensated for each 1% of additional risk?

Starting in the 1970s, financial scholars embraced the idea that a stock's risk was associated with price volatility.  Further, they measured past volatility as a likely indicator of future volatility and thus the inherent risk.  Barr Rosenberg's consulting firm, Barra, would eventually develop the "Beta", a quantified measure that represents a stock's sensitivity to movements in the overall market.  A stock with a Beta of 1 would have identical price volatility to the broader market; more than 1 meant more volatile, and less than 1, less volatile.  Suddenly, the risk factor of a stock could be calculated, quantified and estimated.  Another economist, Robert Engle, would win a Nobel prize in 2003 for his work on modeling time-varying volatility.  Amazing.  There you have it.  Investment risk is all about past pricing.  Nonsense, I say.
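For the record, the calculation itself is simple: beta is the covariance of a stock's returns with the market's returns, divided by the variance of the market's returns.  A minimal sketch with invented monthly return series:

```python
import statistics

def beta(stock_returns, market_returns):
    """Beta = Cov(stock, market) / Var(market)."""
    mean_s = statistics.mean(stock_returns)
    mean_m = statistics.mean(market_returns)
    n = len(market_returns)
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / (n - 1)
    return cov / statistics.variance(market_returns)

# This stock amplifies every market move, so its beta comes out above 1.
market = [0.02, -0.01, 0.03, -0.02, 0.01]
stock  = [0.04, -0.03, 0.05, -0.05, 0.02]
print(round(beta(stock, market), 2))   # ~2.1
```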

The High Impact of Unlikely Events

In 2007, Nassim Taleb published what would become a Wall Street favorite, "The Black Swan: The Impact of the Highly Improbable" (not a lot of ballet in this one).  The important core theory that Mr. Taleb establishes is that unexpected events happen, and the impact that some of these events have is astronomical.  Whether we're talking about 9/11, the capital markets collapse of 2008, BP's oil catastrophe in the Gulf of Mexico, or Barings Bank's rogue trader: micro-level or macro-level events, either of which may be completely unpredictable, have the potential to be game-changers.  The big question becomes: how well positioned are we to deal with such events?  Forget about trying to capture, measure and plan for each exact event; that's all fine and good as far as it goes.  The real question is how capable, how agile, the organization is when the big one hits.

When a competitor exploits a new technology that undermines our traditional sales and delivery models, how well can we analyze the issues, develop strategic approaches, institute new models and underlying processes to maintain our market position?  These are the questions that Blockbuster could not answer adequately.  Their event wasn’t even an overnight impact.  They had many months to adjust, but like a big aircraft carrier, they just couldn’t change course quickly enough.

Only The Nimble Survive

In a recent blog post (http://bit.ly/lQpHYn), Torben Rick asserts that "Business history is punctuated by seismic shifts that alter the competitive landscape. These mega trends create inescapable threats and game-changing opportunities.  They require businesses to adapt and innovate or be swept aside."  So, when we return to the topic of risk and how well organizations are managing it, let's not focus on stock price volatility.  Rather, let's look at how organizations approach their business models in an agile manner.  Just how well are they positioned to change processes quickly and respond to "Black Swan" events?  Because whether there will be such events is not in question.  There will be.  Another fine recent blog post comes from Norman Marks (http://linkd.in/qyQazb), who notes that "an organization needs not only to understand and assess its risks, but it needs to have a culture that embraces the active consideration of risk…".  It's this consideration that I suggest includes active response to the unexpected.

If you are dealing with management that is not capable of rapid change, you are dealing with high risk.  Now, my definition of risk might not be quite as quantifiable and as easily comparable as a risk quotient like Barra’s “beta”.  But for my money, it’s what really matters.

Process Performance, Complexity and How I Learned to Stop Hating Traffic Lights

August 12, 2011 § 2 Comments

Measurement and Action – how do we improve performance through a cycle of measurement and action?

Measuring the performance of operations is one of the most challenging of all management disciplines.  Financial performance tends to draw the most attention, with investors and management having stakes that depend on the cash flow, income statement and balance sheet results each and every measurable period.  The quarter-end release of financials dominates most business news cycles, and companies commit a flurry of resources and activity to getting the numbers to align with pre-set goals, objectives and related expectations.  Sales teams are pounded hourly to get POs in before the quarter's deadline.  Back-office operations work frenetically to meet reporting deadlines and get all accounts reconciled in time.  And of course, executives prepare summary statements, conference interviews and advance guidance statements to investors in an effort to set a level of confidence in the direction of the company and how the broader market is impacting its results.

But what I’ve always found troubling with this nearly universal cycle of chaos during each financial reporting cycle is that the financials that are measured represent just a small picture of the overall health of the business.  While it’s an important part, it’s by no means even the majority of what investors, board members, partners and employees should be concerned with.  There are so many other factors that need to be measured, trended, compared and ultimately weighed into the analysis and assessment of each organization. 

A Business Services Example

Recently, I was working with a Shared Services division of a global corporation.  This business services division manages all operational financial services, including payroll, accounts payable and receivables, as well as human-resource-related operations.  To give you an idea of how complex their existing processes are, consider these conditions:

  • Many processes are "black box" in nature, managed by two separate third-party global services firms.  The company does not know or see what the third parties do, only the results.
  • Other processes are managed by the shared services division, which serves many of the businesses, but not all.
  • The company had recently acquired another set of businesses worth several billion USD, and those organizations would have to be integrated into the shared services division as well as the outsourced third-party operations.

The challenge for this organization centered on the need to consolidate processes where possible and to ensure all processes were designed in a way that the varying participants could understand.  As any of you who work in process management and process improvement know, just getting a common framework for communication is a major challenge.

Documentation existed everywhere, across all parts of all entities.  But every bit of the process documentation was disjointed; written sometimes in Microsoft Word or Visio or PowerPoint, or embedded in SAP documentation, or even in printed notebooks for which no one could find the electronic versions.  "OMG!", my teenage daughter would say.  "A complete mess", my client would admit.  Sound familiar?  This is a common condition that I encounter at nearly all of my clients.  But that wasn't the only problem.  Even if we can get all process definition content in a single place and in a single language that everyone can agree on, how do we manage it on an ongoing basis?  Further, how do we know it's right?  Is it really what people are doing?  Or just what they say they should be doing?  And finally, how can we start measuring processes, ultimately holding people accountable at the process level for measurable results?  These are big challenges for even small organizations, let alone an organization that operates dozens of businesses across dozens of countries.  To solve complex issues, it's often easiest to compartmentalize them and solve them one at a time.  The key challenges in this scenario include:

  1. Complexity of Process
  2. Understanding Accountability
  3. Sustainability of Content
  4. Compliance with Process Standards
  5. Sustainability of Performance
  6. Defining measures that align with process design

As we break down these topics, one thing becomes apparent: the last category, "defining measures…", is dependent on most of the items above.  While the organization did have performance measures in place, there was very little accountability, mostly because it was very difficult to know which roles and individuals really contributed to the factors behind the final figure.  To understand what I mean by this statement, consider the scenario of Time and Expenses.

Time and Expenses is a common process in most organizations, wherein employees record their time and expenses for reimbursement.  A key measure that was tracked was the percentage of T&E submissions paid on time.  These statistics were tracked and summarized monthly and reported through a sophisticated business intelligence system.  This was one of dozens of metrics.  Who is responsible for ensuring T&E submissions are paid on time?  When the percentage of on-time payments in July dropped below 90%, this was an unacceptable "red flag" alert.  Why did this happen during this month, and who can ensure it is corrected?  These may seem like pretty straightforward challenges, with the solution being that the organization just needs to be structured such that T&E has a single process owner and all participants in the process are managed under that owner.  Right?  HA!  No way.  It's far more complex in this global organization.  To start with, we have five separate high-level process steps:

  1. Set Policy
  2. Arrange T&E Information
  3. Submit T&E Form
  4. Process T&E
  5. Pay Submitter

Each of these steps is managed separately: the policy (#1) is owned by a governance board with input from Audit; steps #2 and #3 are owned by the individual employee; #4 depends on the region and the part of the organization; and #5 is handled by a third-party outsourced organization that again varies depending on the submitter's organization and region.

So, how the heck do you know where the process is breaking down and why some submittals are paid beyond the required deadline?  I won't go into the full analytics and forensics involved in identifying the "choke" points, but suffice it to say that a small minority of data points were throwing the average way out of range.  It wasn't that the entire process was broken; it was that in certain circumstances payments were taking two to three times the allotted timeframe.
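The flavor of that forensic work can be sketched in a few lines: attribute each late payment to the step and region that owned the delay, then look at where the outliers cluster.  The records and SLA below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical submissions: (region, owning_step, days_to_pay)
submissions = [
    ("EMEA", "Process T&E",   8), ("EMEA", "Process T&E",  31),
    ("APAC", "Pay Submitter", 9), ("APAC", "Pay Submitter", 28),
    ("NA",   "Process T&E",   7), ("NA",   "Pay Submitter",  6),
]
SLA_DAYS = 10

late = defaultdict(list)
for region, step, days in submissions:
    if days > SLA_DAYS:
        late[(region, step)].append(days)

# Surface the choke points: which step, in which region, blows the SLA.
for (region, step), delays in sorted(late.items()):
    print(f"{region} / {step}: {len(delays)} late, worst {max(delays)} days")
```

A handful of 28- and 31-day payments dragging down an otherwise healthy on-time percentage is exactly the kind of pattern we found.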

At the heart of successful process management and performance management is a platform for designing, capturing, maintaining and refining process definition.  To perform this exact analysis and ultimately define measurements that can be actively managed, organizations must fully document and manage processes in a common visual framework that clearly defines ownership, accountability and associations with each related process activity.

So, as we look at the next reporting cycle to review the "performance" data for an organization, take a step back and ask the following:

  1. How well does the organization understand their own processes and those of their dependent outsourced and supply chain partners?
  2. How actively are those processes managed?  Meaning, how often are they reviewed, improved, updated?
  3. How adept is the organization at responding to drastic market shifts?

Managing process information and treating process information as a highly valued asset is the mindset that must exist at the heart of nimble and forward looking enterprises.  Without such rigor within business process management, organizations pose a high degree of risk to investors and the organization’s overall health.  Immobile organizations are more susceptible to rapid market shifts and less able to innovate where necessary.   As I will explore in later postings, sustainability of process content is what separates highly agile organizations from laggard organizations. 

Arms Around Complexity

So, how did my client get their arms around the complexity of process documentation they were confronted with?  They took a number of steps, including having all existing process content within the business services purview converted from static Visio files into Nimbus' BPM platform, Control.  Further, a BPM strategy was developed that included a process improvement methodology and process sustainability using Nimbus, including collaboration on process information across the global organization.

Globally, the organization has a heavily invested program to drive continuous excellence methods throughout its wide scope of businesses.  This is a massive undertaking given the number of businesses and the number of countries in which they operate.  A core component of the continuous excellence (CE) program is cultural, with some degree of best-practice standards, reporting and auditing of implementation.  Another key element that falls under continuous excellence is the quality management system.  This "system" is not an IT system, but rather another set of methods and standards that includes reporting and auditing to ensure implementation.

It's most impressive to see how mature and visionary the executive team has been, fully committing to an enterprise emphasis on quality and continuous process improvement.  But even with the executive vision, the level of complexity makes the challenge a tough one.  At the core of the objectives that include quality, continuous excellence, process improvement, performance management and compliance management is one common denominator: PROCESS.  Understanding process activities enables the core elements of accountability, sustainability and agility.

Associating KPIs with Process Activities and Owners

At a local level, this Business Services division developed a vision for process improvement that included the same core capabilities envisioned by the global Continuous Excellence program.  Their objective was to actively manage Key Performance Indicators (KPIs), not just report on them.  Once their processes were established, KPIs were attached at the appropriate process level, and process ownership now meant not only ownership of the process definition but also ownership of that exact performance metric.  Again, these relationships, established on the process software platform, enable an understanding of performance in a far more meaningful and accountable way.  No longer is a metric just a number made up of lots of calculations with no clear method of identifying the process failure.  With KPIs associated with key process areas, every element that feeds an indicator and the owner of every activity within that process area are easily identified.
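A minimal sketch of the idea, with invented names and targets (this is not the Nimbus data model, just an illustration of KPI-to-owner traceability):

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    process_activity: str   # the exact activity the metric is attached to
    owner: str              # who owns both the definition and the number
    target: float

def check(kpi, actual):
    """An out-of-range KPI points straight at an activity and an owner,
    not just at a number."""
    if actual < kpi.target:
        return (f"{kpi.name} at {actual:.0%} (target {kpi.target:.0%}): "
                f"escalate to {kpi.owner} for activity '{kpi.process_activity}'")
    return f"{kpi.name} on target"

on_time = KPI("T&E paid on-time", "Process T&E", "AP Manager, EMEA", 0.90)
print(check(on_time, 0.84))
```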

Measurement Triggers Action

Great.  We now can understand the KPI in a way that clearly identifies where the process is failing and who is responsible.  So, what do we do next?  Send an email?  Set up a meeting?  Paint the wall red?  Don’t tell me that isn’t what’s going on in most organizations; it absolutely is.  What do you do to manage dozens of KPIs and dozens of alerts on performance that are “out of range”?  How can key management have consistent visibility into the state of action that is taking place on each of these issues?  Yup, you guessed it, this is a teaser for a following post.  Later, I’ll highlight how this is being done and how the full cycle of process improvement is effectively managed through your process management platform.

Addendum

Note that there are many ways to approach process improvement and performance management, and I'm not proselytizing for any specific method such as Six Sigma, Lean, Kaizen or variations on quality management programs.  One area that is transforming process improvement and performance management methods is the advent of social media and social BPM capabilities within enterprises.  For some interesting insight, read this recent post on BPM For Real: http://bit.ly/qFxVtz.  Also, for much greater insight into Business Services and developments in BPM capabilities, please check out the latest post on Sourcing Shangri-la: http://bit.ly/nqxdnD.  For some solid insight into process excellence methods, check the Process Excellence Network: http://bit.ly/gw5kSG.

Compliance: Headache or Windfall?

August 8, 2011 § Leave a comment

Forming a Process Centric Model

Regulatory bodies and compliance rules are as old as civilization.  Early Egyptians, Greeks, Romans and Indians created standards and rules for business.  These rules were centered on weights and measures as well as currency, but today regulations come from many sources.

When we look at a regulatory construct, we are effectively looking at rules, laws, guidelines and best practices that are dictated by a governing body.  I say dictated, but these rules are generally a set of statements that have been developed, reviewed and ultimately enacted by a governing board within a corporation.  Regulations may also be established by industry-related governing boards, and many regulations are based on governmental laws (from federal, state and local agencies).  The volume of these regulatory books and the volume of statements contained in each can be enormous, depending on the industry and corporate size.  Public companies have the Securities and Exchange Commission to deal with.  Companies with global operations have to comply with the varying laws of each operating country; adhere to health and safety standards, hiring and firing requirements, social responsibility requirements, etc.  If you're a financial services firm, a plethora of regulations guide how you account, record, trade and settle.  If you're a pharmaceutical company, strict standards dictate how you run your clinical trials, record your findings, label your products, etc.  Now, add in all of the internal standards that govern your best practices related to your unique products, partnerships and contract types.  As we can easily surmise, the complications that result are immense.

The SOX Phenomena

In 2003, after starting my own software services firm, I sat with the head of compliance for a Fortune 100 construction company to review their requirements for Sarbanes-Oxley.  Within a day of information gathering, it became clear that their main objective was to manage a set of "controls" by recording who was responsible for each and whether it was working or not.  Now, to be clear, a "control" is simply a process step that has an owner and is in place to mitigate a risk to the organization.  So, what this company was doing was creating a "matrix" of relationships between identified risks, controls (mitigation steps), owners and the process areas they relate to.  As I discovered in the weeks following these meetings, almost every company scrambling to comply with SOX was doing this exact thing, and almost all of them kept their risk/control matrices in spreadsheets.  The problems with that approach were universal, and a collaborative, relational data storage solution was an obvious need.
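To show what "relational" buys you over a flat spreadsheet, here is a minimal sketch of the matrix as explicit relationships.  The entities and example rows are invented:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Control:
    description: str
    owner: str
    operating: bool          # is the control actually working?

@dataclass
class Risk:
    description: str
    process_area: str        # e.g., "Accounts Payable"
    controls: List[Control] = field(default_factory=list)

risks = [
    Risk("Duplicate vendor payments", "Accounts Payable",
         [Control("Three-way match before payment", "AP Manager", True)]),
    Risk("Unauthorized journal entries", "General Ledger", []),
]

# Any risk with no operating control is an audit finding waiting to happen.
gaps = [r for r in risks if not any(c.operating for c in r.controls)]
for r in gaps:
    print(f"GAP: '{r.description}' in {r.process_area} has no operating control")
```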

Process is the common denominator

While I grew my business by developing software to address this requirement, other interesting similarities emerged from my client base.  Companies were not only interested in passing an audit or dealing with the SOX regulations.  They had a dozen or more other pressing regulations that required the same type of solutions.  In each case, whether it was FDA 21 CFR Part 11, ISO 9000, Basel II, or the variety of internal standards being addressed, the same basic needs existed.  Companies needed to understand the regulations; identify the risks, controls, gaps, remediation steps, owners and process areas; and manage all of that information somewhere.  Most commonly, that meant in an independent spreadsheet.  And what was the one thread that formed the backbone of all compliance management?  Process.

Another Fire Drill?

What I found was that each process area (e.g., HR, Finance, Manufacturing Ops) was being hounded by internal audit teams, compliance directors, external auditors and quality managers to document their processes; document their controls; document their risks; document their issues; document remediation tasks; and on and on.  It's amazing that anyone was ever actually doing their day job.  During the nearly ten years that I've worked with organizations on regulatory requirements, very little has changed in this regard.  I have yet to encounter a company that manages all of its compliance and regulatory requirements from a single platform.  Some organizations have made strides in managing process details in a more coordinated fashion, but most still treat each compliance requirement as a separate challenge involving separate projects.

The issues with this condition are perhaps obvious: each time one of the regulatory initiatives is executed, operational leaders relive the exact same nightmare!  It's Groundhog Day!  I've had leaders within pharmaceutical clients tell me that rarely does a year pass before they have to execute another fire drill of process capture, internal review, internal audit and external audit.  Invariably, it's a short-sighted exercise to check a bunch of boxes and get a rubber stamp so we can get back to normal operations.

The Single Platform Vision

Now for the good news: things are changing.  During my four-year tenure at Nimbus, I've seen an awakening within highly regulated industries, a will to stop the nonsense.  It all begins with proper process management, wherein organizations do the following:

  1. Define end-to-end processes using a simple notation for business end users.
  2. Govern process definitions and all related reference materials in support of process execution.
  3. Manage regulatory and internal standards within structured, governed statement(s).

Process management should not be the result of fire-drill exercises to satisfy auditors; rather, BPM should be an integral part of knowledge capture, process improvement, compliance management and business agility.  As one executive summarized while heading into a board meeting after meeting with me, "We can't improve what we can't understand."  As I'll discuss in later postings, there is both a mechanical nature to BPM and a cultural one.  Very few cultures are used to maintaining a high level of accountability and continuous management of process content.  Just putting systems in place is not a cure-all, and as we'll explore, organizational culture plays a huge role.

Process Content = Secret Sauce

August 4, 2011 § Leave a comment

As I discuss in my posting on Active Governance, I find a great deal of reluctance among executives to invest in managing and controlling process information.  There are a few factors that play into this reluctance.  The first fundamental fact is that leaders hate spending time, resources and money on things that do not obviously make them money.  No company is in the business of governance.  Included in this category are other supporting operations such as Information Technology, Finance, Human Resources and Legal.  Most organizations do not view investments in those parts of the organization as value drivers.  They are viewed as infrastructure.  They're necessary for holding up the theme park, but not necessary for generating revenue and profits.  As long as these non-value-driver (NVD) processes serve the most basic needs, keeping costs minimized is the objective.

BPM treated as the “necessary evil”

The approach to NVD operations mirrors the approach to compliance and governance.  They are necessary evils.  They must be done, but the objective is to minimize costs while providing only the essential requirements.  Spending beyond the minimum provides no additional value to the organization or its investors.  If I spend more money to ensure a higher degree of IT support, it's not likely to translate into more sales this quarter.  In fact, the cost of new systems or additional resources will shrink margins and drain profitability.

While there is some truth to this common view, I contend that organizations must appreciate the value of ALL processes.  They need to see help desk support processes as equal in value to sales processes.  Further, they must harvest and protect these unique processes as much as they value and protect the secret sauce locked in the impenetrable safe.  Where do Coca-Cola and Heinz keep their secret recipes?  You bet they are secure and carefully managed.

An IT Transformation Case

One of my clients has over 3,000 IT professionals in the US supporting a subset of an organization of about 300,000 people.  How well this IT organization operates has a huge impact on the overall health of the company.  But how are these processes managed?  How are they understood?  When I embarked on a major transformation effort with them, several issues were well understood.

  • Costs could be reduced if they could consolidate overlapping processes across four major business units.
  • Quality could be improved if the global organization could identify the best performing processes and then standardize on those high performing processes.
  • A technology platform for managing process content could allow them to sustain all changes and improve processes on an ongoing basis.
  • Future change considerations could be achieved far quicker and with a greater degree of certainty if process content was maintained and understood.
  • Process content within a management platform could be leveraged for governance and compliance initiatives, eliminating future process documentation projects.

What Makes Us Tick?

From this experience and others, I’ve found that it’s the unique processes that organizations possess that can make massive differences in profit margins as well as massive differences in revenue generation.  A sales approach is a process.  A pricing approach is a process.  Partner channel development is a process and success in those approaches needs to be harvested as repeatable processes.  It’s these successes that are some of the most important assets of an organization.  But how are they harvested?  How are they repeated?  Who actually understands them?  If a key individual leaves or a group of key people leave, does our ability to tap into that market disappear?  Do we lose the ability to form specific partner relationships?

And within IT or HR or Compliance… what about those areas of the business?  Are they just as important as the secret sauce?  A colleague of mine, Chris Taylor, recently highlighted the “secret sauce” issue in his posting on end goals.  He surmises that the “end goal of BPM is creating revenue for your company”.  As I will detail in later postings on this blog, BPM impacts top line revenue, cost containment, bottom line results, compliance management, risk management, business agility and investor confidence among other key business benefits.

I find a varying degree of understanding of, and appreciation for, protecting the "secret sauce" of the organization.  Some organizations are highly protective of their processes and understand that the unique way they manage provides higher margins, quality products, quality service, customer experience and competitive advantage.  Process management is the critical foundation; what is too often viewed as mundane infrastructure is, in fact, the secret sauce.  It may well be that new product development, marketing and sales truly deserve the accolades, but again we must ask: how well has the organization captured that secret sauce and protected it?