Real Time vs Analysis

September 19, 2011 § 4 Comments

It happens every day, every hour, minute and second. Stuff. Stuff happens, and lots of it. Every so often, something happens that makes us go, “oh, that’s big”. And sometimes it’s so “big” that we scramble to react, either to take advantage or to take cover; to move money in or out; to run for higher ground or head out to sea. Sometimes we have a bit of notice, but other times we don’t.

Previously, I wrote about risk, fraud and how Barings Bank was brought down by a single rogue trader. Well, it happened again just a few days ago. UBS AG, a large Swiss bank, appears to have lost somewhere in the neighborhood of $2 billion. The news caused its stock to drop promptly, closing 11% lower than the previous day’s close. Moody’s Investors Service quickly reacted, announcing it would review UBS for a possible downgrade and citing concerns that the bank is not adequately managing risk.

It’s much too early to determine how this trader pulled off his scheme.  Early information suggests he may have manipulated back-office operational systems as he previously worked in back-office operations and would have had that knowledge.  Did UBS have a policy to restrict back-office workers from transferring to front-office trader positions?  They didn’t comment.

There is much that needs to come to light. Was this the work of a single trader, Kweku Adoboli, as is currently being implied, or were others involved? What controls were in place to prevent these types of trades, and why did they fail? How long did it take for monitors to catch the rogue activity, and did they prevent additional potential damage?

To give a sense of scale, it took only Nick Leeson’s $1.3 billion cheat to bring down Barings in 1995. Jerome Kerviel devised a scheme that cost Societe Generale $7.16 billion in 2008. Other scandals have impacted banks over the years, and the fraudulent events don’t seem to end. Regulations can be implemented and made more stringent; auditors can review organizations’ processes for compliance with those regulations; but still, big stuff happens. It’s the kind of big stuff that wipes out all other assumptions. You can be the finest analyst in the universe, performing all the due diligence necessary to make the most prudent investments. You believe in UBS and, because they brought back senior leadership, you believe they are serious about reform. Oswald Grubel was supposed to be turning around the troubled UBS, but it appears he and his leadership team were just not that concerned about managing operational risk. The simple bottom line is: one event can be catastrophic, erasing all other assumptions.

So, the questions that are most pertinent: Which operational events need real-time monitoring? Which events need process controls in place to automatically prohibit additional risk exposure? How can managers respond in real time to both opportunities and adverse situations? As Pete Seeger adapted from the Book of Ecclesiastes, “there’s a time to gain, a time to lose, a time to rend, a time to sew”. Similarly, there is a time for analysis and there is a time for real-time response. All the analysis in the world cannot determine the future. As the Heisenberg Uncertainty Principle states, the more precisely one property is measured, the less precisely other properties can be controlled or determined. In other words, the mere act of observation introduces yet another factor into the set of conditions. There are no absolutes about tomorrow, and there is no such thing as risk-free. So, while I pointed out in a previous blog post on interconnectedness the immense advantage that doing your homework brings, at the end of the day a single event can wipe out all of your assumptions.

Well, I know what you’re thinking…. that sucks. First you tell me that I should do fantastic amounts of due diligence to identify opportunities, but then you say, “ahhh, it’s all a waste once a single unexpected event strikes.” Okay, I can see that paradox, but really what I’m saying is: you have to do both. Good operational process management is about analysis of the details: of every single activity, every single owner, reviewer, regulation and risk. And yet, it’s also about agility. What do we do when things don’t go as planned? What do we do when the proverbial poop hits the fan? Can we analyze each activity for its risk exposure? Can we find methods and control activities to mitigate adverse events… especially the catastrophic ones? And can we buy insurance to position ourselves for gain if adverse events strike? Absolutely, I say! Why some organizations don’t, especially financial institutions that are particularly vulnerable, is beyond me. Sometimes it’s just incompetent management, but often it’s a simple lack of appreciation for the fact that solid operational process management requires a sizable investment in process thinking, risk management and the development of a process improvement culture.

Fortunately, a lot is being done during this generation to advance process-based thinking and to raise the level of consciousness about business process management and its impact on corporate governance and risk.  But, it’s happening slowly.  Maybe events like last week’s UBS debacle will open a few eyes…. let’s hope so.

The Interconnectedness of Things

September 7, 2011 § 6 Comments

This past week the company I work for, Nimbus Partners, was purchased by a larger software company, TIBCO. I can’t comment on the due diligence process for the deal, but when any large acquisition is considered, a great amount of analysis must be performed. To value any software company, the acquirer must assess its product technology, its position in the market, how its products fit within the acquirer’s existing family of assets, and the company’s current financial state as well as its projected earnings potential.

Minimizing Risk

This acquisition is one of many major decisions that executives at TIBCO and other corporations go through every year. Some investment options require incredibly in-depth analysis, while other investment decisions may be made quickly with far less due diligence. There are plenty of reasons for performing an analysis on an investment to a given level and not to a finer level. When purchasing a stock or making a trade on an existing holding, how much information is driving your decision? Did you read the prospectus or the latest 10-Q? Did you attend the recent investor conference calls with management? Did you get answers to all your concerns about the latest one-time charge to net income? The odds are you didn’t. The odds are you’re trading on a gut feel for the situation, or you’re trading on some limited understanding and accepting that risk because you simply don’t have the time to do all of the research you would have liked. Now, you might also put your trust in money managers or fund managers, expecting that they are doing all the analysis required to make good value judgments in line with your risk profile and your investment objectives. Again, are you sure they are going down to a depth of analysis that ensures risk is minimized?

A Hedge Fund Legend

Recently, I read about a very successful investor named Michael Burry. For those of you who haven’t heard of Mr. Burry, he gained a degree of notoriety for wisely betting against banks’ mortgage holdings and cashing in massive returns for his hedge fund when the credit crisis hit full tilt in 2007. His brilliance wasn’t just that he recognized a good bubble when he saw one; it’s the way he figured out how to capitalize on his realization that a spectacular number of mortgages were doomed to fail. The fact is, when Mr. Burry first became convinced that the type of lending banks were engaged in was destined to result in large numbers of defaults, there was no real instrument for wagering against the performance of these notes. The various tranches of subprime mortgage bonds could not be sold short. Even with his conviction that the subprime mortgage bond market was doomed, he could not capitalize on it.

Then came Mr. Burry’s discovery of the credit-default swap. It was basically an insurance policy that could be purchased against corporate debt, but at the time it was only useful for betting against companies likely to default, such as home builders; no equivalent existed for subprime mortgage bonds. Ultimately, he convinced a number of big Wall Street firms, including Deutsche Bank and Goldman Sachs, to create them. Now, what made his work absolutely brilliant was the fact that he would spend untold hours poring over each bond prospectus, taking positions against only the riskiest of those assets. He was performing due diligence on each of the loans: analyzing loan-to-value ratios, which loans had second liens, location, absence of income documentation, and so on. Within each bond, he could sort out the riskiest of the lot, and incredibly enough, Deutsche and the other banks didn’t care which bonds he took positions against. He essentially cherry-picked the absolute worst loans (best for him) and found the bonds that backed them.
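To make the idea concrete, here is a minimal sketch of that kind of screen. The loan records, field names and crude additive scoring rule below are entirely my own inventions for illustration, not Mr. Burry’s actual method or any real prospectus format:

```python
# Hypothetical loan records; every field name and value here is made up.
loans = [
    {"id": "A1", "ltv": 0.95, "second_lien": True,  "income_docs": False},
    {"id": "A2", "ltv": 0.70, "second_lien": False, "income_docs": True},
    {"id": "A3", "ltv": 1.00, "second_lien": True,  "income_docs": False},
    {"id": "A4", "ltv": 0.80, "second_lien": False, "income_docs": False},
]

def risk_score(loan):
    """Crude additive score: higher means more likely to default."""
    score = loan["ltv"]                              # leverage is the base risk
    score += 0.5 if loan["second_lien"] else 0.0     # second lien adds risk
    score += 0.5 if not loan["income_docs"] else 0.0 # no income docs adds risk
    return score

# Rank the pool from riskiest to safest -- a screen for which
# loans (and the bonds backing them) to bet against.
riskiest = sorted(loans, key=risk_score, reverse=True)
print([loan["id"] for loan in riskiest])  # ['A3', 'A1', 'A4', 'A2']
```

The point of the sketch is simply that once loan-level attributes are captured as data, ranking a pool by risk factors is trivial; the hard part, as the post says, is doing the homework to gather them.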

Mr. Burry would ultimately bring his investors and himself astronomical returns at a time when the vast majority of investors lost roughly 50% during the crisis. If you read about Mr. Burry, you’ll find there is much more to his story, as he is unique in many ways, but one key point that separates him from the pack is that he does his homework. Details matter. How these loans were structured mattered to all who were connected to them. In these bonds were real loans that represented real value. Understanding the risk factors would immediately point to a very low valuation on these bonds.

I’m not going to delve into the full issue of responsibility relative to loan originators, banks, Fannie Mae, borrowers, etc, but suffice it to say that solid due diligence reduces the risk of any transaction.  The more you understand about the asset under consideration, the better you can predict its performance.  It’s as simple as that.


So, what’s with my title, “The Interconnectedness of Things”? Well, it got me thinking about just how interconnected we all are. Without getting all Jean-Paul Sartre on you, let me point out the most common difficulty in all of management: interconnectedness. That’s right, interconnectedness. The fact is, executives hate it. But it exists. We have the tendency to measure the performance of an exact metric, an exact process step, or an exact person. We like to think that sorting out the specific items of measurement can enable us to understand what is strong and what is weak. Fix the weak bits, keep the strong bits, and voila, you have Lean. But from the work I’ve been involved in, it’s not so simple. Similar to the difficulty of sorting out all the bits that separate a good loan from a bad loan, or a good mortgage bond from a bad mortgage bond, business processes can be extremely complex and highly interdependent.

How do we get our arms around the complexity of process? Mostly, in very distinct ways. How many of us love to look at organizational charts, value chain analysis diagrams, system architecture diagrams? If you are nodding your head “yes”, I’m deeply sorry. The fact is, we are trying to ensure we understand the interconnectedness of things, but we often do that work in silos. Whether diagramming processes, entity relationships between systems, or relationships between people, this work is most often performed as a one-off attempt with a singular purpose or project in mind. It is not done to ensure a wider scope of understanding is gained and maintained. And therein lies a serious shortcoming of those efforts. With islands of understanding, there may be some level of interconnected understanding, but the silos remain silos, and whenever we look at those groupings within a map or chart or diagram, there is too much lost information. The value of what you have is just as quickly defined by what it does not have. (Perhaps some Camus?)

Devil’s in the Details

So, how do we connect all these silos, and how do we know when we have enough detail? These are big questions to which there are no silver bullets. During a recent engagement, I was working with a global IT organization that brought together four business units to define standard global processes. Ultimately, the idea was to consolidate where possible, but initially they needed to capture how each unit was operating. I’ve done this type of work a number of times, and what still amazes me each time is how often we find gaps in processes, areas that are not understood, as well as overlaps where steps are replicated and no one knew what the others were doing. As we embarked on the journey of process design, the key question this team asked of me was, “how many levels down do we need to go?” My answer was pretty simple: go down to the level of detail at which someone from outside this process area can read and understand what is happening without any ambiguity.

Imagine if you will an organization that has documented down to that level in a consistent way across their organization.  Further, imagine a singular map with diagrams that connect to all appropriate related process steps, to all related electronic content and within a platform that provides instant feedback from the personnel that perform the operations.  Now, that’s getting your arms around complexity and it tells the story of the interconnectedness of things.
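As a toy illustration only, such a connected map can be thought of as a graph: each process step links to the steps that follow it, to its supporting content, and to an owner. The step names, documents and owners below are hypothetical, and this is nothing like any real BPM platform’s data model:

```python
# Toy process map: each step links to downstream steps, related
# content, and an owner. All names here are invented for illustration.
process_map = {
    "approve_loan": {
        "next": ["verify_income", "check_collateral"],
        "content": ["loan_policy.pdf", "approval_checklist.docx"],
        "owner": "credit_ops",
    },
    "verify_income": {"next": [], "content": ["income_docs_sop.pdf"], "owner": "back_office"},
    "check_collateral": {"next": [], "content": ["appraisal_guide.pdf"], "owner": "risk"},
}

def reachable(step, graph):
    """Return every step connected downstream of a given step."""
    seen, stack = set(), [step]
    while stack:
        node = stack.pop()
        for nxt in graph[node]["next"]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable("approve_loan", process_map)))
# ['check_collateral', 'verify_income']
```

Once the map is connected data rather than islands of diagrams, questions like “what does this step touch?” become simple traversals instead of archaeology.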

Finally, once we gain perspective on this interconnectivity we can truly understand what is working and where risk lies.  For it is risk that we are constantly managing.  The banks that held large amounts of mortgage credit were blind to what was in the big bag of bonds that contained smaller bags of loans that contained all kinds of facts, some of which were never gathered (such as income verification).  Did they completely understand the interconnectedness of things?  Did they get down to a low enough level of detail to really understand the assets that so much was riding on?  To reduce operational risk, the devil’s in the details.  Get your arms around process, get your arms around the details and know what you’re buying into.

The Power of Events: Nimbus Acquired by TIBCO

September 1, 2011 § Leave a comment

Big news this week at Nimbus Partners, a company I joined exactly 4 years ago today. We were acquired by TIBCO, a larger company with a diverse portfolio of BPM products. I’ve known about TIBCO for about ten years now, as they are pioneers in the development of middleware, messaging and enterprise application integration – what is now a core capability within Service-Oriented Architecture, or SOA. Now, SOA is by no means new, and the maturity of SOA is well advanced in large enterprises. Many organizations have spent and continue to spend substantial amounts on its promise, and a fair number of challenges remain. Still, as with most revolutionary technologies, realizing the value of something that radically shifts what is possible takes time. TIBCO has been at the forefront of SOA, BPM and BI with technology that alters how information flows between systems and how quickly business users can get to answers. The potential that lies before me and my company is exciting, with the promise of connecting advanced infrastructure capabilities with Nimbus’ cutting-edge business process management platform. Now, I’m not going to delve into the intricacies of what is possible or which bits fit with which widgets, as I’m sure whatever I imagine will evolve into something quite different. What I will tell you is that this acquisition is a powerful and exciting event, one that will likely impact a wide variety of global enterprises.

The Complexity of Events

On the topic of events, I’m reminded of a book I read years ago called, aptly enough, “The Power of Events,” written by David Luckham. I was fortunate enough to hear him speak at a Gartner conference not long after I read his book, when it was a groundbreaking topic. Most interesting was his ability to illuminate the capabilities of Complex Event Processing (CEP). This capability matters primarily to organizations that need to minimize risk to their systems and to the underlying assets those systems control. The need to minimize risk is evident in just about every organization I’ve ever entered.

I remember being inspired by Luckham’s book and speech, and it had a big influence when I founded AVIVA Consulting. One of the first opportunities to realize some of the key capabilities within CEP was with a partner company. We took a simple technology for tracking real-time data flowing from any web service and integrated their software with our Microsoft-focused collaboration stack of SharePoint, SQL, InfoPath, Office and K2, a third-party workflow product. We were mildly successful with it, but quickly became focused on risk and compliance requirements and never fully developed the real-time collaboration solution. Later, after I sold our flagship product, ACES, to a Microsoft service provider, Neudesic, I spent a year working to take a Microsoft-platform Enterprise Service Bus (ESB) to market. I named it Neuron (yes, I’m still proud of how clever a name it is), and we launched it shortly before I left to work for Nimbus. So, even though I was on the product management and product marketing side of the ESB product, I became intimately familiar with the core capabilities and potential of middleware messaging and SOA in general.

Now, full circle back to the present… this event is the joining of TIBCO, the most innovative company in the middleware space, with Nimbus, the most innovative company in the BPM content governance space. I couldn’t be more excited to be at the nexus of this formation (you can kick me later if you get this). Personally, I’m excited to see what will happen when we sit with financial services and pharmaceutical companies to look at how risk is managed within quality systems or compliance initiatives. How well can most of these organizations manage real-time events, and how well designed are their processes to deal with adverse or opportunistic circumstances? This is where the opportunity lies. As I point out in my posting on business agility and the need to minimize risk through agile processes, organizations need to design processes that allow rapid response to unexpected conditions as well as to the known possibilities. The events that tend to be earth-shattering are not the anticipated ones, so how well we have modeled the organization to respond is critical. Also, as I detail in my posting on checklists, at a minimum we must address the known risks with clear process handling instructions to ensure quality execution.

Rapid Response to Events = Reduced Operational Risk

So, imagine, if you will, a situation where fraudulent phishing attacks attempt to lure bank customers into providing their login credentials to make a change to their account. Rather than connecting to the real bank, customers are connecting to a fraudulent system that grabs their login ID and password. The fraudsters then log in to the real system, change the password and begin making transactions to pull money out of the victim’s real account. With CEP technology, banks can see in real time how much activity is occurring, and when irregular volumes occur on a given function (such as 10X the usual number of password changes during the past minute), the system disables the password change function and alerts the appropriate administrators. Cool stuff, right? Now, tie in the ability to provide clear instruction on the manual handling the administrator needs to perform. This outlier password-change event is rare, and the steps required of the administrator may be exacting. That’s where Nimbus comes in. The admin will have clear steps to take, ensuring fast and accurate handling with quick access to all necessary resources and reference materials. End game? Very few, if any, customers are impacted. Very little, if any, financial damage is done to the bank. Preventable adverse events are prevented. And we can imagine, in reverse, how opportunistic events can also be quickly acted upon, with decision-makers having clear instruction on execution.
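For the curious, the sliding-window volume rule described above can be sketched in a few lines. This is my own simplified toy, not how any real CEP engine or bank system works; the baseline rate, the 10X multiplier and all the names are assumptions:

```python
from collections import deque

class RateAnomalyDetector:
    """Fires when event volume in a sliding time window exceeds a
    multiple of the expected baseline rate (a simplified CEP-style rule)."""

    def __init__(self, baseline_per_minute, multiplier=10, window_seconds=60):
        self.threshold = baseline_per_minute * multiplier
        self.window = window_seconds
        self.events = deque()  # timestamps of recent events

    def record(self, timestamp):
        """Record one event; return True if the rule fires."""
        self.events.append(timestamp)
        # Drop events that have aged out of the window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold

# Normally ~5 password changes per minute; 10X that should trip the rule.
detector = RateAnomalyDetector(baseline_per_minute=5)
# Simulate a burst of 60 password-change events, one every half second.
fired = [detector.record(t * 0.5) for t in range(60)]
print(fired[0], fired[-1])  # False True -- the rule fires once volume exceeds 50
```

In a real deployment, a True result would trigger the automated control (disable the function) and the alert to administrators; the Nimbus-style guidance for the manual follow-up picks up from there.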

Understanding Events in Context

How an organization processes and responds to a large volume of diverse events is at the core of what BPM is about. It’s not just process definition for the sake of checking a box that the auditor approves. It’s about improving the decision-making ability of management and other operational decision makers. It’s about reducing operational risk. And it’s about continually tweaking and improving those processes as we learn what is working and what is not. Gaining real-time event information can be hugely beneficial, but its value is increased when we understand these events in the context of precise operational activities.

Those of you who follow my blog already know, I’m not in the habit of reporting news or projecting the future, so consider this post a rare exception.  Given the personal nature of this event and the impact it will likely have on the future of BPM technology, I felt compelled to comment.  In a future post, I will explore technology specifics including how governance, risk and compliance requirements are handled with the variety of technologies available and the specific categories of capabilities including automation, content management, master data management, SOA, enterprise architecture, social networks, collaboration, search and reporting.  There are a variety of analysts and prognosticators jumping to conclusions about what this merging of technical capabilities will mean to the market.  I can tell you that this newly joined organization looks extremely promising, but the proof will be in how we make it happen with our customers.  It’s how Nimbus has always proven its advantage in the market; through real execution and value creation in real customer environments.  With the added strength and reach of capability that TIBCO brings, we should be proving what is possible very soon.

Where Am I?

You are currently viewing the archives for September, 2011 at Process Maximus.