October 11, 2018
Back in the late '80s, I was a high-yield bond analyst in New York, on the buy side of a junk-bond portfolio. There was no internet. Bloomberg terminals were just showing up and were limited, so most of the information we gathered and modeled came from scouring 10-Ks, 10-Qs and newspapers and building models in spreadsheets. This is what I did every day as part of my job, but thanks to the personal computer and the advent of what were then considered "supercomputers," some mathematical models were developed to automatically suggest or even execute trades in real time.
These systems were effectively what we now label Artificial Intelligence, or AI. They contained a model that factored large sets of data to compute a conclusion, which drove an action. Those '80s systems were rudimentary compared to what we can do today with cheap storage, much faster computing and far more data than ever before. The attempt at modeling our world, however, remains the same. Whether we're talking about the stock market, consumer behavior or the weather, it's never perfect. Of course, that can be said of anything and everything that is science. We make hypotheses, use the scientific method to test a specific hypothesis and draw conclusions from the data to either support or refute it. Most often, the aggregated data we collect is judged "significant" or "insignificant." But what about the grey areas? Where exactly do we draw that "significant" line? What biases could have influenced the testing of the hypothesis? Don't we almost always want to see our hypotheses prove out? Don't pharmaceutical companies want to see their drug trials prove efficacy for the problem they're trying to solve?
So, the reason I'm even bringing this up is that these models and their inherent biases impact the events themselves. Today, the market had its sixth straight decline, with the S&P 500 dropping below its 200-day moving average. Every media company, journalist and talking head had an explanation as to "why" the sudden downturn. WHY. WHY. Everyone needs to know WHY, as if every time market prices change we need to blame or credit something: some policy, some news event. The model must have received an input that caused the new direction in prices. Right?
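Mechanically, the 200-day moving average those headlines cite is nothing exotic: just a rolling mean of closing prices. Here's a minimal sketch with made-up prices (not real S&P data):

```python
# Minimal sketch: check whether the latest close sits below the
# 200-day simple moving average. The price series is hypothetical.
def sma(prices, window=200):
    """Simple moving average of the last `window` prices."""
    if len(prices) < window:
        raise ValueError("need at least `window` prices")
    return sum(prices[-window:]) / window

# Toy series: 200 days drifting gently upward around 100, then a sharp drop.
closes = [100.0 + 0.01 * i for i in range(200)] + [95.0]

average = sma(closes, window=200)
below = closes[-1] < average
print(f"200-day SMA: {average:.2f}, latest close: {closes[-1]}, below: {below}")
```

A crossing of the latest close under `average` is the entire "signal" the headlines were reporting on; the model has no idea why it happened.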
People's thirst for explanation, prediction and reason is simply primal. It's one of the most common traits that pervade the public, and most media outlets feed that need every day with quotes, testimonials and headlines to help you make sense of it all. Sure, we'll tell you why the market tanked… stay tuned and watch this commercial. And you will.
So, let me try to sound a bit less cynical for a second and tell you that there are some folks who keep their heads and accept the randomness in our universe. Today's best read is https://www.bloomberg.com/view/articles/2018-10-11/the-stock-market-meltdown-that-everyone-saw-coming from @ritholtz Barry Ritholtz.
Another fun spin is Jason Zweig's WSJ blog post: https://www.wsj.com/articles/when-markets-tank-do-this-1539276465?mod=searchresults&page=1&pos=1&mod=article_inline
Lastly, back to AI and how machines are used in today’s markets. With all the panic happening this week, Reuters’ Trevor Hunnicutt wrote his “I’ll tell you why” piece entitled, “Machines take the blame as U.S. stock market sells off”. https://www.reuters.com/article/us-usa-funds-riskparity-analysis/machines-take-the-blame-as-u-s-stock-market-sells-off-idUSKCN1ML2XG?feedType=RSS&feedName=businessNews&utm_source=Twitter&utm_medium=Social&utm_campaign=Feed%3A+reuters%2FbusinessNews+%28Business+News%29
While it's a perfect example of everything wrong with this type of reporting, I still find it fascinating to read. Without rehashing the whole article, he explains "risk parity" funds and how they rebalance automatically using specific models.
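The article doesn't disclose the funds' actual models, but a common textbook stand-in for risk parity is inverse-volatility weighting: each asset's target weight is proportional to 1/volatility, so when measured volatility spikes, the model mechanically sells down the newly volatile asset, no news interpretation required. A minimal sketch with invented volatilities:

```python
# Hedged sketch of inverse-volatility weighting, a common textbook
# approximation of risk parity (not the actual models the funds use).
def inverse_vol_weights(vols):
    """Weight each asset proportionally to 1 / volatility, normalized to sum to 1."""
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

# Made-up annualized volatilities for a two-asset portfolio: stocks, bonds.
calm = inverse_vol_weights([0.10, 0.05])   # stock vol 10%, bond vol 5%
panic = inverse_vol_weights([0.30, 0.05])  # stock vol triples in a sell-off

print("calm weights :", [round(w, 3) for w in calm])    # stocks get 1/3
print("panic weights:", [round(w, 3) for w in panic])   # stock weight cut sharply
```

The point of the toy: the trigger for selling is the volatility input itself, which is why "the machines did it" explanations are both true and uninformative.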
Simply put, AI trading through machine learning models is ever-present in today's trading environments, and even with incredibly large datasets and sophisticated algorithms, we find the same challenges of human bias.
For many, the need to seek answers is not unlike religion – a constant search for an explanation. What will happen next? As George Carlin once said, “Weather forecast for tonight: dark”. I miss you, George.
September 24, 2018
The topic I've chosen today is a study in contrasts: one example where machine learning and AI have received a great deal of hype and attention, contrasted with another where they have shown great results and are already making a difference every day.
A couple of months ago, I was in Las Vegas for a Microsoft conference, and while using Lyft I was lucky enough to get two rides in the same day in an APTIV self-driving car. The car was actually a BMW 5-series outfitted with a self-driving system from a company called APTIV, which uses LIDAR and an array of cameras to view, measure and predict the movement of all types of objects. The technology was absolutely fascinating, and a co-pilot gave me a full tutorial on how it worked, what we were seeing on the object monitor, and how, across 30 vehicles, they gather and dump terabytes of data every evening into machine learning models to improve the system's performance. Without going into every aspect of the APTIV system, suffice it to say that it's a promising technology, and the performance it's capable of even today is very impressive.
Now, companies like Google, Tesla, Uber, Lyft and many carmakers are at some stage of development or partnership with autonomous vehicle technology developers. You may have heard of Waymo, which is Google's project; General Motors has Cruise; startups like Zoox are building their own vehicles; and Ford has started its own effort. A new company called Aurora was just announced, apparently dedicated to creating self-driving technology that it would license across many auto manufacturers.
Okay, so here's the point of all of this. The hyped vision of a future where people no longer drive their cars, where we all sit on comfy couches reading or working as our cars safely navigate the most efficient route to our destinations, is still a long way off. Yes, Tesla has that technology working today, but frankly, it's only going to be adopted by a small fraction of early-adopter types who just love watching the wheel turn by itself. It's nowhere near ready for people to turn their attention completely away from the road. So, while these technologies hold fantastic promise and have already solved many difficult problems, a sizable percentage of highly complex problems remains to be solved. Further, many solutions will likely require coordination across city, state and federal governments to standardize a set of upgraded road and highway infrastructure to handle the hardest of these problems. Vegas has already passed legislation to become a "smart city," but in the test car I rode in, the driver had to take over manually when pulling onto any of the hotel properties, as those private properties are not yet willing to allow the technology. And finally, perhaps the biggest hurdles these technologies face are the regulators and the auto industry itself. Rapid adoption would likely create a few big winners and lots of losers in an industry that doesn't like rapid change.
So, what about that example where machine learning and AI are already successful…
Our company Quogent has worked with a number of customers to solve problems using machine learning and AI. A recent challenge we faced involved a customer service organization that deals with massive call and ticket volume. When they broke down the types of calls they were getting, they knew they could automate some of the inquiry scenarios to reduce the load on the help desk. Using machine learning, this organization has now reduced its overall volume by 6% and has a learning model that continues to find new opportunities for automation, self-help and what essentially amounts to a better customer experience.
Now, obviously, these are two ends of the complexity spectrum when it comes to machine learning and AI. Developing learning engines that solve the problem of creating a safe, self-driving vehicle is far more challenging than improving how customer inquiries are handled. So, it's hardly surprising that we could set up a system and achieve results in a matter of months with a customer service org, while we still don't have our sleeping pod on wheels automatically picking us up from an app. The point is, the promise of machine learning, deep learning, AI and the rest is huge, but know that while we can now make progress on some of these difficult problems, the most aspirational goals may take many years to reach, while others can deliver value very fast.
#AI #machinelearning #automation
August 20, 2012
One of the top stories in the US this week was the Mega Millions lottery. This state-sponsored lottery is the largest of its kind in the US. It spans a collection of states, and its jackpot increases with each week that yields no winner. So, this week the jackpot exceeded $300 million, and every major media outlet picked it up as a major news story, showing the frenzy of ticket buying while publicizing the jackpot as an exciting chance for some lucky future winner(s). Just two months ago the media frenzy was even more frenetic, as the jackpot exceeded half a billion dollars.
Heart Breaking News
This phenomenon of government-sponsored gambling is troubling for the future of general societal health and overall economic conditions. While defenders of the lottery will argue that it's a voluntary tax, the real wealth redistribution that occurs with lotteries is regressive: lower economic classes pay a higher proportion into the tax and receive a lesser benefit. So, while a state may reap significant revenues from its lottery system, the net impact on the general population is detrimental. The working poor, lower middle class, middle class and senior citizens make up the vast majority of ticket buyers, some spending more on lottery tickets than on essentials. The impact falls not just on the individuals who buy the tickets, but on the families they support. Children go without enough food while a parent continues to hinge future dreams on a lottery jackpot. What's most egregious is that the lotteries are state-sponsored institutions, and media outlets fuel demand through coverage. Every major network morning show, including ABC, NBC, CBS, FOX and CNN, recently aired this story with anchors saying they got their ticket and how exciting it all is. Really?! Is that what our news outlets have been reduced to?! They are clearly acting as a platform for hawking worthless paper that effectively serves to bilk the lower economic classes of our society. The practice is not unlike other legal enterprises such as predatory lenders (payday loans, auto title loans) and pawn brokers, or not-so-legal ones such as loan sharking.
Special Interests Trumping Public Good
I recently met with a public administration consultant who has worked nearly 40 years as town manager for several towns in New England and in municipal banking, consulting to municipalities on business operations, financial management and negotiations. His stories are incredible: anecdotes of collective bargaining agreements with labor unions, political arm wrestling with councilmen (board selectmen) and outright corruption in permitting and development contracts. To protect the innocent, I won't mention any names or exact places. One story of corruption he recently shared with me involved the development of a Massachusetts casino on non-Native American land. Powerful developer interests and local government tax interests pushed new laws that overturned previous statutes keeping gambling only on Native American lands.
The motivation for political leadership to support casinos is perhaps obvious, as they can bring net new jobs and increased tax revenues. As the US becomes increasingly tolerant of gambling and casino activities, it now tolerates exceptions to previously established boundaries that had kept gambling establishments on the periphery of large population centers. To see how accepted gambling has become, one needs only to turn on any of a number of cable networks to see televised gambling such as the World Series of Poker presented as a sport on ESPN, replete with "stars," or visit the casinos on Native American land in or near most every metropolitan area. In Arizona, the pitchman for a local Indian casino is the broadcast announcer for the Arizona Diamondbacks.
A Public Gambling Addiction
The damage that gambling causes to the economically weak in American society is growing. Local and state governments with large budget shortfalls become increasingly desperate to find alternative revenue sources. Not only have state governments created policies that encourage gambling and sponsored lotteries, they have also managed to influence our news outlets. The result is that an entire generation of citizens sees gambling and lottery dreams as part of the American dream. Shot-in-the-dark, needle-in-a-haystack, pie-in-the-sky fantasies have replaced the message to young citizens that education, hard work, ingenuity and family commitment are values to pursue.
It appears that other states are likely to copy Massachusetts and allow greater integration of gambling activities into common public places. With greater access, public promotion and celebrity endorsements, the great division of wealth that already plagues US society will only increase. The most vulnerable are children who depend on parents or other caretakers who indulge in lotteries and gambling. Their suffering is real. One in five Americans is impoverished. State-sponsored lotteries and gambling are an epidemic in the US, and leadership continues to take the easy political path rather than deal with the harsh realities such policies are having on future generations.
January 17, 2012
Recently, I read through the latest World Economic Forum "Global Risks 2011" report, an initiative of the Risk Response Network. It's an impressive assessment of global risks produced in cooperation with Marsh & McLennan, Swiss Re, the Wharton Center for Risk Management at the University of Pennsylvania and Zurich Financial. What is compelling about the report is that it is not simply a survey result or a ranked list; rather, it details and illustrates the interrelationships between risk areas, identifying causes in an effort to find points of intervention. The report highlights response strategies and even proposes long-term approaches.
As with any risk report, it has a tendency to feel alarmist, but its value and content cannot be dismissed, and its emphasis on response is encouraging. The two most significant risks the report identifies relate to economic disparity and global governance. The main point is that while we are achieving greater degrees of globalization and inherent connectedness, the benefits are narrowly spread, with a small minority benefitting disproportionately. Global governance is a key challenge, as each country has differing ideas on how to promote sustainable, inclusive growth.
The Rise of the Informal Economy
The report goes on to highlight a number of risks, including the "illegal economy," which covers a cluster of risks: the political stability of states, illicit trade, organized crime and corruption. Specifically, the issue lies with the failure of global governance to manage the growing level of illegal trade. In a recent book entitled "Stealth of Nations: The Global Rise of the Informal Economy," Robert Neuwirth estimates that off-the-books business amounts to trillions of dollars of commerce and employs half of all the world's workers. If the underground markets were a single political entity, its roughly $10 trillion economy would trail only the US in total size. Further, it's thought to represent in the range of 7-10% of the global economy, and it's growing. To be clear, underground markets deal not only in illegal substances, crime or prostitution; they mostly deal in legal products. Some of the examples Mr. Neuwirth provides include:
- Thousands of Africans head to China each year to buy cell phones, auto parts, and other products that they will import to their home countries through a clandestine global back channel.
- Hundreds of Paraguayan merchants smuggle computers, electronics, and clothing across the border to Brazil.
- Scores of laid-off San Franciscans, working without any licenses, use Twitter to sell home-cooked foods.
- Dozens of major multinationals sell products through unregistered kiosks and street vendors around the world.
A Global Risk?
Are the underground markets really a global macro-economic risk? Mr. Neuwirth makes solid arguments that these markets provide jobs and goods essential to these populations, and that it is the corrupt authorities in most developing countries that are being worked around. In some ways, it can be argued that these unlicensed vendors and importers are the purest of capitalists, innovatively providing goods by avoiding intervention. In a recent interview in WIRED magazine, Mr. Neuwirth points out that Procter & Gamble, Unilever, Colgate-Palmolive and other consumer products companies sell through small unregistered, unlicensed stores in parts of the developing world. He goes on to claim that P&G's sales in these unlicensed markets make up the greatest percentage of the company's sales worldwide. I found this tidbit shocking. Really, a company that brings in over $80 billion in revenue a year is actually pulling in most of its revenue through unlicensed channels? Now, that doesn't mean P&G sells directly through those channels, but it sells through distributors that may use others that do sell to unlicensed vendors who don't pay taxes.
The WEF concludes that illicit trade has a major effect on fragile country states given that the high value of commerce and resulting high loss of tax revenues impinge on national salaries and government budgets. An example that’s included in the report is that of Kyrgyzstan. “Members of the Forum’s Global Agenda Councils argue that the undermining of state leadership and economic growth by corrupt officials and organized crime contributed significantly to social tensions which erupted in violent conflict in June 2010, causing widespread destruction, hundreds of civilian deaths and the displacement of 400,000 ethnic Uzbeks.”
The Threat to Quality and Public Safety
So, if you were to guess what type of goods tops the list of sales in these underground markets, what would you guess? Cocaine? Opium? Software piracy? Cigarette smuggling? Small arms? Topping the list, with a rough estimate of $200 billion in value, is counterfeit pharmaceutical drugs. Just behind, at $190 billion, is prostitution. Which leads me to the next serious risk if global efforts to govern these markets don't improve: quality. I'm not qualified to address the quality of prostitution, but let's consider the quality of counterfeit pharmaceuticals and the general issue of public safety. If these markets go unregulated and unmonitored, we are likely to see terrible abuse by profiteers whose only concern is bringing high-value products to market quickly. No regulation also means an inability to create safe work environments and to protect the rights of laborers along the supply chain.
On the other hand, the vast majority of workers and consumers in developing countries thrive because of these markets. A strong effort to disrupt or disband them would cause a high degree of distress in communities that rely on them for access to essential goods. But without the tax revenue that can only be gathered from legitimate, licensed businesses, how can governments function and provide the oversight services that would benefit quality and public safety? It's an endless loop, as we say in the software world; a true catch-22. Even relatively well-functioning supply chain operations at pharmaceutical companies in developed countries are consistently challenged to maintain a high degree of quality (note the recent product recalls at Novartis). Considering how much effort and money is spent on quality assurance, inspections and FDA audits for legitimate pharmaceuticals, it's beyond scary to consider the quality of the counterfeit pharmaceuticals circulating in illicit markets.
Within the US, in the state of California, we’ve seen recent evidence of solutions such as bringing the trade of marijuana within the framework of the law. Potential results include ensuring quality and safety for the public, raising tax revenue and reducing the profits of organized crime. Still, the issue of economic disparity is a much tougher nut to crack. Widening gaps in income within all economies provide incentive for lower income individuals to work outside of established trade structures. This incentive leads to greater illicit trade which in turn hinders a government’s ability to effectively tax businesses and provide services such as regulatory oversight.
Can We Govern Illicit Markets? And If So, Should We?
These are obviously very difficult challenges, but ones the WEF is analyzing in an effort to form solutions. The relationships between economic disparity, illicit commercial trade, public safety and government corruption become glaringly clear. How can the global community govern these illicit markets? They exist everywhere to some degree, even in the US, where informal markets are estimated to account for 10-20% of GDP. One solution the WEF recommends is to strengthen financial systems. The implication is that weakened systems are the result of the heightened volatility and risk deriving from the recent capital markets crisis, and with diminished confidence comes incentive to work outside the system. Some suggestions include:
- Better surveillance of the financial sector, including all systemically relevant players
- Tighter capital and liquidity ratios for all banking institutions (including non-banks), with higher ratios for systemically relevant institutions
- Risk retention for securitization (so-called “skin in the game”)
- Improved transparency and counterparty risk management in “over-the-counter” derivative markets
Perhaps the most interesting part of this global risk challenge is how interrelated these issues are. The influence that government corruption has on illicit markets is direct, but not the only factor. Further, the ability of governments to regulate, control and tax this commerce is not straight-forward and overly severe policies can prove detrimental to workers and consumers. And how much do other factors such as financial stability contribute to activity moving outside conventional channels? There is no certain view on these underground markets as we must consider why they exist, for whom they exist and how valuable they are for the good of all.
January 3, 2012
With each year's end, all forms of media spew a tidal wave of predictions. From the apocalyptic to the mundane, we get prognostications on who will win an Oscar, which Republican will win in Iowa, how well the market will do in 2012 and who will win the Super Bowl. But it's not only at year's end that we get a hefty dose of soothsaying. It's a non-stop public obsession. Dare I say, it's an addiction. Predictions are in every facet of society. Within industry, we are constantly trying to gain insight into this month's demand for our products, the level of prices within each product type, and which company will gobble up which other company. That foresight can be a key advantage when competing for resources and competitive superiority is not surprising. What is surprising is the amount of noise pollution and the insatiable desire to listen to that noise.
What is an Expert?
I can still hear my favorite finance professor lecturing about "experts" during one of my b-school classes. He illustrated quite powerfully (obviously, it has stayed with me all these years) how poor economists' predictions were on interest rates, GDP growth, oil prices and stock prices, among many other measures. In article after article, economists, industry experts, political experts and scientific experts were shown to be just slightly better than random guessing. What's worse, most "experts" tended to influence each other, so that consensus predictions prevailed. Economists predicting the direction of interest rates tended to lump together in narrow ranges, which indicated that, working from the same data with the same assumptions, they also tended to produce the same range of estimates.
Risk and Probability
We all know that the future is uncertain and that many unknown factors impact future events, yet we go to great lengths to predict. The bottom line is that we can draw conclusions that are more about probability than pinpoint calculation. If we normalize probabilistic outcomes for earnings per share for Apple this coming quarter, we can estimate EPS within ranges. If we believe published consensus estimates, the mean analyst estimate is 9.81 with a coefficient of variation of 4.39. Statistically, this variance is significant only as history and should not be seen as a predictor, but since analysts do not have crystal balls, they still use it as the main factor for setting probabilities. So, if we conclude there is a 95% chance that EPS will fall within the range of 8.56 to 11.06 (two standard deviations around the mean), we are essentially placing bets based on probability. Now, the valuation of a share of Apple common stock will vary greatly depending on where in this range actual EPS falls. Of course, there is still a 5% chance that EPS falls outside the expected range. And further, these numbers are purely estimates based on one set of assumptions that no two analysts would ever agree on.
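To make the arithmetic concrete, here is a minimal sketch that backs the implied standard deviation out of the stated range. Treating the 8.56 to 11.06 band around the 9.81 mean as plus or minus two standard deviations (which implies sigma of roughly 0.625) is my assumption for illustration, not something the analysts publish:

```python
# Reconstruct the 95% band quoted above. The standard deviation is
# backed out from the stated range, an assumption for illustration only.
mean_eps = 9.81
low, high = 8.56, 11.06

# A 95% band under a normal curve spans about 4 standard deviations.
sigma = (high - low) / 4

lo = mean_eps - 2 * sigma
hi = mean_eps + 2 * sigma
print(f"implied sigma: {sigma:.3f}")
print(f"95% range: {lo:.2f} - {hi:.2f}")
```

The sketch simply shows that the quoted band is internally consistent with a normal-curve assumption; it says nothing about whether next quarter's EPS will actually behave normally.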
When looking at all these predictions, it quickly becomes apparent which "experts" are really viewing their data through a critical lens and which are simply along for the ride, echoing other experts' viewpoints. What I find most discouraging is how confident some prognosticators are, especially those on television and web broadcasts. They emphatically proclaim their view in an effort to persuade viewers they are right, trying to create self-fulfilling prophecies through persuasion; perhaps the most egregious offense. We see this regularly on political discussion panels, where party-aligned or candidate-partial analysts make their case for what people really want and how they will vote. Are they really giving us a scientifically sound viewpoint, or simply trying to manipulate our view of what will be?
Predicting Human Behavior
The digital age has provided a powerful platform for gathering information on individuals' behavior. Companies can gain insight into buying behaviors as well as individual and group interests in entertainment, politics, and professional and social connections. The usefulness of this information is at once obvious and complex. For instance, if we know there is a better than 50% chance that a person buying a camera will also buy a camera case, then it is an effective sales practice to suggest a camera case at the point the buyer selects a camera. This practice is quite common now with online purchasing. We also see it fairly frequently with phone sales and fast food ordering, e.g., "Would you like fries with that?"

Recently, I was shopping for a new lawn mower and researching options on homedepot.com. Within several minutes of performing a search on the site, a pop-up offer to chat with a representative came up. I took that offer, as I had some questions about the models. After about five minutes, the rep offered me a 10% discount if I purchased the item online, and he'd help me through the purchase process. I had already decided to buy the item, so I was happy to get the discount, but I wanted to pick it up at a store near me rather than have it shipped. Their process allowed for this flexibility: I could purchase online, and the item would instantly be put aside for me to pick up that evening. This process was ingenious. What percentage of people shop to the point of sale and then drop, reducing the chance of buying the item through the original site, or perhaps not buying it at all? The discount offered through the chat rep can help homedepot.com reduce that drop percentage. Now, Home Depot does not know whether I would have purchased the item online that day anyway, and it does not know whether I would have gone to the store and paid full price.
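The camera-and-case rule is just a conditional probability estimated from purchase history. A toy sketch with invented transaction baskets (the item names and data are hypothetical):

```python
# Toy sketch: estimate P(case | camera) by counting invented transaction
# baskets, the kind of conditional probability behind cross-sell prompts.
transactions = [
    {"camera", "case", "sd_card"},
    {"camera", "case"},
    {"camera"},
    {"case"},
    {"camera", "case", "tripod"},
    {"sd_card"},
]

def conditional_prob(baskets, given, target):
    """P(target in basket | given in basket), estimated by counting."""
    with_given = [b for b in baskets if given in b]
    if not with_given:
        return 0.0
    return sum(target in b for b in with_given) / len(with_given)

p = conditional_prob(transactions, "camera", "case")
print(f"P(case | camera) = {p:.2f}")   # 3 of the 4 camera baskets include a case
if p > 0.5:
    print("Suggest a case at checkout.")
```

With real retailers this runs over millions of baskets, but the decision rule ("suggest when the estimated probability clears a threshold") is the same shape.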
The fact is, there will be some percentage of margin they relinquish for the sake of capturing a higher percentage of potential sales. Home Depot, like so many other retailers, banks, insurance companies and drug companies, is in the business of prediction: predicting what you will do if they communicate with you in a certain way at a certain time.
Statistical Sampling – A Foundation for Predictions
Statisticians often tell a joke about a man with his head in a refrigerator and his feet in an oven: on average, he feels about right. Sampling is used to determine probabilities and to make decisions about the level of risk we are taking. It's sampling and probability that determine the rates we pay for life insurance, car insurance and every other type of insurance. So, when we try to predict Apple's earnings per share for next quarter, or whether a catastrophic disaster will strike an Asia-Pacific country next year, we must calculate the odds, the percentages, the probabilities. The fact remains, however, that not all outcome distributions fall into a normal curve, and highly unlikely events can be game changers.
Prediction should be all about risk, uncertainty and likelihood, but what you'll hear this week and throughout the year is a chorus of experts telling you with great certainty what the future will bring. Don't believe them. If you're jonesing for advice, try listening to those who provide detail on probability, risk and trends. But know that the future is never about certainty and always about probability. When prognosticators get it right, they were most often just plain lucky. They may have played the odds. They may have had some truly intuitive insight that others did not. But there is never a sure thing.
November 17, 2011
I'm old enough to have been on the leading edge of personal computing technology. At eighteen, I sold my beloved drum kit to buy one of the first IBM PC computers: no hard drive, just two floppy drives, one for the program and one for data. I was obsessed with specifications and design, reading every computer magazine I could find and comparing the Commodore 64 to the Kaypro II or the TRS-80. These early machines had big differences, and as in the early days of the automobile, you had to be part engineer to know what you were looking at. For the next twenty-plus years, while consolidation occurred and the PC/Windows platform dominated, hardware specifications were always a big factor when choosing a computer. Clock speed, chip type, drive capacity, cache, video card, etc., all became part of the lexicon for computer shoppers. But this has changed in a big way over the past few years, and good riddance. MG Siegler's recent post on TechCrunch clearly articulates this change in the computer market. Reading it brought me back to when I first adopted portable digital music technology, using MP3 and various devices.
The iPod Experience
The iPod had been released, but I was still very much a PC guy, and I just wanted a small player that fit in my pocket and cost no more than about $150. I bought a product from a company called Digital River that used Microsoft Media Player, or some earlier version of it, and had loads of problems. Often, songs didn't sync or were lost. As a user, I found the method of ripping music, loading it onto the player, and managing my library confusing, and it simply did not work as I would expect. My user experience was poor, and after several attempts at other Microsoft-platform products, I eventually ended up with an iPod. Once I had that first iPod, it was clear to me that the platform worked well and my experience was dramatically improved. What Apple understood was how to approach the product from the end user's perspective and make it easy and pleasurable. The design work that went into the iPod was not only elegant and simple; the entire ecosystem, including iTunes, was also a vast improvement over the alternatives. Quite simply, it worked. And it worked significantly better than anything else out there.
Now, as I work with software technology within Life Sciences companies, I'm seeing a similar trend in approach. No longer is it as important to focus on and flaunt incremental improvements in specifications as it is to understand and address the end-user experience. Product design is not only about the specifications but about how end users respond to the product itself. Last week I met with a medical device manufacturer to discuss my company's software product. During a break, members of one of their product teams tested a device in our conference room, and one of our team members was asked to try it and provide feedback from the user's perspective. He was able to give the product team advice on what he was experiencing and how it was working for him. Ultimately, the product's success in the market will be determined more by how users react to using it than by the specifications listed on the label.
Mr. Daniel R. Matlis has again written an apropos article entitled "Is That Car a Medical Device?". In it, he interviews Robert B. McCray, President and CEO of the Wireless Life Sciences Alliance (WLSA). It's a fascinating view into what is happening at the forefront of medical device development and how wireless technology is improving patients' health management. The key development with these technologies is not just the capabilities the devices deliver; it's how the patient experiences the device's use that will determine its success. Mr. Matlis notes, "A great example of the use of convergence [medical technology, connectivity and consumer devices] to support this challenge is Ford's In-Car Health and Wellness Solutions. Researchers at Ford, in partnership with Medtronic and WellDoc, have developed a series of in-car health and wellness apps and services aimed at monitoring people with chronic illnesses or medical disorders so they can manage their condition while on the go." Several major advancements are at work in these developments: faster delivery of information to patients, improved monitoring of their conditions, and, perhaps most importantly, fewer diagnoses and treatments required thanks to real-time monitoring. In turn, it's the experience of patients that will drive adoption of these technologies.
The Internet of Things
The expression "The Internet of Things" sounds kind of goofy – a bit like the title of a children's book. The phrase is commonly used to describe the connection of a wide variety of devices to the web. Whether through 3G, 4G, or Wi-Fi, appliances, medical devices, automobiles, you name it, are quickly coming online. They are sending status messages (in TIBCO terms, "events") to the web. For example, I recently purchased an all-electric vehicle, the Nissan Leaf. One of the features of the car is that it sends data wirelessly about the current state of the battery. Using an iPhone app, I can get full details on the battery's charge status, and I can opt to get text messages or email notifications when certain events occur or thresholds are crossed. This capability adds tremendous value to my experience, as I feel in greater control over the remaining battery life, which is the biggest concern when owning this type of car. Similarly, medical devices such as glucose monitors can be enabled to send data directly to the web, alerting patients via text, email, or even their car speaking to them when specific conditions exist. Now, I'm not certain of the exact capabilities that WLSA, Ford, or others will bring to market, but one thing is certain: the customer experience will determine how well the product is accepted and demanded in the marketplace.
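The threshold-notification pattern described above can be sketched in a few lines of Python. Everything here is hypothetical: the event schema, the threshold values, and the names are my own illustration, not Nissan's or TIBCO's actual APIs.

```python
from dataclasses import dataclass

@dataclass
class BatteryEvent:
    """A single status message sent by the vehicle (hypothetical schema)."""
    vehicle_id: str
    charge_pct: float  # state of charge, 0-100

def check_thresholds(event, low_pct=20.0, full_pct=100.0):
    """Return the notifications a threshold crossing should trigger."""
    alerts = []
    if event.charge_pct <= low_pct:
        alerts.append(f"{event.vehicle_id}: battery low ({event.charge_pct:.0f}%)")
    if event.charge_pct >= full_pct:
        alerts.append(f"{event.vehicle_id}: charging complete")
    return alerts

# A small stream of incoming events:
stream = [
    BatteryEvent("leaf-01", 82.0),
    BatteryEvent("leaf-01", 18.5),   # crosses the low-battery threshold
    BatteryEvent("leaf-01", 100.0),  # finished charging
]

notifications = [a for e in stream for a in check_thresholds(e)]
```

The point is that each event is evaluated the moment it arrives; only the crossings (low battery, charging complete) generate a message to the owner, rather than the owner polling a data store.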
Making It Work with All That Data
When I wrote this title, I couldn't help hearing the voice of Tim Gunn from the show "Project Runway" uttering his famous tag line. At the end of the day, organizations bringing products to market are seeing a massive opportunity coupled with a massive challenge. The convergence of medical devices, consumer devices, and ubiquitous connectivity brings enormous potential, and with it the challenge of handling a tidal wave of event data sent constantly by an enormous number of devices. How should that data be managed? Again, taking the customer's perspective, the end user wants this data monitored and reported immediately when specific conditions arise. Especially when we're talking about medical devices monitoring critical patient conditions, the response must be fast. Patients cannot have the data sent to some massive repository, stored, and then queried periodically. That store-and-search method is quickly becoming antiquated.
21st Century Architecture
TIBCO's platform provides what is referred to as event pattern matching, or complex event processing (CEP). These capabilities allow organizations to look for patterns within sets of events that may indicate specific conditions. Within the Life Sciences sector, the movement toward creating greater value and usability with connected devices is growing, and the software platform that supports these capabilities is critical to realizing that value. For medical device companies to provide near real-time status to a patient, the system must use software that can not only capture the data but analyze it as it is received and trigger actions based on user-driven rules. For instance, with the glucose monitor example, each patient may have specific requirements for when and how they are notified by the device. Providing a platform that lets users easily set those rules, and modify them whenever they choose, enables a positive user experience. For insurers, patients who are able to manage their own condition will require less professional consultation, reducing the overall cost of care.
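The per-patient, user-driven rule idea can be illustrated with a toy Python sketch. To be clear, this is my own minimal illustration of the concept, not TIBCO's CEP engine; the patient IDs and threshold values are made up.

```python
# A toy per-patient rule table: each patient sets their own glucose
# thresholds, and readings are evaluated as they arrive rather than
# being stored and queried later.
rules = {
    # patient_id: (low mg/dL, high mg/dL), chosen by the patient
    "patient-a": (70, 180),
    "patient-b": (80, 160),
}

def evaluate(patient_id, glucose_mg_dl):
    """Apply the patient's own rule to a reading the moment it arrives."""
    low, high = rules[patient_id]
    if glucose_mg_dl < low:
        return "alert: low"
    if glucose_mg_dl > high:
        return "alert: high"
    return "ok"

# Readings streaming in from monitors; the same value can be fine for
# one patient and an alert for another, because the rules are personal.
results = [
    evaluate("patient-a", 95),   # within patient-a's range
    evaluate("patient-b", 95),   # within patient-b's range too
    evaluate("patient-a", 65),   # below patient-a's low threshold
    evaluate("patient-b", 170),  # above patient-b's high threshold
]
```

A real CEP engine would match far richer patterns (sequences and combinations of events over time windows), but the core idea is the same: the rule runs against the event stream the instant data arrives, and the patient owns the thresholds.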
As Life Sciences and other sectors fully adopt 21st century information architecture, I see an evolving process management paradigm that, in turn, brings us new customer experiences. As any good Six Sigma expert will tell you, the customer experience drives the requirements for quality. Make it work for the customer experience, and then you know you're on to something.