"Horse sense is the thing a horse has which keeps it from betting on people" - W.C. Fields
A bookie matches bets. If a bookie sees lopsided interest in bettors taking the Giants by 3 in Sunday’s big game against the Patriots, she’ll ultimately need to adjust the odds she is offering to get a matched book. The instant before the game starts, you could calculate an implied probability for each team winning based upon the bets in the book. Does the bookie care what these specific probabilities are? Absolutely not. Does the bookie even care which team wins? Not if she’s done her job right. The only thing the bookie cares about is that the sum total of the implied probabilities for the teams adds up to more than 100%. Because the amount over 100% is her vigorish – and that’s how she buys dinner. To borrow a term from asset pricing theory, her position is “risk-neutral”.
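The bookie's arithmetic is easy to sketch. Assuming an illustrative 1.91 decimal line on each side (a standard "-110" price; the numbers are mine, not from any actual book), the implied probabilities sum to more than 100%, and the excess over 100% is the vig:

```python
# Implied probabilities and vigorish from decimal odds -- a minimal sketch
# with hypothetical odds, not data from any real game.

def implied_prob(decimal_odds: float) -> float:
    """Implied probability of an outcome priced at the given decimal odds."""
    return 1.0 / decimal_odds

giants = implied_prob(1.91)    # ~52.4%
patriots = implied_prob(1.91)  # ~52.4%

# The amount by which the implied probabilities exceed 100% is the vig.
overround = giants + patriots - 1.0

print(f"Sum of implied probabilities: {giants + patriots:.1%}")  # ~104.7%
print(f"Vigorish (overround): {overround:.1%}")                  # ~4.7%
```

With a matched book, that overround is the bookie's margin regardless of which team wins.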
Contrast the bookie now with bettors, who are in a different position entirely. Bettors might conduct fundamental research into the severity of center Ryan Wendell's sprained ankle, or how wide receiver Victor Cruz's hamstring is doing. Bettors are obviously interested in the odds and payout from the bookie, but they are concerned as much or more with the real-world likelihood that the Pats will win the day. The core betting decision rests on this latter analysis of the real world. It would certainly be wrong to conclude that the bets in her book gave the bookie all the relevant analysis of the probabilities.
Of course, this describes our financial markets as well. Does a vanilla equity options trader care whether the Jan14 40 MSFT calls expire in the money? Not one bit. His money is made regardless of outcome: he runs a matched book functionally identical to the bookie's. But how about investors who hold a position in those same options? Absolutely they care. And in the case of options there is an important extension to consider. The somewhat radical and, at the time, very unintuitive conclusion of Fischer Black, Myron Scholes, and Robert Merton is that for purposes of pricing an equity option, the trader must assume the growth rate of MSFT to be the risk-free interest rate. The same "risk-neutral" term applies again: the options are fully hedgeable, and as such the underlying growth must be assumed to be the risk-free rate or arbitrageurs will enter the market and make it so. The investor, of course, in order to evaluate a buy or sell decision, must examine the real-world expected growth rate of MSFT; it is the real-world growth that matters. Is there a difference? Over the last 5 years MSFT's annual ROE has averaged about 40.1%; over the same period 6-month T-bills have yielded 0.24% - a big difference.
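The point is visible right in the Black-Scholes formula itself. Here is a minimal sketch of the call price (the inputs are made up for illustration, not actual MSFT market data): notice that the stock's real-world expected growth rate appears nowhere, only the risk-free rate r.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes European call price.

    The real-world drift of the stock is absent -- under the risk-neutral
    argument, only the risk-free rate r enters the formula.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs only (hypothetical spot, vol, and the ~0.24% T-bill rate):
price = bs_call(S=30.0, K=40.0, T=1.0, r=0.0024, sigma=0.30)
```

Whether you believe MSFT will grow at 40% or at 0.24% per year, the trader's price is the same; only the investor's buy/sell decision depends on that belief.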
At Intuitive Analytics we welcome the increased application of asset pricing and valuation theory to the municipal market. We believe it is a very positive and constructive development which ultimately can lead to better decisions under uncertainty. At the same time we believe, particularly in light of recent financial history, that the right models should be used the right way, for the right reasons, and with well-vetted inputs. There is important context to consider in the use of any financial model. Though it might be nice if we all could live in the bookie's world making our vig, states, municipalities, and the dedicated debt managers who serve them live in the real one. They have to take real, unhedged positions; their job is harder.
"Two people were examining the output of the new computer in their department. After an hour or so of analyzing the data, one of them remarked: 'Do you realize it would take 400 men at least 250 years to make a mistake this big?'" - Unknown
I'm a big fan of Riccardo Rebonato. From his book on interest rate models, a required text in my grad school, to his papers on interest rate measures in the "real world," he's an extremely clear thinker on otherwise murky stuff. I can't recommend his recent book, Plight of the Fortune Tellers, more highly. If you or your clients are in the business of making tough financial decisions, it's a must-read and enjoyable to boot. Enough gushing (I need payment to go any further ...)
One extremely important concept woven throughout Plight is the difference between the traditional "probability as frequency" concept and the more general Bayesian or "subjective" probability. Probability as a pure frequentist concept is a special case of Bayesian/subjective probability, appropriate when looking at, say, the likelihood of a head after a coin flip. Outside of a belief that the coin is fair, no prior knowledge is necessary to reliably assess the likelihood of such an event. Contrast that with, say, the probability that the Jets win the Super Bowl in 2011, or the Republicans retake the House in November, or even that gold goes over $1,500 an ounce by year end. These are all events to which we could also assign a probability, though analyzing purely historical data in a frequentist sort of way will yield few helpful results. We are much more inclined to include and use other relevant information such as the Jets' strong defense going into the next season, the anti-incumbent mood of the electorate, and the growth of global money supplies.
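The contrast can be made concrete with a toy coin example (my numbers, using a standard Beta-Binomial setup, which is one common way to formalize a subjective prior):

```python
# Frequentist vs. Bayesian estimates of a coin's heads probability.
# All numbers are illustrative.

heads, flips = 7, 10

# Frequentist point estimate: pure observed frequency, no prior knowledge.
p_freq = heads / flips  # 0.70

# Bayesian: start from a Beta(a, b) prior and update with the same data.
# Beta(50, 50) encodes a strong prior belief that the coin is fair;
# Beta(1, 1) would be a flat "know-nothing" prior.
a, b = 50, 50
p_bayes = (a + heads) / (a + b + flips)  # posterior mean = 57/110 ~ 0.518

print(f"Frequentist estimate: {p_freq:.3f}")
print(f"Bayesian posterior mean: {p_bayes:.3f}")
```

With only ten flips, the prior dominates; as data accumulates, the Bayesian estimate converges toward the frequency, which is exactly the sense in which frequentism is the special case.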
What does this have to do with the use of raw historical data in financial decision support analytics? A lot. Certain financial questions are better answered using frequentist concepts. Others are far more judgment-based, relying on more subjective criteria and professional experience. But how do you know which situations are which? Though no hard and fast rules exist, there are basically four criteria:
Data frequency - the more relevant data you have, the more the analysis inclines towards a frequentist approach.
Time horizon - the longer the horizon of analysis, the more relevant a subjective analysis becomes.
Rarity of event - the rarer the event, the more the analysis calls for a Bayesian/subjective approach.
Time homogeneity of data - were there regime changes or other tectonic shifts in the underlying phenomena from which the data was gathered? If not, analysis will tend more towards frequentist methods.
So for long time horizons, a scarcity of data, significant changes through time in the realm in which the data lives, and highly improbable events, we land squarely in the realm of subjective probabilities. Though historical/frequentist data isn't ever completely irrelevant, in these circumstances professional judgment of the situation at hand trumps pure number crunching. Unfortunately, from rating agencies to regulators to a large swath of finance professionals, this is not well understood. Things are just much cleaner and simpler if we allow ourselves to believe that 100 data points and a fancy model will yield 99.97%-confidence precision. This is a particularly dangerous type of belief in finance, as acutely borne out over the last 18 months.
The good news is that whether frequentist or subjective, widely available probability-based models should always be used to capture risk metrics, evaluate best and worst outcomes, assess breakevens, and ultimately to avoid the ever-pervasive flaw of averages.
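The flaw of averages itself takes one short example to demonstrate (the rate scenarios and strike below are hypothetical): plugging the average scenario into a nonlinear payoff is not the same as averaging the payoff across scenarios.

```python
# The "flaw of averages" in miniature: f(average) != average of f
# for a nonlinear payoff. All numbers are illustrative.

rates = [0.01, 0.03, 0.05, 0.07, 0.09]  # five equally likely rate scenarios
STRIKE = 0.05

def cap_payout(r: float, strike: float = STRIKE) -> float:
    """Payout of a simple interest-rate cap: pays only when rates exceed the strike."""
    return max(r - strike, 0.0)

avg_rate = sum(rates) / len(rates)  # 0.05

# Evaluate at the average scenario: the cap looks worthless.
payout_at_avg = cap_payout(avg_rate)  # 0.0

# Average the payout across scenarios: it clearly isn't worthless.
avg_payout = sum(cap_payout(r) for r in rates) / len(rates)  # 0.012
```

A single "expected case" analysis would have dismissed the cap entirely; only a probability-based model across scenarios reveals its value, which is the point of the paragraph above.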