Two people were examining the output of the new computer in their department. After an hour or so of analyzing the data, one of them remarked: "Do you realize it would take 400 men at least 250 years to make a mistake this big?"
Unknown
I'm a big fan of Riccardo Rebonato. From his book on interest rate models, a required text in my grad school, to his papers on interest rate measures in the "real world," he's an extremely clear thinker on otherwise murky stuff. I can't recommend his recent book, Plight of the Fortune Tellers, highly enough. If you or your clients are in the business of making tough financial decisions, it's a must-read and enjoyable to boot. Enough gushing (I need payment to go any further ...)
One extremely important concept woven throughout Plight is the difference between the traditional "probability as frequency" concept and the more general Bayesian or "subjective" probability. Probability as a pure frequentist concept is a special case of Bayesian/subjective probability, appropriate when looking at, say, the likelihood of a head after a coin flip. Beyond a belief that the coin is fair, no prior knowledge is necessary to reliably assess the likelihood of such an event. Contrast that with, say, the probability that the Jets win the Super Bowl in 2011, or that the Republicans retake the House in November, or even that gold goes over $1,500 an ounce by year end. These are all events to which we could also assign a probability, though analyzing purely historical data in a frequentist sort of way will yield few helpful results. We are much more inclined to include and use other relevant information, such as the Jets' strong defense going into the next season, the anti-incumbent mood of the electorate, and the growth of global money supplies.
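To make the contrast concrete, here's a toy sketch in Python. The flip data and the Beta(2, 2) prior are purely illustrative assumptions: the frequentist answer is just the observed frequency, while the Bayesian answer blends the data with a prior belief that the coin is roughly fair.

```python
# Frequentist vs. Bayesian estimates of a coin's heads probability.
# The flips and the Beta(2, 2) prior are hypothetical illustrations.
from fractions import Fraction

flips = [1, 1, 0, 1, 0, 0, 1, 1]  # 1 = heads
heads, tails = sum(flips), len(flips) - sum(flips)

# Frequentist: probability of heads is simply the observed frequency.
freq_estimate = Fraction(heads, len(flips))

# Bayesian: start from a Beta(a, b) prior encoding "probably fair,"
# then update. Posterior mean of Beta(a + heads, b + tails) is
# (a + heads) / (a + b + heads + tails).
a, b = 2, 2
bayes_estimate = Fraction(a + heads, a + b + heads + tails)

print(freq_estimate)   # 5/8
print(bayes_estimate)  # 7/12
```

Note how the prior pulls the Bayesian estimate back toward 1/2; with enough data the two estimates converge, which is exactly the sense in which the frequentist view is a special case.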
What does this have to do with the use of raw historical data in financial decision support analytics? A lot. Certain financial questions are better answered using frequentist concepts. Others are far more judgment-based, relying on more subjective criteria and professional experience. But how do you know which situations are which? Though no hard and fast rules exist, there are basically four criteria:
Data frequency - The more relevant data you have, the more inclined you should be toward a frequentist approach.
Time horizon - The longer the horizon of analysis, the more relevant a subjective analysis becomes.
Rarity of event - The rarer the event, the more the analysis calls for a Bayesian/subjective approach.
Time homogeneity of data - Were there regime changes or other tectonic shifts in the underlying phenomena from which the data was gathered? If not, analysis can tend more toward frequentist methods.
So for long time horizons, a scarcity of data, significant changes through time in the realm in which the data lives, and highly improbable events, we land squarely in the realm of subjective probabilities. Though historical/frequentist data isn't ever completely irrelevant, in these circumstances professional judgment of the situation at hand trumps pure number crunching. Unfortunately, from rating agencies to regulators to a large swath of finance professionals, this is not well understood. Things are just much cleaner and simpler if we allow ourselves to believe that 100 data points and a fancy model will yield 99.97% precision. This is a particularly dangerous type of belief in finance, as acutely borne out over the last 18 months.
The good news is that whether frequentist or subjective, widely available probability-based models should always be used to capture risk metrics, evaluate best and worst outcomes, assess breakevens, and ultimately to avoid the ever-pervasive flaw of averages.
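The flaw of averages is easy to demonstrate in a few lines. In this hypothetical sketch, demand is uncertain but its average equals a plant's capacity: plugging the average demand into the payoff looks fine, while averaging the payoff over the whole demand distribution tells a different story.

```python
# The "flaw of averages": evaluating a model at the average input is not
# the same as averaging the model's output. All numbers are hypothetical.
import random

random.seed(42)
capacity = 100.0

def revenue(demand):
    # Nonlinear payoff: sales are capped at capacity.
    return min(demand, capacity)

# Demand is uncertain: uniform between 60 and 140, with mean 100.
demands = [random.uniform(60, 140) for _ in range(100_000)]

plug_in_average = revenue(sum(demands) / len(demands))  # ~100: looks fine
average_of_outcomes = sum(revenue(d) for d in demands) / len(demands)  # ~90

print(plug_in_average, average_of_outcomes)
```

The gap between the two numbers (about 10% here) is exactly what single-point forecasts hide, and exactly what probability-based models surface.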
"The first and most important thing to understand about Monte Carlo is that it is a numerical technique, not a model."
Riccardo Rebonato, Plight of the Fortune Tellers
If you ever hear people talking authoritatively about their powerful "Monte Carlo model," be very suspicious of the message and the messenger. The Monte Carlo numerical method (in contrast to the lovely place on the French Riviera) is no more a "model" than addition is a "model" for ascertaining that two plus two equals four. It is simply a way to perform certain calculations. For any lingering Pythagoreans out there, Monte Carlo is specifically a very efficient way to calculate integrals in high dimensional spaces. In finance, Monte Carlo simulation is used for generating estimated distributions for things like interest rates, equity prices, investment returns, and exchange rates. People who think the Monte Carlo technique is a "model" are confused. My hope is this quick post clears that up and convinces you the distinction is important.
The simple fact of the matter is that once we face a situation that involves more than about three risk factors, Monte Carlo methods are the best we've got for calculating statistics of interest. Modern homo sapiens, with our flat screen TVs, computers, multi-tasking cell phones, iPads, and big brains have simply not invented anything better than Monte Carlo to evaluate these types of problems. And the more complicated the analysis and the more factors there are to analyze, the better Monte Carlo does relative to other approaches. Without getting bogged down in only mildly relevant detail, this is a direct result of Monte Carlo's uniquely wonderful properties in the face of the curse of dimensionality.
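As a small illustration of Monte Carlo as a technique rather than a model, the sketch below estimates a 10-dimensional integral by simple averaging; the integrand and sample count are arbitrary choices of mine. The standard error shrinks like 1/sqrt(N) regardless of dimension, which is why the method survives the curse of dimensionality.

```python
# Monte Carlo estimation of a 10-dimensional integral over the unit
# hypercube. A grid with just 10 points per axis would already need
# 10**10 evaluations; Monte Carlo's error depends only on sample count.
import random

random.seed(0)
d, n = 10, 200_000

def f(x):
    # Hypothetical integrand: (x1 + ... + xd)^2
    return sum(x) ** 2

estimate = sum(f([random.random() for _ in range(d)]) for _ in range(n)) / n

# Exact value for comparison: E[(U1+...+Ud)^2] = d/12 + (d/2)^2 = 25.8333...
print(round(estimate, 2))
```

The same averaging machinery works whether the "integrand" is a polynomial or the discounted cash flows of a structured bond deal, which is the sense in which it's a calculation method, not a model.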
So what? Why should you care? If you're like me, you hear people periodically either dismissing outright the utility of "Monte Carlo models," or alternatively gushing about how amazingly well their "Monte Carlo model" predicts the future. When you hear this now you can rest comfortably in your understanding of the much more moderate truth: neither the naysayers nor the chest thumpers are in a position to properly use Monte Carlo to help make better financial decisions. And properly used, Monte Carlo can absolutely help inform difficult financial decisions. To that end, I leave you with a quote from Mr. Black Swan himself.
"The dividend of the computer revolution to us did not come in the flooding of self-perpetuating email messages and access to chat rooms; it was in the sudden availability of fast processors capable of generating a million sample paths per minute."
Nassim Nicholas Taleb, Fooled by Randomness
It's 2010 and time to raise the bar. Public finance software has remained substantially unchanged for over a decade, and probably more like two. In this day and age, your public finance software, in addition to all the other stuff it's done since TRA86, must add the following five features:
1. Reflect Uncertainty and Quantify Risk
Because so much brainspace is bogged down in satisfying tax rules, coupled with analysts' difficulty implementing models outside the ubiquitous spreadsheet, public finance analytics tend to force single-point forecasts for market elements: variable rates, SIFMA/LIBOR ratios, VRDN support costs, etc. What does this mean? It means uncertainty is modeled with certainty, which today borders on the absurd and exacerbates our species' now well-documented tendency towards overconfidence. Your public finance software needs to at least provide for the identification of cash flow risks and their modeling in a straightforward, efficient way.
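As a sketch of what replacing a single-point forecast might look like, the toy example below simulates variable-rate debt service under a simple mean-reverting rate process. The model and every parameter in it are hypothetical illustrations, not calibrated values.

```python
# Toy sketch: simulate uncertain variable-rate debt service instead of
# assuming one rate forever. The Vasicek-style rate step and all numbers
# below are hypothetical assumptions for illustration only.
import random

random.seed(7)

principal = 10_000_000                       # variable-rate notional
start, mean_rate, speed, vol = 0.02, 0.03, 0.5, 0.01

def total_cost_path(years=10):
    r, costs = start, []
    for _ in range(years):
        # Mean-reverting step with normal shocks, floored at zero.
        r += speed * (mean_rate - r) + vol * random.gauss(0, 1)
        costs.append(principal * max(r, 0.0))
    return sum(costs)

paths = sorted(total_cost_path() for _ in range(10_000))
expected = sum(paths) / len(paths)
worst_5pct = paths[int(0.95 * len(paths))]

print(f"expected 10yr cost: {expected:,.0f}")
print(f"95th percentile:    {worst_5pct:,.0f}")
```

A single-point forecast reports only something like the first number; the gap between it and the tail is the cash flow risk the text says your software should surface.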
2. Visual Interaction with the Problem
Have you seen a video game made any time in the last 5 years? These are true technological achievements. What fraction of the computing power going into a video game rendering the bad guy around the corner finds its way to helping you visualize the complex financial decisions you or your clients make? 0.1%? 1% maybe? Let's raise the bar - make it 10% and let's see what that looks like. Unless you're able to quickly get a visual read of the problem and interact with it visually (move revenue lines, modify risk constraints, etc) your public finance analytics need a makeover.
3. Calculate Solutions Subject to Explicit Risk Constraints
Public finance analytics that solve for the minimum bond size achieving an overall expected debt service shape, wrapping around existing bonds and derivatives as applicable, while also measuring additional marginal risk contribution are now just a baseline. Public finance analytics in 2010 must also allow the user to enter an explicit risk constraint to which the solution is bound. In this way, the user sees the most cost effective, risk-adjusted solution determined from multiple financing sources on a maturity by maturity basis.
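A minimal sketch of what an explicit risk constraint could look like in code: choose the fixed/variable mix that minimizes expected annual cost while keeping the 95th-percentile cost under a user-set cap. The rates, scenario model, and cap are all hypothetical assumptions.

```python
# Hedged sketch: grid-search a fixed/variable mix subject to an explicit
# risk constraint. All parameters are hypothetical illustrations.
import random

random.seed(1)
par, fixed_rate = 50_000_000, 0.045
cap = 2_400_000  # user-set risk constraint on 95th-pct annual cost

# Hypothetical variable-rate scenarios: lognormal around 3%.
scenarios = sorted(0.03 * random.lognormvariate(0, 0.4) for _ in range(20_000))
r_mean = sum(scenarios) / len(scenarios)
r_p95 = scenarios[int(0.95 * len(scenarios))]

def cost_stats(var_share):
    # Annual cost is linear in the rate, so mean and 95th percentile
    # pass straight through the blending formula.
    blend = lambda r: par * ((1 - var_share) * fixed_rate + var_share * r)
    return blend(r_mean), blend(r_p95)

# Keep the cheapest mix whose tail cost satisfies the constraint.
best = None
for share in [i / 100 for i in range(101)]:
    mean, p95 = cost_stats(share)
    if p95 <= cap and (best is None or mean < best[1]):
        best = (share, mean, p95)

share, mean, p95 = best
print(f"variable share {share:.0%}, expected {mean:,.0f}, 95th pct {p95:,.0f}")
```

A real engine would do this maturity by maturity with actual scenario models and existing swaps in the mix; the point is only that the risk cap enters the solve as a hard constraint rather than an afterthought.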
4. Accommodate Swaps and Other Derivatives
Tax-exempt variable rate bonds aren't going away any time soon. Therefore, and despite some reporters' confusion over what "speculation" is, interest rate swaps, caps, and collars probably aren't going away either. If your public finance software doesn't analyze these very non-trivial instruments in very non-trivial ways, you're missing a very big part of the financial analysis for you or your clients.
5. Include Refunded Bond Selection as Integral to Solution
Selecting which bonds to refund isn't always as simple as firing up your refunding screen and grabbing everything above 3% PV savings. A number of interrelated factors go into determining the marginal contribution an additional refunded bond makes to the economics of a refunding. Your public finance analytics should rise to the challenge.
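Here's a toy sketch of why a flat savings screen can mislead. Suppose, hypothetically, that each additional refunded bond adds fixed escrow and negotiation overhead; then a bond that clears 3% PV savings on its own can still subtract value at the margin. All bond data and the cost model are invented for illustration.

```python
# Toy refunded-bond selection by marginal contribution rather than a flat
# "3% PV savings" screen. Candidates and costs are hypothetical.
candidates = [  # (description, par, standalone PV savings)
    ("2014 5.00%", 10_000_000, 550_000),
    ("2016 4.75%", 8_000_000, 310_000),
    ("2018 4.50%", 5_000_000, 165_000),   # 3.3% PV savings standalone
    ("2020 4.25%", 4_000_000, 90_000),
]
overhead_per_bond = 170_000  # hypothetical incremental cost per refunded CUSIP

selected, total = [], 0.0
for name, par, savings in sorted(candidates, key=lambda c: -c[2]):
    marginal = savings - overhead_per_bond
    if marginal > 0:  # keep only bonds that add value at the margin
        selected.append(name)
        total += marginal

print(selected)  # the 2018 bond drops out despite 3.3% standalone savings
print(f"net PV savings: {total:,.0f}")
```

Real refunding economics involve far richer interactions (escrow yields, call dates, negative arbitrage), but even this toy version shows a candidate passing the flat screen while failing at the margin.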
If you're an issuer, the features above offer you a deeper understanding of what's going on with your capital structure. This makes you a far more knowledgeable consumer of banking and advisory services. If you're a pubfin banker or advisor, these features are a prerequisite to demonstrating you understand your clients.
The world is changing fast... and public finance is no exception.