By Jeffrey Reynolds, managing director, Darling Consulting Group
Spring is upon us early in New England, which means sports fans here are shifting focus to Fort Myers, Fla., where the hometown Red Sox are in training. Perhaps one of the most interesting off-season developments was the recent statement by the team’s principal owner, John Henry, that over the past few years the team had relied too heavily on quantitative model assessments.
John Henry is a smart guy. He made his fortune in the hedge fund business, which largely explains why the team has been so committed to using quantitative analytics (a.k.a. “Moneyball”) to pick players for the team. That approach led to some big wins during his ownership tenure, including World Series victories in 2004, 2007, and again in 2013.
No one questions strategies when all goes right, but when things go wrong everything is on the table for debate.
After two straight last-place finishes despite one of the highest payrolls in baseball, some questionable player signings came under review. Specifically, the signing of a perennially portly third baseman nearing the age of 30 is seen as one of the catalysts for a number of management shake-ups. With that, there has been an organizational shift to balance the quantitative (e.g., model predictions of trends) with the qualitative (e.g., an overweight infielder is likely to lose speed and flexibility as he nears 30).
The parallel to banking is uncanny.
April marks my 20th anniversary at DCG, and without a doubt the biggest change in risk management over that time has been the way we use technology and data to make better decisions.
It is often taken as a given that the technology churns the data correctly. Shorter processing times have made more complex calculations possible, and that capability has transformed the market.
Do not get me wrong, modeling and technology certainly have their place in our world and have made it better. If the Titanic had radar capabilities, I never would have had to sit through that movie with my wife.
But while technology and financial modeling can enable you to preempt and sidestep issues, they can also create a false sense of comfort.
Model risk remains
How many financial models does your bank rely on these days?
Probably quite a few: interest rate risk, liquidity, allowance calculations, credit stress testing, budget, customer profitability, loan pricing, deposit pricing, bond accounting, etc. Many numbers are getting churned within the walls of a bank for different end goals.
Generally each model is backed by a program or a spreadsheet, the coding of which is usually tested to make sure it can function correctly.
The key word is “can.”
My experience has been that the math is usually not the culprit when a model is proven grossly inaccurate or there is a financial calamity.
Usually, the problem lies in the data entered.
In most cases, the shortcoming is driven by a failure to collect and feed the right data into the model; a failure to formulate and input the right assumptions; and/or a lack of understanding of the model’s purpose and its inherent limitations.
Don’t let models mask reality
This is not a theory. Historical experience shows that it’s true. Here’s some history for reflection.
• Forgetting the biggest variable of all: humans. There are a couple of generations of bankers who probably do not remember Long-Term Capital Management. LTCM was a hedge fund started by John Meriwether, the man who revolutionized bond trading at Salomon Brothers by introducing quantitative modeling to the profession to uncover spread dislocations that created opportunities.
According to the book When Genius Failed: The Rise and Fall of Long-Term Capital Management, Meriwether and a handful of very smart people were cleaning the market’s clock and produced 40% returns for roughly five years straight. (That select group included Myron Scholes, who won the Nobel Prize for his work with Fischer Black on their famed option pricing model.)
But an inherent model flaw existed, at least according to the account in the book: The model could not fully account for human behavior and the interconnectivity of markets.
When the Asian financial crisis of 1997 was followed by the Russian crisis of 1998, the highly leveraged LTCM lost $4.6 billion in less than four months and nearly brought down the financial system.
Very smart people, armed with the highest-level math and the best of models, were brought down by two things: the fact that nobody can model every possible outcome, and a leverage structure that assumed the models could not be that wrong.
• Pop goes the bubble. More recently, we had the housing bubble that burst in 2008. I recently balanced out my own having to watch “Titanic” by making my wife watch “The Big Short.”
In a nutshell, common sense says that if you aggregate and re-pool the credit sludge of a bunch of mortgage pools, you should have a giant pool of credit sludge. However, through a modeling process by the ratings agencies underpinned by a flawed assumption—that diversification always reduces risk—the seeds of the “Great Recession” were planted.
Both are great examples of the shortcomings of financial models (no matter how complex and no matter the IQ of the designer) and how destructive they can be when used as the key driver of strategic decisions.
Pragmatic use by community banks
I will give you two hypothetical options.
Group “A” has a financial model to measure and manage a risk factor at their bank.
Admittedly, it isn’t the most sophisticated model on the shelf. However, the group embraces risk management and uses the model to challenge their current strategy and financial condition. They know the key inputs, and they know what the key assumptions are. Periodically they change up the assumptions to demonstrate to their risk committee how important they are, and reaffirm everyone’s understanding of how they were developed.
Group “B” also has a model, one that is far superior in math and coding to Group “A’s.” If Group “A’s” model gets to about 85% of the answer, Group “B’s” gets to 95%. However, Group “B” hangs its hat on the reputation of its model and does not challenge it. When the results come out, the group takes them as gospel and rolls out strategic direction under the assumption that they are 100% correct.
If you had to entrust capital to one of these groups to invest, which one would you choose?
Back to spring training
Somewhere in Florida, John Henry is looking at the infield and cannot help but turn his eyes to the left side and third base. There stands the player the computer told the organization to sign. He is in year two of a five-year contract that will pay out a guaranteed $95 million. The player’s skills tailed off badly over the past year, and while the Red Sox PR department says otherwise, he looks a bit heavier.
While many of us in Henry’s shoes would no doubt like to fly back to Boston and take a 7-iron to Carmine (the computer’s nickname), the computer is still with the club. However, Henry put a new regime in charge of on-field decisions, one known for balancing the quantitative with a good dose of the qualitative.
The hope, I would assume, is having someone stand up and say, “I don’t care what Carmine the computer says. Players slow down when they reach their 30s and this guy will probably break down more quickly given his body type.”
Henry is a smart guy, and he did not throw out the baby with the bathwater. After all, Carmine helped drive some really good decisions in the past. The mistake over the last few years was probably an overreliance on the numbers, and not enough reliance on what eyeballs and experience should have told him.
An expensive lesson for him that no doubt can be a lesson for us all.
Don’t let a model fool you
The next time you are in a risk committee meeting and the group glosses over model results compared against policy limits, ask yourself whether the underlying assumptions have been reviewed lately and whether the magnitude of “being off” is understood.
This is the best way to understand model risk, and the exercise might lead to better decisions—and even save you an expensive mistake.
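One practical way to run that exercise is to shock a key assumption and watch how far the answer moves. Below is a minimal sketch in Python: the one-line model, the balance sheet figures, and the deposit-beta assumption are all hypothetical and invented for illustration, not drawn from any real bank or ALM system.

```python
# Hypothetical sketch: how a single assumption (deposit beta) moves a
# model's answer. All names and figures are illustrative only.

def projected_nii(asset_yield, deposit_rate, deposit_beta, rate_shock,
                  assets=100_000_000, deposits=80_000_000):
    """One-year net interest income after a parallel rate shock.

    deposit_beta is the assumed share of the rate shock passed through
    to depositors -- exactly the kind of input a risk committee should
    periodically challenge.
    """
    interest_income = assets * (asset_yield + rate_shock)
    interest_expense = deposits * (deposit_rate + deposit_beta * rate_shock)
    return interest_income - interest_expense

# Rerun the same model under alternative beta assumptions for a +2% shock.
for beta in (0.25, 0.50, 0.75):
    nii = projected_nii(asset_yield=0.045, deposit_rate=0.015,
                        deposit_beta=beta, rate_shock=0.02)
    print(f"deposit beta {beta:.2f}: projected NII ${nii:,.0f}")
```

Even in this toy example, moving the beta assumption from 0.25 to 0.75 swings projected income by $800,000 on a $100 million balance sheet. The point is not the numbers; it is that a committee that has seen the swing knows what “being off” costs.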
About the author
Jeff Reynolds is a managing director at Darling Consulting Group. After serving as an auditor in the insurance and banking industries, Jeff joined DCG in 1996. His analytical and managerial skills led him on a career path within DCG that culminated in his current role as Managing Director. In this capacity, Jeff’s primary responsibility is advising clients on ways to enhance earnings while more effectively managing their risk positions. He regularly assists clients with strategic and capital planning projects and has also served on numerous due diligence teams for client acquisitions. Jeff is a frequent author and speaker on a variety of balance sheet management topics and has served as a guest faculty member for the ABA’s Stonier Graduate School of Banking.