Reading my favorite blogs this week I stumbled across an article posted in the New York Times. It gives a really good history of what happened in the financial community concerning risk management over the last decade. I'll try to summarize it, but here's the link for those of you who have enough time and interest to read it: http://www.nytimes.com/2009/01/04/magazine/04risk-t.html?_r=2&pagewanted=all
For a long time now the financial community (with some notable exceptions) has held the view that, with the common complex statistical and mathematical models, risk was measurable and manageable. All non-statistical risk management approaches had a hard time standing up to this belief.
The most common and accepted of these models is and was VaR - Value at Risk. Value at Risk is a mathematical model developed in the early nineties by "quants" working for JPMorgan. To make it short, VaR estimates the maximum amount of money you should expect to lose over a given period of time at a given confidence level (typically 99%). E.g., there is only a 1% chance that a certain portfolio decreases in worth by more than 50m€ during the next week. The benefits of the VaR approach are that it is easy to understand, can be applied to a single risk as well as to a company's overall risk, and makes risks comparable. This was actually so successful that in the end regulators based the rules for capital requirements (accruals) for banks on this measure. On the negative side, this led to an institutional belief in this system of risk measurement and created a false sense of security among senior managers and regulators.
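To make the numbers in that example a bit more concrete, here is a minimal sketch in Python (my own illustration with made-up figures, not taken from the article) of how such a parametric VaR could be computed, assuming normally distributed portfolio returns with zero mean:

```python
from scipy.stats import norm

def parametric_var(portfolio_value, weekly_volatility, confidence=0.99):
    """One-week parametric (normal) VaR: the loss that is exceeded
    only with probability 1 - confidence, assuming zero mean return."""
    z = norm.ppf(confidence)  # ~2.33 for a 99% confidence level
    return portfolio_value * weekly_volatility * z

# Made-up example: a 1bn EUR portfolio with 2.15% weekly volatility
# gives a one-week 99% VaR of roughly 50m EUR.
print(parametric_var(1_000_000_000, 0.0215))
```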
Another major drawback is hidden in the mathematics used for the calculation. First, those models concentrate on the risks you see and measure, but obviously not on those you never expect to happen. But earthquakes happen and financial catastrophes happen - with tremendous impact! Second, VaR usually assumes a normal distribution and, being a short-term measure, assumes that conditions stay roughly as they are. But if you take the statistical data of a bubble as your reference point, that does not help you predict a downturn. And what some prominent risk wizards like Nassim Nicholas Taleb argue is that focusing on the middle 99% tells you nothing about what happens in the remaining 1%. But exactly there is where the dragons hide, as Marc Groz, another well-known risk consultant, likes to put it. I really like that expression: "dragons" - the events hidden in the low probabilities that cause you to lose millions.
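A tiny numerical illustration of that tail problem (again my own sketch, with a Student-t distribution simply standing in for a fat-tailed reality): how likely is a "5-sigma" loss under the normal assumption versus under fat tails?

```python
from math import sqrt
from scipy.stats import norm, t

# Probability of a loss worse than 5 standard deviations ("a dragon").
p_normal = norm.sf(5)            # ~3e-7 under the normal assumption
p_fat = t.sf(5 * sqrt(3), df=3)  # Student-t with 3 dof has variance 3,
                                 # so rescale to compare at the same sigma
print(p_normal, p_fat, p_fat / p_normal)  # the fat tail is thousands of times more likely
```

The exact numbers don't matter; the point is that a model that quietly assumes normality can be off by several orders of magnitude exactly where it hurts.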
This got even worse when companies incentivized their managers for showing low VaR values while making big profits. As always, people found the easiest way to fulfill those directives and took so-called asymmetric risk positions in their portfolios: positions that look very low-risk, combining small but frequent profits with rare losses. But when those products do produce losses, the losses are huge. This made the VaR values look good, since the low-probability tail is ignored by the VaR mathematics, as mentioned before.
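A quick sketch of why such an asymmetric position sails right under the VaR radar (hypothetical numbers, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical position: a small profit of 1m EUR in 99.5% of weeks,
# a catastrophic loss of 200m EUR in the remaining 0.5%.
pnl = np.where(rng.random(1_000_000) < 0.995, 1.0, -200.0)  # in millions

var_99 = -np.percentile(pnl, 1)  # 99% VaR = loss at the 1st percentile of P&L
print(f"99% VaR:      {var_99:.1f}m")    # about -1.0m, i.e. "no risk at all"
print(f"worst case:   {pnl.min():.1f}m")  # -200.0m, the hidden dragon
print(f"expected P&L: {pnl.mean():.3f}m") # slightly negative on average
```

The 99% VaR even reports a guaranteed gain, because the 0.5% chance of a 200m loss falls entirely into the 1% that VaR simply does not look at.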
For Goldman Sachs, though, exactly these models worked fine. In December 2006 their risk indicators suggested something was wrong, and after a thorough analysis (not only of VaR) they decided to get rid of some of the risk from those mortgage-backed securities that later triggered the current disaster in the financial markets. Goldman thus avoided what the others had to suffer in the summer of 2007.
So from my point of view I agree with Gregg Berman of RiskMetrics, who said "you can't blame math" when the models used were insufficient to begin with, were fed with insufficient data, or management didn't pay attention to what the models told them. Or, as Joe Nocera's article concludes: "When Wall Street stopped looking for dragons, nothing was going to save it. Not even VaR."
Well, Wikipedia delivers a nice definition of VaR:
"For a given portfolio, probability and time horizon, VaR is defined as a threshold value such that the probability that the mark-to-market loss on the portfolio over the given time horizon exceeds this value is the given probability level."
Sounds complicated, but it simply means that with a given probability (mostly 95% or 99%) the loss over a given timeframe will not exceed a certain amount.
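Or, as a formula (my own rendering of that definition, with α as the probability level, e.g. 0.99, and L as the mark-to-market loss over the chosen horizon):

```latex
\mathrm{VaR}_{\alpha}(L) = \inf\{\, l \in \mathbb{R} : P(L > l) \le 1 - \alpha \,\}
```

In words: the smallest loss threshold that is exceeded with a probability of at most 1 − α.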