Central bank stress tests: mad, bad, and dangerous.

By Kevin Dowd

In my youth it was said that what was too silly to be said may be sung. In modern economics it is put into mathematics.

--Ronald Coase

One of the most important aspects of the remarkable transformation of central banking following the onset of the 2008 global financial crisis is the growth of regulatory stress tests for the larger banks. The relevant regulator--typically the central bank--uses these to determine the ability of the banks to withstand stress, and uses the results of the tests to assess the overall financial health of the banking system. A key purpose of the stress tests is to reassure the public that the banking system is sound.

When putting banks to such a test, the relevant authority starts by imagining some stress scenario(s) to which banks might be exposed--these are effectively just guesses pulled from thin air--and uses a bunch of models based on a bunch of further guesses to determine how the scenario(s) will affect the banks' capital adequacy (i.e., their ratio of capital to assets) over the course of the stress period. It then passes or fails individual banks according to whether their capital ratio has remained above some minimum by the end of that period. To take a typical example, in its latest (2014) stress tests, the European Central Bank (ECB) assumed a single scenario, took the capital ratio to be the ratio of Common Equity Tier 1 (CET1) capital to risk-weighted assets (RWA), and set a minimum required CET1/RWA ratio of 5.5 percent. Any bank that maintained a CET1/RWA ratio of at least 5.5 percent through the end of the stress period was deemed to have passed, and any bank whose ratio fell below that minimum was deemed to have failed.
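The pass/fail rule itself is trivial. As a minimal sketch--not the ECB's actual implementation, just the 5.5 percent threshold quoted above applied to hypothetical figures--it amounts to:

```python
def passes_stress_test(cet1: float, rwa: float, minimum: float = 0.055) -> bool:
    """Pass iff the end-of-stress CET1/risk-weighted-assets ratio
    stays at or above the required minimum (5.5% in the ECB's 2014 test)."""
    return cet1 / rwa >= minimum

# A hypothetical bank ending the stress period with CET1 of 6
# against risk-weighted assets of 100 scrapes through; one with
# CET1 of 5 does not.
print(passes_stress_test(cet1=6.0, rwa=100.0))   # -> True
print(passes_stress_test(cet1=5.0, rwa=100.0))   # -> False
```

All of the complexity--and all of the trouble--lies in how the end-of-stress CET1 and RWA figures are produced in the first place.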

These regulatory stress tests are the ultimate in the appliance of financial "rocket science" to the banking system, and many of the models themselves are derived from the physical science models used so successfully in real rocket science. However, by their very nature, all these models--the financial models and the stress tests themselves--are impenetrable black boxes to any outsider, and we are asked to take their reliability on trust. The analogy with rocket science, though appealing and even comforting, then breaks down in two critical respects:

* Real rocket science is grounded in the science of physics, and the laws of physics are well established. By contrast, so-called financial rocket science is merely a set of beliefs and practices based on sets of convenient assumptions that ape some of the assumptions made in physics, but are wide of the mark as descriptions of how financial markets really work.

* We know that the methodology underpinning real rocket science actually works because it is scientifically tested, but we have no such assurance with its financial and central bank equivalents. Indeed, going further, we can say, with confidence, that we know that the methodologies underpinning both financial models and regulatory stress tests do not work: the stress tests provide an extremely unreliable radar system.

My purpose in this article is to spell out this latter claim--or, more precisely, to assess the methodology of regulatory stress testing both by reference to first principles and by reference to its track record. The results are shocking.

Financial Risk Models Are Worse than Useless

The first point to appreciate is that central bank stress tests are based on models of financial risk--models that predict potential losses and their associated probabilities--and these models are not so much useless as worse than useless. More precisely, the stress tests are dependent on risk models because they make use of risk-weighted asset measures that are dependent on the risk models. These models are useless at predicting financial losses and worse than useless as risk management tools because of their game-ability and the false risk comfort that they provide.

Consider the foundations of risk modeling. The first is the standard assumption that financial returns (or losses) follow a Gaussian (or normal) distribution. A nice example illustrates that this assumption is impossibly implausible for the large losses that really matter. Back in August 2007, Goldman Sachs' hedge funds were experiencing enormous losses. "We're experiencing 25-sigma [standard deviation] events, several days in a row," explained their CFO, David Viniar (Larsen 2007), the suggestion being that Goldman had been very unlucky as opposed, e.g., to merely being incompetent. Financial commentators were quick to pour scorn on this lame excuse, and 25-sigma events were likened to events one would expect to see on one day in 10,000 or 100,000 years. That's a long waiting time for events that actually happen quite frequently in financial markets.

However, under the Gaussian distribution, the waiting time to observe a single-day 25-sigma event is much, much longer than even 100,000 years. In fact, the waiting time is about 1.3 x 10^135 years--a 136-digit number of years (Dowd et al. 2008: 3). To put this number into perspective, the number of particles in the known universe is believed to be somewhere in the region of 10^80, which is vanishingly small by comparison. To recycle an old Richard Feynman joke, a number like 1.3 x 10^135 is so large that the term "cosmological" hardly suffices; perhaps we should describe it as "economical" instead. Thus, the Gaussian distribution massively underestimates the risks of the really big losses that truly matter.
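The waiting-time arithmetic is easy to reproduce. A sketch using only the Python standard library (assuming the usual convention of 252 trading days per year):

```python
import math

# Daily probability of a return at least 25 standard deviations below
# the mean under the Gaussian assumption: the tail probability Phi(-25),
# written via erfc for numerical accuracy this far out in the tail.
p = 0.5 * math.erfc(25.0 / math.sqrt(2.0))

# Expected waiting time for one such day, at 252 trading days per year.
waiting_years = 1.0 / (p * 252.0)

print(f"P(25-sigma day) = {p:.3e}")                    # ~3.06e-138
print(f"expected wait   = {waiting_years:.2e} years")  # ~1.3e135
```

A probability of roughly 3 x 10^-138 per day translates into the 1.3 x 10^135-year wait cited above--for an event Goldman claimed to have seen "several days in a row."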

A second pillar of risk modeling is the Value at Risk (or VaR) risk measure. This tells us the maximum likely loss that can occur on a position at a certain level of probability--for example, in 99 cases out of 100. In plain English, this definition boils down to the worst we can do if a bad event does not occur. Unfortunately, it tells us nothing about the loss we might experience if a bad event does occur--and it is those very high losses that we should worry about. The VaR is blind to the risks that matter, the ones that can wipe a bank out.
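The point is easy to illustrate with simulated profit-and-loss figures (the distribution below is invented purely for illustration: mostly ordinary Gaussian days, with rare crash days mixed in). Because the crashes occur slightly less than 1 percent of the time, the 99 percent VaR never sees them, while the average loss beyond the VaR is several times larger:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily P&L: 99.5% ordinary Gaussian days, 0.5% crash days.
ordinary = rng.normal(0.0, 1.0, size=99_500)
crashes = rng.normal(-15.0, 2.0, size=500)
pnl = np.concatenate([ordinary, crashes])

# 99% VaR: the loss exceeded on only 1% of days.
var_99 = -np.percentile(pnl, 1)

# Expected shortfall: the average loss on the days beyond the VaR.
es_99 = -pnl[pnl <= -var_99].mean()

print(f"99% VaR            = {var_99:.1f}")  # modest: the crashes hide past it
print(f"expected shortfall = {es_99:.1f}")   # several times larger
```

The VaR comes out near the ordinary Gaussian quantile and looks reassuring; the expected shortfall, which averages over the crash days, tells a very different story.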

A third problem with the risk models is simply that they don't work. One could give many examples (see, e.g., Dowd 2014: 6-8), but Figure 1 suffices. The continuous dark plot shows banks' average risk weight, which includes the impact of risk models; the dashed line shows a primitive metric, bank leverage, the ratio of bank assets to capital, which ignores risk models. The risk-weight plot suggests that risks were continually falling; the leverage plot shows that they were rising up to 2008. As Bank of England economist Andrew Haldane (2011: 3) noted, "While the risk traffic lights were flashing bright red for leverage [as the crisis approached], for risk weights they were signalling ever-deeper green." The risk weights were a contrarian indicator for risk, indicating that risk was falling when it was, in fact, increasing sharply.

There are a host of reasons why the models failed so badly, but only one that matters: gaming. The models were being used not to manage risks, but to game the risk-weighting system. No model can take account of the ways in which it will be gamed, and market players have strong incentives to game the models used to control them.

So why does bad modeling persist? The reason is that banks want bad models because they understate their risks, and the regulatory system endorses bad models because it is captured by the banks.

Most risk modeling is then just a game: banks pretend to model risks, but they are really gaming the risk numbers. This game even has a name: risk-weight optimization. You fiddle with the models to get low risk numbers and you come up with clever securitizations to game the risk-weighting rules. (1) The lower the risk number, the lower the capital requirement, and the more capital can be siphoned off and distributed as dividends or bonuses.
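A stylized numerical sketch shows how the game works (all figures here are invented; the 0 percent and 100 percent weights loosely echo the Basel buckets for highly rated sovereign debt and ordinary corporate loans):

```python
capital = 5.0

# Bank A holds 100 of ordinary loans at a 100% risk weight.
assets_a = 100.0
rwa_a = 100.0 * 1.00

# Bank B has the same capital but doubles its balance sheet,
# repackaged so that 180 of its 200 assets carry a 0% weight.
assets_b = 200.0
rwa_b = 180.0 * 0.00 + 20.0 * 1.00

# Risk-weighted capital ratios: B looks five times "safer"...
ratio_a = capital / rwa_a          # 5%
ratio_b = capital / rwa_b          # 25%

# ...while its leverage (assets/capital) has actually doubled.
leverage_a = assets_a / capital    # 20x
leverage_b = assets_b / capital    # 40x
```

Bank B reports a capital ratio five times higher than Bank A's while carrying twice the leverage--precisely the divergence between the risk-weight and leverage plots in Figure 1.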

In short, the real (though seldom publicly...
