Prior to the credit crisis, the US combined these functions in the Federal Reserve while the UK kept them separate. That neither structure prevented the crisis strongly suggests that both are fundamentally flawed.
Regular readers know that the fundamental flaw in both structures is the regulators' monopoly on the useful, relevant information: the current asset- and liability-level data at the banks. Because of this monopoly, market discipline cannot be applied to the banks; the markets cannot perform their analytical function and must instead rely on the regulators to perform it.
Unfortunately, as documented by the Bank of England's Andrew Haldane and others, unlike market participants like competitors and credit and equity market analysts, regulators have difficulty translating this data into useful, actionable information.
This is why this blog has recommended the adoption of the FDR Framework. Under this framework, the data is made available to the market participants. They can turn it into useful information for the purposes of market discipline and also to help the regulators.
Assuming that the FDR Framework is adopted to cure the fundamental flaw in current bank supervision and regulation practices, the question becomes: should monetary policy and bank supervision and regulation be separated?
Individuals like Paul Volcker would argue that they should not be separated. His argument is based on the idea that the regulator in charge of monetary policy is also the lender of last resort and, as such, needs to understand the collateral that is pledged to the central bank. As Walter Bagehot described in Lombard Street, central banks are only supposed to lend against good collateral.
Under the FDR Framework, central banks do not need to be responsible for bank supervision and regulation in order to receive the information necessary to determine the quality of the collateral. Like all market participants, they will have access to it. More importantly, the central banks will also see the valuation that the credit markets place on any individual asset that might be pledged as collateral.
Alan Greenspan makes the case for why they should be separated. In a recent post on the Financial Times' A-List blog, Mr. Greenspan discussed why regulators should take more risk and allow banks to operate with lower capital requirements.
Regular readers know that I am not a big fan of higher capital requirements as capital is an easily manipulated accounting construct. Higher capital requirements are not a replacement for ending the regulators' information monopoly and disclosing all the useful, relevant information in an appropriate, timely manner to market participants.
That said, the natural focus of bank supervision and regulation is financial stability. If this regulator is doing its job right, it is working with the markets to take pre-emptive action (which is what market discipline is) to restrain individual banks from taking excessive risk.
What happens if this regulator has to report to an individual who believes in gambling on financial stability with less frequent intervention and lower capital requirements? When this occurred in the US under Chairman Greenspan, the result was a financial crisis of epic proportions.
What follows is a response to some of the issues raised in Mr. Greenspan's post.
Since the devastating Japanese earthquake and, earlier, the global financial tsunami, governments have been pressed to guarantee their populations against virtually all the risks exposed by those extremely low probability events. But should they?
Guarantees require the building up of a buffer of idle resources that are not otherwise engaged in the production of goods and services. They are employed only if, and when, the crisis emerges... The choice by the Bush and Obama administrations to extend guarantees to the financial services industry was a choice to preserve the existing banks. There was a clear alternative. Let the existing banks fail and sell off their assets to new banking firms.
Any excess bank equity capital also would constitute a buffer that is not otherwise available to finance productivity-enhancing capital investment.
The choice of funding buffers is one of the most important decisions that societies must make, whether by conscious policy or by default. If policymakers choose to buffer their populations against every conceivable risk, their standards of living would almost certainly decline....
Buffers are largely a luxury of rich nations...

Perhaps I am missing something, but did the lack of adequate buffers in the recent credit crisis result in the rich nations losing more money than they could afford to lose? If not, then why are the US, Ireland, Portugal, Spain and Greece staring at default and austerity programs?
How much of its ongoing output should a society wish to devote to fending off once-in-50 or 100-year crises? How is such a decision reached, and by whom?
In the 19th century, when caveat emptor ruled, such risk judgments were not separable from the overall price, interest rate and other capital-allocating decisions struck in the marketplace.
Today, while the decisions of what risks to take remain predominantly with private decision-makers, the responses to the global financial and, of course, the Japanese earthquakes have been largely government scripted.

In the 21st century, when the FDR Framework, which combines the philosophy of disclosure with the principle of caveat emptor, is in place, governments will not have to script responses. Investors understand that they are responsible for absorbing all losses, and the government is expected to step in and shut down any bank before taxpayers suffer any losses.
In the immediate aftermath of such crises, it is very difficult to convince people that the recent wrenching events are not likely to recur any time soon, because, with a (very) low probability, they might. This is especially the case having just been through the brunt of a financial crisis that is likely to be judged the most virulent ever.

Actually, it is impossible for the regulators to convince people that they know what they are doing now. Everyone knows that they were in a position to identify and take steps to mitigate the financial crisis but failed to do so.
Without the fundamental change of ending their information monopoly, why should anyone believe that regulators can and will do their job better in the future?
In the wake of the Lehman bankruptcy in 2008, private markets and regulators are requiring much larger capital, ie buffers, to support the liabilities of financial institutions. Had banks and other financial entities maintained adequate equity capital-to-asset ratios before the 2008 crash, then by definition, no defaults or contagion would have occurred as the housing bubble deflated. A resulting recession, though possibly severe, would almost certainly not have been as prolonged or required bail-outs.

Actually, had all the useful, relevant information been disclosed in an appropriate, timely manner to the markets, there would have been no reason to hold much larger capital buffers. The market would have been able to exert market discipline and keep the risk within the capacity of each bank's capital base to absorb.
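Mr. Greenspan's counterfactual — that adequate equity ratios would, by definition, have prevented defaults — rests on simple balance-sheet arithmetic. The toy sketch below illustrates it; all of the figures (asset size, equity ratio, loss) are hypothetical and are not drawn from his post.

```python
# Hypothetical balance sheet: equity capital absorbs asset write-downs
# before creditors (depositors, bondholders) take any loss.
assets = 100.0        # assumed asset book
equity_ratio = 0.10   # assumed 10% equity capital-to-asset ratio
equity = assets * equity_ratio
liabilities = assets - equity

loss = 8.0            # assumed write-down as the housing bubble deflates
remaining_equity = equity - loss

# With a 10% buffer, an 8% loss leaves the bank solvent:
# shareholders are hit, but creditors are made whole, so there is
# no default and no channel for contagion.
solvent = remaining_equity > 0
print(solvent, remaining_equity)
```

The same arithmetic shows the limit of the argument: if the write-down exceeds the buffer (say a 12% loss against 10% equity), the bank is insolvent and the losses spill over to creditors — which is why the size of the buffer, not its mere existence, is what the debate is about.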
Bank managements, currently repairing their demonstrably flawed risk management paradigm, have been moving aggressively to build adequate capital to enable them to lend.

I wonder whether the regulators' discussions of significantly higher capital requirements have had any influence on managements' decisions to retain more capital.
... What is not conjectural, however, is that American policymakers, in recent years, faced with the choice to assist a major company or risk negative economic fallout, have regrettably almost always chosen to intervene.

The choice to intervene is driven by the lack of data available to the market.
If the market had the current asset- and liability-level data for each financial firm, each participant could manage its exposure to every financial firm based on that firm's risk profile. Knowing that it could lose its entire exposure, each financial firm has an incentive not to take on greater exposure than it can afford to lose. With disclosure, the risk of contagion is minimized.
Without disclosure, there is a strong incentive to intervene.
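The exposure-management logic above can be sketched in a few lines. This is an illustrative toy, not a real risk model: the function name, the zero-recovery assumption, and the 5% capital-at-risk budget are all hypothetical choices made for the example.

```python
# Hypothetical sketch: with full asset- and liability-level disclosure,
# a participant can size its line to each counterparty so that the
# expected loss stays within what it can afford to lose.

def exposure_limit(own_capital: float,
                   counterparty_default_prob: float,
                   max_capital_at_risk: float = 0.05) -> float:
    """Cap expected loss to a counterparty at a fraction of own capital.

    Assuming zero recovery on default, expected loss is
    exposure * default probability, so the exposure cap is the
    capital-at-risk budget divided by the default probability.
    """
    budget = own_capital * max_capital_at_risk
    return budget / counterparty_default_prob

# A riskier counterparty (higher default probability, visible to the
# market only if its asset and liability data are disclosed) gets a
# proportionally smaller line.
safe_line = exposure_limit(own_capital=1_000.0, counterparty_default_prob=0.01)
risky_line = exposure_limit(own_capital=1_000.0, counterparty_default_prob=0.10)
print(safe_line, risky_line)  # the safer counterparty gets a 10x larger line
```

The point of the sketch is the input, not the formula: without the disclosed data, no participant can estimate the default probability in the first place, and the incentive to self-limit exposure disappears.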
Failure to act would have evoked little praise, even if no problems subsequently arose; but scorn, and worse from Congress, if inaction was followed by severe economic repercussions.
Regulatory policy, as a consequence, has become highly skewed towards maximising short-term bail-out assistance at a cost to long-term prosperity.

This is a problem that is easily solved by disclosing all the current asset- and liability-level data for each financial institution.