Traditionally, non-commercial banking (i.e., everything except savings deposits and consumer loans) was about one of two things:
Tax arbitrage, or
Regulatory arbitrage.
The rest was window dressing; that is, basic financial intermediation, usually for the purpose of helping Corporate and/or High Net Worth clients.*
That was until the late 1990s and the Noughts, when the third level came to liquidity-prominence:
Credit rating arbitrage
The third is the most chimerical of all, because, unless you’re selling to or buying from the company that is involved (which has correlation issues, as I noted long ago), neither party (in theory) has control over the outcome of events. It’s asymmetric information on both sides: not so much gambling against the house as shooting craps in the alley, not certain whether there is a bobby down the block. All of which is an indirect way of saying: Go Read Kash Mansori. Especially if you think US institutions are managing better than the EU is. (Hint: it may be true on the governance level, but the financial institutions’ exposure appears to tell another story.)
*Think corporate deposits, lines of credit, commercial loans, IPOs that are often used in part to pay off debt, and the like. Normal course of business options, with the selection influenced by tax or regulatory considerations.
Absurd news today from S&P’s credit rating analysts, who have apparently been drinking liberally from the Deficit Crisis Kool-Aid:
NEW YORK (MarketWatch) — Standard & Poor’s cut its ratings outlook on the U.S. to negative from stable on Monday, lighting a fire under Washington’s deficit-reduction debate and sending stock markets sharply lower.
The rating agency effectively gave Washington a two-year deadline to enact meaningful change, just days after House Budget Committee Chairman Paul Ryan and President Barack Obama each outlined their plans for slashing debt. S&P nonetheless kept its highest rating, AAA, on the U.S.
US debt is still rated as AAA, which effectively means that S&P’s rating analysts believe there is a zero percent chance of the US government not making payments on its debt. However, this new “ratings outlook” indicates that they now believe that there’s a reasonable chance that some time within the next two years they will change their mind, and start to believe that there’s a chance — albeit a remote one — of the US government defaulting on its debts.
For perspective, the following chart shows the OECD’s forecast for the burden of debt payments in the US and the world’s other largest developed economies. Net interest payments both this year and next year will be lower than any other major OECD country with the exception of Japan.
Update: NYT offers six more reactions here. (h/t Rebecca)
Ah, but no doubt S&P is worried about what will happen to that debt burden beyond 2012. After all, there are alarming predictions that the currently large budget deficits will continue to be unduly large after 2012, even as the economy recovers.
But deficit projections are notoriously slow to catch up with the business cycle. When the economy is doing well and deficits are small, forecasters tend to look in the rearview mirror and make very rosy projections into the future. And when the economy is doing poorly and deficits are large, forecasters also tend to project doom and gloom going forward.
So let me put up this reminder about how bad, and backward-looking, medium-term deficit forecasts can be. It shows the US government budget balance as forecast by the CBO in 1993 and 1995, and compares those forecasts with what actually happened.
I don’t want to argue that the US has no long-term deficit problems. It does. And steps will need to be taken — when the economy is in good shape — to bring revenues more in line with spending. But the current fear-mongering over the US’s budget deficit is just that: fear-mongering. And today S&P played a shameful role in it.
Well, it seems as if Congressional Republicans are going to propose a complete refashioning of the Medicare program. Specifically, they are going to recommend scrapping Medicare as a provider of health insurance to seniors, and instead replace it with a system that will provide subsidies to individuals who will then buy health insurance from private insurance companies. In other words, they want to get the federal government completely out of the health insurance business for senior citizens.
House Republicans plan to propose Tuesday historic changes to Medicare, Medicaid and other popular programs that pour federal money into Americans’ lives, arguing that a sacrifice now will keep those programs solvent for the future.
…On Medicare, Ryan will propose altering the plan so that the federal government no longer acts as a health insurer for seniors. Instead, he would create what’s called a “premium support plan.” Seniors would pick from a list of private insurance plans, and Medicare would subsidize their coverage.
The idea, again, is to use market competition to create a system with lower costs. Ryan’s plan would not apply to Americans age 55 and older, for whom Medicare would remain under the current system.
The notion that Medicare costs have been rising because it is a government-run health insurance program, or because it is not a “competitive” health insurance program, is odd, to say the least…
Theoretically, economists can list a number of very specific ways in which the markets for health care and health insurance are characterized by market failures. And for those of you who have forgotten your Econ 101 lessons, please recall that economic theory clearly predicts that when there are market failures, there is no reason to expect that competition (i.e., the free market solution) will necessarily produce a good outcome.
Providing yet another example of economic theory matching what we see in the real world quite well, we find that there is absolutely no evidence that competition among private health insurance companies leads to lower costs. The Kaiser Family Foundation conducts a survey of employer-sponsored health insurance programs every year to estimate private health insurance premiums. Health insurance premiums for workers in large companies — those employing 200 people or more, which encompasses about 65% of all workers covered by private, competitive, employer-sponsored health insurance plans — rose by 135% over the ten-year period from 1999 to 2009.
Meanwhile, Medicare spending per person rose by about 103% over the same period. (Note that to get this figure I simply divided total Medicare costs from the CBO (pdf) by the number of Medicare enrollees as provided by Census (pdf).)
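The per-enrollee calculation described above is easy to sketch. The function below shows the arithmetic; the inputs here are illustrative round numbers only, not the actual CBO and Census figures behind the 103% result.

```python
def per_enrollee_growth(cost_start, cost_end, enrollees_start, enrollees_end):
    """Percent growth in spending per enrollee between two years:
    divide total costs by enrollment in each year, then compare."""
    per_start = cost_start / enrollees_start
    per_end = cost_end / enrollees_end
    return 100 * (per_end / per_start - 1)

# Illustrative round numbers (hypothetical): total Medicare outlays in
# $billions and enrollees in millions at the start and end of a decade.
growth = per_enrollee_growth(210, 500, 39, 46)
print(f"{growth:.1f}% growth in spending per enrollee")
```

Note that because enrollment grew over the decade, per-enrollee growth is meaningfully lower than growth in total outlays, which is why dividing by enrollment matters for the comparison with private premiums.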
Given this, I’m really baffled by this repetition of the assertion that more competition in the market for health insurance is the answer. There’s no theoretical justification for it, and no empirical evidence for it. The fact is that people in the US consume more health care services every year. So every year we pay more.
There’s been a bit of discussion floating around about whether the US’s deficit and debt situation makes it appropriate to draw comparisons with Greece. Of course, such a comparison is ridiculous for a number of reasons, not least because the US has its own currency. But Greece has been on my mind lately for unrelated reasons, including the following news:
Euro economists expect Greek default, BBC survey finds Greece is likely to default on its sovereign debt, according to the majority of respondents to a BBC World Service survey of European economists. Two-thirds of the 52 respondents forecast a default, but most said the euro would survive in its current form.
…The forecasters the BBC surveyed are experts on the euro area – they are surveyed every three months by the European Central Bank (ECB) – and as well placed as anyone to peer into a rather murky crystal ball and say how they think the crisis might play out. The survey had a total of 38 replies and two messages came across very strongly.
Not only do I agree that default by Greece on its sovereign debt is quite possible… but I think it increasingly likely that policy-makers in Greece may decide that it is the least bad option at this point, particularly in the face of an increasingly hard-line attitude from Germany regarding bailouts (which will only be reinforced by recent election results).
The problem is easy to lay out: Greece has more debt than it can realistically make payments on, and being a euro country also has a currency over which it has no control. If it had its own currency, it would be in a classic debt crisis similar to several Latin American countries in the 1980s, or possibly Mexico in 1994.
However, it effectively has a fixed exchange rate with the rest of the euro zone, and has invested enormous political and economic capital in maintaining its commitment to the euro. In that sense, the best analogy might be with Argentina in 2001, which was struggling to maintain a rock-solid fixed exchange rate with the US dollar through a currency board arrangement.
Argentina in the late 1990s had a slowing economy, uncompetitive industries, large current account deficits, and a vast amount of external debt denominated in a currency that was not its own. Sound familiar? In an effort to meet its debt payments while simultaneously keeping its exchange rate pegged to the dollar, the Argentine government squeezed and squeezed the economy. Finally, however, the resulting deflation and recession grew so severe that the government collapsed, and in early 2002 a new government dropped the peg to the dollar (after fiddling with a hybrid system with multiple currencies existing simultaneously) and eventually defaulted on its debt.
And look what happened.
From 1999-2002 Argentina suffered through years of a gradually contracting economy as it tried to maintain its peg with the dollar and service its external debts. When it finally dropped the peg in January of 2002 and then defaulted on its external debts, the economy (along with the value of the peso) crashed quite spectacularly.
But after a year or two, things didn’t look so bad in Argentina. And through most of the 2000s, the economy did quite well, despite the loss of the ability to borrow internationally.
I’m not necessarily advocating that Greece follow the same path. However, I do think that the comparison with Argentina in 2001 is a very good one, and because of that, there is indeed a very good chance that policy-makers in Greece in 2011 will reach the same conclusion that policy-makers in Argentina did in 2002.
Last week I took a look at the way that higher labor productivity has not translated into higher worker compensation, particularly during the 1980s and 2000s. This is at odds with classical labor market theory, which suggests that as workers become more productive, their increasing value to firms should cause their wages to be bid higher so that their compensation rises accordingly.
There are a number of possible explanations for the divergence between productivity and compensation, and for how this may play into the broader phenomenon of stagnant wages for average workers. Part of the explanation is that an increasing share of worker compensation takes the form of benefits rather than wages and salaries. As shown in the chart below, fully one-fourth of worker compensation in 2010 took the form of benefits. (Source: BEA personal income data.)
This upward trend has been driven almost entirely by the rise of health care costs in the US, and the corresponding rise in health insurance premiums. Note that the one dip in the series in the late 1990s was due to the widespread implementation of HMOs – but they clearly proved to provide a one-time gain rather than a permanent increase in health insurance efficiency. So part of the reason that workers’ paychecks have not been rising is directly attributable to the rise in health care costs in the US.
But that’s not the whole story, and doesn’t address the question of slowly growing total compensation (as opposed to stagnant wages). There are, I think, reasonable arguments to be made about social and political factors, such as the decline in the power of unions. Along similar lines, Mike Konczal recently wondered to what degree this could be due to the Fed’s consistent and explicit desire to prevent wage increases.
And then there’s plain old supply and demand as a possible explanation. What did the 1980s and 2000s have in common from a macroeconomic point of view? One answer is this: multi-year long periods of high or rising unemployment rates.
The chart below shows, in blue, the seven-year moving average of the portion of increased labor productivity that was paid to workers in the form of higher compensation. During the 1960s and 70s, for example, workers typically received around 80% of gains in labor productivity over any given seven-year period. Then during the 1980s that portion fell to about 40%. Meanwhile, the series in red is the seven-year moving average of the unemployment rate.
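A minimal sketch of how such a series can be computed, assuming annual index series for productivity and real compensation (the function name and the sample data below are hypothetical, for illustration only):

```python
def share_of_gains(productivity, compensation, window=7):
    """For each year, the fraction of trailing `window`-year productivity
    growth that showed up as real compensation growth."""
    shares = []
    for i in range(window, len(productivity)):
        prod_gain = productivity[i] / productivity[i - window] - 1
        comp_gain = compensation[i] / compensation[i - window] - 1
        shares.append(comp_gain / prod_gain)
    return shares

# Hypothetical index series: productivity doubles over seven years while
# real compensation rises 80%, so workers captured ~80% of the gains.
shares = share_of_gains([100, 110, 122, 135, 150, 165, 182, 200],
                        [100, 108, 118, 128, 140, 152, 166, 180])
print(round(shares[0], 3))
```

Averaging this ratio over a trailing window, rather than year by year, smooths out the noise in annual productivity numbers, which is presumably why the chart uses a seven-year moving average.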
To make it a little easier to interpret, I’ve color-coded the 60 years shown in the chart, shading red the periods when workers were losing their share of productivity gains, and green the periods when they were increasing their share. This helps to make it quite clear that “green” times – i.e. times when workers seem to be enjoying more of the gains in productivity – were periods when unemployment was falling. “Red” times (I guess it actually looks more pink than red in this chart) are clearly associated with periods when the unemployment rate was stagnant or rising.
One implication of this is clear: the high unemployment rate in the US right now, which is expected to decline only slowly over the next several years, is likely to mean that it will be a long time before worker compensation begins to rise as rapidly as worker productivity. Put another way, the overall level of high unemployment right now not only has the obviously enormous personal implications for those who are unemployed — it also is likely to seriously affect the compensation of workers who have never lost their jobs, for years and years to come.
I’ve been receiving questions about this week’s rather dramatic appreciation of the yen. Central banks around the world have been intervening today to prevent further volatility in exchange rates, but that still doesn’t explain exactly why currency traders have been so eager to buy yen this week.
There are rarely easy answers to questions involving exchange rate movements. However, I have shared a few thoughts on the subject over at The Street Light.
Yesterday Ezra Klein had a chart (from a paper by Larry Mishel and Heidi Shierholz at EPI) showing that both private sector and public sector wages have been stagnating for the past several years, and have certainly not kept up with productivity growth. I think it’s useful to look at the relationship between productivity and compensation over a longer time horizon.
The following chart shows labor productivity and real hourly compensation since 1950. (Data from the BLS.) Two things strike me particularly about this graph. The first is how closely the two series track each other between 1950 and 1980. During those 30 years labor productivity in the nonfarm business sector of the US economy rose by 92%; real hourly compensation paid to workers rose by a nearly identical 87%. Classical economic theory says that is exactly what we would expect – as workers become more valuable to firms by producing more output with every hour of labor, firms should compete with each other to employ them, driving up wages by an equal amount.
The second striking feature of this picture is, of course, how much the two series have diverged since the early 1980s. Output per hour of work in 2010 was 87% higher than in 1980, while real hourly compensation was only 38% higher.
The table below shows changes in labor productivity and hourly compensation by decade. Again, let me draw your attention to two features. First, this data confirms that the “great productivity slowdown” of the 1970s and 80s seems to have been vanquished; over the past 15 to 20 years US businesses have been improving productivity at rates as high as during the 1950s and 60s. Yet more evidence that Tyler Cowen’s “Great Stagnation” is not a productivity story.
The second remarkable feature of this table is that the vast majority of the gap between productivity and hourly compensation comes from the 1980s and 2000s, while during the 1990s workers shared in productivity gains nearly as fully as they did in the 1960s. And that, of course, leads us directly to the $64,000 question: what was it about the 1980s and 2000s that made it so difficult for workers to reap the fruits of their more productive labor?
More than one-third of all wages and salaries in this country are actually government handouts. We should be alarmed that we’ve become a nation of dependents.
Using data mined from the Bureau of Economic Analysis, TrimTabs Investment Research has found that 35% of wages and salaries this year will be in the form of a government payment. That’s up sharply from 2000, when it was 21%, which is more than double the rate — 10% — of 1960.
The payouts are primarily Social Security and Medicare benefits, and unemployment checks. But they are not limited to those programs.
In any case, we’re seeing before us a disturbing trend. A society can’t survive moving in this direction.
Sigh. Where to begin.
First of all, just to set the record straight: the press is reporting the numbers wrong. The true figure, according to the BEA data, is that about 18% of personal income in 2010 was in the form of transfer payments from the government. Meanwhile, exactly zero percent of wages and salaries were in the form of transfer payments, because wages and salaries were, well, wages and salaries. I suspect that many people are conflating “wages and salaries” with “personal income” as they report this statistic. But there’s actually a big difference, and wages and salaries actually make up only a bit more than half of personal income in the US.
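The suspected conflation is easy to illustrate with arithmetic. The shares below are round illustrative numbers matching the text (transfers roughly 18% of personal income, wages and salaries a bit more than half of it), not the actual BEA figures; dividing transfers by wages instead of by total personal income roughly doubles the reported percentage.

```python
# Illustrative shares, scaled to personal income = 100 (hypothetical
# round numbers, not actual BEA data).
personal_income = 100.0
transfers = 18.0           # government transfer payments
wages_and_salaries = 52.0  # a bit more than half of personal income

share_of_income = 100 * transfers / personal_income      # the correct statistic
share_of_wages = 100 * transfers / wages_and_salaries    # the conflated statistic

print(f"{share_of_income:.0f}% of personal income")
print(f"{share_of_wages:.0f}% of wages and salaries")
```

With these inputs the conflated ratio lands near 35%, which may be how a figure like the one quoted above gets produced from data showing transfers at 18% of personal income.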
Much more importantly, one must realize that of course transfer payments were higher than usual in 2010 – we were emerging from the deepest recession in 75 years. Transfer payments are crucial automatic stabilizers for the economy, and comprise our society’s safety net. They have been operating exactly as they’re supposed to, with payments rising during a recession to make up for the fall in other types of income. When you have the deepest recession since the invention of transfer payments, as we did in 2008-09, then of course you would expect to find them rise to their highest levels ever.
Finally, the alarming statistics cited in such articles are really just due to one, and only one, phenomenon: the incredible and seemingly unstoppable rise in health care costs in the US.
The blue line in the following picture shows transfer payments as a percent of total personal income. The red line shows transfer payments excluding Medicare and Medicaid.
With the exception of health care costs, there’s really no trend to see in this data at all. Really, it’s all about health care costs. Again. Still.
Moody’s has downgraded Greece’s debt to “highly speculative”, prompting an angry response from the finance ministry. Greek bonds fell after the rating agency cut its rating from Ba1 to B1.
Moody’s cited “endemic tax evasion”, “very ambitious” austerity plans, and the possibility that the EU may force a debt restructuring on Greece after 2013 as reasons for its decision.
Greece’s finance ministry said the move was “incomprehensible” and called for tighter regulation of rating agencies.
“Ultimately, Moody’s downgrading of Greece’s debts reveals more about the misaligned incentives and the lack of accountability of credit rating agencies than the genuine state or prospects of the Greek economy,” said the Greek finance ministry in a statement.
“Having completely missed the build-up of risk that led to the global financial crisis in 2008, the rating agencies are now competing with each other to be the first to identify risks that will lead to the next crisis.”
I have to say, I agree completely with the sentiments expressed here by the Greek finance ministry. In my mind, the credit rating downgrades have become a lagging indicator of problems, and have demonstrated little or no predictive power. Take a look at the chart below.
The blue line shows the interest rate spread on long-term Greek government bonds over the equivalent bonds issued by the German government. The other two lines show the ratings assigned to Greek sovereign debt by S&P and Moody’s since January 2008, normalized so that they’re on the same scale.
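Putting two agencies’ letter scales on one numeric axis is straightforward: assign each notch an integer position on the rating ladder. The ladders below are the standard S&P and Moody’s scales from top grade down to B-/B3; the helper function itself is a hypothetical sketch of the normalization, not the exact method used for the chart.

```python
# Each agency's rating ladder, best grade first (standard published scales).
SP_SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
            "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-", "B+", "B", "B-"]
MOODYS_SCALE = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3",
                "Baa1", "Baa2", "Baa3", "Ba1", "Ba2", "Ba3", "B1", "B2", "B3"]

def normalize(rating, scale):
    """Return the rating's position on its ladder (0 = top grade), so
    both agencies' series can be plotted against the bond spread."""
    return scale.index(rating)

# Moody's move from Ba1 to B1, as reported above, is a three-notch cut:
print(normalize("B1", MOODYS_SCALE) - normalize("Ba1", MOODYS_SCALE))
```

Because equivalent notches (e.g. BB+ and Ba1) sit at the same index in both ladders, the two normalized series are directly comparable on the same scale.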
The question I ask myself is this: would paying attention to ratings have helped me make (or save) any money on Greek debt? If we assume that the market is charging the Greek bonds a higher interest rate to compensate for the higher chance that the Greek government will default, then the interest rate spread is a rough measure of the market’s estimation of the likelihood of Greek government default. The market seems to pay little attention to credit ratings, and if I had used the credit ratings as a cue to buy or sell Greek government bonds, I can’t see that it would have provided me with any benefit.
But this has become completely typical of credit ratings in recent years – just think about how badly they mis-rated a giant swathe of assets leading up to the financial crisis of 2008. All in all, I’m not quite sure why we should care about what rating Moody’s and S&P assign to Greece’s sovereign debts – or any other debt, for that matter.