
Paul Krugman Declares Victory

Paul Krugman put many of his thoughts together here, in “What Do We Actually Know About the Economy? (Wonkish).” Basically he concludes that some economists are confused but Paul Krugman knows a lot (no one has ever accused him of being diplomatic). Of course I agree with him.
However, I am very pleased to note that I finally find one or two points of disagreement.

I’d suggest just clicking the link, but to try to summarize:

“Macroeconomics is better than you think, microeconomics worse, and data are limited”

[skip]

in an important sense the past decade has been a huge validation for textbook macroeconomics; meanwhile, the exaltation of micro as the only “real” economics both gives microeconomics too much credit and is largely responsible for the ways macroeconomic theory has gone wrong.

[skip]

Now, the thing about IS-LM-type analysis is that using it isn’t that big a deal in normal times, but it makes some very strong predictions – predictions very much at odds with many peoples’ priors — about abnormal times. Specifically, this kind of analysis says that when there is a really big adverse shock to demand – say, from the collapse of a major housing bubble – there’s a regime change, and neither monetary nor fiscal policy have the same effects they do in normal times.

On the monetary side, old-fashioned macro says that once interest rates have been driven down to the zero lower bound, monetary policy loses traction.

[skip]

What about fiscal policy? Traditional macro said that at the zero lower bound there would be no crowding out – that deficits wouldn’t drive up interest rates, and that fiscal multipliers would be larger than under normal conditions.

The overall story, then, is one of overwhelming predictive success. Basic, old-fashioned macroeconomics didn’t fail in the crisis – it worked extremely well. In fact, it’s hard to think of any other example of economic models working this well – making predictions that most non-economists (and some economists) refused to believe, indeed found implausible, but which came true. Where, for example, can you find any comparable successes in microeconomics?

By microeconomics he means mostly microeconomic theory (the micro on which the Chicago school decided macro had to be founded), and by data without theory he means accidental theory: basically assuming that any parameter you estimate is stable, so that any estimate reveals the law of motion of the economy.

I have mild criticisms of each of the three parts of the essay.

First, on macroeconomics, we are short one equation. Krugman discusses IS-LM, but 1960s macro was IS-LM plus the Phillips curve. In any case, to complete the model one needs a model of aggregate supply. Krugman doesn’t mention the death, rebirth and re-death of the Phillips curve. 1960s macro implies that wage inflation should be increasing. The change from 10% to 3.9% unemployment with a very modest change in the rate of nominal wage inflation is a mystery. The unreversed decline in the share of labor is a puzzle (not to mention a tough problem for workers). This is also a case in which Paul Krugman in particular made predictions which were contradicted by the data. He mocks those who forecast hyperinflation in 2010, but he forecast deflation; instead wages and prices continued to increase, albeit very slowly. Krugman recognizes that even he didn’t appreciate the 1960s macroeconomists (for example James Tobin) who stressed downward nominal wage rigidity. This shows that off-the-shelf 1960s macro wasn’t a total success (largely because some of it was left on the shelf).

The current puzzle is worse. It has led some people to use the wages taboo site:angrybearblog.com . The failure is the exact opposite of that predicted by Friedman and Lucas, who argued that the correctly understood Phillips curve (as a structural causal relationship) is not a downward-sloping curve but a vertical line. The data seem to think it is pretty much a horizontal line. But changing parameters are a problem for macroeconomics no matter which direction they change.
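To fix ideas, a textbook expectations-augmented Phillips curve (my gloss, not anything in Krugman’s post) is

\[ \pi^{w}_{t} \;=\; \pi^{e}_{t} \;-\; \beta\,(u_{t} - u^{*}). \]

Friedman and Lucas argued that, once expectations fully adjust, the long-run relation is a vertical line at \(u^{*}\); the recent data behave as if \(\beta \approx 0\), that is, a nearly horizontal line.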

On microeconomics, Krugman briefly praises empirical micro, but then goes on to criticize the theory.

I am contrarian enough to immediately try to think of a successful, surprising prediction based on micro theory which non-economists found implausible. One such prediction was popularized by Krugman, who argued that the California electricity crisis would be resolved if the Federal Government put a maximum price on electricity flowing across state lines. The argument was that the crisis was created by electricity companies (including Enron) and that, if they couldn’t charge huge prices to relieve shortages, they wouldn’t create shortages to relieve. The hypothesis was based on a close reading of California’s rules for electricity pricing and the guess that it really was time for some game theory. When the federal regulators finally intervened, the shortages vanished. Then Enron went bankrupt and was investigated, showing that the game theory was exactly correct. I think this was good micro theory. The key point was that economics 101 (really first-semester economics 101) was inadequate, because one can’t assume the wholesale electricity market is perfectly competitive. It is dominated by a few firms, hence the game theory. This shows how good micro is based on sweating the details. Someone not involved in the scam had to read the regulations to figure out how they were being manipulated.

But more generally, Krugman’s review of micro does not correspond to the current balance of articles and citations, because most research is now empirical. Micro theory still exists, but it doesn’t interfere with empirical work in microeconomics.

This brings me to Krugman’s critique of the accidental theorist. He considers how one would go wrong assuming correlations are constant whether or not the economy is in a liquidity trap. This is, indeed, an example of how theory is useful. The theory is very very simple, the interest paid on cash is zero and can’t be negative (except for storage costs).
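To spell the theory out in one line (a standard textbook statement, not a quote from Krugman): since anyone can hold cash at a nominal return of zero, minus a small storage cost \(s\), no one will hold a bond yielding less than that, so

\[ i_{t} \;\ge\; -s, \qquad s \approx 0, \]

and correlations estimated in times when \(i_{t}\) is free to move should not be expected to hold once \(i_{t}\) is stuck at the bound.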

I really agree entirely with Krugman that one can’t analyze data without assumptions, without a specification or a prior or something. But it is a bit odd to call all identifying assumptions “theory”. This is technically true but highly misleading. Non-economists don’t perceive arguments about keeping cash in a safe as theory (although they are theory in a way). Very generally, a lot of the new empirical economics consists of looking for natural experiments, often using the states, the laboratories of democracy, as, well, laboratories.

The theory is also common sense. It is immediately comprehensible to ordinary people, who also find it convincing. It is very, very different from the sterile theory which led macroeconomics astray. It is also very different from the industrial-organization applied game theory which was useful when discussing electricity shortages in California, and, finally, not at all like the theoretical work for which Krugman was awarded a Nobel memorial prize.

Finally, new empirical micro is relevant to macroeconomics. The micro distribution of changes in wages, with a huge spike at zero which appeared around 2009, is very strong evidence for downward nominal rigidity. Basing macro on the assumption that people’s behavior fits micro observations of people’s behavior is a way to microfound macro which is completely unlike the project started in the 70s. I don’t think it should be dismissed as unnecessary, like the failed 1970s effort or like accidental theory.

I also don’t think Krugman dismisses it. But I do think his emphasis is other than ideal.


Reskilling America

Conversable Economist Tim Taylor presents a chart representing spending over a lifetime on education and skills in America.

“Figure 4 (depicted) is from a report by the White House Council of Economic Advisers, titled “Addressing America’s Reskilling Challenge” (July 2018). The blue area shows public education spending, which is high during K-12 years, but the average spending per person drops off during college years. After all, many people don’t attend college, and of those who do many don’t attend a public college. Private education spending shown by the red area takes off during college years, and then trails off through the 20s and 30s of an average person. By about age 40, public and private spending on education and skills training is very low. Spending on formal training by employers, shown by the gray area, does continue through most of the work-life.

The figure focuses on explicit spending, not on informal learning on the job. As the report notes: “Some estimates suggest that the value of these informal training opportunities is more than twice that of formal training.” Nonetheless, it is striking that the spending on skills and human capital is so front-loaded in life. The report cites estimates that over a working lifetime from ages 25-64, the average employer spending per person on formal training totals about $40,000.”


Banging Drum

I almost always agree with Kevin Drum, who is, among other things, a brilliant economist even though (or largely because) he didn’t study economics much in college.

But I don’t entirely agree with his one minute explanation of the importance of the yield curve for macroeconomic forecasting.

the ever-fascinating yield curve, which tracks the difference between long-term and short-term treasury bond yields. Normally the long-term yield is higher to compensate investors for the risk of the economy eventually going sour. But what if you think things are about to get sour really soon? Then you’ll bid down the price of short-term bonds, which increases their yield, and pretty soon long-term yield is less than the short-term yield. The yield curve has “inverted,” which suggests that investors are nervous about a recession hitting.

My comment

I think the yield curve story isn’t that simple, really. First, it always used to be an indicator of monetary policy. The Fed controls short-term interest rates. When it chooses contractionary monetary policy (to fight inflation) it sets high short-term rates. Long-term rates don’t move up as much, because investors are sure the Fed will relent after inflation falls. This was always the normal pattern.

Back in the good old days (before 1999) an inverted yield curve occurred if and only if the Fed was cracking down to fight inflation. Notice the 90s: the yield curve was very close to flat during the whole late-90s boom. What was happening was that the Fed was pressing gently on the brake, worried about inflation, while the magic of the internet (or foolish dot-com mania) kept the economy booming. The alarming exuberance caused the Fed to raise rates in 2000 (not at all trying to prevent Gore from being elected, nooo, Greenspan would never do such a thing). And the bubble burst.

Notice also that the S&L recession happened without a dramatic yield curve inversion. There were these two really smart time-series econometricians, Stock and Watson, who had a model which “predicted” recessions really well. In 1990, it never said a recession was coming. Their explanation was that it detected inflation-fighting recessions — that from WWII until 1990 recessions occurred when the Fed decided to cause a recession to fight inflation (the also very smart Romer and Romer noted that recessions occurred after statements like “we have to cause a recession to fight inflation” appeared in the Fed open market committee minutes).

I’d say a steep yield curve shows a Fed desperately trying to pump up the economy and, therefore, pushing short-term rates far below normal (long-term rates being equal to the short-term rate investors think is normal plus a small term premium, because they know they don’t know what is normal).
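In symbols (the standard expectations-hypothesis decomposition, not Drum’s wording): the \(n\)-period yield is roughly the average of expected future short rates plus a term premium,

\[ i^{(n)}_{t} \;\approx\; \frac{1}{n}\sum_{k=0}^{n-1} E_{t}\!\left[\,i^{(1)}_{t+k}\,\right] \;+\; \tau^{(n)}_{t}, \]

so the curve is steep when the current short rate is pushed far below the level investors expect to prevail on average, and it inverts when the Fed pushes the current short rate well above that expected normal level.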

So the graph shows desperate efforts to stimulate when Republicans are in the White House or Bernanke or Yellen is chair (not that Saint Alan Greenspan was partisan or anything). The flattening just shows that the FOMC is no longer stimulating as hard as it can by keeping the short term rate at 0.25%.

Also the long-term rate which investors now guess is normal is very low. That is called secular stagnation, not incipient recession. Looking at short and long rates separately helps. Both are very low now. In 2000 both were high, as the Fed was fighting the boom (a tiny bit too hard, but it led to a tiny, minuscule recession). 2008 was a strange, strange time when both short- and long-term interest rates were almost zero and yet demand was low. Then zero was not low enough. Now the FOMC thinks zero interest is a bit too low.

So I don’t agree with your story.
In general, economic downturns cause low interest rates, both directly and through active monetary policy. The yield curve slopes up because investors fear the Fed might decide to fight inflation, not because they fear a recession will just happen and drive up interest rates. The causation is that high interest rates cause recessions, not the other way around.

I’d say the issue in 1990 and (much more so) in 2008 was that people expected long-lasting trouble, and therefore persistently low short-term interest rates, so long-term rates were low too. In 2000, and in all recessions after WWII and before the Bush presidencies, the yield curve inverted because short-term interest rates were high because the Fed was pressing on the brake.


Jobs, Jobs, Jobs — GUARANTEED!

The current mania for “job guarantee” policies is making the Sandwichman anxious. I’ve been on the full employment beat for over 20 years so I think I have a pretty good grasp of the terrain. First principle is that there are no panaceas. My favorite policy option — reduction of working time — is not a panacea. Neither is yours.

Like my learned friend Max B. Sawicky, I am in favor of a job guarantee — provided it meets MY criteria. The proposals currently being shopped around don’t. That should not be a fatal flaw. Inadequate policy proposals can serve as the starting point for dialogue that can lead to better proposals. From the left, Matt Bruenig, and from the center(?), Timothy Taylor have offered constructive critiques of the current proposals. I would like to offer a bit of critique from history.


Job Guarantees, Collective Bargaining and the Right to Strike

“Guaranteed jobs programs, creating floors for wages and benefits, and expanding the right to collectively bargain are exactly the type of roles that government must take to shift power back to workers and our communities,” — Senator Kirsten Gillibrand

“By strengthening their bargaining power and eliminating the threat of unemployment once and for all, a federal job guarantee would bring power back to the workers where it belongs.” — Mark Paul, William Darity, Jr., and Darrick Hamilton,

“Support for workers’ right to organize and collectively bargaining would, of course, be part of any such effort.” — Harry J. Holzer

 “This, then, was the broad issue to which Samuelson and Solow’s paper was addressed: Were price stability and full employment – or, as it was sometimes put, were price stability, full employment and collective bargaining – compatible in the America of their times?” — James Forder

Under conditions of full employment, can a rising spiral of wages and prices be prevented if collective bargaining, with the right to strike, remains absolutely free?  Can the right to strike be limited generally in a free society in peace-time? — William Beveridge, Full Employment in a Free Society

Everyone is talking about Job Guarantees these days and no one appears to have thought through the implications of such a policy for collective bargaining with anything like the thoroughness that William Beveridge did in 1946. In 1960, Paul Samuelson and Robert Solow concluded their discussion of full employment and inflation with a disclaimer:

We have not here entered upon the important question of what feasible institutional reforms might be introduced to lessen the degree of disharmony between full employment and price stability. These could of course, involve such wide-ranging issues as direct price and wage controls, anti-union and antitrust legislation, and a host of other measures hopefully designed to move the American Phillips’ curves downward and to the left.

We are told by the adherents of Modern Monetary Theory that inflation is not a problem. The government just sops up inflation by taxing back some of the money it has created to fund the program expenditures. Correct me if I’m wrong, but that seems to be what they say. At the same time, though, advocates of a Federal Job Guarantee tout the increased bargaining power that it would give to workers.

Usually that bargaining power is not specified as collective bargaining power. Harry Holzer’s comment is the exception. Senator Gillibrand’s mention of a Job Guarantee and expanding the right to bargain collectively may have just been a smorgasbord of good things, not meant to imply advocacy of collective bargaining specifically for people in the Job Guarantee program. To use a distinction Richard Freeman and James Medoff adopted from Albert O. Hirschman, the “bargaining power” mentioned by Paul, Darity and Hamilton could as easily refer to the “exit” of individual choice as to the “voice” of collective action.

Well, who doesn’t want to see workers gain more bargaining power? That is not a rhetorical question. To ask it is to call attention to the very powerful political forces that have seen to it, especially over the last 40 years or so, that they don’t. Could it be that the advocates of the Job Guarantee have not done their opposition research? Do they suppose that the regime of supply-side, trickle-down, corporate neo-liberalism was inadvertent?

I am not so certain that the Kochs and the Waltons and Jeff Bezos and Jamie Dimon are going to shrug their shoulders and say, “O.K., workers, your turn now. Best of luck!” Regardless of whatever MMT says about inflation, the “inflation!” card will be played against any proposed job guarantee election platform, as will the “socialism!” card, the “moochers!” card, the “boondoggle!” card, and, yes, even the “lump-of-labor!” card.

In individual terms, bargaining power comes down to the alternative options if one quits a job — the Best Alternative To a Negotiated Agreement (BATNA). Collectively, bargaining power is determined by strike leverage, which is a mutual perception of the relative capabilities of the two parties to endure a prolonged work stoppage. A Job Guarantee would appear to give additional leverage to unions in the event of a work-site closure or the hiring of replacement workers. The amount of leverage depends on what the rules are regarding the eligibility of striking workers for a Job Guarantee. Presumably, workers currently on strike would be ineligible. But what happens if the employer hires scabs (otherwise known as “replacement workers”)? What if the company closes down and moves away? Would there be a waiting period before discharged workers become eligible for the Job Guarantee?

And what about the rights of the Job Guarantee workers themselves to collectively bargain and to strike? Until relatively recently public employees were denied the right to collective bargaining and the right to strike. Even today those rights are not universally acknowledged:

All Government employees should realize that the process of collective bargaining, as usually understood, cannot be transplanted into the public service… A strike of public employees manifests nothing less than an intent on their part to obstruct the operations of government until their demands are satisfied. Such action looking toward the paralysis of government by those who have sworn to support it is unthinkable and intolerable.

Who said that? Governor Scott Walker in 2011? Chris Christie? No, Franklin Delano Roosevelt, in a 1937 letter to the president of the National Federation of Federal Employees. Scott Walker cited FDR in a 2013 speech. Could a Job Guarantee program that denied participants the right to strike become a Trojan horse for rolling back public sector unionism? That is not a rhetorical question.

The conspicuous lacuna in the Job Guarantee literature regarding collective bargaining and the right to strike strikes me as an elephant in the room. The fact that no one talks about it could not conceivably be because no one notices it. For what is at stake here is nothing less than the sovereignty of the State and its monopoly on the legitimate use of violence. In an astonishing paragraph in his essay on the “Critique of Violence,” Walter Benjamin makes this not so much “clear” as available for deciphering.

Benjamin’s provocative claim, distilled from the writings of Georges Sorel and Carl Schmitt, is that “Organized labor is, apart from the state, probably today the only legal subject entitled to exercise violence.” Let that sink in…

Benjamin goes on to offer qualifications and explanations that address the inevitable objections to that statement. By conceding the political right’s standard objection to the labor strike as violent, however, Benjamin — again following Sorel — has isolated and emphasized the one circumstance in which it is not — the revolutionary general strike. This is not to discount the inevitability of retaliatory violence from the State.

The insertion of Benjamin’s argument into the debate on the Job Guarantee idea may seem esoteric to the casual reader. The reason it doesn’t seem esoteric to me is that I have spent the last 20 years studying the history of anti-labor rhetoric of the right and how it gets translated ultimately into seemingly innocuous “policy principles.” Public works as an employment stabilizer sounds like a good idea — what happened to it? Full employment after the war sounds like a good idea — what happened to it? The reduction of the hours of work sounds like a good idea — what happened to it? As John Stuart Mill rightly pointed out, “He who knows only his own side of the case, knows little of that.”


The Relative Price of Housing and Subsequent GDP growth in the USA

The great recession of 2008-9 followed an extraordinary house price bubble. The sluggish recovery was characterized by very slow recovery of residential investment. Oddly, the extensive revision of macroeconomic models which implied a very low probability of great recessions has not involved a focus on housing. Instead it has focused on financial frictions – essentially it is assumed that the 2008-9 recession was extraordinary because a major financial crisis occurred. Dean Baker dissents (as he often does), arguing that the severity of the recession could have been predicted given the massive decline in housing prices and earlier estimates of the effect of home equity on consumption. This note attempts to begin to assess that claim. It also asks whether it is possible to forecast GDP growth over the medium term. Finally, it is part of the Rip Van Keynes series, because I will use an empirical strategy which has been out of fashion for at least four decades – basically an ad hoc OLS regression (sometimes I even include an exponential trend).

The basic result is that if the relative price of housing is high (compared to an exponential trend), then GDP growth over the following 5 years is low (compared to an exponential trend). Aiming to test out-of-sample forecasting, I start by using 20th-century data only.

-7.52 is a fairly impressive t-statistic.
Lnindex L20 is the logarithm of the ratio of the all-transactions house price index to the consumer price index, lagged 20 quarters. Gdp5 is the growth of the logarithm of real GDP over the past 5 years. Quarter is the calendar quarter expressed as a decimal year (so the 4th quarter of 1999 = 1999.75). The data were downloaded from FRED and are described in what might be generously considered a sort of data appendix. One point must be mentioned here – the all-transactions house price index is available only starting in 1975, so the first useful observation is growth of GDP from 1975q1 to 1980q1.
The series are quarterly, so the dependent variable is a moving average of changes summed over 20 quarters. In the crudest attempt to deal with this, I calculate Newey West standard errors with 19 lags. These would be valid if log GDP were a random walk with drift (the constant) and trend (growth slowdown).
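A minimal Stata sketch of the setup just described, assuming the quarterly FRED series have already been merged into a tsset dataset under the hypothetical names hpi (all-transactions house price index), cpi, and rgdp, with quarterly date variable t, and with quarter included as a trend as the exponential-trend language above suggests (the actual series and merge are in the data appendix, not here):

* hypothetical variable names: hpi, cpi, rgdp; quarterly date variable t
gen lnindex = ln(hpi/cpi)                                // log relative price of housing
gen gdp5    = ln(rgdp) - ln(L20.rgdp)                    // growth of log real GDP over the past 5 years
gen quarter = yofd(dofq(t)) + (quarter(dofq(t)) - 1)/4   // decimal year, e.g. 1999q4 = 1999.75
* in-sample regression on 20th-century data, Newey-West standard errors with 19 lags
newey gdp5 L20.lnindex quarter if quarter < 2000, lag(19)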
This regression is at least a hint that 8 years before the great recession began, there was already evidence that extremely high relative price of housing was likely to be followed by low GDP growth. Because the regression is, at best, barely presentable, I focus on out of sample forecasting. pgdp5 is the fitted value which can be considered a very crude forecast of real gdp growth over the following 5 years.
Out of sample the forecasts and outcomes are positively correlated. The correlation of pgdp5 and gdp5 over 2000q1 through 2018q1 is over 0.86. Out of sample forecasts of GDP growth over the following 5 years seem to be quite useful. This may be simply due to the estimated trends.
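Continuing the sketch, the out-of-sample exercise would be something like:

predict pgdp5, xb                      // crude forecast: fitted values from the 20th-century regression
corr pgdp5 gdp5 if quarter > 1999.9    // out-of-sample correlation, 2000q1 through 2018q1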

The following regression shows that forecasts of deviations from trend are correlated with deviations from trend.
. newey gdp5 pgdp5 quarter if quarter > 1999.9, lag(19)

This is a test of out of sample forecasting performance. It is, to put it mildly, rather more successful than out of sample tests of long term macroeconomic forecasts usually are.
The data are, perhaps, more usefully summarized with a graph. Figure 1 (finally) is a scatter of the logarithm of the relative price of housing and GDP growth over the following 5 years.

This ignores even the deterministic trends. Also, the whole sample is graphed. Notably, while some periods show extraordinarily high relative prices of housing and extraordinarily low subsequent real GDP growth, the GDP growth does not look anomalous given the relative price of housing. The computer is not surprised by the severity and duration of the great recession given the early-21st-century housing bubble.
Here are the time series. L20.lnindexm4 is lnindex lagged 20 quarters, minus 4.0 (the base years for the all-transactions housing price index and the CPI are different).

Notice that the first observation for the index lagged 20 quarters is 1980q1 because the index is available from 1975q1 on.
Here are the series of outcomes and forecasts. The only anomaly is that the great recession was so mild: the computer forecast 5-year GDP growth as low as -10%, and it never actually was less than zero. Still, this is unusually successful out-of-sample forecasting of medium-term GDP growth.

Robustness checks etc after the jump. Also this post is available as a pdf here.


LOMPIGHEID: “Omgekeerd omgekeerd.”

Last week I was browsing through one of the books on the shelf at work, which contained three essays by the inter-war German Marxist Karl Korsch. One of the essays, a 1932 introduction to Capital, mentioned a section in Chapter 24, “The So-Called Labour Fund,” as exemplary of Marx’s critique of political economy. The “labour fund” was more commonly known as the wages-fund, the doctrine famously recanted by John Stuart Mill in 1869.

After it had been repudiated in various degrees by the economists who formerly propounded it, the defunct doctrine became a straw man “fallacy” attributed to precisely the trade unionists who had been the targets of the doctrine’s disdain. Marshall dubbed the re-purposed doctrine the fallacy of the fixed work-fund. David F. Schloss christened it the Theory of the Lump of Labour.

As is my habit, I searched on “labour fund” and “lump of labor/labour” to see if anyone had previously made the connection between Marx’s critique in Capital and the ubiquitous attributions of the fallacy by economists to non-economists. What I discovered was a six-page discussion of my own historical investigation by a Belgian economist, Walter Van Trier, published in 2013 in the Belgian journal Over-Werk.

The title of the journal is somewhat of a pun as “over” means both “about” and “above” in Dutch, so it could mean both about work and overwork in English. The word lompigheid also contains a bit of a pun — as one might guess lomp is lump and lompigheit (with a ‘t’) refers to lumpiness, while lompigheid (with a ‘d’) means rudeness or clumsiness.

I am posting below a translation of the section from Van Trier’s article that deals specifically with my analysis of the lump of labor fallacy. The full article, in Dutch, can be found here. Happy May Day!


Job Guarantee versus Work Time Regulation

There has been a bit of commotion recently about the Job Guarantee idea (AKA employer of last resort). I don’t consider myself an opponent of the strategy, but I do have several reservations about its political feasibility, the marketing rhetoric of its advocates, and its economic and administrative transparency. Some of these concerns I share with an analysis presented by Robert LaJeunesse in his 2009 book, Work Time Regulation as Sustainable Full Employment Strategy. For that reason, it would be timely to post an excerpt from Bob’s discussion of “Job guarantees versus work time regulation.”

One thing that has puzzled me about the Job Guarantee rhetoric is the invocation of Hyman Minsky as patron saint of the strategy. There is no question that he advocated a job guarantee with the government acting as employer of last resort. But in the passages I’ve read, the proposal was either incidental to a broader discussion or supplemented with various other proposals, some of which might be regarded as even more far-reaching and controversial than the job guarantee.

For example, a 1968 proposal argued that, “In addition, it will be necessary to restrain profits and investments; in particular, the highly destabilizing tendency for investment demand to explode will have to be brought under control.” Nineteen years later, Minsky supported a proposal for “a maximum of 32 hours of work a week at the minimum wage” but argued it needed to be supplemented by other programs such as a universal, non-means tested child allowance. Both of these proposals were historical and context specific, with the earlier one arising from a critique of LBJ’s War on Poverty and the later one in response to Reagan administration proposals for welfare reform.

The following excerpt is from pages 125-134 of  Work Time Regulation as Sustainable Full Employment Strategy. 


Minimum Wage Effects with Non-Living Wages

I’m teaching “Economics for Non-Economists” this semester. This is an interesting experiment, and is strongly testing my belief that you can teach economics without mathematics so long as people understand graphs and tables. (It appears that people primarily learn how to read graphs and tables in mathematics-related courses. Did everyone except me know this?)

Since economics is All About Trade-offs, our textbook notes that minimum wage increases should also mean some people are not employed. Yet, as I noted to the students, in the past several decades none of the empirical research in the United States shows this to be true. (From Card and Krueger (1994) to Card and Krueger (2000) to the City of Seattle, in fact, all of the evidence has run the other way, as noted by the Forbes link.)

Part of that is intuitive. If you’re running a viable business and able to generate $50 an hour, it hardly makes sense not to hire someone for $7.25, or even $9.25, to free up an hour of your time. The tradeoff is that your workers make more and your customers can afford to pay or buy more. Ask Henry Ford how that worked for him.

The generic counterargument (notably not an argument well-grounded in economic theory) was summarized accurately by Tim Worstall in one of his early attempts to hype the later-superseded initial UW study for the Seattle Minimum Wage Study Team.

[T]here is some level of the minimum wage where the unemployment effects become a greater cost than the benefits of the higher wages going to those who remain in work.

This seems intuitive in the short-term and problematic in the long term, even ignoring the sketchiness of the details and the curious assumption of an overall increase in unemployment (or at least underemployment) if you assume a rising Aggregate Demand environment. To confirm the assumptions would seem to require either a rather more open economy than exists anywhere or a rather severe privileging of capital over labor.*

On slightly more solid ground is the assumption that the minimum wage should be approximately half of the median hourly wage. But then you hit issues such as median weekly real earnings not having increased much in almost forty years, while a minimum wage at half the median nominal wage rate suggests that the Federal minimum wage should be somewhere between about $12.75 and $14.25 an hour. (Links are to FRED graphics and data; per-hour derivations based on the 35-hour work week standard for “full-time.”)
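The arithmetic behind that range, backing the implied weekly figures out of the per-hour numbers above (so the weekly dollar amounts below are illustrative, not quotes from FRED):

\[ \text{implied minimum} \;=\; \frac{1}{2}\times\frac{\text{median weekly earnings}}{35\ \text{hours}}, \qquad \frac{\$892.50}{70} = \$12.75, \quad \frac{\$997.50}{70} = \$14.25 . \]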

So all of the benchmark data indicates that reasonable minimum wage increases will have virtually no effect, and none on established, well-managed businesses. The question becomes: why would that be so?

One baseline assumption of economic models is that working full-time provides at least the necessary income to cover basic expenses. Employment and Income models assume it, and it’s either fundamental to Arrow-Debreu or you have to assume that people either (a) are not rational, (b) die horrible deaths, or (c) both.

If you test that assumption, it has not obviously been so for at least 30 years:


The last two increases of the Carter Administration slightly lag inflation, but they are during a period of high inflation as well; the four-year plan may just have underestimated the effect of G. William Miller. (They would hardly be unique in this.)

By the next Federal increase, though—more than nine years of inflation, major deficit spending, a shift to noticeably negative net exports, and a couple of bubble-licious rounds of asset growth (1987, 1989) later—the minimum wage was long past the possibility of paying a living wage, so any relative increase in it would, by definition, increase Aggregate Demand as people came closer to being able to subsist.

The gap is greater than $1.50 an hour by the end of the 1991 increase. The 1996-1997 increase barely manages to slow the acceleration of the gap (to nearly $1.70), leaving the 10-year gap in increases to require three 70-cent increases just to get the gap back down to $1.86 by their end in 2009.

Nine years later, almost another $1.50 has been eroded, even in an inflation-controlled environment.

Card and Krueger, in the context of an increasing gap between “making minimum wage” and “making subsistence wage,” appear to have discovered not so much that minimum wage increases are not negatives to well-run businesses as that any negative impact of an increase, under the condition that the minimum wage does not provide for subsistence income, will be more than ameliorated by the increase in Aggregate Demand at the lower end.

My non-economist students had very little trouble understanding that.

*The general retort of “well, then, why not $100/hour?” would create a severe discontinuity, making standard models ineffective in the short term and requiring recalibration to estimate the longer term. Claiming that such a statement is “economic reality,” then, would empirically be a statement of ignorance.
