Relevant and even prescient commentary on news, politics and the economy.

A New Pareto Liberal Paradox (reposted from 2004)

(Dan here….lifted from Robert’s Stochastic Thoughts)

One of the core principles of Liberalism is that there must be equality before the law. The law must not discriminate. In practice, this principle is often restricted to citizens, and people are citizens only if they were born in the liberal polity or have the right ancestors. I personally consider this restriction absolutely inconsistent with my core beliefs. In any case, equality before the law is a core principle. Liberals might consider equality of income very important or not at all important, but we must defend legal equality or else we are not liberals.

I naively imagine that I am pretty utilitarian. Consequentialist enough to accept Pareto improvements, anyway. I reconcile my absolute respect for legal equality with my absolute respect for utils ideologically, that is, by convincing myself that reality is such that I can hold both moral beliefs. In plain English, I am deeply convinced that legal equality is not just good in itself but is also the most efficient legal rule. I think that hereditary privilege is not only wrong but also leads to incompetence in key positions.

Comments (0)

Demographics, housing, and the economy

Way back during the Great Recession, I first noted that demographics were about to become a tailwind for the housing market. The argument, in its simplest terms, is that the median age of first-time home buyers is about 30, and the nadir of the “baby bust” was 1973-76. That means that the demographic nadir of the population of first-time home buyers who ultimately drive the market (since everybody else just moves up from one house to another) was in the 2003-06 period.

Yesterday I started looking into quantifying that tailwind. Without getting into too much detail, my suspicion is that it has amounted to an increase in the pool of potential first-time buyers on the order of roughly 250,000 households per year since 2010 — i.e., the increase of each year over the year previous, continuing year after year. That’s just a back-of-the-envelope approximation.
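For anyone who wants to check the arithmetic, here is a minimal sketch of the age-shift calculation. The function name is my own; the 30-year median first-purchase age and the 1973-76 birth trough come straight from the discussion above:

```python
# Back-of-the-envelope sketch of the demographic argument above.
MEDIAN_FIRST_BUYER_AGE = 30  # approximate median age of first-time home buyers

def first_buyer_trough(birth_trough_years):
    """Shift a trough in annual births forward by the median first-buyer age."""
    return [y + MEDIAN_FIRST_BUYER_AGE for y in birth_trough_years]

baby_bust_nadir = [1973, 1974, 1975, 1976]
print(first_buyer_trough(baby_bust_nadir))  # [2003, 2004, 2005, 2006]
```

The same shift applied to the years after the bust is what turns rising births into the rising pool of potential buyers after roughly 2006.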

Lo and behold, Bill McBride a/k/a Calculated Risk posted on a similar subject yesterday, opining that the demographic tailwind was likely to continue for years for both housing and the economy generally, concluding that “My view is this is positive for both housing and the economy, especially in the 2020s.” Then Mike Shedlock a/k/a Mish responded with regard to housing, opining that “On the surface, the demographic trends may appear neutral or slightly favorable…. [but] For now, and the next five years, attitudes and affordability are the key issues. They far outweigh any potential demographic benefit, if any.”

Who’s got the better argument? Because historically we’ve been around this block before, in a pretty big way. You may have heard of it: it was something called the “baby boom.”

In my opinion that history gives us a pretty definitive answer.

Let’s start with the demographic history. Here’s a graph of live births for each year since 1900:

Comments (1)

The consumer edges closer to the precipice

In addition to my “long leading/short leading” model adapted from the work of Profs. Geoffrey Moore and Edward Leamer, and the “high frequency” weekly variation on the same, I also have several “alternate” recession forecasting models. The most noteworthy is really a consumer nowcast. It turns on consumers running out of options to continue increasing purchases (i.e., no cheaper interest rate financing, no real wage increases, and no appreciating assets to cash in). When that happens, and consumers turn more cautious by saving more, a recession begins.
I first posted the model 10 years ago under the title, “Are Hard Times Near?  The great decline in interest rates is ending.”  The history is straightforward.  Since the 1970s, real average hourly earnings had declined.  Average Americans coped by spouses entering the workforce, by borrowing against appreciating assets, and by refinancing as interest rates declined.
By 1995 the spousal avenue peaked.  Borrowing against stock prices ended in 2000.  Borrowing against home equity ended in 2006.  When interest rates failed to make new lows, the consumer was tapped out, and began to curtail purchases.  A recession began – and its effects lingered for a long time. “Hard Times” had indeed begun.
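The logic of that consumer nowcast can be sketched as a simple rule. The function, its inputs, and the zero thresholds below are my hypothetical illustration of the described logic, not the author's actual model:

```python
# A minimal rule-based sketch of the consumer nowcast described above.
# Inputs and thresholds are hypothetical, for illustration only.

def consumer_recession_warning(new_low_in_rates: bool,
                               real_wage_growth: float,
                               asset_appreciation: float,
                               savings_rate_rising: bool) -> bool:
    """Warn when consumers have no remaining way to fund more spending
    and have turned cautious by saving more."""
    tapped_out = (not new_low_in_rates
                  and real_wage_growth <= 0
                  and asset_appreciation <= 0)
    return tapped_out and savings_rate_rising

# Example: no new rate lows, flat real wages, flat assets, rising savings
print(consumer_recession_warning(False, 0.0, 0.0, True))  # True
```

The point of the structure is that all three funding avenues must be exhausted at once; any one of them still open keeps the warning off.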
What does the consumer model show now? I haven’t updated it in about two years, and there have been noteworthy developments. Let’s take a look.
Real hourly wages haven’t increased since last July, are up only 0.1% YoY and barely more in the past two years:
According to Ironman at Political Calculations, real median household income has declined slightly for nearly two years:

Mortgage rates haven’t made a new low since 2013, and if anything are trending up, on the verge of breaking a 30 year trendline:

Comments (0)

Whither Social Capital?

This past Friday there was yet another retirement conference, this time honoring “Mr. Social Capital,” Robert D. Putnam, who is retiring from Harvard’s Kennedy School at age 77.  I was not invited, but I know some people who attended, including my sister and brother-in-law, the latter speaking at the dinner as family, being the brother of Bob’s wife, Rosemary.  As it is, I have known Bob Putnam since before he became Robert D. “Mr. Social Capital” Putnam.  That came especially with the publication in 2000 of his massive hit book, Bowling Alone, in which he presented large amounts of data and arguments about the idea of social capital.  This helped trigger an outright fad among academics and policymakers, including at the World Bank, about the importance of increasing social capital in many nations so as supposedly to improve their economic and social situations.  It also led Putnam to become one of the most frequently cited living social scientists (176,000-plus times and counting, according to Google Scholar).

According to him, the idea has been floating around in and out of public discourse since it first appeared in a report on education in West Virginia a good century ago.  However, it began to pick up serious academic steam in the late 1980s, when sociologist James Coleman began studying and writing about it.  Putnam’s own background was mostly as an expert on Italian politics, but he picked up the idea from Coleman in the early 1990s in a famous book he wrote on Italian democracy.  For him it became a crucial factor in explaining the much better economic and social (and political) performance of northern Italy compared to southern Italy, albeit with a deep historical background.  Whereas substantial portions of northern Italy had been independent city-states with long histories of civic engagement in local ruling groups, most of southern Italy spent several centuries ruled by outside and autocratic Spain, which laid the foundation for the rise of the mafia as a countervailing power; the mafia would come to dominate the society of the Mezzogiorno with its secrecy, lying, and corruption.  One finds lots of people engaged in civic group activities in the North, but not in the South, and I even heard Putnam once give a talk in which he claimed that one could explain 90% of the difference in economic growth rates among Italian regions just by looking at the percentage of their populations that belonged to civic choral groups in the 1870s.  That correlation is probably correct, even if it has almost nothing to do with causation.

Comments (1)

Higher wage growth for job switchers: more evidence of a taboo against raising wages?

Yesterday the Atlanta Fed published a note touting the wage growth for those who quit their jobs and transfer to a different line of work, writing that:

Although wages haven’t been rising faster for the median individual, they have been for those who switch jobs. This distinction is important because the wage growth of job-switchers tends to be a better cyclical indicator than overall wage growth. In particular, the median wage growth of people who change industry or occupation tends to rise more rapidly as the labor market tightens.

The following graph was posted in support of this point:
Essentially the Atlanta Fed is highlighting the orange line as a “better cyclical indicator.”
Is it? There’s no doubt that wage growth among job switchers declined first in the last two expansions. But I would want to see a much longer record before being that confident.
Because what I see in the above graph is a decline among job keepers (the green line) that is only matched by those declines presaging the onset of the last two recessions. Meanwhile the orange line, while still rising, has flattened.
In fact I think the Atlanta Fed’s graph mainly shows evidence of what I highlighted last week as an emerging “taboo” against raising wages — i.e., a stubborn refusal to raise wages even if it would lead to even higher output and gross profits for a net gain.
Once again the JOLTS data gives us a good proxy.  If wage growth is increasing at a “normal” rate compared with previous expansions, there shouldn’t be an inordinate need to change jobs in order to get a raise. Thus the ratio of job changers who quit vs. the number of actual hires should be equivalent to that at similar stages in those expansions. If, on the other hand, employers have become inordinately stingy, quitting is almost essential to get ahead, in which case the ratio of quits to hires should be higher than normal.
Here is what the data shows:
For the last several years, quits have been in the range of 58%-60% of hires, the highest since 2001, and specifically higher than the 56%-58% peak of the last expansion:
In other words, it looks like what the Atlanta Fed’s graph is showing is that employees are reacting to the taboo against raising wages by quitting their jobs and moving to employers in fields that are already paying more.
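The quits-to-hires proxy is simple enough to compute directly. The monthly levels below are invented for illustration; only the 56%-60% ranges cited above are the relevant real-world benchmarks:

```python
# Sketch of the quits-to-hires proxy described above.
# The JOLTS-style levels here are hypothetical, not actual data.

def quits_to_hires_ratio(quits: float, hires: float) -> float:
    """Share of hires accounted for by workers who quit a previous job."""
    return quits / hires

# hypothetical monthly levels, in thousands
ratio = quits_to_hires_ratio(quits=3_400, hires=5_700)
print(f"{ratio:.1%}")  # 59.6% — inside the 58%-60% range the text cites
```

A ratio persistently above the prior expansion's peak is the "stingy employers" signature the post describes.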

Comments (2)

A better name for The Kids Today: iGeneration

(Dan here…better late than not!)

by New Deal democrat

You know the drill. It’s Sunday, so I get to ruminate about all the stuff that isn’t dry economics.
The oldest members of the Millennial generation are 38. Not only do I not think that The Kids Today would want to be lumped in with that age group, but their uncool parents are probably members of that very group!
So what to name the generation that came after the Millennials? Both “post-Millennials” and “Gen Z” are condescending and probably don’t cut it with The Kids Today. Remember, “Gen X” was originally called “the baby bust,” and Millennials were originally called “Gen Y” or “the echo boom,” before catchier names were found.
A good dividing point is whether or not you remember 9/11. If you do, and were born after 1980, you’re a Millennial. If you don’t, you’re not. Most studies seem to agree with this, using 1996 or so as the cut-off year after which you are not a Millennial. A similar if less apocalyptic marker is the Columbine school shooting of 1999. If you remember it, you’re a Millennial. If your schooling always included “active shooter” drills, you’re not.
But while the War on Terror or mass shootings have always been in the background for The Kids Today, everyday life has been dominated by something else.  If you were born after 1996, iPods were always around — and there’s a good chance you owned one. So were cell phones. For most of your youth — *always* for the younger part of this cohort — iPhones and flat-screen TVs have been around, and you have probably had one (or another smartphone) since junior high school. In fact you may spend most of your time glued to one! The term “iGeneration” captures this perfectly.

Comments (16)

Wages and Steel Tariffs (not painfully wonkish)

Paul Krugman demonstrates just how simple models can and should be. He presented a trade model on the New York Times opinion pages and apologised for extreme wonkishness, but I don’t think he had to. His aim is to find an example in which Trump’s tariffs on steel cause lower wages (also for steel workers). The trick here is to make sure that everyone has the same income (“not gonna get into income distribution today”). This means that a reduction in total money-metric welfare implies a reduction of each person’s welfare (including workers in the protected industry).

This “itsy-bitsy teeny-weeny model” is a major demonstration of Krugman’s extraordinary ability to make models simple.

Imagine a world of two countries, Home and Foreign. (That’s an old-fashioned trade theory convention.) In both countries, labor is the only factor of production (not gonna get into income distribution today). There are two goods, cars and steel. Cars are a final good, sold to consumers; steel is an input into car production. I’m going to assume that both countries end up producing cars.

To keep down on notation, I’m going to do some sneaky things with choice of units (another old trade theory tradition.) We assume that in each country building a car requires one unit of labor (which amounts to measuring labor in units of how much it takes to build a car.) We also assume, using the same trick, that building a car also requires one unit of steel.

This leaves us with just two parameters to specify: the amount of labor it takes to make one unit of steel in each country. Call this a in Home and a* in Foreign (stars for Foreign is traditional.) And let’s assume that a* < a. That is, Foreign has a comparative advantage in steel production. Under free trade Home will import steel from Foreign.

Let w and w* be the wage rates in the two countries, in any common unit. Then the cost of producing a car in Home will be w + w*a* (because we’re importing the steel), while the cost of producing a car in Foreign is w* + w*a*. Since both countries will be producing cars, these costs will have to be equal, implying w = w*: wages will be the same.

Now suppose Home imposes a tariff sufficiently high to induce car producers to use domestically produced steel instead. Now Home’s cost of car production is w + wa = w* + w*a*, because car production costs must be equal. And this implies w/w* = (1+a*)/(1+a) < 1. That is, to offset the higher costs imposed by the tariff, Home’s relative wage has to fall – the opposite of what you would expect from a tariff on final goods.

My points (if any) are that it is equally easy to find an example in which tariffs on final goods are bad for everyone in the protecting country, and another model in which tariffs on intermediate goods are bad for workers in the country which doesn’t protect (and neither harm nor benefit people in the country which does protect).

So, model 2: in both countries, 1 unit of labor is needed to make 1 unit of steel. In the Home country, 1 unit of steel and a units of labor are needed to make a car, and in the Foreign country 1 unit of steel and a* units of labor are needed, with a < a*. So with free trade all cars are made in Home. The real wage is 1/(1+a) in both countries. With steel tariffs, foreigners start making cars. Then their wage in terms of cars is 1/(1+a*) < 1/(1+a). So it is about equally easy to get any result one wants.
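Both little models are easy to check numerically. This is only a sketch under the assumptions stated above, with arbitrary illustrative parameter values of my own choosing:

```python
# Numerical check of the two toy trade models above.

def model1_relative_wage(a: float, a_star: float) -> float:
    """Krugman's model: Home's wage relative to Foreign's after a steel
    tariff forces Home to use its own (dearer) steel. Requires a* < a."""
    assert a_star < a
    return (1 + a_star) / (1 + a)

def model2_foreign_real_wage(a: float, a_star: float):
    """Model 2: steel takes 1 unit of labor everywhere; a car needs one
    unit of steel plus a (Home) or a* (Foreign) units of labor, a < a*."""
    assert a < a_star
    free_trade = 1 / (1 + a)        # all cars made in Home
    with_tariff = 1 / (1 + a_star)  # Foreign starts making its own cars
    return free_trade, with_tariff

print(model1_relative_wage(a=0.5, a_star=0.25))      # < 1: Home's relative wage falls
print(model2_foreign_real_wage(a=0.25, a_star=0.5))  # tariff lowers the Foreign real wage
```

Whatever illustrative values you pick (subject to each model's inequality), model 1 gives a relative wage below 1 and model 2 gives a lower Foreign real wage under the tariff, matching the algebra.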

Comments (1)

A Guide to the (Financial) Universe: Part 1

by Joseph Joyce  

A decade after the global financial crisis, the contours of the financial system that has emerged from the wreckage are becoming clearer. While the capital flows that preceded the crisis have diminished in size, most of the assets and liabilities they created remain. But there are significant differences between advanced economies and emerging markets in their size and composition, and those nations that are financial centers hold large amounts of international investments. Moreover, the predominance of the U.S. dollar for official and private use seems undiminished, if not strengthened, despite the widespread predictions of its decline. A guide to this new financial universe reveals a number of features that were not anticipated ten years ago.

Philip R. Lane of the Central Bank of Ireland and Gian Milesi-Ferretti of the IMF in their latest survey of international financial integration (see also here) provide an update of their data on the size and composition of the external balance sheets. Financial openness, as measured by the sum of gross assets and liabilities, peaked on the eve of the crisis, and for most countries has remained approximately the same. But its magnitude differs greatly amongst countries.  Financial openness in the advanced economies excluding the financial centers, as measured by the sum of external assets and liabilities scaled by GDP, is over 300%, which is approximately three times as large as the corresponding figure in the emerging and developing economies. This is consistent with the large gross flows among the advanced economies that preceded the crisis. However, the same measure in the financial centers is over 2,000%. These centers include small countries with large financial sectors, such as Ireland, Luxembourg, and the Netherlands, as well as those with larger economies, such as Switzerland and the United Kingdom.
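The openness measure is simple enough to state in code. This is a sketch of the definition only; the input numbers are hypothetical placeholders, not Lane and Milesi-Ferretti's actual data:

```python
# Sketch of the financial-openness measure used above:
# (external assets + external liabilities) / GDP.
# All figures are hypothetical placeholders for illustration.

def financial_openness(assets: float, liabilities: float, gdp: float) -> float:
    """Gross external position scaled by GDP, expressed as a fraction."""
    return (assets + liabilities) / gdp

# a stylized advanced economy: assets and liabilities each 1.6x GDP
print(f"{financial_openness(assets=1.6, liabilities=1.6, gdp=1.0):.0%}")  # 320%
```

On this scale, the text's financial centers would show inputs an order of magnitude larger relative to GDP, pushing the measure past 2,000%.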

Some advanced economies, such as Germany and Japan, are net creditors, while others, including the U.S. and France, are net debtors. The emerging market nations excluding China are usually debtors, while major oil exporters are creditors. These net positions reflect not only the acquisition/issuance of assets and liabilities, but also changes in their values through price movements and exchange rate fluctuations. Changes in these net positions can influence domestic expenditures through wealth effects. They affect net investment income flows, although these are also determined by the composition of the assets and liabilities (see below). In many countries, such as Japan and the United Kingdom, international investment income flows have come to play a large role in the determination of the current account, and can lead to a divergence of Gross Domestic Product and Gross National Income.

Comments (0)

Minimum Wage Effects with Non-Living Wages

I’m teaching “Economics for Non-Economists” this semester. This is an interesting experiment, and is strongly testing my belief that you can teach economics without mathematics so long as people understand graphs and tables. (It appears that people primarily learn how to read graphs and tables in mathematics-related courses. Did everyone except me know this?)

Since economics is All About Trade-offs, our textbook notes that minimum wage increases should also mean some people are not employed. Yet, as I noted to the students, in the past several decades, none of the empirical research in the United States shows this to be true. (From Card and Krueger (1994) to Card and Krueger (2000) to the City of Seattle, in fact, all of the evidence has run the other way, as noted by the Forbes link.)

Part of that is intuitive. If you’re running a viable business and able to generate $50 an hour, it hardly makes sense not to hire someone for $7.25, or even $9.25, to free up an hour of your time. The tradeoff is that your workers make more and your customers can afford to pay or buy more. Ask Henry Ford how that worked for him.

The generic counterargument (notably not an argument well-grounded in economic theory) was summarized accurately by Tim Worstall in one of his early attempts to hype the later-superseded initial UW study for the Seattle Minimum Wage Study Team.

[T]here is some level of the minimum wage where the unemployment effects become a greater cost than the benefits of the higher wages going to those who remain in work.

This seems intuitive in the short-term and problematic in the long term, even ignoring the sketchiness of the details and the curious assumption of an overall increase in unemployment (or at least underemployment) if you assume a rising Aggregate Demand environment. To confirm the assumptions would seem to require either a rather more open economy than exists anywhere or a rather severe privileging of capital over labor.*

On slightly more solid ground is the assumption that the minimum wage should be approximately half of the median hourly wage. But then you hit issues such as median weekly real earnings not having increased much in almost forty years, while half of the median nominal wage rate suggests that the Federal minimum wage should be somewhere between about $12.75 and $14.25 an hour. (Links are to FRED graphics and data; per-hour derivations based on the 35-hour work week standard for “full-time.”)
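The half-of-median arithmetic can be made explicit. The weekly earnings inputs below are assumptions I chose to bracket the $12.75-$14.25 range cited above; the 35-hour "full-time" standard is the one the text uses:

```python
# Sketch of the half-of-median minimum wage benchmark described above.
# The weekly earnings figures are illustrative assumptions, not FRED data.

FULL_TIME_HOURS = 35  # the "full-time" standard used in the text

def half_median_minimum(median_weekly_earnings: float) -> float:
    """Implied minimum wage: half the median hourly wage."""
    hourly = median_weekly_earnings / FULL_TIME_HOURS
    return hourly / 2

print(round(half_median_minimum(900), 2))    # 12.86
print(round(half_median_minimum(1_000), 2))  # 14.29
```

So median weekly earnings of roughly $890-$1,000 are what make the $12.75-$14.25 range come out of the half-of-median rule.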

So all of the benchmark data indicates that reasonable minimum wage increases will have virtually no effect, and none on established, well-managed businesses. The question becomes: why would that be so?

One baseline assumption of economic models is that working full-time provides at least the necessary income to cover basic expenses. Employment and Income models assume it, and it’s either fundamental to Arrow-Debreu or you have to assume that people either (a) are not rational, (b) die horrible deaths, or (c) both.

If you test that assumption, it has not obviously been so for at least 30 years:

The last two increases of the Carter Administration slightly lag inflation, but they are during a period of high inflation as well; the four-year plan may just have underestimated the effect of G. William Miller. (They would hardly be unique in this.)

By the next Federal increase, though—more than nine years of inflation, major deficit spending, a shift to noticeably negative net exports, and a couple of bubble-licious rounds of asset growth (1987, 1989) later—the minimum wage was long past the possibility of paying a living wage, so any relative increase in it would, by definition, increase Aggregate Demand as people came closer to being able to subsist.

The gap is greater than $1.50 an hour by the end of the 1991 increase. The 1996-1997 increase barely manages to slow the acceleration of the gap (to nearly $1.70), leaving the 10-year gap in increases to require three 70-cent increases just to get the gap back down to $1.86 by their end in 2009.

Nine years later, almost another $1.50 has been eroded, even in an inflation-controlled environment.
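The erosion arithmetic works like compound interest in reverse: a frozen nominal wage loses real value each year at the inflation rate. The 2% rate and nine-year horizon below are hypothetical round numbers for illustration, not the actual CPI record:

```python
# Sketch of the erosion arithmetic above: the real value of a fixed
# nominal minimum wage after `years` of inflation at a constant rate.
# Rate and horizon are illustrative assumptions, not actual CPI data.

def real_value(nominal_wage: float, annual_inflation: float, years: int) -> float:
    """Value of a frozen nominal wage, in starting-year dollars."""
    return nominal_wage / (1 + annual_inflation) ** years

# $7.25 frozen for 9 years at 2% inflation loses about $1.18 of real value
eroded = real_value(7.25, 0.02, 9)
print(round(7.25 - eroded, 2))  # 1.18
```

Even "inflation-controlled" rates around 2% are enough to erode on the order of $1.50 over a decade, which is the magnitude the text describes.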

Card and Krueger, in the context of the increasing gap between “making minimum wage” and “making a subsistence wage,” appear to have discovered not so much that minimum wage increases are not negatives to well-run businesses, as that any negative impact of an increase, under the condition that the minimum wage does not provide a subsistence income, will be more than ameliorated by the increase in Aggregate Demand at the lower end.

My non-economist students had very little trouble understanding that.

*The general retort of “well, then, why not $100/hour” would create a severe discontinuity, making standard models ineffective in the short term and requiring recalibration to estimate the longer term. Claiming that such a statement reflects “economic reality,” then, would empirically be a statement of ignorance.

Comments (14)