
Notes Toward Modeling a Risk-Free Rate with Default Possibilities

Brad DeLong asks why it hasn’t been done, if it hasn’t been done.  The biggest problem I can see is that you don’t know how insane the participants are—and that will have a major effect on how much damage is done, and when.

Don’t get me wrong; the damage is already being done; it has been since at least May, and if Barack H. Obama weren’t an idiot, he would have been mentioning that over the past two months.  Unfortunately, the sun is yellow on our world, and counterfactuals are masturbatory, not participatory, acts.

So let’s start with what we know:

i = r + πe

where i is the nominal interest rate, r the real rate, and πe expected inflation.

Nick Rowe apparently would have us believe his (completely understandable) claim that i would not be directly affected by a short-term default. This strikes me as absurd.
Even when the economy is working on all cylinders—when G contributes something around 10-15% of growth at most—reducing G to zero for a week costs about 2% of that 10-15%, or 0.2%-0.3%: noticeable, but arguably rounding error against the difference between π and πe. So, if you assume a short-term issue, you get something like those legendary two weeks from 11-22 September 2001, when only the Saudi Royal Family was spending anything, writ somewhat smaller only because G understates the effect on r.

We can concede that inflation expectations themselves aren’t going to go up independently: any additional borrowing cost will be a drag on r, so it’s not unreasonable to assume that i will be fairly steady—again, assuming a very short-term issue.

But, as often happens, we leave out a variable in our assumptions, simply because we define i as the risk-free rate of return.  Let’s put it back in:

i = (r + πe)*(1 + Pd)

Where (1 + Pd) is 100% plus the probability of a default. (Note that in a model environment, Pd = 0; otherwise, we would not call the rate risk-free: actors have the power to make and manage budgets, including the power to tax to pay for services desired by their plutocrat constituents.)

Finance people will recognize this reduced-form equation: (1 + Pd) = β, the risk of the stock or portfolio in excess of the risk of the market.  For convenience, let’s just call this version Ω. So,

i = (r + πe)*Ω
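A minimal numerical sketch of that adjustment (my illustration, not part of the original post; the values of r, πe, and Pd are assumptions):

```python
# Illustrative sketch of the default-adjusted "risk-free" rate described above.
# All parameter values are assumptions chosen for illustration, not estimates.

def default_adjusted_rate(r, pi_e, p_default):
    """Nominal rate i = (r + pi_e) * Omega, where Omega = 1 + Pd."""
    omega = 1.0 + p_default
    return (r + pi_e) * omega

r = 0.005      # real rate: 0.5% (assumed)
pi_e = 0.02    # expected inflation: 2% (assumed)

for p_default in (0.00, 0.01, 0.05):
    i = default_adjusted_rate(r, pi_e, p_default)
    print(f"Pd = {p_default:.0%}: i = {i:.3%}")
```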

Next comes the hard part: term structure.  Or, as Robert said in a similar context, are you talking about the Federal Funds rate, or the rate on three-month Treasury Bills?

Well, that depends on how long we expect the issue to be an issue.  If Barack “I’m an idiot who stands for nothing and you’ll vote for me anyway because my opponent will be insane” Obama treated this “crisis” the way he treated the last (real) one, he would insist on getting a clean bill raising the debt ceiling passed through both Houses and on his desk for signing by the end of next week.  If he takes it as another chance to blow Cass Sunstein and the rest of his University of Chicago buddies, then it’s a complicated bill that will get a few Congressmen killed* and several others de-elected, and we might be talking weeks.

Right now, the markets are assuming the former.  Let’s be optimistic and assume they’re correct.

Four weeks ago, there was a Treasury Bill auction that produced a yield of 0.00%.  Extending that bill does no harm at all—not even to expected debt totals. (Investors’ mileage may vary, but they bought it with full knowledge of the timing.  And there was another 10% behind them bidding at the same rate.)

Some specific Notes and probably Bonds—it is August—will have coupon payments due on the 15th. But those are coupons, not principal repayment, so again we’re not talking about much of the value of the Note or Bond itself, once you get out to five years or so.

Bills will be a problem.  Short-term notes will be a problem.  Fed Funds is uncharted territory.  Tri-party Repo specifically, and Repo in general, will be a major problem due to questions of collateral value.  And guess who uses those the most?  Hedge funds. The people who have been financing John Boehner’s and Eric Cantor’s campaigns.

So the term structure looks like it would if you’re going into a recession: short-term rates rise significantly, while the longer term securities shift upward a bit. (Select Notes and Bonds with near-term coupons kink the curve, but there’s no certain arbitrage there, especially with transaction costs.  Cheapest-to-deliver calculation is also affected in the futures market. I could go on, but let’s just pretend—correctly—that these are minor issues.)

Because now our “baseline” rate is no longer risk-free—and we’re not certain what Pd is over time.  We know it will return to zero at some point, and we presume (at least at the beginning) that it will be soon. But we also know that there already are follow-on effects, and that they will only get worse. Even if we ignore the effect on G (and therefore i) of a short-term default, we lose our bearings for a while.

So the big question is collateral and spreads.  Been posting Treasuries to borrow against?  Yields up due to Pd > 0, so prices down, so less flow. And probably haircuts due to uncertainty of any return to “risk-free.” Posting Treasuries with a coupon due?  Haircut! Posting Treasuries with a near-term coupon?  Haircut! Posting Munis?  Think Michael Jordan (or Telly Savalas, if you’re Of a Certain Age).  So you can borrow less, and probably have to sell some of your assets.
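To make the collateral arithmetic concrete, here is a toy sketch of how a lower price and a bigger haircut shrink what the same face amount of Treasuries can be borrowed against; all the numbers are my assumptions, not market data:

```python
# Toy repo-collateral arithmetic: cash raised = face * price * (1 - haircut).
# Prices and haircuts below are illustrative assumptions only.

def borrowing_capacity(face, price, haircut):
    """Cash that can be raised against `face` of collateral at `price` with a given haircut."""
    return face * price * (1.0 - haircut)

face = 100_000_000  # $100 million face value of Treasuries posted as collateral

normal = borrowing_capacity(face, price=1.00, haircut=0.02)    # "risk-free" world
stressed = borrowing_capacity(face, price=0.98, haircut=0.08)  # Pd > 0: lower price, bigger haircut

print(f"normal:       ${normal:,.0f}")
print(f"stressed:     ${stressed:,.0f}")
print(f"funding lost: ${normal - stressed:,.0f}")
```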

Which ones?  If we’re lucky, it’s longer-term Treasuries, and some of the yield curve inversion mentioned earlier is reduced.  But the market is going to be less liquid than usual, so maybe some of those other bonds get sold—corporates, for instance.  The bid-offer on Munis is basically going to be zero-coupon bonds at a high discount. (Think fast about how many state and local municipal projects depend on some form of Federal funding.  Then realize that your estimate is probably low by an order of magnitude.)  Or corporations that are dependent on government funds (DoD providers, automobile fleets, interstate paving contractors, power supply and distribution companies, etc.)

In a ridiculously oversimplified model, the spreads simply expand by Ω, with a possible adjustment downward based on direct exposure to government financing. This, again, probably understates the effect.

So, in a closed economy, everyone gets to pretend things are close to the same—just more expensive, with a lot of damage to hedge funds and municipalities and borrowing costs and credit lines. So money supply drops significantly (multiplier effect reduction) even without Fed intervention.  You get an Economic Miracle: reduced supply and higher yields.

But we don’t live in a closed economy.  So there’s another factor.  And I’m running out of Greek letters, so let’s just use an abbreviation everyone knows:

i = (r + πe)*Ω*d(FXd)

where d(FXd) is the change in the FX rate due to the default adding uncertainty to cash flows.
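A sketch of the full expression, treating d(FXd) as a simple multiplicative factor; again, every input is an illustrative assumption of mine:

```python
# Sketch of i = (r + pi_e) * Omega * d(FXd), treating d(FXd) as a simple
# multiplicative factor greater than 1 when default risk weakens the currency.
# All inputs are illustrative assumptions.

def nominal_rate_with_default_and_fx(r, pi_e, p_default, fx_factor):
    omega = 1.0 + p_default
    return (r + pi_e) * omega * fx_factor

baseline = nominal_rate_with_default_and_fx(0.005, 0.02, 0.00, 1.00)
stressed = nominal_rate_with_default_and_fx(0.005, 0.02, 0.05, 1.03)

print(f"baseline (no default risk):   {baseline:.3%}")
print(f"with Pd > 0 and an FX effect: {stressed:.3%}")
```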

That’s right: we get not one but two economic miracles:  (1) domestically, a reduced supply of risk-free securities produces higher yields and (2) internationally, higher yields lead to a depreciation of the domestic currency.

Anyone still wonder why no one wants to build the full model?

*No, I don’t want this to be the scenario.  But if you offered me the bet, I wouldn’t take the under at 0.99.


Modeling Sunshine and Shadows: Inequality, long hours and crisis

Tom Walker
(aka Sandwichman at Ecological Headstand)


Alex Harrowell at A Fistful of Euros sees sunshine beaming from the IMF in a working paper by Michael Kumhof and Romain Rancière that identifies income inequality as a potential source of financial crisis. No shit, Sherlock! Outside of the formal modeling, the proposition hardly sounds remarkable.

As goldilocksisableachblonde noted on Mark Thoma’s site, “This finding is consistent with intuition and common sense , meaning – according to mainstream economic theory backed by models and more models – it’s gotta be wrong.” Kumhof and Rancière themselves note that “the link between income inequality, household indebtedness and crises has been recently discussed…” but they object that the authors “do not make a formally consistent case for that argument.” What they mean by not “formally consistent” is presumably not using a dynamic stochastic general equilibrium (DSGE) model such as they employ. I would like to see K&R try that argument in court.

“Your honor, sir, I object, the videotape of my client breaking open the ATM with a sledgehammer is not a DSGE model and thus is not formally consistent as evidence.”

“Objection overruled.”

But it is well that K&R build their formally consistent model to demonstrate the possibility of something happening, which the rest of us can observe with our naked eyes. This will keep other formally-consistent DSGE model builders busy tinkering with assumptions until they can explain the findings away. I betcha Lee Ohanian could come up with a doozy — and it would get more media!

Anyway, where there is sunshine, there are bound to be shadows and the Sandwichman couldn’t resist the temptation of scouring the working paper for some shade. And here it is:

“Finally, the addition of a shock to workers’ labor supply would help to address an important issue raised by Reich (2010), who emphasizes that in the United States households faced with higher income inequality have employed two other important coping mechanism apart from higher borrowing, namely higher female labor force participation and longer hours. This allowed them to replace some of the lost income, and therefore to limit the amount of additional borrowing.”

Now I haven’t read Bob Reich’s new book but I did the next best thing. I saw him talk about it at a book tour event in Point Reyes Station in October. Reich’s argument is that 1. incomes have stagnated since the early 1980s 2. the first response of households was to increase hours supplied to the labor market to maintain purchasing power but when that strategy ran up against its limit, 3. households began to borrow aggressively. I think Reich has the ingredients right but they’re in the wrong chronological order. That can be crucial when you’re baking a cake or explaining history. Long before incomes began to stagnate, hours of work ceased a century long trend of decline, a trend that BLS economist Joseph Zeisel had called “one of the most persistent and significant trends in the American economy in the past century.”

Not to put too blunt a point on it, Americans suddenly stopped taking part of the gains of technology in the form of leisure. It’s not as if they “just decided” to do this, either. There were all sorts of structural changes in the U.S. labor market that broke the trend. Just to name a few: there was the abandonment of the shorter-hours employment strategy by organized labor in favor of promoting economic growth fueled by government spending; there was the explosion of per-employee benefits (quasi-fixed costs) as a proportion of total compensation; and there were the FLSA provisions themselves, which, in effect, were a double-edged sword with regard to the incentive of overtime pay.

From a long-term historical perspective, hours of work stagnated before wages did. I’m well aware of the post hoc, ergo propter hoc fallacy. Just because the hours stagnation came first doesn’t necessarily mean it caused the wage stagnation. On the other hand, there is a sufficient body of theory suggesting that just such a causal chain is likely.

Ira Steward articulated this theory in the 1860s. Marx also presented a theoretical explanation linking technological advance, immiseration of workers, and economic crisis. Sydney J. Chapman confirmed the basis of Steward’s and Marx’s theories, but within a neo-classical framework. Dorothy W. Douglas reviewed Steward’s theory during the Depression and judged it to be rich in explanatory power, institutionally speaking.

More recently Keynes and Luigi Pasinetti advanced theories that none of the formally-consistent model builders have sought to confirm or refute. Instead, the formally-consistent model builders busied themselves refuting a theory that didn’t exist — that “the amount of work to be done was a fixed quantity.” Not surprisingly, they succeeded in refuting that faux fallacy (in a formally-consistent manner, of course) and figured that was all they needed to do. Steward, Marx, Chapman, Keynes and Pasinetti be damned!

Don’t get me wrong. I think formally modeling realistic assumptions is a huge step forward from formally modeling laughable ones. I’m just not sure I (or the unemployed) can wait another sixty years or so for economists to get around to building a formally-consistent model that reflects the powerful explanatory theories the formal modelers have ignored for the last sixty years.


To send money is not to spend money

Robert Waldmann

Atrios vs Bernanke.

OK so I agreed with Atrios about Greenspan (just below). Now I disagree with him about Bernanke. He equates loans with gifts. He equates worse than optimal with worse than nothing (dealing with free market fanatics can cause one to overlook the difference).

Bernanke could have sent money from the Fed’s magic money machine in all kinds of ways. They could have paid down mortgages. They could have put money in my bank account. They could have given it to state governments. What they did was prop up a failed banking system, and the worst failures of the failed banking system, under the premise that capital misallocating financial intermediaries were necessary for a stable economy.

Note the unusual usage “sent money.” This is not a typo. Atrios did not hit the p too gently meaning to type “spent,” because the Fed did not spend money bailing out the banks. It loaned money, guaranteed loans and guaranteed assets. If they put money in Atrios’s bank account, full stop, then they wouldn’t be able to get it back. A loan is not a gift.

The latest estimate I saw of the cost of the bailout to the Fed was negative $125,000,000,000.

I also think our experience with Lehman shows that, in the short run (which means until congress acts) bad banks run by incompetent greed heads are better than no banks. But in any case, even if you think we would be slightly better off if they had all gone bankrupt, $125,000,000,000 is a nice chunk of change.

Update: However, I remembered incorrectly. The estimate was a profit of $115,000,000,000, via Barry Ritholtz, who was not convinced. Close but no cigar. I mean, ten billion here, ten billion there, and soon you’re talking real money.

Total delirium after the jump.

Here I think part of the problem is that, like most Americans, Atrios rejects socialism. If the Fed could make a killing intervening and saving banks, it could make a killing most years. Does that mean that a mostly private financial system is inefficient compared to one where the Federal government bears more risk and pockets the risk premia? Sure it does. Wouldn’t that be public ownership of the means of production, that is, socialism? Yes. Am I saying that Bernanke and Paulson proved the superiority of a shift towards socialism (not all the way to nationalized popsicle stands)? Yes I am.

I think that TARP is one of the best programs ever, and, if we were half rational, would be our first step on a path that leads part way to socialism.


Only that which is rational is real

CNBC reports

Greenspan … added that the financial crisis could not have been foreseen.

“It is just not feasible to forecast a financial crisis,” he said. “A financial crisis by definition is a sharp abrupt, unexpected decline in asset prices.”

Notice that he says if it is unexpected, then it *can’t* be forecast. That is, if the risk-bearing-capacity-weighted average investor doesn’t forecast something, then it isn’t forecastable. Greenspan is asserting that financial markets are efficient by definition. He displays no willingness to allow data or evidence any role in answering the question.

Only that which is rational is real. The key policy relevant question is answered by definition, that is, by authority. Efficient is whatever markets are, and also what they should be, therefore markets are what they should be – by definition.

This is dogma, this is faith, this is medieval thinking. Galileo lived in vain. The enlightenment arrived in the Fed when Bernanke replaced Greenspan.

via Atrios. More ranting after the jump.

Also: “Former Federal Reserve Chairman Alan Greenspan said that the recent stock market decline is ‘typical’ of a recovery.” If a decline is typical, then it is forecastable. If a decline is typical during a recovery, then financial markets are inefficient. I see no basis for the claim (which, I admit, was constructed with only one actually quoted word).

Note how the efficient markets hypothesis is switched on and off for convenience. Something which could have been prevented with tighter regulation must have been unpredictable. Something which might or might not alarm people was predictable and isn’t news. I am fairly sure that no one believes that financial markets are efficient. It is just a debating trick. It can be assumed at will in order to make absurd arguments. It is absurd and has high status, for some reason, so it is a license to make clearly false claims which are not dismissed.


hundreds of billions = 0 ?

Robert Waldmann

The washingtonpost.com headline-and-abstract person has outdone himself or herself, writing

CBO sees debt estimates soar

Analysts say health law has not improved budget and Obama’s tax agenda will make things worse.

Lori Montgomery

As Kevin Drum says, always click the link. Lori Montgomery actually wrote

President Obama’s overhaul of the health-care system has done little to improve the nation’s budget outlook, congressional budget analysts said Wednesday.

So “little” has become none. The abstract contradicts the actual story.

Finally, well down in the story, we get to what Doug Elmendorf said:

The health-care overhaul made “steps in the direction of a sustainable fiscal policy. But they are small steps relative to the journey that will be needed for fiscal sustainability,” CBO director Douglas Elmendorf said Wednesday in testimony before Obama’s bipartisan commission on the deficit.

Small “relative to the journey that will be needed for fiscal sustainability” is not small. We do not normally measure sums of money “relative to the journey that will be needed for fiscal sustainability”. Another way of putting that would be “unimaginably huge, immense, and gigantic, but nowhere near as colossal as the long-term budget shortfall”.

So in the hands of the Washington Post, small “relative to the journey that will be needed for fiscal sustainability” becomes “small” and then none. To the Post, hundreds of billions of dollars are zero.

Clearly that organization is not qualified to report the news. Even the simplest, most cut-and-dried gigantic numbers are too subtle for them.

This is the end of my short punchy post. A general rant follows after the jump.

Beyond this, the CBO report isn’t news. All we learn is that the CBO headline number must be based on what Congress claims it will do, so it is based on the assumption of no more alternative minimum tax fixes, no more doc fixes, and, especially, that all the Bush tax cuts expire.

Pretending it is news is hyping a fact. It fits the panic-about-the-deficit agenda. This is part of the agenda of the Washington Post opinion pages. It is not good that the news staff is hyping non-news about how the long-run deficit picture is grim.

Also, Montgomery’s next sentence misallocates blame:

They also said the president’s tax agenda — including a pledge to extend an array of tax cuts for the middle class — would only make things worse.

This is only true if one interprets “the president” to be George W. Bush. Obama is reversing some but not all of the Bush tax cuts. The silly trick of saying they would sunset makes the change in law a tax cut, but the change in policy is a tax increase. Montgomery is blaming Obama for not undoing all of the damage that Bush did.

This is the general slant. Obama is blamed for the long run budgetary shortfall, because the huge gigantic improvements that he has achieved plus the huge gigantic and popular improvements which he has proposed are not huge and gigantic enough to undo the damage the Republicans did when they were in control.

He’s been Posted.


You Are A Carrot


Look at this lovely old drawing. Fernlike leaves, flower heads like old fashioned crochet embroidery. This is Daucus carota ssp sativa, originally native to temperate regions of Europe and southwest Asia. Now print this picture out and go to the supermarket and try to find it in the produce section.

If you are lucky, you might spot some by their leaves. But mostly you will find only the bare, leafless root, or even stubby vegetable batons identifiable only by their colour.

Carrots themselves don’t want to be stubby batons of pure edibility. In fact, that kind of carrot can’t exist as a living thing – it’s just what’s left when most of the living parts of the carrot are shaved and chopped away.

Now, no grocery shopper would choose the whole carrot (leaves, stems, flowers, taproot, feeder roots and a bit of dirt) over the bagged orange cudgels when planning to make a stew. But would a sane grocery shopper deny that all those bits need to exist? Would they demand to pay “only for the carrot” and not pay for the other bits?

And how long would carrots last if they were “paid” only by root-weight, not on a whole-life basis? As a wholly domesticated species, they wouldn’t. They’d be gone in a generation, if they even came to exist in the first place. Only the wild ones would still live, of a low quality from the shopper’s viewpoint, but surviving.

This is why business, worldwide, needs to pay fairly, and provide benefits and pensions.

Whoa! Where did THAT unsignaled left turn come from?

I’m a carrot too. So are you. We need our lacy finery, our crocheted flowers. We need to set seed and droop into a shabby graceful old age. We need our feeder roots. And our taproot is not there for some shopper to chomp – it’s there to nourish the plant while it engenders seed, and dower those seeds with enough stored food to get their own proper start.

Business only exists because we exist, yet like other critters it tries its best to get all the candy and none of the wrappers. It isn’t intelligent enough, overall, to realize that if it slices out the taproot and starves the rest of the plant, carrots will disappear, immediately followed by the businesses who depend on them.

At present, business has managed to interpose itself between many of us carrots and our sustenance, collecting a toll each time something of value crosses through the toll gate. This can work very well for us and for them, provided the toll is low enough and enough of the profit is returned to nourish the carrots. But over the past couple decades less and less has been returned to the carrots. The taproots have been tapped out.

Luckily for business, people aren’t like carrots, are not fully domesticated. In the poorest nations you see this in a pure form. We’re still a wild species, we continue to raise our kids, save and build and flower and set seed even in the bleakest situations. People don’t stop living, and the reason for living is living.

Business flourishes when people flourish, but many businesses don’t know this, or are content to get most of a shrunken root rather than part of a plump one. Will they ever grasp this? The incentives seem to go the other way, with the short-term spoils going to the greediest.

That’s why someone besides business needs to regulate business, and why great profitability in business should be viewed with suspicion, not placid satisfaction.


Peanut Butter Regulations

by Rusty Rustbelt


Watching people spread peanut butter is interesting.

I am a semi-neurotic peanut butter spreader: I try to cover most of the bread and to get the thickness close to even, but I am not a perfectionist-neurotic spreader. There are also slap-and-eat messy spreaders.

Having dealt with a wide range of government regulatory agencies over 35 years, dealing with both health care and small business, I am familiar with “peanut butter regulation.”

Peanut butter regulation spreads regulatory effort evenly over all regulated entities, even when it is well known that 20% of the targets represent 80% of the problems.

Nursing home regulation comes immediately to mind. Also food safety. And the SEC.

Regulation and regulatory capture are a hot topic these days. Do we need more targeted regulatory efforts?

Peanut butter regulation reminds me of drunk-driving checkpoints: stop everybody, and eventually the cops find a drunk. Is there a better way?

Some regulators are complaint-based, such as wage-and-hour and OSHA; should these agencies be more like peanut butter?

We gotta get better.


Nick Rowe asks if new Keynesian models make sense

Robert Waldmann

Nick Rowe asks a very interesting question. After the jump I attempt an answer

Please read Rowe’s post first. The following will make no sense at all if you don’t (note reading Rowe’s post is a necessary not a sufficient condition for it to make sense).

The short explanation is that Rowe finds a contradiction. I think he finds a contradiction with new Keynesian models, because he assumes that the central bank can achieve any real interest rate that it wants. I don’t think all real interest rates are achievable in Nash equilibrium in new Keynesian models.

I know nothing about new Keynesian models (well I know about old New Keynesian models from around 20-25 years ago). So consider this a totally fresh look.

In your example, everything is real. How odd since nominal rigidities are central. I think the key is that in new Keynesian models, the central bank can’t set the real interest rate to any level it wants. You quickly moved from setting nominal to real interest rates. Now one might imagine that a central bank can forecast inflation (they have rational expectations too) and then add say 5%. However, since private sector agents have rational expectations, their behavior depends on the central bank’s policy. It’s not like there is an inflation rate which is given no matter what the central bank does. The question becomes, is there a Nash equilibrium in which the Central bank gets r=5% (presumed to be its only goal) and private agents maximize their utility given the monetary policy rule, tastes, technology and nominal rigidities (or menu technology if you insist). I think the answer is that there is one and only one such equilibrium and that is the tinkerbell equilibrium with production equal to production in the flexible price steady state.

Now the economy can be elsewhere with, say, output below that level (because prices are too high because … well I just assumed they are at the beginning of time cause no way am I going to model any uncertainty). I think that, in that case, the central bank can’t achieve r=5% always, that there is no such Nash equilibrium. In other words, for any nominal interest rate rule, the real interest rate will not be 5%.

I think the contradiction is between new Keynesian models and your assumption that the central bank can achieve any real interest rate which it wants.

I will try to invent a simple new Keynesian model on the spot.

Producers are self-employed. Their marginal cost in units of consumption is the marginal disutility of work divided by the marginal utility of consumption. This declines if they work less and consume less (disutility of work is convex, utility of consumption concave).

They make different goods with a constant elasticity of substitution (all consumers have Dixit-Stiglitz preferences), so their utility is maximized if they set a price equal to (one plus a constant markup) times their marginal cost.
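A minimal sketch of that markup step, with an assumed demand elasticity (the numbers are mine, for illustration only, not Waldmann's):

```python
# Sketch of the markup-pricing step: with CES (Dixit-Stiglitz) demand of elasticity
# epsilon, the profit-maximizing price is a constant markup over marginal cost,
#   p = (epsilon / (epsilon - 1)) * MC.
# The elasticity and marginal-cost values below are illustrative assumptions.

def optimal_price(marginal_cost, epsilon=6.0):
    markup = epsilon / (epsilon - 1.0)  # epsilon = 6 implies a 20% markup
    return markup * marginal_cost

for mc in (0.80, 1.00, 1.25):
    print(f"marginal cost {mc:.2f} -> price {optimal_price(mc):.3f}")
```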

OK, a nominal rigidity. They are on a circle, and a clock hand goes around, say, once a month. When the hand points at me, I can adjust my price. Otherwise it stays the same.

Is there an equilibrium with r = 5% and consumption less than the flexible price consumption (for a steady state with r = 5%)? It seems that if I am working less and consuming less than in the flexible price steady state, then I want to lower my relative price, that is, set a price lower than the average price over the next month. So there can’t be an equilibrium with a constant price level.

I will assume that my loss from having other than the best price is quadratic in log price (just because I want to, and because new Keynesians always do stuff like that).

How about one with a constant deflation rate of 1% per month? Well, then I forecast that the average log(price) will fall 1% over the month, so it will be on average 0.5% lower than when I set my price. So I set my price *below* the current average price minus 0.5%. Prices as set fall 1% a month, so, when the hand points at me, my price is 0.5% higher than the average price (I am making a linear approximation to an exponential here). So I cut my price by more than 1%, so deflation is more than 1%.
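A numerical sketch of that self-refutation (my illustration; the undercut eps and the starting conjecture are assumptions, not anything derived in the post):

```python
# Numerical sketch of the argument above: conjecture a constant monthly deflation
# rate d; because consumption is below the flexible price steady state, each
# resetter undercuts the expected average by eps > 0, so each reset cuts the price
# by d + eps and aggregate deflation comes out at d + eps, not d.
# Both eps and the starting conjecture d are illustrative assumptions.

eps = 0.002   # desired undercut below the expected average price (0.2%, assumed)
d = 0.01      # conjectured constant deflation rate (1% per month)

for step in range(6):
    d_implied = d + eps   # cut = d (to track the falling average) plus the undercut eps
    print(f"conjectured deflation {d:.3%} -> implied deflation {d_implied:.3%}")
    d = d_implied         # feed the implied rate back in; it never settles at a fixed point
```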

So if I assume that deflation is 1% per month, then it is more than 1% per month. There is no equilibrium with r=5% and consumption below the flexible price steady state.

I haven’t proved it, but it seems to me that this happens for prices being any function of time.

One last example (here the r = 5% actually matters). Suppose the deflation rate is exp(-(constant)·t), so it goes to zero exponentially. Then if I lower my price according to the deflation rate, it will be lower than the average over the next month (since later price adjustments will be smaller than mine). So I do get a price lower than the average, over the month, of my competitors’ prices. However, this difference gets smaller and smaller (it shrinks just like exp(-(constant)·t)). This is only optimal if my consumption is getting closer and closer to flexible price steady state consumption. So there are equilibria, but in those equilibria consumption grows until it converges to FPSS consumption (what you call full-employment consumption).

This can’t happen if r=5%, because r=5% implies constant consumption. I think this means there is no sticky price equilibrium with consumption below FPSS consumption and r=5% always. There is no way the central bank can make r=5% always no matter what it does with nominal interest rates.

To repeat maybe:
I think this means that if current consumption is below the flexible price steady state, then the central bank can’t keep r=5%. I think it means that the economy has to converge to the flexible price steady state (which means r must be greater than 5% if consumption is now below flexible price steady state consumption).


How to Build an Economic Model

Let us see if we can translate my previous post on job selection into an economic model.  Start with a basic formula:

(1) AcceptOffer = a(1) + a(2)*w + a(3)*b + a(4)*oa + a(5)*t

where the a(i) are coefficients (a(1) being a constant term), w is wages, b is benefits, oa is “opportunity for advancement,” and t is treatment received in the workplace.

The first observation we make is that several of these variables are difficult to quantify—and even more difficult to objectify. So let’s start with the easy ones.

w is very identifiable: reported (on a per capita aggregate basis), subject to enforcement penalties (e.g., minimum wage laws), and used in “downstream applications” (e.g., tax filings) and therefore relatively verifiable.

b is (1) known to be non-negative and (2) often variable within, let alone between, organizations. (Vacation time, sick days, insurances offered and costs to the employee all may vary depending on level, time of service, location of office, etc.)

This could present a problem, but here we can use standard economic theory to our advantage. We do not know the amount of b, but we can assume that the employer is rational and is offering total compensation to the worker that s/he expects will be less than or equal to the marginal product of that person’s labor. We therefore can reasonably assume that b is related to w. If we then review the available aggregate data, we can estimate that benefits offered will be approximately a certain percentage of w—and that workers will make that assumption (and, in most cases, verify it within a margin of error) before accepting the job.

We then restate the equation as

(2a) AcceptOffer = a(1) + a(2’)*w’ + a(4)*oa + a(5)*t

where w’ is the weighted combination of w and b above, and a(2’) is the restated coefficient.

If we then assume that all parties have full information of the ratio of wages to benefits, then a(2’) = a(2)=a(3), so we simplify to:

(2b) AcceptOffer = a(1) + a(2)*w’ + a(4)*oa + a(5)*t
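A toy sketch of the reduction from equation (1) to (2b), assuming a particular benefits-to-wages ratio and made-up coefficients (none of these values come from the post):

```python
# Toy reduction of equation (1) to (2b): proxy benefits b as a fixed share of wages w,
# so a(2)*w + a(3)*b collapses into a single term in w' = w * (1 + ratio).
# The 30% benefits ratio and all coefficient values are illustrative assumptions.

BENEFITS_RATIO = 0.30  # assumed b = 0.30 * w

def score_full(w, b, oa, t, a):
    """Equation (1): a1 + a2*w + a3*b + a4*oa + a5*t."""
    return a[1] + a[2] * w + a[3] * b + a[4] * oa + a[5] * t

def score_reduced(w, oa, t, a):
    """Equation (2b) under full information, so a(2') = a(2) = a(3)."""
    w_prime = w * (1 + BENEFITS_RATIO)
    return a[1] + a[2] * w_prime + a[4] * oa + a[5] * t

a = {1: 0.0, 2: 1.0, 3: 1.0, 4: 0.5, 5: 0.5}  # assumed coefficients, with a(2) = a(3)
w = 50_000

print(score_full(w, b=BENEFITS_RATIO * w, oa=0.6, t=0.7, a=a))
print(score_reduced(w, oa=0.6, t=0.7, a=a))  # identical when b = ratio * w and a(2) = a(3)
```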

We now have to consider opportunities for advancement and treatment. Here, we have two problems that are difficult, possibly insurmountable, for modeling.

The first is a lack of measurability. There are no public records for “didn’t get promoted.” Nor, except in extreme cases, is there a way to measure treatment by supervisors. The data that might be available—lawsuits, official complaints, even Human Resources files (for which there are significant privacy considerations)—is all negative and, accordingly, skewed (biased). This is because (a) ninety percent or so of all workers and/or bosses will never have a complaint filed against them and (b) the ability to file a complaint may be present because the general work atmosphere is more amenable to filing one than not, so the presence of a complaint is not in itself a good or bad thing for the overall measure.

The second is that tolerances vary by person. To use an absurd example, people who use “Every Breath You Take” for their wedding may be more likely to tolerate attentions that others view as harassment. Similarly, forcing people to clock out for a “smoke break” will be viewed differently depending upon whether one is a smoker or not. General policies are just that—general.

So, if we are building an economic model, we must come up with a reasonable approximation of these last two variables. The most direct way to do this is the standard method: assume each individual has their own Utility Curve, and “prices” accordingly.

Based on their preferences and options, then, we map the compensation required to offset negative consequences from oa and t. While the variables still are not directly observable, we can make a simplifying assumption:

Assume that the compensation required to do the work is a factor of w’.

Have to work in the sewer system? Change w’ to compensate. Need to work the night shift and/or weekends? Same type of adjustment. Boss clearly favors buxom blondes and you’re a petite redhead? Adjust current salary requirements to compensate for lowered opportunity for advancement/promotion. You’re a b.b. who will have trouble getting work done because the boss will harass you? Adjust accordingly.

We assume—given the constraint of a lack of available data—that we can reduce “a(4)*oa + a(5)*t” to some proportion of w that will compensate the worker for the environment into which they are being placed. If we further assume that the worker has complete information as to hisser preferences, the worker will not accept a job that does not offer that level of compensation.

So we can restate equation (2b) using the Utility Curve assumption. Assume

(3a) a(6)*w” = a(4)*oa + a(5)*t

such that w” is also proportionate to w (and therefore to w’ as well), and a(6) is the coefficient selected by the individual that makes the offered wage compensatory for the opportunities for advancement and expected treatment on a Present Value basis.

We can then reduce equation (2b) to

(3b) AcceptOffer = a(1) + a(2)*w’ + a(6)*w”

or, given that (a) w” is proportionate to w and w’ and (b) that the multiplier in most cases is 1, and (c) the constant (e.g., signing bonus) can be assumed without loss of generality to be 0,

(3c) AcceptOffer = w*(i)

where the asterisk and the index i indicate that the value varies with individuals.

To concretize the example, assume that a redhead and a blonde, as above, are both offered a job. Assume further that the redhead’s compensation requirement—lower-but-still-positive opportunity for advancement—is lower than the blonde’s for will-be-harassed-and-work-will-be-impeded. That is

w*(r) < w*(b)

There are four possibilities:

  1. The offered wage will be below w*(r), in which case neither will accept the job
  2. The offered wage will be below w*(b) but above w*(r), in which case one of the two positions will be filled
  3. The offered wage will be above w*(b), in which case both will accept the offer and the company will have offered a higher wage than was required to fill both positions. (That the offer is what the company believes will be the employees’ marginal product of labor [MPL] is a collateral issue.), or
  4. The company will negotiate with each, offering the redhead w*(r) and the blonde w*(b), and everyone will be happy—so long as initial expectations were accurate (or, if you prefer, the new employees both had full information).
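A toy sketch of these four cases, with invented reservation wages and offers (none of these numbers come from the post):

```python
# Toy classification of the four possibilities above, given a single offered wage and
# the two reservation wages w*(r) < w*(b). All numbers are illustrative assumptions.

def acceptances(offer, w_star_r, w_star_b):
    """Return (redhead accepts, blonde accepts) for a uniform offer."""
    return offer >= w_star_r, offer >= w_star_b

w_star_r, w_star_b = 52_000, 60_000  # assumed reservation wages, w*(r) < w*(b)

for offer in (50_000, 55_000, 62_000):      # cases 1, 2, and 3
    r_ok, b_ok = acceptances(offer, w_star_r, w_star_b)
    print(f"offer {offer:,}: redhead accepts = {r_ok}, blonde accepts = {b_ok}")

# Case 4 (negotiation): offer each candidate exactly her own reservation wage.
print(f"negotiated offers: redhead {w_star_r:,}, blonde {w_star_b:,}")
```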

Note also that there is a learning process for both the applicant and the employer. Offers and demands will be adjusted based on historic data (if both decline the offer, the next candidates of similar background will be offered more) and on perceptions of growth (improvements in experience and/or education by the worker).

If we generalize this, we note that there is a distribution of w* (due to Individual Preferences). If we further make simplifying assumptions—e.g., a normal distribution of w* among the population—we come to the conceit of the “reservation wage,” and all the economic literature that is attendant upon it.
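As a sketch of that generalization, one might draw w* from an assumed normal distribution and read off the acceptance rate at a given offer (all parameters invented):

```python
# Sketch of the "reservation wage" generalization: draw w* from an assumed normal
# distribution and compute the share of candidates who would accept a given offer.
# The mean, standard deviation, and offers are invented for illustration.

import random
import statistics

random.seed(0)
MEAN, SD = 55_000, 8_000
population = [random.gauss(MEAN, SD) for _ in range(100_000)]  # simulated w* draws

for offer in (45_000, 55_000, 65_000):
    accept_rate = statistics.mean(w_star <= offer for w_star in population)
    print(f"offer {offer:,}: {accept_rate:.1%} of candidates would accept")
```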

So that is how you build an economic model.  The question then becomes: how do you use it? A relatively short (though it does incorporate a micro model) discussion of that continues below the fold.


The problem—if it is one (I’m inclined to argue it is; YMMV)—is that, having built a model in which all the proxying assumptions are “simplified” into a single variable, we lose some granularity, having made a trade-off for the sake of measurement.

Accordingly, a change in the “reservation wage” may not in itself tell us whether the real wage has gone up or the work environment has become, on balance, more or less acceptable.

Again, an example, one that will be familiar to students of microeconomics. You are given two choices: (1) you can receive $100 right here and now or (2) you can travel a known distance (say, five miles)—with a finite chance of death or injury—and receive $1,000,000 on completion.

Surroundings, at this point, matter. If the five miles traveled is as an American soldier in full uniform walking outside of the Green Zone, you might well choose the $100. Even when you are native to the area, the choice may vary: walking five miles through one gang’s territory is a different option than traveling that distance through different territories. Or even a pure environmental matter may have an effect: walking five miles through a desert with no canteen, or having to swim five miles from shore in dangerous waters, is not the same risk as walking five miles down an unpaved dirt road in the middle of the day. Even if you would be required “only” to walk down a heavily-used interstate highway with no shoulder or sidewalks, discretion may be the better part of valor.

Over time, through the “learning process” (op cit. Arrow, 1962, as all good op cits must), the dollars offered will be adjusted so that the payouts balance on a risk-adjusted basis. (Collaterally, there may be other reasons for the greater payouts; signaling by any other name.)
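A bare-bones sketch of that risk-adjustment arithmetic, ignoring risk aversion and crudely valuing the bad outcome at zero (the probabilities are invented):

```python
# Bare-bones risk adjustment for the $100-now vs. $1,000,000-after-a-risky-trip choice.
# Death or injury is crudely valued as a zero payoff and risk aversion is ignored;
# the probabilities below are illustrative assumptions only.

SAFE_PAYOFF = 100
PRIZE = 1_000_000

def expected_risky_payoff(p_bad):
    """Expected payoff of the trip when p_bad is the chance of not making it."""
    return (1.0 - p_bad) * PRIZE

for p_bad in (0.0001, 0.05, 0.9999):
    ev = expected_risky_payoff(p_bad)
    choice = "take the trip" if ev > SAFE_PAYOFF else "take the $100"
    print(f"p(bad) = {p_bad:>7.2%}: expected value = ${ev:,.0f} -> {choice}")
```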

Now suppose the landscape changes. There is a canteen every half-mile in the desert. A gang is run out of its territory, or takes over another’s territory. The Green Zone becomes larger.

The balance has changed; the risk is different. The job is demonstrably different, and therefore requires lower (higher) compensation. But the difference has nothing directly to do with the base salary/benefits requirement and everything to do with the overall attractiveness and/or treatment received.

If we were to forget that, we would conclude that there is more demand for the job itself, and therefore people are willing to take a lower salary. If, on the other hand, we keep in mind that there are more factors to the reservation wage than just the salary itself, we realize that producing a more pleasant work atmosphere is beneficial to our firm, as it enables us both to present a good face to clients and to reduce our cost of labor.

The first type of economist probably should be avoided, as he adds very little value to the discussion of how to use the model.


Those Low Rates

Via (what else?) Alea’s Twitter feed, John Taylor defends himself against Ben Bernanke:

“The evidence is overwhelming that those low interest rates were not only unusually low but they logically were a factor in the housing boom and therefore ultimately the bust,” Taylor, a Stanford University economist, said in an interview today in Atlanta.

It’s not actually that they’re not saying the same thing. Bernanke argued (and I agreed) that low rates did not cause the housing bubble. We have had low rates without producing housing bubbles before. (Other asset bubbles are another question.) Indeed, the last lasting housing bubble peaked just as the Federal Funds rate did:

More accurately (and also via ATF), Caroline Baum takes Bernanke to task for sleight-of-hand:

For example, Bernanke takes great pains to rebut criticism that the funds rate was well below where the Taylor Rule…suggested it should be following the 2001 recession. The Taylor Rule uses actual inflation versus target inflation and actual gross domestic product versus potential GDP to determine the appropriate level of the funds rate.

Substitute forecast inflation for actual inflation, and the personal consumption expenditures price index for the consumer price index, and — voila! — monetary policy looks far less accommodating, Bernanke said.

It’s always easier to start with a desired conclusion and retrofit a model or equation to prove it.

Ouch. Is it a great day when the journalist is making more sense about the economist’s work than another economist is?
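For readers who want the mechanics being argued over, here is a sketch of a standard Taylor-type rule and of the substitution Baum describes; every numeric input is an assumption, not data:

```python
# Sketch of a standard Taylor-type rule,
#   i = r_star + pi + 0.5*(pi - pi_target) + 0.5*output_gap,
# and of the substitution described above (forecast PCE inflation in place of actual CPI).
# Every numeric input here is an illustrative assumption, not actual data.

def taylor_rule(inflation, output_gap, r_star=0.02, pi_target=0.02):
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

cpi_inflation = 0.030  # assumed actual CPI inflation
pce_forecast = 0.015   # assumed forecast PCE inflation (lower, by assumption)
output_gap = -0.020    # output assumed 2% below potential

print(f"rule with actual CPI inflation:   {taylor_rule(cpi_inflation, output_gap):.2%}")
print(f"rule with forecast PCE inflation: {taylor_rule(pce_forecast, output_gap):.2%}")
# A lower inflation input lowers the prescribed rate, so the same actual funds rate
# looks less accommodating relative to the rule.
```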

But more to the point, the argument that rates were kept unnaturally low from ca. 2002 through ca. 2005 depends very much on the idea that the Fed does not have two jobs. (Once again, h/t to Dean Baker.)

The other half below the break

As Baker notes at the link above, “the dual mandate [of the Fed] is full employment (defined as 4.0 percent unemployment) and price stability.”

Let’s be generous. I’ve plotted the Civilian Employment/Population Ratio and the Official Unemployment Rate below. The blue line at 4.5 applies only to the Unemployment Rate (red line). (I didn’t plot it at 4.0 because that would be cruel.)

So what we have is a situation where (1) the Employment/Population Ratio by the end of 2006 is barely back near the level it was at the end of the recession of 2001 and (2) it is only near the end of 2006 that the Official Unemployment Rate approaches the official target rate (which it hadn’t seen since before the 2001 recession).

It seems apparent that Taylor’s “Rule” (which considers inflation and GDP, but not employment per se) is not compatible with official Fed mandates. In such a context, Caroline Baum’s “gotcha” is more a case of her using inappropriate variables—and Bernanke substituting a more appropriate model, given the Fed’s mandates—than it is a case of Bernanke “retrofitting.”

No wonder John Taylor says we should worry about inflation; in his world, we never have to worry about unemployment, so long as there are enough bubbles to inflate GDP.
