Relevant and even prescient commentary on news, politics and the economy.

Note to Reinhart/Rogoff (et al.): The Cause Usually Precedes the Effect

Or: Thinking About Periods and Lags

No need to rehash this cock-up, except to point to the utterly definitive takedown by Arindrajit Dube over at Next New Deal (hat tip: Krugman), and to point out that the takedown might just take even if you’re looking at R&R’s original, skewed data.

But a larger point: I frequently see econometrics like R&R’s, comparing Year t to Year t and suggesting — usually only implicitly or with ever so many caveats and disqualifiers — that it demonstrates some kind of causation. I.e. GDP growth in 1989 vs. debt in 1989, ’90 vs. ’90, etc.

Haven’t they heard of looking at lags, and at multiple lags and periods? It’s the most elementary and obvious method (though obviously not definitive or dispositive) for trying to tease out causation. Because cause really does almost always precede effect. Time doesn’t run backwards. (Unless you believe, like many economists, that people, populations: 1. form both confident and accurate expectations about future macro variables, 2. fully understand the present implications of those expectations, and 3. act “rationally” — as a Platonic economist would — based on that understanding.)

By this standard of propter hoc analysis, R&R’s paper shows less analytical rigor than many posts by amateur internet econocranks. (Oui, comme moi.) This is a paper by top Harvard economists, and they didn’t use the most elementary analytical techniques employed by real growth econometricians, and even by rank amateurs taking their first tentative stabs at understanding the data out there.

Here’s one example looking at multiple periods and multiple lags, comparing European growth to U.S. growth (click for larger).

This doesn’t show the correlations between growth and various imagined causes for the periods (tax levels, debt levels, etc.) — just the difference, EU vs. US, in real annualized growth. You have to do the correlations in your head, knowing, for instance, that the U.S. over this period taxed about 28% of GDP, while European countries taxed 30–50%, averaging about 40%.

But it does show the way to analyzing those correlations (and possible causalities), by looking at multiple periods and multiple lags. (I’d love to see multiple tables like this populated with correlation coefficients for different “causes.”)
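
Mechanically, such a table is just correlations computed over a grid of lags. Here's a minimal sketch in Python on synthetic data — the debt series, the effect size, and the three-year lag are all invented for illustration, not estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
years = 60
debt = 50.0 + np.cumsum(rng.normal(0.0, 2.0, years))   # random-walk debt/GDP ratio

# by construction, growth responds to debt THREE years earlier
growth = 3.0 + rng.normal(0.0, 0.3, years)
growth[3:] -= 0.1 * (debt[:-3] - 50.0)

def lagged_corr(cause, effect, lag):
    """corr(cause[t], effect[t + lag]) over the overlapping sample."""
    if lag == 0:
        return float(np.corrcoef(cause, effect)[0, 1])
    return float(np.corrcoef(cause[:-lag], effect[lag:])[0, 1])

# one row of a period-by-lag table: correlations at lags 0 through 5
lag_table = {lag: lagged_corr(debt, growth, lag) for lag in range(6)}
```

Populate one such row per period and per candidate cause and you have the kind of table the post is asking for; the lag at which the correlation peaks is (weak, non-dispositive) evidence about timing.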

Dube tackles the lag issue for the R&R sample beautifully in his analysis. In particular, he looks at both positive and negative lags. So, where do we see more correlation:

A. between last year’s growth and this year’s debt, or

B. between last year’s debt and this year’s growth?

The answer is A:

Figure 2:  Future and Past Growth Rates and Current Debt-to-GDP Ratio

(Also: if there’s any breakpoint for the growth effects of government debt, as suggested by R&R, it’s way below 90% of GDP. More like 30%.) See Dube’s addendum for a different version of these graphs, using another method to incorporate multiple lags.
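
The mechanics of that forward-vs-backward comparison are easy to demonstrate. Below is a sketch on synthetic data in which reverse causation is built in by construction (this year's debt ratio is driven by last year's growth), so the backward-looking correlation dominates — which is the signature Dube's figure looks for. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
growth = rng.normal(2.0, 1.5, n)                 # annual real growth, %
debt = np.empty(n)
debt[0] = 60.0
# reverse causation by construction: this year's debt ratio is driven
# by LAST year's growth (bad year -> bigger deficit -> more debt)
debt[1:] = 90.0 - 10.0 * growth[:-1] + rng.normal(0.0, 3.0, n - 1)

# A: last year's growth vs this year's debt
corr_A = float(np.corrcoef(growth[:-1], debt[1:])[0, 1])
# B: last year's debt vs this year's growth
corr_B = float(np.corrcoef(debt[:-1], growth[1:])[0, 1])
```

Under this data-generating process |corr_A| is large and |corr_B| is near zero; a debt-causes-slow-growth world would produce the opposite pattern.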

Here’s what I’d really like to see: analysis like Dube’s using as its inputs many tables like the one above, each populated with correlations for a different presumed cause (“instrumental variable”). Combine that with Xavier Sala-i-Martin’s technique in his paper, “I just ran four million regressions“.

That paper looks at fifty-nine different possible causes of growth/instrumental variables (not including government debt/GDP ratio) in every possible combination, to figure out which ones might deliver robust correlations. I’m suggesting combining that with multiple periods and lags for each instrumental variable. IOW, “I just ran 4.2 billion regressions.” Not sure if we’ve got the horsepower yet, but…
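
The combinatorial exercise itself is simple to sketch. Here is a toy version with an invented pool of eight candidate regressors (the real Sala-i-Martin exercise used dozens of candidates and a more careful robustness criterion; this just shows the loop-over-subsets mechanics):

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(2)
n_obs, pool_size = 80, 8
X_pool = rng.normal(size=(n_obs, pool_size))
# growth depends on candidate 0 only; the rest are noise regressors
y = 1.0 + 0.8 * X_pool[:, 0] + rng.normal(0.0, 0.5, n_obs)

coefs_of_var0 = []
for combo in itertools.combinations(range(pool_size), 3):
    if 0 not in combo:
        continue
    # OLS with an intercept and the 3 chosen regressors
    X = np.column_stack([np.ones(n_obs), X_pool[:, combo]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    coefs_of_var0.append(float(beta[1 + combo.index(0)]))

n_regressions = math.comb(pool_size, 3)          # 56 subsets in this toy pool
robust = min(coefs_of_var0) > 0                  # same sign in every specification
```

Scaling up: math.comb(59, 3) alone is about 32,500 subsets, so multiplying by many periods and lags per variable is what gets you into billions-of-regressions territory.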

Cross-posted at Asymptosis.

 


Reinhart/Rogoff Shot Full of Holes Updated X3

This story has rapidly made the rounds in the blogosphere, and it is indeed a big deal. One of the most significant economics papers underlying the argument for why high government debt (especially over 90% of gross domestic product) is bad for growth was published in 2010 by Carmen Reinhart and Kenneth Rogoff, “Growth in a Time of Debt” (ungated version here).

The basic finding of this paper was that if debt exceeds 90% of GDP, then on average growth turns negative. But as Thomas Herndon, Michael Ash, and Robert Pollin report in a new paper (via Mike Konczal at Rortybomb), there are substantial errors including data omitted for no reason, a weighting formula that makes one year of negative growth by New Zealand equal to 19 years of decent growth by the UK, and a simple error in their spreadsheet that excluded five countries from their analysis altogether (see Rortybomb for the screen shot).

The authors say that with these errors corrected, the average growth rate for 20 OECD countries from 1946 to 2009 with debt/GDP ratios over 90% is 2.2%, not the -0.1% found by Reinhart and Rogoff. This is a huge difference. We still have a negative correlation between debt/GDP and growth rate, but it is much smaller, as we can see from Figure 3 from their paper:

Debt/GDP Ratio    R/R Results    Corrected Results
Under 30%         4.1%           4.2%
30-60%            2.8%           3.1%
60-90%            2.8%           3.2%
Over 90%          -0.1%          2.2%
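
The corrected numbers come from a plain bucket-and-average: group country-years into debt/GDP bands and take mean growth in each band, with every country-year weighted equally. A minimal pandas sketch on made-up data (the bands match the table; the data does not):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "debt_gdp": rng.uniform(5.0, 130.0, 500),   # synthetic country-years
    "growth": rng.normal(3.0, 2.0, 500),
})

bins = [0, 30, 60, 90, np.inf]
labels = ["Under 30%", "30-60%", "60-90%", "Over 90%"]
df["bucket"] = pd.cut(df["debt_gdp"], bins=bins, labels=labels)

# every country-year weighted equally within its bucket
bucket_means = df.groupby("bucket", observed=True)["growth"].mean()
```

Run on the actual Herndon-Ash-Pollin sample, this is the calculation that produces the right-hand column above.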

As Paul Krugman (link above) argues, what we are likely seeing is reverse causation: slow growth leads to high debt/GDP ratios. That is certainly what EU countries are finding as they implement austerity measures and slip back into recession. But even if high debt/GDP did cause slower growth, we can see it is nowhere near the crash that Reinhart and Rogoff’s paper made it out to be.

The bottom line here is simple: the focus on deficits and debt that has dominated our political discourse is completely misplaced. We need to do something about the unemployment crisis by increasing growth, something that is even truer in the European Union, where the unemployment rate in Spain and Greece exceeds 26%.

Update: Reinhart and Rogoff have responded in the Wall Street Journal. They emphasize that there is still a negative correlation, and that having debt/GDP above 90% for five years or more reduces growth by 1.2 percentage points, which is still substantial for developed economies.

Update 2: Paul Krugman’s response to Reinhart and Rogoff is here.  He pronounces it very disappointing, saying they are “evading the critique.”

Update 3:  Reinhart and Rogoff have a new response in the Financial Times (registration required). Here, they admit they committed the Excel error, but claim there was nothing nefarious in their disputed data choices:

The ‘gaps’ are explained by the fact there were still gaps in our public debt data set at the time of the paper. Our approach has been followed in many other settings where one does not want to overly weight a small number of countries that may have their own peculiarities.

This is a very odd response from two authors who equated one year of New Zealand to 19 years of the far larger UK economy. Worse still when you add the fact that by excluding several years when New Zealand had a debt/GDP ratio over 90%, they got an “average” (actually only one year) growth rate of -7.6%, when the correct average, with all relevant years over 90% included, was 2.58%, a 10.18 point swing!
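
The arithmetic of the two weighting schemes, using the same stylized numbers (one New Zealand year at -7.6% against 19 UK years, here all set to 2.4% purely for illustration):

```python
# one New Zealand year at -7.6%, nineteen UK years at a "decent" 2.4%
nz_years = [-7.6]
uk_years = [2.4] * 19

# R&R-style: each country's own average counts once
country_avgs = [sum(nz_years) / len(nz_years), sum(uk_years) / len(uk_years)]
country_weighted = sum(country_avgs) / len(country_avgs)

# alternative: every country-year counts once
pooled = nz_years + uk_years
year_weighted = sum(pooled) / len(pooled)
```

Equal country weights put the average at -2.6%; equal year weights put it at 1.9%. Which scheme is "right" is a judgment call, but the complaint above is that R&R never flagged that they were making it.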

It’s obvious that the austerity crowd is still going to defend this paper, but that doesn’t mean anyone else should be taken in by them.
Cross-posted from Middle Class Political Economist.


Empirical Methods and Progress in Macroeconomics

Mark Thoma, among many others, discusses some implications for macroeconomics overall: Empirical Methods and Progress in Macroeconomics

The blow-up over the Reinhart-Rogoff results reminds me of a point I’ve been meaning to make about our ability to use empirical methods to make progress in macroeconomics. This isn’t about the computational mistakes that Reinhart and Rogoff made, though those are certainly important, especially in small samples; it’s about the quantity and quality of the data we use to draw important conclusions in macroeconomics. Everybody has been highly critical of theoretical macroeconomic models, DSGE models in particular, and for good reason. But the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another, i.e. to sort between these imaginative constructions. We like to think of ourselves as scientists, but if data can’t settle our theoretical disputes – and it doesn’t appear that it can – then our claim for scientific validity has little or no merit. There are many reasons for this. For example, the use of historical rather than “all else equal” laboratory/experimental data makes it difficult to figure out if a particular relationship we find in the data reveals an important truth rather than a chance run that mimics a causal relationship.


Okay Fine, Let’s Call Investment “Saving.” Or…Not

I really like Hellestal’s comment and linguistic take on this whole business:

I’m comfortable changing my language in order to communicate. I have very little patience for people who aren’t similarly capable of changing their definitions.

This discussion is really about the words we use to describe different accounting constructs. Nick totally gets that as well.

So I’m ready to say, “fine, let’s call investment saving.” That’s perfectly in keeping with the very sensible understanding found in Kuznets, father of the national accounts. He characterized real capital — the actual stuff we can use to create more stuff in the future — as “the real savings of the nation.” (Capital in the American Economy, p. 391.)

So when you spend money to produce something that has long-lived (and especially productive) value, you’re “saving.”

But still, I gotta wonder: why don’t we just call it…investment?

Because this S=I business confuses the heck out of everyone. Some of the smartest econobloggers on the web have spilled hundreds of thousands of words over the last several years trying to sort out this confusion. I’ve read most of them, and I’m still confused. And I’m quite sure that all non-economists who’ve looked at this (and many or even most economists) are as well.

And that’s not a surprise. Here are a few reasons why:

1. When you invest in real assets, you’re spending. That’s why it’s called investment spending. So spending = saving. Really?

2. When you pay someone to build you a drill press, you’re saving. When you don’t eat some of this year’s corn crop, you’re saving. When you pay off some of your money debt, you’re saving. When you don’t spend some of the money in your checking account, you’re saving. Each of these is true within a given (usually implicit) balance-sheet/income-statement accounting construct. But are they anything like the same thing?

3. As I showed in my last post, if you look at the “real” domestic private sector — households and nonfinancial businesses (most people’s implicit default context) — the amount of saving (income minus expenditures) has absolutely no relationship to the amount of investment spending. Saving is always insufficient to “fund” investment. And the changes in the two measures don’t move together, either in magnitude or direction. (Aside from the long, multi-decadal growth in both as the economy grows.)

4. When you “save” by investing, you decrease the amount of money on the left-hand (asset) side of your balance sheet, while increasing the amount of real assets on that tally. Your total assets are unchanged. Have you saved?

5. When you pay someone to write a piece of software, you get a long-lived real asset. You’ve saved. But the money you gave them is income for them, so it contributes to their (money) savings as well. Do you double-count those savings, or did “the economy” get that software for free?

6. Investment means “gross investment” — all the money spent on long-lived goods, including replacement of long-lived assets that have been consumed in the period (through use, decay, and obsolescence, and — for inventory of consumer goods — actual consumption). But in KuznetsWorld, shouldn’t we be talking about net investment — the additions to our stock of long-lived assets? Gross investment minus consumption of fixed assets (and inventory changes)? Shouldn’t we call net investment “saving”?
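
For what it's worth, the identity the econobloggers keep circling is true by construction in the national accounts, not an empirical finding. A toy closed-economy sketch of why (all numbers arbitrary, no government or foreign sector):

```python
# a one-period closed economy with no government, in NIPA terms
output = 100.0                        # GDP: everything produced
consumption = 70.0                    # consumed this period
investment = output - consumption     # whatever wasn't consumed: fixed
                                      # assets plus inventory change

income = output                       # all output is someone's income
saving = income - consumption         # income not spent on consumption

# S = I follows from the definitions, whatever we choose to call it
assert saving == investment
```

Which is exactly why the identity tells you nothing about whether any particular sector's money saving "funds" its investment — the accounting holds regardless.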

I know: there’s (at least apparent) confusion in some of these, but that’s rather my point. And there are answers to all of these in the context of S=I. (All of them, I think, based on the flawed [neo]classical accounting constructs embodied in the NIPAs. That’s my next post.) I’ve read them all, every which way from Sunday. But do they help anybody understand how the economy works, or…quite the contrary? If they do, why do all those econobloggers feel the need to worry at this, constantly?

I’m not sure this really solves the problem, but I’d like to suggest that saving should mean what everybody in a monetary economy means when they use the word: money saving. Monetary income minus money expenditures. In dollars, or whatever. (And while we’re about it, when you take out a loan or spend out of your savings, let’s call those “borrowing” and “spending,” not “dissaving.”)

Meanwhile investment (in economics discussions) should mean what economists mean when they use the word: “spending to create fixed assets and inventory.” (Because the national accounts only count spending on structures, equipment, software, and inventory as investment.)

And actually, that’s what it already means.

Why do we need to call it saving?

Cross-posted at Asymptosis.


Criticizing the IMF staff and Ryan Avent

Lifted from Robert Waldmann’s Stochastic Thoughts:

In the post below, I vigorously criticize IMF staff and Ryan Avent for claiming that central banks adopted low inflation targets in the early 80s without noting that the Fed did not adopt an inflation target until January 25 2012.  I have now read Avent’s post as patiently as I can (meaning I skipped ahead).
 
Avent wrote “That the disinflation of the 1980s has generated a flattening of the Phillips curve is precisely what the IMF demonstrates:”

This claim is illustrated by a figure which does not show that. Even if a curve hasn’t changed at all, the slope depends on where you are (that is, the curve is not a straight line). The figure does suggest that the IMF staff are willing to assume that the Phillips curve is a straight line, or rather that they are willing to support their argument by presenting a graph which tends to convince people willing to make that assumption.



The graph does not demonstrate any change at all in the Phillips curve (I’m not saying it didn’t change just that the question can’t be answered with the graph).  You can’t see if different points lie on the same curve by plotting changes on changes, because the slope of a curve isn’t constant.

In particular, inflation is much lower now than it was in the early 80s.  It is possible that the slope of the Phillips curve is lower now, because the Phillips curve is a curve.  The pattern from 2007 on is clearly different from the pattern in the 1930s.   It is not clear that it is different from what would have happened from 1980 to 1994 if inflation had been around 2% in 1980.  
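
The point that the measured slope depends on where you sit on a curve is easy to verify. This sketch generates points from one fixed convex Phillips curve (an invented functional form, with quarters modeled as independent draws purely for illustration) and fits the change-on-change slope at a high-inflation and a low-inflation operating point; the curve never moves, but the fitted slope flattens:

```python
import numpy as np

def phillips(u):
    # one fixed convex curve, invented for illustration: pi = 12/u
    return 12.0 / u

rng = np.random.default_rng(4)

def change_on_change_slope(u_center):
    """Fitted slope of d(inflation) on d(unemployment) near u_center."""
    u = u_center + rng.normal(0.0, 0.5, 200)
    pi = phillips(u) + rng.normal(0.0, 0.05, 200)
    return float(np.polyfit(np.diff(u), np.diff(pi), 1)[0])

slope_high_inflation = change_on_change_slope(3.0)  # low unemployment, steep segment
slope_low_inflation = change_on_change_slope(9.0)   # high unemployment, flat segment
```

Same curve, very different fitted slopes — so a flatter changes-on-changes scatter after disinflation is consistent with an unchanged Phillips curve.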

Oh and the 30s were different. In most developed countries, the unemployed don’t risk starvation any more.  The welfare state was quite different back when high unemployment caused sharp deflation.

I swear that this post has been edited to make it less rude.  You don’t want to read the first draft.
Also I deleted a draft conclusion to the update to the post below, because it was too inflammatory.  I am trying to be as polite as I possibly can without actually lying.

update: Now I am going to make some graphs. They are totally unlabeled only partly because I am lazy but also because I want the reader/graph eyeballer to try to guess what they are. They are US analogs of the IMF graph with the change in core inflation on the y axis and the change in the civilian unemployment rate on the x axis. All graphs show 17 data points (as in the green series from the IMF). Two show data from after the Fed flattened the Phillips curve in the “early 80s”. Which two show the new flat Phillips curve?

Figure 1 (chosen from three figures at random by the eyes closed point and click method)

Figure 2 (not chosen at random)

Figure 3

Don’t peek 

Come on, it’s more fun if you guess than look.

OK, the answer is that figures 1 and 3 show the new flat post-early-80s Phillips curve which is due to inflation targeting.

Did you guess without peeking ?

Figure 1 shows 17 quarterly inflation changes, from 1985q1 minus 1984q4 on. They are the first data which came undeniably after the early 80s. Figure 3 shows the most recent 17 quarterly changes. It does not look markedly different from figure 1, because of auto scaling (note that “not *just* because I am lazy” does not imply “I am not lazy”), but it is much flatter: the range of unemployment changes is 4 times as large, while the range of core inflation changes is about the same.

Figure 2 is the first 17 quarters available from FRED, from 1958q2 minus 1958q1 on (when the core CPI series starts). The first of those data were collected before Phillips published his famous scatter (with labeled axes, even). The last were collected in the first quarter of 1962, rather before the modern advances in monetary theory. It is very flat indeed. If Phillips had relied on FRED, he wouldn’t have gotten published at all. Inflation bounced around way back then, but there is almost no relationship between the change in inflation and the change in unemployment.

This is what Phillips saw for extremely low inflation rates.  The rediscovery of the fact that the Phillips curve has a low slope at inflation rates near zero is not path breaking progress.


Reading Mankiw in Seattle

A while back Nick Rowe challenged amateur internet econocranks (my word, not Nick’s) like me to actually go read an intro econ textbook. (He was specifically targeting the author of Unlearning Economics — who I, at least, don’t consider to be an econocrank, since he’s far better-versed than I am, though Nick might.)

I took him up on the challenge, and am finally writing up my thoughts because I need to reference this from another post.

Figuring I ought to go straight to the belly of the beast, I picked up a used copy of Greg Mankiw’s Principles of Microeconomics. I didn’t read every word — I’ve been poring through various econ textbooks online, plus innumerable papers and blog posts, for years, so I knew a lot of it already. But I did go through it fairly carefully (especially the diagrams), and it had some of the effect that Nick was hoping for. Some of the things that I didn’t think were (sensibly) covered in intro econ, in fact are. And not surprisingly given my autodidact’s typical spotlight (and spotty) pattern of knowledge, I learned quite a few new things.

But still, my overall impression was amazement at what is not covered, and in particular what is not covered right up front.

In place of Mankiw’s nostrums about tradeoffs, opportunity costs, margins, incentives, etc., I would expect to see discussion of the fundamentals that underpin all that:

Value. What in the heck is it? How do we measure it? This was the topic of the opening class in my one accounting class, at the NYU MBA school. Basically: accounting for non-accountants, teaching us to deconstruct balance sheets and income statements into flows of funds. A darned rigorous course, taught by a funny and cranky old guy, formerly on the Federal Accounting Standards Board, with a young assistant prof playing the straight man and the enforcer. That first class was one of the most valuable (?) I’ve ever sat through.

The phrase “theory of value” doesn’t even appear in Mankiw’s text, even though he uses the term “value” constantly, and it’s obviously a term that has some import in economics. Imagine an undergraduate who’s had zero exposure to the ideas of subjective versus objective value, or the centuries of (continuing) discussion and debate on the subject, trying to parse the following sentence, and think critically about what it really means.

…we must convert the marginal product of labor (which is measured in bushels of apples) into the value of the marginal product (which is measured in dollars).

Money. What is it? What’s its value relationship to real goods, and in particular real capital? How is it embodied in financial assets? Where did it come from? (Hint: from credit tallies and, for coins, military payments to soldiers, not barter between the butcher and the baker. That’s an armchair-created fairy tale.) The phrases “medium of account” and “medium of exchange” don’t appear in the book. Since economics is all about monetary economies, this seems like a significant omission.

Utility. The most fundamental construct in economics — the demand curve — is derived from utility maps. But Mankiw doesn’t even mention the term until page 447, where it’s discussed as “an alternative way to describe prices and optimization.” Alternative? There’s no discussion of ordinal and cardinal utility, or of the troublesome doctrine of revealed preferences (which 1. is the doctrine that allows economists to avoid talking about utility, 2. constitutes a circular definition, and 3. is never mentioned in the book).

All this gives me a feeling of indoctrination into a self-validating, hermetically sealed body of beliefs floating in space, with no egress outside that bubble, into thinking about the thinking going on therein. There are huge and not-wacky bodies of thinking out there that seriously question what goes on inside, often refuting it on its very own terms, and in the words of its own most eminent practitioners.

Yes, you could argue that I’m asking too much of undergraduates, but I would suggest that you’re asking too little (or the wrong thing) of undergraduate professors.

Is Mankiw teaching his “customers” to understand (the hallmark of the North-American higher-education system, in my opinion, compared to most other countries), or is he teaching them to adopt an undeniably ideological world view (no, neoclassical economics is not purely “positive,” not even close), and to just go obediently through the motions as prescribed in the textbook? In my opinion, he’s doing the latter.

I’m tempted to suggest that this is all true because (neoclassical?) economists don’t have a coherent or non-circular theory of value, and money, and utility. (Neither do I, but I’m working on it!) But saying that would make me sound like an internet econocrank.

Cross-posted at Asymptosis.


Solow on Bernanke (and both, on Libertopians)

I’m just sayin’. (Emphasis mine, words Solow’s):

[Bernanke’s] preferred answer is better and more system-oriented regulation. One has to ask then why regulation failed to see the crisis of 2007–2008 coming and take action to head it off. Bernanke suggests that regulators were lulled into inattention by the so-called Great Moderation. Our masters are all too eager to take the Panglossian view that a system of “free markets,” including financial markets, is self-regulating and self-stabilizing. Bernanke is surely right about this. The scholar of the 1930s has to be aware that there was similar talk about the New Era in the years before 1929. Dr. Pangloss has lots of helpers among the sharpshooters who profit most from the absence of effective oversight, and among simpleminded ideologues. They are still with us.

Cross-posted at Asymptosis.


Tim Duy on QE and Signalling*

Mark Thoma once claimed to be pleased that I was shrilly criticizing him. I sure hope he meant it, because here I go again.

Update: I have trouble with reading comprehension. A one-syllable name was too hard. I am commenting on Tim Duy, who posted at Mark Thoma’s site. I apologize for the mistake.

Tim Duy comments on the newly released FOMC minutes:

I have long believed that the Fed failed to appreciate the signalling component of quantitative easing. Indeed, I could be convinced it was the most effective channel of transmission. I am glad to see that policymakers are starting to see that as well.

I am no longer sure whether Duy thinks that QE as actually implemented was an effective signal of future monetary policy (as I guessed when writing the comment below), or whether he thinks that a new combined strategy of forward guidance and simultaneous QE, with the explicit assertion that the QE means the forward guidance should be taken seriously, would work.

Below I copy the comment I made based on the first interpretation and argue that QE2, QE3 and QE4 were not effective signals of future conventional monetary policy.   I went to FRED and stayed there (part of the reason is I had exams to grade and FRED was one place to hide from them).

The signalling effect of QE should show up as low medium term interest rates. I think there really is no sign of this on the dates of announcement of QE2 (3 or 4 dates right there), QE3, or QE3.1 (Dec 12 2012). I don’t see it either on the day (as asset prices should respond quickly) or over an interval of time.

I really think that the hypothesis that signalling future short term rates is a highly effective component of QE is fairly easily testable and rejected by the data.

Consider QE3, announced September 13 2012: the 5 year constant maturity rate went from 0.7 on the 12th to 0.65 on the 13th to 0.72 on the 14th. These are tiny fluctuations. Over a longer horizon, in all of September it went from 0.62% on the first trading day (Tuesday the 4th) to 0.62% on Friday the 28th. The 3 year rate declined all of one basis point on the 13th, then rose 3 on the 14th (0.33 to 0.32 to 0.35); in all of September it moved not at all (0.31 on the first trading day to 0.31 on the last).

There is no sign at all of a forward guidance effect. I note QE3 included explicit forward guidance about future short term rates.

QE4 (December 12th 2012) was definitely a surprise. The 3 year rate shows no change on the 12th and was up 2 basis points on the 13th. From the first to last trading day in December it went up 2 basis points. The 5 year rate was down one basis point on the 12th and up 4 on the 13th. In December overall it went up from 0.63% to 0.72%.

Again no sign of a forward guidance effect. The fluctuations are tiny, much too small to be economically significant and even too small to be statistically significant.

QE2 is harder, as the announcement was telegraphed. There was Bernanke’s August 27 2010 Jackson Hole speech, the final announcement on November 3 2010, and other announcements in between (and even the FOMC meeting before the speech). The 5 year rate went up (a bit) on August 27th and was higher on the 28th than the 26th. It dropped 11 basis points from November 2nd to 4th. This is the best news for the forward guidance hypothesis, and it is definitely economically insignificant. Overall, from August 26th until the last trading day in November 2010, the 5 year rate went up.
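
The event-study arithmetic used in the last few paragraphs is easy to mechanize: take a daily yield series and a list of announcement dates, and report the one-day change at each date. A sketch with a synthetic series (the post's actual numbers come from FRED's constant-maturity yields; the dates below match the QE3/QE4 announcements, but the yields are simulated):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
dates = pd.bdate_range("2012-01-02", periods=260)        # business days of 2012
yield_5y = pd.Series(0.7 + np.cumsum(rng.normal(0.0, 0.01, 260)), index=dates)

announcements = [pd.Timestamp("2012-09-13"), pd.Timestamp("2012-12-12")]

def one_day_change_bp(series, date):
    """Change from the prior trading day to `date`, in basis points."""
    loc = series.index.get_loc(date)
    return float((series.iloc[loc] - series.iloc[loc - 1]) * 100)

event_moves = {d.date(): one_day_change_bp(yield_5y, d) for d in announcements}
```

The signalling hypothesis predicts large negative moves on these dates; the test is then whether the announcement-day changes stand out against the series' ordinary day-to-day noise.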

I might be convinced that forward guidance is the strongest channel for QE to work, but only because I might be convinced that QE doesn’t work at all. In fact, I think that the right kind of QE would work: QE that consisted of buying 100% of new issues of RMBS at a higher than market price.


Delivering water quality to the tap

David Zetland at Aguanomics reminds us that we always need to get people on board and invested in the results of policies, and perhaps to give them a way to keep track that is relevant to their daily lives. Water delivery is pretty local in the US so far, but taken for granted in only parts of the US:

Delivering water quality to the tap

I’m now in Kiev (looking into their water utility regulation), and a typical problem has popped up, i.e., the difficulty in delivering water quality to the tap.
 
The physical layout of water systems — taking raw water from ground or surface sources, treating it, pumping it through large pipes to smaller pipes, then to building pipes and finally to the tap — means that there are multiple points at which clean water can get contaminated. There’s the problem of dirty water at source and poor treatment, of course, but the more common problems occur when water in transit gets dirty from old or leaking pipes.
 
By my understanding, most utilities run their systems to deliver clean water to the building (usually at a meter), with the quality of the piping between the meter and the tap being the building owner’s responsibility.
Building piping has two problems. The first is old or leaky pipes that may contaminate the water. The second is that some buildings have many residents who share the same pipes (and sometimes the same meter). Neighbors therefore need to find ways to share their water (rather, the bill) as well as keep their communal and unit-plumbing in good condition.


There are several ways to share the bill (divide by people, install submeters, etc.), but I want to concentrate on the plumbing problem in buildings and networks. The first issue is to know whether the water is safe to drink at the tap. If that’s not true, then it’s important to find where it gets contaminated. That question can result in finger pointing between customers and the utility as to who should pay for water testing, so I came up with this idea, taking as given the fact that utilities (and their regulators) need to ensure that water is safe to drink; it’s not the customer’s obligation, even if it’s in the customer’s interest. Such a “fact” means that utilities need to take the lead on quality testing, and here’s how I’d do it.

  1. Any utility that says its water is safe also needs to persuade customers of that fact.
  2. So it can include coupons in a few hundred bills (every month or so) that can be used to get a free water test at the customer’s tap by a qualified tester. The real cost will be $10-100, depending on local labor costs.
  3. Qualified testers are listed on a website; they are certified and equipped to test water in the house. That website also provides them with free advertising.
  4. Customers contact a tester who measures their water quality (hand held testers are getting better and cheaper). Test results are posted on the internet (without an exact address) and to the utility.
  5. “Safe” results allow the tested customer (and some share of neighbors) to have a better opinion of their water — and drink more of it.
  6. “Unsafe” results will trigger a second test by the utility, to determine if water at the mains is clean (thus the building plumbing has issues) or also dirty. Those results will make it easier for the utility and/or building owner to take action.
  7. There’s a potential problem of testers producing false “unsafe” results (to get more business), which can be reduced by withholding payment from those whose tests are contradicted in a retest. Those who get too many false positives can be removed from the list (in 3), which will hurt their business. So they are likely to be honest.
  8. We also hope that the utility is honest, but that’s the regulator’s responsibility — and no utility will be able to cover up bad test results for very long.
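
The payment and delisting rule in step 7 amounts to a small piece of bookkeeping. Here is one way it could be sketched; the class name, the three-strikes threshold, and the tester name are all invented for illustration:

```python
class TesterRegistry:
    """Bookkeeping for the listing (step 3) and honesty rule (step 7)."""

    def __init__(self, max_contradictions=3):
        self.max_contradictions = max_contradictions
        self.contradictions = {}      # tester -> contradicted "unsafe" results
        self.listed = set()

    def register(self, tester):
        self.listed.add(tester)
        self.contradictions.setdefault(tester, 0)

    def report(self, tester, unsafe_result, retest_unsafe):
        """Record one job; return True if the tester gets paid for it."""
        if unsafe_result and not retest_unsafe:
            # an "unsafe" finding contradicted on retest: withhold payment
            self.contradictions[tester] += 1
            if self.contradictions[tester] >= self.max_contradictions:
                self.listed.discard(tester)   # too many false positives
            return False
        return True

registry = TesterRegistry()
registry.register("acme-water-labs")              # hypothetical tester
paid = registry.report("acme-water-labs", unsafe_result=True, retest_unsafe=True)
for _ in range(3):                                # three contradicted results
    registry.report("acme-water-labs", unsafe_result=True, retest_unsafe=False)
delisted = "acme-water-labs" not in registry.listed
```

A confirmed "unsafe" result is paid; contradicted ones are not, and three of them get the tester delisted, which is the incentive the post is counting on.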

This idea will help utilities find contamination problems and persuade customers that water is safe to drink (and thus worth paying for!) at the same time as it supports an independent industry for assuring water quality. (Such an industry could survive in places like the Netherlands because testers are also likely to be plumbers who are ALWAYS needed.)

Bottom Line: Water quality is hard for an individual to determine, but utilities can make it easier — and make their product more attractive — by paying for random water tests.


To centralize or not to centralize?

http://www.aguanomics.com/2013/04/to-centralize-or-not-to-centralize.html
 
To centralize or not to centralize?

I’ve run into many instances of a struggle between small- and large-scale governance, e.g., local vs. regional or national water management.
These struggles occur over money, regulations, water allocations, and so on.
I can see why they happen — someone in power decides to take over responsibilities from a lower-level of government* — but I can also see why they are inefficient and unfair. It’s one thing, for example, to impose the metric system on a country, entirely another to impose the same water tariff!
The problems of overcentralization are (at least) three. First, centralization tends to impose one-size-fits-all solutions onto situations that do not require them. Second, those solutions tend to correlate mistakes that would normally offset each other, e.g., standards that are set too high or too low. Third, centralization increases the cost of gathering information, administering the system, etc., because details are lost in aggregation.
In the EU, they speak in terms of “subsidiarity,” i.e., pushing responsibility down to the lowest possible level of competence, and that’s the right term to use here. The Swiss have been well governed for centuries due to their relentless pursuit of it. The Dutch have done the same with their water boards. American mayors tend to their potholes and schools.
But there are many examples of failures: The Colorado river is NOT managed across the whole watershed (Upper and Lower in the US, separated from the Mexican tail), water and wastewater are often managed by different organizations, the US Department of Education intervenes when there’s no need to homogenize methods across the country. You get the idea.
So my advice is to solve problems based on solutions that are formed at the right scale. Incumbents who are at higher scales may protest at their loss of power, but those who care about results (instead of their power) will relinquish it.** That’s a tough conversation, of course.
Bottom Line: Centralization has costs and benefits. Don’t tell me what to eat for lunch, and I won’t tell you how to educate your children.


* Ukraine has a Ministry of Regional Development, but why does the center need to develop the regions? It appears that the MRD redistributes money taken from the regions, but that’s an invitation for imbalances.
** Economists speak of Tiebout competition among different cities that attract citizens looking for different mixes of amenities, but there’s also a value in publicizing these differences, so that people (and administrators) can compare ideas without having to move.
