The 24 Trillion Dollar Bezzle
At the beginning of 2007, the net worth of households and non-profit organizations exceeded the level implied by its 1947-1996 historical average multiple of GDP by some $16 trillion. It took 24 months to wipe out eighty percent, or roughly $13 trillion, of that colossal but ephemeral slush fund.
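For readers who want to reproduce that figure, here is a minimal sketch of the calculation, assuming the FRED series IDs TNWBSHNO (household and nonprofit net worth) and GDP (nominal GDP); the post does not name the exact series, so treat these as illustrative stand-ins rather than the author's actual method:

```python
# A rough sketch, not the post's actual code: pull two FRED series and measure
# how far net worth sits above its 1947-1996 average multiple of GDP.
import pandas_datareader.data as web

nw = web.DataReader("TNWBSHNO", "fred", "1947-01-01", "2016-12-31")["TNWBSHNO"]
gdp = web.DataReader("GDP", "fred", "1947-01-01", "2016-12-31")["GDP"]

nw = nw / 1_000   # FRED reports net worth in millions and GDP in billions; verify units when running

multiple = nw / gdp                          # net worth as a multiple of GDP
baseline = multiple["1947":"1996"].mean()    # the 1947-1996 historical average multiple

# The "bezzle" at any date: dollars of net worth above what the historical
# multiple would imply.
bezzle = nw - baseline * gdp
print(bezzle.loc["2007-01-01"])              # roughly the $16 trillion cited above
```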
In mid-2016, net worth stood at 4.83 times GDP, compared with 4.72 on the eve of the Great Unworthing. Below is a FRED graph of GDP and household net worth, both indexed to 1952:
The empty space that opens up between the red line and the blue line after around 1995 is what John Kenneth Galbraith called “the bezzle” — summarized by John Kay as “that increment to wealth that occurs during the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it.”
In Chapter 8 of The Great Crash, 1929, Galbraith wrote:
“In many ways the effect of the crash on embezzlement was more significant than on suicide. To the economist embezzlement is the most interesting of crimes. Alone among the various forms of larceny it has a time parameter. Weeks, months or years may elapse between the commission of the crime and its discovery. (This is a period, incidentally, when the embezzler has his gain and the man who has been embezzled, oddly enough, feels no loss. There is a net increase in psychic wealth.) At any given time there exists an inventory of undiscovered embezzlement in – or more precisely not in – the country’s business and banks. This inventory – it should perhaps be called the bezzle – amounts at any moment to many millions of dollars. It also varies in size with the business cycle. In good times people are relaxed, trusting, and money is plentiful. But even though money is plentiful, there are always many people who need more. Under these circumstances the rate of embezzlement grows, the rate of discovery falls off, and the bezzle increases rapidly. In depression all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks.”
In the present case, the bezzle is an economic policy feature. It is the product of tax cuts, Greenspan puts and banking deregulation. Here is a FRED graph close-up of the post-1995 asset bubbles:
To make a long story short, think of wealth as capital. The value of capital is determined by the present value of an expected future income stream. The value of capital fluctuates with changing expectations, but when the nominal value of capital diverges persistently and significantly from net revenues, something’s got to give. Either economic growth is going to suddenly gush forth “like nobody has ever seen before” or net worth is going to have to come back down to earth.
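For reference, the valuation logic being invoked here is just the standard present-value identity (the symbols below are mine, not the post's): the value V of an asset is the discounted sum of its expected future cash flows CF_t at discount rate r, and for a stream growing at a constant rate g < r it collapses to the familiar capitalization formula:

\[ V \;=\; \sum_{t=1}^{\infty} \frac{E[CF_t]}{(1+r)^t} \;=\; \frac{CF_1}{r-g} \quad (g < r) \]

When expected growth falls or the discount rate rises, V falls; a persistent gap between V and the income stream that is supposed to justify it is exactly the divergence described above.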
Somewhere between 20 and 30 TRILLION dollars of net worth will evaporate within the span of perhaps two years.
When will that happen? Who knows? There is one notable regularity in the data, though — the one that screams “Ponzi!”
When the net worth bubble stops going up…
…it goes down.
Lower growth, lower interest rates, higher asset values, more volatility
Sandwichman… I presume your current post is an elaboration and further extension of your original post in response to Jared’s “robotics” post, which was based on his observations of productivity measures.
In other words, the issue isn’t robotics or automation but measures of productivity, which don’t capture the effect of jobs being lost to automation.
I make a couple of observations. Productivity is widgets per hour of human labor. Aggregated economic values can’t be directly separated into numbers of units and unit prices in any one industry, much less made into equivalent “leeks” across multiple industries and products, goods, and services. So the method which derives unit volumes and unit prices from composite prices (GDP) is based on huge assumptions across multiple industries, which are only occasionally reset by sampling from a few industries (the last time in 2009, I think), which is why current measures of “real” data are indexed to 100 = 2009 values.
The method of actually separating unit volumes (quantities) and unit prices from the composite price changes from year 0 to year 1, etc. isn’t clear or transparent. It is fundamentally based on a single assumption: increases in composite prices reflect the combination of unit price times number of units, such that higher composite prices equate to lower unit prices and higher volumes… and the reverse. There is thus some “constant” defined for each product type so that unit volumes can be derived from composite prices. Such constants change with time, of course, as do the units being measured.
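As a toy illustration of the deflation step being described (the numbers and the simple division below are invented; BEA actually uses chain-weighted quantity indexes, so this is only the basic idea), quantities are backed out of nominal values with a price index set to 100 in the base year:

```python
# Toy illustration of separating "quantity" from "price" in a nominal series,
# using a 2009 = 100 price index. All numbers are made up.
nominal = {2007: 14.5, 2009: 14.4, 2016: 18.7}     # nominal values, $ trillions (placeholder)
deflator = {2007: 97.3, 2009: 100.0, 2016: 111.4}  # price index, 2009 = 100 (placeholder)

real = {yr: nominal[yr] / (deflator[yr] / 100) for yr in nominal}
# "real" is expressed in 2009 dollars, which is why the published series
# are indexed to 100 = 2009 values, as noted above.
print(real)
```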
Next is the labor hours factor. Labor hours per unit time are assumed to be the same across all industries and occupations on average. That’s a reasonable method, but it also changes with time… Are labor hours full-time-equivalent hours, or only the average hours reported by respondents in the surveys and then extended to all employees at all moments in time during a year? The method of actually obtaining hours by industry is likewise not transparent in detail.
Both of these metrics (output quantities and input labor hours) are prone to error and to the applicability of the assumptions used. They are in fact derived from the far more measurable “goods” producing industries. How and in what fashion the methods apply to service industries’ “quantities” of output is likewise not transparent. I keep wondering how a legal service is quantized. There are several types of legal outputs: advice (which may or may not be taken, in whole or in part, so has no measurable output); court defense and prosecution services; out-of-court paper slinging between contending parties that gets “settled” out of court — what was the output? One party lost, the other gained: net zero output.
There are many other services which expend billions in composite dollars and hours, but for which output quantities are immeasurable… so what is the labor output quantity?
Finally, I cut to the chase by measuring human labor productivity in a more direct, and on net more significant, fashion.
We know total employment (numbers of people employed) in non-black-market employment… we know this approximately monthly, in fact. We also know NIPA’s output quantities (NIPA table 1.2.3) for composite GDP, for Goods, and for Services.
So a straightforward “gross” measure of productivity can be obtained directly by comparing how output changes with time against how employment changes with time in goods and in services.
I’ve done this, and of course it shows that labor productivity in Goods employment has soared (because the denominator keeps falling while the numerator increases). In Services the numerator increases only a little, but the denominator increases by nearly the same amount, so productivity of Services labor is tiny by comparison.
The composite productivity of employment, though, has to weight each sector (Goods and Services) appropriately to its contribution. Should that be weighted by employment proportions in each industry or by output quantity proportions? Services employs ~10x more people than Goods (~124 million v. 12 million at present), which would weight the low Services productivity at 10x that of Goods productivity.
I’ve plotted and quantized this data since 1969, and it can be easily separated into periods of linear changes in slope by standard statistics (or by eyeballing it as well). Anyway, I come up with a total (in 2016) productivity-of-employment measure for Goods which is ~12x that of Services… and the trend is continuing to increase that differential even while employment in Goods continues to drop below 10% and Services continues to absorb all gains in employment.
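A minimal sketch of that calculation, with placeholder numbers standing in for the NIPA output indexes and the employment counts (none of the figures below are actual data), might look like this:

```python
# Sketch of the "gross" productivity measure described above: real output per
# person employed, computed separately for Goods and Services, then combined
# with either employment weights or output weights. All numbers are placeholders.

def productivity(output_index, employment_millions):
    return {yr: output_index[yr] / employment_millions[yr] for yr in output_index}

goods_output = {1969: 40.0, 2016: 110.0}        # real output index (placeholder)
goods_emp    = {1969: 23.0, 2016: 12.0}         # employment, millions (placeholder)
services_output = {1969: 35.0, 2016: 105.0}
services_emp    = {1969: 48.0, 2016: 124.0}

goods_prod    = productivity(goods_output, goods_emp)
services_prod = productivity(services_output, services_emp)

# The weighting question raised above: employment weights emphasize the large,
# low-productivity Services sector; output weights do not.
yr = 2016
emp_share_services = services_emp[yr] / (services_emp[yr] + goods_emp[yr])
composite_emp_weighted = (emp_share_services * services_prod[yr]
                          + (1 - emp_share_services) * goods_prod[yr])
print(goods_prod[yr] / services_prod[yr], composite_emp_weighted)
```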
If you want to know how robotics and automation affect jobs and employment, this method is a gross, but direct, measure. It is however still based on NIPA’s quantized outputs… which as I say are not very transparent, nor are the assumptions necessarily applicable. They are perhaps a decent and valid way to compare over time frames, but that doesn’t make the values correct or accurate.
I have several real questions about the distinction between Goods and Services labor, as well as about real measures of labor hours in Services (or in Goods, depending on how labor is categorized).
An engineer working for a goods-producing industry is classed as Goods labor, though in fact that is a service being purchased in support of a goods-producing business. If such a service is purchased on the outside (from “consultants” or “independent contractors”, or an “engineering services” firm), then is it measured as part of goods or as a service?
Is an import business that purchases goods that it sells to goods manufacturers, and that also purchases goods it sells to retailers or wholesalers, a service or a goods-producing business? How is its labor output quantity measured? By the number of items it sells to a manufacturer of goods or by the number sold to retailers? Or is it differentiated and apportioned? It’s still a service, though. But if an employee in the purchasing department of a goods producer purchases imports from the foreign manufacturer just as the import business does, are the purchasing department employees of the goods producer part of a service or part of a goods-producing business?
Here’s another conundrum…
If a business both designs and manufactures its product, and can for example improve the product’s measured output on several measures important to the user, and do so at a fraction of the cost of the product before the improvement, what is the productivity improvement?
A concrete example:
A disk drive at year t0 had n bytes of data stored on it, it took x time to access any desired piece of data, and it cost the manufacturer $y, including marketing and all overhead, interest, etc., per unit sold. The sales price was $10y.
At time t1 the data stored increases to 10n, the time to access reduces to x/10, and the cost drops to $y/4, but the price only drops from 10y to 10y/2.
What is the productivity gain?
Next, at time t2 the disk drive business moves 98% of the domestic labor content to a foreign producer, where it now has improved the data stored to 100n and reduced the time to access data to x/50, and the cost drops from 10y/4 to 3y/4, but the price is only reduced to 10y/4. The business now imports this t2 device, paying no import duty.
Now what is the productivity gained?
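Here is a minimal sketch of the t0-to-t1 step under three different definitions of output (labor input per drive is assumed constant, which the example does not specify, and the values are normalized), just to make the contrast concrete:

```python
# Hypothetical illustration of the disk-drive example above, t0 -> t1 only.
# Labor input is assumed to be one "unit" per drive at both dates, so the
# contrast below is purely about how output is defined.

n, x, y = 1.0, 1.0, 1.0            # bytes stored, access time, unit cost at t0 (normalized)
price_t0, price_t1 = 10 * y, 10 * y / 2

# Definition 1: output = number of drives per unit of labor -> no measured gain.
gain_units = 1.0 / 1.0

# Definition 2: output = capability delivered to the user (bytes stored,
# scaled by the speed improvement) per unit of labor.
capability_t0 = n / x                      # bytes per unit of access time
capability_t1 = (10 * n) / (x / 10)        # 10x capacity, 10x faster access
gain_capability = capability_t1 / capability_t0        # 100x

# Definition 3: quality-adjusted capability per dollar of price, the kind of
# hedonic adjustment statistical agencies attempt for computers.
gain_per_dollar = (capability_t1 / price_t1) / (capability_t0 / price_t0)   # 200x

print(gain_units, gain_capability, gain_per_dollar)
```

The same labor shows a 1x, 100x, or 200x “gain” depending purely on how output is defined, which is the ambiguity being described.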
You get the picture… what’s measured as output? To the user it’s the amount of data which can be stored and how fast it can be accessed. To the economist it’s the number of units (disk drives) produced and sold on the market. What’s the productivity in each case and which is the correct or applicable real measure of national productivity gain?
In each version of the device, as the price drops the volume (quantity) sold increases (but it’s actually the price per unit of data stored and the access time that determine the volume sold, not the price reduction by itself). As a matter of real fact, the volumes sold could be precisely predicted as a function of price per unit of data stored, with a further bump to volumes from access-time reductions. There was no marketplace ambiguity.
A basically identical effect occurred with semiconductors, where the fundamental functional unit of value to the user was in fact the number of transistors per unit area (on a wafer or on a chip from the wafer) and the speed with which a fundamental electronic “instruction” could be completed.
Each of these computational devices enabled labor in other goods and services to be reduced. Were the in-house programmers in a goods-producing industry that purchased computers and disk drives Services labor or Goods labor? In fact we know that huge numbers of goods producers contracted with outside consultants and “independent contractors” for software services to enable their goods-producing business to reduce production costs by reducing manufacturing and engineering labor content. As a major economic side effect they could also outsource accounting and payroll services that were otherwise in-house, at reduced prices relative to in-house costs, reduce the real estate (floor space) applied to those services, and shift it to rental income or to direct manufacturing space, or sell it for a profit.
So, if all these formerly classified Goods-production labor hours were in fact largely a service, and the computing industries made those services far more productive, then why aren’t services showing up as also being far more productive? Or are they in fact showing any productivity gain at all only because a far lower amount of such labor content is now required to do 10x more than could be done before… while the overwhelming majority of services employment hours are in services that have had near zero gain in productivity?
Here’s another issue with productivity measurement, I think.
Google, as just one example, is an advertising media business. It employs programmers and computer hardware as a service to enable, create, and support that media. But advertising itself has no direct output. Advertising only has output when somebody uses that advertisement to purchase something… a good or service. So how does one measure the productivity of a Google programmer in the economy?
We can use Oracle (an accounting and data-analysis software provider) as well. It sells programming software (as a service purchased on some lease or per-use terms). This has no output directly; it only has an output in a user’s service output or goods-production output (and even in goods production it is only a service to that producer rather than actually producing anything itself).
So for Oracle, what’s the quantized unit of output? Lines of code, measured in bytes of code? Or is it the packaged set of code wrapped under each “named package” sold to users, independent of how many lines of code are involved?
I could go on and on but I won’t. The issue is whether the categorization of the occupations employed is as a service or a goods-producing agent of labor, and if a service, then how is the unit of service output quantized? Computing has made it possible to separate services from goods production, where the service is applied by using a computer to aid in what used to be far more labor-intensive exercises to obtain an output. Not only that, but the output with a computer is far more capable than was ever justified by the far more labor-intensive methods used earlier… the results are far more refined, and far more options and quantified tradeoffs are now included to find more optimal results.
Many, many things that were impossible before for humans alone, with all the technology then available, have only become possible through computing, and this has been translated into all kinds of devices used for new commercial products and services in the business community.
I’m not referring to consumers’ gains in comforts and convenience, which is or may be an artifact of commercial competition in selling these new devices and services to the public.
Sandwichman, on your chart of net worth relative to GDP, I note the following.
The first deviation began in 1997 and peaked in 2000 — 3 years.
The second deviation began in 2003 and peaked in 2007 — 4 years.
The third and current deviation began in 2009 and it still hasn’t peaked by 2016 — 7 years and still counting.
Each deviation exceeds the magnitude and percentage magnitude of the prior one. This indicates that the forcing function of the deviations has been increasing.
Both the longer times between onset and peaks of the deviations and their increasing magnitudes together comprise what might be referred to as improved “learning” of how to maximize the forcing functions…
So if I go back to the 1st two deviations and assert that they were effectively the result of a con-game, then the forcing function is the con.
The first con was the speculation referred to, which was in fact the dot-com boom (and bust). This con was predicated on the mark being conned into believing that the dot-com companies, which had no profits and indeed only highly hyped plans for profits, would translate to huge profits in the future (all cons are based on “future” events materializing as the con describes). This gave the mark “confidence” in making investments in those and other enterprises, which moved the equity markets to higher and higher levels in those “conned enterprises”… I’m reminded of Global Crossing, for example… just one of many. So the first deviation’s peak in “net worth” was due to gains in capital income (both realized and unrealized gains are counted in net worth).
The 2nd deviation is referred to as the “housing boom” (and bust), which was a con just like the first one but with a different speculation… real estate in the form of residential housing. Like the first, the net worth growth was due to realized and unrealized residential value gains, and like the first, the con was predicated on the mark believing that the housing market would continue to grow. Since this was a global phenomenon (in Europe and the US simultaneously), the cons were able to also use the Europeans as marks… same reasons, same effects on residential real estate.
The current and 3rd deviation is also a con, but the cons have learned that all they need to do is refine the speculative future gains in the mark’s eyes to get them to invest in whatever “future gain” is being promoted. It is, as I see it, the extension of a recovery from an apparently “depressed level” of the market — “apparently depressed” because the lower level at the beginning of the 3rd con was below the peak of the 2nd con. As the recovery continued, the cons simply extended the future potential gains to newer heights. This was of course aided and abetted by policies… speculative at first, but with a GOP congress in the majority the speculated policy changes would extend those future potential gains to newer heights, as if they could be fully supported indefinitely by the U.S. (and global) economy.
In all of these cases the con has been based on FUTURE extended and unlimited growth in some aspect of the equity investments being offered. Of course in all cases this unlimited extension into the future is not possible in reality, but the marks for some unfathomable reason are willing to bet that the cons are right.
The cons have “learned”, I think, from the first two con games that the specificity of the source(s) for these future gains must be ambiguous enough that they cannot be specifically shown to be limited in future potential growth, which would end the con and result in the next “grounding”.
Part of the current con is being helped along by foreign investment in the US, due to its growing economy relative to other economies’ slower growth and greater risks at the present time. This places greater demand on available US assets and thus forces asset prices up, which increases net worth. Another part is that corporate profits haven’t been reinvested in new production capacity or productivity gains (due to still-depressed demand) and have thus been diverted to stock buy-backs, which also decrease asset supply availability and thus increase asset values and net worth.
Both of these, along with anticipated deregulation under the GOP congress (not to mention Trump, which just boosts the con even more), give the marks confidence that profits will grow at even more unprecedented rates and keep the marks in the cons’ game.
But an unlimited gain in asset values and profit growth will begin to peak out and end the “unlimited” con game. The name of the con game is therefore to extend the future appearance of growth and keep the speculation on these various and ambiguous things, which enables them to remain on the table as potential future gains in the mark’s eyes. Tax cuts, deregulation, lower enforcement by the SEC, a conservative on the SCOTUS bench giving the conservatives a majority in laissez-faire decisions and less federal oversight and power, more defense spending increases and less social spending, etc. All of these things are fueling speculated future unlimited growth of assets, hence investment in them (so the marks don’t think they’re being left behind).
When it busts it will be a bigger bust than either of the last two deviations, with greater adverse consequences… and the marks will be left holding the proverbial bag, as they always have. When this will occur I have no idea… that it will occur is certain and inevitable.
I did the percent-difference calculation from the same data you used: nominal values for both Net Worth and GDP.
For reference, the highest prior percent difference occurred in 1961, at 288%. The lowest prior difference was in 1978, at 218%. From 1985 to 1995 it averaged ~265%. On the whole, prior to 1996 the average was ~255%. This gives a rough basis for evaluating the three deviations after 1996.
The first deviation peaked at 341%, the second at 372%, and the third and still ongoing deviation is presently at 383%. The two low points between the deviations are at 280%… roughly equal to the prior highest percent-difference peak in 1961.
What’s interesting to me is that the percentage difference between Net Worth and GDP has always, prior to the onset of the first deviation, been in the range from > 218% to < 288% of GDP or roughly 253% +/- 35%. In other words the maximum deviation from the average has been ~ +/- 14% of the average percent difference over time (= 35 / 253).
Assuming that's a 2.5-sigma deviation from the mean, the 3-sigma deviation would be +/- 17% of the mean, or in this case +/- 42 percentage points from 253%… i.e. the 3-sigma max might be expected to be no greater than ~295% difference relative to GDP. I'll give it even 300% at the outside.
Thus the deviations of the three excursions from the mean since 1996 have been hugely above the roughly 0.2%-probability maximum one would statistically expect from the distribution of the percentage differences of Net Worth to GDP, and it has occurred three times already.
Clearly there has been a huge change that has allowed these three spikes: in 2000, in 2007, and the current, continuing one as of 2016.
Even if you use the low points between the three deviations as a new "mean" at 280% difference to GDP, the peaks represent percentage deviations from that mean of 22%, 33%, and currently 37% and continuing.
These three "confidence game" events are all well above any possible 3-sigma probability condition… and should be so rare that they wouldn't occur in over 100 years of quarterly economic data… and that's if 280% difference to GDP is a "new" mean… why it would suddenly rise from a 253% mean is another issue entirely.
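A minimal sketch of that screening, assuming nominal net worth and GDP are already loaded as pandas Series on the same quarterly index and in the same units (for example via the FRED pull sketched near the top of the thread); treating the pre-1996 half-range as 2.5 sigma mirrors the assumption above rather than anything estimated from the data:

```python
# Sketch of the percent-difference and sigma-band reasoning above.
# `nw` and `gdp` are assumed to be pandas Series of nominal net worth and GDP
# on the same quarterly DatetimeIndex and in the same units.
pct_diff = (nw / gdp - 1.0) * 100.0            # "percent difference" to GDP

pre1996 = pct_diff[:"1995"]
mean_pre = pre1996.mean()                      # ~253% in the comment's figures
half_range = (pre1996.max() - pre1996.min()) / 2   # ~35 points

sigma = half_range / 2.5                       # treat the historical half-range as 2.5 sigma
upper_3sigma = mean_pre + 3 * sigma            # ~295% on the comment's numbers

# Flag the post-1996 excursions that exceed that band.
post = pct_diff["1996":]
excursions = post[post > upper_3sigma]
print(mean_pre, upper_3sigma, excursions.groupby(excursions.index.year).max())
```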
“The first con was the speculation referred to and which was in fact the dot-com boom”
I think some slight adjustment is needed to better understand the situation. In addition to totally hyped companies, the boom built real bandwidth. It was excessive, but YouTube and Netflix invented uses for it, so there was some time-shifting of real wealth improvement in addition to the hype. Similarly with the housing boom, but the invention of new ways to use buildings was not perceptible.
Perhaps if you remove the bezzle you get a curve that passes through neither the peak nor the valley of the FRED data.
Most of my net worth is in my 401k. Most of my father’s (when he was my age) was in the present value of his pension. If the FRED numbers do not account for the shift from pensions to 401ks, you would expect there to be a growing divergence beginning in about 1980. Only some of the divergence would be illusory.