AI and Productivity Growth
Yeah, I left one part of the title out . . . “Job Loss.” I do not believe there will be any. If there is, it will probably be minimal, and there will be other jobs we can go to. I believe in the ability of people to think their way out of things, making decisions on the spot based upon the knowledge they have while adapting to a changing environment. Believe it or not, that is much of supply chain.
Dean Baker says the same, only for different reasons. Meanwhile the news is using scare tactics to sell more news and gain more advertising. Fear is a profitable business to be in.
“AI, Job Loss, and Productivity Growth,” Dean Baker, Center for Economic and Policy Research
It is really painful to see the regular flow of pieces debating whether AI will lead to mass unemployment. Invariably, these pieces are written as though the author has taken an oath that they have no knowledge of economics whatsoever.
The NYT gave us the latest example on Sunday, in a piece debating how many jobs will be affected by AI. As the piece itself indicates, it is not clear what “affected by AI” even means.
What percent of jobs were affected by computers? The answer would probably be pretty close to 100 percent, if by “affected” we mean in some way changed. If by affected, we mean eliminated, then we clearly are talking about a much smaller number.
Thinking of AI like we did about computers is likely a good place to start. First of all, we should remember that there were predictions of massive layoffs and unemployment from computers and robots for decades. This did not happen.
In fact, we have a measure of the extent to which computers, robots, and other technology are displacing workers. It’s called “productivity growth,” and the Labor Department gives us data on it every quarter.
Productivity is the measure of the value of output that a worker can produce in an hour. We expect this to increase through time as we get better equipment and software, we learn how to do things better, and workers get more educated.
For the last two centuries, productivity growth has been a normal feature of the U.S. economy, and in fact, most normally functioning economies around the world. This is the basis for rising living standards through time. It is the reason that we can feed our whole population, and still export food, even with just around 1.0 percent of the workforce in agriculture, as opposed to more than 50 percent in the 19th century.
The big question is the rate at which productivity grows. Productivity growth has actually been pretty slow in recent years. It averaged just 1.3 percent annually since 2006. By contrast, it averaged close to 3.0 percent in the quarter century from 1947 to 1973.
Rather than being a period of mass unemployment and declining living standards, the rapid productivity growth in that period was associated with widespread improvements in living standards. We went from depression era living standards in 1947 to a prosperous middle-class society by the end, as ordinary workers were able to afford to buy houses and cars, and send their kids to college.
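A quick back-of-the-envelope sketch (the 1.3 percent and 3.0 percent figures come from the text; the 25-year horizon is my round number for "a quarter century") shows how much those two growth rates differ once they compound:

```python
# Compound output per worker over 25 years at the two annual
# productivity growth rates cited in the text:
# 1.3% (average since 2006) vs. 3.0% (1947-1973 average).
def compound(rate, years):
    """Cumulative growth factor for a constant annual rate."""
    return (1 + rate) ** years

slow = compound(0.013, 25)   # post-2006 pace
fast = compound(0.030, 25)   # post-war boom pace

print(f"25 years at 1.3%/yr: output per worker x{slow:.2f}")  # x1.38
print(f"25 years at 3.0%/yr: output per worker x{fast:.2f}")  # x2.09
```

In other words, at the post-war pace output per worker roughly doubles in a generation; at the recent pace it rises by less than 40 percent.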
We should think of the promise of AI in the same way. The first paragraph in the NYT piece warns/promises:
“In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were ‘at risk’ of automation ‘over some unspecified number of years, perhaps a decade or two.’”
That warning is pretty vague, but let’s say that we could use AI to eliminate 47 percent of current jobs over two decades. If we held GDP constant over this period, that would roughly correspond to the 3.0 percent annual productivity growth we saw during the post-World War II boom. And, just as we saw high levels of employment through the post-war boom (unemployment got down to 3.0 percent in 1969), we could maintain high employment if the economy had the same sort of rapid growth that we had in that quarter century. That will be a policy choice, not an issue determined by technology.
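The arithmetic behind that equivalence is worth making explicit. If 47 percent of jobs disappear over 20 years while GDP is held constant, output per remaining worker must rise by a factor of 1/0.53, and the implied annual productivity growth rate works out to about 3 percent:

```python
# Convert "47% of jobs eliminated over 20 years, GDP constant"
# into an implied annual productivity growth rate.
jobs_remaining = 1 - 0.47              # 53% of the workforce left
productivity_factor = 1 / jobs_remaining   # each worker must produce ~1.89x
annual_rate = productivity_factor ** (1 / 20) - 1

print(f"implied annual productivity growth: {annual_rate:.1%}")  # 3.2%
```

That is a touch above the 3.0 percent post-war average, which is why the piece treats the Oxford scenario as roughly equivalent to the 1947-1973 boom.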
Will Prosperity be Shared?
In the post-war boom the benefits from productivity growth were widely shared. To be clear, not everyone was doing great. Blacks were openly discriminated against, and virtually excluded from many better-paying jobs. The same was true of women, as the barriers were just beginning to come down. But the gains from productivity growth went well beyond just a small elite at the top.
Whether that happens with AI and related technologies will depend on how we as a society choose to structure the rules around AI. One reason why Bill Gates and others in the tech industry became incredibly rich was that the government granted patent and copyright protection for computer software. That was a policy choice. If we did not have these government-granted monopolies, Bill Gates would probably still be working for a living. (Okay, maybe he would be collecting his Social Security by now.)
These monopolies serve a purpose: they provide an incentive to innovate. But it’s not clear they have to be as long and as strong as is currently the case. Also, there are other ways to provide incentives. For example, the government can pay for people to do the work, as it did when it paid Moderna roughly $1 billion to develop and test its Covid vaccine. Of course, the government also gave Moderna control over the vaccine, allowing the company’s stock to generate five Moderna billionaires in a bit over a year.
It is not hard to envision routes through which AI can lead to widespread prosperity in a way comparable to what we saw in the post-war boom. Suppose that we don’t have government-granted monopolies restricting access to the technology, so that it can be freely used.
In that world, I could likely go to a medical technician (someone trained in performing clinical tests and entering data), who could plug various test results into an AI system, and it would tell me if I have a heart problem, a kidney problem, or anything else. Rather than seeing a highly paid physician, I could have most of my health care needs met with this technology and a reasonably compensated medical professional, who may get less than one-third of the pay of a doctor.
There would be a similar story with legal assistance. Certainly, for standard legal processes, like preparing a will or even arranging a divorce, AI would likely be up to the task. Even in more complicated cases, AI could likely prepare a brief, which a lawyer could evaluate and edit in a fraction of the time it would take them if they were working from scratch.
People have pointed out that AI makes mistakes. There have been many instances where we have heard of AI systems inventing facts that are not true or citing sources that don’t exist. This is a real problem, but presumably one that will be largely fixed in the not-too-distant future. We shouldn’t imagine that AI systems will ever be perfect, but the number of errors they make will surely be reduced as the technology is developed further.
In addition, it is important to remember that humans also make errors. There are few of us that cannot recall a serious mistake that a doctor made in diagnosing or treating our own condition or a close family member. A world without mistakes does not exist and cannot be the basis of comparison. We need AI to be at least as good as the workers it is displacing, but that doesn’t mean perfect.
AI and the Distribution of Income
We structured our economy over the last four decades so that most of the gains from the productivity growth over this period went to those at the top. Contrary to what is often asserted, most of the gains actually did not go to corporate profits, they went to workers at the top of the pay ladder, like CEOs and other top management, Wall Street types, highly paid tech workers, and doctors and lawyers and other highly paid professionals. These workers used their political power to ensure that the rules of the economy were designed to benefit them.
Whether or not that continues in the era of AI will depend on the power of these groups relative to less highly paid workers. Just to take an obvious example, doctors may use their political power to have licensing restrictions that prevent less highly trained medical professionals from making diagnoses and recommending treatments based on AI.
If that seems far-fetched, we already have laws that make it very difficult for even very well-trained foreign doctors to practice in the United States. While the cry of “free trade” was used to expose manufacturing workers to international competition, and thereby depress their pay, it almost never came up with doctors and other highly paid professionals.
Anyhow, we may well see a similar story with AI, where highly paid professionals use their political power to limit the uses of AI and ensure that it doesn’t depress their incomes. This also is an issue with ownership of the technology itself. If we don’t allow for strong patent/copyright monopolies in AI, and make non-disclosure agreements difficult to enforce, we can ensure the technology is more widely spread and cheap. This would mean that the gains are widely shared and not going to a relatively small group of Bill Gates types.
It is also important to understand how high incomes for a small group depress incomes for everyone else. Most of us don’t directly pay for our own health care. We have insurance provided by an employer or the government. However, insurers are not charities. (You knew that.)
If insurers have to pay out lots of money to doctors, then it will mean that our employers pay higher premiums, which they will look to take out of our paychecks. Alternatively, if the government is picking up the tab, there will be less money to pay for child tax credits, day care, and other good things.
Also, when the lawyers, doctors, tech workers, and other would-be beneficiaries from AI get high incomes, they buy bigger houses, and more of them. That raises the cost of housing for everyone else. We can and should build more housing, but when you have a small segment of the population that has far more money than everyone else, it is difficult to keep housing affordable for ordinary workers.
Anyhow, the point here is straightforward. Keeping down the pay for those at the top is not an issue of jealousy. The more money that goes to the top, the less there is for everyone else, as long as we have not structured the rules in a way that takes away the incentive to be innovative and productive.
Fear the Rich, Not AI
The moral of the story is that there is nothing about AI technology that should lead to mass unemployment and inequality. If those are outcomes, it will be the result of how we structured the rules, not the technology itself. We need to keep our eyes on the ball and remember that structuring the rules is a policy choice.
And, one other point: those who want to structure the rules so that all the money goes to the top will want to say the problem is technology. It is much easier for them to tell the rest of us that they are rich and everyone else is not because of technology, rather than because they rigged the market. Keep that in mind, always.
Nobody seems to be paying attention to the amount of human effort that goes into building the database that is AI’s intuition, for lack of a better term, the human “ability of people to think their way out of things, making decisions on the spot, based upon the knowledge they have while adapting to a changing environment.”
My master’s thesis (somewhat tongue-in-cheek) tied dBase, the original database technology and core of the Internet today, Druidic tattooing, and cascading memories to what was loosely then thought of as ‘AI’: human intuition is simply a database of our accumulated experience. How we utilize it, or what we do (or do not do) with it, is beside the point and not germane to the topic at hand … the point is AI has to be programmed by humans; it doesn’t develop its own database, its own intuition.
AI isn’t the thing to be afraid of …
TB:
A long time ago and in a distant galaxy, when I was reading science fiction (Clarke, Asimov, etc.), I read one story about the military using artificial intelligence. AI drove everything. As it turned out in the story, one person’s AI failed and he was forced to use his wits to command his vehicle. AI had no counter for his mechanical application of actions. Where everything else was ending in a draw, his actions were outside the ordinary, beyond the bounds of what AI had learned or was programmed to counter. We are strange creatures who have a mind, a human computer in our heads, which no machine can counter, since our reactions and thoughts can change to meet the challenge. What if . . .
yes, “we” have a mind. or some of us. AI does not. we have already seen the aggravation that computerized interfaces with government, or even, ahem, the internet, cause. the difference between human intuition and AI intuition is that humans have human intuition and computers have the “intuition” of programmers built into them. programmers are not people … at least not at the time they are programming for the use of others [other people] whose intuition may not match the programmer’s.
Dean Baker, one of my favorite people, seems to have forgotten that during the late productivity boom, wages for workers stagnated.
no, i am not a luddite. i am just not happy with the results of humans who think computerizing human life will make them more money and that’s what is important. we already have too much (sic) humans acting like robots.
An exceptional essay.
speaking of AI.
I’ve seen various estimates on the impact of AI on employment.
My guess is that it will be profound, over the course of a generation or two.
Displacement, if not outright loss. I would guess the latter. Machines (computers) will be doing the work that others (creative types!) have done. Certainly sitcom scripts will be AI-generated. Doctors are already looking into AI-written scripts to help them with their bedside-manner awkwardness: this will be less ‘displacing,’ I am sure. Doctors love the tech stuff, when it helps them.
Creative-writing ‘workers’, para-legals, screen writers, copy writers of all sorts have every right to be nervous. So do the professors who teach them.
Will it have much impact on jobs and employment? Hmmm. Did computers? I wonder…
This is the next wave of the Industrial Revolution, and it will be affecting people who were previously untouched. Will it boost productivity? If you include in ‘productivity’ the goods that are produced by ‘machines’ (who don’t earn wages or salaries, or purchase goods), then surely it will.
Goldman Sachs Predicts 300 Million Jobs Will Be Lost …
Forbes, Jack Kelly, Mar 31, 2023
https://www.forbes.com/sites/jackkelly/2023/03/31
The investment bank estimates 300 million jobs could be lost or diminished by this fast-growing technology. Goldman contends automation creates …
Generative AI Could Raise Global GDP by 7%
Goldman Sachs AI report – the ‘good news’ part
Two thirds of occupations could be partially automated by AI
At the same time, advances in AI are expected to have far-reaching implications for the global enterprise software, healthcare and financial services industries, according to a separate report from Goldman Sachs Research. With well-known tech giants poised to roll out their own generative AI tools, the enterprise software industry appears to be embarking on the next wave of innovation, after the development of the internet, mobile and cloud computing transformed the ways we operate as a society.
“Generative AI can streamline business workflows, automate routine tasks and give rise to a new generation of business applications,” Kash Rangan, senior U.S. software analyst in Goldman Sachs Research, writes in the team’s report. The technology is making inroads in business applications, improving the day-to-day efficiency of knowledge workers, helping scientists develop drugs faster and accelerating the development of software code, among other things.
Software companies are already arming their product portfolios with new generative AI offerings. Software-as-a-service firms, for example, are using it to open opportunities for upselling and cross-selling product and increasing their customer retention and expansion, the authors note. They see multiple ways that such businesses can leverage generative AI for growth: 1) through new production and application releases, 2) by charging premiums for AI-integrated offerings, and 3) by increasing prices over time as existing products are supplemented with AI-enabled features and prove their value to customers. Added up, GS Research estimates the total addressable market for generative AI software to be $150 billion, compared with $685 billion for the global software industry.
As more generative AI tools are developed and layered into existing software packages and technology platforms, the team sees businesses across the economy benefiting, from enhancing office productivity and sales efforts, to the design of buildings and manufactured parts, to improving patient diagnosis in healthcare settings, to detecting cyber fraud.
While much is unknown about how generative AI will influence the world economy and society, and it will take time to play out, there are clear signs that the effects could be profound.
Goldman Sachs Predicts 300 Million Jobs Will Be Lost
My sense is there is a movement to substantially reduce human populations, with the idea that 1 or 2 billion is going to work much better for the planet than current levels. Possibly too much to call it a plan, but AI very likely will be used to impoverish the prospects of millions, maybe billions. More or less the same kinds of folks who set the policies that cratered manufacturing in upstate New York and eastern Ohio are going to be setting policies for AI. So watch some idiotic drivel on TV, take your opioids, don’t raise a family, and die young. The renewable energy transitions won’t replace the energy we have right now 1:1, so better not keep populations 1:1 either. The most logical exploration of AI would clearly drive the work week towards 20 hours and then even lower, but I’m not feeling that’s in the pipeline here. In 5 years the critics will be singing the praises of the “JD Vance” of Cupertino … of course that author probably won’t be human, as the publisher understands the AI-written book will be just as big a seller for a lot less cost.
Eric,
something like that.
A.I. May Someday Work Medical Miracles.
NY Times – June 26
… ChatGPT-style artificial intelligence is coming to health care, and the grand vision of what it could bring is inspiring. Every doctor, enthusiasts predict, will have a superintelligent sidekick, dispensing suggestions to improve care.
But first will come more mundane applications of artificial intelligence. A prime target will be to ease the crushing burden of digital paperwork that physicians must produce, typing lengthy notes into electronic medical records required for treatment, billing and administrative purposes.
For now, the new A.I. in health care is going to be less a genius partner than a tireless scribe.
From leaders at major medical centers to family physicians, there is optimism that health care will benefit from the latest advances in generative A.I. — technology that can produce everything from poetry to computer programs, often with human-level fluency.
But medicine, doctors emphasize, is not a wide open terrain of experimentation. A.I.’s tendency to occasionally create fabrications, or so-called hallucinations, can be amusing, but not in the high-stakes realm of health care.
That makes generative A.I., they say, very different from A.I. algorithms, already approved by the Food and Drug Administration, for specific applications, like scanning medical images for cell clusters or subtle patterns that suggest the presence of lung or breast cancer. Doctors are also using chatbots to communicate more effectively with some patients. …
Studies show that patients forget up to 80 percent of what physicians and nurses say during visits. The recorded and A.I.-generated summary of the visit, Dr. Thompson said, is a resource her patients can return to for reminders to take medications, exercise or schedule follow-up visits. …
Doctors Use Chatbot to Improve Their Bedside Manner
NY Times – June 12
… doctors (are) asking ChatGPT to help them communicate with patients in a more compassionate way.
In one survey, 85 percent of patients reported that a doctor’s compassion was more important than waiting time or cost. In another survey, nearly three-quarters of respondents said they had gone to doctors who were not compassionate. And a study of doctors’ conversations with the families of dying patients found that many were not empathetic.
Enter chatbots, which doctors are using to find words to break bad news and express concerns about a patient’s suffering, or to just more clearly explain medical recommendations. …
Those “hallucinations” are a problem in law as well and have already gotten some lawyers using AI in trouble because of them. It may be that the systems will generate more human work due to the necessity to “edit” their products which in some cases will require duplicating the research or at least reviewing whatever the systems cite.
Jack:
Or make sure the data is correct, or correct it with exceptions.
Yes, but to do that will often require significant human work; in some cases it may be significant enough that it’s duplicative and saves no time at all.
yes, yes, we already have many people with computer generated empathy (we feel your pain. our hearts and prayers go out to you. have a nice day.)
frankly, i don’t need your empathy, even if it is sincere. I do from time to time need your expertise. I prefer it come from actual experience, but there are no doubt some times when it can be made better by enhancing it with the kind of “knowledge” that what is called AI can look up very quickly. but don’t let us all stampede to that side of the boat.