Will AI take all the jobs?
Noah Smith has a blog post arguing that AI won’t take away all the jobs from humans. It’s a clever and well-written post that deserves your attention.
Here are some money grafs:
“So as AI gets better and better, and gets used for more and more different tasks, the limited global supply of compute will eventually force us to make hard choices about where to allocate AI’s awesome power. We will have to decide where to apply our limited amount of AI, and all the various applications will be competing with each other. Some applications will win that competition, and some will lose.
“This is the concept of opportunity cost — one of the core concepts of economics, and yet one of the hardest to wrap one’s head around. When AI becomes so powerful that it can be used for practically anything, the cost of using AI for any task will be determined by the value of the other things the AI could be used for instead.”
Read the whole thing. Smith isn’t a Pollyanna, and he enumerates valid concerns about the expansion of AI.
Kevin Drum isn’t having any of this:
“It seems unlikely that we’d all keep working just because, technically, that last 0.01% of compute power could be put to better use. It would have to be a helluva better use, no? An improvement of 1% in GDP wouldn’t cut it.
“So it’s a nice argument, but I don’t buy it. It seems vanishingly unlikely that, politically, we’d condemn ourselves to lives of drudgery based on an ultra-purist free-market promise that it’s for the best. We certainly never have before.”
Discuss.
Noah Smith on the future of AI
Kevin Drum on the future of AI
AI and robots have a common problem: they don’t seem to have the perceptive and creative abilities of well-trained humans. The problem of AI legal writing just “making shit up” is well known, as is the inability of driverless vehicles to perceive all dangers and avoid doing harm. Perhaps these things will, in time, be corrected and/or improved, but the need for human “supervision” is likely to hang around. The notion of “comparative advantage” sort of speaks to this. If a lawyer has to proofread and cite-check every brief prepared by AI, he or she might as well do it him/herself in the first place. Perhaps Joel can apply that in the medical field. How much human intuition and creativity is helpful in forming a medical diagnosis and therapy? Should someone be “supervising” the use of the preprogrammed tests and analysis of histories?
@Jack,
AI and robots are making major contributions to medicine already. I myself had robotic surgery several years ago. Yes, it was supervised by a surgeon, but it was nevertheless robotic.
AI is already more accurate in reading certain radiological images than clinicians. I’ll have a post up tomorrow about a major advance of AI in clinical histology and predicting cancer metastasis.
Did it need to be supervised by a surgeon?
@Jack,
From the comment I posted immediately above:
“Yes, it was supervised by a surgeon, but it was nevertheless robotic.”
I knew it was. I wanted your opinion on whether or not that was necessary.
@Jack,
I don’t know exactly how the surgery works, but I suspect some adult supervision was required. I was under general anesthesia during the operation.
Typically “robot surgeries” are “robot assisted surgeries” where the robot is a tool, like a drone or remote operated vehicle. They’re not supervised, they’re piloted.
It is still neither artificial nor intelligent: it’s an ever more complex database, someone had to program all that information into it, and it can’t tell a joke …
@Ten,
I’m not sure what the definition of “intelligent” is. Machine learning is not merely a database, for sure.
@Ten,
“someone had to program all that information into it”
From a Reddit comment:
“When it comes to modern AI (i.e. which includes all sorts of machine learning approaches utilizing neural networks), asking what’s in the database is a nonsensical question. Databases are structures for storing data in an organized way. Neural networks don’t store information in a way that can be retrieved like a database can. When neural networks are trained, they can often produce the same (or similar) information they were trained on, but it’s not because they’re recovering it from a lookup somewhere.”
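The distinction the comment draws can be made concrete with a toy sketch (pure Python, illustrative only; the data values are made up): a database stores records and retrieves them exactly by key, while a trained model compresses the same records into a few parameters and can only reconstruct approximations of them.

```python
# "Database": explicit storage, exact retrieval by key.
database = {1.0: 2.1, 2.0: 3.9, 3.0: 6.2, 4.0: 7.8}
assert database[2.0] == 3.9  # the stored value comes back verbatim

# "Model": fit y ≈ w*x + b by ordinary least squares over the same pairs.
# (A neural network is vastly more complex, but the principle is the same:
# training distills the data into weights rather than storing the data.)
xs, ys = list(database.keys()), list(database.values())
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - w * mean_x

# The "model" now holds only two numbers (w, b), not the four training pairs.
# Asking it about a training point yields a reconstruction, not a lookup:
prediction = w * 2.0 + b  # near 3.9, but not the stored value itself
```

Here the fitted line lands close to each training point without containing any of them, which is why “what’s in the database” is the wrong question to ask of a trained network.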
In other news…
Reddit Prices I.P.O. at $34 a Share, in a Positive Sign for Tech
NY Times – yesterday
The social media company raised $748 million in the offering. Its shares begin trading on the New York Stock Exchange on Thursday.
Maybe it’ll take only half.