The American working class is doing better, thank you very much
– by New Deal democrat
With the release of the CPI report earlier this week, I can update several measures of average middle class American income.
Real average hourly wages increased 0.2% in June, and are up 1.6% from one year ago:

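For readers who want to see the arithmetic behind “real” wages, here is a minimal sketch in Python. The figures are hypothetical stand-ins, not the actual BLS numbers; the underlying calculation deflates average hourly earnings of nonsupervisory workers by the CPI:

```python
# A minimal sketch of the deflation arithmetic behind real wages.
# All values are hypothetical stand-ins for the BLS series
# (average hourly earnings of nonsupervisory workers, and CPI-U).

nominal_wage_now, nominal_wage_year_ago = 28.91, 27.64   # $/hour (hypothetical)
cpi_now, cpi_year_ago = 305.2, 296.3                     # index levels (hypothetical)

# A real wage is the nominal wage deflated by the price level.
real_now = nominal_wage_now / cpi_now
real_year_ago = nominal_wage_year_ago / cpi_year_ago

# YoY real growth is nominal growth discounted by inflation.
print(f"Real wage growth YoY: {real_now / real_year_ago - 1:+.1%}")
```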
Real aggregate payrolls for the entire spectrum of nonsupervisory American workers increased 0.3% in June and are up 3.1% from one year ago:

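Aggregate payrolls combine three BLS ingredients: hourly pay, weekly hours, and the number of workers. A sketch of how such an index is built, again with hypothetical values rather than the actual data:

```python
# Real aggregate payrolls: total weekly pay for all nonsupervisory workers,
# deflated to constant dollars. All inputs below are hypothetical.

def real_aggregate_payrolls(hourly_earnings, weekly_hours, employees, deflator):
    """Hourly pay x weekly hours x headcount, divided by the price level."""
    return hourly_earnings * weekly_hours * employees / deflator

june = real_aggregate_payrolls(28.9, 33.8, 107_000_000, 3.052)
may  = real_aggregate_payrolls(28.8, 33.8, 106_800_000, 3.047)

print(f"MoM change in real aggregate payrolls: {june / may - 1:+.2%}")
```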
This is an excellent coincident marker of an expanding economy versus a recession. It tells us that, mainly thanks to declining gas prices since June 2022, average American workers and their households have had a significant increase in their ability to buy things. A very good positive sign.
Finally, one last note about various measures of inflation. A few commentators highlighted the still-high +5.9% YoY increase in “sticky price” inflation. But as shown below, the 1-month and 3-month average changes in even that metric have declined sharply, to close to the Fed’s target range:

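The shorter-horizon changes matter because they are annualized before being compared with the Fed’s roughly 2% target. A sketch of that compounding arithmetic, with hypothetical monthly sticky-price readings:

```python
# Annualizing short-horizon inflation: compound the monthly changes,
# then raise to the power (12 / number of months). Readings are hypothetical.

monthly_changes = [0.0019, 0.0022, 0.0025]   # last three m/m changes

one_month_annualized = (1 + monthly_changes[-1]) ** 12 - 1

compounded = 1.0
for m in monthly_changes:
    compounded *= 1 + m
three_month_annualized = compounded ** (12 / 3) - 1

print(f"1-month annualized: {one_month_annualized:.1%}")   # ~3.0%
print(f"3-month annualized: {three_month_annualized:.1%}")  # ~2.7%
```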
By any reasonable measure, inflation is no longer a major problem.
Vaguely related?
AI’s future worries us. So does AI’s present
Boston Globe – July 14 – probably behind a pay wall
The long-term risks of artificial intelligence are real, but they don’t trump the concrete harms happening now.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” So say an impressively long list of academics and tech executives in a one-sentence statement released on May 30. We are independent research fellows at the Center for AI Safety, the interdisciplinary San Francisco-based nonprofit that coordinated the statement, and we agree that societal-scale risks from future AI systems are worth taking very seriously. But acknowledging the risks associated with future systems should not lead researchers and policymakers to overlook the all-too-real risks of the artificial intelligence systems that are in use now.
AI is already causing serious problems. It is facilitating disinformation, enabling mass surveillance, and permitting the automation of warfare. It disempowers both low-skill workers who are vulnerable to having their jobs replaced by automation and people in creative industries who have not consented for their work to be used as training data. The process of training AI systems comes at a high environmental cost. Moreover, the harms of AI are not equally distributed. Existing AI systems often reinforce societal structures that marginalize people of color, women, and LGBT+ people, particularly in the criminal justice system or health care. The people developing and deploying AI technologies are rarely representative of the population at large, and bias is baked into large models from the get-go via the data the systems are trained on. …
All too often, future risks from AI are presented as though they trump these concrete present-day harms. In a recent CNN interview, AI pioneer Geoffrey Hinton, who recently left Google, was asked why he didn’t speak up in 2020 when Timnit Gebru, then co-leader of Google’s Ethical AI team, was fired from her position after raising awareness of the sorts of harms discussed above. He responded that her concerns weren’t “as existentially serious as the idea of these things getting more intelligent than us and taking over.” While we applaud Hinton’s resignation from Google to draw attention to the future risks of AI, rhetoric like this should be avoided. It is crucial to speak up about the present-day harms of AI systems, and talk of “larger-scale” risks should not be used to divert attention away from them.
Downplaying the present effects of AI systems benefits nobody but the companies developing the technology. These companies have said they want AI to be regulated in theory, but they have resisted proposals that would interfere with their business models, such as “minimizing” training data to mitigate bias and allowing recourse for data theft. Regulation can’t be relegated to the future. Talk of large-scale AI risk, however sincere, may amount to little more than free advertising for AI companies, unless there is regulatory follow-through to tackle both present and future harms. …
The idea that attending to the risks of future AI systems precludes attending to the harms of present systems strikes us as both unfortunate and mistaken. The same structural factors that could make AI a threat to civilization at large also underlie AI-powered surveillance, propaganda, misinformation, discrimination, and job displacement. In all these cases, the basic problem is that corporations are incentivized to develop and deploy AI systems in a way that prioritizes their interests or those of their political stakeholders over those of the rest of us. These technologies are being steered in directions that put them into conflict with human flourishing. When corporations compete to package research models into consumer-facing products, they will not ensure that the systems they develop are safe and free of bias. Data about consumers is a commodity that can be bought and sold, so companies will develop systems that collect and aggregate it. When a chatbot — however flawed — will work for free, companies will force workers out of their jobs.
Although it is an open question whether (and when) significantly more capable AI systems will be developed, discussing such systems is no longer the province of sci-fi. It is clear that our species is inventing technologies that could cause irreparable harm if they are not deployed in a way that is sensitive to human needs. Where some see two areas of inquiry — the harms of present models and those of future ones — we see one: the study of how to integrate AI into the fabric of human societies in a way that enhances rather than endangers our collective well-being.
(I am a volunteer medical librarian, and have been for about thirty years, unpaid, and a real person – I think. The last paid librarian at our local hospital was laid off to save money on April Fool’s Day 2020, the first year of Covid. I work from home and am unseen at the hospital I serve, operating entirely by computer, about 1,000 hours per year.)
I am also a bit dyslexic when it comes to typing.
Several posts above about AI and its impact may or may not make sense on their own; they are in support of a post awaiting moderation, a column on the impact of AI that appeared in the Boston Globe.
What it takes to be in the top 1%
SmartAsset – March 9, 2023
The gap between the top 1% of earners and average Americans is stark. The median American household income is under $70,000, but in some places the top 1% can earn as much as $955,000. Those annual earnings can seem far out of reach in a country where less than 10% of all households earn more than $200,000, according to the U.S. Census Bureau. Ultimately, climbing the economic ladder is difficult, but it’s less intimidating in some states than in others. …
… Connecticut. The top 1% of taxpayers here have the highest average tax burden (27.77%). Residents of the next-highest state (Massachusetts) need roughly $58,300 less to be top-1% taxpayers.
(It’s actually highest in the District of Columbia, but the ranking above is by state.)
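For what it’s worth, a “top 1%” threshold is just the 99th percentile of the income distribution. A toy sketch with synthetic data (not SmartAsset’s actual sample or methodology):

```python
# The top-1% cutoff is the 99th percentile of incomes.
# The sample below is synthetic; it only illustrates the computation.
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=11.0, sigma=0.9, size=100_000)  # synthetic $ amounts

threshold = np.percentile(incomes, 99)
print(f"Top-1% threshold in this sample: ${threshold:,.0f}")
```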
America’s Top 1% is Different in Each State
SmartAsset – July 17
(from a related story in the Boston Globe:) … Rounding out the top five are California ($844,000), New Jersey ($817,000), and Washington ($805,000). Note: In the anomaly known as the District of Columbia, the 1 percent threshold is more than $1 million. …
At the bottom of the list: Arkansas ($451,000), Kentucky ($445,000), New Mexico ($411,000), Mississippi ($382,000), and West Virginia ($368,000). …