Does AI pose a meaningful existential threat to humanity? If an existential threat is one that can lead to mass death or human extinction, and a risk of extinction is meaningful if it is (say) at least 10% as large as the risk of a nuclear holocaust, my answer is that I have no idea.
But it seems clear that AI does pose a serious threat to democratic stability. It will give anti-democratic actors a powerful new tool for spreading political misinformation and fostering discontent with elites and resentment of socially disfavored groups. More generally, AI will speed the process of social and economic change, which unsettles people and makes them more receptive to the promises of authoritarians.
This is very much worth worrying about, and it strengthens the case for social media reforms that encourage information curating or gatekeeping.
I see four major concerns about social media: 1) threats to democratic stability caused by the propagation of right-wing misinformation, 2) psychological harm, especially to young people, and especially to teenage girls, 3) privacy concerns, and 4) antitrust concerns related to price gouging and competition. In my view this is the right order of priority: limiting the spread of misinformation is very important; in comparison I just don’t care much about antitrust problems. (Large firms and privacy risks might contribute to political risks. If so, this bolsters the case for addressing privacy and reducing firm size. But putting political risks aside, even if we assume that antitrust costs are at the high end of the plausible range, they will still pale in importance relative to the first problem, quite likely the second, and probably the third.)
The threat to democracy posed by AI will largely operate through social media and the internet, at least in the foreseeable future. Ideally, worry about AI will motivate us to think about how to limit the political destructiveness of social media. How to proceed is far from clear, at least to me, but we could take a careful look at the case for a tax on digital advertising laid out by Paul Romer. Romer proposes a graduated tax on ad revenue for two reasons: it would favor smaller firms, thus reducing the danger posed by behemoths like Google and Facebook, and it would push the internet towards a subscription model.
This is a very useful proposal. I might prefer a flat tax that starts at a low rate and rises slowly over time, gently nudging the internet toward a subscription model that allows for more effective information curating and gatekeeping. If a graduated tax worked to reduce the size of individual firms (which is the point of using a graduated tax rather than a flat tax), it would not create pressure to move to a subscription model, since smaller firms would pay no tax. This is potentially a problem: Twitter would pay no tax under Romer’s graduated plan, yet it plays a role in spreading political disinformation, and a graduated tax, if it worked as Romer hopes, would create many more Twitters. AI makes encouraging gatekeeping and curating even more important; it should lead us to put more weight on moving to a subscription model. In addition, a flat tax would be less costly to the big incumbent firms than a graduated tax, which would make it easier to pass. A graduated tax would mostly benefit small firms that do not yet exist and are therefore in no position to lobby for it.
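To make the incentive argument concrete, here is a minimal sketch of the two tax designs. All thresholds, rates, and revenue figures are hypothetical and chosen only for illustration; Romer’s actual proposed schedule differs. The point is structural: a graduated tax that exempts revenue below a threshold collects nothing from small firms, so it gives them no push toward subscriptions, while a flat tax reaches every ad dollar.

```python
def graduated_tax(ad_revenue, threshold=1e9, rate=0.05):
    """Hypothetical graduated tax: only ad revenue above the
    threshold is taxed, so firms below it owe nothing."""
    return max(0.0, ad_revenue - threshold) * rate

def flat_tax(ad_revenue, rate=0.03):
    """Hypothetical flat tax: every dollar of ad revenue is taxed,
    so even small firms feel some pull toward subscription revenue."""
    return ad_revenue * rate

# Illustrative figures only.
small_firm = 5e8   # $500M in ad revenue -- below the graduated threshold
big_firm = 1e11    # $100B in ad revenue

print(graduated_tax(small_firm))  # 0.0 -- no pressure to change its model
print(flat_tax(small_firm))       # roughly $15M -- some pressure
print(graduated_tax(big_firm))    # taxes only revenue above the threshold
```

Under the graduated schedule, the small firm’s bill is exactly zero, which is the worry about the many small Twitters; under the flat schedule, every firm pays in proportion to its ad revenue from the first dollar.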
In any event, it’s not entirely clear how well a tax designed to push the internet and social media towards a subscription model would work. For one thing, some people might well pay for misinformation, or for accounts on social media platforms that allow misinformation to spread. But it might work, and it would help support organizations that do real shoe-leather reporting. Ad taxes might also lead to usage fees, which in turn might limit social media use by teens, and by the rest of us.
I am worried that the apocalyptic rhetoric surrounding AI will divert our attention from political threats that are both serious and immediate.