SINCLAIR Davidson dabbles in Meat Loaf, Essendon, whiskey and the dismal science. His work in recent years at the prang-strewn crossroads of economics, official humbug, zombie prohibitionism and especially the blockchain has been truly stellar. For this reason – and given my lack of fitness for wrestling with the suddenly everywhere topic of AI – I decided to get into shape by studying his working paper on the prospective role of artificial intelligence in macroeconomics. The subject has become critical in the discipline and few people are better at distinguishing re-badged errors from disruptive novelties than RMIT’s Professor Davidson. He says the treatise was “rushed out” and is a preliminary version. I expect it to be fattened up into a book eventually. Running to eleven pages, ‘The Economic Institutions of Artificial Intelligence’ (PDF) steals a march on all the finest minds in Canberra – whose acuity was on glorious display last week.
Davidson’s conclusion is that, no, AI is not capable of making the statist fantasy of central planning any more possible than it was in the days of Engels, Keynes and their many harebrained imitators. The ‘information problem’ of attempting to command efficacious, universal outcomes – identified long ago by von Mises and Hayek – will not be cracked even by the masses of data at AI’s disposal. That isn’t to say AI will not revolutionise economies top to bottom by powering through countless ‘bounded rationality’ limitations, as the crow flies, at firm and industry levels. Problems that human beings have dealt with as best they can using fallible heuristics will henceforth be solved, or at least optimally handled, as never before. Political corollaries are less predictable. Just because AI cannot unbound every irrationality doesn’t mean it won’t be asked to.
As ‘decarbonisation’ and the covid panic starkly demonstrate, states now have the communications infrastructure and the gall to convince publics that a sciency authority above interrogation already exists. Increasingly, our political and woke corporate leaders like to be regarded not as advocates of subjective ideals but as disinterested conduits of objective programs. To a Chris Bowen or a Dr Fauci, the heuristics of mere opinion are effectively obsolete. Even if central planners will not be rendered omniscient by AI, then, they will be empowered by it to delude themselves and damage societies on grander scales. As for accountability, it could become weaker than it already is (not one Premier or CHO having been jailed for the ‘pandemic’ catastrophe). After all, what will be left to contest when ministers start citing bots of the non-Amanda kind in Parliament?