Could AI replace managers and even politicians? In their now-famous paper (pdf) describing how half of American jobs could be replaced by computers, Carl Frey and Michael Osborne say no: they estimate that chief executives, line managers and HR managers are among the 10% of occupations least likely to be computerized.
From the perspective of neoclassical economics, this is weird. It pretends that the job of bosses is to maximize profits, given a production function and prices of inputs. This is a constrained optimization problem which can easily be done by a computer.
Similarly, if you believe, Sunstein-stylee, that policy-making is a technocratic function of choosing optimal fiscal or monetary policy or the right choice architecture, the job can be delegated to computers, which have the benefit of being immune – in principle – to cognitive biases.
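To see how trivially a computer handles the neoclassical version of the manager's job, here is a toy sketch of that constrained optimization: pick inputs to maximize profit given a production function and input prices. The Cobb-Douglas form, the decreasing-returns exponents and all the prices are illustrative assumptions of mine, not anything from Frey and Osborne.

```python
# Toy version of the neoclassical manager's problem: choose labour and
# capital to maximise profit, given a production function and input prices.
# All functional forms and parameter values here are illustrative assumptions.

def profit(labour: float, capital: float,
           price: float = 10.0, wage: float = 2.0, rent: float = 3.0) -> float:
    # Assumed Cobb-Douglas production with decreasing returns to scale,
    # so an interior optimum exists.
    output = labour ** 0.3 * capital ** 0.3
    return price * output - wage * labour - rent * capital

def best_plan(step: float = 0.1, limit: float = 10.0):
    """Brute-force grid search over input choices: the 'computer as boss'."""
    best = (0.0, 0.0, float("-inf"))
    steps = int(limit / step)
    for i in range(1, steps + 1):
        for j in range(1, steps + 1):
            l, k = i * step, j * step
            p = profit(l, k)
            if p > best[2]:
                best = (l, k, p)
    return best

labour, capital, value = best_plan()
```

A crude grid search is enough here; any off-the-shelf numerical optimizer would do the same job faster. The point is that once the problem is written down like this, nothing about it requires a human.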
Which poses the question: what is it about bosses and politicians that makes their jobs so unlikely to be replaced by tech? There are, I suspect, four things.
One is that the choice of what technology to adopt is not merely a matter of objective efficiency. It is also about power. In his excellent new book The Technology Trap, Frey describes how pre-industrial governments often forbade the use of labour-saving techniques, fearing they would cause a backlash and disrupt established hierarchies. And Joel Mokyr has written:
Resistance to technological change is not limited to labour unions and Olsonian lobbies defending their turf and skills against the inexorable obsolescence that new techniques will bring about. In a centralized bureaucracy there is a built-in tendency for conservatism. Sometimes the motives of technophobes are purely conservative in the standard sense of the word. This is equally true for corporate and government bureaucracies, and cases in which corporations, presumably trying to maximize profits, resisted innovations are legend. (The Gifts of Athena, p238)
A second problem is that knowledge is not necessarily codifiable. AI works where all possible options can in principle be listed. AlphaZero, for example, learned chess and Go by being programmed with the laws of the games and then playing millions of games against itself. In other contexts, such an approach doesn’t apply, and not just because what we are trying to achieve is sometimes not as simple or articulable as winning a game.
In many contexts there are what Donald Rumsfeld called unknown knowns – things we don’t know that we know. Tacit knowledge – hunches, gut feelings, things we have forgotten that we knew – matters. One reason why management is not merely a constrained optimization task is that even with given technology, good managers can tweak the production possibility curve outwards (pdf) by incremental improvements based upon hunches. Maybe brute force algorithms can replicate all these possible hunches and find the optimal one. But this is most easily done where the environment can be completely described – which is true of chess but not the real world.
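The point that brute force works only where the environment can be completely described is easy to demonstrate on a game small enough to solve outright. Tic-tac-toe stands in here, deliberately crudely, for chess: because every position and move can be enumerated, a few lines of minimax search "solve" the game entirely. No analogous listing of a firm's or an economy's possible states exists.

```python
# Tic-tac-toe is small enough to enumerate every possible position,
# so exhaustive search can solve the game completely - the (toy) analogue
# of why fully-describable games yield to machines.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board: str):
    """Return 'X' or 'O' if that player has a completed line, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board: str, player: str) -> int:
    """Score of perfect play: +1 if X can force a win, -1 if O can, 0 if a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    scores = []
    for i, cell in enumerate(board):
        if cell == ".":
            nb = board[:i] + player + board[i + 1:]
            scores.append(minimax(nb, "O" if player == "X" else "X"))
    return max(scores) if player == "X" else min(scores)

print(minimax("." * 9, "X"))  # prints 0: perfect play is a draw
```

AlphaZero's self-play is vastly more sophisticated than this, but it rests on the same precondition: the rules define the entire universe of possibilities in advance.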
There’s something else. An algorithm is, by definition, a set of rules. Sometimes, though, we succeed by breaking rules. For Israel Kirzner, this is in fact the very essence of entrepreneurship:
The Schumpeterian entrepreneur does not passively operate in a given world, he creates a world different from the one he finds. He introduces hitherto undreamt of products, he pioneers hitherto unthought of methods of production, he opens up a new market in hitherto undiscovered territory.
Entrepreneurship, he says, “is the process of discovering new knowledge and possibilities that no one has either previously imagined or noticed."
Herein lies one reason why AI can’t replace politicians. The success of Trump, Farage and Johnson has come from breaking conventions, be it the idea that leaders should conform to a particular ideal of good character, or that policies be evidence-based and compatible with the interests of business and the median voter. As Will Davies says:
Come November this year, Farage, Johnson and their allies may well have achieved a far greater disruption of the political and economic status quo than Thatcher or Blair ever managed, with a smaller popular mandate and far less effort. They don’t need think tanks, policy breakfasts, the CBI or party discipline. They don’t even need ideas. All they have to do, in pursuit of their goal, is to carry on being themselves.
There’s one final thing. Even if algorithms weren’t racist, they’d be only indifferent managers for one other reason. Sometimes, we want the human touch – the arm round the shoulder, the jolly-up, or the understanding that we’re having an off day.
It’s for a similar reason that AI will, I suspect, never write great music. Yes, it can produce passable if derivative tunes. But it has not yet given us truly original songs or insightful lyrics, nor what great music gives us, the sense of one soul speaking to another.
The same thing applies in journalism. AI can, after a fashion, write news stories: these follow articulable rules. But it cannot (yet) produce marketable opinion columns: readers want the prejudices, biases and errors that only humans can provide.
What they also want are stories, even, or especially, if they are nonsense. We’ve good evidence – corroborated by the fact that so many advisors recommended Woodford’s funds – that financial advice is poor. And yet robo-advisors have not expanded as much as this evidence suggests they should. A big reason for this is that people don’t want to believe that simple rules work (such as “buy cheap tracker funds”). They want narratives and great investors they can trust. Only humans offer these.
And this is why AI cannot replace politicians. We prefer bad decisions taken by humans to good ones taken by machines. As Ben Jackson says:
Much of our politics today revolves around the perception that decisions are being taken elsewhere, whether in Westminster, Brussels or Washington. Passing off the work of decision-making to the ultimate aloof elite, a computer, is not a serious way of confronting this issue. Sometimes it’s important to decide things for ourselves, and to feel like we’re deciding, even if we often go astray.
What I’m driving at here is not merely a version of Moravec’s paradox. Nor is it a story about computers, especially as we might be on the verge of an era of super-powerful ones. Instead, my point is about the nature of power and decision-making. These are not merely technocratic algorithmic processes but are instead essentially and inherently human. Which is what conventional neoclassical economics has traditionally under-estimated.