Mainstream Weekly


Mainstream, VOL 61 No 46-47 November 11 & November 18, 2023

Thomas Klikauer’s Review of Machines Behaving Badly

Saturday 11 November 2023


Reviewed by Thomas Klikauer

Machines Behaving Badly:

The Morality of AI

by Toby Walsh

La Trobe University Press and Black Inc. Press, Collingwood

2022. 275 pp.

pb ISBN 9781760643423

Toby Walsh begins Machines Behaving Badly by making clear that ‘[i]n reality, artificial intelligence is [not a] conscious robot […] We cannot yet build machines that match the intelligence of a two-year-old […] program computers […] do narrow, focused tasks’ (1). Like a coffee maker – a machine – AI does not have morality. Still, every time someone asks Siri – Apple’s virtual assistant – a question, they are using AI. At the same time, AI is based on ‘machine-learning algorithms that [can, for example, also] predict which criminals will reoffend’ (2), which, for Walsh, is problematic.

Walsh correctly argues that there is ‘a common misconception […] that AI is a single thing. Just like our intelligence is a collection of difference, AI today is a collection of different technologies’. Surprisingly, and after decades of research into AI, Walsh freely and correctly admits that ‘we have made almost no progress on building more general intelligence that can tackle a wide range of problems’ (3).

Despite this, AI does have an impact on society. The author emphasises that ‘we walked straight into [a] political minefield in 2016, first with the Brexit referendum in the United Kingdom and then with the election of Donald Trump in the United States. Machines are now routinely treating humans mechanically and controlling populations politically’. Worse, ‘Facebook can be used to change public opinion in any political campaign’ (11).

Even worse than Facebook – at one time, and in cahoots with the highly manipulative Cambridge Analytica – Walsh identifies one of the most fundamental moral problems of AI, namely that there is ‘a dangerous truth somewhat overlooked by advertisers and political pollsters. Human minds can be easily hacked. And AI tools like machine learning put this problem on steroids. We can collect data on a population and change people’s views at scale and at speed, and for very little cost’ (11).

Yet, when it comes to writing the algorithms of artificial intelligence, those who write the code are, more often than not, backed by the venture capital that turns AI into profits. These venture capital companies can be divided into ‘three roughly equally sized parts: Silicon Valley (which has now spread out into the larger Bay Area), the rest of America, and the rest of the world’ (21). For many of these ‘white dudes’ working in AI, the philosopher Ayn Rand provides some sort of semi-rational background. Walsh notes that ‘[m]any readers of Atlas Shrugged, especially in the tech community, relate to the philosophy described in this cult dystopian book’. These AI programmers truly believe that ‘our moral purpose is to follow our individual self-interest’ (23).

As a consequence of following this ideology, many AI ‘dudes’ became ‘techno-libertarians [who] wish to minimise regulation [also believing that] the best solution […] is a free market’. Inside AI, Rand’s ideology came to the fore with John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, which is ‘perhaps the seminal techno-libertarian creed’ (25).

Yet, at some point, corporations took over. Today, we see, for example, that ‘Apple’s turnover is more than the GDP of Portugal’. Worse, ‘[m]ost Big Tech companies are sitting on large cash mountains. It is estimated that US companies have over $1 trillion of profits waiting in offshore accounts’ (39) – a nice word for tax havens, an immoral and rather shady tax-reduction scheme.

Setting aside the semi-criminal side of big tech corporations, Walsh is not worried about super-intelligent machines that surpass human beings. He notes that many of those ‘working in AI, are not greatly worried about […] super-intelligent machines [yet] the philosopher Nick Bostrom […] fears that super-intelligence poses an existential threat to humanity’s continuing existence […] Suppose we want to eliminate cancer. “Easy,” a super-intelligence might decide: “I simply need to get rid of all hosts of cancer.” And so, it would set about killing every living thing!’ (43).

In short, clever corporate and highly manipulative marketing disabled many customers’ autonomy by making people addicted to tobacco (first) and online platforms (later). Yet, for AI, the autonomy of human beings remains a very serious issue. Walsh argues that ‘[a]utonomy […] is an entirely novel problem. We’ve never had machines before that could make decisions independently of their human masters’ (61). On this, he alludes to ‘the trolley problem’ or ‘trolleyology’ (78), as he calls it.

This brings Walsh to MIT’s Moral Machine, which itself has moral problems. The Moral Machine basically asks you to vote on ‘moral issues’. Yet Walsh argues that voting on morality is a dicey issue, writing that ‘we humans often say one thing but do another. We might say that we want to lose weight, but we might still eat a plate full of delicious cream doughnuts’ (83). But there is also a second problem. Unlike real elections, MIT’s ‘moral machine [is] not demographically balanced’ (8), mostly used by young college-educated men – Walsh’s sea of ‘white dudes.’

Much worse than MIT’s voting machine is the issue of automated ‘killer robots’ (85). On this, the author quotes the United Nations’ António Guterres, who has stated: ‘Let’s call it as it is. The prospect of machines with the discretion and power to take human life is morally repugnant’ (86). Still, on the issue of automated and algorithm-guided warfare, one also finds the ‘Martens Clause’ (98) regulating armed conflict. And interestingly, we also find the ‘US Department of Defense’s ethical principles for the use of AI’ (101), which speaks with the moral authority of having dropped two atom bombs, engaged in the Korea and Vietnam wars and engineered Abu Ghraib during the Iraq War in the absence of any weapons of mass destruction.

Beyond that, AI engineers are extremely concerned with the issue of ‘human v. machines’. On this, Walsh argues that ‘humans already can, and often do, behave ethically. Could we perhaps reproduce [this] in machines […] Or are human intelligence and machine intelligence […] fundamentally different? […] One important difference between humans and machines is that we’re alive and machines aren’t’ (103). Yet what we might call life, AI calls ‘living systems’, systems which ‘maintain some sort of equilibrium, have a life cycle, undergo metabolism, can grow and adapt to their environment, can respond to stimuli, and can reproduce, and evolve’ (104).

Apart from all this, Walsh is convinced that ‘[t]here’s no need – indeed, no place – for free will’. One of the great things about Machines Behaving Badly is that it consistently reminds readers that ‘computers are deterministic machines that simply follow the instructions in their code’ (106). Beyond this fact, morality raises a further point, one that might not concern us in the very short term: ‘Once machines are conscious, we may have ethical obligations to them in how we treat them. For instance, can we now turn them off?’ (107)

AI and morality are highly important yet problematic in the realm of medicine (140). This leads Walsh to the morality of fairness, and he uses the example of the UK’s Ofqual, where ‘[s]tudents from poor state schools were more likely to have their grades marked down than students from rich private schools’ (149). Even graver things have happened in the area of policing. Walsh notes that ‘[p]redicting future crime [while using] historical data will then only perpetuate past biases […] Those who use AI to learn from history are doomed to repeat it.’ In fact, it is worse than repetition. We may construct dangerous ‘feedback loops in which we magnify the biases of the past’ (151). Particularly on the issue of fairness in policing, there is also the danger of ‘a common mistake in machine learning where we confuse correlation with causation’ (156). On the morality of fairness, Walsh says, ‘there are now at least 21 different mathematical definitions of fairness in use by the machine-learning community’ (158).
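To give a flavour of what such a mathematical definition of fairness looks like – Walsh’s book itself gives no code – one of the simplest is demographic parity: a classifier’s positive-prediction rate should be equal across groups. The sketch below, with entirely hypothetical toy data, measures the gap between two groups:

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups.
    A gap of 0 means the classifier satisfies demographic parity."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical data: 1 = predicted to reoffend, for groups "a" and "b"
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 would signal exactly the kind of group-level bias Walsh warns about in predictive policing; the other twenty-odd definitions he mentions formalise fairness differently and, notoriously, cannot all be satisfied at once.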

Yet fairness also plays into something as simple as speech recognition. Walsh lists speech-recognition systems such as those, for example, developed by Amazon, Apple, Google, IBM and Microsoft. He notes that ‘[a]ll five systems performed significantly worse for black speakers than for white speakers. The average word error rate across the five systems was 35 per cent for black speakers, compared with just 19 per cent for white speakers’ (168). In other words, ‘AI systems will often reflect the biases of the society in which they are constructed’ (171) as well as the bias of those who create algorithms, that is, a ‘sea of white dudes.’
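The ‘word error rate’ Walsh cites is a standard speech-recognition metric: the number of word substitutions, insertions and deletions needed to turn the system’s transcript into the reference transcript, divided by the reference length. A minimal sketch (the example sentences are invented, not from the study Walsh cites):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed as a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One deletion against six reference words: 1/6, roughly 0.17
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

On this scale, the 35 per cent rate for black speakers means roughly one word in three was transcribed wrongly, against about one in five for white speakers.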

Walsh says that ‘[w]e can expect more and more decisions to be handed over to algorithms’ (179). And this is happening even in the most pressing issue of our planet – global warming – where, as Walsh notes, ‘less than 5 per cent of all electricity is used to power computers. This means that less than 1 per cent of world energy consumption is spent on all computing’ (211).


[t]raining [the enormous model of ChatGPT-3] is estimated to have produced 85,000 kilograms of CO2. To put this in perspective, this is the same amount produced by four people flying a round trip from London to Sydney in business class [and exactly as much as] people in a row of eight economy-class seats [use] for the same trip [and in] practice, many data centres run on renewable energy. The three big cloud computing providers are Google Cloud, Microsoft Azure, and Amazon Web Services. Google Cloud claims to have “zero net carbon emissions” (212).

‘Claims to have’. At the same time, ‘[t]ransportation is responsible for around one-quarter of global CO2 emissions. AI can be a great asset in reducing these emissions’ (217).

Walsh concludes his insightful book by saying that ‘[i]t should be clear by now that we cannot today build moral machines, machines that capture our human values and that can be held accountable for their decisions […] Machines will always and only ever be machines. They do not have our moral compass […] If we can’t build moral machines, it follows that we should limit the decisions that we hand over to machines’ (222) – a rather logical and inevitable conclusion.

23 October 2023

(Reviewer: Thomas Klikauer (MAs, Boston and Bremen University and PhD Warwick University, UK) teaches MBAs and supervises PhDs at the Sydney Graduate School of Management, Western Sydney University, Australia. He has 700 publications and writes regularly for BraveNewEurope (Western Europe), the Barricades (Eastern Europe), Buzzflash (USA), Counterpunch (USA), Countercurrents (India), Tikkun (USA), and ZNet (USA). His next book is on Media Capitalism (Palgrave))

[This review from Marx & Philosophy Review of Books is reproduced under a Creative Commons License]

ISSN (Mainstream Online): 2582-7316