Mainstream


Mainstream, VOL LX No 21, New Delhi, May 14, 2022

Eyes in the wartime sky: Machines are learning to track wartime destruction | Hannes Mueller, Andre Groeger

Saturday 14 May 2022

by Hannes Mueller and Andre Groeger

A new automated analysis of satellite imagery can reveal the extent of building destruction in conflict zones, providing vital information for humanitarian aid.

Among the shocking images to emerge from Ukraine in recent weeks is the sight of civilian lives torn apart by shelling: shredded apartment buildings, their ordinary contents – microwaves, shoes, cushions – spilling incongruously into the muddy, rubble-strewn streets.

Building destruction during war is a form of violence that is particularly harmful to civilians. Used by aggressors as a strategy to displace populations, it is responsible for tremendous human suffering beyond loss of life. The Red Cross warns that massive urban destruction has dramatic knock-on effects on health, as it often involves destroying water and power supplies, as well as hospitals.

Reliable and up-to-date data on destruction in war zones plays an important role in humanitarian relief efforts. It also informs human-rights monitoring, reconstruction initiatives and media reporting, as well as the study of violent conflict by researchers. 

As high-resolution satellite imagery becomes more readily available, images can be monitored for building destruction weekly or even daily. At the same time, recent advances in machine learning have provided sophisticated tools to extract information from these images. 

Recent research has demonstrated a tool that automatically analyses satellite imagery to identify buildings destroyed by shelling. The method combines existing state-of-the-art computer-vision methods with a new strategy of augmenting the labels applied to destroyed or intact buildings. An additional post-processing step improves classifier performance. 

The standard approach for this kind of task is the convolutional neural network (CNN). CNNs have achieved unprecedented success in large-scale visual image classification, with accuracy rates beating humans'. The new method trained a CNN to spot destruction features from artillery and bombing in satellite images – for example, the rubble of collapsed buildings or the presence of bomb craters. The researchers tested the concept on six Syrian cities, including Aleppo, which suffered significant damage during the battle for the city in 2012–16.
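The core operation inside a CNN is the convolution: a small filter slides across the image and records how strongly each patch matches a local pattern, such as an edge or a texture. Real systems learn thousands of such filters and stack many layers; the minimal pure-Python sketch below (an illustration, not the researchers' code) shows just that single operation with a hand-made vertical-edge filter.

```python
# Minimal illustration of the convolution at the heart of a CNN:
# slide a small filter (kernel) over an image grid and record how
# strongly each patch matches it. Trained CNNs learn many such
# filters automatically; this hand-made one detects vertical edges.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter responds where pixel values change left-to-right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(convolve2d(image, edge))  # strongest response at the brightness boundary
```

The output peaks exactly at the column where dark pixels meet bright ones, which is how stacked filters can come to respond to rubble texture or crater shapes.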

Even in heavily bombed cities, destroyed buildings are rare relative to intact ones. In Aleppo, only 2.8 per cent of all images of populated areas contained a building classified as destroyed by the UN Operational Satellite Applications Programme (UNOSAT) in September 2016. With so few true positives, even a small error rate on the intact majority produces an unacceptable flood of false alarms, yielding data that is useless for all practical purposes.
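The arithmetic behind this problem is worth spelling out. With hypothetical but representative numbers – 100,000 image patches, 2.8 per cent truly destroyed, and a classifier that is right 95 per cent of the time on both classes – most of the "destroyed" flags would still be wrong:

```python
# Illustration with hypothetical numbers: why class imbalance makes
# false positives dominate a damage classifier's output.
def expected_precision(n_patches, positive_rate, sensitivity, specificity):
    """Fraction of 'destroyed' predictions that are actually destroyed."""
    positives = n_patches * positive_rate          # truly destroyed patches
    negatives = n_patches - positives              # truly intact patches
    true_positives = positives * sensitivity       # destroyed, correctly flagged
    false_positives = negatives * (1 - specificity)  # intact, wrongly flagged
    return true_positives / (true_positives + false_positives)

# 2.8% positives, 95% accuracy on each class: roughly 35% precision,
# i.e. about two in three flagged patches are false alarms.
print(round(expected_precision(100_000, 0.028, 0.95, 0.95), 2))
```

The same classifier applied to a balanced sample would be over 90 per cent precise, which is why error rates that sound impressive in the abstract can still be unusable in a war zone.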

To expand the computer's training dataset, the researchers used a series of images taken over time. This brought down error rates, although it added complications caused by seasonal changes in vegetation or non-war-related building demolition. The researchers then tested the tool on images of other bombed Syrian cities, with good results.
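One simple way to exploit repeated images of the same location – a hedged sketch of the idea, not the researchers' exact post-processing step – is to flag destruction only when several consecutive acquisition dates agree, which suppresses one-off false alarms from clouds, shadows or seasonal vegetation:

```python
# A sketch (assumed logic, not the published method): require agreement
# across consecutive acquisition dates of the same location before
# flagging destruction, so that one-off misclassifications are ignored.
def flag_destruction(predictions, min_hits=2):
    """predictions: booleans, one per acquisition date, in time order.
    Flag only if at least `min_hits` consecutive dates are positive."""
    run = 0
    for p in predictions:
        run = run + 1 if p else 0   # count the current positive streak
        if run >= min_hits:
            return True
    return False

print(flag_destruction([False, True, False, True]))  # isolated hits: not flagged
print(flag_destruction([False, True, True, True]))   # persistent signal: flagged
```

Genuine destruction persists across dates while most error sources do not, so this kind of temporal filter trades a small delay in detection for a large drop in false positives.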

Organisations such as the United Nations, the World Bank and Amnesty International currently use humans to manually classify satellite images to produce damage-assessment case studies. Academics studying violent conflict often rely on news organisations, which themselves rely on eyewitness accounts. All of these methods are slow and subject to bias.

An automated building-damage classifier for satellite imagery – one with a low rate of false positives in unbalanced samples, able to track on-the-ground destruction in close to real time – could therefore be extremely valuable for the international community and academic researchers alike. The researchers calculated that manual labelling of the images from the Syrian cities would have cost US$200,000.

Such a tool could also work with humans. It could take a first pass through masses of data to flag the most noteworthy images for human classification. More training and checking by humans could also improve the computer’s predictions. 

Monitoring of conflict tends to reduce armed violence, which suggests that a monitoring tool could be a vehicle for greater peace. However, sometimes the opposite has been true: violence increased because someone was watching. 

Because building destruction is often used to create wholesale displacement of people, information about atrocities and destruction may help aggressors displace a population. If the aggressors do not fear repercussions linked to the monitoring of these atrocities, then monitoring itself can increase violence. In such situations, covert monitoring may be the safer option.

As satellite images improve in quality and resolution, a computer monitoring tool to automatically reveal building destruction in times of conflict can be expected to increase in reliability. Lower-cost, automated monitoring offers the possibility of long-range, long-term, safe assessment of urban destruction and its toll on the innocent citizens of cities at war. 

(Authors: Hannes Mueller is a tenured researcher at the Institute for Economic Analysis (IAE-CSIC) and an associate research professor and program director of the Data Science for Decision Making M.Sc. at the Barcelona School of Economics (BSE). Andre Groeger is assistant professor of economics (tenure track) and Serra Húnter fellow at Universitat Autònoma de Barcelona (UAB), affiliated professor at the Barcelona School of Economics (BSE), and fellow at Markets, Organizations and Votes in Economics (MOVE).)

[Originally published under Creative Commons by 360info™]

Notice: Mainstream Weekly appears online only.