
Data fusion opens new horizons for remote imaging of landscapes

  • Scientists use remotely sensed data from satellites to map and analyze habitat extent, vegetation health, land use change, and plant species distributions at various scales.
  • Open-source data sets, analysis tools, and powerful computers now allow scientists to combine different sources of satellite-based data.
  • A new paper details how combining multispectral and radar data enables more refined analyses over broader scales than either can alone.

Imagine you needed to map the spread of an invasive plant species in a tropical forest. Hyperspectral imaging and LiDAR are great at identifying vegetation, but have their limitations and tend to be costly.

Other, more accessible remote-sensing technologies now exist that, combined, can do the trick. Radar, which emits radio waves and measures the signal created as these bounce off an object, can trace out the forest canopy structure, while multispectral imaging can be used to analyze the reflected sunlight off the leaves to determine vegetation type.

Solutions to this and other remote imaging and surveying problems don’t have to rely on new or expensive technology, a recent review paper suggests; instead, they can come from an innovative blending of available remote-sensing technologies known as data fusion.

In the paper, the authors introduce techniques for combining satellite data, along with their respective benefits and drawbacks for ecological studies.

Satellite image showing agricultural plots in Brazil and forest remaining in between them. Image courtesy of NASA.

Transmitting and detecting energy

Radar and multispectral sensors are perhaps the best-known examples, respectively, of active and passive remote sensors typically found on board Earth-observation satellites.

Active sensors emit their own radiation and then measure the backscatter, or radiation reflected back from the target object. This radiation lies in the microwave portion of the electromagnetic spectrum, whose wavelengths are long enough to pass through clouds and most weather conditions. The information from the backscatter permits calculation of the distances, size, volume, and orientation of objects on the Earth below.
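As a rough illustration of the ranging principle, the distance to a reflecting object follows directly from the round-trip travel time of the emitted pulse. Here is a minimal sketch in Python, with a made-up echo time purely for illustration:

```python
# Minimal sketch of radar ranging: distance from round-trip pulse travel time.
# The echo time below is invented purely for illustration.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to the target = (speed of light x round-trip time) / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

print(range_from_echo(5.2e-3))  # ~780 km, on the order of a satellite's altitude
```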

Radar sensors emit radiation that backscatters, or bounces off objects, in different ways. These differences enable the sensor to “identify” the object. Image credit: Fernandez-Ordonez et al (2009)

Radar can’t see color but can determine the structure and surface roughness of objects, such as buildings or trees of different height, width, and density, or soils that are dry versus water-saturated, even below the canopy if the wavelength is long enough. Because it emits its own radiation, radar also functions day and night.

Multispectral sensors, because they are passive, require an external source of radiation and detect energy in the visible and infrared bands, or portions, of the electromagnetic spectrum. They sense the natural energy radiated by the sun or reflected by objects on the ground, producing what most of us think of as satellite images.

Multispectral sensors can distinguish levels of brightness and color, allowing us to differentiate between green, healthy vegetation and unhealthy vegetation, as well as identify the chemical properties of the surface of objects, such as the carbon or moisture content of vegetation, that correspond to different degrees of reflectance of energy.

Multispectral satellite images distinguish vegetation by the reflectance of sunlight in several bands, or portions of the electromagnetic spectrum. Greener vegetation reflects more of the green portion of the spectrum than brown vegetation, which enables the use of satellite imagery to assess vegetation health. Image credit: educationally.narod.ru
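One widely used index of this greenness, mentioned again below, is the normalized difference vegetation index (NDVI), which contrasts near-infrared and red reflectance: healthy vegetation reflects strongly in the near-infrared and absorbs red light. A minimal sketch in Python, with synthetic reflectance values and an illustrative function name:

```python
# Minimal sketch: computing NDVI = (NIR - Red) / (NIR + Red) with numpy.
# The reflectance arrays below are synthetic, purely for illustration.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Return NDVI in [-1, 1]; values near 1 indicate dense, green vegetation."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    return (nir - red) / np.maximum(nir + red, 1e-6)  # guard against divide-by-zero

red = np.array([[0.05, 0.30], [0.10, 0.25]])  # red reflectance (0-1)
nir = np.array([[0.60, 0.35], [0.55, 0.28]])  # near-infrared reflectance (0-1)
print(ndvi(nir, red))  # higher values = greener, healthier vegetation
```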

The two types of sensors thus detect complementary types of data, both of which are now freely available for the whole planet. The European Space Agency’s new Sentinel-1 satellite provides free access to global radar data, while Landsat and Sentinel-2 satellites, among others, make their respective global multispectral imagery freely available.

Two is better than one   

The opportunity created by the availability of spatial data, plus the computing power to process and combine them, prompted this recent assessment. Combining structural data from radar with the reflectance data from multispectral imaging can improve the accuracy of assessing and monitoring biodiversity, especially at large scales or across gradients.

“Satellites provide opportunities to access information about the natural world at scales that are inaccessible to people measuring things on the ground,” said author Nathalie Pettorelli of the Zoological Society of London.

Cost, time, and logistics have limited where we can measure the distributions of plant species. Most freely available multispectral and radar data have spatial resolutions that are too coarse to assess species distributions, while hyperspectral imaging and LiDAR data, in addition to being costly, tend to be available primarily at fine (landscape, rather than regional or continental) scales.

Combining multispectral reflectance data and radar structural data has helped various research efforts to more accurately map and predict the distribution of both plants and animals. Researchers have, for instance, mapped the extent of alfalfa species in grasslands based on their distinct growth form and their leafing, flowering, and fruiting periods. Others have predicted tree species distributions across South America using two spectral indices, plants’ leaf area index and their greenness (a.k.a. NDVI), in combination with canopy moisture and roughness metrics derived from radar data.

The example highlighted at the beginning comes from a study in which scientists mapped invasive plant species in a tropical forest by combining vegetation type, from multispectral imagery, with canopy structure derived from radar.

Scientists have also mapped animal species that rely on specific vegetation structure: one study more accurately predicted bird species distributions by combining data on woody material, or biomass, from radar with vegetation type information from multispectral imagery.

Joining multispectral data that pick up on differences in vegetation “greenness” with radar backscatter containing information about canopy structure and volume helps map ecosystems and land cover. For example, the structural information has helped researchers distinguish vegetation stands at different stages of regrowth, primary from secondary forest, or plantations (trees of the same age and species) from natural forests, as well as assess vegetation health. Radar information on the direction of the return signal can also identify wetlands or saturated soils, even under the canopy.

The variability in reflectance of the spectral bands in a satellite image can be used to distinguish among ecosystems and land cover types. Image credit: educationally.narod.ru

Pettorelli told Mongabay-Wildtech that her team plans to incorporate fusion techniques into projects mapping peatlands in Indonesia and various habitats in West Africa’s W-Arly-Pendjari (WAP) transboundary protected area.

“Our aim is to start answering some of the issues we mention in the paper, using these case studies,” she said. “This means trying all different types of fusion techniques and comparing their efficiency in these particular contexts.”

Data fusion may enhance the mapping of threats, such as deforestation. The authors propose that adding radar data, which can penetrate clouds, may help speed the time needed to detect forest clearing in regions with heavy cloud cover, enabling more timely responses on the ground. The recent addition of radar data to Brazil’s monthly Amazon deforestation monitoring, for example, suggests that prior deforestation rates were likely underestimates.

Large-scale data on vegetation structure would better enable assessments of forest degradation, the loss of vegetation under the canopy that is difficult to identify using spectral imagery alone. Mapping invasive plant species, which may differ from native species in leaf chemistry (which affects spectral reflectance) and/or growth patterns (which affect radar backscatter), would also benefit from combining these data types.

Data fusion techniques

We can combine these two types of information using software for integrating or fusing the data, which lead author Henrike Schulte to Bühne describes in a refreshingly easy-to-understand video.

Data integration, which the authors also call decision-level fusion, simply uses the two images as separate variables to classify land cover types or predict a parameter of interest, such as the presence of a species or habitat, or to assess woody biomass across a landscape.
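A minimal sketch of this idea, assuming co-registered multispectral and radar layers plus some field-labelled training pixels; the synthetic values, band names, and the random-forest classifier are illustrative choices, not prescriptions from the paper:

```python
# Minimal sketch of decision-level fusion (data integration): keep the two
# sensors as separate predictor variables in one classification model.
# All values, band names, and the choice of classifier are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_pixels = 1000
multispectral = np.random.rand(n_pixels, 4)   # e.g. blue, green, red, NIR reflectance
radar = np.random.rand(n_pixels, 2)           # e.g. VV and VH backscatter
labels = np.random.randint(0, 3, n_pixels)    # e.g. 0 = forest, 1 = grassland, 2 = water

X = np.hstack([multispectral, radar])         # both sensors side by side as predictors
model = RandomForestClassifier(n_estimators=200).fit(X, labels)
land_cover = model.predict(X)                 # per-pixel land-cover prediction
```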

Object-level fusion extracts vector objects, such as lines or shapes, from pixel-based imagery by clustering adjacent pixels with similar signals into objects and combining them with clusters of pixels with similar values from the second image.

The resulting feature map contains objects that relate to ecological features on the ground. Thus, clusters of pixels from both image types with values that together indicate old-growth forest can then be labeled as old-growth forest polygons, and linear clusters of pixels from both image types with values that together indicate water would be combined as rivers.
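A minimal sketch of the object-level approach, using scikit-image’s SLIC segmentation as one possible clustering algorithm; the layers are synthetic and the parameters illustrative, since the review does not prescribe a particular segmentation method:

```python
# Minimal sketch of object-level fusion: segment co-registered multispectral
# and radar layers into objects, then summarise both signals per object.
# Synthetic data and illustrative parameters; requires scikit-image >= 0.19.
import numpy as np
from skimage.segmentation import slic

rows, cols = 200, 200
multispectral = np.random.rand(rows, cols, 4)  # 4 reflectance bands
radar = np.random.rand(rows, cols, 2)          # 2 backscatter bands

stack = np.dstack([multispectral, radar])      # combine all layers per pixel
objects = slic(stack, n_segments=300, compactness=0.1, channel_axis=-1)

# Each object's mean spectral + radar signature can then be labelled,
# e.g. as "old-growth forest" or "river" polygons.
for obj_id in np.unique(objects)[:3]:
    mask = objects == obj_id
    print(obj_id, stack[mask].mean(axis=0))
```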

Image-level, or pixel-level, fusion combines the pixel values in the two images to create a single new image with all-new pixel values. Each pixel retains its own spectral and radar signal values (rather than being clustered into objects, as with object-level fusion), so this method may be better suited for mapping variables that vary at fine spatial scales, such as different successional stages or soil types within a forest category.

This method speeds analysis by reducing the amount of data that needs processing, but the new blended variables may be more difficult to interpret than the original measures of reflectance or structure.
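One common way to carry out pixel-level fusion is a principal component analysis (PCA) over the stacked per-pixel values, which yields a smaller set of blended bands. This is an illustrative choice rather than the specific method used in the studies reviewed, and the data below are synthetic:

```python
# Minimal sketch of pixel-level fusion via PCA: blend per-pixel multispectral
# and radar values into a new image with fewer, derived bands.
# Synthetic data; PCA is just one of several possible pixel-level methods.
import numpy as np
from sklearn.decomposition import PCA

rows, cols = 200, 200
multispectral = np.random.rand(rows, cols, 4)  # 4 reflectance bands
radar = np.random.rand(rows, cols, 2)          # 2 backscatter bands

pixels = np.dstack([multispectral, radar]).reshape(-1, 6)  # one row per pixel
fused = PCA(n_components=3).fit_transform(pixels).reshape(rows, cols, 3)
print(fused.shape)  # (200, 200, 3): a fused image with all-new pixel values
```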

Several of the algorithms allow the user to combine images with different spatial resolutions and can help “refine” a lower-resolution image without losing its original value.
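In the simplest case, the coarser layer is resampled onto the finer grid so the two can be combined pixel by pixel. A minimal sketch with bilinear interpolation follows; the 30 m and 10 m resolutions are illustrative, and the “refinement” algorithms the review refers to are more sophisticated than this:

```python
# Minimal sketch: resample a coarser layer onto a finer grid so that the two
# can be fused pixel by pixel. Resolutions and data are purely illustrative.
import numpy as np
from scipy.ndimage import zoom

coarse_band = np.random.rand(100, 100)   # e.g. a 30 m resolution multispectral band
fine_band = np.random.rand(300, 300)     # e.g. a 10 m resolution radar band

resampled = zoom(coarse_band, 3, order=1)  # bilinear resampling by a factor of 3
print(resampled.shape, fine_band.shape)    # both (300, 300), ready to combine
```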

An overview of multispectral-radar SRS data fusion techniques (described in the text above), used to predict either a categorical variable, like land cover, or a continuous variable, like species richness. In pixel-level fusion, the original pixel values of radar and multispectral imagery are combined to yield new, derived pixel values. Object-level fusion refers to (1) inputting radar and multispectral imagery into an object-based image segmentation algorithm, or (2) segmenting each type of imagery separately before combining them. Decision-level fusion refers to quantitatively combining multispectral and radar imagery in a predictive model or classification algorithm to derive the parameter of interest. Image credit: Schulte to Bühne & Pettorelli (2018).

Challenges facing adoption of data fusion techniques

Open-source modeling software that can combine these data already exists, and the increasing availability of global-scale data at minimal cost has increased access to users beyond remote sensing specialists. Nevertheless, some experience working with spatial data would be helpful for most of these methods.

Many ecologists are unfamiliar with remote sensing data, especially radar data. Sourcing, pre-processing, and interpreting the data take some expertise and hardware capacity, though cloud computing may help reduce this barrier.

In addition, not all studies reviewed in the paper compare the mapping accuracy achieved after data fusion to that achieved by using a single type of sensor, so it remains unclear what the added value of data fusion was in these cases. Better understanding of the contexts in which data fusion adds value will help to broaden its use.

The authors recommend that the ecology and remote sensing communities collaborate to identify where and how to cost-effectively apply the growing availability of satellite data to addressing the challenges of monitoring biodiversity.

“Collaboration between these communities could be particularly beneficial when set around a project in biodiversity-rich countries, where information can be sparse,” said Pettorelli. “How is the difficult question, as it requires finding long-term solutions to boost interdisciplinary work.”


Reference

Schulte to Bühne, H., & Pettorelli, N. (2018). Better together: Integrating and fusing multispectral and radar satellite imagery to inform biodiversity monitoring, ecological research and conservation science. Methods in Ecology and Evolution. https://doi.org/10.1111/2041-210X.12942
