Conservation news

Automating drone-based wildlife surveys saves time and money, study finds

  • Reserve managers have begun to survey wildlife in savanna ecosystems by analyzing thousands of images captured using unmanned aerial vehicles (UAVs, or drones), a time-consuming process.
  • A research team has developed machine learning models that analyze such aerial images and automatically identify those images most likely to contain animals, which, according to the authors, is usually a small fraction of the total number of images taken during a UAV survey effort.
  • The new algorithms reduced the number of images needing human verification to less than one-third of the number required by earlier models, and they highlight the patterns in those images most likely to be animals, making the technique useful for image-based surveys of large landscapes where animals appear in relatively few images.

The Great Elephant Census, conducted in 2014 and 2015, counted more than 350,000* elephants across 18 African countries. Human observers in small planes flew some 294,000 kilometers during more than 1,500 hours to systematically count the animals.

Could a future census be managed locally, using unmanned aerial vehicles (UAVs, a.k.a. drones), cameras, and computer vision to detect specific objects, such as elephants, rhinos, or zebras?

Rhinos in the Kuzikus reserve, Namibia. Image by Friedrich Fedor Reinhard.

Although surveying the large animals in their individual reserves is a smaller job than the Great Elephant Census, such surveys cost managers substantial time and money.

A Swiss research team recently tested a new approach to wildlife surveys. They mounted commercial cameras on UAVs to take aerial photos of Kuzikus, a private game sanctuary in Namibia, and applied a type of machine learning to automate part of the image processing.

Surveying terrestrial mammals from the air

Sending field teams out to survey the wildlife of a large nature reserve on the ground, especially where animals occur at low densities, is inefficient and laborious. Some 3,000 animals traverse the 103-square-kilometer (40-square-mile) Kuzikus reserve, on the edge of the Kalahari desert, corresponding to 29 animals per square kilometer (75 per square mile).

Surveying with planes and helicopters covers far more area but can disturb wildlife, requires experienced pilots and human observers, and is both risky and too expensive for many reserves.

With the rapid development of small unmanned aerial vehicles, reserve managers and even livestock ranchers have experimented with counting animals by flying camera-mounted UAVs to capture images or video of the animals below.

The research team launches the unmanned aerial vehicle (UAV, or drone) for a test survey at Kuzikus reserve, Namibia. Image by Friedrich Fedor Reinhard.

A UAV can be programmed to fly specific routes, cover 100 square kilometers (39 square miles) per week, and operate with as little as one pilot on the ground. UAVs are quieter than planes or helicopters, so they are less likely to disturb wildlife, and they remove the risk to human observers of flying survey patterns at low elevation.

Nevertheless, a series of UAV flight campaigns can produce many thousands of images, each of which must be reviewed for the presence of animals. The UAVs used in this study, for example, took more than 150 images per square kilometer (389 per square mile), and, from above, African animals can easily resemble rocks, shrubs, and other features of the landscape.

“We acquired data that intentionally contained acquisitions made at several times of multiple days to account for variations [in the sun’s position],” doctoral candidate and lead author Benjamin Kellenberger and co-author Devis Tuia, both at Wageningen University and Research in the Netherlands, said in an email to Mongabay-Wildtech. “In our experiments, we actually found shadows cast by animals to be particularly helpful for detection performance, as they allowed distinguishing between e.g. gazelles and similarly looking rocks and dirt mounds.”

For this study, the researchers used photographs, rather than video. The main issue with video is that it generates prohibitively large amounts of data, Kellenberger and Tuia said. While it could be useful for showing animal movement and behavior, for the purposes of this study, where the focus was on detecting animals, static images sufficed, they said.

Zebras, wildebeest, and other savanna grazers look very different from the air than from the ground. Image by Sue Palminteri/Mongabay.

Neural networks streamline image analysis

Producing useful information from the hundreds or thousands of images generated during UAV-based monitoring projects typically requires project teams to spend many hours reviewing and analyzing the photos. So the research team developed machine learning algorithms to automate part of the process of detecting and identifying animals in the images.

They used convolutional neural networks (CNNs), a type of artificial intelligence that can effectively detect objects in large image collections, assessed the networks’ potential for surveying wildlife over extensive areas, and developed recommendations for training a CNN on a large UAV-based dataset.

The algorithms highlight the patterns in the images most likely to be animals, enabling the researchers to quickly eliminate most of the images that did not contain wildlife.
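The triage step described above can be thought of as keeping only the images whose strongest detection exceeds a confidence threshold, so that most empty images are set aside automatically. A minimal sketch of that idea; the file names, per-patch scores, and threshold below are illustrative, not values from the study:

```python
# Illustrative triage: each survey image gets a list of per-patch
# confidence scores from the detector; images whose best score falls
# below a threshold are set aside, leaving few for human review.
survey = {
    "img_001.jpg": [0.02, 0.01],        # empty savanna
    "img_002.jpg": [0.97, 0.10, 0.05],  # likely contains an animal
    "img_003.jpg": [0.40],              # ambiguous bush or animal
    "img_004.jpg": [0.03],              # empty savanna
}

THRESHOLD = 0.35  # deliberately low, tolerating false alarms over misses

needs_review = [name for name, scores in survey.items()
                if max(scores) >= THRESHOLD]
# Only img_002.jpg and img_003.jpg would go to a human verifier.
```

Setting the threshold low matches the tolerance the researchers describe: some bushes slip through as false positives, but animals are very unlikely to be discarded unseen.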

“This initial phase of elimination and sorting is the longest and most painstaking,” Tuia said in a statement. “For the AI system to do this effectively, it can’t miss a single animal. So it has to have a fairly large tolerance, even if that means generating more false positives, such as bushes wrongly identified as animals, which then have to be manually eliminated.”

Automating an object-recognition task through machine learning requires a big data set to train the software to recognize the features of interest, in this case large mammals seen from above. The team created a data set of images with and without animals in the Kuzikus reserve by conducting a crowdsourcing campaign in which some 200 volunteers identified animals in thousands of aerial images taken by the researchers.

They trained the AI system by assigning penalty points to different types of errors. They gave the system one point for mistaking a bush for an animal but gave it 80 points for missing an animal completely. In this manner, the authors say, the software learns to distinguish wildlife from inanimate features without missing any animals.
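Such asymmetric penalties are typically expressed as class weights in the training loss. A minimal sketch, assuming a binary classifier that outputs a probability of “animal” for each image patch; the function name is hypothetical, and only the 1-versus-80 weighting mirrors the article:

```python
import numpy as np

# Class-weighted binary cross-entropy, mirroring the penalty scheme
# described in the article: a false alarm (bush labeled as animal)
# costs 1 point, a missed animal costs 80.
W_FALSE_ALARM = 1.0   # penalty weight for flagging background as animal
W_MISS = 80.0         # penalty weight for overlooking a real animal

def weighted_bce(p_animal, is_animal, eps=1e-7):
    """Return the weighted cross-entropy loss for one image patch.

    p_animal  -- model's predicted probability that the patch is an animal
    is_animal -- ground-truth label (True if the patch contains an animal)
    """
    p = np.clip(p_animal, eps, 1 - eps)  # avoid log(0)
    if is_animal:
        return -W_MISS * np.log(p)          # heavily punish missed animals
    return -W_FALSE_ALARM * np.log(1 - p)   # lightly punish false alarms

# Confidently missing an animal costs 80x more than an equally
# confident false alarm on a bush.
loss_miss = weighted_bce(0.1, True)    # model said 10% animal; it was one
loss_false = weighted_bce(0.9, False)  # model said 90% animal; it was a bush
```

Minimizing the average of such a loss during training pushes the network to err on the side of flagging questionable patches rather than missing animals, which is exactly the behavior the study's imbalanced dataset requires.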

“Automating part of the animal counting makes it easier to collect more accurate and up-to-date information,” Tuia said in the statement.

Prediction results on a test set image comparing the current neural network model to a 2017 model used as a baseline. Both models were set to minimize false positives while still detecting 90% of the animals present in the test data set. The neural network (blue) produced far fewer false alarms than the 2017 model (red). Ground-truthed locations are shown in yellow. Figure from Kellenberger et al. (2018).

Once the dataset contains just those images that the AI system recognizes as containing animals, a human conducts the final sorting. The system places colored frames around questionable features to let human interpreters know to examine that part of the image.

In this study, the researchers developed CNN training recommendations that substantially reduced the number of false positives generated by previous models while still detecting 90 percent of animals present. This combination thus detected almost all animals in the Kuzikus reserve automatically and minimized the number of images that the Kuzikus rangers had to screen manually.
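Fixing a recall target and then minimizing false positives amounts to choosing a confidence threshold from validation data: the highest threshold that still keeps the required fraction of true animals. A minimal sketch, assuming each ground-truth animal has been assigned a detector confidence score; the scores below are made up for illustration:

```python
import numpy as np

def threshold_for_recall(animal_scores, target_recall=0.90):
    """Pick the highest confidence threshold that still keeps at
    least `target_recall` of the true animals at or above it."""
    scores = np.sort(np.asarray(animal_scores, dtype=float))
    n = len(scores)
    n_keep = int(np.ceil(n * target_recall))  # animals that must survive
    return scores[n - n_keep]

# Illustrative confidence scores the detector assigned to 10 real animals.
scores = [0.95, 0.91, 0.88, 0.85, 0.80, 0.76, 0.70, 0.62, 0.55, 0.30]
t = threshold_for_recall(scores, 0.90)
kept = sum(s >= t for s in scores)  # 9 of the 10 animals detected
```

Raising the threshold above this value would discard more false positives but drop recall below the 90 percent target; the study's contribution was training regimes that cut false positives sharply at the same recall.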

Kellenberger and Tuia designed their model with low animal densities in mind but believe it would also work well with higher densities. “Results on footage containing a larger number of animals indicated that precision and recall of our detector were still satisfactory,” Kellenberger and Tuia said.

For areas with higher densities of wildlife, they said, the system might require a few adjustments, but it should be up to the task without any technical difficulties.

Scaling up

According to the researchers, recognizing animals in aerial images is challenging because individuals of the same species may look different due to variations in size, fur color and pattern, posture, and angle to the camera. Automated object detection algorithms must therefore be able to learn and account for the various ways a given species appears in images.

Bird’s-eye view of blue wildebeest, or gnus, moving across the Kuzikus, Namibia, landscape. The different positions of each animal and their respective angles to the unmanned aerial vehicle present a challenge to teaching machines to identify and count them automatically. Image by Friedrich Fedor Reinhard.

However, broadening the definition of what a “zebra” or “greater kudu” looks like to a machine may cause the algorithms to mistake background objects, such as downed trees or branch formations, for the target animal species.

The researchers explored how to scale the CNNs, which learn both an object classifier and project-specific image features from the training dataset, to wildlife surveys over extensive areas, and they developed recommendations for training CNNs on other large UAV-based datasets.

They state in their paper that while the greater variety of landscape types generally associated with larger study areas does not usually cause problems for human image interpreters, it may decrease the success rate of machine algorithms trained on a certain landscape.

“Given our system employs off-the-shelf point-and-shoot cameras, occlusions due to dense and tall vegetation will inevitably cause the system to struggle,” Kellenberger and Tuia said. “In such cases, a good alternative would be to employ thermal infrared cameras, which in turn requires data acquisition at night, due to an increased temperature contrast between livestock and soil.”

The researchers did attach thermal sensors to the UAV for the current study, but the landscape’s warm vegetation and soil did not contrast sufficiently with the heat of the animals, so they did not use the thermal image data.

“All these effects have the consequence that extrapolating results from a small to a big area is not going to yield trustworthy results,” the researchers caution, “and models trained on a small subset and then evaluated on larger areas will not perform satisfactorily.”

This image shows the algorithm’s predicted locations of animals in red boxes and the animals’ locations verified by a human interpreter in blue. The red box to the right was a dead tree, not an animal, but the others were confirmed as wildlife. Image by Friedrich Fedor Reinhard.

Neural networks to survey savannas

The researchers said they intended first to show how to train state-of-the-art deep learning-based models for this particular task.

The researchers now plan to apply the technology to different areas by collaborating with institutes using data from Kenya, and to study how to adapt it to repeated acquisitions. Given a model trained on one dataset (area, acquisition year, etc.), they are exploring how to adapt it to a second one with minimal effort from the user. They also hope to develop their models, currently at the prototype stage, into an app for the wider wildlife research community.

“Regular usage of our model would require deploying it into a dedicated application (at the moment it is an academic prototype),” Kellenberger and Tuia said. Once that’s done, however, all the complicated core parts like the neural network design would essentially be hidden under the hood, and the user would be left with an easy-to-use interface. “The rest — training the model and predicting animals in new images — would be done automatically without heavy requirements from the user.”

*Estimated from counts during the sample flights.


Kellenberger, B., Marcos, D., & Tuia, D. (2018). Detecting mammals in UAV images: Best practices to address a substantially imbalanced dataset with deep learning. Remote Sensing of Environment, 216, 139-153.
