- WildTrack, a U.S.-based nonprofit, uses footprints as a noninvasive way of identifying and tracking wildlife.
- The organization has developed an artificial intelligence model that can identify 17 species, including rhinos, lions, leopards and polar bears.
- The footprint identification model has been used to identify the home range of fishers in California, support antipoaching efforts in Botswana, and understand human-wildlife conflict in South America.
Zoe Jewell was working in Zimbabwe in the 1990s when a harsh reality dawned on her. The female black rhinos (Diceros bicornis) she was helping monitor were suffering a loss of fertility after being repeatedly chased down and immobilized for radio collaring.
Later, when she and a colleague brought it up with the local trackers who accompanied them, the trackers broke out in laughter. Their advice: ditch the radio collars and track footprints instead.
The pair took that advice to heart and got to work. Years, and many trials and experiments, later, they now work to help people and organizations around the world identify animals by using pictures of their footprints. North Carolina-based nonprofit WildTrack has developed tools that measure and analyze footprints of animals to identify the species. They’ve also developed an artificial intelligence model that can identify 17 species, including rhinos, lions (Panthera leo) and leopards (Panthera pardus). “If you look down, there’s a wealth of information,” Jewell said. “They’re just waiting to be picked up.”
The technology is being used for a wide array of applications. In California, WildTrack is working with the U.S. Forest Service to identify the habitat and breeding range of the weasel-like fisher (Pekania pennanti), a carnivorous mammal native to the boreal forests of the U.S. and Canada. In Botswana, the team is focused on identifying the movement patterns of rhinos to protect them from poaching. The technology is also being used to understand human-wildlife conflict in Brazil and other countries in South America.
Jewell said her team at WildTrack intends to continue building their database while simultaneously engaging with communities around the world to help them collect data.
“By using their traditional knowledge to pick up data, help us interpret data, process data, they have a real seat at the table,” she said.
Zoe Jewell spoke with Mongabay’s Abhishyant Kidangoor about the applications of WildTrack’s technology, the hurdles they’ve run into, and their plans for the future. The following interview has been lightly edited for length and clarity.
Mongabay: To start with, where did your interest in wildlife stem from?
Zoe Jewell: I’ve always been interested in wildlife for as long as I can remember. When I was a child, conservation wasn’t really a big thing. Everybody just thought there was as much wildlife out there as we wanted.
My grandfather was the first person who got me thinking about wild animals because he was a fur trapper. He had to go out to Siberia, buy furs, take them back to London and make them into fur coats. As a small child, I remember sitting in his office in London and he would pick me up and just put me on a pile of fur. I remember sitting on this pile and trying to figure out what they were because it was something I’d never come across. I remember him saying to me, “Oh, they’re skins that are going to be made into fur coats.” From an ethical point of view, it seemed to be completely at odds with everything I thought was right, but I was too small to understand what was going on.
A little later on, when it became obvious that the natural world was in trouble, I realized I wanted to work in conservation. But there wasn’t really a university course directed toward conservation. So I thought the best thing would be to become a vet. I did veterinary training, and then I realized it’s not what I want either. So I had to create my own rather convoluted path to be able to do what I wanted to do.
Mongabay: How would you describe WildTrack to someone who doesn’t know about it?
Zoe Jewell: WildTrack is a nonprofit. We are looking at ways of monitoring endangered species using completely noninvasive technologies. In other words, we do not dart or chase or collar or trap or interfere with the natural physiology or behavior of the animals that we’re working with.
We start from the value of traditional ecological knowledge. If you take somebody who really knows what they’re doing and you take them out into the wild, they will be able to pick up so many cues from the environment and put together a story. That’s what traditional ecological knowledge is. We take that and we combine elements of it with cutting-edge technology that allows people like us, who are not experts, to tap into some of that expertise. Using those two approaches, the ancient and the modern, we have a tool that we think can make a very big contribution to conservation. It is a very different way of looking at things. A lot of traditionally trained biologists tend to go straight for the darting and collaring approach. To be able to use footprints, you have to be able to see them. But looking down is something that we’re not trained to do. Most of us are looking out at the horizon to see the rhino or whatever it is we’re looking for. But if you look down, there’s a wealth of information. They’re just waiting to be picked up.
Mongabay: What gaps were you trying to fill when you started working on WildTrack?
Zoe Jewell: In the 1990s, we were working in Zimbabwe for the Zimbabwean government to monitor black rhinos. I was fresh out of vet college, and Sky [Alibhai, Jewell’s colleague and co-founder of WildTrack] was teaching at the University of London, and it was supposed to be a one-year sabbatical. We went to a place where they were doing a lot of darting and collaring. Our job was to follow the rhinos after they’d been collared and to find out where they were going. During that period, we collected lots of data, and the data surprised us: instead of the rhinos doing well because they were being protected with the collars, they were actually suffering from a loss of fertility. The females had reduced fertility, and that reduction in fertility was directly correlated with how many times they were immobilized. That was a bit of a shock to us. That’s really when we started thinking that there must be a better way of doing this. We started talking to the trackers we went out with every day looking for these rhinos. They were laughing at us and saying, “Well, you know, you don’t need this [radio collar] thing. All you need to do is look at the ground and all the evidence you need is there.”
We realized that there’s something in this. But there’s no means of validating this or making it objective for Western science. We thought that maybe we should really try and figure out a way of extracting data from these footprints. We can learn a lot from the trackers, and we can translate that for science. That was really the trigger point.
This data is everywhere. Everywhere you look, there are footprints. But everybody thinks of it as a mystical thing that trackers do and that it’s not science. So we thought we should try and create a bridge between ancient, traditional ecological knowledge and something that can be accepted by science as evidence.
Mongabay: How did it go from there? When did artificial intelligence enter the picture?
Zoe Jewell: It was yet another long and slow process. We started looking at footprints on the ground and thinking of ways to take measurements from them that would tell us the difference between species or between individuals. Because we were able to track the rhinos and actually see them, we knew which animal a footprint belonged to when we found it. So we spent a whole season using acetate sheets and tracing the outlines of the footprints. But it was a complete disaster. We spent hours and hours measuring these things. But everybody’s drawings were slightly different, and it depended on the angle the sun was shining from, whether you were directly overhead or not, and so on and so forth. So that was a complete waste of time.
Then we got digital cameras. We put these footprint images onto the computer and started taking measurements. Around that time, we were lucky enough to meet the people at JMP Software, who said they were interested in the work we were doing. They showed us how we could visualize the data, and that was a real turning point. We could actually translate something from the footprint into something for science.
That was the beginning of a long period of experimentation, trying to figure out which measurements were the most important in identifying species, individual, sex and age for different species in different habitats under different conditions. Lots and lots of projects started coming to us from around the world, and so we had all this data coming in, and that was really what drove the development process until we got a footprint identification technology that gave us high accuracy across species, individual, sex and age.
The challenge then was that if we wanted to scale this up and let people do this for themselves fairly rapidly, we needed to think about using machine-learning techniques. We knew at that time that machine learning had been used successfully to identify whole animals from camera traps. But trying to get a machine to identify a footprint is a whole different thing, because a footprint is basically an object which is the same as its background. If you think about identifying a human face or a cat or a dog, you’ve got a clear outline. You know where the object of interest is. With a footprint, it’s a kind of splodge and the outline is not always clear. Inside the footprint, there are irrelevant things like grass or stones. So we teamed up with a group at the University of California, Berkeley, and some teams at Harvard, and started looking at the AI side. I was very skeptical, I must admit. I thought, “Oh, this is just not going to work. It’s difficult enough for a human to be able to do it, let alone a computer.” But to my surprise, it has actually been giving us very good classification rates at the species level.
We are now in the process of writing a paper looking at 17 different species, and we’re getting in the order of 90-92% accuracy, which is pretty good. The more data we bring in, the more accurate it gets. So the AI side is looking very promising.
Mongabay: Could you walk me through how the model works?
Zoe Jewell: Each species has its own unique footprint. Using morphometrics, we put points on a footprint and take measurements from them. The positioning of a number of those points is different for different species. For example, for bears, which have got five toes, we’ll use all five toes, sometimes a metacarpal pad and a carpal pad. The tiny mice that we’re working on have many more pads. An elephant has one huge, simple-looking foot. It has four toes, but they’re hard to pick out in the footprint. With the AI model, we’ve been able to identify 17 different species. The model will tell you which species it is.
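To make the morphometric idea concrete, here is a minimal sketch of how landmark points placed on a single footprint can be turned into a numeric feature vector of inter-point distances. It is purely illustrative: the landmark names, coordinates and choice of features are assumptions for demonstration, not WildTrack’s actual footprint identification technique.

```python
# Illustrative sketch only: landmark names, coordinates and feature choices are
# assumptions for demonstration, not WildTrack's actual measurements.
import itertools
import math

# Hypothetical landmark points (x, y in centimeters) placed on one footprint
# image, e.g. toe tips and pad edges marked by an operator.
landmarks = {
    "toe1_tip": (2.1, 9.8),
    "toe2_tip": (4.0, 10.5),
    "toe3_tip": (5.9, 9.7),
    "pad_front": (4.0, 6.2),
    "pad_rear": (4.1, 2.0),
}

def pairwise_distances(points):
    """Turn landmark coordinates into a feature vector of inter-point distances."""
    features = {}
    for (name_a, a), (name_b, b) in itertools.combinations(points.items(), 2):
        features[f"{name_a}-{name_b}"] = math.dist(a, b)
    return features

for pair, distance in sorted(pairwise_distances(landmarks).items()):
    print(f"{pair}: {distance:.2f} cm")
```

In a real analysis, distance features like these would be compared statistically across many footprints to separate species, and within a species to separate individuals, sexes and age classes.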
The model can now identify black rhino and white rhino [Ceratotherium simum], which are pretty similar, polar bear [Ursus maritimus], jaguar [Panthera onca], tiger [Panthera tigris], lion and leopard. We’ve got puma [Puma concolor] in the Americas. We’ve got two types of tapirs [Tapirus spp.]. There are about 30 species of small mammals which haven’t been included in the model yet. There’s more being added all the time.
We’ve predominantly, until now, gone for large, iconic species that tend to leave a lot of footprints. We work very closely with trackers, not only traditional ecological trackers but also people who have come to tracking from a nonexpert background, and they’re constantly throwing ideas at us for new species.
Mongabay: What do the training data look like?
Zoe Jewell: The training data is basically raw images. So we bring in a raw image and make sure that it’s labeled correctly. Then we label whether it’s left or right, front or hind. And then we draw a box around where the footprint is in the image. Let’s say you’ve taken a picture of a rhino footprint and it’s bang in the middle of the image. So you need to box where it actually is. You do that for all the images in the training set, and then you train the model to learn where the footprint is and what the differences are. Each species is labeled, and so the model will learn “this is a rhino,” or “this is a mongoose,” or “this is an elephant,” or “this is a polar bear” from those footprints. Initially, you hold back some data from the original set. Then you put that in and see how accurate the model is and how well it’s performing. Then you can bring in unknown data and it’ll classify it for you. There are many tests and checks that you have to do along the way to make sure the model is actually doing what it should be.
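As a rough illustration of that workflow, the sketch below shows one way labeled footprint records could be organized and then split so that a portion is held back for evaluation, as Jewell describes. The field names, file paths and split ratio are hypothetical assumptions, not WildTrack’s actual schema or code.

```python
# A minimal sketch of organizing labeled footprint images and holding back a
# test set; all field names and values are illustrative, not WildTrack's data.
import random
from dataclasses import dataclass

@dataclass
class FootprintRecord:
    image_path: str   # raw photograph containing the footprint
    species: str      # e.g. "black_rhino", "leopard"
    side: str         # "left" or "right"
    foot: str         # "front" or "hind"
    bbox: tuple       # (x_min, y_min, x_max, y_max) box drawn around the print

records = [
    FootprintRecord("img_0001.jpg", "black_rhino", "left", "hind", (410, 220, 980, 760)),
    FootprintRecord("img_0002.jpg", "leopard", "right", "front", (120, 90, 640, 580)),
    # ... many more labeled images
]

def split_records(records, holdout_fraction=0.2, seed=42):
    """Shuffle the labeled records and hold back a fraction for evaluation."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]  # (training set, held-out test set)

train_set, test_set = split_records(records)
print(f"training on {len(train_set)} images, evaluating on {len(test_set)}")
```

Evaluating only on held-out images like these is what exposes shortcuts a model may have taken, as the substrate-color example below shows.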
To give you an interesting example, when we started doing AI, we had a group come to us and say, “Give us some data. We’ll do it.” They came back and said they had got 100% accuracy in identifying leopards from lions. We thought, “Wow, that’s incredible.” So we decided to give it a bit more data and the whole thing collapsed. The model just couldn’t function and it didn’t get any of it right. As it turns out, all the lion data were on a slightly reddish soil substrate, and all the leopard data were on a slightly pale, sandy substrate. So the model had simply learned the difference in color.
We have this hype about AI being amazing. It’s only as good as the data you feed it and as good as the human checking that is done. People underestimate the amount of human intelligence that goes into creating a good AI model. If it makes things easier, that’s great. But you can’t trust it to run completely on its own and get things right. It’s about finding that balance of knowing what it can do and what it can’t do.
Mongabay: How is the model being used on the ground?
Zoe Jewell: I would say that the application of our work tends to span three areas.
The first is pure science, where people want to know numbers and distribution of the species they’re working with. For example, we’ve been working with the U.S. Forest Service to try and determine the range of fishers using footprints. Fishers are little carnivores in the Americas. They live in the boreal forests, and even down as far south as the southern Sierra Nevada in California. We put out track plates with bait that these animals visit and leave footprints on. Using the footprints, we’ve been able to determine the distribution of females, which is very important for understanding how they’re dealing with forest fires and whether they’re being forced out of burnt areas or whether they’re managing to hold on, because the females represent the breeding areas.
The second part is focused quite clearly on antipoaching. There’s a lot of poaching going on in sub-Saharan Africa, particularly with rhinos. We’re working with the government of Botswana, the Botswana Defence Force and a group at the Botswana International University of Science & Technology to try and understand movement patterns of endangered species and ultimately how to protect them better. We’ll be using drones to look at rhino trails and basically understand where they’re going, which individuals are using which areas, and to protect them from poaching.
The third main application area is human-wildlife conflict. We have a puma and jaguar project in Brazil and the rest of South and Central America where we’re looking at being able to distinguish puma from jaguar, and at how we can best create corridors to protect these animals from conflict with humans. They tend to roam in agricultural areas, and they get shot for predating livestock. We are trying to understand how to avoid that conflict by encouraging the animals to use corridors that humans are not using, or by encouraging humans to protect their livestock better in the areas where the animals are going. It’s really to do with understanding what those animals need, and sometimes even identifying a particularly troublesome animal in an area.
Recently, we have started looking at the possibility of using small mammals like tiny mice and shrews as indicators of environmental integrity. These small mammals tend to occupy the central part of the food chain. They are what holds everything together. They eat everything below them, and they are predated by everything above them. So if the environment is hit by a negative impact, their numbers will go down quickly. And if it’s doing well, numbers come up quickly. Not only that, the composition of the species within that cohort changes according to the type of impact on the environment. For example, after fire, some species will come back more quickly, or some will be hit harder. Or perhaps, if there’s pollution, some species will be impacted more than others. We are in the process of developing a system that will allow you to capture small animals walking over a track plate, and we’re training both the morphometric and AI models to be able to identify those prints. We’ve just started working on that in Southern Africa.
Mongabay: What have been the challenges in doing this over the years?
Zoe Jewell: We did have a lot of challenges. Firstly, there was the challenge of just being out there in the field for 10 years collecting data in difficult conditions. Then there was the technology challenge. We can see there’s something in this footprint, but how do we translate it for science?
The third challenge was surprising: there was a lot of pushback from people who were involved in the collaring industry. It was really difficult. I think we were quite naive in thinking that just because the technology works, everybody’s going to approve of it. We were told, back in the day, not to use the word “noninvasive” because it was inflammatory. But it’s better now. People are beginning to look at traditional ecological knowledge and realize that there is a lot of evidence around us in the environment, if we only know how to interpret it.
The final challenge is funding, and that’s something that everybody has to put up with in conservation. It’s a constant struggle. Climate change is what’s forcing humans to pay attention, and climate change is so inextricably tied up with biodiversity loss that hopefully solving one will help with the other.
Mongabay: What does the future look like for WildTrack?
Zoe Jewell: Once the technology is fully finished, which will never happen [laughs], we want to start focusing on engaging communities around the world to help us collect data. We’re already doing that. We’ve got a platform where we’ve got people feeding in data all the time, but we’d like to really expand that so that anybody who’s got a phone anywhere in the world can pick up a footprint image when they see it and send it. What I would love to see is the world map completely filling up with data points from endangered species around the world, and having that data be available to decision-makers.
I think the other side of the community participation bit that’s exciting is that it gets people involved. By using their traditional knowledge to pick up data, to help us interpret data, process data, they have a real seat at the table. I think that’ll completely democratize the process of conservation and change the way we think about it.
Banner image: The AI model can differentiate between a black rhino’s footprint and a white rhino’s. Image by Shannon Potter via Unsplash (Public domain).
Abhishyant Kidangoor is a staff writer at Mongabay. Find him on 𝕏 @AbhishyantPK.