Many kinds of evidence

Conservation projects are highly complex, and the contexts in which they are implemented vary from project to project. Peer-reviewed studies alone are therefore not enough to guide decisions, the BINGO sources said.

“We forget that there are thousands of conservation managers on the ground, trying to figure out what’s working and what’s not on a day-to-day basis,” said David Wilkie, director of conservation measures for WCS. “And the type of evidence that they gather with their teams is often expert opinion, based on person-days working in the field or working with local communities. While it’s not academically rigorous evidence, it is what makes conservation work.”

Game agreed that expert judgment can be an important form of evidence as long as it is used transparently and systematically. But even within the narrower scope of peer-reviewed research, well-designed studies of the effectiveness of conservation strategies are hard to find, the sources said.

For one, the scientific literature is heavily skewed toward strategies that are better known or more popular; few researchers study obscure or less glamorous ones. In fact, in December last year, the International Initiative for Impact Evaluation (3ie), an NGO that supports the synthesis and production of evidence to inform policy, published a “gap map” with funding from WWF. The map highlights the most and least frequently evaluated forest conservation strategies. Overall, it shows that the impacts of high-profile strategies like protected areas and payments for ecosystem services have been widely studied, whereas there is very little evidence for lesser-known strategies like training and education campaigns, agroforestry, or national forestry programs.

“So in some ways we have too much evidence about some strategies and not enough of others,” Game said. “I think it’s probably in many cases a mismatch between the time when you most need that evidence and when that evidence is available.”

When it comes to strategies that are relatively new, in a proof-of-concept stage, or otherwise understudied, conservation groups rely heavily on people’s experiences, local expertise, and anecdotes. “And that type of evidence is no less valuable than the quantitative information,” said Louise Glew, director of conservation evidence at WWF and co-author of the 3ie gap map.

Scientific papers are often not practitioner-friendly. Image: Mongabay.

BINGOs run into many other problems in trying to use the available science, the sources said. For example, research is often carried out at spatial scales different from those the NGOs need. A case study might describe the impacts of a small community-managed peat forest on the island of Sumatra in Indonesia. But its results may be of little use to someone trying to set up community-managed forests across Indonesia, which has many different kinds of forests under varying levels of human pressure. “The scientific literature kind of tells part of the story, but we need the whole story in order to really advocate one conservation strategy over another,” said Curan Bonham, director of monitoring and evaluation at CI.

There is also a mismatch in time scales. A large share of effectiveness studies are carried out by Ph.D. students and postdoctoral researchers at academic institutions, who work on short timelines, usually two to five years. In contrast, the BINGOs tend to engage in conservation projects that can take many years, sometimes several decades, to show any detectable effects on the ground. Project managers in the field may also need information about certain impacts over very short time periods to guide their day-to-day management, which, again, may not be what researchers examined for their doctorates. “This mismatch needs to be overcome if we are to make conservation evidence mainstream,” Glew said.

The scientific literature is also rarely user-friendly. A peer-reviewed paper, with its word limit, heavy jargon, and set format, does not give a detailed backstory, for instance. “The reporting is often poor in the papers,” said Andrew Pullin, chair of the board of the U.K.-based Collaboration for Environmental Evidence (CEE). “You don’t know exactly what the authors did, how they generated their data, whether they’ve reported all of their data or just some of the data.”

The available science is also extremely fragmented. Some studies are published in more popular scientific journals or covered widely in the media, and are easy to find. Others stay locked behind paywalls of lesser-known journals. “It takes a lot of effort for practitioners to gather all the relevant literature together and try to get real value out of them,” Pullin said.

Glew agreed. “When you have two studies doing the same thing, but coming up with two completely different outcomes, it’s very difficult to understand why those impacts are different and how they’ve occurred,” she said. “And so it’s a case of distilling the information as best we can into making a decision.”

Crested black macaque. Photo by Rhett A. Butler / Mongabay.

Making it easy for NGOs

Recognizing these limitations, a few organizations have started extracting information from peer-reviewed studies that might be useful for conservation managers. Pullin’s CEE, for example, is an open community of scientists and managers that works independently of conservation NGOs and has produced several systematic reviews assessing the overall effectiveness of broad strategies. But NGO project managers are not using these reviews as extensively as the team would like, Pullin said. “It’s a challenge to raise awareness that we exist, that our products exist, and that they exist to help people make decisions,” he added.

Sutherland’s Conservation Evidence Project at the University of Cambridge takes a slightly different approach. The group focuses on smaller-scale, more localized conservation strategies, and lists all available scientific studies that describe the effects of those strategies. Again, the team is looking for ways to disseminate its information in a more practitioner-friendly form, Sutherland told Mongabay.

“In distilling the evidence base into more accessible forms, the Conservation Evidence Project is an important part of the solution, but more needs to be done,” Glew told Mongabay in July. People working to produce or synthesize evidence must engage directly with decision-makers, she added, recognizing that they are not one group but many. This is important to understand the types of evidence that are relevant to their decisions, and to pinpoint the critical knowledge gaps.

Apart from academic institutions, some NGOs have also come together to collate evidence. The Science for Nature and People Partnership (SNAPP), for example, is a collaboration between TNC, WCS, and the National Center for Ecological Analysis and Synthesis (NCEAS) at the University of California, Santa Barbara. Launched in 2013, SNAPP has created more than 20 working groups of experts, each tackling different environmental questions. One of these working groups has dug through the available scientific literature to produce a map of the impacts of conservation on human well-being.

But these kinds of evidence-synthesizing groups and collaborations remain few and scattered. Moreover, despite these growing resources, the problem of generating and finding good science remains. Publication is often biased: high-profile journals tend to publish research that shows novel results or broad-scale trends, none of which may be relevant to project managers on the ground. There is also a tendency for journals to publish studies that show evidence of change, usually positive, rather than of no or negative change. What this means is that there isn’t enough incentive for academics, either in the form of grants or published papers in high-profile journals, to do the kind of high-quality impact evaluations that the field so badly needs, said Wilkie of WCS.

It is, however, critical to build up evidence. And since BINGOs are at the center of the conservation game, some say that they are best placed to monitor their own projects and generate evidence themselves. But there seems to be very little consensus on how they should go about it.

Measuring impact

Experts from all the conservation NGOs we spoke to agreed that monitoring the outcome of their conservation projects is vital. “Where monitoring was kind of a niche activity five to 10 years ago, it’s now become mainstream and most practitioners have some preliminary knowledge of what that entails,” CI’s Bonham said.

The Conservation Measures Partnership, for instance, is a collaboration between several NGOs, including the BINGOs, that has developed a set of open standards to help practitioners monitor their projects and determine whether they are working.

But rigorous monitoring or impact evaluations — quasi-experimental designs that meet the gold standard of scientific evidence — remain a very small part of the NGOs’ monitoring process.

WWF, for instance, currently invests in impact evaluations of less than 10 percent of its projects, Glew said. Only strategies that are high profile, new or debated in the scientific literature, or strategies for which the group can actually implement an impact evaluation with viable controls in place, currently make the cut. “The vast majority of WWF’s projects do some form of monitoring,” Glew added. “For the more robust types of impact evaluations, we’re still talking about a handful of interventions that are carefully selected to respond to specific information needs of decision-makers.”

WWF is slowly extending impact evaluations to more of its projects. CI and TNC have similarly small, but growing, portfolios of projects whose impacts they evaluate rigorously.

In fact, WWF, CI, and TNC have one such ongoing experiment in the waters of the Bird’s Head region in West Papua, Indonesia. The seascape is home to a network of 12 marine protected areas (MPAs) stretching across 225,000 square kilometers (86,873 square miles) to protect the region’s rich tropical fish and coral diversity. To find out if these MPAs are working, the organizations have been tracking coral cover and fish numbers both inside the MPAs and in control sites outside the protected areas since 2012. The teams are also monitoring human well-being in settlements within the MPAs and in non-MPA controls. “We are beginning to see preliminary findings from that work, but we’re just scratching the surface of the data with that effort,” Glew said.
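The logic behind a design like this is simpler than the fieldwork. The sketch below illustrates the basic control-versus-treatment comparison (a difference-in-differences estimate) that monitoring with control sites makes possible; all figures are hypothetical, and this is not the Bird’s Head teams’ actual analysis.

```python
# Illustrative difference-in-differences estimate for an MPA evaluation.
# All figures are hypothetical; this sketches only the control-vs-MPA
# comparison logic described above, not the Bird's Head teams' analysis.

# Mean live coral cover (%) at baseline and follow-up, inside MPAs
# and at matched control sites outside them.
mpa_before, mpa_after = 32.0, 36.5    # hypothetical MPA sites
ctl_before, ctl_after = 31.0, 29.5    # hypothetical control sites

# Effect attributable to protection:
# (change inside MPAs) minus (change at control sites).
effect = (mpa_after - mpa_before) - (ctl_after - ctl_before)
print(f"Estimated MPA effect on coral cover: {effect:+.1f} percentage points")
# Output: Estimated MPA effect on coral cover: +6.0 percentage points
```

Comparing changes rather than raw values is what lets evaluators separate the effect of protection from region-wide trends, such as a warming event that degrades reefs inside and outside MPAs alike.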

Monitoring all projects with that kind of rigor, and producing peer-reviewed studies based on such monitoring, might sound like a good idea. But impact evaluations like the one at Bird’s Head require not only sophisticated scientific and technical skills, but also substantial staff time and other resources to manage the large volumes of data generated at remote sites.

CASE STUDY: EVALUATING IMPACT

Some smaller NGOs, like the Jersey-based Durrell Wildlife Conservation Trust, have been more proactive than the BINGOs in rigorously evaluating their impacts.

Durrell’s mission is to save severely threatened species — like the lion tamarin (Leontopithecus sp.), pink pigeon (Nesoenas mayeri), and the Malagasy giant rat (Hypogeomys antimena) — from extinction. To find out whether its conservation efforts have been effective, the organization has developed the Durrell Red List Index of Species Survival. The index tracks the conservation status of the target species over time and compares it with a counterfactual: what each species’ status would have been had Durrell not implemented its conservation actions.
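As a rough illustration of how such an index can work, consider the sketch below. The category weights and species statuses are hypothetical, chosen only to show the observed-versus-counterfactual comparison; Durrell’s actual index is built on IUCN Red List assessments.

```python
# Minimal sketch of a Red List-style survival index compared against a
# counterfactual. Weights and statuses are hypothetical examples, not
# Durrell's actual methodology.

# Map IUCN-style categories to survival weights
# (1.0 = Least Concern, 0.0 = Extinct).
WEIGHTS = {"LC": 1.0, "NT": 0.8, "VU": 0.6, "EN": 0.4, "CR": 0.2, "EX": 0.0}

def survival_index(categories):
    """Average survival weight across a set of target species."""
    return sum(WEIGHTS[c] for c in categories) / len(categories)

# Observed statuses with conservation action, and the estimated
# counterfactual statuses had no action been taken (both hypothetical).
observed = ["EN", "VU", "CR"]
counterfactual = ["CR", "EN", "EX"]

print(f"Observed index:       {survival_index(observed):.2f}")        # 0.40
print(f"Counterfactual index: {survival_index(counterfactual):.2f}")  # 0.20
# The gap between the two is the impact attributed to conservation action.
```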

“We have a key performance indicator for our mission: the survival probability for our species and how that’s changed over time compared to a counterfactual scenario,” said Richard Young, director of conservation evidence at Durrell. “That same principle applies right down to our individual projects on the ground. It’s really important to be absolutely clear on what it is you’re trying to achieve. With those really clear objectives, you can start to design monitoring plans that you need to be able to collect the data to understand the progress you’re making toward those targets. And ultimately, whether you achieve those targets or not.”

Being a smaller, and therefore nimbler, NGO has definitely helped, Young said. But Durrell’s success at incorporating and generating evidence is mostly because it has made a strategic decision to expand impact evaluations to most of its projects. “I think one key to the progress we’ve made is that we actually started to fund those impact evaluations on both a mission as well as organizational level,” he said. “This way we could demonstrate both internally across the organization, but also to our supporters, that investing in monitoring and impact evaluation is a really important thing for a conservation charity to do, and we can define the difference that we make. We’ve got people behind us who enable the investment that is needed, although we’ve still got some way to go.”

More importantly, such evaluations require generous funding that currently seems to be lacking: donors just don’t seem to be too interested in rigorous impact evaluations. “Donors want impact stories. They want to know their money is making a difference, they want to know that good things are happening,” Game said. “I see more demand for good impact stories than I do for full-blown evaluations.”

Steve McCormick, former president of the Gordon and Betty Moore Foundation, which has granted more than $1 billion to conservation groups, agreed that foundations should do more to encourage rigorous monitoring. But he added that rigorous impact evaluations make sense only for certain strategies and are not a universally applicable tool. “Social change, for example, requires long time horizons, innovation through experimentation and learning; and is rapidly changing and unpredictable,” he said. “Monitoring and evaluating social change would be as much about gathering ‘soft’ or intangible evidence, as it would be about applying hard metrics.”

Amy Rosenthal, formerly a program officer at the MacArthur Foundation, another major donor for NGOs, echoed that sentiment. Rosenthal is currently a senior science strategist for Chicago’s Field Museum. “Not all projects receive this kind of support for impact evaluation; not all need it,” she said. “These are very expensive studies to conduct, and funders balance the dollars they invest in evaluation with the dollars invested in the conservation efforts themselves.”

Figuring out if marine protected areas work is expensive. Photo of Sulawesi island, Indonesia by Rhett A. Butler/Mongabay.

In fact, the cost of an impact evaluation should be judged relative to the value of the information it will produce, some experts say. “A $2 million evaluation of a $500,000 program might be extremely cost effective if the study helps policymakers decide whether or not to scale up into a billion-dollar national program,” Ian Craigie of James Cook University in Australia and colleagues wrote in a 2015 study. “Costs also mean that impact evaluations should not be required of every program; rather they should be commissioned strategically.”

According to Glew, impact evaluations are costlier than traditional monitoring because they require experts with an advanced understanding of statistics, both to design and analyze the evaluation. But she is optimistic that the cost of conducting impact evaluations is dropping as more practicing conservationists become familiar with such evaluations and more graduates leave school with the skills to conduct them. Impact evaluation can also be more cost-effective if it is incorporated into a project right from the beginning, she said, rather than trying to modify an existing outcomes-focused monitoring program.

Given current costs and funding constraints, though, the BINGOs continue to focus on measuring outcomes to track the effectiveness of their work.

WCS’s 5 Measures Program, for instance, focuses on collecting long-term data on five metrics: status of species, habitat conditions, law enforcement, quality of life for people living within project sites, and natural resource governance. These trend data give WCS some indication of its projects’ progress, and even look promising in some cases. In the Huai Kha Khaeng Wildlife Sanctuary in western Thailand, for example, where WCS has been working since 2004, tiger numbers have risen by 50 percent. In fact, the group’s 2015 survey estimated about 60 tigers in the sanctuary, which it says is the “single largest population of tigers in the Indochina subregion.”

“We can’t prove that the trend is a consequence of our action, but once we measure the changing state of these five things over time, we can begin to answer: are our investments sending those trends in the direction that we want them to be?” Wilkie said.

But data can be hard to collect.

Conservation NGO staff are traditionally ecologists or biologists. Photo by Rhett A. Butler / Mongabay.

The trials of collecting evidence

Traditionally, most staff at conservation NGOs have been biologists or ecologists. But conservation is no longer about biology or ecology alone. As human pressures on the environment grow, conservation groups are increasingly being forced to think about how their work affects people, governments, and other stakeholders.

Field staff, however, are usually trained in ecological skills like counting animals or quantifying tree cover, and often lack the expertise to measure social or economic changes. So the bulk of the monitoring resources, and ultimately the evidence, remains skewed toward ecological outcomes. And as the ranks of conservation scientists at the BINGOs shrink, even those data may be in trouble.

The BINGOs are trying to fill this gap in their skills by working with experts from a variety of disciplines.

TNC, for instance, is part of The Bridge Collaborative, a platform that brings together health, development, and conservation experts to measure the impacts of conservation projects. “What we considered to be evidence of impact on human health is probably not the same as what someone who’s a public health official or a clinical health person would consider reasonable evidence,” Game said. “So this initiative will help us come up with a common understanding of evidence.”

Overall, though, what constitutes evidence and what effective monitoring looks like remain open questions. There is no consensus about the best way forward, the sources say. “There are groups who are still stuck on vast numbers of indicators, while others are looking at before-after controlled kind of designs or random controlled trials,” WCS’s Wilkie said.

Wilkie also thinks that expecting conservation NGOs to do impact evaluations for their own projects is a flawed idea. Conservation NGOs are always struggling to secure funds, he said, and are unlikely to be totally unbiased when describing the success or failure of their projects. Doctors don’t design their own clinical trials, he added. “They only diagnose their patients’ illness and treat them based on the available evidence generated by public health researchers and epidemiology departments. But we not only have to diagnose the problem and treat the problem … we have to evaluate it ourselves, which is crazy.”

Way forward

Conservation NGOs have different goals, different objectives, and different ways of working. But they are in the common business of reducing species extinctions and habitat loss.

Experts across the conservation spectrum seem to agree that, for this work to be effective, the conservation community needs a common understanding of what credible evidence means, how best to use different strands of evidence, and how organizations can evaluate their work and create evidence that others can use.

But to achieve this, more collaboration is necessary, the experts say. Not only must donors and conservation NGOs work more closely together, but the academic community must also work hand in hand with the NGOs to ensure that conservation actions are effective.

Writer: Shreya Dasgupta
Editors: Rebecca Kessler, Mike Gaworecki
Copyeditor: Hayat Indriyatno

This is part four in the Mongabay series Conservation Effectiveness. Read the other stories in the series here.

Disclaimer: Mongabay receives funding from the MacArthur Foundation. The foundation does not have editorial input regarding the content of Mongabay stories.

Banner image: An adult male orangutan receiving a health check in Indonesia. Photo by Rhett A. Butler / Mongabay.

Follow Shreya Dasgupta on Twitter: @ShreyaDasgupta
