- Seminal studies published almost 20 years ago argued that conservationists needed to start examining whether their actions were actually causing the desired effects.
- Assessing conservation projects through a causal lens takes more effort, but it helps practitioners determine whether their interventions, rather than other factors, are driving the outcomes they observe.
- “What’s needed now is making causal evaluation standard practice rather than the exception. With biodiversity in crisis, we can’t afford to keep guessing whether our actions work,” a new op-ed argues.
- This post is a commentary. The views expressed are those of the author, not necessarily of Mongabay.
In 2006, Paul Ferraro and Subhrendu Pattanayak issued an urgent warning: conservation lacked the causal evidence needed to know what actually works. This mattered because decades of conservation efforts were failing to stall the decline in biodiversity around the world, suggesting that scarce funding was potentially being diverted to well-intentioned but ineffective efforts, rather than toward approaches with demonstrable impact — hence the title of their paper, “Money for nothing?”
The message was clear: conservationists needed to start examining whether their actions were actually causing the desired effects. A classic study published two years later showed why this mattered. In 2008, Kwaw Andam and colleagues, including Ferraro, found that protected areas were less effective at reducing deforestation than earlier research had claimed. The problem was that the earlier studies hadn’t accounted for the fact that protected areas are often created far from roads and towns, places where deforestation is already less likely. Because those studies failed to account for this location bias, protected areas appeared more effective simply because they were located in places that were less likely to be deforested in the first place.
The protected area example illustrates the pitfalls of relying on correlation to infer impact. Most of us are familiar with the refrain that “correlation is not causation,” yet correlation remains seductive simply because it’s easier to observe that two things happen together than to prove that one caused the other. We may observe that forests inside protected boundaries remain standing while surrounding forests disappear. But without ruling out alternative explanations — in this case, that remoteness, not protection, preserved those forests — we risk misinterpreting what drives conservation outcomes.
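To make the location-bias problem concrete, here is a minimal simulation using entirely invented numbers (the remoteness, protection and deforestation rates below are illustrative assumptions, not figures from the Andam et al. study). Protection is assigned preferentially to remote sites, which are less likely to be deforested anyway, so a naive protected-versus-unprotected comparison overstates the true effect; comparing like with like within remoteness strata recovers something close to it.

```python
import random

random.seed(0)

# Synthetic illustration: 1,000 forest sites. Remote sites are both
# more likely to be protected AND less likely to be deforested, so
# remoteness confounds the naive comparison.
sites = []
for _ in range(1000):
    remote = random.random() < 0.5
    # Protection is placed preferentially in remote areas.
    protected = random.random() < (0.8 if remote else 0.2)
    # Assumed true effect of protection: 5 percentage points.
    p_deforest = (0.10 if remote else 0.40) - (0.05 if protected else 0.0)
    deforested = random.random() < p_deforest
    sites.append((remote, protected, deforested))

def rate(subset):
    """Share of sites in the subset that were deforested."""
    return sum(d for _, _, d in subset) / len(subset)

prot = [s for s in sites if s[1]]
unprot = [s for s in sites if not s[1]]
naive = rate(unprot) - rate(prot)  # confounded estimate

# Compare like with like: estimate the effect within each
# remoteness stratum, then average, weighted by stratum size.
adjusted = 0.0
for r in (True, False):
    stratum = [s for s in sites if s[0] == r]
    p = [s for s in stratum if s[1]]
    u = [s for s in stratum if not s[1]]
    adjusted += (rate(u) - rate(p)) * len(stratum) / len(sites)

print(f"naive estimate of avoided deforestation: {naive:.2f}")
print(f"remoteness-adjusted estimate: {adjusted:.2f}")
```

In this toy setup the naive comparison suggests protection avoids roughly four times as much deforestation as it actually does; stratifying by remoteness brings the estimate back near the 5-percentage-point effect that was built into the simulation. Real impact evaluations use richer versions of this idea (matching, regression adjustment, difference-in-differences), but the logic is the same.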

Causal thinking in conservation
Establishing causation requires more than passive observation. It demands thinking through alternative scenarios: What might have happened if the conservation program hadn’t been implemented? What if it had been done differently? These hypothetical alternative universes are known as “counterfactuals” (a concept my co-authors and I define and discuss in this paper), and they are a key part of cause-and-effect reasoning.
Thinking about causation also involves working through mechanisms. This means tracing the sequence of changes that are expected to link the intervention to the ultimate outcome. For example, if community forest management is expected to reduce deforestation, is that by changing local harvesting practices through increased awareness? Is it by increasing compliance with rules through community-based monitoring and enforcement? Does local participation in forest management increase the perceived legitimacy of rules, thus increasing compliance? Understanding these intermediate steps helps distinguish between interventions that genuinely drive change and those that merely coincide with it.
Seeing the world through this kind of causal lens takes more effort but ultimately helps us get one step closer to identifying cause-and-effect relationships. Without causal evidence, we are effectively engaging in “magical thinking,” spending limited funds and hoping for impact. This is no longer tenable given the scale of the environmental crisis, our limited available resources and the sophistication of today’s analytical tools.
Where are we now?
Nearly two decades on from Ferraro and Pattanayak’s warning, there’s been a lot of progress, with increasing numbers of impact evaluations of conservation projects, and some reviews (here and here) of these studies. However, most of these are academic exercises led by research teams at universities: although valuable, these are often highly technical and challenging for practitioners without statistical backgrounds to interpret and to generalize to their context. Despite a few examples of organizations conducting their own impact evaluations (often in collaboration with universities), the vast majority of conservation projects are still not evaluated for impact. This is becoming critical: biodiversity continues to decline and key tipping points are being crossed.
This isn’t for lack of interest: most conservation organizations are keen to learn from their efforts. In my role as chair of the Society for Conservation Biology’s (SCB) Impact Evaluation Working Group, I interact with practitioners from conservation organizations around the world, and there is a lot of interest in impact evaluation, especially from smaller organizations. This may be because they have more limited resources, so every unit of funding spent must count.
So why aren’t they all doing more?

A lot has been written about why conservation organizations are still not relying on causal evidence, and reasons range from limited budgets to the technical complexity of evaluation methods to a lingering belief that traditional case studies are equally valid. Back in 2018, Mongabay’s Conservation Effectiveness series highlighted that even the largest conservation NGOs were still struggling to generate causal evidence. The article noted: “The available science is not easy to use, NGO representatives said, and doing rigorous impact evaluations of their own projects is expensive and requires technical and analytical skills that are often unavailable within the organizations.”
Technical challenges remain, but a recent special issue of Conservation Science and Practice, which I co-edited, showcases how these are being addressed through collaborations between researchers and practitioners, and creative approaches to real-world technical hurdles. One promising development is the use of qualitative methods, which, when implemented rigorously, can reveal whether conservation programs are working and serve as a gateway to quantitative approaches that generate statistical effect sizes.
Partnerships in particular are proving essential. For instance, the SCB’s Impact Evaluation Working Group brings together researchers, practitioners and funders to find ways to collaborate and increasingly integrate impact evaluation into conservation practice. By pooling expertise across disciplines and building strategic collaborations, the group aims to ensure that technical barriers don’t stand in the way of understanding what works.
Yet even as solutions to technical challenges emerge, a recent paper published in the same special issue identifies a deeper challenge. In the context of agri-environmental schemes in the U.S., Ferraro and Kent Messer highlight the problem of misaligned incentives within organizations: conservation professionals are rewarded for leadership and success, yet meaningful evaluation requires a readiness to accept uncertainty, identify shortcomings, and use that evidence to do better. This creates a tension between personal professional incentives and the wider learning culture needed for evidence-based conservation. Admitting that a flagship project didn’t work, or that results are ambiguous, rarely leads to promotion within an organization.
But this incentive misalignment also operates at another level: between organizations and their funders. Conservation groups may feel pressure to demonstrate success to secure future funding. This creates a fundamental tension between the internal incentive to learn what actually works (even when that means acknowledging disappointing results) and the real — or perceived — incentive to project success in order to maintain reputation and funding streams. When evaluation might reveal that a project isn’t working as hoped, conducting rigorous causal assessments may be perceived as a risk rather than an asset.
This dynamic is reinforced by how most funders currently structure their requirements. Many conservation funders demand evidence of activities (for example, the number of rangers hired, workshops conducted, or hectares under management) without requiring causal evidence that these inputs improved conservation outcomes like forest cover or wildlife populations.
In this context, impact evaluations become additional to organizations’ main activities. Conducting an evaluation means overtime and extra work on top of an organization’s core duties, and it carries the fear that disappointing findings might compromise future funding rather than be valued as learning opportunities.

Funders matter and some are taking the lead
If conservation funders really desire impact, they could transform this dynamic by incentivizing learning rather than focusing exclusively on “success.” This would involve rewarding organizations for honest causal assessments of how, when and why conservation works and doesn’t work, regardless of whether results are positive or disappointing. Such an approach would enable genuine adaptive management rather than the performative reporting that is often the norm.
This is what the Arcus Foundation is piloting in its Great Apes and Gibbons Program. In collaboration with the SCB Impact Evaluation Working Group, they are testing a capacity-sharing and networking program with selected grantee partners to strengthen impact evaluation in ways that are both realistic and practical. The approach recognizes the genuine challenges in impact evaluation: the absence of perfect counterfactuals, differences between interventions and sites, and the complex realities practitioners face on the ground.
Rather than demanding rigid evaluation protocols, the initiative aims to create dialogue with grantee partners about counterfactual thinking and causal inference concepts, supporting them to consider their work through a causal lens while understanding what’s feasible given real-world constraints. Alongside this, learnings will be used to adapt the Arcus Foundation’s grant report form where appropriate, so that grantee partners can describe the ways in which they have approached causal evaluation, the barriers (and opportunities), and the lessons learned.
As a pilot, the goal is to learn. This means understanding what resonates with grantee partners, what proves practical to implement, and what additional support would help organizations integrate counterfactual thinking and causal approaches into their work, where appropriate. The process acknowledges that measuring real conservation impact takes time and must reflect conditions on the ground, but also that the field (including funders and practitioners) needs to push substantially further than simply documenting activities and outputs.
Moving forward with causal inference in conservation practice
Impact evaluation — establishing causal links between interventions and outcomes — does not imply ticking a box and merely identifying binary measures of success or failure. It involves learning how an intervention works, what assumptions underpin those mechanisms, the different pathways that lead to the same effect, and how tweaking attributes of a project (for example, more rangers, or more compensation for human-wildlife losses) leads to changes in behavior, ecosystems and livelihoods. There are many ways that impact evaluation can lead to valuable learning beyond the simple measure of “success” versus “failure,” and this is what most conservation organizations want.
Nearly two decades after Ferraro and Pattanayak’s warning, the momentum toward this kind of causal learning is finally building. Technical barriers are being overcome through creative partnerships. Funders like Arcus are showing it’s possible to reward this kind of learning over performative success, and practitioners increasingly see causal thinking not as a burden but as a path to the adaptive management they’ve always aimed for.
The tools exist, the expertise is growing, and the will is there. What’s needed now is making causal evaluation standard practice rather than the exception. With biodiversity in crisis, we can’t afford to keep guessing whether our actions work, but it looks like we’re finally headed in the right direction.
Tanya O’Garra is a senior research associate at the University of Oxford and chair of the Society for Conservation Biology’s Impact Evaluation Working Group.
Statement of competing interests: The Impact Evaluation Working Group – of which the author is the chair – is in discussions with Arcus Foundation regarding potential funding for training and mentoring activities for Arcus grantee partners. If agreed, this funding will include remuneration for mentors and workshop coordinators, which could potentially include members of the working group board including the author. The author reports no other financial or personal relationships with Arcus.
See related commentaries:
Small grants are key to a successful next generation of conservationists (commentary)
When abandoned conservation projects are counted as progress, what are we protecting? (commentary)
Citations:
Ferraro, P. J., & Pattanayak, S. K. (2006). Money for nothing? A call for empirical evaluation of biodiversity conservation investments. PLOS Biology, 4(4), e105. doi:10.1371/journal.pbio.0040105
Andam, K. S., Ferraro, P. J., Pfaff, A., Sanchez-Azofeifa, G. A., & Robalino, J. A. (2008). Measuring the effectiveness of protected area networks in reducing deforestation. Proceedings of the National Academy of Sciences, 105(42), 16089-16094. doi:10.1073/pnas.0800437105
Joppa, L. N., & Pfaff, A. (2009). High and far: Biases in the location of protected areas. PLOS One, 4(12), e8273. doi:10.1371/journal.pone.0008273
O’Garra, T., Martin, R., Pynegar, E., Polo‐Urrea, C., & Eklund, J. (2025). Selecting among counterfactual methods to evaluate conservation interventions. Conservation Science and Practice, 7(11), e70066. doi:10.1111/csp2.70066
Börner, J., Schulz, D., Wunder, S., & Pfaff, A. (2020). The effectiveness of forest conservation policies and programs. Annual Review of Resource Economics, 12(1), 45-64. doi:10.1146/annurev-resource-110119-025703
dos Santos Ribas, L. G., Pressey, R. L., Loyola, R., & Bini, L. M. (2020). A global comparative analysis of impact evaluation methods in estimating the effectiveness of protected areas. Biological Conservation, 246, 108595. doi:10.1016/j.biocon.2020.108595
O’Garra, T., Neugarten, R., Pynegar, E., & Eklund, J. (2025). Impact evaluation for conservation: Bridging research and practice. Conservation Science and Practice, 7(11). doi:10.1111/csp2.70179
Ferraro, P. J., & Messer, K. D. (2025). Lessons learned from 10 years of embedding experimentation in agri‐environmental programs in the United States. Conservation Science and Practice, 7(11). doi:10.1111/csp2.70047
Catalano, A.S., Redford, K., Margoluis, R., & Knight, A.T. (2018). Black swans, cognition, and the power of learning from failure. Conservation Biology, 32(3), 584-596. doi:10.1111/cobi.13045