- Conservation relies on many widely used strategies but has far less reliable evidence about how well they work, making it difficult to direct scarce resources effectively. Researchers increasingly argue that measuring causal impact — not just tracking activities or trends — is essential to understanding real outcomes.
- Impact evaluation seeks to determine what would have happened without an intervention, but doing so is challenging because conservation actions occur in complex, real-world settings where experiments are often impractical. Without accounting for factors like location bias, programs can appear more effective than they truly are.
- To address this, conservationists are adopting methods from fields such as economics and public health, including randomized trials where possible and quasi-experimental approaches when they are not. Different tools suit different contexts, and evaluation needs evolve as projects move from pilot stages to large-scale implementation.
- Evidence gaps, limited resources, and institutional incentives can all discourage rigorous evaluation, yet the stakes are high as biodiversity loss accelerates. Most experts now agree that while not every project requires exhaustive study, systematic learning about what works is crucial to improving conservation outcomes.
Conservation has never lacked ideas. Protected areas, payments for ecosystem services, community management, certification schemes, and public campaigns have all been promoted as solutions to biodiversity loss. What has often been missing is reliable knowledge about how well these interventions work, for whom, and under what conditions. A growing body of recent research argues that answering those questions requires moving beyond counting activities to establishing causal impact — determining whether observed outcomes can truly be attributed to conservation actions.
Two recent commentaries underscore this shift. One, published on Mongabay by Oxford researcher Tanya O’Garra, warns that conservation risks spending scarce funds on “well-intentioned but ineffective efforts” without stronger causal evidence. Another, published in Nature, argues that biodiversity policy suffers from an “evidence problem,” with many interventions not grounded in robust research. Together with recent methodological papers, they reflect a field attempting to move from persuasion to proof.
From monitoring to impact evaluation
Traditional conservation monitoring focuses on trends: forest cover, species abundance, or compliance indicators. These metrics are valuable but insufficient. A forest might remain intact because of protection, or because it lies far from roads, markets, or settlements. Distinguishing between these possibilities requires impact evaluation — assessing changes that can be causally attributed to an intervention.
Impact evaluation centers on a deceptively simple question: what would have happened without the intervention? Because this counterfactual world cannot be observed, researchers approximate it using comparison groups or statistical techniques. The aim is to rule out alternative explanations for observed outcomes.

In a recent special issue of Conservation Science and Practice devoted to impact evaluation, researchers led by Rachel Neugarten of the Wildlife Conservation Society (WCS) synthesize decades of methodological work into a practical guide for practitioners. They note that impact evaluation seeks to “credibly attribute an observed outcome to an intervention by ruling out alternative explanations.” It is especially valuable for large, costly, or high-risk projects, or for approaches that are untested or require proof of additionality.
The distinction between monitoring trends and establishing causation has practical consequences. Without causal evidence, organizations may misallocate resources to programs that appear effective but are not. An editorial introducing the Conservation Science and Practice special issue warns that conservation efforts risk funding initiatives that may be “ineffective, or even detrimental” without credible evidence of impact.
The counterfactual challenge
Establishing causation is difficult in complex socio-ecological systems. Conservation interventions rarely occur in controlled settings, and random assignment is often impossible for ethical or logistical reasons. Protected areas cannot be placed randomly across landscapes; they are typically established in remote or politically feasible locations.
This creates selection bias. Early studies comparing deforestation inside and outside protected areas often concluded that parks were highly effective. Later analyses showed that many parks were sited in places where deforestation pressure was already low. When studies accounted for this bias using counterfactual methods, estimated impacts often declined.
A global review of protected-area evaluations highlights the problem. Simple comparisons between protected and unprotected areas can overestimate effectiveness because the areas differ in terrain, accessibility, or economic value. Counterfactual approaches that identify comparable control areas produce more credible estimates.
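The siting bias described above can be illustrated with a toy calculation. The sketch below uses entirely hypothetical parcels and effect sizes (not drawn from any study): protection is assigned preferentially to remote locations, so a naive protected-versus-unprotected comparison overstates the policy effect, while matching each protected parcel to an unprotected parcel with similar road access recovers something closer to the true effect built into the simulation.

```python
import random

random.seed(0)

# Hypothetical parcels: protection skews toward remote, low-pressure
# locations, mimicking the siting bias described in the text.
def make_parcel(protected):
    dist_road = random.uniform(20, 80) if protected else random.uniform(0, 60)
    # Baseline deforestation falls with remoteness; protection itself
    # removes 5 percentage points (the "true" effect in this toy world).
    deforest = max(0.0, 30 - 0.3 * dist_road
                   - (5 if protected else 0) + random.gauss(0, 2))
    return {"dist_road": dist_road, "deforest": deforest}

protected_parcels = [make_parcel(True) for _ in range(200)]
unprotected_parcels = [make_parcel(False) for _ in range(400)]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Naive contrast confounds protection with remoteness, overstating impact.
naive_effect = (mean(p["deforest"] for p in unprotected_parcels)
                - mean(p["deforest"] for p in protected_parcels))

# Nearest-neighbor matching on road distance compares each protected parcel
# to the most similar unprotected one, approximating its counterfactual.
matched_effect = mean(
    min(unprotected_parcels,
        key=lambda u: abs(u["dist_road"] - p["dist_road"]))["deforest"]
    - p["deforest"]
    for p in protected_parcels
)
```

On this synthetic data the naive estimate comes out well above the matched one. A residual gap from the true effect remains because some protected parcels are more remote than any available control — a “common support” problem real evaluations also face.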
Such findings do not imply that protected areas fail. Rather, they show that effectiveness varies and cannot be assumed. The lesson is methodological: without careful design, conservation may mistake favorable geography or pre-existing conditions for policy impact.
Experimental and quasi-experimental methods
Researchers increasingly borrow tools from economics, public health, and development studies. Experimental designs, particularly randomized controlled trials (RCTs), are widely considered the most robust way to establish causation. By randomly assigning units — communities, households, or landscapes — to treatment and control groups, RCTs create credible counterfactuals.
In conservation, RCTs remain rare but are gaining attention. A recent review describes a “causal revolution” in the field, driven by the need to predict real-world impacts more accurately. Many conservation programs operate by changing human behavior, making experimental evaluation both valuable and challenging.
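As a minimal illustration of why randomization creates a credible counterfactual, the sketch below (with invented numbers, not from any actual trial) randomly assigns villages to a hypothetical outreach program and estimates its effect as a simple difference in group means.

```python
import random

random.seed(1)

# Hypothetical trial: 60 villages, half randomly assigned to an outreach
# program that (by construction here) reduces clearing by 4 ha/year.
villages = list(range(60))
random.shuffle(villages)
treated = set(villages[:30])

# Simulated annual forest clearing (hectares) per village.
clearing = {v: random.gauss(20, 3) - (4 if v in treated else 0)
            for v in villages}

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Because assignment is random, the groups are comparable in expectation,
# so the difference in means is an unbiased estimate of the true effect.
estimated_effect = (mean(clearing[v] for v in villages if v not in treated)
                    - mean(clearing[v] for v in treated))
```

The estimate lands near the built-in 4-hectare effect without any covariate adjustment — the payoff of randomization. Real conservation RCTs must additionally contend with spillovers between neighboring communities, attrition, and small samples.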

When randomization is not feasible, quasi-experimental methods offer alternatives. Techniques such as matching, difference-in-differences, and synthetic control attempt to construct comparison groups that approximate the counterfactual. These methods are increasingly used to evaluate policies like forest protection, subsidy reform, or community programs.
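Difference-in-differences, one of the quasi-experimental tools named above, fits in a few lines. The numbers below are invented for illustration: a treated region and a comparison region share a background trend, and subtracting the comparison region’s change isolates the policy effect under the parallel-trends assumption.

```python
# Hypothetical annual deforestation (hectares) before and after a
# protection policy takes effect in the treated region.
before = {"treated": 100.0, "comparison": 120.0}
after = {"treated": 70.0, "comparison": 110.0}

# A simple before/after change in the treated region mixes the policy
# effect with the background trend affecting both regions.
naive_change = after["treated"] - before["treated"]        # -30 ha

# Difference-in-differences subtracts the comparison region's change,
# netting out the shared trend (valid if trends would have been parallel
# absent the policy).
shared_trend = after["comparison"] - before["comparison"]  # -10 ha
did_estimate = naive_change - shared_trend                 # -20 ha
```

Here a before/after comparison would credit the policy with the full 30-hectare decline, while difference-in-differences attributes 20 hectares to the policy and 10 to the regionwide trend.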
Importantly, no single method is universally superior. The appropriate approach depends on context, data availability, timing, and ethical constraints. The recent commentary published on Mongabay by O’Garra emphasizes adopting a “causal lens” — systematically considering mechanisms, alternative explanations, and plausible counterfactuals — rather than prioritizing specific techniques.
Learning across the project life cycle
Evaluation needs also change as projects mature. Early-stage interventions often rely on rapid learning methods: expert consultations, pilot tests, or qualitative research to refine a theory of change. As programs expand, more rigorous evaluation becomes feasible and necessary.
A framework proposed by Sheila Reddy of The Nature Conservancy (TNC) and colleagues distinguishes among “emerging,” “validating,” and “scaling” stages. Early phases emphasize hypothesis generation and feasibility, while later phases prioritize impact and performance evaluation to inform decisions about expansion or termination.
This staged approach recognizes a trade-off between speed and rigor. Demanding randomized trials for every pilot project would slow innovation, yet scaling interventions without evidence risks amplifying ineffective strategies. Adaptive management — adjusting actions in response to evidence — seeks to balance these concerns.
Evidence gaps and accessibility
Even when research exists, it is not always easy to use. Conservation practitioners often operate under tight budgets and time constraints, with limited access to technical expertise. The literature itself can be fragmented and difficult to navigate.
Large initiatives aim to synthesize evidence. The Conservation Evidence project at the University of Cambridge has reviewed more than a million papers across multiple languages, summarizing studies that test conservation interventions. Yet even this vast effort is constrained by what has been studied — often in high-income countries — leaving gaps in other regions and contexts.
Mongabay’s own Conservation Effectiveness series reached similar conclusions. Reviewing hundreds of studies on widely used strategies, it found that many lacked sufficient rigor to attribute outcomes to interventions. Some strategies had been scarcely studied at all.

This does not imply that interventions are ineffective. Rather, the evidence base is uneven, making it difficult to know which tools work best in different settings.
Measuring social and behavioral interventions
Many conservation programs target human behavior rather than ecosystems directly. Campaigns, education initiatives, and incentives aim to change consumption patterns, compliance, or public attitudes. Evaluating such interventions poses additional challenges because impacts are indirect and diffuse.
A study using “conservation culturomics” illustrates the difficulty. Researchers analyzed online engagement data to assess digital campaigns promoting biodiversity. Despite widespread exposure, the campaigns produced limited measurable effects on information-seeking behavior, with some localized impacts but little global change.
These findings highlight a broader issue: awareness does not necessarily translate into action. Evaluations must therefore trace causal pathways from communication to behavior to ecological outcomes — a demanding task.
Incentives, institutions, and politics
Methodological challenges are only part of the story. Institutional incentives can discourage rigorous evaluation. Organizations often rely on demonstrating success to secure funding, creating reluctance to expose ambiguous or negative results. Monitoring outputs — hectares protected, workshops held, rangers trained — is easier and safer than measuring outcomes.
The recent commentaries emphasize that funders play a crucial role. If donors reward learning rather than performative success, organizations may be more willing to conduct honest evaluations. Some initiatives now aim to integrate impact assessment into grant requirements and reporting frameworks.

Evidence-based decision-making also requires cultural change. A practical guide to conservation decision-making edited by William J. Sutherland argues that using evidence should become routine, as in medicine or aviation, where professional norms demand justification based on research.
When evidence is sufficient
Calls for more research raise a legitimate concern: analysis can delay action. In many cases, the drivers of biodiversity loss — habitat conversion, overexploitation, pollution — are already well understood. The challenge lies not in identifying solutions but in implementing them at scale.
Sandra Díaz, an ecologist cited in recent discussions, told Nature that evidence for some interventions is already substantial, particularly regarding harmful subsidies and policy failures. In such cases, the barrier is political will rather than scientific uncertainty.
The implication is that evidence and action must advance together. Waiting for perfect certainty is neither feasible nor necessary, but acting without evidence risks inefficiency or unintended harm.
Toward an evidence-informed conservation practice
Taken together, recent literature suggests a field in transition. Conservation science is moving from documenting decline to evaluating solutions, from correlation to causation, and from anecdote to systematic evidence.
Several themes recur:
- Causal attribution matters. Observed improvements may not result from interventions.
- Method choice depends on context. Experimental, quasi-experimental, and qualitative approaches each have roles.
- Evaluation should be built into project design. Retrofitting assessments later is costly and less reliable.
- Learning requires institutional support. Funders and organizations must tolerate uncertainty and failure.
- Evidence must be accessible. Syntheses and guidance help bridge the research–practice gap.
None of these insights is revolutionary on their own. Their significance lies in convergence: across disciplines and regions, conservationists are grappling with the same problem of how to spend limited resources effectively in a rapidly changing world.
For practitioners, the emerging consensus is pragmatic. Not every project needs a randomized trial, but every project benefits from a clear theory of change, credible monitoring, and a willingness to test assumptions. Where stakes are high — large budgets, irreversible decisions, or uncertain outcomes — rigorous evaluation becomes indispensable.
Biodiversity loss continues at a pace that leaves little room for ineffective interventions. Determining what works will not solve the crisis by itself, but without that knowledge, even well-funded efforts risk missing their mark. The task ahead for the sector is not only to conserve nature, but to learn systematically how conservation succeeds.
Banner image: Redwoods in Thornewood Open Space Preserve in Woodside, California. Photo by Rhett Ayers Butler
Citations:
- O’Garra, Tanya (2026). Conservation programs must embrace causal evidence when evaluating impact (commentary). Mongabay.
- — (2026). Fix biodiversity’s evidence problem. Nature. https://doi.org/10.1038/d41586-026-00309-1
- Neugarten, Rachel A., Rodewald, Amanda D., Eklund, Johanna, & O’Garra, Tanya (2025). An introduction to impact evaluation for conservation. Conservation Science and Practice, 7: e70169. https://doi.org/10.1111/csp2.70169
- O’Garra, Tanya, et al. (2025). Editorial: Impact evaluation for conservation — bridging research and practice. Conservation Science and Practice, 7. https://doi.org/10.1111/csp2.70179
- Schleicher, Judith, et al. (2020). The effectiveness of protected areas in conserving biodiversity: A global review. Biological Conservation, 241.
- Pynegar, Emma L., et al. (2025). RCTs in the wild: Designing and implementing conservation programs as randomized controlled trials. Conservation Science and Practice, 7. https://doi.org/10.1111/csp2.70029
- Reddy, Sheila M. W., et al. (2025). Accelerating learning and impact in conservation by tailoring learning and adaptive management. Conservation Science and Practice, 7: e70117. https://doi.org/10.1111/csp2.70117
- Buřivalová, Zuzana; Richardson, Gwendolyn A.; Rabach, Bennett; Mukul, Sharif A.; Butler, Rhett A. (2025). An open-contributions platform for evidence on forest conservation. Ecological Solutions and Evidence, 6: e70028. https://doi.org/10.1002/2688-8319.70028
- Sutherland, William J., et al. (n.d.). Conservation Evidence. University of Cambridge.
- — (2016). Conservation Effectiveness. Mongabay.
- — (2018). Conservation Effectiveness series sparks action, dialogue. Mongabay.
- Sutherland, William J., Pullin, Andrew S., Dolman, Paul M., & Knight, Teri M. (2023). Transforming Conservation: A Practical Guide to Evidence and Decision-Making. Cambridge: Open Book Publishers. https://doi.org/10.11647/OBP.0321
