A year ago, U.S. President Donald Trump shut down public access to the Development Experience Clearinghouse, a $30 billion database holding 60 years’ worth of institutional knowledge from more than 150,000 projects administered by the U.S. Agency for International Development. But before the closure, former USAID employee and artificial intelligence scientist Lindsey Moore used a large language model (LLM) to read all of the information in this database — rescuing critical lessons on development, environmental, economic and social projects in countries across the globe, all documented by USAID.
The data also included information on conservation projects. Many of the challenges presented in these projects repeated over the years, but the lessons were rarely retained — something Moore’s tech startup, DevelopMetrics, hopes to change.
Moore joins this week’s podcast to explain what those lessons are and what conservationists can learn from them. DevelopMetrics deploys an AI model capable of understanding not just the information from USAID’s database, but also other public databases that could be at risk of deletion or being lost to time.
Moore says the problems identified in the data are often not technological in nature, as they occurred over the course of six decades across various sectors and countries. Instead, they tend to be institutional, often rooted in the lack of local community engagement.
“Most of the work of development happens in these air-conditioned rooms. And of course, field work is always encouraged, but it’s expensive.”
Many of the solutions that Moore highlights in the conversation involve directly engaging local people, empowering them to make decisions, delivering outcomes to the places they live, and consistently monitoring and mentoring in-field education and skill-building to produce long-term, sustainable results from projects.
“The example I gave was about [an] energy governance project in Central America,” she says. “It was built around community-defined management systems for solar and water-pumping infrastructure. So with the residents actually picking the committees, setting the tariffs and training local technicians.”
Environmental concerns about the energy and water costs of LLMs are something Moore’s team works to address for clients by providing upfront information on those costs and whether the benefits outweigh them.
“It makes sense because they have to think about it in a much different way than most organizations: others are thinking about it from an effectiveness or cost-saving perspective, while they’re thinking about it from an environmental or conservation perspective.”
Please take a minute to let us know what you think of our podcast, here.
Mike DiGirolamo is the host & producer for the Mongabay Newscast based in Sydney. Find him on LinkedIn and Bluesky.
Banner image: Mangroves on Vanua Levu Island, Fiji. Image by Rhett A. Butler/Mongabay.
Related reporting:
After USAID cut, Ethiopia’s largest community conservation area aims for self-sufficiency
Democratizing AI for conservation: Interview with Ai2’s Ted Schmitt and Patrick Beukema
Transcript
Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Lindsey Moore: It seemed the difference between a policy that works and one that doesn’t is geographic, and that’s what we were just talking about. In a lot of ways, programs perform best when decisions, follow-ups, and problem-solving are happening where the people actually live—whether that’s the forest or the wetlands—and not in kind of the air-conditioned meeting rooms a hundred miles away. And it sounds obvious. The reality is, when I was stationed in Bangladesh, for example, to get actually to the mangrove fields, I probably went once in my three years living in the country, because it’s just hard to get out of the embassy, especially in a lot of places where there’s violence occurring. Often, most of the work of development happens in these air-conditioned rooms. And of course, field work is always encouraged, but it’s expensive. And so I think even though this is a lesson that people intuitively understand, in practice—and again, this is going back to the whole thing about organizational structures—even if you know something, the organizational structures, the bureaucracy, the money to get out to the field, wasn’t really there. So that lesson kept getting relearned over and over again: get out of the embassy. But it’s harder to implement than it sounds because of the structure of the Foreign Service.
Mike DiGirolamo: Welcome to the Mongabay Newscast. I’m your host, Mike DiGirolamo, bringing you weekly conversations with experts, authors, scientists, and activists working on the front lines of conservation, shining a light on some of the most pressing issues facing our planet, and holding people in power to account. This podcast is edited on Gadigal Land. Today on the newscast, we speak with Lindsey Moore, who is the CEO of DevelopMetrics and has a decade of experience as a Foreign Service economist with the United States Agency for International Development—USAID. Just before the Trump administration closed access to a public database containing 60 years of USAID project evaluations, including conservation projects, Moore used a large language model to read all the data and compile it. She joins me today to talk about what she found in this data, which contained detailed lessons from international development and conservation projects that repeated across decades, sectors, and countries, but were rarely retained. She explains that the challenges projects faced were often not technical, but institutional, and that they showed a lack of direct local engagement with the people impacted by, or intended to benefit from, these projects. She explains how these challenges can be circumvented today and how conservation projects and international aid could be improved. Hi Lindsey. Welcome to the Mongabay Newscast. Thank you for speaking with me today.
Lindsey: Great to be here. Thanks for having me.
Mike: So I took a look at your background and you have a really incredible story, and I’d like you to explain a little bit more about it. But just to set the stage here: you’re utilizing critical data from lots of projects that were in a database from USAID, and some of this data is being used for conservation. But before we get to that, can you first explain how it is you became involved with the U.S. Agency for International Development and what your expertise and role was there?
Lindsey: Yeah, sure. So I was a Foreign Service economist with USAID for 10 years. I was based first in Bangladesh. I was a country economist in Bangladesh, then the regional Caribbean economist based in the Dominican Republic. And then my final post was as the senior economist of the Asia Bureau. And in that role I was really tasked with allocating USAID’s biggest economic growth portfolio—so hundreds of millions of dollars—to conservation projects, environment projects, economic growth, agriculture—you name it. And being an economist, when allocating this money, I really wanted to make sure that I was allocating to the most statistically significant intervention that was going to have the best result possible, as everyone did. And we all talk about the importance of evidence-based learning and evidence-based decision-making, but in actuality it was really hard to do, just because of the human constraints: the amount of time that it takes to design these projects and allocate the money, and then the amount of information. That database you talked about has over 200,000 documents going back 60 years. Everything USAID has—well, not everything USAID’s ever done, but everything that’s ever been evaluated and posted into this database. So there is some missing data. But I wanted to go back and see: what’s everything we’ve ever done before, so that I could really understand how to allocate the money better. And it wasn’t possible. It was just too much time to go through this kind of archaic database. And so that’s when I started, seven years ago, looking at large language models: how could I train a large language model to understand all this data to make better decisions? So I left to get my PhD in AI and policy, and from that started my current company, DevelopMetrics, which focuses on using AI and data science to improve evidence-based decision-making.
Mike: Yeah. So now we’re going to get into that. But first I want to describe the database. So it’s an archive, right? And it’s something that the Trump administration blocked public access to. And I believe it’s called the Development Experience Clearinghouse—correct me if I’m wrong—and it has data on 150,000 project evaluations over the course of 60 years. I think you said 200,000, but I saw 150,000. Some of these are conservation projects. So can you describe for our audience what exactly is in this database, and how much of it had to do with conservation?
Lindsey: Yeah, I think because there are so many different documents, that’s why the numbers are a little bit different. There are the evaluations—which you’re quoting—and I would say, for the environment sector alone, at least 25% of those had environmental aspects. Conservation would be less than that. But every project that we worked on had a certain environmental aspect to it, regardless. So I would say every report at least had something relevant. But there were also final evaluations, there were midterm evaluations, there were other special reports—anything that the agency had written, any kind of learning product. So there’s a lot of valuable information in there. And some of it was typewritten, all the way back to really the beginning of the agency. So it’s just an incredible archive that, yes, we don’t know what happened to it. We don’t know if it was deleted or if it still exists in some hidden place. It was in the public domain for, I don’t know, however long—at least for the last 20 years—and growing, and then overnight it was deleted before anyone had the chance to save it. Luckily, at DevelopMetrics, because we built all of USAID’s large language models, we had backed it up because it was our core training data. And so we have that archived version of it. But it’s a huge loss. It’s the biggest development database known.
Mike: Yeah, and correct me if I’m wrong, but I read from your piece in the Stanford Social Innovation Review that $30 billion was spent on the Development Experience Clearinghouse. Can you just describe the scope of the investment involved in this?
Lindsey: Yeah. If you think about an evaluation, each evaluation costs, I would guess, around $200,000, depending on the size, of course. Some are less, some are more. If it’s an impact evaluation, it can be upwards of, I don’t know, $700,000. So just multiply that by the number of evaluations in there, and that’s the amount of money that went into these learning products that were hosted there, because the agency really cared a lot about learning from the past. And so they invested a lot of money into it.
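The arithmetic she sketches lines up with the $30 billion figure cited earlier. A back-of-the-envelope check, assuming her rough per-evaluation average applies across roughly 150,000 evaluations:

```python
# Rough figures from the interview; both are approximations, not exact counts.
evaluations = 150_000          # approximate number of evaluations cited
cost_per_evaluation = 200_000  # Moore's rough average estimate, in USD

total = evaluations * cost_per_evaluation
print(f"${total:,}")  # $30,000,000,000
```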
Mike: And the data itself, as you’ve already described it, was incredibly difficult to access. I think you said that having someone go through it manually would be really unrealistic and just impractical. But before the Trump administration blocked access to it, you found a way to analyze it using your large language model. So can you talk about the nitty-gritty of how that worked?
Lindsey: Yeah, so this was back in the day before GPT was released as a chat, so there wasn’t the AI rush that there is today. What we really did with almost every USAID bureau, and also missions, and then also other partners—the academic world, UN partners—is we would go through those evaluations and hand-label them. What does conservation mean? If you ask GPT, it might give a totally different definition, or you don’t even know what definition it’s pulling from. Whereas we would sit down with conservation experts and say, “Okay, this is conservation,” and then we would highlight texts over and over again and ask, “Is this excerpt conservation? Is this not?” And then we would run the LLM, see what it was tagging itself, and correct it again until it really sounded like the conservation expert at USAID. We did that with water. We did that with agriculture. And even different USAID bureaus had different definitions of things as simple as agriculture: one bureau thought it included livestock, and the other one didn’t. So you can see how much meaning sits behind technical experts’ words—especially with a term like resilience; we spent two months sitting around talking about what resilience means. And so from that, we trained an algorithm to really understand international development data. Because this archive had some tagging, but it wasn’t very well-tagged or consistent, and even going through it to find the document you were looking for was really difficult. So first we had to figure out all the metadata, which is hard: what is the budget of a project when the document has so many budget numbers in it? What is the country it’s talking about when it also references other countries? What is a project name when there are acronyms and so many other things?
So just figuring out the metadata to tag it all, and then going in there and pulling out every single sentence on conservation—every lesson learned over the past 60 years on conservation—took some time. But with AI, we were then able to take this useful but difficult archive, which was really just a dump of everything, and make it into something from which you could really extract key lessons and look at patterns over time. And that’s really the power of AI when it’s done properly. Because if you run it through GPT, you would get a completely different interpretation than the one you would get from our model.
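The label-and-correct loop Moore describes (experts hand-label excerpts, the model tags new text, experts review and correct the tags, and the corrections feed back into training) can be sketched roughly as follows. This is a minimal illustration, not DevelopMetrics’ actual pipeline: a simple word-frequency scorer stands in for the fine-tuned LLM, the expert_review function is a placeholder for the human reviewer, and all excerpts and labels are invented.

```python
from collections import Counter

def train(examples):
    """Build per-label word counts from expert-labeled excerpts."""
    freqs = {"conservation": Counter(), "other": Counter()}
    for text, label in examples:
        freqs[label].update(text.lower().split())
    return freqs

def tag(freqs, text):
    """Tag an excerpt with the label whose vocabulary overlaps it most."""
    scores = {label: sum(counts[w] for w in text.lower().split())
              for label, counts in freqs.items()}
    return max(scores, key=scores.get)

def expert_review(excerpt, model_guess):
    """Placeholder for the human-in-the-loop step: in the real workflow a
    sector expert confirms or corrects each tag. Here it simply accepts."""
    return model_guess

# Round 1: experts hand-label a few excerpts (invented examples).
labeled = [
    ("restored mangrove habitat for protected species", "conservation"),
    ("wetland biodiversity monitoring program", "conservation"),
    ("road construction budget overrun and delays", "other"),
]
model = train(labeled)

# The model tags new excerpts, the expert reviews each tag, and the
# corrected labels are folded back in before retraining.
for excerpt in ["mangrove habitat survey results", "highway budget report"]:
    labeled.append((excerpt, expert_review(excerpt, tag(model, excerpt))))
model = train(labeled)  # repeat until the tags match the expert's judgment
```

The design point is the loop itself, not the scorer: whatever model sits in the middle, the expert’s corrections keep re-anchoring it to the organization’s own definition of each term.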
Mike: And there were about five common lessons that you pulled out, and we will address those. But what I want to first ask you is, were there any really surprising or illuminating findings about conservation that you got from this data?
Lindsey: I think this wasn’t in the article, but when I looked at the conservation data overall, one of the most interesting things was that there are persistent families of advice—“advice families,” as I called them. What we did is we took out all the lessons and created a taxonomy of every type of lesson; those are what we called advice families. What is every kind of conservation advice that has ever been given? And we really saw a repeated diagnosis of similar implementation problems over and over again, even across the 60 years. So the fact that the advice stayed the same over so much time made us think that a lot of it was not necessarily technical. While there were some technical lessons, it seemed that the technical solutions were quite strong. What the lessons were really about were organizational or institutional problems—organizational incentives or accountability arrangements that were rediscovered over and over again and seemed to be more of a systematic, persistent issue outside the technical, if that makes sense.
Mike: And did those issues also apply to other projects that didn’t have anything to do with conservation?
Lindsey: Yeah, I mean, we saw it more in the conservation space, but I think overall there’s this literature strand—Pritchett talks a lot about it—that development practice is less constrained by a shortage of technical solutions than by the organizational and institutional conditions required to implement them. And I think we saw that more in some sectors than in others. But in conservation—I’m not a conservation expert myself—maybe one of the issues is really the political environment in which it’s happening, in the communities where the conservation is happening. That creates more of a need. And this was another big lesson that came up over and over again: the need to engage with the local community and the institutional structures. And I think those, in the conservation sector, are more of an inhibiting factor than the technical solutions. Because it seemed like the sector really has the technical solutions more or less down, at least from the USAID perspective.
Mike: That’s a good jumping off point then for us to start talking about those lessons. So you found these patterns. They kept resurfacing across countries, decades, and sectors, and you described them as old truths rediscovered but rarely retained. And the first lesson was “bring delivery closer to households.” I’m really interested in unpacking that with you. So how does that apply to conservation projects, would you say?
Lindsey: Yeah, it seemed the difference between a policy that works and one that doesn’t is geographic, and that’s what we were just talking about. In a lot of ways, programs perform best when decisions, follow-ups, and problem-solving are happening where the people actually live, whether that’s the forest or the wetlands, and not in the air-conditioned meeting rooms a hundred miles away. And it sounds obvious. The reality is, when I was stationed in Bangladesh, for example, I probably got out to the mangrove fields once in my three years living in the country, because it’s just hard to get out of the embassy, especially in a lot of places where there’s violence occurring. Often, most of the work of development happens in these air-conditioned rooms. And of course, fieldwork is always encouraged, but it’s expensive. So I think even though this is a lesson that people intuitively understand, in practice—and again, this is going back to the whole thing about organizational structures—even if you know something, the organizational structures, the bureaucracy, the money to get out to the field weren’t really there. So that lesson kept getting relearned over and over again: get out of the embassy. But it’s harder to implement than it sounds because of the structure of the Foreign Service.
Mike: So I guess hopefully someday if changes are made, what would you be suggesting as a solution to that?
Lindsey: There was always an encouragement to get out to the field, but there wasn’t the budget. And I think it comes down to the same thing: more funding for people working on these issues to actually spend time in the places where the projects are occurring. It’d be interesting to look at a study and see how much time people are really spending with these communities in a meaningful way, or whether the work is run by the community themselves. But it always comes down to budget in the end.
Mike: Yeah, I guess that also ties in well to your second lesson, which is “practice changes practice,” which you describe as behavior changes only when skills are practiced in the real world. So can you just paint a picture for me? What would that look like in terms of environmental work or conservation?
Lindsey: Yeah. There wasn’t a lot of evidence of this. It’s crazy, because a lot of the way these projects were evaluated is by standard indicators, right? People really like the numbers, because that’s the easiest way to look at a project. So what you normally saw in these conservation projects, and most projects in general, is the number of people trained, disaggregated by sex—because we have to know how many women, how many men. And that’s it: “Okay, a hundred people trained, done.” But it didn’t really prove to do anything. When you look at the qualitative text—which is where large language models or AI can really help, by letting you analyze qualitative text like quantitative data—you see that when someone is in the field showing someone something over and over again, or when they’re practicing and then being reviewed and critiqued by a peer or group, that’s when the lessons really take hold. So skip the PowerPoints and, again, the air-conditioned room—we should stay out of air-conditioned rooms altogether—and really look again and again at what’s actually happening. And with conservation projects, it’s so different each time. It’s not like agriculture, where you have a field crop you can go back to again and reshow them. It’s a little bit more varied, so we’d have to dig a little more into those specific lessons. But again, classrooms are not really that effective in terms of learning.
Mike: Hello listeners, and thank you for tuning in. If you’re curious to read more reporting on USAID or artificial intelligence and their intersection with conservation, I’ve included links to recommended stories in the show notes. And if you haven’t already, I very much encourage you to leave a review of this show on the platform you’re tuning in on and tell a friend about it. We want the podcast to grow and reach as many ears as possible. So let us know what you think or feel free to fill out our podcast survey, which is also linked in the show notes. Thank you very much, and back to the conversation with Lindsey Moore.
So lesson three is “design for scale, not for pilots.” And the way I read this is that programs have to be designed with ownership and long-term sustainability in mind—even after the implementers, as it were, go away and exit the field. But feel free to correct me if I’m wrong. So what does this actually look like, and how does it work with conservation?
Lindsey: Yeah, I think people really are allergic to the idea of pilots because there have been so many. But actually, pilots in the end seem to be a great solution overall, as long as they had a long-term plan after the pilot—which is funny, because that’s usually what was missing. It was, “Okay, check, pilot done. What are we doing now?” And it comes back to budget and the organizational constraints: did they have budget for a follow-up, or did they think about how they were going to transfer that local ownership over to the community? For example, if you’re working in a park—there are a lot of examples working with wildlife—and you have a pilot to create some sort of conservation strategy to protect these animals, great. But the second that you leave, if you don’t have a follow-up plan, all that work just goes away overnight. And that’s why people had a certain allergy to pilots: they see that you go back and all your work has been undone. But if there’s a follow-up plan, then they scale and they work really well, or they’re stopped before you invest too much money. So I actually really like pilots, as long as there’s a follow-up to them and a plan to pass it over to the local community.
Mike: I think it’s really important for us to pause here and just reflect that this is based on 60 years of data. So these are problems that you saw occur time and time again. And it does blow my mind that these lessons weren’t retained in all that time. Can you describe why such—to use your own words—obvious lessons were seemingly not passed on or communicated to whoever was going to implement similar projects or face similar challenges?
Lindsey: One of the interesting things is that even though “pilots” and a lot of these terms seem like newer ideas, you actually see the same ideas happening over and over again—they were just called something different, right? International development loves to make up new terms for things it has been doing all along. And honestly, probably every sector is like this. So even though we call them pilots now, they were doing pilots 60 years ago; they just weren’t calling them that. The cool thing with AI is you can train it to understand the actual thing that’s happening regardless of the term. “Private sector engagement” was a fancy term made up—or popularized—in the last decade, but actually private sector engagement had been happening from the very beginning. So it’s hard to trace those threads when the terminology is changing constantly. And I would also say, on human knowledge: there were plenty of conservation experts at USAID who would’ve known all these things that we found anyway. We found that a lot at USAID. We’d go in with our big finding, and the person who was the expert there would be like, “Yeah, I know this.” So it was in individuals’ heads. It was just hard to get the organizational support to listen to those humans, and then when they left, it all had to be relearned.
Mike: Yeah. The institutional knowledge left with them when they retired, or…
Lindsey: Yeah. Or people just wouldn’t listen to that person, even though they knew. We had one example with child safety. We went in and we said, “USAID is doing all these programs with digital work in schools, but we’re not thinking about protecting children from digital harm. We need to focus on this. It’s a huge gap.” And the digital harm guy said, “Yes, I’ve been saying this.” And everyone else was like, “Wow, we’re not doing anything.” So sometimes you just need a lot of evidence to back up the person who already knows it.
Mike: Yeah. Yeah. Okay, so lesson four. This is really interesting—really, this is one of my favorites: “co-creation beats consultation.” And I was really struck by what you wrote: “Projects last when the people who must run them share real power.” I could think of so many ways this could apply to conservation, but please do elucidate for us what you think that looks like.
Lindsey: Yeah, again, I think this is something that people know, but it’s hard to implement, because the people funding projects like to control them. And within governments, we’re responsible for taxpayer money, so there’s a huge fear of transferring real power. I think power transfer is probably one of the most difficult things to do overall. So again, this goes back to the fact that these lessons aren’t necessarily technical but more organizational or institutional: we’re really talking about power transfer. The example I gave was about an energy governance project in Central America that was really built around community-defined management systems for solar and water-pumping infrastructure, with the residents actually picking the committees, setting the tariffs, and training local technicians. And there was another project in Madagascar where the villages actually selected their own activities and monitored their own progress. When they were monitoring their own progress, it really turned decision-making into a shared practice rather than consultation. So the idea of handing power over for people to monitor—it wasn’t set up so that the same people benefiting were the only ones monitoring it; there was an actual transfer—that power transfer had a huge impact. And we should see more of that.
Mike: It’s not surprising to me, but it’s so inspiring and uplifting to hear that, that there are these concrete examples that you can point to and see, “Look at how this worked.” Thank you for sharing that. The fifth lesson is “strengthen the middle layer,” which I also found interesting. You spoke to another journalist and you told them, “When the middle layer is supported, systems hold. When it isn’t, everything else is just theory.” Can you expand on that and, of course, tell us how that could apply to conservation?
Lindsey: Yeah. It’s the teachers, the nurses, the agronomists, the cooperative leaders—those who are responsible for daily implementation. Those are the key people to engage. There was a Clean Air Green Cities project in Vietnam that worked with low-cost air-quality sensors, but the idea was that it worked with teachers and student mentors to help them transfer or interpret the data and run what they called awareness drives. And this middle tier of teachers and even students organized these clean air days and convinced so many households to ditch conventional stoves. They turned this abstract data into all these small actions that then went all the way up to the political levels, and laws were changed. So it’s these teachers, these people engaging every day—these nurses—that, when they can actually sink their teeth into the lesson or own the project for themselves, we see change. Often these projects miss this middle layer. They go right to the bottom layer or right to the top, and somehow—maybe because this middle layer is not necessarily being influenced directly, or because they’re not the political decision-makers—they get skipped. When actually they have a really incredible role to play.
Mike: So all this insight is obviously incredibly valuable and I’m just curious, how do you condense this material and give it or teach it to NGOs or conservation organizations that want to learn from you?
Lindsey: Yeah, this is just one archive of data, right? There are so many other conservation archives that are under threat, or environmental data that could be used. Granted, this is USAID’s 60 years, which is incredibly valuable. But organizations also have their own archives—we’ve worked with some—and when we combine their own data with this, or with some of these other lost or even existing data sets, it gets so much stronger. This would be really useful if USAID still existed; unfortunately, we just got to it after they shut down. But there would be some real, clear recommendations for how USAID would need to change its organizational structure in order to be more effective. Every organization has slightly different issues. So if you can look at your own database and then combine it with others, and of course bring in local data as well—if you can talk to the local communities—then you have this incredible lighting up of knowledge that was never available before. Because we’re talking about going through hundreds of thousands of documents and data points, and qualitative data, which I find so much more valuable than quantitative. And if organizations could do this type of exercise and really look meaningfully at their data—it’s not just about running their data through a large language model. You really need to think about what your underlying knowledge graph or taxonomy is. If you’re a conservation organization, what is every intervention that you’ve done, and how do you define it? Because that’s really the core knowledge that you have. And once you have that, you don’t have to create a whole new large language model. You can run it on top of Microsoft or others, but just create those fine-tuned workflows that enable the data to be understood the way you understand it, within your organizational context.
Because if you don’t do that, you just get this very generic output. But if you can adapt it to your own organizational context and look at it meaningfully, you can avoid making these same mistakes over and over again.
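The taxonomy she describes, where an organization pins down what it means by each intervention so every workflow applies the same definition, might start as something as simple as a shared mapping. This is a hypothetical sketch: the intervention names, definitions, and indicative terms are all illustrative, not drawn from any real taxonomy.

```python
# A hypothetical starting point for an intervention taxonomy. Each entry
# records the organization's own definition of a term, so downstream
# tagging workflows apply that definition instead of a model's generic one.
taxonomy = {
    "conservation": {
        "definition": "Protection or restoration of ecosystems and species.",
        "indicative_terms": ["habitat", "protected area", "mangrove", "species"],
    },
    "agriculture": {
        # As Moore notes, bureaus disagreed on whether agriculture includes
        # livestock; writing the definition down forces that decision.
        "definition": "Support for crop and livestock production.",
        "indicative_terms": ["crop", "irrigation", "livestock", "yield"],
    },
}

def interventions_mentioned(text):
    """Return the taxonomy entries whose indicative terms appear in a text."""
    lower = text.lower()
    return [name for name, entry in taxonomy.items()
            if any(term in lower for term in entry["indicative_terms"])]
```

A simple term match like this is only a stand-in for the fine-tuned workflows she mentions, but the structure is the point: the definitions live in one place that both humans and models consult.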
Mike: Do you currently work with any NGOs or conservation organizations that are trying to use this for lessons on how to improve their projects?
Lindsey: Yeah, we work with a bunch of UN agencies and a couple of INGOs. We’re not working directly with WWF right now, but we are talking with them a lot, and they have some really interesting ways of thinking about this, because they’re finding large language models and this type of analysis really useful. But they’re also thinking about the energy aspect of running these models from a conservation perspective, and trying to weigh the trade-off: if we run this model, it takes up this much energy, but it saves this much energy—or it saves this many endangered species, or whatever the outcome of the project is—so how do we model those? And so we have built an environmentally sustainable large language model that doesn’t use RAG. It’s not running through the whole document every time you ask a question; instead, we identify the key terms and extract that text, so you have a much smaller set of data to go through every time you look at a question, which reduces the environmental cost of running these models. Because environmental sustainability is, of course, something we have to think about with these models. But honestly, AI in the conservation sector is really not very prevalent from what I’ve seen—a lot of the NGOs are just starting to think about it. I think it makes sense, because they have to think about it in a much different way than most organizations: others are thinking about it from an effectiveness or cost-saving perspective, while they’re thinking about it from an environmental or conservation perspective. So it makes sense that it takes longer, but this type of work, from what I’ve seen, hasn’t really been popularized in the conservation sector.
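The alternative to RAG that Moore outlines, identifying key terms once and extracting only the matching text so that every later question runs over a much smaller corpus, can be sketched like this. It is a simplified illustration under stated assumptions: the term list and documents are invented, and the real system’s term identification is presumably far more sophisticated than a literal word match.

```python
import re

# Hypothetical key terms for the questions we plan to ask of the archive.
KEY_TERMS = {"conservation", "habitat", "mangrove"}

def extract_relevant(documents, terms):
    """One up-front pass: keep only the sentences that mention a key term,
    so repeated questions never reprocess the full documents."""
    corpus = []
    for doc in documents:
        for sentence in re.split(r"(?<=[.!?])\s+", doc):
            words = set(re.findall(r"[a-z]+", sentence.lower()))
            if words & terms:
                corpus.append(sentence)
    return corpus

documents = [
    "The project restored mangrove habitat. Procurement was delayed by a year.",
    "Roads were paved on schedule. Local committees led conservation planning.",
]
corpus = extract_relevant(documents, KEY_TERMS)
# Each later question now runs over two sentences instead of both documents.
```

The energy saving comes from doing the expensive pass once: a per-query retrieval step over whole documents is replaced by one extraction whose output is reused for every question.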
Mike: Yeah, I was going to ask you about that. The energy and water costs of running these models are something to consider, and I was going to ask if you had detailed data on how people could find that out. It sounds like you do, but can you explain a little bit more?
Lindsey: Yeah, I mean, it depends so much on the amount of data, the model, all of that. So before we run any large language model, we do an analysis of how much energy it will take. But we found that for our sector, we don’t need to look at the entire internet; we don’t have to go through that much data. We can really reduce the energy use by having those really clear knowledge graphs, like I said, of what your organization is doing, finding that specific data, pulling it out, and then going through only the data that’s absolutely necessary. A lot of people instead take whole unstructured databases and throw them into these models, and when you look at the costs, they’re huge. There are some pioneering efforts going on in that area, but it’s still pretty nascent and there’s a lot more work to be done, so right now we’re doing it on a case-by-case basis.
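The scale of the saving Moore alludes to can be shown with a back-of-the-envelope comparison. The numbers here are entirely hypothetical; the point is only that inference cost scales roughly with the tokens processed, so filtering the corpus down first cuts the cost proportionally.

```python
# Hypothetical back-of-the-envelope comparison: LLM inference cost scales
# roughly with the number of tokens processed, so filtering a corpus to
# the relevant passages before querying reduces cost proportionally.
def approx_tokens(text: str) -> int:
    """Rough heuristic: about one token per four characters of English."""
    return max(1, len(text) // 4)


# Assumed corpus: 1,000 evaluation reports of ~20,000 characters each.
full_corpus_tokens = 1000 * approx_tokens("x" * 20_000)

# After knowledge-graph filtering, suppose only 50 relevant passages
# of ~400 characters each survive for a given question.
filtered_tokens = 50 * approx_tokens("x" * 400)

reduction = full_corpus_tokens / filtered_tokens  # how many times fewer tokens
```

Under these made-up numbers the filtered query processes a thousandth of the tokens, which is the kind of difference that makes the per-query energy accounting Moore describes tractable.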
Mike: I thought I read somewhere that part of your model was open source. Do I have that right?
Lindsey: Yeah. We have so many different models these days, but we published an academic article on our first model, the Development Evidence Large Language Model. That’s the one we created for USAID, and it’s based on RoBERTa Large, one of the older large language models that’s very easy to fine-tune. On top of RoBERTa Large, we brought in all those hundreds of thousands of hand-labeled data points that I mentioned, and from that, had it better understand the development sector. We still use it; honestly, we still find that RoBERTa Large is really responsive to fine-tuning. But every day a new model comes out and there’s a new way of using it, so now before we do any project we look at everything available, and we’re constantly tweaking and changing things. Still, that is our core model, and it usually has the best metrics for performance and environmental sustainability.
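What “fine-tuning on hand-labeled data” means can be illustrated with a toy model. This is a conceptual sketch only, not DevelopMetrics’ pipeline: a tiny bag-of-words classifier stands in for RoBERTa Large, and a handful of invented examples stand in for their hand-labeled corpus. The idea is the same: start from existing weights and continue training on sector-specific labeled examples so the model learns the domain.

```python
# Conceptual sketch of fine-tuning: continue training existing weights on
# a small set of hand-labeled, domain-specific examples. A tiny logistic
# classifier over bag-of-words features stands in for a pretrained model.
import math

VOCAB = ["tariff", "committee", "solar", "budget", "invoice", "meeting"]


def featurize(text):
    """Bag-of-words count vector over the toy vocabulary."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]


def predict(weights, x):
    """Logistic probability that the text is about the target topic."""
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))


def fine_tune(weights, labeled_examples, lr=0.5, epochs=50):
    """Continue training the given weights on hand-labeled domain data."""
    w = list(weights)
    for _ in range(epochs):
        for text, label in labeled_examples:
            x = featurize(text)
            err = predict(w, x) - label  # gradient of log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w


# "Pretrained" weights that know nothing about the development sector.
base_weights = [0.0] * len(VOCAB)

# Invented hand-labeled examples: 1 = about community energy governance.
labeled = [
    ("the committee set a solar tariff", 1),
    ("solar tariff approved by the committee", 1),
    ("please send the invoice before the meeting", 0),
    ("budget meeting and invoice review", 0),
]

tuned = fine_tune(base_weights, labeled)
score = predict(tuned, featurize("committee discusses solar tariff"))
```

In the real setting the base weights come from a pretrained RoBERTa Large checkpoint and the labels from hundreds of thousands of hand-labeled development documents, but the mechanism (small labeled corpus nudging general weights toward a sector) is the same.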
Mike: So if anyone is interested in learning more about DevelopMetrics or possibly working with you, what would be the best way to get in touch or engage you?
Lindsey: Yeah, I can share our contact information in the show notes, and our website is developmetrics.com; people can just reach out directly. We have a whole team that’s happy to help. We’re a social enterprise, and our mission is to use evidence to improve outcomes on the ground, because we see that there’s less money than ever to deal with these types of issues, and more issues than ever. So we really want to ask: how can we use all this data, all this evidence available, to improve outcomes on the ground? Our other core social mission is preserving all these lost databases. USAID is a powerful example of what you can do with one, but fragmentation is a structural issue in the conservation sector, and a lot of data sets are under attack. So how could we bring all this data together and not just create another database where it all sits, but actually make it usable? I think that’s a key point. I would love to see efforts like that happening in the environmental or conservation sector, and we need philanthropy for that.
Mike: Yeah, for sure. This question may be helpful for transparency: how exactly are you funded?
Lindsey: So we’re self-funded, a bootstrapped startup; we are five years old now, a woman-owned social enterprise. We thought about being a nonprofit, but we really wanted to make sure that our clients were willing to pay for the data being used. As an economist, I see so many knowledge projects get set up and then become ghosts: websites that don’t get used once the funding is gone. Like the pilot lesson, or the sustainability lesson, we wanted to see that the data would be paid for and used by clients. So we are for-profit. That being said, we are now seeing a huge market failure: no one is willing to pay for the structuring and gathering of all these types of data. Everyone wants to use it, but there’s a whole tragedy of the commons in creating it right now. And that is where I now see space for philanthropy to build a commons that would be open to everyone to use.
Mike: So taking donations is something you could potentially do.
Lindsey: And we are starting a nonprofit arm for exactly this reason, to try to solve this issue.
Mike: Wow. Okay. Lindsey Moore, thank you very much for speaking with me today. It’s been a pleasure having you on the show.
Lindsey: Yeah. Thank you so much for having me. Really appreciate it.
Mike: If you want to read more reporting on USAID or artificial intelligence, please see the links in the show notes. As always, if you’re enjoying the Mongabay Newscast or any of our podcast content and you want to help us out, please do spread the word about the work that we’re doing by telling a friend and leaving a review. You can also support us by becoming a monthly sponsor via our Patreon page at patreon.com/mongabay. Did you know that Mongabay is a nonprofit news outlet? So when you pledge a dollar per month, you are making a very big difference in helping us offset production costs. If you’re a fan of our audio reports from Nature’s Frontline, head to patreon.com/mongabay to learn more and support the Mongabay Newscast. You can also read our news and inspiration from Nature’s Frontline at mongabay.com or follow us on social media. Find Mongabay on LinkedIn at Mongabay News and on Instagram, Threads, Bluesky, Mastodon, Facebook and TikTok, where our handle is @mongabay, or on YouTube at Mongabay TV. Thank you as always for listening.

