AI and Sound Recordings Reveal Forest Recovery in Ecuador

  • Acoustic monitoring and AI tools were used to track biodiversity recovery in plots of tropical Chocó forest in northwestern Ecuador.
  • The study found that species returned to regenerating forests in as little as 25 years, indicating positive progress in forest recovery.
  • Acoustic monitoring and AI-based methods proved to be powerful and cost-effective techniques for assessing biodiversity levels in restored forests, including insects and animals that don’t vocalize.
  • The authors hope these methods make biodiversity monitoring more transparent, accountable, and accessible to support land managers and market-based conservation mechanisms that rely on forest restoration, such as payments for ecosystem services.

Does planting trees bring back the animals? Around the world, people are working to restore forests, either by planting trees or allowing the flora to return naturally. But as the trees grow, it can be difficult to determine if communities of birds, bugs, frogs, and other life forms are also rebounding.

In the Chocó of northwestern Ecuador, one of the most biodiverse rainforests in the world, a group of German and Ecuadorian scientists measured biodiversity recovery across different land types, including active pastures and farms, abandoned farms regrowing into young secondary forests, and mature old-growth forests.

Using a combination of acoustic monitoring and DNA-based surveys, the team found that species were returning to the regenerating forests after just a couple of decades.

Old-growth Chocó forest on Jocotoco’s Canandé reserve. Photo by Javier Aznar.

“We can see how the Chocó rainforest reestablishes itself over time. Within 25 years, you get a lot of species back,” Martin Schaefer, one of the study’s co-authors and the director of the Ecuadorian NGO Jocotoco Foundation, told Mongabay. “It’s a positive message that we need to bring to the public… Over just three decades, forests are regrowing.”

The researchers also found that analyzing animal and insect sounds yields information about the overall biodiversity of the forest, including the silent creatures that don’t sing, call, chirp, or squawk.

Sound-based monitoring using artificial intelligence tools is a powerful and cost-effective technique for monitoring biodiversity recovery in tropical forests and, the authors say, these tools are needed “to support market-based conservation mechanisms that may rely on forest restoration, such as payments for ecosystem services, biodiversity offsets and credit markets.”

“What they’ve done is make a convincing argument that, yes, acoustic monitoring can be used to assess the effectiveness of a forest restoration project,” Wesley Hochachka, an ecologist at Cornell University’s Lab of Ornithology who was not involved in the study, told Mongabay. “I’m not aware of any more thorough examination or demonstration that the potential [of acoustic monitoring] is real and can be realized.”

The research team established 43 plots across a “forest recovery gradient” in the Chocó rainforest. On each plot, they used three acoustic monitoring methods and one DNA-based method.

A figure from Müller et al. (2023) shows the locations of the study plots.
One of the pasture study plots, where the forest was cleared decades ago. Image courtesy of Jocotoco Foundation.

On each plot, the team put out sound recording boxes that captured ambient sounds (like rain and wind) as well as the calls of insects and vocalizing vertebrates such as frogs, monkeys, and birds.

The first acoustic method employed experts to listen to 28-minute recorded segments and identify mammals, birds, and amphibians by their calls. This is a long-standing method, but it is limited by the amount of data humans can process by ear and can be costly.

The second method used recordings to create an acoustic index analysis, which describes qualities like the complexity and diversity of sounds. For instance, the soundscape of a mature forest is more diverse and therefore denser than that of a pasture.
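For readers curious what an acoustic index actually computes, the minimal sketch below derives one simple, entropy-style index from a recording using standard Python tools; the file name, parameters, and choice of index are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Minimal sketch: a simple entropy-based acoustic index from a WAV file.
# File name and parameters are illustrative; the study used its own set of indices.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("plot_recording.wav")   # hypothetical recording
if audio.ndim > 1:                                 # mix stereo down to mono
    audio = audio.mean(axis=1)

# Power spectrogram: rows are frequency bands, columns are time frames.
freqs, times, power = spectrogram(audio, fs=rate, nperseg=1024)

# Average energy per frequency band, normalized to a probability distribution.
band_energy = power.mean(axis=1)
p = band_energy / band_energy.sum()

# Shannon entropy of the spectrum: higher values mean sound energy is spread
# across more frequency bands, i.e. a denser, more diverse soundscape.
spectral_entropy = -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))
print(f"Normalized spectral entropy: {spectral_entropy:.3f}")
```

Higher values of an index like this indicate sound energy spread across more frequency bands, the kind of "denser soundscape" signal the researchers describe for mature forest.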

Finally, a deep-learning computer model called a convolutional neural network (CNN) was trained on recordings of 75 known bird species. Once trained, the CNN essentially did the work of bird experts, identifying calls across weeks of recordings rather than short segments.

“Rather than having to do it by hand with an expert, this [CNN] allowed us to really scale up our model,” Schaefer said.
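A rough idea of what such a classifier looks like is sketched below in PyTorch: a small CNN that takes a spectrogram clip and outputs a score for each of 75 species. The architecture, layer sizes, and input shape are assumptions for illustration only, not the authors' actual model.

```python
# Minimal sketch of a CNN that classifies spectrogram clips into 75 bird species.
# Architecture and input shape are illustrative, not the study's actual model.
import torch
import torch.nn as nn

class BirdCallCNN(nn.Module):
    def __init__(self, n_species: int = 75):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # collapse to one value per channel
        )
        self.classifier = nn.Linear(64, n_species)

    def forward(self, x):                   # x: (batch, 1, mel_bands, frames)
        x = self.features(x).flatten(1)
        return self.classifier(x)           # one raw score per species

model = BirdCallCNN()
dummy_clip = torch.randn(1, 1, 64, 128)     # one fake spectrogram clip
print(model(dummy_clip).shape)              # torch.Size([1, 75])
```

Once a model like this is trained on labeled call recordings, it can be run over weeks of audio at a scale no human listener could match.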

For the DNA-based surveys, the team set up light traps to catch insects at night. They then used a technique called metabarcoding to analyze the insects' DNA and gauge the diversity of insect species present.

Sound recording boxes (left) and automatic light traps (right) were set up on each plot to capture sounds and nocturnal insects to assess biodiversity. Image courtesy of the University of Würzburg.

AI acoustic data correlated well with overall biodiversity levels, they found, even for species not directly detected in audio. Schaefer said this was expected, but they were pleased with how well the AI models performed.

“I think the most important finding is that AI models allow us to measure biodiversity levels relatively well, even in their simple versions,” Schaefer said. “These AI models are also a good indicator for the recovery of species that you do not hear in the soundscapes of the forest.”
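One way to picture that kind of finding is to compare an acoustic-derived richness estimate against an independent biodiversity measure, plot by plot. The sketch below does this with a rank correlation on made-up numbers; the data and variable names are hypothetical, not the study's results.

```python
# Minimal sketch: does acoustically detected bird richness track an independent
# biodiversity measure across plots? All values here are made up for illustration.
from scipy.stats import spearmanr

# One entry per study plot (hypothetical numbers).
birds_detected_by_cnn = [12, 18, 25, 31, 38, 44, 47]          # species per plot
insect_otus_from_dna = [150, 210, 260, 330, 390, 420, 455]    # metabarcoding richness

rho, p_value = spearmanr(birds_detected_by_cnn, insect_otus_from_dna)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```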

“Of course (there is) no information on plants or silent animals. However, birds and amphibians are very sensitive to ecological integrity, they are a very good surrogate,” Jörg Müller, a professor and ornithologist at the University of Würzburg Biocenter, told AFP.

The neural network model also effectively tracked bird community changes across the gradient. The results showed that the community of vocalizing animals shifted along the recovery gradient, with the most mature forests having a distinct and more diverse soundscape.

The endangered banded ground cuckoo (Neomorphus radiolosus) in the Chocó rainforest. Image courtesy of Jocotoco Foundation.

“The best thing about the paper is the integration of sound analysis, neural network models and biodiversity levels that are unconnected to sound … specifically the insect data,” Schaefer said.

There are a few caveats to these methods. For instance, acoustic monitoring can’t determine whether species (especially birds) are just passing through or actually live in and use these plots. Nor can it reveal how abundant each species is within a plot.

Bringing this method to more forests will require AI models trained on a greater number and diversity of animal sounds. Sound libraries such as Xeno-Canto and Cornell’s Macaulay Library of Natural Sounds are working toward these goals, but more research and funding are needed, said Hochachka, especially in the Global South.

However, the study demonstrates that sound recordings and AI have the potential to make biodiversity monitoring more transparent, accountable, and accessible. The authors hope this will help managers determine if their hard-fought efforts are bringing back the whole forest, not just the trees.

Banner image of male cock-of-the-rock birds in Peru’s Kosñipata Valley; the species is also found in the Chocó. Image credit: Rhett A. Butler.

Liz Kimbrough is a staff writer for Mongabay and holds a Ph.D. in Ecology and Evolutionary Biology from Tulane University, where she studied the microbiomes of trees. View more of her reporting here.

