
AI-Generated Wildlife Imagery: A New Frontier in Conservation Efforts

TLDR: Researchers at Duke University’s Marine Robotics and Remote Sensing Lab are pioneering the use of AI-generated ‘deepfake’ images of rare wildlife, such as the critically endangered North Atlantic right whale, to overcome data scarcity challenges in ecological research. This innovative approach aims to enhance the training of AI models for detecting and monitoring elusive species, thereby bolstering conservation strategies.

In a significant stride for ecological research, scientists at Duke University’s Marine Robotics and Remote Sensing Lab (MaRRS Lab) are harnessing the power of artificial intelligence to create ‘deepfake’ images of rare wildlife. This groundbreaking application of AI, traditionally associated with celebrity spoofs, is poised to revolutionize conservation efforts by addressing the critical issue of data scarcity for endangered species.

Ecologists increasingly rely on remote sensing imagery from satellites, planes, and drones to study species behavior and population trends. However, training AI detection tools for rare or elusive species, like the North Atlantic right whale, presents a formidable challenge due to the limited availability of real-world footage. As Dave Johnston, director of the MaRRS Lab at Duke University’s Nicholas School of the Environment, explains, ‘We are truly in the age of big data when it comes to remote sensing in ecology and conservation. Over the past two decades, our ability to collect high-resolution remote-sensing imagery has grown exponentially, largely due to advances in drone technology and increased satellite capabilities.’ Yet, for species as scarce as the North Atlantic right whale, which numbers fewer than 400 individuals, obtaining a sufficiently diverse set of images for robust AI model training is difficult.

Henry Sun, a 2025 Duke graduate, explored this concept in his senior thesis, investigating whether AI could generate images realistic enough to supplement drone footage of the North Atlantic right whale. His research, inspired by a collaboration to build a space-based detection system for these whales, focused on using diffusion models—AI systems that create images from descriptive text or exemplary images. Sun’s team is reportedly the first to apply diffusion models for this specific purpose in whale detection.

The researchers experimented with various image generation methods, including text prompts, image prompts, and a technique called fine-tuning. Sun noted that initial attempts sometimes produced ‘anatomically deformed whale images, like whales that are conjoined or whales with multiple sets of fins,’ indicating the model’s incomplete learning. Fine-tuning, which involves further training a base model on a smaller, specific dataset, proved crucial in overcoming these inaccuracies.
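The article does not share the lab’s code, but the core idea of fine-tuning is simple to illustrate: take a model already trained on a large, generic dataset, then continue gradient updates on a small, species-specific one. The following is a minimal numpy sketch using a toy linear model in place of a diffusion model; all names and data here are illustrative stand-ins, not the Duke team’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error for a linear model."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# "Base" training: a large, generic dataset (stand-in for broad image data).
X_base = rng.normal(size=(500, 3))
w_base_true = np.array([1.0, -2.0, 0.5])
y_base = X_base @ w_base_true + rng.normal(scale=0.1, size=500)

w_pretrained = train(np.zeros(3), X_base, y_base)

# Fine-tuning: a *small* dataset with a shifted target (stand-in for a
# handful of species-specific drone images, e.g. right-whale shots).
X_ft = rng.normal(size=(30, 3))
w_ft_true = np.array([1.2, -1.8, 0.9])
y_ft = X_ft @ w_ft_true + rng.normal(scale=0.1, size=30)

mse_before = np.mean((X_ft @ w_pretrained - y_ft) ** 2)
w_finetuned = train(w_pretrained, X_ft, y_ft, lr=0.05, steps=100)
mse_after = np.mean((X_ft @ w_finetuned - y_ft) ** 2)

print(mse_after < mse_before)  # fine-tuning fits the small target set better
```

Starting from the pretrained weights rather than from scratch is what lets a tiny dataset steer the model, which is the same reason fine-tuning helped the diffusion model stop producing conjoined whales.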

To validate the credibility of their AI-generated whales, the team created hundreds of synthetic aerial images of both North Atlantic right whales and humpback whales. They then used Google’s Reverse Image Search to see if the tool could correctly identify the species in the synthetic data. While images generated solely from text or image prompts often led to misidentification (e.g., North Atlantic right whales being mistaken for humpbacks), the fine-tuned images resulted in correct identification for almost all instances of both species.

The next phase of this research, led by Duke undergraduate Max Niu, will directly test whether these synthetic whale images can effectively supplement training data for AI detection models. Sun stated, ‘Max has been training deep-learning models using both real images and some of the fake images that I’ve made. The idea is to see if there’s a proportion of fake images that will benefit the model.’
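The experiment Sun describes amounts to a sweep over the synthetic fraction of the training set. The sketch below shows one plausible way to build such mixed datasets, keeping every real sample and topping up with synthetic ones; the function name and array shapes are assumptions for illustration, not the researchers’ actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

def build_training_set(real_imgs, synthetic_imgs, synthetic_fraction):
    """Combine real and synthetic samples so synthetic_fraction of the
    final set is synthetic, while keeping every real sample."""
    n_real = len(real_imgs)
    # Solve n_syn / (n_real + n_syn) = synthetic_fraction for n_syn.
    n_syn = int(round(synthetic_fraction * n_real / (1 - synthetic_fraction)))
    n_syn = min(n_syn, len(synthetic_imgs))
    chosen = rng.choice(len(synthetic_imgs), size=n_syn, replace=False)
    return np.concatenate([real_imgs, synthetic_imgs[chosen]])

real = rng.normal(size=(40, 8))        # stand-ins for real drone images
synthetic = rng.normal(size=(400, 8))  # stand-ins for diffusion-model outputs

for frac in (0.0, 0.25, 0.5):
    mixed = build_training_set(real, synthetic, frac)
    print(frac, len(mixed))  # dataset grows as the synthetic share rises
```

Training a detection model on each mixture and comparing validation accuracy would then reveal whether some proportion of fake images actually helps, which is the question Niu’s follow-up work is designed to answer.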


While the potential for AI in conservation is immense, ethical considerations are also being addressed. Holly Houliston, a Ph.D. student with the British Antarctic Survey and the University of Cambridge, emphasized that the energy and water demands of AI data centers necessitate a conservative and targeted approach to generative AI data augmentation. ‘You have to be really clear on the ecological question you’re trying to answer,’ Houliston advised, suggesting that synthetic imagery should be generated only when truly needed, for instance, to augment data for specific life stages like calves. This ensures responsible use of a technology that, as Johnston concludes, represents a growing ‘intersection between computer science and environmental sciences.’ Duke University is further fostering this interdisciplinary work through initiatives like the ‘Artificial Intelligence for Metascience’ research program, a partnership with OpenAI aimed at accelerating scientific discovery through AI.

Ananya Rao
Ananya Rao is a tech journalist with a passion for dissecting the fast-moving world of Generative AI. With a background in computer science and a sharp editorial eye, she connects the dots between policy, innovation, and business. Ananya excels in real-time reporting and specializes in uncovering how startups and enterprises in India are navigating the GenAI boom. She brings urgency and clarity to every breaking news piece she writes. You can reach her at: [email protected]
