Google’s AI research lab, Google DeepMind, says that it has created an AI model that can help decipher dolphin vocalizations, supporting research efforts to better understand how dolphins communicate.
The model, called DolphinGemma, was trained using data from the Wild Dolphin Project (WDP), a nonprofit that studies Atlantic spotted dolphins and their behaviors. Built on Google’s open Gemma series of models, DolphinGemma, which can generate “dolphin-like” sound sequences, is efficient enough to run on phones, Google says.
This summer, WDP plans to use Google’s Pixel 9 smartphone to power a platform that can create synthetic dolphin vocalizations and listen to dolphin sounds for a matching “reply.” WDP previously used the Pixel 6 for this work; upgrading to the Pixel 9 will let the organization’s researchers run AI models and template-matching algorithms at the same time, Google says.