Whilst expeditions are par for the course for geographers and experts in prehistory and protohistory, it is rather unusual to find computer scientists out in the field. Prof. Dr. Elmar Nöth and Christian Bergler from the Chair of Computer Science 5 (Pattern Recognition) at FAU are an exception. In the summers of 2018 and 2019, they spent several weeks at a time on the North Pacific off the west coast of Canada, between Prince Rupert and Vancouver Island. The researchers hoped that using deep learning to analyse whale sounds would allow them to gain a better understanding of killer whales. Killer whales are particularly well suited to such analysis, as they communicate frequently within the group, for example while hunting or when interacting with juveniles.
Elmar Nöth and his team were able to access 20,000 hours of audio material from the Canadian OrcaLabs, which have been recording killer whale sounds and calls since 1985. However, nobody knows which of these sounds belongs to which animal, which is unfortunate, as this is precisely the information the researchers need if they are to recognise speech patterns and assign them to individual animals and families. Determined not to be put off, Elmar Nöth decided to listen to the animals directly in their habitat. A film team accompanied the researchers on their undertaking in a second-hand trimaran they had converted to meet their needs. As whales have a very sensitive sense of hearing, the team replaced the old diesel engine with an electric motor to avoid disturbing them with engine noise. The most important piece of equipment for Nöth and his team was a set of eight underwater microphones, known as hydrophones, which were towed behind the boat in two waterproof tubes to pick up any sounds made by whales.
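Deep-learning systems for animal calls typically work not on raw audio but on spectrograms, time-frequency images of the recording. As a minimal sketch of that preprocessing step (the article does not describe the FAU pipeline's parameters, so the frame length, hop size and sample rate below are illustrative assumptions):

```python
import numpy as np

def spectrogram(signal, frame_len=1024, hop=512):
    """Short-time magnitude spectrogram in dB: the usual input
    representation for deep-learning models trained on whale calls.
    frame_len and hop are illustrative choices, not values from
    the article."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    # Slice the signal into overlapping, windowed frames.
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # One-sided FFT magnitude per frame, converted to decibels.
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return 20 * np.log10(mag + 1e-10)

# Example: one second of a 2 kHz test tone sampled at 48 kHz.
sr = 48_000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 2000 * t))
print(spec.shape)  # (time frames, frequency bins)
```

Each row of the resulting array is one time slice, each column one frequency band; a classifier can then treat the spectrogram like an image.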
How did the researchers manage to find the approximately 300 whales in a region roughly the size of Belgium? Although the hydrophones could be used to locate the killer whales directly, Nöth and Bergler came up with an even more pragmatic solution: ‘We set a radio device to the frequency used by commercial whale watching companies, which meant we could always hear where pods of killer whales had been spotted,’ said Elmar Nöth. Once the team detected a group of whales, they deployed the hydrophones. Because the sounds from the animals reached two simultaneously operating hydrophones at slightly different times, the researchers could determine which direction a whale was in. At the same time, the researchers filmed the whales and used image recognition software they had developed themselves to tell the individual animals apart reliably. They hope that in future their research will make it possible for acoustic signals to be matched to specific animals and linked to certain forms of behaviour using deep learning. This would allow scientists to compile a killer whale vocabulary, helping us understand the animals better.
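The direction-finding step described above is classic time-difference-of-arrival (TDOA) estimation: the same call reaches the two hydrophones at slightly different times, and that delay fixes the bearing. A minimal sketch in Python, assuming a hypothetical 2 m hydrophone spacing, a 48 kHz sample rate and a nominal sound speed of 1480 m/s in seawater (none of these figures appear in the article):

```python
import numpy as np

SPEED_OF_SOUND = 1480.0   # m/s in seawater (nominal, assumed)
HYDROPHONE_SPACING = 2.0  # metres between the two hydrophones (assumed)
SAMPLE_RATE = 48_000      # Hz (assumed recording rate)

def estimate_bearing(sig_a, sig_b):
    """Estimate the bearing of a sound source from the time difference
    of arrival between two hydrophone recordings. Returns the angle in
    degrees relative to broadside (0 deg = source perpendicular to the
    line joining the hydrophones)."""
    # Cross-correlate the two recordings; the lag of the peak is the
    # arrival-time difference in samples.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    delta_t = lag / SAMPLE_RATE
    # The path-length difference c * delta_t across the baseline
    # gives the bearing via sin(theta) = c * delta_t / spacing.
    sin_theta = np.clip(SPEED_OF_SOUND * delta_t / HYDROPHONE_SPACING,
                        -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Synthetic example: a short click arriving at hydrophone A
# 30 samples earlier than at hydrophone B.
t = np.arange(512)
click = np.exp(-((t - 100) / 10.0) ** 2)
delayed = np.roll(click, 30)
print(estimate_bearing(delayed, click))
```

One hydrophone pair leaves a front-back ambiguity (two mirror-image bearings give the same delay), which is one reason an array of several hydrophones, such as the eight used here, is preferable to a single pair.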
About the author
Sebastian Teichert conducts research on Arctic biodiversity, coralline red algae and ecosystem engineers at the GeoCenter Northern Bavaria at FAU.
FAU research magazine friedrich
This article first appeared in our research magazine friedrich. You can order the print issue (only available in German) free of charge at email@example.com.