This year, for the first time, scientists are working with Microsoft artificial intelligence (AI) experts to train AI programs to perform the most tedious, expensive, and time-consuming part of analyzing acoustic data: classifying detections as beluga calls or false signals.
The team is working with a type of AI called “deep learning.” Deep learning imitates the way the human brain processes data and uses it for decision making.
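The classification task described above can be illustrated with a minimal sketch: a small feed-forward neural network that labels each detection as a beluga call or a false signal. Everything here is invented for illustration; the synthetic features (peak frequency, duration, bandwidth) and the network shape are assumptions, not the actual NOAA/Microsoft model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectrogram-derived features per detection:
# peak frequency (kHz), duration (s), bandwidth (kHz).
# The two clusters stand in for true calls vs. false signals.
n = 400
calls = rng.normal([3.5, 1.0, 2.0], 0.5, size=(n, 3))   # beluga-like detections
noise = rng.normal([1.0, 0.2, 0.5], 0.5, size=(n, 3))   # false signals
X = np.vstack([calls, noise])
y = np.concatenate([np.ones(n), np.zeros(n)])            # 1 = beluga call

# One hidden layer with tanh units, logistic output; trained by
# full-batch gradient descent on the cross-entropy loss.
W1 = rng.normal(0, 0.1, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p.ravel()

lr = 0.05
for _ in range(2000):
    h, p = forward(X)
    g = (p - y)[:, None] / len(y)        # dLoss/dlogit, averaged
    gh = (g @ W2.T) * (1 - h**2)         # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

_, p = forward(X)
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On well-separated synthetic clusters like these, even a tiny network classifies nearly every detection correctly; the real value of the approach, as the article notes, is that once trained it labels months of recordings unattended.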
“Deep learning is as close as we can get to how the human brain works,” said Manuel Castellote, NOAA Fisheries affiliate with the University of Washington, Joint Institute for the Study of the Atmosphere and Ocean, who led the study. “And so far the results have been beyond expectation. Machine learning is achieving more than 96 percent accuracy in classifying detections compared to a scientist doing the classification. It is even picking up things human analysts missed. We didn’t expect it to work as well as humans. Instead, it works better.”
The machine learning model is not only highly accurate, but can also process an enormous amount of data very quickly. “A single mooring dataset, with 6-8 months of sound recordings, would take 10-15 days to manually classify all the detections,” Castellote explained. “With machine learning tools, it is done overnight. Unsupervised.”
A network of 15 moorings in Cook Inlet is deployed and retrieved twice a year. The machine learning model will mean a huge savings of time and money in analyzing the acoustic data retrieved from these moorings.
“Remote sensors, like acoustic moorings, have revolutionized our ability to monitor wildlife populations, but have also created a backlog of raw data that has ecologists spending more time clicking than conducting research. Work like this makes scientists more efficient, so they can get back to doing science instead of labeling data,” said Dan Morris, principal scientist on the Microsoft AI for Earth team.
Most importantly, it means managers get the accurate information they need much more quickly.
“This project, promoting biodiversity by helping beluga whales, speaks to the core of what AI for Good is about—enabling scientists to make the world a better place,” said Rahul Dodhia, senior director of data science for Microsoft’s AI for Good team.
A Major Advance
The use of machine learning for image analysis has been well established for years, but only very recently has it begun to be used in bioacoustics.
“This is definitely the first time machine learning has been applied to acoustic monitoring of belugas, and one of the few efforts to date for any cetaceans,” Castellote said. “Now we are working to optimize it.”
The team is currently using machine learning for only part of the analysis: classifying detections. The next step is to teach the model to both detect and classify beluga calls. They hope to accomplish that by summer 2020.
Castellote points out that this first step in using machine learning to classify beluga signals has much broader applications. The Microsoft AI team has developed a tool that can be customized to many other species.
“Normally when we advance in our analysis methods, it is in little steps,” said Castellote. “This is a major step to a new and very different approach.”
This research is a collaboration between NOAA Fisheries’ Alaska Fisheries Science Center Marine Mammal Laboratory, Microsoft AI for Good, and the University of Washington’s Joint Institute for the Study of the Atmosphere and Ocean.