Ever hear birds chirp and wonder if they’re talking to each other? Cognitive scientists at the University of California, Merced, can now answer that question, using a new method for sound analysis. And when they listened closely, they found that some animal speech patterns were a lot like our own.
[Sound recording of orcas.] That’s a pod of orcas playing in Puget Sound. UC Merced professor Chris Kello is listening through computer speakers in his tidy office.
“What you can hear is that there seems to be one sound, and then another sound comes in, and the other sound comes back. And what we can imagine is that there is some communication going on here between the orca,” Kello said.
Kello has spent the better part of this year listening to singing whales and trilling birds. But what he’s listening for can’t be captured by the ear alone.
“If somebody’s very excited, or very somber, it will change the timing of your voice. And at some level that’s the essence of what our method is picking up on,” Kello said. “There’s other layers that are going on at faster time scales that we don’t really pick up on. What this method shows is that all of these different layers are related to each other.”
He and fellow researcher Ramesh Balasubramaniam developed a barcode system for analyzing sound. It captures spikes in volume during audio recordings and turns each one into a line.
Often, these volume peaks are happening so fast that we can't hear them. But by looking at the lines, the scientists can tell when a lot of information or emotion is being conveyed at once. They call it a temporal hierarchy. Balasubramaniam says this method could help answer a question that scientists have been mulling for a really long time.
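The core idea of the barcode, as described here, is simple: scan a recording for moments when the volume spikes, and draw a line at each one. Here is a minimal sketch of that idea in Python, assuming a raw audio signal as a NumPy array; the function name, frame size, and threshold are illustrative assumptions, not the researchers' actual method.

```python
import numpy as np

def amplitude_peaks(signal, rate, frame_ms=25, threshold=2.0):
    """Return the times (in seconds) of frames whose loudness spikes
    well above the recording's average -- each time would become
    one line in the 'barcode'."""
    frame_len = int(rate * frame_ms / 1000)        # samples per frame
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))   # RMS loudness per frame
    peak_idx = np.where(energy > threshold * energy.mean())[0]
    return peak_idx * frame_len / rate

# Toy example: two seconds of quiet noise with two loud bursts,
# one at 0.5 s and one at 1.5 s.
np.random.seed(0)
rate = 8000
t = np.arange(rate * 2)
sig = 0.01 * np.random.randn(len(t))
sig[4000:4400] += 0.5 * np.sin(2 * np.pi * 440 * t[4000:4400] / rate)
sig[12000:12400] += 0.5 * np.sin(2 * np.pi * 440 * t[12000:12400] / rate)
print(amplitude_peaks(sig, rate))  # peak times near 0.5 s and 1.5 s
```

Stacking these peak times as vertical lines gives one row of the barcode; the temporal hierarchy comes from repeating the analysis at progressively longer frame sizes and comparing the rows.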
“Is there something that is common between language and music, and other forms of communication that, say, other species engage in? And part of what this work shows is that there is something that is common across these things, which is this hierarchical temporal structure," Balasubramaniam said. "That it’s not just that humans have phonemes that become syllables that become words that become sentences.”
Their new paper also uses music as a comparison tool. Take a bit of jazz. On the barcode, that looks a lot like a cocktail conversation between humans. It also resembles the orca pod. In all three recordings, the researchers found the same densely clustered pattern.
That doesn’t tell Kello what’s being said - just how much information is being shared, and whether a conversation is taking place.
“What we’re measuring is the coordination - the ability, the cooperation for these species to sort of work together to solve problems," Kello said.
But not everything is layered so intricately. Take the humpback whale and the hermit thrush. They're both solo singers. And their barcodes are remarkably alike.
And if you’re wondering about the temporal hierarchy of your house pet, don’t get too excited. The researchers did analyze some cat recordings... but they didn't find anything worth publishing.
Human language is much more complex, and the UC Merced researchers found some surprising quirks hidden in the barcodes.
“If you’re talking about a friendly topic where you have common ground, where you have shared interests, our barcodes converge. Your speech becomes like my speech and my speech becomes like your speech," Kello said. "However, in a polarizing conversation where you’re on one side of the political spectrum, our barcodes do not converge.”
So what does all this mean? The scientists say it opens doors to lots of further research. Automatic sound classification - the ability to tell if something is music or speech without actually listening to it - is one possibility. And grad students are already working on comparing barcodes between languages. Going forward, Kello and Balasubramaniam might tune into another animal: [sound of elephant call].