Bodony Earns NSF CAREER Award to Study the Human Voice
What does a person’s ability to say “Ahhhh” have to do with aeronautics? More than one might think.
“Speaking involves a complex interaction between the flexible vocal folds and the unsteady jet of air they generate when they oscillate. Despite our reliance on oral communication, we still don’t know what happens in the voice box to make sound,” said AE Assistant Prof. Daniel J. Bodony, who studies aeroacoustics. “The unsteady glottal jet and the dynamic folds, plus the mouth, tongue, and soft palate, all work in concert to create most of our spoken vowels and consonants. There are fundamental questions about how the dynamic fluid-structure system works in general and, in particular, how these elements work together to create understandable sounds.”
Understanding this biological phenomenon also leads to better designs for aircraft, Bodony maintains. “Gone are the days when high-performance air vehicles could be efficiently designed by the aerodynamicists and structural dynamicists working separately. The space shuttle was probably the last major example of this approach, where rigid ceramic tiles were glued to a rigid structure to soak up the heat.”
“We can no longer treat aircraft as rigid structures, and we now have to look at both solid and fluid mechanics together,” he continued. “The structure heats up, vibrates, and changes the aerodynamics. Both very fast vehicles (hypersonic) and very slow ones (micro UAVs, such as those worked on by AE Assistant Prof. Soon-Jo Chung) are becoming flexible. Predicting the performance of flexible aircraft and, more importantly, developing methods for optimizing and controlling them, including where to place sensors and actuators, is a critical new area in aerospace engineering.”
Studying how air interacts with the vocal folds to make sound, and connecting the fundamental ideas to the flight of UAVs and hypersonic vehicles, has led to a Faculty Early Career Development Program (CAREER) Award for Bodony from the National Science Foundation. The award supplies $400,000 in funding for the project over five years.
Bodony will work to model the vocal folds in three dimensions, in their proper geometry. His goal is not only to better understand how speech works, but also to discover ways it can be controlled and optimized to improve current methods of surgical speech recovery, or enable new ones.
Colleagues from the Beckman Institute for Advanced Science and Technology, including Bioengineering Assistant Prof. Brad Sutton, will join him. Sutton will conduct functional magnetic resonance imaging (fMRI) studies to learn more about how the brain and the speech production system interact during communication. Bodony’s simulation data will provide a missing link between the brain’s inputs to the vocal folds and the resulting speech.
“No one has tried to look at speech in such a holistic way,” Bodony said.