Building Virtual Spaces with Realistic Sound
Abstract: The rapid development of augmented and virtual reality (AR and VR) has given rise to a host of fascinating new problems in computational acoustics. It has become apparent that rendering plausible acoustics is key to immersive, engaging experiences: to allowing, for instance, an AR device wearer to feel as if they are sharing physical space with a remote caller, or to allowing VR users to experience sound that aligns with the visuals they are consuming, sound that emanates from the geometry around them and that follows their intuition about how sounds move and change in the real world. While these problems have traditionally been studied with DSP and physics-based tools, my work looks towards data-driven, deep learning approaches, as well as approaches that leverage human feedback and perception data. In this talk, I will cover explorations in virtual acoustic matching and sound field estimation, personalized HRTF inference for accurate spatial localization, and observer model construction for perceptual validation. More broadly, I hope to provide an overview of a rich, newly emerging research area at the intersection of audio-visual machine learning and auditory perception that is ripe for deeper study.
Bio: Ishwarya Ananthabhotla is a Research Scientist on the Meta Reality Labs Research Audio Team. At present, she works on machine learning applied to problems in room acoustics, spatial audio, auditory perception, and behavior and communication understanding in conversations. More generally, her research interests lie at the intersection of machine learning, audio, signal processing, and auditory cognition. She completed her Bachelor's degree in Electrical Engineering and Computer Science at MIT in 2015, her M.Eng. at MIT in 2016, and her PhD at the MIT Media Lab in 2021. She was supported by the NSF Graduate Research Fellowship (2016-2019) and the Apple AI/ML Fellowship (2020-2022), and spent time interning at Meta Reality Labs Research and Spotify Research.