3D spatial reasoning in dynamic, audio-visual environments is a cornerstone of human cognition yet remains largely unexplored by existing Audio-Visual Large Language Models (AV-LLMs) and benchmarks, which predominantly focus on static or 2D scenes. We introduce SAVVY-Bench, the first benchmark for 3D spatial reasoning in dynamic scenes with synchronized spatial audio.
SAVVY-Bench comprises thousands of carefully curated question-answer pairs probing both directional and distance relationships involving static and moving objects, and requires fine-grained temporal grounding, consistent 3D localization, and multi-modal annotation. To tackle this challenge, we propose SAVVY, a novel training-free reasoning pipeline with two stages: (i) Egocentric Spatial Tracks Estimation, which leverages AV-LLMs as well as other audio-visual methods to track the trajectories of the objects relevant to the query using both visual and spatial audio cues, and (ii) Dynamic Global Map Construction, which aggregates these multi-modal object trajectories into a unified, dynamic global map. Using the constructed map, the final answer is obtained through a coordinate transformation that aligns the global map with the queried viewpoint.
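To make the final step concrete, here is a minimal sketch of such a viewpoint alignment, assuming the queried viewpoint is given as a standard rotation-plus-translation pose; the function names (`to_viewpoint_frame`, `answer_direction_and_distance`), the axis convention, and the example numbers are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the SAVVY implementation): map an object's position from
# the global map frame into the queried viewpoint's frame, then read off the
# distance and a coarse left/right direction for the answer.
import numpy as np

def to_viewpoint_frame(p_world, R_wc, t_wc):
    """Map a 3D point from world coordinates into the viewpoint frame.

    R_wc (3x3) and t_wc (3,) give the viewpoint-to-world pose; inverting it
    takes world coordinates into the viewpoint's local coordinates.
    """
    return R_wc.T @ (p_world - t_wc)

def answer_direction_and_distance(p_obj_world, viewpoint_pose):
    """Hypothetical helper: given an object's global-map position and a queried
    viewpoint pose (R, t), return its distance and a coarse direction."""
    R_wc, t_wc = viewpoint_pose
    p_local = to_viewpoint_frame(p_obj_world, R_wc, t_wc)
    distance = float(np.linalg.norm(p_local))
    # Assumed convention: x = right, z = forward in the viewpoint frame.
    direction = "right" if p_local[0] > 0 else "left"
    return distance, direction

# Example usage with made-up numbers: viewpoint at the world origin, axis-aligned.
pose = (np.eye(3), np.zeros(3))
dist, side = answer_direction_and_distance(np.array([1.0, 0.0, 2.0]), pose)
print(f"{dist:.2f} m to the {side}")
```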
Empirical evaluation demonstrates that SAVVY substantially enhances the performance of state-of-the-art AV-LLMs, setting a new standard and paving the way for dynamic 3D spatial reasoning with AV-LLMs.
For the best viewing experience, we recommend watching the video with headphones or a device that supports spatial audio playback.
SAVVY-Bench is the first benchmark for evaluating 3D spatial reasoning of AV-LLMs in dynamic, multi-room scenes. It builds on the egocentric Aria Everyday Activities (AEA) dataset, which includes over 600 sound events across 58 daily-life scenarios. Each scenario provides synchronized visual input and spatial audio captured by a 7-microphone array on Aria glasses. SAVVY-Bench poses queries on spatial relations among moving and static entities in 3D space.
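To give a sense of the kind of query the benchmark poses, below is a purely hypothetical question-answer item; the field names and values are assumptions for illustration and do not reflect the released benchmark schema.

```python
# Illustrative only: a hypothetical item in the spirit of SAVVY-Bench.
# All field names and contents below are assumed, not taken from the dataset.
example_item = {
    "scenario": "AEA kitchen / living-room recording",
    "question": (
        "When the blender stops, which is closer to the wearer: "
        "the person who was walking or the front door?"
    ),
    "requires": ["temporal grounding", "3D localization", "audio + video cues"],
    "answer": "the front door",
}
print(example_item["question"])
```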
We introduce SAVVY-Bench and SAVVY, the first benchmark and training-free pipeline for 3D spatial reasoning in dynamic audio-visual environments. SAVVY-Bench features thousands of spatial questions grounded in egocentric videos and multi-channel audio, spanning both egocentric and allocentric perspectives. SAVVY significantly improves spatial reasoning performance over standard AV-LLMs by integrating snapshot-based perception, audio-visual tracking, and dynamic global mapping. Together, they provide a foundation for advancing spatial intelligence in multi-modal AI systems. Future work includes adapting our pipeline to generate spatial reasoning traces for AV-LLM fine-tuning and enhancing spatial audio understanding through real-world audio-visual pretraining.
@article{SAVVY}