The AuralReality project aims to transform the way immersive spatial audio is developed, implemented, and experienced in location-based entertainment formats (amusement parks, immersive exhibitions, escape rooms, and event venues). It responds directly to growing market demand for interactive, accessible, and technically flexible audio environments and addresses critical shortcomings of existing solutions: the lack of real-time spatial simulation tools (a digital twin), limited support for multi-zone speaker playback, inadequate authoring and deployment workflows, and rigid licensing models.
At the heart of AuralReality is the development of a modular, low-latency software platform that unifies three previously separate areas of immersive audio production:
- an authoring tool for spatial sound design across multiple zones, including timeline automation, OSC/DMX control, and dynamic routing,
- a runtime engine for object-based rendering on any speaker layout, and
- a real-time acoustic digital twin based on real measurement data that enables predictive simulations and offline authoring.
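To make the runtime engine's object-based approach concrete, the following is a minimal sketch of how per-speaker gains might be derived for a sound object over an arbitrary 2D speaker layout. The function name, the inverse-distance panning law, and the rolloff parameter are illustrative assumptions, not the AuralReality API; a production renderer would typically use a method such as VBAP or ambisonic decoding.

```python
import math

def pan_gains(obj_pos, speakers, rolloff=2.0):
    """Return one normalized gain per speaker for a point source.

    Hypothetical distance-based amplitude panning: closer speakers
    receive more energy, and gains are power-normalized so the
    object's loudness stays constant as it moves through the zone.
    """
    weights = []
    for spk in speakers:
        d = math.dist(obj_pos, spk)
        weights.append(1.0 / (1e-6 + d) ** rolloff)
    # Normalize so the total power (sum of squared gains) equals 1.
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]

# Four speakers at the corners of a 10 m x 10 m zone; the object
# sits near the lower-left corner, so that speaker dominates.
layout = [(0, 0), (10, 0), (0, 10), (10, 10)]
gains = pan_gains((2.0, 2.0), layout)
```

Because the gains depend only on object position and speaker coordinates, the same object automation can drive any layout the engine is configured with, which is the core benefit of object-based rendering over channel-based mixes.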
This toolchain enables sound designers, integrators, and operators to efficiently create, test, and deploy interactive spatial audio experiences in complex real-world environments. With the authoring system, spatial soundscapes can be designed remotely and virtually in a digital twin of the target location, including full auralization, signal distribution, and visualization of the speaker setup. This reduces costly on-site iterations and supports development in parallel with other trades (e.g., show control, lighting, or video).
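A core operation behind such auralization is convolving a dry source signal with a room impulse response measured at the target location. The sketch below is an illustrative assumption about how that step might look, not AuralReality code; it uses direct convolution for clarity, whereas a real-time engine would use partitioned FFT convolution for low latency.

```python
def convolve(signal, impulse_response):
    """Direct convolution of a dry signal with a measured room IR.

    Each input sample launches a scaled copy of the IR; summing the
    copies yields the signal as it would sound in the measured room.
    """
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# A click played through a toy two-tap "room": direct sound at
# full level plus one quieter echo two samples later.
dry = [1.0, 0.0, 0.0]
ir = [1.0, 0.0, 0.5]
wet = convolve(dry, ir)
```

Because the impulse responses come from real measurements, designers can judge reverberation, echoes, and coverage offline, before any speaker is installed on site.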