Welcome to the online version of HOLOPHONIX Documentation.
The HOLOPHONIX processor provides a highly advanced environment for mixing, reverberating and spatializing sound elements from various devices, using several spatialization techniques developed by the STMS Lab (Sciences et Technologies de la Musique et du Son), a laboratory created in 1995 and hosted by IRCAM, which brings together the CNRS, Sorbonne University, the French Ministry of Culture and IRCAM around interdisciplinary research in music and sound sciences and technologies.
This polyalgorithmic sound spatialization system is one of a kind: it allows users to select and combine different techniques (or algorithms) in real time. The HOLOPHONIX processor offers a virtually unlimited number of spatialization busses, each able to run one of the integrated sound algorithms - including Higher-Order Ambisonics (2D, 3D), Vector-Base Intensity Panning (2D, 3D), Vector-Base Amplitude Panning (2D, 3D), Layer Based Amplitude Panning, Wave Field Synthesis, Angular 2D, k-Nearest Neighbors, Stereo Panning, Stereo AB, Stereo XY, Native A-Format Ambisonics, Native B-Format Ambisonics, Binaural, and more.
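The simplest of the listed techniques, stereo amplitude panning, can be illustrated with a constant-power pan law. This is a minimal, generic sketch of the technique, not HOLOPHONIX's actual implementation:

```python
import math

def constant_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Constant-power stereo pan law.

    pan ranges from -1.0 (full left) to +1.0 (full right);
    the summed power of the two channels stays constant.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map pan to 0..pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

# A centered pan sends equal power to both channels.
left, right = constant_power_pan(1.0, 0.0)
```

Constant power (rather than constant amplitude) keeps the perceived loudness stable as a source moves across the stereo image.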
The HOLOPHONIX Controller web-based interface is compatible with every operating system that includes a web browser, notably iOS, macOS, Windows and Android. This GUI offers a 3D view of the room and allows easy real-time interaction with sound objects, loudspeakers and many other parameters.
Moreover, this interface allows the user to import 2D drawings (in CAD format), display them as axonometric projections, and visualize them in three dimensions.
There are three main paradigms used in spatialized content production and transmission: channel-based, scene-based, and object-based techniques.
With the channel-based technique, content is produced for a specific, standardized listening system (e.g., stereophonic or 5.1 according to ITU-R BS.775). It is recorded in a format containing as many tracks as diffusion points, and each track is sent directly to the relevant channel.
The scene-based approach treats a spatialized soundstage as a whole, encoding it independently of the listening system (typically in Ambisonics format). This format must be decoded to play the spatialized sound, and it allows global transformations to be applied to the original soundstage (e.g., an overall rotation).
Finally, object-based mixes handle sound entities (most often mono or stereo) associated with control metadata (position, orientation, gain, etc.). All of this data (audio and control) is sent to the playback location, where a rendering engine spatializes the sound elements according to the listening system (headphones or a set of speakers).
During rendering, the end user can therefore interact with the objects that make up the sound stage, or even remix them. When using binaural techniques, the rendering device itself can apply HRTF files customized or adapted to the listener.
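An object-based mix therefore pairs each audio stream with control metadata that travels alongside it. The sketch below models that pairing; the field names are illustrative, not HOLOPHONIX's actual data format:

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    """An object-based mix entity: an audio stream plus control metadata."""
    audio_channel: int                     # index of the mono/stereo stream
    position: tuple[float, float, float]   # x, y, z, e.g. in metres
    orientation_deg: float = 0.0           # facing direction of the object
    gain_db: float = 0.0                   # level offset applied at render time

# The renderer receives the audio plus this metadata and spatializes
# the object for whatever listening system is present locally.
vocal = SoundObject(audio_channel=1, position=(0.0, 2.0, 0.5))
```

Because the metadata stays separate from the audio until rendering, the same mix can be played back on headphones (binaural) or on any speaker layout.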
The HOLOPHONIX processor supports immersive sound in two ways. First, frontally, offering optimal sound spatialization so that every audience member perceives correct sound localization, whether seated at the center or at the side of the venue. Second, immersively, thanks to algorithms that allow full 3D positioning of sound sources.
HOLOPHONIX allows combining multiple algorithms simultaneously. Each algorithm can thus be chosen according to its relevance considering the associated electroacoustic setup, the audio content, or the artistic choices.
For example, the user can combine a Wave Field Synthesis (WFS) bus for frontal sound reinforcement with an Ambisonics (HOA) bus for the immersive speakers, providing the audience with a fully immersive sonic experience.
As a highly customizable environment, HOLOPHONIX offers many possibilities, such as decoding Ambisonics streams from external software or Ambisonics microphones directly inside the processor.
The HOLOPHONIX signal path is based on five elementary objects:
- Dante™ audio inputs (or any other digital audio transport format, according to the chosen configuration).
- The virtual sources, which correspond to the sound objects positioned in space and rendered by the spatialization algorithms.
- The spatialization algorithm buses, to which sources are assigned. Each bus uses its own spatialization algorithm, and is associated with a specific set of speakers.
- The direct routings, which send input channels directly to speakers.
- The speakers, matched to Dante™ audio outputs (or any other digital audio transport format, according to the chosen configuration). The position of the speaker in the venue will be used by the algorithm.
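The relationships between these five objects can be sketched as a simple routing model. The class and field names here are illustrative, not HOLOPHONIX's internal API:

```python
from dataclasses import dataclass, field

@dataclass
class Speaker:
    """A physical speaker, matched to an audio output channel."""
    name: str
    position: tuple[float, float, float]  # venue position used by the algorithm

@dataclass
class Bus:
    """A spatialization bus: one algorithm, one set of speakers."""
    algorithm: str                        # e.g. "WFS", "HOA 3D"
    speakers: list[Speaker]
    sources: list[str] = field(default_factory=list)

    def assign(self, source_name: str) -> None:
        """Assign a virtual source to this bus for rendering."""
        self.sources.append(source_name)

# A WFS bus for frontal reinforcement, fed by one virtual source:
front = [Speaker("L", (-2.0, 4.0, 0.0)), Speaker("R", (2.0, 4.0, 0.0))]
wfs_bus = Bus(algorithm="WFS", speakers=front)
wfs_bus.assign("source 1")
```

Each source can be assigned to one or more buses, and each bus renders its sources onto its own speaker set using its own algorithm.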
Projects are folders that can contain:
- Venue map
- HRTF files for binaural
- SDIF files for WFS (legacy)
The following settings are stored in the project:
- Audio settings (except buffer size)
- Open Sound Control (OSC) settings
- Venue map alignment
- Default startup preset
- Virtual sources
- Spatialization buses
- Speaker setup
- Routings
- Buffer size
and all related parameters.
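Since the project stores Open Sound Control (OSC) settings, the processor's parameters can be driven by OSC messages over the network. The sketch below hand-encodes a minimal OSC message using only the standard library; the address path and port shown are hypothetical — consult the HOLOPHONIX OSC reference for the actual namespace:

```python
import struct

def _pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message carrying float32 arguments (big-endian)."""
    typetags = "," + "f" * len(args)
    packet = _pad(address.encode()) + _pad(typetags.encode())
    for value in args:
        packet += struct.pack(">f", value)
    return packet

# Hypothetical address and arguments for positioning a source:
packet = osc_message("/source/1/xyz", 1.0, 2.0, 0.5)
# A UDP socket would then send it to the processor, e.g.:
# sock.sendto(packet, ("holophonix.local", 4003))  # host/port are assumptions
```

Dedicated libraries such as python-osc provide the same encoding ready-made; the manual construction above only shows what travels on the wire.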