Wonder/Wander immersive live set

On May 17th, 2023, I performed a live set featuring my EP titled “Wonder/Wander” in a spatial audio format.

I started this project back in December 2022, with the goal of actually producing in spatial audio from the start (not producing the songs in stereo and then upmixing the stems to spatial audio, which is what happens in most cases nowadays). Producing natively makes the mixing process later on much easier, because you avoid relying on things like bus processing, which is way harder to do in spatial audio (not impossible, but you need to make use of some workarounds). Once the songs were produced and nearing their final stages, I would turn them into a playable live set, specifically designed to be performed in spatial audio.

In this article I will go over what exactly an immersive live set is, how I produced the songs for it, and how I made it possible to perform them live in this spatial audio format.

Pre-production

Before the actual live set, I had to produce the songs and mix them in spatial audio. Even with this first step I already ran into some problems. I prefer to work in Ableton Live to produce my music, but this DAW does not support spatial audio yet, so I had to find a workaround. Before I explain that, though, I need to cover some basic concepts of spatial audio.

What are beds and objects?

In a normal stereo mix, you can place elements in front of you (with the phantom center), to your left, to your right, and anywhere in between. You can also create the illusion of distance by playing with volume and reverb. In spatial audio you can do all of these things, but instead of the 2D environment of stereo you work in a 3D environment, making it possible to place elements behind you, above you, basically anywhere in 3D space, limited only by the speaker setup you have. This speaker setup is very important because it is what we, in most cases, will call the "bed". A bed is a collection of channels that stay in a fixed position, in most cases the locations of your physical speakers. This is very easy to work with because you can send audio directly to the speakers.

But what if you want to move elements around in real time? This is where objects come in. With objects, you can automate the location of any element of your song, regardless of whether there is a speaker in that location or not: the renderer calculates how loud the surrounding speakers need to play to create a phantom "speaker", so it sounds as if the audio is coming from the object's position. By using objects you can make sound move anywhere around you in 3D space; the possibilities are (almost) limitless.
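
To make the phantom-speaker idea a bit more concrete, here is a minimal sketch in Python. This is not how Dolby Atmos or any real renderer actually works internally; the speaker angles and the constant-power law are my own simplifications, just to show how an object's position can be turned into per-speaker gains on a horizontal ring of speakers.

```python
import math

# Speaker azimuths in degrees for a simplified horizontal ring
# (0 = front centre, positive = clockwise). Purely illustrative values.
SPEAKERS = {"L": -30, "R": 30, "Rss": 90, "Rrs": 135, "Lrs": -135, "Lss": -90}

def pan_object(azimuth_deg):
    """Return per-speaker gains for a phantom source at `azimuth_deg`.

    Finds the two speakers that enclose the object and splits the signal
    between them with a constant-power (sin/cos) law, so the perceived
    loudness stays roughly the same as the object moves around.
    """
    ordered = sorted(SPEAKERS.items(), key=lambda kv: kv[1])
    az = (azimuth_deg + 180) % 360 - 180  # wrap into [-180, 180)

    # Walk the adjacent speaker pairs (wrapping around the back).
    for (name_a, az_a), (name_b, az_b) in zip(ordered, ordered[1:] + ordered[:1]):
        span = (az_b - az_a) % 360
        offset = (az - az_a) % 360
        if offset <= span:
            frac = offset / span  # 0 = fully on speaker A, 1 = fully on B
            gains = {name: 0.0 for name in SPEAKERS}
            gains[name_a] = math.cos(frac * math.pi / 2)
            gains[name_b] = math.sin(frac * math.pi / 2)
            return gains

gains = pan_object(60)  # object halfway between R (30) and Rss (90)
print({k: round(v, 2) for k, v in gains.items() if v > 0})
# -> {'R': 0.71, 'Rss': 0.71}
```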

Regarding the bed matching the number of physical speakers, there are a few exceptions. For example, when using the Dolby Atmos Renderer, you are limited to a 7.1.2 bed. If you use a speaker setup with more speakers (e.g., 7.1.4 or 9.1.6), the extra speakers act as static objects.
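
As a small illustration of that limitation (channel abbreviations follow common Dolby naming; treat the exact labels as my approximation), the 7.1.2 bed only has one pair of top channels, so in a 7.1.4 layout the four height speakers fall outside the bed and have to be reached another way, e.g. as static objects:

```python
# Channel names of a Dolby Atmos 7.1.2 bed (fixed, speaker-fed channels).
BED_7_1_2 = ["L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs", "Lts", "Rts"]

# Physical speakers in a 7.1.4 playback layout: two pairs of height
# speakers (top front and top rear) instead of the bed's single top pair.
LAYOUT_7_1_4 = ["L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs",
                "Ltf", "Rtf", "Ltr", "Rtr"]

# Speakers without a dedicated bed channel, which therefore need to be
# addressed some other way, e.g. as static objects.
extra_speakers = [ch for ch in LAYOUT_7_1_4 if ch not in BED_7_1_2]
print(extra_speakers)  # ['Ltf', 'Rtf', 'Ltr', 'Rtr']
```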

Production process

To solve the issue of producing all the songs in Ableton Live while also being able to do some basic mixing in spatial audio, I used ADMix by Ircam. ADMix is a spatial renderer that let me route channels from Ableton into it and pan them on a channel-based (bed) level. ADMix can also work with objects through a VST plugin called ToscA, which can automate an object's location and send the panning data to ADMix. In theory this seemed like a good solution, but in practice it did not work consistently, so I stuck to working on a channel-based level. Eventually, when the arrangement was finished and I got to the mixing stage, I exported all the tracks from Ableton and mixed the songs in Pro Tools using the Dolby Atmos Renderer.
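
For context, what a plugin like ToscA essentially does is stream automation values out of the DAW as OSC messages that the renderer listens to. Below is a rough sketch of that idea in Python using the python-osc package; the host, port, and OSC address pattern are invented placeholders for illustration, not the actual paths ADMix or ToscA expose.

```python
import time
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical renderer host/port and OSC address pattern; placeholders
# only, not the real ones used by ADMix or ToscA.
client = SimpleUDPClient("127.0.0.1", 9000)

def circle_object(object_id, duration_s=8.0, steps=200):
    """Sweep one object once around the listener on the horizontal plane."""
    for i in range(steps):
        azimuth = 360.0 * i / steps  # degrees around the listener
        client.send_message(f"/object/{object_id}/azimuth", azimuth)
        client.send_message(f"/object/{object_id}/elevation", 0.0)
        time.sleep(duration_s / steps)

circle_object(1)
```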

Live set production

Once the songs were close to being finished, I could start working on the preparations for the live set. In the future, I plan to do things differently because much of this was not very time-efficient. The PA setup was built around an Avid S6L-48D mixing desk, which received all 64 audio channels I sent out over the AVB protocol, plus MIDI timecode. From there, the audio was sent through Dante to Spat Revolution for panning before going back through the mixing desk and out to the speakers. The problem was that I had to redo all the panning and mixing of the tracks in Spat Revolution, which cost a lot of time since I had to go over each separate track and make spreadsheets with the panning positions and the reverb and delay send levels. Additionally, I had to group a few tracks together, since most songs had more channels than the 64 I could send out. For automated objects, we used QLab for the automation and triggered the cues with MIDI timecode. In the future, I plan to run a renderer myself and send out the bed channels, with the object rendering happening inside that renderer. That way I could even use ADM files for the backing track without worrying about objects not being rendered. The only downside would be that the FOH mixing engineer has less control over the sound.
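
If I were to redo the spreadsheet step, scripting it would save a lot of time. Here is a minimal sketch of dumping per-track panning positions and send levels to a CSV with Python's standard library; the track names and values are invented examples, not the actual show data.

```python
import csv

# Invented example data: one row per track, with its static position in the
# room (azimuth/elevation in degrees) and reverb/delay send levels in dB.
tracks = [
    {"track": "Kick",     "azimuth": 0,    "elevation": 0,  "reverb_db": -30, "delay_db": -60},
    {"track": "Pad L",    "azimuth": -110, "elevation": 20, "reverb_db": -12, "delay_db": -24},
    {"track": "Pad R",    "azimuth": 110,  "elevation": 20, "reverb_db": -12, "delay_db": -24},
    {"track": "Lead Vox", "azimuth": 0,    "elevation": 10, "reverb_db": -18, "delay_db": -20},
]

with open("spat_panning_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=tracks[0].keys())
    writer.writeheader()
    writer.writerows(tracks)

print(f"Wrote {len(tracks)} tracks to spat_panning_sheet.csv")
```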

Speaker configuration

The complete setup used more than 20 speakers, but to make it manageable, it was approached as a Dolby 7.1.4 setup. Following Dolby standards is not necessarily ideal for live performances since these are designed for cinema. However, these standards are easy to work with and easy to upscale.

In the images below, you can see a color-coded diagram illustrating how the speakers were grouped together.

Dolby 7.1.4 setup
The upscaled setup that was used

As you can see from the images above, we used the venue's line arrays as the left and right channels (red). The center speaker (purple) was moved to a higher layer, and we added more subs (cyan), since one sub would not be sufficient for the entire venue. For the side speakers (green), three speakers on each side were grouped together because of the venue's shape. Moving on to the rear speakers (blue), we ran into two problems. First, using only the four speakers on the bottom layer would cause latency for people standing closer to the front, and second, these speakers could not reach all the way to the front. To solve this, we added two more rear speakers in the top layer and delay-aligned them so that all audio stayed in sync; these speakers also made sure the rear channels were audible at the front. As for the top layer, we used four speakers instead of two for the front top speakers (orange) and four for the rear top speakers (pink). To help you visualize the speaker setup, I created this fun animation.
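
Coming back to keeping those added rear fills in sync: the underlying arithmetic is just the difference in distance divided by the speed of sound. A quick sketch, with made-up distances rather than the venue's real measurements:

```python
SPEED_OF_SOUND = 343.0  # metres per second at roughly room temperature

def align_delay_ms(main_distance_m, fill_distance_m):
    """Delay (ms) to add to a fill speaker so it stays in sync with the mains.

    A listener hears the closer fill speaker earlier than the more distant
    main speaker, so the fill is delayed by the difference in arrival time.
    """
    arrival_main = main_distance_m / SPEED_OF_SOUND
    arrival_fill = fill_distance_m / SPEED_OF_SOUND
    return max(0.0, (arrival_main - arrival_fill) * 1000.0)

# Made-up example: rear mains 18 m from a listener near the front,
# the added top-layer rear fill only 7 m away.
print(f"{align_delay_ms(18.0, 7.0):.1f} ms delay on the rear fill")
# -> roughly 32 ms
```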

Live Set Equipment

To send out the audio to the S6L, I made a big Ableton project with all the separate tracks of every song. This way, I could send all these channels separately to the mixing desk to pan them in Spat. The equipment I used for the set consisted of an Ableton Push for triggering samples and loops, an SPD-SX Pro for playing drums and percussion samples, and an NI KK S88, which sent MIDI to Ableton to play the piano parts and other virtual instruments/synths. The video below has some fragments from the actual performance.

Additional resources:

More info on Dolby Atmos setups at www.dolby.com
More info about Ircam at ircam.fr
