What is 360 Reality Audio?
At CES 2019 in Las Vegas, Sony announced ‘360 Reality Audio’, describing it as an object-based spatial audio technology which is based on MPEG-H 3D Audio.
But to understand what’s on offer, it’s important to separate two elements of what 360 Reality Audio delivers. The first is the application of object-based audio mixing to music. The second is the promise of a surround effect even when listening in stereo.
We’re excited by the first of these, but rather cynical about the second.
What is MPEG-H?
MPEG-H (our separate and earlier primer article is here) has been slow getting out of the blocks compared to the object-based audio formats we know from home theatre — namely Dolby Atmos, DTS:X, and Auro-3D.
But MPEG-H’s focus on streaming, broadcast and VR applications has kept uptake going: it was adopted first by South Korea, it forms part of the DVB A/V codec specification, and the ‘MPEG-H Audio Alliance’ formed to commercialise it is a powerful combination of Fraunhofer, Technicolor and Qualcomm. The Fraunhofer team seems particularly active, delivering MPEG-H as part of the European Broadcasting Union’s first live production and distribution of UHD content at the September 2018 European Athletics Championships in Berlin, trialling High Frame Rate, High Dynamic Range and ‘Next Generation Audio’ (NGA). Tests included MPEG-H Audio with HFR and HLG video at 1080p/100 and 2160p/50.
Sony’s ‘360 Reality Audio’
But Sony’s implementation of MPEG-H as ‘360 Reality Audio’ seems dedicated to music, at least for now. “Sony is working with major music labels, music distribution services and other such music organisations to provide the technology for building a musical ecosystem around 360 Reality Audio, which will include the creation, distribution and playing of music content”, says Sony’s CES 2019 release.
Crucially, it cites plans to distribute 360 Reality Audio-compatible content on the premium plans of four music distribution services — Deezer, nugs.net, Qobuz (not yet available in Australia) and TIDAL. Warner Music Group is also on board.
As for content, in addition to studio mixes, Sony has already worked with Live Nation to capture audio from concerts in New York, L.A., Philadelphia and San Francisco; these may be among the first 360 Reality Audio content made available, and since nugs.net specialises in live material, keep an ear on its latest releases if you’re interested.
Object-based music — good or bad?
Ever since the earliest days of Dolby Atmos, we have been suggesting that object-based audio would be an interesting way to mix music. Each audio element is given its position in space using metadata, rather than being mixed to fixed channels. The final mix doesn’t really exist until the user’s decoder maps the data for whatever replay system is available — in Atmos, for example, this can be rendered into anything from mono up to a full cinema’s worth of 64 speaker channels. As Sony puts it, “sound location data is handled in the playback equipment”.
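The core idea of rendering a positioned object to whatever speakers are present can be sketched in a few lines. This is an illustrative toy, not Sony’s or MPEG-H’s actual renderer (real renderers solve the problem in 3D for arbitrary layouts, with frequency-dependent processing); here a hypothetical `pan_object` function maps one mono object’s azimuth metadata onto a simple stereo pair using constant-power panning:

```python
import math

def pan_object(sample, azimuth_deg):
    """Constant-power pan of a mono object sample to stereo.

    azimuth_deg: -90 (hard left) to +90 (hard right).
    The object carries only its position as metadata; the gains
    are computed at playback time for the layout available --
    here, the simplest possible case of two speakers.
    """
    theta = (azimuth_deg + 90) / 180 * (math.pi / 2)  # map to 0..pi/2
    left = sample * math.cos(theta)
    right = sample * math.sin(theta)
    return left, right

# A centred object (azimuth 0) splits its energy equally:
l, r = pan_object(1.0, 0.0)
# l and r are both cos(45 deg), so l**2 + r**2 == 1: power is preserved
```

The same metadata could equally drive a 5.1, 7.1.4 or headphone renderer; that layout-independence is the point of the format.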
The ability to adjust the soundfield on the fly like this also makes object-based audio nicely suitable for virtual reality and augmented reality, where user movement may require rapid adjustment of sound positions. Positional feedback information can be just one more input to the decoder processing.
But there are a number of issues we can see.
Firstly, many music recordings rely on ambient acoustic information, whether this be an actualité recording of a hall’s acoustic or stereo reverb added to a vocal. Object-based mixing is not good with ambience. Hence Dolby Atmos combines its objects with a ‘traditional’ 10-channel bed of 7.1.2-channel audio. We might assume, but we don’t know, that MPEG-H will do something similar; otherwise the results of purely object-based mixing could sound highly artificial — fine for dance music, perhaps, but not for orchestral or acoustic music.
Secondly, while the ease of mixing music in surround has brought increasing quantities of surround music mixes, a lot of them are only superficially surround (audience channels, ambience and so on), because in a real situation most music comes from the front. Those mixes which really go nuts on surround (we’re looking at you, The Flaming Lips) succeed only when the music is artificial in nature. For a straight band mix in surround, where would you start? Stick a guitar behind you? Drums behind you? How do you handle individual drum miking and microphone spill? It’s all a bit odd. One example is AIX, which often delivers multiple surround mixes on its beautifully recorded Blu-rays and DVDs: one mix presented as if from the second row of the hall, as it were, and another from a virtual ‘on-stage’ position. The former is usually more listenable; the on-stage mix is interesting, but a bit weird.
This is accentuated by the death of surround in the modern lounge room, and the common use of smaller speakers at the rear. Movie soundtracks expect that, but a multichannel music mix really requires identical speaker sizes all the way around. Concert soundtracks aside, multichannel music mixes are as yet relatively rarely played. (And some are produced extremely quickly. One of our team once saw a 5.1 music mix being created at Abbey Road, and they pretty much slammed it through in real time, with no care or attention to subtlety; a set-and-forget mix.)
Surround from stereo
Sony says it will provide content creation tools and work with major music labels to prepare and augment music into 360 Reality Audio content. But also, existing ‘multitrack’ (presumably meaning surround) music can be converted into a 360 Reality Audio-compatible format using the production tools that Sony will provide.
So the output is clearly intended to create a surround effect, and Sony says the system can scale up to a true surround speaker system if required.
But Sony also says that it “will initially focus on technological development of headphones and wireless speakers… For headphones, a dedicated device will not be needed for the user to play back 360 Reality Audio compatible content and experience a sound field with a realistic feel unlike anything conventional headphones have been able to offer.”
So the second part of 360 Reality Audio is the promise of a surround effect through stereo headphones and whatever they mean by wireless speakers — many of which have very limited stereo or are actually mono devices.
There have been many attempts at surround effects from headphones — QSound, Ambisonics, Dolby Headphone, DTS Headphone:X, Dirac VR. Most have messed with phase, used comb filtering, or both. We’ve never heard one that truly works with real-world material, though we’ve heard widening effects and some degree of rear positioning with effects and bleeps.
Mapping your ear
Will 360 Reality Audio be different? Sony has one trick up its sleeve, which Sony Australia’s Andrew Hughes explained to us — Sony will have an app which takes a picture of your ear and adjusts the sound accordingly. This is not a frequency adjustment for hearing, as has become a new trend with the likes of Mimi (beyerdynamic), Audeara and others. Instead, by imaging the external ear — the auricle, or pinna — Sony will adjust the head-related transfer function (HRTF), the filtering each individual’s brain is accustomed to using to locate sound in three dimensions. Sony indicates this will also introduce adjustments for room effect (reflections) and distance, to deliver an ‘out of the head’ headphone sound. Perhaps this, combined with the object-mapping mixdown, will deliver something newly effective. We must wait to see (or rather, hear).
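To see what an HRTF encodes, consider its two broadest cues: the interaural time difference (ITD, a sound reaches the nearer ear first) and the interaural level difference (ILD, the head shadows the far ear). The sketch below computes these from a source azimuth using the well-known Woodworth ITD approximation; the ILD formula and the head radius are our own illustrative assumptions. A real HRTF is a pair of measured, frequency-dependent filters — personalising it, as Sony proposes, means reshaping those filters per listener, not just these two numbers:

```python
import math

SR = 48000            # sample rate, Hz
HEAD_RADIUS = 0.0875  # metres, a commonly assumed average head radius
C = 343.0             # speed of sound, m/s

def binaural_cues(azimuth_deg):
    """Return (ITD in samples, far-ear gain) for a source at azimuth_deg.

    ITD uses the Woodworth spherical-head model: (a/c) * (theta + sin(theta)).
    The far-ear gain is a crude, made-up ILD stand-in for illustration.
    """
    az = math.radians(azimuth_deg)
    itd_seconds = (HEAD_RADIUS / C) * (abs(az) + math.sin(abs(az)))
    itd_samples = int(round(itd_seconds * SR))
    far_ear_gain = 1.0 - 0.3 * abs(math.sin(az))  # toy head-shadow level drop
    return itd_samples, far_ear_gain

# A source dead ahead produces no interaural difference:
print(binaural_cues(0))    # (0, 1.0)
# A source at 90 degrees arrives ~31 samples earlier at the near ear:
print(binaural_cues(90))
```

The remaining cues — the pinna’s direction-dependent spectral notches, which resolve front from back and up from down — are exactly the part that varies most between individuals, which is why imaging the ear could plausibly help.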
Certainly Rob Stringer, CEO of Sony Music Entertainment, seems impressed.
“We are partnering with 360 Reality Audio to elevate the listening experience for the consumer,” he said at the Sony keynote at CES. “The results, audio-wise and sonically, are truly spectacular, and we are now rolling out titles across our vast catalogue.”
More information, though nothing much technical, can be found here: https://www.sony.com.au/electronics/360-reality-audio
And Fraunhofer’s MPEG-H page is here: https://www.iis.fraunhofer.de/en/ff/amm/prod/digirundfunk/digirundf/tvaudio.html