Last month Lacquer Channel Mastering held a small panel discussion on audio quality at The Soho House. Sage Kim was in the audience, and she recently posted the following to Facebook. It is reposted here with permission.
Some geeky follow-up thoughts prompted by last month's Soho House discussion.
1. The main focus of the discussion was how digital formats have changed the way music is delivered, and the resulting loss of appreciation and ritual around listening, but I think there is still room to discuss sound quality itself. The psychoacoustic models behind lossy codecs are roughly science-based, but they are not hard science. Technically, there is still no objective way to measure whether a high-bitrate mp3 sounds identical to its source to human ears, because traditional audio-quality measurements like signal-to-noise ratio (S/N) or total harmonic distortion are useless for perceptual encoding. PEAQ, the only objective measurement method that has been developed, has yet to be extended to multi-channel, high-resolution, or stereo signals. So universities like McGill still run subjective assessments (double-blind listening tests), but for many reasons the results of subjective tests can serve only as evidence toward better assumptions, not as proof.
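As a rough illustration of how those double-blind listening tests are scored, here is a minimal sketch of ABX-style significance testing: a listener repeatedly tries to match an unknown sample X to either the source (A) or the encoded version (B), and the results are checked against pure guessing with a one-sided binomial test. The function name and the example numbers are my own, not anything from the panel or from McGill's actual protocol.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value for an ABX test: the probability of
    getting at least `correct` answers right out of `trials` by pure
    guessing (chance level p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 12 correct identifications out of 16 trials.
p = abx_p_value(12, 16)  # ≈ 0.038, below the conventional 0.05 threshold
```

Even a "significant" result like this only tells us that one listener, on one system, with one pair of samples, did better than chance, which is part of why such results are hard to generalize into a claim about lossy codecs overall.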
I think a lot of people, not just trained listeners like engineers and musicians, can tell the difference between a high-bitrate mp3 and the source, even people who claim they cannot notice it. (We tend to dismiss things that cannot be articulated in words.) However, for lifestyle reasons, the younger generation will keep sticking with streaming services or YouTube, which is not pleasing news to engineers or to anyone who appreciates decent audio quality.
2. The funny and ironic thing is that while sound quality is less appreciated because of digital culture, sound elements as musical ideas have become more crucial for the same reason. More musicians produce, mix, and master their own music these days, especially when a significant portion of the music consists of digital elements, not just because it is easier and cheaper than before, but because the sound elements themselves are important parts of the composition, so it makes sense to combine all of the processes. When musicians/producers do everything themselves, the overall sound balance leans toward emphasizing their specific style, and it is often noticeable just by listening. For example, Grimes sounds as if she mixed her records herself but did not master them, whereas Chromatics sounds as if Johnny Jewel did everything, including mastering. The difference is that although Chromatics sounds awesome at delivering Jewel's musical ideas, it still sounds a tiny bit off in terms of ideal (or should I say traditional?) production quality. Also, when I compare electronic musicians from the 90s and the 2010s, veterans like the Chemical Brothers, who worked with other engineers during production, sound much more ideal in the traditional sense (warmer and fuller, even in their harshest songs) than more recent electronic musicians who handle most things themselves.
Does that mean the idea of an ideal sound balance (or even of quality itself) could change because of digital technology? Probably. Just as lo-fi and glitch have already claimed their own aesthetics, the ideal will be challenged by envelope-pushing in many directions. Though, speaking from inside my engineer box, I still think a lot of great musicians, including Chromatics, could sound even better by working with the right engineers.