A brief history of XR remastering, 2007-2012
Five years since the first breakthrough
It's hard to believe it - for me at least - but five years have passed since I first accidentally stumbled upon the basic procedure which was to become the crucial underpinning concept behind XR remastering. Since then I've used the method on many hundreds of hours of music, and seen it adopted by other remastering engineers working in the same field. I've been able to refine it, adapt it, and take advantage of new technologies and computing power, which continues to improve the results it achieves.
It all began over the Christmas and New Year period of 2006-7. I was working on Toscanini's classic 1936 New York Philharmonic recording of Beethoven's Seventh Symphony (PASC068) and it wasn't going well.
In theory everything should have been fine. I had a set of near-mint HMV pressings, and, having figured out a couple of years previously how best to cope with their ubiquitous "bacon frying" crackly surfaces, this should have been a pretty routine task. At the time I had a reasonably fixed method of working to deal with everything a decent-quality 78rpm disc could throw at me, and this was working as expected. The trouble was, the results sounded truly awful, if not actually painful!
(Over the years a number of recordings I've worked on have given me headaches, though less so more recently - but this week, and for the first time ever, I worked on a recording during which I became so emotional and choked up by the power of music and performance I had to stop, walk away, dry my eyes and pull myself together before I could continue. More of that anon...)
Anyway, back to the Toscanini. I spent days on that recording. There was a terrible harshness to it that I quickly realised had nothing to do with my pressing, the disc surface, or indeed anything a remastering engineer would normally tackle. The major flaws that were hurting my ears were literally "hard-wired" into the recording. Something must have gone badly wrong with the microphones, or the mixing equipment, or the disc cutting amplifiers - but what?
It seemed to me that no amount of equalising, boosting or cutting of frequencies I could manage was working - I simply couldn't work out how to attack it. This, for me, was unusual; I'd spent more than a decade mixing live radio at the BBC, something that hones an almost instinctive ability to nail a problem frequency and cure it, fast, on air, so as not to annoy several millions of listeners. Yet now I had to admit it: the Toscanini had me beaten.
I did though have one tool which offered assistance in a way unavailable to me when mixing live radio - in many ways similar to something I'd first seen a colleague using at the BBC's Maida Vale studios to engineer a David Bowie session back in the early 1990s: FFT spectral analysis. Back in the 90s it was a very difficult thing technically to do live, but the very top-end mixing desks of the day allowed the sound engineer to switch the channel meters (LED bar meters) into a mode which gave an accurate spectral analysis which gradually averaged out as a track played. It's the kind of thing which, in a much simplified version, started to show up as colourful bar graphs on cheap hi-fi products in the 90s, giving an idea about the amount of volume at a given range of frequencies.
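The gradually-averaging spectral display those desk meters provided can be sketched in a few lines of numpy - this is purely my own illustration of the idea (the function name and frame size are my choices, not anything from the original hardware): split the signal into frames, take the FFT magnitude of each, and average across frames as the track plays.

```python
import numpy as np

def averaged_spectrum(signal, frame_size=1024, sample_rate=44100):
    # Long-term average spectrum: FFT magnitudes of successive frames,
    # averaged together - much like desk meters that "gradually averaged
    # out as a track played".
    n_frames = len(signal) // frame_size
    frames = signal[:n_frames * frame_size].reshape(n_frames, frame_size)
    window = np.hanning(frame_size)          # reduce spectral leakage
    mags = np.abs(np.fft.rfft(frames * window, axis=1))
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
    return freqs, mags.mean(axis=0)

# A 1 kHz test tone should produce a clear peak near 1 kHz in the average.
t = np.arange(44100) / 44100.0
tone = np.sin(2 * np.pi * 1000.0 * t)
freqs, avg = averaged_spectrum(tone)
peak_freq = freqs[np.argmax(avg)]
```

On real music the averaged curve settles into the overall tonal balance of the mix, which is exactly what made it useful as a visual check on one's own hearing.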
My colleague had been producing rock sessions at extreme volumes for many years (he was also responsible for the sound at massive concert broadcasts, such as 1985's Wembley Live Aid event), and was notoriously hard of hearing as a result. He was also one of the very best in the business, and one of his secrets was the use of this spectrum graphing method to visually check and correct for any tonal deficiencies in his own hearing. If there were any unusual "holes" in the bar graph he could boost those frequencies, and he'd learned to know what kind of overall mix of bass, midrange and treble to look for on the 48 parallel LED meters. At the end of a long day's recording, when one's top end response was severely compromised by the brain shutting down its ears to protect them, my colleague was still able to produce results which sounded great - where any other engineer would probably have boosted the treble to a ridiculous amount simply to be able to hear it.
Fifteen years later, a piece of software called Har-Bal did a similar thing on a PC - except in this case, instead of gradually averaging out as a multitrack tape was played through it, it took a digital music file, analysed it, then showed you the results in a continuous on-screen line graph. It also allowed you to load up a second music file, normally a reference file of something you felt was a good recording, as a visual guide to anything that might be wrong with your original, possibly for the kind of reason outlined above. (As with most music software it sells best to those working in the rock music industry.)
I'd downloaded a good reference file for my Beethoven 7 from eMusic - the London Symphony Orchestra doing the same work under Haitink - and I was playing around adjusting various frequencies on the Toscanini and getting nowhere in particular. Finally, in a fit of frustration, I did the one thing which every manual and guide says you must never do: I lined up the two graphs perfectly with each other (or at least, as perfectly as was possible given the somewhat limited resolution of that early version of Har-Bal at the time). It was a kind of "sod it, what the hell?" type approach - after all it could hardly sound any worse.
Using my mouse, I pushed and pulled at the line depicting the frequency response of the Toscanini until it lay directly "on top of" the Haitink, the effect of which is to create an equalisation curve matching the average volume levels at all analysed frequencies of the former to those of the latter.
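In principle, that matching operation can be expressed very compactly: compute the long-term average spectrum of both recordings, and take the ratio of reference to source as a per-frequency gain curve. The sketch below is my own minimal illustration of the concept, not Har-Bal's actual algorithm - real processing would use overlapped frames, smoothing of the gain curve, and much more care at frame boundaries. All names and parameters here are my own assumptions.

```python
import numpy as np

def matching_eq_gains(source, reference, frame_size=4096):
    # Per-bin gain curve that maps the source's long-term average spectrum
    # onto the reference's - the equivalent of dragging one Har-Bal graph
    # directly "on top of" the other.
    def lta(x):
        n = len(x) // frame_size
        frames = x[:n * frame_size].reshape(n, frame_size) * np.hanning(frame_size)
        return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    src, ref = lta(source), lta(reference)
    return ref / np.maximum(src, 1e-12)

def apply_eq(signal, gains, frame_size=4096):
    # Crude frame-by-frame application of the gain curve (no overlap-add,
    # so audible only as a demonstration, not production quality).
    n = len(signal) // frame_size
    out = np.empty(n * frame_size)
    for i in range(n):
        spec = np.fft.rfft(signal[i * frame_size:(i + 1) * frame_size])
        out[i * frame_size:(i + 1) * frame_size] = np.fft.irfft(spec * gains, frame_size)
    return out

# Toy demonstration: a quiet, dull "old" recording matched to a full-level
# "modern" reference of the same material.
sr = 44100
t = np.arange(sr * 2) / sr
source = 0.1 * np.sin(2 * np.pi * 1000.0 * t)
reference = 1.0 * np.sin(2 * np.pi * 1000.0 * t)
gains = matching_eq_gains(source, reference)
matched = apply_eq(source, gains)
```

The crucial point from the text survives even in this toy: because source and reference contain the same musical material, the ratio of their spectra isolates the difference in the recording chains, rather than the difference in the music.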
Then I pressed play.
What I heard was to change my world! Once I'd picked my jaw up off the floor, it set into a grin which stayed with me for the rest of the day, if not the week. I'd accidentally stumbled across something I think nobody had ever heard happen before - though it took me a long while to get fully to the bottom of it.
It was pure luck - or was it intuition? - that I'd chosen to match Toscanini's Beethoven 7 to a modern Beethoven 7, rather than any other work plucked at random using similar musical forces. It turned out that this matching of works was the key to this success, and probably therefore also the reason nobody had figured this out before. Because the works, and thus the orchestral forces producing their sound, were essentially the same, the normal rules governing this kind of equalisation didn't apply. Quite the opposite, it turned out.
What the matching up had done was to create an equalisation curve so complex it would be beyond even the most experienced, golden-eared sound engineer to re-fashion by listening and hand-adjusting alone. It had untangled the complex interactions between the various pieces of 1936 recording equipment which had conspired to produce such poor sound quality - or if you prefer, it had reverse-engineered out many of those faults inherent in the original recording.
It would be nice to think this is all we need to do: feed the music in one end, add a reference, mix well and serve. Alas this isn't the case. It turns out that many of those vintage recordings need to be hauled about quite severely. Instead of a bit more treble here, or a boost in the bass there, the complex equalisation that gets applied looks, when viewed as a graph, less like a series of smooth curves and more like a particularly vicious Alpine mountain range. It's up and down at steep angles and reaches some extreme peaks and troughs, in a seemingly random fashion. It's this wild sonic havoc which helps create that distinctively "vintage" sound in older recordings, more so than the hiss, crackle and limited upper frequencies that characterise those badly faked "old recordings" in the movies that never sound quite right to those of us who know!
Now whilst this re-equalisation makes the music sound great, it does the opposite with all the background noise inherent in any analogue recording, whether it's disc surface noise or tape hiss, as this too is being hauled all over the same acoustic terrain. What was previously a nice, evenly-balanced background noise suddenly has major and very nasty zones of harsh hiss and noise. It's difficult to listen to, and needs tackling. The ear likes (or tolerates) a nice even random white-noise-style hiss, but not one where noise frequencies are boosted to the extent of leaving what sounds like odd whistles, even if the music at those same whistling frequencies now sounds fine. Bring on the digital noise reduction - but applied in varying degrees across the frequency range to try and even this all out.
Put very simply, this is what I've been doing ever since: equalise an older recording to a modern equivalent, then filter out the excess noise this produces to try and create a quiet, but crucially a sonically even, white noise background. (It's actually quite a bit more complicated than that but this describes the nub of it well.)
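The "even out the noise floor" step can also be sketched in miniature. This toy stand-in (my own invention, not the actual multi-band noise reduction software used for XR work) assumes we have an estimate of the post-EQ noise spectrum, and computes a per-band attenuation that pulls any boosted "whistling" regions back down towards the quietest level, leaving an even hiss:

```python
import numpy as np

def flattening_gains(noise_profile, floor_percentile=10):
    # Per-band attenuation towards the quietest part of the noise floor.
    # Bands already at or below the target are left untouched (gain <= 1),
    # so we only ever reduce noise, never boost it.
    target = np.percentile(noise_profile, floor_percentile)
    return np.minimum(1.0, target / np.maximum(noise_profile, 1e-12))

# One nasty band (8.0) boosted by the matching EQ, the rest roughly even.
profile = np.array([1.0, 1.0, 8.0, 1.0, 0.5, 1.0])
g = flattening_gains(profile)
flattened = profile * g
```

After applying the gains, the offending band sits at the same level as its neighbours, which is the "sonically even, white noise background" the ear tolerates.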
I've learned a lot since 2007. The tools at my disposal have progressed from being blunt instruments to something far more surgical. New ones have appeared which have transformed some of the most difficult aspects of the various procedures. I've realised just how important very precisely matched tuning is between source and reference. I've discovered the "hidden frequencies" in older recordings which inspired the label "XR" (it originally stood for extended response), and how to help make them properly audible again.
Meanwhile computing power has increased massively, to the extent that highly complex digital noise reduction techniques (for example) that were simply too slow to be practical are now everyday tools. No doubt this will continue - this week, for example, I managed for the very first time to preview my most processor-intensive noise reduction routine on stereo material. This involves taking one of the fastest and most expensive 6-core Intel i7 processors on the market and persuading it to run considerably faster than its makers intended (a process known as over-clocking), without overheating and literally burning it out, in order to squeeze more performance out of it. I imagine this result will shortly be a routine achievement, by which time the software's authors will no doubt have cooked up an even more processor-intensive but sonically-superior method.
In the middle of last year we became the first to release recordings which had been pitch-stabilised using Celemony's Capstan software. This is another major processor-hog. I was told to expect the initial analysis period for any recording fed into the system to take about as long as the recording's real time duration - ask it to untangle an hour of music and you might as well do something else for an hour, while it runs your chips flat out at 100% and you hope they don't start glowing. This same operation now takes me about 25 per cent of the original time thanks to that PC speed boost - meanwhile Celemony have won a thoroughly-deserved technical achievement Grammy for Capstan, to be presented in a few weeks' time.
I don't recall precisely when Ambient Stereo appeared on the scene, but like Capstan it was one of those things which, once heard, I knew I couldn't do without, and it got added to the "XR" toolkit. I remember my nerves at issuing Ambient Stereo recordings, given the reputation of "fake stereo" processes for creating appalling recordings in the past. Yet this was different, and now the vast majority of our customers not only agree, but choose Ambient Stereo wherever possible.
Naturally there are other technical fixes which can be brought into play as and when required, and it seems that every time I think my PC is powerful enough to speed up my work rate a new must-have turns up which I can't live without, but which slows the whole remastering process back down again and has me craving ever more computing power.
I still can't always predict what results any individual recording will produce in advance of XR remastering, though I do have a much better idea now than I did five years ago. And like an addict, whilst I still crave the high I got that very first time, I get a real kick out of what I hear today, and remain permanently itching for my next sonic "hit"...
6 January 2012
Pristine Classical - DRM-free historic FLAC and MP3 downloads since 2005