Photo Credit: Jeff Yeager
It’s no exaggeration to say Jason Gossman is the Swiss Army Knife of Metallica’s audio and production team. Gossman has become an indispensable part of many sonic situations, filling multiple roles, from helping run the Tuning Room on tour to organizing sessions with Rob Trujillo when both are in Southern California. He has also become an integral part of the studio team, working tightly with producer Greg Fidelman.
Hailing from Virginia Beach, VA, Gossman went to recording school and moved to Los Angeles in 2002 to take on studio work, ending up in places such as Sunset Sound and the Sound Factory. He worked on the Red Hot Chili Peppers albums Stadium Arcadium and I’m with You and recorded and mixed shows from two subsequent tours. Plus, Mr. G earned a Grammy for engineering Ben Harper and Charlie Musselwhite’s Get Up, which won Best Blues Album in 2014. His Metallica work started at the end of the Death Magnetic sessions, assisting Andrew Scheps on mixes and doing some editing. Gossman has subsequently edited several releases, including Hardwired…To Self-Destruct.
I wanted to speak with Jason about the precise work he was involved with early on during the 72 Seasons sessions when the pandemic had slowed much of the world to a crawl. Metallica wanted to get to work, and Gossman played a crucial part in discovering the solutions to help move this album forward in a very real way.
Rather than present this as a Q & A article, I decided to hand the reins over to Jason to explain those early pandemic challenges himself: to describe the trial and error and the many programs that kept the band collaborating remotely, all leading up to the 2020 Helping Hands Concert at HQ in Northern California, which saw them finally together in the same studio space for the first time since the pandemic began. Without further ado...
In terms of recording or working in the studio, the pandemic created obvious problems. With the lockdown in full effect, it was undoubtedly a massive challenge to try to create an environment and workflow where they could be creative and comfortable. During the writing process, the band has always been very interactive, especially with Lars and James throwing ideas back and forth and feeding off what the other one is doing. That wasn’t a possibility, with everybody hunkered down in their own homes because of Covid.
When we first started, it was me, Lars, and Greg individually in our own homes and then on a Zoom call. And we used Audiomovers’ LISTENTO as a plug-in for streaming audio back and forth. When you’re first starting off, it’s straightforward, very straightforward, just listening through riffs. But eventually, we would get to the point where we would make loops of a minute and a half of each riff, two minutes, whatever, and Lars would have to play to it in his studio. We discovered this application called Splashtop, which is essentially remote control software for businesses, and we had it installed on all our computers. Basically, it let either Greg or me take control of Lars’ computer; I still use it to do stuff for them.
So Lars would be in his studio with his drums set up, and the engineer “was sitting in the room” with him. Either Greg or I would be running his rig, and then we’d be streaming audio back to ourselves so we could hear what was going on. At the end of the night, I would upload it to the server, then download it and work on it just by myself. At that point, it was just me editing at home, and then I’d send it to Greg.
We did that for a few weeks, categorizing the riffs. You’ve heard Lars speak about the “food groups?” This is the “sad” food group, this is the “Phil Rudd,” this is the “Creep” or whatever, just to help categorize the type of riff. We did that for a while, and then James got involved.
Then it became, “What’s the best way for Lars and James to be productive together?” So it got even more complicated. It became almost like a square of signal flow. I’m at home in Los Angeles, Greg is at home in Los Angeles, Lars is in the Bay, and James is at his place, so how do we do this? How do we make it feel like we’re in the studio together? I took control of James’ Pro Tools rig and streamed what he was hearing back to me via Splashtop and Audiomovers. Greg controlled Lars’ computer and streamed back what he was hearing. Then we had streams going from James to Lars and from Lars to James so they could hear each other, and we could record each of their parts on the other’s rig.
I know, I know, it’s crazy. We liked Zoom for communication, Audiomovers had the hi-rez audio streaming, and then Splashtop was great for the remote control. So, it was three bits of software together, which all worked side by side because nothing at the time did everything we needed.
People ask if latency was a problem with this, and yes, it absolutely was a problem. Person A can play, and Person B can play along in time with them, but Person A cannot listen to Person B while that is happening. So, in most cases, James would play the riff. Lars and James would hear each other and get the feel of the idea. But when it came time to record, and Lars was going to play along with James, we had to mute Lars on James’ end because there’s physical distance [which causes] that degree of latency. If we had listened to Lars playing along in real time, it would’ve been a nightmare; it would be a second behind or something like that.
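To put rough numbers on that nightmare: the sketch below is purely illustrative (the figures are hypothetical, not measurements from the sessions), but it shows how quickly even a modest one-way streaming delay becomes musically unusable, and why the return path had to stay muted during takes.

```python
def delay_in_beats(one_way_ms, bpm):
    """How far behind the beat a remote musician sounds, given a
    one-way streaming delay in milliseconds and the song's tempo."""
    beat_ms = 60000.0 / bpm  # duration of one beat in milliseconds
    return one_way_ms / beat_ms

def round_trip_in_beats(one_way_ms, bpm):
    """The round trip doubles the damage: A hears B late, and B's
    response to what A played arrives late again on the way back."""
    return 2 * delay_in_beats(one_way_ms, bpm)
```

At 120 bpm, a 250 ms stream delay already puts the remote player half a beat behind; hearing each other react in real time would cost a full beat round trip.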
So, we would get James playing along to a click, mostly so there was a common reference track on both machines; you’ll see why in a sec. We’d stream that click to Lars on one channel and then James’ guitar on a second channel. Then, those two things would be recorded on Lars’ Pro Tools rig while he played his parts. Lars could hear the click, he could hear James, and he would play along with James on whatever riff. Let’s say “You Must Burn!” because that was one of the first ones we did.
So, to summarize, we’d stream one channel of guitar and one channel of click over to Lars; he would record all his multitrack drums, then after each take, I would unmute Lars on James’ rig, and Greg would play back from Lars’ so that James could hear it. We would have a stereo mix of Lars’ drums coming back to James so he could hear what Lars was playing. Not the whole multitrack, just the stereo mix of- …this is so complicated. I’ll try!
James’ computer had the master recording for the guitar on it. We always recorded the guitar signal directly via a DI box (short for Direct Injection – it captures the sound of the guitar as it comes right out of the jack before it hits an amplifier), and then two different amp sounds coming out of his amp simulator, his Fractal Axe-Fx. That existed on James’ computer.
The multitrack for the drums lived on Lars’ computer. That means the master recording of each instrument is at two different sites, and at the end of the night, we’d upload both sessions to the server; I’d download them and put them together to make a master session. I should add that when we first started, the two Pro Tools rigs were not in sync with one another! We can certainly laugh now, but when we first started, it was like we were not necessarily even recording at the same time on both machines. And then marrying those two sessions was like somebody took two different jigsaw puzzles of the same thing, mixed all the pieces up, and then threw them all on the ground, and I’d have to try to assemble these pieces. That’s where the click was essential; since it was recorded on both Pro Tools rigs, I could look at the first click of each take and accurately align the two recordings. This helped us preserve the feel of each piece, which is crucial to the goal of the sessions being as interactive as possible.
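The first-click trick Jason describes can be sketched in a few lines. This is a hypothetical illustration, not the tooling actually used in the sessions: it finds the first click onset in each rig’s recording of the shared click track and returns the offset needed to marry the two sessions.

```python
import numpy as np

def first_click_sample(audio, threshold=0.1):
    """Index of the first sample whose absolute level crosses the
    threshold -- a crude stand-in for 'the first click of the take'."""
    hits = np.flatnonzero(np.abs(audio) > threshold)
    if hits.size == 0:
        raise ValueError("no click found above threshold")
    return hits[0]

def alignment_offset(click_a, click_b, threshold=0.1):
    """Samples to shift recording B so its first click lands on A's.
    Positive means B's rig started capturing earlier than A's."""
    return first_click_sample(click_b, threshold) - first_click_sample(click_a, threshold)
```

At a 48 kHz sample rate, an offset of 4,800 samples would mean one rig’s recording is a tenth of a second out of step with the other’s.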
Eventually, Greg and I were able to add a Sync I/O to Lars’ rig, basically an Avid Pro Tools box for synchronization where you can chase or generate time codes. Each session that I would originate from James’ house would have a printed track of this time code in it. So that would be an additional channel of stream from James to Lars that would go to the Sync I/O and allow the two machines to be locked in sync. Life became substantially easier.
Some of you reading might ask why we didn’t do it one of several other ways, and the reason is we didn’t want to have to stop and send files back and forth in the middle of recording things. Some [options] made it easy to collaborate; you’d just record this, drag it into the little box, and it sends it over. We didn’t want to do that. We wanted it to be as interactive and as in-real-time as possible.
It was around November 2020 that it all started to settle down, once we figured out how to get in flow with it. I certainly welcomed that moment around the AWMH show at HQ, where everybody convened in a room together with Greg to work on new music. But the very cool thing about jumping through all those hoops, essentially doing preproduction on fifteen songs, was that when everybody walked into HQ that first time, they knew the songs, which was something of a first.
Steffan asked me how I decompressed from those earlier sessions and how I kept my mind fresh; my first answer was that there was no escape because we were all locked in our homes! But my getaway was to go and drive somewhere. We didn’t ever have sessions on back-to-back days because we all knew that would simply have been too much. Plus, there was always a lot of work to prep for the next session. So, I’d get in my Toyota Corolla, which ended up being a sanctuary of sorts, and I’d drive on the highway, like I-5, and just go wherever. There were very few cars, so it was a relaxing way to decompress.
I will have to read this back to make sure it makes sense because it was that crazy a time; I’m not sure it really does!