|
Post by seawell on Mar 6, 2024 12:43:26 GMT -6
Your assumptions are wrong. Either I didn't clearly lay things out, or you need to read it again. Either way, the sessions are quite complex, and they Just Work. Blackdawg, seawell, myself and virtually all other HDX users just get to work and are done with it. There is no need to be worrying about latency, delay compensation or how many busses we are cascading. It just works.

You're incorrect. They are not assumptions. Digital processes incur latency. They just do. Add enough of them together, and at some point it could be a problem. I'm not saying it's necessarily a problem for you in your use case, but it's also not a zero-added-latency situation either. Not a hard concept. Seawell himself similarly mentioned being curious about how such a high number of busses might affect latency. If you are fine with not knowing, that's obviously up to you, but there's nothing wrong with wanting to know more about how something works, as opposed to just assuming that it always will, especially when using it at its extremes. "It just works" doesn't really address the question. In any case, I'm gonna go and look to see if I can find the latency numbers for HDX busses, as you apparently don't know.

Pro Tools delay compensation definitely has a limit, and it is dependent on the sample rate of your session. I can't remember the numbers off the top of my head, but I'll try to take a look later this evening. A session at 44.1 kHz can hit that limit quicker than one at 96 kHz, for example. So if you use a bunch of plug-ins on a track, or something with a long look-ahead setting, you can definitely hit the limit. When that happens, the track will turn red and you'll probably start to notice some things sounding out of sync. It very rarely happens, but when it does, the easiest solution is to just commit the plug-ins on that track and you're back in sync.

I should also mention, I spent a lot of time working in both Logic and Studio One, and even sold my Pro Tools rig when I went Logic for a while, so I found lots of pros and cons with all of them (Pro Tools included). The main reason I came back to Pro Tools is that I've done a lot of high-demand tracking (full band tracking at the same time, full drum kit, etc.) and hybrid mixing. I guess if your DAW doesn't have delay compensation for hardware inserts, you could write down the delay times for each sample rate and just reference that as needed? I'm not sure, I've never done it that way. Having said all that, I made the transition from TDM to HDX, but if the whole Pro Tools ecosystem changed that drastically again, I'd have to seriously consider other options, as the tracking side of my business has gone down drastically since 2020. Which is also why I feel I need hardware more than ever during mixing. All of these people recording themselves at home is pushing my mixing skills to the limit 🤣
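To put rough numbers on the sample-rate point above, here's a quick back-of-the-envelope sketch. The 20 ms look-ahead and the four-plug-in chain are made-up examples, not measured Pro Tools figures; the point is just that a fixed look-ahead in milliseconds costs more samples at higher rates, and that serial delays simply add up.

```python
# Back-of-the-envelope latency math: converting plug-in delay between
# samples and milliseconds at common session rates. All figures here are
# illustrative, not measured Pro Tools values.

SAMPLE_RATES = (44_100, 48_000, 96_000)

def samples_to_ms(samples: int, rate: int) -> float:
    """Delay in milliseconds for a given number of samples."""
    return samples / rate * 1000.0

def ms_to_samples(ms: float, rate: int) -> int:
    """Samples consumed by a fixed look-ahead (in ms) at a given rate."""
    return round(ms / 1000.0 * rate)

if __name__ == "__main__":
    lookahead_ms = 20.0  # e.g. a limiter with a hypothetical 20 ms look-ahead
    for rate in SAMPLE_RATES:
        print(f"{lookahead_ms} ms look-ahead at {rate} Hz = "
              f"{ms_to_samples(lookahead_ms, rate)} samples")

    # Serial processes simply add their delays together:
    chain = [ms_to_samples(lookahead_ms, 48_000)] * 4  # four such plug-ins
    total = sum(chain)
    print(f"4 in series at 48 kHz: {total} samples "
          f"({samples_to_ms(total, 48_000):.1f} ms)")
```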
|
|
|
Post by ericn on Mar 6, 2024 13:04:54 GMT -6
I do love the idea of software: not needing a tech, every UAD LA2 used sounding and reacting the same and not changing over time, instant recall, the small footprint, portability, etc. Where all of this goes out the window is when I plug a bunch of gear and the RADAR into the DDA, grab a couple of comps, and reach for a fader.
Software is getting better every day, but it still sounds like the VCAs in the Status compared to the DDA; it just doesn't seem to jump. Now I'm going to say something controversial: if I use the RADAR for conversion, the Yamaha DM1000's summing algorithm sounds better than Logic or PT. It's not bad, just not wow. I think one of the biggest problems is that, as an industry, we have accepted a reality of one or the other instead of really trying to build tools that integrate both. We need a modern, industry-wide control protocol so we can see the potential of products like the SSL Sigma.
|
|
|
Post by drbill on Mar 6, 2024 13:22:27 GMT -6
as you apparently don't know. Blissfully ignorant, and just working. Continue on with the science!! <<thumbsup>>
|
|
|
Post by Quint on Mar 6, 2024 13:33:22 GMT -6
You're incorrect. They are not assumptions. Digital processes incur latency. They just do. Add enough of them together, and at some point it could be a problem. I'm not saying it's necessarily a problem for you in your use case, but it's also not a zero added latency situation either. Not a hard concept. Seawell himself similarly mentioned being curious about how such a high number of busses might affect latency. If you are fine with not knowing, that's obviously up to you, but there's nothing wrong with wanting to know more about how something works, as opposed to just assuming that it always will, especially when using it at its extremes. "It just works" doesn't really address the question. In any case, I'm gonna go and look to see if I can find the latency numbers for HDX busses, as you apparently don't know.

Pro Tools delay compensation definitely has a limit and it is dependent on the sample rate of your session. I can't remember the numbers off the top of my head but I'll try to take a look later this evening. A session at 44.1 kHz can hit that limit quicker than 96kHz for example. So, if you use a bunch of plug-ins on a track or something with a long look-ahead setting you can definitely hit the limit. When that happens, the track will turn red and you'll probably start to notice some things sounding out of sync. It very rarely happens, but when it does, the easiest solution is to just commit the plug-ins to that track and you're back in sync. I should also mention, I spent a lot of time working in both Logic and Studio One, even sold my Pro Tools rig when I went Logic for a while so I found lots of Pros and Cons with all of them (Pro Tools included). The main reason I came back to Pro Tools is I've done a lot of high demand tracking (full band tracking at the same time, full drum kit, etc.) and hybrid mixing. I guess, if your DAW doesn't have delay compensation for hardware inserts, you could write down the delay times for each sample rate and just reference that as needed? I'm not sure, I've never done it that way. Having said all that, I made the transition from TDM to HDX but if the whole Pro Tools ecosystem changed that drastically again, I'd have to seriously consider other options as the tracking side of my business has gone down drastically since 2020. Which, is also why I feel I need hardware more than ever during mixing. All of these people recording themselves at home is pushing my mixing skills to the limit 🤣

Thanks. That's good info. I look forward to hearing what numbers you find. As an aside, part of the reason I would like to know this stuff is purely on an academic level. However, I also like to always be aware of my options and, IF I ever decided to ditch my Luna/Apollo system, I suppose I might be interested in an HDX card with 32 or 48 channels of Lynx (n). I've gotten accustomed to the DSP workflow, and wouldn't want to go back to purely native. It's a long shot that this would ever happen, but I have thought about it, hence my interest in learning some more details about the inner workings of HDX.
|
|
|
Post by ragan on Mar 6, 2024 13:49:57 GMT -6
I’m interested too. Pretty much solely because:
1. PT (Native, or Studio, or whatever they're branding it as these days) has a bad combination of traits: it's a) buggy and b) expensive
2. Using a separate monitor mixer for low latency (the Symphony Control app in my case) is clunky
|
|
|
Post by copperx on Mar 6, 2024 13:52:14 GMT -6
as you apparently don't know. Blissfully ignorant, and just working. Continue on with the science!! <<thumbsup>> DrBill, just to confirm (before I go looking for an HDX setup), do you automate using a controller, or do you draw automation?
|
|
|
Post by Shadowk on Mar 6, 2024 13:53:29 GMT -6
I AM talking about busses into busses though. NOT parallel. I don't know how many busses Bill is running in series, but I think it's safe to assume that a decent number of those busses are in series, and not all parallel. 700 busses, with many of those busses in series, is still really high, but I can sort of see how you might get that high if you're sending busses to busses to busses. But 700 busses all or nearly all in parallel would just be nuts. I doubt he would even have 700 tracks, much less the need to buss it all on a parallel level. So I'm assuming that his complicated routing means that he's running busses into busses into busses, etc.

I can read, Quint, so whoa, c'mon. I already gave the answer: Bill said five in sequence later on, so you just add it up (simple).. The latency was measured by a few HDX users; the manual doesn't specify it exactly because it's automatically compensated for, it works differently in certain situations, and it's already about 1,600 pages long. Pro Tools does more stuff than you'd ever believe.
There are specific reasons for bussing in certain scenarios, and this is where Pro Tools gets even more complicated, like the following:
"Recording Audio from an External MIDI Instrument
You can record audio from an external MIDI instrument in one of two ways:
• By bussing audio from the output of the Instrument (or Auxiliary Input) track used to monitor the MIDI instrument to an audio track for recording.
• By setting the audio track's Audio Input Path selector to the same Audio Input Path as the Instrument (or Auxiliary Input) track used for monitoring the external MIDI instrument.
This second method avoids any additional latency associated with bussing. However, be sure to mute the Instrument (or Auxiliary Input) track used for monitoring while recording the same audio path to the audio track."
You'll just have to trust the HDX users that it works without issue, and Ultimate / HDX is made for massive productions like Bill's OTT templates. seawell, the delay compensation maximum is 16,383 samples at 48 kHz.
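Taking that 16,383-sample figure at face value, a rough budget check for one signal path might look like the sketch below. The per-insert latencies are invented placeholders, not measured values for any particular plug-in or interface.

```python
# Rough "will I blow the delay-compensation budget?" check, using the
# 16,383-sample ceiling quoted above for 48 kHz. The per-insert latencies
# are invented placeholders, not measured figures.

PDC_LIMIT_SAMPLES = 16_383   # ceiling as quoted for 48 kHz
RATE = 48_000

signal_path = {                 # hypothetical inserts on one track/bus path
    "vocal comp":          64,
    "linear-phase EQ":   4096,
    "look-ahead limiter": 2048,
    "hardware insert":    512,  # converter round trip, made-up value
}

total = sum(signal_path.values())
print(f"Path latency: {total} samples ({total / RATE * 1000:.2f} ms at {RATE} Hz)")

if total > PDC_LIMIT_SAMPLES:
    print("Over the PDC ceiling - commit/render something on this path.")
else:
    print(f"Within budget, {PDC_LIMIT_SAMPLES - total} samples of compensation left.")
```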
|
|
|
Post by copperx on Mar 6, 2024 14:02:34 GMT -6
Good lord. I've avoided Pro Tools most of my life out of principle, but if what you're all saying regarding buss latency is true, I'll eat my shoe.
|
|
|
Post by Quint on Mar 6, 2024 14:17:27 GMT -6
as you apparently don't know. Blissfully ignorant, and just working. Continue on with the science!! <<thumbsup>>
|
|
|
Post by Quint on Mar 6, 2024 14:23:35 GMT -6
I'm not saying that the delay compensation is broken or not doing what it's supposed to do, but if you delayed one track by one second, and then everything else on all other tracks accordingly was delay compensated, you'd still be in a situation where the entire song is now delayed by one full second. One second is not an indiscernible amount of time, if it were to cause issues with things like automation.

I'll go more in depth when I can make a longer post, but in relation specifically to automation... with the caveat that I'm a mouse/pencil man myself... on native, you'll deal with the incurred latency no matter what, to my knowledge. If I'm writing automation on any fader while dealing with one second of delay, it's my understanding that you could drop the fader down abruptly and not hear that impact your mix for one whole second. Now, whether that gets corrected with delay compensation and the automation point gets moved back one second afterwards? Not totally sure. On an HD system, I believe it's more nuanced. If the track you're automating, say a vocal, has no or minimal latency against several other tracks that have varying degrees of latency induced from plugs or inserts, then I believe, as long as the vocal isn't summing and going through processing itself, your fader movements on it would be closer to real time - basically offset by whatever delay is incurred on that targeted track, because I think it's effectively being buffered on playback. Again, I could be absolutely wrong, as I'm not doing much automation on HD or HDX. I'll go into more detail later though. I'll be working on some IRs tonight, so I'll see what Native has for buss latency. It'll be native, so my guess is my numbers might be higher.

Ok. Thanks
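For what it's worth, here's a minimal way to picture what automatic delay compensation is doing in general (a sketch of the idea, not a claim about Pro Tools' or HDX's actual internals): every parallel path gets padded up to the slowest one, so tracks stay aligned with each other, but on a native system live input and fader moves still sit behind playback by roughly that worst-case amount. Track names and latencies below are invented; the one-second figure just mirrors the hypothetical above.

```python
# Minimal model of automatic delay compensation: pad every parallel path
# up to the slowest one. Track names and latencies are invented; this is
# a sketch of the general idea, not of Pro Tools' actual engine.

RATE = 48_000

track_latency = {       # plug-in/insert latency per track, in samples
    "vocal":        0,
    "drum bus":  2048,
    "mix chain": 48_000,   # an extreme 1-second process, per the example
}

worst = max(track_latency.values())
for name, own in track_latency.items():
    pad = worst - own
    print(f"{name:10s}: own delay {own:6d} smp, compensation pad {pad:6d} smp")

# Tracks now line up with each other, but on a native system playback
# (and anything you do live against it) sits behind real time by:
print(f"worst-case shift: {worst / RATE:.2f} s")
```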
|
|
|
Post by Shadowk on Mar 6, 2024 14:26:02 GMT -6
Good lord. I've avoided Pro Tools most of my life out of principle, but if what you're all saying regarding buss latency is true, I'll eat my shoe. As I mentioned in the Carbon thread, I've come across plenty of whacked-out instances when bussing, using DSP-based mixers, with certain plugins, with PDC not working right across several DAWs (whether that's an instance, gradual, etc.). I mean, don't wax those shoes, and get that knife and fork out..
I'm not sure at this stage if I'm just unlucky or if I subconsciously find ways to screw it up. I used to have a TDM / HDX setup and then went native around the Pro Tools 9 era. I switched to Samplitude because things like delay compensation actually worked, sidechaining wasn't an "HDX feature", etc. They fixed that in Studio a few years back, plus there's Carbon, so I decided to jump back in. Crossing my fingers till they ache, there have been zero issues thus far..
It has gotten to the stage where I really don't trust audio equipment as far as I can throw it..
|
|
|
Post by Quint on Mar 6, 2024 14:30:07 GMT -6
Good lord. I've avoided Pro Tools most of my life out of principle, but if what you're all saying regarding buss latency is true, I'll eat my shoe. As I mentioned in the Carbon thread, I've come across plenty of whacked-out instances when bussing, using DSP-based mixers, with certain plugins, with PDC not working right across several DAWs (whether that's an instance, gradual, etc.). I mean, don't wax those shoes, and get that knife and fork out..
I'm not sure at this stage if I'm just unlucky or if I subconsciously find ways to screw it up. I used to have a TDM / HDX setup and then went native around the Pro Tools 9 era. I switched to Samplitude because things like delay compensation actually worked, sidechaining wasn't an "HDX feature", etc. They fixed that in Studio a few years back, plus there's Carbon, so I decided to jump back in. Crossing my fingers till they ache, there have been zero issues thus far..
It has gotten to the stage where I really don't trust audio equipment as far as I can throw it..
Which is why I'm asking questions about how busses and latency work in PT. I don't trust any of these companies to get this stuff right. I'm certainly not just going to assume that it all "just works".
|
|
|
Post by Shadowk on Mar 6, 2024 15:02:31 GMT -6
Which is why I'm asking questions about how busses and latency work in PT. I don't trust any of these companies to get this stuff right. I'm certainly not just going to assume that it all "just works".

I agree, although we've only got first-hand experience and, as I said, I've had no issues thus far. I've never gone to the extent Bill has; my max is probably 250 tracks including busses, and that was years ago. For general recording purposes I might use 15-40 tracks or so (tops) including busses, so there are no issues at all, even with Carbon. It works and does it well..
Avid support has actually been pretty great; I submitted a support case and had an answer in less than five hours. Here's the thing: there's always something, and I'm not saying that HDX / Ultimate hasn't had its buggy, less-than-ideal versions, but with Ultimate / Avid HW you're always a priority client and things get fixed quickly IME. IMO the HD / native / studio / artist crowd has always been an afterthought (at best) for Avid; that's where a lot of my frustrations came from. Studio is catching up, but even today you can tell what their priorities are, as there's no I/O ping tool with sample correction for third-party HW. You're either in it or you're out when it comes to Avid, and even then it's not perfect..
That being said, HDX is a console-replacement ecosystem and it actually does what it says on the tin. I've been using Pro Tools on and off for decades, even had a Digi Mbox on LE, and I know exactly where I stand with them, for better or worse. Pick your poison..
|
|
|
Post by Quint on Mar 6, 2024 15:09:37 GMT -6
Which is why I'm asking questions about how busses and latency work in PT. I don't trust any of these companies to get this stuff right. I'm certainly not just going to assume that it all "just works".

I agree, although we've only got first-hand experience and, as I said, I've had no issues thus far. I've never gone to the extent Bill has; my max is probably 250 tracks including busses, and that was years ago. For general recording purposes I might use 15-40 tracks or so (tops) including busses, so there are no issues at all, even with Carbon. It works and does it well..
Avid support has actually been pretty great; I submitted a support case and had an answer in less than five hours. Here's the thing: there's always something, and I'm not saying that HDX / Ultimate hasn't had its buggy, less-than-ideal versions, but with Ultimate / Avid HW you're always a priority client and things get fixed quickly IME. IMO the HD / native / studio / artist crowd has always been an afterthought (at best) for Avid; that's where a lot of my frustrations came from. Even today you can tell what their priorities are, as there's no I/O ping tool with sample correction for third-party HW. You're either in it or you're out when it comes to Avid, and even then it's not perfect..
That being said, HDX is a console-replacement ecosystem and it actually does what it says on the tin. I've been using Pro Tools on and off for decades, even had a Digi Mbox on LE, and I know exactly where I stand with them, for better or worse. Pick your poison..

I just wanted to know what the bus latency in PT was, and how many busses Bill had in series. I'm not quite sure why this thread turned into a defense/promotion of PT. And it originally was, and still kind of is, a thread about hardware versus software mixing. But threads go to interesting places sometimes.
|
|
|
Post by Shadowk on Mar 6, 2024 15:16:10 GMT -6
I just wanted to know what the bus latency in PT was, and how many busses Bill had in series. I'm not quite sure why this thread turned into a defense/promotion of PT thread. And I told you what it was even with the exceptions y'know cause I was trying to help, how is this turning into a defense / promotion of Avid? Whatever.. Don't care.
|
|
|
Post by Quint on Mar 6, 2024 15:21:48 GMT -6
I just wanted to know what the bus latency in PT was, and how many busses Bill had in series. I'm not quite sure why this thread turned into a defense/promotion of PT thread. And I told you what it was even with the exceptions y'know cause I was trying to help, how is this turning into a defense / promotion of Avid? Whatever.. Don't care. That wasn't meant as a dig at you, by the way. Sorry if you took it that way. It was not my intention. I was just musing at the general flow of this thread.
|
|
|
Post by drbill on Mar 6, 2024 15:24:05 GMT -6
Blissfully ignorant, and just working. Continue on with the science!! <<thumbsup>> DrBill, just to confirm (before I go looking for an HDX setup), do you automate using a controller, or do you draw automation?

I primarily use an Artist Mix (8 faders) for automation, but I'm equally adept at trackball automation as well. It depends where my hands happen to be, and what's called for. I rarely draw in static levels by hand; I'm all about dynamic automation that is constantly moving. Countless automation points in every mix I do. If I flip to the "volume" graph it looks crazy... If there's a track that doesn't have lots of movement, I probably forgot about it. LOL
|
|
|
Post by Dan on Mar 6, 2024 19:55:10 GMT -6
Just as a theoretical question... In your opinions, can software "achieve the same results as HW - it just might take longer and more effort"? Or is it intrinsically inferior, and the same results can't be achieved? I'm still a HW-sounds-better kinda guy, e.g. while interface pres and software emus come close, I still prefer the HW pres. The complicating factor is the stacking of tracks. The modeled pres can sound really good, but I do think there are differences, especially when you're stacking. But I guess that's not really sticking to my mixing question and is more about tracking. Comps/EQ - there's still a more demonstrative attack with HW comps. Or at least that's what I think the issue is. But I have a much, much harder time telling a difference between HW and SW EQs. I have had the experience that HW EQs sound more natural in the top end - like you can go to more extremes without it getting weird. Is that a sample rate thing? Aliasing? IDK. It just occurred to me: good is good. If a SW mix sounds inferior, why can't you just mix it until it doesn't sound inferior? They're supposedly the same tools, right? Here's an analogy (probably a convoluted one): I know when I first started using Luna, I was using the Neve "summing" and I felt like Luna might sound a little better than other DAWs. Once I heard that, I was able to work that into Pro Tools with a little more push from Slate VCC or Noise Ash N-Console. HW can be similar - sometimes you put something in the chain and it "just works." What I've found is that I can get plugins to be very similar... the advantage to HW being, I didn't have to futz with it as much. That's awesome - but it brings me back to my original question: can the same results be achieved? It makes me wonder about the most important thing in our whole chain - monitoring. If you can hear everything, why can't you mold it into the same result? Like, instead of hardware, why not invest in high-end monitoring and DA instead of HW comps and EQs? I don't know the answer - just asking opinions.

John, can you hear a difference? Can you get the same results? That's all that's going to count at the end of the day. For me personally, hardware is way, way sonically superior. Just putting my Thermionic Phoenix MP Vari-mu and Swift tube EQ across the stereo mix bus and mixing into that takes me into a sonic world of wonder that plug-ins can't even begin to enter, imho. So I use heaps of hardware (in growing numbers!). This subject is incredibly subjective, and then there's space, money, power, heat, workflow. There's a lot to consider in making this choice.

Yet you could easily shove plugins or hardware on your two bus that would be much cleaner or much dirtier than your chain. It just wouldn't be the same as your chain. Mixing clients do not care as long as it solves their problems. Now, going back to more common hardware: if you were using, say, an API 2500 plugin that just doesn't get you where the hardware does at all, you might have to use an additional non-linear processor to get you there, further modulating the volume or adding more distortion. This is pretty much the counter to the Michael Brauer "replace cool hardware with chains of plugins" approach, @johnkenn. The sound is more distorted and overmodulated in the end. The solution is to just use something else. If you're taking the top off and using the equal-power filter, maybe just use the Vulf comp for some crazy punch and dirt, or, to RMS-level it and add some stupid overshoots and a little murk, the MDWDRC2 on the two bus instead.
Now for the SSL bus: I've found I can get the action without the tone from The Glue with real-time oversampling maxed out - The Glue being a standard clean digital compressor that just emulates the control path of the SSL bus comp. The distortion isn't there, though; you'll have to add another plugin for that. It's cleaner, for better or worse, than the fxg comp and clones. But I can get the SSL bus comp thing ON CRACK and WAY LOUDER from the Oxford Limiter set right, made by ex-SSL people. It's dirtier, for better or worse. The Waves and SSL Native bus comps don't sound the same at all and feel like they weren't made by people who used the hardware. The DMG Trackcomp2 SSL bus is also way off, but the PSP Buspressor does feel like an SSL with way more murk. But this is just one thing, and you cannot get both the action and the tone in software. You have the Cytomic with the action but no tone, the Sony/Sonnox that's like the hardware if the hardware snorted ground-up amphetamine pills, and everything else (including the PSP, which is like the hardware if it were made by Behringer and not SSL) makes me unhappy to the point I'd rather use something else on the two bus.
|
|
|
Post by enlav on Mar 6, 2024 20:29:20 GMT -6
I haven't gone through all the replies since I started fumbling over my last one via phone, so I can't comment on anything that has developed, but this is what I can report on at present.
(Before I forget: 96k, 24-bit.) Using a small session I had for capturing IRs (or rather, test tones for deconvolution), I have five stereo tracks and no busses. Only EQ3 on the test tone playback track, which is only being used to trim the signal down to a more suitable point. I made a Master just to somewhat emulate what an actual session would have, and a submaster. With all five stereo tracks' outputs assigned to the Submaster, I get 0 reported delay or compensation needed for any tracks (EQ3 also doesn't report any latency to PT). I add another buss: I'm routing three of the five original audio tracks to a new aux channel I've named DummyBuss. DummyBuss is then routed to the Submaster. I've also created a Dummy2Buss that I'm using as sends instead for each of the five stereo tracks. That aux ultimately goes to the Submaster as well.
Pro Tools is still reporting no delay compensation needed. I'll be honest, I don't know if this was the same in HD 7-9, but I wouldn't be surprised if there were no sample delays there either, because... (I'm pulling this out of my ass) I have a feeling that any digital or internal routing (especially on a native system like this) is being done within the buffer, so latency from this bussing wouldn't accrue as long as the system can still play without errors at the current buffer size. Needlessly add 100 blank aux tracks and busses, creating an elaborate signal path, and I'm guessing you would hit some wall where you would need to increase the buffer size.
Of course, this doesn't cover the thing that introduces more significant latency... hardware inserts and plugins. As soon as I introduce convology xt to one of the five tracks, I get 288 samples of reported delay. Every other track and bus that does not touch that specific track has 288 samples of compensation. If we take that plugin off of the audio track and move it to the submaster, we'll find the same reported delay, but because all tracks feed into the submaster, there's no need for compensation.
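A toy model of those two scenarios, using the 288-sample figure from above. The model is deliberately simplified and only covers relative compensation between parallel paths, not anything about the actual Pro Tools engine.

```python
# Toy reproduction of the two scenarios above: the same 288-sample
# plug-in either on one source track or on the shared submaster.
# Simplified model; numbers mirror the post.

PLUGIN_DELAY = 288  # samples, as reported for Convology XT here

def compensation(track_delays):
    """Pad every parallel path up to the slowest one (samples)."""
    worst = max(track_delays.values())
    return {name: worst - d for name, d in track_delays.items()}

# Scenario A: plug-in on track 1 only -> the other four tracks each need
# 288 samples of compensation to stay aligned with it.
tracks_a = {"trk1": PLUGIN_DELAY, "trk2": 0, "trk3": 0, "trk4": 0, "trk5": 0}
print("plug-in on one track: ", compensation(tracks_a))

# Scenario B: plug-in on the submaster that every track feeds -> all
# paths are equal, so no relative compensation is needed; the whole mix
# just shifts by the shared 288 samples.
tracks_b = {f"trk{i}": 0 for i in range(1, 6)}
print("plug-in on submaster:", compensation(tracks_b))
```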
I'm not sure what good it's going to do, but I'm hosting these three pictures showcasing the scenarios. (I find it necessary to remark that I did not pilot the original session.) Anyways, hypothetically speaking, yes, doing lots of bussing can start to add up on your session delay if you're actually bussing with reason (ie - to add an effect to the combined signal of multiple tracks or send them through a hardware insert). I can only speak for myself when it comes to mindset and workflow, but given I've really only started doing more HW at mixdown (rather than just on the way in), I guarantee I'm doing things sub-optimally. But... let's return to that later...
The gargantuan number of busses, at least to me, isn't because I'm doing complex routing but rather just comes out of simplicity..? Every time you assign a track's output to a new aux track or a new audio track, you're effectively making use of a new buss. That buss gets named whatever you name the aux/audio track, and you move on with the session. (Someone correct me if I'm wrong, but back in HD 7, or maybe 6, you had to manually name busses. It's quite possible I just didn't know about assigning directly to a track though! I was definitely not experienced at Pro Tools then...) You may think that it only adds up to so much, but think about just your post: Drum Buss, Parallel Comp Drums, Drum Verb, Drum Loops, Bass and Sub Bass Instruments, Keyboards, Guitar Busses, Vocals, backing vocals, any sort of ancillary busses you may need for instrumentation, and that's before getting into any routing you may need just to make printing master buss effects easier. And that's just post - if you get that session from Studio A one town over, you could be inheriting other busses. The whole reason the number of busses matters is because at one point, Pro Tools versions had a limited number. I don't remember what it was for things like LE or M-Powered. I'm guessing (as someone that has only ever seriously used Pro Tools, with tiny bits of composition work in Cubase when I was far younger) that most DAWs don't actually have a limit on internal bussing... I mean, outside of the system's limitations. I think it's really only been evident to me with "lite" versions of DAWs as a way to get people to upgrade (Pro Tools included, with LE/M-Powered and whatever other products they've had on the market).
Getting back to what I'm guessing are my sub-optimal approaches, and hopefully how all of this relates back to latency/delay comp: if I'm running something back through any sort of hardware (or a plugin I don't have at home, we'll say), I'm going to end up bouncing/recording its effects back into Pro Tools on a separate track, making the original inactive (after taking plenty of notes), and moving on with the session. So by the end of the session, I have a handful of inactive tracks, like individual backing vocals, that have been summed and run through an aux buss, through plugins and then through an xpressor or whatever hardware, back in on an audio track and re-recorded. The X number of backing vocals are inactive and hidden, and while the buss is no longer in use, the 2-channel buss "BVOX" is still in my list of busses, ready and willing if I ever need to go back to those original tracks and do another run through that processing. In the scenario where all of this is in real time and not rendered, those audio tracks go through a buss (no reported delay), to an aux track, get processed by whatever plugins (significant latency, potentially), through the converters for outboard (significant as well), back into PT and maybe more plugins (even more!), and then through any other submaster/master processing/plugins/hardware. For a native system like mine, yeaaah, that would definitely be significant, especially since any DA>AD trip would be a matter of milliseconds instead of... what.. potentially sub-1ms? If you tackle some of that processing by rendering at certain points, you aren't dealing with nearly as much. And the lower the delay from processing and converter round trips, I'm guessing the less you end up needing to worry about real-time fader riding.
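As a rough illustration of why that DA>AD trip matters more on a native rig, here's a simple estimate of one hardware-insert round trip. The buffer sizes and converter latencies are placeholders, not measured figures for any interface or HDX hardware.

```python
# Rough estimate of one analog insert round trip. Buffer sizes and
# converter latencies below are placeholders, not measured values for
# any interface or HDX hardware.

RATE = 96_000

def roundtrip_ms(buffer_samples, da_ms, ad_ms):
    """Output buffer + D/A, then A/D + input buffer, in milliseconds."""
    buffer_ms = buffer_samples / RATE * 1000.0
    return 2 * buffer_ms + da_ms + ad_ms

native_ish = roundtrip_ms(buffer_samples=256, da_ms=0.5, ad_ms=0.5)
dsp_ish    = roundtrip_ms(buffer_samples=32,  da_ms=0.5, ad_ms=0.5)

print(f"native-ish insert round trip:  ~{native_ish:.2f} ms")
print(f"low-buffer/DSP-ish round trip: ~{dsp_ish:.2f} ms")
# Rendering/committing that pass, as described above, takes the whole
# round trip out of the live playback path.
```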
Not to say you can't do any of this on HD (maybe HDX? haha) - running RTAS and TDM plugins back to back basically added huge amounts of latency, forcing the audio between the CPU and the HD cards a needless number of times. HD, and by extension HDX, has the advantage of TDM/DSP processing reducing the latency you would have with RTAS/native processing, plus much lower converter RTL compared to interfaces that aren't HDX-supported. That all noted, let me know if I'm missing the question or inquiry. At some point in this post, I started to question whether I was even answering the right thing!
EDIT: I forgot to add the imgur URL.
|
|
|
Post by thehightenor on Mar 7, 2024 3:25:48 GMT -6
John, can you hear a difference? Can you get the same results? That’s all that’s going to count at the end of the day. For me personally, hardware is way way sonically superior. Just putting my Thermionic Phoenix MP Vari—mu and Swift tube EQ across the stereo mix bus and mixing into that, takes me into a sonic world of wonder plug-ins can’t even begin to enter imho So I use heaps of hardware (in growing number!) This subject is incredibly subjective, and then there’s space, money, power, heat, workflow. There’s a lot to consider in making this choice. yet you could easily shove plugins or hardware on your two bus that would be much cleaner or much dirtier than your chain. It just wouldn’t be the same as your chain. Mixing clients do not care as long as it solves their problems. Now going back to more common hardware, if you were using say an api 2500 plugin that just doesn’t get you where the hardware does at all, you might have to use an additional non-linear processor to get you there, further modulating the volume, or adding more distortion. This is pretty much the counter to the Michael Brauer replace cool hardware with chains of plugins @johnkenn . The sound is more distorted and overmodulated in the end. The solution is just use something else. If you’re taking the top off and using the equal power filter, maybe just use the Vulf comp for some crazy punch and dirt, or to rms level it and add some stupid overshoots and a little murk, the MDWDRC2 on two bus instead. Now for SSL bus, I’ve found I can get the action without the tone with the glue on maxed out real time oversampling. The glue being a standard clean digital compressor that just emulates the control path of the ssl bus comp. The distortion isn’t there through. You’ll have to add another plugin for that. It’s cleaner for better or worse than the fxg comp and clones. But I can get the ssl bus comp thing ON CRACK and WAY LOUDER from the Oxford Limiter set right, made by ex SSL people. It’s dirtier for better or worse. The Waves and SSL native bus comps don’t sound the same at all and feel like they weren’t made by people who used the hardware. The DMG Trackcomp2 SSL Bus is also way off but the PSP Buspressor does feel like an ssl with way more murk. But this is just one thing and you cannot get both the action with the tone in software. You have the cytomic with the action with no tone, the sony/sonnox that’s like the hardware if the hardware snorted ground up amphetamine pills, and everything else (including the psp that's like the hardware if it were made by Behringer and not SSL) makes me unhappy to use to the point I’d rather use something else on two bus. Not sure why you quoted me? I avoid plug-ins where possible.
|
|
|
Post by Dan on Mar 7, 2024 7:33:57 GMT -6
700 buses? Now I'm really curious. My system begins to add latency with 5-10 buses, even with zero latency plugins. . OK.  700 is probably an exaggeration, but I need a LOT.  I mentioned my setup before (above ^^^).  If Im doing a huge orchestral / modern hybrid mockup, almost all tracks will be stereo, there will most likely be 50-100 VI's going to individual record tracks.  Let's call it 75 for the sake of argument. That's 150 busses for the VI's.  Going to record tracks.  Another 150.  That's 300.  Going to print tracks.  Another 150 - that's 450.  Going to stems - that could be another 20-40.  Then final stereo print.  Thank God I'm only doing stereo usually and not 5.1.  So that puts me around 500.  But there's always the sessions that push harder.... This type of workflow allows me several very valuable options.  1.) my writing, production and mixing session is linked in the same session.  I can start making EQ and reverb choices that are reversible as I write that I deem "part" of the writing process.  FX that become integral are already set up for mix as I'm writing.  I can make automation moves while writing.  My writing becomes much more streamlined, and by the time I'm ready to mix - I'm already good distance into it.  i.e. FASTER!!  Once done writing, I'll print the "record" tracks and get to automating and balancing the mix - although as mentioned, I'm probably already quite a ways in.  Once the mix is "finished" I print the print tracks, the stems and the final mix in one pass.  3-4 minutes and I'm done.  Once mixed, if recalls are needed - which honestly is rarely for me - I can go one step back and boost or EQ a stem, 2 steps back and tweak a single element., 3 steps back and adjust or change something fairly major, or all the way back to midi/VI/production tracks if a rewrite or major change has to take place. The biggest strain on the system is in the writing mode while VI's are instantiated and all subsequent tracks are on input.  As I finish writing, the VI's are made inactive and hidden.  Once I get to the print track stage, the automated tracks are made inactive and hidden.  When I'm completely done, I'll leave the main mix, stems, and maybe print tracks "live" and the rest is inactive and hidden until needed - if ever. One thing to note that I mentioned earlier - I'm on a 2010 apple Mac Pro tower that's been upgraded as far as it will go.  I'm due for a whole new setup and will hopefully put it into the chain this year.  This above template is pushing things really hard, but it's been a faithful computer for me for well over a decade.  That's the power of HDX.  AVID's bread and butter is Film/TV and those templates make this one look like childs play.  LOL your computer isn’t new enough to use most of the plugs that behave as well as hardware or the newer cool flexible distortion plugs so it’s tough to compare. Also 300-400 vstis with a lot of cool distortion plug plugs will wreck a session. There’s a reason so many modern streaming productions sound lame. They’re not even trying to hide how “stock” their sample libraries are. They’re not removing the baked in problems with their recording and pre-mixing then try to make them sound a little interesting. You might even have problems opening the guis in a session of some newer plugs. AAX DSP development is dead. 
For well-behaving plugs, you're stuck with old Paul Frindle Sony/Sonnox/Pro Audio DSP dynamics, which sound like they sound for better or worse (smooth, chunky attack, and the lookaheads can bring the highs up or hold them down), McDSP EQs (cool EQs, awful dynamics processors), old Massey plugs that can be cool but are dirtier, older DSP, and Brainworx stuff that's all over the place. Modern CPUs don't need DSP to process that amount of routing in Reaper and Logic. You'll break the audio engine of Pro Tools native and Cubase before you overload the multicore power of the CPU.
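For reference, the bus tally quoted above adds up roughly like this (treating each stereo path as two bus channels, as the quote does, and picking 30 stems from the stated 20-40 range; the stage labels are my reading of the description):

```python
# Tallying the bus count from the template described in the quote above:
# 75 stereo VI paths, each stereo pair counted as two bus channels,
# cascading through record tracks, print tracks, stems and a final print.
# Stage labels are interpretive; 30 stems is picked from the "20-40" range.

STEREO = 2
vi_paths = 75

stages = {
    "VI output busses":    vi_paths * STEREO,  # "That's 150 busses for the VI's"
    "to record tracks":    vi_paths * STEREO,  # "Another 150. That's 300."
    "to print tracks":     vi_paths * STEREO,  # "Another 150 - that's 450."
    "to stems":            30,                 # "another 20-40"
    "final stereo print":  STEREO,
}

running = 0
for stage, count in stages.items():
    running += count
    print(f"{stage:20s} +{count:4d}  (running total {running})")
# Ends up around 480, which squares with the "around 500" figure quoted.
```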
|
|
|
Post by Dan on Mar 7, 2024 7:44:57 GMT -6
You're incorrect. They are not assumptions. Digital processes incur latency. They just do. Add enough of them together, and at some point it could be a problem. I'm not saying it's necessarily a problem for you in your use case, but it's also not a zero added latency situation either. Not a hard concept. Seawell himself similarly mentioned being curious about how such a high number of busses might affect latency. If you are fine with not knowing, that's obviously up to you, but there's nothing wrong with wanting to know more about how something works, as opposed to just assuming that it always will, especially when using it at its extremes. "It just works" doesn't really address the question. In any case, I'm gonna go and look to see if I can find the latency numbers for HDX busses, as you apparently don't know. Pro Tools delay compensation definitely has a limit and it is dependent on the sample rate of your session.  I can't remember the numbers off the top of my head but I'll try to take a look later this evening.  A session at 44.1 kHz can hit that limit quicker than 96kHz for example.  So, if you use a bunch of plug-ins on a track or something with a long look-ahead setting you can definitely hit the limit.  When that happens, the track will turn red and you'll probably start to notice some things sounding out of sync.  It very rarely happens, but when it does, the easiest solution is to just commit the plug-ins to that track and you're back in sync.  I should also mention, I spent a lot of time working in both Logic and Studio One, even sold my Pro Tools rig when I went Logic for a while so I found lots of Pros and Cons with all of them(Pro Tools included).  The main reason I came back to Pro Tools is I've done a lot of high demand tracking(full band tracking at the same time, full drum kit, etc..) and hybrid mixing.  I guess, if your DAW doesn't have delay compensation for hardware inserts, you could write down the delay times for each sample rate and just reference that as needed?  I'm not sure, I've never done it that way.  Having said all that, I made the transition from TDM to HDX but if the whole Pro Tools ecosystem changed that drastically again, I'd have to seriously consider other options as the tracking side of my business has gone down drastically since 2020.  Which, is also why I feel I need hardware more than ever during mixing.  All of these people recording themselves at home is pushing my mixing skills to the limit 🤣 yeah you’ll break the audio engine of pro tools native and Cubase before your cpu gives out with a modern cpu unless you’re stacking up tons of stuff on a di guitar or awful vsti. I’m up against the limits of a 12900k but only for di guitars that need some work and for the current guitarist I’m working with, it’s his Behringer DI that causes the need for the extra plugs beyond the amp sim, smooth compressor, and a bit of colored eq (I’m just using the insane + mode in nova ge that models a transformer). His pedals into the di are fine, which is rare. The new hybrid engine for HDX basically is an admission that aax dsp development is dead. DSP plugins seem to be dead. We have Weiss and UAD ported to native from the sharcs. We just need Eventide to port more h9000 algorithms and the Bricasti reverb. Crane Song Phoenix is coming to native too I hear.
|
|
|
Post by Shadowk on Mar 7, 2024 8:24:31 GMT -6
your computer isn’t new enough to use most of the plugs that behave as well as hardware or the newer cool flexible distortion plugs so it’s tough to compare. Also 300-400 vstis with a lot of cool distortion plug plugs will wreck a session. There’s a reason so many modern streaming productions sound lame. They’re not even trying to hide how “stock” their sample libraries are. They’re not removing the baked in problems with their recording and pre-mixing then try to make them sound a little interesting. You might even have problems opening the guis in a session of some newer plugs. AAX DSP development is dead. For well behaving plugs, you’re stuck with old Paul Frindle Sony/Sonnox/Pro Audio DSP dynamics which sound like they sound for better or worse (smooth, chunky attsck and the lookaheads can bring the highs up or hold them down), McDSP eqs (cool eqs, awful dynamics processors), old Massey plugs that can be cool but are dirtier older dsp, and Brainworx stuff that’s all over the place. Modern CPUs don’t need dsp to process that amount of routing in reaper and logic. You’ll break the audio engine of pro tools native and Cubase before you overload the multicore power of the cpu. Yes, I have been a bit audio "grumpy" lately but to balance context a bit here. I've dipped my toe into film / TV before and I've discussed on the purple site with people who do this as a career, you'd surprised how many high budget productions due to compatability issues (because they work as teams) use stock Pro Tools plugins in HDX. If you can't make a masterpiece with these you're out the door, you either sink or swim. Then again, they also tend to have the best arrangers, composers, studio artists, sound designers and access to whatever room, instruments or tracking equipment they feel like. They have this mysterious thing in audio called a budget. So, their general opinion of this plugin vs. HW situation is in the grand scheme it's like dry pissing into a lake. It's not exactly a level playing field though, I can't just go to my local church and say hey, can I just borrow your hall for three days whilst I record these tracks? Even if I could on a usual mix budget I'd blow it with rentals in the first 5 minutes. When it comes to bands in general, jeez you've no idea what the state of recording is going to be. In less than ideal situations comes creative ways to solve issues, technically speaking there is no reason why you can't get away with stock plugins to create world class mixes but in the world of dry mic'ing, VST's etc. it's just not that easy. Also mixing now seems to collapse into sound design, some sort of compositional replacements and even mastering hence mixing isn't "mixing" anymore.
On the CPU thing, I've got a 32-core Ryzen in a Windows machine. I can break it at 32 samples with six VSTIs and Ozone; then again, I can run about 400 "standard" plugins, maybe even triple that (I got bored). I also have a MacBook Pro, quite recent, but it's the same again. I've tried it in all sorts of configurations; you can't work around how DAWs multi-thread (per channel / per core), and nothing is perfect. We've had this discussion before and we make the best of it..
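A simplified model of that per-channel threading constraint, just to show why one heavy serial chain can choke a small buffer while hundreds of light tracks spread fine across cores. All of the per-block costs below are invented numbers, not measurements of any plugin or DAW.

```python
# Simplified model of per-channel threading: plug-ins on one channel run
# in series on one core, so the whole chain must finish inside one buffer
# callback. All per-block costs below are invented numbers.

BUFFER_SAMPLES = 32
RATE = 48_000
DEADLINE_US = BUFFER_SAMPLES / RATE * 1e6   # ~667 microseconds per block

def chain_cost_us(plugin_costs_us):
    """One channel's plug-ins process back to back on a single core."""
    return sum(plugin_costs_us)

heavy_synth_chain = [200.0, 250.0, 300.0]   # hungry VSTi + mastering chain
light_track       = [20.0, 15.0]            # simple EQ + comp

print(f"deadline per block: {DEADLINE_US:.0f} us")
print(f"heavy chain: {chain_cost_us(heavy_synth_chain):.0f} us -> overruns the buffer")
print(f"light track: {chain_cost_us(light_track):.0f} us -> fine, and hundreds of "
      f"these can be spread across cores")
```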
|
|
|
Post by Dan on Mar 7, 2024 8:30:50 GMT -6
yet you could easily shove plugins or hardware on your two bus that would be much cleaner or much dirtier than your chain. It just wouldn’t be the same as your chain. Mixing clients do not care as long as it solves their problems. Now going back to more common hardware, if you were using say an api 2500 plugin that just doesn’t get you where the hardware does at all, you might have to use an additional non-linear processor to get you there, further modulating the volume, or adding more distortion. This is pretty much the counter to the Michael Brauer replace cool hardware with chains of plugins @johnkenn . The sound is more distorted and overmodulated in the end. The solution is just use something else. If you’re taking the top off and using the equal power filter, maybe just use the Vulf comp for some crazy punch and dirt, or to rms level it and add some stupid overshoots and a little murk, the MDWDRC2 on two bus instead. Now for SSL bus, I’ve found I can get the action without the tone with the glue on maxed out real time oversampling. The glue being a standard clean digital compressor that just emulates the control path of the ssl bus comp. The distortion isn’t there through. You’ll have to add another plugin for that. It’s cleaner for better or worse than the fxg comp and clones. But I can get the ssl bus comp thing ON CRACK and WAY LOUDER from the Oxford Limiter set right, made by ex SSL people. It’s dirtier for better or worse. The Waves and SSL native bus comps don’t sound the same at all and feel like they weren’t made by people who used the hardware. The DMG Trackcomp2 SSL Bus is also way off but the PSP Buspressor does feel like an ssl with way more murk. But this is just one thing and you cannot get both the action with the tone in software. You have the cytomic with the action with no tone, the sony/sonnox that’s like the hardware if the hardware snorted ground up amphetamine pills, and everything else (including the psp that's like the hardware if it were made by Behringer and not SSL) makes me unhappy to use to the point I’d rather use something else on two bus. Not sure why you quoted me? I avoid plug-ins where possible. you gave your mix bus chain of two modern tube pieces that are very expensive and I’m sure sound perfectly fine but as an example of why hardware is superior, it is an illogical claim. You can build a mix bus chain that’s far cleaner than anything tubes with software or hardware that would fit a far wider variety of material recorded around the world if you knew how to use it or could create one with software or hardware that is far dirtier with a very flexible range of behavior. Hardware, there are things like the Dangerous equipment with the still made vca compressor that sounds quite good and clean and the bax eq that has a subtle tone but you can also get the Drawmer 1968 that sounds like a guitar amp and hook it up to a Drawmer eq that sounds like warm, fat old school opamp solid state or warm, “red” tube amp just to make equipment I’m fond of and have used. 
Then there are the faster optical compressors (less controllable than VCA) and the PWM compressors, which are inherently cleaner methods of gain control than VCAs and tubes - more similar to digital, where there's no hit from the multiplier, no control path bleeding into the audio, and no need for distortion cancellation like a JFET (that distortion is present even in the smoother ones like the Drawmer and the Daking, but it's usually masked by all the other craziness going on in the Drawmer, while you can easily hear it on transients in an 1176 or Daking FETs). Digital, you could easily use an EQ with extremely subtle distortion, like the Slick EQ M exciters or a dialed-back Fuse VQA-1540 (to name two things I've used recently with some harmonic distortion), or something pristine like the Weiss EQ1, Slick EQ M with the exciters off, or many of the iZotope-developed EQs running at higher sample rates (unfortunately many cramp, and the GUIs are awful), into something like the MDWDRC2 or Kotelnikov GE for extremely clean control. Or you could set the Tokyo Dawn and Variety of Sound collaboration that is the Slick EQ GE on one of the "hit it hot to tear the audio a new asshole" modes like excited, funky, or seven, into the Goodhertz Vulf compressor, which is a cleaned-up and expanded version of the legendary Boss sampler vinyl-sim compressor, just to name some purely digital things. The difference with software is that it's cheaper to use in terms of money and time, so there are more new designs. Unfortunately, like hardware, most new plugins seem to be an ersatz of the past, made to sell to musicians and producers.
|
|
|
Post by Dan on Mar 7, 2024 9:00:58 GMT -6
your computer isn’t new enough to use most of the plugs that behave as well as hardware or the newer cool flexible distortion plugs so it’s tough to compare. Also 300-400 vstis with a lot of cool distortion plug plugs will wreck a session. There’s a reason so many modern streaming productions sound lame. They’re not even trying to hide how “stock” their sample libraries are. They’re not removing the baked in problems with their recording and pre-mixing then try to make them sound a little interesting. You might even have problems opening the guis in a session of some newer plugs. AAX DSP development is dead. For well behaving plugs, you’re stuck with old Paul Frindle Sony/Sonnox/Pro Audio DSP dynamics which sound like they sound for better or worse (smooth, chunky attsck and the lookaheads can bring the highs up or hold them down), McDSP eqs (cool eqs, awful dynamics processors), old Massey plugs that can be cool but are dirtier older dsp, and Brainworx stuff that’s all over the place. Modern CPUs don’t need dsp to process that amount of routing in reaper and logic. You’ll break the audio engine of pro tools native and Cubase before you overload the multicore power of the cpu. Yes, I have been a bit audio "grumpy" lately but to balance context a bit here. I've dipped my toe into film / TV before and I've discussed on the purple site with people who do this as a career, you'd surprised how many high budget productions due to compatability issues (because they work as teams) use stock Pro Tools plugins in HDX. If you can't make a masterpiece with these you're out the door, you either sink or swim. Then again, they also tend to have the best arrangers, composers, studio artists, sound designers and access to whatever room, instruments or tracking equipment they feel like. They have this mysterious thing in audio called a budget. So, their general opinion of this plugin vs. HW situation is in the grand scheme it's like dry pissing into a lake. It's not exactly a level playing field though, I can't just go to my local church and say hey, can I just borrow your hall for three days whilst I record these tracks? Even if I could on a usual mix budget I'd blow it with rentals in the first 5 minutes. When it comes to bands in general, jeez you've no idea what the state of recording is going to be. In less than ideal situations comes creative ways to solve issues, technically speaking there is no reason why you can't get away with stock plugins to create world class mixes but in the world of dry mic'ing, VST's etc. it's just not that easy. Also mixing now seems to collapse into sound design, some sort of compositional replacements and even mastering hence mixing isn't "mixing" anymore.
On the CPU thing, I've got A Ryzen 32 core on a windows machine. I can break that at 32 samples with six VSTI's and Ozone, then again I can run about 400 "standard" plugins maybe even tripple that (I got bored). I also have a Macbook Pro, quite recent but same again.. I've tried it in all sorts of configurations, you can't work around how DAW's multi-thread (per channel / per core), nothing is perfect. We've had this discussion before and we make the best of it..
I don’t know about that. Most of the itb engineers and producers I’ve talked to quit busters own every plugin under the Sun for compatibility with any received session. Not every band is Metallica with an entire rack of BAE pres or Daft Punk who could afford to make the last really big million dollar budget record that went anywhere. Even then modern Metallica still sounds like amp sims and superior drummer and they don’t really try to hide it. Film / tv audio is mostly dialogue. Except for sometimes needing Every restoration plugin under the Sun or to switch between limiters to catch something awful there’s a not a lot to do. The level of product quality in streaming media is far worse than in recorded music no matter how much we make fun of bad sounding big budget albums. The only difference is it’s not limited to death because of the broadcast standards and contracted standards for deliverables. It is under produced now even in the biggest Hollywood movies and streaming television shows with often noticeable lack of de-essing, issues from vsti sample libraries left in, etc. Where 10-20 years ago, it would be cedar dns to death and mixed with generally clean digital and analog tools, often now is it stock reaper, Logic, Cubase/nuendo (for composers) or pro tools stuff, whatever plugins or weird little hardware (dbx 902/520 is a life saver) they can afford or have, that is rendered, sent off, etc to be eventually mixed down into pro tools HDX for the film. The tools to make surround productions sound ultra loud and awful do not even really exist. You have a couple surround limiters that come with the daws that cannot be pushed too hard without sucking horribly. no broadcast processor but shittier style multiband limiters. Just things that are sub Waves L2 level of quality at best like what comes in pro tools and the mcdsp. Surround compressors, there were a couple of hardware units and plugs designed for 5.1 home audio production that are outdated today and a a few ho hum plugs like the stock avid, logic, and waves one and the new PSP Auralcomp that’s functional and almost good. Almost but it still lacks a lot of control and program dependency versus the better stereo compressor plugins and hardware probably just to be sane to operate and program.
|
|