|
Post by kcatthedog on Mar 6, 2024 6:48:10 GMT -6
Agreed, just meant outboard and plug-ins, but more sequentially than simultaneously.
|
|
|
Post by noob on Mar 6, 2024 7:48:20 GMT -6
My take is that it's all subjective for the most part. HW and SW are not better or worse than each other; they are just very different. Different experience, different sounds. A comparison only really makes sense for specific objectives. If the objective is color and harmonic character, I have to give it to HW by a landslide. If the objective is compression, man, it's tough because there are some amazing SW comps out there and I use them on every mix. ITB is usually for more precision legwork on my end, and OTB is for vibes and musicality.
The most ideal setup for me right now is some level of a hybrid where I get the mix as great as I can ITB, and then finish it off by printing individual tracks or busses through hardware to get some color or vibes. If I'm tracking, I want to hit some hardware on the way in too if possible, whether that be drums, vox or bass. Guitars are mostly catching air from an amp anyway, so they already have that "space" around them and that real amp sound.
It all comes down to the individual song and its needs, and how closely you want to intensify the textures and vibes in an authentic, real-sounding way. I think overall, I'd say some level of hybrid is ideal. If you limit yourself to either ITB or OTB, there are going to be sounds you just have more trouble getting either way. Also, I think we can all agree that turning a physical knob is just more fun than playing with a mouse on a screen.
|
|
|
Post by sean on Mar 6, 2024 7:52:15 GMT -6
I think where I’ve landed is:
Remote mixing (I'm going to send a mix to someone and maybe not get notes for a week): Completely in the box, or maybe an analog 2-buss chain. I've got an Alan Smart C1, High Voltage Audio EQ6S, and an Overstayer MAS, and I really don't change the settings on the C1 (maybe I'll use 3ms and/or Auto, but mostly it's 10ms with 100ms release at 4:1), the EQ6S is stepped, and the MAS doesn't really change much, but I sorta wish I had the stepped version.
In-person mixing: Set up the console like a "summing" mix (unity faders) using Pro Tools inserts (I have HDX, so it's not an issue), and after the mix is "finished" make "stems" / commit the hardware inserts so I'm not having to recall hardware settings for a revision. Besides reverbs, it's pretty easy to get it to "null".
But like right now I probably have 60 "open" or "in progress" mixes, with 15 more that'll be added to that list next week, so in the box is pretty essential for me.
|
|
|
Post by Quint on Mar 6, 2024 8:31:31 GMT -6
I get that it all gets delay compensated, and is still lined up with itself, but delay is still delay. So, for example, do you ever find that, when you're doing something like automation fader rides, that there is a discernible difference between what you're hearing and where the automation move actually is recorded on the timeline? Nope. Always in line. Always works. Never think about it. I get that it's all lined up. DSP or computer can both do delay compensation, so I'm not trying to get into a discussion on one being better than the other. My point was simply that a bus into a bus into a bus still incurs a latency hit of some kind, each time you do it, regardless of how it all gets compensated. Even if everything is all lined up and compensated with itself, the entire song (all tracks) will be delay compensated by whatever amount of delay is being caused by the most latent signal path that exists. If you do this enough times... So I was simply wondering if delay could become an issue with THAT many busses (700+). At some point, if you add in enough delay compensation (because of how many busses are going into busses which are going into busses....), I was wondering if that might not cause issues with things like automation. 700+ busses is a pretty extreme edge case, and PT DSP can't overcome physics. Neither can a CPU, for that matter. Maybe it's the case that the number of samples of latency incurred by going thru one additional bus is sufficiently small that, even in a 700 bus use case, the total delay incurred is still small enough to not be noticed? I don't know, but that's why I'm asking. Either way, delay is still delay, even on DSP.
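(To put the "everything gets pushed back by the worst path" idea in concrete terms, here's a minimal illustrative sketch of how automatic delay compensation is usually described. The per-bus figure and the path names are made up for the example; this is not Avid's implementation.)

# Illustrative sketch of delay compensation: every path is padded so it lines
# up with the most latent path, so the whole mix lands that late.
def compensate(path_latency_samples):
    worst = max(path_latency_samples.values())
    padding = {name: worst - lat for name, lat in path_latency_samples.items()}
    return padding, worst

paths = {
    "vocal (no busses)": 0,
    "drums -> drum bus": 33,                      # hypothetical 33 samples per bus hop
    "gtr -> fx bus -> group -> 2-bus": 3 * 33,    # three bus hops in series
}
padding, worst = compensate(paths)
print(padding)   # the less latent paths get delayed to match the worst one
print(f"whole mix is {worst} samples late ({worst / 48000 * 1000:.2f} ms at 48 kHz)")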
|
|
|
Post by Quint on Mar 6, 2024 8:50:54 GMT -6
All of those busses have to eventually stack up on latency, no? A buss into a buss into a buss into a buss.... Never worked on a system of DrBills scale, but on some old HD cards with 192s or 96s, as long as you weren't running RTAS between the myriad of busses, the latency wasn't usually enough to go beyond what PTHD's delay comp could deal with. Not sure how that has changed in modern HDX, but bussing itself seemed efficient. I'm not saying that the delay compensation is broken or not doing what it's supposed to do, but if you delayed one track by one second, and then everything else on all other tracks accordingly was delay compensated, you'd still be in a situation where the entire song is now delayed by one full second. One second is not an indiscernible amount of time, if it were to cause issues with things like automation.
|
|
|
Post by drbill on Mar 6, 2024 10:02:04 GMT -6
....do you ever find that, when you're doing something like automation fader rides, that there is a discernible difference between what you're hearing and where the automation move actually is recorded on the timeline? I have not experienced that. Like I mentioned, I just get to work. Is it measurable? Probably. It's just never presented itself as a practical problem for me. Everything feels tactile and "instantaneous" to me - although as mentioned, I'm sure you could measure some degree of delay.
|
|
|
Post by drbill on Mar 6, 2024 10:07:20 GMT -6
For me it's pretty simple. But to write it all out would kind of take a small book. I've got it dialed in after years of experimentation though. Recall for me is virtually as fast as an ITB recall. But you're tracking your own stuff, right? It always stays set the same? Mostly stays the same. Nobody else is putting their hands on things as it's not a public studio. I've found the sweet spots for me, and tend to leave things alone. But the music varies a lot - from big cinematic trailer stuff to Americana to jazz to trip hop to rock to who knows. This week it's Western swing, electronica and Beyonce "Texas Hold 'Em" hip hop. LOL
|
|
|
Post by Shadowk on Mar 6, 2024 10:07:58 GMT -6
I get that it's all lined up. DSP or computer can both do delay compensation, so I'm not trying to get into a discussion on one being better than the other. My point was simply that a bus into a bus into a bus still incurs a latency hit of some kind, each time you do it, regardless of how it all gets compensated. Even if everything is all lined up and compensated with itself, the entire song (all tracks) will be delay compensated by whatever amount of delay is being caused by the most latent signal path that exists. If you do this enough times... So I was simply wondering if delay could become an issue with THAT many busses (700+). At some point, if you add in enough delay compensation (because of how many busses are going into busses which are going into busses....), I was wondering if that might not cause issues with things like automation. 700+ busses is a pretty extreme edge case, and PT DSP can't overcome physics. Neither can a CPU, for that matter. Maybe it's the case that the number of samples of latency incurred by going thru one additional bus is sufficiently small that, even in a 700 bus use case, the total delay incurred is still small enough to not be noticed? I don't know, but that's why I'm asking. Either way, delay is still delay, even on DSP. I get where you're coming from; in HDX or TDM at least, it was 33 samples of delay for a bus. However, it's not a "stacking" effect per se: within the DSP mixer every bus has a 33-sample delay, and if any other DAW's approach to multi-threading is anything to go by, this will be per core (usually a channel inhabits one core, which is a single processor in itself). Well, unless you were routing busses into other busses, although whilst I do some odd stuff when parallel mixing, I'd never do that. I believe native, back when I had a look into this (due to some odd behaviour with busses), was more like 200 samples.
So, where DSP usually shines is parallel processing, but let's go on the recorded specs. If you have 4 x Avid DSP cards with 72 cores, you could divide the amount of bus latency roughly by ten, giving you a 330-sample delay, which would be about 7.48 ms. Seeing as I use about six busses, even with the puny 8-core power of Carbon mine would be 33 samples, or 0.7 ms.
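(Quick sanity check on those conversions; this is just the generic samples-to-milliseconds arithmetic, not anything Avid publishes, and the sample rates are assumed: 44.1 kHz for the 330-sample figure, 48 kHz for the Carbon case.)

# Convert a delay in samples to milliseconds: ms = samples / sample_rate * 1000
def samples_to_ms(samples, sample_rate):
    return samples / sample_rate * 1000

print(round(samples_to_ms(330, 44100), 2))  # 7.48 - the 330-sample figure above, at 44.1 kHz
print(round(samples_to_ms(33, 48000), 2))   # 0.69 - the Carbon case, roughly the quoted 0.7 ms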
|
|
|
Post by Blackdawg on Mar 6, 2024 10:08:28 GMT -6
Nope. Always in line. Always works. Never think about it. I get that it's all lined up. DSP or computer can both do delay compensation, so I'm not trying to get into a discussion on one being better than the other. My point was simply that a bus into a bus into a bus still incurs a latency hit of some kind, each time you do it, regardless of how it all gets compensated. Even if everything is all lined up and compensated with itself, the entire song (all tracks) will be delay compensated by whatever amount of delay is being caused by the most latent signal path that exists. If you do this enough times... So I was simply wondering if delay could become an issue with THAT many busses (700+). At some point, if you add in enough delay compensation (because of how many busses are going into busses which are going into busses....), I was wondering if that might not cause issues with things like automation. 700+ busses is a pretty extreme edge case, and PT DSP can't overcome physics. Neither can a CPU, for that matter. Maybe it's the case that the number of samples of latency incurred by going thru one additional bus is sufficiently small that, even in a 700 bus use case, the total delay incurred is still small enough to not be noticed? I don't know, but that's why I'm asking. Either way, delay is still delay, even on DSP. My answer is still the same. I don't ever notice it on big projects. It just works. I don't know how or why but it's never been an issue.
|
|
|
Post by svart on Mar 6, 2024 10:29:01 GMT -6
I do busses into busses all the time. Never really noticed any delay.
However, my headphone sends are taken directly from the tracks themselves and sent through the MOTU matrix to the analog outs for the Hearback system, so at least the musicians don't get much of a delay. Since I listen to the main mix output, everything is relative anyway, but I frequently track stuff sitting at the desk listening to the main mix and don't feel any delay either.
Also, the automation is relative to the playback cursor, so I don't know how it can be "off" from the timing of the playback. I also don't really do super tight automation either. It usually starts/stops just before or just after a section anyway.
I've only had maybe one plugin that seemed to freak out the compensation in Reaper and it would start a second late and end a second late.
|
|
|
Post by drbill on Mar 6, 2024 10:29:06 GMT -6
700 buses? Now I'm really curious. My system begins to add latency with 5-10 buses, even with zero latency plugins. OK. 700 is probably an exaggeration, but I need a LOT. I mentioned my setup before (above ^^^). If I'm doing a huge orchestral / modern hybrid mockup, almost all tracks will be stereo, and there will most likely be 50-100 VI's going to individual record tracks. Let's call it 75 for the sake of argument. That's 150 busses for the VI's. Going to record tracks. Another 150. That's 300. Going to print tracks. Another 150 - that's 450. Going to stems - that could be another 20-40. Then final stereo print. Thank God I'm only doing stereo usually and not 5.1. So that puts me around 500. But there's always the sessions that push harder....
This type of workflow allows me several very valuable options. My writing, production and mixing session is linked in the same session. I can start making EQ and reverb choices as I write that are reversible and that I deem "part" of the writing process. FX that become integral are already set up for mix as I'm writing. I can make automation moves while writing. My writing becomes much more streamlined, and by the time I'm ready to mix - I'm already a good distance into it. i.e. FASTER!! Once done writing, I'll print the "record" tracks and get to automating and balancing the mix - although as mentioned, I'm probably already quite a ways in. Once the mix is "finished" I print the print tracks, the stems and the final mix in one pass. 3-4 minutes and I'm done. Once mixed, if recalls are needed - which honestly is rarely for me - I can go one step back and boost or EQ a stem, 2 steps back and tweak a single element, 3 steps back and adjust or change something fairly major, or all the way back to midi/VI/production tracks if a rewrite or major change has to take place.
The biggest strain on the system is in the writing mode while VI's are instantiated and all subsequent tracks are on input. As I finish writing, the VI's are made inactive and hidden. Once I get to the print track stage, the automated tracks are made inactive and hidden. When I'm completely done, I'll leave the main mix, stems, and maybe print tracks "live" and the rest is inactive and hidden until needed - if ever.
One thing to note that I mentioned earlier - I'm on a 2010 Apple Mac Pro tower that's been upgraded as far as it will go. I'm due for a whole new setup and will hopefully put it into the chain this year. This above template is pushing things really hard, but it's been a faithful computer for me for well over a decade. That's the power of HDX. AVID's bread and butter is Film/TV and those templates make this one look like child's play. LOL
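(Just tallying the arithmetic above, taking the 75-VI case and counting every stage as stereo; the stem count is a guess within the 20-40 range given.)

# Rough tally of the bus count described above (75 stereo VIs assumed throughout)
vi_outputs  = 75 * 2   # 150 - stereo VI outputs
to_record   = 75 * 2   # another 150 going to record tracks -> 300
to_print    = 75 * 2   # another 150 going to print tracks  -> 450
to_stems    = 30       # "20-40" stems, call it 30
final_print = 2        # stereo mix print
print(vi_outputs + to_record + to_print + to_stems + final_print)  # 482, i.e. "around 500"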
|
|
|
Post by Quint on Mar 6, 2024 10:44:03 GMT -6
I get that it's all lined up. DSP or computer can both do delay compensation, so I'm not trying to get into a discussion on one being better than the other. My point was simply that a bus into a bus into a bus still incurs a latency hit of some kind, each time you do it, regardless of how it all gets compensated. Even if everything is all lined up and compensated with itself, the entire song (all tracks) will be delay compensated by whatever amount of delay is being caused by the most latent signal path that exists. If you do this enough times... So I was simply wondering if delay could become an issue with THAT many busses (700+). At some point, if you add in enough delay compensation (because of how many busses are going into busses which are going into busses....), I was wondering if that might not cause issues with things like automation. 700+ busses is a pretty extreme edge case, and PT DSP can't overcome physics. Neither can a CPU, for that matter. Maybe it's the case that the number of samples of latency incurred by going thru one additional bus is sufficiently small that, even in a 700 bus use case, the total delay incurred is still small enough to not be noticed? I don't know, but that's why I'm asking. Either way, delay is still delay, even on DSP. I get where you're coming from, in HDX or TDM at least it was 33 samples of delay for a bus. However it's not a "stacking" effect per se, within the DSP mixer every bus has a 33 sample delay and if any other DAW's methodology of multi-threading is to go by then this will be per core (usually a channel inhabits one core which a single processor in itself). Well, unless you were routing busses into other busses, although whilst I do some odd stuff when parallel mixing I'd never do that. I believe native back when I had a look into this (due to some odd behaviour with busses) it was more like 200 samples.
So, where DSP usually shines is parallel processing but let's go on the recorded spec's. If you have 4 X Avid DSP cards with 72 cores you could divide the amount of bus latency roughly by ten giving you a 330 sample delay which would be about 7.48 ms. Seen as I use about six busses even with the puny 8 core power of Carbon mine would be 33 samples or 0.7ms..
I AM talking about busses into busses though. NOT parallel. I don't know how many busses Bill is running in series, but I think it's safe to assume that a decent number of those busses are in series, and not all parallel. 700 busses, with many of those busses in series, is still really high, but I can sort of see how you might get that high if you're sending busses to busses to busses. But 700 busses all or nearly all in parallel would just be nuts. I doubt he would even have 700 tracks, much less the need to buss it all on a parallel level. So I'm assuming that his complicated routing means that he's running busses into busses into busses, etc.
|
|
|
Post by Quint on Mar 6, 2024 10:46:37 GMT -6
I get that it's all lined up. DSP or computer can both do delay compensation, so I'm not trying to get into a discussion on one being better than the other. My point was simply that a bus into a bus into a bus still incurs a latency hit of some kind, each time you do it, regardless of how it all gets compensated. Even if everything is all lined up and compensated with itself, the entire song (all tracks) will be delay compensated by whatever amount of delay is being caused by the most latent signal path that exists. If you do this enough times... So I was simply wondering if delay could become an issue with THAT many busses (700+). At some point, if you add in enough delay compensation (because of how many busses are going into busses which are going into busses....), I was wondering if that might not cause issues with things like automation. 700+ busses is a pretty extreme edge case, and PT DSP can't overcome physics. Neither can a CPU, for that matter. Maybe it's the case that the number of samples of latency incurred by going thru one additional bus is sufficiently small that, even in a 700 bus use case, the total delay incurred is still small enough to not be noticed? I don't know, but that's why I'm asking. Either way, delay is still delay, even on DSP. My answer is still the same. I don't ever notice it on big projects. It just works. I don't know how or why but it's never been an issue. Right, but you're not doing 700 busses, are you? That's my point. What about if someone IS doing 700 busses (assuming that many of those busses are in series)? Just because you're not doing it doesn't mean that it's not a valid question, especially if someone like Bill apparently IS doing 700 busses. If you don't know how or why this stuff works, that's totally fine. But I'm still trying to figure this out.
|
|
|
Post by Quint on Mar 6, 2024 10:49:49 GMT -6
700 buses? Now I'm really curious. My system begins to add latency with 5-10 buses, even with zero latency plugins. . OK. 700 is probably an exaggeration, but I need a LOT. I mentioned my setup before (above ^^^). If Im doing a huge orchestral / modern hybrid mockup, almost all tracks will be stereo, there will most likely be 50-100 VI's going to individual record tracks. Let's call it 75 for the sake of argument. That's 150 busses for the VI's. Going to record tracks. Another 150. That's 300. Going to print tracks. Another 150 - that's 450. Going to stems - that could be another 20-40. Then final stereo print. Thank God I'm only doing stereo usually and not 5.1. So that puts me around 500. But there's always the sessions that push harder.... This type of workflow allows me several very valuable options. 1.) my writing, production and mixing session is linked in the same session. I can start making EQ and reverb choices that are reversible as I write that I deem "part" of the writing process. FX that become integral are already set up for mix as I'm writing. I can make automation moves while writing. My writing becomes much more streamlined, and by the time I'm ready to mix - I'm already good distance into it. i.e. FASTER!! Once done writing, I'll print the "record" tracks and get to automating and balancing the mix - although as mentioned, I'm probably already quite a ways in. Once the mix is "finished" I print the print tracks, the stems and the final mix in one pass. 3-4 minutes and I'm done. Once mixed, if recalls are needed - which honestly is rarely for me - I can go one step back and boost or EQ a stem, 2 steps back and tweak a single element., 3 steps back and adjust or change something fairly major, or all the way back to midi/VI/production tracks if a rewrite or major change has to take place. The biggest strain on the system is in the writing mode while VI's are instantiated and all subsequent tracks are on input. As I finish writing, the VI's are made inactive and hidden. Once I get to the print track stage, the automated tracks are made inactive and hidden. When I'm completely done, I'll leave the main mix, stems, and maybe print tracks "live" and the rest is inactive and hidden until needed - if ever. One thing to note that I mentioned earlier - I'm on a 2010 apple Mac Pro tower that's been upgraded as far as it will go. I'm due for a whole new setup and will hopefully put it into the chain this year. This above template is pushing things really hard, but it's been a faithful computer for me for well over a decade. That's the power of HDX. AVID's bread and butter is Film/TV and those templates make this one look like childs play. LOL So what's the largest number of busses you run in series with one another? 5? 10? 15? And any idea what the samples of latency are for each buss?
|
|
|
Post by noob on Mar 6, 2024 10:58:52 GMT -6
I don't think the amount of busses makes much of a difference at all in modern CPU usage; what really matters is the plugins and processing going on within each bus. If there's no processing going on, only busses, it shouldn't make much difference at all in the long run. It's the plugins themselves that are the CPU hogs that will make you have to go up a buffer size.
|
|
|
Post by drsax on Mar 6, 2024 11:18:50 GMT -6
Johnkenn - My musings and condensed thoughts:
- Plugins cannot achieve the same result. But plugins can achieve incredible and completely professional results.
- Plugins these days sound really great
- Good analog hardware usually sounds greater
- Hardware saves time when tracking
- Plugins are infinitely tweakable without committing
- Hardware generally forces committed decision-making and good committed decision-making often creates great sounding recordings.
- World class results can be had with either, but if you truly re-created a production or mix using real hardware and its plug-in counterparts, you would most definitely hear a distinct difference. If you heard either one of those recordings out in the real world alone, you might never know how it was recorded or mixed.
- There is hardware that no plug-in can replicate, and there are plug-ins that do things that no hardware can replicate; they are both parts of our modern toolbox.
- I use both to achieve the end result, but if forced to, or if I needed to, I could use all hardware or all plug-ins and achieve a good quality result. But I'm glad I get to use both.
- FINAL THOUGHT: when I use hardware and turn knobs, it makes me happy. I really enjoy that process.
|
|
|
Post by drbill on Mar 6, 2024 11:30:00 GMT -6
OK. 700 is probably an exaggeration, but I need a LOT. I mentioned my setup before (above ^^^). If Im doing a huge orchestral / modern hybrid mockup, almost all tracks will be stereo, there will most likely be 50-100 VI's going to individual record tracks. Let's call it 75 for the sake of argument. That's 150 busses for the VI's. Going to record tracks. Another 150. That's 300. Going to print tracks. Another 150 - that's 450. Going to stems - that could be another 20-40. Then final stereo print. Thank God I'm only doing stereo usually and not 5.1. So that puts me around 500. But there's always the sessions that push harder.... This type of workflow allows me several very valuable options. 1.) my writing, production and mixing session is linked in the same session. I can start making EQ and reverb choices that are reversible as I write that I deem "part" of the writing process. FX that become integral are already set up for mix as I'm writing. I can make automation moves while writing. My writing becomes much more streamlined, and by the time I'm ready to mix - I'm already good distance into it. i.e. FASTER!! Once done writing, I'll print the "record" tracks and get to automating and balancing the mix - although as mentioned, I'm probably already quite a ways in. Once the mix is "finished" I print the print tracks, the stems and the final mix in one pass. 3-4 minutes and I'm done. Once mixed, if recalls are needed - which honestly is rarely for me - I can go one step back and boost or EQ a stem, 2 steps back and tweak a single element., 3 steps back and adjust or change something fairly major, or all the way back to midi/VI/production tracks if a rewrite or major change has to take place. The biggest strain on the system is in the writing mode while VI's are instantiated and all subsequent tracks are on input. As I finish writing, the VI's are made inactive and hidden. Once I get to the print track stage, the automated tracks are made inactive and hidden. When I'm completely done, I'll leave the main mix, stems, and maybe print tracks "live" and the rest is inactive and hidden until needed - if ever. One thing to note that I mentioned earlier - I'm on a 2010 apple Mac Pro tower that's been upgraded as far as it will go. I'm due for a whole new setup and will hopefully put it into the chain this year. This above template is pushing things really hard, but it's been a faithful computer for me for well over a decade. That's the power of HDX. AVID's bread and butter is Film/TV and those templates make this one look like childs play. LOL So what's the largest number of busses you run in series with one another? 5? 10? 15? And any idea what the samples of latency are for each buss? I laid it out clearly above. Average 5. And I also mentioned I never measure latency. It just works, sounds great, I go back to work and never worry about latency or delay compensation. Here's the deal. There are hundreds of posts about DAW latency and delay compensation floating around these parts. Solution, problems, general head scratching, etc.. Why? People spend more on one mic than they would on an HDX system. Seems crazy to me. I'd be willing to bet that I've made more $$$$$ off my HDX system than I've made with all my other gear combined. Folks can either continue on, or find a solution. Seems fairly simple to me.
|
|
|
Post by Blackdawg on Mar 6, 2024 11:39:17 GMT -6
My answer is still the same. I don't ever notice it on big projects. It just works. I don't know how or why but it's never been an issue. Right, but you're not doing 700 busses, are you? That's my point. What about if someone IS doing 700 busses (assuming that many of those busses are in series)? Just because you're not doing it doesn't mean that it's not a valid question, especially if someone like Bill apparently IS doing 700 busses. If you don't know how or why this stuff works, that's totally fine. But I'm still trying to figure this out. I'm not hitting 700 usually unless doing a film score mix that gets up in the 300-400 range. But I can tell you that post houses doing movie mixes ARE dealing with a LOT of buses and hundreds of tracks. They are all running Pro Tools, usually HDX of some kind, and doing VERY tight automation work scene by scene. Re-recording mixers are SUPER underrated IMO and are the best mixers in the world when it comes to managing huge sessions and working very efficiently. And none of those guys are having issues. And I'd bet they run up against the 2048-voice limit of Pro Tools all the time doing stereo, 5.1, 7.1, and Atmos mixes of the same thing in the same project. As Svart said too, you're reacting to the playback head's position with automation, so it would have to be a super laggy system for that to not work. And if it's that bad then idk how anyone would get anything done ever. If your buffer is really high on any DAW you can induce that delayed playback start and stop, but that's due to you loading the buffer. Also, if you're having issues, increase your sample rate. It'll reduce your latency by half (going from 48k to 96k, and you obviously need the processing power to do so).
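(The sample-rate point works because a buffer or delay of a fixed number of samples takes half as much wall-clock time at double the rate; a quick check with an assumed 128-sample buffer.)

# A fixed buffer size in samples takes less real time at a higher sample rate
buffer_samples = 128
for rate in (48000, 96000):
    print(rate, round(buffer_samples / rate * 1000, 2), "ms")
# 48000 -> 2.67 ms, 96000 -> 1.33 ms: same sample count, half the time at 96 kHz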
|
|
|
Post by Quint on Mar 6, 2024 11:42:42 GMT -6
So what's the largest number of busses you run in series with one another? 5? 10? 15? And any idea what the samples of latency are for each buss? I laid it out clearly above. Average 5. And I also mentioned I never measure latency. It just works, sounds great, I go back to work and never worry about latency or delay compensation. Here's the deal. There are hundreds of posts about DAW latency and delay compensation floating around these parts. Solution, problems, general head scratching, etc.. Why? People spend more on one mic than they would on an HDX system. Seems crazy to me. I'd be willing to bet that I've made more $$$$$ off my HDX system than I've made with all my other gear combined. Folks can either continue on, or find a solution. Seems fairly simple to me. Ok. I didn't get that there was an average of five busses from your post. Also, an average is one thing, but what would ultimately matter from a latency standpoint is the maximum number of serial busses, because that's what would determine the total latency. And you didn't indicate what the maximum would be. Also, I wasn't asking about measured latency, necessarily. I was asking if you or anybody else knew what Avid specs for bus latency. Seems like a knowable thing that they would provide somewhere in a manual. Up above, Shadow mentioned 33 samples, but it didn't seem clear that that number necessarily applies to HDX. There's no need to defend your decision to use HDX. I get why you use it, and I am generally a supporter of DSP solutions versus native. But that doesn't mean that it's still not worth exploring how an edge case like yours might push the limits of what is workable. If a bunch of serial bussing was going to potentially cause me latency issues, I'd want to know about it rather than just assume it all is just going to work. Just saying... Seems like a simple question to me:
1. Max number of busses in series?
2. Latency per bus?
3. #1 x #2 = total latency
4. Is the answer to #3 small enough to not matter?
If so, cool. That explains why it hasn't been a problem in everyday use for you.
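(Running that #1 x #2 arithmetic with the only per-bus number floated in this thread - the 33 samples Shadowk remembered for TDM/HDX, which isn't a confirmed Avid spec - the totals stay small even for fairly deep serial chains.)

# Back-of-envelope: total latency = busses in series x latency per bus
per_bus_samples = 33     # figure quoted earlier in the thread; not a confirmed spec
sample_rate = 48000
for serial_busses in (6, 15, 100):
    total = serial_busses * per_bus_samples
    print(serial_busses, total, round(total / sample_rate * 1000, 2), "ms")
# 6 -> 198 samples (~4 ms), 15 -> 495 (~10 ms), even 100 -> 3300 (~69 ms)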
|
|
|
Post by drbill on Mar 6, 2024 11:45:30 GMT -6
- Hardware generally forces committed decision-making and good committed decision-making often creates great sounding recordings. I'd love to address this even though it's a bit off topic. Since I started learning engineering/production decades ago on a 4 track, then 8 track, then 2" 16 track, then 2" 24 track, then linked 24 tracks, then linked 2" and digital MDM's.... I've always tried to make a concerted effort to "commit". That's how I was taught by a couple of mentors that I consider masters. Yes, there are inherent dangers to that. But there are benefits too. One, a faster overall production sequence. Second - a clearer vision throughout the process. And third - and perhaps most importantly - it can force you into creative directions down the production road that you never envisioned and didn't think you would have to go. It can lead you into unexpected directions. Some of my most loved and beautiful decisions I've had to make were forced upon me by previous "commitments". This is a beautiful, unexpected and creative surprise. When you have to make music 10 hours a day every day, things can get pretty boring. These unexpected horizons come from time to time, and I embrace them wholeheartedly and with joy. There is the very occasional situation where I have to recut, but generally, I'll embrace the "problem" and move in a new/different direction than expected. I love committing.
|
|
|
Post by drbill on Mar 6, 2024 11:50:14 GMT -6
Ok. I didn't get that there was an average of five busses from your post. Also, an average is one thing, but what would ultimately matter from a latency standpoint is the maximum number of serial busses, because that's what would determine the total latency. And you didn't indicate what the maximum would be. Also, I wasn't asking about measured latency, necessarily. I was asking if you or anybody else knew what Avid specs for bus latency. Seems like a knowable thing that they would provide somewhere in a manual. Up above, Shadow mentioned 33 samples, but it didn't seem clear that that number necessarily applies to HDX. There's no need to defend your decision to use HDX. I get why you use it, and I am generally a supporter of DSP solutions versus native. But that doesn't mean that it's still not worth exploring how an edge case like yours might push the limits of what is workable. If a bunch of serial bussing was going to potentially cause me latency issues, I'd want to know about it rather than just assume it all is just going to work. Just saying... Your assumptions are wrong. Either I didn't clearly lay things out, or you need to read it again. Either way, the sessions are quite complex, and they Just Work. Blackdawg, seawell, myself and virtually all other HDX users just get to work and are done with it. There is no need to be worrying about latency, delay compensation or how many busses we are cascading. It just works.
|
|
|
Post by drbill on Mar 6, 2024 11:53:31 GMT -6
Ok. I didn't get that there was an average of five busses from your post. Oops. Sorry. 6 generally. I've got a subgroup with 2 buss processing before the master print track.
|
|
|
Post by christopher on Mar 6, 2024 12:08:31 GMT -6
I love a console and towers of hardware. I have nothing against mixing ITB, I just wish pros would tell the truth. Yeah, one pro example I cited for years - they recorded and engineered it on early PT. Came out awesome! Decades later I find out the real truth: they didn't love the ITB mixes, so the label paid for studio time to re-mix it all on hardware before releasing it 🙃
|
|
|
Post by Quint on Mar 6, 2024 12:10:15 GMT -6
Ok. I didn't get that there was an average of five busses from your post. Also, an average is one thing, but what would ultimately matter from a latency standpoint is the maximum number of serial busses, because that's what would determine the total latency. And you didn't indicate what the maximum would be. Also, I wasn't asking about measured latency, necessarily. I was asking if you or anybody else knew what Avid spec's for bus latency. Seems like a knowable thing that they would provide somewhere in a manual. Up above, Shadow mentioned 33 samples, but it didn't seem clear that that number necessarily applies to HDX. There's no need to defend your decision to use HDX. I get why you use it, and I am generally a supporter of DSP solutions versus native. But that doesn't mean that it's still not worth exploring how an edge case like yours might push the limits of what is workable. If a bunch of serial bussing was going to potentially cause me latency issues, I'd want to know about it rather than just assume it all is just going to work. Just saying... Your assumptions are wrong. Either I didn't' clearly lay things out, or you need to read it again. Either way, the sessions are quite complex, and they Just Work. Blackdawg , seawell , myself and virtually all other HDX users just get to work and are done with it. There is no need to be worrying about latency, delay compensation or how many busses we are cascading. It just works. You're incorrect. They are not assumptions. Digital processes incur latency. They just do. Add enough of them together, and at some point it could be a problem. I'm not saying it's necessarily a problem for you in your use case, but it's also not a zero added latency situation either. Not a hard concept. Seawell himself similarly mentioned being curious about how such a high number of busses might affect latency. If you are fine with not knowing, that's obviously up to you, but there's nothing wrong with wanting to know more about how something works, as opposed to just assuming that it always will, especially when using it at its extremes. "It just works" doesn't really address the question. In any case, I'm gonna go and look to see if I can find the latency numbers for HDX busses, as you apparently don't know.
|
|
|
Post by enlav on Mar 6, 2024 12:41:20 GMT -6
Never worked on a system of DrBills scale, but on some old HD cards with 192s or 96s, as long as you weren't running RTAS between the myriad of busses, the latency wasn't usually enough to go beyond what PTHD's delay comp could deal with. Not sure how that has changed in modern HDX, but bussing itself seemed efficient. I'm not saying that the delay compensation is broken or not doing what it's supposed to do, but if you delayed one track by one second, and then everything else on all other tracks accordingly was delay compensated, you'd still be in a situation where the entire song is now delayed by one full second. One second is not an indiscernible amount of time, if it were to cause issues with things like automation. I'll go more in depth when I can make a longer post, but in relation specifically to automation... with the caveat that I'm a mouse/pencil man myself... on native, you'll deal with the incurred latency no matter what, to my knowledge. If I'm writing automation on any fader when dealing with one second of delay, it's my understanding that you could drop the fader down abruptly and not hear that impact your mix for one whole second. Now, whether that gets corrected with delay compensation and moves that automation point back one second afterwards? Not totally sure. On an HD system, I believe it's more nuanced. If the track you're automating, say, a vocal, has no or minimal latency against several different tracks that have varying degrees of latency induced from plugs or inserts, I believe that as long as it's not summing and going through processes itself, your fader movements on the vocal would be closer to real-time. Basically whatever the delay incurred on that targeted track is? Because I think it's effectively being buffered on playback. Again, could be absolutely wrong, as I'm not doing much automation on HD or HDX. I'll go into more detail later though. I'll be working on some IRs tonight so I'll see what Native has for buss latency. It'll be native, so my guess is my numbers might be higher.
|
|