|
Post by Quint on Oct 15, 2024 15:43:56 GMT -6
I have some theories on what I can do in Cubase to assign opening/closing an individual insert to an individual button on my SD. I'm curious, have you already figured out how to do this on your Stream Deck, or does the Sideshow app you're using for SD already have that capability built directly into the app? Also, which version of Cubase should I be looking at for purchase?

You obviously have Stream Deck doing something very customized! Sideshow FX has a dedicated Stream Deck Cubase Pro 13 profile with some hot keys to launch VIs, but nothing for plugins. My son has programmed some Windows shortcuts for me, so it might be possible to add your customization to a Sideshow FX Stream Deck profile. I'm a massive Cubase user and fan, so I'm going to say get the Pro version - it has no compromises and is utterly fabulous and king of the DAW hill .... see, told you I was a fan

Highly customized, using Keyboard Maestro and Bome Midi Translator. I've been working on setting things up on SD for Luna for the last year. Much of that work can be directly applied to setting up a similar set of profiles for Cubase, with some relatively minor modifications. At least I think so, based on how I built it and what I've learned about the MCU implementation in Cubase. A big thing for me was to be able to open and close any single plugin on any track without having to touch a mouse, and have the plugin icon displayed on said buttons.
|
|
|
Post by BenjaminAshlin on Oct 15, 2024 16:34:52 GMT -6
I guess the most confusing thing for me here is how a "dual buffer" can somehow override the processing overhead needed for plugins. If a plug needs 1ms to process a block of data, it needs that on DSP or on CPU alike. The DSP advantage was always that DSP-enabled plugs processed in a deterministic amount of time, and you could parallel them up with no further increase in processing time. But that was back when CPUs were in the MHz range; now that they're in the GHz range, they can process data 10-100x faster than most DSP chips can based on clock cycles alone, which is why DSP use in audio has been largely discarded. The only drawback is that CPU processing is nonlinear and non-deterministic, and there is an amount of overhead needed for the OS and the DAW. But back to the plugin processing: buffer settings adjust the amount of samples being processed ahead, at the expense of CPU demand. But how would a "dual buffer" change the amount of processing required? Lowering a buffer always increases the demand on the CPU, and vice versa. Pre-rendering effects would be an effective strategy, but it ALSO takes the exact same processing time, so doing that on the fly would not gain a single millisecond. The ONLY way I can see this working is if the buffer were automatically reduced and plugins over a certain reported latency were disabled. In Reaper, I simply change my sample rate as I need and sometimes disable plugs that report high latency or CPU usage (in Reaper's real-time Performance Report). I haven't seen anyone actually explain to me what these other "dual buffer" DAWs are doing that is different from that.

Because of how realtime latency is handled on the CPU and how threads are dedicated to tracks. The monitored track has a dedicated CPU thread running at a lower buffer. It makes a large difference in practice to the amount of CPU usage on a full session. Every unarmed track is processed at the 2048 buffer size and the armed tracks are processed at your minimum buffer on a dedicated thread. These run on different threads, and the preprocessed audio is lined up with the monitoring buffer.
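For anyone who wants to sanity-check the numbers being thrown around in this thread, the arithmetic is just buffer size divided by sample rate. A quick sketch (the 64-sample monitoring buffer is only an assumed example, and real round-trip latency adds converter and driver overhead on top of this):

def buffer_latency_ms(buffer_samples, sample_rate_hz):
    # One buffer's worth of audio, expressed in milliseconds.
    return 1000.0 * buffer_samples / sample_rate_hz

playback_buffer = 2048   # the large buffer the unarmed playback tracks run at
monitor_buffer = 64      # hypothetical small buffer for the record-armed tracks
rate = 96000

print(round(buffer_latency_ms(playback_buffer, rate), 1))   # ~21.3 ms per block
print(round(buffer_latency_ms(monitor_buffer, rate), 2))    # ~0.67 ms per block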
|
|
|
Post by Quint on Oct 15, 2024 19:19:45 GMT -6
I guess the most confusing thing for me here is how a "dual buffer" can somehow override the processing overhead needed for plugins. If a plug needs 1ms to process a block of data, it needs that on DSP or on CPU alike. The DSP advantage was always that DSP-enabled plugs processed in a deterministic amount of time, and you could parallel them up with no further increase in processing time. But that was back when CPUs were in the MHz range; now that they're in the GHz range, they can process data 10-100x faster than most DSP chips can based on clock cycles alone, which is why DSP use in audio has been largely discarded. The only drawback is that CPU processing is nonlinear and non-deterministic, and there is an amount of overhead needed for the OS and the DAW. But back to the plugin processing: buffer settings adjust the amount of samples being processed ahead, at the expense of CPU demand. But how would a "dual buffer" change the amount of processing required? Lowering a buffer always increases the demand on the CPU, and vice versa. Pre-rendering effects would be an effective strategy, but it ALSO takes the exact same processing time, so doing that on the fly would not gain a single millisecond. The ONLY way I can see this working is if the buffer were automatically reduced and plugins over a certain reported latency were disabled. In Reaper, I simply change my sample rate as I need and sometimes disable plugs that report high latency or CPU usage (in Reaper's real-time Performance Report). I haven't seen anyone actually explain to me what these other "dual buffer" DAWs are doing that is different from that.

Because of how realtime latency is handled on the CPU and how threads are dedicated to tracks. The monitored track has a dedicated CPU thread running at a lower buffer. It makes a large difference in practice to the amount of CPU usage on a full session. Every unarmed track is processed at the 2048 buffer size and the armed tracks are processed at your minimum buffer on a dedicated thread. These run on different threads, and the preprocessed audio is lined up with the monitoring buffer.

I'm still in the trial phase with this whole dual buffer thing, but conceptually it totally makes sense. I only wish I had investigated the dual buffer thing further years ago. How long has this concept been around? If this ends up being able to reliably get me the same levels of latency/performance as DSP in the Apollos did...
|
|
|
Post by svart on Oct 15, 2024 19:25:22 GMT -6
I guess the most confusing thing for me here is how a "dual buffer" can somehow override the processing overhead needed for plugins. If a plug needs 1ms to process a block of data, it needs that on DSP or on CPU alike. The DSP advantage was always that DSP-enabled plugs processed in a deterministic amount of time, and you could parallel them up with no further increase in processing time. But that was back when CPUs were in the MHz range; now that they're in the GHz range, they can process data 10-100x faster than most DSP chips can based on clock cycles alone, which is why DSP use in audio has been largely discarded. The only drawback is that CPU processing is nonlinear and non-deterministic, and there is an amount of overhead needed for the OS and the DAW. But back to the plugin processing: buffer settings adjust the amount of samples being processed ahead, at the expense of CPU demand. But how would a "dual buffer" change the amount of processing required? Lowering a buffer always increases the demand on the CPU, and vice versa. Pre-rendering effects would be an effective strategy, but it ALSO takes the exact same processing time, so doing that on the fly would not gain a single millisecond. The ONLY way I can see this working is if the buffer were automatically reduced and plugins over a certain reported latency were disabled. In Reaper, I simply change my sample rate as I need and sometimes disable plugs that report high latency or CPU usage (in Reaper's real-time Performance Report). I haven't seen anyone actually explain to me what these other "dual buffer" DAWs are doing that is different from that.

Because of how realtime latency is handled on the CPU and how threads are dedicated to tracks. The monitored track has a dedicated CPU thread running at a lower buffer. It makes a large difference in practice to the amount of CPU usage on a full session. Every unarmed track is processed at the 2048 buffer size and the armed tracks are processed at your minimum buffer on a dedicated thread. These run on different threads, and the preprocessed audio is lined up with the monitoring buffer.

Ok, I see what you're saying, but 2048 samples at 96k would be 22ms, and if they are lined up, you can't make something "ahead" in time, so you'd need to retard the low latency thread by 22ms for it to line up, netting you zero. Now if it was truly preprocessed, as in freezing tracks, I could see that working, though.
|
|
|
Post by Quint on Oct 15, 2024 19:35:15 GMT -6
Because of how realtime latency is handled on the CPU and how threads are dedicated to tracks. The monitored track has a dedicated CPU thread running at a lower buffer. It makes a large difference in practice to the amount of CPU usage on a full session. Every unarmed track is processed at the 2048 buffer size and the armed tracks are processed at your minimum buffer on a dedicated thread. These run on different threads, and the preprocessed audio is lined up with the monitoring buffer.

Ok, I see what you're saying, but 2048 samples at 96k would be 22ms, and if they are lined up, you can't make something "ahead" in time, so you'd need to retard the low latency thread by 22ms for it to line up, netting you zero. Now if it was truly preprocessed, as in freezing tracks, I could see that working, though.

Instead of 22 ms, it could be 22 SECONDS, and it still wouldn't matter, as that higher latency is just for playback. The important part is that your record-armed tracks are not subject to that same high buffer. As long as you are monitoring your record-enabled tracks on their own low latency buffer, that's what matters. forum.cockos.com/showthread.php?p=2748087
|
|
|
Post by popmann on Oct 15, 2024 20:40:04 GMT -6
But I do appreciate the help, nonetheless. In any case, just to make sure that we're on the same page here, are you including dual or multi buffers when you say "low latency modes"? I just want to make sure I understand what you are or aren't saying to avoid. In other words, to put it simply, are dual or multi buffers "bad" across the board, or just in SOME DAWs? If bad in only some DAWs, which ones?

Multiple buffers are not RELATED to low latency mode switches. Low latency mode (by whatever name) = always less than ideal. Multiple buffers = a solution for needing low latency input monitoring in a native DAW if you want or need to not use your interface or external hardware mixers. The only negative is that they complicate the system… and apparently cause a huge amount of confusion.
|
|
|
Post by Quint on Oct 15, 2024 21:24:46 GMT -6
But I do appreciate the help, nonetheless. In any case, just to make sure that we're on the same page here, are you including dual or multi buffers when you say "low latency modes"? I just want to make sure I understand what you are or aren't saying to avoid. In other words, to put it simply, are dual or multi buffers "bad" across the board, or just in SOME DAWs? If bad in only some DAWs, which ones?

Multiple buffers are not RELATED to low latency mode switches. Low latency mode (by whatever name) = always less than ideal. Multiple buffers = a solution for needing low latency input monitoring in a native DAW if you want or need to not use your interface or external hardware mixers. The only negative is that they complicate the system… and apparently cause a huge amount of confusion.

It's the terminology that is causing a lot of the problem. The concept of dual buffers is simple enough, but everybody refers to it by a different name, and then when other means of attempting to deal with latency get thrown into the mix too, with similar-sounding names, it all gets jumbled together into a confusing pile.
|
|
|
Post by EmRR on Oct 15, 2024 21:36:20 GMT -6
Hadn't looked in a while - MOTU DP11 still has an input monitoring mode selection between 'direct hardware playthrough' and 'monitor record enabled tracks through effects'. 'Direct hardware playthrough' is greyed out unless you are connected to a legacy MOTU interface (pre-AVB, so 2013 and earlier); at that time they bypassed the latency question with a dedicated hardware solution. The AVB mixer handles latency well, but it's a damn pain in the ass to manage, so I understand the desire to have the DAW handle it somehow. I still have to run a fairly high buffer setting with any plugs loaded, even if bypassed, on a Mac Studio M1 Max. The computer is far less taxed than DP itself is, according to the competing CPU meters. From the DP perspective I've never understood anyone's ability to run low buffers; monitoring through effects seems impossible with DP on a session of any size and complexity. The answer for me remains a parallel analog monitor path.
|
|
|
Post by thehightenor on Oct 16, 2024 1:21:36 GMT -6
You obviously have Stream Deck doing something very customized! Sideshow FX has a dedicated Stream Deck Cubase Pro 13 profile with some hot keys to launch VIs, but nothing for plugins. My son has programmed some Windows shortcuts for me, so it might be possible to add your customization to a Sideshow FX Stream Deck profile. I'm a massive Cubase user and fan, so I'm going to say get the Pro version - it has no compromises and is utterly fabulous and king of the DAW hill .... see, told you I was a fan

Highly customized, using Keyboard Maestro and Bome Midi Translator. I've been working on setting things up on SD for Luna for the last year. Much of that work can be directly applied to setting up a similar set of profiles for Cubase, with some relatively minor modifications. At least I think so, based on how I built it and what I've learned about the MCU implementation in Cubase. A big thing for me was to be able to open and close any single plugin on any track without having to touch a mouse, and have the plugin icon displayed on said buttons.

Gosh, that’s clever. If you do get Cubase, it would be great if you could explain how to set that up in Stream Deck - I’d like to be able to launch plug-ins via SD.
|
|
|
Post by svart on Oct 16, 2024 6:24:40 GMT -6
Ok, I see what you're saying, but 2048 samples at 96k would be 22ms, and if they are lined up, you can't make something "ahead" in time, so you'd need to retard the low latency thread by 22ms for it to line up, netting you zero. Now if it was truly preprocessed, as in freezing tracks, I could see that working, though.

Instead of 22 ms, it could be 22 SECONDS, and it still wouldn't matter, as that higher latency is just for playback. The important part is that your record-armed tracks are not subject to that same high buffer. As long as you are monitoring your record-enabled tracks on their own low latency buffer, that's what matters. forum.cockos.com/showthread.php?p=2748087

So the parts that are playing back are 22 seconds behind, but the parts you're recording at that moment are 2ms. How are you supposed to record like that? If the buffered playback tracks are 22ms behind, and the tracks you're recording are 2ms, then it's still off unless you retard the recording tracks to line up, else you'd be hearing everything else behind. I'm still not sure I get how that works. Anyway, I get about 5ms latency without all that. I just fail to see how that's a big deal. Nobody has ever complained about it. I think part of that is the MOTU system, but I have serious doubts that Reaper itself requires this stuff if you're careful about the plugs you use during tracking.
|
|
|
Post by Quint on Oct 16, 2024 6:57:59 GMT -6
Instead of 22 ms, it could be 22 SECONDS, and it still wouldn't matter, as that higher latency is just for playback. The important part is that your record-armed tracks are not subject to that same high buffer. As long as you are monitoring your record-enabled tracks on their own low latency buffer, that's what matters. forum.cockos.com/showthread.php?p=2748087

So the parts that are playing back are 22 seconds behind, but the parts you're recording at that moment are 2ms. How are you supposed to record like that? If the buffered playback tracks are 22ms behind, and the tracks you're recording are 2ms, then it's still off unless you retard the recording tracks to line up, else you'd be hearing everything else behind. I'm still not sure I get how that works. Anyway, I get about 5ms latency without all that. I just fail to see how that's a big deal. Nobody has ever complained about it. I think part of that is the MOTU system, but I have serious doubts that Reaper itself requires this stuff if you're careful about the plugs you use during tracking.

I'm not sure how else to explain it. When recording, you're playing along with the playback signal. It doesn't really matter what took place before that playback signal left the computer, whether it be 22 ms or 22 seconds. It's all delay compensated. You're playing along with it as a guide, live, right here, right now. As long as the record-enabled buffer is sufficiently low to be able to monitor your recorded signal in "real time", along with the playback signal, you're good. In any case, it's an elegant solution, and all else being equal, it's superior to a single buffer system. Every DAW should have this. Reaper doesn't have it, and that's why it's not on my short list. If Reaper ever added it, I'd give Reaper a second look. 5 ms is too much latency though, at least for my needs. I want it down in the 2 ms or less range. True zero latency would be even better, but obviously not achievable if monitoring digitally. But 2 ms or less is doable with dual buffers. However, it's not as easy a thing to reliably achieve with a single buffer system.
|
|
|
Post by Quint on Oct 16, 2024 7:13:02 GMT -6
Highly customized, using Keyboard Maestro and Bome Midi Translator. I've been working on setting things up on SD for Luna for the last year. Much of that work can be directly applied to setting up a similar set of profiles for Cubase, with some relatively minor modifications. At least I think so, based on how I built it and what I've learned about the MCU implementation in Cubase. A big thing for me was to be able to open and close any single plugin on any track without having to touch a mouse, and have the plugin icon displayed on said buttons.

Gosh, that’s clever. If you do get Cubase, it would be great if you could explain how to set that up in Stream Deck - I’d like to be able to launch plug-ins via SD.

We'll see where I end up. It's between S1, Cubase, and Logic, but if I end up on Cubase, I'm sure I'll end up implementing this for myself, in which case I'd be happy to share it with you when it's done.
|
|
|
Post by svart on Oct 16, 2024 9:40:49 GMT -6
So the parts that are playing back are 22 seconds behind, but the parts you're recording at that moment are 2ms. How are you supposed to record like that? If the buffered playback tracks are 22ms behind, and the tracks you're recording are 2ms, then it's still off unless you retard the recording tracks to line up, else you'd be hearing everything else behind. I'm still not sure I get how that works. Anyway, I get about 5ms latency without all that. I just fail to see how that's a big deal. Nobody has ever complained about it. I think part of that is the MOTU system, but I have serious doubts that Reaper itself requires this stuff if you're careful about the plugs you use during tracking.

I'm not sure how else to explain it. When recording, you're playing along with the playback signal. It doesn't really matter what took place before that playback signal left the computer, whether it be 22 ms or 22 seconds. It's all delay compensated. You're playing along with it as a guide, live, right here, right now. As long as the record-enabled buffer is sufficiently low to be able to monitor your recorded signal in "real time", along with the playback signal, you're good. In any case, it's an elegant solution, and all else being equal, it's superior to a single buffer system. Every DAW should have this. Reaper doesn't have it, and that's why it's not on my short list. If Reaper ever added it, I'd give Reaper a second look. 5 ms is too much latency though, at least for my needs. I want it down in the 2 ms or less range. True zero latency would be even better, but obviously not achievable if monitoring digitally. But 2 ms or less is doable with dual buffers. However, it's not as easy a thing to reliably achieve with a single buffer system.

So you're saying that the playback tracks are pre-rendered before playback, then? I have found nothing that states that is the case. If they are processed in "real time", then a 2048 buffer at 96k would put them behind the "low latency" tracks by 22ms. Everything you hear being played would be 22ms behind where your "live" tracks would be. I have yet to see anything that explains how that isn't what is happening. I suppose IF that is what is happening, the DAW can align the "low latency" tracks afterwards by just shifting them back in time 22ms. That still wouldn't work well if you're doing vocal doubles or something, because you'd get phasing issues. I do understand what you've said so far, but I don't think anyone is understanding what I'm asking here. Increasing the buffer on the playback tracks lowers the CPU load so that the live tracks can have more CPU for lower buffers. I get that. But the higher buffers for the playback tracks MUST increase the latency for those playback tracks. HOW does it get around that longer latency for those tracks? You CAN'T simply not care about the time it takes for the playback tracks to leave the computer, because compensation for that timing offset MUST happen at some point. I also don't see where you're getting 2ms from, since the manual link you sent me clearly showed 6ms was the total "low latency" amount.
|
|
|
Post by EmRR on Oct 16, 2024 9:59:11 GMT -6
Maybe I'm missing something, but it seems like any record-enabled tracks in the lower buffer would be time-stamped relative to the static 'real' time of the overall system, as are the playback tracks in the higher buffer, and would be shifted as needed from record to playback, just as a single buffer system shifts everything based on any buffer changes. Two shifts versus one.
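A minimal sketch of that bookkeeping, with made-up numbers and names (this is just the idea, not any particular DAW's internals): the engine knows how much latency the capture path added, so it slides the recorded clip back by that amount when it places it on the timeline, and the playback path's big buffer never enters that calculation.

SAMPLE_RATE = 96000

def place_recorded_clip(arrival_sample_index, input_latency_samples, monitor_buffer_samples):
    # Shift the clip back by everything the capture path added, so it sits
    # on the timeline where it was actually performed.
    return arrival_sample_index - input_latency_samples - monitor_buffer_samples

# Example: audio arrives through ~6 ms of converter/driver latency plus a 64-sample monitor buffer.
pos = place_recorded_clip(arrival_sample_index=960000,
                          input_latency_samples=int(0.006 * SAMPLE_RATE),
                          monitor_buffer_samples=64)
print(pos)   # 959360 -- the clip is written 640 samples earlier than it arrived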
|
|
|
Post by Quint on Oct 16, 2024 10:01:11 GMT -6
I'm not sure how else to explain it. When recording, you're playing along with the playback signal. It doesn't really matter what took place before that playback signal left the computer, whether it be 22 ms or 22 seconds. It's all delay compensated. You're playing along with it as a guide, live, right here, right now. As long as the record-enabled buffer is sufficiently low to be able to monitor your recorded signal in "real time", along with the playback signal, you're good. In any case, it's an elegant solution, and all else being equal, it's superior to a single buffer system. Every DAW should have this. Reaper doesn't have it, and that's why it's not on my short list. If Reaper ever added it, I'd give Reaper a second look. 5 ms is too much latency though, at least for my needs. I want it down in the 2 ms or less range. True zero latency would be even better, but obviously not achievable if monitoring digitally. But 2 ms or less is doable with dual buffers. However, it's not as easy a thing to reliably achieve with a single buffer system.

So you're saying that the playback tracks are pre-rendered before playback, then? I have found nothing that states that is the case. If they are processed in "real time", then a 2048 buffer at 96k would put them behind the "low latency" tracks by 22ms. Everything you hear being played would be 22ms behind where your "live" tracks would be. I have yet to see anything that explains how that isn't what is happening. I suppose IF that is what is happening, the DAW can align the "low latency" tracks afterwards by just shifting them back in time 22ms. That still wouldn't work well if you're doing vocal doubles or something, because you'd get phasing issues. I also don't see where you're getting 2ms from, since the manual link you sent me clearly showed 6ms was the total "low latency" amount.

The lowest latency you can achieve is, of course, down to the capabilities of your system, so it's variable. I think the 6 ms example you keep referring to is just an example they provided, not some sort of absolute. So if the best you can achieve, in a best case scenario, is 6 ms, then that's the best you're going to achieve whether you're using a dual buffer DAW or not. If it's 2 ms, then it's 2 ms. I think you're getting caught up in these absolutes, when that's not actually the main point here. The main point is that, in a dual buffer DAW, you'll be able to continue to run your record-enabled tracks at those best case scenario latencies even when your project is in its late stages and loaded down with a bunch of plugins and tracks. In a single buffer DAW, you now have the competing goals of trying to choose a buffer size that keeps dropouts from happening while also trying to monitor record-enabled tracks at an acceptably low latency. That gets harder and harder to do the further you get into a project, with more and more tracks and more and more plugins, in which case compromises (freeze tracks, bounce down, increase the buffer/latency, etc.) start entering the equation. Coming from DSP monitoring, where I didn't have to make such compromises, I don't want to start making those sorts of compromises, at least if I can avoid it. Dual buffering presents itself as a way to avoid those compromises. This is why I'm intrigued by the idea.
|
|
|
Post by Quint on Oct 16, 2024 10:02:21 GMT -6
Maybe I'm missing something, but it seems like any record-enabled tracks in the lower buffer would be time-stamped relative to the static 'real' time of the overall system, as are the playback tracks in the higher buffer, and would be shifted as needed from record to playback, just as a single buffer system shifts everything based on any buffer changes. Two shifts versus one.

That's how I understand it. It's pretty simple.
|
|
|
Post by svart on Oct 16, 2024 12:02:18 GMT -6
So you're saying that the playback tracks are pre-rendered before playback, then? I have found nothing that states that is the case. If they are processed in "real time", then a 2048 buffer at 96k would put them behind the "low latency" tracks by 22ms. Everything you hear being played would be 22ms behind where your "live" tracks would be. I have yet to see anything that explains how that isn't what is happening. I suppose IF that is what is happening, the DAW can align the "low latency" tracks afterwards by just shifting them back in time 22ms. That still wouldn't work well if you're doing vocal doubles or something, because you'd get phasing issues. I also don't see where you're getting 2ms from, since the manual link you sent me clearly showed 6ms was the total "low latency" amount.

The lowest latency you can achieve is, of course, down to the capabilities of your system, so it's variable. I think the 6 ms example you keep referring to is just an example they provided, not some sort of absolute. So if the best you can achieve, in a best case scenario, is 6 ms, then that's the best you're going to achieve whether you're using a dual buffer DAW or not. If it's 2 ms, then it's 2 ms. I think you're getting caught up in these absolutes, when that's not actually the main point here. The main point is that, in a dual buffer DAW, you'll be able to continue to run your record-enabled tracks at those best case scenario latencies even when your project is in its late stages and loaded down with a bunch of plugins and tracks. In a single buffer DAW, you now have the competing goals of trying to choose a buffer size that keeps dropouts from happening while also trying to monitor record-enabled tracks at an acceptably low latency. That gets harder and harder to do the further you get into a project, with more and more tracks and more and more plugins, in which case compromises (freeze tracks, bounce down, increase the buffer/latency, etc.) start entering the equation. Coming from DSP monitoring, where I didn't have to make such compromises, I don't want to start making those sorts of compromises, at least if I can avoid it. Dual buffering presents itself as a way to avoid those compromises. This is why I'm intrigued by the idea.

Ok, let's use an extreme example to perhaps get my question across another way, because I'm still apparently not getting it across. If somehow you had plugins with 10 seconds of latency on the "playback" tracks, how would the DAW handle the disparity between 10 seconds and 2ms? Would it simply buffer up the whole 10 seconds, so when you hit record it would take 10 seconds to finally start and you'd hear everything "in time"? Or would it pre-render (background freeze) the tracks so that your playback was instantaneous? Because it clearly has to be doing some kind of time trickery so that you don't hear all the tracks 10 seconds behind.
|
|
|
Post by EmRR on Oct 16, 2024 12:36:46 GMT -6
Would it simply buffer up the whole 10 seconds, so when you hit record it would take 10 seconds to finally start and you'd hear everything "in time"?

That's what the DP plug pre-gen is doing, adding the buffer time so the plug has a head start. So with a 2 sec buffer, you hit play and it doesn't start for 2 sec. That's really more about softening the CPU blow of plug usage. Still a different question from the input throughput latency question - that's still gonna be a 2 sec delay if you're set to monitor inputs through effects with DP.
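A rough sketch of that head start with made-up numbers (48k and a 512-sample block are assumptions, not DP settings): the engine renders enough blocks to cover the pre-gen buffer before the transport audibly starts, which is why play begins late but the plugs never have to keep up from sample zero.

SAMPLE_RATE = 48000
PREGEN_SECONDS = 2.0      # hypothetical pre-gen setting, like the 2 sec example above
BLOCK = 512

pregen_samples = int(PREGEN_SECONDS * SAMPLE_RATE)
blocks_to_prefill = -(-pregen_samples // BLOCK)   # ceiling division

print(blocks_to_prefill)                                   # 188 blocks rendered before audio starts
print(round(blocks_to_prefill * BLOCK / SAMPLE_RATE, 2))   # ~2.01 seconds of head start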
|
|
|
Post by veggieryan on Oct 16, 2024 12:39:55 GMT -6
The best way to figure it out is to just demo S1 or Cubase and try it out.
It's actually a bit simpler than that.
You hit play/record and it renders playback through the larger playback buffer.
There is no issue with timing unless your goal is to record the exact moment you hit play/record instead of waiting for the 1000-2000+ samples that your large playback buffer is set to.
There is no need to pre-render the playback audio. It is simply rendered through the larger buffer as needed.
Once the audio is playing you are monitoring yourself through the smaller monitoring buffer for any monitor enabled tracks or virtual instruments ONLY.
When you stop recording and play back, the audio is lined up, because the DAW is aware of the difference in the size of the two buffers and the difference between real time and the small monitoring buffer.
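To make those steps concrete, here's a toy sketch of the idea (purely illustrative - a real engine runs the two paths on separate threads with proper scheduling, and the buffer sizes and function names here are just assumptions):

PLAYBACK_BLOCK = 2048   # large buffer for the unarmed playback tracks
MONITOR_BLOCK = 64      # small buffer for the record-armed / monitored tracks

def render_playback_tracks(n):
    # Stand-in for rendering the whole plugin-laden playback mix for n samples.
    return [0.0] * n

def render_monitored_input(n):
    # Stand-in for the low-latency path: the live input through its few plugins.
    return [0.0] * n

def run_one_playback_block():
    playback = render_playback_tracks(PLAYBACK_BLOCK)   # computed once per big block
    out = []
    for i in range(0, PLAYBACK_BLOCK, MONITOR_BLOCK):
        live = render_monitored_input(MONITOR_BLOCK)     # computed 32 times per big block
        out.extend(p + l for p, l in zip(playback[i:i + MONITOR_BLOCK], live))
    return out

print(len(run_one_playback_block()))   # 2048 samples, delivered to the interface as 32 low-latency chunks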
|
|
|
Post by Quint on Oct 16, 2024 13:50:40 GMT -6
The lowest latency you can achieve is, of course, down to the capabilities of your system, so it's variable. I think the 6 ms example you keep referring to is just an example they provided, not some sort of absolute. So if the best you can achieve, in a best case scenario, is 6 ms, then that's the best you're going to achieve whether you're using a dual buffer DAW or not. If it's 2 ms, then it's 2 ms. I think you're getting caught up in these absolutes, when that's not actually the main point here. The main point is that, in a dual buffer DAW, you'll be able to continue to run your record-enabled tracks at those best case scenario latencies even when your project is in its late stages and loaded down with a bunch of plugins and tracks. In a single buffer DAW, you now have the competing goals of trying to choose a buffer size that keeps dropouts from happening while also trying to monitor record-enabled tracks at an acceptably low latency. That gets harder and harder to do the further you get into a project, with more and more tracks and more and more plugins, in which case compromises (freeze tracks, bounce down, increase the buffer/latency, etc.) start entering the equation. Coming from DSP monitoring, where I didn't have to make such compromises, I don't want to start making those sorts of compromises, at least if I can avoid it. Dual buffering presents itself as a way to avoid those compromises. This is why I'm intrigued by the idea.

Ok, let's use an extreme example to perhaps get my question across another way, because I'm still apparently not getting it across. If somehow you had plugins with 10 seconds of latency on the "playback" tracks, how would the DAW handle the disparity between 10 seconds and 2ms? Would it simply buffer up the whole 10 seconds, so when you hit record it would take 10 seconds to finally start and you'd hear everything "in time"? Or would it pre-render (background freeze) the tracks so that your playback was instantaneous? Because it clearly has to be doing some kind of time trickery so that you don't hear all the tracks 10 seconds behind.

It's going to buffer up to whatever you set your playback buffer to. If that buffer is long enough to render all of the plugins, it will render them all. If not, you'll get dropouts. This is all the same way it works in a single buffer DAW; there is no difference here, in that regard. Now, does a buffer setting actually exist that could handle 10 seconds? I don't think so, but let's not get distracted by that because, in practice, you'd never need a buffer that long and, again, even if you did, it'd still be the same problem in a single buffer DAW. If my previous example, using 22 seconds, is what was throwing you off, sorry for any confusion that may have caused. I simply used 22 seconds because you had used 22 ms, and I was making a point that the buffer/delay time didn't matter. Bottom line, I wouldn't overcomplicate this. It's two buffers running independently of one another, each with a different goal (one for playback stability and one for speed of record monitoring), and the DAW knows how to take care of delay compensation to make sure it all lines up. Anyway, as I said before, your system is capable of whatever low latency minimum your system is capable of. Dual buffers won't change that.

The real benefit of dual buffers is that you continue to be able to enjoy that same low latency on record-enabled tracks throughout the project, instead of having to manage buffers and CPU later down the road when your project is loaded up with tracks and plugins. This is why I was saying that dual buffers are superior to single buffers. Also, I'd say ditto to what Veggie said. To wit, when you asked "Would it simply buffer up the whole 10 seconds so when you hit record, it would take 10 seconds to finally start and you'd hear everything 'in time'?", if I'm understanding you correctly, I'd say that the answer is yes.
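For what it's worth, the delay compensation piece on the playback side can be sketched the same way (the track names and latencies below are invented, not from any DAW): every track gets padded up to the slowest chain, so a plugin's reported latency just shifts where the render starts, independent of the monitoring buffer.

tracks = {
    "drums": 0,        # reported plugin latency in samples
    "bass":  512,
    "vocal": 8192,     # e.g. a lookahead limiter
}

max_latency = max(tracks.values())

for name, latency in tracks.items():
    compensation = max_latency - latency   # extra delay added so everything lines up
    print(name, compensation)
# drums 8192, bass 7680, vocal 0 -- the DAW hides the overall offset by reading ahead that much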
|
|
|
Post by Quint on Oct 16, 2024 15:39:03 GMT -6
So, what about Logic? Where are the Logic people? Among the dual buffered DAWs (S1, Cubase, and Logic), I've heard back from people on S1 and Cubase, and done a decent amount of investigation into those two DAWs. But what about Logic?
I'm looking at Logic, and realizing that it may be the only dual buffered DAW which also has ALL of the other features that I'm wanting, those features being:
1. Plugin based hardware inserts. Logic and S1 have this, but Cubase does not.
2. The ability to bring up a plugin GUI without having to touch a mouse. Logic and Cubase have this, but S1 does not.
3. Plugin names/info sent out over MCU to a hardware controller. Logic and Cubase have this, but S1 does not.
Perhaps Logic is what I'm actually wanting?
Edit: Cubase DOES have an I/O plugin for hardware inserts! Cubase might actually be what I'm looking for.
|
|
|
Post by veggieryan on Oct 16, 2024 15:51:50 GMT -6
Logic does not have dual buffers like Cubase or S1 as far as I know. I would love to be wrong on that.
|
|
|
Post by Quint on Oct 16, 2024 15:59:31 GMT -6
Logic does not have dual buffers like Cubase or S1 as far as I know. I would love to be wrong on that.

I thought it did?
|
|
|
Post by Quint on Oct 16, 2024 16:09:15 GMT -6
Logic does not have dual buffers like Cubase or S1 as far as I know. I would love to be wrong on that.

Looks like it does have a dual buffer. From: www.logicprohelp.com/forums/topic/133693-io-buffer-size-vs-process-buffer-range/

Logic has been one of the first, if not the first, to have a hybrid sound engine where the playback tracks run on a larger buffer and the live (armed) tracks can run on a smaller buffer. Aren't those two settings exactly that? If you have a large project going, raising a low I/O buffer won't change anything if there are no armed tracks. And if your project has grown and you arm a track, you'll still record with low latency, because the armed track will use the I/O Buffer setting and your project won't struggle playing loads of tracks because it's on the Process Buffer setting.
Before Pro Tools and Cubase got a similar hybrid engine (ASIO Guard in the case of Cubase), it was hell doing an overdub late in a busy project, because you then needed to lower the buffer size to record with small latency, and a big project would then struggle to play back at the new low buffer setting. A Cubase friend of mine was at a loss when I told him that I was always staying at a 128-sample buffer setting throughout a project and was able to record at any time with low latency without my project giving me crackles and pops. At the time, Cubase didn't have that two-buffer paradigm, and it was hell working with big projects, because if you had to lower your buffer for an overdub, the project would then be unable to play back correctly. Same for Pro Tools.
|
|
|
Post by veggieryan on Oct 16, 2024 16:12:59 GMT -6
No, they added a "low latency monitoring mode" button, which just disables any plugins in the monitoring path that have high latency.
Playback and monitoring are both on the same buffer.
Overall it's been disappointing to see Logic's stagnation since Apple bought it... I was a big-time Logic user a few decades ago.
|
|