Post by Quint on Oct 13, 2024 9:54:21 GMT -6
I'm learning about what my UF8 can do with S1, and it's encouraging.
Though it looks like the UF8 can do a lot in Cubase and Logic too. I also like that the UF8 comes pre-mapped to a lot of the functions that I still don't actually know how to locate in S1, so at least the learning curve will be eased by using the UF8.
So I still need to confirm whether Logic has a dual-buffer system that works the same way S1's does. Similarly, it sounds like Cubase has a dual-buffer arrangement on PC, but it's still not clear to me whether Cubase also has it on Mac.
So far, S1 is the only DAW I've been able to confirm has the dual-buffer capability I want for use on Mac.
Post by damoongo on Oct 13, 2024 11:49:25 GMT -6
Cubase's "ASIO Direct Monitoring" on PC isn't a dual buffer, or even a low buffer, really. It's just highly integrated communication with your audio interface's drivers: it takes control of the interface's hardware mixer and drives it with the faders inside Cubase, completely bypassing any buffers in the DAW. So you're hearing the sound straight from the converters, but you don't need a secondary mixer (your interface's software mixer), which makes it feel totally integrated.
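A minimal sketch of the idea damoongo describes, in pseudocode form. The `Driver` class and its `set_input_monitor` method are invented for illustration (the real mechanism is a call into the ASIO driver); the point is that the DAW fader only re-sends a hardware mixer setting rather than processing audio:

```python
class Driver:
    """Stand-in for a hypothetical ASIO-style driver wrapper.
    set_input_monitor and its parameters are invented for this sketch."""

    def set_input_monitor(self, input_ch: int, output_ch: int,
                          gain: float, enabled: bool) -> None:
        # Ask the interface's HARDWARE mixer to route input_ch to
        # output_ch at the given gain. The audio never enters the
        # DAW's buffers, so monitoring latency is just the converters.
        ...

def monitor_fader_moved(driver: Driver, channel: int, fader_gain: float) -> None:
    # The DAW fader doesn't touch the audio here; it only updates the
    # hardware mixer setting, which is why it feels "integrated" even
    # though the DAW is out of the signal path.
    driver.set_input_monitor(input_ch=channel, output_ch=0,
                             gain=fader_gain, enabled=True)
```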
Post by Quint on Oct 13, 2024 12:01:06 GMT -6
How does that work between Cubase and a third-party interface? In a situation where the interface and the DAW are from the same company, sure, but I don't understand how it works otherwise. Cubase would need all of the interface makers to coordinate with them to provide that kind of control. Where can I read about this? I've tried reading some things on Cubase, but I haven't seen anything about what you describe. Also, it sounds like this is just for PC. I'm on Mac. Would this also work for Mac?
Post by popmann on Oct 13, 2024 12:15:43 GMT -6
Quoting veggieryan: "As I understand it, 'dual buffer' means you have one large buffer for playback and another small buffer for low-latency monitoring on enabled audio tracks and virtual instruments. That also means the system might have to disable and bypass certain plugins on the tracks you are live monitoring if they have extremely long latency. Disabling plugins that have high latency is what Logic refers to as 'Low Latency Mode,' but this does not mean that Logic has 'dual buffers' like Studio One. I am currently demoing Studio One version 7, and I think all DAWs should work this way; there's not much downside except the difficulty of coding it. As for Cubase, it seems to have something similar for Windows called ASIO-Guard but not Mac, though I need to demo it on Mac to see for sure, because their documentation is not great on this subject. The Cubase docs mention something called 'Constrain Delay Compensation' and say some VST3 plugins have a 'Live' button, which is a low-latency mode for that plugin; activating Constrain Delay Compensation automatically enables the live mode for *some* plugins. Anyway, it's 2024 and latency is still a confusing problem. Sigh."

I don't know why it's confusing. Help me help you. This "understanding" is flawed: you are conflating the disabling of processing with dual buffers. Dual buffering (or triple buffering, as Cubase uses now) is a function of the audio engine. It's nothing you engage ad hoc. Cubase's "Constrain Delay Compensation" = Logic's "Low Latency Mode" = Pro Tools' "Low Latency Monitoring." Steinberg was the only one who called it what it is, as confusing as the name IS. They all do roughly the same thing, which is to more or less fuck up what's going on right now in service of a temporarily lower-latency input. IGNORE THIS FEATURE EXISTS, FOREVER.

#geritolShot #history Multiple buffering starts with Nuendo, 20+ years ago. Even though I was a Logic user longer ago, I honestly never did audio production with it; functionally it was a MIDI sequencer with some soft synths and a sampler in it, and I used embedded hardware for audio. Certainly, Logic DID at some point implement double buffering, but it's handled the OPPOSITE way: the 256 you select is ONLY your input buffer, and the process buffer is some multiple of THAT. Which is why people think Logic is "more CPU efficient" than Cubase. If you selected 256 in Cubase, you got a 64-sample input buffer and the WHOLE project ran at 256. The "equivalent" was to set Cubase to 1024 (a 256 input buffer) and compare what it could do against Logic THERE, with equal input and process buffer latencies. Anyway, those two led the way and were the ONLY apps with it until Studio One, which was created by ex-Nuendo team members.

Keep in mind, Pro Tools TDM, standard since the late '90s, is an embedded hardware system. Nuendo's developer challenge was to allow for music production IN a latent (native) environment. With HDX, Avid shifted to a native playback engine plus a hardware cue mixer/DSP, effectively making a super-fancy version of Nuendo circa 1997. It's what UA does with the Apollo now, with channels automatically switching between the DSP mixer and the CPU mixer based on input monitor state. Technically, Steinberg/Yamaha and PreSonus have had their own, much lesser versions of that for years; NO ONE buys first-party solutions, for some solid reasons.

But the side effect is that there's only so much a company can do to work around monitoring latency, since first-party is the only way to have control over the hardware mixer in every interface. What's the debate about Cubase not having multiple buffers on Mac? It behaves the same; when it had two, it had two on Mac. It's been a while since I've had a new Mac, but I feel like Greg still does all his demonstration videos on his Mac, since every now and again he makes reference to a keystroke being "ALT on Windows" instead of Command...
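A worked version of the buffer arithmetic popmann describes. The 4x process-to-input ratio here is an assumption for illustration (the actual multiple varies by DAW and version); the point is that comparing "256 vs 256" across Logic and Cubase compares different things:

```python
SAMPLE_RATE = 48_000  # Hz, assumed for the example

def ms(samples: int) -> float:
    """Buffer length in milliseconds at the sample rate above."""
    return 1000 * samples / SAMPLE_RATE

# Cubase (as described): the size you pick is the process buffer,
# and the input buffer is a fraction of it.
cubase_setting = 1024
cubase_input = cubase_setting // 4          # 256 samples

# Logic (as described): the size you pick is ONLY the input buffer,
# and the process buffer is a multiple of it.
logic_setting = 256
logic_process = logic_setting * 4           # 1024 samples

print(f"Cubase @ {cubase_setting}: input {ms(cubase_input):.1f} ms, "
      f"process {ms(cubase_setting):.1f} ms")
print(f"Logic  @ {logic_setting}: input {ms(logic_setting):.1f} ms, "
      f"process {ms(logic_process):.1f} ms")
# Both land at ~5.3 ms input / ~21.3 ms process: the fair comparison
# popmann describes (Cubase at 1024 vs Logic at 256).
```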
Post by popmann on Oct 13, 2024 12:17:56 GMT -6
It will not work on a Mac. It DOES work with every third-party ASIO interface; you can thank Apple 100% for ASIO no longer existing on macOS. And it prevents ANY DSP, meaning you can't have a reverb in your cans, because ASIO Direct Monitoring was invented in 1998, when there was zero chance you could run a reverb on your CPU that wasn't garbage. And, as has been pointed out, it is not related to dual buffers at ALL; multiple buffering is done completely in software.
Post by Dan on Oct 13, 2024 12:30:43 GMT -6
I think "untenable" is a bit of an overstretch. I'm upset about the constant pricing gymnastics and switching of policies, which makes me consider dumping them. But Apollo tracking with DSP still works great in its original design. I don't know which plugins you feel are necessary on the way in, but you can probably track with the legacy EQs and compressors and still get excellent tracks. There are a good number of Unison pres now; if you're into those, you have choices. It's beyond stupid that things like the newer Waterfall rotary speaker, amp sims, and Verve Analog Machines aren't UAD-2, but I was tracking bands fine without those before. And I could probably never buy a UAD plugin again and it would still work for years.

Untenable for ME. I should have been clearer about that. If we were just talking about using UA plugins for tracking, then yeah, I may not care as much whether some of the plugins I'm using during tracking are top quality, because I'd theoretically be replacing them with something else during mixing. And a lot of those UAD DSP-only plugins are pretty old and bested by other plugins these days, so I would be looking to replace them in that scenario. However, the appeal for me with Luna and the Apollos was the automatic DSP switching. Mixing is tracking; tracking is mixing. It's the same plugin for both scenarios: you move fluidly between tracking and mixing, shaping with the same plugins from start to finish, provided you use the plugins UA has made available in both DSP and native formats. Great workflow. And yes, this will continue to work for me for now. But what I don't want is to end up in the same boat AAX plugins are in now, where development is dead. I don't want to stick with a system where no further development is going to happen. I don't think anyone would argue that the AAX DSP plugins available today can generally compete with the latest generation of plugins; AAX stopped advancing years ago. So, for the same reason that people mixing natively now wouldn't want to be stuck using only old native plugins, five years from now I don't want to be stuck using old UAD DSP plugins that stopped advancing five or more years ago. We're all always looking for plugins that raise the bar, and this is no different. Maybe I'm a bit of an edge case, I don't know, but for how I wanted to work, which is why I went to Luna, this new Apollo release was a clear line in the sand that things had changed. I don't trust UA at all at this point, and I'm not terribly interested in sticking around to see what other promises they might break or what other policies they might change midstream.

Yep, AAX DSP is totally dead, and the best are still the old Sonnox Oxford plugins, which are very expensive because they charge double what the native versions cost. The Pro Audio DSP DSM is great too. I could mix something down with the Oxfords, using the limiter on the two-bus plus the reverb, but if you need some ass-saving magic like Drum Leveler or RX, distortion like Decapitator or Tupe, mega-powerful dynamics like Kotelnikov or the Weiss DS1, good amp sims, or VSTis, you have to leave AAX DSP and deal with more latency than going all native 😬
Post by Quint on Oct 13, 2024 12:39:15 GMT -6
If Cubase ASIO Direct Monitoring (ADM) won't work on Mac, that takes care of worrying about those possibilities, even if I don't understand how Cubase can control the hardware mixer in a third-party interface. That part still doesn't make any sense to me, but it's moot if I can't do it on Mac. ADM only recently entered the conversation anyway, so it's not something I was really asking about in the first place. Dual buffers are what this conversation was always about, and I've never thought anything about them other than that they are completely on the software side, so we have mutual agreement and understanding there.

So, since dual buffers are what I'm investigating, because that seems to be the best option for my current situation (a DSP hardware mixer is out as an option), I still need to understand how dual buffers work in Cubase and Logic compared to S1. S1 seems to have what I'm looking for, but I'm still not quite sure how its dual-buffer functionality compares to Cubase's or Logic's.
Post by Quint on Oct 13, 2024 12:58:49 GMT -6
I don't know anything about "Constrain Delay Compensation" (CDC) in Cubase or the other DAWs' versions of it, but just to be clear, this is something different from dual or multiple buffers, correct? Just making sure we're on the same page. Also, why are you saying to avoid CDC and its equivalents in other DAWs? What is it about CDC that you don't like?

Also, I think veggieryan DOES understand the difference between disabling plugins and dual buffers. He's simply saying that S1 will disable certain latent plugins IN ADDITION TO using the lower/faster record-path buffer, as an additional measure to reduce latency.

In any case, as I mentioned in my previous reply, dual buffers are what I started this thread about, so I'm trying not to get too far into the weeds with other subjects like CDC. I'm really just trying to determine which DAWs have dual or multiple buffers and which DAW's implementation of them is done best. Thus far, S1 seems to have dual buffers that do what I want, but I'm not sure about other DAWs. Are you saying that Cubase also has dual or multiple buffers on Mac? If so, how does it compare to the dual buffering in S1? What about Logic?
Post by veggieryan on Oct 13, 2024 14:02:30 GMT -6
I went ahead and installed the Cubase 13 Pro demo on my Mac to find out for myself… and yes it does appear to have dual buffers like Studio One. It was confusing because they still call it “ASIO-Guard” even though there is no such thing as ASIO on a Mac. Anyways, you set your buffer as low as you can in the Studio > Studio Setup section and then you can open the Studio > Audio Performance window. You can see that when a track is on playback with plugins it increases the load in the “ASIO-Guard” part of the meter. Tracks that you are live monitoring increase the load in the “Real time” meter.
I grew up using Cubase, so I would probably lean that direction over Studio One if I had to choose now, but I will keep testing.
Post by Quint on Oct 13, 2024 14:07:02 GMT -6
Good to know. Yeah, I was aware of the ASIO-Guard thing and was similarly confused by it, as there is no ASIO on Mac. Also, is ASIO-Guard different from ASIO Direct Monitoring? That could further muddy the waters if you're not clear on what each does. In any case, it sounds like you've verified that dual buffers exist for Cubase on Mac. Good news; that's what this thread was about, and it's good to get some confirmation. I'll be curious to hear how it compares to dual buffers in S1.
Post by BenjaminAshlin on Oct 13, 2024 18:17:15 GMT -6
Cubase and S1 both have dual buffers on Mac and Windows. FWIW, I get better low-latency performance under Cubase than S1, but that is on Windows. I understand that Cubase is better on M1 as well.
Post by BenjaminAshlin on Oct 13, 2024 18:20:11 GMT -6
That's not what Quint is asking about. "ASIO Direct Monitoring" and "Constrain Delay Compensation" are separate from the dual/hybrid buffer (ASIO-Guard). Cubase's dual/hybrid buffer is ASIO-Guard, which is available for Mac and Windows (yes, ASIO is a Windows thing, so the naming is confusing). Cubase is pretty robust: set and forget on any modern computer.
Post by popmann on Oct 13, 2024 20:19:16 GMT -6
Reread. I LITERALLY explained the differences in Cubase's, Logic's, and S1's multiple buffering. I don't keep up with every single DAW, but for 20 years Logic and Cubase were IT, until S1 came along. Pro Tools HDX uses two buffers, one for the HDX mixer and one for the software playback mixer, but that's a little different scenario, and I don't know how it applies to software Pro Tools, as that's always been a redheaded stepchild.

The "in addition to" IS constrain delay compensation, whatever Studio One calls it. The reason I don't like these modes is that most DAWs that are not Cubase or Studio One can't maintain the timeline, and they all do it differently and DO NOT TELL YOU HOW they do it. Did you know that if you have a latent plugin on a bus (not a track, but a bus) in Logic, and you disable it, every single track you record gets an incorrect timestamp, and you can never get it corrected, only sort of slide it around and GUESS? 1500-page PDF manual; you'd think they'd let you know that, but I had to FIND that out, only to have them go "oh, yeah, it DOES work like that." See, the sin isn't that they compensate for the plugin's latency whether you enable or disable it; they do that so you can turn it off and on glitch-free, and I can agree or disagree with the value of that. The SIN is in their not being clear "this is how it works," so I can work around it, or with it, or not use the app in disgust at that choice, whatever...

Low Latency Modes and Constrain Delay Compensation are a big black-box mystery way to get you a less latent input signal "right now," in one click. What DOES it disable? Plugins that take more than X ms of latency? Is it disabling, but properly compensating the overdub record track's timestamp? How does it know where in time your "now" is? Answer: it assumes you're monitoring in software. The software round-trip feed is "now." That is NEVER my now. Whether I'm using hardware digital monitoring or my analog mixer, there is no way for me to tell it in the configuration that it's not.
Post by Quint on Oct 13, 2024 20:55:22 GMT -6
Please don't take this the wrong way, but your posts are a little hard to decipher sometimes. They're a little "stream of consciousness," if you know what I mean. 😜 Some things are hard to follow because the context isn't clear, so I sometimes need to seek clarification to double-check that I'm understanding you correctly. I do appreciate the help, nonetheless.

In any case, just to make sure we're on the same page: are you including dual or multi buffers when you say "low latency modes"? I want to make sure I understand what you are and aren't saying to avoid. In other words, to put it simply: are dual or multi buffers "bad" across the board, or just in SOME DAWs? If only in some, which ones?
Post by thehightenor on Oct 14, 2024 0:23:01 GMT -6
Yes, Cubase has a dual buffer. I leave my buffers at 128 and Cubase takes care of the rest; I never touch them again, no matter how big a mix gets. I could probably run at 64 with my 24-core 13900K workstation, but 128 covers me for everything, and for tracking I use an analog monitoring mixer for true zero-latency tracking.

Quint asked: "So are you not using software monitoring during tracking? You mentioned that you use analog. I guess I'm confused. Am I understanding you to say that, yes, Cubase has dual buffers, but no, you don't use them, because you use analog monitoring?"

The dual buffers help me have extremely low latency for playing virtual instruments. For tracking vocals, guitars, bass, and percussion using mics or line-level signals, I prefer to go one better and have essentially zero latency by using an analog monitor mixer. In fact, sometimes if the rhythm of a keyboard part is critical, I'll play it on my Nord keyboard and post-trigger the MIDI in Cubase, the same process as recording my Roland e-drums: I monitor the Roland module and post-trigger the MIDI in Cubase.

I detest latency; it never has a positive effect on groove and feel. Even 5 ms is awful, IMHO. And the whole "yeah, but 5 ms is only like being 5 feet from an amp" is not true in real-world practice; electronic delay is perceived very differently by humans than sound delay in air. So, to some degree you're right: Cubase has very effective dual buffers, but generally I don't really use the benefit, except for the fact that faders respond normally when mixing.
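For reference, the arithmetic behind the "5 ms is like 5 feet from an amp" comparison (whatever you make of its perceptual validity):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def delay_to_distance_ft(delay_ms: float) -> float:
    """Distance sound travels through air during the given delay."""
    meters = SPEED_OF_SOUND * (delay_ms / 1000)
    return meters * 3.281  # meters to feet

print(f"{delay_to_distance_ft(5.0):.1f} ft")  # ~5.6 ft for 5 ms
```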
Post by Quint on Oct 14, 2024 8:21:32 GMT -6
I'm with you on wanting really low or zero latency. This is why I've been on Apollos for a long while. DSP monitoring may not be zero latency, but at 96k I was getting 1.1 ms RTL on the Apollo, unless I used certain plugins, which would kick the total RTL up to about 2 ms. So it wasn't zero, but it was as good as I could get without going to all-analog monitoring (like you), and I don't know that I'm prepared to make that switch, at least not right now. On a related note, this is one of the reasons I always work at 96k: the latency is lower.

So I'm exploring (a) whether it's possible to achieve similar levels of latency in an all-native system and (b) the best ways to achieve that same level of low latency in an all-native system. Dual buffers seem to be the best alternative I've seen. If I could reliably and stably get 2 ms RTL or less while monitoring through plugins, I'd consider dual buffers an equal and viable alternative to DSP monitoring. Based on what Gravesnumber has said in this thread, I'm optimistic that this is doable, at least with S1.
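A rough sketch of that round-trip math, assuming RTL is approximately one input buffer plus one output buffer plus converter time (the converter figure is a placeholder; real values vary by interface and driver):

```python
def rtl_ms(buffer_samples: int, sample_rate: int,
           converter_ms: float = 0.5) -> float:
    """Very rough native round-trip latency estimate: one input buffer,
    one output buffer, plus a fixed AD/DA allowance (placeholder)."""
    per_buffer_ms = 1000 * buffer_samples / sample_rate
    return 2 * per_buffer_ms + converter_ms

for buf in (16, 32, 64):
    print(f"{buf:>3} samples @ 96 kHz: ~{rtl_ms(buf, 96_000):.2f} ms")
# ~0.83 ms at 16, ~1.17 ms at 32, ~1.83 ms at 64. The same buffer
# sizes at 48 kHz double the buffer portion, which is why working
# at 96k lowers latency for a given buffer setting.
```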
Post by Quint on Oct 15, 2024 7:13:00 GMT -6
So I've been investigating Cubase some more. There's a lot to like about S1, but something I've learned is that S1 doesn't seem to have as fully developed an MCU implementation as Cubase. In particular, S1 doesn't send out plugin names, info, etc. via MCU, but Cubase does. That's a big deal to me because I need that MCU data to update buttons on my Stream Deck in real time with plugin info for each insert on a given channel. It's a bummer that S1 doesn't do that. S1 has a unified plugin window, which holds the plugins from all the inserts in one window for each track. If I were just a mouse-and-keyboard guy, I'd actually prefer S1, but I really am trying to get the keyboard and mouse out of my life as much as I can, and Cubase seems to just have a better MCU implementation than S1. I still need to look into Logic; I haven't spent much time investigating it. I'm surprised none of the resident Logic guys have chimed in.
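For anyone curious what "plugin names over MCU" looks like on the wire: Mackie Control hosts send LCD text as SysEx (header 00 00 66 14, command 0x12, then an LCD offset and ASCII bytes, as commonly documented). A minimal listener sketch using the mido library; the exact header and framing can differ by device mode, so treat this as a starting point rather than a spec:

```python
import mido  # pip install mido python-rtmidi

MCU_HEADER = (0x00, 0x00, 0x66, 0x14)  # Mackie Control Universal
LCD_WRITE = 0x12                       # LCD text update command

def handle_sysex(msg: mido.Message) -> None:
    """Print LCD text updates (e.g. plugin/parameter names) from the host."""
    data = msg.data  # tuple of data bytes, without the F0/F7 framing
    if data[:4] == MCU_HEADER and len(data) > 5 and data[4] == LCD_WRITE:
        offset, chars = data[5], bytes(data[6:])
        print(f"LCD @ {offset:3d}: {chars.decode('ascii', 'replace')!r}")

with mido.open_input() as port:  # pick the port the DAW's MCU device sends to
    for msg in port:
        if msg.type == 'sysex':
            handle_sysex(msg)
```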
Post by svart on Oct 15, 2024 8:47:08 GMT -6
I guess the most confusing thing for me here is: how can a "dual buffer" somehow override the processing overhead needed for plugins? If a plugin needs 1 ms to process a block of data, it needs that on DSP or on CPU alike. The DSP advantage was always that DSP plugins processed in a deterministic amount of time, and you could parallel them up with no further increase in processing time. But that was back when CPUs were in the MHz range; now that they're in the GHz range, they can process data 10-100x faster than most DSP chips based on clock cycles alone, which is why DSP use in audio has been largely discarded.
The only drawback is that CPU processing is nonlinear and non-deterministic and there is an amount of overhead needed for the OS and the DAW.
But back to the plugin processing: buffer settings adjust the number of samples being processed ahead, at the expense of CPU demand.
But how would a "dual buffer" change that amount of processing required?
Lowering a buffer would always increase the demand on the CPU, and vice-versa. Pre-rendering effects would be an effective strategy but it ALSO takes the exact same processing time, so doing that on-the-fly would not gain a single millisecond.
The ONLY way I can see this working is if the buffer were automatically reduced and plugins over a certain reported latency were disabled.
In Reaper, I simply change my sample rate as needed and sometimes disable certain plugins that (per Reaper's real-time performance report) have high latency or CPU usage. I haven't seen anyone actually explain what these other "dual buffer" DAWs are doing differently from that.
Post by Quint on Oct 15, 2024 9:03:06 GMT -6
legacy.presonus.com/products/Studio-One/downloads

Download the Studio One Pro 7 Reference Manual; PDF pages 19 through 21 explain how dual buffers work. Every DAW that employs dual buffers differs a little in its implementation, but this is basically how they all work, in a nutshell. It comes down to dual processes: one process for playback, at a higher buffer, and one for record-armed tracks, at a separate lower buffer. When you lower your buffer in Reaper, you're asking ALL tracks to run at that same buffer, which can cause CPU issues. The benefit of dual buffers is that you're only asking the record-enabled tracks to run at the lower buffer, reducing CPU strain and making it feasible to run lower latency on a small subset of tracks.
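A minimal sketch of the scheme Quint describes. The names (`render_playback`, `process_live`, the block sizes) are invented for illustration; real engines spread work across many threads, but the two-buffer split looks roughly like this:

```python
import queue
import threading

LOW_BLOCK = 32      # samples: record-armed / monitored path
HIGH_BLOCK = 1024   # samples: everything on playback

def render_playback(n: int) -> list[float]:
    return [0.0] * n   # stand-in for rendering the full playback mix

def process_live(block: list[float]) -> list[float]:
    return block       # stand-in for the armed track's plugin chain

playback_fifo: "queue.Queue[list[float]]" = queue.Queue(maxsize=8)

def playback_worker() -> None:
    """Render playback tracks ahead of time in big, CPU-friendly blocks,
    then slice them into low-latency-sized chunks for the device."""
    while True:
        big_block = render_playback(HIGH_BLOCK)   # heavy: all playback plugins
        for i in range(0, HIGH_BLOCK, LOW_BLOCK):
            playback_fifo.put(big_block[i:i + LOW_BLOCK])

def audio_callback(live_input: list[float]) -> list[float]:
    """Called by the driver every LOW_BLOCK samples. Only the monitored
    path is processed here, so its chain must keep up at 32 samples;
    playback is just pulled from the pre-rendered FIFO."""
    monitored = process_live(live_input)          # light: armed tracks only
    prerendered = playback_fifo.get()             # already rendered at 1024
    return [m + p for m, p in zip(monitored, prerendered)]

threading.Thread(target=playback_worker, daemon=True).start()
```

The monitored path pays only the small buffer's latency, while the playback mix keeps the dropout safety margin of the big one; that is the whole trick.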
Post by svart on Oct 15, 2024 9:28:04 GMT -6
OK, so it explains it like this: threading. It creates multiple processing threads in parallel. Other than "VIs in one thread and audio inputs in another," it doesn't explain this in any further detail, but I guess that explains the "dual" aspect of the dual buffer. But what if I don't use VIs?

It disables any plugin that reports more than 3 ms of latency. It disables external round-trips, analyzers, and splits (I assume it means sends/returns). The picture they use to show "low latency" monitoring shows 6.6 ms of monitoring latency at 512 samples. I get similar numbers in Reaper with my current setup, but I think the MOTU driver has a lot to do with that.
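The 3 ms rule svart quotes amounts to something like this on the monitored path (a sketch: the threshold and the armed-track scope follow the S1 manual's description, the rest is invented):

```python
MAX_MONITOR_LATENCY_MS = 3.0   # per the Studio One manual's description

class Plugin:
    def __init__(self, name: str, reported_latency_ms: float):
        self.name = name
        self.reported_latency_ms = reported_latency_ms
        self.bypassed = False

def apply_low_latency_monitoring(chain: list[Plugin]) -> None:
    """Bypass plugins on a record-armed track whose reported latency
    would blow the monitoring budget; playback tracks are untouched."""
    for plug in chain:
        plug.bypassed = plug.reported_latency_ms > MAX_MONITOR_LATENCY_MS

chain = [Plugin("channel EQ", 0.0), Plugin("lookahead limiter", 10.7)]
apply_low_latency_monitoring(chain)
print([(p.name, p.bypassed) for p in chain])
# [('channel EQ', False), ('lookahead limiter', True)]
```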
Post by Quint on Oct 15, 2024 9:36:31 GMT -6
It's not just for VIs; it's for audio tracks too. Yes, it temporarily disables some plugins (above a certain latency), as you read in the manual, but it's not as if the same concerns don't exist in Reaper if you try to run more plugins than your low buffer can handle. The difference is that S1 preemptively and automatically disables those particular plugins, whereas in Reaper you might just get dropouts.

Keep in mind, though, that you can set the device block (record-enabled tracks) buffer down to 32 or even 16 without requiring the rest of your playback tracks to run at that same super-low buffer. This is the real benefit: you can get latency a lot lower than 6.6 ms, and without having to stress your CPU the way you would running the entire project at a 32 or 16 buffer.

Dual buffers are a great solution and remove or reduce some of the traditional issues with native monitoring. Coming from DSP monitoring, this is why dual buffers interested me over traditional single-buffer options. Like DSP monitoring, they let me monitor record-enabled tracks at really low latency without necessarily straining my CPU by running it at the bleeding edge. The safety of running the non-record-enabled tracks at a higher buffer, and thus not having to worry nearly as much about dropouts, is an important benefit to me.
Post by svart on Oct 15, 2024 11:55:11 GMT -6
OK. I went by this statement: "Under this system, the tasks of audio playback and monitoring of audio inputs and virtual instruments are handled as separate processes," which makes it seem like it's only those two processes that are separate, but I'm sure it's more of a layman's explanation than I'm reading into it.

And yes, I turn off high-latency plugins, just manually. Last night we did vocal overdubs, and I left most plugins on, since I'm already working on the meat of the mixes before the band wanted more harmonies added. I even moved the buffers from 128 to 256 and noticed almost no difference, with my highest DPC at about 2000. I'm not sure if something else is the gating factor here, but neither the main singer nor the background singers mentioned any latency in the headphone mix.

I need to test whether my headphone sends set to pre-FX have less latency than normal (post-FX) sends. I do like using the DAW to direct tracks to the headphone sends, since I can enable and disable at will, adjust send levels, etc. It makes monitoring much more flexible than using pure hardware.
Post by thehightenor on Oct 15, 2024 12:01:45 GMT -6
Cubase MIDI Remote Manager is truly amazing. Any device that outputs MIDI can become a control surface with very quick mapping; it's brilliant. Stream Deck has a fantastic dedicated Cubase 13 Pro package I use, and it makes using Cubase even quicker than it already is, and it's already a very fast app to use. I looked at S1, but as a composer/songwriter, Cubase is streets ahead for the way I like to work; I like a very fast workflow. And mixing in Cubase is also a joy. I use Dorico 5, Cubase 13, and WaveLab 12; it's a "plans to plaster" suite of apps. I'm a big Steinberg fan.
Post by Quint on Oct 15, 2024 13:32:11 GMT -6
I have some theories on what I can do in Cubase to assign opening and closing an individual insert to an individual button on my SD. I'm curious: have you already figured out how to do this on your Stream Deck, or does the Sideshow app you're using for the SD already have that capability built in? Also, which version of Cubase should I be looking at for purchase?
Post by thehightenor on Oct 15, 2024 15:21:58 GMT -6
You obviously have your Stream Deck doing something very customized! Sideshow FX has a dedicated Stream Deck Cubase Pro 13 profile with some hotkeys to launch VIs, but nothing for plugins. My son has programmed some Windows shortcuts for me, so it might be possible to add your customization to a Sideshow FX Stream Deck profile. I'm a massive Cubase user and fan, so I'm going to say get the Pro version. It has no compromises and is utterly fabulous, king of the DAW hill... see, told you I was a fan.