|
Post by M57 on Jul 9, 2018 5:16:01 GMT -6
When I track using my 8-year-old Focusrite interface, I always use Logic's low latency record mode. It cuts latency down quite a bit; there's still some there, but it's manageable. So I'm thinking that latency is happening in a number of places. Because I can reduce it at the DAW, my first thought is that there must be latency in my computer as well as in the interface, and it's not insignificant. In fact, based on what I hear, there's more latency in the computer (mostly the D/A converters for monitoring?). But then it occurred to me that the computer might just send the digital signal back to the interface, which then uses its D/A converters, so most of the latency is occurring in the interface. But that doesn't make sense, because then what does low latency record mode do? I know it bypasses plugins, but I'm pretty sure there's a difference even when there are no plugins in the path. Sorry for the dumb question, but from there my brain got stuck in a feedback loop.
|
|
|
Post by adamjbrass on Jul 9, 2018 8:19:26 GMT -6
The first thing to mention is the buffer size in the DAW. This is usually the main factor: there is always some latency with any DAW, since it has to read/write and buffer. I have noticed that a 32-sample buffer is very low latency in Logic. With the AD/DA converters there is a residual latency, fixed in the design; it's the amount of time the conversion itself takes. This can be noticeable if the design has a long latency, but almost all modern converters keep it very minimal. I'm not sure exactly how Logic's Low Latency Mode works, but it probably allocates some DSP towards low buffering.
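To put rough numbers on the buffer-size point above: the buffer's contribution to latency is just samples divided by sample rate. This is a minimal sketch that considers the buffer alone; real systems add driver safety buffers and the fixed converter time on top, so treat these figures as lower bounds.

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# A 32-sample buffer at 44.1 kHz is well under a millisecond one-way:
print(round(buffer_latency_ms(32, 44_100), 3))    # ~0.726 ms
# A 1024-sample buffer is an order of magnitude more noticeable:
print(round(buffer_latency_ms(1024, 44_100), 2))  # ~23.22 ms
```

This is why dropping the DAW buffer from 1024 to 32 samples makes such an audible difference while the converters' fixed delay barely registers.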
|
|
|
Post by stormymondays on Jul 9, 2018 10:20:12 GMT -6
Is there a reason to use software monitoring instead of hardware?
|
|
|
Post by Ward on Jul 9, 2018 11:05:31 GMT -6
1. Have lots of horsepower at hand: processor speed, disk speed and RAM.
2. Make every track sound as good as possible WHILST you're recording, instead of tinkering with every track as you go forward.
3. Use the least number of plugins possible whilst overdubbing.
4. Adjust buffers until the latency is at its lowest for your system.
5. And of course, hardware monitoring, as stormymondays alludes to above. The less strain on the horsepower the better, and the less latency.
Sometimes other members come in with 6, 7, 8, 9, 10, etc. That would be nice!
|
|
ericn
Temp
Balance Engineer
Posts: 15,014
|
Post by ericn on Jul 9, 2018 11:18:29 GMT -6
Is there a reason to use software monitoring instead of hardware? Only convenience, use of plugins, etc.
|
|
|
Post by popmann on Jul 9, 2018 21:00:10 GMT -6
Since no one directly answered.... LLM disables the DAW's compensation engine. Depending on the specific app and plugins, it MIGHT shut their effect off, or it might keep them processing and just not compensate for their latency. Cubase's is actually named for what it is: "Constrain Delay Compensation". Admittedly, LLM is a name that makes more sense to the average user... but then they also think it's OK to use it whenever they want less latency.
It's a bad idea to use it as a regular method of operation. It's really largely a temporary band-aid, which BEST case disables processing and thus gives the performer a different SOUNDING cue.... Worst case, it changes the TIMING of all the tracks in the cue, meaning the player is playing/singing to the wrong timeline.
IMO, you monitor with hardware, either digital (the mixer in your interface) or analog. And the DAW needs to do FULL I/O compensation, sample-accurately. And you'd better TEST that. If you have that all set up, you leave the buffer at 1024 or whatever and make music.... Cubase or Logic or Studio One will do a secondary buffer for VIs being played live.... Your hardware mixer on your interface (or obviously an analog mixer) won't change latency from 32 samples to 2048.... You can always figure out a way to get a cue reverb for a singer or whatever.... Trying to run a machine right on the edge of its technical capabilities so that you can have a "less than absurdly latent" cue in a software mixer is just a recipe for bad music and tech glitches.
|
|
|
Post by donr on Jul 9, 2018 21:45:54 GMT -6
No reason to ever monitor through a DAW unless you're the engineer and the only one listening to the latent tracks. Mixing, for example. Automatic latency compensation of tracks through various DSP processes is a modern marvel.
For any talent monitoring and overdubbing though, pre-DAW mixes are the way to go. As has been said here before, even a cheap analog mixer just for talent monitoring is better than going round trip through your computer and interface. I/O interfaces with onboard pre-DAW mixing are an alternative. That's what I do.
Yeah, it's annoying to have another computer app window open for your interface mixer software. How does Avid handle that? I don't use Pro Tools. Does PT have hardware monitoring?
Digital conversion itself has gotten really fast in the years since digital came in. The Line 6 G-series A/D and D/A guitar wireless has virtually no latency at all.
|
|
|
Post by M57 on Jul 10, 2018 6:50:26 GMT -6
So if I understand you all correctly, I should always route my input (and mix) directly to the headphone output. Even my 10-year-old interface lets me do that, but with no effects, which is no fun. It feels much more musical to have a little reverb on your voice when recording. But with these new interfaces boasting under 2 ms of round-trip latency, it sounds like I could potentially go through the DAW anyway. Though I think popmann suggests that even doing that, things are going to be offset too much?? On the other hand, it sounds like I should be using the onboard DSP to add things like reverb to the headphone mix. That'll have less latency than a round trip, right? ...or does <2 ms make this a non-issue?
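As a rough sanity check on the "<2 ms round trip" claim: a round trip is at least an input buffer, plus an output buffer, plus the converters' fixed AD/DA time. The sketch below uses a made-up `converter_ms` figure purely for illustration (actual converter latency varies by design, and real drivers add safety buffers on top), so it's an estimate of the shape of the problem, not a spec.

```python
def round_trip_ms(buffer_samples: int, sample_rate_hz: int,
                  converter_ms: float = 0.5) -> float:
    """Crude round-trip estimate: input buffer + output buffer + a fixed,
    assumed AD/DA conversion time (converter_ms is a placeholder figure)."""
    one_way = buffer_samples / sample_rate_hz * 1000.0
    return 2 * one_way + converter_ms

# 32 samples at 96 kHz keeps even this estimate under 2 ms:
print(round(round_trip_ms(32, 96_000), 2))  # ~1.17 ms
```

By the same arithmetic, onboard interface DSP skips both software buffers entirely, which is why a reverb in the interface mixer always beats a round trip through the DAW, whatever the buffer setting.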
|
|
|
Post by swurveman on Jul 10, 2018 7:22:48 GMT -6
My 7-year-old RME AES-32 interface can do ASIO Direct Monitoring, meaning that I never have to leave Cubase if I don't want to. Nevertheless, I always use the RME mixer, TotalMix, anyway for cue mix levels. Pro Tools won't do ADM, and it doesn't bother me to toggle between TotalMix for cue mix levels and Cubase/Pro Tools for arming tracks for recording. The best thing I ever did was spend the money for my RME interfaces. I never have latency problems. In this case, I really did get what I paid for.
|
|
|
Post by popmann on Jul 10, 2018 8:25:13 GMT -6
Note: Apple removed ASIO Direct Monitoring at the OS level some years ago.
Re: "Even my ten year old interface".... how long do you think some of us have been doing this? .... A 20-year-old one, and in fact 99% of all of them ever made that didn't wear the Avid/Digidesign badge, had hardware mixers. And on the round trip, I'd bet, though I'd have to look it up, that it was 10 or 15 years ago that Apple themselves declared hardware monitoring dead, because they'd worked with Apogee to achieve 3 ms or just shy of it round trip, all the way through the machine.
...Yes, not compensating for anything. It's the RTL of the first track you record, which doesn't even NEED monitoring. The time most people are most sensitive is when you get to the last... the lead and harmony vocals.
Truth is, RME PCI cards did that, or came close, before then. It's functionally not useful. The implication is that you can set it at a 32-sample buffer and just record with cues coming off the software mixer. How come no one does? How come Avid continued to sell $30k hardware mixing cards for those same Macs and led the industry doing it? Those amazing numbers that some Thunderbolt interfaces get? Same as PCI all those years ago. Which is still higher than digital hardware mixers 30 years ago.
Re: reverb on a vocal, I've posted directions here a million times: you can add reverb to a vocal while its live input is not going round trip through software. Interesting note: not using ASIO Direct Monitoring, which breaks the signal flow (by design). You do need to interact with a second mixer, because you do need the input going into the software mixer via "software monitoring" to get to the reverb aux itself. The only thing not using software monitoring rules out is playing through INSERT effects... i.e., amp sims, autotune, compressors, EQ... things literally in line with the input. There ARE also interfaces with reverb on the interface.** With nicely routable interface mixers like TotalMix, you could hook up your own reverb box. There are a lot of ways to do that while monitoring the input mic NOT all the way through software.
**As a historical note, these didn't START until ten years ago or so, when it became clear that faster Intel chips were never going to solve the RTL of the DAW's mixer. When machines got to be able to run 32 samples on PCIe... and... the whole market, literally some 98%, picked USB and FireWire that could never equal that raw RTL... and plugins started using more and more latency, because when you want an SSL EQ, how many of you chose based on the one that ran in one buffer's time? Right. You picked the one that sounded best without regard for the latency. Those two tidal-wave market forces finally got Steinberg and MOTU and even RME to add reverb DSP on the interface chips. I knew people who bought and probably still use their "3ms as real time" DAWs. If, as an engineer, you choose that: don't buy a plugin that takes more than that buffer, and if your clients are OK with the 3-4 ms... you CAN do it... you've been able to for a long time. But if you think not having a reverb on a vocal is "not fun"... you've got no idea what kind of diligence and constant attention that would take to provide it.
|
|
|
Post by donr on Jul 10, 2018 18:55:29 GMT -6
Where hardware or analog monitoring comes in handy is when you're ⅔ through a mix with a bunch of effects and plug-ins, and you're already at 1024 or 2048 buffer size in the DAW. Then you want to overdub some percussion or a guitar part, or a vocal idea.
You could do a rough, then overdub to that on a new session, and then import the track, but if you have hardware monitoring, you can record and hear the track and performance in sync even with the huge buffer.
|
|
|
Post by ericn on Jul 10, 2018 19:33:20 GMT -6
Or, cough cough, PT HDX 😎. Of course, RADAR V means I've got to have a console here for all monitoring.
|
|
|
Post by donr on Jul 10, 2018 19:57:01 GMT -6
I was asking how PT does monitoring and overdubbing, 'cause I've never used PT or Avid hardware myself. It would seem that if you're going to eliminate a console from a DAW-based recording studio, you have to provide latency-free monitoring and cue mixing at the interface, no matter what the DAW buffer is.
|
|
|
Post by ericn on Jul 10, 2018 20:03:50 GMT -6
With HD and HDX you have DSP on the cards to run plugins and mixes. Just like any of the other low-latency mix/effects systems, it's all about dedicated DSP.
|
|
|
Post by donr on Jul 10, 2018 20:16:39 GMT -6
Ah, so the tracks and plugins all run with ultra low latency? That's damn professional.
Still, the interface and software solutions of UAD and Metric Halo, MOTU and others work. The pain is switching windows and apps on the DAW computer.
I'm messing with Positive Grid's new Mini amp and software. It's a 300 W Class D guitar amp with their BIAS 2 amp software. They have a USB port for a computer, but the phenomenal thing is the totally elegant and bulletproof iOS editor and amp creator that Bluetooths to the amp on its own dedicated connection. It just works great.
An interface company could build their mixer and matrix control app for iOS/Android, and you'd just need a tablet on your table top to control the interface independent of your DAW computer.
|
|
|
Post by matt@IAA on Jul 10, 2018 20:18:47 GMT -6
donr, that's exactly the system on the MOTU. Anyone on the same network as the interface can use an app for full control of the interface, the mixer or their own headphone mixes. It can be a standalone FOH mixer like this.
|
|
|
Post by donr on Jul 10, 2018 20:25:31 GMT -6
That's awesome. The most recent MOTU interface I have is the 828mkIII. Richie Castellano is using the latest with Pro Tools Native, but he didn't tout that feature. See, Grandpa learned a couple of new things this evening. I love this place.
|
|
|
Post by popmann on Jul 10, 2018 20:48:26 GMT -6
I run my system at 1024 all the time. Hardware-level latency for virtual instruments played live (I actually record the audio live, I might add, so ZERO tolerance for a glitch).... For years I integrated a rack of external gear into the software mixer and it handled all the compensation sample-accurately.....
Mixbus I run at 2048, since it doesn't host VIs well at all, so it's straight audio.
And I'm a certified SNOWFLAKE about performance, both in terms of the response time of my VIs... and the compensation engine's accuracy. I literally sensed (and then tested to confirm) that Logic's compensation engine was off by 6 ms. The PITA of getting reverb on a live input is the only workaround, and it IS a workaround... that could be done better. But honestly? The biggest issue is that I've had to develop different ways for different apps. Once I've been working in one long enough, it just becomes muscle memory, the extra button push to play back or bring the speakers back up or whatever.... Mixbus has a cool thing that unassigns the channel from the master with the press of the rotary... so it's ONLY sent to the reverb.... Nice. Press it again for playback and you get the track AND reverb.
|
|
|
Post by ericn on Jul 10, 2018 21:23:38 GMT -6
It was the whole scalable, single-app thing that made Digi/Avid the standard. Today native has the power, but it just can't give you the consistent, stable low latency without DSP.
|
|
|
Post by ericn on Jul 10, 2018 21:26:01 GMT -6
If I was going to buy an interface today, it would be hard not to go MOTU: they have the sonics, stability, features and bang for the buck. They've really upped their game.
|
|