|
Post by popmann on Nov 27, 2017 22:20:59 GMT -6
If you see 16 digital, it's in advanced mode already. Used as an actual converter, not as an interface to a computer, the 16 digital ins get turned into the 16 analog outs and vice versa--that's a 16x16 ADA unit. The advanced routing mode, or whatever they call it, is what allows you to use the FireWire for the digital side of the I/O, leaving the AES free to act as additional inputs and outputs.
|
|
|
Post by popmann on Nov 26, 2017 20:58:20 GMT -6
So the use for the Burl will be to recapture the analog sum, IMO....and obviously overdub tracking. If you use it with the A16 for "18 inputs" you have to be aware there will likely be phase considerations--so, use it for the bass amp and guitar amp--and the other 16 for the drums, kind of thing.
You would need to use the A16 in advanced mode, where you get another 16x16 of AES.....hook the Burl up to the AES in--custom cable, probably--and hook the DA of the DBox to the AES out. That should effectively show up as 18x18 in ProTools.
Just talking about tech....no idea what's better than what, as the B2 is the only one of those I've used. And actually, now that I think about it more--I think the A16 can emulate an HD192, which SHOULD have the same ADC latency as the Burl, since they're using the same chips as the old Blue192s. I don't know if that's a "setting" on the A16--but if it's working at Avid's latency numbers, it should match the Burl. If so, you can disregard the phasing part above....
|
|
|
Post by popmann on Nov 26, 2017 14:30:44 GMT -6
I might get the Mixbus 32C suite. I'm having fun with the demo this weekend. The mix process is much faster. The Polarity Optimizer actually improved the sound of some SD3 drum samples....ha....it only "gained" 0.5 dB of volume, but it sounded nicer and fuller. I want to support people trying to make audio engineering better and easier/quicker....their built-in loopback testing resulted in sample-accurate time stamps in 1 min at the beginning of the demo...Logic's failure to do this properly is why I was even looking.
I guess I'll decide today if I care about the plug-in suite or just getting the app. Their reverb and buss compressors don't outdo what I have....so I'll demo their novel things--the character plugs.
|
|
|
Post by popmann on Nov 26, 2017 14:18:19 GMT -6
I've tried to get people to understand that with an ITB studio, the only real "calibration" is the monitoring. If you open a drum VI and it doesn't make you afraid it will blow your speakers, you're not configured correctly.
The default levels of ALL of them are WAY too high, because you'll get nearly full scale from a velocity-127 snare drum...I usually just start all VIs and back the output down 10 dB before I hit a key. If I'm going to change presets, I'll just use a gain plug after. Sometimes that's fine....some need a little MORE attenuation....some I end up backing down to -8 instead....this is why, if you START a production with an Addictive/Superior/EZD kit playing a beat to full scale, your recorded audio is ALL compromised and you're constantly trying to get things louder via a million ways. If your cue mixer is ALSO your DAW, you get screwed further by that.
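To put numbers on that 10 dB trim: decibels map to linear amplitude as 10^(dB/20), and successive dB trims simply add. A minimal sketch (the function names here are just for illustration, not from any particular DAW or plug-in):

```python
import math

def db_to_gain(db: float) -> float:
    """Convert a decibel trim to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude multiplier back to decibels."""
    return 20 * math.log10(gain)

# Backing a VI's output down 10 dB scales every sample by ~0.316
trim = db_to_gain(-10)
print(round(trim, 3))   # 0.316

# A snare that hit -0.5 dBFS before the trim now peaks at -10.5 dBFS
print(round(gain_to_db(db_to_gain(-0.5) * trim), 1))  # -10.5
```

That second line is the whole point of trimming at the source: the dB offsets stack, so everything downstream lands ~10 dB lower without touching the mix balance.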
|
|
|
Post by popmann on Nov 26, 2017 11:55:02 GMT -6
Yeah, actually, part of MY issue with even discussing "LUFS" is that it's literally "relative to full scale"--which is only useful in mastering. You should never be near full scale in tracking and mixing, so my LUFS WILL be something like -25 even if it "only" has a DR12 range, because it's ALSO peaking at -11 dBFS (on a digital momentary ballistic) or something.
It cuts to the heart of the matter--mixing is about balance. You don't mix to a loudness. If I give Bob a well-balanced mix peaking at -12 dBFS and say crush it like a Demi Lovato record, it will work better than if I give him one peaking at -1 dBFS. If you need to go to an extreme of modern deafcon loudness, there ARE some mix considerations....BALANCE considerations....important to get a little more OCD about noises...and reverb return levels (or reverb at all)....but none of it benefits from pushing to full scale in the mix process.
Thus the idea of "mixing to" a certain loudness--and PARTICULARLY an absolute peak loudness, which factors into LU-FULLSCALE--is flawed out of the gate....it's that people are thinking they need to achieve what is 100% "mastering loudness" during the mix, because they won't have any further mastering done. But here's the thing--don't. If you master it, and I DO for my own stuff--it's a second project/process. ALWAYS. The mix is the mix....the master is the master.
I tell people all the time they need to learn the difference between song demos and records for release--same deal here, only now people blend it from "here's a drum machine beat and a riff idea tape" and want that project to end in a finished DR7 master for iTunes.
If you want to know the "secret" to old guys having good-sounding work? Focus where the focus IS in the moment....if you're swapping snare drum samples on a MIDI drum track from the initial demo while mixing with crazy automations through a lookahead limiter, you simply won't achieve what you're after. It's not really that it can't happen on a TECHNICAL level in a modern workstation...it's that it's a human deficiency. YOU likely can't do it. I KNOW I can't, and I feel like I've done this my whole life.
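The "relative to full scale" point can be made concrete. A real LUFS meter adds K-weighting and gating per ITU-R BS.1770, which this sketch deliberately skips; it just measures peak dBFS and unweighted RMS of a test tone to show what "relative to full scale" means numerically:

```python
import math

def peak_dbfs(samples):
    """Peak level relative to digital full scale (1.0 = 0 dBFS)."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """Unweighted RMS level in dB relative to full scale.
    (Real LUFS adds K-weighting and gating per ITU-R BS.1770.)"""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq)

# One second of a 440 Hz sine at 48 kHz, peaking at -12 dBFS
amp = 10 ** (-12 / 20)
sine = [amp * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(peak_dbfs(sine), 1))  # -12.0
print(round(rms_db(sine), 1))     # -15.0 (a sine's RMS sits ~3 dB below its peak)
```

The same waveform reads differently on a peak meter and an averaging meter, and both are anchored to 0 dBFS--which is exactly why the numbers only become meaningful once you're mastering against full scale.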
#moCoffee #moPumpkinPie
|
|
|
Post by popmann on Nov 26, 2017 9:49:20 GMT -6
No, under 0dbfs (digital full scale).
|
|
|
Post by popmann on Nov 23, 2017 22:39:17 GMT -6
Any realistic early-reflection emulation can cause the same poor comb filtering, due to the phase interplay, as actual acoustic reflections arriving time-delayed at the mic.
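The mechanism is easy to compute: summing a signal with one delayed copy (gain g) boosts frequencies whose period divides the delay and notches the ones in between, with the first notch at 1/(2·delay). A sketch assuming a single reflection modeled as a delayed copy:

```python
import math

def comb_response_db(freq_hz, delay_s, reflection_gain=1.0):
    """Magnitude (dB) of a direct signal summed with one delayed copy."""
    # direct + delayed copy: 1 + g * e^(-j*2*pi*f*t)
    re = 1 + reflection_gain * math.cos(2 * math.pi * freq_hz * delay_s)
    im = -reflection_gain * math.sin(2 * math.pi * freq_hz * delay_s)
    return 20 * math.log10(math.hypot(re, im))

delay = 0.001  # 1 ms -- a reflection path roughly 34 cm longer than the direct path

# At 1 kHz the delayed copy arrives in phase: +6 dB buildup
print(round(comb_response_db(1000, delay), 1))        # 6.0

# At 500 Hz (the first notch, 1/(2*delay)) a 0.9-gain reflection cancels ~20 dB
print(round(comb_response_db(500, delay, 0.9), 1))    # -20.0
```

Whether the delayed copy comes from a wall or from a convincing ER algorithm, the summed response at the mix bus is the same comb.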
|
|
|
Post by popmann on Nov 20, 2017 21:40:12 GMT -6
I think most of the thieves' streaming services ask for around -14 LUFS, right? So no point in going louder? What happens to commercial releases that are louder? Or do the mastering engineers master for Spotify at -14 LUFS as well as for CD? All modern releases are attenuated by streaming services. This isn't new. iTunes has been using a K-14 scale for....what, a decade? There can't BE streaming services without volume calibration of some sort.
|
|
|
Post by popmann on Nov 20, 2017 20:46:20 GMT -6
I always mix to 99 LUFS balloons.
|
|
|
Post by popmann on Nov 20, 2017 16:00:17 GMT -6
What comes out of a mic preamp (and EQ and compressor)? An analog signal. You mult it....send it to your analog mixer & ADC....bring the return of the ADC back to a second channel. Assuming you're not using the mixer's preamps to begin with--then there's no mult needed.
|
|
|
Post by popmann on Nov 20, 2017 10:41:27 GMT -6
This is the freebie that will tell you if they're trusting the developers: www.voxengo.com/product/latencydelay/
It works off the principle of asking for 1 ms of delay.....and then not actually delaying except by the amount you specify. If they are testing on initiation, like Cubase, this plug-in won't work--spin the knob and it will move the track the wrong way, because it didn't GIVE it 1 ms....then click the plug-in off and on with an amount specified....and now it's back in time completely--because, again, it's testing the ACTUAL throughput latency on initiation, ignoring what the developer "asks for". Most apps trust the developer. As someone who has used third-party plug-ins for as long as he's used DAWs....which at this point is a good long time--if you trust the developers, your compensation WILL be off in every real-world mix situation. If you test on initiation, there's only one very specific time when it will be off--and reinitiating the plug-in will lock it back to sample accuracy.
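The two compensation strategies can be sketched as a toy model (the sample counts below are assumptions for illustration, not measured values for any real host):

```python
def net_offset_samples(reported, actual, host_trusts_report=True):
    """Samples the track ends up shifted after delay compensation.

    A host either trusts the plug-in's self-reported latency, or
    measures the real throughput delay itself when the plug-in is
    initialized. Compensation pulls the track earlier by that amount.
    """
    compensation = reported if host_trusts_report else actual
    return actual - compensation  # positive = track lands late

# Suppose the plug-in reports ~1 ms at 48 kHz (48 samples) but only
# actually delays by the 10 samples you dialed in.
reported, dialed_in = 48, 10

# Host that trusts the report: pulled 48 samples early, only 10 delayed
print(net_offset_samples(reported, dialed_in, host_trusts_report=True))   # -38

# Host that measures on initiation: compensation matches reality
print(net_offset_samples(reported, dialed_in, host_trusts_report=False))  # 0
```

That -38 is the "moves the track the wrong way" symptom: a deliberately lying plug-in exposes which strategy the host uses.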
|
|
|
Post by popmann on Nov 19, 2017 21:46:02 GMT -6
I know you don't. More than ANY workflow feature, the number one job of a recording system is to reproduce accurately. Above ALL others. No workflow enhancement or awesome sounding plug in or ability to FlexPitch&Time ranks anywhere near that. That's all icing if it does that well....but, I won't accept it doing that at the expense of the #1 priority.
|
|
|
Post by popmann on Nov 19, 2017 17:09:26 GMT -6
They finally got tempo mapping. That was a non-starter for me before. I'm tempted. As much as Logic has cost me in fucked up sessions and timing lately.....how's the recording PDC and compensation? Actually--let me be very specific--will Voxengo's Latency Delay plug-in function as intended in it? If so, it means they're trusting the developer's reports....if not (it won't work in Cubase), they're smart and running their own test on initiation. That freebie plug works off the principle of lying to the DAW--telling it that it needs a BIG window of latency, but then ONLY delaying as much as you tell it to. Just curious. I'm so pissed at Apple right now, I'm using an antique Cubase machine to finish this project. Holy SHIT, I forgot how long it takes to load samples off magnetic drives....fuck me....
|
|
|
Post by popmann on Nov 19, 2017 15:11:18 GMT -6
No--the analog signal. Via analog mixer. The only particularly valid reference is the analog signal it's converting. Mixer channel....compared to ADA loop of the same. At the least, hearing the analog take, then playback.
|
|
|
Post by popmann on Nov 18, 2017 14:59:42 GMT -6
Think of the second machine as a sampler. A hardware drum module with the sounds of SD3 and 16 analog outputs. Whatever makes sense to your brain. eKit>USB or MIDI DIN>"second machine"=drum sound. How do you record that accurately? How you record ANYTHING accurately: by arming audio tracks, setting levels, and hitting record.
You technically, again, CAN do this with one machine. But, BELIEVE ME.....if you're confused by what I'm saying, you have NO chance of understanding the signal flow it takes in a DAW to record a 14+ multi-out VI's audio stream live. Plus, you give up all the other "functional replacement of acoustic kit" benefits--being able to monitor the band analog, process individual "drum mics" on the way in through analog EQ/compression....sub a real hat (or whatever the drummer wants to use WITH the eKit)....
But, also as I pointed out--this is how you functionally replace a real kit with an eKit (what he wants to do). But Ragan doesn't even KNOW if, as a player, he WANTS to play an eKit--there's no reason to go to the expense until he has laid sticks to the kit+Superior--which he can do using his current iMac. If he doesn't like it, he can return the eKit and use the Superior ambient mic samples in mixing. The software is NOT returnable. I know--because I looked, after I got to lay fingers on it and heard how they'd mapped the samples and how NOT easily/quickly it was going to be.
|
|
|
Post by popmann on Nov 17, 2017 19:14:57 GMT -6
You should do that.
|
|
|
Post by popmann on Nov 17, 2017 18:33:10 GMT -6
But, you know what I just thought?
Buy the two pieces. You don't even know if you want to PLAY an eKit. Run it standalone on your DAW machine. If you don't like the feel or sounds, you can likely return the eKit and use SD3 for its ambience mics. If you like the feel/sound of PLAYING it, you can get the second machine to properly record it. If you don't--the second machine won't do a lot to improve that.
|
|
|
Post by popmann on Nov 17, 2017 18:14:07 GMT -6
RE: Wiz (quoting gets super long). The method I've seen work....and am recommending....there is no MIDI recording. No MIDI editing. There are no resource issues. You don't have to wait for samples to load to open your project. Workflow-wise, there's really no difference between tracking a permanently set up studio kit....and tracking a digital kit. You're simply using line-level outputs instead of microphones. Want to use the LA-3A on kick? Do it. Want to run the overheads through that nice master buss EQ you have on the way in? Do it. Need the band to monitor analog? Do it.
Your album with the remote drummer....he recorded MIDI....right? Like I said, "this will never work"? You're agreeing with me in your argument.
Re: high hats....yes, ideally, you drop a real hat there with a hypercardioid mic on it. But, not being a drummer, I don't have a meaningful opinion on that vs the latest digital hat controllers--I'm coming at this from the engineering perspective....if the drummer's happy PLAYING whatever....this is how you record it accurately. But it should be noted that using the method I'm talking about, versus some cocked up MIDI BS, means you CAN put a mic up for their hat, and it comes in in real time alongside the sampled snare they triggered. It's all being played in real time by a drummer and recorded as audio alongside the bass and guitar....alongside whatever is in the band.
It literally can't GET simpler in use....it's just expensive. eKits are a solution for recording a band in places where acoustically you CAN'T....not a way to save money or time over recording a real kit. Which is the part we 100% agree on.
|
|
|
Post by popmann on Nov 17, 2017 16:53:38 GMT -6
You're missing the purpose of the second machine, indeed.
The eKit is plugged into the second machine. It's basically converting a cheap PC into a hardware drum sampler with 16-24 analog outputs....PCI or PCIe will allow you to run that at 32 samples for hardware-level response for the drummer....allows you to run the drum samples at whatever sample rate they sound/perform best at....without regard to the project rate. You never record MIDI anywhere--the drummer plays the kit, triggering (and thus monitoring) the sounds of SD3 (here)....he plays to them....the DAW records the performance as audio, because that's what a DAW does. You monitor like you would your acoustic kit....you can selectively process the feeds on the way in like you might with the real kit--at that point it's an actual functional replacement for a drum kit in a studio that can't record real drums due to acoustics or noise concerns. It really saves neither time nor money. The whole proposition is more expensive....and there's all kinds of setup time to get it all dialed in...and mixing samples is more tweaky, IME, on the back end. But the upside is--you can have million-dollar-studio-sounding drum tracks in a basement or spare bedroom somewhere....
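The "32 samples for hardware-level response" claim is just buffer arithmetic: one buffer of latency is buffer size divided by sample rate. A quick sketch (note that a real round trip adds converter and driver overhead on top of this, so treat these as lower bounds):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency contributed by a single audio buffer, in ms."""
    return 1000 * buffer_samples / sample_rate_hz

# 32 samples at 48 kHz: roughly two-thirds of a millisecond per buffer
print(round(buffer_latency_ms(32, 48000), 2))   # 0.67
# versus a comfortable 256-sample mixing buffer, ~5.3 ms each way
print(round(buffer_latency_ms(256, 48000), 2))  # 5.33
```

At 32 samples the trigger-to-sound delay is well under what a drummer can feel, which is why a dedicated machine that can sustain that buffer size behaves like a hardware drum module.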
|
|
|
Post by popmann on Nov 17, 2017 15:52:09 GMT -6
Over 25 years ago, I started AS "the MIDI guy"....ie, the "young kid who knew how to work a computer" all those years ago. The DRUMMERS there were the ones who warned me about the timing inaccuracy, and I ignored them. I blew it off...."sounds the same to me, old man."
So discussions like this, I guess, are my karmic punishment for not heeding the word of more experienced musicians (or engineers) then.
|
|
|
Post by popmann on Nov 17, 2017 14:12:01 GMT -6
I thought they killed the bridge in v9. I demo'd it, but I don't have any 32-bit-only plug-ins on the system--I moved to all 64-bit in 2011/12.
|
|
|
Post by popmann on Nov 17, 2017 14:06:50 GMT -6
Here is Nick playing....on one of the all time funniest bits of music industry humor ever:
....if anyone hasn't heard one of the best albums never released in the 90s. It came out after Kevin died. Nick was one of the people who chipped in to finish it.
|
|
|
Post by popmann on Nov 17, 2017 14:02:30 GMT -6
Where does he record MIDI? MIDI has no issues in TRANSMISSION timing. You can play that eKit with no more latency than any hardware sampler....and record that AUDIO output of Superior....and HAVE a functional replacement for recording an acoustic kit. Which is what you say you want.
When you RECORD MIDI in ANY sequencer....ProTools, Logic, Cubase, Performer....they take your input and round it to the closest Pulses Per Quarter Note (PPQN) grid line. Meaning--not where it came in.
So, there's no issue using a MIDI cable to trigger something....it's in fact, the ONLY way to trigger a digital instrument of any kind. It's that you need to record the AUDIO output of that instrument if you want to capture that performance accurately.
Nick, BTW, is a world-class drummer. He, like so many others, has taken gigs selling things because of the state of the industry. Feel free to look him up. I know him mostly as Kevin Gilbert's oft collaborator....but he's got a long resume of prog rock and, shall we say, intellectual pop.
|
|
|
Post by popmann on Nov 17, 2017 13:31:50 GMT -6
This is well understood, repeatable, scientific fact. MIDI sequencers based on a PPQN grid system (all of them) won't play ANY note at the place in time you played it.....unless it happens to fall on a PPQN grid line. So functionally, far more notes than not will be played back at a different place in time. The scale is small, but for something like a drum kit--where the RELATIVE timing of, say, a hat to kick, or simply the hat to the last hat, is what MAKES the pocket--it does have a tangible real-world cumulative effect, IME.
If you believe it to be accurate enough for your work, then it is. Doesn't make it accurate. Like Wiz, I wish you luck.
|
|
|
Post by popmann on Nov 17, 2017 11:07:14 GMT -6
(For Cubase) You need to use a stereo AUDIO channel with the reverb inserted. You can define the input and output like the aux. The little "I" button makes it input monitor without recording--so it will continue to pass audio signal "live". If it's not obvious: the reverb is set to 100% wet.
|
|