|
Post by Blackdawg on May 3, 2019 14:13:14 GMT -6
When I got to use an EMT140, EQing/filtering the sends was VERY important, and also the return. Done right, you can make the plate sound like a lot of different spaces or things. By far the best reverb. I have a Scamp Parametric module in my rack that is seriously underutilized - sounds like a perfect application for it if I ever get my hands on an EMT. I used to use a....hmmm...Obon? Parametric EQ for that. Then the SSL Duality for filtering/EQing the returns.
|
|
|
Post by johneppstein on May 3, 2019 17:42:08 GMT -6
I used to use a....hmmm...Obon? Parametric EQ for that. Orban. When I was with FM Productions we had a couple of 621s and a couple of 622s in the monitor rack. I had to work out mods to switch the lowest 2 bands of each channel up an octave. Not too difficult on the 622s, but on the older 621s I had to hand-tweak matching the capacitor pairs to get them to work right; otherwise, for some reason, the cut adjustment would overshoot and start boosting at the extreme position - weird. The capacitor pairs had to be selected so that the matching was off just a smidge for it to work right, and the amount was different on each of the 4 channels I had to do. The Orban factory claimed that the mod was impossible on either model, but I got it to work with a lot of fiddling.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 3, 2019 18:00:24 GMT -6
Ok. Maybe I'm just overthinking it then and it must be very different in the audio world than the RF world. There's no chance I could use a general-purpose CPU for what we do. In fact, sometimes we have to boil down pieces into hardware processing via FPGA/ASIC to get fast enough throughput. Even the last Bitcoin mining craze showed the limitations, where you'd need a high-end CPU and GPU to crunch as much data as a single purpose-built DSP engine in FPGA could. I'd still like to know why a hardware M7 machine sounds tons better than the plug-in IRs do. We've probably put everyone else to sleep with this, so I promise not to keep beating on it.
The limitation in reverb algorithms was/is memory access. The random nature of access was solved in part by using high-speed static memory. But that was still slower and much more expensive than the DDR DRAM of the present day. Nowadays you can read in several cache lines in very little time and vectorize operations. The old machines weren't really written to process vectors, so we all had to rejigger our thinking when desktop CPUs started to show some horsepower.
Mathematically, the 'verb algorithms aren't really complicated. The Exponential algorithms do a lot more number-crunching than the Lex 'verbs, but it's still trivial compared to high-end graphics (and I'd assume RF, a field I don't really know anything about).
Regarding the M7, I'm pretty sure Casey is doing some dynamic stuff inside his algorithms. It's not necessarily anything you're supposed to hear, but I'd be surprised if he isn't. The IRs are just snapshots of one moment in time, so they simply can't do the subtle stuff. That's one of my problems with convolution. The real world is chaotic - even the things that don't seem to be - and an acoustic space shows variance, both on the large and small scales. Even though convolution 'verbs try to fake it (a chorus in front, some fiddling here and there), they intrinsically don't have that sort of subtlety.
The quote that comes to mind when I think about convolution is this: "The body was so lifelike".
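The snapshot-vs-motion point above can be sketched in code. This is a toy illustration in plain Python (not any actual product's algorithm, and the parameters are made up): static convolution applies the same IR to every input sample, while even a trivial recirculating delay can modulate its delay length as it runs - the kind of slow internal motion a single IR capture can't represent.

```python
import math

def convolve(signal, ir):
    """Static convolution: the IR is one frozen snapshot of the space,
    applied identically at every moment."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out

def modulated_comb(signal, base_delay, depth, rate, feedback=0.5):
    """Toy recirculating delay whose length drifts sinusoidally --
    a stand-in for the time-varying behavior of algorithmic 'verbs."""
    history = [0.0] * (base_delay + depth)  # past output samples
    out = []
    for n, x in enumerate(signal):
        # Delay length wanders around base_delay as time passes.
        d = base_delay + round(depth * math.sin(2 * math.pi * rate * n))
        y = x + feedback * history[-d]
        history.append(y)
        out.append(y)
    return out

# An impulse through the static path just replays the IR verbatim...
print(convolve([1.0], [0.5, 0.25]))  # [0.5, 0.25]
# ...while the comb keeps recirculating; with depth=0 it degenerates to a
# fixed comb filter (echoes at samples 4, 8, ..., halving each time).
print(modulated_comb([1.0] + [0.0] * 8, base_delay=4, depth=0, rate=0.0))
```

With a nonzero `depth` and `rate`, the echo spacing drifts from pass to pass, which no single captured IR can reproduce.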
|
|
|
Post by Blackdawg on May 3, 2019 18:04:09 GMT -6
Orban. When I was with FM Productions we had a couple of 621s and a couple of 622s in the monitor rack... Yes! That was it. The 622B, I think, is what we had. Worked well for that.
|
|
|
Post by johneppstein on May 3, 2019 18:04:52 GMT -6
The limitation in reverb algorithms was/is memory access... The quote that comes to mind when I think about convolution is this: "The body was so lifelike". Excellent post.
It should be noted that even the most modern high-power multicore CPUs STILL require an outboard GPU to keep up with current programs.
|
|
|
Post by jcoutu1 on May 3, 2019 18:05:47 GMT -6
The quote that comes to mind when I think about convolution is this: "The body was so lifelike". Hey Michael, are you living in the Boston area?
|
|
moze
Full Member
Posts: 35
|
Post by moze on May 3, 2019 18:22:15 GMT -6
At work I have access to a 960, 480, 224, RMX, 2016, TC 4000 and 5000 and a bunch of other various units. They are never used except during tracking. Everything is plugin now except for the Bricasti. Anyone remember running expanders on every return of an SSL to deal with the noise? SO glad I don't have to mess with that anymore.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 3, 2019 18:27:33 GMT -6
It really has very little to do with latency or the speed of the processor. It has to do with the code running on the machine language level, which is something that very, very few coders understand anymore - people who program in assembler or ML are rarer than hen's teeth these days; complex tasks are so much easier to program in higher-level languages. I don't program myself, but I started messing with computers back in the days when the 6502 processor was king and EVERYTHING had to be programmed in assembler/ML if you didn't want it to execute like a turtle, so I do understand the differences and compromises that are inherent in all high-level languages.
In this case it's not really the speed per se but what the difference in execution path does to the sound. I know I'm not expressing this very well. I know it sounds like "magical thinking" but it's not magic - it's just a level of science that's way beyond most normal thinking people - machine language guys are often rather strange ducks.
Machine language was the only way to program a computer for a few years. It required a programmer to know all the ones and zeroes and where they went. After that came assembly (probably what you're thinking about); it was symbolic, but a line of assembly still equated to one computer instruction. For a long time it was the most efficient way to do something. I must have written a million lines of it for various processors. If I never write another, it will be too soon.
The first compiler was FORTRAN, which came along in the mid-50s. It was the first language I ever got paid for writing code in (although it was a couple of decades later than the 50s). It had the virtues of being processor-independent and allowing you to think more about the problem you were trying to solve. Other than that, it was horrible.
There have been a lot of languages since then. More than 70 years of computing have given a lot of insight to programmers, compiler-makers and chip makers. In most cases, an experienced coder can write in a high-level language with little loss of CPU efficiency as compared to assembly language. Aside from the incredibly useful fact of letting you write high-performance code in a hundredth of the time, it has one other huge advantage. If you're writing at the machine level, it's the easiest thing in the world to spend too much time optimizing something when it might not really be the right solution. After you've spent a month and saved every possible machine cycle, you've fallen in love with your solution and it's hard to get back up to system-level analysis. I've done it, and so have lots of other programmers. High-level languages allow you to do more rapid prototyping and make it emotionally easier just to throw away something that doesn't work.
A good compiler is a thing of beauty. I've been continually surprised when I look at the assembly-level output of a compiler. It often addresses a problem in a way I wouldn't have thought of. In those cases, the result is much faster than what I'd have done in my own assembly.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 3, 2019 18:31:31 GMT -6
Hey Michael, are you living in the Boston area? Hiya, Until 2003, the answer would have been yes. But we've been in the Salt Lake City area since then, enjoying the mountains and the desert. Now that iZotope has acquired my 'verbs, I'll be back in Boston a few times a year, but only as a visitor.
|
|
|
Post by johneppstein on May 3, 2019 19:50:56 GMT -6
Yes! That was it. The 622B I think was what we had. Worked well for that. The only difference between the "A" and the "B" is the number of channels. The "A" is mono.
All the Orban stuff (at least of that period) was actually intended for broadcast use. Very well built, very well designed, extremely robust.
|
|
|
Post by christopher on May 3, 2019 20:51:05 GMT -6
At work I have access to a 960, 480, 224, RMX, 2016, TC 4000 and 5000... Anyone remember running expanders on every return of an SSL to deal with the noise? When you get some downtime you should try to use them more. I mean, if you have them... why not? The expander thing seems like a smart trick - dynamically enveloping the reverbs. Great idea! Thanks!
|
|
|
Post by christopher on May 3, 2019 20:59:21 GMT -6
There is some great knowledge here. This stuff doesn't put me to sleep, and if it does... it's all happy dreams.
|
|
|
Post by mrholmes on May 3, 2019 22:29:38 GMT -6
As long as you create something with taste - who cares if it was a real plate or a plug-in? The modern sound is different from the one 5 decades ago.
Yes. Unfortunately/fortunately the real plate sounds obviously better. I can turn a lead vocal off and only use the reverb return if I want, and it still sounds real, non-artificial, non-cheesy. I haven't encountered a digital version that doesn't fall apart to my ear past a certain volume level. Can't disagree, because I never worked with a real plate, nor do I have the expertise to discuss whether this should be possible in one-and-zero land. Given the digital verbs, I can't hear a big difference anymore. I can just say that I remember plate IRs in Logic's Space Designer which do sound like older records. But once again, I have no expertise to tell if a static IR can capture a real plate well. My impression was... yes. Maybe Michael can comment on this? Damn it, I have to put the buy reminder for his verbs on my to-do list.
|
|
|
Post by cowboycoalminer on May 6, 2019 6:47:14 GMT -6
You know, I'm a diehard hardware guy, but I've been finding that stacking a couple plug-in reverbs in a bus has been almost as good as hardware for me. I generally run Valhalla room of some sort stacked with IRs from a Bricasti. Then again, when I mix, I tend to mix the 'verb in thick and then thin out areas with EQ so that I get that nice and lush enveloping verb but it doesn't overtake the mix. I think a lot of folks tend to "go light" on the verb because it gets too cloudy and forget they can also EQ it to fit just like any other track. I'm also finding that it doesn't seem to matter what brand/model verb I use as long as the attributes fit the mix. Same here. I EQ and compress a reverb bus all the time. I've had hardware units but it just doesn't seem worth the trouble these days. Software actually gets me there quicker. But I'll admit reverb is the only thing where this is the case.
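For anyone wondering what "EQ it to fit" looks like at the DSP level, here's a minimal sketch in plain Python (the filter and its `alpha` constant are illustrative assumptions, not any plugin's actual design): a one-pole high-pass on the reverb return strips the low-end buildup that makes a 'verb cloudy while leaving the airy top intact.

```python
def highpass_return(reverb_return, alpha=0.95):
    """One-pole high-pass filter: y[n] = alpha * (y[n-1] + x[n] - x[n-1]).
    alpha near 1.0 gives a low cutoff; smaller alpha cuts more low end."""
    out, y, x_prev = [], 0.0, 0.0
    for x in reverb_return:
        y = alpha * (y + x - x_prev)
        x_prev = x
        out.append(y)
    return out

# A sustained (DC-like) rumble on the reverb return decays away,
# since a high-pass blocks anything that doesn't change.
rumble = [1.0] * 200
print(abs(highpass_return(rumble)[-1]) < 0.01)  # True
```

The same idea extends to a low shelf or a sweepable parametric band; the point is simply that the return is just another track, and filtering it reshapes how the reverb sits in the mix.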
|
|
|
Post by Martin John Butler on May 6, 2019 9:33:38 GMT -6
I never heard of or thought of compressing reverb. Which compressor do you use, what are the settings, and is it before or after the EQ?
|
|
|
Post by Blackdawg on May 6, 2019 10:21:21 GMT -6
I never heard or thought of compressing reverb, which compressor do you use, what are the settings, and is it before or after the EQ? Not quite the same thing... but cool trick. Some compressor plugins allow you to monitor only the compressed audio, so basically you hear nothing unless the signal crosses the threshold. I know you can enable this "listen" mode on your reverb send, and then when the singer, let's say, sings loud, you get reverb. When it's a quiet part, no reverb. It's like auto-mixing reverb. Kind of an interesting trick. Then of course there are the gate tricks.
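The "listen mode" trick above amounts to gating the reverb send with an envelope follower. A rough sketch in plain Python (the threshold and attack/release constants are made-up numbers for illustration, not settings from any particular plugin):

```python
def gated_reverb_send(signal, threshold=0.3, attack=0.2, release=0.01):
    """Pass audio to the reverb send only while the tracked envelope
    sits above the threshold: loud phrases get 'verb, quiet ones stay dry."""
    env, send = 0.0, []
    for x in signal:
        level = abs(x)
        # Fast attack so loud notes open the send quickly;
        # slow release so the send doesn't chatter on and off.
        coeff = attack if level > env else release
        env += coeff * (level - env)
        send.append(x if env > threshold else 0.0)
    return send

quiet = [0.05] * 10  # whole phrase stays under the threshold
loud = [1.0] * 5     # envelope opens after a couple of samples
print(gated_reverb_send(quiet))  # all zeros -> nothing sent to the 'verb
print(gated_reverb_send(loud))   # opens once the envelope exceeds 0.3
```

A gate plugin sidechained on the send does the same job; the compressor's "listen" output is just a convenient way to get the threshold-dependent behavior.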
|
|