|
Post by svart on Mar 13, 2024 11:03:15 GMT -6
I don't really have anything constructive to add... But I genuinely cannot imagine monitors costing 20K that are actually scientifically better than monitors costing 5K. I really can't. It's your money though.

I'm well past the point of caring about "scientifically better", within reason. We've both chased that dragon, Svart, and going by statistics everyone should have a pair of Genelec 8351's or KH310's. Although in my room, with my setup (corrected & analysed or freeform), I could not get them to the same performance, presentation, clarity or accuracy as the Dyn's. Although why should they perform as well?
The Neumann's are cheap(ish) sealed-cab, non-full-range 8" woofer monitors, and we all know that speakers are generally the weakest link in any chain (distortion alone). Generally the way to get around this is one or two subs, although those come with their own costs & complexities. The other option is big monitors with 12 inch drivers using complex technologies like cardioid dispersion or DSP-integrated correction (in the case of the D&D 8C's, both).
I'll ask you now: have you actually heard Geithain's, ATC's, D&D's or the Dynaudio Core 59's in a spare-no-expense acoustically designed room? Believe me, it can get much, much better than some Neumann's. Now I ain't sayin' for a minute that the KH310's aren't the perfect solution for someone, and I might be barking up the wrong tree here. The issue with the Dyn's is you get too much clarity, detail, transient information etc. coming from an absolutely wide phantom stage. It's like five people talking to you at the same time in a half circle, plus they are a bit shrill, which makes it even worse. I could actually be looking for something "less", not quite as "much" in general, because it's impressive for the first hour, but six hours into a mix I feel like I'm having a migraine.
Yes I have. I've listened to ATCs in Doppler (RIP) and Blackbird a few times over the years. Anyway, like I said, it's your money. Have fun with it. I have yet to hear more than a few percent difference in speakers that cost many times more, so it's just not for me. Have you considered that maybe the things you're hearing and comparing on the Dyns aren't actually more accurate than other speakers, but less accurate, and you simply prefer them?
|
|
|
Post by svart on Mar 13, 2024 10:55:34 GMT -6
If they are AES/SPDIF, then no other clock is needed. AES/SPDIF are serial streams and are bi-phase encoded, which means they have the clock embedded in the datastream. The first one is always the master and the second is always the slave. However, you might need to set the slave unit to clock itself FROM the AES/SPDIF. You should never use an external clock along with an AES/SPDIF-sourced datastream. AES/SPDIF generally doesn't have any kind of buffering, so there would be a significant chance of frame slippage (pops, clicks, other weirdness).

So the Hilo only has one WC out and I have two other units. I’ve been using the AES signal to clock it…but now that I think about it, it would now be getting clock signal from the Apollo…I’d imagine this isn’t ideal. How do you deal with that?

I haven't really been following how you have things set up. What are the units you are trying to chain together again?
|
|
|
Post by svart on Mar 13, 2024 8:44:47 GMT -6
I don't really have anything constructive to add... But I genuinely cannot imagine monitors costing 20K that are actually scientifically better than monitors costing 5K. I really can't. I've heard audiophile systems that cost 50K+ and I objectively don't hear the differences between them and a 5K setup. I imagine it's the same type of arrangement, that 15K more is somehow 3x better despite me not being able to easily hear the difference. It's your money though.

But I think a large part of that is the room and acoustic setup, not the monitors. I've heard some high end systems that have incredible detail and accuracy you just don't hear on other systems, specifically ATC SCM50.

Only to listen to something recorded with what's likely to be gear filled with "cheap" opamps and mixed on NS-10s...?
|
|
|
Post by svart on Mar 13, 2024 8:26:23 GMT -6
I don't really have anything constructive to add..
But I genuinely cannot imagine monitors costing 20K that are actually scientifically better than monitors costing 5K. I really can't. I've heard audiophile systems that cost 50K+ and I objectively don't hear the differences between them and a 5K setup. I imagine it's the same type of arrangement, that 15K more is somehow 3x better despite me not being able to easily hear the difference.
It's your money though.
|
|
|
Post by svart on Mar 13, 2024 8:08:47 GMT -6
If I’m going digital out of the interface into another DAC for monitoring, do I have to choose a master clock, or should they clock separately?

If they are AES/SPDIF, then no other clock is needed. AES/SPDIF are serial streams and are bi-phase encoded, which means they have the clock embedded in the datastream. The first one is always the master and the second is always the slave. However, you might need to set the slave unit to clock itself FROM the AES/SPDIF. You should never use an external clock along with an AES/SPDIF-sourced datastream. AES/SPDIF generally doesn't have any kind of buffering, so there would be a significant chance of frame slippage (pops, clicks, other weirdness).
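As a side note on the embedded clock: AES3/S/PDIF use biphase-mark coding, which guarantees a signal transition at every bit-cell boundary, plus a mid-cell transition for a '1', so the receiver can recover the clock straight from the data. A minimal encoder sketch (illustrative only, not a full AES3 framer):

```python
from itertools import groupby

# Biphase-mark coding: each data bit becomes two half-cell line levels.
# Every cell starts with a transition; a '1' adds a second transition
# mid-cell. The line therefore never sits still longer than one bit cell,
# which is what lets a receiver extract the clock from the datastream.

def biphase_mark_encode(bits, level=0):
    """Return two half-cell line levels (0/1) per input bit."""
    out = []
    for bit in bits:
        level ^= 1            # mandatory transition at the cell boundary
        out.append(level)
        if bit:
            level ^= 1        # extra mid-cell transition encodes a '1'
        out.append(level)
    return out

line = biphase_mark_encode([1, 0, 1, 1, 0])
# Longest run of identical half-cells is at most 2, so a clock edge is
# guaranteed at least once per bit cell regardless of the data pattern.
longest_run = max(sum(1 for _ in g) for _, g in groupby(line))
assert longest_run <= 2
```

This is why a slave unit can (and should) clock itself from the incoming AES/SPDIF stream rather than from a separate word clock.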
|
|
|
Post by svart on Mar 13, 2024 7:16:26 GMT -6
I made mine. I used some steel tubing and angles and welded up supports for wooden shelves. The frame was then bolted to the wall. This is an older pic; I now have 5 amps on the wall.
|
|
|
Post by svart on Mar 11, 2024 20:08:47 GMT -6
John will eventually end up with the new 828es, just watch. Lol
|
|
|
Post by svart on Mar 11, 2024 14:41:16 GMT -6
Apollos have too much of that midrange pinch. Every time I hear that tone I just can't unhear it.
|
|
|
Post by svart on Mar 11, 2024 11:25:45 GMT -6
The general rule of thumb I learned is that every inch of air gap behind a panel "doubles" the effective depth of the panel.
However, a panel of glass fiber has a reduced absorption bandwidth compared to something like rockwool. I also preferred the rockwool itch (mild) to the glass fiber itch (intense).
|
|
|
Post by svart on Mar 11, 2024 11:13:33 GMT -6
Yeah, I dug pretty deep into this LUFS charade a few years back, really trying to understand it in depth. I have tried the same mixes at both -14 LUFS and full blast on Spotify and came to the same conclusion as this dude. I have actually discarded the whole idea of LUFS completely. It does not seem to make any difference as far as I can see, which is really frustrating. Load up your master as loud as it is and it will play just great. Anything at -14 LUFS will just sound weak on any streaming service, in my experience. I call BS on the whole thing, to be honest; I wasted too many hours of my life I will never get back trying to make sense of it.

I aim for around -12 to -10dB RMS for my mixes and don't really have any issues with loudness when comparing to commercial releases.
|
|
|
Post by svart on Mar 11, 2024 11:11:33 GMT -6
And he says RMS is basically the same, so why LUFS?

LUFS and RMS are mostly the same in practice. RMS is measured over an arbitrary short duration, and LUFS was supposed to be a running average over a specified period of time. LUFS didn't even work that well, because people gamed the system pretty quickly, so there are now LUFS-I (integrated, the whole song), LUFS-M (momentary, about 1/2 second) and LUFS-S (short-term, about 3 seconds), which are all different time windows.
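For illustration, the three time windows can be sketched as plain windowed RMS in dB. This is deliberately simplified: real BS.1770 loudness also applies K-weighting filters and gating, which are omitted here, so these numbers are windowed RMS, not true LUFS.

```python
import math

# Windowed RMS level in dB, as a stand-in for the three LUFS windows:
# momentary ~400 ms, short-term 3 s, integrated = the whole program.
# (True LUFS adds K-weighting and gating per ITU-R BS.1770.)

def rms_db(samples):
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

def windowed_loudness(samples, rate, window_s):
    n = max(1, int(rate * window_s))
    return [rms_db(samples[i:i + n]) for i in range(0, len(samples) - n + 1, n)]

rate = 48000
# 4 seconds of a 440 Hz sine at 0.5 peak, as a toy "program"
signal = [0.5 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate * 4)]

momentary  = windowed_loudness(signal, rate, 0.4)   # ~LUFS-M windows
short_term = windowed_loudness(signal, rate, 3.0)   # ~LUFS-S windows
integrated = rms_db(signal)                          # ~LUFS-I (no gating)
```

The point of the different windows is visible in the structure: the same material yields many momentary values, a few short-term values, and one integrated value, which is why gaming one window doesn't game the others.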
|
|
|
Post by svart on Mar 8, 2024 17:45:15 GMT -6
I used to be part of a FB group that would give each other mix sessions to practice on. I did a few of those and it was pretty fun and a really good learning experience. Does the group still exist?

Yeah, but there hasn't been a single post since 2018.
|
|
|
Post by svart on Mar 8, 2024 8:18:05 GMT -6
I used to be part of a FB group that would give each other mix sessions to practice on. I did a few of those and it was pretty fun and a really good learning experience.
|
|
|
Post by svart on Mar 6, 2024 17:31:32 GMT -6
One thing you can do is render a track from one computer, then do the exact same thing with the same DAW/plugs/session file on the new one, then do a null test.
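A null test is just polarity inversion and summation: flip one render and add it to the other, and if the two are identical the residual is exactly zero. A minimal sketch on raw sample lists (with real renders you would load the two WAV files and compare their sample arrays the same way):

```python
# Null test: subtract one render from the other and look at the residual.
# A peak residual of 0.0 means the renders are bit-identical; anything
# audible left over means the two machines produced different audio.

def null_test(render_a, render_b):
    residual = [a - b for a, b in zip(render_a, render_b)]
    return max(abs(r) for r in residual)   # peak of the difference signal

old_machine = [0.1, -0.25, 0.5, 0.0]   # toy samples from computer 1
new_machine = [0.1, -0.25, 0.5, 0.0]   # toy samples from computer 2
assert null_test(old_machine, new_machine) == 0.0
```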
|
|
|
Post by svart on Mar 6, 2024 14:46:56 GMT -6
A computational process is a computational process. There is ZERO way that process changes between computers if the plugins and DAW are doing them the same way. It's more than likely a setting that's different.

If the CPU changes or the audio driver changes, that would not affect how the plugins and DAW process audio? I guess it's a mystery then why my M1 sounds better than my old Mac. Because it definitely does. Not sure, I'm def not an expert when it comes to computers.

Nope. The process is dictated by the program being run. The CPU is only there to carry it out. So if the program says add 2+2, then the CPU output will always be 4, no matter how the CPU might differ in design from a previous one. An x86 CPU should give the same result as an ARM. If something were written differently in different versions of a program, it's not very likely the difference would be enough to hear. It's most likely some kind of setting, like whether or not the interface defaults to a sampling rate that's different from your DAW's.
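A small illustration of the point, where the hypothetical render() stands in for a DAW's processing: identical floating-point operations on identical inputs produce bit-identical output, which is why two renders from the same DAW and plugin versions should null. (Strictly, bit-exactness across different machines additionally requires the program to take the same code paths, e.g. no differing SIMD optimizations, but the principle is the same.)

```python
import struct

# Stand-in for a DAW's deterministic DSP: same code, same inputs,
# same IEEE 754 operations -> same bits out.
def render(samples, gain):
    return [s * gain for s in samples]

a = render([0.1, 0.2, 0.3], 0.7071)
b = render([0.1, 0.2, 0.3], 0.7071)

# Compare the raw 64-bit float encodings, not just "close enough" values.
bits_a = [struct.pack("<d", x) for x in a]
bits_b = [struct.pack("<d", x) for x in b]
assert bits_a == bits_b   # bit-exact: the renders would null completely
```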
|
|
|
Post by svart on Mar 6, 2024 14:03:01 GMT -6
A computational process is a computational process. There is ZERO way that process changes between computers if the plugins and DAW are doing them the same way.
It's more than likely a setting that's different.
|
|
|
Post by svart on Mar 6, 2024 10:29:01 GMT -6
I do busses into busses all the time. Never really noticed any delay.
However, my headphone sends are taken directly from the tracks themselves and routed through the MOTU matrix to the analog outs for the Hearback system, so at least the musicians don't get much of a delay. Since I listen to the main mix output, everything is relative anyway, but I frequently track stuff sitting at the desk listening to the main mix and don't feel any delay either.
Also, the automation is relative to the playback cursor, so I don't know how it can be "off" from the timing of the playback. I also don't really do super tight automation either. It usually starts/stops just before or just after a section anyway.
I've only had maybe one plugin that seemed to freak out the compensation in Reaper and it would start a second late and end a second late.
|
|
|
Post by svart on Mar 6, 2024 7:52:14 GMT -6
What are the mods? Do they describe them?
|
|
|
Post by svart on Mar 5, 2024 13:56:23 GMT -6
I think SW can get you there faster these days. I used to think the opposite, but I took a mix I did primarily in hardware and then redid the mix in software with similar plugs to the hardware (1176 for 1176, etc) and it turned out better.
I'm glad I learned hardware, but I think software is superior for getting work done these days. Recall is something I have to do a lot and there's no way I could sit there and recall hardware before every session.
|
|
|
Post by svart on Mar 5, 2024 10:18:31 GMT -6
If you’re comfortable running monitor mixes on your console, that’s totally fine. But for me, it’s way easier to set up a PT cue mix than it is to get one going on, say, an API 1608. My routing in PT is basically infinite. Not so much the console. Also, I can send my balances to my cue faders with a click, and then tweak them independently as needed. Stunt reverb? Sure, takes like 30 seconds. I think folks who prefer to do it on a console are just used to doing it that way. Nothing wrong with sticking to what works.

For cues, I've been interested in the Motu Monitor 8 and associated AVB setup. I've never tried it, but at least on paper it seems like you could just feed it everything, along with stunt reverbs, and then let the musicians self-mix their headphone mixes using their phones to connect back to the web-based mixer interface on the Monitor 8. And that's supposed to be pretty low latency. I contacted Motu about the latency numbers a few years ago, but I forget exactly what they were.

Basically what I do, but I use the analog outputs from my 828es out to a Hearback system. At 88.2k and 128 samples I'm getting around 5ms through Reaper and the Motu system. I can't imagine buying an HDX system to get the same performance.
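A rough sanity check on that 5ms figure: at 88.2kHz, a 128-sample buffer is about 1.45ms, and a round trip through the DAW typically costs roughly one input buffer plus one output buffer, plus converter latency. The 2ms AD/DA figure below is an assumed ballpark for illustration, not a published MOTU spec.

```python
# Buffer latency arithmetic: one buffer of N samples at rate F takes
# N / F seconds to fill.

def buffer_ms(samples, rate_hz):
    return samples / rate_hz * 1000

one_buffer = buffer_ms(128, 88200)    # ~1.45 ms per buffer
round_trip = 2 * one_buffer + 2.0     # in + out buffers, + ~2 ms assumed AD/DA
# round_trip lands in the ~5 ms neighborhood reported above
```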
|
|
|
Post by svart on Mar 5, 2024 7:28:31 GMT -6
This is the 100 meter response of Canare L-4E6S. 328 feet.

Oooh - I don't like the look of that peak at 10Hz.

The graph doesn't look legit to me, but I've seen little peaks like that at DC when you interface offset impedances together. The graph is probably normalized to 0dB, which is why you see gain at 10Hz.
|
|
|
Post by svart on Mar 4, 2024 15:41:08 GMT -6
Interesting... I took the datasheet to mean the capacitance of ALL conductors relative to each other is 200pF, but maybe they mean EACH conductor is 200pF, which means you have to multiply it by 4x. That means 20,000pF per 100m is actually 80,000pF, which would cut the -3dB point down to 220KHz or so, without taking into account any effects from the source/load impedances. What it also doesn't account for is that paralleling up two conductors halves the DCR, which would raise the -3dB point to 440KHz. It also doesn't account for any series inductance that will be added, which is going to be pretty high, perhaps a few uH for 100m. All in all, I don't see how that graph can be correct based on the numbers the datasheet gives.
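A quick check of those revised numbers with the same Fc = 1/(2πRC) formula (a sanity check of the figures in the post, still ignoring series inductance and source/load impedances):

```python
import math

# 4 conductors at 200 pF/m each over 100 m -> 80,000 pF total, DCR 9 ohms
f_all = 1 / (2 * math.pi * 9.0 * 80000e-12)   # ~221 kHz
# paralleling conductor pairs halves the DCR to 4.5 ohms
f_par = 1 / (2 * math.pi * 4.5 * 80000e-12)   # ~442 kHz
```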
|
|
|
Post by svart on Mar 4, 2024 12:34:44 GMT -6
Just remember, "Trust the Science". When they turn out to be wrong, then "The Science is Always Changing".

I get that we're riffing on the Anti-Elitism™️ Du Jour, but it's worth noting that this ^^^ isn't contradictory.

I know it's not. But as a political tool, "science changes" gives people using "The Science" for their purposes a far too easy way out. Just about any Science in headlines has been picked to be there for a reason.
|
|
|
Post by svart on Mar 4, 2024 12:16:59 GMT -6
Only when it can be physically measured! Everything else is guessing tainted with personal bias.

Over the last few years it seems that I have been developing an anti-science bias. At least that's what they are telling me.... LOL

Just remember, "Trust the Science". When they turn out to be wrong, then "The Science is Always Changing".
|
|
|
Post by svart on Mar 4, 2024 10:49:41 GMT -6
Why would you care what's happening at 96KHz? Most humans can't hear above 12K-ish. Also, 96K would be a sampling rate, but the A/D anti-aliasing filters will likely be only around 20KHz.

But let's look at some details: starquad is around 200pF per meter between the conductors themselves as well as the shield. The nominal DCR of the conductors is 0.09 ohms per meter. If we ONLY consider the DCR and the capacitance of the cable and nothing else, then we use the formula Fc = 1/(2*pi*R*C). Let's say we have 100m of cable (I know, it's a long cable); then the capacitance is roughly 20,000pF and the DCR would be 9 ohms. The result is 884.2KHz, or roughly 10x the 96KHz sampling rate you're using. Changing the impedance of the source or load won't appreciably change the corner frequency when dealing with these values, but it will change the level of the signal.

Wait......... Are you asking us to trust the science??

Only when it can be physically measured! Everything else is guessing tainted with personal bias.
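The corner-frequency arithmetic in the post above can be checked in a couple of lines, treating the cable as a single first-order RC and ignoring source/load impedance and series inductance, as the post does:

```python
import math

# First-order RC corner from the cable numbers above:
# 100 m of starquad, DCR 0.09 ohm/m -> 9 ohms, capacitance 200 pF/m -> 20,000 pF
def rc_corner_hz(r_ohms, c_farads):
    return 1 / (2 * math.pi * r_ohms * c_farads)

fc = rc_corner_hz(9.0, 20000e-12)   # ~884 kHz, roughly 10x a 96 kHz sample rate
```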
|
|