|
Post by wiz on Jan 11, 2024 15:03:29 GMT -6
Sooooo….what happens when you run off two mixes….one clocked internally…the other clocked externally…..and then go listen to each on a consumer playback medium?
Cheers
Wiz
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 11, 2024 15:22:46 GMT -6
I do not think the stereo field actually gets technically wider with a different clock, but it can be perceived as wider because a different clock may bring more perceived clarity to elements that exist in the left/right spectrum. Does that make any sense? I would be curious what svart thinks about this possibility as I am no expert on this. I am, or was.. TBH, there are plenty of articles on this subject; when it gets too deep, like manufacturer whitepapers (Texas Instruments is a good source), then feel free to come back for translations. I promise I'm not being facetious, there are tons of anomalies in audio and also stuff I should probably know but don't. I mean, I started off with console repair, got into voice with stuff like DPNSS switches, then moved on to DSP & VoIP solutions and finally into benchmark or consumer AD/DA (quite a while back now). After all that experience I switched careers to follow the money; turns out there's only so many shovels one can sell.
I'd have loved to do something like the SvartBox and I'm a bit jealous, but I hardly have the time to record, never mind build something like that; most of the issues come from manufacturing, testing etc. as well. A very time-consuming enterprise.
The article above doesn't go all that deep; in short, it's jitter that affects the stereo field. In VoIP / conferencing the basic principles are exactly the same but the focus is on transmission methodology; there is an equivalent of software PLL scrubbers, we just use sequence numbers instead (RTP), however we're talking 120 ms+ of latency, not picoseconds like some converters. Jitter is still a PITA whether you're doing VoIP or reference DACs.. samples or packets need to arrive in order in either case or we get the same issues.
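To make the jitter point a bit more concrete, here is a rough simulation sketch of my own (the numbers are purely illustrative, not measurements of any converter mentioned in this thread): the same tone is "sampled" once with an ideal clock and once with clocks whose edges wander by a given RMS jitter, and the resulting error floor is reported.
-------------------------
# A minimal jitter sketch, assuming Gaussian clock-edge error (illustrative only).
import numpy as np

fs = 48_000          # sample rate (Hz)
f0 = 10_000          # test tone (Hz)
n = np.arange(48_000)

def sampled_tone(jitter_rms_s: float, seed: int = 0) -> np.ndarray:
    """Sample a unit sine at instants t[n] = n/fs + random jitter."""
    rng = np.random.default_rng(seed)
    t = n / fs + rng.normal(0.0, jitter_rms_s, size=n.size)
    return np.sin(2 * np.pi * f0 * t)

ideal = sampled_tone(0.0)
for jitter in (600e-15, 1e-9, 10e-9):        # 600 fs, 1 ns (ADAT-era spec), 10 ns
    err = sampled_tone(jitter) - ideal
    snr = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
    print(f"RMS jitter {jitter:.0e} s -> error floor roughly {-snr:.0f} dB below the signal")
-------------------------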
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 11, 2024 15:27:32 GMT -6
OK - so have you ever used a de-esser with the solo "listen" function where you're just hearing the esses? Or swept a tightly Q'd EQ back and forth around the high frequencies? I would imagine everyone is similar, but it's like my eardrums "collapse" (not literally, obviously) at certain higher frequencies. Or it becomes annoying, like "ouch.." I'm listening to a Lainey Wilson song, "Watermelon Sunshine." With the Apollo clocking, when she lays into notes, I get that "ear collapsing"... it's not obnoxious. But it goes away when I change clocks. The bottom gets bigger, the top rolls off a bit. It's not as strident. The Apollo clock miiiight actually make the soundstage a little wider.
Could it be the Trinnov? Can you try clocking the Apollo from the Burl and the Trinnov instead of the Burl from the Apollo? Or removing the Trinnov and testing? I do not know how their clocks work, but with the Apogee Symphonies, they pretty much ignore the other device over ADAT/wordclock/Dante. The current Crane Song and Weiss DACs totally ignore it. Could the Apollo's clock be adding high frequency garbage to the Burl DA? I don't know. The only way to know is to test it with various reconfigurations of your devices. What about the Trinnov clocking the Burl or the Apollo? Or the Burl clocking the Apollo vs its native clock? These are all things to test. The Apollo might be asynchronous because it uses ESS DA converters that can reclock and resample everything on-chip as it comes in.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 11, 2024 22:21:14 GMT -6
What irks me is I'm always up for a random funny post or gear post, but posts from myself, svart or Dan, which have some real experience or insight into how all of this actually works, are generally left high and dry. Not a like to be seen. I'm not expecting anyone to have Svart's sort of knowledge, but how are people supposed to be AEs if you can't at least process a different technical point of view alongside or against the "ears are king" (which are easily fooled) crap? Manufacturers create this stuff for you. Do you believe it's all ignorance? Don't get me wrong, the differing opinions even on a scientific level, lapses in knowledge etc. all have their part to play. I've learned stuff from Chris (aka Svart) despite doing this for decades, and I've met some tool sheds of engineers in my time. Before anyone gets annoyed, I ain't saying you need to be a technical expert to be a mixing or mastering "engineer", however if you can't marry the technical with sound to a certain extent then god help you..
Edit: If you don't understand what I mean: retro preamp sounds awesome, 20 likes. Foundation of how everything works and can be applied to actual production, zero likes. I mean, seriously?
|
|
|
Post by seawell on Jan 11, 2024 23:56:49 GMT -6
Before anyone gets annoyed I ain't saying you need to be a technical expert to be a mixing or mastering "engineer" however if you can't marry the technical with sound to a certain extent then god help you..
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 0:19:36 GMT -6
Before anyone gets annoyed, I ain't saying you need to be a technical expert to be a mixing or mastering "engineer", however if you can't marry the technical with sound to a certain extent then god help you..
I mean, it's clocking and dithering, man. This stuff was a black art until a few years ago. The way many current very high-end converters work renders what John is doing irrelevant, because they ignore all external clocks, jitter and even sample rate as long as they get the samples. But many of the older PLL-type great converters sound just as good as those, or just different, and can be defeated by an external clock, and there's otherwise gear that is pretty good that just has clocking issues over AES, like the older Dangerous converters prior to the Convert 2, Convert 8 and AD+. I've also heard the Lynx Hilo change sound radically based on how you configure the clocking.
|
|
|
Post by thehightenor on Jan 12, 2024 3:54:27 GMT -6
What irks me is I'm always up for a random funny post or gear post, but posts from myself, svart or Dan, which have some real experience or insight into how all of this actually works, are generally left high and dry. Not a like to be seen. I'm not expecting anyone to have Svart's sort of knowledge, but how are people supposed to be AEs if you can't at least process a different technical point of view alongside or against the "ears are king" (which are easily fooled) crap? Manufacturers create this stuff for you. Do you believe it's all ignorance? Don't get me wrong, the differing opinions even on a scientific level, lapses in knowledge etc. all have their part to play. I've learned stuff from Chris (aka Svart) despite doing this for decades, and I've met some tool sheds of engineers in my time. Before anyone gets annoyed, I ain't saying you need to be a technical expert to be a mixing or mastering "engineer", however if you can't marry the technical with sound to a certain extent then god help you..
Edit: If you don't understand what I mean: retro preamp sounds awesome, 20 likes. Foundation of how everything works and can be applied to actual production, zero likes. I mean, seriously?
I fully understand and agree with your post - and there's no way I would ever knock someone else's superior knowledge. But what are you saying here? That I should literally ignore my ears and use internal clocking even if my ears are telling me my system sounds preferable clocking from my HEDD 192? That would be setting a precedent. For example, when tuning a vocal in Melodyne, the rules of music state a syllable should be "perfectly in tune", yet in reality anyone who's used Melodyne correctly and tuned with their ears, not their eyes, will find on returning to the GUI that what has proven to sound musically right is in fact showing as not "technically" right on top of the pitch blob in Melodyne. The same with harmony: when writing, some very unusual chord patterns can be perfect against a melody even when they're actually breaking the rules. I do understand I'm not comparing similar areas of music production with those examples, but my point is, I work with my ears - as a musician that's all I have. I don't understand all the technical stuff posted on this thread, and I don't doubt for a moment the pedigree and knowledge of those disseminating it - but I'm not going to flip back to "internal clocking" because technically it's better if my ears are telling me something completely different. That's crossing a red line for me personally.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 5:30:29 GMT -6
But what are you saying here? I should literally ignore my ears and use internal clocking even if my ears are telling me my system sounds preferable clocking from my HEDD 192. Not at all. I've never heard of a professional product released without an extensive amount of end-user testing. There's literally a system called MOS (mean opinion score) for codec testing, where a panel of listeners determines the quality of said codec. What I am saying is that true audio engineering is as much a science as it is an art form. For example, a question I've always pondered throughout this is: if it's technically worse but consistently sounds better to a group of listeners, then is it actually "worse"?
However, without finding issues, plus the relevant fixes for them, there wouldn't be an option to be subjective in the first place. Your brain is a beautiful thing though; you can have HF loss on a certain day and your brain will automatically pitch shift, for example. I'm sure we've all done the EQ bypass thing once or twice, so again it's always best to try and meet listening with some technical findings. Your ears are truly amazing and in some circumstances can spot things you wouldn't expect, like Svart's mention of sensitivity to distortion, so I'm certainly not saying you're wrong.
In many cases it's not easy to test things out to that extent and we end up chasing down a rabbit hole. Although if I'm spending thousands on a "pro" piece of equipment and some external clocking makes it sound better despite the engineering side dictating it shouldn't be that way, I'd personally wanna know why. Maybe it's just me?
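For anyone unfamiliar with MOS-style listening tests, here is a trivial sketch of the idea; the listener ratings below are made up purely for illustration, not results from any real test.
-------------------------
# A tiny MOS (mean opinion score) sketch: panel rates 1-5, report mean + rough CI.
import statistics as stats

ratings = {
    "condition A (internal clock)": [4, 5, 4, 4, 3, 5, 4, 4],   # made-up scores
    "condition B (external clock)": [4, 4, 3, 4, 4, 3, 4, 5],
}

for name, scores in ratings.items():
    mos = stats.mean(scores)
    # 95% CI via a normal approximation; a real test would use far more listeners
    ci = 1.96 * stats.stdev(scores) / (len(scores) ** 0.5)
    print(f"{name}: MOS {mos:.2f} +/- {ci:.2f}")
-------------------------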
|
|
|
Post by thehightenor on Jan 12, 2024 5:38:50 GMT -6
But what are you saying here? I should literally ignore my ears and use internal clocking even if my ears are telling me my system sounds preferable clocking from my HEDD 192. Not at all. I've never heard of a professional product released without an extensive amount of end-user testing. There's literally a system called MOS (mean opinion score) for codec testing, where a panel of listeners determines the quality of said codec. What I am saying is that true audio engineering is as much a science as it is an art form. For example, a question I've always pondered throughout this is: if it's technically worse but consistently sounds better to a group of listeners, then is it actually "worse"?
However, without finding issues, plus the relevant fixes for them, there wouldn't be an option to be subjective in the first place. Your brain is a beautiful thing though; you can have HF loss on a certain day and your brain will automatically pitch shift, for example. I'm sure we've all done the EQ bypass thing once or twice, so again it's always best to try and meet listening with some technical findings. Your ears are truly amazing and in some circumstances can spot things you wouldn't expect, like Svart's mention of sensitivity to distortion, so I'm certainly not saying you're wrong.
In many cases it's not easy to test things out to that extent and we end up chasing down a rabbit hole. Although if I'm spending thousands on a "pro" piece of equipment and some external clocking makes it sound better despite the engineering side dictating it shouldn't be that way, I'd personally wanna know why. Maybe it's just me?
Great post! I think it's not just you; I think you fall into a category of creatives who also possess technical knowledge and ability. I remember you saying you have worked in a technical field as a job. I don't possess much detailed technical knowledge (though after 40 years you do build up some, and I understand basic electronics). To be truly honest, I have no desire or motivation to learn the technical stuff; it doesn't help me create and finish productions - I can already do that to a standard I'm more than happy with. Plus I don't have time, it's enough playing 5 instruments and writing, arranging, recording and gigging! And ….. I doubt I'm academically clever enough!!
|
|
|
Post by seawell on Jan 12, 2024 7:24:12 GMT -6
I mean, it's clocking and dithering, man. This stuff was a black art until a few years ago. The way many current very high-end converters work renders what John is doing irrelevant, because they ignore all external clocks, jitter and even sample rate as long as they get the samples. But many of the older PLL-type great converters sound just as good as those, or just different, and can be defeated by an external clock, and there's otherwise gear that is pretty good that just has clocking issues over AES, like the older Dangerous converters prior to the Convert 2, Convert 8 and AD+. I've also heard the Lynx Hilo change sound radically based on how you configure the clocking.
I was just poking fun at "if you can't marry the technical with sound to a certain extent then god help you.." with a pic of Rick Rubin sleeping on the couch during a session 😁 To the larger discussion though... I don't think you can say what John is experimenting with here is irrelevant, because anytime something sounds different, and potentially better, in your studio, that's a worthwhile pursuit. I've worked in many situations over the years with mixed-brand converters, Lynx, Avid, Apogee, Antelope, etc., one being the master, all running off their internal clocks, or with a separate master clock. It was always worth trying a few clocking setups to see if you liked the results and to ensure you had a stable, smooth-running rig. I never cared if the master clock was the most expensive unit, the cheapest one, or what. So you'll have to excuse some of us who have experimented with this for years if you say technically there can't be a difference or it can't be better. Of course we all have preferences, and we could listen to a few examples and debate which one we think sounds best, but to chalk the differences up to confirmation bias is kind of insulting to the dude sitting there knowing these things don't sound the same. So yes, I value what my ears tell me over any forum post, white paper, YouTube video, etc. So the only point I'm trying to make is that if you try something and like it, then keep doing it. Who cares if it technically isn't "right?" In addition to that, if experimenting with clocking gives different and in some situations preferable sonics, then instead of trying to prove why that can't be, how about going back to the drawing board and trying to figure out why real life isn't matching up with what is "supposed" to be happening?
BLIND TEST: If anyone is interested, here is the same mix with a few different clocking situations to see if you hear an appreciable difference: www.dropbox.com/sh/okb16qhat60h6ab/AAA1_UquTdr8PBE7Qi94S-nea?dl=0
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 8:19:00 GMT -6
I mean, it's clocking and dithering, man. This stuff was a black art until a few years ago. The way many current very high-end converters work renders what John is doing irrelevant, because they ignore all external clocks, jitter and even sample rate as long as they get the samples. But many of the older PLL-type great converters sound just as good as those, or just different, and can be defeated by an external clock, and there's otherwise gear that is pretty good that just has clocking issues over AES, like the older Dangerous converters prior to the Convert 2, Convert 8 and AD+. I've also heard the Lynx Hilo change sound radically based on how you configure the clocking. I was just poking fun at "if you can't marry the technical with sound to a certain extent then god help you.." with a pic of Rick Rubin sleeping on the couch during a session 😁 To the larger discussion though... I don't think you can say what John is experimenting with here is irrelevant, because anytime something sounds different, and potentially better, in your studio, that's a worthwhile pursuit. So yes, I value what my ears tell me over any forum post, white paper, YouTube video, etc. So the only point I'm trying to make is that if you try something and like it, then keep doing it. Who cares if it technically isn't "right?" In addition to that, if experimenting with clocking gives different and in some situations preferable sonics, then instead of trying to prove why that can't be, how about going back to the drawing board and trying to figure out why real life isn't matching up with what is "supposed" to be happening? If anyone is interested, here is the same mix with a few different clocking situations to see if you hear a difference: www.dropbox.com/sh/okb16qhat60h6ab/AAA1_UquTdr8PBE7Qi94S-nea?dl=0
Now I get it ..
I know you're replying to Dan here, but I've said similar things and I don't think that's the point, Josh. As I already stated, if an external clock is degrading the signal (even to a small extent) and is preferable because of that, then so be it; many parts of mixing revolve around mangling the original sound, but personally I'd prefer that to be a choice. I want my 2-4K interface to deliver the most transparency possible or I might as well just buy a cheap Focusrite and stop throwing money at a wall.
So yeah I do care if it's "technically" correct and this ain't about confirmation bias nor is it meant to be in any way insulting. If the aforementioned external clock is actually making things better then something's not right and I'd want to make sure what I'm hearing lines up. If nothing else it's a tad bit concerning..
|
|
|
Post by seawell on Jan 12, 2024 8:30:55 GMT -6
I was just poking fun at "if you can't marry the technical with sound to a certain extent then god help you.." with a pic of Rick Rubin sleeping on the couch during a session 😁 To the larger discussion though... I don't think you can say what John is experimenting with here is irrelevant, because anytime something sounds different, and potentially better, in your studio, that's a worthwhile pursuit. So yes, I value what my ears tell me over any forum post, white paper, YouTube video, etc. So the only point I'm trying to make is that if you try something and like it, then keep doing it. Who cares if it technically isn't "right?" In addition to that, if experimenting with clocking gives different and in some situations preferable sonics, then instead of trying to prove why that can't be, how about going back to the drawing board and trying to figure out why real life isn't matching up with what is "supposed" to be happening? If anyone is interested, here is the same mix with a few different clocking situations to see if you hear a difference: www.dropbox.com/sh/okb16qhat60h6ab/AAA1_UquTdr8PBE7Qi94S-nea?dl=0
Now I get it ..
I know you're replying to Dan here, but I've said similar things and I don't think that's the point, Josh. As I already stated, if an external clock is degrading the signal (even to a small extent) and is preferable because of that, then so be it; many parts of mixing revolve around mangling the original sound, but personally I'd prefer that to be a choice. I want my 2-4K interface to deliver the most transparency possible or I might as well just buy a cheap Focusrite and stop throwing money at a wall.
So yeah I do care if it's "technically" correct and this ain't about confirmation bias nor is it meant to be in any way insulting. If the aforementioned external clock is actually making things better then something's not right and I'd want to make sure what I'm hearing lines up. If nothing else it's a tad bit concerning..
Ok, well my question then is what has convinced us that internal clocking is always superior other than it has been repeated quite a few times over the years? Different, yes, I definitely agree with that but... "superior", in audio wouldn't that tend to be subjective? I don't think an external clock presenting a sound that you enjoy more has to mean something isn't right...it's just different.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 9:08:02 GMT -6
Ok, well my question then is what has convinced us that internal clocking is always superior other than it has been repeated quite a few times over the years? Different, yes, I definitely agree with that but... "superior", in audio wouldn't that tend to be subjective? I don't think an external clock presenting a sound that you enjoy more has to mean something isn't right...it's just different. There is no subjectivity when it comes to ADC really. The whole point of it is to digitise a signal and pass it through unscathed; if you want to apply a Haas effect, add harmonic distortion, warp the stereo field or even add pops & clicks etc. after the fact, then fine. What you certainly don't want is spurious, non-repeatable performance caused by things like jitter; it's a bit like saying I prefer this brand of dishwasher because it only cleans half of my plates some of the time. We're not including analog stages or filters etc. in this example, of course.
Mathematics and testing aren't subjective either; there are very simple or very complex ways to test this, from a loopback with MATLAB / DiffMaker or something like that all the way up to $100K test bench equipment. There's an interesting example on the purple site: if I'm reading it correctly, with the Lynx Hilo an external Mutec MC-3+ clock didn't really make any difference, and that's actually a good thing. However, when clocked to an MBC it did..
I know for a fact that with some older interfaces I used and tested, external clocking actually improved things, the Echo AudioFire for example. Nowadays we've moved on a lot since then.
Hilo (internal clock): -0.1 dB (L), -0.1 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
External clock:
Lynx Hilo clocked by Mutec MC-3+ (drlex): 3.0 dB (L), 3.0 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
Lynx Hilo ---> Rupert Neve Designs MBC as master (drlex): 0.1 dB (L), 0.1 dB (R), -50.1 dBFS (L), -50.1 dBFS (R)
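As a rough illustration of the loopback / difference style of test mentioned above (this is my own sketch, not drlex's actual method; the file names are placeholders): capture the same material through the converter under each clocking condition, align the captures, subtract, and report the null depth.
-------------------------
# A minimal null-test sketch, assuming two WAV captures of the same loopback pass.
import numpy as np
from scipy.io import wavfile

def load_mono(path: str) -> np.ndarray:
    rate, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim > 1:
        data = data[:, 0]                      # left channel only, for simplicity
    return data / np.max(np.abs(data))         # normalise to full scale

def null_depth_db(a: np.ndarray, b: np.ndarray) -> float:
    # Align by cross-correlation first, since round-trip latency differs slightly.
    # O(n^2) correlation: fine for a short test capture, slow for long files.
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    lag = int(np.argmax(np.correlate(a, b, mode="full"))) - (n - 1)
    if lag > 0:
        a, b = a[lag:], b[:n - lag]
    elif lag < 0:
        a, b = a[:n + lag], b[-lag:]
    diff = a - b
    return 20 * np.log10(np.sqrt(np.mean(diff**2)) / np.sqrt(np.mean(a**2)))

internal = load_mono("loopback_internal_clock.wav")   # placeholder file names
external = load_mono("loopback_external_clock.wav")
print(f"null depth: {null_depth_db(internal, external):.1f} dB")
-------------------------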
|
|
|
Post by seawell on Jan 12, 2024 9:45:56 GMT -6
Ok, well my question then is what has convinced us that internal clocking is always superior other than it has been repeated quite a few times over the years? Different, yes, I definitely agree with that but... "superior", in audio wouldn't that tend to be subjective? I don't think an external clock presenting a sound that you enjoy more has to mean something isn't right...it's just different. There is no subjectivity when it comes to ADC really. The whole point of it is to digitise a signal and pass it through unscathed; if you want to apply a Haas effect, add harmonic distortion, warp the stereo field or even add pops & clicks etc. after the fact, then fine. What you certainly don't want is spurious, non-repeatable performance caused by things like jitter; it's a bit like saying I prefer this brand of dishwasher because it only cleans half of my plates some of the time. We're not including analog stages or filters etc. in this example, of course.
Mathematics and testing aren't subjective either; there are very simple or very complex ways to test this, from a loopback with MATLAB / DiffMaker or something like that all the way up to $100K test bench equipment. There's an interesting example on the purple site: if I'm reading it correctly, with the Lynx Hilo an external Mutec MC-3+ clock didn't really make any difference, and that's actually a good thing. However, when clocked to an MBC it did..
I know for a fact that with some older interfaces I used and tested, external clocking actually improved things, the Echo AudioFire for example. Nowadays we've moved on a lot since then.
Hilo (internal clock): -0.1 dB (L), -0.1 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
External clock:
Lynx Hilo clocked by Mutec MC-3+ (drlex): 3.0 dB (L), 3.0 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
Lynx Hilo ---> Rupert Neve Designs MBC as master (drlex): 0.1 dB (L), 0.1 dB (R), -50.1 dBFS (L), -50.1 dBFS (R)
If there's no subjectivity to converters then we'd just choose which one to buy based off of spec sheets instead of listening to how they sound, wouldn't we? That's my point with how we choose to clock things: if I like how it sounds better, I don't really care how or why, and I'm not going to go back to a setting that the spec sheet says should be better 😁 Also, as a converter manufacturer I wouldn't exactly want to advertise that clocking my unit to another source may improve the sound, so I can imagine how the "internal clock is superior" narrative got started. I haven't just tested this on older converters, I've done it with modern Apogee & Antelope for example. It still makes a difference to my ears. Sometimes better.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 9:54:36 GMT -6
I mean, it's clocking and dithering, man. This stuff was a black art until a few years ago. The way many current very high-end converters work renders what John is doing irrelevant, because they ignore all external clocks, jitter and even sample rate as long as they get the samples. But many of the older PLL-type great converters sound just as good as those, or just different, and can be defeated by an external clock, and there's otherwise gear that is pretty good that just has clocking issues over AES, like the older Dangerous converters prior to the Convert 2, Convert 8 and AD+. I've also heard the Lynx Hilo change sound radically based on how you configure the clocking. I was just poking fun at "if you can't marry the technical with sound to a certain extent then god help you.." with a pic of Rick Rubin sleeping on the couch during a session 😁 To the larger discussion though... I don't think you can say what John is experimenting with here is irrelevant, because anytime something sounds different, and potentially better, in your studio, that's a worthwhile pursuit. I've worked in many situations over the years with mixed-brand converters, Lynx, Avid, Apogee, Antelope, etc., one being the master, all running off their internal clocks, or with a separate master clock. It was always worth trying a few clocking setups to see if you liked the results and to ensure you had a stable, smooth-running rig. I never cared if the master clock was the most expensive unit, the cheapest one, or what. So you'll have to excuse some of us who have experimented with this for years if you say technically there can't be a difference or it can't be better. Of course we all have preferences, and we could listen to a few examples and debate which one we think sounds best, but to chalk the differences up to confirmation bias is kind of insulting to the dude sitting there knowing these things don't sound the same. So yes, I value what my ears tell me over any forum post, white paper, YouTube video, etc. So the only point I'm trying to make is that if you try something and like it, then keep doing it. Who cares if it technically isn't "right?" In addition to that, if experimenting with clocking gives different and in some situations preferable sonics, then instead of trying to prove why that can't be, how about going back to the drawing board and trying to figure out why real life isn't matching up with what is "supposed" to be happening? BLIND TEST: If anyone is interested, here is the same mix with a few different clocking situations to see if you hear an appreciable difference: www.dropbox.com/sh/okb16qhat60h6ab/AAA1_UquTdr8PBE7Qi94S-nea?dl=0
Those converters were older, and the asynchronous sample rate conversion technique that strips the received stream of jitter, reclocks everything to its internal clock, and feeds the DA chip an optimal sample rate that is not a multiple of 44.1 or 48 didn't exist for audio converters then. Older multichannel converters, and still most modern multichannel converters, are quite poor unless you pay thousands of dollars, and many of the good ones, like the Dangerous Convert 8 and Lynx Aurora N, still let the end user defeat the internal clocking scheme. Crazy PLL arrangements were previously almost essential for high performance from many optical connections like Toslink and MADI, like Lynx's "SynchroLock" and RME's "SteadyClock", but asynchronous operation is still necessary for high performance from Dante converters forced to use the Dante clock.
The ESS Sabre chips allow this technique even in cheap consumer hifi products, but of course audio quality and internal clock quality vary wildly.
The current Benchmark, Crane Song, Weiss, and Lavry converters do not even have a clock input. They do not even want to give you the idea that you can mess with it. The Apogee Symphony rack-mount case does, but the converter chips in all Apogee products, down to the lowly Groove, are all asynchronous. Maybe that's just so other equipment can be spoofed into believing that it is the master clock and that the Symphony is slaving to it. This is all reflective of the new way of doing digital, which is to have every device be its own master clock and operate it (and every process) at the optimal sample rate. The only difference is that feeding them single-speed sample rates will have the anti-imaging filter applied right at the end of the audible band. This is all a natural extension of oversampling noise-shaping filters, which allowed Philips to get 16-bit performance out of physically 14-bit resistor ladder DACs in the 80s and eventually led to delta-sigma modulation based converters. Even going back to the 90s, Weiss hardware was multirate like the best modern plugins, an improvement over FIR-smoothed algorithms, so it behaves almost identically when fed different sample rates.
benchmarkmedia.com/blogs/application_notes/13127453-asynchronous-upsampling-to-110-khz
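A toy sketch of the asynchronous sample rate conversion idea described above (my own illustration, not Benchmark's or any manufacturer's actual implementation): the output clock free-runs, so every output sample falls at a fractional position in the input stream and is rebuilt by band-limited (windowed-sinc) interpolation, which is what decouples the internal clock from whatever arrives on the wire.
-------------------------
# A minimal ASRC sketch, assuming an upsampling ratio; a real ASRC would also
# low-pass when downsampling and use far better (polyphase) interpolation.
import numpy as np

def asrc(x: np.ndarray, ratio: float, taps: int = 32) -> np.ndarray:
    """Resample x by an arbitrary (non-integer) ratio out_rate / in_rate."""
    out_len = int(len(x) * ratio)
    y = np.zeros(out_len)
    half = taps // 2
    window = np.hanning(taps)
    for m in range(out_len):
        pos = m / ratio                      # fractional read position in x
        k0 = int(np.floor(pos))
        k = np.arange(k0 - half + 1, k0 + half + 1)
        valid = (k >= 0) & (k < len(x))
        # windowed-sinc kernel centred on the fractional position
        kern = np.sinc(pos - k[valid]) * window[valid]
        y[m] = np.dot(x[k[valid]], kern)
    return y

# e.g. 48 kHz material re-rendered on a free-running 110 kHz internal rate
fs_in, fs_out = 48_000, 110_000
t = np.arange(4800) / fs_in
tone = np.sin(2 * np.pi * 1000 * t)
resampled = asrc(tone, fs_out / fs_in)
print(len(tone), "input samples ->", len(resampled), "output samples")
-------------------------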
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 10:09:57 GMT -6
Ok, well my question then is what has convinced us that internal clocking is always superior other than it has been repeated quite a few times over the years? Different, yes, I definitely agree with that but... "superior", in audio wouldn't that tend to be subjective? I don't think an external clock presenting a sound that you enjoy more has to mean something isn't right...it's just different. There is no subjectivity when it comes to ADC really. The whole point of it is to digitise a signal and pass it through unscathed; if you want to apply a Haas effect, add harmonic distortion, warp the stereo field or even add pops & clicks etc. after the fact, then fine. What you certainly don't want is spurious, non-repeatable performance caused by things like jitter; it's a bit like saying I prefer this brand of dishwasher because it only cleans half of my plates some of the time. We're not including analog stages or filters etc. in this example, of course.
Mathematics and testing aren't subjective either; there are very simple or very complex ways to test this, from a loopback with MATLAB / DiffMaker or something like that all the way up to $100K test bench equipment. There's an interesting example on the purple site: if I'm reading it correctly, with the Lynx Hilo an external Mutec MC-3+ clock didn't really make any difference, and that's actually a good thing. However, when clocked to an MBC it did..
I know for a fact that with some older interfaces I used and tested, external clocking actually improved things, the Echo AudioFire for example. Nowadays we've moved on a lot since then.
Hilo (internal clock): -0.1 dB (L), -0.1 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
External clock:
Lynx Hilo clocked by Mutec MC-3+ (drlex): 3.0 dB (L), 3.0 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
Lynx Hilo ---> Rupert Neve Designs MBC as master (drlex): 0.1 dB (L), 0.1 dB (R), -50.1 dBFS (L), -50.1 dBFS (R)
The subjectivity is mainly in the electrical parts, the PLL and DDS arrangements, and single-rate digital filters, for which there is no single optimal way to design something. There are many ways to build a line stage that will measure similarly enough in quality, but differently, and of course sound different. Everyone just buys a chipset that meets the technical specifications, but many inferior products display distortions common to various converter chipsets, like poorly configured TI small resistor ladder / delta-sigma modulator hybrid converters, ESS Sabres, and AKM's later Velvet Sound chips. Dave Hill, RIP, defeated the "velvet sound" that was requested in that generation of chips by large-scale Asian hifi manufacturers. The better equipment manufacturers' products do not display these distortions.
The state-of-the-art manufacturers don't even let you reclock their units now, or they ignore any clock settings you set on them. They measure identically, or nearly so, whether fed from low-jitter sources or total crap. A lot of other equipment might sound just as good on its own master clock, but the manufacturers don't want to redesign it, giving the finger to audiophile tweakers or tech bros who might want to use Dante and believe newer is better, just to prevent them from using a worse clock.
|
|
|
Post by thehightenor on Jan 12, 2024 10:48:46 GMT -6
There is no subjectivity when it comes to ADC really. The whole point of it is to digitise a signal and pass it through unscathed; if you want to apply a Haas effect, add harmonic distortion, warp the stereo field or even add pops & clicks etc. after the fact, then fine. What you certainly don't want is spurious, non-repeatable performance caused by things like jitter; it's a bit like saying I prefer this brand of dishwasher because it only cleans half of my plates some of the time. We're not including analog stages or filters etc. in this example, of course.
Mathematics and testing aren't subjective either; there are very simple or very complex ways to test this, from a loopback with MATLAB / DiffMaker or something like that all the way up to $100K test bench equipment. There's an interesting example on the purple site: if I'm reading it correctly, with the Lynx Hilo an external Mutec MC-3+ clock didn't really make any difference, and that's actually a good thing. However, when clocked to an MBC it did..
I know for a fact that with some older interfaces I used and tested, external clocking actually improved things, the Echo AudioFire for example. Nowadays we've moved on a lot since then.
Hilo (internal clock): -0.1 dB (L), -0.1 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
External clock:
Lynx Hilo clocked by Mutec MC-3+ (drlex): 3.0 dB (L), 3.0 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
Lynx Hilo ---> Rupert Neve Designs MBC as master (drlex): 0.1 dB (L), 0.1 dB (R), -50.1 dBFS (L), -50.1 dBFS (R)
If there's no subjectivity to converters then we'd just choose which one to buy based off of spec sheets instead of listening to how they sound, wouldn't we? That's my point with how we choose to clock things: if I like how it sounds better, I don't really care how or why, and I'm not going to go back to a setting that the spec sheet says should be better 😁 Also, as a converter manufacturer I wouldn't exactly want to advertise that clocking my unit to another source may improve the sound, so I can imagine how the "internal clock is superior" narrative got started. I haven't just tested this on older converters, I've done it with modern Apogee & Antelope for example. It still makes a difference to my ears. Sometimes better.
+1 Excellent point! This debate rages on about monitors as another example. Over on other forums (one in particular) ATC monitors get greatly maligned because apparently they don't have the best specifications in certain areas of acoustic technical perfection. My ATCs are the best tool I own for producing music. Good luck to someone choosing their studio monitors from a spec sheet! And in my case there was no "expectation bias", because my HEDD 192 clocking discovery was entirely accidental and I had to trace my steps backwards to work out why everything had started to sound "better" in a subjective sense to my ears.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 10:49:41 GMT -6
I'm not talking about the design holistically, nor am I talking about subjectivity in an overall sense. ADC has one job..
Some people prefer the old Lexicon units and they are just signal degraders really; I get it, and IME or IMO technically correct can sound pretty boring sometimes. However, we're talking about clocking actually improving performance on an easily measurable axis, which is signal degradation. I have never seen anything that would point to an increase in performance via external clocking in a modern PLL system; if you want to mash things up a bit by using an external clock, then fine. Although again, I'd rather have the choice, and I'm not the only one saying this. I'll quote Svart again, and he's not selling converters anymore, so there's no "narrative" to be found.
There's one of two things going on here: either the converter has been cheaped out on / not very well designed, or people might like a bit of jitter in their coffee?
-------------------------
"Coming from someone who designed their own converters and designs clocking solutions for RF that are 100x better jitter and phase noise than audio requires..
External clocks for audio are almost never worth the cost. The act of using interconnects, cables, buffering, phase locking, re-clocking through a DLL/PLL to reach usable sampling frequencies, etc., All conspire to reduce clock performance to levels below all but the worst internal clock.
Most modern designs use multi-MHz DPLL-based oscillators then divide the frequency down 256-512 times to reach converter oversampling frequencies, or further down to direct word sampling frequencies. This divides deterministic oscillator jitter by the same amounts, leading to internal system clock jitter to be extremely low, generally almost as low as to be considered negligible in general sampling work. Power supply and system noise on the I2S data bus are a greater issue since careful PCB layout is needed for optimal performance, but rarely understood by those who've only done audio work.
My converter clock was in the sub-picosecond jitter range, about 600 femptoseconds. To give a bit of comparison, ADAT standard required clocks with less than 1 nanosecond of jitter at 48KHz.. quite a large difference!
Anyway, the other side of the story is that a lot of people actually prefer a little jitter on the clock as it smoothes out harmonic content and softens harshness."
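A quick back-of-envelope check (my own numbers, not Svart's) of why 600 fs versus the 1 ns ADAT-era spec is such a large difference: for a full-scale sine at frequency f, RMS clock jitter t_j limits the achievable SNR to roughly -20*log10(2*pi*f*t_j).
-------------------------
# Jitter-limited SNR ceiling for a full-scale sine; figures are illustrative.
import math

def jitter_limited_snr_db(f_hz: float, t_j_seconds: float) -> float:
    return -20 * math.log10(2 * math.pi * f_hz * t_j_seconds)

for label, tj in [("600 fs (sub-picosecond internal clock)", 600e-15),
                  ("1 ns (old ADAT requirement)", 1e-9),
                  ("10 ns (a genuinely bad recovered clock)", 10e-9)]:
    print(f"{label}: ~{jitter_limited_snr_db(20_000, tj):.0f} dB SNR ceiling at 20 kHz")
-------------------------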
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 10:58:34 GMT -6
There is no subjectivity when it comes to ADC really. The whole point of it is to digitise a signal and pass it through unscathed; if you want to apply a Haas effect, add harmonic distortion, warp the stereo field or even add pops & clicks etc. after the fact, then fine. What you certainly don't want is spurious, non-repeatable performance caused by things like jitter; it's a bit like saying I prefer this brand of dishwasher because it only cleans half of my plates some of the time. We're not including analog stages or filters etc. in this example, of course.
Mathematics and testing aren't subjective either; there are very simple or very complex ways to test this, from a loopback with MATLAB / DiffMaker or something like that all the way up to $100K test bench equipment. There's an interesting example on the purple site: if I'm reading it correctly, with the Lynx Hilo an external Mutec MC-3+ clock didn't really make any difference, and that's actually a good thing. However, when clocked to an MBC it did..
I know for a fact that with some older interfaces I used and tested, external clocking actually improved things, the Echo AudioFire for example. Nowadays we've moved on a lot since then.
Hilo (internal clock): -0.1 dB (L), -0.1 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
External clock:
Lynx Hilo clocked by Mutec MC-3+ (drlex): 3.0 dB (L), 3.0 dB (R), -59.1 dBFS (L), -59.2 dBFS (R)
Lynx Hilo ---> Rupert Neve Designs MBC as master (drlex): 0.1 dB (L), 0.1 dB (R), -50.1 dBFS (L), -50.1 dBFS (R)
If there's no subjectivity to converters then we'd just choose which one to buy based off of spec sheets instead of listening to how they sound, wouldn't we? That's my point with how we choose to clock things: if I like how it sounds better, I don't really care how or why, and I'm not going to go back to a setting that the spec sheet says should be better 😁 Also, as a converter manufacturer I wouldn't exactly want to advertise that clocking my unit to another source may improve the sound, so I can imagine how the "internal clock is superior" narrative got started. I haven't just tested this on older converters, I've done it with modern Apogee & Antelope for example. It still makes a difference to my ears. Sometimes better.
It's not a narrative. The internal clock is always superior in modern converters; the transmission can only add jitter. They are all configured around this and will probably eventually all be asynchronous, at least from the more boutique manufacturers. This is slipping down into even cheaper hifi like Emotiva, which choose DAC chips with paired asynchronous sample rate conversion chips, or ones that use the ESS Sabre chipsets, which have it built in. The spec sheets for many of these devices are fudged. Many reviewers with audio analyzers barely know how to set them up and have to be walked through them by the manufacturer providing the unit.
|
|
|
Post by notneeson on Jan 12, 2024 11:07:43 GMT -6
Both things are true in this gig:
You have to trust your ears.
You also have to be aware that you can fool yourself (like when you're making genius EQ moves in bypass).
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 11:08:05 GMT -6
If there's no subjectivity to converters then we'd just choose which one to buy based off of spec sheets instead of listening to how they sound, wouldn't we? That's my point with how we choose to clock things: if I like how it sounds better, I don't really care how or why, and I'm not going to go back to a setting that the spec sheet says should be better 😁 Also, as a converter manufacturer I wouldn't exactly want to advertise that clocking my unit to another source may improve the sound, so I can imagine how the "internal clock is superior" narrative got started. I haven't just tested this on older converters, I've done it with modern Apogee & Antelope for example. It still makes a difference to my ears. Sometimes better. +1 Excellent point! This debate rages on about monitors as another example. Over on other forums (one in particular) ATC monitors get greatly maligned because apparently they don't have the best specifications in certain areas of acoustic technical perfection. My ATCs are the best tool I own for producing music. Good luck to someone choosing their studio monitors from a spec sheet! And in my case there was no "expectation bias", because my HEDD 192 clocking discovery was entirely accidental and I had to trace my steps backwards to work out why everything had started to sound "better" in a subjective sense to my ears.
That narrative from ASR is just untrue. The drivers ATC uses have far less distortion than the cheaper ones in the so-called competitive speakers that the owner is shilling. There are a ton of slanderous accusations on that site because the owner has a long-standing beef with ATC and Trans Audio Group that has gone on for at least a decade, across various forums he was banned from until he started his own forum with only him in charge. He is not shilling Strauss monitors with Scanspeak Revelators, PSI phase-compensated active three-ways, or modern super-low-distortion horn-loaded stuff like Radian. There are competitive speakers to ATCs, but they are still very expensive, and they don't usually rely on DSPed-to-death complex crossovers with lots of limiters to protect cheap (or very fragile) drivers from blowing up.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 12, 2024 11:15:37 GMT -6
I'm not talking about the design holistically, nor am I talking about subjectivity in an overall sense. ADC has one job.. Some people prefer the old Lexicon units and they are just signal degraders really; I get it, and IME or IMO technically correct can sound pretty boring sometimes. However, we're talking about clocking actually improving performance on an easily measurable axis, which is signal degradation. I have never seen anything that would point to an increase in performance via external clocking in a modern PLL system; if you want to mash things up a bit by using an external clock, then fine. Although again, I'd rather have the choice, and I'm not the only one saying this. I'll quote Svart again, and he's not selling converters anymore, so there's no "narrative" to be found. There's one of two things going on here: either the converter has been cheaped out on / not very well designed, or people might like a bit of jitter in their coffee? ------------------------- "Coming from someone who designed their own converters and designs clocking solutions for RF that are 100x better in jitter and phase noise than audio requires.. External clocks for audio are almost never worth the cost. The act of using interconnects, cables, buffering, phase locking, re-clocking through a DLL/PLL to reach usable sampling frequencies, etc., all conspires to reduce clock performance to levels below all but the worst internal clock. Most modern designs use multi-MHz DPLL-based oscillators, then divide the frequency down 256-512 times to reach converter oversampling frequencies, or further down to direct word sampling frequencies. This divides deterministic oscillator jitter by the same amounts, leading to internal system clock jitter being extremely low, generally almost low enough to be considered negligible in general sampling work. Power supply and system noise on the I2S data bus are a greater issue, since careful PCB layout is needed for optimal performance but is rarely understood by those who've only done audio work. My converter clock was in the sub-picosecond jitter range, about 600 femtoseconds. To give a bit of comparison, the ADAT standard required clocks with less than 1 nanosecond of jitter at 48 kHz.. quite a large difference! Anyway, the other side of the story is that a lot of people actually prefer a little jitter on the clock as it smooths out harmonic content and softens harshness."
The Lexicons were better than most modern reverbs because, since they couldn't sound realistic at all (and most modern reverbs don't really either), they had to have their algorithms designed holistically to sound like a good special effect with a wide variety of material. The frequency modulation and chorusing make voices and guitars sound angelic. The random hall is huge and surreal. Unlike most modern flexible "clean" or "utilitarian" ITB reverbs, which you can make sound AWFUL with most settings that deviate from the carefully selected presets. The Alesis Midiverb II is iconic and didn't even have settings; it had 99 built-in patches. Good luck making something like Valhalla Shimmer, a Strymon Big Sky, or LiquidSonics Tai Chi sound as iconic, as utilitarian, or adhere to the signal like that, or like the iconic Quadraverb Taj Mahal setting. Keith Barr pretty much wrote the best primitive Schroeder reverb possible; the distortion made it sound smaller and adhere to noisy signals better, and with it, shoegaze was invented.
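For anyone curious what a "primitive Schroeder reverb" actually looks like, here is a bare-bones sketch of the textbook topology (parallel feedback combs feeding series allpasses); this is not Keith Barr's actual Midiverb or Quadraverb code, and the delay times and gains are illustrative.
-------------------------
# A minimal Schroeder reverb sketch: 4 parallel combs, 2 series allpasses.
import numpy as np

def comb(x, delay, g):
    """Feedback comb: y[n] = x[n] + g*y[n-delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        x_d = x[n - delay] if n >= delay else 0.0
        y_d = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + x_d + g * y_d
    return y

def schroeder(x, fs=48_000):
    comb_delays = [int(fs * t) for t in (0.0297, 0.0371, 0.0411, 0.0437)]
    wet = sum(comb(x, d, 0.84) for d in comb_delays) / 4.0
    for d, g in ((int(fs * 0.005), 0.7), (int(fs * 0.0017), 0.7)):
        wet = allpass(wet, d, g)
    return 0.7 * x + 0.3 * wet               # simple dry/wet mix

impulse = np.zeros(48_000); impulse[0] = 1.0
tail = schroeder(impulse)
print("tail still above -60 dB at sample", int(np.where(np.abs(tail) > 1e-3)[0].max()))
-------------------------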
|
|
|
Post by Johnkenn on Jan 12, 2024 11:54:53 GMT -6
Sooooo….what happens when you run off two mixes….one clocked internally…the other clocked externally…..and then go listen to each on a consumer playback medium? Cheers Wiz
You're not hearing my whole chain, because the Trinnov isn't printed.
|
|
|
Post by Johnkenn on Jan 12, 2024 12:07:10 GMT -6
What irks me is I'm always up for a random funny post or gear post, but posts from myself, svart or Dan, which have some real experience or insight into how all of this actually works, are generally left high and dry. Not a like to be seen. I'm not expecting anyone to have Svart's sort of knowledge, but how are people supposed to be AEs if you can't at least process a different technical point of view alongside or against the "ears are king" (which are easily fooled) crap? Manufacturers create this stuff for you. Do you believe it's all ignorance? Don't get me wrong, the differing opinions even on a scientific level, lapses in knowledge etc. all have their part to play. I've learned stuff from Chris (aka Svart) despite doing this for decades, and I've met some tool sheds of engineers in my time. Before anyone gets annoyed, I ain't saying you need to be a technical expert to be a mixing or mastering "engineer", however if you can't marry the technical with sound to a certain extent then god help you.. Edit: If you don't understand what I mean: retro preamp sounds awesome, 20 likes. Foundation of how everything works and can be applied to actual production, zero likes. I mean, seriously?
I’ll be sure to like more of Svart’s posts…I wasn’t aware we were counting likes. No one is questioning whether what you guys are telling us is scientifically true…what I AM asking is why I’m not hearing what you guys are telling me I hear. The problem with dogma is that when you (just referring generally here) tell me what you’re hearing, I go, “that’s weird, I don’t hear that at all.” When I tell you what I hear, you say, “that’s impossible. You’re fundamentally incorrect.” This is why we get into yelling matches over this stuff. Just trust that I’m hearing what I’m telling you. I think if you were here, you’d hear the same thing. I appreciate Svart considering other possibilities and putting those out there. This is not about not believing in science, it’s about, “why am I hearing it this way when it’s supposed to not be this way.”
|
|
|
Post by seawell on Jan 12, 2024 12:10:28 GMT -6
I was just poking fun at "if you can't marry the technical with sound to a certain extent then god help you.." with a pic of Rick Rubin sleeping on the couch during a session 😁 To the larger discussion though...I don't think you can say what John is experimenting with here is irrelevant because anytime something sounds different and potentially better in your studio then that's a worthwhile pursuit. I've worked in many situations over the years with mixed brand converters, Lynx, Avid, Apogee, Antelope, etc.. one being the master, all running off of their internal clocks or with a separate master clock. It was always worth trying a few clocking situations to see if you liked the results and to ensure you had a stable/smooth running rig. I never cared if the most expensive one was the master clock, the cheapest one or what. So, you'll have to excuse some of us that have experimented with this for years if you say technically there can't be a difference or it can't be better. Of course we all have preferences and we could listen to a few examples and debate which one we think sounds best but to chalk up the differences to confirmation bias is kind of insulting to the dude sitting there knowing these things don't sound the same. So yes, I value what my ears tell me over any forum post, white paper, YouTube video, etc.. So, the only point I'm trying to make is that if you try something and like it, then keep doing it. Who cares if it technically isn't "right?" In addition to that, if experimenting with clocking gives different and in some situations preferable sonics then instead of trying to prove why that can't be, how about go back to the drawing board and try and figure out why real life isn't matching up with what is "supposed" to be happening? BLIND TEST: If anyone is interested, here is the same mix with a few different clocking situations to see if you hear an appreciable difference: www.dropbox.com/sh/okb16qhat60h6ab/AAA1_UquTdr8PBE7Qi94S-nea?dl=0Those converters were older and the asynchronous sample rate conversion technique to strip the received stream of jitter, reclock everything to it's internal clock, and feed the da chip an optimal sample rate that is a not a multiple of 44.1 or 48 didn't exist for audio converters then. Older multichannel converters and still most modern multichannel converters are quite poor unless you pay thousands of dollars and many of the good ones still like the Dangerous Convert 8 and Lynx Aurora N still let the end user defeat the internal clocking scheme. Crazy PLL arrangements were previously almost essential for high performance from many optical connections like Toslink and MADI like Lynx's "Synchroclock" and RME's "Steadyclock" but still asynchronous operation is necessary for high performance from Dante converters forced to use the Dante clock. The ESS Sabre chips allow this technique even in cheap consumer hifi products but of course audio quality and internal clock quality wildly varies.
The current Benchmark Crane Song, Weiss, and Lavry converters do not even have a clock input. They do not even want to give you the idea that you can mess with it. The Apogee Symphony rack mount case does but the converter chips in all Apogee products down to the lowly Groove are all asynchronous. Maybe that's just so other equipment can be spoofed into believing that it is the master clock and that the Symphony is slaving to it. This is all reflective of the new way of doing digital which is to have every device be it's own master clock and operate them (and every process) at the optimal sample rate. The only difference is feeding them single sample rates will have the anti-imaging filter applied at the end of the audible band. This is all a natural extension of oversampling noise shaping filters, which allowed Philips to get 16-bit performance out of physically 14-bit resistor ladder DACs in the 80s which eventually led to delta-sigma modulation based converters. Even going back to the 90s, Weiss hardware was multirate like the best modern plugins as an improvement over FIR smoothed algorithms to behave almost identically when fed different sample rates.
benchmarkmedia.com/blogs/application_notes/13127453-asynchronous-upsampling-to-110-khz
I mentioned in another post I've tested this with modern Apogee and Antelope as well, I just didn't have them for that particular test. So with this new way of doing digital, what does one with a bunch of outboard inserts do? Have multiple interfaces with none as the master, all internally clocked and hope for the best? The only example I didn't include in that test was all converters clocked internally because there was audible distortion and playback kept stopping so I couldn't print it that way.
|
|