|
Post by Johnkenn on Oct 24, 2019 10:49:19 GMT -6
I need to hook back into the DA of the Svartbox and give a comparative listen again.
|
|
|
Post by nobtwiddler on Oct 24, 2019 11:02:41 GMT -6
Just ordered an X4 yesterday.
Going to be doing a few remote/location gigs, singer-songwriter stuff, recording with only one mic (Josephson C700S). So this and my new MacBook Pro should fit the bill nicely!
|
|
|
Post by svart on Oct 24, 2019 11:39:08 GMT -6
I just remember what Gannon said at the product launch. I found the point about jitter and reduced artifacts interesting, in the sense that there is less math going on in the background and the math that is occurring is more precise. That's the same argument people use for recording at something like 88.2k, that the "math" works better when doing downconversion, but time and time again folks say it doesn't matter.
Now, for physical analog timing circuits, "math" is simply manipulating currents in time, so there are no limitations due to word length, etc., but there are great physical constraints that can adversely affect operation, especially as you go higher in frequency. For digital timing circuits, "math" is manipulating numbers, so there are limitations due to processing speed (latency) but few physical constraints. Since they're using "two crystals" in an unknown configuration, I expect it's an analog circuit for the most part, and thus "math" is not involved except in the formulas needed to derive physical part values.
In integer PLLs (phase-locked loops), a low-speed crystal or clock oscillator feeds a control circuit that dictates the frequency of a much higher-frequency voltage-controlled oscillator (VCO). The VCO output is split, and one feed is routed back into the PLL through an integer divider circuit whose output frequency equals that of the reference crystal/oscillator. The phase is brought into sync by small changes in the control voltage to the VCO, and once the two are totally in sync, that's called "PLL lock". The PLL keeps monitoring the phase comparison and makes small adjustments to hold the VCO frequency constant. It's these small adjustments that can affect jitter (phase noise), along with harmonic content, spurious signals, etc. It's an involved process to do properly, and it takes quite a bit of practice to know your way around it.
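The integer-N lock loop described above can be sketched numerically. A minimal model, assuming purely proportional correction; the reference frequency, divider value, and loop gain below are illustrative, not anything UA has published:

```python
# Minimal integer-N PLL model: the loop compares the divided-down VCO
# output against the crystal reference and nudges the VCO frequency
# (via its control voltage) until they match. At lock, f_vco = N * f_ref.

def lock_pll(f_ref=10e6, n=10, f_vco_start=95e6, gain=0.5, steps=200):
    """Iterate small corrections until the divided VCO matches the reference."""
    f_vco = f_vco_start
    for _ in range(steps):
        phase_error = f_ref - f_vco / n   # comparator sees (VCO / N) vs reference
        f_vco += gain * n * phase_error   # small control-voltage adjustment
    return f_vco

print(round(lock_pll()))  # settles at n * f_ref = 100000000
```

In a real PLL those continual small corrections are exactly where the phase noise (jitter) svart mentions comes from.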
Anyway, the point is that I don't think they're doing things this way, given the scant information we have. I think they're just using xtals at much higher multiples of the word-clock frequencies and dividing them down. That seems to match what they've said, and it's very easy to divide a clock down by even factors using some logic chips.
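Dividing down a high-frequency crystal works because the standard audio master-clock frequencies are exact integer multiples of the word-clock rates. A small sketch using the two common master crystal frequencies (24.576 MHz for the 48k family, 22.5792 MHz for the 44.1k family):

```python
# Two standard audio master crystals; every word-clock rate in each
# family is an integer division away, which simple logic counters
# (e.g. chained flip-flops for divide-by-2) can perform.
XTAL_48K_FAMILY = 24_576_000   # Hz -> 48k / 96k / 192k rates
XTAL_44K1_FAMILY = 22_579_200  # Hz -> 44.1k / 88.2k / 176.4k rates

def word_clock(xtal_hz: int, divider: int) -> float:
    """Word-clock frequency obtained by integer division of the crystal."""
    return xtal_hz / divider

print(word_clock(XTAL_48K_FAMILY, 512))   # 48000.0
print(word_clock(XTAL_44K1_FAMILY, 256))  # 88200.0
```

This is why "two crystals" is enough to cover both sample-rate families without any fractional math.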
|
|
|
Post by svart on Oct 24, 2019 11:41:26 GMT -6
I need to hook back into the DA of the Svartbox and give a comparative listen again. The DAC side is slaved to the recovered SPDIF clock. It's the ADC side that has the advantage, plus the SPDIF output from the ADC is derived from that clock, so if you were to slave your clocking to the ADC side, you'd get the benefits of that clock. Unless the recovered SPDIF clock is retimed internally to the interface, in which case who knows what you have.
|
|
|
Post by kcatthedog on Oct 24, 2019 11:57:44 GMT -6
Svart, I/we know you know this stuff cold, so not arguing with you.
Perhaps I misunderstood the UA reference to math, but I thought what it meant is that the converter is actually doing the conversion at one specific sample rate, and then math is applied to derive the various sample rates?
Or, can you explain: is the converter function already a product of math being done, or is the converter actually sampling at the different sample rates?
And would mathematical conversion error cause jitter increases, or are they interrelated (sort of sonic corollaries of each other) but not cause and effect?
|
|
|
Post by svart on Oct 24, 2019 12:21:29 GMT -6
Svart, I/we know you know this stuff cold, so not arguing with you. Perhaps I misunderstood the UA reference to math, but I thought what it meant is that the converter is actually doing the conversion at one specific sample rate, and then math is applied to derive the various sample rates? Or, can you explain: is the converter function already a product of math being done, or is the converter actually sampling at the different sample rates? And would mathematical conversion error cause jitter increases, or are they interrelated (sort of sonic corollaries of each other) but not cause and effect?
I doubt they do post-processing on the sample rates; this would be no more efficient than doing sample-rate and bit-depth conversions in your DAW, for very little benefit that I can tell. Mathematical post-processing wouldn't increase "jitter" per se, but unless you could do it in a mathematically lossless way, you'd probably lose some precision to truncation, as you do with plugins that don't operate at the same bit depth, all while needing fast (and power-hungry) FPGA/DSP processing to do it close to real time.
The only reasonably explainable way I could think of doing this is bypassing the A/D and D/A chips' internal division and using an external divider, if they believed they could build one better than the converters' on-die silicon dividers (which I would seriously doubt, due to parasitics inherent in PCB materials and such). But that doesn't make sense either: most delta-sigma converters are oversampling devices that operate at much higher frequencies internally and only produce the sample rates you set them to output, so you would never feed the converter chips themselves with word-clock frequencies directly.
My take is that they've stopped using whatever clocking sources they used in the past, started doing it a different way, and are just using marketing-speak to explain the change and puff up the specs a little.
Whether or not there is significant improvement in the circuit is unknown unless they get into more detail about the physical circuitry. I'm intrigued by what they have to say about this, even though I really have no interest in this device.
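The truncation loss mentioned above is easy to demonstrate. A small sketch (the sample value is arbitrary) showing that truncating a sample to a 16-bit grid throws away more precision than truncating it to a 24-bit grid:

```python
# Quantize a full-scale sample to a given bit depth by truncation and
# compare the error; fewer bits -> coarser grid -> larger error.

def truncate_to_bits(x: float, bits: int) -> float:
    """Map x in [-1, 1) onto a signed integer grid of `bits` bits, truncating."""
    scale = 2 ** (bits - 1)
    return int(x * scale) / scale  # int() truncates toward zero

x = 0.123456789
err16 = abs(x - truncate_to_bits(x, 16))
err24 = abs(x - truncate_to_bits(x, 24))
print(err16 > err24)  # True: the 16-bit version loses more precision
```

Every non-lossless processing stage adds another one of these small errors, which is the cost svart is pointing at.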
|
|
|
Post by popmann on Oct 24, 2019 12:29:02 GMT -6
Couple things.... First: this is the form factor to send me, UA. Along with, obviously, NFR licenses for the whole kit and caboodle, to give the impression that everyone hip uses UAD. AND me.
To those questioning this form factor: I think it's for people with established studio infrastructure (external preamps, etc.). This, my Kronos, and my MacBook would make a fierce home studio if I had nothing else. The only thing that would make it better would be iOS compatibility, which they should consider as an emerging market for outboard DSP. This unit can't piss on a SINGLE CORE of my tower... but if someone can run a mix's worth of 96kHz plugins on it, you literally CAN use a $300 iPad to record. Is it THAT big of a difference over the 2-preamp version? 10 vs. 12 inputs when cutting a band can be very tangible. Particularly when 4 of those might need UAD: amp sims for bass and guitar... confidence chain and reverb for the singer... my Leslie, take 3, when I go Spinal Tap on it... you could do some old-school trap kit recordings with 4, not with 2. So, I get it. Will they probably need to reduce the price? Maybe.
Second, re: SRC "math". I think the discrepancy is between the quality possible from an SRC, which doesn't matter and has nothing to DO with being multiples... and the algorithms that do "real time" conversion, which ALL use multiples, so I have to expect there's SOME benefit, if only in CPU cycles. It works the same in BOTH directions: MQA will take a 24/44 recording and play it back at 24/88 after whatever time-correction DSP it applies... play an 88.2 file on an iPhone (with built-in IO) and it will play back at 44.1, where a 96kHz file will play back at 48kHz. So, I haven't looked back to see what the context is, but "SRC isn't SRC"... the idea that you need to use 88.2 because it SOUNDS BETTER at 44.1 vs. 96kHz is provably BS... but that doesn't mean that doing the SRC in real time wouldn't work out better.
If it doesn't, there's a LOT of completely disparate coders who don't know that... so I assume there's an advantage, which MAY just be clock cycles to do the SRC.
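The "multiples" point can be made concrete with the reduced ratios a rational resampler has to implement. A sketch:

```python
from fractions import Fraction

def src_ratio(fs_in: int, fs_out: int) -> Fraction:
    """Reduced interpolation/decimation ratio a rational resampler needs."""
    return Fraction(fs_out, fs_in)

print(src_ratio(88_200, 44_100))  # 1/2   -> filter, then keep every other sample
print(src_ratio(96_000, 44_100))  # 147/320 -> upsample by 147, decimate by 320
```

An exact 2:1 relationship is trivially cheap in real time, while 147/320 needs a large polyphase filter bank, which is consistent with the "if only in CPU cycles" hedge above; it says nothing about which one sounds better.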
|
|
|
Post by Drew @ UA on Oct 24, 2019 13:56:33 GMT -6
Hi all, I've not read through all this thread yet, so please quote this post with specific questions you want me to hit.
|
|
|
Post by Johnkenn on Oct 24, 2019 14:50:53 GMT -6
Couple things.... First: this is the form factor to send me, UA. Along with, obviously, NFR licenses for the whole kit and caboodle, to give the impression that everyone hip uses UAD. AND me.
To those questioning this form factor: I think it's for people with established studio infrastructure (external preamps, etc.). This, my Kronos, and my MacBook would make a fierce home studio if I had nothing else. The only thing that would make it better would be iOS compatibility, which they should consider as an emerging market for outboard DSP. This unit can't piss on a SINGLE CORE of my tower... but if someone can run a mix's worth of 96kHz plugins on it, you literally CAN use a $300 iPad to record. Is it THAT big of a difference over the 2-preamp version? 10 vs. 12 inputs when cutting a band can be very tangible. Particularly when 4 of those might need UAD: amp sims for bass and guitar... confidence chain and reverb for the singer... my Leslie, take 3, when I go Spinal Tap on it... you could do some old-school trap kit recordings with 4, not with 2. So, I get it. Will they probably need to reduce the price? Maybe.
Second, re: SRC "math". I think the discrepancy is between the quality possible from an SRC, which doesn't matter and has nothing to DO with being multiples... and the algorithms that do "real time" conversion, which ALL use multiples, so I have to expect there's SOME benefit, if only in CPU cycles. It works the same in BOTH directions: MQA will take a 24/44 recording and play it back at 24/88 after whatever time-correction DSP it applies... play an 88.2 file on an iPhone (with built-in IO) and it will play back at 44.1, where a 96kHz file will play back at 48kHz. So, I haven't looked back to see what the context is, but "SRC isn't SRC"... the idea that you need to use 88.2 because it SOUNDS BETTER at 44.1 vs. 96kHz is provably BS... but that doesn't mean that doing the SRC in real time wouldn't work out better.
If it doesn't, there's a LOT of completely disparate coders who don't know that... so I assume there's an advantage, which MAY just be clock cycles to do the SRC. Um... me first.
|
|