|
Post by jeromemason on Jan 22, 2015 3:30:01 GMT -6
I'm a big fan of BLA, both their products and their mods. Looks like they are showing off the latest version of their dedicated clock. I was insanely impressed with the previous one they put out, but the lack of a UI and the inability to rack mount it probably threw a lot of folks off, and that's sad because for $600 it's a no brainer. Looks like they've got the UI and mounting fixed; hopefully those who are roaming around can give us some more details on that baby. I'm interested to know if they've put the XB in this. I spoke with them a few months ago and it was hinted to me that they may put that clock in their new design. Looks cool to me! tonycamphd
|
|
|
Post by kcatthedog on Jan 22, 2015 8:08:02 GMT -6
xB ?
|
|
|
Post by svart on Jan 22, 2015 9:11:03 GMT -6
Meh. I don't make any excuses for my dislike of BLA, that's one thing. The other is why oh why are we still using wordclock? It's just a terrible way to clock things.
A typical WC signal is upconverted (multiplied) by 256-512x for use by the internal ICs, like ADC/DAC chips. Any jitter on the WC signal is multiplied and also significantly increases the jitter on the MCLK signal (the actual clock for the chips). Even using a PLL, if WC is used as the reference frequency, you can never get better performance than the worst case reference jitter. Any additional jitter from the PLL/VCO is on top of what is already there in the reference.
Most modern internal clocks take their 256-512fs clocks directly from crystals or oscillators and avoid the upconversion, which results in moderately better jitter performance than an external WC, no matter how good the jitter performance of that external clock source is.
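To put rough numbers on that multiplication, here's a quick back-of-the-envelope sketch. The 48 kHz word clock, the 512fs factor, and the 1 ns jitter figure are just assumptions for illustration, not measurements of any real clock:

```python
# Rough sketch: how much of a clock period a fixed amount of edge jitter eats up
# at word-clock rates vs. the 256/512fs master clock the converter chip runs on.

def jitter_fraction(jitter_s, clock_hz):
    """Return jitter as a fraction of one clock period."""
    period = 1.0 / clock_hz
    return jitter_s / period

fs = 48_000            # word clock (sample rate), assumed for illustration
mclk = 512 * fs        # 24.576 MHz master clock derived from it
jitter = 1e-9          # assume 1 ns of jitter on the reference, purely illustrative

print(f"1 ns is {jitter_fraction(jitter, fs):.6%} of a word-clock period")
print(f"1 ns is {jitter_fraction(jitter, mclk):.4%} of an MCLK period")
# The same absolute jitter is ~512x larger relative to the MCLK period,
# and MCLK is the clock the ADC/DAC actually samples on.
```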
|
|
|
Post by RicFoxx on Jan 22, 2015 9:41:18 GMT -6
svart you should open a BLA modding business where you mod BLA original products!
|
|
|
Post by Johnkenn on Jan 22, 2015 9:50:02 GMT -6
3..2..1..
|
|
|
Post by tonycamphd on Jan 22, 2015 10:34:46 GMT -6
jeromemason I'll check their booth out for u on sat when I get there and give u the heads up on what I find out 8)
|
|
|
Post by Johnkenn on Jan 22, 2015 10:36:43 GMT -6
So who is working in 384?
|
|
|
Post by tonycamphd on Jan 22, 2015 10:46:42 GMT -6
The next iteration of the FM clock.
|
|
|
Post by svart on Jan 22, 2015 10:47:45 GMT -6
So who is working in 384?
Apparently there are a few manufacturers making 384kHz sampling converters.
|
|
|
Post by svart on Jan 22, 2015 12:35:02 GMT -6
Meh. I don't make any excuses for my dislike of BLA, that's one thing. The other is why oh why are we still using wordclock? It's just a terrible way to clock things. A typical WC signal is upconverted (multiplied) by 256-512x for use by the internal ICs, like ADC/DAC chips. Any jitter on the WC signal is multiplied and also significantly increases the jitter on the MCLK signal (the actual clock for the chips). Even using a PLL, if WC is used as the reference frequency, you can never get better performance than the worst case reference jitter. Any additional jitter from the PLL/VCO is on top of what is already there in the reference. Most modern internal clocks take their 256-512fs clocks directly from crystals or oscillators and avoid the upconversion, which results in moderately better jitter performance than an external WC, no matter how good the jitter performance of that external clock source is.

Just curious how you think things should be externally clocked when needed? (no agenda - just trying to learn)

I think superclock was a good way to go, but since the frequencies are in the 25 MHz range, it's harder to utilize since you run into more issues with parasitics with the cables and terminations. It's fallen by the wayside since end users haven't bothered to understand the differences and manufacturers don't make what isn't selling.

I think the best way would be to have some kind of standard where you send and receive timing information in digital form and then allow each slave device to clock via modern DDS methods. It's a lot more complicated, but it's much more precise in overall timing. You essentially let each device create its own low-jitter clock, but use the timing signal to sync each clock source up through a handshake event that logs each delay and accounts for the propagation delay from the source to each slave device. Kinda like timecode that drives discrete clock sources with latency delay built in.

If you wanted less intensive clocking from a single clock source, you could also do a super-super clock with much higher frequencies and have each slave device divide the clock and lock to it. Dividing is much better for jitter than multiplying is. You'd have to have MUCH more stringent termination and cable specifications, but could probably use existing CATV cables and devices for buffering and splitting.
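For a sense of what that handshake-and-delay idea could look like, here's a toy sketch of the same two-way timestamp exchange that protocols like NTP/PTP use. The function name and the nanosecond figures are purely hypothetical; this is just the arithmetic, not anything any vendor has actually specified:

```python
# Toy sketch of a "handshake that logs delay" between a master clock and a slave.
# Assumes the cable delay is symmetric in both directions.

def offset_and_delay(t1, t2, t3, t4):
    """
    t1: master sends sync message (master clock time)
    t2: slave receives it         (slave clock time)
    t3: slave sends reply         (slave clock time)
    t4: master receives reply     (master clock time)
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the slave clock is off
    delay  = ((t4 - t1) - (t3 - t2)) / 2.0   # one-way propagation delay
    return offset, delay

# Example: slave clock runs 250 ns ahead, cable adds 100 ns each way.
t1 = 0.0
t2 = t1 + 100e-9 + 250e-9    # arrival time as read on the (offset) slave clock
t3 = t2 + 1e-6               # slave answers 1 us later, on its own clock
t4 = (t3 - 250e-9) + 100e-9  # back on the master's clock, plus return delay

offset, delay = offset_and_delay(t1, t2, t3, t4)
print(f"estimated offset: {offset * 1e9:.0f} ns, one-way delay: {delay * 1e9:.0f} ns")
# A slave could use numbers like these to trim its own local (DDS-generated) clock,
# instead of multiplying up a jittery external word clock.
```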
|
|
|
Post by cowboycoalminer on Jan 22, 2015 14:55:20 GMT -6
My head is spinning. I'm so far out of my league in this thread, but I have a question. I might even start another thread about it. Svart, please explain (in elementary terms) what DSD is and why Rupert Neve says it's the best sounding digital he's ever heard. I've heard it. It's stunning. So why aren't we using it? Besides the fact that all software is PCM of course.
|
|
|
Post by svart on Jan 22, 2015 15:24:22 GMT -6
My head is spinning. I'm so far out of my league in this thread, but I have a question. I might even start another thread about it. Svart, please explain (in elementary terms) what DSD is and why Rupert Neve says it's the best sounding digital he's ever heard. I've heard it. It's stunning. So why aren't we using it? Besides the fact that all software is PCM of course.

DSD is essentially a train of single bits. Each bit can be a 1 or a 0, naturally. Depending on how many bits are 1 in a row, the output voltage rises higher and higher the more 1's it sees. Same goes for 0 bits: the voltage falls as more 0's are added to the stream. DSD is the same idea as PWM/Class D audio in the broad scope.

PCM is just a group of bits that represent a digital value (bytes and words). It's not interpreted bit-by-bit like DSD is.

It's interesting to think that DSD was essentially the basis for SACDs, but didn't go very far. There are claims that single bit streams suffer from distortion at the bit level during signal reconstruction, but from what I've read, nobody has ever really been able to tell the difference between PCM and DSD when objectively listening.

One bad thing about DSD is that it's impossible to edit in its native format. It MUST be converted to something else to be manipulated, which usually means that DSD will become PCM at some point in an editing process.
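To make the "more 1's in a row means a higher voltage" idea concrete, here's a minimal sketch of a first-order delta-sigma style 1-bit stream, assuming a simple sine input and a crude bit-density check. Real DSD modulators are higher order with proper reconstruction filters; nothing here matches any particular converter:

```python
import math

def delta_sigma_1bit(samples):
    """First-order delta-sigma loop: 1-bit output, error fed back via an integrator."""
    bits, integrator, prev = [], 0.0, 0.0
    for x in samples:
        integrator += x - prev                  # accumulate error vs. the last output bit
        out = 1.0 if integrator >= 0 else -1.0  # single-bit quantizer
        bits.append(out)
        prev = out
    return bits

# A few cycles of a half-scale sine, heavily "oversampled" (numbers are arbitrary).
n = 4096
sine = [0.5 * math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
bits = delta_sigma_1bit(sine)

# Density of 1-bits tracks the input level: more 1's where the waveform is high.
pos_half = bits[0 : n // 8]                  # a positive half-cycle of the sine
neg_half = bits[n // 8 * 5 : n // 8 * 6]     # a negative half-cycle of the sine
print("fraction of 1-bits over a positive half-cycle:", sum(b > 0 for b in pos_half) / len(pos_half))
print("fraction of 1-bits over a negative half-cycle:", sum(b > 0 for b in neg_half) / len(neg_half))
```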
|
|
|
Post by Johnkenn on Jan 22, 2015 16:43:53 GMT -6
Well...that totally cleared it up for me...
|
|
|
Post by kcatthedog on Jan 22, 2015 17:42:42 GMT -6
Do I detect a note of facetiousness? Excuse me while I scrape my few remaining brain cells off the walls and ceiling!
|
|
|
Post by cowboycoalminer on Jan 22, 2015 17:53:59 GMT -6
I've heard some guys claim it's beneficial to record in DSD, convert to PCM and edit. I don't know.
|
|
|
Post by jeromemason on Jan 22, 2015 19:25:38 GMT -6
It's not all that complicated; basically, the way interfaces understand digital logic is by voltage. A voltage of +5V could mean 1, while a voltage of 0V means 0. It's just binary code that is transferred in the analog domain. Imagine the old style telegraph... the pauses between taps meant something, similar to what is being described here. A rise in voltage is a 1, a lowering of the voltage is a 0, and thus the analog to digital converter turns voltage into binary code. That's a really stripped down explanation of what is basically going on under the hood. For the digital to analog side it's similar: a chip is being told to turn binary 0's and 1's into +/- voltage. I can't remember if 0V is a 0 or if it actually swings to an inverse voltage for 0, but it's that idea.
BTW, what Svart is talking about is DSD specifically, I'm just talking about the conversion process in general, nothing specific to how a certain platform processes those voltages.
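A stripped-down version of that "voltage in, binary code out" mapping, assuming a hypothetical single-ended converter with a 0-5V input range and a selectable bit depth (real parts differ: differential inputs, mid-supply biasing, two's-complement codes, and so on):

```python
def volts_to_code(v, vref=5.0, bits=16):
    """Map a voltage in [0, vref] to an unsigned integer code."""
    levels = 2 ** bits
    v = min(max(v, 0.0), vref)          # clamp to the input range
    return int(v / vref * (levels - 1))

for v in (0.0, 1.25, 2.5, 5.0):
    c = volts_to_code(v)
    print(f"{v:>4} V -> code {c:5d} = {c:016b}")
# 2.5 V (mid-supply) lands near the middle of the code range,
# which is the zero-crossing point svart describes in the next post.
```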
|
|
|
Post by svart on Jan 22, 2015 22:38:04 GMT -6
It's not all that complicated; basically, the way interfaces understand digital logic is by voltage. A voltage of +5V could mean 1, while a voltage of 0V means 0. It's just binary code that is transferred in the analog domain. Imagine the old style telegraph... the pauses between taps meant something, similar to what is being described here. A rise in voltage is a 1, a lowering of the voltage is a 0, and thus the analog to digital converter turns voltage into binary code. That's a really stripped down explanation of what is basically going on under the hood. For the digital to analog side it's similar: a chip is being told to turn binary 0's and 1's into +/- voltage. I can't remember if 0V is a 0 or if it actually swings to an inverse voltage for 0, but it's that idea. BTW, what Svart is talking about is DSD specifically, I'm just talking about the conversion process in general, nothing specific to how a certain platform processes those voltages.

Most modern A/D converter chips actually float the inputs to 1/2 VCC, or 2.5VDC for a 5V part. Zero volts would actually correspond to a negative voltage if you were to graph the digital data. 2.5VDC would correspond to the zero crossing state of a sine wave.

Most modern A/D architecture is delta-sigma, which oversamples the input signal by many orders of magnitude (also why the clock needs to be 256-512fs), which allows for heavy filtering of the signal and for negative feedback that nulls out error. Because it oversamples (or samples so quickly that you get a more "average" acquisition), you get more precision than with other types of ADC chip. Essentially a charge is built up on capacitors in the ADC frontend and is then sampled based on the clock. It does this by comparing the charge at that point in time to ground. At this point it's essentially similar to a DSD device.

This serial bitstream is fed into a quantizer, which arranges the bits into a "quantum", which is simply a group of bits arranged by amplitude and time. So the data output is now arranged in "words", with each word corresponding to a slice in time based on each clock pulse of the sampling frequency. As you may know, the sampling frequency is 44.1kHz for CDs or some multiple of that for other audio sources. That means you get words of data 44,100 times a second. Each word is 16, 18, 20, 24 or 32 bits in total depth. The bit depth defines the amplitude of the signal at that point in time on the sampling rate.

There is always some noise in the system, some of which comes from the ADC itself. While a part might be designated as something like "24 bit", the true usable bit depth would be more like 22 bits. In the design world this is called ENOB, or "effective number of bits". Just remember that just because a converter says it's something doesn't mean you are getting that something. A good converter at 20 bits might outperform a poorly implemented converter at 24 bits, and so forth. Also, for most modern 5V converters, each bit is worth roughly 6dB of dynamic range.
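The arithmetic behind the "~6dB per bit" and ENOB comments, using the standard ideal-quantizer formulas; the 98 dB SINAD figure below is just an assumed example, not a measurement of any real converter:

```python
def ideal_dynamic_range_db(bits):
    """Ideal SNR of an N-bit quantizer with a full-scale sine input."""
    return 6.02 * bits + 1.76

def enob(sinad_db):
    """Effective number of bits from a measured SINAD figure."""
    return (sinad_db - 1.76) / 6.02

for bits in (16, 20, 24):
    print(f"{bits}-bit ideal dynamic range: {ideal_dynamic_range_db(bits):.1f} dB")

# A part sold as "24 bit" that measures 98 dB SINAD is really about 16 effective bits.
print(f"98 dB SINAD is roughly {enob(98):.1f} effective bits")
```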
|
|
|
Post by jimwilliams on Jan 24, 2015 10:22:53 GMT -6
Michal at Mytek designed very low jitter clocks many years ago, about 10 ps, which was amazing back then. After running it through 3" of PCB traces it went up to 100 ps. After running through a BNC and a 10 foot cable it was even worse.
|
|
|
Post by svart on Jan 24, 2015 13:46:27 GMT -6
Michal at Mytek designed very low jitter clocks many years ago, about 10 ps, which was amazing back then. After running it through 3" of PCB traces it went up to 100 ps. After running through a BNC and a 10 foot cable it was even worse.

One of my design responsibilities at work is high speed data integrity for sampling systems. I've not seen anything quite that bad, even into the hundreds of MHz, unless the PCB layout/stackup is bad. However, I do agree that clocking through cables is pretty bad practice and local clocking is always best.
|
|