|
Post by Johnkenn on Feb 27, 2020 18:29:18 GMT -6
I would speculate that if enough 1s and 0s are dropped there would be some kind of signal degradation that would be audible - distortion, loss of frequency response, etc. Significant loss may result in a collapse of the stereo field similar to what happens with low-rez mp3s. Correct me if I'm wrong (and I'm pretty sure I am, because I know nothing about digital engineering), but I thought digital was all or nothing. Either you have a signal or you don't. The threshold is set to an over/under, correct? Not like a sine wave that has fluctuations in amplitude? Read earlier in the thread... I had a conversation with someone who refuted that. Not sure if it's true or not.
|
|
|
Post by svart on Feb 27, 2020 19:54:01 GMT -6
Johnkenn wrote: "Correct me if I'm wrong... but I thought digital was all or nothing. Either you have a signal or you don't. The threshold is set to an over/under, correct?"

What happens is that the digital protocol is made up of states of 1 or 0 that represent each bit, or can represent groups of bits to make the data more compact. The problem is that the transition between the physical world and the digital world can cause errors in the data.

Let's take a common issue, such as a cable that has a big crack through the fiber. The crack can refract or reflect a significant portion of the signal back toward the transmitter side. The end of the cable also has a surface that can reflect the signal back again, causing a duplicate, but time-delayed, version of the signal to reach the receiver slightly after the original signal.

The receiver is designed to see a transition between light and no light. The receivers don't necessarily care about the brightness as long as it's bright enough to detect cleanly. The issue is that the receiver can't differentiate between the edge of one bit and the edge of a slightly delayed copy of the signal, and it will register multiple triggers where there should only be one. This is common in fiber optics, though probably less so at such short lengths and low speeds as S/PDIF and ADAT, but it's one example of how a common problem might cause poor performance without destroying the signal entirely.
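To make the multiple-trigger failure mode concrete, here's a minimal toy sketch in Python. The pulse width, echo delay, and 0.7 echo strength are invented numbers, and a real receiver is an analog comparator, not a loop over arrays; this only shows how a bright-enough delayed copy creates extra edges:

```python
import numpy as np

def rising_edges(signal, threshold=0.5):
    """Count low-to-high threshold crossings, like a naive edge-triggered receiver."""
    above = signal > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# A clean optical pulse train: one 30-sample flash per 100-sample bit period.
bit_period, n_bits = 100, 16
clean = np.zeros(bit_period * n_bits)
for k in range(n_bits):
    clean[k * bit_period : k * bit_period + 30] = 1.0

# The cracked-fiber case: add a delayed, attenuated copy of the signal.
# 0.7 is still "bright enough to detect", so the receiver triggers on it too.
echo = 0.7 * np.roll(clean, 40)
damaged = clean + echo

print("edges, clean cable:  ", rising_edges(clean))    # ~1 edge per bit
print("edges, cracked fiber:", rising_edges(damaged))  # ~2 edges per bit
```

The damaged cable roughly doubles the trigger count even though every original pulse still arrives intact, which is exactly a "degraded but not destroyed" failure.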
|
|
|
Post by popmann on Feb 27, 2020 20:17:49 GMT -6
Real-time digital audio streaming is NOT inherently lossless like digital FILE-level transfer.
|
|
|
Post by Martin John Butler on Feb 27, 2020 21:33:47 GMT -6
Cowboycoalminer, there's more to it than that. The timing of the zeros and ones is important; bad timing causes jitter, which reveals itself as a sense of harshness or annoyance. I found sources with higher jitter specs made me turn the music off considerably sooner than sources with low jitter. I'm sure there are a few other factors involved as well. I compared digital cables for some major manufacturers around 12-15 years ago, and some had better intelligibility. I could clearly hear words or sounds that were indecipherable with the previous cable. That was listening to a Tom Waits song. I thought he mumbled some nonsense sound, when in fact it was perfectly clear what he was saying.
|
|
|
Post by matt on Feb 27, 2020 22:26:58 GMT -6
Johnkenn wrote: "Correct me if I'm wrong (and I'm pretty sure I am, because I know nothing about digital engineering), but I thought digital was all or nothing. Either you have a signal or you don't."

Good question, I really don't know. Digital - it's complicated! But I'm glad some genius engineers figured it all out. Where would we be without it? Tape? The horror!
|
|
|
Post by popmann on Feb 28, 2020 14:23:57 GMT -6
I think... was it Ludwig? Which mastering guy said you can never “turn your back on digital audio”? That's the issue, as I see it: it's never THAT different, so it's written off by people as “the same ones and zeros”... but it becomes an issue because it can be both cumulative AND it can slip by undetected until you don't know where in the process it got pooched.
|
|
|
Post by cowboycoalminer on Feb 28, 2020 17:29:04 GMT -6
svart wrote: "What happens is that the digital protocol is made up of states of 1 or 0 that represent each bit... it's one example of how a common problem might cause poor performance without destroying the signal entirely."

I don't completely understand all of what you wrote, but this seems the best explanation I've ever had. Thanks.
|
|
|
Post by svart on Feb 28, 2020 17:48:08 GMT -6
cowboycoalminer wrote: "I don't completely understand all of what you wrote, but this seems the best explanation I've ever had. Thanks."

Yeah, a bit wordy on my part, trying to explain too many details. The easiest analogy would be a sound reflection. If someone is singing loudly in a room, you'll hear the echo/reflection of their voice in addition to their direct voice. Your brain knows the reflected sound is not the same as the incident sound, but a digital device would "hear" both and trigger on both, causing an error. If you get enough errors, you'll start to get glitches and dropouts, or things will otherwise sound strange, or the receiver may just unlock altogether.
|
|
|
Post by christopher on Feb 28, 2020 21:26:22 GMT -6
Interesting topic, makes me wonder. What's easy for me to understand would be a glitch: a sudden extreme sample not anywhere close to the sample before or after. This would sound like a pop or click. So if there are random errors, you'd think they would be obvious. What's tricky to understand is that, apparently, when a bit has an error, the damaged sample can still be close to the sample before or after. OK, here's where it gets weird to me: if it was DSD, it's easier for me to grasp - if the bit before/after was wrong, I'd assume the average of the bits would still land close to the original signal and just sound messy. With how we use packets to build a 24-bit word, when a bit error happens, is it somehow not bad enough to place the sample too far away from the original? I guess there must be some significant bits early in the word that tell whether it's positive or negative? So I guess an error has a 1/24 chance of changing the positive/negative placement? And then maybe a 1/12 chance of placing it in the top or bottom half of the possible positive locations? I guess the more bits, the less the errors show up as audible clicks/pops - just more of an ugly nastiness?
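A rough way to sanity-check that hunch in code: flip a single bit of a signed 24-bit sample and see how far the value jumps. `flip_bit` is just a hypothetical helper for this post, not anything from a real audio stack; the point is that a flipped sign bit throws the sample across the entire half-range (a pop), while a low bit barely moves it (noise):

```python
def flip_bit(sample24, bit):
    """Flip bit `bit` (0 = LSB, 23 = sign) of a signed 24-bit sample."""
    raw = sample24 & 0xFFFFFF            # two's-complement representation
    raw ^= 1 << bit
    return raw - 0x1000000 if raw & 0x800000 else raw

original = 1234                          # a small sample, near silence
full_scale = float(1 << 23)
for bit in (0, 11, 22, 23):
    err = flip_bit(original, bit) - original
    print(f"bit {bit:2d} flipped: error = {err:+9d} "
          f"({abs(err) / full_scale:.6%} of full scale)")
```

Running this shows an LSB error of one count, a bit-22 error of half the positive range, and a sign-bit error equal to the full half-range, which matches the intuition that only errors in the top few bits sound like clicks.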
|
|
|
Post by christopher on Feb 29, 2020 9:24:17 GMT -6
Thinking on this some more... a 1-bit system would divide the dynamic range in two: pos/neg. A 2-bit system would divide those sub-areas in two: 4 total slices. 3 bits would create 8 slices, 4 bits 16 slices, etc., until you get to 24-bit: 16,777,216 slices. The first 3 bits get you pretty close to the sample's range; the other 21 bits zero in on the detail. It makes sense that errors are way more likely to sound like distortion/noise than sudden pops.
Now I see a reason why 32-bit and 64-bit audio could sound better: if errors happen on the hard drive or in transfer, or even in capture or rendering, they are less likely to register as audible distortion the longer the word is.
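The arithmetic behind the "slices" idea, as a quick illustrative sketch only: each added bit doubles the level count, so an error in a given low-order bit becomes a smaller and smaller fraction of full scale as the word grows (the bit-5 example below is an arbitrary choice):

```python
for bits in (1, 2, 3, 4, 16, 24, 32):
    levels = 2 ** bits
    # Worst-case size of a single bit-5 error as a fraction of full scale,
    # once the word is at least 6 bits long.
    rel = (2 ** 5) / levels if bits > 5 else None
    print(f"{bits:2d}-bit word: {levels:>13,} levels"
          + (f", a bit-5 flip is {rel:.2e} of full scale" if rel else ""))
```

Note the caveat: this shrinking only applies to errors in the lower bits. A flipped sign bit is half of full scale no matter how long the word is.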
|
|
|
Post by Bob Olhsson on Feb 29, 2020 11:00:49 GMT -6
The ones and zeros are data and error correction works very well.
The problem is that the clock is good ol' analog, with all of its quirks. SPDIF optical transceivers are notorious clock jitter generators that can swamp a lot of common reclocking methods. Moving a plastic optical cable can actually move the image around, especially with older ICs.
Where optical can help is by eliminating ground loops.
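A toy model of the jitter point above, with made-up numbers (2 ns RMS is an arbitrarily bad clock, and this models only random sampling-time error, not the correlated jitter real transceivers produce):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 96_000.0                    # nominal sample rate
f0 = 10_000.0                    # test tone
n = np.arange(4096)

ideal_times = n / fs
jitter_rms = 2e-9                # 2 ns RMS timing error
jittered_times = ideal_times + rng.normal(0.0, jitter_rms, n.size)

# "Sample" the same sine at ideal vs. jittered instants and compare.
ideal = np.sin(2 * np.pi * f0 * ideal_times)
jittered = np.sin(2 * np.pi * f0 * jittered_times)

err = jittered - ideal
snr_db = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
print(f"error floor from 2 ns RMS jitter on a 10 kHz tone: ~{snr_db:.0f} dB SNR")
# Cross-check: SNR is approximately -20*log10(2*pi*f0*jitter_rms), ~78 dB here.
```

The ones and zeros survive perfectly; the damage all comes from *when* they get converted, which is why the clock is the analog weak point.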
|
|
|
Post by Johnkenn on Feb 29, 2020 11:32:49 GMT -6
Bob Olhsson wrote: "The ones and zeros are data and error correction works very well. The problem is that the clock is good ol' analog with all of its quirks."

So W/C (word clock) seems like the best solution for clocking - and then your digital information goes through one of the digital protocols?
|
|
|
Post by svart on Feb 29, 2020 12:12:21 GMT -6
Johnkenn wrote: "So W/C (word clock) seems like the best solution for clocking - and then your digital information goes through one of the digital protocols?"

Superclock was the best solution, but nobody really bought into it. Coax/differential S/PDIF/AES is by far the best otherwise. Word clock needs to be upconverted inside most modern converters, which oversample natively, and that multiplies jitter. The clock recovered from a digital audio stream can mostly just be downconverted, which divides jitter.
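Back-of-envelope math on the multiply-vs-divide point, purely illustrative (the 256x figure is just a typical oversampling master-clock ratio, not any specific converter):

```python
import math

word_clock = 44_100                  # Hz, distributed word clock
master = word_clock * 256            # Hz, a typical 256x oversampling master
N = master // word_clock

# Multiplying a clock by N multiplies its timing error too: phase noise
# rises by 20*log10(N) dB. Dividing by N buys back the same amount.
print(f"multiply x{N}: phase noise penalty ~ {20 * math.log10(N):.0f} dB")
print(f"divide   /{N}: phase noise benefit ~ {20 * math.log10(N):.0f} dB")
```

So a converter that must multiply an external word clock up by 256x is starting roughly 48 dB in the hole compared to dividing down a faster reference, which is the gist of the argument against W/C with modern chips.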
|
|
|
Post by Bob Olhsson on Feb 29, 2020 12:12:38 GMT -6
With modern chips, W/C is supposed to be worse.
|
|
|
Post by Johnkenn on Feb 29, 2020 13:34:13 GMT -6
Maybe I’m completely wrong - but all of this just feels like fear mongering to me. What I hear sounds better. My mixes are better. I get more money for my services...all of this is happening because I’m obviously hearing clocking issues and jitter?
|
|
|
Post by svart on Feb 29, 2020 15:07:33 GMT -6
Johnkenn wrote: "Maybe I'm completely wrong - but all of this just feels like fear mongering to me. What I hear sounds better. My mixes are better."

I think we're just discussing academics. Obviously, what works best is the better option, no matter what it is. I do think the jitter thing is overblown, and as I've stated before, I think there's such a thing as too good.
|
|
|
Post by Bob Olhsson on Feb 29, 2020 15:14:30 GMT -6
It also could be that you eliminated a ground loop.
|
|
|
Post by ragan on Feb 29, 2020 15:46:47 GMT -6
JK, if you had a damaged cable that was messing with bit-timing/clocking and you replaced it with a non-damaged cable, there you go. If it was illusory but you feel better about what you're doing, also there you go.
I think most often we're fooling ourselves in situations like these, but when it's difficult to get a certain, objective answer and the 'solution' (if in fact there was ever a problem) is cheap and easy... why not just roll with it?
|
|
|
Post by Johnkenn on Feb 29, 2020 16:13:26 GMT -6
I AM rolling with it. That's my point. I just thought it was kind of funny that I was reporting something that sounded better, and then the response was that maybe clocking errors/jitter could be responsible for the increased sense of depth... I don't know - it just struck me as not seeing the forest for the trees. Maybe I'm not understanding what people are saying - are they saying ONLY damaged cables could cause this kind of stuff? And about the clocking - now I'm supposed to worry whether my new-generation chip is working correctly with my W/C? Oh well - it's working great - so if it starts sounding bad, I can revisit.
|
|
|
Post by Bob Olhsson on Feb 29, 2020 16:15:22 GMT -6
Digital ground loops are a pain because they drain the power supply, reducing analog headroom, without there being any audible hum.
|
|
|
Post by cyrano on Mar 1, 2020 17:31:55 GMT -6
popmann wrote: "Real-time digital audio streaming is NOT inherently lossless like digital FILE-level transfer."

Wot? When did S/PDIF become lossy?

Almost all backbone network connections are optical these days. There are two reasons: bandwidth and cost. Copper has gradually become more expensive, and optical has gone up in bandwidth much faster than copper. Optical also needs less amplification along the way. Since these optical cables have become a mass product, there are enough leftover pieces to make some very cheap TosLink cables. As they are very short compared to the network stuff, we can enjoy the quality for a next-to-nothing price. If you order a 1 km cable, the price will smack you in the head, as every millimeter of that cable needs to be inspected.

You might clearly see differences in output intensity when you check optical cables by peeking at the end, but these don't matter up to the point where the signal drops out completely. Heck, I even have gear that outputs a circle instead of a dot. Seems to make no difference at all. If you can clearly hear a difference (with modern gear that isn't defective), something is wrong, imho.

There seems to be some confusion between AV streaming (which can be lossy or lossless) and data transfer. Data transfer is NEVER lossy, whatever medium it is traveling on. Now, maybe I'm being dense again, as "streaming" isn't a well-defined term. Lots of people use it for any audio or video over the network. That's why I personally only use it for true, adaptive streaming that can drop in bitrate over a lesser connection.
|
|
|
Post by christopher on Mar 1, 2020 23:39:18 GMT -6
I have heard digital errors that I could prove were errors. I'd burn CD-Rs, rip the audio, and use WaveLab's audio compare tool, and it would always find hundreds or thousands of errors. If I could get the errors below 1,000, I had no worries. Such is digital life. When the errors were in the 10,000s+, you could hear the detail being lost.

I've probably told this story on GS too many times, but what's one more time... About 15 years ago I worked at a company, just down the street from UA HQ actually, that made DVDs on demand for online purchase. Amazon bought them out and hired me as a temp doing tech phone support. DVD-Rs back then were blue on the bottom (obviously worth 5 cents at most), but store-bought DVDs were clear and replicated, a more costly and reliable manufacturing process, and arguably worth $20. So this company developed a way to trick the consumer: they glued another slim clear plastic disc on after burning, so it looked replicated. Basically 5-cent counterfeit "replicated" DVDs that cost the same as the store, but mailed to your house. It was a huge success, especially for indie films, so they wanted to grow and went into music. That's when I had to start answering calls from music guys around the world.

And then I had to deal with a golden-ear producer wondering why the masters we sent him for his label sounded like shit. Well... I thought the producer was crazy, but he challenged me to do a listening test. I did a blind comparison, both our discs and his. 10 attempts: 10 out of 10 I chose his master; it was so much better. But I wasn't allowed to tell them how the service worked, how crappy it was, and when I told the higher-ups that there was an audible difference we needed to fix, they scoffed, and of course I was put on the naughty list for daring to question anything the higher-ups said or did. I mean, they were on track for way over $100 million that year in revenue, selling DVDs and CDs for full retail that were terrible copies, and the customers had no idea. The musicians had no idea their work was being compromised. Nobody cared. I did not stay working there much longer to watch this all go down, but it still makes me sick.

JohnKenn, what you are describing sounds a lot like how those shitty CDs sounded. Like it's the same... but nowhere near the same.

I don't know how error correction works, but the first thing I'd do is write a little algorithm that made sure that sudden rapid changes in dynamics (like a glitch) were next to samples that also had the same first couple of bits... if not: change the first couple of bits to be the same as the preceding and following samples, so that any errors would be more noise than glitch. Kind of helps me imagine how this can happen now.
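For fun, a minimal sketch of that naive "fix the top bits" idea. To be clear, this is not how real CD error correction (CIRC) works, and every threshold and function name here is invented for illustration:

```python
def conceal_clicks(samples, jump_threshold=1 << 20, keep_low_bits=18):
    """If a 24-bit sample jumps wildly away from BOTH neighbors, replace its
    high bits with the previous sample's high bits, keeping its own low bits."""
    out = list(samples)
    low_mask = (1 << keep_low_bits) - 1
    for i in range(1, len(out) - 1):
        if (abs(out[i] - out[i - 1]) > jump_threshold and
                abs(out[i] - out[i + 1]) > jump_threshold):
            out[i] = (out[i - 1] & ~low_mask) | (out[i] & low_mask)
    return out

# A smooth ramp with one corrupted sample (bit 22 flipped):
signal = [1000, 1100, 1200 ^ (1 << 22), 1300, 1400]
print(conceal_clicks(signal))   # the glitch collapses back near its neighbors
```

It turns an obvious pop back into something near the original value, at the cost of occasionally "fixing" real transients, which is roughly the trade-off any concealment scheme makes.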
|
|
|
Post by svart on Mar 2, 2020 8:45:20 GMT -6
cyrano wrote: "Wot? When did S/PDIF become lossy? ... Data transfer is NEVER lossy, whatever medium it is traveling on."

TOSLINK cables are not made from professional-grade optical glass. Most of them are made from plastic. The better ones are optical-grade polycarbonate, which is as good as glass if there are no inclusions, and flexible enough to endure daily movement. Polycarbonate is what plastic eyeglass lenses are made from. The cheap ones are not optical-grade plastic, and they have a tendency to have inclusions (bubbles, cracks, etc.) and to crack more readily when moved around.

The dot-vs-circle thing you describe doesn't make any difference because, as I explained in earlier posts, the receivers do not trigger on overall brightness. The dot/circle is an effect created by the end polish and flatness. The receivers only care about transitions between light and dark.

Testing the fibers is easy. You run the fiber through an automated camera array that looks for leaks in the cladding as the fiber is spooled, and you run OTDR from both ends. I worked on a phase-lock problem with a design for OTDR (Optical Time-Domain Reflectometry) for 1550nm 2.5G dual-mode fibers. It's not my regular work, but at the time we were short on staff and had to figure it out. It worked by sending an optical pulse down the fiber and measuring the reflection's time delay. Depending on the attenuation and the time delay, we could determine roughly where any issues were. We could also measure jitter down into the femtoseconds by using an oscillator with ultra-low phase noise at 2.5G and a high-speed digital comparator to phase-lock to the recovered 2.5G clock from the incoming signal. We'd adjust the delay of the reference until we could achieve phase lock, and then measure the changes in delay we'd need to make to remain phase-locked from cycle to cycle. The problem was that it wouldn't stay locked for more than a few seconds at a time when using a new batch of delay chips.
It was a fussy piece of equipment, because the company didn't understand that even the temperature change from the breath of the user was enough to change the phase of the oscillator, and the fans kicking on and off caused phase issues. I begged them to redesign it to incorporate an oven-controlled oscillator, but they balked at the $100 that part cost. Also, the layout of the device was terrible, probably one of the worst layouts I've seen at GHz speeds.

And no, S/PDIF isn't lossy at all. It just doesn't have any error correction. It has error flagging, but that really does nothing for errors occurring during the physical-layer transitions.

Funny story though: impedance-controlled coax has a ton more bandwidth than a single fiber does; it's just much more expensive to utilize that bandwidth. I routinely use 50 ohm cables with 12GHz-24GHz ratings. Since nothing exists to utilize the copper bandwidth in terms of data rate, fiber wins in sheer speed/data rate. If we could use all of the coax bandwidth and translate that to maximum data rate, it'd eat fiber's lunch. The problem, as you've pointed out, is that the cost/distance ratio is ridiculously high for coax, and it's much easier and cheaper to just pull a bundle of hundreds of fibers and use a handful of them than to try to mitigate coax's issues with length.
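For anyone curious, the distance math behind the OTDR technique is simple. A hedged sketch (the group index below is a typical textbook value for silica fiber at 1550 nm, not a measured one):

```python
C = 299_792_458.0        # speed of light in vacuum, m/s
N_GROUP = 1.468          # approximate group index of standard fiber at 1550 nm

def fault_distance_m(round_trip_seconds):
    """Distance to a reflection: the pulse covers the path twice, at c/n_group."""
    return (C / N_GROUP) * round_trip_seconds / 2.0

# A reflection arriving 10 microseconds after the pulse was launched:
print(f"fault at roughly {fault_distance_m(10e-6):,.0f} m")   # ~1 km
```

The attenuation of the returned pulse then tells you how bad the fault is, while the delay tells you where it is.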
|
|
|
Post by svart on Mar 2, 2020 9:06:20 GMT -6
On another note, years ago I was trying to use the ADAT outputs from my converters to feed a set of ADA8000 converters for my headphone outputs. I bought a handful of $1 TOSLINK cables from Amazon. They were clearly in rougher shape than ones I had bought years earlier: very poor external quality, which should have been an indicator of performance.
I had a bank of channels that kept having pops and clicks and would sound strangely metallic. I measured the error-flag pin on the receiver chips and saw it was getting a ton of errors. After messing with the software routing and such for way too long, I made a WAG (wild-ass guess) and changed the TOSLINK cable, and the problem went away.
I cut the cable open to find a huge bubble inclusion.
|
|
|
Post by cowboycoalminer on Mar 2, 2020 17:48:48 GMT -6
Bob Olhsson wrote: "SPDIF optical transceivers are notorious clock jitter generators that can swamp a lot of common reclocking methods... Where optical can help is by eliminating ground loops."

So you would be an advocate for ADAT (Lightpipe) over S/PDIF? Just trying to learn.
|
|