|
Post by indiehouse on Jun 3, 2021 7:12:17 GMT -6
Should I be concerned about this? Digital attenuation decreases bit depth, right? And I’m assuming interfaces like Apollo X, Motu 828ES and Lynx Aurora (n) are using digital attenuation on the front panel volume control. Would sending full volume to a monitor controller, like a Coleman, noticeably increase the fidelity?
|
|
|
Post by Ward on Jun 3, 2021 7:19:13 GMT -6
Should I be concerned about this? Digital attenuation decreases bit depth, right? And I’m assuming interfaces like Apollo X, Motu 828ES and Lynx Aurora (n) are using digital attenuation on the front panel volume control. Would sending full volume to a monitor controller, like a Coleman, noticeably increase the fidelity? Just use clip gain and make sure all your regions respect digital zero, which is -18 dBFS. Fidelity doesn't quite degrade from lowering amplitude the way it does in analog.
|
|
|
Post by popmann on Jun 3, 2021 8:34:02 GMT -6
Are you normally attenuating from 127 to 20? Then no... I mean, it does... but it should be using floating point math (or, in the past, higher-than-24-bit fixed point) to reduce level, so the loss should be the stuff of butterfly wings in moderation.
Yes, analog is better in one way... but then you get into overall circuit coloration and channel balance issues at the lowest extremes, where digital is perfect and you have to get spendy for analog not to be overtly sucky. So it's a little like trading one loss for another.
I use a combo. My Benchmark has analog attenuation, which I use for basic “comfortably loud” level setting, usually at 11 to 1 o'clock, and then I use digital attenuation in the software for ad hoc “what's it like super quiet?” listening. But both my apps have a control room monitoring level and mono switches in the floating point world.
|
|
|
Post by jeremygillespie on Jun 4, 2021 18:19:46 GMT -6
I'd go line out to the Coleman.
|
|
ericn
Temp
Balance Engineer
Posts: 14,921
|
Post by ericn on Jun 4, 2021 18:43:28 GMT -6
The Mytek Brooklyn DAC+ has the option for either; for the first month or so I switched back and forth. I found it just seems more open and wider in analog attenuation mode and I have not looked back. As much as I think, in theory, I prefer analog attenuation, I’ll say this is one you really need to try and decide on your own with your kit.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 4, 2021 19:42:13 GMT -6
Most digital attenuation is not properly dithered or 64-bit floating point. That being said, the MOTUs sound cleaner outputting -1 than 0 in my experience, even with an analog volume control.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 4, 2021 19:43:47 GMT -6
The Mytek Brooklyn DAC+ has the option for either; for the first month or so I switched back and forth. I found it just seems more open and wider in analog attenuation mode and have not looked back. As much as I think, in theory, I prefer analog attenuation, I’ll say this is one you really need to try and decide on your own with your kit. Do you use the analog attenuation there or an external passive control?
|
|
|
Post by Guitar on Jun 4, 2021 19:48:28 GMT -6
Most digital attenuation is not properly dithered or 64-bit floating point. That being said, the MOTUs sound cleaner outputting -1 than 0 in my experience, even with an analog volume control. I was noticing that too; my UltraLite mk5 sounds "warmer"? with the outputs turned down to -6 than at full blast, which has more treble stuff happening. Pretty cool "feature," I guess, if you know about it.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 4, 2021 20:00:31 GMT -6
The MOTUs sound cleaner outputting -1 than 0... my UltraLite mk5 sounds "warmer" with the outputs turned down to -6... The warmth is probably the lack of distortion from intersample clipping, or the interface being designed to use the built-in digital mixer and crapping out at 0 dBFS.
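Intersample overs are easy to demonstrate numerically. The sketch below (my own illustration, not anything MOTU-specific) builds a sine at fs/4 whose samples all sit at exactly 0 dBFS, then upsamples it to approximate the reconstructed analog waveform, which peaks about 3 dB over full scale:

```python
import numpy as np

N, L = 64, 4                            # block length, oversampling factor
n = np.arange(N)
x = np.sin(np.pi / 2 * n + np.pi / 4)   # tone at fs/4, sampled at its 45-degree points
x /= np.max(np.abs(x))                  # every sample now sits exactly at 0 dBFS

# Upsample via FFT zero-padding to approximate the reconstructed waveform
X = np.fft.fft(x)
Xp = np.zeros(N * L, dtype=complex)
Xp[:N // 2] = X[:N // 2]                # positive frequencies
Xp[-(N // 2):] = X[N // 2:]             # negative frequencies
xu = np.real(np.fft.ifft(Xp)) * L

peak_db = 20 * np.log10(np.max(np.abs(xu)))
print(f"reconstructed peak: +{peak_db:.1f} dBFS")   # about +3.0 dBFS
```

So a signal that never exceeds 0 dBFS in the samples can still drive the reconstruction filter (or any digital mixer after it) past full scale, which is one plausible source of the harshness at 0 described above.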
|
|
ericn
Temp
Balance Engineer
Posts: 14,921
|
Post by ericn on Jun 4, 2021 20:31:14 GMT -6
Do you use the analog attenuation there or an external passive control? I’m using the one built into the unit; the Apple remote is so convenient.
|
|
|
Post by earlevel on Jun 7, 2021 22:32:20 GMT -6
Should I be concerned about this? Digital attenuation decreases bit depth, right? And I’m assuming interfaces like Apollo X, Motu 828ES and Lynx Aurora (n) are using digital attenuation on the front panel volume control. Would sending full volume to a monitor controller, like a Coleman, noticeably increase the fidelity? tl;dr: It doesn't matter which way you go. Details follow...

Most people think about this purely from a math point of view, without thinking about practical limits in the system (including fundamental limitations of electronics and of your ears). The simple story is that yes, for every 6 dB (6.02, close enough) you turn down the output digitally, you waste one bit. That doesn't happen with an (ideal) analog attenuator: attenuate 6 dB at the analog output of the DAC and the digital lsb becomes half an lsb. Sounds like analog gain is the winner, right?

These things are true, but only up to the point where they run into the unavoidable physical limits we live with. The most obvious is thermal noise. (Electronics anywhere near room temperature puts out noise; there is a fundamental minimum. Many people know this so I don't want to go off on that tangent, but for anyone to whom this is a surprise, start by looking up "Johnson noise" for one source. The main point is there is a theoretical minimum noise in electronics; the best you can do is try to get near it.) The last bits of a DAC are below the noise floor of the electronics. This is always true for 24-bit DACs: whether the gear is consumer or pro, the max output is 1-4 volts, so the bottom bits are below the lowest attainable noise floor. In fact, a few high-end DAC makers are honest and tell you in plain English that no DAC does better than 20-bit accuracy. A few will argue close to 22 is achievable. In any case, that output is already as good as it can get, and this has implications for both digital and analog gain.

Consider digital first: If we know we can attenuate to the point of wasting 3-4 bits without any possibility of noticing (since they are already under the noise at full output), it seems like we can have 18-24 dB of attenuation without fretting about losing resolution. So it seems the most important thing is to not have an amp so powerful (with speakers so sensitive) that we're forced to turn the digital volume control down by a much larger amount. Right?

Now analog: We know there is a noise floor that will not get lower as we turn down the analog volume control. (Someone is thinking... wait, I'll use a plain resistive pot that requires no electronics, and it won't have that electronic noise; therefore turning down the pot will lower the thermal noise too. Nope, the noise comes from components like resistors.) Because of that, you lose signal-to-noise ratio as you turn down the volume, in effectively the same way as with the digital control. In fact, the bottom line again is that the best case is to match the power of your amp/speakers: you want to avoid the hypothetical situation where your amp/speakers are so loud with a full signal that you must turn down either the digital or analog gain near the bottom of its range.

That's the bottom line: if your amplifier/speakers are far too loud for your listening environment, you'll always need to turn down your signal, and your amp/speakers will bring up the volume of the noise floor along with the signal. But it doesn't matter whether you use digital or analog gain for the fine adjustment (assuming a suitable multiplier; sure, it's possible for a brain-dead digital implementation to be worse than analog).

PS: This was actually helpful to me. I currently have that situation for monitoring at my computer. I wasn't concerned I was losing anything significant, but until I stopped here to think it through, I didn't realize it's basically the same either way.
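The arithmetic above is easy to sanity-check numerically. A quick sketch (the -24 dB figure and names are mine, purely illustrative) compares the error from attenuating digitally before a 24-bit DAC against an ideal analog pad after it; both error floors land far below the roughly -120 dBFS analog noise floor of real converter electronics:

```python
import numpy as np

fs = 48000
t = np.arange(1 << 16) / fs
x = np.sin(2 * np.pi * 997 * t)          # full-scale test tone

def to_24bit(v):
    """Round to 24-bit fixed point (ideal DAC input)."""
    return np.round(v * (2 ** 23 - 1)) / (2 ** 23 - 1)

att = 10 ** (-24 / 20)                   # -24 dB, i.e. a 4-bit shift
ideal = x * att                          # what a perfect attenuator would deliver

y_digital = to_24bit(x * att)            # attenuate digitally, then 24-bit DAC
y_analog = to_24bit(x) * att             # full-scale DAC, then ideal analog pad

def err_dbfs(y):
    e = y - ideal
    return 10 * np.log10(np.mean(e ** 2))

print(f"digital attenuation error: {err_dbfs(y_digital):.0f} dBFS")  # ~ -149 dBFS
print(f"analog attenuation error:  {err_dbfs(y_analog):.0f} dBFS")   # ~ -173 dBFS
```

Both figures are academic: even the "worse" digital case sits roughly 30 dB below what the analog output stage can resolve, which is the post's point.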
|
|
|
Post by indiehouse on Jun 8, 2021 6:01:14 GMT -6
tl;dr: It doesn't matter which way you go. Details follow... Wow. Incredibly informative.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 8, 2021 6:26:11 GMT -6
tl;dr: It doesn't matter which way you go. Details follow... Yep, digital volume control doesn’t work exactly like analog does.
The bit depth defines the noise floor if properly dithered. Every fixed-point operation causes the noise floor to rise as the dither noise builds up; truncation distortion rises faster. You don’t raise the digital noise floor, as you said, by turning it down and then turning it up somewhere else, because the original digital samples are simply gone and replaced by new ones; hence the need to dither fixed-point operations. All major DAWs are floating point now, so there is not even a loss of precision, just gradually rising rounding distortion instead of gradually rising dither noise or truncation distortion. 32-bit float has a -144 dBFS noise floor and 64-bit a -318 dBFS one. Since floating point samples are a mantissa with an exponent, the mantissa is always there in full detail even as the rounding distortion rises, and you won’t even see the rounding distortion rise in 64-bit DAWs in SPAN. Now these floating point values are usually truncated to 24-bit fixed point to feed your interface. This is where things can go wrong, once you hit the drivers, the DSP in the interface, and then the analog world. Your setup might sound best with a passive pot, a well-voiced active controller that meshes well with your converters and other electronics, or the digital controls on your interface.
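The -144 and -318 dBFS figures quoted above fall straight out of the mantissa sizes; a quick sketch (function name is mine):

```python
import math

def float_relative_floor_db(mantissa_bits):
    # Worst-case rounding error of a normalized float is half an LSB of the
    # significand: 2^-(mantissa_bits + 1) relative to the value itself.
    return 20 * math.log10(2.0 ** -(mantissa_bits + 1))

print(f"32-bit float (23-bit mantissa): {float_relative_floor_db(23):.0f} dB")  # ~ -144 dB
print(f"64-bit float (52-bit mantissa): {float_relative_floor_db(52):.0f} dB")  # ~ -319 dB
```

The -318 dBFS quoted above is the same figure, give or take a dB of rounding convention.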
|
|
|
Post by earlevel on Jun 8, 2021 12:25:25 GMT -6
Yep, digital volume control doesn’t work exactly like analog does... I'm not sure if you got my point exactly (not sure if you're agreeing partially, but disagreeing on the conclusion?). What I was saying about digital is that no matter how you scale it digitally (which causes a loss of digital precision), the bottom bits are always below the analog noise floor. That is, when you send full-resolution 24-bit data to the DAC, the precision afforded by the least significant bit is lost well below the noise floor. (More than one bit, but I need only claim the least significant for this argument.)

If you scale the audio digitally, say shift 4 bits for -24 dB, you've lost precision, but what you lost is still far below the analog noise level. Effectively, it doesn't matter that you lost digital precision, because the error it introduces is well below the unavoidable analog noise level, always. So it doesn't matter whether we use a digital or analog attenuator, but in either case it's best to avoid a sound system that requires significant attenuation. But no need to get overly paranoid, since there are similar limits to our hearing. That is, if you need to attenuate, say, up to -30 dB or a bit more worst case, you'll never hear the difference between that and no attenuation with a perfect amplification match for your listening level (assuming perfect amps). But if your amp is so powerful you need to attenuate something like -60 dB to keep the volume in check, it's degrading the audio significantly.
|
|
|
Post by Guitar on Jun 8, 2021 13:23:27 GMT -6
Earlevel really nice post! I'm not so afraid of using my Ultralite digital volume knob now at moderate reduction, thanks.
I love it when analog noise comes up in bit depth discussions, to keep our heads on straight.
Now I'm wondering if there's any benefit to recording 64 bit floating point in Reaper vs 24 bit or 32 bit floating point. I need to decide on this setting.
|
|
|
Post by mrholmes on Jun 8, 2021 13:32:18 GMT -6
Most digital attenuation is not properly dithered or 64-bit floating point. That being said the MOTUs sound cleaner outputting -1 than 0 ime even with an analog volume control.
+1. I don't hear anything bad with my digital monitor controller... as good as the analog ones I had, just without the left/right balance issues...
|
|
|
Post by Pueblo Audio on Jun 8, 2021 14:49:20 GMT -6
Using ITB attenuation for monitor level control is a method loaded with negative fidelity consequences.
First, the obvious digital resolution reduction before reconstruction at the DAC. A common level attenuation for normal monitoring levels is somewhere between -18 dB and -26 dB. This would reduce a 0 dBFS peak to -26 dBFS. But the average level of a mix sits around -16 dBFS, which, when attenuated, results in about -40 dBFS. That’s only about 18-bit resolution, which will not reconstruct as faithfully at the DAC. We want faithfulness so that we know what’s in our record.
Second, if dither is not being applied properly (which is most of the time), we are awarded two insults. First is truncation: the DAW may be 64-bit but the DAC is only 24, so foul there. Next we inherit the cancer that is quantization distortion. This is a pesky distortion which modulates with the signal. Even when minuscule, the ear can pick it out of the noise floor and skew a mixer’s perception of their record, leading us to make erroneous decisions.
Third, without the benefit of an OTB level control, there is no safeguard against full-scale casualties. When a computer has a “moment of confusion” and decides to pass full-scale signal, there is no analog attenuation saving the amp from launching the speaker cones and, consequently, your eardrums. Safety first, my sisters and brothers!
Empirically, in my many blind listening tests over the decades, no ITB or built-in DAC attenuation has been perceived as more resolute than a straightforward, external, high-quality, passive stepped attenuator.
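The resolution bookkeeping in the first point above is just the one-bit-per-6-dB rule, and can be written out explicitly (a sketch of my own, not tied to any particular DAC):

```python
def effective_bits(signal_dbfs, attenuation_db, dac_bits=24):
    """Bits of the DAC actually exercised by a signal at the given level."""
    level = signal_dbfs + attenuation_db       # level reaching the DAC, in dBFS
    return dac_bits + level / 6.02             # lose ~1 bit per 6 dB below full scale

print(f"{effective_bits(0, -26):.1f} bits at 0 dBFS peaks")        # ~19.7 bits
print(f"{effective_bits(-16, -26):.1f} bits at -16 dBFS average")  # ~17.0 bits
```

By this accounting, -16 dBFS program through -26 dB of ITB attenuation exercises about 17 bits of a 24-bit converter, in the same ballpark as the "about 18-bit" figure above.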
|
|
|
Post by earlevel on Jun 8, 2021 14:55:24 GMT -6
Now I'm wondering if there's any benefit to recording 64 bit floating point in Reaper vs 24 bit or 32 bit floating point. Reaper is really cool; it's not the DAW I use, but I love Reaper. That said, the 64-bit bus option is pure gimmickry. I'll get back to that.

DAWs typically bus 32-bit float. The last to bus 24-bit was Pro Tools TDM (because 24-bit DSPs did the work). Just establishing some history; obviously, you are talking about the file (track) format. While 24-bit is good enough, I favor 32-bit float for tracks (and mixes, for that matter). Some might view that as 33% overhead for something that in practical terms doesn't matter, but drive space is cheap. No matter what, your DAW will convert it to 32-bit float (if not 64) on every load and save. I'd just as soon it already be in that format; all my tracks and mixes are 32-bit float, and that's what I work with if I should want to edit in another program. In other words, it's just appealing to use 32-bit files. That said, there's nothing at all wrong with 24-bit track files. And the more tracks you use, the less the resolution matters. That is, if you have a crap-load of tracks you're mixing together, effectively they all get turned down with no loss of precision (because the DAW is using floating point) in order to sum to the final product destined for a 24-bit converter. So, in the mix, the 24-bit tracks effectively have more precision than you can use. Some fret that if they didn't record a track hot enough and have to turn it up later, the noise floor comes up. But that's true regardless of track format: you lost bits in the converter, and they won't come back either way.

There's a slight load/save penalty with 24-bit due to format conversion; I'm not sure how perceptible it is, but in essence it doesn't matter unless you do something absurd like change the gain of track data in a bad way (you can recover if they are 32-bit float).

So, 64-bit... first, plug-ins and internal processes in the DAW are going to use double-precision 64-bit floats. So the precision in calculation is always there, whether the tracks are 24-bit, 32-bit, or 64-bit. The question is whether 32-bit is an effective "bus" to move the results from one complex process to the next. (Track files are an extension of the bus, a place to hold the track data, so I won't differentiate between the DAW bus and file format in this case.) How good of a bus is 32-bit float? First, it can encode 25 bits directly (a 23-bit mantissa, normalization to hide an extra bit, and one more for an explicit sign). That's a floor of about -150 dB full scale. But it gets better. One reason you want a very precise bus is that DSP processes (everything from gain adjustment to reverb) require further increased precision to hold the result. Often it doesn't matter whether you maintain the extra bits generated, because they'll get lopped off going to the converter anyway; and since we already have a 25-bit mantissa, we are already lopping off bits. Still, they are nice to have because they make the recording process very forgiving: you could inadvertently get a -98 dB gain change in one stage, and you can compensate with a +98 dB gain with no loss of fidelity (assuming plain linear processes, of course). And because those 25 bits of precision "float", 32-bit floats allow a stunning level of accuracy. So, even though 64-bit calculations are important for things like filters (at minimum, it makes coding them easier), 32-bit is far better than needed for moving results between processes (including saving tracks to disk).

64-bit is there to appeal to people asking, "but wait, since you said the filter uses 64-bit internally, why can't we just keep that 64-bit result?" Read that last sentence again, because that's the bottom line. In reality, floating point is always an approximation, even at 64-bit. (Hypothetical one-digit decimal floating point multiply: .9 x .9 = .8; the true answer is .81, but it won't fit in one digit.) So the only question is whether there is an advantage to a 64-bit bus, and the answer is no. There will be no difference in the 24-bit output to your DAC whether your bus is 64-bit or 32-bit. 64-bit just makes internal buffers twice as big and your storage twice as big, with no difference in what goes to your DAC (barring an absurd example). So for Reaper I'd personally use 32-bit (or 24-bit if drive space is a concern; it won't hurt you). 64-bit is for those who feel comfort in the large number.
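That last claim is easy to spot-check: put the same material on a 32-bit-float "bus" and a 64-bit one, quantize both to 24-bit integers as a DAC feed, and compare. This is a rough sketch under the assumption of a plain linear path (real DAWs do more work in between, but the rounding argument is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
bus64 = rng.uniform(-1, 1, 100_000)          # signal on a 64-bit float bus
bus32 = bus64.astype(np.float32)             # the same signal on a 32-bit float bus

def dac_feed(v):
    """Quantize to signed 24-bit integers, as if feeding a converter."""
    return np.round(np.asarray(v, np.float64) * (2 ** 23 - 1)).astype(np.int64)

diff = np.abs(dac_feed(bus64) - dac_feed(bus32))
print(diff.max())            # never more than 1 LSB of the 24-bit output
print((diff == 0).mean())    # fraction of samples that agree exactly
```

In this sketch most samples agree exactly, and any disagreement is a single LSB at around -144 dBFS, below what the converter's analog stage can reproduce anyway.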
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 9, 2021 3:10:32 GMT -6
Reaper does not just bus in 32-bit, it processes in 32-bit too. Rounding distortion rises in the low end, like the original Waves Q10. It’s more efficient anyway (and cleaner) to just run it at 64-bit rather than have it constantly round.
Pro Tools rounds to 32-bit for plugins despite having a 64-bit mixer, because Pro Tools has been antiquated like that for decades. TDM was 48-bit fixed, truncated to 24-bit fixed going to plugins, without dither unless you used the dithered mixer.
|
|
|
Post by earlevel on Jun 9, 2021 3:41:17 GMT -6
Reaper does not just bus in 32-bit, it processes in 32-bit too... I'm unclear on the degree to which you mean it processes in 32-bit when it's routing 32-bit. I'll have to check into that when I have time. (Damn, when will that be...) I would expect that, yes.

Yeah, I make people mad when I say it doesn't matter if you dither 24-bit truncations. No one will ever know; it's below the noise floor of the electronics (and your ears). That hasn't stopped famed mastering engineers from telling me they can hear it. Another plugin developer told me how it was very important that TDM plugins dithered their outputs, and all the good ones certainly did. Until I pointed out that the damage (if that's the way you perceive it) was already done many times over in any TDM plugin of consequence, well before the output. You may have 56-bit accumulators, but the first time (of many) you pull a result out to pass it to the next task that requires a multiply, it's almost certainly truncated to 24-bit right there. (Unless you're doing double precision, which is slow and awkward on the 56k and rarely done; I have one plugin that required a 24x48 multiply inside an oversampling loop, which was expensive.) That means every IIR, for instance, and every gain change... He conceded I was right when I reminded him of that.

Really, lack of dither and error in the 24th bit or so were the least of PT TDM's pitfalls. There are a lot of ways to fail with fixed point. I'm sure a lot of developers never knew the biquads they sourced from a cookbook or application note had serious performance issues and gave up a lot more than the low couple of bits, for instance. On the bright side, somehow we survived TDM (in style, even), and now we have more precision. That should relieve even the most paranoid.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 9, 2021 4:19:52 GMT -6
We’re not hearing the distortion itself; we’re hearing something wrong with the signal. I use Goodhertz Good Dither now with no noise shaping because it has more clarity than PSP X Dither to my ears. I just run it as the last FX and don’t touch the master fader.
|
|
|
Post by Quint on Jun 9, 2021 7:31:13 GMT -6
Anybody know how Luna handles all this stuff (mix engine, bits, dither, etc.)?
I use Reaper right now, but I'd be keen to give Luna a shot one day. It'd be nice if, once Luna has full hardware inserts, it automatically incorporates dither on outputs, similar to how Harrison MixBus does this.
|
|
|
Post by plinker on Jun 9, 2021 8:16:32 GMT -6
Should I be concerned about this? Digital attenuation decreases bit depth, right? And I’m assuming interfaces like Apollo X, Motu 828ES and Lynx Aurora (n) are using digital attenuation on the front panel volume control. Would sending full volume to a monitor controller, like a Coleman, noticeably increase the fidelity? I don't know about those other units, but the Metric Halo ULN8 & LIO8 have digitally controlled analog gain/trim on all outputs (including cans). Levels can be adjusted from both the front panel (using rotary encoders) and from the MH Console software.
|
|
|
Post by indiehouse on Jun 9, 2021 8:37:07 GMT -6
Speaking of dither, what’s best practices for using dither? Where and when?
|
|
ericn
Temp
Balance Engineer
Posts: 14,921
|
Post by ericn on Jun 9, 2021 8:48:38 GMT -6
Speaking of dither, what’s best practices for using dither? Where and when? The rule has always been: any time there is a conversion down in bit depth.
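For anyone who wants to see what "dither at the conversion" means mechanically, here is a minimal TPDF-dither sketch (a common textbook recipe, not any particular DAW's implementation; names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def reduce_bit_depth(x, bits=16, dither=True):
    """Quantize a float signal in [-1, 1) to the given bit depth.

    TPDF dither (two uniform randoms summed, 2 LSB peak-to-peak) decorrelates
    the quantization error from the signal, turning distortion into benign noise.
    """
    q = 2.0 ** -(bits - 1)                       # one LSB at the target depth
    d = (rng.random(len(x)) - rng.random(len(x))) * q if dither else 0.0
    return np.clip(np.round((x + d) / q) * q, -1.0, 1.0 - q)

# A tone near the 16-bit floor: truncation alone would gate and distort it;
# dither preserves it as a clean tone buried in a flat noise floor.
t = np.arange(48000)
x = 10 ** (-90 / 20) * np.sin(2 * np.pi * 997 * t / 48000)
y = reduce_bit_depth(x, bits=16)
```

Applied per the rule above: run something like this once, at the final word-length reduction, and don't touch the level afterward.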
|
|