|
Post by svart on Feb 21, 2018 19:57:47 GMT -6
I heard a clear step up in fidelity going from 44.1 to 88.2 on my SSL converters. The same happened when I designed my stereo converter: 48/88.2/96/192 each step up in fidelity from 44.1.
Now I only use multiples of 44.1, since the math is a lot easier on programs when they have to sample-rate convert. I'd say that 88.2 downconverted to 44.1 sounds better than 96 downconverted does, but 96 sounds slightly better than 88.2 if I'm not downconverting.
|
|
|
Post by johneppstein on Feb 21, 2018 21:25:00 GMT -6
A lot of modern converters input audio at a very high sampling rate and downconvert it to whatever rate is selected in the DAW.
|
|
|
Post by ragan on Feb 21, 2018 23:40:20 GMT -6
svart wrote: "Now, I only use multiples of 44.1 since the math is a lot easier on programs when they have to samplerate convert."

Wait, for real? I always kind of considered the "math is easier" thing to be a total fallacy since, you know, math is just kinda... math. And computers aren't exactly known for being challenged by calculating things. Is it a float/approximation thing? If you're not being sarcastic, I wanna know more, since I know you know what you're talking about in this area. I don't remotely pretend to have a great grasp on it.
|
|
|
Post by Ward on Feb 22, 2018 7:05:16 GMT -6
Quoting: "After much discussion of this with Andy from Cytomic, I believe it's objectively provable that most plugins benefit from being run at 88.2/96, particularly EQs so they won't cramp and analog-modeled plugins so they won't alias. I remember he wasn't particularly fond of Pro-Q 2's Natural Phase mode, thinking it was too big a compromise to avoid the cramping at 44.1/48; the latency was quite high, for one. Pure digital plugins work the same at either rate, I think. Not sure if any EQ has come up with a better solution than Natural Phase mode when run at 44.1/48."

Natural phase? You mean 'Phase Shift'?? That's the basic operating principle of most analog EQs in the hardware world.
|
|
|
Post by svart on Feb 22, 2018 8:32:35 GMT -6
ragan asked: "Wait, for real? I always kind of considered the 'math is easier' thing to be a total fallacy since, you know, math is just kinda... math. Is it a float/approximation thing?"

I'm not being sarcastic; however, it's a personal belief based on my own testing, in my own studio, with my own gear. Many very learned folks don't believe that multiples of your intended downconverted sample rate are superior, but I believe it's entirely up to the SRC software and how it's implemented. I'm not even talking about what sample rates are running inside the converter ICs, because that can be a mess of oversampling, undersampling, etc. I'm talking entirely about the data being written to the HDD and how it's converted from there.

The algorithm used by the SRC software determines the precision at which it resamples a stream, and there are many, many ways to skin this cat; their results are not equal, especially if it's older theory. I'm certainly not versed in all the ways to do SRC. In general, the coder must find the happy medium between speed and precision (read: time consumed). A process meant to be used standalone, like the iZotope SRC application I use, can take as long as it needs to be absolutely precise/accurate.
That takes a good 20 seconds to convert a song from 88.2 to 44.1. Interestingly enough, it takes a few seconds longer to convert 96 to 44.1. Why longer, for very little difference in the total number of samples? The math is more complex. I *believe* (don't quote me) that a lot of modern SRC for non-multiple sample rates needs to upsample to a common multiple of the two frequencies before downsampling to the lower rate. That creates very large words during the intermediate step, which either need to be floating point or get truncated for speed's sake. With an integer multiple, you can simply apply digital filtering and drop samples to get your lower rate: faster and less CPU-intensive. Ragan, you'll see what I mean when you get to polynomial curve fitting/transfer equations in classes: interpolation is a lot easier when you drop points than when you have to recalculate higher-order polynomials because you've added points (precision) in an intermediate step before dropping points again. But I have no idea if it's a similar process when written for computers.

However, there is one reason I'd use higher sampling rates in almost all cases: most anti-aliasing filters are designed as a tradeoff between the converter's lowest and highest sampling rates. Rarely do converters switch analog filters in/out to match their sampling rates. This means that at lower sample rates you're getting some of the aliased image in your upper range; you may not hear it outright, but the harmonic content is usually perceived anyway. Choosing a sample rate in the middle of your choices might be closer to using the filtering in its best region.
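The "common multiple" arithmetic behind that theory can be sketched in a few lines of Python (a hypothetical helper for illustration, not any particular SRC's internals): the upsample/downsample factors fall out of the greatest common divisor of the two rates.

```python
from math import gcd

def src_ratio(fs_in, fs_out):
    """Return (L, M): upsample by L, then downsample by M, to go fs_in -> fs_out."""
    # Work in integer Hz so the ratio is exact (44.1 kHz == 44100 Hz).
    g = gcd(fs_in, fs_out)
    return fs_out // g, fs_in // g

# Integer-multiple case: 88.2k -> 44.1k needs no upsampling step at all.
print(src_ratio(88200, 44100))   # (1, 2)
# Fractional case: 96k -> 44.1k conceptually runs through a rate 147x higher first.
print(src_ratio(96000, 44100))   # (147, 320)
```

Whether that 147/320 ratio actually costs audible quality is exactly what the rest of the thread argues about; it does mean more arithmetic per output sample in a naive implementation.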
|
|
|
Post by EmRR on Feb 22, 2018 9:48:42 GMT -6
svart wrote: "Choosing a sample rate in the middle of your choices might be closer to utilizing the filtering at its best region."

The response and phase plots of my 16A are pretty interesting; you can see the filters' action.
|
|
|
Post by svart on Feb 22, 2018 10:18:55 GMT -6
EmRR wrote: "The response and phase plots of my 16A are pretty interesting, you can see the filters' action."

So are these the digital filters, or the analog hardware filters? I'm speaking of the analog hardware filters, not digital post-processing.
|
|
|
Post by EmRR on Feb 22, 2018 10:47:25 GMT -6
This is the result of an analog loop test with pink noise from Spectrafoo.
|
|
|
Post by svart on Feb 22, 2018 12:28:59 GMT -6
"This is the result of an analog loop test with pink noise from Spectrafoo."

No offense, but that still doesn't answer the question. If we assume the hardware anti-aliasing filters are not being switched in/out (and I've never come across a design that did), then we can only assume those are digital filters being applied to the stream by the driver. That would be better than nothing, but it still doesn't really help the aliasing at the A/D IC.
|
|
|
Post by Bob Olhsson on Feb 22, 2018 13:35:11 GMT -6
It's all about the filters, the sound of the filters and choosing when and by how much to downsample from the A to D converter's internal sample rate.
My experience with plug-ins has been that 48 can sound better than 88.2 but not as good as 96. I assume this is because the high-end markets for digital audio gear are post production and live sound, which are standardized on 48, although I recently learned that 96 is making inroads in touring sound.
Aliasing isn't what we think of as distortion coming from the analog world. It is a masking of detail in the midrange. Another interesting thing is that it is easier to hear on a big sound system in a hall than in a recording or mastering studio.
|
|
|
Post by EmRR on Feb 22, 2018 14:31:47 GMT -6
svart wrote: "That would be better than nothing, but still doesn't really help the aliasing on the A/D IC."

I'm not addressing the aliasing; I'm addressing the observable differences of the filters, whatever they are.
|
|
|
Post by viciousbliss on Feb 22, 2018 15:05:39 GMT -6
Ward wrote: "Natural phase? You mean 'Phase Shift'?? That's the basic operating principle of most analog EQs in the hardware world."

Nope, it's a selectable mode in Pro-Q 2: zero latency, natural phase, linear phase. www.fabfilter.com/help/pro-q/using/processingmode
|
|
|
Post by Quint on Feb 22, 2018 19:56:53 GMT -6
svart wrote: "I *believe* (don't quote me) that a lot of modern SRC to convert non-multiple samplerates needs to upsample to a common multiple of the two frequencies, before being downsampled to the lower samplerate."

That's an interesting observation and theory on the extra time needed to do SRC between non-multiple sample rates. All things being completely equal (which they apparently never totally are when discussing these matters), what would be the downside if it truly came down to nothing more than the math, other than maybe, say, latency impacts?
|
|
|
Post by Deleted on Feb 22, 2018 22:44:20 GMT -6
The extra time is easier to explain than it seems. Yes, there is a) more data to process because of the higher number of samples, and b) an additional upsampling step, BUT the additional upsampling is no big deal. The idea that "dropping every second sample at integer-multiple rates is much less complicated and more precise, therefore better sounding" is a common belief, but it simply doesn't work like that. The difference between integer-multiple conversion and fractional-ratio conversion is marginal. The first is one downsampling plus a low-pass; the second is a (full integer) upsampling followed by a downsampling, with the low-pass normally applied right after the upsampling. You do not need a second filter, as you might first think, because if you did need two filters, you would just take the lower cutoff frequency. So the crucial calculation for the sound quality, the quality of the filter, is in the end the same problem, whether the ratio is an integer multiple or not.

Some very good SRC algorithms are open source, so this is no black art anymore. Not so long ago, the internal SRCs of some DAWs were, cough, let's say... lousy. The performance of the SRC is also crucial for the end product, so the software houses had to improve a lot to stay competitive when the first high-quality open-source algorithms became widely available and smoked the commercial built-in ones. I guess src.infinitewave.ca, where you can look up measured performance of SRCs, explains it better than I can right now: src.infinitewave.ca/help.html
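A toy illustration of that point (a naive windowed-sinc resampler sketched for this post, not any shipping SRC): the integer and fractional cases run through exactly the same structure — zero-stuff by L, apply ONE low-pass at the lower of the two Nyquist frequencies, keep every M-th sample. Only the numbers L and M change.

```python
import numpy as np

def resample_rational(x, L, M, taps=1023):
    """Naive rational SRC: zero-stuff by L, one windowed-sinc low-pass, keep every M-th."""
    # Single cutoff at the lower Nyquist, expressed relative to the
    # intermediate rate fs_in * L.
    fc = 0.5 / max(L, M)
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(taps)
    up = np.zeros(len(x) * L)
    up[::L] = x                          # zero-stuffing
    y = np.convolve(up, h) * L           # gain make-up for the stuffed zeros
    delay = (taps - 1) // 2              # compensate the FIR group delay
    return y[delay:delay + len(x) * L:M]

# 88.2k -> 44.1k is just L=1, M=2; 96k -> 44.1k would be L=147, M=320 --
# same code path, same single filter.
tone = np.sin(2 * np.pi * 1000 * np.arange(4410) / 88200)   # 1 kHz at 88.2k
out = resample_rational(tone, 1, 2)
ideal = np.sin(2 * np.pi * 1000 * np.arange(len(out)) / 44100)
print(np.max(np.abs(out[300:1900] - ideal[300:1900])))       # tiny, away from edges
```

As the post says, the quality question is entirely about how good that one filter is (length, window, ripple), not about whether the ratio happens to be an integer.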
|
|
|
Post by Deleted on Feb 22, 2018 22:49:33 GMT -6
Bottom line of the previous post: the quality of integer-multiple and fractional-ratio SRC should be virtually the same, if the software does things right.
|
|
|
Post by Deleted on Feb 22, 2018 23:02:20 GMT -6
Addendum 2: No samples are literally dropped; neighboring samples are combined by the filter to downsample... you get the point, which is why the intermediate rate is a multiple of the target sample rate. Things are not sooo complicated, actually...
|
|
|
Post by ericn on Feb 23, 2018 8:03:38 GMT -6
Nobody said there would be math involved!! 😎
|
|
|
Post by Bob Olhsson on Feb 23, 2018 13:59:45 GMT -6
I do everything I can at 96, because multiple rounds of oversampling within plug-ins (up/down, up/down, up/down, over and over), where latency is generally optimized over precision, make no sense. The entire process from the microphone to iTunes needs to be treated as a system.
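That "up/down, up/down" point can be made concrete with a deliberately cheap round trip (linear-interpolation upsample, averaging downsample — a strawman for a latency-optimized stage, not any real plugin's code): push a 10 kHz tone through it repeatedly and the level erodes a little more on every pass.

```python
import numpy as np

fs = 96000
tone = np.sin(2 * np.pi * 10000 * np.arange(4096) / fs)   # 10 kHz test tone

def cheap_round_trip(x):
    """2x oversample by linear interpolation, then 2x decimate by averaging."""
    up = np.empty(x.size * 2)
    up[0::2] = x
    up[1::2] = np.append((x[:-1] + x[1:]) / 2, x[-1])     # interpolated midpoints
    return (up[0::2] + up[1::2]) / 2                      # average adjacent pairs

def rms(x):
    return np.sqrt(np.mean(x ** 2))

levels = [rms(tone)]
x = tone
for _ in range(8):
    x = cheap_round_trip(x)
    levels.append(rms(x))
# Each pass shaves the 10 kHz level a bit more; eight passes lose a couple of dB.
print([round(20 * np.log10(l / levels[0]), 2) for l in levels])
```

A well-designed single conversion at a high session rate avoids stacking these small losses; that is the "treat it as a system" argument in miniature.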
|
|
|
Post by ericn on Feb 23, 2018 14:09:55 GMT -6
Bob wrote: "The entire process from the microphone to iTunes needs to be treated as a system."

There you go again! Oh, the insanity. Why, you're talking sense, and you know where that gets us!
|
|
|
Post by guitfiddler on Feb 24, 2018 16:38:17 GMT -6
So, the verdict after all the calculations? 96?
What conversion rate are you using?
What converters should I get? LOL
A newer take on this subject, can I hear it?
|
|
|
Post by Martin John Butler on Feb 24, 2018 21:05:51 GMT -6
I’ve tried 96, but only for a very small session. I felt my system wasn’t as stable, and I’m not sure I have enough firepower to run a full session at 96. I have an older iMac with 32 gigs, for graphics.
|
|
|
Post by Vincent R. on Feb 25, 2018 7:20:55 GMT -6
Martin wrote: "I’m not sure if I have enough firepower to run a full session at 96."

Yeah, 96 is the highest I can go with my system, and if I pile on a ton of tracks it gets unstable. I also have an older iMac. If I try 192, my computer can’t send and receive information fast enough to work.
|
|
|
Post by joseph on Feb 25, 2018 8:58:13 GMT -6
Bob wrote: "Aliasing isn't what we think of as distortion coming from the analog world. It is a masking of detail in the midrange."

I noticed this especially with the more sophisticated clean plugins like DMG, and in general, on good monitors, the soundstage just seems deeper and more refined tracking at 96 and 48 vs. 44.1. Also, certain plugins with built-in oversampling options sound as good with it off at higher sample rates, and with less latency. I wonder if it's advisable to always LPF 96 kHz mixes more than 48 kHz ones, given this discussion about harmonics from recorded audio in the inaudible range creating distortion in the audible range: www.gearslutz.com/board/mastering-forum/968641-some-thoughts-quot-high-resolution-quot-audio-processing.html
|
|
|
Post by Bob Olhsson on Feb 25, 2018 12:09:46 GMT -6
Here, turning oversampling on never sounds as good in a final 44.1 file as kicking the audio up to 96 and leaving oversampling off. Sometimes a low-pass sounds better, sometimes worse. You just need to listen.
|
|
|
Post by jimwilliams on Feb 26, 2018 11:17:46 GMT -6
When I test converters here on my Audio Precision analyzer, I also notice some issues related not to the rates but to general physics.
The top-notch chip sets actually perform quite well at all the published sample rates, but the measurements change. This is not because faster rates are harder; these products are well designed. Industrial converters operate at much higher bandwidths than audio converters, and they don't have issues.
When you run converters at 96k, you will measure an increase in the THD+noise spec even though the THD doesn't increase on an FFT. This is because the measurement bandwidth has doubled, and all of that extra bandwidth goes into the measurement. Noise is not factored out of THD+noise measurements; it is included, and that's been the audio standard for well over 60 years. Therefore THD+noise specs read higher on the AP, as well as on the chip manufacturers' published plots, at the higher sample rates.
If you consider that all this newly measured noise sits above 20 kHz, it's not something we will hear. Standard AP THD+noise tests have an 80 kHz bandwidth; now you see why more crap is measured even though it won't mean much to the end listener.
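The bandwidth effect is just arithmetic. A sketch with made-up but plausible numbers (1 Vrms signal, flat 10 nV/√Hz noise floor — hypothetical values, not any specific converter): widening the measurement from a 20 kHz audio band to the AP's 80 kHz adds about 6 dB to the measured noise without the converter changing at all.

```python
import math

def noise_limited_thd_n_db(signal_rms, noise_density, bw_hz):
    """THD+N in dB when noise dominates: white noise of given density over bw_hz."""
    noise_rms = noise_density * math.sqrt(bw_hz)
    return 20 * math.log10(noise_rms / signal_rms)

audio_band = noise_limited_thd_n_db(1.0, 10e-9, 20_000)   # roughly -117 dB
wide_band = noise_limited_thd_n_db(1.0, 10e-9, 80_000)    # roughly -111 dB
print(round(wide_band - audio_band, 2))                   # 6.02 dB worse, from bandwidth alone
```

Quadrupling the bandwidth doubles the noise voltage (√4 = 2), hence 20·log10(2) ≈ 6 dB; each doubling of bandwidth costs 3 dB on the spec sheet while the audible band is unchanged.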
|
|