|
Post by Martin John Butler on Mar 29, 2018 10:56:49 GMT -6
Again, over my head, but I was just saying I hear or feel a little difference when using higher sample rates, that's all. I assumed that graph showed that; I really don't understand it the way you do, svart.
|
|
|
Post by svart on Mar 29, 2018 12:19:47 GMT -6
Again, over my head, but I was just saying I hear or feel a little difference when using higher sample rates, that's all. I assumed that graph showed that, I really don't understand it the way you do svart. I see, I didn't understand the context of your statement! Yes, I totally agree that higher sampling rates would allow higher frequencies through if you don't take the analog filter into account. Even then, the precision for signals in the audio band would be higher, resulting in perception of tiny distortions on lower-frequency signals that wouldn't be audible at lower sampling rates, even with the anti-aliasing filter in place. The filter would still cut down the effective bandwidth of the circuit, and a lot of those high-frequency distortions/modulations would be snuffed to some degree, but to a lesser extent than if the sample rate alone were lower.
I mentioned before that it seems most A/D designs have anti-aliasing filters that "split the difference" between the highest and lowest expected analog bandwidth, and I think that graph shows it fairly well. You can see that even though it's a 192K sampling rate, the signal starts dropping in amplitude quickly above maybe 15KHz. Since the dB scale is logarithmic, a drop of just a few dB corresponds to a much larger drop in audibility.
Here's a graph that shows that lower sampling rates "smooth" the signals, and higher ones show more detail, for a given signal frequency in the audio band. link
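The Nyquist bookkeeping behind the first point can be sketched in a few lines. A minimal numpy illustration (the tone and harmonic count are arbitrary choices for the example, and the analog anti-aliasing filter svart mentions is deliberately ignored):

```python
import numpy as np

def surviving_harmonics(fundamental_hz, n_harmonics, sample_rate_hz):
    """Harmonics of a tone that fall below Nyquist and so can be represented
    at all at this sample rate (ignoring the analog anti-aliasing filter,
    which in practice rolls off earlier)."""
    harmonics = fundamental_hz * np.arange(1, n_harmonics + 1)
    return harmonics[harmonics < sample_rate_hz / 2]

print(surviving_harmonics(10_000, 10, 44_100))   # [10000 20000]
print(surviving_harmonics(10_000, 10, 192_000))  # everything up to 90 kHz
```

At 44.1K only the fundamental and second harmonic of a 10 kHz tone fit under Nyquist; at 192K the first nine do, which is the extra bandwidth being discussed.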
|
|
|
Post by Bob Olhsson on Mar 29, 2018 18:11:23 GMT -6
What aliasing does in the midrange is mask information. I remember one converter demonstration where the jazz drummer's brush work disappeared with one converter but not another. It's about the filters and the really good ones have lots of latency.
|
|
|
Post by wiz on Mar 29, 2018 18:20:47 GMT -6
I wonder how much of this actually matters with today's converters vs the days of yore?
The audio that comes from my fingers to release gets crunched a lot.
48K capture, upsample to 96k for mastering, back to 44.1 for distribution, then mangled by streaming services...
So at best, a CD release, it's operated on at 3 different sample rates....
I dunno, I got bigger fish to fry in the whole album process... 8)
cheers
Wiz
|
|
|
Post by Quint on Mar 29, 2018 18:28:29 GMT -6
What aliasing does in the midrange is mask information. I remember one converter demonstration where the jazz drummer's brush work disappeared with one converter but not another. It's about the filters and the really good ones have lots of latency. Hi Bob, I posted a link earlier in this thread to a discussion where Dave Amels was discussing the minimally acceptable sample rate which would effectively make digital conversion artifacts a moot point. In your opinion, given the real-world limitations on filter design, including the typical user's need for low latency, or at least relatively low latency, what would be the necessary sample rate to effectively make all of these discussions about filters and pre-ringing a non-issue? What about DSD?
|
|
|
Post by Bob Olhsson on Mar 29, 2018 18:39:34 GMT -6
First, I think people should be using analog monitoring. I'm still not convinced that rates above 96k are useful, assuming top-quality conversion. That said, lots of people are now using 192 with consoles.
|
|
|
Post by Quint on Mar 29, 2018 18:44:34 GMT -6
First, I think people should be using analog monitoring. I'm still not convinced that rates above 96k are useful, assuming top-quality conversion. That said, lots of people are now using 192 with consoles. I use 96k and analog monitoring exclusively. Out of curiosity, what is your reasoning on why analog monitoring should be used? Any thoughts about 192 and consoles?
|
|
|
Post by Bob Olhsson on Mar 29, 2018 20:02:52 GMT -6
There's effectively no latency in analog monitoring. Several people I know have found that it really speeds up getting a good performance compared to even going through back-to-back converters.
|
|
|
Post by Martin John Butler on Mar 29, 2018 21:17:36 GMT -6
Can you describe what you mean by analogue monitoring, Bob? I don't quite follow. How does one record digital but monitor analogue?
I use Logic. I record track by track, and I listen back via my Apollo 8 to my Adcom amp to my NS-10 speakers.
|
|
|
Post by popmann on Mar 29, 2018 22:30:37 GMT -6
You need an analog mixer. I use a little cheap line mixer. The signal is split from the input chain--one leg goes to the converter, one goes to the analog mixer. The playback stereo mix (if not obvious) goes to the same analog mixer.
Alternatively, you can not use headphones at all....or keep your input feed out of the headphones and leave one ear uncovered....those would also qualify, and are doable for zero dollars.
|
|
|
Post by donr on Mar 29, 2018 23:15:45 GMT -6
It'd be hip to invent a way to make the limitations of 44.1/48 (or any digital sampling rate), formerly problematic, sound more musical: interfere with the required calculations, make them non-linear with some new deep math, and somehow end up more euphonic when listened to.
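For what it's worth, the closest existing thing to that dream is deliberate non-linear waveshaping: non-linear math that many ears do find euphonic. A minimal sketch, with the drive amount an arbitrary illustrative parameter, not anyone's actual design:

```python
import numpy as np

def soft_clip(x, drive=2.0):
    """tanh waveshaper: a non-linear transfer curve that rounds peaks off
    and adds low-order odd harmonics, normalized so soft_clip(1.0) == 1.0."""
    return np.tanh(drive * np.asarray(x)) / np.tanh(drive)

x = np.linspace(-1.0, 1.0, 5)
print(soft_clip(x))  # gently S-shaped: peaks compressed, small values boosted
```

Run a mix bus through something like this and the "required calculations" are indeed non-linear, yet the result is often described as warmer rather than broken.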
Just a late night pipe dream..
|
|
|
Post by donr on Mar 29, 2018 23:18:25 GMT -6
+1 on analog monitoring. Why would you do otherwise? You need to hear what you're doing and what you're doing it to at the same moment.
|
|
|
Post by wiz on Mar 29, 2018 23:25:53 GMT -6
You need an analog mixer. I use a little cheap line mixer. The signal is split from the input chain--one leg goes to the converter, one goes to the analog mixer. The playback stereo mix (if not obvious) goes to the same analog mixer. Then you can not use headphones at all....or not put your input feed into the headphone at all and keep one ear off....those would also qualify, and are doable for zero dollars.
I originally asked: how do you actually split the output of the preamp? And what about the impedance presented to the mic preamp? Never mind, I just figured out how I would do it. I went mic, mic preamp, STA Level, into the rear of the patch bay (half-normalled), then took two outputs from the front of the patch bay, one into my Soundcraft Delta and one to the AD. I monitored off the Delta, then did a take monitoring through the DAW (this is how I normally run).
It definitely sounds different. I will try this for a while and see. On that five-minute test, I think my timing was a little nicer and perhaps intonation was better, but it could be placebo.... I did notice less phase issue though; I am sensitive to polarity for sure and have mentioned that previously... As to whether it's worth the futzing around, not sure yet. I monitor through the DAW using MOTU's mixer so the latency is near zero, but near zero ain't zero... 8)
cheers
Wiz
|
|
|
Post by popmann on Mar 29, 2018 23:56:22 GMT -6
You need an analog mixer. I use a little cheap line mixer. The signal is split from the input chain--one leg goes to the converter, one goes to the analog mixer. The playback stereo mix (if not obvious) goes to the same analog mixer. Then you can not use headphones at all....or not put your input feed into the headphone at all and keep one ear off....those would also qualify, and are doable for zero dollars. How do you actually split the output of the preamp? And what about the impedance presented to the mic preamp? cheers Wiz
I have a Y cable attached to the Burl's inputs full time...so rather than plug into the Burl, I plug into the Y; one leg goes to the Burl, one to the mixer. Great River's preamp actually had an analog monitoring out as a safety against what you're implying, but the designer told me there aren't a lot of situations where it's necessary--where the main output can't be split to feed multiple inputs--just a nice feature to have when you DO run into an issue. I've tested with and without the Y....can't hear a lick of difference in the resultant recorded track....and I'm one of those frou-frou "cables sound different" guys, so to say I appreciate nuance is an understatement.
I should also point out, since I realize there are young'uns reading....that of course the original way you monitored analog was by using the mic input on the console--it's multed in the internal signal flow of the mixer to the recorder's inputs.
|
|
|
Post by jazznoise on Mar 30, 2018 11:30:30 GMT -6
A lot of modern stuff has hardware monitoring - including the Focusrite stuff. It does indeed make a big difference; I never monitor with latency. Just hate it.
|
|
|
Post by Bob Olhsson on Mar 30, 2018 11:53:39 GMT -6
Most "hardware monitoring" just bypasses the computer while still being delayed by the converters.
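For scale, that converter delay can be sketched with simple arithmetic. The group-delay figures below are purely illustrative placeholders (sigma-delta converters typically run from a few dozen to a few hundred samples each way), not measurements of any particular unit:

```python
def converter_latency_ms(ad_delay_samples, da_delay_samples, sample_rate_hz):
    """Round-trip delay through the converters alone, with the computer
    buffer fully bypassed by a hardware DSP mixer."""
    return 1000.0 * (ad_delay_samples + da_delay_samples) / sample_rate_hz

# Hypothetical 40-sample group delay in each direction:
print(converter_latency_ms(40, 40, 44_100))  # ~1.8 ms even with zero buffer
print(converter_latency_ms(40, 40, 96_000))  # the same delay shrinks at 96k
```

This is also why higher sample rates are sometimes defended purely on latency grounds: the same number of delay samples takes less time at 96k than at 44.1k.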
|
|
|
Post by johneppstein on Mar 30, 2018 12:22:30 GMT -6
• There's zero mathematical benefit from recording at higher sample rates for audio work. It's simply Nyquist.
That assertion is questionable at best. There is a growing body of scientific evidence that our brains respond to audio information of significantly higher frequency than 20kHz, although we may not be consciously aware of it. We can also perceive differences in waveform within the so-called "audible range", although the harmonics comprising those differences are inaudible themselves as fundamental frequencies.
Of course it does. Unless, of course, you insist that the difference in waveform caused by very high harmonics doesn't make any audible difference, which is easily disproved by comparing a 10kHz sine wave to a square wave of the same fundamental frequency.
The old "machines are better than ears" fallacy. If machines are perfect, why are they constantly being improved?
"Many" is not all. And isn't it better to avoid repeated up- and down-sampling if possible? Yes. Most modern converter designs record at a very high native frequency and downsample. That renders the whole "frequency controversy" moot. The only valid argument for a lower sampling rate these days is file size, and storage is very cheap now.
And the vast majority of diners in commercial establishments eat at Mickey D's, Booger King, and the like, not Chez Panisse. So? There's a lot of conservatism and inertia in the industry. Quite a few of those professionals and professional studios are also still running obsolete converter hardware, largely out of the idea that "if it ain't broke, don't fix it", which I generally agree with in principle. However, when you're used to working with something that actually has been "broke" from day one..... And of course converters aren't "sexy", and converter upgrades are not nearly as likely to bring new customers through the door as many other purchases.
|
|
|
Post by popmann on Mar 30, 2018 12:25:40 GMT -6
Most "hardware monitoring" just bypasses the computer while still being delayed by the converters. It's the rare exception, and ONLY in "modern" times, that an interface that ISN'T MADE BY AVID doesn't have a hardware DSP mixer. The SPL Crimson is the only analog cue mixer I'm aware of....and they are quick to point that out, even though the majority of users don't have any clue why that's better--and I've seen people railing on it for NOT having the typical DSP mixer, which will have more bells and whistles because....it's a cheap digital chip. On the flip side, the Presonus Quantum is the first line I've witnessed in 25 years of computer recording use WITHOUT a hardware DSP mixer. People claim things about how (RME's) Totalmix is unique--it's not. It's a NICE UI....but the concept of hardware digital cue mixers being built into the interface is at least as old as 99 (?)--whenever I built my first "software instrument" PC. It was part of the ASIO standard for native systems of the 90s....which Apple has abandoned at OS level, FWIW.
|
|
|
Post by johneppstein on Mar 30, 2018 12:31:16 GMT -6
Just a couple last points. The 96kHz latency benefit is indeed worth the change for anyone working natively. Circling back around to the fallacy that more sample points equals a more accurate waveform: that would mean that even recording at an 8kHz sample rate (yes, 1/12 of 96kHz) would have MUCH more accuracy (more sample points) at 40Hz than 96kHz has at 4kHz. It's pretty simple. More than two sampling points per cycle means nothing. The fallacy is thinking your "cycle" is the fundamental of the tone, which is fine when all you're working with is sine waves. When you're working with complex waveforms you need two points on the highest harmonic.
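The "more than two points per cycle means nothing" claim can be demonstrated directly with Whittaker-Shannon interpolation: a 1 kHz sine captured at 8 kHz (only 8 points per cycle) reconstructs essentially exactly at arbitrary off-grid times. A numpy sketch with arbitrarily chosen example frequencies:

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild a band-limited signal at
    arbitrary times t from its uniform samples at rate fs."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(t[:, None] * fs - n), axis=1)

fs = 8_000                       # deliberately low sample rate
f = 1_000                        # 1 kHz sine: only 8 samples per cycle
ts = np.arange(fs) / fs          # one second of samples
x = np.sin(2 * np.pi * f * ts)

# Reconstruct at 20 off-grid times in the middle of the capture
t_fine = 0.5 + np.arange(20) / (20 * f)
rebuilt = sinc_reconstruct(x, fs, t_fine)
print(np.max(np.abs(rebuilt - np.sin(2 * np.pi * f * t_fine))))  # tiny
```

The small residual comes from truncating the infinite sinc sum at the ends of the one-second window, not from the sparse sampling; for a band-limited signal, extra points per cycle add nothing.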
|
|
mhep
Full Member
Posts: 36
|
Post by mhep on Mar 30, 2018 19:51:57 GMT -6
That would just reinforce my statement that 96kHz sample rate reproduction of 4kHz content would be of lower quality than an 8kHz sample rate's reproduction of 40Hz, if sampling really worked like you're illustrating. Just a couple last points. The 96kHz latency benefit is indeed worth the change for anyone working natively. Circling back around to the fallacy that more sample points equals a more accurate waveform: that would mean that even recording at an 8kHz sample rate (yes, 1/12 of 96kHz) would have MUCH more accuracy (more sample points) at 40Hz than 96kHz has at 4kHz. It's pretty simple. More than two sampling points per cycle means nothing. The fallacy is thinking your "cycle" is the fundamental of the tone, which is fine when all you're working with is sine waves. When you're working with complex waveforms you need two points on the highest harmonic.
|
|
|
Post by johneppstein on Mar 31, 2018 13:02:36 GMT -6
That would just reinforce my statement that 96kHz sample rate reproduction of 4kHz content would be of lower quality than an 8kHz sample rate's reproduction of 40Hz, if sampling really worked like you're illustrating. The fallacy is that your "cycle" is the fundamental of the tone, which is fine when all you're working with is sine waves. When you're working with complex waveforms you need two points on the highest harmonic. Only if you're talking about pure sine waves. Musical tones are almost never pure sine waves, except in some electronic music. So while that might be of academic interest, it has little to do with reality outside the "ivory tower"...
|
|
|
Post by christopher on Mar 31, 2018 17:13:30 GMT -6
Mavericks. That surf break is famous for being huge. When I watched a competition I was kind of let down, because the waves aren't any bigger there than anywhere else along the coast. But the reason the 'big wave' gets so big there is that the prior wave bounces off the shore and heads backwards out to sea, and where it crosses the next wave they add up to make a monster wave. Watching this action for an hour, a light bulb went off and I made the connection with audio: all these little harmonics that we can't hear are adding up on top of stuff that we can hear. Enjoy!
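The surf analogy maps onto a real audio mechanism: content above the hearing range can fold down into the audible band whenever a stage is even slightly non-linear (intermodulation). A toy numpy sketch, where the tone frequencies and the 0.1 non-linearity coefficient are purely illustrative:

```python
import numpy as np

fs = 192_000
t = np.arange(fs) / fs
# Two ultrasonic tones, 24 kHz and 25 kHz, each inaudible on its own
x = np.sin(2 * np.pi * 24_000 * t) + np.sin(2 * np.pi * 25_000 * t)

# A mildly non-linear stage (think of an amp or tweeter driven hard)
y = x + 0.1 * x**2

# The squared term produces a difference tone at 25k - 24k = 1 kHz
spec = np.abs(np.fft.rfft(y)) / len(t)   # bins are 1 Hz apart over 1 second
print(spec[1000])    # ≈ 0.05: an audible 1 kHz component has appeared
print(spec[24_000])  # ≈ 0.5: the original ultrasonic tone is still there
```

In a perfectly linear system the two inaudible tones would simply sum and stay inaudible; it takes a non-linearity somewhere in the chain to make them "cross like the waves" into the band we hear.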
|
|