|
Post by jin167 on Feb 2, 2019 1:38:56 GMT -6
I'm responding to your comment about FP ADC being IMPOSSIBLE to design/exist. There are papers out there suggesting its feasibility and I'm simply letting you know that they're there. Yes, I have read the paper and I'm reading it again because I have learned quite a lot from it and am discovering new things along the way. You're more than welcome to stop at any time; it's not like I'm forcing you to read it. As I said, I'm not an expert in DSP or converter design, but I can tell that you're not one either, so what makes you so certain that an FP ADC is impossible when you don't have sufficient knowledge to prove your claim? And what's up with dismissing someone's PhD thesis as science fiction? Do you have a PhD degree in electrical engineering? Again, I'm not interested in your own definition of FP ADC. If you want to create/define your own term, write a paper on it.

Well, let's put it this way - Svart is not the only engineer who has told me that. Not even close.
I didn't say his thesis is SF - I said your interpretation of it is SF.
And let's get something straight right now - it's not my definition. What I'm saying isn't in any way radical, unconventional, or original. In fact, YOU are the first person I've ever encountered who has argued the point. It has been printed in manuals, textbooks, and articles explaining how floating point works in the DAW environment that are way too numerous to mention. Actually, for me it goes back to before DAWs; I believe I first encountered discussions of floating point when I was hardware hacking on early personal computers and FPUs were separate chips, back in the '80s in the old days of Silicon Valley.
Now, perhaps a "floating point converter" may be possible, but as I mentioned previously, at the point I am in the paper it sure looks to me like the author is simply talking about adding an FPU to a standard fixed point converter, and by the standards of those who design processors that isn't actually a floating point analog to digital converter. People who design CPUs, which are MUCH more intensive in their use of FP, still regard the onboard FPU in a modern CPU chip as a separate device from a design standpoint. The difference is that it's on a section of the same die, not a discrete part. (And please don't start arguing semantics about the word "discrete".)
And even if a converter that is natively floating point might be possible there must be good reasons why such chips are not widespread. Like they're not practical, which is not much different from impossible from a practical engineering point of view.
I didn't say his thesis is SF - I said your interpretation of it is SF.

Trying your best to dodge a bullet, huh? Not even once have I expressed my interpretation of the paper on this forum, and yet you claim that my interpretation of the paper is SF. Try harder. As I said, and for the last time, I'm questioning your claim that an FP ADC is IMPOSSIBLE to design/exist. My quarrel is not about its practicality; I'm solely focusing on your claim that an FP ADC is a design that is impossible to achieve. Just because something is impractical doesn't mean that it's impossible to design or build. It simply means there's no point in making one. And I was the first one to admit that I don't know enough about DSP or converter design to make any definitive statement, but somehow you had enough confidence to state that an FP ADC is impossible to achieve despite not having sufficient technical knowledge (you can prove me wrong on this anytime. Perhaps a paper/article you have written on this subject?). This will be the last response from me. I can see that there is nothing to be gained from all this. Enjoy your day.
|
|
|
Post by jin167 on Feb 1, 2019 20:09:51 GMT -6
As long as you remain adamant in stating that ADC is ONLY possible in fixed point format, your definition of "conversion" should stick with you. And the link you have shared does not confirm your claim that conversion can ONLY happen in fixed point, and is thus irrelevant to this topic. You have stated that an FP ADC is impossible, but there seem to be papers out there suggesting otherwise (BTW, I'm not interested in their practicality).

Well, if you're not interested in practicality this entire discussion is pointless. I'm talking about engineering; you're playing semantic games. And frankly, if that's all this is, I have far more interesting science fiction books to read.
Am I wasting my time slogging through your paper?
I'm responding to your comment about FP ADC being IMPOSSIBLE to design/exist. There are papers out there suggesting its feasibility and I'm simply letting you know that they're there. Yes, I have read the paper and I'm reading it again because I have learned quite a lot from it and am discovering new things along the way. You're more than welcome to stop at any time; it's not like I'm forcing you to read it. As I said, I'm not an expert in DSP or converter design, but I can tell that you're not one either, so what makes you so certain that an FP ADC is impossible when you don't have sufficient knowledge to prove your claim? And what's up with dismissing someone's PhD thesis as science fiction? Do you have a PhD degree in electrical engineering? Again, I'm not interested in your own definition of FP ADC. If you want to create/define your own term, write a paper on it.
|
|
|
Post by jin167 on Feb 1, 2019 9:02:44 GMT -6
The link you gave me has little relevance to the topic that is being discussed here. Besides, as I said, I'm aware of the difference between floating and fixed. Floating point has a better definition than a 'PROCESSING TECHNOLOGY'. Say that to any decent engineer or mathematician and see how they react to your remark. I'm not just talking about audio conversion, I'm talking about ADC in a broader sense. The paper below discusses the technical feasibility of floating point ADC. portal.research.lu.se/portal/files/4716761/1472266.pdf Don't think that your own definition of "conversion" applies to everyone.

You are joking, right?
It's not "my own definition of conversion". Conversion, as considered in this thread, is about the action performed by analog to digital audio converters and nothing else. The word "conversion" can mean a lot of different things in different contexts, from the process of changing one's religious allegiance to turning a stock car into a hot rod. We are not discussing any of those other uses of the word here.
And the article I referred you to is EXACTLY what we're talking about.
As long as you remain adamant in stating that ADC is ONLY possible in fixed point format, your definition of "conversion" should stick with you. And the link you have shared does not confirm your claim that conversion can ONLY happen in fixed point thus irrelevant to this topic. You have stated that FP ADC is impossible but there seem to be papers out there suggesting otherwise (BTW, I'm not interested in their practicality).
|
|
|
Post by jin167 on Jan 31, 2019 22:43:47 GMT -6
The link you gave me has little relevance to the topic that is being discussed here. Besides, as I said, I'm aware of the difference between floating and fixed. Floating point has a better definition than a 'PROCESSING TECHNOLOGY'. Say that to any decent engineer or mathematician and see how they react to your remark. I'm not just talking about audio conversion, I'm talking about ADC in a broader sense. The paper below discusses the technical feasibility of floating point ADC. portal.research.lu.se/portal/files/4716761/1472266.pdf Don't think that your own definition of "conversion" applies to everyone.

I'm an engineer and John is right. Floating point, in layman's terms, is merely a way to handle larger (or smaller, depending on how you look at it) decimal values/numbers than a system could handle if it were working with a static word length (bit width). If you do a math operation that results in a decimal number, but your system has a fixed bit width, you'd either have to round to an integer or truncate your value. Either one loses precision or accuracy in the scheme of things. You could also do more operations to figure out an integer result, but that wastes time and power. Yes, there's been talk of floating point specific converters, but they're nowhere to be found because they're impractical and ultimately unnecessary.

Thank you, svart. I know you're an engineer and I highly value your opinion. I can agree with your definition/description of floating point because it is exactly how I understand it, but wrapping it up with the term 'PROCESSING TECHNOLOGY' is not really good enough for me.
In this instance, John was adamant that a floating point converter is impossible by definition. I responded by saying that I do not have enough knowledge in DSP or converter design to conclude that a floating point converter is impossible, but I have found a number of papers on this topic suggesting their feasibility. And, as you have pointed out, there has been some talk/research of floating point specific converters in the past, but they never came to fruition for various reasons.
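[Editor's aside: svart's fixed-vs-floating distinction is easy to make concrete. Below is a minimal Python sketch of my own (not from the thread) that unpacks the sign/exponent/mantissa fields of an IEEE-754 single-precision number — the exponent field is the part that "floats", which is what lets the format keep relative precision across a huge range instead of one fixed step size.]

```python
import struct

def float32_parts(x):
    """Decompose an IEEE-754 single into (sign, exponent, mantissa) fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# A fixed word length gives one uniform step size over the whole range;
# floating point instead keeps ~24 bits of RELATIVE precision because
# the exponent field "floats" with the magnitude of the value.
print(float32_parts(-0.5))     # (1, 126, 0): -0.5 = -1 * 1.0 * 2**(126-127)
print(float32_parts(2.0**20))  # exponent grows, mantissa precision stays
```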
|
|
|
Post by jin167 on Jan 31, 2019 21:43:09 GMT -6
Floating point is not a processing technology and there is a better definition of it. I'm not an expert and don't have enough experience in DSP or converter design to confirm that there 'Ain't no such thing as floating point conversion'. I googled for floating point ADC and found a few papers from the early 2000s, so I'll have a look at some of them tonight. BTW, I would love to have a look at your work regarding this topic; I'm always willing to learn. Let me know if you have a paper that I can look into.

While I, myself, am neither a coder nor a mathematician, I CAN follow a paper and understand what it says. Here is such a paper from the well-known company, the oddly named "Analog Devices", which makes various sorts of digital processing chips.
Hopefully you'll be able to understand the subject a bit better after reading it.
Again, floating point is a PROCESSING TECHNOLOGY, not a conversion technology. Analog to digital conversion, by nature produces a fixed point result.
I'm guessing that the papers you saw but didn't read involved conversion between fixed point and floating point within the processing environment and didn't have much of anything to do with analog to digital audio conversion. Back in the late 20th century and up to the early 2000s many CPUs did not actually have floating point built in and relied on external FPUs (Floating Point Units). That may be what those papers were about.
Note that the word "conversion" means different things in different contexts.
The link you gave me has little relevance to the topic that is being discussed here. Besides, as I said, I'm aware of the difference between floating and fixed. Floating point has a better definition than a 'PROCESSING TECHNOLOGY'. Say that to any decent engineer or mathematician and see how they react to your remark. I'm not just talking about audio conversion, I'm talking about ADC in a broader sense. The paper below discusses the technical feasibility of floating point ADC. portal.research.lu.se/portal/files/4716761/1472266.pdf Don't think that your own definition of "conversion" applies to everyone.
|
|
|
Post by jin167 on Jan 31, 2019 0:22:54 GMT -6
I don't think I have seen a converter that does floating point (in an audio application at least). To avoid further confusion, I do understand the differences between floating and fixed point, and I'm only using Yamaha's interface as an example as it has just been released and made available to the public at a reasonable price point (and I have a feeling that this is only the beginning and we will be seeing more of these '32 bit' converters in the near future). A 32-bit converter seems to make sense in certain designs like Yamaha's new interface (internal DSP, Cubase), but in general there seems to be no real benefit in recording in 32 bit.

Floating point is a processing technology, not a conversion technology. Ain't no such thing as floating point conversion.
At this point I'd much rather spend that money on more channels, not more bits.
Floating point is not a processing technology and there is a better definition of it. I'm not an expert and don't have enough experience in DSP or converter design to confirm that there 'Ain't no such thing as floating point conversion'. I googled for floating point ADC and found a few papers from the early 2000s, so I'll have a look at some of them tonight. BTW, I would love to have a look at your work regarding this topic; I'm always willing to learn. Let me know if you have a paper that I can look into.
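[Editor's aside: the "conversion produces a fixed point result, floating point happens afterwards" model John describes is what typical DAW front ends do. A hedged sketch, assuming hypothetical signed 24-bit words from a converter, of the post-conversion scaling into normalized floats:]

```python
import numpy as np

def fixed_to_float(samples_int24: np.ndarray) -> np.ndarray:
    """Map signed 24-bit fixed-point ADC words onto float32 in [-1.0, 1.0)."""
    return samples_int24.astype(np.float32) / np.float32(2**23)

# Full-scale negative, silence, and full-scale positive codes:
raw = np.array([-2**23, 0, 2**23 - 1], dtype=np.int32)
print(fixed_to_float(raw))  # -1.0, 0.0, and just under +1.0
```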
|
|
|
Post by jin167 on Jan 29, 2019 10:16:50 GMT -6
I don't think I have seen a converter that does floating point (in an audio application at least).
To avoid further confusion, I do understand the differences between floating and fixed point, and I'm only using Yamaha's interface as an example as it has just been released and made available to the public at a reasonable price point (and I have a feeling that this is only the beginning and we will be seeing more of these '32 bit' converters in the near future).
A 32-bit converter seems to make sense in certain designs like Yamaha's new interface (internal DSP, Cubase), but in general there seems to be no real benefit in recording in 32 bit.
|
|
|
Post by jin167 on Jan 28, 2019 21:08:11 GMT -6
So if you clip the converter it's because you (not you personally) are an idiot and are feeding the converter a WAY hot signal. Which is totally unnecessary if you understand gain structure, because the noise floor of 24 bit is way, way below the realm of audibility. (I am not considering the issue of using clipping as a substitute for compression because it's not relevant to the conversation.)
You cannot technically compensate for human idiocy. There will always be some knothead who wants to track at 0dB. Because he read somewhere that that's how people did it back in the tape days.
A 32 bit converter is no protection against bad practice.
Exactly, which is what motivated me to start this thread in the first place.
|
|
|
Post by jin167 on Jan 28, 2019 20:08:43 GMT -6
I think clipping has less to do with the bit depth of a converter. You can easily clip a 32 bit converter if the supply rail can't cope with the incoming line level signal. I guess the biggest problem with the 384kHz/32bit is its size, which is over 5 times what is considered to be a standard mastering grade format at the moment (96kHz, 24bit).

Bit depth is not related to rail voltage.

Obviously.
|
|
|
Post by jin167 on Jan 28, 2019 18:45:05 GMT -6
Well, my question was whether or not we need a 32 bit converter. I was using Yamaha's new interface as an example and I had no intention of bashing it in any way. From what you're saying, I guess Yamaha had a good reason to implement a 32 bit converter in their design, but does it apply to other designs as well (audio interfaces/converters without internal DSP, non-Cubase users)?

Then that's easy. Not to my understanding. The "live use" pointed out above is, again, DSP related, not conversion. Anyone clipping their ADCs, live or not, is simply... I mean, full scale will clip always... unless you build the analog to square wave first, which might be an interesting design... you don't get more headroom in the converter. Well... sort of... foot room. But analog to full scale mapping is going to be analog to full scale mapping... so if you're clipping your mic preamp/ADC now, you will with the 32 bit, too. If anything it would allow you to turn the whole calibration down... but 24 bit already allows WAY more of that than I've ever experienced people actually USING... so...

384, for the record, isn't really new. SACD/DSD is basically 384 in the time domain and 88.2 in the frequency domain, circa the late 90s. (Back then) that's what I assumed we'd ALL be using by now. The "most analog sounding" of all digital ever. I never even considered that someone would put content manipulation above sonics. Ha. Talk about ME having a UUUgge blind spot. 384 is just "DXD", which is DSD made into linear PCM so that you can ALSO do PCM based content manipulation. Cubase has supported 384 for a lot of years, but only some REALLY expensive IO units supported it.

Didn't mean it as in "new" new. I meant it as in a whole new different topic. Language barrier.
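[Editor's aside: the "analog to full scale mapping is going to be analog to full scale mapping" point can be sketched numerically. This toy uniform quantizer is my own illustration, not any particular chip's behavior; it clips at the same ±full-scale input regardless of word length, which is why extra bits add resolution below full scale but no extra headroom above it.]

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantizer with hard clipping at full scale (+/-1.0 input)."""
    full = 2**(bits - 1) - 1
    return np.clip(np.round(x * full), -full - 1, full) / full

hot = np.array([0.5, 1.2, -1.5])  # signal exceeding full scale
print(quantize(hot, 24))          # 1.2 clips to +1.0 at 24 bits...
print(quantize(hot, 32))          # ...and clips to +1.0 at 32 bits too
```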
|
|
|
Post by jin167 on Jan 28, 2019 18:37:01 GMT -6
Well, my question was whether or not we need a 32 bit converter. I was using Yamaha's new interface as an example and I had no intention of bashing it in any way. From what you're saying, I guess Yamaha had a good reason to implement a 32 bit converter in their design, but does it apply to other designs as well (audio interfaces/converters without internal DSP, non-Cubase users)? The 384kHz sampling rate is a whole new issue and I'm still undecided on that matter, so I won't comment on it.

For live sound the 32 bit extra headroom is great, I'd think. I don't see how someone wouldn't be for more bits on the front end of any audio device. It just makes it that much harder to clip. Seems worth it to me. I record in 384kHz a lot. I also think it's worth it, though plenty would disagree on that front.. but whatever.

I think clipping has less to do with the bit depth of a converter. You can easily clip a 32 bit converter if the supply rail can't cope with the incoming line level signal. I guess the biggest problem with the 384kHz/32bit is its size, which is over 5 times what is considered to be a standard mastering grade format at the moment (96kHz, 24bit).
|
|
|
Post by jin167 on Jan 28, 2019 17:31:42 GMT -6
well, my question was whether or not we need a 32 bit converter. I was using Yamaha's new interface as an example and I had no intention of bashing it in any way. From what you're saying I guess Yamaha had a good reason to implement 32 bit converter in their design but does it apply to other designs as well (audio interfaces/converters without internal dsp, non-cubase users)?
The 384kHz sampling rate is a whole new issue and I'm still undecided on that matter so I won't comment on it.
|
|
|
Post by jin167 on Jan 28, 2019 9:36:55 GMT -6
I just hope that I don't get peer pressured into buying a 32 bit converter just because everyone else is getting one..
|
|
|
Post by jin167 on Jan 28, 2019 0:27:10 GMT -6
Interesting. I thought the headroom of a converter is determined by its supply rail rather than its bit depth. Would love to hear how 32 bit was useful in your situation!
|
|
|
Post by jin167 on Jan 27, 2019 23:32:56 GMT -6
Interesting. I think Antelope is ready to release their 32 bit converter as well and it looks like more manufacturers will be releasing their own version of affordable 32 bit converters in the near future. Now the question is.. do we need 32 bit converters? I watched Ian's youtube video on bit depth and dither a couple of days ago and thought about whether or not there's any benefit in recording in 32 bit instead of 24 bit. Given that the noise floor of our converters is above the theoretical noise floor of 24 bit what's the point of recording in 32 bit? I think there could be a good reason for it but I just can't see it from my end. Any thoughts on this matter?
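[Editor's aside: the theoretical noise floor mentioned above follows the standard ideal-quantization SNR formula for a full-scale sine, roughly 6.02 dB per bit plus 1.76 dB. A quick sketch of the numbers, which shows why 24 bits already sits far below any real analog front end's noise:]

```python
def ideal_snr_db(bits: int) -> float:
    """Ideal quantization SNR for a full-scale sine: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

print(ideal_snr_db(16))  # ~98 dB
print(ideal_snr_db(24))  # ~146 dB, beyond any practical analog stage
print(ideal_snr_db(32))  # ~194 dB, purely theoretical headroom
```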
|
|
|
Post by jin167 on Jan 5, 2019 8:24:00 GMT -6
Interesting video! One thing that I am not sure about, and which would, for me, determine whether the two passes add up to a different result than just moving a single pass around on the timeline, is this: if I EQ a boost at 100 Hz, does the phase shift happen within the Q area of the boost around 100 Hz, or does it happen to the whole signal? If the answer is that the phase shift happens around the boost/cut frequencies, then this double/reverse pass makes total sense to me.

I believe you're correct (assuming a bell curve). If you're applying a gain of, say, 3 dB with 100 Hz as your centre frequency and some Q value, then you will have a phase shift around the centre frequency (a 0-degree shift at the centre frequency (100 Hz) and in the regions where the level is at unity). I think you have to learn the Hilbert transform to understand this topic. There are a number of professional engineers on this forum, so they might be able to give us a brief lecture on this topic if we're lucky?
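[Editor's aside: the "phase shift is localized around the bell, zero at the centre" claim can be checked numerically. A sketch using the RBJ "Audio EQ Cookbook" peaking biquad — a common digital bell design, used here as a stand-in for whatever EQ the video uses:]

```python
import numpy as np

def peaking_biquad(fc, fs, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking (bell) biquad coefficients (b, a)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * fc / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def phase_deg(b, a, f, fs):
    """Phase response of the biquad at frequency f, in degrees."""
    z = np.exp(-1j * 2.0 * np.pi * f / fs * np.arange(3))
    return float(np.degrees(np.angle(np.dot(b, z) / np.dot(a, z))))

b, a = peaking_biquad(fc=100.0, fs=48000.0, gain_db=3.0, q=1.0)
print(phase_deg(b, a, 100.0, 48000.0))  # ~0: no shift at the centre
print(phase_deg(b, a, 60.0, 48000.0))   # several degrees on the skirt
```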
|
|
|
Post by jin167 on Jan 4, 2019 3:17:54 GMT -6
"The latency is compensated automatically by the ReaInsert plugin. This is about phase shifts at specific frequencies, not just an overall time delay."
Quick update from Dan FYI.
|
|
|
Post by jin167 on Jan 4, 2019 3:14:27 GMT -6
D1x with seas millennium tweeter here and I'm pretty happy with them. Only down-side is that they are huge (takes up a lot of space) and heavy AF. Oh, and they are passive so you'll need an amp.
|
|
|
Post by jin167 on Jan 2, 2019 17:13:07 GMT -6
This is just crazy. All analogue EQs ever have been minimum/analogue phase, and they are STILL the most sought after EQs, so why would you want to rectify that anyway? Not to mention the dude isn't even talking about phase but time aligning. It's rubbish and detracts from the real issue. Fake news, I say.

Well, he did video tutorials for FabFilter and I really like that series, so I thought this video might be worth watching (and it was, for me at least). If you have watched any of his FabFilter EQ tutorials you'll know that he has this thing about linear phase EQ, and this video is only an extension of that. I think this video was useful. It may not be practical, but it's a cool trick to have up your sleeve nonetheless, and it allowed me to refresh my memory on several related topics.
|
|
|
Post by jin167 on Jan 2, 2019 17:02:52 GMT -6
Sure, same thing with amplitude applied. The lightbulb for me (recently) was when my brain correlated circuit resonance with mechanical resonance - for example, comparing a passive inductor filter to a mechanical system. It's resonating at the critical (notch) frequency. You can even model electrical circuits as mechanical systems - inductors are masses, capacitors are springs, and resistors are dampers. The math works out the same. When you have a mechanical system there must be, always will be, has to be a phase change as the system goes through resonance.

A thought exercise for this is a weight hanging from a spring underneath a platform that is moving at some frequency. Imagine hanging a box by a spring under a trampoline. Now move the trampoline. For low frequency (slow) bounces, the box, spring, and trampoline mat all move together - in phase. Once you hit oscillation, resonance, critical frequency, whatever you want to call it, the box underneath will be moving down when the trampoline is moving up and vice versa. This is what generates the positive feedback of oscillation with the spring. If you go faster, they'll begin to move together again as you leave resonance.

If we're thinking of this as a passive boost circuit, the "box" is the RC or RLC network passing audio, and the trampoline is the incoming signal. When they're all in phase, no EQ is happening - flat frequency response between in and out. When you're at the resonant frequency, the box (out) is moving differently than the trampoline (in) - so you have EQ happening. This is notionally the same behavior as in EQ or filters. We just use that resonant behavior to selectively cut or amplify signals, either passively (by bypassing a voltage drop or shorting the signal at a specific frequency) or actively (by putting this same behavior inside an amp's feedback loop to selectively boost or cut the specific frequency). But, at least as far as I know, at the actual filter itself there has to be a phase change.
Willing to be shown that I am wrong here, however. I'm just a mechanical engineer; all this electrical stuff is pretty much voodoo to me.

Hi, dogears. Thanks for chiming in! I really like your explanation and thanks for taking the time to write it! I do understand the concept of a filter, but when I saw this video and heard him say that what he is doing is flattening out the phase shifts, I thought there was something I didn't understand or simply didn't know about, since it involved time reversal, which I don't come across often.
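[Editor's aside: the trampoline picture matches the textbook second-order response — below resonance the displacement tracks the drive, at resonance it is 90 degrees behind, and well above resonance it moves in opposition. A small sketch of my own, with an assumed damping ratio, of the phase of a driven mass-spring-damper:]

```python
import numpy as np

def msd_phase_deg(w, w0=1.0, zeta=0.05):
    """Phase of displacement vs. drive for m*x'' + c*x' + k*x = F(t),
    normalized so that w0 = sqrt(k/m); zeta is the damping ratio."""
    H = 1.0 / (w0**2 - w**2 + 2j * zeta * w0 * w)
    return float(np.degrees(np.angle(H)))

print(msd_phase_deg(0.1))   # near 0: box and trampoline move together
print(msd_phase_deg(1.0))   # -90 (to rounding): quadrature at resonance
print(msd_phase_deg(10.0))  # near -180: moving in opposition
```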
|
|
|
Post by jin167 on Jan 2, 2019 11:22:46 GMT -6
I don't know.. he says 'linear phase analog EQ' at 6:07. He is claiming that this technique is flattening out any phase shifts. Someone on the mastering forum mentioned two-pass IIR, and that might ring a bell for some of you?

There are a lot of misconceptions in audio, and even more misappropriation of nomenclature. Most people call the polarity switch on their preamps "phase", but it's not, because it doesn't adjust time relationships. Same for this: he's not adjusting the phase relationships between the frequencies, he's only time aligning the audio, which due to the processing will also adjust phase relationships, because the source and the return tracks are based on the same start point.

Phase (angle) is just a fancy way of stating at what point in time the signal exists at a specified voltage. A vector, if you will, which they call the phasor (phase vector). They plot this on a round graph and describe it in degrees around the circle. So two sinewaves of equal frequency and amplitude are compared. If you move one *in time* relative to the other, you see a shift in *phase*, which is described in degrees relative to the unchanged signal. You can eventually move the second signal 180deg, which relates to 100% opposite polarity as well. However, it's important to note that the second signal is offset *in time*, which is why the phase relationship is 180deg. For sinewaves this works because the cycles of the waveform are repeating. However, with sinewaves you can also simply flip polarity on the second signal and it will be described as 180deg out of phase, as well as being out of polarity, yet its time relationship is still perfectly aligned. With audio, the waveforms are complex and rarely repeat with any meaningful pattern, so changing time alignment causes all kinds of phase-summing anomalies. This is why phase is a poor descriptor of complex audio, and why things like the "phase" switch on your preamp are a LIE.

Great!
I'll have a think about this tonight, go through my old lecture notes, and see if I can refresh my memory on this topic.
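[Editor's aside: svart's sine example — 180 degrees by time shift vs. 180 degrees by polarity flip — can be verified directly. For a steady sine the two are numerically identical, which is exactly why the distinction disappears there and only matters for complex, non-repeating audio:]

```python
import numpy as np

fs, f = 48000, 1000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * f * t)

# 180 degrees via a TIME shift: delay by half a period (0.5 ms here)
shifted = np.sin(2 * np.pi * f * (t - 0.5 / f))
# 180 degrees via a POLARITY flip: no time shift at all
flipped = -sine

# For a steady sine the two are indistinguishable:
print(np.max(np.abs(shifted - flipped)))  # tiny (float round-off only)
```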
|
|
|
Post by jin167 on Jan 2, 2019 11:08:57 GMT -6
For me, what's attractive about this technique is that it allows me to use my outboard EQs without having to worry about the phase shifts.

Don't overthink that. No one much is worrying about phase shifts in analog EQs, and it can be argued it's one of the reasons people use them.

True that. But it's fun to have more options. How do you feel about analog EQs with an M/S function? I think EQs with M/S could benefit from minimised phase shifts?
|
|
|
Post by jin167 on Jan 2, 2019 10:40:46 GMT -6
Yeah, svart is right. This is not correcting "phase" issues with EQ. It's just time alignment. Which is also.. phase, technically. But not what a linear phase EQ is. The guy is confusing you with the terms and what he is doing. It's not making his analog EQ linear phase. He is just correcting the latency. And as svart has mentioned, there are a lot of ways to do this, and most DAWs these days let you ping your gear to compensate for this, aka delay compensation. Literally, the guy says 14 seconds in.. linear phase analog EQ isn't possible. Because.. it isn't.

I don't know.. he says 'linear phase analog EQ' at 6:07. He is claiming that this technique is flattening out any phase shifts. Someone on the mastering forum mentioned two-pass IIR, and that might ring a bell for some of you?
|
|
|
Post by jin167 on Jan 2, 2019 10:26:24 GMT -6
I think it's a really cool trick! May not be practical depending on your setup, but still! Just as a side note for those who are knowledgeable in signals: would you be able to explain this phenomenon from an engineering point of view, with mathematical proof if possible? It'd be really cool to understand what's actually happening during the process.

Looks to be using a trick on the latency. You know that the hardware/software round-trip latency will be roughly similar from take to take, and will always result in a lagging return signal compared to the source. So when you reverse the signal and run it, the return will lag by the same deterministic amount, except it is applied to the signal backwards in relative time. So when you flip it back, it's now nudged *forward* by the same amount of latency that it would have originally lagged. So now your signal has effectively nullified the round-trip latency.

I guess to explain it would be to say that the guy's signal might have a combined 10ms round-trip through the software (5ms) and hardware (5ms), resulting in a track that is 10ms behind the source track. You flip the source track and play it backwards through that same 10ms round-trip. When the returned track is flipped back, the lag has become a lead of the same amount, so it cancels and the track lines up with the source.

The easiest way to do this would be to do a track with your analog processing, and then simply grab the new track and align it with the old track. Set a marker at some kind of peak in your original file, then find the same peak in the processed track and just pull it until they both line up. Now they are in phase and time-aligned, with no need for all this other stuff. You could also just nudge the track by small amounts until you get phase alignment, and from that point on you know roughly the round-trip time and can account for it. Reaper (the DAW the guy is using) also has a hardware "ping" option that can find the round-trip latency and null it as well.
There's a dozen ways to skin this cat, and the way this guy is doing it is a fun and interesting way around the problem, but it's also pretty time consuming compared to other ways of doing the same thing.

Svart, I read your comment again but I'm still confused. I'm not talking about a discrepancy in time introduced by the round trip, but the phase shifts introduced by the hardware EQ. Combining an EQ'd signal with the original track doesn't sound like a good idea to me even after making adjustments for the round-trip latency. Have you watched the video by any chance? I think I'm either not catching your point or my question wasn't very clear.
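[Editor's aside: for the curious, the forward-reverse trick in the video is the same idea as zero-phase forward-backward filtering in DSP (what scipy.signal.filtfilt does). A sketch of my own with a toy one-pole low-pass: the single pass puts energy only after an impulse (a lag and phase shift), while the double pass is symmetric around it (zero net phase).]

```python
import numpy as np

def one_pole_lp(x, a=0.9):
    """One-pole IIR low-pass: y[n] = (1-a)*x[n] + a*y[n-1]."""
    y = np.zeros_like(x)
    acc = 0.0
    for n, v in enumerate(x):
        acc = (1 - a) * v + a * acc
        y[n] = acc
    return y

def forward_backward(x, a=0.9):
    """Filter, reverse, filter again, reverse back: the phase responses of
    the two passes cancel, leaving a zero-phase (squared-magnitude) EQ."""
    return one_pole_lp(one_pole_lp(x, a)[::-1], a)[::-1]

x = np.zeros(201)
x[100] = 1.0  # unit impulse in the middle of the "track"
single = one_pole_lp(x)
double = forward_backward(x)

# Single pass: zero before the impulse, decaying tail after it (a lag).
# Double pass: symmetric about n=100, i.e. no net time/phase shift.
print(single[98], single[102])
print(double[98], double[102])
```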
|
|
|
Post by jin167 on Jan 2, 2019 10:14:05 GMT -6
For me, what's attractive about this technique is that it allows me to use my outboard EQs without having to worry about the phase shifts. But I'm starting to wonder what happens to things like distortion and noise figures when I use this technique. I'm guessing that those figures will get worse, since I'm effectively doing a double pass through the analogue domain? And there's the inverse ringing to take into consideration, as Dan mentions in his video. I'll give it a go tonight and see if this technique is actually worth the time and effort.
|
|