|
Post by jin167 on Jan 2, 2019 7:33:10 GMT -6
I think it's a really cool trick! It may not be practical depending on your setup, but still! Just as a side note for those who are knowledgeable in signals: would you be able to explain this phenomenon from an engineering point of view, with mathematical proof if possible? It'd be really cool to understand what's actually happening during the process.
|
|
|
Post by ericn on Jan 2, 2019 8:38:29 GMT -6
If you want a linear phase analog EQ, buy a Meyer CP-10, then send it to Jim Williams. Used CP-10s are going for pennies on the dollar of what they were new, and they are real time!
|
|
|
Post by svart on Jan 2, 2019 8:52:07 GMT -6
There is no such thing as a linear phase analog EQ. All analog EQ is based on phase relationships. Changing the phase relationship is what causes the nulling/adding and the frequency selection.
|
|
|
Post by svart on Jan 2, 2019 9:09:36 GMT -6
Looks to be a trick played on the latency. You know that the hardware/software round-trip latency will be roughly the same from take to take, and will always result in the return signal lagging the source. So when you reverse the signal and run it through, the return lags by that deterministic amount, except the lag is applied to the signal backwards in relative time. When you flip it back, it's now nudged *forward* by the same amount of latency that it would originally have lagged, so the round-trip latency has effectively been nullified.

To put numbers on it: say the guy's signal has a combined 10 ms round trip through the software (5 ms) and hardware (5 ms), resulting in a track that is 10 ms behind the source track. You flip the source track and play it backwards through the hardware with that same lag; when the recorded track is flipped back, the lag now works in the opposite direction and the track lines up with the source.

The easiest way to do this would be to run a track through your analog processing, then simply grab the new track and align it with the old one. Set a marker at some kind of peak in your original file, find the same peak in the processed track, and pull it until they both line up. Now they are in phase and time-aligned, with no need for all this other stuff. You could also just nudge the track by small amounts until you get phase alignment; from that point on you know roughly the round-trip time and can account for it. Reaper (the DAW the guy is using) also has a hardware "ping" option that can find the round-trip latency and null it.

There's a dozen ways to skin this cat. The way this guy is doing it is a fun and interesting way around the problem, but it's also pretty time consuming compared to other ways of doing the same thing.
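To make the align-by-peak idea concrete, here is a rough sketch of doing it programmatically - Python with numpy/scipy is my assumption, the file names are made up, and mono WAVs are assumed throughout. It estimates the round trip by cross-correlating the dry source with the recorded return, then slides the return back:

```python
# Estimate round-trip latency by cross-correlating the dry source with the
# recorded return, then trim the return so it lines up with the source.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate, source = wavfile.read("source.wav")       # dry track (mono assumed)
_, processed = wavfile.read("return.wav")       # same track recorded back through the hardware

n = min(len(source), len(processed))
xc = correlate(processed[:n], source[:n], mode="full")
lag = int(np.argmax(xc)) - (n - 1)              # positive lag = return arrives late

print(f"round trip ~ {lag} samples ({1000 * lag / rate:.2f} ms)")

aligned = processed[lag:] if lag > 0 else processed
wavfile.write("return_aligned.wav", rate, aligned)
```

This is the same move as Reaper's ping, just done after the fact on real audio instead of a test pulse.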
|
|
|
Post by jin167 on Jan 2, 2019 9:47:17 GMT -6
Thanks for taking the time to answer my question, Svart! I guess your explanation makes sense in the context of 'blending', but for me what was interesting about this video is how you can remove the phase shifts from your analogue EQ. What I'd like to know more about is how time-reversing a signal flips the sign of a phase shift, which is what lets the phase shifts cancel. I can vaguely remember going over this topic in continuous-time signals back at uni, but I can't remember the exact details!
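For what it's worth, here's my stab at the continuous-time version from memory, so take the signs with a grain of salt. Treat the EQ as an LTI filter with a real impulse response and frequency response H(f) = |H(f)| * e^(+j*phi(f)). Reversing a signal, filtering it, and reversing the result is mathematically the same as filtering with the time-reversed impulse response, whose frequency response is the complex conjugate H*(f) = |H(f)| * e^(-j*phi(f)): the magnitude curve is applied as usual, but every phase shift comes out with its sign flipped. A second, ordinary forward pass through the same EQ then multiplies the two responses together, H*(f) * H(f) = |H(f)|^2, so the phase terms cancel and you're left with the magnitude curve applied twice and zero net phase shift - at the cost of the squared response and the inverse ringing Dan mentions.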
|
|
|
Post by EmRR on Jan 2, 2019 9:49:58 GMT -6
Which is apparently why the BBC used to call equalizers 'phase distortion units' rather than 'equalizers'. Linear phase EQ was gonna be THE SHIT, I was told in recording school back in 1990, and then... it wasn't so much. It has its own set of problems and audible artifacts. Worth reading more about.
|
|
|
Post by ericn on Jan 2, 2019 10:00:32 GMT -6
I don't know. That's what I was taught as well; whether it used some kind of compensation or what, JW would know. But when I did my first SIM training, back when they still used a TEF analyzer, there was no change except in frequency response when boosting and cutting. The CP-10 was developed for the SIM system so that you could EQ parts of the array without phase issues, so it would not affect the acoustical summation. All Meyer boxes exhibit the same phase response and approximately the same frequency response, so they can be used as building blocks (this should be a duh in the large-scale PA world, but sadly isn't). There are probably some old white papers on the Meyer Sound site. Apogee Sound had a similar box as well, but I don't remember it as well.
|
|
|
Post by jcoutu1 on Jan 2, 2019 10:02:41 GMT -6
Apogee Sound has the CRQ-12. I have one. Nice box and SUPER flexible. I just wish it took up 6 rack spaces and had knobs that were 3x the size.
|
|
|
Post by svart on Jan 2, 2019 10:09:32 GMT -6
"Phase" is only meaningful relative to the starting points of two signals. I think the video is misappropriating the word "phase" when it really means to say something like "time alignment". This trick is manipulating time alignment, not phase. It might be changing and aligning phase as well, but that's not the attribute they are adjusting.
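One reason the two ideas blur together: a pure time shift is itself a phase shift that grows linearly with frequency, phi(f) = -2*pi*f*tau. A throwaway numerical sketch (Python/numpy assumed, frequencies arbitrary):

```python
# The same 5 ms delay lands each frequency at a different phase angle,
# which is why "time alignment" and "phase" get used interchangeably.
import numpy as np

tau = 0.005                                   # 5 ms shift, as in the example above
freqs = np.array([100.0, 440.0, 1234.0])      # arbitrary test frequencies (Hz)

phase_deg = np.degrees(-2 * np.pi * freqs * tau) % 360
for f, ph in zip(freqs, phase_deg):
    print(f"{f:6.0f} Hz -> {ph:5.1f} degrees of shift from the same 5 ms delay")
```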
|
|
|
Post by jin167 on Jan 2, 2019 10:14:05 GMT -6
For me, what's attractive about this technique is that it allows me to use my outboard EQs without having to worry about the phase shifts. But I'm starting to wonder what happens to things like distortion and noise figures when I use this technique. I'm guessing those figures will get worse, since I'm effectively making two passes through the analogue domain? And there's the inverse ringing to take into consideration, as Dan mentions in his video. I'll give it a go tonight and see if this technique is actually worth the time and effort.
|
|
|
Post by svart on Jan 2, 2019 10:17:43 GMT -6
The hardware linear phase EQs still use phase relationships to add/null frequencies, but they attempt to adjust all affected frequencies by the same amount, which necessitates much more complex circuitry with its own issues (all-pass filters, etc.). That's why most of them call themselves "minimum phase EQ" rather than linear phase - because true linear phase is impossible to do in analog.
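As a rough illustration of the all-pass point - my own sketch, not any particular unit's circuitry - a first-order analog all-pass H(s) = (a - s)/(a + s) leaves the level at every frequency untouched while swinging the phase from 0 to -180 degrees, which is the kind of building block those compensation schemes lean on:

```python
# First-order all-pass: unity gain everywhere, frequency-dependent phase.
import numpy as np

a = 2 * np.pi * 1000.0                        # corner at 1 kHz (rad/s), arbitrary
freqs = np.array([100.0, 1000.0, 10000.0])    # test frequencies (Hz)
w = 2 * np.pi * freqs

H = (a - 1j * w) / (a + 1j * w)
for f, h in zip(freqs, H):
    print(f"{f:6.0f} Hz  |H| = {abs(h):.3f}  phase = {np.degrees(np.angle(h)):7.1f} deg")
```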
|
|
|
Post by svart on Jan 2, 2019 10:19:34 GMT -6
You can still use your outboard with any of the other suggestions too. As I mentioned in one of my replies, you can just send your source out through your hardware and record the return to a second track, then grab that track and slide it backwards until it lines up in time. That's essentially what this guy did, just without all the extra steps.
|
|
|
Post by jin167 on Jan 2, 2019 10:26:24 GMT -6
Svart, I read your comment again, but I'm still confused. I'm not talking about a discrepancy in time introduced by the round trip, but about the phase shifts introduced by the hardware EQ itself. Combining an EQ'd signal with the original track doesn't sound like a good idea to me, even after adjusting for the round-trip latency. Have you watched the video, by any chance? I think I'm either not catching your point or my question wasn't very clear.
|
|
|
Post by Blackdawg on Jan 2, 2019 10:27:51 GMT -6
Yeah, svart is right. This is not correcting "phase" issues with EQ. It's just time alignment. Which is also... phase, technically. But not what a linear phase EQ does. The guy is confusing you with the terms and with what he's doing. It's not making his analog EQ linear phase; he's just correcting the latency. And as svart has mentioned, there are a lot of ways to do this; most DAWs these days let you ping your gear to compensate for it, aka delay compensation.
Literally, the guy says 14 seconds in that linear phase analog EQ isn't possible. Because it isn't.
|
|
|
Post by jin167 on Jan 2, 2019 10:40:46 GMT -6
I don't know... he says 'linear phase analog EQ' at 6:07. He is claiming that this technique flattens out any phase shifts. Someone on the mastering forum mentioned two-pass IIR; that might ring a bell for some of you?
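It rings one here: two-pass IIR is what scipy ships as filtfilt - run an IIR filter forward, then run it again over the time-reversed result, and the phase shifts of the two passes cancel while the magnitude response gets applied twice. A minimal sketch (filter choice and test signal are arbitrary):

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

b, a = butter(2, 0.1)                     # an arbitrary 2nd-order IIR low-pass
x = np.random.default_rng(0).standard_normal(48000)

one_pass = lfilter(b, a, x)               # ordinary causal pass: magnitude + phase shift
zero_phase = filtfilt(b, a, x)            # forward-backward pass: magnitude twice, no net phase

# The same trick spelled out the way the video does it:
# reverse, filter, reverse (conjugate phase), then filter forward again.
# (filtfilt also pads the edges, so the two differ slightly at the ends.)
manual = lfilter(b, a, lfilter(b, a, x[::-1])[::-1])
```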
|
|
|
Post by EmRR on Jan 2, 2019 10:59:11 GMT -6
Don't overthink that. Hardly anyone worries about phase shifts in analog EQs, and it can be argued they're one of the reasons people use them.
|
|
|
Post by jin167 on Jan 2, 2019 11:08:57 GMT -6
True that. But it's fun to have more options. How do you feel about analogue EQs with an M/S function? I think EQs with M/S could benefit from minimised phase shifts?
|
|
|
Post by svart on Jan 2, 2019 11:10:17 GMT -6
There are a lot of misconceptions in audio, and even more misappropriation of nomenclature. Most people call the polarity switch on their preamps "phase", but it isn't, because it doesn't adjust time relationships. Same for this: he's not adjusting the phase relationships between the frequencies, he's only time-aligning the audio - which, due to the processing, will also adjust phase relationships, because the source and return tracks share the same start point.

Phase (angle) is just a fancy way of stating at what point in time the signal sits at a specified voltage. A vector, if you will, which they call the phasor (phase vector). Plot it on a circular graph and describe it in degrees around the circle. So compare two sine waves of equal frequency and amplitude: if you move one *in time* relative to the other, you see a shift in *phase*, described in degrees relative to the unchanged signal. Eventually you can move the second signal 180 degrees, which also corresponds to 100% opposite polarity. It's important to note, though, that the second signal is offset *in time* - that's why the phase relationship is 180 degrees. For sine waves this works because the cycles of the waveform repeat. But with sine waves you can also simply flip the polarity of the second signal, and it will be described as 180 degrees out of phase as well as out of polarity, yet its time relationship is still perfectly aligned.

With real audio, the waveforms are complex and rarely repeat in any meaningful pattern, so changing time alignment causes all kinds of phase-summing anomalies. This is why phase is a poor descriptor of complex audio, and why things like the "phase" switch on your preamp are a LIE.
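A quick way to convince yourself of the sine-wave point (a throwaway numpy sketch, values arbitrary): shift a 100 Hz sine by half its period and it's indistinguishable from a polarity flip, but add a second frequency and the equivalence falls apart:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 100 * t)

half_period = fs // 200                       # half of one 100 Hz cycle, in samples
shifted = np.roll(sine, half_period)          # moved in *time*
flipped = -sine                               # flipped in *polarity*
print(np.allclose(shifted, flipped))          # True: identical for a steady sine

both = sine + np.sin(2 * np.pi * 150 * t)     # add a second frequency
print(np.allclose(np.roll(both, half_period), -both))   # False: no longer equivalent
```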
|
|
|
Post by EmRR on Jan 2, 2019 11:16:35 GMT -6
I use M/S all the time and never notice any problems if I EQ the side. I haven't experimented with linear phase EQ on the side channel. You may notice the delay effects that come with linear phase EQ; in my experience they have been a bigger problem than any phase shift in an analog EQ. There's just as much of a phase argument that if you chose a side mic with the rolloff you wanted, it would have a similar phase response to a flat mic EQ'd the same way. No free lunch - you pay any way you go.
|
|
|
Post by jin167 on Jan 2, 2019 11:22:46 GMT -6
Great! I'll have a think about this tonight and go through my old lecture notes to see if I can refresh my memory on this topic.
|
|
|
Post by matt@IAA on Jan 2, 2019 11:58:55 GMT -6
sin(wt + phi)
|
|
|
Post by svart on Jan 2, 2019 12:02:15 GMT -6
I'd seen it as A*cos(wt + phi)
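(Same thing up to a 90 degree offset - A*cos(wt + phi) = A*sin(wt + phi + pi/2) - so either one works as the generic phasor form.)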
|
|
|
Post by matt@IAA on Jan 2, 2019 12:29:49 GMT -6
Sure, same thing with amplitude applied.
The lightbulb for me (recently) was when my brain correlated circuit resonance with mechanical resonance - for example, mapping a passive inductor filter onto a mechanical system that resonates at the critical (notch) frequency.
You can even model electrical circuits as mechanical systems - inductors are masses, capacitors are springs, and resistors are dampers. The math works out the same.
When you have a mechanical system, there must be, always will be, has to be a phase change as the system goes through resonance. A thought exercise: a weight hanging from a spring underneath a platform that is moving at some frequency. Imagine hanging a box by a spring under a trampoline, then moving the trampoline. For low-frequency (slow) bounces, the box, spring, and trampoline mat all move together - in phase. Once you hit oscillation, resonance, critical frequency, whatever you want to call it, the box underneath will be moving down when the trampoline is moving up and vice versa. This is what generates the positive feedback of oscillation with the spring. If you go faster, they'll begin to move together again as you leave resonance.
If we're thinking of this as a passive boost circuit, the "box" is the RC or RLC network passing audio, and the trampoline is the incoming signal. When they're all in phase, no EQ is happening - flat frequency response between in and out. When you're at the resonant frequency, the box (out) is moving differently than the trampoline (in) - so you have EQ happening.
This is notionally the same behavior as in EQ or filters. We just use that resonant behavior to selectively cut or amplify signals, either passively (by bypassing a voltage drop or shorting the signal at a specific frequency) or actively (by putting this same behavior inside an amp's feedback loop to selectively boost or cut the specific frequency). But, at least as far as I know, at the actual filter itself there has to be a phase change. Willing to be shown that I am wrong here, however. I'm just a mechanical engineer; all this electrical stuff is pretty much voodoo to me.
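Here's a quick numerical check of that phase swing - my own sketch of a textbook second-order resonance, values arbitrary. Below resonance the output tracks the input, at resonance it lags by 90 degrees, and far above it approaches 180 degrees out, just like the trampoline:

```python
# Second-order (mass-spring-damper / series RLC) low-pass response at a few
# frequencies, showing the phase swing through resonance.
import numpy as np

w0 = 2 * np.pi * 100.0                        # resonant frequency (rad/s)
Q = 5.0                                       # quality factor
freqs = np.array([10.0, 100.0, 1000.0])       # test frequencies (Hz)
w = 2 * np.pi * freqs

H = w0**2 / (w0**2 - w**2 + 1j * w * w0 / Q)
for f, h in zip(freqs, H):
    print(f"{f:6.0f} Hz  gain = {abs(h):5.2f}  phase = {np.degrees(np.angle(h)):7.1f} deg")
```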
|
|
|
Post by Deleted on Jan 2, 2019 13:48:15 GMT -6
This is just crazy. Every analogue EQ ever made has been minimum/analogue phase, and they are STILL the most sought-after EQs, so why would you want to rectify that anyway? Not to mention the dude isn't even talking about phase but time alignment. It's rubbish and it detracts from the real issue. Fake news, I say.
|
|
|
Post by jin167 on Jan 2, 2019 17:02:52 GMT -6
Hi, dogears. Thanks for chiming in! I really like your explanation, and thanks for taking the time to write it! I do understand the concept of a filter, but when I saw this video and heard him say that what he is doing flattens out the phase shifts, I thought there was something I didn't understand or simply didn't know about, since it involves time reversal, which I don't come across often.
|
|