|
Post by theshea on May 7, 2021 0:19:31 GMT -6
no doubt phase shift with hpf is real, but how much does it count in the end? after all, every song today gets mangled in several stages. to me the advantages of hpf are bigger than the loss. of course, if you are after the pure natural sound of instruments and voices i guess you have to avoid eq as much as possible. but to me studio recordings are more about experimenting, bending the rules. the sonics recorded way into the red and gave us exciting, screaming garage music. the beatles broke every rule and gave us even more exciting pop, rock, indie, psychedelic music. i will try to implement less drastic hpf in my mixes to hear how it affects my sounds, but if it doesn‘t make a big enough difference, i will continue with my guerrilla hpf :-)
|
|
|
Post by Guitar on May 7, 2021 3:13:52 GMT -6
Phase absolutely exists in the time domain: that's what phase is! Even in electronics, not just acoustics. If I had to give a formula, I'd say phase is frequency and amplitude versus time for a signal.
In these little RC filters, to use a simple example, the time to charge the capacitor (the time constant) creates the delay. Trying to keep it simple for my own good as well as the good of the thread; I just wanted to illustrate an example of time/phase shift existing in simple electronics. It takes time for electrons to ride the subway of a circuit, and sometimes there are delays for certain passengers along the way. Even your favorite guitar amp has a few nanoseconds or milliseconds (not sure which) of electronic travel time (latency), just to illustrate that electronics are not instantaneous.
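To put a rough number on the RC example (component values below are made up purely for illustration), a first-order RC high-pass leads by 45 degrees at its cutoff, and the shift shrinks as you go up in frequency. A minimal sketch:

```python
import math

def rc_hpf_phase_deg(f_hz, r_ohms, c_farads):
    """Phase lead (degrees) of a first-order RC high-pass at frequency f.
    H(jw) = jwRC / (1 + jwRC), so phase = atan(1 / (wRC))."""
    w = 2 * math.pi * f_hz
    return math.degrees(math.atan2(1.0, w * r_ohms * c_farads))

# Illustrative values: 10 kOhm and 400 nF give a cutoff near 40 Hz
R, C = 10e3, 400e-9
fc = 1 / (2 * math.pi * R * C)                    # ~39.8 Hz
print(round(rc_hpf_phase_deg(fc, R, C), 1))       # 45.0 deg at the cutoff
print(round(rc_hpf_phase_deg(10 * fc, R, C), 1))  # ~5.7 deg a decade above
```

The "delay" here is frequency-dependent: big shift near the cutoff, vanishing as you move well above it.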
If you record through audio transformers (very popular, uh oh) you are also funking up your phase.. guess what, it sounds good.
And just to clarify an earlier post I made: all EQs are filters, not just high pass and low pass. Your bells, shelves, and so on are all types of filters too. If you use EQ anywhere in a mix, at all, you are "phasing" your audio. Yes, even your $4,000 GML. Unless you use linear-phase EQ, which I don't think most people do, and that adds pre-ringing artifacts. There's no free lunch.
If you want to own this anti-phase mindset you might have to abandon EQ altogether. If you record to tape (glorious phase smear machine)...time to get a new job.
I'd like to think that most people are paying attention to phase cancellation between tracks, with proper recording technique. That seems like a much bigger deal than using the same tools and techniques we've been using since the beginning of recording, regardless of how funny it makes you feel to think about what they are actually doing.
On the other hand, I'd love to listen to some pristine, acoustically perfect chamber music recorded through some Pueblo gear, but that's a different genre of recording, to pick a word for it. I dunno, maybe Al Schmitt split the difference with the pop music he recorded. Some of us seem to be coming from different places, just trying to wrap my head around that. Maybe ignore the HPF switch on your Pueblo mic pre for the sake of this conversation, ;-D
|
|
|
Post by Guitar on May 7, 2021 5:28:42 GMT -6
Here's a measurement I took of an HPF in my setup. This is the 40 Hz HPF on the Tascam UH7000. As you can see, there is about an 8 degree phase shift at 400 Hz, out of a possible 180 degrees. I'm not sure how meaningful that is in terms of ear pleasures, but there it is on a graph: You can ignore the phase shift in the highest frequencies, that's happening during the measurement itself for some reason, even when it's set flat.
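For anyone who wants to sanity-check that graph: a plain second-order analog Butterworth high-pass at 40 Hz (an assumption on my part, I don't know the UH7000's actual filter topology) lands right around 8 degrees at 400 Hz. A quick scipy sketch:

```python
import numpy as np
from scipy import signal

fc = 40.0  # HPF cutoff (Hz), matching the post
# 2nd-order analog Butterworth high-pass as a plausible stand-in
b, a = signal.butter(2, 2 * np.pi * fc, btype='highpass', analog=True)

freqs_hz = np.array([40.0, 400.0, 4000.0])
w, h = signal.freqs(b, a, worN=2 * np.pi * freqs_hz)
phase_deg = np.degrees(np.angle(h))
for f, p in zip(freqs_hz, phase_deg):
    print(f"{f:6.0f} Hz: {p:5.1f} deg")  # ~90 at fc, ~8 at 10x fc, <1 at 100x fc
```

So "8 degrees at a decade above the cutoff" is exactly what textbook filter math predicts for a second-order slope.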
|
|
|
Post by Guitar on May 7, 2021 5:34:32 GMT -6
Here's a measurement of the UH7000 doing a 5 dB mid boost at 1 kHz with a Q of 1.0. Maximum phase shift, positive or negative, is about 16 degrees for this filter at these settings. You can add your own comments; it's just a graph:
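That 16-degree figure also matches what the standard "cookbook" analog peaking prototype predicts (an assumption again; the UH7000's exact filter design is unknown to me). A minimal sketch:

```python
import numpy as np

def peaking_phase_deg(f, f0, q, gain_db):
    """Phase (degrees) of the standard analog peaking-EQ prototype:
    H(s) = (s^2 + s*A/Q + 1) / (s^2 + s/(A*Q) + 1), s normalized to f0,
    with A = 10^(dB/40)."""
    A = 10 ** (gain_db / 40.0)
    s = 1j * (f / f0)
    num = s**2 + s * A / q + 1
    den = s**2 + s / (A * q) + 1
    return np.degrees(np.angle(num / den))

f = np.geomspace(20, 20000, 2000)
ph = peaking_phase_deg(f, 1000.0, 1.0, 5.0)   # 5 dB boost at 1 kHz, Q = 1
print(round(float(np.max(np.abs(ph))), 1))    # ~16 deg maximum shift
```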
|
|
|
Post by Ward on May 7, 2021 6:15:16 GMT -6
Perhaps one of our resident electronics engineers could explain the different types of EQ circuitry and how each relates to phase shift? I know a bit, but I would love to learn more!! matt@IAA jimwilliams svart EmRR jsteiger and others
|
|
|
Post by Martin John Butler on May 7, 2021 7:21:03 GMT -6
I'd love to hear some of the guys here picking a classic song or two that are benchmarks of great sound, and analyzing them for low frequencies and phase shift. It would be interesting to know. It's over my head to measure that for myself.
The best recorded album in the 21st century I can think of is Beck's "Sea Change", but I'd really love to know where Chris Stapleton's "Broken Halos" stands frequency wise too.
|
|
|
Post by EmRR on May 7, 2021 7:33:17 GMT -6
Yes, all this beloved transformer-coupled equipment ‘mangles’ phase on the bottom end. Tape roll-off mangles phase. Speakers mangle phase. It’s inescapable. Make recordings. Use HPFs if you need them.
|
|
|
Post by svart on May 7, 2021 7:41:45 GMT -6
Perhaps one of our resident electronics engineers could explain the different types of EQ circuitry and how each relates to phase shift? I know a bit, but I would love to learn more!! matt@IAA jimwilliams svart EmRR jsteiger and others

Not sure exactly what you want to know, but all analog EQ is done either by changing the relationship of phase vs. frequency using small amounts of frequency-dependent delay introduced by components (passive filter), or by applying feedback in the form of a phase-shifted version of the signal back to the original signal to get a much larger amount of cut/boost (active filter). The implementation might be different but the underlying method is the same. You can define the frequencies at which the phase is affected, and create EQ bands by creating overlapping HPF and LPF regions. Zero-phase EQ is strictly a digital/DSP entity: it works by something analogous to separating the frequency spectrum into a large number of frequency-based delay lines. Each frequency "bin" can be delayed, and thus phase shifted, buffered, and then reassembled (summed) with all the other frequency bins. That's also why zero-phase-shift EQs have a lot of latency or need tons of CPU time.
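The latency point about linear-phase filtering is easy to quantify: a symmetric FIR filter delays every frequency by exactly half its length, which is lookahead you cannot avoid. A minimal sketch (filter length, cutoff, and sample rate are arbitrary illustrative choices):

```python
import numpy as np
from scipy import signal

fs = 48000
numtaps = 1001  # odd length -> exactly linear phase (type I FIR)
# Linear-phase high-pass at 100 Hz, purely for illustration
taps = signal.firwin(numtaps, 100.0, fs=fs, pass_zero=False)

# A symmetric FIR delays all frequencies equally by (N-1)/2 samples
group_delay_samples = (numtaps - 1) / 2
latency_ms = 1000 * group_delay_samples / fs
print(round(latency_ms, 2))  # ~10.42 ms of unavoidable latency
```

No phase shift between frequencies, but everything arrives 10+ ms late; that is the trade svart describes.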
|
|
|
Post by matt@IAA on May 7, 2021 8:23:13 GMT -6
Gotta be precise when you're talking about phase because it gets all kinds of confusing otherwise.
Phase shift is sliding or moving the response of the system in time. You can accomplish this by moving a microphone relative to the source. If you take two mics and sum them, and move only one, you can see the waveforms change alignment. You can actually re-align them to manually introduce that offset. But that changes the phase of the entire waveform at once. All frequencies move together. That doesn't have any EQ effect.
I'm not an electrical engineer - mechanical - so for me all this stuff makes way more sense in terms of a mechanical system. All systems have resonant frequencies, what we call "critical" frequencies. At the resonant or critical frequency, the system will oscillate and be excited sympathetically by a forcing frequency. Fancy way to say, it will vibrate. If you've ever hummed in a room and all of a sudden you get the room vibing with you, you found a room's resonant frequency. If you ever slid back and forth and back and forth til you made a tidal wave in your bathtub, you found the tub-water-body system's resonant frequency. Analog filters actually work the same way, or at least the math does.
Imagine a super simple model of a rotor - something like a solid metal bar with some weight attached to the center, spinning on two perfect bearings, with a small mark at one place that we'll call 0 degrees, top dead center, whatever. If you spin this thing, the weight in the center will make it vibrate. The "heavy" spot is where the weight is fixed relative to the 0 degree mark. It will also have a "high" spot with each rotation - if you were to measure the maximum deflection or bend, it will be at a certain angle relative to that zero degree mark, if you held the bearing in your hand, you'd feel the force from the high spot with each rotation. Stick with me here, because I think this may help the phase thing.
Below the resonant frequency the rotor will spin around the centerline between the two bearings. If you were to measure the deflection of the solid metal bar with each rotation, it would be in phase with the heavy spot. High and heavy spot coincide. If you were holding the bearing and watching the high spot, you'd feel the force at the same time as the weight rolled around.
At the resonant frequency the heavy spot leads the maximum deflection by 90 degrees. No kidding.
Above the resonant frequency the rotor now spins around the heavy spot...it no longer rotates perfectly on the centerline between the two bearings. The heavy spot and the "high" spot are now out of phase by 180 degrees. Now remember, the heavy spot didn't move, what changed was how the rotor was moving, vibrating with each rotation.
If you were to plot the "high" spot relative to the heavy spot against frequency: below resonance you'd be at zero, at resonance you'd be at 90, and above it you'd be at 180. And, incidentally, this is exactly the same behavior you'd see with, for example, a high pass filter. Below the critical frequency you have 0, at it you have 90, and above it you have 180 degrees of phase shift.
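That 0/90/180 behavior can be reproduced from the textbook mass-spring-damper transfer function, which is the same second-order math as the rotor. A minimal sketch, assuming a small damping ratio:

```python
import numpy as np

def lag_deg(f_ratio, zeta=0.05):
    """Phase lag (degrees) of the response behind the forcing for a
    mass-spring-damper: H(s) = 1 / (s^2 + 2*zeta*s + 1), with s = j*(f/fn)
    and zeta an assumed small damping ratio."""
    s = 1j * f_ratio
    return -np.degrees(np.angle(1.0 / (s**2 + 2 * zeta * s + 1)))

print(round(lag_deg(0.1), 1))   # well below resonance: near 0 deg
print(round(lag_deg(1.0), 1))   # at resonance: exactly 90 deg
print(round(lag_deg(10.0), 1))  # well above resonance: approaching 180 deg
```

More damping smooths how fast the phase transitions through resonance, but the 0-to-90-to-180 sweep itself is unavoidable, as the post says.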
I say all that to show that this is real stuff; it actually happens in physical systems. It's not some imaginary thing. What gets confusing is that the rotor can only spin at one speed at a time. If you want to get it up to 3600 rpm, 60 Hz, you have to spin it up through 500 and 1500 and 2500 rpm on the way. Or to use the bathtub example: if you sloshed back and forth super slow, no tidal wave. Super fast, also no tidal wave. But if you went from slow to fast, at some point there'd be a breakover where the tidal wave builds, then subsides... and if you measured how the wave crest moved through the tub on a cyclic basis, it'd go through a phase change as that happened. But audio is shooting all those frequencies through a system all at once, whambo, so all of this is happening at the same time.
But this kind of behavior isn't something you can get rid of, or get around. Phase and frequency response are fundamentally related - in fact this is precisely why equalizers work at all.
What you can do is adjust the system to tweak the shape of the response. Going back to the rotor, that's like making the shaft thicker, or decreasing the weight, or adding damping to the bearings. These change the shape of the response curve, but not the fundamental and necessary thing that through critical frequency you have a 180 degree phase shift. In analog filter terms that's changing the capacitance or inductance, adding series resistance to an inductor, and so on.
To finally answer your question, the different types of analog EQ circuitry change when and how the phase and frequency response occur at the filter, but not that it occurs.
|
|
|
Post by svart on May 7, 2021 8:55:22 GMT -6
Here's an animation that's pretty cool:
|
|
|
Post by jaba on May 7, 2021 9:40:34 GMT -6
Very interesting discussion. I think I'll try using fewer HPFs and see what happens. Can't say I'm afraid of the phase shift - pretty much everything from mic capsule to speakers alters the purity of the sound. Funky mics, compression, saturation, transformers, yes please.
I'm curious if I notice a difference in the low end - better weight, depth? I'm not overly aggressive when setting filters and tend to keep an eye on an RTA so it may be all for naught but this discussion has made me curious to take another look/listen.
|
|
|
Post by svart on May 7, 2021 10:50:55 GMT -6
Very interesting discussion. I think I'll try using fewer HPFs and see what happens. Can't say I'm afraid of the phase shift - pretty much everything from mic capsule to speakers alters the purity of the sound. Funky mics, compression, saturation, transformers, yes please. I'm curious if I notice a difference in the low end - better weight, depth? I'm not overly aggressive when setting filters and tend to keep an eye on an RTA so it may be all for naught but this discussion has made me curious to take another look/listen.

That's right. Phase shift is perfectly normal in every component of the signal chain, so there's not much benefit in trying to avoid it. Just learn to work with it.
|
|
|
Post by EmRR on May 7, 2021 11:49:11 GMT -6
Funny, all these long master processing chains people use are just adding to the phase shift. Sayin'. Each piece increases the band-limiting.
|
|
|
Post by jmoose on May 7, 2021 12:47:57 GMT -6
no doubt phase shift with hpf is real, but how much does it count in the end? after all, every song today gets mangled in several stages. to me the advantages of hpf are bigger than the loss. of course, if you are after the pure natural sound of instruments and voices i guess you have to avoid eq as much as possible. but to me studio recordings are more about experimenting, bending the rules. the sonics recorded way into the red and gave us exciting, screaming garage music. the beatles broke every rule and gave us even more exciting pop, rock, indie, psychedelic music. i will try to implement less drastic hpf in my mixes to hear how it affects my sounds, but if it doesn‘t make a big enough difference, i will continue with my guerrilla hpf :-)

That's a strawman... the Beatles did it? The Beatles had George Martin as a producer, who had been making records for at least a decade before anyone heard of Paul or Ringo. And they were working with lab-coat-wearing BBC "balance engineers" who not only helped them break and bend those rules but also ensured that whatever they did was intelligible and would actually play back and translate to the real world. Basically the total opposite of the average person with no formal training working in the average home studio with no outside input. That's one of the major differences between analog and digital recording. On analog you kinda need to have half a clue about what you're doing to get usable tracks. Digital will play back anything you feed it. Might not sound great but it'll play! And not all of that garage rock stuff sounds good. How many people would be stoked if their album sounded like the Stooges' Raw Power? Yeah, massively influential record, but it sounds like garbage. Not even Iggy and the band were happy with the way it sounded. All that "artistic" stuff? Way irrelevant.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 7, 2021 13:09:14 GMT -6
Actually that's somewhat incorrect; it is both time/distance and frequency domain dependent. Okay, in terms of the frequency domain the most basic repeatable use case is two sine waves flipped to cancel each other out. In terms of time, what we're told to do as AEs is avoid incorrect travel distance between two source collections; in a 3:1 we'd place a mic at exactly three times the distance of another source mic. Why? The delay of positive and negative frequencies clashing will induce phase and again often unpredictably cancel each other out. This can even be done with reverbs. Dry mic'ing single instruments one at a time (without moving the mic) should be fine (but I can explain several scenarios that say otherwise). Although generally the rule of thumb is the more mics involved, the more phase you'll come across. P.S. I get your point, but you gotta be careful, especially when you're multi-mic'ing, because time/distance can play a massive factor in phase and crap recordings.

Huh..? Possibly I'm missing something. Who was talking about multiple microphones..? The only reason I referred to the time domain was because that's probably an easier variation of "phase" for some to understand... Using an equalizer doesn't flip things; it creates ripples in frequency response and phase coherence above and below the center frequency. In the parlance of our times, that's the crux of the biscuit.

The way Svart and Monkey explained it is the way I understand it, although I did consult several resources including Universal Audio's explanation of it first. I might know something inside out, but I check before posting because I don't like to forget something. Multiple mics is just the de facto standard explanation (IME) of delay-induced phase and good insight into what happens to the frequency spectrum, although there are quicker ways to do this, like copying a track and moving one slightly out of time, using specific delay-inducing reverbs, and of course delays themselves.
I never said using an EQ flips things; it's just a perfect example of what phase ultimately does to your track in an extreme case. It literally cancels itself out. The less phase you introduce by whatever means (including EQ), the better. www.uaudio.com/blog/understanding-audio-phase
|
|
|
Post by Pueblo Audio on May 7, 2021 14:10:22 GMT -6
HPFs have infinite attenuation at 0 Hz. Let’s recognize that “infinite” is a cosmic amount of action! For every action there is an equal and opposite reaction. So what is that reaction, that price we pay? PHASE SHIFTS. Shifts reaching as far as 10x above the cut-off frequency. For example, a 40 Hz filter’s recoil may disturb spectra up to 400 Hz. A lot of music lives in that region, right? (And minimum-phase filters do not get you out of jail free.) So what does that mean to music producers? Phase shifts cause overtones to become misaligned with their fundamental, causing internal comb filtering. The result is hollow timbres, ghost-like bass and smeared highs. This, consequently, may lead engineers to apply more processing to try to re-solidify the sound, leading to more phase shift. It can become a tragic race to nowhere. That’s an ugly sonority penalty to pay if the HPF does not provide a tangible, material benefit. The kind you can hear from down the hall. Certainly there will be circumstances where they will be the appropriate tool. But it seems to me most skillfully tracked signals shouldn’t contain subs or other “garbage” with enough energy to upset a mix. I listen to original masters of the “history of recorded music” at the mastering studio all day, every day. The most impressive and musical stuff nary saw an HPF at all. I’ve recorded decades of live shows (32 track) with my Pueblo preamps (which go down to 0 Hz DC). Never needed HP filtering, and the sonorities sound like the artists. I really like that. I would think, test, then think again before adopting an all-channels-with-HPF default template.

I've always felt that there must be something I'm losing by removing too much low end. Whenever I try the approaches discussed in this thread, my recordings start sounding less like people playing music. On the other hand, I'm not exactly a master of a well balanced low end, so I've got lots of room to grow. My question for you, Pueblo Audio, is: does this issue persist even if you're using a low cut at the tracking stage, either on the mic itself or on the preamp/strip if it has an HPF? I generally avoid this as well, but I've been experimenting with tracking with some compression going in, and there's something to be said for less low end triggering the threshold.

Filters impart their phase shifts no matter their placement. An HPF is best placed where an HPF-worthy problem exists. Engage a mic’s HPF if rumble or air drafts are overwhelming the mic’s head amp. Use a preamp’s HPF if there is LF interference. Use one in a line-level receiver if there are uncorrectable hum loops, etc. HPFs placed closer to the mic end will generally see lower-level signals of less complexity compared with those on busses and the 2mix end, which will excite more artifacts. The quality of the filter matters, too. Those in mics are usually poor. Preamps tend to be middling. If you’re gonna HPF busses with full-level, complex signals, that HPF best be as linear as possible and stereo matched (unless you are liking artifacts). Again, HPFs have their uses. But I hardly feel their uses are universal. If a bass-y signal has 2 dB too much low end for a compressor to trigger as desired, are you gonna nuke it with an HPF or mindfully tailor it the needed amount? Maybe with a bell or shelf? Or maybe just EQ the side chain. We have lots of tools available that will be best fitting for various tasks. Considering the entire entropic path a signal makes during a production, minimizing loss along the way can cultivate a more visceral and lasting end product. And for free.
|
|
|
Post by matt@IAA on May 7, 2021 14:24:58 GMT -6
The way Svart and Monkey explained it is the way I understand it, although I did consult several resources including Universal Audio's explanation of it first. I might know something inside out but I check before posting because I don't like to forget something.. Multiple mics is just the de facto standard explanation (IME) of delay-induced phase and good insight into what happens to the frequency spectrum, although there are quicker ways to do this like copy a track and move one slightly out of time, use specific delay-inducing reverbs and of course delays themselves. I never said using an EQ flips things, it's just a perfect example of what phase does to your track ultimately in an extreme case. It literally cancels itself out.. The less phase you introduce by whatever means (including EQ) the better. www.uaudio.com/blog/understanding-audio-phase

The word phase is a source of plenty of consternation. When you remove periodic signals, the whole thing really falls apart. You can mount mics at different distances in multiples of a wavelength and achieve zero out-of-phase behavior for that frequency... for a periodic signal. You can have two mics at the same distance, high-pass one, sum, and have out-of-phase behavior between the two tracks in the area of the high pass even with a periodic signal - like this: We need to be specific about phase as a function of time (delay) and phase as a function of looking at signals in the frequency domain or Bode plot. They're not the same thing, though they sometimes can be made to look the same. The former is only a product of time-and-distance; the latter is a relative measure of response for a signal.
|
|
|
Post by Deleted on May 7, 2021 14:43:30 GMT -6
no doubt phase shift with hpf is real, but how much does it count in the end? after all, every song today gets mangled in several stages. to me the advantages of hpf are bigger than the loss. of course, if you are after the pure natural sound of instruments and voices i guess you have to avoid eq as much as possible. but to me studio recordings are more about experimenting, bending the rules. the sonics recorded way into the red and gave us exciting, screaming garage music. the beatles broke every rule and gave us even more exciting pop, rock, indie, psychedelic music. i will try to implement less drastic hpf in my mixes to hear how it affects my sounds, but if it doesn‘t make a big enough difference, i will continue with my guerrilla hpf :-)

That's a strawman... the Beatles did it? The Beatles had George Martin as a producer, who had been making records for at least a decade before anyone heard of Paul or Ringo. And they were working with lab-coat-wearing BBC "balance engineers" who not only helped them break and bend those rules but also ensured that whatever they did was intelligible and would actually play back and translate to the real world. Basically the total opposite of the average person with no formal training working in the average home studio with no outside input. That's one of the major differences between analog and digital recording. On analog you kinda need to have half a clue about what you're doing to get usable tracks. Digital will play back anything you feed it. Might not sound great but it'll play! And not all of that garage rock stuff sounds good. How many people would be stoked if their album sounded like the Stooges' Raw Power? Yeah, massively influential record, but it sounds like garbage. Not even Iggy and the band were happy with the way it sounded. All that "artistic" stuff? Way irrelevant.

Don Gallucci made Fun House sound awesome. Raw Power is worse than Bathory.

And the original Bathory LPs and boots of them sound pretty good for garage recordings, because the one-man band’s dad owned a record label. But even then there are random recording errors and dropouts and noises. The limits of drunk guys in a garage with digital are on display in what happened between those recordings ending up on LP and the hack CD self-mastering job in the 90s that destroyed the tapes (I presume, because the first three albums went from cool to garbage). Probably because a Swedish garage is a terrible place to store tapes. DAWs made it easier to fuck up. People are still fucking up in so many ways. There are well-known producers who don’t realize how to change the quality settings of plugins, don’t trim everything to ensure no intersample clipping, don’t realize that 64-bit float is pretty much lossless, don’t dither, use plugins that cause tracks to go beyond out of phase to out of time because their DAW (e.g. Logic) isn’t compensated properly for latency and they’re too lazy to insert a delay, start flame wars with the pioneers of digital audio, etc. Every screwup makes it sound worse.
|
|
ericn
Temp
Balance Engineer
Posts: 14,937
|
Post by ericn on May 7, 2021 15:25:29 GMT -6
Yes, a filter on a microphone will still have "phase issues." They do not defy the laws of physics. Even the high boost filter on an SM7B will have "phase issues" but people love it.

As DrBill and I have stated many times, often people like phase distortion. Think about it: in DSP it isn’t that difficult to design a filter without phase distortion, yet everybody emulates the phase issues of analog filters 99% of the time. Why? Because as much as we “hate” it we really like it, or because a filter or EQ without it sounds unnatural.
|
|
|
Post by ericn on May 7, 2021 15:30:21 GMT -6
Funny, all these long master processing chains people use are just adding to the phase shift. Sayin'. Each piece increases the band-limiting.

What’s even more fun is looking at the phase plot of a subwoofer in the real world: you have what the crossover is doing, then the driver in the enclosure, then the smear of the main cabinets and the sub acoustical summation.
|
|
|
Post by Deleted on May 7, 2021 15:50:43 GMT -6
The way Svart and Monkey explained it is the way I understand it, although I did consult several resources including Universal Audio's explanation of it first. I might know something inside out but I check before posting because I don't like to forget something.. Multiple mics is just the de facto standard explanation (IME) of delay-induced phase and good insight into what happens to the frequency spectrum, although there are quicker ways to do this like copy a track and move one slightly out of time, use specific delay-inducing reverbs and of course delays themselves. I never said using an EQ flips things, it's just a perfect example of what phase does to your track ultimately in an extreme case. It literally cancels itself out.. The less phase you introduce by whatever means (including EQ) the better. www.uaudio.com/blog/understanding-audio-phase

The word phase is a source of plenty of consternation. When you remove periodic signals, the whole thing really falls apart. You can mount mics at different distances in multiples of a wavelength and achieve zero out-of-phase behavior for that frequency... for a periodic signal. You can have two mics at the same distance, high-pass one, sum, and have out-of-phase behavior between the two tracks in the area of the high pass even with a periodic signal. We need to be specific about phase as a function of time (delay) and phase as a function of looking at signals in the frequency domain or Bode plot. They're not the same thing, though they sometimes can be made to look the same. The former is only a product of time-and-distance; the latter is a relative measure of response for a signal.

Cool, appreciate the discussion.. Although why are they not the same thing? Let's take an analog EQ for example, and let me quote Ethan Winer: "With an analog EQ the delays (phase shift) are created with capacitors and inductors. In a digital EQ the delays are created with a tapped shift register. But the key point is that all EQ shifts phase, unless it uses special trickery." Delay is time. If we are going to get deeply technical about this, it's not actually the phase itself that's the primary issue. The time-domain shift between multiple mics causes "comb filtering," which makes things sound phasey. Chances are that when someone boosts a specific frequency, what they hear is the comb filtering from whatever multi-mic'd sources they've recorded and not the phase shift from the EQ itself. Although it's easy to refer to it as "phase," and while technically incorrect, if you look at an oscilloscope with two waveforms of the same amplitude shifted in time you get both constructive and destructive interference. This would leave the signal partly in phase, partly partially in phase, and partly perfectly out of phase. Ultimately, though, phase is always a function of time, and polarity vs. phase is a good example here. Polarity is the positive/negative signals; if they are synchronized and then delayed at the endpoint, the technical term for this would be "phase shift". Remember, waves work in cycles, constantly flipping through their axis; again, if these two signals arrive out of order they could be 180 degrees out of phase, and this would cause destructive interference. If we are being very technically accurate here, it's the polarity of positive and negative signals causing the cancellation; the phase shift is what will affect specific frequencies in regular multi-frequency recordings.
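The comb-filtering effect described above can be sketched numerically: summing a signal with a delayed copy of itself boosts frequencies where the copies line up and nulls frequencies where they land half a cycle apart. The 1 ms delay below is an arbitrary illustration:

```python
import numpy as np

def comb_db(f_hz, tau_s):
    """Level (dB) after summing a signal with a copy delayed by tau:
    |1 + e^{-j*2*pi*f*tau}|. Peaks at multiples of 1/tau, nulls at odd
    multiples of 1/(2*tau)."""
    h = 1 + np.exp(-2j * np.pi * f_hz * tau_s)
    return 20 * np.log10(np.abs(h) + 1e-15)

tau = 0.001  # 1 ms arrival-time difference between two paths (assumed)
print(round(float(comb_db(1000.0, tau)), 1))  # whole-cycle offset: +6 dB
print(round(float(comb_db(500.0, tau)), 1))   # half-cycle offset: deep null
```

That alternating peak/null pattern across the spectrum is the "phasey" comb sound, and it comes from the time offset, not from any EQ.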
|
|
|
Post by Guitar on May 7, 2021 16:24:15 GMT -6
If I understand correctly, those little phase deviations in, say, the high pass filter of this thread, the ones on Bode plots, mean your signal is out of phase with itself. The bass end will come out of the speakers at a slightly different time than the treble end of the same signal. It's still a function of time. A fun way to listen to this is to take an EQ like Crave EQ, or some others, and make some moves with your various filters. Then switch between minimum phase (digital phase), analog phase, and linear phase, and listen for the subtle differences. This is super tweaky stuff, likely not that important in a mix, but it's a fun listening test. You're just listening to phase shifts. And pre-ringing when you switch to LP. This frequency-dependent phase difference of one signal relative to itself is also the principle behind the BBE Sonic Maximizer and the SPL Vitalizer, in a very small nutshell. Some people find these delays to be musical. Like ericn and drbill apparently, and myself. A transformer is a good example of this 'musicality', although there are other things happening there too. I'm not sure how much of it is the slight phase shift in the bass end, but it is part of the recipe, part of the deal. Phase is time... plus frequency. Without frequency, it would just be "delay." The same way impedance without phase would just be resistance. I like this quote from @soriantis: "If we are being very technically accurate here it's the polarity of positive and negative signals causing the cancellation, the phase shift is what will affect specific frequencies in regular multi-frequency recordings."
|
|
|
Post by matt@IAA on May 7, 2021 16:41:00 GMT -6
This is incorrect -- they're simply not delays. The signal doesn't slow down, the impedance varies with frequency and the signal magnitude and phase varies accordingly. Look at the graph I posted. There is amplitude, phase, and frequency. There's no time represented on that graph other than tangentially as frequency. Temporally the two signals are perfectly aligned (within reason, particularly in the audible spectrum). If you send an impulse through the circuit that produced that graph, that is the response, and the impulse will not be delayed from one to the other. Yet there are still phase differences at different frequencies. An impulse will arrive at the same time through both chains, but the high-passed will have a lower amplitude - it will lose low-end content. The low-end content doesn't arrive "later" and it isn't comb-filtered out with the other signal. It is gone, it is reduced in magnitude, because in that case it flows to analog 0V or common or ground instead of where you're measuring. Consider the rotor, the rotor is spinning at the frequency, there is no delay - the whole thing is spinning together, as one. But there is a phase shift between the force and the unbalance that changes with speed.
This is exactly how a voltage divider of two resistors works, except that a voltage divider behaves the same at all frequencies. Yet we don't say a voltage divider delays the signal, and we don't say that some kind of phase cancellation is what attenuates the amplitude. Because that's not how it works, in either case.
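The divider comparison is easy to put in numbers. A quick sketch of the idea (component values are my own arbitrary picks, chosen so the RC corner lands near 159 Hz; this is just the textbook divider math, not a model of any particular preamp):

```python
import numpy as np

R1 = R2 = 10_000.0          # resistive divider: frequency-independent
R, C = 10_000.0, 100e-9     # RC high-pass: fc = 1/(2*pi*R*C), about 159 Hz

freqs = np.array([20.0, 159.15, 20_000.0])
w = 2 * np.pi * freqs

# Resistive divider: same attenuation and zero phase at every frequency
H_res = np.full_like(w, R2 / (R1 + R2))

# RC "divider" with the output taken across R: the capacitor's impedance
# varies with frequency, so magnitude AND phase vary with frequency
H_rc = (1j * w * R * C) / (1 + 1j * w * R * C)

for f, hr, hc in zip(freqs, H_res, H_rc):
    print(f"{f:8.1f} Hz  resistive |H| = {hr:.3f}, phase 0.0 | "
          f"RC |H| = {abs(hc):.3f}, phase {np.degrees(np.angle(hc)):+.1f} deg")
```

At the corner frequency the RC network reads |H| = 0.707 at +45 degrees, while the resistive divider stays at 0.5 and 0 degrees everywhere - no delay in either case, just impedance varying (or not varying) with frequency.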
Delay is time, but phase is not. Frequency involves time, but frequency is not phase. Phase can be related to time in a periodic signal, but that is a special case. You're stuck on periodic signals, and that's not a good way to understand it. As I said, as soon as you drop periodicity from the equation it becomes much, much clearer what the difference is between delay and phase.
No. Phase is a function of frequency. They're not the same thing. Look back again at the rotordynamic example I gave you. The phase angle in that case represents the per-revolution difference between the angle portion of the unbalance-force vector and the angular location of the actual mass causing the unbalance. The relationship between those is constant for any given frequency. You can excite that response forever by spinning the rotor at that frequency. The frequency is a description of the force exciting the system, and while it has a time component in it, it has nothing to do with delay or distance or time in that sense. Likewise, in the case of audio, that frequency describes how the impedance of the components in the circuit varies. They don't vary in time, they vary with frequency - and that frequency describes the forcing function, or signal. It has nothing whatsoever to do with time, distance from the source, or whatever else.
This is why everyone gets confused. This is a no-good, terrible way to talk about it, and it's why I said the word phase is contentious. There is only phase when you have a periodic signal. Otherwise it is not phase at all; it is simply a time delay. This is easy to see, because the interference is a function of the period, NOT of the time delay introduced. For example, if you introduce a 16.7 millisecond delay (one period of 60 Hz, i.e. 1/60 s), you'll get a zero phase angle at 60 Hz and at all multiples of 60 Hz, while other frequencies land at varying degrees of offset - but only with periodic signals. With a non-periodic signal, you simply have a 16.7 millisecond shift in the waveform.
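You can demonstrate that last point in a few lines of Python (numbers are my own - 48 kHz sample rate so that one period of 60 Hz is exactly 800 samples):

```python
import numpy as np

fs = 48_000
n = np.arange(4800)        # 0.1 s of samples
delay = fs // 60           # one period of 60 Hz = 800 samples, about 16.7 ms

# Periodic signal: a 60 Hz sine delayed by one full period is indistinguishable
# from the original - the delay "disappears" into the phase
sine = np.sin(2 * np.pi * 60 * n / fs)
print(np.allclose(np.roll(sine, delay), sine))

# Non-periodic signal: a click just moves 800 samples later; nothing
# "aligns" or "cancels" because there is no period to wrap around
click = np.zeros(len(n))
click[1000] = 1.0
shifted = np.roll(click, delay)
print(np.argmax(click), np.argmax(shifted))
```

The sine comes back identical; the click is simply 16.7 ms late. Same delay, completely different story, which is exactly why "phase" only means something for periodic signals.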
Polarity is a minus sign affixed to the front of the equation. Phase is an offset introduced into the periodic function. There are times when these are mathematically equivalent - you get the same waveform. But they represent different things, and again, as soon as you remove periodicity they are no longer equivalent.
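A tiny sketch of that polarity-vs-phase equivalence for the sinusoid case (my own example values, just illustrating the identity -sin(x) = sin(x + pi)):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
sine = np.sin(2 * np.pi * 5 * t)

# Polarity: a minus sign in front of the whole signal
flipped = -sine

# Phase: a 180-degree offset inside the argument of the periodic function
shifted = np.sin(2 * np.pi * 5 * t + np.pi)

# For a pure sinusoid the two coincide sample for sample
print(np.allclose(flipped, shifted))
```

For a non-periodic signal (a click, a drum hit) you can still flip the minus sign, but there is no phase offset that reproduces it - which is the whole point about periodicity.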
|
|
|
Post by matt@IAA on May 7, 2021 16:44:18 GMT -6
"If I understand correctly, those little phase deviations in say, the high pass filter of this thread, the ones on bode plots, mean your signal is out of phase with itself. The bass end will come out of the speakers at a slightly different time than the treble end of the same signal. It's still a function of time."

No, sir. The periodic waveform will look as if there is a delay because the phase changes, but the signal is not slowed down. This should be fairly simple to test. Take a balloon, and set up a mic and a Y-cable to two identical mic pres. Engage the HPF on one; don't engage it on the other. Record the balloon pop. Look at the waveforms. Zoom in as far as you like. Put the HPF as high as you like. They won't be time-shifted.
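The balloon-pop experiment can also be simulated in a few lines (a hedged sketch, assuming scipy; I'm standing in for the pop with a unit impulse and for the preamp HPF with an arbitrary 2nd-order Butterworth):

```python
import numpy as np
from scipy.signal import butter, lfilter

# Stand-in for the balloon pop: an impulse at sample 100
x = np.zeros(1000)
x[100] = 1.0

# One "preamp" is flat (x untouched); the other has a steep HPF engaged
b, a = butter(2, 0.1, btype="highpass")
y = lfilter(b, a, x)

# The filtered pop loses its low end (zero gain at DC), but the onset
# lands on exactly the same sample - no time shift
print(np.argmax(np.abs(x)), np.argmax(np.abs(y)))
```

Both peaks land on sample 100: the high-passed version is smaller and rings a little afterward, but nothing arrives later, which is the point of the balloon test.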
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 7, 2021 16:46:48 GMT -6
Exactamundo - as Svart said, you'll never avoid this. Songs are basically a range of ever-changing frequencies passing through layers of electrical components; it's deciding what to care about, and IMO most large issues are generally room- or mic-related: reflections, bad mic setups, or even excessive damping. I bought some middle-of-the-road treatment at my last place and the recordings always sounded lifeless, but if you moved to certain locations you'd get slapback. Now I've got scatter plates and multi-function treatment, etc.
Also noticed the rating was 70 Hz for the traps - no guesses why I struggled with getting the low end. I do tend to stay away from over-processing nowadays though. LPF'ing cymbals down to like 15K doesn't sound right to me, and in smaller mixes without double kicks or 50-string guitars I will try to keep things as intact as possible and focus on getting the source right.
|
|