Linear phase is a feature included in some types of processing, such as certain EQ plugins. Let’s look at what linear phase means in the context of mixing, and whether it’s something you should be using if your plugins offer it.
What is Linear Phase
Linear phase is a feature on some plugins that corrects the phase shifts introduced by certain types of processing.
Let’s stick with EQ as an example to explain how linear phase works.
When you make an adjustment to the frequencies of your audio via EQ, either cutting or boosting, this affects the phase of your audio.
The more drastic the cut or boost, particularly with narrower adjustments, the greater the chances for phase issues.
Linear phase corrects these issues by delaying the signal equally across all frequencies, introducing latency into the mix in exchange for keeping the phase intact.
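If you want to see this trade-off in numbers, here’s a minimal Python sketch (assuming numpy and scipy are installed; the boost settings and filter length are arbitrary illustration values, not taken from any particular plugin) that builds a conventional minimum phase peaking EQ and a linear phase FIR approximation of the same boost:

```python
import numpy as np
from scipy import signal

fs = 48000                             # sample rate in Hz
f0, gain_db, Q = 1000.0, 12.0, 4.0     # a fairly aggressive, narrow boost (example values)

# Conventional (minimum phase) peaking EQ -- standard RBJ cookbook biquad
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
w, h_min = signal.freqz(b, a, worN=4096, fs=fs)

# Linear phase version: sample the same magnitude response and build a
# symmetric FIR filter from it (symmetric taps are what makes the phase linear)
numtaps = 4095                                    # a long filter = high latency
freqs = np.linspace(0, fs / 2, 512)
_, h_target = signal.freqz(b, a, worN=freqs, fs=fs)
fir = signal.firwin2(numtaps, freqs, np.abs(h_target), fs=fs)

# The minimum phase EQ bends the phase around the boost; the FIR's group delay
# is constant (a pure delay) at the cost of (numtaps - 1) / 2 samples of latency.
max_shift = np.max(np.abs(np.degrees(np.unwrap(np.angle(h_min)))))
_, gd = signal.group_delay((fir, [1.0]), w=512, fs=fs)
print("minimum phase EQ, max phase shift: %.1f degrees" % max_shift)
print("linear phase FIR, group delay: %.0f samples at every frequency" % np.median(gd))
print("linear phase FIR, latency: %.1f ms" % ((numtaps - 1) / 2 / fs * 1000))
```

The first number shows how much the conventional EQ bends the phase around the boost; the other two show that the linear phase version keeps a perfectly constant group delay, but only by delaying everything by a couple of thousand samples.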
Not every EQ plugin offers linear phase processing. The ones that do either work this way by default or, as in the case of my favorite EQ plugin, the FabFilter Pro-Q 3, give you a processing mode control where you can switch between the standard zero latency mode and linear phase at different resolutions (with higher resolutions costing more latency).
You might be thinking: if linear phase essentially works to put the phase back where it should be, why not just use it all the time? Or, for that matter, why aren’t ALL EQ plugins linear phase by default?
The downside of linear phase is that it’s much more CPU intensive, and the latency it introduces to correct the phase (see CPU load in mixing) can make it difficult to mix in real time, particularly on some systems.
It can also introduce artifacts into the audio, most notably an unpleasant pre-ringing ahead of transients which, beyond being audible, can smear and soften your audio’s attack.
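Pre-ringing is easy to demonstrate. Here’s a minimal sketch (again assuming Python with numpy and scipy; the cutoff frequency and filter length are just illustrative values) that filters a single click with a conventional minimum phase filter and with a symmetric, linear phase FIR filter, then measures how much of each output’s energy arrives before its main peak:

```python
import numpy as np
from scipy import signal

fs = 48000
click = np.zeros(fs // 10)
click[1000] = 1.0                       # a single-sample transient

# Minimum phase IIR low-pass (like a conventional EQ's high cut)
b_iir, a_iir = signal.butter(4, 200, btype="low", fs=fs)
y_min = signal.lfilter(b_iir, a_iir, click)

# Linear phase FIR low-pass with the same cutoff
fir = signal.firwin(2001, 200, fs=fs)   # symmetric taps -> linear phase
y_lin = signal.lfilter(fir, [1.0], click)

def pre_ring_energy(y):
    """Fraction of the output's energy that arrives before its main peak."""
    peak = np.argmax(np.abs(y))
    return np.sum(y[:peak] ** 2) / np.sum(y ** 2)

print("minimum phase, energy before the peak: %.3f" % pre_ring_energy(y_min))
print("linear phase,  energy before the peak: %.3f" % pre_ring_energy(y_lin))
```

The minimum phase output puts only a small fraction of its energy ahead of the peak, while the symmetric linear phase filter puts roughly half of it there, which is exactly the pre-ringing you can hear on sharp transients.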
Should You Use Linear Phase
It’s important to remember that for a lot of processing, EQ included, shifting the phase is inherently how these plugins work. The same holds true for the original analog EQs these plugins are modeled on.
I’ve shared this video before, but I’ll do it again as it does such a good job explaining linear phase:
Most of the time you won’t even notice any phase issues from processing unless you’re using extreme settings, like aggressive cuts or boosts, which aren’t recommended to begin with because, phase issues aside, they sound unnatural (unless that’s the intended effect).
It’s also important to remember that phase is only an issue when you’ve got two (or more) tracks recording the same live source.
Examples of this would be most drum microphones where you have bleed from other instruments (that you’re not removing in post via a drum gate) or recording acoustic guitar with two microphones.
This is when you might need to be wary of phase issues, but as a general rule whenever you’re working with stereo tracks or tracks with live bleed from other tracks, you should be checking for phase issues anyway (see my overview on fixing phase issues).
Phase issues can arise from any number of sources, even something as simple as two microphones capturing the same source from distances that differ by a few feet or sometimes just inches, as the quick sketch below shows.
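To put some rough numbers on that, here’s a small sketch (plain Python; the 15 cm spacing is just an example value) showing how a modest path difference between two mics on the same source turns into a delay, and how summing the delayed copy with the direct signal cancels specific frequencies:

```python
SPEED_OF_SOUND = 343.0   # metres per second, roughly, at room temperature

def comb_nulls(extra_distance_m, max_freq_hz=20000):
    """Frequencies (Hz) where a delayed copy cancels the direct signal."""
    delay_s = extra_distance_m / SPEED_OF_SOUND
    nulls = []
    n = 0
    while True:
        f = (2 * n + 1) / (2 * delay_s)   # odd multiples of the half-period
        if f > max_freq_hz:
            break
        nulls.append(round(f, 1))
        n += 1
    return nulls

# A second mic just 15 cm (about 6 inches) farther from the source:
print(comb_nulls(0.15)[:5])
# -> roughly [1143.3, 3430.0, 5716.7, 8003.3, 10290.0] Hz
```

Shift either mic and those nulls move, which is why even small placement differences can audibly change the combined sound.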
If you’re using EQ (or any other kind of processing) on a single track capturing a single source, like a mono vocal track (the kind of track that generally makes up the majority of your mix), there are no other tracks whose phase relationship you need to worry about.
Standard minimum phase processing (which is typically the default) is not the boogeyman that articles selling linear phase make it out to be. 99% of the time you don’t need linear phase, and not only are the artifacts not worth it, but many engineers actually PREFER the sound of minimum phase processing.
More than that, many mixing engineers believe linear phase processing sounds comparatively sterile and doesn’t give you the analog color you get with the standard minimum phase options.
Now you know the difference between linear phase and conventional processing, and how each type of plugin affects your audio’s phase and, more importantly, its sound.