

The amplitude of a sound wave refers to its loudness at a particular point in time. For a perfectly symmetrical and repetitive sound wave, like a sine wave, the wavelength measures the distance between two equal amplitudes along the cycle, and the frequency (a.k.a. “pitch”) is the number of times per second the sound wave repeats that cycle. The phase of a sound wave tells us where exactly along this cycle we’re looking. In audio production, the relationship between two or more waveforms is what really matters; the absolute phase of a singular sound wave isn’t all that relevant, for reasons we’ll discuss next.

Mixing is all about combining distinct but cohesive elements so that every component can be heard as intended by the artist, producer, and engineer. As such, you’ll be juggling countless sound waves that each vary in frequency, amplitude, harmonic overtones, and more. What’s bound to happen is that some waves will go in and out of phase with one another at different points in time. When two signals are “in phase” with one another, their amplitudes (i.e., peaks and troughs) coincide. Understanding phase is crucial for optimizing your mixes: phase problems are at the root of many mixing issues and have major implications for the overall sound.

To keep things simple, imagine two perfectly symmetrical and repetitive sine waves, one in the left channel and one in the right. When both channels are perfectly aligned, their amplitudes are identical across time, meaning you would hear the same sound on both sides. Bring those channels together and play them at the same time, and you’ve got what’s known as “constructive interference,” because the combination of these in-phase waves doubles the resultant amplitude. Conversely, if these channels were perfectly “out of phase” (i.e., one channel’s wave is at its lowest amplitude when the other is at its loudest), their peaks and troughs would cancel each other out. This is called “destructive interference” or “phase cancellation.”

The precise scenario described above isn’t common in the real world, since perfect, fundamental sound waves won’t be what you’re working with, but the theory still applies. Whether you’re recording a single instrument or multiple instruments with any number of microphones, phasing is a factor that shouldn’t be ignored. Phase interactions also come into play when layering samples over acoustic drums, using different plugins on similar tracks, applying parallel processing, and more. Simply put, audio phase is a factor whenever two or more signals are combined - the more related those signals are, the more relevant phase becomes.

If you’re recording an instrument with two separate microphones (stereo recording), the incoming fundamental frequencies (i.e., the notes played) will be the same in each channel. However, because each microphone is in a unique spatial position, different overtones will enter each mic at a different time. As a result, each channel’s sound waves will be similar in some respects but different in others.
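The constructive and destructive interference described above can be sketched numerically. The sketch below (plain Python, with a hypothetical 1 kHz sample rate and 10 Hz tone chosen only to keep the numbers simple) sums two identical sine waves, first perfectly in phase and then 180 degrees apart:

```python
import math

SAMPLE_RATE = 1000  # samples per second (hypothetical, for illustration)
FREQ = 10           # a 10 Hz sine wave

def sine(phase_offset_radians):
    """One full cycle of a sine wave, sampled at SAMPLE_RATE, with a phase offset."""
    return [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE + phase_offset_radians)
            for n in range(SAMPLE_RATE // FREQ)]

left = sine(0.0)
in_phase = sine(0.0)          # identical wave: 0 degrees apart
out_of_phase = sine(math.pi)  # inverted wave: 180 degrees apart

# Summing channels sample by sample, as a mix bus effectively does:
constructive = [a + b for a, b in zip(left, in_phase)]
destructive = [a + b for a, b in zip(left, out_of_phase)]

print(max(constructive))                  # ~2.0: peaks coincide, amplitude doubles
print(max(abs(s) for s in destructive))   # effectively 0: peaks meet troughs and cancel
```

Real program material never cancels this completely, since the two channels are never perfect copies, but the same arithmetic governs every pair of summed signals.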

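The inter-microphone timing differences mentioned above can be sketched the same way. Assuming a hypothetical extra path length of 0.343 m to the second mic and the usual ~343 m/s speed of sound, the resulting 1 ms delay shifts each frequency by a different amount of phase, so some frequencies reinforce while others cancel when the two mics are summed:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second (approximate, in air)
extra_distance = 0.343  # hypothetical extra path to the second mic, in metres
delay = extra_distance / SPEED_OF_SOUND  # 1 ms arrival-time difference

def combined_gain(freq_hz):
    """Peak amplitude of two equal sine waves summed, one delayed by `delay` seconds."""
    phase_shift = 2 * math.pi * freq_hz * delay
    # Identity: sin(x) + sin(x + p) = 2 * sin(x + p/2) * cos(p/2),
    # so the summed peak amplitude is 2 * |cos(p/2)|.
    return 2 * abs(math.cos(phase_shift / 2))

print(round(combined_gain(1000), 3))  # delay = one full cycle: waves reinforce (~2.0)
print(round(combined_gain(500), 3))   # delay = half a cycle: waves cancel (~0.0)
```

Because the delay is fixed but the phase shift grows with frequency, the mic pair boosts some overtones and notches others, which is why each channel ends up similar in some respects but different in others.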