
Basic Mixing I

Mixing or Mix.
Mixing is an art in itself, just as music is, and the word means exactly what it says. Mixing, or making a mix, is adjusting all the different instruments or individual tracks so that they sound good together, both composition-wise and mix-wise. Starting a mix is a simple task once you understand what to do and what not to do. Later on we will also discuss the static mix and the dynamic mix. The Basic Mixing chapters explain common mixing standards and rules, and also provide background information on sound in general.

The Starter Mix, Static Mix and Dynamic Mix.


Like any process that can be broken down into parts, mixing divides into three basic steps. When starting a mix, you will usually have some previously recorded tracks that need further mixing. We will explain how to set up all tracks quickly, so you have a default setup and can progress to the static mix. The starter mix can usually be set up in less than an hour of working time. The static mix takes a bit longer, about four hours or so. The dynamic mix can take another 4 to 12 hours of working time, and finishing off the mix can take one or two days or more, depending on creativity, style and experience. It is good to know that the total working time for a mix divides into three standard parts: first the Starter Mix, then the Static Mix, then the Dynamic Mix. The last part, finishing off, is simply working until the mix is done. Before we discuss these subjects, we will start off with some more details about sound and audio.

Overall Loudness while mixing.


The first mistake is thinking that how loud the mix sounds is important; many beginners try to get their mix as loud as they possibly can. They push up all the faders until they reach a desired overall loudness level. Don't do that. The master VU meter looks attractive when it shows all its green and red lights, and you might be fooled into thinking that louder is better. Louder does not mean better when mixing: loudness belongs to the mastering stage, not the mixing stage. In the mixing stage we try to balance the three dimensions of mixing, creating separation and togetherness at the same time. Although separation and togetherness might seem contradictory, every instrument needs its own place on the stage, and together they sound as one mix. So mixing is mostly about balancing (adjusting) single tracks so that they sound good.

As a general rule on digital systems, we do not want to pass 0 dB on the master track. Keeping a gap between -6 dB and 0 dB helps your mix stay free of distortion. Some people like to place a limiter on the master track and mix louder; maybe it works for them, but we do not recommend it until you are experienced with a plain dry mix under 0 dB. If you need your mix to be louder, just raise the volume of your speakers instead; that is the normal way to do it. We will explain later what to do with the master track of your mixer. While mixing, do not place anything on the master fader: no plugins, reverb, maximizers, etc. At most use a brickwall limiter on the master fader with a threshold of -0.3 dB, reducing only 1 or 2 dB when peaks occur. For beginners and the less experienced, we recommend nothing on the master fader at all, with the fader set to 0 dB.
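As a rough illustration of watching your headroom, here is a small sketch that measures the peak level of a block of samples in dBFS (the function name and sample format are our own illustrative choices, not from any particular DAW):

```python
import math

def peak_dbfs(samples):
    """Return the peak level of a block of samples in dBFS.

    Samples are floats in the range -1.0..1.0, where 1.0 is
    digital full scale (0 dBFS). Anything above 0 dBFS would
    clip on a digital master bus."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak)

# A sine peaking at half of full scale sits at about -6 dBFS,
# right at the edge of the comfortable headroom gap:
samples = [0.5 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
print(round(peak_dbfs(samples), 1))  # about -6.0
```

If this reading creeps toward 0 dB while mixing, the advice above says to pull the instrument faders down rather than the master fader.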

Volume or Level.

Because the human ear can detect sounds over a very wide range of amplitudes, sound pressure is usually measured as a level on a logarithmic decibel (dB) scale. The most common level controls are the faders of a mixer or the single volume knob of any stereo audio system. Because volume is commonly equated with level, beginning users might overlook its possibilities. The individual volume faders of a mixer sum all levels towards the master fader as a mix: the levels of the tracks add up on the master bus. When a sound or note is played, its frequency and amplitude (level, volume) allow our ears to register it and our brains to understand its information. As you can guess, our hearing reacts differently to different frequencies and amplitudes, allowing loud or soft sounds to be understood, and letting us perceive loud or soft, left, center or right, distance and environment. Our hearing is a wonderful natural device.
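The logarithmic dB scale mentioned above can be made concrete with two small helper functions (illustrative names of our own; the 20-log rule is the standard one for amplitude):

```python
import math

def gain_to_db(gain):
    """Convert a linear amplitude ratio to decibels (20*log10)."""
    return 20 * math.log10(gain)

def db_to_gain(db):
    """Convert decibels back to a linear amplitude ratio."""
    return 10 ** (db / 20)

print(gain_to_db(2.0))   # doubling the amplitude is about +6 dB
print(db_to_gain(-6.0))  # -6 dB is roughly half the amplitude
```

This is why a fader move of a few dB near the top of its travel changes so much: each 6 dB roughly doubles or halves the signal amplitude.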

The Fletcher-Munson chart shows how sensitive our hearing is to different frequencies at certain loudness levels. As you can see, how loud a note is played affects how we perceive its frequency content. From frequency and volume (amplitude, loudness) we also get a sense of direction and distance (depth). Our brains always try to make sense of sounds as if they were naturally produced. Music and mixing are mostly unnatural (or less natural), but our brains understand music better when it is mixed in a natural way, for our natural hearing. Mixing therefore aims at our natural hearing by presenting natural elements correctly (dry signal, reverberation, effects, summing towards the master bus). For separation and togetherness alike, we can refer first to the volume of the sound, instrument, track or mix that is playing. Like balance or pan, volume is an easily overlooked part of a mix. You might prefer to fiddle with effects or other more interesting things, but volume is the most important control. In fact, volume and pan (balance) are the first things to set when starting a mix, and you keep returning to them throughout the mixing process. Fader level and panning are not only important for the mix; composition-wise, volume or level is also a first tool, for instance when you use the mute button.

Balance or Pan.
On a single-speaker (mono) system, where only frequency and volume apply, we would not have to worry about pan or balance: all sound comes from the center. With a pair of speakers (stereo) it is possible to pan or balance from left, through center, to right. We call this left, center and right of the panorama, and it allows us to perceive direction from left to right. Although just as important to our hearing as volume or level, panning or balance is often overlooked by beginning users. What can be difficult about setting two knobs, fader and balance? It sounds easy, but planning what you're doing can avoid a muddy or fuzzy mix later on and keeps things natural to our hearing. Pan (panorama) and balance are the same thing. Panorama matters for where instruments are placed; it is the first sense of direction. As a common rule, volume faders and balance knobs are the first things to set, and to refer back to, when setting up a mix. Beginning users who just set volume and panning without a plan, or without understanding dimensional mixing, are quite often lost and struggle to finish a completed mix.
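To show what a pan knob actually does to the two channels, here is a sketch of a constant-power pan (a common -3 dB pan law; the function name and -1..1 pan range are our own illustrative conventions):

```python
import math

def constant_power_pan(sample, pan):
    """Pan a mono sample across the stereo field.

    pan runs from -1.0 (hard left) through 0.0 (center) to
    1.0 (hard right). The sine/cosine law keeps the combined
    power constant, so a sound does not get louder or softer
    as it moves across the panorama."""
    angle = (pan + 1.0) * math.pi / 4  # maps pan to 0..pi/2
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

l, r = constant_power_pan(1.0, 0.0)
# centered: both channels at cos(45 deg), about 0.707, i.e. -3 dB each
print(round(l, 3), round(r, 3))
```

Hard left sends everything to the left channel and nothing to the right; center splits the signal equally at -3 dB per side so the perceived level stays steady.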

Dimensional Mixing.
As a concept, dimensional mixing has to do with 3D (three-dimensional) sound. Frequency, amplitude and direction together make the listener understand (hearing with the ears, interpreting with the brain) the 3D spatial information. When mixing a dry signal towards a naturally understandable signal, we need some effects as well as some basic mixer settings to achieve a natural perception. Setting the pan to the left makes the listener believe the sound comes from the left; setting it to center, from the center; setting it to the right, from the right. All very easy to understand. By focusing on frequency we can also influence how the listener perceives depth: sounds with a lot of treble (higher frequencies) are perceived as close, while a more muffled sound (with less treble) is perceived as more distant (further back). Our brains also understand reverberation: when we clap our hands inside a room, the dry clap sound (transients) from our hands is heard together with reverberation coming off the walls (early reflections). Reverberation, especially the delay between the dry clap and the first reflections, makes our brains believe there is distance and depth, because we first hear the transient original signal of the clap and then the reverberation. The more natural, the more understandable. So there are quite a few influences on what our hearing accepts as 3D spatial information; the goal is to make the listener believe the mix is true. Our hearing also likes natural and believable sounds, sometimes referred to as stage depth. With all the controls of a mixer you can influence the way the 3D spatial information is transmitted to the listener. You can assume that volume (fader or level), panorama (balance or pan), frequency (fundamental frequency range) and reverberation (reverb or delay) are the tools you can use to make the listener understand the mix you are trying to transmit. We will discuss dimensional mixing in more detail later on; after that we will head to the frequency, or frequency range, of a sound. To summarize: we perceive distance, direction, space, etc. through clues such as volume, frequency, the time and level differences between a sound arriving at the two ears (if it hits the left ear louder and sooner than the right), and reverberation.
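The clap-in-a-room example above comes down to simple travel time: sound moves at roughly 343 m/s in air, so the extra path a reflection travels translates directly into the delay our brain reads as distance. A tiny sketch (illustrative function of our own):

```python
def predelay_ms(extra_distance_m, speed_of_sound=343.0):
    """Delay in milliseconds between a direct sound and a
    reflection whose path is extra_distance_m longer,
    assuming sound travels at about 343 m/s in air."""
    return extra_distance_m / speed_of_sound * 1000.0

# A wall 5 m away adds about 10 m of round trip, so the first
# reflection arrives roughly 29 ms after the direct clap:
print(round(predelay_ms(10.0), 1))
```

This is the same quantity a reverb plugin's pre-delay control sets by hand: longer pre-delay suggests a bigger room or a more distant wall.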

The Frequency Spectrum.


A normal frequency spectrum ranges from 0 Hz to 22,000 Hz; all normal human hearing fits within this range. Every instrument plays somewhere in this range, so the spectrum is filled with all the sounds of the instruments or tracks that make up the mix. On a normal two-way speaker system these frequencies are presented in stereo: one speaker for left hearing and one for right hearing. So on a stereo system two frequency spectrums are played (left speaker and right speaker). The sound coming from the left and right speakers together makes up the stereo frequency spectrum, as presented below. Combined left and right (stereo) makes center (mono).

This chart shows a commercial recording, a finished song or mix. The x-axis shows the frequency range of the spectrum, 0 Hz to 22 kHz. The y-axis shows level in dB. On digital systems nowadays we go from 0 dB (loudest) down to about -100 dB (soft or quiet). In this chart (AAMS Analyzer Spectrum Display) you can see that the lower frequency range below 1 kHz is much louder in level than all the higher frequencies above 1 kHz. The loudest levels are around 64 Hz at about -35 dB, while the softest levels are around -65 dB, ranging from 4 kHz to 22 kHz. The difference is -35 dB - (-65 dB) = 30 dB! And with roughly every -10 dB of level reduction, the perceived volume for human hearing halves. Instruments like bass or bass drum (which have more lower frequencies in their range) generate far more power (level) than the hi-hat or other high-frequency instruments. Even though we might perceive a hi-hat clearly when listening, the hi-hat itself produces mainly higher frequencies and generates far less volume (amplitude, power, level) compared to a bass drum or bass. This is how our hearing naturally works. However, the master VU meter of a mix only displays loudness, so you are really watching the lower frequencies responding. The 30 dB difference between the lows and highs amounts to three halvings of perceived volume. From left to right, roughly from 120 Hz up to 22 kHz, the levels of the frequencies all slope downwards. Speakers show more movement when playing lower frequencies and less movement when playing higher frequencies. This chart is taken from AAMS Auto Audio Mastering System; this software package is for mastering audio, but it can also show spectrums and give suggestions based on source and reference calculations for mixing. This can be handy for investigating the sound of finished mixes or tracks, showing frequencies and levels.
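The "every -10 dB halves the perceived volume" rule of thumb used above can be written down directly (a rough psychoacoustic approximation, not an exact law; the function name is ours):

```python
def perceived_loudness_ratio(db_difference):
    """Rule of thumb: every 10 dB increase is perceived as
    roughly twice as loud, so the ratio is 2^(dB/10)."""
    return 2 ** (db_difference / 10.0)

# The 30 dB gap between the -35 dB lows and -65 dB highs in the
# chart means the lows are perceived roughly 8 times as loud:
print(perceived_loudness_ratio(30))  # 8.0
```

Three steps of 10 dB are three doublings, which is why a 30 dB spectrum tilt dominates a VU meter so completely.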

Human Hearing.
Human hearing is perceptive and difficult to explain; it is logarithmic. Lower frequency ranges measure louder in level, while higher frequencies measure softer, yet both are heard well (perceived naturally) at their own independent levels. Not only is human hearing good at understanding frequencies, perceiving them logarithmically; room acoustics and reverberation also play a great part in understanding the direction of sound. In general, a natural-sounding mix will be more understandable to the listener.

The Basic Frequency Rule.


The basic rule for mixing is that the bottom end, the lower frequencies, matters most, because the lower frequencies take away so much headroom and have the loudest effect on the VU meters (dynamic level). The lower frequencies fill up a mix and are the main portion to look after. The VU meter mainly gives you a feel for how the lowest fundamental frequencies are behaving: it responds strongly to lower frequencies and much less to higher frequencies. The fundamentals of a mix's loudness mainly range from 0 Hz to about 1 kHz; these show up well on a VU meter. The range from 0 Hz to 4 kHz is shown by the VU meters as loudness, and it is the range where you must pay attention to detail. If you can see the difference in loudness between a bass drum and a hi-hat, you will understand that the hi-hat (though clearly audible) carries far less power than the bass drum does. A beginner's mistake is to mix the bass drum and bass loud and then try to add more instruments into the mix; this leaves you with limited headroom (dynamic level). The most common tools for adjusting frequency are EQs (equalizers), but as we will learn later on, there are quite a few more tools for shaping the frequency spectrum. As explained before, volume (amplitude), panorama (pan or balance) and frequency range (EQ, compression, limiter, gate) are the main components, or dimensions, of mixing. Before we add reverberation, we must get a mix that is dry and uses only these components; we call this a starter mix.

Notes and Frequencies.


To make frequencies more understandable, imagine a single instrument playing all sorts of notes and melodies on a timeline. To get a feel for where notes sit in the frequency spectrum and how to place their ranges, the chart below shows a keyboard together with some instruments and the range of notes (frequency range) they can normally play. Every note from C1 to C7 on a keyboard has its own fundamental frequency. You can see bass, tuba, piano, etc. in the lower range, and violin, piccolo, and again the piano, which can also play high notes.
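The note-to-frequency relationship behind such a chart follows twelve-tone equal temperament: each semitone multiplies the frequency by the twelfth root of two, anchored at A4 = 440 Hz. A small sketch using MIDI note numbers (a standard convention, though the function name is ours):

```python
def note_frequency(midi_note, a4=440.0):
    """Frequency in Hz of a MIDI note number in twelve-tone
    equal temperament, with A4 (MIDI note 69) at 440 Hz."""
    return a4 * 2 ** ((midi_note - 69) / 12)

print(round(note_frequency(69)))  # A4 -> 440
print(round(note_frequency(60)))  # C4 (middle C) -> 262
print(round(note_frequency(24)))  # C1 -> 33
```

Each octave doubles the frequency, which is why the keyboard chart compresses so many low notes into such a narrow slice of the spectrum.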

It is useful to know every instrument's range, but when mixing it is better still to know how to give each instrument its own place inside the available spectrum. The colored areas of the chart are the fundamental frequency ranges. When we need to do something about the quality of an instrument, we will most likely work inside its fundamental frequency range; boosting or cutting in that area changes the quality of its playing. More interesting are the black areas of the chart: these represent frequencies that are not fundamental. Because these frequencies are not fundamentals, we are likely to cut heavily in these areas with EQ, saving headroom for the mix and gaining some clarity (separation). Most of the hidden mix headroom is taken up by the first and second bass octaves (0 Hz - 120 Hz). Most notes played by instruments have a fundamental frequency below 4 kHz, and when you really look at the fundamentals of a mix, the frequencies from 50 Hz to 500 Hz are what fill it; this is where almost every instrument plays its range, and it is therefore very crowded. The misery area between 120 Hz and 350 Hz is especially crowded and is the second frequency range to look after (the first being 0 Hz - 120 Hz). The headroom required for the proper mixing of any frequency is inversely proportional to its audibility or overall level: the lower you go in frequency, the more hidden energy, or headroom (dynamic level), it costs the mix. This is why the first two frequency ranges need to be the most efficiently negotiated parts of any mix (the foundation of the house), and they are the parts most often fiddled with by the inexperienced. Decide which instruments belong in this range and where their fundamental notes are played.

Keeping what is needed and removing what is not (reduction) works better than just making everything louder (boosting). To hear all the instruments in a mix you need to separate them, using volume, panorama and frequency range. You can get more clarity by cutting the higher frequencies out of the bass and playing a piano on top that has its lower frequencies cut. By this frequency rule they do not affect each other, and the mix will sound less muddy and more clear (separation). Both bass and piano have then established their own place inside the available frequency spectrum of the mix, and you will hear them both together, sounding clean, by following the fundamental frequency range rules. For most instruments a cut from 0 Hz up to 120 Hz is not uncommon; in fact cutting lower frequencies is the most common move of all. Apart from the bass drum and bass, which really need their low information present, we are likely to save headroom on all other instruments or tracks by cutting some of their lower frequency range, anywhere up to 120 Hz. The lower mid-range misery area between 120 Hz and 350 Hz is the second pillar, carrying the warmth of a song, but it can turn unpleasant when distributed unevenly. Pay attention to this range, because almost all instruments are present here.
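The low-cut move described above can be sketched with a one-pole RC-style high-pass filter. This is a minimal illustration of the principle, not a production EQ: a single pole rolls off at only 6 dB per octave, so for the steep cuts recommended here you would cascade several stages or reach for a dedicated EQ. All names are our own:

```python
import math

def highpass(samples, cutoff_hz, sample_rate=44100.0):
    """One-pole RC-style high-pass filter: attenuates content
    below cutoff_hz while passing higher frequencies."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        # Each output is the smoothed difference of the input,
        # which removes slow (low-frequency) movement.
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant (0 Hz) signal, like DC offset or sub-bass rumble,
# is removed almost entirely after the filter settles:
dc = [1.0] * 2000
print(abs(highpass(dc, 120.0)[-1]) < 0.01)  # True
```

Run every non-bass track through a cut like this (at up to 120 Hz) and the low octaves are left free for the bass drum and bass.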

Fundamental Frequencies and their Harmonics.


When notes are played you expect their fundamental frequency to sound each time, but you will hear much more than just the fundamental. When an instrument sounds (playing notes), there is a fundamental frequency range you can expect it to occupy: the frequency range of that particular instrument. Recorded instruments such as vocals also contain reverb and delay from the room they were recorded in, and quite a few instruments come with body, snare or string noises as well (even those nasty popping sounds). The whole frequency range of an instrument is made up of its fundamental frequencies, its harmonics and several other sounds. As we mix, we like to think in frequency ranges: the range we expect the instrument or track to play in (the fundamental frequencies). That way we know what is important (the frequency range of the instrument or track) and what is less important (the frequencies that fall outside this range).

Harmonics.
A harmonic of a wave is a component frequency of the signal that is an integer multiple of the fundamental frequency. For example, if f is the fundamental frequency, two times f is the second harmonic, three times f is the third harmonic, and so on. The harmonics are all periodic with the fundamental frequency, and they drop in level as they go up the series.

Harmonics are multiples of the fundamental, so the second harmonic of an A at 440 Hz is 440 x 2 = 880 Hz. Harmonics stack up quickly across the whole frequency spectrum: you can expect the range from 4 kHz to 8 kHz to be filled with harmonics. If you are looking for some sparkle, the 4 kHz to 8 kHz range is the place to be. From 8 kHz up to 16 kHz, expect all the fizzle and sizzle (air). The hi-hat sounds in the 8 kHz to 16 kHz range, and this is where the crispiness of your mix resides. As the harmonics go up in frequency, their amplitude or volume decreases: the fundamental plays loudest, and each successive harmonic is softer.
Here are some instruments with their fundamental ranges and harmonic ranges.
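The harmonic series is simple enough to generate directly (illustrative function of our own, following the convention above where the fundamental is the first harmonic):

```python
def harmonics(fundamental_hz, count=8):
    """The first `count` harmonics of a fundamental: integer
    multiples of the fundamental frequency, with the
    fundamental itself counted as the first harmonic."""
    return [fundamental_hz * n for n in range(1, count + 1)]

# Harmonic series of the A above middle C:
print(harmonics(440.0, 5))  # [440.0, 880.0, 1320.0, 1760.0, 2200.0]
```

Note how fast the series climbs: by the tenth harmonic a 440 Hz note is already contributing energy at 4.4 kHz, the "sparkle" region described above.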

In this chart you can see that the highest fundamental frequency (the violin) is 3136 Hz. So as a general rule you can say that all fundamental frequencies stop somewhere below 4 kHz. For most instruments, common notes are played in the lower frequency range, below 1 kHz. You can also see that the lowest range of a bass drum is below 50 Hz, and of a bass at about 30 Hz. This means the area from 0 Hz to 30 Hz is normally not used by playing instruments; it contains mostly rumble and pop noises, and is therefore unwanted. Cutting heavily with EQ in this area takes the strain of unwanted power out of your mix, leaving more headroom and a clearer mix as a result (use the steepest cutoff filter you can find for this). Try to think in ranges when creating a mix inside the whole frequency spectrum. Anticipate where to place instruments and what you can cut from them to make headroom (space) for others. Need more punch? Search in the lower range of the instrument, up to 1 kHz (4 kHz at most). Need more crispiness? Search in the higher ranges of the instrument, 4 kHz to 12 kHz, where the harmonics sit. Once you can anticipate where things can be done in the spectrum, you can decide how to EQ a mix, or use compression, gates, limiters and effects to correct it. Cutting out what is not needed and keeping what is needed is how a mix starts. Starting a mix means getting a clean mix as a whole before adding more into it. Effects like reverb or delay will be added later on (in the static mix); let's first focus on what is recorded and on getting that clean and sounding good.

Recorded Sound.
First and foremost, composition-wise and recording-wise, all instruments and tracks need to be recorded clean and clear. Use the best equipment you have when recording tracks. Even when playing with MIDI and virtual instruments, all recordings need to be clean, clear and crisp. The recorded sound matters, so recording as well as you can is always a good thing. In mixing, the recorded sound can then be adjusted towards what we find pleasant to hear. Knowing where an instrument or track will fit in gives you an idea of how to adjust it, and also of how to record it. Getting a mix where you hear each instrument play (separation) while still keeping some togetherness as a whole also means thinking composition-wise and recording-wise.

Cutting / Removing is better than Adding / Gaining.


Throwing in reverb or delay (too early) will spice up the sound of instruments, and most beginners start by adding these kinds of effects, trying to get more of a sound they like. Well, just don't! You do not have to add effects at first; you have to decide what will stay and what must go. As well as setting up some togetherness across all the combined tracks, you will need headroom for the creative freedom you will add to the mix later. It is quite easy to fill your mix with mud; adding a reverb or two will do it. It is quite easy to make a booming sound by adding all kinds of effects or just pumping up (boosting) the EQ. Taking mud away once you have added it is a hell of a job. So starting with a nice clean mix that keeps only the important sounds (without adding) is far better and leaves less chance of muddiness. Remember to do more cutting than boosting or gaining. Manual editing comes first: decide what must be removed and what can stay, leaving some headroom for further mixing. This is quite a task. In most cases EQ (equalization) is the tool for working on the frequency spectrum (range) as a whole, but in a DAW you can also delete what is not needed, or mute it. You can decide to cut all the lower frequencies out of a hi-hat simply because you expect them not to be useful, leaving some frequency space (headroom) in the lows for other instruments to play in. This kind of cutting (on the hi-hat) in the lower frequency range, leaving the low frequency space unaffected, is how you give every instrument its own place inside the whole frequency spectrum of the mix. Level (fader), balance, EQ and compression (with limiting and gating) are good tools for a basic mix setup, and a good start means better results later on, when you are adding more to the mix to make it sound better and more together. Starting with a clean mix is starting with a clean slate. With EQ, for instance, cutting or lowering can be done with a steep (narrow) bell filter, while raising is better done with a wider bell filter.
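The narrow-cut versus wide-boost idea can be illustrated with the widely used peaking-EQ biquad formulas from the Audio EQ Cookbook (Robert Bristow-Johnson). This sketch only computes the filter's magnitude response, enough to see how Q shapes a bell; the function name and defaults are ours:

```python
import cmath
import math

def peaking_eq_response(f, f0, gain_db, q, fs=44100.0):
    """Magnitude response in dB at frequency f of a peaking-EQ
    biquad (Audio EQ Cookbook coefficients). High Q gives a
    narrow bell, low Q a wide one."""
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    # Biquad coefficients for a peaking EQ:
    b = [1 + alpha * a_gain, -2 * math.cos(w0), 1 - alpha * a_gain]
    a = [1 + alpha / a_gain, -2 * math.cos(w0), 1 - alpha / a_gain]
    # Evaluate H(z) on the unit circle at frequency f:
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z ** 2) / (a[0] + a[1] / z + a[2] / z ** 2)
    return 20 * math.log10(abs(h))

# A steep -12 dB cut at 300 Hz (high Q) barely touches 1 kHz:
print(round(peaking_eq_response(300, 300, -12.0, 4.0), 1))   # -12.0 at center
print(round(peaking_eq_response(1000, 300, -12.0, 4.0), 1))  # close to 0 dB
```

With a high Q the cut is surgical; drop the Q for a boost and the same formulas spread the gain gently over a wide band, which tends to sound more natural.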

The Master Fader.


What not to do while mixing is adjust the master fader every time you need to correct the overall level of your track: keep the master fader at 0 dB at all times. (Only when you are using the master fader to adjust the main volume of your monitor speakers, headphones or listening system is it acceptable to adjust that single master fader of your desk while mixing.) This means that all other master faders (soundcard, recording program, sequencer, etc.) must be left in the same 0 dB position while mixing. The same goes for the master balance (master pan) on the summing bus: keep it centered at all times. The main reason is simple: the master fader is not for mixing, so leave it alone.

When you set the main master bus (summing) fader below 0 dB you lower the overall volume. This might seem plausible, but especially on digital systems you risk not hearing distortion while you push the instrument faders upwards. By lowering the master fader you also lose dynamic range: internal mixing can go over 0 dB (creating internal distortion) without it being visible on the VU meter or lighting the limit LED, so you get no warning that you are going over 0 dB. When a signal goes over 0 dB on a digital system, the signal distorts (set your DAW to 32-bit float processing), but you will not necessarily notice the distortion when it happens internally. Whether you can hear it or not, this is (mostly) not allowed. Try to keep all master faders and the master balance in the same position when mixing, preferably at 0 dB.

Also remember that the human ear hears frequencies differently at different volumes (loudness). Listening at low volume reveals the mix to your hearing in a certain way; when you raise the volume it will sound slightly different. Loud and soft listening are close, but they differ. So if you like it loud, play your mix softly and see what happens to the sound (does anything disappear?). It is a good check to see whether your mix holds up played loud as well as soft. How human hearing responds is shown in this chart.

This chart shows different loudness levels. You can see that the frequency range between 250 Hz and 5 kHz is fairly unaffected by playing loud or soft, but the 20 Hz to 250 Hz range differs greatly in loudness between loud and soft playback. The higher frequencies also translate differently when played loud or soft. This is how human hearing perceives loudness.
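A standard numerical stand-in for this frequency-dependent sensitivity is the A-weighting curve defined in IEC 61672. It is only an approximation of the equal-loudness contours at moderate levels, but it shows the same shape the chart does (the function name is ours; the constants are from the standard):

```python
import math

def a_weighting_db(f):
    """A-weighting (IEC 61672): approximate relative sensitivity
    of the ear at frequency f in Hz, in dB, normalized so that
    1 kHz reads 0 dB."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000)))  # 0 by definition
print(round(a_weighting_db(100)))   # about -19: lows need far more power
```

This is exactly why a bass line that feels balanced at loud monitoring levels can vanish when you turn the speakers down.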

Instruments.
Everything that you record on a track is likely to be an instrument. Common instruments are drums, bass, guitar, keyboard, percussion, vocals, etc. So when talking about instruments we mean the full range of available instruments or sounds, each placed on its own single track.

Instrument Faders.
When you mix, you only adjust the instrument faders to set the volumes (levels) of the different instruments or single recorded tracks (don't touch that master fader). Hopefully you have recorded every instrument separately (drums, bass, guitar, keyboard, vocals, etc.) on single tracks, labeled from left to right on your mixer. Each fader adjusts the volume (level) of a single instrument or track, and the total is summed by the master bus fader. It is wise to start with drums on the first fader, then bass. The remaining faders can hold guitar, keyboard, vocals, etc., whatever instruments you have recorded.

Separation and Planning, Labeling and placement on a mixer.


Most likely you will start with the bass drum on fader 1, working upwards with snare, claps, hi-hat, toms, etc., each on their own fader (2, 3, 4, 5, 6, and so on), so the whole drum kit sits on the first faders. Then place the bass, guitar, piano, keyboard, organ, brass, strings, background vocals, lead vocals, etc. on the next faders. You can use any kind of system. If you have send tracks, place them at the far right of the mixer, just next to the master fader. Be sure to label all tracks and to set the fader at 0 dB and the pan at center for each mixer track. Labeling the names of the tracks (instruments) on a mixer keeps everything visible; most digital sequencers allow naming a track on the mixer. It is also good to work from the loudest instruments (drums, bass, etc.) towards the softer instruments. Plan this on your mixer from left to right, faders 1, 2, 3, 4, 5 and so on. The bass drum will most likely be the loudest peaking sound, so place it first, on the left. If you have no drums on your tracks, just work out which sounds will be mixed and heard the loudest and which will be heard more softly.

To make things easier to understand, we use labeling the drums as an example. Keeping things separated when recording drums is a must. You can do much more in drum mixing when the bass drum, snare, claps, hi-hats, toms, etc. are each recorded on their own track (separately). This means using more tracks on the mixer, but you are rewarded with flexibility while mixing. Nowadays, with digital recording, sequencing and sampled instruments, the drums often come from a sampling device or drum synth, or are recorded with multi-microphone setups. As long as your recording technique allows you to separate tracks or instruments, you will profit from it while mixing. For sampled instruments or synthesizers that can output over several tracks, it can likewise be rewarding to separate each sound, giving each one a single track on the mixer. Again, spreading and separation works best and is the most common mixing technique. Deep, low sounds spread across the panorama are not a good thing: the fundamental instruments (bass drum, snare, bass, main vocals) must have a center placement, and any variation off-center will be noticeable. Follow the panning laws for fundamental and non-fundamental instruments: fundamental lower frequencies stay centered and higher frequencies go further outwards; lower non-fundamental instruments sit closer to the center, higher instruments further out. Use a goniometer and a correlation meter. When working on DAWs (digital audio workstations), keep the goniometer, correlation meter, level meters and spectrum display available as constant checking tools. Maybe even place a second monitor, or another computer, to do this job.
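The correlation meter mentioned above boils down to a normalized cross-correlation between the left and right channels over a block of samples. A minimal sketch (illustrative function of our own):

```python
import math

def stereo_correlation(left, right):
    """Phase-correlation meter reading for a block of samples:
    +1 = channels identical (dual mono, fully mono-compatible),
     0 = fully independent channels,
    -1 = identical but phase-inverted (cancels when summed to mono)."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0

sig = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(1024)]
print(stereo_correlation(sig, sig))                # about 1.0 (dual mono)
print(stereo_correlation(sig, [-s for s in sig]))  # about -1.0 (out of phase)
```

Readings hovering near or below zero warn you that wide stereo material will thin out or disappear on a mono playback system.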

Sound Systems.
As with many questions about sound systems, there is no single right answer. A well-designed mono system will satisfy more people than a poorly designed or implemented two-channel sound system. The important thing to keep in mind is that the best loudspeaker design for any facility is the one that works effectively within the programmatic, architectural and acoustical constraints of the room, which means (to paraphrase the Rolling Stones) "you can't always get the system that you want, but you find sometimes that you get the system that you need." If the facility design (or budget) won't support an effective stereo playback or reinforcement system, then it is important that the sound system be designed to be as effective as possible. For recording, a room with as little acoustics as possible is preferred; for monitoring, a room with some acoustics (room reverberation). Quality is an assurance, but when on a budget, at least choose equipment with little or no background noise.

Mono or Stereo.
Well, this question is asked and debated. For me and many others, I like all tracks to be
stereo, so I do not like to record in mono at all. The fundamental instruments (bass drum,
snare and vocals) are panned straight to center and upfront. These can be recorded in
mono, or have their original signal converted to mono; this assures that the left and right
speakers play exactly the same signal and makes them appear dead center, where they
should be. Most of the time I will convert mono tracks to stereo (left and right identical)
or just record in stereo even when it is a mono signal. So it is no mono for me, but this can
be debated. Of course I respect that the fundamental instruments stay straight centered all
the time. Especially when using a computer or digital recording and sequencing software,
working in stereo all the time allows you to have all effects and all channels in stereo.
Most digital mixers and effects like delay, reverb, phaser, flanger, etc. work in stereo and
need to sound in stereo anyway. Some digital systems do not perform that well when
playing a mono signal, so stereo creates fewer problems with digital systems. Of course,
working in complete mono would reduce correlation problems, but we mix in stereo with
two speakers. It is better to have all tracks in stereo even when a bass or guitar was
actually recorded in mono. I always convert from mono to stereo or start by recording in
stereo; this is just advice. As long as the original signal is exactly the same left and right,
you can work with a mono signal in stereo mode. Knowing your tracks are all in stereo,
you no longer have to worry about mono or stereo tracks at all (or worry whether an effect
or plugin is outputting correctly). You just know it is stereo all the time! This can help
with setting up and makes things easy. A well-recorded mono sound source, on the other
hand (recorded mono, or stereo with both channels identical), can be placed with relative
ease onto the sound stage, allowing you to handle much better what effects should be
applied and how, with regard to your neighboring instruments and their positions and
frequencies in the mix. Stereo sounds that sway around the panorama, like synths, can be
hard to handle, especially when you have a bunch of these swaying instruments inside your
mix. In the natural world, a dry signal is transmitted as mono, but with reverberation
added it is perceived as stereo by our two ears. In steady mixing, mono signals also work
best; even when they fill up a stereo track with both channels playing the same sound, they
give a more steady and natural mix. Remember you can always add an effect to make
instruments sway around. Recording a dry and clean signal is rewarded later, when mixing
purposes need to be free and creative. If two mono sound parts share the same frequency
range, simply try panning them slightly apart, one to the right, the other to the left. A
couple of notches either side is usually enough. If you must record in stereo, use two mono
channels to capture left and right respectively. Test your mix in mono mode as well as in
stereo mode. Use the mono button on the mixing desk to sum the channels together into
one mono channel; this will put all the sounds into the center. Listen for phasing or any
sounds that might disappear, so you can correct them. Use a correlation meter, goniometer,
spectrum analyzer and level meter on the master bus, so checking tools are available when
needed.
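To make the mono-compatibility check concrete, here is a minimal sketch of what a correlation meter computes and what the mixing desk's mono button does. The function names are our own illustration, not any particular plugin's API.

```python
import math

def correlation(left, right):
    """Correlation meter reading: +1 = both channels identical
    (fully mono-compatible), 0 = unrelated, -1 = fully out of
    phase (the signal cancels when summed to mono)."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0

def mono_sum(left, right):
    """The mono button: sum both channels into one center channel."""
    return [0.5 * (l + r) for l, r in zip(left, right)]

# A mono source copied to both channels reads +1 on the meter ...
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
print(round(correlation(sine, sine), 6))     # 1.0
# ... while a polarity-flipped channel reads -1 and vanishes in mono.
flipped = [-s for s in sine]
print(round(correlation(sine, flipped), 6))  # -1.0
print(max(abs(s) for s in mono_sum(sine, flipped)))  # 0.0
```

This is exactly why the text advises listening for sounds that disappear when the mono button is pressed: a strongly negative correlation means parts of the mix cancel out.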

Basic Mixing.
This is going to be hard to explain, but an example will help you get started mixing. Say
you have recorded a Pop, Rock, House or Ballad song. Now that you have finished
recording it (composition-wise and recording-wise, in audio or MIDI), you will need to mix
it to make it sound better and more together. At first, separation is needed: cleaning and
clearing (single tracks). Second, quality and togetherness of the mix is what you're aiming
for: mixing it up (groups towards the master bus, summing up). What you're not aiming for
is loudness or level; how loud your mix sounds is of lesser importance than having your
mix sound well together. Togetherness is what you're aiming for. So watching the VU
meter go to maximum levels is not so important while mixing; pushing all faders upwards
all the time will get you nowhere. Forget how loud your mix is sounding; that is called
mastering and is a whole different subject. Mastering comes after you have finished
mixing. Mixing is what you're after, and that is why it is called mixing, for it means
cleaning, cutting and separation as well as togetherness.

Mixing steps.
We have three sections to fulfill while mixing from beginning to end. First the Starter Mix,
where we set up a mix and start off working inside dimensions 1 and 2. Then the Static
Mix, where we apply dimensions 1 and 2 and introduce dimension 3, as a final
three-dimensional mixing stage plan. Finishing this part, the Starter and Static Mix give a
basic reference static mix for later use, and need to be worked on until the static mix
stands as a house stands on its foundation. Then finally the Dynamic Mix, where we
introduce automated or timelined events. Make progress in mixing: plan on finishing your
projects within a predetermined period of time. This is the only way to see your
development over time. Don't fiddle around with DAW functions but be concrete; improve
your mixing skills and decision-making capabilities, then learn to trust them. Give yourself
a limited amount of time per mix. A static mix should be 80% done after a few hours of
work. The rest is fine tuning and takes the largest amount of time. Build confidence in
rhythmic hearing. Trust your ears when listening for rhythmic precision and keep it
natural. A DAW and its graphic interface let you see all you need, but learn to trust your
ears, not the display. When rhythmic timing is needed, your ears will decide whether
something is early, late, or spot on. Trust your ears. When you are not happy with the
results, make a copy of your project, remove all insert and send effects and set all panning
to center. Start right from the beginning and redefine your stage plan with a clear mixing
strategy: reset levels, pans and EQ to zero and start over, removing all effects and plugins.
The key to obtaining a good mix lies in intelligently distributing all events in the three
spatial dimensions: width, height and depth.

The Starter Mix.


Basically, we are staying inside dimensions 1 and 2. We will explain the dimensions later
on, but for a starter mix we only use Fader, Level, Balance, Pan, EQ, Compression and
sometimes some more tools like Gate and Limiter. Our main goal is togetherness, but as a
contradiction we will explain why we need to separate first. A starter mix will only start
off well when we first separate the bad from the good. Rushing towards togetherness never
does any good, so togetherness comes second in line. To understand what we must do (our
goal for starter mixes), we need to explain the stage and the three dimensions now.

Panning Laws.
Crucial to understanding the first dimension of mixing are the panning laws. Frequency
ranges or instruments/events with a low range are placed more in the center. High ranges
are placed more outwards, to the left or right. This means that bass drum, snare, bass and
main vocals (the fundamentals) are always dead center, especially with their low
frequency content. All other instruments or events (non-fundamentals) are placed more
outwards; even if they contain lows, when they are not part of bass drum, snare, bass or
main vocals, they are placed outwards to the left or right. Lows more centered and highs
more outwards. Also keep in mind that send effects placed more in the center will draw
outward instruments towards the center, so the placement of a delay or reverb must be
considered per instrument (fundamental or non-fundamental) it is required for. Because of
the masking effect, the time and effort of separate left/right effect placement is only
justified when the reverb part becomes too large to convey all the spatial information. The
more complex a mix, the more time and effort is required for placing all events accurately
within the three dimensions. We start off with panning in the first dimension.

Before mixing starts, make a sketch of your panning strategy (stage plan). Anything that is
not bass, bass drum, snare or lead vocals should not be in the center. Instruments present in
the same or overlapping frequency sectors should be placed at opposite ends,
complementing each other within the panorama. Well-panned and carefully automated
panning often creates greater clarity in the mix than unnecessary EQing. If the mix sounds
mushy, your first step is panning; only then resort to EQ. Be courageous, try extreme
panorama settings, and keep the center free for the fundamental instruments. Never control
panning through groups, only on the individual channel. Never control straight panning or
expanding with automation; use only small panning and expanding settings for clearing up
a mix temporarily.
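As an illustration of a panning law, here is a sketch of the common equal-power (-3 dB center) law. Note that this is just one common choice; DAWs typically offer several pan laws, and the function below is our own illustrative implementation.

```python
import math

def equal_power_pan(pos):
    """Equal-power (-3 dB center) pan law.
    pos runs from -1.0 (hard left) through 0.0 (center) to +1.0
    (hard right). Returns (left_gain, right_gain). Because
    left^2 + right^2 is always 1, the perceived level stays
    constant as a sound moves across the panorama."""
    angle = (pos + 1.0) * math.pi / 4.0  # map -1..+1 onto 0..90 degrees
    return math.cos(angle), math.sin(angle)

for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    l, r = equal_power_pan(pos)
    # At center both speakers play at gain 0.707, i.e. -3 dB each.
    print(f"pan {pos:+.1f}: L gain {l:.3f}, R gain {r:.3f}")
```

This also shows why, as mentioned later in the text, the relative volume of a signal changes when it is panned: the gains of the two speakers are traded against each other along the curve.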

The Stage.
With an orchestra or a live band playing (we are going a little ancient here), there is
always a stage to play on. Back in the old days people could only listen to music when it
was played by real performing players or artists. There were no means of electricity or
even amplified sounds coming from speakers. Furthermore, a human is always hearing
natural sounds in life. Listening to music simply appeals most when the instruments are
staged and naturally arranged. We humans have been used to listening to music in this
fashion for ages, and by now the common pattern is inside our DNA. Human ears like
hearing naturally and dislike unnatural hearing. When playing music we hear Volume,
Panorama, Frequency, Distance and Depth. Therefore we talk about the musical stage.
Mixing is the art of making a stage; this is called orchestral placement and sets all players
in a defined space of the stage where they are expected to play. For any listener it is more
convenient to listen as naturally as possible, so a stage is more appealing for the human
brain to recognize and understand. A live concert of an orchestra might reveal the stage
better in the picture below.

No matter what stage is set, what you are trying to accomplish is stage depth. The next
chart displays a setup plan for recording and mixing a whole orchestra. We call this
orchestral placement.

In this chart we present a whole orchestra of instruments. The x-axis shows Panorama,
Pan or Balance (left, center and right). The y-axis shows depth (stage depth). As listeners
we like to hear where instruments are; some are upfront, some are more at the back of the
stage. A mix would be quite boring and unappealing to the human ear if all sounds seemed
to come from one direction only (mono). We as humans can perceive Volume (level),
Direction (Panorama, Pan or Balance), Frequency Spectrum and Depth. These make up
the three dimensions of mixing, taking into account that we are using two (or more)
speakers. It is quite common to think in stage depth when mixing. Even when your
material is modern funky house music, thinking in stage depth might still help you mix a
good, understandable mix and give you some idea where to go and what to accomplish.

Stage Planning.
So it is better to have some kind of system and planning before starting a mix, knowing
where to place instruments or single tracks inside the three dimensions. Basically, all parts
of the dimensions (we explain the dimensions later on) are easily overcrowded. Therefore
we must use a system to give all instruments a place inside the dimensions, just to
un-crowd them. Making a rough sketch can simplify and visualize the mix, so you will
have some pre-definition before you actually start mixing. You will know what you're
doing and what you are after (your goal in mixing). We start with a basic approach: the
most crucial or fundamental instruments first.

The bass drum is most fundamental: first because it keeps the rhythm, and second because
its fundamental frequency range is mainly lower or bottom-end based (dynamic high
level). All main fundamental instruments are placed dead center. The snare is important
for the rhythm, but does not play as many lower frequencies as the bass drum. The bass is
fundamental because almost all of its notes play in the fundamental lower frequency
range. Vocals must be understood and upfront, and are therefore fundamental to the whole
mix. As you can see, all important fundamental instruments are planned in the center
inside Dimension 1 (Panorama).
All instruments that are fundamental and play lower frequencies must be centered,
because two speakers, left and right, playing at the same time give more loudness and can
therefore play and represent lower frequencies best (a centered signal comes out evenly on
the left and right speaker).
The center position is now a bit crowded by the fundamentals: bass drum, snare, bass and
main vocals. To give them some more space from each other (separation), dimension 1
(panning), dimension 2 (frequency spectrum or frequency range) and dimension 3 (depth)
are used to separate them and give some idea of what is in front of what. Most likely you
would like the main vocals to be clear and upfront. Think of it as a stage setup. The bass
(or bass player) would stand behind the vocals; on a real stage the bass player might move
around a bit, but for modern mixing the bass is still dead centered (because of transmission
problems in the lower frequency range or bottom end it is only placed center, and we are
still busy with the starter or static mix, so no automation can be used). As the drums would
be the furthest away backwards on the stage, we place them in the back but still dead
center. Placing these fundamental instruments in the center gives them definition and
clearness, without interfering instruments overlapping. Especially the bass drum and bass
must be centered to make the most out of your speakers. As the spectrum will fill up in the
center because the bass drum, snare, bass and vocals (fundamentals) already fill it, discard
and leave this area alone (off limits) for any other instruments (non-fundamentals). Other
instruments can be placed in dimension 1 (panorama) and panned or balanced more left or
right. This is common practice in many mixes, but a beginner will hesitate to do this
(panning). Still, think of it this way: guitars and keyboards on stage are always placed left
and right, simply because the stage would be crowded in the center if all players took that
position. Imagining where an instrument or player will be placed is also being a bit
creative and, with experience, adding to what a human perceives as natural, keeping it all
understandable for the listener (finding the clear spots). Keep in mind that lower
frequencies play better when played by both speakers (centered), and therefore higher
frequencies can be panned more left or right (outwards). Fundamental instruments with
bottom end or lower frequency ranges must be more centered, while higher frequency
range instruments must be panned more outwards. Next we will place the other drum
sounds.

As a decision we place the hi-hat next to the snare, by panning the hi-hat a bit to the right.
Planning the stage or dimensions is a creative aspect; the hi-hat is placed right of the
snare, but it could also be placed left. This depends on the natural position of the hi-hat;
for setting the stage we can look at real-life drum placement and take this into account
while planning, so mostly the hi-hat is placed more to the right. Now we have the right
speaker playing more highs than the left, because we placed the hi-hat more to the right.
To counteract this and give the left speaker some more highs, we can place an existing
shaker to the left. This counteracting gives a nice balanced feel between left and right,
because mostly we like the whole mix to play balanced throughout. The toms are only
played sparsely in time (toms are just suddenly played once in a while), so they are less
important in planning, but we still place them to show where they are. For the toms we
place the hi-tom far out and the low-tom far out on the other side, with the mid-toms in
between. The overheads are placed behind, and with some stereo expanding or widening
this will give some room and sound more natural. The main vocals are upfront. The rear
can be used for the background vocals (choirs) and strings, bongos, congas, etc. Next we
place some other instruments, looking for not-so-crowded places to put them in.
Separating more and more.

See that Guitar 1 and Guitar 2 are placed right and left (these could also be guitars and
keyboards), so they compensate for each other and keep a nice balance. The Synths and
Strings also compensate and stay in balance, though with some more distance (we use the
strings as a counterweight here). Strings can also be placed at the back of the stage with a
stereo expander to widen the sound and act as a single sound filler. Remember that when
you place an instrument, it will likely need to be counteracted by another instrument on
the opposite side. Also keep in mind that instruments playing in the same frequency range
can be used to counteract and balance the stereo field. So we can say the hi-hat and shaker
complement each other (togetherness), as do Guitar 1 and Guitar 2, and the synth with the
strings. Thus we keep a balance across left, center and right. Don't be afraid to place
non-fundamental instruments more left or more right, keeping them out of the already
crowded center. Unbalanced mixes will sound uneven; when the whole outcome of the
mix is centered, we can hear the setup (stage plan) better and more naturally. When the
left speaker plays louder than the right speaker, it makes for unpleasant (unbalanced)
listening. The total balance of your stage planning should be centered. Adjusting the
master balance for this purpose is not recommended. Keep the master balance centered
and the master fader at 0 dB, and likewise leave effects off the master bus; we always try
to correct things inside the mix, not on the master bus. Whenever you have an unbalanced
panorama, go back to each instrument or single track and re-check your stage planning.
Stage panning or balancing in the first dimension is one of the first tools, before setting
anything else. With the help of dimension 2 (boosting trebles for close sounds or cutting
higher frequencies for sounds further away) and dimension 3 (reverberation, room,
ambience) we can create some kind of distance and depth. A final mix or mixing plan
should refer to all of this, depending on the musical style and what you want to
accomplish as a final product. Also, do not hesitate to use the panorama; beginners are
reluctant to do so.

Although this looks a bit crowded when you have all instruments playing together at the
same time, it is likely you will not have all instruments inside the mix anyway, or not
playing all the time together (composition, muting). It would be quite boring if all
instruments were audible throughout the whole mix. We do fill in our stage plan with all
our instruments. We give an indication of a general setup and a good starting point;
planning where instruments play and giving them a place is defining your mix, a
foundation to build your mix on. This planning is called stage depth because almost any
mix has some relation to what the human ear likes to visualize in our brains. Most likely,
natural placement is the way to go and is most common. Still, you can be creative and
come up with any kind of planning or setup. Remember it is likely for instruments that
need a bottom end to stay more centered (especially the fundamentals). All other
instruments that do not need a lower bottom end (non-fundamentals) can be placed more
to the left or right (apart from the dead-centered and upfront main vocals). Decide what
your fundamental instruments are, then set up panorama and depth (distance) accordingly.
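A stage plan like the one described can be sketched as simple data before mixing begins. The positions below are illustrative choices for the instruments discussed in the text, not fixed rules; only the centering of the fundamentals is treated as a hard constraint.

```python
# A rough stage-plan sketch: pan runs -1.0 (left) .. +1.0 (right),
# depth runs 0.0 (upfront) .. 1.0 (back of the stage).
# The exact numbers are illustrative choices, not fixed rules.
FUNDAMENTALS = {"bass drum", "snare", "bass", "main vocals"}

stage_plan = {
    "main vocals": (0.0, 0.1),   # dead center, upfront
    "bass":        (0.0, 0.3),   # behind the vocals, still centered
    "snare":       (0.0, 0.5),
    "bass drum":   (0.0, 0.6),   # drums furthest back, dead center
    "hi-hat":      (+0.2, 0.5),  # just right of the snare
    "shaker":      (-0.2, 0.4),  # counterweight on the left
    "guitar 1":    (+0.6, 0.4),  # guitars/keys compensate each other
    "guitar 2":    (-0.6, 0.4),
    "synth":       (+0.4, 0.7),
    "strings":     (-0.4, 0.8),  # widened filler at the back
}

def check_plan(plan):
    """Fundamentals must stay dead center; the rest should balance."""
    for name in FUNDAMENTALS:
        assert plan[name][0] == 0.0, f"{name} must be dead center"
    balance = sum(pan for pan, _ in plan.values())
    return balance  # near 0.0 means left and right counterweigh

print(check_plan(stage_plan))  # 0.0 -> the panorama is balanced
```

Writing the plan down this way makes the counterweighting idea checkable: every off-center placement is answered by one on the opposite side, so the pan positions sum to zero.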

3D - Three Dimensional Mixing.
Strangely, creating togetherness means separating more than overlapping; you will have to
separate first. What most beginners do not know about is the masking effect, where two
instruments that play in the same range mask each other. Try having two guitars in mono
mode, then drop one guitar's level by -15 dB or more. You cannot hear this guitar
anymore, can you? Well, now pan this guitar to the left; you can hear it again, even though
it is now -15 dB lower than the other guitar. Basically, when leaving every instrument
centered (no panorama), the center position gets quite crowded and quite boring (and this
enhances the masking effect). Masking is so common in mixing that we are in a constant
struggle to avoid it. By avoiding masking we can have more dynamics, or to say it the
other way: "we have more room for each instrument to play and be heard, with less
volume level needed, therefore leaving more room for others to be heard." Therefore every
instrument will get its own place inside the three dimensions. Below is an example of the
three dimensions.
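The -15 dB figure in the guitar example translates to linear amplitude like this. The helper names below are our own small sketch of the standard decibel formulas:

```python
import math

def db_to_gain(db):
    """Convert a level change in dB to a linear amplitude factor."""
    return 10 ** (db / 20.0)

def gain_to_db(gain):
    """Convert a linear amplitude factor back to dB."""
    return 20.0 * math.log10(gain)

# Dropping the second guitar by -15 dB leaves only ~18% of its
# amplitude - easily masked when both guitars occupy the same
# spot in the panorama. Panning does not make it louder; it just
# moves it out from behind the other guitar.
print(round(db_to_gain(-15.0), 3))  # 0.178
print(round(gain_to_db(0.5), 1))    # -6.0 (halving the amplitude)
```

So the masked guitar still carries almost a fifth of its original amplitude; the ear loses it only because a louder signal in the same position and frequency range covers it.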

The Three dimensions.

1. Width (Left + Center + Right): Panorama, Panning, Widening and Expanding.

2. Height: Frequency, Level, EQ, Compression (Gate, mute, etc.).

3. Depth (Front to Back Space): Reverb & Delay, EQing Reverb & Delay.

Dimension 1 - Panorama.
Panorama is mostly achieved by setting Pan or Balance for each instrument on each
independent single track. Setting the panning to the left, the sound plays from the left
speaker; setting it to the right plays the sound from the right speaker; setting it to center
plays the sound from both speakers. Think of dimension 1 as Left, Center and Right: three
spectral places in dimension 1, Panorama. When it is more crucial to you, you can also use
5 places for naming panorama positions when mixing or planning stage depth: 9:00 (nine
o'clock), 10:30 (ten thirty), 12:00 (twelve o'clock), 1:30 (one thirty), 3:00 (three o'clock).
Panorama is a most underestimated effect in mixing (masking effect), just because turning
a simple pan or balance knob is easy to set up. Panorama in fact is a most important design
tool (option) and the first step in defining a mix (apart from the fader level). Use panning
first, before setting the fader level; apply the panning law, and note that the relative
volume of a signal changes when it is panned. Even when you're fully on your way with a
mix, turning all effects off (bypass) and listening to the panorama is often used to check
that a mix is placed correctly.

There is a mixing solution for deciding which instruments stay centered and which
instruments go outside of center. Instruments that are crucial or fundamental to your mix,
like bass drum, snare, bass and vocals, all stay in the center (fundamentals). Any other
instruments (non-fundamentals) will be more or less panned left or right. The most
common place for bass drum and bass is center, because two speakers playing at the same
time at center position will play lower frequency signals better. Panning or balancing
lower fundamental instruments left or right is therefore not recommended at all. Even
effects like delay or stereo delay can move instruments more left or right in time, so watch
out when using these kinds of effects on fundamental instruments. And as automation is
not part of the static mix, we do not use it. The main pathway is dead center, so even when
using a stereo delay, the main information should stay dead centered for fundamental
instruments. The snare and vocals are just as important, because the snare combines with
the bass drum rhythmically and vocals must always be heard clearly (so we also place
them dead center and upfront). By having the bass drum, snare, bass and vocals in the
center (fundamentals), there is not much center panorama and spectral room (dimensions 1
and 2) left over for other instruments to play in the center. For more widening of the
stereo sound (outside left and outside right), a stereo expander or widening effect (delay,
etc.) makes the stereo field more than 180 degrees and will widen the panorama even
more, giving some more space inside dimension 1 and more room to spread the
non-fundamentals around. Be courageous!
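One common way a stereo expander or widening effect works is mid/side processing. The sketch below is a minimal, hypothetical illustration of the idea only; real widening effects also use delays and filtering, as the text notes.

```python
def widen(left, right, width):
    """Mid/side stereo widening sketch.
    width 1.0 leaves the signal untouched, > 1.0 widens the
    panorama, < 1.0 narrows it (0.0 collapses to mono). Widening
    boosts the side (L-R) signal - exactly the part that cancels
    when summed to mono - so always re-check the correlation
    meter afterwards."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)            # what both speakers share (center)
        side = 0.5 * (l - r) * width   # what differs left vs right
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# A centered (mono) signal has no side part, so widening leaves
# it untouched - the fundamentals in the center stay put.
l, r = widen([0.5, -0.5], [0.5, -0.5], width=2.0)
print(l, r)  # [0.5, -0.5] [0.5, -0.5]
```

This also explains the warning that follows: the wider you make the side signal, the worse the mono sum and correlation reading can become.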

Do take into account that correlation problems (signals cancelling each other out in mono
mode) increase the more you widen or pan, so check for mono compatibility. Use a
correlation meter or goniometer to check. Maybe you have to reduce the stereo field to
prevent a mono mix from cancelling out instruments. Also, bass drum and bass can have
signals filling the spectrum left or right that need to be reduced; cutting these will keep
them more centered (in time) and keeps them from swaying around. As a general rule,
lower frequency range instruments or tracks are placed at center, while higher frequency
range instruments or tracks are panned more outwards. There are basically two ways of
perceiving the dimensions. First, panning from left to right in front of you, like a stage.
Second, the ambient effect: moving panning sounds right around your body, rather than
just from left to right in front of you, meaning you are in the center of the sound (ambient
or surround sound). Apart from the stage planning there is the listener's position. We like
the listener's position to be straight in the middle between the two speakers, hearing an
equally divided sound on both speakers overall (RMS, Left + Center + Right, LCR
spectrums).

Dimension 2 - Frequency Spectrum.


Frequency Range 0 - 30 Hz: Sub Bass, Remove.
Frequency Range 30 - 120 Hz: Bass Range, Bass and Bass Drum.
Frequency Range 120 - 350 Hz: Lower Mid-Range, Warmth, Misery Area.
Frequency Range 350 Hz - 2 kHz: Mid-Range, Nasal.
Frequency Range 2 kHz - 8 kHz: Upper Mid-Range, Speech, Vocals.
Frequency Range 8 kHz - 12 kHz: High Range, Trebles.
Frequency Range 12 kHz - 22 kHz: Upper Trebles, Air.
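The ranges above can be read as a simple lookup table. The band edges below follow the list in the text; the label wording is our own shorthand:

```python
# The frequency ranges from the text as a lookup table
# (upper band edges in Hz).
BANDS = [
    (30,     "Sub Bass (remove)"),
    (120,    "Bass Range"),
    (350,    "Lower Mid-Range (warmth, misery area)"),
    (2_000,  "Mid-Range (nasal)"),
    (8_000,  "Upper Mid-Range (speech, vocals)"),
    (12_000, "High Range (trebles)"),
    (22_000, "Upper Trebles (air)"),
]

def band_name(freq_hz):
    """Name the frequency range a given frequency falls into."""
    for upper, name in BANDS:
        if freq_hz < upper:
            return name
    return "above audible range"

print(band_name(60))      # Bass Range
print(band_name(440))     # Mid-Range (nasal)
print(band_name(10_000))  # High Range (trebles)
```

Such a table is handy when deciding, per instrument, which band holds its main sound (quality) and which bands can be cut away (reduction).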
The frequency spectrum or frequency distribution of a single instrument, or of the whole
mix, is the second dimension. It is understood that a bass is a low-frequency instrument
and will sound mostly in the lower frequency range, 30 Hz to 120 Hz (bottom end). Cut all
other instruments out of this range with a very steep filter. The frequency spectrum of a
mix is especially crowded in the lower 'misery' range, 120 Hz to 350 Hz (500 Hz), or
second bottom end, where almost all instruments play somehow. From 1 kHz to 4 kHz we
find most nasal sounds and tend to find harmonics starting to build up. The 4 kHz to 8 kHz
range can contain some crispiness and can sound clearer when boosted, but also unnatural.
A hi-hat will play mostly in the higher frequency range, 8 kHz to 16 kHz (trebles). So
giving each instrument a place in the second dimension where it belongs is important for
filling up the frequency spectrum. We tend to talk in frequency ranges, so words like lows,
mids or highs are common in the mixing department. Words like bottom end, lows, misery
area, trebles and mids are only indications of where to find the main frequency range. The
main tools for working with the frequency spectrum and making the sound of an
instrument fit inside a mix are EQ, compression and level. Tools like gating and limiting
can also prevent unwanted events from passing. There are two purposes for these tools.
First, to affect quality, thus boosting or cutting frequencies that lie inside the frequency
range of the instrument. Second, to reduce unwanted frequencies, which mostly lie outside
the instrumental frequency range, thus cutting what is not needed to play. Most
instruments, like the bass drum with its bottom and skin, have two frequency ranges that
are important. The bass drum must convey its rhythmic qualities, for instance. When a
bass instrument plays a note, it will have its own main frequency, its harmonics and
instrument sounds around it, like body and string-attack sounds. This is the frequency
range the instrument is playing in, its main sound. For bass this means a lot: we expect
that the range 0 Hz to 30 Hz can be cut, while leaving 30 Hz to 120 Hz (180 Hz) intact
(the first fundamental range of the bass). Higher frequencies can be cut or shelved out,
because this will separate the bass and give it place (space, headroom), leaving dynamic
room for the rest of the instruments. Using EQ on the bass this way, to make the sound
more beautiful (quality) and to leave some room for other instruments to play by cutting
out what is not needed (reduction), leaves headroom and will separate instruments. As you
can see, we boost or cut when doing quality-purposed mixing, and we mostly cut when we
are reducing. As a result we are likely to cut more and to boost less. We tend to cut with a
steep EQ filter and to boost with a wide EQ filter. The bass has now got a clear pathway
from 30 Hz to 120 Hz (180 Hz); maybe the bass drum is in the bass range (60 - 100 Hz),
but we try to keep all other instruments away from the bass range (0 - 120 Hz). The range
30 to 120 Hz (180 Hz) is mainly for bass drum and bass (especially in the center
spectrum). As this frequency spectrum is easily filled up, it is better to cut what is not
needed on all other instruments. You might think it is not necessary to cut the lows out of
the hi-hat, but it is best to know that the hi-hat will play in the higher frequency range; to
remove all lower-range frequencies, you can use a low cut with EQ here too. Now you
have separated the bass and the hi-hat from each other and have given each a place inside
the whole spectrum (tunneling, separation). The same applies to all other instruments that
make up the mix, even the effects used. Knowing where the ranges of each instrument are,
and having planned the panorama and frequency spectrum, will help you understand how
separation works when mixing; this builds the basic start of a mix, the foundation of the
house (reference or static mix).

The spectrum of a finished mix could look like the figure on the left (we have shown this
before); you can see a good loud 30 Hz - 120 Hz section, which is the range where the
bass drum and bass play with each other, and the roll-off down to 22 kHz. Though the sub
bass, 0 Hz to 30 Hz, is still quite loud in this spectrum, it is quite a bit lower than the
30 - 120 Hz range. In the figure on the left you can visualize the range of instruments and
their frequencies; refer to it whenever you need to decide on the instrumental frequency
range and what to cut out (reduction) and what to leave intact (quality). We have
discussed these subjects before. Dimensions 1 and 2 are most important for creating a
starter towards a static reference mix, so do not overlook these dimensions. Return to
these dimensions when your mix is not correctly placed, or sounds muddy or fuzzy
(masking). The volume fader and balance or pan knobs must be your best friends in
mixing and your first starting and reference points. Then refer to EQ or compression as a
second measure (gate or limiter also allowed). Knowing where instruments must be placed
according to plan works out best in dimensions 1 and 2.

Dimension 2, the frequency spectrum, also works a bit inside dimension 3: we perceive a
sound as upfront when its trebles (high frequencies) are loud, but as further back in depth
when its trebles are less loud. Use an enhancer to brighten dull sounds and keep them
upfront. When working with trebles above 8 kHz, always be sure to use
quality/oversampling EQ and effects.

Separating instruments in dimension 2, frequency range.


EQ can do a good job by cutting out the bottom end of all the instruments, both those
panned left or right (non-fundamental) and those panned dead center (fundamental). That
is why we will discuss some effects like EQ now, even though there is an EQ section
explained later on. Basically, the low bottom cut for the bass drum is a decision you can
make when you are combining bass drum and bass together. Most likely a 0 Hz to 30 Hz
cut can be applied to all instruments and tracks, even bass drum and bass. You can start
off using a low bottom cut from 0 Hz to about 30 Hz; this is most common.

The cutoff figure shown above would be a good cut for the most fundamental instruments like the Basedrum and Bass, but it really applies to all instruments and tracks, fundamental or not. Cutting from 0 Hz to about 30 Hz (50 Hz) removes some sub bass range as well as pops, low clicks and low rumble from every instrument. The 0 Hz to 30 Hz range is really sub bass territory: you hardly hear it at all, it is more something you feel than hear. If you want sub bass frequencies in your music, know that most speakers do not even reproduce them. When beginners believe a bass drum will gain more power by raising the whole 30 - 120 Hz range with EQ, please do not do this. You cannot hear the sub bass in the first place, and even with a big woofer it is barely audible (it fills up your headroom without being heard correctly). Even in a club or at a live event the bass drum mainly works around 60 - 90 Hz. In general, most household stereo systems do not reproduce bottom end frequencies below 50 Hz, or even below 100 Hz, at all (depending on the quality of the system and speaker set). Thinking that boosting the sub bass (0 - 30 Hz), or leaving it unaffected, will enhance your mix is a beginner's mistake; leaving it intact on instruments that are not fundamental is also a mistake. Do not hesitate to cut the 0 Hz to 30 Hz range out of all instruments, fundamental or not. With a steep low-cut EQ filter we have now removed some really low frequencies from all instruments and tracks, and therefore some unwanted loudness, leaving precious headroom and un-muddying the mix (masking), making it clearer (dynamically, rhythmically).
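To make the idea of a steep low cut concrete, here is a minimal sketch in pure Python of a one-pole high-pass (low-cut) filter. The function names and the 30 Hz cutoff are illustrative only; a real EQ plugin would cascade several such stages for a much steeper slope.

```python
import math

def highpass(samples, cutoff_hz, sample_rate):
    """One-pole high-pass (low-cut): attenuates content below cutoff_hz,
    e.g. sub bass and rumble below 30 Hz. The slope is only 6 dB/octave;
    real low-cut EQs cascade several stages for a steeper slope."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = a * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

# A 10 Hz rumble (below the 30 Hz cutoff) is strongly attenuated,
# while a 100 Hz bass tone passes almost untouched.
sr = 48000
rumble = [math.sin(2 * math.pi * 10 * i / sr) for i in range(sr)]
bass = [math.sin(2 * math.pi * 100 * i / sr) for i in range(sr)]
peak = lambda s: max(abs(v) for v in s[sr // 2:])  # steady-state peak
print(round(peak(highpass(rumble, 30, sr)), 2))  # well below 1.0
print(round(peak(highpass(bass, 30, sr)), 2))    # close to 1.0
```

The 10 Hz rumble is reduced to roughly a third of its level while the 100 Hz bass tone passes nearly untouched; a higher-order filter would attenuate the sub bass far more.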

The above figure shows a bottom cut and a highs cut, for a more distantly placed instrument.

We need our Bass to play without being overcrowded, and likewise the Basedrum, so keep 30 Hz to about 120 Hz (150 Hz) free for the bass drum and bass only. This way we create a clear, dead center blast of lower frequencies (L + R = C power) reserved for the Basedrum and Bass alone. Even fundamental instruments like the snare and vocals cause headroom problems and play somewhat inside the Basedrum and Bass range; cut them all.

A low bottom cut for all other fundamental instruments (snare and main vocals) is shown in the above chart. The snare and main vocals play somewhere in the lower end of the frequency spectrum, but not actually in the bottom end range (where the bass and bass drum are already playing), so we can probably cut some more, from 0 Hz up to 120 Hz (180 Hz). Second, the bottom end 0 Hz to 30 Hz range is for the most part filled with rumble, pops and other unwanted events, so cutting with a steep EQ filter is quite understandable just to be sure these events are removed, and to keep the lower fundamentals, bass drum and bass, free in their own 30 - 120 Hz range.

To avoid overcrowding we can also cut the bottom end of all the not fundamental instruments, leaving more space (headroom) for the fundamental instruments to shine and separate, avoiding muddiness and overcrowding (masking). Don't be afraid to cut more out of a synth or guitar; anywhere from 100 Hz up to even 250 Hz is quite understandable. This is where most beginners hesitate. It is better to apply a bottom end cut on all other instruments, just to un-muddy the lower frequencies and clear a path for the bass drum and bass to play unaffected. For the not fundamental (all other) instruments you can cut more or fewer lower frequencies with a steep low-cut filter or some good cutting EQ. This keeps pops, low clicks and rumble out of the mix and keeps the lower frequency range free. If there is any useful information left in the sub bass range at all, it would be the Bass, as it is the only instrument that can reach this low; therefore we leave the bass alone and cut the rest of the playing instruments. Normally, that is: sometimes a piano can reach this low, but it still does not contain a relevant sub bass range. Do not hesitate to use quite a lot of EQ shelving cutoff on all instruments; better to cut more than less. Apart from the Basedrum and Bass, a roll-off at 120 - 150 Hz is a good starting point; set it higher until you start to affect the main frequency range of the instrument. You can always adjust the cutoff frequency later for better results once the instrument is placed. Not fundamental instruments can be cut anywhere from 0 Hz up to 180 Hz; basically they almost never play in the C1 note range (octave). To find the lowest note an instrument plays, listen to it solo throughout the whole mix and find the lowest note and its frequency.

You decide where the cutoff frequency lies, but remember the Basedrum and Bass need room to shine; their main range runs from 30 Hz up to about 120 Hz (180 Hz). Any other instrument playing in this range will crowd it, which is better avoided (muddiness and masking). Leaving the lower frequencies to the Basedrum and Bass means deciding on cutoffs or roll-offs for all other interfering instruments.

The cutoff figure shown above would be a good cut for the not fundamental instruments like keyboards, synths, guitars, organ, vocals, etc. The low cut depends on your dynamic intent; the distance depends on how you control the highs. By listening to each instrument you can decide exactly where the cutoff frequencies lie. This can only be done when you understand the frequency range of the playing instrument and decide what needs to be heard and what does not. Most drums (all drums in the drum set) have two main frequency ranges, as do most instruments. Remember our stage planning: we now have to decide how our separation plan works out for each individual instrument or track. Use more cutoff on the not fundamental instruments. Subs (0 Hz to 30 Hz) can mostly be removed. The lower frequency range (30 Hz to 120 Hz, 180 Hz) is mainly for the Basedrum and Bass. The range between 180 Hz and 500 Hz is overcrowded anyway, with most instruments playing there; you can make a real difference here by paying attention and spending time to get it sounding correct. The lower frequency range, from 30 Hz to 500 Hz and upwards to 1000 Hz, generates most of the loudness of your whole mix and will show up on the VU-meter. Especially the lower frequencies of the Basedrum and Bass are fundamental for rhythmic content, power and clearness, and generate the most loudness, so keep them separated by giving them a free frequency range from 0 Hz to 120 Hz. Remember: the lower the frequency, the more power. You can save headroom (power) by cutting out all unwanted frequency ranges.
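The headroom arithmetic behind this is simple decibel math. As a hedged illustration (the helper names are our own), two tracks peaking at -6 dBFS and -8 dBFS can momentarily sum to almost full scale, which is why cutting unused low frequency energy buys real headroom:

```python
import math

def dbfs(amplitude_linear):
    """Convert a linear peak amplitude (1.0 = digital full scale) to dBFS."""
    return 20 * math.log10(amplitude_linear)

def amplitude(db):
    """Convert dBFS back to a linear amplitude."""
    return 10 ** (db / 20)

# A Basedrum peaking at -6 dBFS and a bass at -8 dBFS hitting at the
# same moment can sum to a much hotter combined peak:
kick = amplitude(-6)   # ~0.50
bass = amplitude(-8)   # ~0.40
print(round(dbfs(kick + bass), 1))  # -0.9 dBFS: nearly no headroom left
```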

Quality and Reduction.

Basically, for a good starter mix we try to achieve quality as well as reduction of unwanted events. Quality involves boosting with EQ (wide) and cutting with EQ (narrow), usually inside the main frequency range of the instrument playing its range of notes or main frequencies. Quality can be boosted, but counteracting cuts elsewhere can avoid the need for boosting (which is better). Quality relies on how good an instrument sounds. Reduction mostly means cutting some lower frequencies (0 Hz to 250 Hz, depending on the instrument) and cutting high trebles for distance. Where the cutoff frequency is placed depends on the instrument and the mix decision (stage plan). Apart from this, reduction can also mean a cutoff in the higher frequencies, for instance on the bass or bass drum, just to separate them. With reduction methods we try to separate instruments and give each of them headroom to play inside the frequency spectrum. Compression, like EQ, has quality and reduction features: it can raise transients (quality) or sustain (quality), but can also reduce peaks (reduction). For reduction, a gate keeps out unwanted events, or we can mute manually; maybe a limiter (or a peak compressor) can scrape off some peaks (reduction). These two purposes, quality and reduction, are the main tools for a starter mix.

Separation.
Making separation and headroom. In dimension 1, as we explained, panorama separates instruments and spreads them from left to center to right. In dimension 2 we adjust the frequency spectrum. Both combined are the basics of a good starter mix, and it can take up to four hours to accomplish a mix that is dry, follows your planned stage and still leaves some headroom for further mixing. If you are not fully trained and experienced, spend a great deal of time inside dimensions 1 and 2; stepping into dimension 3 too fast might set you up for troubles you cannot fix otherwise. Understanding what goes on inside each dimension, and where to place instruments according to natural human hearing (your stage plan), is the key to successful mixing. Swapping left and right, for instance, is of course fine, as long as you understand that placing a high frequency instrument (hi-hat) on the right affects the total balance of the mix; to compensate, we add another high frequency instrument (shaker) on the left. The same thinking goes for the mids and lows. As long as you counteract your actions, you are doing fine; counteracting is one of the most common methods in mixing. However your planning of the dimensions unpacks, the final mix has to be balanced (meaning the combined sound of your mix must be centered over the two speakers). We humans dislike it when the left speaker plays louder than the right, or the other way around. Artistic license and creativity can defy these rules and still have a good outcome. Generally, fundamental instruments are centered, and less fundamental ones are placed more to the left and right.
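The balance idea from dimension 1 can be sketched numerically with an equal-power pan law, a common (though not the only) law used in mixers. The function below is an illustrative sketch, not any particular console's implementation:

```python
import math

def pan_gains(position):
    """Equal-power pan law. position runs from -1.0 (hard left)
    to +1.0 (hard right); left^2 + right^2 stays constant, so the
    perceived loudness does not change while panning."""
    angle = (position + 1.0) * math.pi / 4.0   # 0 .. pi/2
    return math.cos(angle), math.sin(angle)    # (left_gain, right_gain)

# Total power stays 1.0 at hard left, center and hard right:
for pos in (-1.0, 0.0, 1.0):
    left, right = pan_gains(pos)
    print(pos, round(left, 3), round(right, 3), round(left**2 + right**2, 3))
```

At the center position both gains sit at about 0.707 (-3 dB), which is why a pan law keeps the overall mix balance steady as instruments move outwards.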
Dimension 3 - Depth.
Spatial depth is a more perceptive quality of sound, giving space and room to each instrument, single track or mix. The most common tools are reverb and delay; reverberation is the classic depth (dimension 3) tool. When a note or sound is first played, its transients are an important factor (from the original sound event): the transients make our brain understand what sound is played and let us recognize the instrument. This we call the dry signal. From the dry signal, a room presents reverberation after some milliseconds; mostly the early reflections make our hearing understand distance and placement. The pre-delay before the first reverberation/early reflections makes our brain understand depth or distance. We perceive depth mostly when pre-delay and reverberation are naturally understandable to our brain. Because a reverb (and, to a lesser extent, a delay) will muddy up the mix (masking), careful attention must be paid here.

With reverb or delay it is common to cut the lower bottom frequencies, because this clears up the mix and wipes away some muddiness (it separates the reverb from the fundamentals like the Basedrum and Bass). When you apply the rules of dimensions 1 and 2 correctly, the panorama and spectrum of each instrument already create a place on the stage for it; on top of that we can cut or raise the trebles of the reverb to place a sound close upfront or more distant. Once the reverberation makes our brain believe there is some distance, dimension 3 is a fact. Separation is the key to successful mixing: balance the not fundamental instruments more left or right and do not over-pump the frequency spectrum as a whole. The lower frequency range of a mix is where most instruments play their main ranges, so filling it with reverb or delay only adds muddiness and unclear (fuzzy) sounds and enhances the masking effect. Especially the Basedrum and Bass are instruments you want to hear straightforwardly, so they must be separated from the rest at all times by controlling all lower frequencies that play in their range (use an ambience, drum booth or small room reverb). Depth is most interesting when applied to a clear and dry starter mix, making it sound more natural and less fabricated. Reverb and delay are not the only factors in depth. Instruments do not play all the time; it would be boring to hear all of them throughout the whole mix. You likely have some kind of composition going on, and the timed events of instruments create depth as well. The level (volume or amplitude) of a played note creates depth by itself, as we perceive louder sounds as closer and softer sounds as further away. We also perceive sounds as close when their higher frequencies are more present; the further away in the background, the fewer high frequencies can be heard (dimension 2). These are good starting points to address while mixing (in dimensions 1 and 2) before adding any delay or reverb (in dimension 3).
Therefore, when you need background vocals to be heard as if at some distance, you can roll off some higher frequencies in dimension 2 first, before adding delay or reverb to create depth or distance inside dimension 3. Even when adding delay or reverb, you can decide the perceived distance or depth by rolling off (or cutting) some high frequencies from the effect's input or output. A good parameter for setting depth or distance is the pre-delay of any delay or reverb (or any such effect). Reverb can only do a good job when it is of really good quality and set up correctly. For fundamental instruments like the Basedrum, Bass and vocals we mostly use an ambience room or drum booth reverb type; these have more early reflections and less reverb tail, and are therefore less fuzzy and more upfront. On the vocals, use no treble cutoff, to keep them at the front of the stage. The Basedrum and Bass inherently have fewer trebles, so with an ambient small room or drum booth reverb they automatically fall in behind the vocals. For not fundamental instruments placed at the back of the stage we can use much more reverb, like a hall or large room, and cut their trebles more to set distance. To make our stage plan come true, we can prepare the dry signal and/or adjust the reverb accordingly. Delay can do a good job too, but with percussive instruments (drums, percussion) the rhythm can be influenced, so timing the delay to the beat or the notes can be important. A stereo delay, with its movement, can especially help avoid masking. So for drums and percussive elements we try to stay in tempo and set almost no pre-delay. For vocals, delay can give more depth and placement inside a mix without moving them backwards, keeping them upfront. Reverb is a good tool for creating depth, but can be processor hungry on digital systems. A good reverb does not get muddy fast, stays inside the mix, and does not have to be loud to be perceived as depth. Depth is the last dimension, so working the starter mix in dimension 1 (panorama) and dimension 2 (frequency range) before working on dimension 3 (depth) is recommended. The static mix contains dimensions 1, 2 and 3. Use a brighter ambience, small room or drum booth reverb for upfront sounds, and a duller, larger reverb for distant sounds. A short or zero pre-delay can help prevent the reverb from pushing the sound back into the mix. Give the reverb a wide spread for upfront sounds; use narrow-panned or even mono reverbs with longer reverb times for distant sounds.
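Since pre-delay mimics the time it takes sound to reach a reflecting surface and come back, a rough rule-of-thumb calculation can be sketched. This is a simplification under the assumption of a speed of sound of about 343 m/s; the function name is our own:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def predelay_ms(wall_distance_m):
    """Round-trip time from source to the nearest reflecting surface
    and back, in milliseconds: the gap heard before early reflections."""
    return 2.0 * wall_distance_m / SPEED_OF_SOUND * 1000.0

print(round(predelay_ms(1.7), 1))  # small booth wall: ~9.9 ms
print(round(predelay_ms(8.6), 1))  # large hall wall: ~50.1 ms
```

Short pre-delays therefore read as a small booth or ambience, long pre-delays as a hall, which matches the upfront/distant reverb choices described above.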

The three dimensions together make up any static reference mix.

For stereo mixing the three dimensions are Panorama (1), Frequency Spectrum (2) and Depth (3). Panorama is mostly controlled by pan or balance, and sometimes by a stereo expander or widener. The frequency spectrum is controlled by the amplitude, level, volume and EQ (plus compression, limiter, gate) of the sound. Depth is perceptive and can be controlled with high frequencies (trebles), delay (pre-delay) and reverberation or reverb. There are quite a few other effects that generate some kind of reverberation or can be perceived as depth or distance by human hearing; we will not discuss them all. A sense of direction for each individual instrument can be found in all dimensions. The three dimensions also influence each other: by rolling off some highs in the frequency spectrum (dimension 2) of a single instrument, track or group, you affect depth (dimension 3). Coexistence, and placing instruments inside the three dimensions, can be a fiddly job that you may be tempted to rush; pre-planning is a better idea. Also, we cannot use many reverbs on processor hungry systems, so we choose a few and use them mostly on groups. Of course mixing is creative, but bypassing the dimensions without thought and planning, throwing in effects and mixing carelessly, will soon give muddy, unclear, fuzzy results (masking, correlation, etc.). Maybe you have ended up in this situation before? Then it is time to get some understanding of the three dimensions, quality, reduction, overcrowding, making headroom, masking, separation and togetherness. Restart with a clean slate: set all levels to 0 dB and all panning to center, remove all plugins, and restart from the dry mono mix.

The chart above shows how the three dimensions can be adjusted using common mixing tools. Summing up: dimension 1 is controlled by the panorama (pan or balance, and maybe some widening/expanding), dimension 2 is controlled by the frequency spectrum (EQ, compression, mutes, gates and limiters), and dimension 3 is controlled by dimensions 1 and 2 as well as by reverberation/early reflection effects (reverb, delay, etc.). Making use of a 3D visualization or a 2D stage visualization can help improve your mixing skills; some like to write down a plan (stage plan), others just remember and visualize it in their head (the experienced). The easiest dimension is dimension 1: set the pan and we hear left, center or right (but it is easily underestimated). Dimension 2 is more complicated, because we are working inside the frequency spectrum of each instrument to create a whole spectrum for the mix. Composition-wise, muting, level, amplitude, transients and balance are good tools to start with, before reverting to EQ. Compression can be a hassle to master: mostly, when we hear compression, we know we have gone too far. Rather use a more even amount of compression; when compressing only the peaks very hard, we get pumping. Dimension 3 is all about quality reverberation and needs skill and very good ears, as well as an understanding of how human hearing reacts. The difficulty of mixing progresses with the dimensions, so we start with dimension 1 and progress towards dimension 3. When we need to adjust an event, we first resort to dimension 1 and progress towards dimensions 2 and 3, hunting for quality and reduction (boost wide, cut narrow). Changing an event or instrument in one dimension means a change in the other dimensions as well, so careful planning and preparation are a must; it is better to know what you're doing while mixing. Knowing beforehand what you want out of a mix can make mixing easy and keep you from struggling towards the end. Understanding the three dimensions is crucial, so do not hesitate to apply them; it is a common and widely accepted way of mixing. To keep it all acceptable to our naturally hearing ears and brains, we mostly apply the natural rules and laws.

3D Mixing.
Mixing as if the listener is listening to a stage is common practice; it seems more natural. The more natural a mix sounds, the better the human brain can receive the 3D spatial information. Unnatural placement can make a listener feel uncomfortable, so only use it when you need it. Most likely the Basedrum, snare, Bass and main vocals are fundamental and more centered, and all other instruments are placed further out of the center field, more left or more right. Lower frequency not fundamental instruments stay more or less centered, while not fundamental instruments playing a higher frequency range are placed further outwards. The main vocals are upfront and the drums more in the back; sometimes a choir would stand behind the drummer, even further back. Just experiment with a mix and play with the dimensions; make several different plans for where to place the instruments.

Experimenting with 3D Mixing.

Do some mix setups and learn from the differences; learn from your mistakes, and when you make progress, take note of what you did correctly. A good start of a mix can take hours to reach a completed static reference mix, and maybe your ears do not listen very well after mixing that long, so returning later with fresh ears can do wonders. Visualizing things helps too, especially when working on the whole frequency spectrum or planning your staged mix: any metering you do here with a spectrum analyzer visualizes what you hear. Also use a correlation meter to avoid the masking effect and to check mono compatibility, and use a goniometer to catch unwanted correlating events on the left or right side. For listening to a whole mix you can rely on visualization, but remember that listening without all of these tools is just as important; after all, hearing the mix is the end result you are trying to accomplish, and what your eyes see interferes with what you hear. Sit down, relax, and only listen (do not look at any metering); to make the experience true to a normal listener of your music, maybe close your eyes. Do listen on multiple speakers, home audio sets, in your car, on a Walkman, almost anywhere possible, to get a good view of what your mix is doing.
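A correlation meter essentially computes the normalized correlation between the left and right channels. A minimal sketch of the idea (illustrative only, not any specific meter's algorithm):

```python
import math

def correlation(left, right):
    """Phase correlation between the channels: +1 means mono-identical,
    0 unrelated, -1 fully out of phase (cancels when summed to mono)."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 0.0

sr = 1000
tone = [math.sin(2 * math.pi * 50 * i / sr) for i in range(sr)]
print(round(correlation(tone, tone), 3))                # 1.0: mono compatible
print(round(correlation(tone, [-x for x in tone]), 3))  # -1.0: cancels in mono
```

Values hovering near -1 warn you that part of the stereo image will disappear on a mono TV or radio speaker, which is exactly what the meter is for.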

Stereo and Mono.

Mono is a single speaker system. Stereo is left and right speakers only (still the most common way of playing music authentically). A mono speaker setup, as in TVs and small radios, is still quite common, so even though we explain mixing in stereo, mono compatibility can still be an issue. Below is a common stereo speaker setup. Even with surround sound and multiple speakers available, humans nowadays are very familiar with the stereo sound; we have been listening in stereo for so long it is almost baked into our DNA. It is so common that adding more speakers (directions) might influence the way the sound is perceived.

The most direct sound is a single mono speaker, and the more speakers you add, the more you can control the dimensions (3D spatial information). Adding more speakers can widen the dimensions or separate frequencies further, but stereo remains closest to human hearing. Stereo offers fewer dimensions than surround sound systems, yet it sounds close to what we naturally hear or perceive; our brain is not as confused by its dimensions as it can be by surround sound. Multiple speaker setups are more difficult to perceive straightforwardly, especially since each room is filled differently by the placement of the speakers. You can imagine a household surround system being placed differently every time, as each living room is set up differently. With only two speakers for stereo, most households know where to place them to get a good sound. Where a user can place multiple speakers affects the way your music is perceived in the dimensions; of course they should all theoretically be set up the same way, according to the operation manual's instructions, but in real life every user or listener will have their own speaker placement.

As we explain stereo mixing here, note that surround sound follows almost the same mixing rules, although with more speakers it offers more opportunities for 3D spatial placement, and therefore more room for instruments to play and be clearly heard. Above is a figure showing surround with more than two speakers. For that kind of mixing a different set of rules applies to the number of dimensions, and we do not explain it any further; we concentrate on conventional stereo mixing (and check mono compatibility). When mixing in stereo we try to accomplish a sound that compares to natural human hearing and to realize our stage plan, so the mix transmits the 3D spatial information well. For stereo mixing we may be more persuasive and project the 3D spatial information onto the ears of the listener; sometimes this means using a little more force than is naturally perceived, to get the listener to hear it as it would naturally be perceived.
Preparing a Mix, Starter to Static Mix.
Set all faders to 0 dB and all pan or balance controls to the center position. Set all EQs to their defaults. Basically no effects are used; otherwise switch all effects off (dry, bypass), or even better, remove them. At the start of mixing it is best to clean up all single tracks by listening to them solo and removing everything that is not needed (unwanted). Do this by listening to every track in solo mode, through all its parts until the end, removing anything that does not need to be heard. Functions you can use are audio track or sample based editing, or MIDI event editing. This is more of a recording thing, composition-wise, but removing clicks, pops and any other unwanted material is crucial and can be done now. Listen to every track or instrument from start to end; they should all sound clear and unaffected before you go any further in mixing. It can be a tedious job removing all unwanted material, but you would not like hearing it in the mix (and not being able to figure out where it is coming from). Any listener easily hears clicks, so take care of this problem first and foremost, maybe using a gate, or by just deleting all unwanted audio parts. Sometimes at the vocal level breaths or 'sss' and 'tss' sounds are taken care of (removed) with a de-esser or by simple audio cutting/muting. Remove background noise while an event is not playing (manual editing or gates). You cannot overlook anything here; check and re-check when you need to. All tracks and instruments must be clean and only play what you need to be played; the rest can be cut out. Time-consuming as it is, it is better to work on this beforehand, before you actually start mixing. Noise is difficult to remove once recorded. We would like to remove noise, but we cannot really do this effectively, so once it is recorded we try to cut, delete and mute. Maybe a steep EQ cut can help, or some noise reduction tools, but they will add mud or fuzz and still not remove all the noise. So noise should be avoided, and each recorded track needs to be noise free or almost noise free. White or pink noise and humming sounds are to be avoided at all times. When you need EQ to remove background noise, use a quality or oversampling EQ, especially when working in the higher treble ranges, and cut with a small, steep filter. Clean up before going any further in mixing. Make sure the audio files and samples you use are at a decent level, so the levels do not have to be boosted and the noise floor does not rise.
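The gating mentioned above can be pictured as a simple threshold on the signal. This naive sketch (our own, with an arbitrary threshold) mutes everything below the threshold, whereas real gates add attack, hold and release ramps to avoid clicks:

```python
def gate(samples, threshold):
    """Naive noise gate: mute every sample whose absolute value is
    below the threshold. Real gates add attack, hold and release
    ramps so the muting does not click."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Quiet background noise between the louder wanted events is muted:
track = [0.02, -0.01, 0.6, -0.5, 0.015, 0.7]
print(gate(track, 0.05))  # [0.0, 0.0, 0.6, -0.5, 0.0, 0.7]
```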

Starting to Mix.

Provided you have prepared the mix (see above), labeled all tracks from left to right, and cleaned them up, you are ready for mixing. Again, set all faders to 0 dB, all pan or balance controls to the center position, and all EQs to their defaults. Set the faders and pots around unity, and zero everything on your onboard and outboard equipment, mixing desk, etc. Basically no effects are used; otherwise switch all effects off (dry, bypass) or remove them. Even when you are not mixing your own material, when you have received a mix for mixing or remixing purposes, we reset everything to defaults. We start from defaults, keeping it basic. This is a good saving point on digital systems: if you save your project now, you can always return to the default starter mix.

Starting a Mix (Example).

We can best explain what we are after by example. Provided you have recorded drums, the bass drum will be the loudest of them all (fundamentally the loudest), so a good start is to listen to the track you recorded the Basedrum on. Listen to the Basedrum track solo and adjust its fader until the VU-meter shows levels of about -6 dB to -10 dB. Since you are soloing the Basedrum, the track VU-meter and the master VU-meter should look the same; somewhere in the range of -6 dB to -10 dB is a good start. You are now creating headroom for the other instruments to fit in (when added later on) without going over 0 dB: setting the Basedrum level this way on the VU-meter gives back headroom for the other tracks to play. It is a good idea to hear the Basedrum solo and adjust EQ, faders and balance, looking for quality and reduction. Do some lower frequency cutoff from 0 Hz to 30 / 50 Hz or so, and roll off some highs, as the drums sit behind the main vocals and bass. Just remember to set the Basedrum level back to -6 dB to -10 dB afterwards; it will have changed because of the EQ, reverb, delay or anything else you did to make the Basedrum sound better. When the Basedrum is a sampled instrument, you could work on its sound beforehand. You have to reposition the track fader level again each time you adjust the Basedrum sound. Keep the balance straight in the middle; do not let the bass drum sway out of the center position. When using send effects, or an effect group that shows up on sends or another track, keep doing the same thing: keep the bass drum level steady on the master VU-meter, advisably between -6 dB and -10 dB, and centered at all times.
When you have no Basedrum or no drums recorded, seek out the loudest (fundamental) recorded track as your reference starting point and solo it; preferably choose an instrument that plays in the center, has lots of lower frequencies, and has a good part throughout the whole composition (rhythmically). Whenever you adjust this Basedrum or loudest track later while mixing, repeat the same rules and consult the master VU-meter again: solo the Basedrum and set it back to -6 dB to -10 dB. This Basedrum (or loudest) track is your starting reference track (most fundamental track) for headroom purposes, and it is the main focus of your mix. It is far better to be happy with how the Basedrum sounds and really make it sound good beforehand; you will be glad to have a finished drum kit before starting on the other instruments. Each time you adjust the Basedrum (or your reference instrument) later on inside the mix, you may have to adjust the whole mix again accordingly (repeating the operation with the master VU-meter). Because you are using the Basedrum as a static reference, it is better not to change it once set at the start. Set it, be satisfied with the Basedrum sound, then leave it alone, at least until you have set up all tracks; maybe you will need some adjustments then, but keeping your reference headroom (Basedrum) start track steady is best.
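Setting the reference track back to -6 dB to -10 dB after every change can be pictured as normalizing its peak to a target dBFS value. A small illustrative sketch (the helper name is ours, and a real VU-meter reads closer to average level than to peak):

```python
import math

def set_peak_dbfs(samples, target_db):
    """Scale a track so its peak lands on target_db dBFS, e.g. the
    -6 to -10 dBFS advised for the Basedrum reference track."""
    peak = max(abs(s) for s in samples)
    gain = 10 ** (target_db / 20) / peak
    return [s * gain for s in samples]

kick = [0.0, 0.9, -0.8, 0.3]           # peaks at 0.9
leveled = set_peak_dbfs(kick, -8.0)
new_peak_db = 20 * math.log10(max(abs(s) for s in leveled))
print(round(new_peak_db, 1))  # -8.0
```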
So you have adjusted the Basedrum and you're happy with the sound and the VU-meter levels? Let's go to the snare. Keep listening to the Basedrum and turn on the snare, listening to both together. Now adjust the snare fader level until you are satisfied with the combined Basedrum + snare sound and levels. Do not touch the Basedrum fader; only adjust the snare until it sounds correct together (using fader, pan, balance, EQ, etc.). Whenever you need EQ or compression, do this while listening to the snare solo and to the combined Basedrum + snare. It is wise to cut the snare in its lower frequency range, below 120 Hz, so it does not interfere with the Basedrum. Whenever you apply effects or change the snare (quality or reduction, separation), you need to check the levels again and recreate the togetherness, so it is best not to apply any further effects at this time and leave those additions for later. For the bass drum we would have used an ambience reverb or small room booth (on the drum set group); for the snare we can use a larger reverb (to convey space) and send it back into the ambience reverb of the drum set group to give it the same properties (coherence, ambience). Only touch the snare fader at this time; do not touch anything on the Basedrum track. When you're happy with the combination of the Basedrum and snare sounding together, in the center, the same rule applies: do not change these faders anymore as you mix further. If you have to change them later on, you must go back to the start and re-check all your work, so once set, it is again better to leave them alone and move to the next instrument or the next drum kit item. This might sound a bit tedious, but remember we are building the fundamentals of the mix here (starting a mix); if you lose attention here, you might lose the mix. We will progress by finishing off the drum set/drum kit.
So at this point you can work on the HI hat and mix it together with the Basedrum and
snare. Remember that the HI hat can take quite a heavy low-end EQ cut (reduction) to
make headroom for other instruments. Finish off the rest of the drum set by adding
(un-muting) each single drum track, panning each more outward as it is less fundamental
(but rhythmically inclined). Take placement in the dimensions, quality and reduction into
consideration. When finished, maybe assign all single drum tracks to a group track for later
mixing purposes (we have the ambient reverb on the send/group anyway). At this point you
can do a lot of stage planning on the drum set, keeping snare and Basedrum in the center
and panning the rest of the drum set more outward. We explain each instrument later on and
give exact instructions for each. We finish off the drum kit first, with the available tools in
dimensions 1, 2 and 3. Now turn on the Bass track. On the bass track you can apply a low
cut below 30 Hz and roll off some highs. According to your stage plan, place the bass in the
center, behind the vocals; rolling off the highs will make it more distant, but bass does not
have a lot of highs anyway. Maybe boost some 30 Hz to 120 Hz frequencies for quality.
Solo the Basedrum and bass and adjust the bass until they sound good together (do not
adjust the Basedrum). Turn on the rest of the drum set and compare. Keep adjusting the
bass until it sounds correct. Keep introducing new tracks or instruments, each time looking
for quality and reduction, separation and togetherness. Basically, working from left to right
on your mixer is building the mix: you set the faders and effects, then move on to the next
nearest track and repeat. This goes for all tracks on your mixer until you have finished them
all and are at the right side of your mixer.

Anyway, when you start with drums and bass sounding well together, this is a good starting
point for a mix. Basically place them dead center. Then work on the snare and main vocals,
also dead center. Then introduce the HI hat and the rest of the drum kit, then the bass. Then
place all the non-fundamental instruments more left or right, keeping them out of the
already crowded center. Once you have worked on all tracks and are satisfied, try not to
adjust too much afterwards. Listen to it for a while and save your mixer settings (or save the
song on a computer or digital system). Once you have the starter mix running, with drums,
bass, guitar and keyboards sounding well together, the routine becomes freer. You can now
adjust faders like guitar, keyboard and vocals more freely and add some more EQ,
compression, delay or reverb; any effect will do. What you should feel while working is
that you have created some headroom for doing things while still having a good level on
the master VU-meter (output), with room to work before hitting 0 dB. This is a good start
and leaves freedom for further mixing without having to adjust everything each time to
regain headroom. Stay within the boundaries of dimensions 1 and 2, applying fader,
balance, EQ and compression (gate, limiter) but not adding effects. Then work out
dimension 3.

Digital Distortion.
Remember to keep track of the master VU-meter; if it goes over 0 dB on a digital system
you will get distortion in the signal as an additional unwanted effect. Depending on the bit
depth your digital system runs internally, internal distortion is not easy to spot. When you
go over 0 dB, do not adjust the master fader for loudness; adjust all other faders by the
same amount of gain. So each track fader can be set 1 dB lower (or whatever amount is
needed to bring the master Vu-meter under 0 dB). This can be a hassle and you must be
precise, but it is better to lower all faders by the same amount and keep the master fader at
0 dB at all times. Some digital mixers make this job easier by letting you grab all faders and
correct them together by the same amount of gain. You will be tempted to touch the master
fader anyway because it is the easiest solution, but it will not work for your mixing
purposes; keeping the signal internally sound means adjusting single track faders. That is
why you need to create some headroom from the start. Even on 32-bit float or higher
(64-bit) digital systems that handle the 0 dB problem better and can pass > 0 dB signals, it
is better to stay below 0 dB. On integer 32/24 and 16-bit digital systems, do not go over
0 dB at any time; this will surely add distortion and unwanted artifacts. Sometimes we add
a little distortion as a feature, but when starting a mix towards a static mix we do not need
it, so we keep any distortion away for now. Limiters are good for just scraping the peaks,
with the threshold set at -0.3 dB or a peak reduction of -1 dB to -2 dB, thus affecting only
signals that would otherwise jump briefly over 0 dB. Though limiters are not a first solution
and are to be avoided, they are sometimes needed. For mixing, use only a brickwall limiter
on the master fader (for starters, and even try to avoid this). When your mix goes over
0 dB, be sure the metering you are watching is fast enough to intercept (spot) the peaks that
go over. Otherwise the limiter on the master track will tell you when this is happening, by
showing the reduced amount in dB or with its warning (red) lights. Sometimes on a
brickwall limiter or digital mixing console two red lights (left and right signal) will tell you
when you are passing over 0 dB. Try to lower your group tracks or individual tracks by the
same amount to get back some headroom, keeping the master fader at that same 0 dB
position. Sometimes an instrument or track is unbalanced; even a whole mix can sound
unbalanced, causing the left and right signals to be at uneven levels and sway around.
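Lowering every track fader by the same amount while leaving the master at 0 dB is simple bookkeeping in the dB domain, since equal dB offsets preserve the balance between tracks. A minimal sketch with invented fader values:

```python
def trim_all_faders(fader_db, trim_db):
    """Lower (or raise) every track fader by the same dB amount.
    The master fader is left alone, as the text recommends."""
    return {name: level + trim_db for name, level in fader_db.items()}

# Hypothetical fader settings (in dB) that push the master over 0 dB
faders = {"basedrum": -6.0, "snare": -8.5, "hihat": -14.0, "bass": -7.0}

# Say the master sits 1.2 dB over: pull every track down by that amount.
# Relative balance between the tracks is unchanged.
faders = trim_all_faders(faders, -1.2)
print(faders["basedrum"])  # -7.2
```

A digital mixer's "select all faders" feature does exactly this kind of uniform offset.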
Single Track Mixing.
Adjusting individual instruments is commonly done with level, balance, EQ, compression,
muting, gating and limiting. Within the three dimensions some planning (stage planning)
can be done before or while you mix. Most single or multitrack mixers have some EQ
bands, and some even have compression settings per track. By single track mixing we
mean the fader, level, gain, balance and all other buttons and knobs on a single track.
Likewise, all effects we apply to single tracks or instruments are single track or instrument
effects.

On digital systems we can add effects as inserts. Refer to your mixer manual for how a
track is built up technically; some insert effects can be placed before the track fader and
panning (pre-fader). These affect the signal first, before track EQ, fader and panning are
applied. Other insert effects can be added after the track fader (post-fader) and will first
process level, panning, EQ and track compression before the signal goes through the effect
inserts. Deciding where to place an effect insert (pre-fader or post-fader) can depend on the
equipment you are using or the decisions you make while mixing. In general we place
effects like EQ, compression, gating and limiting in front of the fader (pre-fader), because
we like to adjust the sound before it travels further through the mixer. Reverb and delay we
place post-fader or on sends and groups, as a second-in-line feature. What happens on
single tracks is the individual instruments, so whenever you need to change something that
applies to a single instrument, do it on that single track only. First fiddle with level,
balance, EQ, compression, gate, mute or limiter. Look for reduction first, keeping the
planned balance panorama, and use EQ cuts for separation and dynamic headroom. Control
level or transients with a compressor. For composition and reduction/separation, use
manual editing or the mute button, cuts and limits. Then enhance the quality of the
instruments in dimensions 2 and 3. The group tracks explained below are for combining
tracks as a group and thereby controlling a 'layer' of combined instruments.
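The pre-fader/post-fader ordering described above can be sketched as a processing chain. This is a simplified, hypothetical channel strip (naive linear pan, toy "gate" and "pad" inserts), not any particular mixer's architecture:

```python
def channel_strip(sample, pre_inserts, fader_db, pan, post_inserts):
    """One mono sample through a simplified channel strip:
    pre-fader inserts -> fader -> pan -> post-fader inserts."""
    for fx in pre_inserts:                 # e.g. EQ, compressor, gate
        sample = fx(sample)
    sample *= 10.0 ** (fader_db / 20.0)    # fader gain, dB to linear
    left = sample * (1.0 - pan)            # naive linear pan, 0 = left
    right = sample * pan                   # 1 = right, 0.5 = center
    for fx in post_inserts:                # e.g. a send to reverb/delay
        left, right = fx(left), fx(right)
    return left, right

# Hypothetical inserts: a hard gate pre-fader, a mild pad post-fader
gate = lambda s: s if abs(s) > 0.05 else 0.0
pad = lambda s: s * 0.9

l, r = channel_strip(1.0, [gate], -6.0, 0.5, [pad])
print(round(l, 3), round(r, 3))
```

Swapping an effect from `pre_inserts` to `post_inserts` changes what signal it sees, which is exactly the pre-/post-fader decision.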

Group Track Mixing.


Routing single tracks to a group gives you more flexibility in handling the mix as a whole.
For this you can route all drum tracks (Basedrum, snare, HI hat, drum set, etc.) to a single
group track. Now you can control each single track individually and at the same time
control all of them with the group track (in general we place an ambient room or drum
booth reverb on a group or send anyway, to convey the complete drum set). It is common to
route all drum sounds to one group track. This group could also include the bass; that is a
matter of mixing purposes or decision. The single bass instrument or track could also be
routed to its own group (but mostly we like to use the ambient reverb on the drum set group
or send anyway). If you have multiple groups available (as a digital mixing system can
handle), you can create layers of groups. By combining the Drums group and the Bass
group and routing them to a new group, you can control both drums and bass with that one
group. Combining into groups like this is called welding and forms a layer. By welding
instruments together we tend to get some togetherness, so grouping towards the master mix
is layering (summing). Building layers of instruments that combine together as a group
(welding) gives control over the different sound sets of a mix. A digital system with
different mixer setups can show a mixer view with only the group tracks and the master left
over. With such a group track mixer you can more easily control the layering of your mix,
and therefore adjust the welding process and your planning of the three dimensions for
each layer. For digital summing (emulating analog summing) we can even add a tube amp
or analog tape deck simulator, to get some of that analog summing feeling. So when
mixing, we tend to use the single tracks to adjust each instrument (separation), and the
group tracks to combine instruments (togetherness). When you need to affect a single
instrument, use its single track; when you need to adjust a whole layer of instruments, use
the group. So now we know where to adjust level and balance, mute or edit manually, place
EQ, compression, gating and limiting, or place delay or reverberation effects, and we can
decide to use them on groups or single tracks depending on what we need to adjust.

Each group track combines single tracks together, so we can call a group track a layer.
With the Drums group, for instance, you have combined all drum sounds together (a layer)
and can control them as one with the group. Likewise, when you have a guitar on the left
and one on the right, combining them in a guitar group track adds another layer to your
mix. When you have already combined the drums group with the bass group, you can now
control the drums, bass and guitars with only two group tracks. When you have, for
instance, an organ and a piano, group them when they coexist within the three dimensions
of your planned mix. Deciding whether to make a group out of combined single tracks is a
matter of taste, planning and creative mind. It is likely that if tracks coexist and form
togetherness as a layer of your mix, you can combine them into a group. The last step is to
route all groups towards the master track (the output of your mixer).

The figure above shows how the final grouping could look; you now have three ways to
adjust the mix. At single track level you can control all individual instruments separately.
The welding groups contain the groups of individual tracks and therefore control the first
layer of your mix (some togetherness). The second layer and the master control the final
mix for further welding and layering, summing to emulate an analog feeling (some more
togetherness). Depending on the instruments at hand, pre-planning and labeling all tracks
and groups can help you get a whole picture of your mix design. Most DAWs have labels,
and some even have a notepad per track, for keeping track of things for the days when we
no longer remember what we did. How you arrange is a matter of coexistence and creative
mind, but mostly follow the rules of our hearing and the laws of the dimensions, the starter
and static mix. In most cases a mix design starts from the left side of the mixer, adding the
most fundamental instruments first, building up like a stage and separating instruments as
single tracks. We start with the fundamental, centered instruments, then the lower
non-fundamental instruments, then, at the right-hand side, the higher non-fundamental
instruments. As you progress with adding groups, look at your dimensional planning as
you combine; looking for instruments that coexist in your planning can make decisions
easier. This layering and welding is common, but artistic and creative matters will be
discussed further later on; for now we are designing and planning the staged mix.

Layering and Welding.


Using compression on groups can weld instruments or tracks together, making a more
coexistent sound. Even placing an EQ to correct the sound can have welding purposes.
Each group that combines individual instruments or tracks together as one is called a layer.
(Summing up into the later groups before entering the master bus, we can do some
analog-style summing by placing a tube amp or analog tape effect to create that analog
together feeling. Analog-style summing affects all the settings we made before, so we tend
not to use it while mixing. You can decide whether or not to use analog summing on a
digital system; right now we do not recommend it at all, as it will affect the mix we have so
painstakingly been putting together.)

Design.
Most of the togetherness of a mix can be found in a well set-up design for dimensions and
layering together, ending up at the master bus of your mixing console. The togetherness of
your mix is all combined instruments sounding together, through each single track and
grouped towards the master bus fader (output). As far as planning your mix and starting
off: first adjust individual instruments and tracks, then weld them together with groups that
coexist towards the master track. When you have to control the mix or have an idea to
change it, you must know at what level you can do this best, resorting to single tracks first
and remembering the dimensions. Placing a cutting EQ or compressor will affect the
behavior of the layers or single instruments. Place effects only when and where they are
needed. Deciding what you need and where to place it means understanding at what level
each element is adjusted. This searching for separation as well as togetherness, as we work
from a nice clean starter mix toward a static mix, is the only way to make more headroom
and leave some space for design purposes and issues later on. Be sparing with adding
(effects, reverberation); it is better to first remove what is not needed (quality and
reduction), cleaning up the mix as well as individual instruments and sounds. Design a
stage plan, deciding where all instruments have their space or location. First find a
balanced mix in level, panorama, frequency spectrum and depth with the faders, balance,
pan, EQ or compression (gate and limiter); only then add some more depth in the last
dimension, 3. This kind of mixing is quite common, but dimension 1 is the most
overlooked in terms of setup, and dimension 2 is at least as important and can be difficult
to hear or understand. Working through dimension 1, then dimension 2, then dimension 3
is the best progression for clearness, and you will not have to fight and go back to correct
as much later on. When you start adding a reverb before finishing off dimensions 1 and 2,
you might end up with a muddy or fuzzy sound (masking, correlation), mostly from EQing
and compensating for the reverb overblowing the other instruments or the mix. So first the
instruments, then the layers, then the mix, then the master. First dimension 1, then 2.
Then 3!

Effect Tracks or Send Effects.


Common effects can be used on send tracks, which makes the effect available to all
tracks/instruments, or they can be placed on groups. In a DAW we can use sends or groups
depending on the way we want to sum levels towards the master bus fader. The normal
way on a mixer is to route send effects toward the master bus, but routing sends to groups
can also be done. Most likely the default configuration for a send track is to end up at the
master bus, though sometimes a send track can be routed otherwise. So if you need routing
on a special effect group, create some new groups and place insert effects on them; now
you are able to route anything to the effect groups.

Send effects that end up directly at the master bus are for adjusting the final mix as a whole
(summing). But remember you also have the group tracks, as well as single tracks and
sends, to place effects on, so you can be a bit more sparing with effect sends and with
effects on single tracks. It is usual to place send (effect) tracks at the far right of the mixer:
drums start at the left, and the send effects sit at the right after the last vocals, followed last
by the master track. Remember you can assign the outputs of the send effects to return to
any track or group, to be creative. Some mixers in the digital domain do not allow you to
return to previous tracks, for feedback reasons, and therefore only allow assigning to
higher tracks or groups. By default, send effect tracks are routed to the master bus; it is up
to you to assign them differently according to your needs. Also, if you are using a send
effect, think of groups and consider placing an insert effect inside the group instead; this
can be clearer for the overview of your mix and can give better mixing results. The fewer
send effect tracks, the more controlled and adjustable your mix will be for later use.

Masking and Unmasking.


EQ or equalization is referred to as a processing tool, not an effect. EQ is mostly used to
eliminate frequency conflicts between instruments. It is connected to the non-linearity of
human hearing, namely musical masking. When two sources with overlapping spectrums
are situated in one space (the center, for instance), and one of them is playing at a much
lower level (-15 dB) than the other, we stop hearing the quieter sound; they are disturbing
each other (masking). When we pan the instruments left and right, we can hear both signals
again (unmasking). All instruments may sound perfect soloed while mixing, but together in
the mix they can turn into a muddy mess. This is the result of the acoustical binaural
phenomenon called masking. Avoid possible conflicts with correct composition and
arrangement. EQ and compression are used on almost every instrument (95%) inside a
mix. With EQ we are mostly looking to unmask and to avoid masking. There is no
universal equalizer; each EQ sounds different and has different functions, but at extreme
raising or lowering the difference can be critical. EQ works best when we are cutting
frequencies, not raising them. Beginners will mostly raise what feels and sounds good, but
we can achieve the same by cutting the frequencies that are not needed, and an EQ will
surely produce artifacts when it is raised strongly. So we try to cut first, then raise. In the
bottom-end range we use a narrow EQ band (high Q factor), in the high range a wide EQ
band (low Q factor). Almost any change in one band will affect the sound in other bands.
Since acoustical masking is a binaural phenomenon, pan as a first measure can solve
frequency conflicts; then resort to EQ as a second (but much needed) tool for unmasking.
Many producers will push the button called mono at the start of mixing, but the goniometer
(as a visual aid) can do a good job at the end of mixing, as can the correlation meter. It is
easier to solve frequency conflicts on instrument groups.
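The unmasking-by-panning idea can be illustrated with a constant-power pan law (one common choice among several pan laws): two overlapping sources are sent to opposite sides while each keeps the same total power.

```python
import math

def constant_power_pan(sample, pan):
    """pan in [-1, 1]: -1 hard left, 0 center, +1 hard right.
    The sin/cos law keeps L^2 + R^2 constant across the pan arc."""
    angle = (pan + 1.0) * math.pi / 4.0   # maps pan to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Two overlapping guitars: split them to opposite sides to unmask
g1_l, g1_r = constant_power_pan(1.0, -0.7)   # guitar 1 mostly left
g2_l, g2_r = constant_power_pan(1.0, +0.7)   # guitar 2 mostly right

# Total power stays constant regardless of pan position
print(round(g1_l**2 + g1_r**2, 6))  # 1.0
```

Each guitar now dominates its own side of the stereo image, so neither masks the other, while the overall level of each stays steady.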
EQ or Equalization.
The equalizer comes in all forms and shapes and works in the vertical dimension 2. The
frequency range mostly goes from 0 Hz to about 22 kHz. All EQ is done by a filter or some
kind of filtering. For adjusting how an instrument will sound, EQ is the best starting point
(quality or reduction); equalizers are probably also the most important tools in the
mastering engineer's toolbox. When we cut, we do it with a narrow, steep filter; when we
boost, we do it with a wide filter. We tend to cut more than we boost. We tend to use fader
level and panning before using any EQ, then EQ, and secondly compression, limiting or
gating. Don't hastily overlook fader level and balance or panorama as first-dimension tools.
Most beginners will understand what equalizing is; they know it from home stereo systems
or have some experience already. Most will understand that when they adjust the lower
frequencies, the sound of a bass will be more or less heavy, and when they adjust the higher
frequency range of a HI hat, it will sound more or less bright (treble). Mostly we talk about
cutting or boosting, lowering or raising the EQ amount. The most common types are the
parametric EQ and the graphic EQ. Remember that pushing EQ frequency levels upwards
(raising, boosting) adds level, which can leave you with less headroom or push you over
0 dB on the master VU-meter. Cut more than you boost: lowering levels with EQ is better
than pumping the levels upwards. It is better to take away than to add while EQing (for
quality and reduction). Giving each instrument a place in the frequency spectrum is what
you are looking for (quality, reduction, dimensions). Almost all instruments play in the
range of 120 Hz to 350 Hz, up to 500 Hz (the misery range); this range can get crowded
and must be well looked after.

So whenever you can, make a plan and make way for other instruments to have a place in
the field (stage). When two instruments play in the same frequency range (masking), like
two guitars, it is likely that you will not want to cut frequencies from either of them, so
balancing one left and one right can solve the overcrowding problem at first hand; this is
the first solution, in dimension 1 (panorama). Most place them off center anyway, keeping
a clear path for fundamental instruments. You must decide what sounds best and when to
use EQ, but leaving space in the frequency spectrum across left, center and right, by cutting
out frequencies of instruments where you do not need them, is the more common and
recommended EQ style. Instead of raising the bass because you think it cannot be heard,
you could check whether other instruments muddy up the lower frequency range of your
mix, and just lower all of them instead (cutting all frequencies below 120 Hz out of the
non-fundamentals). Boosting frequencies can mean you enter the main frequency zone of
another instrument or track, and the sound of them playing together combines. This can
muddy up or fuzz your mix and, with a low-quality EQ, produce artifacts (use a quality or
oversampling EQ). However, there is a twist: it does not mean that two sounds in the same
frequency range can never sound good together; that is just how you listen to it, and that is
called mixing. Yes, we have some mixing freedom. Remember that balancing can separate
instruments and must be done first (dimension 1), so with two guitars that sound just the
same, balancing guitar 1 to the left and guitar 2 to the right might solve the problem. Most
of the time the frequency range from 30 Hz to 22 kHz is filled with all instruments layered,
sounding together as one mix. A second rule is that lower-frequency fundamental
instruments stay more centered, while higher-frequency non-fundamental instruments are
panned more outward, more left or more right. Just remember: cutting is better and
spreading is better. Make room and plan the frequency range. Place instruments inside the
frequency range, spreading them, balancing them. Use EQ only where needed. First, EQ on
a single instrument track can help create a better instrument sound (quality and
composition-wise/rhythmic intent). Second, by cutting out frequencies you leave open
space for other instruments (reduction) to play clearly. For lower-frequency-range
instruments you can use a high cut to also control the distance. All instruments can use
some kind of low cut. By doing this we can be sure that no rumble or noise enters the mix,
and we leave headroom across the whole frequency spectrum. Remember you almost
always need a steep EQ cut from 0 Hz to 30 Hz on all instruments except maybe the bass.
This way more or less all instruments need EQ on their own single track (quality and
reduction), just to make the corrections that let every instrument sound clear and at its
defined placement inside the three dimensions.

When using sampling, you could process the EQ offline, or use offline EQ inside digital
sequencers (digital audio tracks); be sure you can always revert to the original file (without
EQ). Some digital systems have unlimited undo functions. By processing everything in
real time instead, you can adjust the mix more easily without re-loading or undoing (a
timesaver); this means you can always adjust the EQ settings. Of course the more you
process online, the more computing power you need, but it keeps everything adjustable for
later purposes. Latency can be a problem when processing power is low; you might hear
clicks or unwanted audio signals inside your mix when this happens. Use an oversampling
EQ for high-frequency instruments and when working in ranges > 8 kHz; at the least you
should know your EQ does not produce artifacts in any range, especially the high ranges.

First remove, then add. Removing/lowering can be done with a narrow (high Q) filter,
adding/raising with a wide filter. Remember L+C+R and the panning laws. Know the
sweet-spot frequencies of different instruments. First lower, then raise: lower steeply, raise
broadband. Almost any change in one band will affect the sound in other bands. Remember
the level and panning concepts, clear and logical panorama mixing, balanced frequency
distribution across left + center + right, and the frequency ranges, so each instrument can
fulfill its role inside the mix. Many instruments have two main frequency spots; others
operate within a single frequency band. A mix requires at least as many low-cut filters as
there are tracks. A frequency component between 0 and 1 Hz is called DC offset and must
be eliminated; use a DC removal tool for this purpose. The misery area between 120 and
350 Hz is the second pillar of warmth in a song after 0-120 Hz, but it has the potential to
be unpleasant when distributed unevenly (L+C+R, panning laws). You should pay attention
to this range, because almost all instruments will be present here at a dynamic level. Cut all
frequencies below 100 Hz - 150 Hz from all instruments except the bass and bass drum; a
good cut gets rid of the sub-bass artifacts completely.
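DC removal and a basic low cut can both be done with a first-order DC-blocking high-pass, a standard textbook filter (the coefficient r sets how sharp the cut is; 0.995 is an arbitrary but typical choice):

```python
import math

def dc_blocker(signal, r=0.995):
    """First-order DC-blocking high-pass: y[n] = x[n] - x[n-1] + r*y[n-1].
    Removes the 0 Hz (DC offset) component while leaving the audio band
    nearly untouched."""
    out, x1, y1 = [], 0.0, 0.0
    for x in signal:
        y = x - x1 + r * y1
        out.append(y)
        x1, y1 = x, y
    return out

# One second of a 200 Hz tone riding on a constant +0.3 DC offset
sr = 44100
sig = [0.3 + 0.5 * math.sin(2 * math.pi * 200 * n / sr) for n in range(sr)]

clean = dc_blocker(sig)
# After the filter settles, the mean (the DC component) is driven to zero
tail_mean = sum(clean[sr // 2:]) / (sr // 2)
print(abs(tail_mean) < 0.01)  # True
```

Real mixer low-cut filters are steeper (higher-order) than this one-pole sketch, but the principle, removing everything at and near 0 Hz, is the same.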

Graphic Equalizer.
A common type of equalizer is the graphic equalizer, which consists of a bank of sliders for
boosting and cutting different bands (frequency ranges) that progress upwards in
frequency. Normally these bands are tight enough to give at least 3 dB or 6 dB maximum
effect on neighboring bands, and they cover the range from 20 Hz to 20 kHz (the full
frequency spectrum). A typical equalizer for sound reinforcement might have as many as
24 or 31 bands. A typical 31-band equalizer is also called a 1/3-octave equalizer because
the center frequencies of its sliders are spaced one third of an octave apart. Any graphic
EQ becomes more adjustable with more EQ bands.
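The one-third-octave spacing works out as a factor of 2^(1/3) between neighboring center frequencies. The sketch below computes the exact geometric centers starting from 20 Hz; note that commercial 31-band units label their sliders with rounded nominal (ISO) frequencies instead:

```python
def third_octave_centers(f_start=20.0, bands=31):
    """Exact geometric center frequencies for a 1/3-octave graphic EQ:
    each band sits one third of an octave (factor 2**(1/3)) above the last."""
    return [round(f_start * 2.0 ** (k / 3.0), 1) for k in range(bands)]

centers = third_octave_centers()
print(centers[0], centers[-1])  # 20.0 20480.0
```

Thirty-one bands span exactly ten octaves (20 Hz × 2^10 = 20480 Hz), which is why 31 sliders cover the full audible spectrum.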

A graphic equalizer uses a predetermined Q factor, and each frequency band is equally
spaced according to musical intervals, such as the octave (12-band graphic EQ) or one
third of an octave (31-band graphic EQ). Each of these frequency bands can boost or cut.
This type of EQ is often used for live applications such as concerts, because it is simple
and fast to set up. For mixing, the graphic EQ is not precise, because the EQ bands cross
over into each other's ranges and affect them, and it mostly uses a single type of filter.
However, a graphic EQ with more than 20 bands can do a good job, because it is fast and
easy; as a whole, the more EQ bands, the more precise the graphic EQ becomes. For the
overall setting of a track, and for instruments that just need a small correction, the graphic
EQ is best when you need to set up fast and can afford to be less accurate. Because the
graphic EQ's bands are fixed, it also gives you a feeling of understanding and commitment.
Once you know what you can do with a graphic EQ, as you get more experienced, you
might not need so much peaking or parametric EQ. Again, the more EQ bands the better,
like 30 or more; ranging from 0 Hz to 22 kHz, the whole bank of bands also gives you a
view of the spectrum at a glance. Working with the same brand or make of graphic EQ
may give a steadier outcome each time, compared to a peaking EQ. For quality and
reduction purposes the graphic EQ is a good all-rounder. For removal of specific frequency
ranges, use a parametric filter with a high Q factor and a strong boost, sweep it towards the
problem area, and then cut there; we mostly use a parametric EQ for this more exact and
precise job.

Parametric EQ or Peaking EQ.


A parametric equalizer or peaking EQ uses independent parameters for Q, frequency, and
boost or cut. Any frequency or range of frequencies can be selected and then processed.
This is the most powerful EQ because it allows full control over all three variables, and it
is predominantly used in recording and mixing. You can easily hear what is going on when
raising or lowering a frequency band, and you can hunt down where the nasty and the good
parts are, finding out what to cut and what to boost. Very precise EQing can be done using
a narrow, steep filter: like a scalpel you can cut or boost adjustable frequency ranges and
be a sound doctor in EQing. Just remember, more cuts than boosts is the main key to
getting doors open. Cut what is not needed; boost only when necessary. Watch out when
using narrow frequency bands for EQ: depending on the quality and natural behavior of
EQ filters, there can be nasty side effects (like a harsh sound, artifacts). Also, when we
boost high frequencies we can create a harsh sound and artifacts (use an oversampling
quality EQ). Generally, for most EQ boosting we try to use medium or large frequency
bands; this means we use low Q factors more than high Q factors. For cutting we use steep
low cuts and steep filters, just to remove what we need. For quality and reduction purposes
the parametric EQ can be an outstanding tool, although depending on the features (brand,
manufacturer) they can need some flexibility to set up well. Some are outstanding for bass
drum and bass, while others have their focus on vocals, strings, highs, etc.

F - Frequency. Most equalizers are built on peaking filters using a bell curve, which allows
the equalizer to operate smoothly across a range of frequencies. The center frequency
occurs at the top of the bell curve and is the frequency most affected by equalization. It is
often notated as fc and is measured in Hz. When using a cutoff filter, frequencies before or
after this frequency will be cut.
Q - The variable Quality factor, which refers to the width of the bell curve, i.e. the
affected frequency range. The higher the Q, the narrower the bandwidth or frequency range,
and the more scalpel-like the filter (removing, cutting, lowering). A high Q means that only a few
frequencies are affected, whereas a low Q affects many frequencies (boosting, raising, being
gentle). Staying with a low Q safeguards the EQ quality, as most equalizers do not perform
as well at higher Q values. Also, the higher the frequencies we need to EQ, the more we
tend to use a quality or oversampling EQ. The quality of the equalizer is important,
especially when using a high Q, so use the best and leave the rest.
G - Gain (level, amplitude). This determines how much of the selected frequencies should
be present. A boost means that those frequencies will be louder after being equalized,
whereas a cut will soften them. The amount of boost or cut (gain) is measured in decibels,
such as +3 dB or -6 dB. A boost or gain of +10 dB generally amounts to the sound being
perceived as twice as loud after equalization. Boosting above +6 dB can create some nasty
sounds, so use a quality EQ. Generally for boosting we tend to use less and be wide, so
anywhere up to +3 dB (+5 dB max) is great. When boosting more, nasty side effects tend
to enter the sound, so we use a wide filter and a quality EQ.
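The dB figures above relate to linear amplitude through the standard 20·log10 formula. A minimal sketch (the function names are just illustrative); note that +6 dB roughly doubles the signal amplitude, while +10 dB is about twice the perceived loudness:

```python
import math

def db_to_amplitude(db: float) -> float:
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10 ** (db / 20.0)

def amplitude_to_db(factor: float) -> float:
    """Convert a linear amplitude factor back to decibels."""
    return 20.0 * math.log10(factor)

# +6 dB is a factor of about 2.0 in amplitude,
# +20 dB is a factor of exactly 10.0.
```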

Shelving EQ.
Shelving filters boost or cut from a determined frequency until they reach a preset level,
which is then applied to the rest of the frequency spectrum. This kind of EQ filter is usually
found on the treble and bass controls of home audio units and EQ mixers. High pass and low
pass filters cut frequencies below or above a selected frequency, called the cutoff
frequency. A high pass filter allows only frequencies above the cutoff frequency to pass
through unaffected.

In this chart two shelving EQs are used, one to cut the lower frequencies and a second to
raise the highs. With shelving, frequencies beyond the cutoff frequency are attenuated
(boosted or cut) at a constant rate per octave. A low pass filter cuts off all frequencies
above the cutoff frequency; all lower frequencies are allowed to pass through unaffected. A
high pass filter cuts off all frequencies below the cutoff frequency; all higher
frequencies are allowed to pass through unaffected. Common attenuation rates are 6 dB, 12
dB, and 18 dB per octave. These filters are used to reduce noise and hiss, eliminate pops,
and remove rumble (reduction). It is common to use a high pass filter (at about 60 to 80 Hz)
when recording vocals to eliminate rumble. Best used as a reduction or separation tool,
shelving EQ is used to separate instruments, to give each a place in the spectral dimension
(2).
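The per-octave slopes translate into a quick estimate of how much a filter attenuates at a given frequency. A rough sketch, assuming an idealized constant slope (real filters curve gently near the cutoff):

```python
import math

def attenuation_db(freq: float, cutoff: float, slope_db_per_oct: float) -> float:
    """Estimate the attenuation of a high pass filter below its cutoff,
    assuming an ideal constant slope per octave."""
    if freq >= cutoff:
        return 0.0  # in the pass band: unaffected
    octaves_below = math.log2(cutoff / freq)
    return slope_db_per_oct * octaves_below

# An 80 Hz high pass at 12 dB/octave reduces 40 Hz (one octave below)
# by about 12 dB, and 20 Hz (two octaves below) by about 24 dB.
```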

EQ and dimension 2.
The Basedrum and Bass are most common in the lower frequency range, 30 Hz to 120
Hz (180 Hz). Keeping the lower frequencies and lowering or cutting the higher ones
makes headroom for all other instruments to sound clearly. You are trying to give each
instrument a place in the frequency spectrum (instrument ranges) and give them an open
pathway (unmasking). The hi-hat works and sounds better when other
instruments are not in the same frequency range, so the Bass or Basedrum will not affect the
hi-hat with their higher frequencies when those are cut off in the higher frequency range. How
much you cut out or adjust is a creative factor, but keeping Bass and Basedrum separated
(dominating the lower frequency range, 30 Hz to 120 Hz) and keeping other instruments
or tracks away from this range is common. This gives the fundamental
instruments a clear path to play in the lower range of frequencies and stay at center, where
speakers do their best job at producing low-frequency events, without other instruments or
tracks playing in this range or center position. Also, all instruments with similar panorama
settings, like the Basedrum, Snare, Bass and Main Vocals (at dead center), can be set apart
in distance by using EQ to roll off the trebles. Thus, although all are played at
center position, you can still adjust their perceived depth (dimension 3) to separate them a
bit. You can make adjustments to make the bass sound better (quality, boosting), but
remember that when other instruments play in the same range, the combined sound
results in a muddy bass range (30 - 120 Hz). You are aiming for each sound or instrument to
be heard, and heard the way you want it. Leaving open space (headroom) for all instruments is
better than just layering all instruments on top of each other (a muddy, fuzzy mix). Especially
when you're running a clean mix without effects, the placement of instruments is heard best.
So keeping effects away as long as you can and mixing dry is best for sorting out
placements. For quality, often two frequency ranges are boosted; for reduction,
mostly a low steep cutoff filter on single tracks, groups, etc. For distance we tend to cut off
more of the high trebles.

EQ Example.
Every instrument must be clearly heard; progress from the fundamental instruments
towards the non-fundamental instruments. Using EQ cuts on lower or higher frequencies can
free up space (headroom) for other instruments to play and make clear pathways.
Muddiness in a mix happens very fast when you do not pay attention to the mix (separation,
reduction) or do not align according to your stage plan. Especially the misery range, 120 Hz
to 350 Hz (500 Hz), is the second range we need to pay attention to (quality); you can make
some difference here while EQ ing. Adding a reverb will clutter things up very fast, so it is
better to start listening to a clean mix and concentrate on this for a while (dimensions 1 and
2). Be sparing with adding effects until you are quite sure your clean mix (starter mix towards
static reference mix) is running well and can be heard well. Again, anything you add or raise
will muddy up the mix, and anything you cut or lower will unmuddy it. Still, you cannot
prevent muddiness altogether (masking), so don't get stuck on it; setting up a mix must be
a bit of routine (planning the dimensions and having a stage plan ready-made). Starting
clean is best and can work fast as a routine; later on you can work more freely and add
more. A good clean start according to these rules means better end results.

Even when adding effects we tend to use EQ to control the signals and keep everything according to the
stage planning (dimensions, quality, reduction, headroom, etc.). EQ is the first effect or tool
to reach for after fader levels and balances in the panorama are set up. So you can be
(almost) sure that you will use some EQ on each track; especially, use as
many low cuts as there are single tracks. Again, how your instrument will sound is a matter of
adjusting EQ until you are happy with the sound. Remember there are two ways we can use EQ ing as a
tool: quality and reduction. A guitar can sound thin when played in solo mode yet
sound very good inside a mix. When a sound is recorded badly and is unattractive, it is
likely you cannot change a lot with EQ or by correcting it in any other way, so it
is better to record the best sound you can. EQ can bring out any instrument's quality. But
also with the same EQ you can make headroom inside a mix by cutting out what is not
needed and at the same time make the fundamental sound ranges heard more clearly. Making
your mix less muddy and clearer (in the lower frequency range) starts with
separating what you really need to hear and cutting out what you do not need. The lower
frequencies give more power and are really the focus of the mix; they
must stay at center all the time, so when using a stereo EQ watch out for swaying more left
or right. The higher frequencies are also important to watch, but they do not really add to the
overall power of your mix; they mainly carry rhythmical and compositional intent and are a
good measure for the distance of individual instruments.

Another thing is being fitted with
good sounding speakers or monitors while adjusting EQ. Even headphones need to be of
pure quality. Remember that on small monitors the frequency range of roughly 0 Hz to 50 Hz
will not be heard at all. This means you will not hear those frequencies as loud as your mix is really
putting out, only because they do not come through your speakers. Not hearing lower
frequencies correctly out of your speakers can mean you counteract this failure by
pumping up the lower frequencies. When listening on good speakers that play the lower
frequencies well, you can avoid this mistake and not add more than you need. A
bigger bass speaker, or a better frequency range from your speakers, will
improve your mixing and let you hear correctly what is being played. Monitor speakers also
tend to be more natural when their whole frequency range is linear, and the room you
listen in is important as well. For monitor speakers to really shine, they need a flat
frequency spectrum. You can't EQ what you do not hear correctly played. Get good
monitor speakers, or when you listen on headphones get a good pair. This can be costly, but
the best equipment is needed. Headphones are cheaper and for EQ ing they have a good
frequency range. Though headphones can be less effective at playing reverberation sound, as
they are close to our ears and do not include the room reverberation, they
can be a good tool for EQ and compression, unmasking, correlation and balance,
dimensions 1 and 2. Listening on good speakers is important; when you listen on a home
stereo set you are missing out on hearing the correct amount of frequencies played. Get
good monitor speakers instead. Good equipment starts with good monitor speakers that
represent frequencies well from low to high and are as flat as can be. EQ ing is almost
impossible when you can't hear what you're doing. Investing in speakers and a good soundcard
or mixer helps you hear what is being played. Investing in noise-free, quality
equipment will help you hear what your mix is about, without interference. Only then
can you hear what you are doing, and thus use quality or reduction without compromise.

Common Frequency Ranges.


Frequency Range 0 - 30 Hz, Sub Bass, Remove.
Frequency Range 30 - 120 Hz, Bass Range, Bass and Basedrum.
Frequency Range 120 - 350 Hz, Lower Mid-Range, Warmth, Misery Area.
Frequency Range 350 Hz - 2 KHz, Mid-Range, Nasal.
Frequency Range 2 KHz - 8 KHz, Upper Mid-Range, Speech, Vocals.
Frequency Range 8 KHz - 12 KHz, High Range, Trebles.
Frequency Range 12 KHz - 22 KHz, Upper Trebles, Air.
Brilliance, > 6 KHz.
Presence, 3.5 KHz - 6 KHz.
Upper Mids, 1.5 KHz - 3.5 KHz.
Lower Mids, 250 Hz - 1.5 KHz.
Bass, 60 Hz - 250 Hz.
Sub Bass, 0 Hz - 60 Hz.
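As a quick sanity check while mixing, the first table above can be expressed as a small lookup helper; a sketch (the function name is just illustrative, band edges taken from the table):

```python
def frequency_band(freq_hz: float) -> str:
    """Name the common frequency range a frequency falls into,
    using the band edges from the table above."""
    bands = [
        (30, "Sub Bass"),
        (120, "Bass Range"),
        (350, "Lower Mid-Range"),
        (2000, "Mid-Range"),
        (8000, "Upper Mid-Range"),
        (12000, "High Range"),
        (22000, "Upper Trebles"),
    ]
    for upper_edge, name in bands:
        if freq_hz < upper_edge:
            return name
    return "Above audible range"

# frequency_band(60)  -> "Bass Range" (Bass and Basedrum territory)
# frequency_band(200) -> "Lower Mid-Range" (the misery area)
```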
Compression.

Supporting transients and sustain, increasing the level of quieter sections. Compression is referred
to as a dynamic processing tool, not an effect. A compressor reduces the dynamic range of
an audio signal when the amplitude exceeds the threshold. The amount of gain reduction is
determined by the Attack, Release, Threshold and Ratio settings. The compressor works like
an automatic volume fader; any signal going above the threshold is affected. It is better to
compress frequently and gently rather than rarely and hard.

A compressor is a good tool to reduce instrument peaks and give some dynamics
(headroom) back to the mix (reduction). The major issue with a compressor is pumping
(quality). We as humans like our music to pump, just as we like our hearts to continue
pumping and beating, and just as we like to pump it loud. Pumping can be achieved by single
band or even multiband compressors to decent effect. The only time we actually hear a
compressor at work is when it is hitting hard at its threshold levels; most likely you have
then gone too far and must be more subtle. The compressor is a subtle effect and is only
really heard when pumping starts to sound. We tend to compress more evenly with a
low ratio, and to a lesser degree scrape off peaks with a limiter (which is a
compressor with higher settings for ratio, etc.).

The setting of the threshold level is important: anything that goes over the threshold is
reduced by a certain amount of level. This reduction is progressive and increases as the
input signal goes further over the threshold level. By setting the attack and release times
of the compressor, you can play with how fast the compressor acts in applying the
reduction and how it releases the reduction after the signal falls below the threshold
level. By setting attack and release we can affect transients or sustaining sounds. By
setting the ratio we can adjust the amount of compression.

This is simple ADSR volume compression. Sometimes an envelope effect can work out
greatly for instruments, so refer to your instrument's settings first. With the envelope from
the instrument's ADSR we can achieve a good sound before even using compression. A
peak compressor with a threshold of -10 dB, an attack time of 10 ms and a release of
100 ms will reduce any signal that goes over -10 dB and lasts longer than 10 ms; after the
signal drops below -10 dB the reduction will gradually fall away over 100 ms. The same
procedure follows whenever the threshold level is reached again.
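Attack and release behavior like this is typically implemented as one-pole smoothing of the gain envelope. A sketch of a common coefficient formula (conventions vary between compressors, so treat the exact constant as an assumption):

```python
import math

def smoothing_coeff(time_ms: float, sample_rate: float = 44100.0) -> float:
    """One-pole smoothing coefficient: the envelope covers ~63% of a
    level change within time_ms (a common attack/release convention)."""
    return math.exp(-1.0 / (time_ms * 0.001 * sample_rate))

# Per sample the gain envelope is then updated as:
#   env = coeff * env + (1.0 - coeff) * target
# A longer time_ms gives a coefficient closer to 1.0, i.e. slower movement,
# so a 100 ms release lets the reduction fall away gradually as described.
```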

Most compressors have the following controls, though they may be labeled slightly
differently. Mostly used on an instrument's general RMS level, a general compressor setting
is subtle: just try to remove some hard signals and make some headroom again for
other instruments, or adjust the transients, sustain, RMS level, or peaks of the original
sound.
Threshold - The level at which gain reduction begins to happen, usually measured in
dB. Lower threshold values increase the amount of compression, as a smaller signal is
required for gain reduction to occur.

Ratio - The ratio of change between input level and output level once the threshold is
reached. For example, a ratio of 4:1 means that an input level increase of 4 dB above the
threshold only results in an output level increase of 1 dB; the result is a reduction of 3 dB.
The ratio is the amount of reduction. When the ratio is set at 1:1 there is no reduction
when the threshold is passed; the compressor is bypassed. With 2:1, each extra 1 dB of
signal over the threshold is halved and compressed to 0.5 dB, and so on. The
higher the ratio, the more compression and reduction is done. A limiter is a
compressor with high ratio settings, like 10:1 to 50:1 or infinite. From a brickwall
limiter you would expect everything that goes over the threshold level to be reduced to
(close to) the threshold level, as the ratio is so high.
A compressor with ratios between 1:1 and 5:1 is more subtle than a limiter.
Attack Time - The amount of time it takes for gain reduction to take place once the
threshold is reached. The ratio is not applied instantaneously but over a period of time (the
attack time), usually measured in microseconds or milliseconds. Use longer attack times
when you want more of the transient information to pass through without being reduced
(for example, allowing the initial attack of a snare drum). Especially for keeping the
transients, the attack can be set to > 10 ms or even more. This can enhance rhythmic and
compositional intent and enhance the quality of our stage plan.
Release Time - The amount of time it takes for the gain to return to normal when the signal
drops below the threshold, usually measured in microseconds or milliseconds. With a fast
attack and a fast release you will sustain the end part of a note more (sustaining a bass
note or bassline, to bring out longer-standing bass notes), thus reducing the transients and
therefore boosting the parts sounding after the transients (sustain).
Makeup Gain - Brings the level of the whole signal back up to a decent level after it has
been reduced by the compressor. This also has the effect of making quiet parts (that are not
being compressed) louder (see Release). For mixing purposes, when compression has
reduced the original level, we can boost with make-up gain to get the signal back up to its
original level. Sometimes a compressor has automatic make-up gain. For mastering
purposes we tend to stay away from using make-up gain.

Hard knee and soft knee refer to the way reduction takes place above and around the threshold.
A soft knee is more curved and a hard knee sits at a sharp angle. Soft knee tends to be more
natural/analog and hard knee tends to be more aggressive/digital.

Opto or RMS: Opto behavior is more digital and straightforward, for percussive
instruments and drums (fast); RMS for the rest (slower).
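The threshold/ratio arithmetic above can be sketched as a static gain curve; a minimal illustration (the function name is just for demonstration, and attack/release timing is deliberately ignored here):

```python
def compress_db(input_db: float, threshold_db: float, ratio: float,
                makeup_db: float = 0.0) -> float:
    """Static compressor curve: levels above the threshold are scaled
    down by the ratio; attack/release smoothing is ignored."""
    if input_db <= threshold_db:
        return input_db + makeup_db       # below threshold: unaffected
    over = input_db - threshold_db        # dB above the threshold
    return threshold_db + over / ratio + makeup_db

# 4:1 ratio, threshold -10 dB: an input of -6 dB (4 dB over) comes out
# at -9 dB (1 dB over), a reduction of 3 dB, matching the Ratio example.
# A very high ratio (50:1) pins the output close to the threshold,
# which is limiter behavior.
```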

Side chain compressors.


Side chain compression can solve mixing problems when two sounds are played together
on two different tracks inside a mix (masking, for instance when a bass note and a bass
drum sound together in the same frequency range). Split-mode side chain compression is the
most scalpel-like dynamic shaping tool there is: you compress dynamically according to a
key input, as you can choose which frequency range is compressed by your keying
value. On vocals, for instance, compression can reduce some of the difference between loud and
soft parts, correcting sudden louder parts of the vocals that jump out. Maybe you need to
compress the acoustic guitar only when the vocalist sings? To create some headroom
and unmasking, you would like the loudness to be reduced for a short instance of time
whenever a part goes over a set loudness level. Sometimes a Bass note and the Basedrum
appear at the same moment, so the bass note overcrowds the Basedrum for a short
while. A nice trick is reducing the Bass only when the Basedrum and Bass play at the same
moment; this makes the Basedrum clearer and will not affect the bassline as
much. This can be done manually by editing, muting or cutting out bass notes, or with a
side chain compressor trick. For this instance we could use a side chain compressor to
correct the problem by reducing the bass note when the Basedrum goes over a certain
threshold, thus temporarily ducking the bass note. This keeps the boom of your
Basedrum audible and unaffected, as it is the fundamental reference sound (frequency-wise
and rhythmically) that can be crucial to your mix.
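The kick-ducks-bass trick boils down to a gain decision driven by the key input; a toy illustration assuming envelope values in dB (the names and default values are just for demonstration, and real side chains smooth the gain change with attack/release):

```python
def duck_bass(bass_db: float, kick_db: float,
              key_threshold_db: float = -20.0,
              reduction_db: float = 6.0) -> float:
    """Side-chain style ducking: when the kick (the key input) exceeds
    the threshold, the bass level is reduced by a fixed amount."""
    if kick_db > key_threshold_db:
        return bass_db - reduction_db  # kick present: duck the bass
    return bass_db                     # kick quiet: leave the bass alone

# While the kick hits (-6 dB on the key input), a -12 dB bass note is
# ducked to -18 dB; between kick hits the bass passes through unchanged.
```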

Multiband Compressors.
This compressor is mainly used at the mastering stage but can also come in handy while
mixing. Most multiband compressors have 4 bands. Each band has its own frequency
range, and the reduction of each band can be set up separately. For instance, when
controlling the bass drum or bass, we can adjust low, mid and high with different
compression settings.

Normal Multiband Default settings.


Band 1, 0 - 120 Hz, Power.
Band 2, 120 Hz - 2 KHz, Warmth.
Band 3, 2 KHz - 10 KHz, Treble, Upper Harmonics.
Band 4, 10 KHz - 20 KHz, Air.

Adjust the bands when needed, for instance.


Band 1, 0 - 120 Hz, Power, first low band.
Band 2, 120 Hz - 350 Hz, Misery range, second low band.
Band 3, 350 Hz - 8 KHz, Mid-range.
Band 4, 8 KHz - 20 KHz, Air, Trebles.
Each band acts the same as a single band or normal compressor, except that the
spectrum is split into multiband ranges that can be adjusted separately. Now you can control
the bottom end without affecting the higher frequencies while compressing. Each multiband
crosses over into the next multiband. You can understand that with vocals, which can be
expected to be handled carefully, maybe only the mids are compressed a bit, without
harming the crispy highs or lows. For mixing purposes the multiband compressor can come
in handy, but setting up a 4-band multiband compressor can be a fiddly job. With 4
compressors running at the same time, you might not hear as well what you're doing.
Because of this complexity, multiband compressors are most likely only used for mastering
purposes and scarcely for mixing, but they can become a handy tool when resorting to a
trick to solve problems. Especially when you need split signals to be controlled but do not
want copied instruments, a multiband compressor can help solve things for you in the mix.
Avoid using it on single instruments, only as a last resort; on groups use it only when it has
the desired effect without much fiddling around. Multiband compressors tend to show less
pumping, but this solely depends on what frequency band or instruments you're working
on. To control pumping, better use a single band compressor instead; controlling 4
multibands can be a hassle.

Compressing.
Compressors on individual instruments or tracks are almost always used as an insert effect
(pre-fader) and (almost) never as a send effect, because their main function is to change
the signal directly. Compressors can be inserted at single instrument track level or as an
insert on groups or sends. What we try to achieve is a cleaned-up and better sound (better
transients, sustain and RMS levels than before), so make sure what goes into the compressor
is as clean as can be. Prior to compression we can place an EQ for cleaning
purposes, or use manual editing. Popping sounds and air noises are best rolled off with a low
cut, 0 Hz - 35/50 Hz, up to 120 Hz for non-fundamentals. A gate can also help clear up the
input signal, as can automated or manual muting. When recording you can use compression
just to scrape off some peaks; the real compression can be done later inside the mix. Maybe
you have already placed an EQ for cleaning up (quality, reduction); then place the
compressor behind the EQ (all pre-fader). If you're working on a digital system
you will have more places to insert an effect: on a track or instrument, a send or a
group. When you place a compressor as an insert effect, do this in effect slot 2, so effect slot 1
stays free for EQ (all pre-fader). Compression is highly dependent on the source material,
and as such there is no preset amount of compression that will work for any given material.
Some compressors have presets for certain types of audio, and these can be a good
starting point for the inexperienced, but remember that you will still have to adjust the input
and threshold for them to work properly. Because every recording is done with different
headroom and dynamics, every compressor will also have its own sound and main purpose.
The main purpose of the compressor in mixing is to give some structure and dynamics to the
sound that passes through it.
Compression is done by controlling the dynamics (level) of the input by compressing the
output. Basically there are some good reasons to use a compressor. For controlling the
transients (start of each note, 0 ms - 25 ms) and controlling the sustain (30 ms >), a
compressor can do a good job of making certain instruments clearer and working them into
the dimensions you need (quality). Also, compressing a loud part gives softer parts
more volume (level). This is why we need to clean the input signal of unwanted noise, or else
the compressor will only make it louder. Pops and clicks in the lower frequencies can
make the compressor react when you do not want it to. So be sure you are
delivering a good signal into the compressor; otherwise try to remove problems with EQ ing
upfront, a gate, or even edit the audio manually (removing pops, clicks, etc.). The ratio setting
for individual instruments is about 4:1 to 10:1, don't be shy. Setting the ratio lower
makes you use the threshold more. Setting the ratio too high, the compressor almost
starts to act as a limiter. By chance the only limiter used in a mix is on the master bus
(a brickwall limiter for scraping off some peaks), so ratios like this are out of order on group
tracks and individual tracks or instruments. We can use general RMS compression on a
group track to join or weld the individual tracks together even more (also use some
compression on the sends), as well as summing. With a ratio setting from 1:1 to
4:1 (less than when working on individual instrument tracks), the compressor is more
subtle and welds (blends) the group into a layer. For mastering purposes a ratio
from 1.5:1 to 3:1 is commonly used.
Very short release times emphasize the quieter sounds after the transients have passed.
This is handy with bass, guitar or any other instrument that does not hold its sustain very
well; you can get each note to sound straight through to the end this way (sustain). Set the
release time for rhythmical content to the tempo, a measure or a beat.
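Syncing release times to tempo is simple arithmetic; a small sketch (the function names are just illustrative):

```python
def ms_per_beat(bpm: float) -> float:
    """Milliseconds per quarter-note beat at a given tempo."""
    return 60_000.0 / bpm

def ms_per_measure(bpm: float, beats_per_measure: int = 4) -> float:
    """Milliseconds per measure, assuming a simple meter."""
    return ms_per_beat(bpm) * beats_per_measure

# At 120 BPM one beat lasts 500 ms and a 4/4 measure 2000 ms, so a
# tempo-synced release could be set to 500 ms (beat) or 250 ms (eighth).
```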
When you reduce the peaks of a signal and then add the same relative amount of makeup
gain, you are raising not only the instrument by x dB but the noise floor as well. This is
why we need cleaned-up material. While usually not an issue in quality recordings, it can
become apparent when compressing quiet acoustic recordings or recordings with a low
signal-to-noise ratio. The computer running in the background while recording suddenly
becomes more apparent, or you forgot to turn off the ventilator in your living room.
Unheard sounds can go from being unnoticeable to being an annoying hum when you
compress and raise the makeup gain, even when using EQ. That is why the input must be as
clean as possible and cleared of unwanted sounds.
The pumping sound you might hear occurs when the compressor initiates but then has too
fast a release, and the rest of the mix comes up too fast after the hit (fewer transients and
more sustain). To fix this, use a slower release, lower ratio, slower attack or higher
threshold. They all have a different effect, so listen and decide what sounds best and gives
you what you are trying to achieve. When pumping is noticeable, it becomes apparent after
a while; when pumping occurs, it is likely we have gone too far. If you train your ear,
pretty much all radio signals have a certain "acceptable" amount of pumping. When the
compressor is already set, do not change the input signal, because this affects the
threshold placement, which then needs to be set again. This is why we first make use of
level, balance and EQ before adding a compressor. Hunt up and down to hear the correct
setting of a compressor. Listen and go extreme before backing down to a good sound; it is
the only way to really hear the reduction while setting up a compressor. Do not fiddle
around with a 5 dB change of threshold; go way lower or way higher, or crank or lower
the ratio, and listen to the difference (pumping or not). A good rule: when you hear the
compressor start to work, you have gone too far. Experiment. Generally you will get better
results by learning to use compression and understanding how the controls affect the audio
signal. Experiment, listen and visualize, then apply. When compression is not working to
adjust levels, use event fader level or balance automation (unmasking), even after the
compressor. Automation of level (the fader) is a kind of compression that can be done
manually, maybe the first choice in line when overall compression does not seem to
work out. Use the mute button, for instance. Compression is easily available, but the
original audio must have a good, even sound before entering the compressor. In most
cases MIDI notes can be raised or lowered in volume/level by manual editing, samples
can be adjusted manually, and audio on a track can be edited; maybe you take the
time to do this note by note, level by level. The more even in level or controlled the
original audio is when it enters the compressor (RMS, peaks, noise, artifacts, etc.), the less
work the compressor has to do (fewer artifacts and less pumping), and the better the result.
Limiter.
A limiter is nothing more than an automated volume fader. Commonly a limiter tops
(scrapes off) the signals. Unlike its big brother the compressor, the limiter has fewer buttons
and knobs to play with; compared to a compressor, a limiter has a ratio setting with a high
value, so the compressing power is high. Limiters work well on a whole mix on the master
track. A good in-between version is the peak compressor, combining the functions of a
compressor and a limiter. A limiter basically reduces all signals that come over the set
threshold. It is mostly used to scrape off some peaks on the master track, and is
uncommonly used on groups or single tracks, but for the same purpose it is used on the
master bus fader to prevent overs on the main mix. For scraping off the peaks, set the
threshold to -0.3 dB, or a reduction amount of 1 dB to 2 dB, so it does not hurt the
transients. Limiters can also have artistic and creative purposes that are uncommon.
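The static behavior of a brickwall limiter is just a hard ceiling; a toy sketch in dB terms (ignoring the look-ahead and release smoothing real limiters use to stay transparent):

```python
def brickwall_db(input_db: float, ceiling_db: float = -0.3) -> float:
    """Brickwall limiting as a static curve: nothing comes out above
    the ceiling; everything below it passes through untouched."""
    return min(input_db, ceiling_db)

# A +1.2 dB over is pulled down to the -0.3 dB ceiling, while a -6 dB
# signal passes unaffected, so only the peaks are scraped off.
```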

Gate.
A gate basically cuts all signals that fall below the set threshold. A gate can be
compared to a compressor, except that instead of reducing the signal by measuring it, the
gate cuts all signals below the threshold to inaudible. For removing unwanted material
(cleaning and reduction) a gate can make a difference. For rhythmical sound content (drum
set, percussion, etc.) a gate can cut off the reverb or any other effect according to tempo. A
gate can also cut off sustaining sounds. For instance, when a pre-recorded snare has room
sound or sustaining sounds recorded into it, a gate can clean or clear the reverberation or
sustain by only passing the first transient sound. After the gate you will have a drier snare,
and you can now create the room by adding a reverb that fits the dry snare signal. There are
endless creative quality and reduction possibilities here. Delays and gates are often synced
to the tempo of the track. Use the mute button for composition-wise intent or manual
gating.
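In its simplest form a gate is a binary pass/mute decision against the threshold; a toy sketch (real gates add attack, hold and release times so the cut is not a hard click):

```python
def gate(sample: float, envelope_db: float,
         threshold_db: float = -50.0) -> float:
    """Noise gate: when the signal's envelope is below the threshold,
    the sample is muted; otherwise it passes unchanged."""
    return 0.0 if envelope_db < threshold_db else sample

# A sample whose envelope sits at -60 dB (room noise, reverb tail) is
# muted, while the same sample at a -20 dB envelope passes through.
```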
Finishing a first starter mix.
For now we have discussed all the features for starting a mix towards a static reference mix.
Once you get the hang of starting a mix, this will be a good basic setup. Mixing is more
than just setting up all faders and knobs, but for the starter and static mix we can only give
some guidelines and proven togetherness. Starting a mix, we like to stay in dimensions 1
and 2 and use the common tools available. We try to avoid dimension 3 for now. Keep on
mixing with the tools for dimensions 1 and 2 until satisfied. Then we will discuss dimension
3, as we also need depth to make our stage plan come true.

The Static Mix Reference.


But most likely you want the best out of your mix and will be adding more effects later
on. Do anything to make the whole sound better. Using EQ, compression, delay, reverb
(discussed later on), a limiter or any other device or effect will change the way your mix
sounds (the three dimensions, your stage plan). Remember, whenever you add something to
your mix you are changing the levels, so check, adjust and re-check whenever you can. It is
quite OK to mix freely and set faders and knobs however you like; as long as it sounds
good, it must be good. But keeping headroom (open space for adding) and keeping the
VU-meter below 0 dB is important. It is common for beginning mixers to pump all levels as
loud as they can go; this is not what you're looking for. Louder can seem better, but it is
the same mix, and we will pay attention to overall loudness while mastering. Keeping the
summed level on the master fader VU-meter in check keeps you ready for later mixing
purposes. If you are happy with the togetherness of your mix, you can raise all track faders
so that the VU-meter sits closer to 0 dB. Remember that doing this does not change the
sound, only the level (and raising too high will produce artifacts); you will just lose
headroom. Keeping anywhere from -4 dB to -14 dB of headroom is accepted good practice in
mixing. Because the mastering stage has plenty of power to get your mix sounding as loud as
it can be, care less about loudness levels when mixing and care about how your mix sounds
as a whole. Use quality and reduction first (apply the dimensions in order). Care about how
your stage plan is perceived. So once again, to hammer it down: you are mixing now, so
separation as well as togetherness is all that matters. Loudness waits until we have
finished the mix and go for mastering. As a rule, for a good starter mix we tend to stay
inside dimensions 1 and 2, and only add dimension 3 when we are satisfied with the earlier
dimensions (the static mix). Resort first to panning, level, EQ, compression, gates, mutes
and limiters, then reverb, delay and overall effects, in the correct order.
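The headroom figures above are plain dBFS arithmetic. A small sketch (the helper names are our own) showing how a linear peak level maps to dB, and how much linear gain lifts a mix peaking at -14 dB up to -4 dB:

```python
import math

# dBFS of a linear peak level (1.0 = full scale = 0 dBFS).
def peak_dbfs(peak):
    return 20.0 * math.log10(peak)

# Linear gain needed to move a mix peaking at current_db up to target_db.
def trim_gain(current_db, target_db):
    return 10.0 ** ((target_db - current_db) / 20.0)

print(round(peak_dbfs(0.5), 1))          # -6.0  (half scale is about -6 dBFS)
print(round(trim_gain(-14.0, -4.0), 2))  # 3.16  (raising everything +10 dB)
```

Raising all track faders by the same gain, as the text says, changes only the level on the meter, not the balance of the mix.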

Review of our start.


In mixing, an EQ and compressor, limiter and gate are good tools to adjust the
mix before throwing in more effects and more sounds. Together with fader level and
balance, EQ and compression are the most used carving tools for a mix (starter mix
towards a static reference mix). Basically, EQ does a good job of reducing or boosting
frequencies across the whole frequency spectrum. Compression, limiting and gating give you
something an EQ can't: affecting only signals that cross a defined threshold, thus
controlling transients and sustain. For overall level, use the level faders first, along
with manual editing, muting and panning the panorama (separation). Use EQ when you need to
cut (separation) or raise overall instrumental frequency ranges (quality). Use compression
when parts of instruments peak at certain times and need to be lowered to give dynamic
range back, keeping things tidy and together (headroom). Use a compressor for transients
and sustain (quality). Use a gate to really cut unwanted events. Use a limiter to scrape
off some peaks. Use manual editing for removing pops, clicks, etc. (sometimes breathing
noises on vocals). A good start is giving each track or instrument a place in the available
spectrum (stage planning). These are good tools to get some headroom back by reducing or
scraping peaks. Try to imagine what the whole mix can sound like; after you have set up a
mix a few times, you will get the hang of it. Remember to get separation and togetherness
out of your mix: reduce frequencies that are not needed per instrument. Try to stay natural
and close to the original sounds, keep what is needed and wipe away what should not be
heard (wipe away more, raise less). Try to transmit natural signals to the listener, so our
brain does not get confused (dimensions, 3D spatial information, stage planning). Sometimes
this means using EQ to simply cut off the outside ranges of an instrument with shelving low
or high cuts (reduction). Sometimes the internal range of the instrument needs to sound
better (quality): use EQ for overall editing of the sound, and compression (or gating and
limiting) for the time- and loudness-related peaks you need to correct (transients,
sustain). Do not forget to balance each instrument from left to right and to keep track of
the VU-meter, correlation meter, goniometer and spectrum analyzer. Do some checks and
re-checks against your reference tracks, like the bass drum or whichever track you choose
as the loudest reference track. Solo tracks as well as listening through the mix as it sums
towards the master bus fader, the last output.
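The difference between the three threshold tools described above can be sketched as static gain curves, here in simplified dB terms (a real unit adds attack and release smoothing, which this sketch deliberately omits):

```python
# Static gain curves: a compressor reduces levels above threshold by its
# ratio, a limiter is effectively a compressor with an infinite ratio,
# and a gate attenuates everything below its threshold.

def compressor_out(in_db, threshold_db, ratio):
    if in_db <= threshold_db:
        return in_db                                  # below threshold: untouched
    return threshold_db + (in_db - threshold_db) / ratio

def limiter_out(in_db, ceiling_db):
    return min(in_db, ceiling_db)                     # ratio -> infinity

def gate_out(in_db, threshold_db, range_db=80.0):
    return in_db if in_db >= threshold_db else in_db - range_db

# A peak 6 dB over threshold through a 3:1 compressor exits 2 dB over.
print(compressor_out(-4.0, -10.0, 3.0))   # -8.0
print(limiter_out(1.5, -0.3))             # -0.3
print(gate_out(-60.0, -40.0))             # -140.0
```

This is why the text pairs them: the compressor and limiter scrape peaks back into the headroom, while the gate cleans up what sits below the useful signal.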

Take into account that mixing is always debated and can be explained in different ways,
because mixing is a creative thing. But having some guidelines and working by them will
increase effectiveness, especially knowing panning laws, stage planning, where and what to
cut, masking and unmasking, dimensions and 3D spatial hearing; the more natural the better.
Understanding how to do things takes time and is a repeated learning process; in the end it
is pure experience that determines the speed and time needed to mix towards a starter,
static and dynamic mix. You will mix well or badly, but you will continue to learn from
doing so. The human brain also needs time to take in all this information, processing and
ordering it into something you can understand later on. We get tired when listening to loud
music for long periods. Taking in too much information and simply working harder will not
get you there any faster. Take some time off and give your fatigued ears a good rest; the
mix will sound different on another day, and you will make better decisions. Each time you
will learn for a while, and then some realization sets in afterwards. Then you will
understand the whole picture.

What you're aiming for is separation while still keeping some togetherness.
Remember it is better to reduce than to add: cut away what is not needed, and the headroom
you create will be rewarded when you need to add things to the mix later on. Getting things
to sound louder each time you mix is not important; we do that later while mastering. So
far we have worked mostly on dimensions 1 and 2, and although we have discussed dimension 3
we have not really applied it in an example. Here we introduce dimension 3 and some more
effects, being less restricted and more creative with the mix (Static Mixing).

Brass.

Panorama: Horns, trumpets, trombones and tuba. Depending on their frequency range and
placement, decide where they fit in, scattered across the whole panorama. Place lower
instruments more centered and higher instruments more outwards (panning laws).
Frequency Range:
Trumpet Note Range: 165 Hz (E3) to 1047 Hz (C6).
Trombone Note Range: 82 Hz (E2) to 698 Hz (F5).
French Horn Note Range: 65 Hz (C2) to 698 Hz (F5).
Tuba Note Range: 37 Hz (D1) to 349 Hz (F4).
Piccolo Note Range: 587 Hz (D5) to 4186 Hz (C8).
Flute Note Range: 262 Hz (C4) to 2349 Hz (D7).
Oboe Note Range: 247 Hz (B3) to 1760 Hz (A6).
Clarinet Note Range: 147 Hz (D3) to 1865 Hz (Bb6).
Alto Sax Note Range: 147 Hz (D3) to 880 Hz (A5).
Tenor Sax Note Range: 98 Hz (G2) to 698 Hz (F5).
Baritone Sax Note Range: 73 Hz (D2) to 440 Hz (A4).
Bassoon Note Range: 62 Hz (B1) to 587 Hz (D5).
Cut, 0 Hz to 120 Hz (180 Hz), Reduction, Separation.
Between, 120 Hz to 550 Hz, Power, Warmth, Fullness.
Between, 1 KHz to 5 KHz, Honky, Contrast.
Between, 6 KHz to 8 KHz, Rasp, Harmonics, Solo.
Between, 5 KHz to 10 KHz, Shrill.
Roll Off, 12 KHz, Distance, Reduction.
Quality: Fullness at 120 Hz to 240 Hz, shrill at 5 KHz to 10 KHz. Roll off some highs
according to distance.
Reduction: For the higher instruments like trumpets and some trombones, cut a lot from 0
Hz to 180 Hz. For lower instruments like tuba and horns, cut a lot from 0 Hz to 120 Hz. We
do not like the brass instruments behind the drummer, so do not roll off too much.
Compression: The trumpet is by far the loudest of the horns, with a large dynamic range
that can reach from soft melodies up to stabs and shouts, so overall levels are not very
constant. When dealing with EQ and compression, you'll often treat the horn section as a
single unit (group). Apply a good amount of compression on peaks, but stay away from really
compressing the main parts.

Reverberation: There is something that adds to the excitement of a horn section when you
hear it from a distance, interacting with the room. We tend to use a roomier reverb sound,
such as a hall. Reverb and delay work very well with horns.
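The note ranges in the table above follow from the equal-tempered pitch formula, f = 440 × 2^((n−69)/12), where n is the MIDI note number and A4 = 440 Hz. A quick sketch to check table values:

```python
import math

# Semitone offsets within an octave, C = 0.
NOTE_INDEX = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
              'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def note_hz(name, octave):
    """Equal-tempered frequency of a note, tuned to A4 = 440 Hz."""
    midi = 12 * (octave + 1) + NOTE_INDEX[name]   # MIDI note number
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)

print(round(note_hz('E', 2)))   # 82   (trombone low E)
print(round(note_hz('C', 6)))   # 1047 (trumpet high C)
print(round(note_hz('F', 4)))   # 349  (tuba top F in the table)
```

Knowing an instrument's fundamental range this way tells you where its low cut can safely start.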

Percussion.
Panorama: There is no basic placement for panning on percussion and cymbals. Percussive
elements are often panned left or right and kept away from the center, set as much outwards
as possible. A stereo expander can bring the percussion elements even more outwards. We
like to pan percussion more left or right, not centered: bongos to the left and far back in
distance, congas far to the right and also far back. Panned outwards they remain unmasked
by other signals; set them in distance when other instruments already overcrowd the stage.
Frequency Range:
Cut, 0 Hz to 120 Hz, Reduction, Separation.
Cut, 200 Hz to 400 Hz, Higher frequency percussion.
Between, 200 Hz to 240 Hz, Resonance.
Around, 5 KHz, Presence Slap.
Between 10 KHz to 16 KHz, Air, Crisp Percussion.
Quality: Resonance at 200 Hz to 240 Hz, presence slap at 5 KHz. For distance depth, roll
off some high trebles to send them more to the backstage.
Reduction: Roll off some lows from 0 Hz to 120 Hz or more; as percussion is panned
outwards it does not need much low-end transmission (panning laws). Roll off lower
frequencies according to stage placement. Percussion and cymbals can be cut from 1 KHz and
higher. Cut with a shelving filter at 800 Hz and higher (1 KHz - 4 KHz), but be careful,
otherwise they will sound harsh and unnatural.
Compression: Compression can help bring forward the transients while reducing the
sustaining sounds (keep some headroom). Use Opto mode for all percussive drums.
Chorus: Often used.

Reverberation: As for reverbs or delay, the reverb placed for the snare (group, send) may
also be useful for percussion; we tend not to use the ambient reverb of the whole drum set,
maybe just a little to glue it into some togetherness with the rest of the drum set.
Percussion asks for a longish reverb with little pre-delay and a little high-frequency cut,
or damping from the reverb setting. For the percussion (group, send) use a medium-sized
room of 1.5 to 2 seconds reverb time, a pre-delay of about 15 ms and a medium roll-off in
frequency (damping or EQ). The masking effect might hide the reverb, but set the loudness
of the reverb high enough to get some 3D spatial information transmitted. A stereo expander
after the reverb signal and some automation may solve the hiding problem when panned
outwards; just watch the correlation meter as you use the stereo expander to widen. A delay
can help sweeten the percussion, but only when you sync the delay to tempo and keep
pre-delay short. For upfront percussion use no pre-delay or less than 10 ms, checking the
rhythm (use a high-treble roll-off to set distance instead). Transients are most important
for percussive sounds; even when percussion is placed backstage we need the original
transients heard for the rhythmical content. When percussion instruments are consciously
placed toward the rear, we need a large reverb with some pre-delay and filtered trebles.
Reverb can be generously applied here, so the masking effect stays away or is overpowered
and the 3D spatial information comes across. Reverb layering, for instance, gives
percussion tracks a medium, thick room with quality: the return is processed with a little
widening to counteract the masking effect and place the percussion behind the drums, with
a little pre-delay on the reverb and slightly attenuated trebles.
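Pre-delay relates to perceived distance through the speed of sound: it approximates the time gap between the direct sound and the first room reflections. A rough sketch (the 343 m/s figure assumes room temperature):

```python
# Link between reverb pre-delay and an extra reflection path length.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def predelay_ms_for_path(extra_path_m):
    """Pre-delay (ms) matching an extra reflection path length in metres."""
    return extra_path_m / SPEED_OF_SOUND * 1000.0

# The 15 ms pre-delay suggested above corresponds to roughly a 5 m
# difference between the direct path and the first reflection.
print(round(predelay_ms_for_path(5.15), 1))   # 15.0
```

This is why a longer pre-delay reads as a bigger room: the ear hears the reflections arriving from further away.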

Vocals.
Recording: Record doubled takes. Mix the doubles low so they are not obvious. Timing is
important, so maybe edit the audio manually. The doubled takes can differ in tuning and
vocal quality, but most of the time they do not need to be re-tuned at all.
Panorama: Main vocals are placed at the center and upfront, dead in front of all
fundamentals and non-fundamentals. You may have two different copies running left and right
(doubling), but this must still result in centered main vocals (avoid swaying around). A
good trick is panning duplicates of the vocals left and right; you can invert the right
signal for a really dramatic effect. Pitch shifting the left copy -4 and the right +4 can
also make a more dramatic effect. However, the vocals should always align to the center.
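The detuned-doubles trick above comes down to a playback-rate ratio. The text gives "-4 and +4" without units; small vocal detunes are usually specified in cents (1 semitone = 100 cents), so this sketch assumes cents:

```python
# Playback-rate ratio for a pitch shift given in cents.
def pitch_ratio(cents):
    return 2.0 ** (cents / 1200.0)

left  = pitch_ratio(-4)   # slightly flat copy, panned left
right = pitch_ratio(+4)   # slightly sharp copy, panned right
print(round(left, 4), round(right, 4))   # 0.9977 1.0023
```

The two detuned copies beat gently against the centered original, which is what makes the double sound wider without moving the lead off center.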
Frequency Range:
Vocals Note Range: 82 Hz (E2) to 880 Hz (A5).
Cut, 0 Hz to 100 Hz (120 Hz), Roll Off, Reduction, Separation.
Fullness, 120 Hz.
Male Fundamentals, 100 Hz - 500 Hz, Power, Warmth.
Female Fundamentals, 120 Hz to 800 Hz, Power, Warmth.
Cut, 200 Hz to 400 Hz, Clarity.
Boost, 500 Hz, Body.
Boost, 315 Hz to 1 KHz, Telephone sound.
Boost, 800 Hz to 1 KHz, Thicken.
Vowels, 350 Hz to 2 KHz.
Cut, 600 Hz to 3 KHz, Lose nasal quality.
Consonants, 1.5 KHz to 4 KHz.
Boost, 2.5 KHz to 5 KHz, Definition Presence.
Between, 7 KHz to 10 KHz, Sibilance.
Around, 12 KHz, Sheen.
Around, 10 KHz to 16 KHz, Air.
Between, 16 KHz to 18 KHz, Crisp.
Words:
sAY, 600 Hz to 1.2 KHz.
cAt, 500 Hz to 1.2 KHz.
cAr, 600 Hz to 1.3 KHz.
glEE, 200 Hz to 400 Hz.
bId, 300 Hz to 600 Hz.
tOE, 350 Hz to 550 Hz.
cORd, 400 Hz to 700 Hz.
fOOl, 175 Hz to 400 Hz.
cUt, 500 Hz to 1.1 KHz.

EQ: If vocals tend to sound closed-up, boost some 120 Hz to 350 Hz. For men the presence
range is around 2 KHz and for women 3 KHz, with a wide Q-factor (standard for vocal use).
The range from 6 KHz to 8 KHz, up to 12 KHz, holds the sensitive sibilant sounds; always
boost subtly there. Combining with a de-esser can help. Even before EQ, look at some manual
editing. Wideness and openness live at 10 to 12 KHz and beyond; use a quality oversampling
EQ on the highs. Sometimes a complete vocal track needs overall processing; AAMS Auto Audio
Mastering System can help with its reference vocal presets.
Quality: Filtering can make a difference for a chorus section (for instance one that is
muddied or masked). Boost some 3 KHz to 4 KHz for our hearing to recognize the vocals more
naturally and upfront. Boost 6 KHz to 10 KHz to sweeten vocals; the higher the frequency
you boost, the more airy and breathy the result (and the better the EQ needs to be). Cut 2
KHz to 3 KHz to smooth a harsh-sounding vocal part, or around 3 KHz to remove the hard edge
of piercing vocals. When a vocal sounds boxy, apply some steep EQ cut at 150 Hz to 250 Hz;
reducing these levels un-boxes the sound (it sounds more open). Boost 2 KHz to 3 KHz (up to
5 KHz) with a low Q-factor, covering roughly 1 KHz to 10 KHz, to adjust speech
intelligibility. Some slight support here is standard: any microphone muffles the sound a
bit, so we compensate in the 2 KHz - 3 KHz range. Main adjustment ranges: fullness at 120
Hz, boominess at 200 Hz to 240 Hz, presence at 5 KHz, sibilance at 7.5 KHz to 10 KHz. You
could add a small amount of harmonic distortion or a tape emulation effect. A good trick
can be running duplicated, manipulated copies of the main vocal panned left and inverted
right; this will be heard in stereo, but not in mono.
Reduction: Roll off below 50 Hz (steeply below 80 Hz); cut below this frequency on all
vocal tracks. Use a good low cut from 0 Hz up to around 120 Hz; this reduces the effect of
any microphone pops. It is common to use a high-pass filter (at about 60 to 80 Hz) when
recording vocals to eliminate rumble. The better vocals are recorded, the better they can
be placed inside the mix. Breaths are a question of style; cutting them is common. If you
duplicate a track, do not duplicate the breaths. You can edit all breaths onto their own
separate track, and then remove them from the vocals. Syllables and 'T' end sounds
(rattling in a chorus) can be faded out, mostly by manually editing the vocal tracks. Apply
no roll-off to make the main vocals even more upfront (keeping trebles intact), but do roll
off background vocals. When a main vocal is not cutting through, cut 600 Hz out of all
other conflicting instruments except drums. If there are still problems, a little more cut
at 1.2 KHz usually solves them. A voice is very easy to make flat, sharp or unnatural, so
think twice before using EQ. A classic analog trick was to record the vocal to tape with
Dolby noise reduction engaged and play it back without it, lifting the high end; digital
exciter-type processors aim for a similar effect.
De-esser: Frequencies between 6 KHz and 8 KHz are in the 'sss' de-esser range; compress
them via a band-pass detection filter. A good de-esser is crucial (extreme reduction, but
no lisp effect). You can also edit all 'sss' sounds manually, and should consider manual
editing before using a de-esser. To make the vocal more open, boost trebles from 10 KHz
upwards (use an oversampling EQ) to make it sound upfront.
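The de-esser logic described above can be sketched frame by frame. This assumes a side-chain that already measures the 6-8 KHz band level per frame (a real de-esser derives that with the band-pass detection filter mentioned in the text):

```python
# Minimal frame-based de-esser gain computer, in dB terms.
def deesser_gain_db(band_level_db, threshold_db=-30.0, ratio=4.0, max_cut_db=8.0):
    """Gain reduction (dB, negative) for one frame of sibilance-band level."""
    if band_level_db <= threshold_db:
        return 0.0
    over = band_level_db - threshold_db
    cut = over - over / ratio          # same over-threshold math as a compressor
    return -min(cut, max_cut_db)       # cap the cut to avoid a lisp effect

print(deesser_gain_db(-40.0))   # 0.0   quiet frame, untouched
print(deesser_gain_db(-22.0))   # -6.0  loud 'sss', pulled down 6 dB
```

Capping the maximum cut is what keeps "extreme reduction, but no lisp effect" true in practice.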

Tune and Double: Auto-tune or manually tune the vocals. Maybe mix the original track
together with the tuned track; just copy a ghost track and manipulate it. You can even use
some stereo expansion or widening. When you do not have enough vocals or background vocals,
copy them and double, tune or manipulate the copies. Do not widen copied tracks for main
vocals, but you can widen the background vocals with a stereo expander according to panning
laws.
Compression: The 1176 is a classic choice! To make vocals sit in the mix we need
compression. Compression on vocals can sound loud and hard on its own, but it will be fine
inside the mix and keeps the vocal upfront. Background choirs can consist of many voices,
often compressed combined on a group. Use a fast attack and release; the ratio depends on
the recording and vocal style, usually with a soft-knee compressor. Longer attack times let
transients through, and the release time should be set to the song tempo (or shorter) with
little sustain. The vocals then have more presence and charisma, upfront. Start with a
ratio of around 4:1 and work upwards for a rockier vocal. Use a fairly fast attack time;
release time would normally be around 0.5 seconds. A reduction of 12 dB is common for
untrained vocalists. Be careful not to over-compress; you can always add more later on. A
multiband compressor is a good tool for removing unwanted sounds from vocals: use it as a
de-esser for 'sss' sounds, but also for other unwanted frequencies like pops, clicks and
some rumble. The bands can serve different vocal applications. One band, 0 Hz to 120 Hz,
mainly compresses rumble and pops; use a fast attack. Another band, 3 KHz to 10 KHz, hunts
for 'sss' sounds: start with a ratio of 5:1 to 8:1 and lower the threshold until the 'sss'
peaks are hit. Another band, 4 KHz to 8 KHz, can be used for presence with light ratios of
1.5:1 to 3:1.
Reverb and delay: Using a large reverb on the main vocal is not advisable; it becomes less
direct and the singer will sound pushed back. Use a small room or ambient reverb and be
subtle, so the listener is not aware of it. Combined with bigger rooms and delay, this
helps make the vocals sound fuller without pushing them backwards. Consider delay instead
of reverb: a delay can make main vocals fuller without placing them further back on the
stage. The more delay you use, the more attention you must pay to the center placement of
the lead vocals. Use the goniometer.
Reverberation: Reverbs for the lead vocals tend to be dry and require a high-quality
oversampling device, to prevent the vocals from being pulled into a cloud of reverberation.
You need a small, unobtrusive reverb with attributes similar to a drum booth, often
combined with a delay; that blurs less than a medium reverb. A delay can be far better on
main vocals, especially when you need them upfront, as in most cases. A delay tail on the
front vocals makes them appear warmer and fuller without endangering their frontal
placement (panorama). The more the delay appears in the mix, the more it covers the vocals;
ducking the delay (side-chained or not) during the first part of the vocal (the transient
and a little of the sustain) can free things up and lose some fuzziness. Record vocals dry
so you can apply reverberation in style later on. Use a generous amount of a small-room
reverb on the main vocals instead of a larger-sized reverb. Or double the main vocals: add
one track with a small room reverb, and another with a bigger room through a delay (1/4
step) and a gate to stay in rhythm (1/4 step). Maybe use a spaced echo. In any case it is
better not to clutter the vocals with reverbs and delays stacked on top of each other
(serial); separate all reverb channels (parallel), containing the dry signal and the
reverbed signals. Sometimes expand the reverb or delay outwards. For main vocals (single
track or group) use a vocal room, drum booth or small ambient reverb. Bright reverbs can
sound exciting, but they emphasize sibilance. Use no pre-delay to set the vocals upfront.
When combining with a delay, even a medium reverb might be too much. For the main vocals,
try a stereo reverb with a delay tail and place the reverb a little hidden. If you solo the
reverb you may find it a bit loud, but within the vocal mix it might be just right, so
don't be scared off by this; the dry vocals will mask the reverb a bit. Placing a choir
into the back requires a long reverb with a bit of pre-delay and damped high ends; the
reverb can be set quite high for our ears to accept the 3D spatial information and fight
the masking effect. Experiment with a stereo expander in the reverb's return. For vocals,
delay can give more depth and placement inside a mix. Use a stereo delay to add small
amounts of delay (around 35 ms), and watch out for correlation effects.
Delay: Delay can work out better than reverb and keeps any instrument more upfront, where
a reverb draws it toward the back. Delays are clearer and less muddy, which again helps a
vocal stand upfront while still having some space. With lead vocal reverb and delay, it's
all about the mix. Create a dry counterweight by doubling the lead: add EQ, compression,
maybe a short delay, and mix it back in. This way the lead vocals are not pushed back too
far, but at the same time sound fatter. A little stereo reverb with a delay tail on the
vocals may work.
Offside Vocals.
Panorama: Sometimes the main vocal singer is accompanied by one or more vocalists, mostly
placed left and right of the centered main vocals, according to their stage position.
Counteract and balance the stereo field so that both speakers play about the same vocal
loudness. The background vocals are spread by panning laws: lower voices more in the
middle, higher voices on the outsides. The settings for these accompanying vocals are
basically the same as for the main vocals.

Background Vocals or Chorus.

Panorama: The chorus is always arranged so that the higher voices sit more outside and the
lower voices more centered, according to panning laws. Use a stereo expander to widen the
chorus even more. There are also effects that can double or harmonize vocals.
Quality: When a chorus vocal sounds boxy, apply some steep EQ cut at 150 Hz to 250 Hz;
reducing these levels un-boxes the sound (it sounds more open). Boost 2 KHz to 3 KHz (up to
5 KHz) with a low Q-factor, covering roughly 1 KHz to 10 KHz, to adjust speech
intelligibility. Some slight support here is standard: any microphone muffles the sound a
bit, so we compensate in the 2 KHz - 3 KHz range. For a bigger chorus you can duplicate
tracks and use an automatic tuner, pitch shifter or any modeler, slightly changing the
color of each copy. A chorus can be layered on several tracks; for recording, maybe 4 to 16
(or more) vocal takes could be used to generate a nice-sounding chorus section. The more
natural the vocals sound, the better. Roll off a great deal of highs for distance, to set
them at the back of the stage.
Reduction: Use a good low cut or roll-off from 0 Hz to 120 Hz. To make the chorus more
distant, lower the trebles from 10 KHz upwards to place them at the back of the stage
(behind the drummer).
Pitch Shifter: A real-time pitch shifter set to shift -4 and +3, panned more left and
right, can be used for doubling and creative effects. Also worth pointing out are doubling,
harmonizing and special vocal effects like the vocoder or voice changers.
Reverberation: Backing vocals are placed toward the rear: use a large reverb with some
pre-delay and filtered trebles. Reverb can be generously applied here, so the masking
effect stays away or is overpowered by the reverb and the 3D spatial information comes
across. Record vocals dry so you can apply reverberation in style later on. For background
vocals or choirs (group), use a large reverb with a pre-delay of about 25 ms (check the
snare reverb for starters). Use an EQ to roll off the highs strongly, and the reverb sends
them all to the back in distance where they belong; a high pre-delay for choirs can send
them to the back rows. Try sending the background vocals to a group track. Set a compressor
that compresses the loud sections but leaves the quiet ones uncompressed; when this feeds a
reverb, the loud sections will be dryer and the softer sections wetter.
De-esser: Frequencies between 6 KHz and 8 KHz are in the 'sss' de-esser range. A good
de-esser is crucial (extreme reduction, but no lisp effect). You can also edit all 'sss'
sounds manually.
Static Mix Reference.

Reading up to here, you should have enough information to finish off the static mix as a
reference for further mixing: using the dimensions, quality and reduction, finding some
stability between separation and togetherness, and unmasking as much as we can to keep
clear pathways while saving headroom. Until now we have discussed the starter mix
progressing towards a finished static mix. It is called static because, after setup, there
is no automated movement of knobs, faders and settings along the timeline of the mix. In
the static mix we have set up quality, separation (headroom) and the three dimensions
(stage plan). We have discussed why it is better to start with dimensions 1 and 2 (starter
mix) before starting with dimension 3 (static mix). We would like to finish off dimensions
1 and 2 as well as dimension 3 for a good static mix. Again, the static mix is our
reference point for all further mixing, so we need to be sure we have done our very best to
get the highest possible result before we progress to mixing more dynamically. Now is a
great time to just listen and correct until you are completely satisfied. Waiting a day and
resting our fatigued ears might be a good idea for one last re-check later on. Be 100% sure
you have finished a good reference static mix, or else re-check or re-start before
progressing...

Before you use AAMS Auto Audio Mastering System, Check the Mix!
There are a number of audio mixing and editing tips that will help you prepare your mixes
before using AAMS.
It is important to know how to prepare your mix, so you can get the best sound for your
songs!
When quality is at stake, be sure to read this page and spend some time getting your mixes
right.

Audio mastering is a process that stands apart from mixing; it is the next stage after
mixing and the final stage for sound quality. While mixing we do not pay much attention to
loudness; we mix. What everybody is thinking is 'how do we get our mix to sound loud'! That
is what AAMS Mastering stands for: your mix is brought up to adequate commercial radio, CD
or MP3 streaming levels, just to fit in correctly. We do not join the Loudness War, but we
need appropriate levels and professional quality. When mastering a full album, AAMS
Mastering will also make the whole album sound as an album; we call it 'the album sound'.
So AAMS can handle single tracks as well as full albums, and create a good-quality
professional sound for you. Mixing, however, is an important stage before mastering with
AAMS starts, so we ask you to give it some time and thought.

Check, Check, Double Check!


0. Do these mix check steps before you plan to use AAMS.
1. Eliminate any noise or pops that may be in each single track. Apply fades, cuts or
mutes to spots containing recorded noise, pops or clicks.
2. Keep your mix clean and dynamic. Unless there is a specific sound you need, do not put
compression or processing on the master out of the mixing bus. It is best to keep the
master bus free of outboard processing or plugins. Don't add any processing to the overall
mix, only to individual channels. There should never be a limiter or loudness maximizer set
on the master out mix bus!
3. The loudest part of the mix should peak at no more than -3 dB on the master bus,
leaving headroom. It does not matter how loud your mix sounds at this time; mixing means
mixing.
4. Does your mix work in mono? As a final reality check, switch the master bus output to
mono and make sure that there is no weakening or thinning-out of the sound. In any event,
do not forget to switch the bussing back to stereo after this check.
5. Only when a mix is completed and finished off, and you are happy with the overall
mixing sound and quality, is the next phase for Aplus Mastering to do its work.
6. Normalizing a track is not necessarily a good idea.
7. Don't add any fades or crossfades anywhere. Don't fade the beginning or end.
8. Do not dither individual mixes.
9. Output and save your mix as a stereo file, in a lossless format! With digital
equipment, WAV 32-bit float stereo is a good output format.
10. Do not output your mix to an MP3 file; this can mean loss of information! If you do
want to send in MP3 files, be sure they are of quality: prefer a bitrate higher than 192
kbps; 320 kbps is quite good.
11. Export your mix out of your sequencer or audio setup in a correct, quality-preserving
format.
12. Finally, always back up your original mixed files!
13. Put all the files of a single mix (the stereo file, reference songs, text documents,
pictures or any other file that you need to send) in one single directory.
14. Use a packing program like ZIP, RAR or 7z and pack all files in that directory into
one single packed file. Name this file correctly, preferably with the track number and name
of the track.
15. Back up your files!
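Checks 3 and 4 above can be verified in code. A minimal sketch on raw stereo samples (floats in -1..1), assuming nothing about any particular DAW; the helper names are our own:

```python
import math

# Peak level of a signal in dB relative to full scale.
def peak_db(samples):
    return 20.0 * math.log10(max(abs(s) for s in samples))

# Mono fold-down: average the left and right channels.
def mono_fold(left, right):
    return [(l + r) / 2.0 for l, r in zip(left, right)]

left  = [0.0, 0.5, -0.5, 0.7]
right = [0.0, 0.5, -0.5, 0.7]         # correlated content: mono-safe
anti  = [0.0, -0.5, 0.5, -0.7]        # inverted content: cancels in mono

print(round(peak_db(left), 1))                      # -3.1  just under the -3 dB ceiling
print(round(peak_db(mono_fold(left, right)), 1))    # -3.1  no mono loss
print(max(abs(s) for s in mono_fold(left, anti)))   # 0.0   total mono cancellation
```

The inverted example is exactly the "thinning out" the mono check is meant to catch: out-of-phase content that sounds wide in stereo can vanish completely in mono.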

Prefer the following audio formats.

- Uncompressed Audio: WAV, AIFF.
- Lossless Audio: FLAC, WavPack, Monkey's Audio, ALAC.
- Lossy Audio: MP3, AAC, WMA (> 192 Kbps).

Mastering Stems
Mastering from stems is little by little becoming more common practice. Here the mix is
consolidated into a number of stereo stem subgroups to be submitted individually. Instead
of submitting a stereo output of your mix, you send the mix tracks separately; for example
you might have different tracks for drums, bass, keys, guitars, vocals and background
vocals. This gives Aplus Mastering more control over the mix and master. If a master from
stems is desired, follow the same steps listed above for each stem. When submitting stems,
each file must start at the beginning of the song and run through to the end; most mixing
sequencers will export this way, exact to the sample. Each stem file should be exactly the
same length.
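The same-length requirement is easy to verify before submitting. A sketch assuming the stems are exported as WAV files, using only the standard-library `wave` module (the function names are our own):

```python
import wave

def stem_lengths(paths):
    """Map each stem file path to its length in frames."""
    lengths = {}
    for path in paths:
        with wave.open(path, 'rb') as w:
            lengths[path] = w.getnframes()
    return lengths

def stems_match(paths):
    """True when every stem has exactly the same frame count."""
    lengths = stem_lengths(paths)
    return len(set(lengths.values())) <= 1, lengths
```

Run it over the exported stem files; any stem with a differing frame count was trimmed or exported from the wrong start point.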
Basic Mixing.
We provide a great manual on mixing on our site under Resources, under the names 'Basic
Mixing I', 'Basic Mixing II' and 'Basic Mixing III'. For anyone wanting to learn more and
improve their mixing skills, we advise reading this excellent documentation. Go to
'Resources' and click on 'Basic Mixing I' to start reading. It may well open your eyes.

Audio mastering is a process that stands apart from mixing. While mixing we did not pay
much attention to loudness; we mixed. While mastering, loudness becomes more important:
most likely you want the mix brought up to adequate commercial radio or CD levels, just to
fit in correctly. For more togetherness and quality we can also use some tools we did not
use while mixing, for more glueing and welding.

Did we finish the mix?


Only when a mix is completed and finished off is the next phase mastering the mix. Mostly
you can output the mix as a stereo track; with digital equipment, WAV 32-bit float stereo
is a good output format. Do not output your mix to an MP3 file, as this can mean loss of
information, or at least use a high bitrate (> 256 Kbps). Either way, export your mix out
of your sequencer or audio setup. For analog mixers the method depends on the equipment you
have for mastering purposes; with digital equipment just export your mix to a stereo file.
Basically the mastering tools are EQ, compression and loudness.
Mastering EQ.
First we can use a good EQ for correcting the frequency spectrum. Especially an EQ
equipped with a real-time spectrum analyzer helps to visualize and localize problems. For a
good start we take a look at the bottom end of the mix. The range 0 Hz to 35 Hz contains
mainly rumble, pops and low clicks. We do not need this sound, so we can cut it off with
the EQ. Basically no instruments play in this lower range; if an instrument could reach this
low it would be the bass. Check that you are not hurting the bass (mostly you are not) and
apply a good steep cutoff. Placing this kind of cutoff when starting mastering gives you
some more headroom and removes some nasty events. The rest of the spectrum can be
changed, but be careful: you are affecting the mix! Some cutting or boosting is allowed.
Use a low Q factor; this ensures the EQ works smoothly and does not generate side-effect
sounds we do not need. Especially a high Q factor can generate unwanted sounds. Do more
cutting than boosting, but be sparing.

This chart is taken from AAMS Auto Audio Mastering System software. It shows a
frequency spectrum for a finished mix. Using a mastering EQ, we can decide where to cut
and where to boost. Do not boost a lot and do not cut heavily. You may find a mastering
EQ setting that is suited; do not be too tidy and fiddly. The general mastering EQ setup can
be done by lowering or boosting just a few bands. Using too many EQ bands means you
are actually changing the mix, so when you find yourself using heavy EQ settings while
mastering, you could adjust your mix first. Mastering the frequency spectrum of a mix
should involve no more than a little correction. Sometimes we can master an unfinished
mix, just to hear the mastered sound, and then do some more corrections on the mix
according to the outcome of the master. Mastering EQ serves two purposes. First, deleting
unwanted events (bottom end). Second, based on decisions about the whole frequency
spectrum, we can use further EQ to raise the overall quality. Mastering EQ can also prevent
loud sounds (in frequency) from entering the mastering compressor, by correcting the input
signal. If you use stereo EQ settings (different settings for left and right), be aware that the
overall balance of the mix should stay at center; use pan or balance to correct your settings
and re-place the whole mix at dead center. Use a steep filter to cut from 0 Hz to about
30 Hz. Beginners might find this alarming, but only a few people can actually hear down to
20 Hz. Only a very small part of the musical information is lost by cutting, while the
benefits are more headroom and fewer rumbles, pops and low-end clicks. What people
perceive as the bass range is in the 50 Hz to 100 Hz region. Generally a pleasing
high-frequency roll-off is recommended; don't roll off too much, or the mix ends up dull.
Cut bands instead of boosting, use fewer EQ bands, and use a wide bandwidth (low Q).
Cutting more than 6 dB in narrow ranges can bring out unwanted sounds; often we return
to the mix to correct this. For bass dynamics do not boost the lower bottom end; instead do
this on band 1 of the multiband compressor.

Master Track EQ.


Roll-off, 0 Hz to 30 Hz, No subs needed, Reduction.
Boost, 80 Hz, Front Bass.
Cut, 40 Hz, Back Bass.
Cut, 120 Hz, Back Bass.
Boost, 100 Hz, Boom/Weight.
Cut, 100 Hz, Light LF Clarity.
Boost, 120 Hz, Cloud/Wood/Body.
Boost, 400-800 Hz, Snares/Claps/Boink.
Cut, 120 Hz - 1 KHz, Spice/Openness.
Boost, 1 KHz - 10 KHz, Harshness/Clarity.
Cut, 1 KHz - 10 KHz, Absence/Distance.
Boost, > 10 KHz, Sparkle/Sizzle.
Cut, > 10 KHz, Dullness.
Muddiness, Cut, 100 Hz - 300 Hz (180 Hz).
Nasal, Cut, 250 Hz - 1000 Hz (520 Hz).
Harsh, Cut, 1000 Hz - 3000 Hz (1820 Hz).
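The steep low cut recommended above (around 30 Hz) can be illustrated in code. This is a hypothetical sketch of a roughly 24 dB/octave high-pass built from two cascaded biquads using the widely published Audio EQ Cookbook formulas, with the standard 4th-order Butterworth Q values; it is not the filter AAMS itself uses:

```python
import math

def biquad_highpass(fc, fs, q):
    """Audio EQ Cookbook high-pass biquad coefficients (normalized, a0 = 1)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = (1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def apply_biquad(x, coeffs):
    """Run one direct-form-I biquad over a list of float samples."""
    b0, b1, b2, a1, a2 = coeffs
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = s, x1, out, y1
        y.append(out)
    return y

def steep_low_cut(x, fc=30.0, fs=44100.0):
    """~24 dB/octave high-pass: two cascaded biquads with Butterworth Qs."""
    for q in (0.5412, 1.3066):
        x = apply_biquad(x, biquad_highpass(fc, fs, q))
    return x
```

A filter like this removes DC offset and sub-30 Hz rumble almost completely while leaving the audible bass region untouched, which is exactly the headroom gain described above.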
Mastering Compressor.
Master compression uses a super-fast attack and release, with usually a high threshold; use
the compressor here only as a peak-reduction tool. A limiter would be best for this purpose.
Basically a mastering compressor has a different purpose than a compressor intended for
mixing. For getting more loudness, compress some frequency bands with a 4- to 8-band
multiband compressor. With a multiband compressor you can divide the mix into sections.
Band 1, 0 Hz to 120 Hz, focuses on the bottom end, mainly bass drum and bass, or
fundamentals. Band 2, 120 Hz to 2 KHz, covers fundamentals of vocals and midrange
instruments. Band 3, 2 KHz to 10 KHz, can contain cymbals and upper harmonics of
instruments (trebles). Band 4, 10 KHz to 22 KHz, holds some trebles and air. For each
band use the mute button to hear exactly the frequencies that are played, and adjust each
band's cutoff frequency. A more consistent sound can be found with the multiband
compressor as opposed to boosting EQ. The general EQ settings and frequency ranges also
apply to a multiband compressor. We are actually changing the frequency spectrum with
each band in place, so the multiband compressor can be used to correct the frequency
spectrum of a mix. When louder band sounds go over a certain threshold for each band,
compressing more or less, we can control the peaks as well as their transients and sustain.
A compressor mainly lowers a signal when it peaks over the threshold. Push it too far and a
pumping effect starts; by hearing this you will know you went too far with compression
and most likely have to set the compressor to a lower level. Sometimes this pumping or
breathing effect is used while mastering, but it is better fixed inside the mix. Whenever
mastering EQ or master compression is done for a creative purpose, try to do this inside the
mix before mastering. As the compressor only lowers signals, we leave more headroom.
Using gain inside the mastering compressor is not really recommended, but sometimes we
have to. The ratio can be set lower, 1.5:1 to 3:1 (while mixing we use 4:1 or more). Adjust
the threshold for each band, just hitting the peaks. Anything more would not be common;
we are just scraping off some peaks. Using heavy compression is not recommended; you
could instead revert to your mastering EQ settings and correct there. Sometimes stereo
wideners or even a reverb are applied, but why not use those while mixing? It is better to
place any effect inside the mix, not while mastering. If you still need effects while
mastering, use them sparingly and be aware you are actually changing the sound of the
mix. Use a correlation meter to check mono compatibility. Also for mastering compression,
if you use stereo settings (different settings for left and right), be aware that the overall
balance of the mix should stay at center; use pan or balance to correct your settings and
re-place the whole mix at dead center.

First set the ratio, depending on the band in place. Full mix, 1:1 to 2:1. Bass, bass drum,
3:1 to 5:1. Vocals, 2:1 to 3:1. Then set threshold, attack and release timings. Short attacks
level off more of the transients and can cause distortion; go for the lowest attack possible
before hearing any artifacts.
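The threshold-and-ratio behaviour described above can be expressed as a simple static gain curve. A sketch with our own helper names; attack and release timing are deliberately left out:

```python
def compressed_level(level_db, threshold_db, ratio):
    """Static compressor curve: levels above threshold rise at 1/ratio."""
    if level_db <= threshold_db:
        return level_db  # below threshold: signal passes untouched
    return threshold_db + (level_db - threshold_db) / ratio

def gain_reduction(level_db, threshold_db, ratio):
    """How many dB the compressor takes off a given input level."""
    return level_db - compressed_level(level_db, threshold_db, ratio)
```

For example, a 0 dB peak through a 2:1 compressor with the threshold at -10 dB comes out at -5 dB, i.e. 5 dB of gain reduction, while anything below -10 dB is left alone. This is the "just scraping some peaks" behaviour: only the loudest material is touched.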
Mastering Expander.
An expander affects the lowest-level signals: everything below the threshold is brought up,
raising the softer parts of the mix. This is called upward compression, with a ratio of less
than 1:1. The expander can also be used as a noise gate.

Mastering Loudness.
Now that we have removed some unwanted events with EQ and compression, as well as
left some more headroom, we can address loudness. Loudness is mainly measured by peak
and RMS levels. For a mix, anywhere between -8 dB and -14 dB RMS is quite common.
The tools for loudness are a good limiter and a loudness maximizer. Before we add gain or
loudness, we should check that the mix plays at the center of both speakers; use pan or
balance to adjust the panorama. Then use a tool for getting more loudness. Some set a
limiter at -0.3 dB to -1 dB and just raise the master fader until the desired loudness is
heard. This is a basic approach and works best with digital systems using a higher bit depth
or sampling frequency. Because of clipping and distortion issues this approach is quite
radical. Using a loudness maximizer we can raise the level more easily without too much
distortion. Aim for RMS levels between -10 dB and -12 dB for louder styles and -12 dB to
-16 dB for softer styles, depending on the material or mix. Example: mastering loudness at
RMS -11 dB, threshold at -11 dB, margin -0.3 dB, setting Infinite 8182, 50 Hamm, steep
cut around 36 Hz.
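The RMS levels mentioned above can be measured directly. A minimal sketch for float samples where full scale is 1.0; the function name is our own:

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (full scale = 1.0) in dBFS."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")
```

As a reference point, a full-scale sine wave measures about -3 dB RMS, so the -10 dB to -12 dB targets above still leave a healthy amount of dynamic movement below the peaks.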
Finishing the Master.
Maybe a little fade-in and fade-out can be applied. If you are using this track for radio or a
single-track release, you can apply some normalization. Otherwise wait until all
complementing tracks are finished, and maybe apply some normalization then. Be wise:
normalization can ruin a compilation's coherence. If you do not know what you're doing,
do not apply normalization across multiple tracks.
The mastering engineer carefully increases loudness, shapes the material according to the
Fletcher-Munson curves, and makes adjustments so that the tracks sound equally loud.
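Normalization as used here usually means peak normalization: scaling a track so its loudest peak hits a target level. A sketch with a hypothetical helper; as noted above, applying this blindly to every track of a compilation can ruin their relative levels:

```python
def normalize_peak(samples, target_dbfs=-0.3):
    """Scale float samples so the loudest peak sits at target_dbfs."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    gain = 10 ** (target_dbfs / 20) / peak
    return [s * gain for s in samples]
```

Note that the whole track is multiplied by one gain factor, so the balance inside the track is untouched; only its level relative to other tracks changes.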

Audio Mastering Tips


Many home recordists hope to perfect their productions by doing their own mastering on
their studio computer.
However, few seem to achieve the classy results they are after.
So how much can you realistically achieve by going it alone, and what techniques will give
the highest-quality results?
Mastering is a vital part of the recording process, so much so that a substantial amount of
mythology is associated with it.
We have all heard stories of high-priced mastering engineers with mystical, proprietary gear
based on gilded vacuum tubes salvaged from ancient Russian submarines or something
similar.
But we have also heard of computer studio owners with a two-track editor and a few plug-ins who have started mastering their own material.
Can You Do Your Own Mastering?

Prior to the digital revolution, mastering had a very defined set of functions.
You brought your finished mixes on tape to a mastering engineer, who would often bounce
them to another tape through various signal processors designed to sweeten the sound.
The tunes would then be assembled in the desired order, and acetate test pressings would be
made to evaluate the final product prior to mass-producing albums.
Mastering was rightly regarded as an arcane, mystifying art.
Few musicians had access to the high-end, expensive tools needed to do mastering, nor did
they have the experience of someone who had listened to thousands of recordings, and
knew how to make them ready for the real world.
Today, the tools for quality mastering are finally within the financial and technical reach of
anyone who's serious about recording.
But 95 percent of mastering is not in the tools, it's in the ears. Unless you have the ears of
a mastering engineer, you can't expect any plug-in to provide them for you.
Besides, much of the point of using a mastering engineer is to bring in an objective set of
ears to make any needed changes prior to release.
This buss in Steinberg's Cubase SX is dedicated to mastering effects.
As shown in the routing view of the inserts, the EQ1 equaliser plug-in goes before the L1
limiter; after the fader (shown in white) comes the Double Delay and the UV22HR
dithering plug-in.
This means that the level control won't cut off the reverb tail or interfere with the dithering.
So does this mean only experts should attempt to do mastering?
No. Firstly, not all mastering situations require a professional's touch.
Maybe you have a live recording that you want to give to friends or sell at gigs.
Sure, you can just duplicate the mixes, but a mastered veneer will give your listeners a
better experience.
Or perhaps you have recorded several tunes and want to test how they flow together as an
album.
Why not master it yourself?
After you have sorted out the order and such, you can always take the individual mixes to a
pro mastering engineer.
And when you do, you will be able to talk about what you want in more educated terms,
because you are more familiar with the process, and you will have listened to your work
with mastering in mind.
Besides, the only way to get good at anything is practice.
For years, I used only professional mastering engineers; I would never have dreamed of
doing mastering myself.
But I learned a lot from observing them, started mastering my own material, and now
people hire me to master their recordings because they like the results I get.
Still, if you have any doubts whatsoever about your abilities, seek out a professional who
can present your music in the best possible light.
Most mastering is done with specialised digital audio editing programs such as Sonic
Foundry Sound Forge, Steinberg Wavelab, Bias Peak, Adobe Audition, and so on.
These offer good navigation facilities, the ability to zoom in on waveforms, pencil tools to
draw out clicks, and plug-ins for mastering tasks (along with the ability to host third-party
plug-ins).
However, if your requirements are not too demanding, there are several ways to master
using conventional multitrack recording programs.
And, interestingly, some can even do tricks conventional digital audio editors can't.
Before You Master
The mastering process should actually begin with mixing, as there are several steps you can
take while mixing to make for easier mastering.
You should do these whether you plan to master material yourself, or hand your project to a
mastering engineer.
If you recorded your music in high-resolution audio, then mix as high-resolution files.
Maintain the higher resolution throughout the mastering process, and only dither down to
16-bit at the very end, when you are about to create CDs.
Do not dither individual mixes, and don't add any fades while mixing; fades and crossfades
should be done while mastering, when you have a better sense of the ideal fade time.
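Dithering down to 16-bit, as advised above, is commonly done by adding low-level triangular-PDF (TPDF) noise of about one LSB before rounding. The sketch below illustrates the principle only; it is not a model of any commercial dithering plug-in:

```python
import random

def dither_to_16bit(samples, seed=0):
    """Quantize float samples (-1.0..1.0) to 16-bit ints with TPDF dither."""
    rng = random.Random(seed)
    out = []
    for s in samples:
        # triangular-PDF noise spanning +/- 1 LSB (sum of two uniforms)
        noise = rng.random() + rng.random() - 1.0
        v = int(round(s * 32767 + noise))
        out.append(max(-32768, min(32767, v)))  # clamp to the int16 range
    return out
```

The added noise randomises the rounding error, which trades correlated quantization distortion for a benign, constant noise floor; that is why dithering belongs at the very last step, applied exactly once.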
Normalising a track before you master it is not necessarily a good idea: the extra
processing will slightly degrade the sound, and you will probably need to adjust levels
between the different tracks at a later stage anyway.
As for trimming the starts and ends of tracks, with some music you may decide it's better to
have a little room noise between cuts rather than dead silence, or to leave a few
milliseconds of anticipatory space before the first note to avoid too abrupt a transition from
silence to music.
Another consideration involves the possible need for noise reduction.
Sometimes there may be a slight hiss, hum, or other constant noise at a very low level.
If you can obtain a clean sample of this sound, it can be loaded into a noise-reduction
program that mathematically subtracts the noise from the track.
Even if this noise is way down in level, removing it can improve the sound in a subtle way
by opening up the sound stage and improving stereo separation.
Don't add any processing to the overall mix, just to individual channels.
Processing completed mixes is best left for mastering. As you mix, you should also watch
closely for distortion: a few overloads may not be audible as you listen to the mix, but may
be accentuated if you add EQ or limiting while mastering.
It's better to concede a few decibels of headroom rather than risk distortion.
It's not necessarily a good idea to add normalisation, as that means another stage of DSP
(which may degrade the sound, however slightly) and you may need to change the overall
level anyway when assembling all the mixes into a finished album.
Finally, always back up your original mixed files prior to mastering.
If the song is later remastered for any reason (a high-resolution re-release, a compilation,
or use in any other context), you will want a mix that's as easy to remaster as possible.
Does It Work In Mono?

As a final reality check, switch the master buss output to mono and make sure that there is
no weakening or thinning out of the sound.
At the mastering stage, there is not much you can do to fix this; you will need to go back to
the mix and analyse the individual tracks to see where the problem resides.
Typical culprits include effects that alter phase to create a super-wide stereo spread, but
problems can also occur when miking an instrument with two mics spaced at different
distances from the source.
You can always try flipping the phase of one channel, and if that fixes the phase issues,
great.
But the odds are against that doing any good. In any event, do not forget to switch the
bussing back to stereo when exporting the file or burning a CD!
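The mono reality check above can also be measured numerically with a correlation figure, as hardware correlation meters do: +1 means the channels sum safely to mono, values near zero indicate a very wide mix, and negative values warn of phase cancellation. A sketch with hypothetical helper names:

```python
import math

def lr_correlation(left, right):
    """Correlation between channels: +1 = mono-safe, negative = cancellation."""
    energy = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    if energy == 0:
        return 0.0
    return sum(l * r for l, r in zip(left, right)) / energy

def mono_fold(left, right):
    """Simple mono fold-down: average the two channels."""
    return [(l + r) / 2 for l, r in zip(left, right)]
```

If lr_correlation comes back strongly negative, the mono fold-down will sound thin or hollow, and as the text says, the fix belongs back in the mix, not at the mastering stage.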
Real-time Mastering Within Your Sequencer
A major difference between mastering in a MIDI + Audio sequencer and using a digital
audio editor is that you have the option to adjust mastering processors (which affect the
final mixed output) as you mix.
With digital audio editors, you are always working off-line with a previously mixed file.
However, there are advantages and disadvantages to both methods.
The process of mixing is daunting enough without throwing mastering into the equation;
however, mastering while you mix means you know exactly what the final version will
sound like.
But remember that a huge part of conventional mastering is about involving someone who
can be more objective about what needs to be done with your music.
Unless that person can sit in on the mix and adjust the mastering processors, you are better
off giving them your files and some space to do their job right.
Automation envelopes can reduce the odd rogue signal peak, thus opening up more
headroom and allowing a hotter sound without you having to use as much dynamics
processing.
If you decide to master as you mix, you will be putting your mastering processors in busses.
This is because when you create a non-surround multitrack project, eventually all the tracks
are going to dump through a mixer into a master stereo output buss.
As with individual channels, this should have provisions for adding plug-in effects.
How effects are accommodated depends on the program; for example, with Cakewalk
Sonar, the busses have standard effects slots, just like tracks.
But Steinberg's Cubase SX has a few extra touches: both pre-fader and post-fader slots for
effects, as well as excellent dithering algorithms for cutting your high-resolution audio
down to a lower bit resolution.
(If a program does not include an effects slot after the main output level control, you may
be able to feed one buss into another to achieve a similar signal chain: insert the effect
into the second buss, and control overall level at the output of the first buss.)
Once your plug-in effects have been added and edited as desired, you have three main
options to create a mastered file: Render (also called bounce or export) the track to hard
disk.
This reads the signal at the final output, including the results of any effects you have added,
and writes the file to hard disk.
This is your final, mastered track. However, it still needs to be assembled with other tracks
to create a complete CD.
Send the output to a stand-alone CD or DAT recorder.

This will record the final, mastered song although, again, you will still need to assemble
these.
Send the output through analogue mastering processors, record their outputs into two empty
tracks in your multitrack, then export those tracks to your hard disk.
(See the Adding Outboard Processors To A Multitrack Host box for more on this).
Of course, if you choose to do real-time mastering, you had better get things right the first
time, because if you want to make any changes later you won't be working with the raw
mix file.
For example, if you decide there is too much multi-band compression, you won't be able to
undo this, and neither will any mastering engineer; you will have to do another mix.
Adding Outboard Processors To A Multitrack Host
There are some superb hardware outboard mastering tools, both analogue and digital, that
you may prefer to plug-ins with similar functionality.
If your multitrack host has an audio interface with multiple outputs, there is no reason why
you can't use them.
Martin Walker wrote a lengthy article on using outboard gear with computer workstations
in SOS March 2004, but the basic idea is that you send the mix buss to a hardware
output on your audio interface, process the signal with the hardware processor, then feed
the audio back into the computer's audio interface inputs.
Once you have selected the appropriate inputs within your recording software, you can
record the processed results and then replace the original mix with the processed version.
Voilà: hardware processing for your tunes.
The Best Of Both Worlds
For most mastering tasks, a multi-band dynamics plug-in such as Waves C4 (bottom) will
achieve the most transparent results, but that doesn't mean that you can't use a full-band
compressor such as Universal Audio's 1176SE (top) if you are after a more vintage
pumping sound.
There is another technique which makes a compromise between mastering as you mix and
mastering off-line.
After having a song mastered, you will sometimes wish you had mixed the song a little
differently, because mastering brings out some elements that might have been less obvious
while mixing.
For example, it's not uncommon to find out when compressing at the mastering stage that
the mix changes subtly, requiring you to go back and do a quick remix (another reason why
mix automation is so useful).
So, to create a more mastering-friendly mix, consider adding some multi-band compression
and overall EQ (usually a little more high-end air and some tweaks in the bass) in the
master buss to create a more mastered sound.
Mix the tune while monitoring through these processors.
Then, when you render or otherwise save the file, bypass the master effects you used.
This results in a raw mix you can master in a separate program (or give to a mastering
engineer) and which anticipates the use of mastering processors without incorporating their

effects in the file.


Should you do this, make sure that the levels remain optimised when you remove the
processors; you may need to tweak the overall level.
If you plan to use a mastering engineer, do not be tempted to present them with a
pre-mastered mix where you have tried to take the sound part of the way towards where
you want it.
Always provide the raw, two-track (or surround) mix with no mastering effects. However, it
may be worth creating a separate version of the tune that uses mastering effects to give the
engineer an idea of the type of sound you like.
The engineer can then translate your ideas into something perhaps even better, while taking
your desires into account.
Splitting The Stereo Channels
I have also used a multitrack host to do audio restoration and remastering of a tune that was
recorded in the '60s; this would have been very difficult to do with a conventional digital
audio editor.
One instrument was overly prominent in only the left channel and this needed to be fixed.
I split the stereo signal into two mono tracks, and loaded each one into the host.
Through a combination of equalisation, dynamics control, and level automation in just the
right spots, I was able to reduce the level of the problematic instrument.
As this also reduced the apparent level of the left channel, I used a combination of panning
on the individual tracks and balance control on the output buss to restore a better sense of
balance.
Processing Individual Mixes
Mastering a multitrack project in real time is a fairly new technique; it's definitely not for
everyone, nor is it suitable for all situations.
So let's look at two traditional approaches to mastering that use your computer more like a
standard digital audio editor.
The more old-school approach is to take each tune, master it, then as a separate operation
assemble all the tunes into a cohesive whole.
A newer approach is to assemble all the tunes first and then apply any processing on a more
global level.
Basically, this combines both mastering and assembly into one operation.
Let's look at the individual song approach first.
Digital emulations of classic analogue equalisers, such as the TL Audio and Pultec
recreations shown above, will often produce the most musical results when you are
applying broad and gentle processing during mastering.
Open up a new file and import the mix into a track.
If you need to process the right and left channels independently (for example, if there is an
instrument in the left channel that has excessive treble, and you want to EQ just that
channel a bit without processing the right channel), then separate the stereo file into two
mono files (typically using a digital audio editor) and import each one into its own track.
You may also be able to bring a stereo file into two tracks, use the balance control to

separate the left and right tracks, then re-combine them.


Here are some of the editing operations you might want to do.
Reduce Peaks Using Automation:
If some peaks are significantly louder than the rest of the material, this reduces your
chances of achieving a higher average level, as the peaks use up much of the headroom.
One solution is to add limiting, but another option that can affect the sound less is to use an
automation envelope to reduce the levels of just those peaks.
If the automation works on just a single cycle of the waveform, you probably won't hear
any difference compared to not reducing that peak; but once the major peaks are reduced,
you will be able to raise the overall level.
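The envelope trick can be approximated offline: scan for rogue samples over a threshold and dip the gain of a short region around each one. A crude sketch with a hypothetical helper (a real automation envelope would ramp smoothly rather than apply a flat dip):

```python
def reduce_peaks(samples, threshold=0.8, width=64):
    """Scale short regions around isolated rogue peaks down to the threshold."""
    out = list(samples)
    i = 0
    while i < len(out):
        if abs(out[i]) > threshold:
            gain = threshold / abs(out[i])
            start, end = max(0, i - width), min(len(out), i + width)
            for j in range(start, end):
                out[j] *= gain  # flat dip; a real envelope would ramp in and out
            i = end
        else:
            i += 1
    return out
```

Because only a few milliseconds around each peak are attenuated, the rest of the material keeps its original level, which is exactly why this approach colours the sound less than squashing everything through a limiter.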
Furthermore, if you do add any compression, it won't have to work as hard.
Add Dynamics Processing:
Generally, you will use a dynamics plug-in for the track holding the file, or possibly for
the buss it feeds.
Multi-band dynamics processors are your best option; compared to standard compressors,
they're more transparent, because dynamics control in one frequency band doesn't affect
other frequency bands.
However, some people like slamming a stereo compressor, because they can hear some
pumping and breathing, which gives more of a vintage sound.
Another popular option is a loudness maximiser plug-in, like the venerable Waves L1.
This type of processor can greatly increase the overall average level, producing a hotter
sound.
These plug-ins are often overused on nowadays' recordings, which creates distortion and
degrades definition.
As a rule of thumb, I advise increasing the amount of maximisation until you can hear the
effect working.
Then reduce the amount so you don't hear it working. Eventually you will find a sweet spot
where you can increase overall loudness while retaining good dynamics.
All the different song files on the album have here been assembled into different tracks in
Magix Samplitude so that different styles of track can be processed differently.
Once the plug-in settings have been finalised, the tracks can be rendered into a single file.
No matter what form of dynamics control you use, it will affect the mix by reducing peaks
and bringing up lower-level sounds.
This is equivalent to having a more even mix, and might be desirable.
But if the mix ends up sounding too uniform, reduce the amount of maximisation.
Peaks and valleys are essential to a satisfying listening experience.
A really loud cut may seem impressive at first, but it becomes fatiguing after a short period
of time.
Add Equalisation
For mastering, you will hopefully be dealing in broad strokes: a mild bass cut, or a little
high-end lift.
This is why many older equalisers are favoured for mastering, because they have a subtle,

yet pleasing, effect on the sound.


Plug-ins like Steinberg's TLA1, PSP's MasterQ, and the UAD1's Pultec emulation fulfil
this role in software.
Significant EQ problems, like large mid-range or low-end peaks, should have been fixed in
the mixing process.
If they were not, you are likely to need to plug in a full-blown parametric EQ and tweak
out the individual problems.
Your audio editor probably already includes EQ, but be careful about using it.
Built-in EQs are usually optimised so you can open lots of instances at the same time,
which means they can't consume too much CPU power.
Mastering-oriented plug-ins, on the other hand, tend to eat more power, but it doesn't
matter because you are using them on a simple stereo file rather than running a bunch of
audio tracks and soft synths.
Other Processing Goodies
Some people swear by particular plug-ins for mastering, like enhancers, stereo-image
wideners, and the like.
I tend to avoid these because dynamics and EQ cover 99 percent of what's needed in most
cases.
But I have found situations where a little high-frequency exciter helps add a different kind
of sparkle than EQ, and once I even added a phasing effect in the middle of a tune during a
spoken-word part (the client loved it).
I think if a mix has a certain direction, it's often best to enhance what you have rather than
try to turn it into something completely different.
Assembling Your Album
A loudness-maximising limiter such as Waves L2 can increase the overall level of your
mastered track with surprisingly few audible artefacts.
You can do album assembly in a multitrack host, and once the tracks are in the desired
order you render the whole thing to disk as one large file.
If needed, you can then import this file into a CD-burning program to add track markers,
CD Text, and so forth.
If you are editing within a multitrack application, the files can either be placed end to end in
a single track, or you can spread them over several different tracks.
For example, one project I mastered had three distinctly different flavours of mixes: some
were mixed in a studio which probably had bad acoustics, because the bass was too heavy;
another set of mixes was very neutral (just the kind I like to work with); and the third set
had compression applied to the master buss, and were already somewhat squashed.
I sorted each type onto its own track, and applied the same processing to like-sounding
files.
The bass-heavy ones needed a different kind of EQ to the neutral-sounding ones, and I also
added multi-band compression to both of these tracks.
The songs that were already compressed did not get any multi-band compression, but did
need a fair amount of EQ this created a few peaks, so I added a small amount of limiting.
As mentioned earlier, a multitrack host allows you to do tricks that may be difficult with a
dedicated digital audio editing program.
This is particularly true with dance music, where you have a continuous stream of sound.

It's easy to create crossfades, for example, either using an automatic crossfade function
where overlapping two tracks creates a crossfade, or by having the tunes on separate tracks
and adding fades manually.
You can also dedicate a separate track for transitions or sound effects when doing a dance
mix, add track automation to bring effects in and out (to increase a high-pass filter's cutoff
as a song fades, for instance, so it seems to disappear just before the next track comes in),
and so on.
This process essentially creates a meta-mix where, instead of mixing individual tracks to
create a two-track file, you are mixing two-track files to create a final album.
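A manual crossfade of the kind described above can be sketched as an equal-power overlap, where the outgoing tune follows a cosine curve and the incoming tune a sine curve so the combined energy stays roughly constant. The helper below is hypothetical:

```python
import math

def crossfade(a, b, overlap):
    """Equal-power crossfade: the end of `a` overlaps the start of `b`."""
    assert overlap <= len(a) and overlap <= len(b)
    out = list(a[:len(a) - overlap])
    for n in range(overlap):
        t = n / overlap                       # runs 0 -> 1 across the overlap
        fade_out = math.cos(t * math.pi / 2)  # equal-power fade curves
        fade_in = math.sin(t * math.pi / 2)
        out.append(a[len(a) - overlap + n] * fade_out + b[n] * fade_in)
    out.extend(b[overlap:])
    return out
```

The equal-power curves avoid the level dip that a pair of straight linear fades produces at the midpoint of the overlap, which matters most with continuous material like dance mixes.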
Master Effects Routing
If you process and render one track at a time, you can use a dedicated audio CD-burning
utility such as Roxio's Toast With Jam to compile them into a finished CD, complete with
advanced features such as CD Text.
We noted that Steinberg Cubase SX's busses have slots both before and after the gain
control.
In general, you would place your processing plug-ins prior to the gain control, and your
dithering after the gain control.
However, things get more complex when you start using effects. Suppose you are mixing a
tune that has an abrupt end, but you want a delay or reverb tail to spill over.
If the echo is generated before the master output and you pull down the master fader for the
abrupt end, the echo will stop too.
Therefore, you need to place the delay after the fader, and place dithering after the delay. If
there is only one post-fader slot, then chain two busses and insert the dithering in the
second buss.
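The difference this routing makes can be demonstrated with a toy signal chain. The sketch below models the pre- versus post-fader placement with a crude three-sample feedback echo; it illustrates the routing idea only, and is not how any host's engine is actually implemented:

```python
def render(samples, fader_gains, delay_taps=3, delay_gain=0.5, post_fader=True):
    """Toy master buss: a feedback echo placed before or after the fader.

    With post_fader=True (fader -> delay -> output), pulling the fader to
    zero for an abrupt end still lets the echoes ring out.  With
    post_fader=False (delay -> fader), the tail is cut off with the music.
    """
    buf = [0.0] * delay_taps            # circular delay line, delay_taps samples long
    out = []
    for n, (x, g) in enumerate(zip(samples, fader_gains)):
        slot = n % delay_taps           # each slot is re-read delay_taps samples later
        if post_fader:
            v = x * g                   # fader first...
            echoed = v + buf[slot]      # ...then the echo is added after it
            buf[slot] = echoed * delay_gain
            out.append(echoed)
        else:
            echoed = x + buf[slot]      # echo first...
            buf[slot] = echoed * delay_gain
            out.append(echoed * g)      # ...then the fader, which kills the tail too
    return out
```

Feeding in a single hit and slamming the fader down immediately afterwards shows the point: the post-fader chain still produces echoes after the cut, while the pre-fader chain goes silent.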
Some multitrack hosts don't have an option to place effects after the final gain control, thus
making it difficult to implement the delay effect mentioned above.
For example, Cakewalk Sonar's master fader is always at the buss output, but it also has a
trim control that can change the incoming level to the buss.
This alters the level going to the effect, but not the effect output.
With the above example of delay, you might even want to use both controls: pull down on
the input trim to create the abrupt end, then as the echoes fade out reduce the main buss
fader.
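It helps to see the loudness maximisation discussed next in numbers. The sketch below is a simplified instantaneous model of a threshold-plus-ceiling limiter; the real L1 uses look-ahead and smoothed gain reduction, so treat this as an illustration of the arithmetic only, not the plug-in's algorithm:

```python
def db_to_lin(db):
    """Convert a dBFS value to a linear amplitude."""
    return 10.0 ** (db / 20.0)

def maximise(samples, threshold_db=-3.0, ceiling_db=-0.1):
    """Instantaneous loudness maximisation: anything above the threshold is
    clamped, then the whole signal is scaled so the threshold maps onto the
    output ceiling.  Quiet material therefore comes up by roughly
    (ceiling - threshold) dB, while peaks can never exceed the ceiling."""
    t = db_to_lin(threshold_db)
    c = db_to_lin(ceiling_db)
    makeup = c / t                       # -0.1dB ceiling over -3dB threshold => about +2.9dB
    out = []
    for x in samples:
        mag = min(abs(x), t)             # clamp peaks at the threshold
        out.append(makeup * mag * (1.0 if x >= 0 else -1.0))
    return out
```

Pushing more level into the input simply drives more of the signal into the clamped region, which is exactly why the overload indicators stay unlit however hard the input trim is pushed.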
Alternatively, you could use this technique if you had loudness maximisation patched into a
master buss and you wanted to push the sound harder on some tracks. For example, let's say
I inserted Waves L1 into the master buss in Sonar, with the threshold set to -3.0dBFS, and
the output ceiling set at -0.1dBFS. Any signal louder than -3dBFS will force the limiter to
start attenuating the signal. Increasing the level of the input trim control pushes more signal
into the L1, causing a greater degree of loudness maximisation. No matter how hard you
push the input trim control, the clipping/overload indicators will remain unlit, because the
L1's output ceiling has been set to -0.1dBFS, so you have to be careful that you don't overdo
things.

Mastering For Vinyl

Although the market for vinyl is now minuscule at best, it remains important for DJs and
some audio purists who regard CDs as an invention of Satan that is destined to cause the
end of Western civilisation as we know it. So let's address the

issue of mastering for vinyl. Despite what you may have heard, mastering for vinyl is the
easiest type of mastering you can do, as it involves only two steps:
1. Find a mastering engineer who has mastered a ton of recordings for release on vinyl.
2. Present your final mixes to that person and say "Here, you do it."

Vinyl is an unforgiving medium, and mastering for it is extremely difficult. Its dynamic
range is a puny 50dB or so,
even with decent vinyl, compared to the 80dB or more we enjoy with even the most basic
digital media. As a result, compression is essentially mandatory to shoehorn music's wide
dynamic range into vinyl's narrow dynamic range. But vinyl has other problems: there is a
trade-off between loudness and length. This is because a groove in a record is just a
waveform, and a louder waveform will cause the groove to have a wider physical
excursion. So, to get a lot of material on an LP, you have to cut the vinyl at a pretty low
level. Bass is also troublesome. Bass waveforms have a very wide excursion and, with
stereo, if the left and right channels are even slightly out of phase, the stylus can jump the
track as it tries in vain to follow different curves for the right and left channels. We take
concepts like stereo bass for granted now, but back in the days of vinyl, bass had to be
mono. And that's not all!
As the record gets closer to the end, the tone arm hits the groove at more of an angle
(except with linear-tracking turntables), causing what's called inner groove distortion. As a
result, song orders often used to be created with the softest songs coming at the end of an
album's side, so that the inner grooves would be less subject to distortion. In the old days,
recording engineers were well aware of the limitations of vinyl, and took them into account
during the recording process. Many of today's engineers were brought up in an essentially
vinyl-less world, and don't consider the problems discussed above. This makes it more
important than ever to use a mastering engineer who is an expert in the art. When it comes
to mastering for vinyl, the advice is simple: don't try this at home!

Managing Your Levels
Although most modern audio software packages use 32-bit floating-point audio engines and
have lots of headroom, overloading can still occur unless levels are set properly, especially
if the master buss is the sum of different channels. Clipping indicators are helpful, but
programs that include a numeric read-out of how much a peak level is above or below
0dBFS are far more useful.
This value, called the margin, is positive if the level is above 0dBFS and negative if below.
If possible, I generally enable any kind of peak-hold feature so that I can see the highest
level attained at the end of a song without having to keep my eyes glued to the meters. Note
that if the margin indicator isn't reset automatically (when you click the transport stop
button, for instance), you will have to clear the value manually from time to time. It's
extremely useful to have access to exact headroom and gain figures while mastering;
these can be seen on this Sonic Foundry Vegas master fader at the top and bottom of the
meter, respectively. The faders themselves should also be calibrated; here's an example of
how to use this feature. Suppose the fader is currently set to 0dB gain, and you send in a
signal that reaches -3dBFS. The margin indicator will also show -3dBFS. If the master
fader setting is -1.5dB and you feed in the same -3dBFS signal, then the margin indicator
would show -4.5dBFS: the original value, less the amount of attenuation provided by the
master fader. Ideally, the margin should indicate not 0dBFS but slightly less, say
-0.1dBFS. This is important, because if a tune has peaks that hit 0dBFS for more than a few

milliseconds, it may be rejected by a CD pressing plant on the assumption that those peaks
represent distortion. To set the master fader for the highest possible level short of distortion,
first reset the margin indicators, then play the tune through from start to finish. When it's
over, check the margin and note the reading. Let's say it's -4.1dBFS. As you want the margin
to read -0.1dBFS, that means the overall level needs to be raised by 4dB. Now note the
fader reading. We'll assume it shows 1.5dB. We want to add another 4dB of level, so if we
set the fader reading to 5.5dB, then the next time the song plays from start to finish the
margin should indicate -0.1dBFS.

Mastering

I certainly wouldn't want to imply that following the above techniques will make you a
mastering engineer. However, I believe that if you apply these ideas correctly you will end
up with mixes that sound better than before, and that's the whole point. Besides, if you start
working on your mastering chops now, you just might discover a whole new outlet for your
creativity. Published in SOS Au
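The fader-setting arithmetic in the worked example above (measure the margin, then raise the fader by the difference between it and the target) is simple enough to capture in a hypothetical helper function:

```python
def new_fader_setting(current_fader_db, measured_margin_db, target_margin_db=-0.1):
    """Return the master-fader setting that brings a measured peak margin
    to the target margin.  Mirrors the worked example: with the fader at
    1.5dB and a measured margin of -4.1dBFS, a 4dB lift is needed, so the
    fader should be set to 5.5dB."""
    return current_fader_db + (target_margin_db - measured_margin_db)
```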
The final stage of production turning your mixes into a finished CD requires some
specialised tools.
Martin Walker runs through the options and considers how best to set up and use the
necessary PC software. Lots of SOS readers have been asking me over the last few months
if they can master their albums using a computer, rather than relying on external rackmount
hardware to do the job. Mastering basically involves taking the individual songs, placing
them in a suitable order (which is not always as easy as it sounds), adjusting their relative
levels and EQ to make them sit more comfortably together, and adding any final fairy dust
if and when needed. Well, of course, this is possible using a computer, and once you have
your recordings on a hard drive you could leave them there at every stage right up to CD
burning of the final product if you wish. However, the hot debate among traditional studio
owners concerns software plug-ins, and whether or not their quality matches up to that of
external rackmount outboard effects. This question is particularly important during
mastering, since each make and model of the types of processor that get used tends to have
its own unique sound.
As I discussed in PC Musician SOS November 2000, there is no inherent reason why a
software solution should be inferior for most types of effect, subject to it being given
sufficient processing power. Indeed, some existing rackmount effects are nothing more than
powerful computers in a box with additional I/O. Most modern studios seem to be using at
least some plug-ins, and some have embraced them wholeheartedly as another way
forward, if not the only way forward. So, the answer is yes: you can master an album
without ever leaving the comfort of your computer, and get results that are good enough to
release commercially. However, mixing and mastering are very different skills, and it's
important to know how to make the most of the available software tools.

Suitable Software
If all you want to do is assemble a set of already perfectly formed audio files into a chosen
order and then burn an audio CD, you won't need a mastering application, since you can do
this with almost any CD-burning utility.
However, this can be a frustrating approach, since unless you can listen to the tracks in
sequence before the burn, you won't be able to hear how they sound one after the other until
you put the finished CD-R in your hi-fi. The latest versions of products like Adaptec's Easy

CD Creator, Ahead Software's Nero, and CeQuadrat's WinOnCD all provide more facilities
for those creating audio CDs, but there is no substitute for being able to audition and make
changes to the WAV files in context.

Professional Mastering

While mastering on your PC can give good results, I certainly wouldn't claim that it gives
results as good as those achieved by a professional mastering engineer. For a start, you are
unlikely to have a state-of-the-art monitoring system that's flat down to 30Hz or less to
accurately judge the bass end. You won't have tens of thousands of pounds worth of
esoteric EQ, compression, reverb, and other goodies to tweak your sound to perfection.
Most of all, however, you are unlikely to have the same level of expertise, objectivity, and
impartiality. Good mastering engineers are renowned for their golden ears, and their skills
are acquired through years of training and experience.
The next step up is a list-based stereo audio editor that lets you assemble your audio files,
add fades in and out where required, and audition the joins. Nowadays most of these have a
graphic environment that makes the process far more intuitive, as well as the ability to drag
and drop each track relative to each other to adjust spacing, and in some cases even drag
one across another to automatically create crossfades between tracks. Since modern
multitrack audio software is capable of running plug-ins suitable for mastering purposes, it's
also perfectly possible to master in the same environment in which you record and mix.
However, a dedicated mastering application may still prove easier to use in the long run, for
various reasons. First, it's vital to be able to zoom in to view waveforms at single-sample
level to be able to spot and remove clicks and pops, and not all multitrack applications let
you do this without accessing an external audio editor.
In addition, it's often easier to assemble a set of stereo mixdowns into the final order and
adjust the spacing between them using a list-based approach, even if you can also view
them as graphics, since dragging and dropping text in a list is far easier to deal with.
Finally, using a dedicated mastering application into which the CD-burning process is fully
integrated can make the overall process even easier, especially where the final CD audio
file is being calculated on the fly. This is because the individual tracks always remain
separate, so that you are not having to deal with single 600Mb image files for an hour-long
album. In many cases the fades can also be applied on the fly during the burn, which makes
it easier to change things at the last moment if required, and some packages even let you
apply plug-in effects to individual tracks as well. I have come across one multitrack
application that provides all these facilities: Samplitude 2496 (formerly marketed by
SEKD, but now under the banner of Magix, who also offer a dedicated stereo version called
Samplitude Master devoted to mastering).
On the PC there are several other software applications that are specifically intended for
detailed work on final mono or stereo tracks. The most famous is Steinberg's Wavelab, now
at version 3.0, which incorporates the multitrack Montage function to assemble more
complex tracks, add fades and effects on the fly, and has integrated CD-burning facilities.
Sonic Foundry's CD Architect is another elegant application, and comes bundled with their
Sound Forge Lite editor, although most musicians will prefer the more comprehensive
Sound Forge if it's within their budget. Sample rates of up to 96kHz have been supported
for some time, but only after a wait of several years for the recent version 5.0 have 24-bit
files been supported as well. Syntrillium's Cool Edit Pro is also an excellent stereo and
multitrack editing package, but doesn't have integrated CD-burning facilities.


IK Multimedia's T-Racks provides virtual valve EQ, compression, and multi-band limiting,
as well as fade options, and has recently been updated to accept 24-bit files, but unlike the
others mentioned here doesn't provide graphic editing; in fact, it's more like a rackmount
processor such as TC's Finalizer in approach.

Application Settings

For the best final results when mastering, the stereo audio files of each track should be at
24-bit or higher resolution. This doesn't necessarily mean that you have to record every
track in your audio sequencer at 24-bit resolution, since most multitrack applications will
let you mix 16-bit and 24-bit tracks at will: the important part is to make the final stereo
mixdown at 24-bit, or even 32-bit if you have a suitable application like Cubase VST32.
You will benefit from this even when using 16-bit converters or samples on the original
recordings, since as soon as multiple tracks are mixed together there will be more than 16
bits worth of resolution anyway.

Visual Information

Many musicians find that using analytical tools helps during mastering, and I discussed
many of the options in some detail in SOS September 2000. A spectrum analyser is useful
to examine frequency response against other recordings, and can also be invaluable in
spotting low-end problems that may not be audible on nearfield monitors. Most audio
editors, including Cool Edit Pro, Sound Forge, and Wavelab, now incorporate them, and
shareware plug-ins are also available from Nick Whitehurst (see Contacts box). Steinberg's
FreeFilter also has one built in, and can learn the frequency response of another track and
apply it to one of yours. A phase display can help check for mono compatibility (which is
still vital if you expect radio play). Steinberg has one in its Mastering Edition, Nick
Whitehurst incorporates one in his shareware C_SuperStereo, and PSP provide a Stereo
Analyser in their StereoPack. A sonogram display can help you make decisions about
high-frequency enhancement, as well as spotting low-level hums, whistles, and DC offset
problems. Again, Cool Edit Pro and Sound Forge have one built in, while Steinberg's
Mastering Edition has one in plug-in form.
Choice of sample rate is a more thorny issue. I still use 44.1kHz, since this is my target rate
for burning audio CDs, and although some people maintain that modern sample-rate
converters are now so good that you can start at 96kHz and then down-convert at the end, I
prefer not to put my audio through an extra stage of conversion. However, if you are
convinced of the audible benefits of high-sample-rate recording for your type of music and
gear, and your system can cope with the increased processor and hard disk requirements, go
ahead, though bear in mind that 88.2kHz may be a more benign choice than 96kHz, since
the down-sampling process to 44.1kHz is so much simpler. If you are transferring a 48kHz
DAT tape into your PC for mastering, you will obviously have to convert this to 44.1kHz.
Since this will change the overall sound slightly, I would be inclined to do this as the first
process, so that you can add further tonal tweaks to the final 44.1kHz version as required. If
you have several applications capable of this conversion, try them all and compare the
results, and if there are any quality options make sure that you always use the highest one;
it may take considerably longer to process the whole track, but you want to lose as little
quality as possible. Whatever sample rate you choose, you should leave recordings at as
high a bit depth as you can until the last moment, and then convert to the final format

(normally 16-bit for audio CD burning) as the last stage, with suitable dithering. This is
because any alterations you make to the audio files, including gain changes,
compression, EQ, and fades, will produce rounding errors in the calculations. If they are
already at 16-bit then the accumulating errors will gradually make your tracks sound coarse
and grainy, and you will lose fine transient detail and stereo localisation. If your software
lets you choose a resolution for temporary files, make sure this is also at a suitably high
setting. In Wavelab, for instance, choose Create 32-bit float temporary files in the File page
of Preferences. While you are there, if you have separate Windows and audio drives make
sure that you set the Folder for Temporary Files to the Windows one, since keeping them on
a different drive from your audio ones will greatly speed up most Undo operations. You can
also do this for Sound Forge in the Perform page of its Preferences. If you are using Cool
Edit Pro, ticking the Auto-Convert all data to 32-bit upon opening box will ensure that all
subsequent editing is also carried out at 32-bit resolution.
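The final reduction to 16-bit with dithering can be sketched as follows. This uses plain TPDF (triangular) dither rather than a proprietary noise-shaped scheme such as the Waves system, so it illustrates the principle rather than replacing a good plug-in:

```python
import random

def dither_to_16bit(samples, seed=0):
    """Reduce floating-point samples (-1.0..1.0) to 16-bit integers using
    TPDF dither: summing two one-LSB rectangular random values gives a
    triangular noise distribution that decorrelates the rounding error from
    the signal.  Low-level detail then fades into a benign noise floor
    instead of turning coarse and grainy, as plain truncation would make it.
    """
    rng = random.Random(seed)
    full_scale = 32767.0
    out = []
    for x in samples:
        noise = rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)  # +/-1 LSB, triangular PDF
        v = int(round(x * full_scale + noise))
        out.append(max(-32768, min(32767, v)))                   # clamp to the 16-bit range
    return out
```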
20 Tips

There is a world of difference between what happens in a professional mastering suite and
what the average project studio owner can do at home. But as more computer-based
mastering tools become available it is quite possible to achieve very impressive
results with relatively inexpensive equipment. Certainly there is a lot more to mastering
than simply compressing everything, though compression can play an important role. The
most crucial tool is the ear of the person doing the job, because successful mastering is all
about treating every project individually. There is no standard blanket treatment that you
can apply to everything to make it sound more produced. Every mastering engineer has
preferences regarding the best tools for the job, but if you are just getting started I
recommend a good parametric equaliser, a nice compressor/limiter, and perhaps an
enhancer, such as an Aphex Exciter or an SPL Vitalizer. You also need an accurate
monitoring environment with speakers that have a reasonable bass extension, and some
form of computer editor that can handle stereo files. The latter should ideally have digital
inputs and outputs, though if you are using an external analogue processor you will
probably be going into the computer via its analogue inputs, in which case these need to be
of good quality too. A professional may want to start off with a 20- or 24-bit master tape or
to work from a half-inch analogue master, but in the home studio most recording is done to
16-bit DAT. This should not be a problem for most pop music, providing you proceed
carefully. Most mistakes are due to over-processing, and the old adage "If it ain't broke,
don't fix it" applies perfectly to mastering. Do not feel that you have to process a piece of
music just because you can. You might find that your master sounds worse than the original
material.
And now for the tips...
1. Where possible, handle fade-out endings in a computer editor, rather than using a master
tape that was faded while mixing. Not only does the computer provide more control, it will
also fade out any background noise along with the music, so that the songs end in perfect
silence.
2. Editing on DAT is very imprecise, so when you beam the material into the computer
(digitally, if at all possible) clean up the starts of songs using the Silence function. Use the
waveform display to make sure you silence right up to the start of the song without clipping

it. As a rule, endings should be faded out rather than silenced, as most instruments end with
a natural decay. When the last note or beat has decayed to around 5% of its maximum level,
start your fade and make it around a second long. You can also try this if the song already
has a fade-out, though you may want a slightly longer fade time. Listen carefully to make
sure you aren't shortening any long reverb tails or making an existing fade sound unnatural.
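The ending treatment in tip 2 (start the fade once the final decay falls to around 5% of peak level, then fade over roughly a second) could be automated along these lines; a linear ramp is used for simplicity, where a real editor would offer shaped fades:

```python
def auto_fade_out(samples, sample_rate=44100, threshold=0.05, fade_seconds=1.0):
    """Find where the final decay drops to `threshold` times the track's
    peak level, then fade to silence over `fade_seconds` from that point,
    so the song ends in perfect silence rather than background noise."""
    peak = max(abs(s) for s in samples) or 1.0
    start = len(samples)
    # scan backwards for the last sample still above the threshold
    for i in range(len(samples) - 1, -1, -1):
        if abs(samples[i]) > peak * threshold:
            start = i + 1
            break
    fade_len = int(sample_rate * fade_seconds)
    out = list(samples)
    for i in range(start, len(out)):
        pos = (i - start) / fade_len
        out[i] *= max(0.0, 1.0 - pos)   # linear ramp down to zero
    return out
```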
3. Once you have decided on a running order for the tracks on the album, you will need to
match their levels. This doesn't simply mean making everything the same level, as this will
make any ballads seem inappropriately loud. The vocals often give you the best idea of how
well matched levels are across songs, but ultimately your ears are the best judge. Use the
computer's ability to access any part of the album at random to compare the subjective
levels of different songs, and pay particular attention to the levels of the songs either side of
the one you are working on. It's in the transition between one song and the next that bad
level-matching shows up most.
4. If an album's tracks were recorded at different times or in different studios, they may not
sit well together without further processing. The use of a good parametric equaliser
(hardware or software) will often improve matters. Listen to the bass end of each song to
see how that differs and use the EQ to try to even things out. For example, one song might
have all the bass energy bunched up at around 80 or 90Hz while another might have an
extended deep bass that goes right down to 40Hz or below. Rolling off the sub-bass and
peaking up the 80Hz area slightly may bring the bass end back into focus. Similarly, the
track with bunched-up bass could be treated with a gentle 40Hz boost and a little cut at
around 120Hz. Every equaliser behaves differently, so there are no universal figures; you
will need to experiment. At the mid and high end, use gentle boost between 6 and 15kHz to
add air and presence to a mix, while cutting at 1-3kHz to reduce harshness. Boxiness tends
to occur between 150 and 400Hz. If you need to add top to a track that doesn't have any, try
a harmonic enhancer such as an Aphex Exciter; high-end EQ boost will simply increase the
hiss.
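A software parametric band of the kind recommended here is commonly built from the well-known Audio EQ Cookbook (RBJ) biquad formulas. The sketch below computes the coefficients for a peaking bell (for instance a gentle +2dB of air at 10kHz, or a -2dB cut at 2kHz to tame harshness) and evaluates its frequency response, assuming a 44.1kHz sample rate:

```python
import cmath
import math

def peaking_eq_coeffs(f0, gain_db, q, sample_rate=44100):
    """Biquad coefficients for a peaking (bell) EQ band, per the
    Audio EQ Cookbook formulas.  Returns (b, a) coefficient lists."""
    a_amp = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * a_amp, -2.0 * math.cos(w0), 1.0 - alpha * a_amp]
    a = [1.0 + alpha / a_amp, -2.0 * math.cos(w0), 1.0 - alpha / a_amp]
    return b, a

def gain_at(f, coeffs, sample_rate=44100):
    """Magnitude response of the biquad at frequency f, in dB."""
    b, a = coeffs
    z = cmath.exp(-1j * 2.0 * math.pi * f / sample_rate)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20.0 * math.log10(abs(h))
```

The bell reaches exactly the requested gain at its centre frequency and returns to 0dB away from it, which is the behaviour you rely on when nudging one narrow region without disturbing the rest of the spectrum.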
5. To make a track sound louder when it's already peaking close to digital full scale, use a
digital limiter such as the excellent Waves L1 plug-in or Logic Audio's built-in Energizer. In
most cases you can increase the overall level by 6dB or more before your ears notice that
the peaks have been processed. A nice feature of the L1 is that you can effectively limit and
normalise in one operation. It's always good practice to normalise the loudest track on an
album to peak at around -0.5dB and then balance the others to that track, but if you are
using the L1 to do this, make normalising your last process, so that you can use the Waves
proprietary noise-shaped dither system to give the best possible dynamic range.
Normalising or other level-matching changes should always be the final procedure, as all
EQ, dynamics and enhancement involves some degree of level change. Proper re-dithering
at the 16-bit level is also recommended if going direct via a digital output to the production
master tape, as it preserves the best dynamic range. Analogue outputs will be re-dithered by
the A-D converter of the recorder.
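One way to read the normalising advice above is as a single common gain applied across the whole album, so that the loudest track peaks at around -0.5dB while the balance you have set between songs is preserved; matching the quieter tracks to it by ear remains a separate, manual job. A sketch, with tracks as lists of samples:

```python
def normalise_album(tracks, target_db=-0.5):
    """Scale every track by one common factor so the loudest track peaks
    at `target_db` dBFS, preserving the relative levels between songs."""
    album_peak = max(abs(s) for track in tracks for s in track) or 1.0
    gain = (10.0 ** (target_db / 20.0)) / album_peak
    return [[s * gain for s in track] for track in tracks]
```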
6. If a mix sounds middly or lacking in definition, the SPL Vitalizer can be very useful
(even the very inexpensive Stereo Jack version produces excellent results). This device

combines EQ and enhancer principles in a single box, and one characteristic of the Vitalizer
process is that the mid-range tends to get cleaned up at the same time as the high end is
enhanced and deep bass is added. As with all enhancers, though, be very careful not to
over-use it: keep switching the process in and out, to preserve your sense of perspective.
The same applies to EQ and dynamics: check regularly against the untreated version to
ensure that you are not making things worse.
7. Have a CD player and reference material on hand to use as a comparison for your work.
Not only does this act as a reference for your ears, it also helps to iron out any inaccuracies
in your monitoring system.
8. Overall compression can add energy to a mix and even out a performance, but it isn't
mandatory. Music needs some light and shade to provide dynamics. Often a compressor
will change the apparent balance of a mix slightly, so you may need to use it in combination
with EQ. Placing EQ before the compressor results in any boosted frequencies being
compressed most, while placing it after the compressor allows you to equalise the
compressed sound without affecting the compressor operation. Which is best depends on
the material being treated, so try both.
9. A split-band compressor or dynamic equaliser gives more scope for changing the spectral
balance of a mix, but these devices take a little practice before you feel you are controlling
them and not vice versa!
10. One way to homogenise a mix that doesn't quite gel, or one that sounds too dry, is to add
reverb to the entire mix. This has to be done very carefully, as excess reverb can create a
washy or cluttered impression, but I find Lexicon's Ambience programs excellent for giving
a mix a discreet sense of space and identity. If the reverb is cluttering up the bass sounds,
try rolling off the bass from the reverb send. If you want to add a stereo width enhancing
effect to a finished mix, there are two main things to consider: the balance of the mix and
the mono compatibility of the end result. Most width enhancers tend to increase the level of
panned or stereo sounds while suppressing centre sounds slightly. Sometimes this can be
compensated for by EQ, but being aware of what's happening is half the battle. Other than
the simple phase-inversion width enhancement used in the SPL Vitalizer, which is
completely mono-compatible, width enhancement tends to compromise the sound of the
mono mix, so always check with the mono button in. While most serious listening
equipment is stereo these days, many TVs and portable radios are not, so mono
compatibility is important.
11. Listen to the finished master all the way through, preferably using headphones, as these
have the ability to show up small glitches and noises that loudspeakers may mask. Digital
clicks can occur in even the best systems, though using good quality digital interconnects
that are no longer than necessary helps to reduce the risk.
12. Try to work from a 44.1kHz master tape if the end product will be a CD master. If you
have to work from a 48kHz tape or one with different tracks recorded at different sample
rates, a stand-alone sample-rate converter can be used during transfer of the material into a
computer. If you don't have a sample-rate converter, most editing software will allow you to

do a conversion inside the computer, though this takes processing time and the quality is
not always as good as that from a good-quality dedicated unit. When using a software
sample-rate converter, ensure that the tracks are all recorded with the computer system set
to the same sample rate as the source material. If you don't have a sample-rate converter at
all, don't worry too much, as transferring in the analogue domain via decent external A-D
and D-A converters may well produce better results than an indifferent sample rate
converter (with free re-dithering thrown in too!). Alternatively, if your master is for
commercial production rather than for making CD-Rs at home, leave your master at 48kHz
and inform the mastering house so that they can handle the conversion for you.
13. When you are transferring digital material into a computer, ensure that the computer
hardware is set to external digital sync during recording and internal sync during playback.
Also double-check that your record sample rate matches the source sample rate: people will
often present you with DAT tapes at the wrong sample rate, or even with different tracks at
different sample rates. All too often this is overlooked, until someone realises that one of
the songs is playing back around 10 percent slow!
14. Don't expect digital de-noising programs to work miracles: even the best systems
produce side-effects if you push them too far. The simpler systems are effectively
multi-band expanders, where the threshold of each band is set by first analysing a section of
noise from between tracks. For this reason it's best not to try to clean up your original
masters prior to editing, otherwise there may be no noise samples left to work from. With
careful use you can achieve a few dB of noise reduction before the side-effects set in: as
low-level signals open and close the expanders in the various bands, the background noise
is modulated in a way that can only be described as chirping. The more noise reduction you
try to achieve, the worse the chirping, so it's best to use as little as you can get away with.
15. When editing individual tracks (for example, when compiling a version from the best
sections of several recordings), try to make butt joins just before or just after a drum beat, so
that any discontinuities are masked by the beat. However, if you have to use a crossfade
edit to smooth over a transition, try to avoid including a drum beat in the crossfade zone, or
you may hear a phasing or flamming effect where the two beats overlap. As a rule,
crossfades should be as short as you can get away with, to avoid a double-tracked effect
during the fade zone. As little as 10-30ms is enough to avoid producing a click.
16. On important projects, make two copies of the final mastered DAT (one as a backup)
and mark these as Production Master and Clone. Write the sample rate on the box, along
with all other relevant data. If you include test tones, document their level and include a list
of all the track start times and running lengths for the benefit of the CD manufacturer. As
mentioned earlier, if, for any reason, you have produced a 48kHz sample rate master, mark
this clearly on the Production DAT Master so that the CD manufacturer can sample-rate
convert it for you. It's always a good idea to avoid recording audio during the first minute or
so of a new DAT tape, to avoid the large number of dropouts commonly caused by the
leader clip in the tape-spool hub. You can, however, use this section to record test tones,
which will also demonstrate to the person playing your tape that it isn't blank! If you put
DAT start IDs on each track, check them carefully to make sure that there are no spurious
ones, and don't use skip IDs.

17. When deciding on how much space to leave between tracks on an album, listen to how
the first track ends and the second one starts. Gaps are rarely shorter than two seconds, but
if the starts and ends are very abrupt you may need to leave up to four seconds between
tracks. Use the pre-roll feature of your digital editor to listen to the transition, so that you
can get a feel for when the next track should start.
18. When using a CD-R recorder to produce a master that will itself be used for commercial
CD production, ensure that the disc can be written in disc-at-once mode rather than a track
at a time, and that your software supports PQ coding to Red Book standard. Check with
your CD manufacturer to confirm that they can work from CD-R as a master, and take note
of any special requirements they may have. Be very careful when handling blank CD-Rs:
there are commercial CDs on the market with beautiful fingerprints embedded in the digital
data!
19. Be aware that stand-alone audio CD recorders usually have an automatic shut-off
function if gaps in the audio exceed a preset number of seconds, usually between six and
20. This may be a problem if you need large gaps between tracks. Occasionally, even very
low-level passages in classical music can be interpreted as gaps. Also note that these
recorders will continue recording for that same preset number of seconds after the last
track, so you will need to stop recording manually if you don't want a chunk of silence at
the end of the album.

20. When making a digital transfer from a DAT recorder to a CD recorder that can read
DAT IDs, it's best to manually edit the DAT IDs first, so that they occur around half a
second before the start of the track. Then you don't risk missing part of the first note when
the track is accessed on a regular CD player. Alternatively, there are
commercial interface units (or CD-R recorders with the facility built in) that delay the audio
stream in order to make coincident or slightly late DAT IDs appear before the audio on the
CD-R.
