Friday, October 4, 2013

ProTools Made Easy

Hi friends, I really hope this post will help you all understand ProTools a little better. I will use screen shots of a ProTools setup here in the studio for you to have a good visual along with explanations and more! (I tried to use my personal ProTools at home, but I forgot my authorization code in California....) ha

First things first, if you want to get the most out of ProTools, YOU MUST LEARN AND COMMIT TO THE KEYSTROKES. It isn't easy at first, but if you use them enough, they will become second nature. 

What is a keystroke??? Say I want to copy this line and paste it somewhere else.... You highlight the line, press ⌘ (Command) and the letter C to copy, then ⌘ V to paste. That is a keystroke. These will make your workflow in ProTools quicker and start you off on a good note.

1. Open ProTools and create a new "Session" by clicking Create Blank Session...  and hitting okay. Some other options to choose in this dialog window are
  • Open Recent Session
  • Open Session...
  • Create Session from Template - templates are useful when you get a set up that you like and need for multiple projects. 
    • For example, when I recorded my album, I had my drummer (the first real instrument I tracked, or recorded) come in for 6-8 hours a day for two days. I had 10 songs for him to track and each session took me a good 20 minutes to set up and test before tracking. So, after the first session was made, I saved it as a session template so that when I moved on to the next song, I didn't have to waste time! I recommend using templates AFTER you are fluent in the setup process because these skills will open so many doors I promise :D

2. You must name your session and I want to stress the importance of file management. Seriously, this may be the point where people screw up the most. ProTools is notorious for having a difficult filing system that can get so out of hand if you're not paying attention. So, when you click save here, make sure you are saving in the right place. 

As you create more sessions, it will get harder to manage them, UNLESS you are organized and mindful, as I have had to learn over the course of lost sessions, hours of wasted work, and frustrating ProTools life lessons! 

Just to clarify, I will use ">" to reference hierarchies to finding where things go and are. 

For instance File > Save As... 

As you can see, I saved this session as Practice Session 4/10/13 Stephens in 

Local Documents > MMus > Create your folder here > Stevie Rae Stephens

Click Save and bam, this comes up and you are ready to rock n roll. 

This is your session window!

3. Let's make a track. A track is something you all have seen before, but in case you aren't 100% sure, it's the thing that you record types of audio onto in the DAW (DAW = Digital Audio Workstation; examples of other DAWs are Reason, Logic, or Garageband).

Here is your first Keystroke, and one of the most important (because it works in many other applications, not just in ProTools!)
  • Shift ⌘ N - The ⌘ symbol is the Command key on Mac keyboards. This keystroke creates a new track by bringing up the New Track Dialog Window

There are a few options in this window. You should know what each of them is and does, because you will be using most of them at some point or another. Your options are

  • Audio Track - Use audio when you are recording through microphone(s) or when you are using an instrument that is plugged into the interface (i.e. bass guitar, electric guitar, keyboard)
  • Aux Input - Use auxiliary input tracks when you need bus and send returns. (I'll explain a little more later on)
  • Master Fader - This should already be in the session (1. Flaw in ProTools). This track's fader will govern all the other tracks in the session including aux and instrument tracks. 
  • MIDI Track - Use MIDI when you want to input MIDI messages, mostly from external sound hardware. Originally, MIDI tracks had to be used in tandem with auxiliary inputs to play and monitor the sound, but now the Instrument track is a combo. 
  • Instrument Track - Use instrument when you want to input MIDI messages and hear what you are playing using a virtual instrument provided in ProTools. Basically, use this one with MIDI.

You also have the option to make your track stereo or mono. Unless I want to use one track to capture both the left and right signals of a source, like a keyboard, I will use MONO tracks for my audio so I can have more control.

  • Remember, if something isn't clear, make a mental note and either FB message me, text me, or ask me on Monday or whenever and I can clarify more :D 

 I use STEREO for my Auxiliary Input tracks. Again, I will explain further later down the line.

Before you click create, think ahead in case you might want more than one track. The next keystroke you might want to know is

Shift ⌘ Up/Down Arrow - This will create more tracks in the New Track Dialog Window.

I use this tool when I know I am going to need multiple tracks (i.e. 8 part harmony backing vocals, drum recording, or multi-tracking).

This is what will show up. This is your brand new AUDIO track.

(In some DAWs, a single track has the ability to be any of the above options making workflow even quicker. 2. Flaw in ProTools). However, getting to understand the difference between each of these is very important in the recording world, which will soon be yours too :)

On the left side of the track, you will find the label (I double clicked the white part, which will originally say Audio 1, to rename it).

This is another way to demonstrate good file management because once you start recording, cutting, moving, and changing the audio you record, ProTools will name it accordingly. If I leave the track name as Audio 1 and record a vocal, the vocal take will be labeled in my ProTools Audio Files Folder as Audio 1. Not Stevie Rae Lead Vocal, or Guitar Neck Left, or Snare Mic, whatever.

LABEL your tracks!

Before getting into the next step, let me introduce you to your editing tools. Up in the left top corner of your session window, you will see this -------------------->

The left most button controls the size of the waveform. It comes in handy when you want to really zoom into the waveform for insane editing. This button is awesome if you remember it's there. If not, you end up with a fat block of audio that looks like this....

And your class notes end up looking like this...

Back to the Waveform zooming tool... Using this tool isn't the same as zooming in or out; it's actually making the audio waveform bigger or smaller. I like to set this tool 4 clicks up from the smallest it goes. That usually gives me a better representation of the waveform in terms of volume and clipping.

Clipping is when the signal being received by ProTools is too loud (hot) causing information loss. You will see the meter on the track hit the red when this happens. Simply ease off of the gain knob on the interface.
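If it helps to see what clipping actually does to the samples, here's a tiny Python sketch of the concept (just an illustration, nothing to do with ProTools' internals): any sample past digital full scale gets flattened to the ceiling, and whatever was above it is lost for good.

```python
# Illustration of digital clipping: samples live between -1.0 and 1.0
# (digital full scale); anything hotter gets flattened to the ceiling.
def clip(samples, ceiling=1.0):
    return [max(-ceiling, min(ceiling, s)) for s in samples]

too_hot = [0.5, 1.4, -1.7, 0.9]     # 1.4 and -1.7 are past full scale
print(clip(too_hot))                # -> [0.5, 1.0, -1.0, 0.9]
```

The flattened tops are why clipped audio sounds harsh: the shape of the waveform above the ceiling is simply gone.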

I am skipping over the button with the little blocks.

The third button in is the actual zooming tool, which you will never use.

Because of KEYSTROKES! The next two keystrokes you will have to get in bed with are R and T: R zooms out and T zooms in. These are very useful and will enhance your workflow exponentially!

Next is the highlighted Smart Tool. The Smart Tool is the grouping of the Trimming Tool, the Selecting Tool, and the Grabber Tool! These are grouped together because they are used often and when you select the Smart Tool, ProTools actually gets smarter ;)

With this tool selected, the mouse changes depending on where it hovers: at either end of the audio it becomes the TRIMMING TOOL, on the upper half of the audio region it becomes the SELECTING TOOL, below the line the GRABBER TOOL comes through, and in the upper corners the FADER TOOL steals the show.

Toggle through these tools by using the Escape Key keystroke.

Moving on.

Here is a session of mine I opened to show some basic editing. I have selected a few of the tracks because I want to cut out all the silence at the beginning. That silence is useless information that ProTools still spends processing power thinking about while recording or playing back.

So, I cut it out by using the Keystroke ⌘ E - Splits the region, or selected tracks, at the cursor.

Then I just delete the empty spaces I created. You can also use the Trimmer Tool to drag the selected audio start or end to where you want it. This tool looks like [ or ].

Then I decided that the audio waveforms were a little too big and blocky for my taste and I used the waveform zoom tool to make them more appealing to my eye.

The next Keystroke you should learn is ⌘ = - Use this keystroke to toggle between your Mix Window and your Session Window. This is what should appear when you use ⌘ =

Next, you should know about the Modes of trimming and editing. There are 4 modes and you can toggle through them by using the `/~ (Tilde/Grave Accent) Key keystroke.

The modes include

  • Shuffle - Try not to use this mode unless you have a particular use for it. I'll show you why in person because it's weird to explain. 
  • Spot - This will bring up a window asking where the spot is you want to edit.
  • Slip - Use this when you want to freely edit audio. 
  • Grid - Use this when you want to edit audio according to the grid. 

I use the last two almost exclusively, but sometimes you need the others for something specific. 

If you want to adjust the grid you are using to edit, click on the musical note next to Grid and select the value. You have options from whole notes to sixty-fourth notes.

Next to that tool is the mini transport with the stop button and the loop playback button for when you want something to play in a loop.

Also, none of these keystrokes will work if you don't have this dumb little a/z button (Commands Keyboard Focus) clicked and highlighted yellow.

Next thing you should look at is the drop down menu on the track itself.

Here is where you will find your fine volume adjusting, muting, and panning. As you explore more plugins and such, you can assign more automation parameters. 

That's for later.

Playlist is awesome in ProTools because you can save all your different takes of recording passes there and pick and choose from the different audio takes later in the editing process.

I will show you later in person.

Next, you should know how to create a click track.

If you struggle or refuse to play/practice/record to a click track, you might want to start using it now. It gets easier I promise.

(There is no keystroke to make a click track in ProTools. 3. Flaw)

Go to Track > Create Click Track

The click track will show up and is colored green, whereas audio tracks are blue.

Just a suggestion - I always change the default sound of the click to Marimba II because it honestly is the least obnoxious of the presets. Food for thought.

You can see the dialog box on the right that comes up when you click on the little tab called Click. You can adjust a lot of settings about the click here if you so choose.

Now, how to use a click.

Use a click so you can play in time with the tempo you have chosen for your project. In this case, ProTools defaults to 120 bpm, or beats per minute. You can change the tempo by clicking on the little red button next to the quarter note and the number 120 and this dialog window comes up. Make sure the location is like this if you want to change the whole project's tempo. Same thing goes for the meter, but in the meter section.
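The math behind the click is simple if you ever want to sanity-check it: each beat lasts 60 divided by the bpm, in seconds. A quick Python sketch of just that arithmetic (not ProTools code, only the idea):

```python
def seconds_per_beat(bpm):
    """Length of one beat in seconds at a given tempo."""
    return 60.0 / bpm

def click_times(bpm, beats):
    """Start time in seconds of each click at the given tempo."""
    return [i * seconds_per_beat(bpm) for i in range(beats)]

# At the default 120 bpm, each beat is half a second:
print(click_times(120, 4))   # -> [0.0, 0.5, 1.0, 1.5]
```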

Click the red record enable button to let the track know you want it to record audio.

I recorded some audio. I was clapping loudly into the mic.

(Keypad) 3 - this is the keystroke to start recording.

Now let's add some reverb to the audio I recorded because most times, vocals and other instruments will sound nice with some reverb!

Wow, this screen shot didn't turn out so hot. Sorry about that. It says 1 Stereo Aux Input.

This is where I would want to use an Aux Input Track. Plugins use a lot of processing power and in order to avoid a sluggish ProTools session, you should employ an Aux Track with your plugin on it and bus the audio track to the effect. In this case, the effect or plugin we will use is Reverb.

I know it's hard to see the words here so I'll type them out.

Inserts > Multichannel plug-in > Reverb > D-Verb

I like using D-Verb because it seems to be more user friendly, but you should try out the others in case you find the perfect preset or settings that work for you.
This is what the D-Verb plug-in looks like when you click on it.

I like the preset Vocal Plate, but try whichever ones you want.

I actually usually change the size to small or medium depending on the song, but I forgot here.

Now to bus.

Bussing is exactly what it sounds like. Signals sitting on little busses that take them somewhere else.

This is the first step to understanding signal flow, which will come in handy as you get more familiar with the basics.

So I am going to bus the clapping I recorded on the audio track to the auxiliary track with the reverb on it. Why is this better than just putting the reverb plug-in on the audio track itself? Because now you can bus other audio tracks to this same auxiliary reverb track, giving your computer a break and giving your project a glued-together feel, since all the reverb comes cohesively from one source!
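If it helps to picture the signal flow, here's the bussing idea as a rough Python sketch (lists standing in for audio, and a made-up `fake_reverb` standing in for the plug-in; this is only the concept, not how a DAW is actually written): all the sends sum into one bus, and the effect runs once on the sum instead of once per track.

```python
# Sketch of why one aux reverb beats a reverb per track:
# the effect processes the summed bus once, not every source separately.
def mix_to_bus(tracks):
    """Sum sample-aligned tracks into a single bus signal."""
    return [sum(samples) for samples in zip(*tracks)]

def fake_reverb(bus):
    # Placeholder for the plug-in; a real reverb is far more expensive,
    # which is exactly why you only want to run it once.
    return [s * 0.8 for s in bus]

claps  = [0.2, 0.0, 0.4]
vocals = [0.1, 0.3, 0.0]
wet = fake_reverb(mix_to_bus([claps, vocals]))  # one effect pass for both tracks
```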

Sorry again for the fuzziness.

On the Aux Track,

Input (No Input it says) > bus > Reverb

It's nice that there is automatically a named bus ready for you to use. You can see the others that are there as well.

Now, go over to your Audio Track and under the sends section,

Sends > bus > Reverb (Stereo)

Since you assigned that particular bus to be in use on the Aux Track, it will be highlighted yellow when you go to select it on the audio track.

Now you can Option/Alt drag the little bus tab to other tracks you want to send to the aux reverb later.

If you are struggling with these last few concepts, or all of them, I promise it will get better and it might be easier to see in person :) Don't stress because my teacher was drawing little busses on the white board for me and my class when we learned hahah

What happens now is a little floating aux fader will pop up.

Its fader will be all the way down and you will not hear any of the reverb until you pull the fader up to unity (zero). This is very dumb because it should just default to unity. (4. Flaw)

So, if you don't hear anything, make sure you pulled up the fader using the keystroke Alt Click

Alt Click will actually send any type of fader (pan pot, volume, automation) to unity gain, or zero.
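For the curious, "unity" really does mean a gain multiplier of exactly 1: fader positions in dB convert to multipliers by the standard amplitude formula below (plain Python, just to show the relationship).

```python
def db_to_gain(db):
    """Convert a fader position in dB to a linear gain multiplier."""
    return 10 ** (db / 20.0)

print(db_to_gain(0))     # unity gain: multiplier of exactly 1.0
print(db_to_gain(-6))    # roughly halves the amplitude
print(db_to_gain(6))     # roughly doubles it
```

So a fader parked at unity passes the signal through untouched, which is why the aux return is silent until you pull it up there.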

Now, when I hit the space bar to play, you can see how the audio is being sent to the bus. I can explain further about busses and the little pop out window and what it does.

Next, let's add an Instrument track for some MIDI piano.

Use Shift ⌘ N to create the new track, select Instrument from the drop down menu and click create.

Toggle to the Mix window using ⌘ = and in the instrument track's insert section, we will add the virtual instrument.

A great virtual instrument that comes with ProTools is MiniGrand. When used right, you can actually come up with some awesome sounding piano.

Inserts > plug-in > Instrument > Mini Grand (Mono)

You will see a plug-in of a piano pop up and it will take its sweet time getting set up for you.

Below is Mini Grand. You can adjust the settings or find a preset in the drop down menu that you like.

Fortunately, the pods are pretty much all wired and set up correctly to just work when Mini Grand or any other virtual instrument is used.

Now, you need to record enable the Instrument track and record some MIDI.

MIDI = Musical Instrument Digital Interface.

It is not actually audio, but rather a series of values describing each note: the note number and the velocity (how hard it was played) each live on a scale of 0 to 127, along with timing and duration information.
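To make that concrete, here's what a single note-on message looks like as raw bytes (a Python sketch; 0x90 is the standard MIDI note-on status byte for channel 1, and 60 is middle C):

```python
# A MIDI note-on message is three bytes: status (0x90 = note on, channel 1),
# note number (0-127, 60 = middle C), and velocity (0-127, how hard the
# key was struck). No audio anywhere -- just these small numbers.
def note_on(note, velocity, channel=0):
    assert 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

middle_c = note_on(60, 100)
print(middle_c.hex())   # -> '903c64'
```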

I can show you how MIDI can be fun and helpful in certain situations later.

So, I record enable with Keypad 3 and go for it.
This is what MIDI will look like after you are finished recording.

The great thing about MIDI is that I can toggle the MIDI Editor Window by using the keystroke

Ctrl =

And from there you can do wonders from adding notes you want, subtracting ones you don't want, etc.

You can see in my MIDI Editor Window that my notes are all off and out of time.
Introduction to the amazing Quantize! To quantize these notes, or have them snap into their rightful places according to the tempo and meter you have set, use the keystroke

 ⌘ (Keypad) 0 - Brings up the Event Operations Window with the option to Quantize your notes.

You can adjust the parameters of the quantize by selecting where they should snap to (the nearest half note, the nearest quarter note, etc).

Sometimes the quantize doesn't get it quite right, so practice with a click so it doesn't have to guess and fail so badly! haha

Below you can see the difference the quantize made.

When I first learned about this awesome tool, I had a lot of fun with it. Until I realized that when you quantize things like this, you end up with a very inhuman feel. There are shortcuts for varying the degree of quantization so that the human feel is still retained, and I will show you later.
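Mathematically, quantizing is just rounding each note's start toward the nearest grid line, and the degree-of-quantize idea only moves the note part of the way there. A toy Python version of the idea (not ProTools' actual algorithm, just the concept):

```python
def quantize(start_times, grid, strength=1.0):
    """Snap note start times (in beats) toward the nearest grid line.
    strength=1.0 is a hard snap; lower values keep some human feel."""
    out = []
    for t in start_times:
        target = round(t / grid) * grid   # nearest grid line
        out.append(t + (target - t) * strength)
    return out

sloppy = [0.05, 0.98, 2.10]              # slightly off the beat
print(quantize(sloppy, grid=1.0))            # hard snap to the beat
print(quantize(sloppy, grid=1.0, strength=0.5))  # only halfway there
```

The `strength` parameter is my own name for the partial-quantize idea; the real Event Operations window exposes it differently.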


Let's save this bad boy. Under File > Save you will be able to save all changes you made to your project and it will be completely updated.

Save As is a different way of saving: it asks you for a new name because the original project stays as it was last saved, and the Save As becomes a copy of the original with your new changes. This is useful when you want to checkpoint your progress, or when you want to start over without hitting the undo button 40,000 times!

Save Copy In, however, is a terrible idea unless you know how and when to use it. It destroys file management, I've seen it happen! So until you have a use for it, stay away from that option.

One last thing to know is how to Bounce a project out of ProTools.

A bounce is basically creating an audio file out of the entire project when you are finished or just want to have an mp3 or wav of the session.

To obtain this bounce, you must go to
File > Bounce to > Disk...

This will bring up the Bounce Dialog and your settings should look as follows below.

The only thing you might change is whether you want the bounce to be an .mp3 or a .wav

The difference between an .mp3 and a .wav is that an .mp3 has been significantly compressed to be more efficient in size. Most songs on your iPods are .mp3s to reduce space. In the conversion/compression the audio will lose quality; however, these days the loss of quality can be negligible.

A .wav is what's called a lossless format because it is printed as is, with no compression. These will be better quality, but a lot bigger.

A good analogy is the difference between the photos you take on your iPhone opposed to the photos you take on a high definition SLR camera. The difference is obvious when the two are blown up in size, but on your phone's screen, you don't really see the detailed differences.

Basically, an .mp3 vs. a .wav in your earbuds won't sound as different as they would on some awesome speakers or professional headphones.
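The size gap is easy to put numbers on: a .wav stores every single sample (sample rate × bit depth × channels), while an .mp3 only stores a chosen bitrate. Back-of-the-envelope math in Python (assuming CD-quality audio and a 320 kbps mp3):

```python
def wav_megabytes(seconds, sample_rate=44100, bit_depth=16, channels=2):
    """Uncompressed size: every sample of every channel is stored."""
    bytes_total = seconds * sample_rate * (bit_depth // 8) * channels
    return bytes_total / 1_000_000

def mp3_megabytes(seconds, kbps=320):
    """Compressed size depends only on the chosen bitrate."""
    return seconds * kbps * 1000 / 8 / 1_000_000

# A 4-minute song:
print(round(wav_megabytes(240), 1))  # ~42.3 MB uncompressed
print(round(mp3_megabytes(240), 1))  # ~9.6 MB even at a high 320 kbps
```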

Last but not least, this is a shot of what your ProTools folder will look like after you're done. You will have your Audio Folder with all of your properly named audio files, your fade files in the fades folder, your plugin settings in the plugin folder, and so on...

I just now realized that the first few folders of this screen shot are off because I used slashes to part my dates, which will actually just produce a new subfolder. This is a great example of not paying attention to a simple detail leaving my ProTools folder off and the name of my session wrong. Dammit.

Anyway, you get the idea.

I really hope this blog helps, it took me all day to put together, so please use it as much as you want and ask me any questions. I would love to help make recording your new best friend :D


Friday, May 4, 2012

You Have to Walk Through it

We can be provided with the tools and resources that will make us great engineers, but we are the ones who have to apply their teaching and utilize those resources.

If the Gate is provided, You have to walk through it. <------ hahahah corny.


Strip Silence. Use the keystrokes ⌘ U to bring up the Strip Silence window and ⌘ F to apply fades to the starts and ends of each transient. Strip Silence is a tool used to silence audio in between the desired audio. Room sounds, other instruments, anything not meant for that specific microphone to hear can be cut out using this trick.

Start everything at zero, or all the way to the left. Then grab the threshold and pull right, lowering the dB, then grab the end region puller and lengthen the tails.

Shift click two tracks and strip silence both at the same time.

F to fade the ends and beginnings of the transients and use an algorithmic curve which will sound better to the human ear.
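Under the hood, Strip Silence is doing a threshold test something like this simplified Python sketch (ProTools' real detection is fancier, but the idea is the same): keep the stretches that rise above the threshold, pad their edges so tails survive, and throw the rest away.

```python
def find_regions(samples, threshold, pad=1):
    """Return (start, end) index pairs of audio above the threshold,
    each padded a little so decaying tails aren't chopped off."""
    regions, start = [], None
    for i, s in enumerate(samples):
        if abs(s) >= threshold and start is None:
            start = i                       # region begins
        elif abs(s) < threshold and start is not None:
            regions.append((max(0, start - pad), min(len(samples), i + pad)))
            start = None                    # region ends
    if start is not None:                   # region runs to the end
        regions.append((max(0, start - pad), len(samples)))
    return regions

quiet_then_snare = [0.01, 0.02, 0.9, 0.7, 0.02, 0.01]
print(find_regions(quiet_then_snare, threshold=0.1))  # -> [(1, 5)]
```

Pulling the threshold right in the real window is the same as raising `threshold` here, and lengthening the tails is the same as increasing `pad`.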

Mix in the box first, then mix out of the box and try to duplicate that mix. This is a great learning technique because it shows you how to make things sound good before you try to emulate it out of the box.

Next great technique is making sums of certain instruments like the top and bottom of a snare and the inside and outside of a kick drum. Create new mono auxiliary track and set the input to Bus 17. Assign the outputs of the other tracks meant to be summed to Bus 17 as well.

The EQs on the board channels sound really good and can be used instead of EQing in ProTools.

Fast releases make quiet stuff turn up, like the cymbals bleeding into the snare track (room sound). The numbers on the Bomb Factory are counterintuitive: the attack and release knobs go from 1 to 7, with 1 being long and slow and 7 being short and fast. Think of the knobs on the 1176 relative to speed: 1 mph is slow and 7 mph is fast.

Set the threshold where a lot is being compressed and mess with the attack and release to get to know the compressor.


An expander is to a gate what a compressor is to a limiter. Gates are like doors: they open when the signal rises above the threshold and close when it falls back below. Everything BELOW the threshold is squashed.
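In code, the gate/compressor symmetry is easy to see: a compressor turns down what's ABOVE the threshold, a gate turns down what's BELOW it. Here's a bare-bones hard gate in Python (a real gate adds attack/release smoothing so it doesn't click, but this is the core test):

```python
def hard_gate(samples, threshold, floor=0.0):
    """Pass samples above the threshold; squash everything below it.
    floor=0.0 mutes the quiet parts entirely (an expander would use
    a gentler floor instead of full silence)."""
    return [s if abs(s) >= threshold else s * floor for s in samples]

snare_with_bleed = [0.02, 0.8, 0.6, 0.03, 0.01]
print(hard_gate(snare_with_bleed, threshold=0.1))
# -> [0.0, 0.8, 0.6, 0.0, 0.0]: the snare hits survive, the bleed is gone
```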

The gate is ineffective if used after a compressor. EQ can go first in the downward flow of the insert section in ProTools. Generally, gates with lower thresholds are what you're after.


Using a gate as a special effect in a side chain is really cool and almost better than actually using the gate! Bus the Kick on an audio track to an auxiliary track with a gate on it. The gate has a section with a key image where you can select the bus you want to use. Then highlight the key button. This makes the kick the CONTROL SIGNAL. So, anytime the kick hits, the gate will open triggering the signal generator which is sitting under the gate.

Sunday, April 22, 2012

A Little Compression Never Hurt Anyone...

Here's a continuation of the compression lectures!


First, we recorded bass through the Millennia into ProTools and mixed the signal back through the Millennia to experiment with compression!

Compression is program dependent, meaning that the right compression settings depend on factors specific to each piece. Tempo and timing are two major factors.

Fast releases on low frequencies create distortion because the release is happening on every peak! This is bad for low frequencies.

Pumping and breathing is also an effect that compression can have. It's bad to have too fast an attack or release. The idea is to make compression sound natural and musical, not falsified.

Fast attack is good for snare and bass drum because the attack can get through.

Everything generally has a quick attack so too long an attack causes the signal to already be at the decay stage when the compression attack happens.

If the release is too long, the following transient is being cut off because the first one overlaps it. This ends up being a compression war. And this can also result from too low of a threshold.

Through the distressor, we can change one variable at a time to hear the differences. Use extremes of each variable to hear the differences. Once you know what is being changed, you will understand compression better.

The input affects the threshold as well and the higher the input, the more is being crushed.

Sunday, April 15, 2012



A compressor is an automated volume knob and it enables you to make a soft sound louder and a loud sound softer!

They process the dynamic range of the audio by varying the gain or the volume of that sound.

Dynamic range is the difference between the loudest and quietest volumes of an audio signal.

Compressors can be used on everything. The variables are the input, threshold, ratio, attack, release, output level or makeup gain, and the VU meter.

The input level is the amount of input going into the compressor. When set at zero, the level leaving the computer is the same as the level going into the compressor.

The threshold is the user set level where any audio above the threshold is compressed and anything below the threshold is unaffected.

Next is the ratio, which is the ratio of decibels above the threshold to the decibels that will be heard after compression. So a 4:1 ratio means that for every 4 decibels above the threshold, 1 will be heard. If 16 decibels are above the threshold, 4 will be heard.

Limiting happens past a certain point (very high ratios like 100:1), where you don't hear much of a difference anymore.
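That ratio rule can be written out directly: below the threshold, nothing happens; above it, the overshoot gets divided by the ratio. A small Python version of the static curve (levels in dB, ignoring attack and release for the moment):

```python
def compress_level(level_db, threshold_db, ratio):
    """Static compression curve: input level in dB -> output level in dB."""
    if level_db <= threshold_db:
        return level_db                    # below threshold: unaffected
    overshoot = level_db - threshold_db    # dB above the threshold
    return threshold_db + overshoot / ratio

# 16 dB over the threshold at 4:1 comes out only 4 dB over:
print(compress_level(-4, -20, 4))   # -> -16.0 (threshold -20 plus 16/4)
```

Cranking `ratio` toward something like 100 turns this curve into a limiter: almost nothing gets past the threshold.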

Attack is the speed at which the device affects the signal. The difference between the thuck and crack sounds.

The release is the rate at which the device lets the signal decay. The time the compressor takes to return the signal to below the threshold.

The output level controls the audio level after the compression happens.
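Attack and release are really just two speeds for the same level-tracker: clamp down fast when the signal jumps above the level you're tracking, let go slowly as it falls away. A minimal envelope follower in Python showing the two speeds (simple one-pole smoothing with made-up per-sample coefficients; real compressors are more elaborate):

```python
def envelope(samples, attack=0.5, release=0.05):
    """Track signal level: use the fast coefficient when the signal is
    rising (attack), the slow one when it is falling (release).
    Coefficients are per-sample smoothing amounts between 0 and 1."""
    env, out = 0.0, []
    for s in samples:
        coef = attack if abs(s) > env else release
        env += coef * (abs(s) - env)   # move toward the current level
        out.append(env)
    return out

burst = [0.0, 1.0, 1.0, 0.0, 0.0]
print(envelope(burst))   # jumps up quickly at the hit, decays slowly after
```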

Why compress? It can make your tracks sound smooth and more consistent, it can change the tone and quality of the source, and it can change the room sound!

Compressors are dangerous and can kill an entire mix. Don't abuse them!

Compression is distortion! They are the same thing!


I love my group. We have our ups and downs, but we all have learned how to work collaboratively and effectively together to make the best recordings we can.

Compression is awesome. How did I not know this before?

Out of the Box!


In order to mix out of the box, which means using the board to mix down a session, start with assigning the Audio Path Selector to Protools.

From the Gray patch panel ProTools Outputs to the Line One Inputs. This tells the board that the signal is at line level.

To make the reach to the faders closer and more convenient, use the faders 25 through 40.

Set the faders to unity gain and set the line 1 button on, the line 1 pot set to 0, and the mix button at the top of the fader strip to on.

Also, set the auxiliary master fader up and the controls all up! Press down the dim button to bring the signal down and set the monitor speakers to about 10 o'clock. Just don't forget that the dim button is on because turning it off will blow up the room!

Side Note: Keypad 4 is the key stroke to loop playback!

Also, pan the Toms hard left and right not only on the protools channel, but also on the board.

Create a stereo audio track at the bottom and always mute it. Assign the input to B9-10 and the output to B1-2.

Patch Bay 17/18  =  IN B 9-10  =  OUT B 1-2

The whole mix goes to the Remix L/R Output into Protools 17/18 and back out (B9-10) to the 2 track monitors.

The MIX button at the top of each channel strip sends the signal to the remix output (L/R).

Ready to record? Mute, Input Monitor, Record. Then the other direction on the way back!

Friday, April 13, 2012

It Might Get Loud

"It Might Get Loud" 4.9.12

        The documentary, It Might Get Loud, features the varied playing and recording styles of the famous guitarists, Jimmy Page from Led Zeppelin, Jack White from The White Stripes, and The Edge from U2. The documentary was filmed on January 23, 2008 and premiered at the 2008 Toronto Film Festival as well as both the Sundance Film Festival and the Berlin International Film Festival in 2009. 
In order to understand the overall dynamic of the documentary, one must understand that each guitarist is from a certain generation. Jimmy Page was born in 1944 making him 68, Edge was born in 1961 making him 50, and Jack White was born in 1975 making him 36. This difference in age, experience, and background created an overlapping and extensive blanket of individualism and creativity. "It reveals how each developed his unique sound and style of playing favorite instruments, guitars both found and invented. Concentrating on the artists musical rebellion, traveling with him to influential locations, provoking rare discussion as to how and why he writes and plays, this film lets you witness intimate moments and hear new music from each artist" (IMDb, 2008). 
Throughout the documentary, Edge and Jack White demonstrate admiration and deep respect for Jimmy Page's stories, tricks, and music. This is no surprise given that Jimmy Page is the oldest of the three and the guitarist for one of the most famous and well-known bands of the time. Jimmy Page started his guitar career as a session guitarist and member of the English band, The Yardbirds, before he founded the rock band, Led Zeppelin. It is an understatement to say that Jimmy Page was simply the guitarist for Led Zeppelin. In fact, he is considered to be one of the most influential guitarists and songwriters in rock music. Jimmy's introduction in the documentary is subtle as he plays a very dynamic piece on the guitar, explaining the technique as "the shade whisper to the thunder" (It Might Get Loud, 2008). When Jimmy played or spoke, Edge and Jack listened with full attention and respect. Jimmy's early taste in music was "anything with guitar in it," and he explained the revelation he experienced when he first heard the guitar "rumble". In the 60's, Jimmy was featured as a session guitarist on songs by Marianne Faithfull, The Nashville Teens, The Rolling Stones, Van Morrison & Them, Dave Berry, Donovan Leitch, Al Stewart, Joe Cocker, Eric Clapton, and Chris Farlowe. Something very interesting about Jimmy is that he had an early interest in arts other than music; he tells the cameras about his fascination with design, painting, and drawing. He talks about some of Led Zeppelin's famous songs such as "Whole Lotta Love" and, of course, "Stairway to Heaven". 
David Evans, more commonly known as The Edge, is the guitarist for the Irish band, U2. Edge is not only known for his involvement in the widely known band, but also for his distinguishable guitar delay techniques and the other effects he uses. Edge met the band mates that would later form U2 at Mount Temple School in Dublin, Ireland. He mentions that "[they] could barely play" when they first started in the fall of 1976. Edge's background and career as a technical guitarist come from his aspirations to become an engineer in his early years before U2. When he was young, he and his brother Dick built a guitar from scratch together, which also contributes to his fascination with them. Something that Edge did not appreciate in his rise to fame was famous musicians' tendency toward self-indulgence and carelessness towards their fans. He considers his fans to be the reason he and his band mates are where they are. "He has often been called an "anti-guitar hero" because of his aversion to the indulgent, showy style based on intense soloing of many contemporaries, preferring instead to play in often a technically undemanding and low-key, yet original, way. He is renowned for being a guitarist who is more concerned with sounds, texture and innovation rather than flashy technique" (@U2, 2012). Edge even tells the camera that growing up, he never wanted to play the guitar because everyone played the guitar. One of the techniques he uses that differs from the other two guitarists is his tendency to simplify the chords he plays by omitting the filler tones within the chords. 
Jack White is the youngest of the three guitarists. Before the three sat down and started, Jack mentions that he is going to "trick them into teaching [him] all their tricks" (It Might Get Loud, 2008). Jack's playing resembles a more country, southern blues style, as when he is shown playing piano with his son out in the country. Jack is an American singer-songwriter best known as the lead vocalist and guitarist for the band The White Stripes; he was also a member of the bands The Raconteurs and The Dead Weather. Jack White is listed at #70 on Rolling Stone's list of the 100 Greatest Guitarists of All Time. During the documentary, the difference in age and years of experience was definitely pronounced when the three guitarists were featured together. Politely and humbly, Jack seemed to listen and learn more than project the knowledge he has, and he came across as self-deprecating to a degree. Jack's style of guitar was also the most different. His early inspiration was the sound of Son House; he says it spoke to him in a thousand different ways with the simplicity of singing and clapping. He recalled that the stripes on his and Meg's outfits were actually used to distract the audience from the fact that they were "trying to play Son House" (It Might Get Loud, 2008). Jack's history as a musician comes from where most angry, punk rock guitar players come from: getting picked on in school. He grew up in a town where rock and roll was uncool to listen to, and playing any instrument was even worse. One of the interesting techniques of Jack White's playing is his use of a harmonica microphone installed in his guitar. 
As for the interpersonal relationships between the three guitarists, I gathered that Jack is far more strange and eccentric than the other two, Edge is more concerned with the math and technology behind his playing, and Jimmy is the musical virtuoso who considers dynamics and collaborative playing to be key. They were all extremely honest and respectful of each other during the documentary and actively listened to each other's experiences, backgrounds, and opinions. The honesty level came through when Edge admitted to playing a wrong chord for a good amount of the time they were all jamming on one song, and when Jimmy blurted out that he can't sing. Jack and Edge started harmonizing very well during another song, and it was obvious that an extra step of bonding happened. The three guitarists are all talented and influential in their own ways, contributing to the history and list of incredible and important guitarists. By the end of the documentary, the three exchanged hugs and handshakes with clear signs of appreciation and respect. 

  • It Might Get Loud (2008), the documentary itself
  • IMDb, "It Might Get Loud"
  • Rolling Stone Music
  • Jack White III
  • @U2, "The Edge Biography"

Tuesday, March 27, 2012


So, the new board is coming in late March and it is a Solid State Logic (Matrix) according to my notes! We are all really fortunate to have learned on the MTA980 this semester and are excited for the new board coming soon!

I'm pretty behind on my blogs, so I will attempt to compile a couple for this one on EQ.


When tracking, make groups for each section of the band: the guitars go with the guitars, the backup vox with the other backup vox, and so on. When labeling and ordering the tracks, they should line up like this, from left to right:

  • Drums
    • Kick
    • Snare
    • Toms
    • Overheads (L&R)
  • Bass
  • Guitars
  • Keys
  • Vocals
    • Backup Vox
  • Auxiliaries
    • Reverb, Talkback, etc.
Additive EQ = bedroom-sounding EQing. Additive EQ is when you BOOST certain frequencies in the spectrum.

Subtractive EQ = good-sounding EQing. Subtractive EQ is when you bring out a certain frequency by CUTTING the ones around it. 

For example, there is a frequency in the kick drum that has a cardboard quality. By CUTTING the frequencies that range from 250-350 Hz, that cardboard sound will be diminished. By BOOSTING 7k just a little bit, one can increase the snap of the kick. 
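For anyone curious what a single cut or boost like that looks like under the hood, here is a minimal sketch of one peaking EQ band using the standard biquad "cookbook" formulas. The 300 Hz cut comes from the note above; the function names, the Q of 1.0, and the 48 kHz sample rate are my own example values, not anything a particular plug-in uses:

```python
import cmath
import math

def peaking_eq_response(f, f0, gain_db, q=1.0, fs=48000):
    """Linear magnitude of a peaking EQ band (RBJ cookbook coefficients) at frequency f."""
    a = 10 ** (gain_db / 40)              # amplitude factor
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    # Biquad coefficients for a peaking filter
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    aa = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    # Evaluate the transfer function on the unit circle at frequency f
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = aa[0] + aa[1] * z + aa[2] * z * z
    return abs(num / den)

def db(x):
    return 20 * math.log10(x)

# A gentle 3 dB cut centered at 300 Hz (the "cardboard" zone):
print(db(peaking_eq_response(300, 300, -3.0)))   # -3 dB right at the center
print(db(peaking_eq_response(7000, 300, -3.0)))  # essentially 0 dB far away
```

The nice property of a peaking band is exactly what the note describes: it only touches the neighborhood you aim it at, so cutting 300 Hz leaves the 7k snap alone.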

Vocals = You can't hear the vocals at 1k? SO, subtract everything around 1k to ultimately boost that frequency. Don't boost at 1k itself.

For the kick drum, one can't boost 7k by cutting everything around it, because there isn't much content up there!

When EQing the overheads, one can cut a lot of bass out because the overheads should only be the cymbals and high frequencies. This also makes ROOM for other instruments like the bass guitar and the kick drum that need that space. 

The snap of a snare is around 5k, and it can be enhanced and brightened by applying a shelf. 

Complementary EQing is when you make room where space is needed. So, turn up the bass guitar in the frequencies where the bass drum's frequencies are turned down. 

Additive EQ isn't a bad thing. It's bad when it is overused or improperly applied, and then it can completely ruin a mix. When adding, make the curve musical by boosting over a 2-3 octave range and only adding up to 2-3 dB. 
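A quick sketch of the octave math behind that tip (the function name is mine): a boost spanning n octaves centered on fc runs from fc / 2^(n/2) up to fc * 2^(n/2), since each octave doubles the frequency.

```python
def boost_band_edges(center_hz, octaves):
    """Low and high edges of a boost spanning `octaves` octaves,
    geometrically centered on center_hz (one octave = a doubling)."""
    half = 2 ** (octaves / 2)
    return center_hz / half, center_hz * half

# A 2-octave-wide boost centered at 1 kHz spans 500 Hz to 2 kHz:
print(boost_band_edges(1000, 2))  # (500.0, 2000.0)
```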

When mixing, mix to the other factors of the performance. Don't solo a guitar, mix it, and move on because the mix needs to be one mix not a bunch of mixes piled on top of each other. 

Save Solo for auxiliaries: click the Solo button. 

Don't double tracks. OOPS, I do that.

Use panning on multiple guitars to even them out nicely.

For kicks, put a high pass filter (HPF) on everything except the bass guitar and the kick, and sometimes the low end of a piano. Do this because there are a lot of frequencies that are completely unnecessary at the low end that take up room. 

Mix in mono because it is so much harder to do. Then, when you pull it into stereo, it will sound 100x better. 

Vocals - Create 2 aux tracks, one for delay and one for reverb. For the reverb, start with a medium plate, input at 100%, and decay at 1 second. The delay should be around 50 milliseconds on one track and 25 milliseconds on the other. 
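To make those delay times concrete, here is a minimal feedforward delay sketch. The 50 ms figure is from the note above; the 48 kHz sample rate and 0.5 mix level are my own example values:

```python
def delay(signal, delay_ms, mix=0.5, fs=48000):
    """Mix a delayed copy of the signal back in: y[n] = x[n] + mix * x[n - d]."""
    d = int(fs * delay_ms / 1000)  # delay in samples: 50 ms @ 48 kHz -> 2400
    out = list(signal)
    for n in range(d, len(signal)):
        out[n] += mix * signal[n - d]
    return out

# An impulse comes back as an echo 2400 samples later, at half level:
impulse = [1.0] + [0.0] * 4999
echoed = delay(impulse, 50)
```

On an actual aux track the delayed copy would be mixed in with the aux fader rather than a `mix` argument, but the time-shift math is the same.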

Do not put any low end content into the reverb because it will sound muddy in the mix. 


The goal of equalizing is to produce a flat frequency response. 

Inserts = dynamics processing, which includes expanders, limiters, and gates, all of which are variable. 

EQs are used to modify the amplitudes of selected parts of the frequency spectrum of an audio signal. 

They allow precise tonal adjustments of the sound by isolating a range of harmonics within the sound using a filter. 

A flat frequency response would be if an instrument could produce every note at the same amplitude no matter the frequency. This is impossible for anything except computers and synthesizers, but it is also what is desirable in a mix. 

Headroom = available space to turn things up. Low-frequency content robs the mix of headroom and can interact with higher frequencies in ways that become audible. 

Filter Slope = Gradual slopes sound better and more musical than sharp, sudden slopes. Gradual slopes allow more frequencies to be heard. 

A gradual slope like this runs at 6 dB per octave. Octaves relate frequencies by multiplication and division: one octave up doubles a frequency, and one octave down halves it. 
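Here is a small sketch of what "6 dB per octave" means, using the textbook magnitude formula for a first-order high-pass filter; the 1 kHz cutoff and the test frequencies are my own example values:

```python
import math

def first_order_hpf_db(f, fc):
    """Gain in dB of a first-order (6 dB/octave) high-pass filter at frequency f,
    using |H| = (f/fc) / sqrt(1 + (f/fc)^2)."""
    ratio = f / fc
    return 20 * math.log10(ratio / math.sqrt(1 + ratio ** 2))

# Well below a 1 kHz cutoff, each halving of frequency (one octave down)
# loses about 6 dB:
drop = first_order_hpf_db(50, 1000) - first_order_hpf_db(25, 1000)
print(round(drop, 2))  # 6.01
```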

Notch Filters = Used when attenuating; notching means CUTTING the frequencies, sometimes in a bell shape. 

A wider "Q" (a broader bell) will be more musical. The frequency, bandwidth, and gain all affect EQ.

Frequency = Which frequency is being focused on to be cut or boosted. 
Bandwidth = Whether the Q is narrow or wide.
Gain = Whether the frequency is being boosted or cut. 
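The relationship between those three controls can be made concrete. Q is conventionally defined as the center frequency divided by the bandwidth, so at the same center, a wider bandwidth means a lower, broader, more musical Q (the function name is my own):

```python
def q_factor(center_hz, bandwidth_hz):
    """Q = f0 / BW: wider bandwidth at the same center = lower (broader) Q."""
    return center_hz / bandwidth_hz

print(q_factor(1000, 500))  # 2.0  -> fairly broad, musical bell
print(q_factor(1000, 100))  # 10.0 -> narrow, surgical notch
```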

To train our ears to hear or comprehend certain frequencies, come up with adjectives that describe them. 


To find problem frequencies, boost individual areas of the spectrum until you find the annoying one and then cut it. This can be done on a parametric EQ. 

Before recording, think about what you want the mix to sound like so that you can complement areas of the song with EQ and instruments.

Some common shorthand for the board:
  • SNR - Snare
  • EBass - Electric Bass
  • VOX - Vocals
  • GTR - Guitar
  • MONO - Mono Room Mic
  • KIK - Kick Drum
  • OHL - Overhead Left
  • OHR - Overhead Right
EQ should be the last resort for making an awesome mix. First, get the right players, get the best performance, nail the mic placement (it's a huge variable), and TUNE!

Never FIX IT IN THE MIX. Get it right the first time. 

Try every mic placement, go crazy and try everything. 

High Redundancy, Low Information....
- Ceiling fans, refrigerators, lights, etc. 
- Take out these kinds of unnecessary noises. 

U + F = Strip Silence and Threshold. Use this on drums to cut out sections that are High Redundancy and Low Information.


EQ is something I have never used because I never understood it. I think it's time to get to know my frequencies. Oh man.