And Lastly, EQ

Out of the three main tools for mixing (Compression, Reverb and EQ), I feel EQ is the most important and also the most difficult to master. If given no other tools, I think the best results could be achieved with EQ alone. That being said, this is the tool with which I have the most yet to learn and master. While EQ can do so much to alter the timbre of an element to make it sound exactly the way you want, I get the most enjoyment when I can use it to create clarity and separate elements in the mix.

One of the biggest mistakes beginners make when starting to mix is to EQ everything so that it sounds exactly the way they want it to sound. The only problem is that they usually do this for each element individually. As soon as they listen to everything together, it turns into an audio soup: every element becomes unfocused and muddy, with a lack of definition.

Instead, the focus should be on what the important qualities of an element in the mix are. What makes that timbre stand out and sound unique from the rest? What do you want to stand out about it? Decide where each element lives with the entire mix, and the relationships between all of its elements, in mind.

A great example of this is in dance music. There has to be a conscious decision made on whether the kick drum or the bass is going to occupy the low frequencies. If they both occupy 100 Hz and lower, you won’t get much definition in either and neither will sound good. Dig deeper and really think about what you like about each sound. Do you really like a low, boomy kick drum, or is the high-end snap of the mallet hitting the head really important? Does the deepness of the bass sound pleasing, or is there a distorted aspect to the timbre of the bass that you like? Where in the spectrum is that?

For the track below, I wanted the bass synth to have a low-end body and consciously chose to give the kick drum a higher, snappier quality. The kick drum still sounds big and heavy since an instrument is still playing the low-end frequencies I took out of the kick drum. In addition, I put a high shelf on the kick drum around 8 kHz to add snap. I also put a cut in the bass synth from 90–110 Hz with a narrow Q, then boosted the kick drum with a narrow Q to occupy the space the bass synth was no longer occupying. Now the entire area of the spectrum has an instrument occupying it, but the instruments don’t overlap, which creates clarity as well as the timbre I wanted for both.
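As a rough illustration of this complementary carving, here is a sketch using the standard RBJ "Audio EQ Cookbook" peaking filter in plain Python. The frequencies, gains and Q values are made up for the example, not the exact settings from the track:

```python
import cmath
import math

def peaking_biquad(f0, gain_db, q, fs=44100.0):
    """RBJ-cookbook peaking EQ biquad coefficients (b, a)."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def magnitude_at(b, a, f, fs=44100.0):
    """Evaluate |H(e^jw)| of the biquad at frequency f."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

# Bass synth: narrow cut around 100 Hz (the 90-110 Hz notch in the text).
bass_cut = peaking_biquad(100, -12.0, q=8.0)
# Kick drum: narrow boost into the space the bass just vacated.
kick_boost = peaking_biquad(100, +6.0, q=8.0)

print(magnitude_at(*bass_cut, 100))    # well below 1.0 (cut)
print(magnitude_at(*kick_boost, 100))  # above 1.0 (boost)
print(magnitude_at(*bass_cut, 2000))   # near 1.0 (untouched elsewhere)
```

Because the cut and the boost are centered on the same narrow band, the two filters hand that slice of the spectrum from the bass to the kick rather than letting both pile up in it.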

There are tricks such as automating EQ to “trade” which instrument is occupying what space, creating the illusion that multiple instruments are occupying the same space. Or, taking a very narrow cut and sneaking only that sliver of sound into the void left by another instrument, like I did in the example above.

Ultimately, when you start prioritizing elements, analyzing timbral qualities and making conscious decisions on where each sound is occupying the frequency spectrum, you can start to make some fantastic sounding mixes.

How I Like to Use Reverb

Going along with my posts on how I personally like to use some of the main tools for mixing music, this time I’m going to talk about how I like to use reverb.

The most common application is of course to make the different elements of the mix sound as if they are in the same space. Every room, concert hall and outdoor area sounds different, partly because of the different activities and people, but for our purposes what matters is that they are all physically organized differently, made of different materials, and all affect the sounds in them in a different way. There’s a lot that goes into making all the elements sound like they’re in the same space – how much verb is already on the instrument or stem, the timbre of the instrument and how well it cuts through the rest of the mix, how forward or behind the element is in the mix (think in physical terms), the reflections of walls and other surfaces in the space – and honestly it’s all incredibly important, but I find it a bit boring. It’s somewhat of an obligation in mixing as opposed to a creative decision. That’s why I was thrilled when I found out about this plug-in:

It sounds phenomenal and takes all known factors of a space into consideration. Reflections, room size, location, everything. If you don’t enjoy the process of making everything sound like it’s in the same space, you should see if you like this plugin as much as I do.

What I enjoy much more about working with reverb is its ability to blend and make instruments sound more real. So much of what Composers do is created electronically with sample libraries. As time goes on, these libraries sound more and more realistic, and we as Composers learn how to manipulate them more and more to add to the realism. More often than not, the attack (or beginning of the note) is what can sound fake. Some libraries sound better than others (varying samples, velocity, etc.), but hearing the same amount of pressure on a string, or the same force of air into a woodwind instrument, for every note is a dead giveaway that it’s fake. Reverb lessens how noticeable this effect is while at the same time giving air and space to the instrument. Verb is essential to bringing your elements to life and a must for making them sound real.

I also enjoy using reverb to blend instruments and create textures. There are many times as a Composer that I don’t necessarily want the listener to be able to hear individual elements in the mix. Sometimes, I want a couple instruments to play the same note or a cluster of notes and create a new sound or timbre. Reverb can make each element less distinguishable and blend into something new – a wonderful tool for your composition and mixing kit.
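For intuition, the classic textbook structure behind this kind of smearing and blending is the Schroeder reverberator: several parallel feedback combs feeding an allpass. Here is a toy sketch in plain Python – not any particular plugin, and the delay lengths, gains and mix amount are arbitrary illustration values:

```python
def feedback_comb(x, delay, g):
    """Feedback comb: y[n] = x[n] + g * y[n - delay] (a decaying echo train)."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """Schroeder allpass: thickens the echo density without coloring the spectrum."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def toy_reverb(x, fs=44100, mix=0.3):
    """Minimal Schroeder-style reverb: 4 parallel combs into one allpass."""
    x = list(x) + [0.0] * fs  # pad so the tail can ring past the dry signal
    combs = [feedback_comb(x, d, 0.75) for d in (1557, 1617, 1491, 1422)]
    wet = [sum(c[n] for c in combs) / 4.0 for n in range(len(x))]
    wet = allpass(wet, 225, 0.7)
    return [(1 - mix) * d + mix * w for d, w in zip(x, wet)]
```

Feed two distinct sources through the same instance and their transients get smeared into a shared, overlapping tail – which is exactly why separate elements start to read as one texture.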

Lastly, I really enjoy automating reverb and the way it can draw a listener’s attention. Imagine a large orchestral piece with very wet reverb blending the instruments, when suddenly all the instruments cut out, their reverb disappears and a single guitar plays with very little or no reverb. You’ve created the effect of sitting in a concert hall listening from the seats when suddenly the orchestra disappears, you’re in a small room and the guitar is being played right next to you. A fantastic example of this is at 1:54 in Daft Punk’s “Touch.”

More than just creating an interesting sonic and compositional experience, this is a great way to dramatize a narrative. You can also do the opposite, and have a very dry, small-sounding element grow in body and expand the space in the mix.

These are a few ways and methods I like to use reverb and there are still a whole slew of other creative techniques and tricks both for utilitarian and creative purposes. Just start experimenting and find what works, what doesn’t, and what speaks to you as a Mixer, Composer and Artist.

How I Like to Use Compressors

The compressor is one of the three main tools for a mixer (compressor, EQ, reverb). There are a ton of posts that explain how a compressor works (I even posted a video below if you’re looking for that information), so instead of adding to that library, I thought I’d post some ways I use a compressor in my mixes. These are not necessarily generally accepted techniques, but rather uses I have found for compression through experimentation and exploration.

Many times I will use compressors to direct attention toward another element in a mix, automating the compressor to engage (or disengage) at the start of a new phrase or when I want to switch the focus of the mix. Depending on how you set your compressor, you can use it to bring elements forward or backward in the mix – but I generally tend to use it to bring elements forward by raising the makeup gain and not compressing the elements I want further back in the mix. This makes the softer qualities of the element louder, which in turn makes it sound closer. To think of it in spatial terms, I’ve moved the element closer to the listener while the uncompressed elements are still farther away. Lots of times I will engage the compression for one element in the mix at the start of a new phrase while simultaneously disengaging the compression of another element that I don’t want to be the focus any more.
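To make the "compression plus makeup gain brings things forward" idea concrete, here is a bare-bones feed-forward compressor sketch in plain Python. The threshold, ratio and makeup values are illustrative only, not settings from any real mix:

```python
import math

def compress(x, threshold_db=-20.0, ratio=4.0, makeup_db=6.0,
             attack=0.9, release=0.999):
    """Feed-forward compressor: envelope follower -> gain computer -> makeup."""
    makeup = 10 ** (makeup_db / 20.0)
    env = 0.0
    out = []
    for s in x:
        # One-pole envelope follower (fast when the level rises, slow when it falls).
        coeff = attack if abs(s) > env else release
        env = coeff * env + (1 - coeff) * abs(s)
        level_db = 20 * math.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)
        # Above threshold, keep only 1/ratio of the overshoot.
        gain_db = -over * (1 - 1 / ratio)
        out.append(s * 10 ** (gain_db / 20.0) * makeup)
    return out

# A quiet element sits under the threshold and is simply lifted by the makeup
# gain (moved "forward"); a loud one is squeezed back down toward it.
quiet = compress([0.05] * 2000)[-1]
loud = compress([0.8] * 2000)[-1]
```

The soft material comes out louder (hence "closer") while the loud peaks are pulled down toward it, shrinking the distance between the two.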

Making soft qualities of an element louder can drastically change the timbre of the instrument. I especially like using compression to get the right snap sort of sound for percussion. A great example is the kick drum for dance music. Most kick drum recordings have very little high end. It’s through a combination of EQ and compression that the high end snap becomes so prominent.

Lastly, I enjoy using sidechain compression to carve out small spaces for an instrument to come through when it’s competing with another instrument. Recently, I had a piece with a texture running through a phrase, but I wanted another element to act as a sort of stab – to play for a very short amount of time and immediately leave. Unfortunately, no matter what EQ experimenting I did, the stab wasn’t cutting through well enough for me – another element was competing with it. So, I used sidechain compression on the texture. Now when the stab came in, the sidechain compression was activated, and since I had a low makeup gain setting, the texture was quiet during the stab and loud when the stab wasn’t playing. I wouldn’t use this technique for general mixing of elements from phrase to phrase, since the compression can noticeably change the timbre and sound uneven – it is reacting to another element which is inherently not at a constant level. But in this short passage (1–2 secs) it carved out space for the stab. To the listener it sounds like the texture didn’t change; there was just a stab that came in louder than the texture – even though overall the mix maintained the same loudness!
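The ducking behavior can be sketched very simply: watch the sidechain key (the stab) and drop the texture's gain whenever the key is present. This is a toy illustration in plain Python, with made-up threshold, depth and release values rather than any real plugin settings:

```python
def sidechain_duck(texture, key, threshold=0.1, depth=0.2, release=0.995):
    """Duck `texture` whenever the sidechain `key` (the stab) is playing."""
    out, gain = [], 1.0
    for t, k in zip(texture, key):
        if abs(k) > threshold:
            gain = depth  # stab present: drop the texture immediately
        else:
            # stab gone: let the texture recover gradually
            gain = 1.0 - (1.0 - gain) * release
        out.append(t * gain)
    return out

# Constant texture; the stab plays only in samples 100-150.
texture = [0.5] * 2000
key = [0.0] * 100 + [0.5] * 50 + [0.0] * 1850
ducked = sidechain_duck(texture, key)
```

During the stab the texture sits at a fraction of its level, then glides back up once the stab leaves – the carved-out pocket described above.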

Compression is fascinating and I still have much more to learn about techniques of implementation. But these tricks are some of my go-tos in a mix. If you want to learn a bit more about the nuts and bolts of a compressor, watch the video below!

Game Sound Implementation

So, up until this point I’ve only considered doing the music for video games – but the past two weeks have sort of changed my mind.

First off, I did some music for a Boat-Combat game and in return, they gave me a quick tutorial into game implementation inside Unreal Engine 4. You can check out their game here!

It was amazing how detailed this game engine is and how many options come built in. They were showing me the back-end nuts and bolts of how the audio, as well as the entire game, comes together. Long story short, I was sold that I needed to learn this stuff! How great would it be if I could compose the music for a game as well as create and implement all the sounds for it? Only a few days later, I was told about a game audio panel happening, so I went to check it out.

It was refreshing to hear that almost everyone who has made this a career had to stumble around a bit to get there. Plus, they also told me about the middleware that developers are using – FMOD, Wwise and a few others. I’m incredibly excited about this new opportunity I’ve discovered!

Next week I’m heading to CA to meet with developers and plant some career seeds. I have a meeting to tour Naughty Dog studios!

Finally, I had the chance to work on some very dark ambient music for Project Frequency – a survival horror game. You can hear some of the music I composed for the game below. And, if you’d like to know more about the game, go here!

Rise and Hit Transitions

One thing I think many new composers need to improve on is transitions – and not necessarily transitions to new musical material. Using the same musical material repeatedly can sound very interesting as long as it doesn’t feel like the same thing over and over again. When coming to the end of a musical phrase, the music can begin to feel stagnant, stuck at a constant musical and dynamic level, if there is not some sense of excitement or variation pushing into the next phrase. It’s one of the reasons drum fills exist. Without them, there wouldn’t be as much of a sense of completion or momentum at the end of a phrase.

There are lots of different ways this can be achieved, but most techniques involve a swell of some sort. A drum fill will lead to a cymbal on the next phrase’s downbeat. A cymbal swell will get louder and finally decay once the new phrase begins. Instruments and/or sections will increase in volume, rhythms will be subdivided further and further, and melodies or textures will climb higher and higher in pitch. All this creates a race to the edge of a cliff, with the new phrase as the satisfying payoff.
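The swell recipe above – pitch climbing while volume builds toward the downbeat – can be sketched in a few lines of plain Python. The durations and frequencies here are hypothetical illustration values, not anything from a real riser patch:

```python
import math

def riser(duration=2.0, f_start=200.0, f_end=2000.0, fs=44100):
    """A bare-bones riser: exponential pitch glide plus a volume swell."""
    n = int(duration * fs)
    out, phase = [], 0.0
    for i in range(n):
        t = i / n  # 0.0 at the start of the rise, 1.0 at the downbeat
        # Frequency climbs exponentially (perceived as a steady pitch rise).
        f = f_start * (f_end / f_start) ** t
        phase += 2 * math.pi * f / fs
        # Amplitude swells toward the payoff on the next downbeat.
        out.append((t ** 2) * math.sin(phase))
    return out

sweep = riser()
```

Layer a few of these (noise sweeps, cymbal swells, subdivided rhythms) and you get the race-to-the-cliff effect described above, with the "hit" landing on the first beat of the new phrase.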

Basically, these transitions are important and I thought I’d take a second to talk about one of my go-to plugins for them.

Rise and Hit by Native Instruments has wonderful sounds and textures for every genre. Not only does it generate momentum at the end of phrases within larger musical works, but it can stand on its own as well. I’ve written cues where large portions are just one patch in Rise and Hit! You can edit, modulate and customize every preset patch as well as generate your own. It even allows you to edit the lengths of samples by beats and seconds – making it very easy to use in a stand-alone piece of music as well as for scoring to a timecode.

You can check out more information about Rise and Hit from Native Instruments here.

And, here’s a video showing it in action!

Finally Here

Wow – finally. I’m at the end of a very long road here. As of last week, I have a Master’s degree as well as a commission in the US Army as a Second Lieutenant. It’s a pretty amazing feeling knowing what I went through to accomplish both of these goals. I don’t think many people join the Army to study music – and fewer Composers are also Officers in the military. But, I love having the balance of these two worlds in my life. There are things about each end of the social spectrum that are incredibly enriching and valuable.

Ok, this site and these entries are supposed to be about music and sound design, so let’s talk about why I love the creative process of music so much.

To me, it’s a wonderful blend of intuition and analytical and theoretical thought. Personally, I always use musical theory and analysis to spur an intuitive spark – to make me feel what the music should be. Once that idea is in my head, my classical education helps me to identify what I’m hearing and how to execute it. Music school doesn’t teach you what the answers are – it teaches you what the answers could be and how to find your own answers! If I don’t know what the answer is, I think and analyze my progression, my inversion, my mix, variation techniques and ornamentation. While this happens, an idea will just appear. Maybe a mistake will spur it or maybe one of my textbook techniques will just feel right. Either way, my goal is always to induce this feeling of what the music should be.

There is, however, one facet of music that I only feel and never “think” about. The Melody is the greatest human element and provides the most direct conversation between the audience and the music. Once I have a melody, I almost always know what the progression underneath it is and what variations I may want to experiment with – but the melody is never known until you hear/feel it.

Music has to be felt and created by feeling for it to begin to breathe and become something truly unique, with a life and personality of its own. I can’t wait to wake up every day and make a living by engrossing myself in this process!