Electronic Music

The Composer-Performer

I hardly considered being a performer. The hours in a practice room didn’t appeal to me, and I felt insecure about my playing abilities. Moreover, I wanted my weekends and evenings generally free and knew I’d have to sacrifice them for the sake of concerts. Also, I loved to improvise and preferred not to be bound to a score.

However, over the years, I acquiesced to friends’ and colleagues’ requests to perform their music, and I found that I could perform my own music in a pinch. One day, it struck me that my skills and confidence had improved drastically, yet imperceptibly to me, since my undergraduate days. I had been spending hours practicing, and I had performed in vulnerable settings. The music I was performing was within the realm of what made me feel alive, and it was a positive experience without my self-imposed shackles. What was the secret?

Well, it’s in the title “composer-performer.” How so?

  1. A composer who performs does not feel obligated to go through the rites of passage that a normal performer might. In other words, there is no slaving away for orchestral auditions, and there is no need to memorize the Mozart Clarinet Concerto (which is fun but not what I’d like to perform live).

  2. The composer-performer sets oneself up to play music by living composers. A composer is, of course, friends with lots of composers—hundreds of them. Performing their music has always been a part of those friendships, and it continues to be. I can play the music of my friends and curate concerts that are meaningful to me, without any necessity to program the “classics.”

  3. The composer who performs can write music for oneself and have the immediate satisfaction of hearing it live. Not only is there this immediate satisfaction, but there is the opportunity to edit the piece days before the concert… or even during the concert itself. I’ve noticed that I’m adding or crossing out notes here and there as I prepare my own music and that I can follow a whim during the live performance, knowing that it’s my piece and my opportunity to do whatever I want with it.

  4. The composer-performer carries some legitimacy with performers. From my early composition days to today, I have befriended clarinetists and have participated in organizations related to the clarinet. Most of my commissions have come from woodwind players. Why? In part, I imagine it is because they know that I know how it feels to play the instrument.

  5. The composer-performer can experiment in a safe space. It just makes sense for a composer who writes experimental electronic music for live instruments to be the performer. There are so many intricacies to electroacoustic music that I feel much more comfortable writing it for me and testing everything out that way. Certainly, I love writing for other instruments too, but it’s liberating to deal with all the tech issues through my own preparation of the work.

So, if you’re a composer, don’t stop performing. If you’re a performer, you might find that you enjoy composing or being closely associated with living composers.

I’m excited to perform the music of Patrick Chan, Ingrid Stölzel, and Mark Volker and to improvise with Monte Taylor and Patrick Chan next month as part of the Kairos Multimedia Concert. Saxophonist Drew Hosler is coming down to premiere my A Real Buster and to perform a new work by Cara Haxo. It’s great to create events that celebrate friendships and good music while experimenting with crazy tech setups and new ideas (to me). So, if you’re in the Cleveland/Akron area, come on down to Wooster on April 2nd for a wild ride.

Multimedia Production from the Musician's Perspective

October brings the premieres of two pieces that have deep personal meaning to me. Next week is the premiere of The Story of Our Journey, written about earlier and detailed more here. And at the end of the month, Lo! premieres, thanks to a grant from the Brigham Young University Group for New Music. The thing they have in common? Both include a carefully constructed video to complement the music.

I finished the music for The Story of Our Journey in May 2020, yet little did I know how much work still lay ahead. I admit I seriously underestimated the painstaking work that goes into making video, especially video as artistically satisfying as the music itself. Our volunteer video director from Their Story is Our Story, Esther Michela, was tasked with making the entire 51-minute video by herself while we battered her with constructive criticism in a push towards a July deadline that, if we had been honest with ourselves, was a complete impossibility. In a state of emergency, TSOS sought out additional help for Esther (realizing that most productions of this stature have an entire team!). They were able to recruit Garrett Gibbons and David McAllister, who provided additional insights and helped with the other movements. Even then, we had too much work to do, and after the passing of another impossible deadline (August 1st), we settled on the first realistic goal of October 16th. I am grateful we waited because the project has revolutionized the way I want to approach the presentation of my music. Video and music, when properly balanced, are more powerful than when separate. Especially when only online performances are readily consumable, a good video is everything.

How does one balance video and music? This is a question of counterpoint, a term normally used to describe the interaction between musical lines. The principles are similar, for there must be a relationship between the two elements that keeps one from overpowering the other. At one extreme, a video of a live performance from a single camera angle is all about the music and relegates the video component to a captured moment that probably would have been much cooler live. The opposite extreme is film music, where the music always lurks in the shadows while the visuals drive the narrative (especially in Hollywood films). Musically sensitive film directors and composers are able to navigate good counterpoint with the music, and you know they have succeeded when you come away remarking on both the music and the film. The best counterpoint between video and audio includes some sort of interaction that allows both to “speak,” which means there needs to be some crossover between the traditions.

The Story of Our Journey captured the happy medium between these two extremes in ways I did not initially consider. Crucial to the music are the interview clips; in fact, every melody in the clarinet and synthesizers—almost every musical note in the entire piece—rises out of the speech patterns and even the background noises (especially a distinctive truck horn) in the interviews. When the video team matched the interview content with its fragmented audio counterparts in the music, it created additional opportunities for interaction. Video effects caught the grittiness of my noisy synthesizers inspired by desert sands from the narrative. The energy of the oceanic electronic rushes became a literal dive underwater with the refugees crossing the Mediterranean. A complex web of relationships was either clarified or compounded onto what the music alone had to offer, and I feel that the image complements rather than conquers the music, which would have been tempting to do. Our clarinetist Csaba Jevtic-Somlai invoked the term Gesamtkunstwerk for this perfectly balanced collaboration. I am grateful to Esther Michela and Garrett Gibbons for their enormous efforts to make such a wonderful and equal counterpart to the music.

The process inspired me to try my hand at video-making, which became important for my commission by the Brigham Young University Group for New Music. I wanted complete freedom and safety in my video-making, so I took historical public domain footage from the Prelinger Archives, specifically old television advertisements and a short-lived game show. Again, I took the audio from this archival footage and made it central to the music, and I added a thick layer of noisy gestures to complement the video clips’ rough sound quality. Working with video editing software was surprisingly intuitive because the abstract development of materials is still the same. I found the ideas of opposition, fragmentation, juxtaposition, large-scale evolution through variation, and so forth relatable in terms of color and audio effects. With the help of a friend, Erin Jossie, I was able to capture nature imagery for the end of the piece and edit it to feel natural (going through a variety of shots instead of developing material was less natural to me, and I definitely needed the counsel!).

Despite my best efforts, the video was much improved by my brother Michael, a professional multimedia artist. He was able to express the noise in the audio in a way I could not and added some visual consistencies that helped unify the work. He made countless micro-edits in addition to some major reconstruction and still managed to keep my original vision and feel intact. I learned that I still have far to go in developing the technical capabilities, the imagination, and the eye for top-grade video editing, and I look forward to collaborations very soon to continue learning.

It’s hard to go back to setting one camera down at a performance after considering how video changes the viewer’s experience. As musicians, we love to simply listen to music, but video done artfully adds a visual perspective with a depth and immediacy hard to achieve through music alone, especially when the music is estranged from its live venue. Here’s to much more video work in the near future.

An Intro to a DAW

A sizable part of the world is isolated at home. The internet has been extremely lively as a result as people try to keep up their social interactions. This could also be a great time to try out new (or any) music software, so here’s my take on my latest software exploration and an introduction to one of the two primary forms of music-making on a computer today (I’ll explain music notation software soon, but it deserves its own post). My comments are aimed towards a general public not familiar with the program, with a nod or two to those who might be more experienced in this regard.

This digital audio workstation (DAW), Ableton Live, has been around for some time, but I finally started using it intensively for my dissertation project The Story of Our Journey. Despite ample work in electronics, I have not written a true fixed media piece (a piece without live manipulation of sound) since 2014. I was worried that jumping into such a large project without a deep understanding of the software would come at great costs. But the only cost I have seen so far was the $450 student pricing ($700 for non-students).

Ableton prides itself on its two views: Session and Arrangement. Arrangement View is the typical DAW setup that you would see in software like Pro Tools (the software I initially learned). There are spaces stacked vertically called tracks in which you can place sample clips or MIDI clips. The x-axis is time. You simply drop clips into tracks and place them in time where you need them. Easy. Zooming in and out is essential to make sure each clip is aligned perfectly, and Ableton has zoom panels that allow for quick navigation. As a laptop user, I do get slightly frustrated that I cannot swipe left and right to move to time points beyond my screen, but the ability to zoom by dragging the pointer around in the panel is excellent. Also important to Arrangement View is seeing as many tracks as possible. Thankfully, most panels in Ableton can be moved out of the way with a click (or keyboard shortcut) in order to maximize space where the music is being made. Each track can individually be expanded to do detail work. And there are handy buttons labeled H and W that compress everything you’ve done into the height or width of your available screen space. So navigation and visibility are mostly assets for me.
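
For the programmers in the audience, here is a minimal Python sketch of the idea behind Arrangement View: tracks stacked vertically, each holding clips placed on a shared timeline. The class names are hypothetical and have nothing to do with Ableton’s actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    start: float   # position on the timeline, in beats
    length: float  # duration, in beats

@dataclass
class Track:
    name: str
    clips: list[Clip] = field(default_factory=list)

    def place(self, clip: Clip) -> None:
        """Drop a clip into the track at its start time."""
        self.clips.append(clip)
        self.clips.sort(key=lambda c: c.start)

# Two tracks stacked vertically; the x-axis is time.
drums = Track("Drums")
synth = Track("Synth")
drums.place(Clip("drum loop", start=0, length=16))
synth.place(Clip("pad", start=8, length=8))
```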

Now, let’s quickly distinguish between sample and MIDI clips. Samples are pre-recorded audio clips, like the songs and sounds on your computer. When you work with them in a DAW, you are working with digital sound itself. MIDI, on the other hand, is data that the system converts into different musical parameters. An instrument or sampler is chosen as the sound source that brings the data to life. For example, within a MIDI clip, I can take a sound and map it onto different pitches of the keyboard to create scales. I can use MIDI to control dynamics (or velocity). I can also assign pitch bending and other variables to pitch and velocity to further shape notes. So rather than working with the nature of the sound itself, I am applying traditional thinking about pitch, rhythm, and dynamics to what the sound can do within the clip, as if it were a keyboard instrument. Both kinds of clips can be stretched, expanded, raised or lowered in pitch, and chopped into pieces, all through clip properties and, with more precision, clip automation, which allows property changes to happen while a clip plays. Ableton’s clip editing abilities are limited compared to Pro Tools, which will make sense very soon, but there is great potential in the automation available.
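
For the curious, here is a small sketch using the Python library mido (an outside tool, not part of Ableton) that shows what MIDI data actually looks like: note numbers, velocities, and timing, with no sound at all until an instrument renders the data.

```python
import mido

mid = mido.MidiFile()           # default resolution: 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

# A C major scale as MIDI note numbers, one quarter note (480 ticks) each.
for i, note in enumerate([60, 62, 64, 65, 67, 69, 71, 72]):
    velocity = 50 + i * 8       # rising velocities make a gradual crescendo
    track.append(mido.Message('note_on', note=note, velocity=velocity, time=0))
    track.append(mido.Message('note_off', note=note, velocity=0, time=480))

mid.save('scale.mid')  # drop this into a MIDI track and pick any instrument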

Tracks do more than simply hold clips. Each track processes sound through effects. Ableton hosts three different types of effects: audio, MIDI, and Max effects. The first applies to all sounds (audio signal processing), the second obviously applies to MIDI, and the last processes MIDI through the software Max (and those who use Max can make their own effects). Audio effects originate in a variety of sound manipulation techniques that go back at least 80 years (it’s very easy to point to earlier primitive counterparts to the standard techniques, including splicing, before Pierre Schaeffer). Understanding the analog counterparts to these techniques helps in predicting the outcome of the effects, but most DAWs are designed in a way that invites experimentation. Some of the basic techniques possible with analog electronics include delay, echo, chorus, flanger, filtering, phaser, panning, distortion, amplitude modulation, ring modulation, and frequency modulation. Techniques accessible only through digital means include granulation and Fast Fourier transforms, which allow for better pitch shifting and time compression/expansion of audio files. MIDI effects and Max effects both process MIDI data before it becomes a signal. In other words, these effects can change the more traditional musical parameters such as pitch, rhythm, and velocity. Many of Ableton’s MIDI and Max effects are arpeggiators and note randomizers. So, for a MIDI track, MIDI effects will process the data before it is channeled through a sound, and audio effects can process the resultant signal. Unfortunately, none of these effects can be directly applied to a clip; every clip must be placed in a track with the effects assigned to it.
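
To illustrate one of these classic techniques, here is a bare-bones Python sketch of ring modulation: the input signal is multiplied by a sine-wave carrier, producing sum and difference frequencies and the familiar metallic sound. This is a minimal approximation for clarity, not how Ableton implements its effect.

```python
import numpy as np

def ring_modulate(signal: np.ndarray, carrier_hz: float, sr: int = 44100) -> np.ndarray:
    """Multiply the input by a sine carrier (classic ring modulation)."""
    t = np.arange(len(signal)) / sr
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return signal * carrier

# A 440 Hz tone ring-modulated at 30 Hz yields 410 Hz and 470 Hz partials.
sr = 44100
t = np.arange(sr) / sr                  # one second of samples
tone = np.sin(2 * np.pi * 440 * t)
out = ring_modulate(tone, carrier_hz=30, sr=sr)
```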

Effects are not only applied to a track but can also change over time, which is called automation. Automation can be entered statically by drawing lines, or the program will record your live manipulations of the music. For example, the high frequencies of a sound can slowly disappear as a low-pass filter moves downward. An echo effect can increase gradually. Sounds can be panned left and right to create the perception of movement in space. Ableton allows for automation within individual clips; however, that automation only applies to the effects on the track and a few other items of interest (panning, pitch bending, volume). The best way to freely reuse a clip with its automation is to export the file as if rendering the piece, which is a hassle. These clips may be organized into categories or into folders within the main project folder for quick retrieval, but automation is most powerful on the tracks.
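
Under the hood, automation simply means a parameter is a curve over time rather than a single value. Here is a hypothetical Python sketch of pan automation: the pan position ramps from left to right across a clip, using an equal-power law so the overall loudness stays steady.

```python
import numpy as np

def autopan(mono: np.ndarray) -> np.ndarray:
    """Pan a mono signal from hard left to hard right over its duration."""
    pan = np.linspace(0.0, 1.0, len(mono))    # the drawn automation "line"
    left = mono * np.cos(pan * np.pi / 2)     # equal-power panning law
    right = mono * np.sin(pan * np.pi / 2)
    return np.stack([left, right], axis=1)    # stereo: (samples, 2)

sr = 44100
t = np.arange(sr * 2) / sr                    # two seconds
stereo = autopan(np.sin(2 * np.pi * 220 * t))
```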

A final type of audio information registered by Ableton is a live feed. By using the techniques from above on a track, the sound can be processed instantaneously by audio effects. I have not worked much with the live part of Ableton Live, but the Push controller used extensively for Live maps the automation from above to its many buttons, as can also be done with the normal computer keyboard.
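
For a taste of what live processing involves, here is a hedged sketch using the Python library sounddevice (my choice for illustration; Ableton handles all of this internally): audio arriving at the input is transformed in a callback and sent straight back out.

```python
import numpy as np
import sounddevice as sd

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    # A simple instantaneous "effect": soft-clipping distortion.
    outdata[:] = np.tanh(4.0 * indata)

# Open a duplex stream: microphone in, processed signal out.
with sd.Stream(channels=1, samplerate=44100, callback=callback):
    sd.sleep(10_000)  # process the live feed for ten seconds
```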

What sets Ableton Live apart from other DAWs is the Session View. This view loops clips to a meter (and there are ways to get more complex metrical interactions) and allows the user to take the role of DJ. Clips placed horizontally are cued simultaneously while vertical clips move in a sequence. A simple and practical use of Session View would be to create the structure of a basic song with an alternating verse and chorus. The layers of instruments in the verse appear in one row while the second row contains the chorus material. While recording from Session View, each verse can be different by simply omitting or adding layers to the verse cue. As a composer who works in a more controlled writing environment, I find Session View important for generating ideas and Arrangement View for putting those ideas into action.
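
Session View can be sketched as a simple data structure: each horizontal row is a scene whose clips launch together, and a recorded performance is a vertical sequence of scene launches. The clip names below are placeholders, not Ableton terminology (except for “scene,” which is Ableton’s own word for a row).

```python
# A toy model of Session View (placeholder data, not Ableton's API):
# each row is a scene whose clips fire together across the tracks.
scenes = {
    "verse":  ["drum loop", "bass line"],               # row 1
    "chorus": ["drum loop", "bass line", "lead hook"],  # row 2
}

# A performance is a sequence of scene launches; each verse can differ
# by omitting or adding layers to the cue before launching it.
performance = ["verse", "chorus", "verse", "chorus"]
for i, scene in enumerate(performance, start=1):
    print(f"{i}. launch '{scene}': {', '.join(scenes[scene])}")
```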

Overall, writing electronic music is a different experience than writing with traditional notation. The composer gets to deal with the sound itself instead of with notation that must be translated to sound by a performer or computer. MIDI information comes closest to notation, but the use of that data is much different from what can be done on a five-line staff without serious effort. Also, effects are largely coloristic and textural, in stark contrast to the grid system of pitch and rhythm imposed by traditional notation. The features described above can create the simplest of songs and the most complex arrangements, and most DAWs will allow for some level of manipulation to get the sounds desired.

Coming very soon: my thoughts on the Dorico music notation software as an 11-year user of Finale…

Mostly written in mid-March as dated here but published on May 20th.