Selected Creative Work

Polytera II

Year: 2017-19
Instrumentation: flute, piano, computer
Duration: 9:00

The score for Polytera II is generated in real time, with the performer’s input (a microphone feeding an analysis system) driving the parameters for subsequent staff systems of musical material. The large-scale features of the piece are pre-composed: the piece “restarts” three times, and I steer the algorithm in a different direction each time. Electronics and live processing are also embedded in this algorithmic system. Performers: Calliope Duo (Elizabeth McNutt, flute; Shannon Wettstein, piano). Representative excerpt: 03:18 – 06:27.

Blue Sky Catastrophe

Year: 2014-20
Instrumentation: piano, computer
Duration: 11:00

Blue Sky Catastrophe was developed and programmed in collaboration with pianist Vicki Ray and composer Martin Herman. Together, Martin and I developed the concept of a game-like notational feedback system wherein Vicki would be both challenged to sight-read accurately in performance and given the freedom to improvise with a pseudo-artificially intelligent “phantom” pianist. Vicki provided insight into the constraints and affordances of the live notation system. I carried out all of the computer programming, notation display, and multichannel generative computer sound. Further details of the feedback loop between improvising performer, computer, and composers can be found in the notes document. Representative excerpt: 04:04 – 07:15.

Terraformation

Year: 2016-17
Instrumentation: viola (or violin), computer
Duration: 16:00

Terraformation uses real-time notation to facilitate a feedback loop between the performer and underlying algorithmic system. The computer part contains a model for producing idiomatic music for the viola, taking into account hand shapes, position, bow contact position, and a variety of other parameters. The computer prompts the performer to play a notated phrase. The performer can choose to play the material or ask the computer for another option. Depending on the option chosen, the computer will suggest the next best musical option. This process continues for the duration of the piece. In this way, the piece is a cascade of forking paths mediated by the performer. The live audio processing and computer generated sounds are also driven by this human-computer feedback loop. Performer: Michael Capone, viola. Representative excerpt: 11:30 – 13:30.
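The forking-path interaction described above can be sketched as a simple loop. This is an illustrative sketch only, not the actual Terraformation software: the phrase generator, the idiomatic viola model, and the performer's acceptance function are all hypothetical stand-ins.

```python
import random

def generate_phrase(previous):
    """Stand-in for the idiomatic viola model: propose a candidate phrase.

    A real model would weigh hand shape, position, and bow contact point;
    here we just produce random pitches and a left-hand position."""
    return {"pitches": [random.randint(55, 84) for _ in range(4)],
            "position": random.randint(1, 7)}

def performance_loop(num_phrases, accept):
    """Offer phrases one at a time; regenerate whenever the performer declines."""
    played, previous = [], None
    for _ in range(num_phrases):
        candidate = generate_phrase(previous)
        while not accept(candidate):      # performer asks for another option
            candidate = generate_phrase(previous)
        played.append(candidate)          # performer plays the accepted phrase
        previous = candidate              # the next suggestion depends on this choice
    return played

# Example: a performer who declines anything above fifth position.
phrases = performance_loop(3, accept=lambda p: p["position"] <= 5)
```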

Amber Lambents

Year: 2019
Instrumentation: mandolin, computer
Duration: 7:00

Amber Lambents is a guided improvisation that uses custom-designed software to capture, process, and spatialize live acoustic sounds made with the mandolin and a set of implements. These implements include waxed floss attached to a mandolin string, which I pull to activate the string; a guitar string used to bow the mandolin; a guitar pick; a bird call; a large rubber band; a wooden skewer; and a comb. In addition, I apply a transducer with variable amplitude to activate the body and strings of the mandolin. The sound of the mandolin is captured with both an air microphone and a piezo pickup inside the instrument. Further details on the setup and software of the piece can be found in the notes document. Representative excerpt: 03:13 – 04:55.

r u ok

Year: 2022
Instrumentation: soprano, computer, lights
Duration: 13:00

r u ok heavily samples and emulates the music of avant-pop producer and composer Sophie Xeon. The text is generated by a recurrent neural network trained on every Sophie lyric. The physical movement is remixed from Sophie’s music video “It’s Okay to Cry.” Both vocal and gestural input drive the live vocal processing. Performer: Shelby VanNordstrand, soprano. Representative excerpt: 05:01 – 07:45.

Liquid Encryption

Year: 2021
Instrumentation: live telematic audio/video, dancer
Duration: 11:00

Liquid Encryption is a live, telematic improvisation among composer Bradley Robin, choreographer Sarah Church, and myself. My role in the improvisation is to contribute to the audio stream, process Brad’s audio, process Brad’s and Sarah’s videos, and stream the combined audio/video output. I built custom software tools to capture and process the video elements, and separate custom software to create and process the audio elements. Peer2Peer has performed live telematically at many festivals and conferences from April 2021 to the present. Representative excerpt: 02:30 – 04:00.

Selected Research Work

The Sonification of Solar Harmonics (SoSH) Project

Year: 2019
Publication: Proceedings of the 2019 International Conference on Auditory Display
Co-Authors: Tim Larson, Elaine diFalco

The Sun is a resonant cavity for very low frequency acoustic waves, and just like a musical instrument, it supports a number of oscillation modes, also commonly known as harmonics. We are able to observe these harmonics by looking at how the Sun’s surface oscillates in response to them. Although this data has been studied scientifically for decades, it has only rarely been sonified. The Sonification of Solar Harmonics (SoSH) Project seeks to sonify data related to the field of helioseismology and distribute tools for others to do the same. Creative applications of this research by the authors include musical compositions, installation artwork, a short documentary, and a full-dome planetarium experience.
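As an illustration of the basic sonification idea (with hypothetical values, not the SoSH toolkit itself): solar p-mode oscillations cluster around 3 mHz, far below the range of hearing, so one common approach is to scale the mode frequencies up by a large factor and render them as audible tones.

```python
import numpy as np

def sonify_modes(mode_freqs_mhz, speedup=1e5, duration=2.0, sr=44100):
    """Sum sine tones at solar mode frequencies scaled into the audible range.

    The mode frequencies and speedup factor are illustrative: a 3 mHz mode
    sped up 100,000x sounds at 300 Hz."""
    t = np.arange(int(duration * sr)) / sr
    audible_hz = np.asarray(mode_freqs_mhz) * 1e-3 * speedup  # mHz -> Hz, then speed up
    signal = sum(np.sin(2 * np.pi * f * t) for f in audible_hz)
    return signal / len(audible_hz)  # normalize to avoid clipping

audio = sonify_modes([2.9, 3.0, 3.1])  # three closely spaced solar modes
```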

New Behaviours and Strategies for the Performance Practice of Real-Time Notation

Year: 2018
Publication: eContact! 19.3

This paper addresses the performance practice issues encountered when the notation of a work loosens its bonds to the world of the fixed and knowable and explores the realms of chance, spontaneity, and interactivity. These issues include the problem of rehearsal, the problem of ensemble synchronization, the extreme limits of sight-reading, strategies for dealing with failure in performance, new freedoms for the performer and composer, and new opportunities offered by the ephemerality and multiplicity of real-time notation.

Performer Action Modeling in Real-Time Notation

Year: 2017
Publication: Proceedings of the 2017 International Conference on Technologies for Music Notation and Representation

This paper discusses the application of action-based music notation, and in particular performer action modeling, to my real-time notation (RTN) work, Terraformation (2016–17), which uses a combination of common practice notation (CPN), fingerboard tablature, and color gradients.

T-Shaped Music Tech Curriculums: Preparing Music Tech Students for the 21st-Century Creative and Technology Workforce

Year: 2018
Publication: Proceedings of the Sempre MET2018
Co-Author: Jeremy Baguyos

This paper documents and communicates efforts to cultivate T-shaped professionals within music technology curriculums in higher education. In addition to teaching the requisite music and music technology competencies necessary for a successful career in music technology fields, music technology programs are also poised, without much additional overhead, to teach transdisciplinary competencies within the music tech curriculum. This allows students to branch out and find employment in information technology fields as well as in music technology fields. For more than a decade, the University of Nebraska at Omaha has deployed a music technology curriculum that graduates T-shaped professionals. Graduates from the program find employment in diverse fields ranging from music to information technology. This paper and presentation outline aspects of the curriculum that prepare students to be T-shaped professionals.