A performance artist’s journey to software engineering
As a performing artist, I have always been drawn to music. As time went on, music production became increasingly integrated with and facilitated by technology and computers. Before long I was making my own sounds, songs, and videos with entry-level digital audio workstations and video editors like GarageBand and iMovie. These gave way to Final Cut Pro and Ableton Live. Ableton Live, in turn, led me to perform my music out in the world in unique and increasingly spontaneous ways once I adopted USB MIDI controllers and wearables. Wanting to take my mostly electronic performances to a higher level, I began researching ways to integrate live video and lighting controlled by the performers onstage, rather than by folks dressed all in black, hidden away in a tech booth at the rear of the venue.
I managed to create full-length audiovisual spectacles that accompanied each performance while allowing for moments of humanity interspersed throughout; that is to say, the shows were not programmed within an inch of their lives.
Not long after I mastered these performance techniques, I wanted to deepen my skills and professional experience in the art and performance world, so I attended graduate school in New York in the Performance and Interactive Media Arts program at Brooklyn College (CUNY). There I learned visual programming environments like Max/MSP/Jitter and Isadora. These programs gave me a visual, asynchronous ecosystem where you connect algorithms with virtual patch cords, à la modular synths or telephone switchboards.
My thesis partner and I eventually developed a multimedia musical performance piece about UFOs and the United States' changing relationship to them. Our performance used synthesizers, web scraping, a Python text-to-speech script, multi-surface projection mapping, brainwave readers, and Arduino-controlled LED strips to build a strange and immersive experience.
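The original script isn't shown here, but as a rough sketch of the kind of glue code involved in feeding scraped text to a speech engine, here is one hypothetical helper: a function that cleans extraction whitespace and splits long text into sentence-sized chunks a TTS engine can speak one at a time. The function name, sample usage, and the `pyttsx3` library in the comment are my assumptions, not the actual thesis code.

```python
import re

def chunk_for_speech(text, max_len=200):
    """Split scraped text into sentence-sized chunks for a TTS engine."""
    # Collapse whitespace left over from HTML extraction.
    text = re.sub(r"\s+", " ", text).strip()
    # Split on sentence-ending punctuation, keeping the punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for s in sentences:
        # Start a new chunk when adding this sentence would exceed max_len.
        if current and len(current) + len(s) + 1 > max_len:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

# Speaking the chunks might then look like this (pyttsx3 is one common
# choice of engine; the original script may have used something else):
#   import pyttsx3
#   engine = pyttsx3.init()
#   for chunk in chunk_for_speech(scraped_report):
#       engine.say(chunk)
#   engine.runAndWait()
```

Chunking like this keeps each utterance short enough that the performance can be interrupted or resequenced between sentences, which suits a live show better than one monolithic speech call.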