Biometric Studies for Solo Performer

2023

The origins of this series of performances lie largely in frustration, both with my performance practice and with my perceived inability to form a creative synthesis between two distinct aspects of my craft. Like many others, over the course of the pandemic I found myself revisiting old hobbies as well as addressing equally aged insecurities. In my case, this manifested as a return to playing bass as a primary instrument – something I had partially abandoned due to time constraints, conflicting priorities, and years of transitory existence. The return was a wonderful experience, and as the weeks became months I found myself taking a deep dive into the instrument as an improvising medium, creating an entirely new vocabulary and accessing new levels of spontaneous creativity that can only be described as a trance, or perhaps even flow. The experience was quite liberating, and online interest grew to the point of one or two releases and the prospect of public performance once the all-clear was given.

This is where the disjunctive frustration begins. While booking a small jaunt to the South to coincide with a trip to Asheville, NC, I was asked what new devices I would be bringing with me on this trek. In an ideal scenario, I would have explained to the various promoters that this was something different from my usual electronics, but – seeing as the sale of such devices has been essential in funding past tours – I acquiesced and attempted to cobble together a last-minute electroacoustic set, with less than optimal results. By the time I got to Asheville, I was perplexed: somehow I was unable to play bass while concentrating on electronics and vice versa; the shift between instruments was palpable, and the disjuncture less than aesthetic. Half-wondering if there was something wrong with me, I considered getting a brainwave monitor to track the change in consciousness as I switched between instruments.

And then it hit me – what if instead of simply observing these changes, I attempted to harness them to my performative advantage? Seeing as I experienced issues transitioning from bass to electronics, could there be a way to create an interface using this monitor that could either provide control of electronics without the use of my hands, or, in a more abstract approach, provide some sort of impulse that could act as a third hand that could allow me to play bass and manipulate electronics simultaneously? With that in mind, this experiment began.

Iteration 1: EEG to Solenoid

For the first iteration, the goal was to see if I could directly transduce brainwave activity into signals for solenoids, which could then be used to play an instrument autonomously. For an EEG, I opted to use a Muse headband, which, while marketed as a meditation and concentration aid, began as an open-source scientific project whose SDK is still readily available. While it is possible in some instances to pair the headband directly with a computer, such is not the case with Max/MSP, my software of choice. Instead, the workaround that allows connectivity to Max is Mind Monitor, a neuroscience phone app that streams the headband's data via OSC. From there, it was simply a matter of unpacking the UDP data and directing traffic as needed. Thankfully, I'm not the first person to attempt this style of connection, and I found the Max for Live MusePort plugin to be a wonderful starting point for what I wanted to do. As solenoids essentially operate on simple on-off instructions, I opted for a threshold/edge-detection approach to brain activity: if activity in a given frequency band rises above a certain level, send a 1 (on); when it drops back below, revert to 0 (off).
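The edge-detection step can be sketched outside of Max. Below is a minimal Python sketch – the class name, threshold values, and simulated readings are my own, not taken from the actual patch – where each reading stands in for a band value unpacked from the OSC stream. A small hysteresis gap between the on and off thresholds keeps a noisy band-power stream from chattering the output:

```python
class BandGate:
    """Hysteresis edge detector for one EEG frequency band.

    Emits 1 when band power rises above `on_level`, and returns to 0
    only when it falls below `off_level` (off_level < on_level), so
    small fluctuations around the threshold don't retrigger the solenoid.
    """

    def __init__(self, on_level=0.7, off_level=0.6):
        assert off_level < on_level
        self.on_level = on_level
        self.off_level = off_level
        self.state = 0  # 0 = solenoid off, 1 = solenoid on

    def update(self, band_power):
        """Feed one band-power reading; return the current gate state."""
        if self.state == 0 and band_power > self.on_level:
            self.state = 1
        elif self.state == 1 and band_power < self.off_level:
            self.state = 0
        return self.state


# Simulated stream of band-power readings (hypothetical values):
gate = BandGate()
stream = [0.50, 0.72, 0.68, 0.65, 0.55, 0.80]
print([gate.update(p) for p in stream])  # -> [0, 1, 1, 1, 0, 1]
```

A plain single-threshold comparison would also work, but without the hysteresis gap a reading hovering near the threshold would flip the solenoid on and off rapidly.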

Conversion from data to voltage was accomplished via sig~ objects, which converted the binary data into DC audio spikes sent to individual outputs of a DC-coupled audio interface (either a MOTU M4 or an Expert Sleepers ES-9). Unfortunately, this was not enough power to drive the solenoids directly, so to compensate I used a solenoid driver circuit that amplifies the gate signal coming from the interface to 12 V DC. While the code and electronics themselves are relatively simple, the results were effective, and I was able to craft a decent performance dialogue with the system, implemented in a series of national and international performances over the course of 2023.
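For illustration, the sig~ stage amounts to holding a constant DC value across an audio block. A minimal Python sketch of that idea – the function name and block size are my own, not from the patch:

```python
def gate_block(state, block_size=64):
    """Emulate a [sig~]-style gate: fill one audio block with a constant
    DC value (1.0 for on, 0.0 for off). Even at full scale, a DC-coupled
    interface output only delivers a few volts at line level, which is
    why an external driver stage is needed to boost the gate to the
    12 V DC the solenoids require."""
    return [1.0 if state else 0.0] * block_size


print(gate_block(1, block_size=4))  # -> [1.0, 1.0, 1.0, 1.0]
```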

A rough itinerary, with recordings when available, is as follows:

  • 03.09.2023 – Midday Music, Lincoln Hall, Cornell University, Ithaca, NY
  • 03.19.2023 – Durgin Hall, UMass Lowell, Lowell, MA
  • 04.22.2023 – Rhizome DC, Washington DC
    • w. Jeff Surak, Tag Cloud, Guillermo Pizarro, Emily Gen & Xenojothsz
  • 04.28.2023 – Fire Museum Presents, Vox Populi, Philadelphia, PA
    • w. Worm Eater & Christian Valentino Munoz
  • 05.07.2023 – The Grayhaven Motel, Ithaca, NY
    • w. Paulina Velazquez-Solis & Anna Oxygen
  • 05.25.2023 – Wharf Chambers, Leeds, UK
    • w. Marlo de Lara, Comfort & Cowboy Builder
  • 05.27.2023 – Shift, Cardiff, Wales, UK
  • 05.28.2023 – JT Soar, Nottingham, UK
    • w. Marlo de Lara & Murray Royston-Ward
  • 05.29.2023 – Reid Concert Hall, University of Edinburgh, Edinburgh, Scotland, UK
    • w. Marlo de Lara, Ali Robertson & Dead Labour Process

Iteration 2: EEG to Machine Learning Software

Following these performances, the remainder of 2023 was primarily dedicated to the completion of the Moog Rothenberg organ project and the subsequent physical and spiritual recovery from that sprint. While I was putting the organ project to bed, however, opportunities for performances on the West Coast were popping up – with the proviso that, since I would be coming from a conference, space for instruments and hardware would be limited. That meant no basses, solenoids, or analog hardware; just a laptop to tide me over. Looking to expand the project, I returned to the patch from the previous year and modified it to track actual brainwave data rather than just band thresholds, mapping that activity to x-y plotters in Max that in turn scanned through two-dimensional maps of datasets built from audio of past performances and compositions. Needless to say, the resultant sounds were quite different, though the general consensus from friends and well-wishers was that it was some of the best-sounding AI they had heard so far. Of course, one could get into the weeds and argue that this setup is more machine learning than actual artificial intelligence, but hey, a compliment's a compliment.
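The mapping in this iteration can be sketched as two brainwave values driving a cursor over a 2-D map of audio segments, with the nearest segment selected for playback. A minimal Python sketch, assuming the band values are already normalized to 0..1 – the function names and the toy segment map are my own, not from the patch:

```python
def bands_to_xy(band_a, band_b):
    """Map two (pre-normalized) band-power values to x-y plotter
    coordinates, clamped to the unit square."""
    clamp = lambda v: max(0.0, min(1.0, v))
    return clamp(band_a), clamp(band_b)


def nearest_segment(x, y, segment_map):
    """segment_map: list of (sx, sy, name) points placing audio segments
    from past recordings on a 2-D map. Return the name of the segment
    closest to the cursor (squared Euclidean distance)."""
    return min(segment_map, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)[2]


# Toy map of four segments (hypothetical names):
segments = [
    (0.1, 0.1, "bass_drone"),
    (0.9, 0.1, "feedback_swell"),
    (0.1, 0.9, "organ_cluster"),
    (0.9, 0.9, "solenoid_click"),
]
x, y = bands_to_xy(0.2, 0.85)
print(nearest_segment(x, y, segments))  # -> organ_cluster
```

In a real corpus-scanning setup the map coordinates would come from audio descriptors and the lookup would return a playable grain rather than a name, but the control flow – continuous EEG values steering a cursor over a pre-analyzed dataset – is the same.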

Performances using this setup are as follows:  

  • 06.15.2024 – Coaxial Arts, Los Angeles, CA
    • w. Crayons to Perfume, Mattie Barbier & Ian Wellman
  • 06.17.2024 – Dead End Vintage, San Francisco, CA
    • w. KROB, Cruel Work, Decision/Fatigue & Nurse Betty
  • 06.18.2024 – Thee Stork Club, Oakland, CA
    • w. Key West, Blood of Chhinnamastika, Magick Creature & Rubber (() Cement
  • 06.30.2024 – Neighbors Gallery, Ithaca, NY
    • w. Sarah Hennies & Core of the Coalman

Derivative Work


Cybernetic Action for Guitar (2023)

An experiment conducted with visual artist Mauricio Esquivel to determine how the EEG-solenoid interface would react to an extreme sensation or emotion – in this case, the pain experienced while being tattooed. This experiment forms the basis of a new composition, currently in progress, in which the score is intended to be tattooed onto the performer as they play it.

Biometric/Machine Learning Studies for Three Performers (2024)

A composition using the same code as the second iteration of this piece, albeit in triplicate. In performance, datasets are created via live sampling, adding an additional layer of stimuli for the performers to respond to. Individual actions are determined via open-ended prompts on index cards that guide the dynamics of interaction between ensemble members, as well as via EEG-guided, machine-learning-reconstructed audio data taken from the performance itself. It was premiered by Tacet(i) on December 22, 2024 at the Bangkok Art and Culture Center in Bangkok, Thailand, with instrumentation for this performance set as guitar, trombone, and percussion.