research & software
Learn more here about my University of Huddersfield CeReNeM postdoc research on the Fluid Corpus Manipulation project.
Much of my creative work involves creative coding, so here is a selection of tools that are larger projects and/or may be useful or interesting to others. Check out my GitHub for more.
- DJII modular live electronic fx commissioned by improvising bassoonist Dana Jessen (2022-) (SuperCollider)
- Mel-Frequency Cepstral Coefficients (MFCC) Interactive Explanation (2022) (p5.js)
- Serge Modular Archive Instrument (2021-22) (SuperCollider, Processing, & C++ openFrameworks)
- Aluminum Forest (2021) (Arduino & SuperCollider)
- PlotXYColor plotter for inspecting multi-dimensional data (2020) (SuperCollider)
- JSON Writer (2020) (SuperCollider)
- Audio-Reactive Modular Video Design (2019-) (C++, openFrameworks, SuperCollider)
- Neural Network (SuperCollider client-side implementation) (2019) (see a performance of this in use)
- Module-Tensor: laptop improvisation software (2014-) (SuperCollider) (see performances of this software in use)
- Live Video & Audio Sampler (2017) (C++, openFrameworks, see a performance with this in use)
- Voice Modulator for theater artist Eric F. Avery’s production of The Life and Death of Eric F. Avery (2016) (SuperCollider)
- LFO / Arpeggiator / Gate / Trigger for Endorphin.es Shuttle Control (2016) (SuperCollider)
- Microtonal Keyboard (2016) (SuperCollider)
“Polynomial Functions in Žuraj’s Changeover” Perspectives of New Music (2022)
A mathematical analysis of Vito Žuraj’s orchestral work Changeover. Because Žuraj composes using custom-made computer-aided composition tools, this analysis reverse-engineers some of the equations and algorithms that he may have used. A generative example using Žuraj’s methods is included.
“Expression, Collaboration, and Intuition” Wet Ink Archive (2022)
Wet Ink Ensemble asked me to share some thoughts on my use of artificial intelligence in my compositional and improvisational practice. The article describes some of my implementations of machine learning for music making, along with some broader thoughts about why I use these algorithms.
Human and Artificial Intelligence Alignment: AI as Musical Assistant and Collaborator (2021)
The research I conducted for my PhD included a series of creative projects applying cybernetic systems that use machine learning to my creative practice. I share four of these experiments here including many of the technical details of the implementation. I also share analyses of how I experience using AI for music making, offering a phenomenological understanding of artificial intelligence in the context of creative applications. The concluding section conveys why I choose to use machine learning in my practice, by comparing its use and effects to using randomness and complex systems.
Non-negative Matrix Factorization for Spatial Audio (2020)
Due to COVID-19, the 2020 Spatial Music Workshop in the Cube at Virginia Tech was cancelled, but the organizers invited alumni to give talks about aspects of their work with spatial audio. I presented my use of non-negative matrix factorization (NMF) for audio decomposition and spatialization. See the NMF overview I created for the FluCoMa project.
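As a rough illustration of the decomposition step (a minimal sketch using scikit-learn and a synthetic spectrogram, not the FluCoMa implementation), NMF factors a magnitude spectrogram V into spectral bases W and time-varying activations H; each component can then be resynthesized and routed to its own loudspeaker:

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for a magnitude spectrogram: 513 frequency bins x 200 frames.
rng = np.random.default_rng(0)
V = rng.random((513, 200))

# Factor V ~= W @ H: W holds spectral templates, H their activations over time.
model = NMF(n_components=3, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)   # (513, 3) spectral bases
H = model.components_        # (3, 200) time-varying gains

# One component's spectrogram, e.g. for resynthesis to a single speaker.
layer0 = np.outer(W[:, 0], H[0])
print(V.shape, W.shape, H.shape, layer0.shape)
```

In practice V would come from an STFT of the source recording, and each reconstructed layer would be inverted back to audio before spatialization.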
Interference Patterns: analysis of interacting feedbacks in hollow (2020)
This presentation analyzes the feedback system of my piece, hollow, which uses three large PVC tubes to create feedback at the resonant frequencies of the tubes. Through filtering, delay line modulation, and serial feedback routing, various emergent sonic properties arise. Analysis of the resulting sounds provides some insight into the behaviors of the system.
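The core mechanism can be sketched digitally as a delay line (standing in for a tube's acoustic round trip) whose output is low-pass filtered and fed back on itself. This toy model is my own simplification for illustration, not the patch from hollow:

```python
import numpy as np

def feedback_tube(length_samples, gain, lp_coef, n_samples, excitation):
    """Toy feedback loop: delay line -> one-pole low-pass -> scaled feedback."""
    delay = np.zeros(length_samples)
    out = np.zeros(n_samples)
    lp_state = 0.0
    idx = 0
    for n in range(n_samples):
        read = delay[idx]                        # delayed sample (tube round trip)
        lp_state += lp_coef * (read - lp_state)  # one-pole low-pass filtering
        y = excitation[n] + gain * lp_state      # sum excitation with feedback
        delay[idx] = y                           # write back into the delay line
        idx = (idx + 1) % length_samples
        out[n] = y
    return out

# Impulse excitation: with gain < 1 the loop rings at the delay period
# (the "resonant frequency" of the modeled tube) and slowly decays.
exc = np.zeros(2000)
exc[0] = 1.0
sig = feedback_tube(length_samples=100, gain=0.9, lp_coef=0.5,
                    n_samples=2000, excitation=exc)
```

Raising the gain toward (and past) unity, modulating the delay length, or chaining several such loops in series is where the emergent behaviors described above begin to appear.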
Preserving User-Defined Expression through Dimensionality Reduction (2019)
This is a talk I gave at the FluCoMa Plenary Session at CeReNeM at the University of Huddersfield in the UK. It demonstrates various machine learning algorithms implemented in my improvisation software and how I use those algorithms to explore new modes of expressivity.
Machine Learning Applications for Live Computer Music Performance (2019)
Presentation at the University of Chicago Digital Media Workshop. This presentation demonstrates three uses of machine learning in live computer music performance: (1) using a neural network to classify no-input mixer timbres for light control, (2) a frequency modulation synthesizer that predicts synthesis parameters based on novel incoming spectra, and (3) a t-SNE-based dimensionality reduction system for low-dimensional control of synthesizers with high-dimensional parameter spaces.
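As a hedged sketch of the third idea (using scikit-learn's t-SNE on made-up data, not the performance code), high-dimensional synth presets can be embedded in 2-D so that a performer navigates an XY plane and the nearest preset is recalled:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
presets = rng.random((60, 16))   # 60 presets, 16 synth parameters each

# Embed the 16-D presets in 2-D for XY-pad-style control.
xy = TSNE(n_components=2, perplexity=10, random_state=1).fit_transform(presets)

def preset_at(point, xy, presets):
    """Return the preset whose 2-D embedding is closest to the cursor position."""
    i = int(np.argmin(np.linalg.norm(xy - np.asarray(point), axis=1)))
    return presets[i]

nearest = preset_at(xy[0], xy, presets)
```

Moving a cursor (or controller) through the 2-D map then sweeps smoothly through the 16-D parameter space, since t-SNE places similar presets near one another.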
Approaches to Live Performance and Composition with Machine Learning and Music Information Retrieval Analysis (2019)
This presentation offers three creative uses of machine learning: (1) using audio descriptor analysis and machine learning to organize grains of audio into a performable two-dimensional space, (2) using a neural network to classify no-input mixer timbres for light control, and (3) using a traveling salesperson pathfinding algorithm to re-organize audio grains into a new sequence.
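A greedy nearest-neighbour heuristic gives the flavour of the third approach (the specific pathfinding algorithm used isn't stated here, so this is only an assumption for illustration): grains, represented by descriptor vectors, are chained so each grain is followed by its most similar unvisited neighbour:

```python
import numpy as np

def nearest_neighbor_path(descriptors, start=0):
    """Greedy TSP-style ordering: repeatedly jump to the closest unvisited grain."""
    n = len(descriptors)
    unvisited = set(range(n))
    path = [start]
    unvisited.remove(start)
    while unvisited:
        cur = descriptors[path[-1]]
        nxt = min(unvisited,
                  key=lambda i: float(np.linalg.norm(descriptors[i] - cur)))
        path.append(nxt)
        unvisited.remove(nxt)
    return path

# Toy descriptors, e.g. (spectral centroid, loudness) pairs for five grains.
grains = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.1], [5.1, 5.0], [0.2, 0.0]])
order = nearest_neighbor_path(grains)
print(order)  # → [0, 2, 4, 1, 3]: grains re-sequenced by descriptor similarity
```

Playing the grains back in this order yields a sequence that drifts gradually through descriptor space rather than jumping between dissimilar sounds.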