John Matthias’ new album uses an instrument modelled on the firing of the brain’s neurons. The composer explains how he developed a totally new way to create sound
Musicians and composers have been working with sampled sound since portable tape recorders were invented in the mid-20th century. Later, in the 1980s, acts such as Public Enemy and Coldcut used excerpts of other musicians’ recordings within hip-hop and electronic dance music, a process known as ‘sampling’. Composers have also experimented throughout the 20th and early 21st centuries with shortened and extended timescales in music. The process of combining very short samples, or ‘grains’, of recorded material to form new textures is known as ‘granular sampling’.
I wanted to experiment with these processes by adding in rhythms from the natural world, and from complex systems. These systems can be physical objects or organisations, anything from sand piles to the economy, whose many interacting parts produce unpredictable behaviour. We looked at associating a sound event, such as a ‘grain’ of sound, with an event in a mathematical model of a complex system.
The event we found most exciting and interesting was the electrical firing of a nerve cell, or neuron. The charge on the cell’s membrane changes as it receives signals from other neurons; the cells are connected by axons, long fibres that meet other cells at junctions called synapses. When this charge exceeds a particular threshold, the neuron ‘fires’, sending a sharp electrical signal down its axon to all the other connected neurons.
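This behaviour is usually idealised mathematically as a ‘leaky integrate-and-fire’ neuron: charge builds up, leaks away, and triggers a firing event at a threshold. For readers who want to tinker, here is a minimal sketch in Python; the function name and every constant in it are illustrative assumptions, not the model inside the instrument.

```python
def simulate_neuron(duration=1.0, dt=0.001, tau=0.02,
                    threshold=1.0, reset=0.0, drive=1.2):
    """Leaky integrate-and-fire neuron: the membrane charge v
    leaks toward the input drive and the cell 'fires' whenever
    v crosses the threshold, after which it is reset."""
    v = 0.0
    firing_times = []
    for step in range(int(duration / dt)):
        v += (dt / tau) * (drive - v)   # charge integrates, with leak
        if v >= threshold:              # threshold crossed: fire
            firing_times.append(round(step * dt, 3))
            v = reset                   # membrane resets after the spike
    return firing_times

print(simulate_neuron()[:4])  # first few firing times, in seconds
```

With a steady drive, a single cell like this fires with perfect regularity; the musical interest comes when many such cells are wired together.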
Several years ago, I started playing around with mathematical computer models of these neural networks at Plymouth University, and became fascinated by the collective rhythms of the firing patterns. The patterns can change according to how particular neurons are stimulated and the strength of the neural connections. The breakthrough moment came when I used the firing times to trigger fragments of sound from a piece of music (A Love Supreme by John Coltrane). Disembodied fragments of the music came back at me through the computer, but rather than being random, the patterns were strangely familiar and surprisingly musical.
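To give a flavour of what I was playing with, the sketch below wires a handful of those toy neurons together, with random connections standing in for synapses; the weights, drive and connection probability are made-up values for illustration, not those of the Plymouth models. Every firing event it returns is a cue that could trigger a fragment of a recording.

```python
import numpy as np

rng = np.random.default_rng(0)

def network_firing_events(n=8, duration=2.0, dt=0.001, tau=0.02,
                          threshold=1.0, drive=1.05,
                          weight=0.15, p_connect=0.3):
    """Toy network of leaky integrate-and-fire neurons. Each firing
    event kicks the charge of the fired cell's neighbours, so the
    collective rhythm depends on the wiring as well as the drive."""
    w = weight * (rng.random((n, n)) < p_connect)  # synaptic strengths
    np.fill_diagonal(w, 0.0)                       # no self-connections
    v = rng.random(n) * threshold                  # desynchronised start
    events = []                                    # (time, neuron) pairs
    for step in range(int(duration / dt)):
        v += (dt / tau) * (drive - v)              # leaky integration
        fired = v >= threshold
        if fired.any():
            t = step * dt
            events.extend((t, i) for i in np.flatnonzero(fired))
            v += w[fired].sum(axis=0)              # kicks to neighbours
            v[fired] = 0.0                         # reset the fired cells
    return events
```

Strengthen the connections and the cells begin to pull each other into shared rhythms; weaken them and each keeps its own pulse.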
In 2006, I started working closely with the artist Jane Grant and the programmer Tim Hodgson to produce an interface for the instrument. It consisted of a drop-down box for uploading a sound file, which was then deconstructed and ‘neuralised’ by changing some of the neural network’s parameters, such as the number of neurons and the number of connections.
Jane began to experiment with longer sound events triggered by neural firing, and worked with human breath and speech to produce an artwork called Threshold.
This really expanded the sonic possibilities of the instrument: we realised we could make a ‘live’ version in which sound was input through a microphone and then re-triggered by a live neural network. Then things really started getting exciting. Jane had the idea of situating the nodes of the network in various geographical locations and feeding sounds to a central gallery, while our collaborator Nick explored feeding the sounds back from the gallery to the remote locations.
This idea became The Fragmented Orchestra, which won the PRS Foundation’s New Music Award in 2008 and was exhibited at FACT in Liverpool.
Once we had solved the problem of stimulating and re-triggering live sound, we needed a working interface for our instrument, which we called the Neurogranular Sampler. We worked closely with the brilliant London-based design company Kin to produce a way for a musician or artist to interact with the network and produce sound output.
We are very keen on collaborating with musicians and artists who work with the instrument, and their comments feed back into its design. Underworld have recently been experimenting with the sampler.
So what does it sound like? You can play any sound into it, whether it’s a recording or a live instrument. The sampler takes tiny pieces from that sound (you can control their durations) and replays them whenever the neurons inside the computer ‘fire’. You can choose parameters that make the neurons pulse regularly, or hunt for other firing patterns that change the pathways through the network. The sound that comes out reflects the recording that went in, but reconfigured according to how the neurons are behaving and how their connections change when they are stimulated.
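A rough sketch of that behaviour, reusing the firing events from the toy network above and a grain-selection rule (random source positions) that is simply an assumption for illustration:

```python
import numpy as np

def neurogranulate(audio, sr, events, grain_dur=0.05, out_dur=4.0, seed=1):
    """Replay short grains of the input audio at neural firing times.
    'events' is a list of (time, neuron) pairs, e.g. from
    network_firing_events() above; picking grains from random source
    positions is an assumption, not the instrument's actual mapping."""
    rng = np.random.default_rng(seed)
    glen = int(grain_dur * sr)               # grain length in samples
    window = np.hanning(glen)                # fade edges to avoid clicks
    out = np.zeros(int(out_dur * sr))
    for t, _neuron in events:
        start = int(t * sr)
        if start + glen > len(out):
            continue                         # grain would overrun the output
        src = rng.integers(0, len(audio) - glen)
        out[start:start + glen] += window * audio[src:src + glen]
    return out

# Example: granulate two seconds of noise with the toy network's rhythms.
# sr = 44100
# noise = np.random.default_rng(2).standard_normal(2 * sr)
# texture = neurogranulate(noise, sr, network_firing_events())
```

Changing the grain duration, or tying the source position to which neuron fired, produces very different textures from the same input.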
On my album Geisterfahrer, due for release on Village Green this month, I’ve used the instrument a lot. You can hear it triggering The Holst Singers in the track Climbing Walls, and you can hear it processing the vocals at the end of Spreadsheet Blues, where it takes the vocal slightly out of key. We will use it a lot in the forthcoming live performances of the album, with the live interface designed by Kin as a backdrop.