“Through Music to the Brain”
Spring 2004
Presbyterian College, Clinton, South Carolina
Ann Stoddard, Curator
Piece #1:
“Please help us understand insect brains by expressing your musical tastes.”
(No insult to your musical tastes is implied at all!)
Visitors to the gallery help us understand neural events in insect brains through indications of music appreciation.
The story:
Collaborations with neuroscientists have kept me on my toes for some years. One of them, Christoph von der Malsburg, proposed many years ago that synchronous rhythmic firing was a principal information-bearing phenomenon in a neural network. Meanwhile another of my long-term collaborators, Jim Bower, one of the fathers of computational neuroscience, was building ever better computer programs to represent biologically realistic neural networks. While more and more is known about the brain, no one knows how to interpret the brain’s internal patterns, the internal coding scheme.
As the great physicist Freeman Dyson
has observed, science is most often driven by the availability of new
instruments to measure reality in new ways. A new
instrument did appear about a decade ago, in this case in the form of
tiny electrodes that are made using technology similar to that
used to make computer chips. These can be placed inside the
brains of living insects, allowing measurements of patterns of neural
firing events deep inside a brain.
So what to make of the new data?
The two most common approaches are to analyze data using mathematical
techniques and to make graphical representations of it. In late
2003, Jim Bower asked me to try a third technique. Since the
working hypothesis is that synchronization and periodicity are
important, why not turn the data into music, which emphasizes these
qualities? Maybe something surprising will become audible.
We sought out Kevin Daly, who had
actually tried this already. He gathers data from the olfactory
bulbs of moths that are being exposed to a series of
odorants. The music he made from this data was interesting,
but did not seem to clarify what core structure was actually present in
the data. You can make even random numbers sound like
music. A musical rendering of data ought to reflect something in
the data that’s really there even though we didn’t notice it before.
Over the recent holidays I connected
some of Kevin’s moth data to a software system that can turn it into
music in a myriad of ways. I have been listening for the past
weeks to music derived from the firing patterns in the olfactory bulb
of a particular moth that was exposed to a sequence of several dozen
odorants.
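To make the idea concrete, here is a minimal sonification sketch of my own devising. It is not the software used in the exhibit, and the data and mapping are invented for illustration: spike timestamps from a single neuron become note onsets, and the local firing rate is mapped to pitch, so that bursts and periodicities in the firing become audible as rhythm and melody.

```python
# Hypothetical sonification sketch: turn spike times into (onset, pitch) pairs.
# Fast firing (short inter-spike intervals) maps to high pitches, slow firing
# to low pitches, so temporal structure in the data becomes melodic structure.

def spikes_to_notes(spike_times, low_pitch=48, high_pitch=84):
    """Map a sorted list of spike times (seconds) to (onset, midi_pitch) pairs."""
    if len(spike_times) < 2:
        return []
    intervals = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    lo, hi = min(intervals), max(intervals)
    span = (hi - lo) or 1.0  # avoid division by zero for perfectly periodic input
    notes = []
    for onset, isi in zip(spike_times[1:], intervals):
        # Small interval (fast firing) -> frac near 1 -> high pitch.
        frac = 1.0 - (isi - lo) / span
        pitch = round(low_pitch + frac * (high_pitch - low_pitch))
        notes.append((onset, pitch))
    return notes

# Example: a burst of three spikes followed by sparser firing.
notes = spikes_to_notes([0.00, 0.05, 0.10, 0.60, 1.60])
```

Many other mappings are possible (interval to duration, neuron identity to instrument, and so on), which is exactly why finding a revealing one is the hard part.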
So far the music hasn’t displayed any
overwhelmingly obvious structure, although there are interesting
moments. But the music might very well become informative if I
find an appropriate mapping of data to music.
I need your help to do that. You
are listening to music that changes every few seconds. What is
happening is that the mapping of moth brain data into music is changing
over time.
Please press on the big silver button
whenever the music sounds “good.”
I will go over the favorably marked
sections to find out what visitors were hearing. In this way I
might accelerate the search for a useful musical mapping of the data.
Please be honest and don’t press the
button when the music sounds random. To correct for this
possibility, I am occasionally letting the system play music that
really IS random. So if someone hits the button during one of
those periods, I’ll throw out the surrounding button-press records, on
the assumption that they were just fooling around.
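The filtering step can be sketched in a few lines. This is only an illustration of the idea, with invented interval boundaries and press times, not the exhibit's actual bookkeeping: presses that fall inside a deliberately-random music period, padded by a small margin, are discarded.

```python
# Sketch of the catch-trial filter: discard button presses that fall inside
# (or within `margin` seconds of) any period when the music was random.
# Periods and press times here are invented for illustration.

def filter_presses(press_times, random_periods, margin=2.0):
    """Keep presses outside every (start, end) random period, padded by margin seconds."""
    kept = []
    for t in press_times:
        in_random = any(start - margin <= t <= end + margin
                        for start, end in random_periods)
        if not in_random:
            kept.append(t)
    return kept

# Presses at 10 s and 95 s survive; the press at 31 s falls inside the
# 30-60 s random period and is thrown out.
good = filter_presses([10.0, 31.0, 95.0], [(30.0, 60.0)])
```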
Science is how we can see more of
nature, the most precious and lovely stuff we are so fortunate to have
inherited in this world. Please join me in the use of
esthetics both to further science and to celebrate it.
Piece #2:
“Face the Music”
The story:
One of the pleasures of working with
neuroscientists is that once in a while research leads to new computer
capabilities. This happened in the case of Christoph von der
Malsburg and his former student Hartmut Neven. We have worked on
ways to use ideas about how the brain is able to interpret images in
order to create programs on computers that are surprisingly good at
recognizing objects placed in front of a camera.
In particular, we’ve been interested
in the human face.
In this exhibit, one visitor at a time
will stride up to a camera and make funny faces. These result in
immediate and wonderful visual and musical effects.
Piece #3:
“Motion into Shape”
The story:
The long term potential of virtual
reality in my fondest expectations includes the expansion of the range
of means of expression between people. Specifically, I hope
culture will expand to include what I call “Post-symbolic
communication.” This means that people would be able to create
content and events within a shared virtual world with a level of speed,
ease, and fluency comparable to that of spoken language in the current
era. One could create the world instead of, or in addition to,
symbolizing it. The result would be open in possibility, like a
dream, but taking place in an intentional waking state, while also
being collaborative, like talking. It would be concrete, but
without the limitations we normally associate with concreteness, so
abstraction would become an option, rather than a necessity.
The primary obstacle to achieving this potential future is the lack of
a user interface that can support this kind of fluency in world
improvisation.
As a tiny step in the direction of
imagining this interface, I have created an installation in which body
motions are instantly turned into shapes and activities in a virtual
world. This installation has appeared in various forms over time,
in places like the Brooklyn Bridge Anchorage, the Roskilde Museum of
Modern Art, and the Exit Art Gallery in NYC.