Pure Research Submission Process
Pure Research Reports:
'The Choral Revolution' by Rebecca Singh and Nick Carpenter
'Kinesthetic Transference in Performance' by Erika Batdorf, Kate Digby and Denise Fujiwara
'The Unsuspecting Audience' by Moynan King & Sherri Hay
'The Invitation' by Moynan King & Sherri Hay
'The Box' by Camellia Koo
'Voice, Music & Narrative' by Martin Julien
'Hello! Sound, Voice and Connection' by Heather Nicol
'Beneath the Poetry: Magic not Meaning' by Kate Hennig
'Exploring the Land Between Speaking and Singing' by Guillaume Bernardi
'On Comedy' by Lois Brown & Liz Pickard
'Theatre of Illumination' by Shadowland Theatre
Read Brian's article on Pure Research from the Canadian Theatre Review
Pure Research Report - December 2006:
Sound Manipulation
by Cathy Nosaty, Laurel MacDonald and Philip Strong
The Pure Research project of Cathy Nosaty, Laurel MacDonald and
Philip Strong was to experiment with Ableton Live, multi-track audio
recording software that offers several possibilities for immediate
manipulation and playback, i.e. looping, pitch change, audio effects
and filtering. We determined to conduct this research
with the assistance of several senior artists from different artistic
disciplines.
We began with the following questions: How could Ableton Live software
be used in a theatrical performance to collect voices
and sounds from the audience and use them in the performance? How
might one collect sounds from an audience in a theatrical way with
a minimum of instruction? How might artists from various disciplines
use Ableton Live in ways that might not occur to a musician?
Can the physical steps required to use the software be extended
to physical movement, sound creation and storytelling on stage?
DAY 1 A.M. - LAUREL, PHILIP, CATHY, JESSICA and BRIAN
The goal of this morning was to set up and test the software and
audio equipment. We set up two microphones, each with a pedal that
toggled the software in or out of record mode. We also set up a remote
keyboard next to one of the mic 'stations': the keys were programmed
to play back recorded audio, and the controls on the keyboard were
programmed to manipulate pitch and audio filters.
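As a point of reference, the sketch below is a minimal plain-Python model of this mapping, not Ableton Live code: the names (Station, pedal_pressed, key_pressed, the pitch and filter controls) are illustrative assumptions, not anything taken from the software or from this report. It simply shows how a footswitch that toggles record, and a keyboard whose keys and controls are mapped to playback, pitch and filtering, relate to one another.

    # Conceptual sketch only; assumed names, not Ableton Live's API.
    from dataclasses import dataclass, field

    @dataclass
    class Station:
        name: str
        recording: bool = False                      # toggled by the footswitch
        clips: list = field(default_factory=list)    # loops captured so far

        def pedal_pressed(self):
            """Footswitch toggles this station's track in or out of record."""
            self.recording = not self.recording
            print(f"{self.name}: {'RECORDING' if self.recording else 'stopped'}")

    stations = [Station("mic 1"), Station("mic 2")]

    # Keyboard next to one station: keys trigger playback of captured clips,
    # controls stand in for the mapped pitch and filter parameters.
    controls = {"pitch_semitones": 0, "filter_cutoff_hz": 20000}

    def key_pressed(station: Station, clip_index: int):
        """A keyboard key plays back one of the recorded clips (if it exists)."""
        if clip_index < len(station.clips):
            clip = station.clips[clip_index]
            print(f"play {clip!r} at {controls['pitch_semitones']:+d} st, "
                  f"filter {controls['filter_cutoff_hz']} Hz")

    stations[0].pedal_pressed()          # start recording at mic 1
    stations[0].clips.append("loop A")   # stands in for audio captured while recording
    stations[0].pedal_pressed()          # stop recording
    controls["pitch_semitones"] = -12    # control: pitch the loop down an octave
    key_pressed(stations[0], 0)          # key: play the captured loop back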
Along with Brian Quirt and our University of Toronto graduate theatre
student assistant Jessica Glanfield, we used the software for three
vocal improvisations during which Philip manipulated the audio.
We observed that if there were more than three audio events being
recorded and played back simultaneously, it was difficult to hear
and make sense of the audio information, as it became cacophonous.
It seemed that 'simpler was better' when it came to choosing the
number of audio events to be recorded, manipulated and played back:
with fewer audio tracks playing simultaneously, it was more
satisfying to listen to and 'follow' how the audio events were progressing.
DAY 1 P.M. - with dancer/choreographer YVONNE NG
After a fairly detailed explanation of the equipment, Yvonne did
a solo improvisation manipulating her own audio and then suggested
we do a group improvisation that would be recorded and manipulated
by Philip. Yvonne next suggested that as we listened back to our recorded
improvisation, each person draw a 'map' interpreting what they heard.
Using the maps as a score, we collectively did another vocal improvisation.
Our final two improvisations of the afternoon were a duet with Yvonne
and Laurel, and finally a solo vocal/physical improvisation with
Yvonne wearing a wired mic and indicating to Philip when she wished
to stop and start recording.
Our observation of the afternoon's work was that it was much more
interesting to hear audio loops and manipulations with one or two
participants at most: when more than two people participated in an
improvisation, the audio often became very dense.
It was very satisfying to observe the vocal and movement improvisations
between Yvonne and Laurel, and how the choices of each person
influenced the other.
After Yvonne's final solo improvisation, we all enjoyed the clarity
of observing the connection between Yvonne's movement and her vocal
sounds, and Philip noted that during the improvisation, he was often
surprised by Yvonne's movement. We observed that Philip's sensitive
manipulation of the recorded sound during the day's improvisations
had a lot to do with the effective use of the software: otherwise
it could be very easy for sounds to 'build up' and become repetitive,
uninteresting and indistinguishable from one another.
Yvonne expressed interest in Ableton, saying that she would enjoy
exploring ways in which it could provide an opportunity to have
the audience influence a live dance performance.
We also discussed the desirability of a trigger that would give
a performer the ability to start and stop recording without hampering
their creative flow. We decided to add a wireless mic to our equipment
setup for Day 2.
DAY 2 A.M. - with choreographer CHRISTOPHER HOUSE
We gave Christopher brief instructions about triggering record
and playback using the mic and pedals. He decided not to use
the keyboard in his improvisations, and occasionally he chose to
use the wireless headset.
We did seven experiments with Christopher: some solo, and some with
Laurel and/or Philip. As the improvisations progressed, we made
some alterations to the equipment configuration so that only the
last two recorded loops were audible. With this new configuration,
it became very interesting and game-like to observe when someone
would begin a new loop, thereby 'knocking out' the second-to-last
recorded loop. When we used three mics and three pedals with two
participants, the audio became difficult to hear clearly and Brian
observed that it was difficult to find any silence using the technology
in this way. Using three mic 'stations' with Christopher in a solo
improvisation was interesting, as he created an audio landscape
as he moved from station to station.
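As a rough illustration of the 'last two loops audible' configuration, the short plain-Python sketch below (its names and structure are assumptions for this report, not Ableton's own behaviour or API) models the eviction rule: each new recording displaces the older of the two loops that were playing.

    # Conceptual sketch only: at most the two newest loops stay audible.
    from collections import deque

    audible = deque(maxlen=2)

    def new_loop(name: str) -> None:
        if len(audible) == audible.maxlen:
            print(f"knocked out: {audible[0]}")   # the older playing loop is silenced
        audible.append(name)
        print(f"now audible: {list(audible)}")

    for loop in ["Christopher 1", "Laurel 1", "Christopher 2", "Laurel 2"]:
        new_loop(loop)

Each new loop silences one that was still playing, which is what gave the improvisations their game-like quality.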
Some observations in the improvisations between Christopher and
Laurel: when each person was given the ability to manipulate the
other's voice, the real-time manipulation created relationships,
narrative, and occasionally conflict. When Laurel sang a loop, Christopher
transformed it by pitching it down very low. At another point, Christopher
whispered a loop: when Laurel increased the volume, it seemed as
though something very private was made public.
Conversely, a loud, more aggressive timbre was diminished by having
the volume turned down. Laurel would often play or sing a part first,
then repeat and record it: it was interesting to hear a sound, become
familiar with it, then hear it repeat as it was 'captured' in a
loop.
For the final improvisation of the morning between Philip and Christopher,
the speakers were localized to the microphones and triggers, so
that a loop recorded at a mic station would be heard only from the
speaker at the same mic station. Having the loops localized to the
speakers enhanced distinction between loops. Philip created a rhythmic
bedtrack, causing this improvisation to feel more songlike than
previous experiments, and they used text (the phrase 'candies for
children').
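The localized routing can be pictured with the small sketch below, again in plain Python with assumed names rather than the actual session setup: each mic station is tied to its own speaker, so a loop is always heard from the place where it was recorded.

    # Illustrative only: a loop recorded at a given mic station is routed to
    # the speaker co-located with that station (assumed names throughout).
    ROUTING = {"mic A": "speaker A", "mic B": "speaker B", "mic C": "speaker C"}

    def play_loop(source_mic: str, loop: str) -> None:
        """Send a recorded loop only to the speaker at its source station."""
        speaker = ROUTING[source_mic]
        print(f"{loop!r} (recorded at {source_mic}) -> {speaker}")

    play_loop("mic A", "candies for children")   # text loop from the improv
    play_loop("mic B", "rhythmic bedtrack")      # bedtrack at another station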
Cathy felt that the rhythmic bedtrack may have created a desire
for greater density in the improv: with only the last two loops
audible, the audio texture would begin to build, but would be continually
dismantled whenever a new loop was added.
Christopher noted that rather than be given the control to manipulate
his recorded audio himself, he preferred to have Philip manipulate
his tracks.
As we had concluded on Day 1, we noted that both Christopher and
Yvonne created loops of much longer duration than those typically
used in Ableton Live in the context of pop music. Also, the wireless
headset was a welcome addition, as it meant the participants' movement
was not restricted by a fixed mic position.
DAY 2 P.M. - with JESSICA and a group of U of T theatre graduate
students
We began the afternoon with two experiments using loops to create
a musical round or canon form. Laurel recorded and looped a round,
'Had I Wings To Fly'.
Jessica asked if it was possible to have the recorded sound move
around the space, so Philip reassigned the mics so that all three
mics would go into record simultaneously from one trigger: then
Philip and Laurel did an improvisation using voice and waterphone
as they moved around the space. As we listened back to the improv
done with this new equipment configuration, it was very intriguing
to hear the sound 'move' around in the space.
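A small conceptual sketch of this reconfiguration follows, in plain Python with assumed names (it is not how Ableton is actually scripted): a single trigger arms all three mic tracks at once, so a moving performer is captured with a different balance on each mic, and playback through the matching speakers lets the sound travel around the room.

    # Illustrative only (not Ableton's API): one master trigger arms or
    # disarms every mic track at the same time.
    class MicTrack:
        def __init__(self, name: str):
            self.name = name
            self.recording = False

    mics = [MicTrack("mic A"), MicTrack("mic B"), MicTrack("mic C")]

    def master_trigger(record: bool) -> None:
        """A single footswitch puts all three mic tracks in or out of record."""
        for mic in mics:
            mic.recording = record
        print("all stations", "recording" if record else "stopped")

    master_trigger(True)   # performer begins, moving through the space
    master_trigger(False)  # three synchronized takes of the same pass captured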
The next three experiments involved the group of students: the first
experiment was a vocal improvisation, the second was a soundscape,
and in the third, students were instructed to improvise accompaniment
to Laurel's live lead vocal.
The vocal improvisation involving everyone was very dense. The soundscape
(we asked the students to create a seaside environment) was very
interesting and evocative. In the final experiment, the students
noted that they found it difficult to accompany Laurel by creating
loops, especially in terms of lining up rhythmic elements with her
live singing. Laurel expressed a desire to have more direct contact
with the accompanists, and we felt that using the software in this
fashion caused the 'live' performer to feel compelled to accompany
the loops rather than the other way around.
At the end of the afternoon we all agreed that sourcing each speaker
to the microphone at its station, and hearing loops pan and move
around the space, was a very desirable and satisfying element of
the audio treatment.
DAY 3 A.M. - with actor/writer MARTIN JULIEN
After giving Martin a brief introduction to the equipment and the
software, we engaged him in ten experiments: one was an improvisation
with Philip, one was with Laurel, and the others were solo improvisations.
It was very interesting to watch Martin listen and respond physically
as well as vocally to the loops as he created them, and to observe
the speed and intention with which he moved from each mic station
to the control keyboard to manipulate his loops. At one point Martin
was lip-syncing to his own voice, which was a curious effect. Brian
observed that he lost track of when Martin was triggering a loop
and when Martin was making vocal sounds 'live'. Martin was very
adept at using text with the technology, both in linear and non-linear
fashions. In one improvisation, he skilfully created a fascinating
narrative by looping bits of improvised text, and created characters
by manipulating the pitch and volume of his loops. In this experiment,
the loops were perceived as externalized versions of each character's
internal thoughts.
A general observation at the end of the morning: we noted that we
had not been using reverb or digital delay in our manipulations
(although they were available), and did not miss those audio effects.
Brian observed that over the past days we had learned how to set
up templates and parameters that were useful to participants to
enable them to use the technology themselves. Philip suggested another
possibility for future experiments: to have all three pedals at
one central position, with the mics and speakers in different locations.
DAY 3 P.M. - with composer/performer LEE PUI MING
We began by briefly introducing Pui Ming to the mic, software
and control keyboard, and she did a trial improvisation. Her second
improv was at one vocal station. During the improv, she asked Philip
to alter volumes of some of her loops and to remove some of her
earlier loops to 'thin out' the texture of her improv. Pui Ming
wanted control of which tracks were playing, and also wanted the
ability to make abrupt breaks in the texture.
We conducted six more solo experiments with Pui Ming. In one improvisation,
Pui Ming used two stations - a mic with a footswitch trigger to record,
and a keyboard controller to manipulate the recorded sounds. It
was very interesting to watch Pui Ming shift from 'performance'
energy (very intense and 'in the moment') when she created the loops
to a more impartial, 'matter-of-fact' energy when she manipulated
the audio.
In an improvisation with Pui Ming, Laurel wore the wireless headset
and indicated to Philip when she wanted to go into record, while
Pui Ming used the mic at one of two stations to record. It was interesting
to watch the difference between the stationary position and the
wireless headset. Brian observed that it was difficult sometimes
to tell what was 'live' and what was being recorded and played
back: this distinction was blurred by the fact that Laurel was always
being amplified whether she was in record or not.
We noted that one way that Laurel could signal Philip during the
improv was to repeat a sound (in effect creating 'live' loops).
Pui Ming indicated that she didn't want control of Laurel's voice,
and that she enjoyed not being the sole person responsible for the
manipulation of the sound. She also indicated a preference for performing
live in the improv with Laurel over using Ableton.
Pui Ming indicated that she would have liked to have a trigger on
her person, and that it was a distraction from the creative moment
to have to return to the pedal and microphone 'station'. Philip
liked hearing the mix of acoustic and amplified sound, and enjoyed
hearing the sound of the building itself respond and reverberate
with the sounds created in the improv.
A general observation this afternoon: we noted that we should have
considered altering the lighting in the space earlier; we felt
that the lighting was more conducive to our work when we shut
off the fluorescent work lights.
DAY 4
We met with Bruce Barton and students of his course 'Liveness Reconsidered':
we presented our observations about our work with Ableton Live,
and had a lively discussion with the graduate students about technology
and performance.
SUMMARY
At the conclusion of our research we made the following observations:
- we found that the chosen array of equipment and software was fairly
malleable. Philip was usually able to quickly reconfigure the apparatus
to accommodate new ideas and whims of the participants (including
ourselves), and the software was very flexible and adaptable to the
wide variety of approaches taken by our collaborators.
- for the majority of the experiments, Philip played an active role
in the performance as the shaper of the composition, and he noted
that he found it hard to keep track of what was happening when more
than four or five recordings were actively looping. Philip also observed
that the origin of manipulated sounds seemed to remain clear unless
very heavy manipulation was applied. Putting the performers in control
of the equipment was most interesting when they interacted emotionally
with the devices they were using to manipulate the sound.
- there was something inherently engaging about the "déjà vu"
(déjà-écouté?) effect of hearing sonic reproductions (loops)
created from a performance we had just witnessed.
- one interesting and unforeseen development was the use of our
multiple microphones simultaneously to record a surround sound "impression".
When played back over multiple speakers (corresponding to each microphone),
the movement of the performers was reproduced as well as the sound
itself. This effect could be described as "ghostly".
We thank Nightswimming and Pure Research for the wonderful opportunity
to experiment with our collaborators and Ableton Live.
This research was conducted at the University
of Toronto, Canada,
from December 11-13, 2006.