Timbre Spatialisation: The Medium Is The Space
ROBERT NORMANDEAU
Faculté de musique, Université de Montréal, C.P. 6128, succursale Centre-ville, Montréal, Québec H3C 3J7, Canada
E-mail: [email protected]
In this text, the author argues that space should be considered as important a musical parameter in acousmatic music composition as the more conventional musical parameters are in instrumental music. There are aspects of sound spatialisation that can be considered exclusive to the acousmatic language: for example, immersive spatialisation places listeners in an environment where they are surrounded by speakers. The author traces a history of immersive spatialisation techniques, and describes the tools available today and the research needed to develop this parameter in the future. The author presents his own cycle of works within which he has developed a new way to compose for a spatial parameter. He calls this technique timbre spatialisation.

THE MEDIUM IS THE SPACE

In 1967 Marshall McLuhan wrote: 'The medium is the message', meaning that when a new medium appears – such as cinema, television, or the internet today – it borrows the language of the media from which it comes before developing its own language. For example, in the beginning, cinema was essentially filmed theatre. The camera was fixed and the actors appeared on the screen from the left or the right, as though they were on a stage. There were no crane shots, close-ups, and so on. Yet, progressively, cinema developed its own language, including the aforementioned shots, editing techniques and camera movements. And if this new language was partly borrowed from the theatre, photography and literature, some elements were exclusive to cinema. The medium became the message.

The question today is: what characteristics are specific to electroacoustic music? First, let us note that electroacoustic music is a media art much more than a performance art such as instrumental music. This distinction plays an important part in specifying the medium of electroacoustic music. With its introduction in 1948, a form of music existed that, for the first time in history, did not need to be played by a live performer. Furthermore, there was the possibility for this new medium to introduce space as a musical parameter that could be used side by side with other musical parameters such as pitch, rhythm and duration.

1. THE SPACE OF THE SOUND

There are a few examples in history where composers have taken the notion of space into account. One can think of the music of Giovanni Gabrieli (c. 1557–1612), who used 'stereo' choirs in St Mark's Basilica in Venice, or Hector Berlioz (1803–69), whose Requiem, created for Les Invalides in Paris, used distance effects with various wind instruments. However, their music was rarely determined by this spatial parameter. The space had been added afterwards, like an effect: sometimes spectacular, but rarely essential. Stereo listening, even with mono recordings, doesn't change the musical value, whereas in some acousmatic music this idea has been developed to such an extent that the work loses interest when removed from its original context in space.

1.1. Internal space, external space

There are two types of space in acousmatic music: the internal space – put in the work by the composer – and the external one – added by the concert hall (Chion 1988). The first is fixed and is part of the work on the same basis as the other musical parameters. The second is variable, changing according to differences in hall and speaker configurations.

1.2. Invariable space

Yet, one can imagine that there exists in some works an invariable space, where internal and external space are fixed in a standardised relationship, as in cinema with image and Dolby Surround sound. The idea behind this standardisation would be to minimise the role of a hall's acoustics in a concert situation.

2. MULTICHANNEL COMPOSITION

In 1990, the tools in my studio looked like this: a computer (a Mac Plus with 1 MB of RAM, running one of the first MIDI sequencers, Master Tracks Pro), a sampler (an Akai S-1000, the first stereo CD-quality sampler, with eight outputs and 2 MB of RAM!), and an analogue 16-track tape recorder (Fostex) on ½-inch tape
Organised Sound 14(3): 277–285 © Cambridge University Press, 2009. doi:10.1017/S1355771809990094
with Dolby S. These tools were quite simple, and the constraints that came with them implied two things for someone interested in composing multichannel works. Firstly, because this was before digital audio became affordable, there was only a small amount of memory available in samplers, so composers had no option but to compose with sound objects of short duration. Secondly, these constraints had an effect on the relationship between the recorded material on tape and what was spatialised in concert. I was trying then to reach a point where the mix of the work was created in the computer and then recorded on the multitrack tape to be played in concert as it was. Consequently, the final result of the accumulated tracks – recorded two at a time – was really known only at the end of the process. But the process was so precise that the computer kept traces of every single gesture made by the composer, whereas in the past only the sounds or the mixes were kept on tape. In the analogue days, there was no way to record the movement of a fader or a rotary knob. The recording of these gestures was a major change in the way composers conceived their relationship with the sound material.

The push towards creating multichannel works is directly related to two main ideas. The first argues that if a speaker is sent a less complex sound, it is able to represent that sound with better accuracy and clarity. Thus, by dividing music into different layers and directing those layers to different loudspeakers, the result is much clearer than if you were to multiply a stereo signal over a group of speakers – even if all the speakers are placed side by side on a stage.

The second concept behind multichannel diffusion arises out of the ability of human ears (like all mammals' ears) to localise sound in space with great accuracy. Thus, music spatialised over a group of speakers placed throughout a hall allows the listener to better hear the polyphony of the music: each layer arrives at the listener from a different ___location in space.

I started to compose multichannel works in 1990 with my first 16-track piece, entitled Bédé (Normandeau 1994). It was presented during the Canadian Electroacoustic Community conference ..PERSPECTIVES.. in 1991. This first piece was followed by a number of compositions that used multichannel sound diffusion in a one-to-one relationship: one track assigned to one speaker. Amongst those are works such as Éclats de voix (1991), Tangram (1992), Spleen (1993) (Normandeau 1994), Le renard et la rose (1995) (Normandeau 1999) and Clair de terre (1999) (Normandeau 2001).

3. TIMBRE SPATIALISATION

With instrumental music in the 1960s, composers explored spatialisation, creating works that assigned performers to different locations in the concert hall (such as Stockhausen's Gruppen or Carré). However, these works are limited to the timbre of the instruments: the violin on the left side will always sound like a violin on the left. The sound and the projection source are linked together. What is specific to the acousmatic medium is its virtuality: the sound and the projection source are not linked.

A speaker can project any kind of timbre. Furthermore, today, with the appropriate software, these sounds can be located at any point between any group of speakers. What is unique in electroacoustic music is the possibility of fragmenting sound spectra amongst a network of speakers. When a violin is played, the entire spectrum of the instrument sounds, whereas with multichannel electroacoustic music the timbre can be distributed over all the virtual points available in the defined space.

This is what I call timbre spatialisation: the entire spectrum of a sound is recombined only virtually, in the space of the concert hall. Each point represents only a part of the ensemble. It is not a conception of space added at the end of the composition process – an approach frequently seen, especially today with multitrack software – but a truly composed spatialisation. It is a musical parameter that is exclusive to acousmatic music.

4. A CYCLE OF WORKS

In the movement of Clair de terre (1999) entitled 'Couleurs primaires et secondaires' (Primary and Secondary Colours), I had the idea of dividing the timbre of a group of transformed sounds of a Balinese gamelan into different registers and sending these different registers to different speakers. It was only a short movement (2′54″) of a large work (36′), but I had the feeling that this way of spatialising the sound was quite novel at the time. I decided then to push my music a little further in that direction.

4.1. StrinGDberg

StrinGDberg (2001–03; 18′) is a work commissioned by the Groupe de Recherches Musicales in Paris in 2001. The third and final version was completed in 2003 (Normandeau 2005).

StrinG refers to the only sound sources of the piece – a hurdy-gurdy and a cello – both string instruments. StrinDberg refers to the origin of the piece: it was made for Miss Julie, a theatre play by August Strindberg, with stage direction by Brigitte Haentjens, presented in Montreal in 2001.

The two instruments used in the work represent two eras in instrument design and suggest differences in social class: the first belongs to a period where the sonorities were rude and closer to the people; the
second evokes the refinement of the aristocracy. The piece is constructed from two superimposed layers. The first layer is composed of a single recording of an improvisation on the hurdy-gurdy that lasts about a minute. Stretched, filtered and layered, the sound of the hurdy-gurdy, distributed in a multiphonic space, is revealed, layer by layer, over the length of the piece (figure 1). A second layer, made from sounds of the cello, adds rhythm to the work, as well as a strong dramatic quality at the end.

Using the hurdy-gurdy, the player improvised a three-part sequence: improvisation 1, melody and improvisation 2. I primarily used the middle part to compose the work. Out of this middle part, I kept the twelve 'consonants' – the attacks of the notes – and the twelve 'vowels' – the sustained parts between the attacks.

Both the consonants and the vowels were then 'frozen'. All 24 were filtered by four dynamic band-pass filters, the parameters of which changed over sections and time. The opening of each filter increased over the duration of the work and the centre frequency changed constantly. This means that the sound was heavily filtered at the beginning of the work and ended up at a point where the entire spectrum was unfiltered.

In StrinGDberg, the form, duration, rhythms and proportions were derived from the original improvised melody (figure 2). All the sound files for the work were created and organised with multichannel diffusion in mind. This is another defining characteristic of what I call 'spectral diffusion' or 'timbre spatialisation'. The different filtered bands are assigned to different loudspeakers: 16 in the original version. The final mix is then formed in the concert hall, and in different ways for every listener. This solves the balance problems caused by the proximity of the listener to a specific speaker, because the sounds are constantly changing and evolving in each speaker.

In an ideal situation, the piece is not presented in a conventional concert hall but in a huge space where people can walk about during the concert. It is not an installation – the piece has to be listened to in its entirety – but it is a deep listening experience that allows the audience to move into the sound and to experience their own body completely immersed in it.

4.2. Éden

Éden (2003; 16′) is a work commissioned by the Groupe de Musique Expérimentale de Marseille in 2003 (Normandeau 2005). It is based on music I composed for the play L'Éden cinéma by Marguerite Duras (in fact, the concert piece and the stage music were composed in parallel). In the concert version, the music represents the different aspects of the sonic universe of the play: Vietnam, where Marguerite Duras was born and where she lived up to her teenage years, the Éden cinema's piano, the sea,
the sound of Chinese balls, the omnipresence of the rhythm, the journey, and the voice of a Laotian singer used in Marguerite Duras' film India Song.

Contrary to StrinGDberg, where there is a progression in the integrity of the spectrum over time, in Éden a progression is constructed through the rhythms and the density of the information (figure 3). The general amplitude of the work stays the same over time. The general form is based on the contraction/expansion of time around a point two-thirds of the way through the work. As in StrinGDberg, every sound file is filtered by four different band-pass filters assigned to different loudspeakers. The central difference is that in Éden there are many different timbres superimposed, one on top of the other, and there is no progression over time – the entire spectrum is always present. Only the 'inner' nature of the sounds, the microvariations, changes over time.

4.3. Palindrome

Palindrome (2006–09; 15′) is a work commissioned by the Institut de Musique Electroacoustique de Bourges in 2006. A palindrome is a succession of graphical signs (letters, numbers, symbols…) that can be read from left to right as well as from right to left.

In this work, the palindrome exists in both the form of the piece and the sound material itself. Along with these elements, everything else was made in such a way that the listening experience would be the same in both directions, including the channels of the stereo files, the structure of the 24-track spatialisation, the levels of the different curves of the mix, and the musical phrases (figures 4 and 5).

The form is identical and symmetrical in both directions, and it is made from two mixes of the same 96 tracks whose 'weight', instead of being exactly inverted, is the same from the beginning to the end and vice versa. The only difference between the two mixes, introduced for musical reasons, is that the first mix is a decrescendo that begins with a tutti which is gradually filtered down to the high-frequency register, while the second mix is a crescendo that begins with the low-frequency register and gradually increases to a tutti. One can consider this a vertical palindrome!

In this work, the timbre spatialisation is made up of two elements that coexist. The first is a group of sound material that is equally distributed amongst the speakers without any filtering. The second is a group filtered with four band-pass filters, as in the previous works, but with one difference: there is no evolution in the width of the filters, nor in the movement of the central frequencies, over time. What changes over time is the mixing of these two elements, from the low-frequency content at the beginning up to a tutti at the end in the
at the Université de Montréal's Faculty of Music (36 speakers, built in summer 2008). Whatever the speaker configuration is in these different venues, the use of VBAP software allows me to make fine adjustments according to the specifications of these halls. This is something that would have been difficult to achieve with 16- or 24-track works based on a one-to-one track-to-speaker relationship.

Digital Performer) and Zirkonium through Zirkonium Audio Unit plugins. These already exist, but they do not work properly at the moment. With improvements, composers will be able to compose the space of a work while they compose materials along a timeline. The space is not added as a flavour or an effect at the end of the process; rather, it is part of the composition.
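The idea that space is composed rather than added on can be sketched in code. What follows is a hypothetical numpy illustration of timbre spatialisation as defined above – a spectrum split into bands, each band feeding a different loudspeaker, the full timbre recombining only in the air of the hall. The band edges, channel count and FFT-based filtering are illustrative stand-ins, not the author's actual studio tools, which used dynamic band-pass filters:

```python
import numpy as np

def timbre_spatialise(signal, sr, band_edges):
    """Split a mono signal into spectral bands, one per loudspeaker feed.

    Each returned channel carries only part of the spectrum; played
    simultaneously over separate speakers, the full timbre is recombined
    only virtually, in the listening space.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    feeds = []
    for lo, hi in band_edges:
        # Keep only the bins inside this band; zero out the rest.
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
        feeds.append(np.fft.irfft(masked, n=len(signal)))
    return feeds

# Four speaker feeds from one test tone with two partials.
sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)
edges = [(0, 500), (500, 1500), (1500, 3000), (3000, sr / 2 + 1)]
feeds = timbre_spatialise(sig, sr, edges)

# No single feed contains the whole sound, yet together they are the sound:
assert np.allclose(sum(feeds), sig, atol=1e-8)
```

The closing assertion makes the point of the technique explicit: each speaker represents "only a part of the ensemble", and the complete spectrum exists nowhere except in the hall.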
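The fine adjustments that VBAP affords rest on a simple pairwise principle (Pulkki's vector base amplitude panning): the gains of the two loudspeakers flanking a virtual source solve a small linear system in the speakers' direction vectors and are then normalised for constant power. A minimal two-dimensional sketch, assuming numpy and illustrative speaker angles rather than any particular hall's layout:

```python
import numpy as np

def vbap_2d(source_deg, spk_a_deg, spk_b_deg):
    """Pairwise 2-D VBAP: gains for a virtual source between two speakers.

    Solves L g = p, where the columns of L are the unit direction vectors
    of the two speakers and p is the unit vector of the source, then
    normalises the gains so loudness stays constant while panning.
    """
    def unit(deg):
        rad = np.radians(deg)
        return np.array([np.cos(rad), np.sin(rad)])

    L = np.column_stack([unit(spk_a_deg), unit(spk_b_deg)])
    g = np.linalg.solve(L, unit(source_deg))
    return g / np.linalg.norm(g)

# A source midway between speakers at -45° and +45° gets equal gains:
gains = vbap_2d(0.0, -45.0, 45.0)  # → approximately [0.707, 0.707]
```

Scaled up to a full speaker rig, the same solve-and-normalise step is applied to whichever pair (or, in 3-D, triplet) of speakers encloses the source direction, which is how a fixed multichannel master can be adapted to halls with differing speaker placements.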