
This thesis has been submitted in fulfilment of the requirements for a postgraduate degree

(e.g. PhD, MPhil, DClinPsychol) at the University of Edinburgh. Please note the following
terms and conditions of use:

-This work is protected by copyright and other intellectual property rights, which are retained by the thesis author, unless otherwise stated.
-A copy can be downloaded for personal non-commercial research or study, without prior permission or charge.
-This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the author.
-The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the author.
-When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given.
GESTURE AND LISTENING
Towards a social and eco-systemic hyperinstrument composition

Nicola Baroni

The University of Edinburgh 2015

PORTFOLIO OF COMPOSITIONS
Nicola Baroni
-interactive software,
-supporting performance instructions (PDF and video),
-audio and video recordings

COMPOSITIONS
1) Four Kafka’s messages
for hypercello
-45’ (a shorter 30’ version is allowed)
-Vor dem Gesetz
-The Wish to be a Red Indian
-Odradek
-The Trees
2) Awakening
for Interactive Harp Quartet -10’
3) Le Demoiselles d’Avignon
for Interactive Cello Quartet and Live-Video -11’
4) Wire’s, Hyper-cello solo
from the Live-electronics duo Shaman’s Wires -7’/10’
5) Suite, audio-video interaction -sound installation (demo version 10’)
for 8 self-observing audio files

APPENDIX
Demo documentation of collaborative interactive works.
Gentilini-Baroni Tanpura for hypercello
Messieri-Baroni XXth capriccio from Zadig for Hypercello
A.H. Cello trio and interactive conductor
Pavia-Baroni Sobre Sombras y Reflejos for Hypercello and Piano duo
Baroni-Fabbriciani Kobane for Bass-flute and interactive system

ABSTRACT

The research implements interactive music processes involving sound synthesis and symbolic
treatments within a single environment.
The algorithms are driven by classical instrumental performance through hybrid systems called
hyperinstruments, in which the sensing of the performance gestures leads to open and goal-oriented
generative music forms.

The interactions are composed with MAX/Msp, designing contexts and relationships between
real-time instrumental timbre analysis (sometimes with added inertial motion tracking) and a
gesture-based idea of form shaping. Physical classical instruments are treated as interfaces, giving
rise to the need for unconventional mapping strategies on account of the multi-dimensional
and interconnected quality of timbre.

Performance and sound gestures are viewed as salient energies, phrasings and articulations carrying
information about human intentions, and thereby become able to change the musical behaviour of
a composition inside a coded dramaturgy. The interactive networks are designed in order to
integrate traditional music practices and “languages” with computational systems designed to be
self-regulating, through the mediation of timbre space and performance gestural descriptions.

Following its classic definition, technology aims to be related not mainly to mechanical practices
but rather to rhetorical approaches: for this reason the software often provides interactive scores, and
must be performed in accordance with a set of external verbal (and video) explanations, whose
technical detail should nevertheless not impair the most intuitive approach to music making.

PUBLISHED PAPER
http://www.ems-network.org/spip.php?article405

Acknowledgments

I would like to thank my PhD supervisors, Dr. Michael Edwards and Dr. Martin Parker for their
insightful presence, knowledge, and their energising empathy; Owen Green, Kevin Hey and
Jules Rawlinson for their competent help; Mike Webb for his supportive and precise assistance
with my foreign English language; Marco Biscarini and Stefano Albarello for their musical and
professional collaboration; Donald Bell, Dante Tanzi, Robert Hamilton, Marcellino Garau,
Francesco Erdas, Andrea Melega and Tommaso Peregalli for their technical support during the
concerts. A passionate thank you to all the wonderful musicians collaborating in my endeavours:
Antonello Manzo, Antonio Mostacci, Clea Friend, Ms. Angelica Ferrari, Nicola Vendramin,
Cristina Centa, Cristiana Passerini, Pete Furniss, Dimitri Papageorgiou, Emma Lloyd, Akiko
Nakada, Elena Zivas and Giacomo Serra. Special thanks to Prof. DK Arvind and the Speckled
Computing team, who generously offered me the concrete possibility to experiment and develop
music through the inspiring Orient Motion Tracking System. This research would never have
been started without Machover’s leading ideas and implementations of the hyperinstruments,
hyper-strings and hyper-cello developed at MIT since the 1990s. Further significant aspects of my
compositional work rely on and develop the concepts of Performance Ecosystems by Eigenfeldt,
and Audible Ecosystems by Di Scipio.

Submitted in satisfaction of the requirements for the degree of PhD at the University of Edinburgh, 2015

Declaration

I composed this portfolio; the work is my own.



No part of this portfolio has been submitted for any other degree or qualification.

LIST OF CONTENTS

The performance instructions are given here, in bound paper, as a verbal score.

The full documentation is available inside the attached hard-drive and online at:

nicolabaroni.com/phd/documents

password: music

Each composition is inside a separate folder containing:
-List of contents
-PDF instructions
-Software
-Audio-video_docs

The documentation at nicolabaroni.com/phd/documents follows the same structure, with the only
difference that the software is given there as a downloadable standalone MAX application.

Audio-video documentation:

The files of the studio recordings are given inside the hard drive, as well as the files of the video
performance instructions, which are also accessible from within the application (performance-notes
section). The whole audio-video documentation is also available online.

INDEX

1) K_messages p.6

2) Awakening p.64

3) Le Demoiselles d’Avignon p.94

4) Wire’s p.152

5) Suite p.166


A detailed list of externals and a specific index are given after each presentation.

K_messages
hyper-cello

PRESENTATION
The work is a cycle of four interactive compositions inspired by Kafka's short stories.
Every "message" can be performed as a single piece or as part of a complete work in four
movements. The optimal duration of the whole cycle is about 45’, though reduced versions down to 30'
are possible. The composition is focused on the central role of the soloist (the hyper-cellist), who
develops the electroacoustic music in real-time, sharing part of the responsibility of the composer.
It is not essential to be skilled in computer music in order to perform the work, but the appropriate
equipment is necessary. The interaction and its performance trajectories are visually monitored by
the cellist on the laptop screen (replacing the classic score and music stand).

Sound is the absolute means of communication between the cellist and the electronics. In other
words the cellist feeds the live electronics with his/her sound, but at the same time the cello sound
gives the machine some information necessary to execute compositional choices in real-time.

By understanding the essence of the composition and properly monitoring the sounds and functions,
the cellist will be able, by playing, to drive and influence the music composition in real-time.

However, this compositional empowerment is driven through the computational language of the
machine, which is not exactly the same “language” as perception, performance and composition.
The machine "understands" the cello sound through spectral analysis, while the cellist drives
the interaction by listening and playing.
True connections and true distances can be found between these languages and dimensions of
experience, allowing for non-obvious interactions, which require new symbols and performance styles.
The cello sound is exploited by the performer as a mediating technology between acoustic music
and composition processes.
The musical results (maybe powerful or illuminating, maybe conflicting or complicated) are
dynamically linked to the poetic realism of Kafka, where hyperbolic desires, inflexible laws and
unexpected reflections drive human situations towards extreme conditions of appearance.

Fig.1 Franz Kafka

Simple animated graphic/verbal scores are provided in order to properly mediate the interactive
composition, which evolves autonomously without the help of any external live-electronics performer.
This total autonomy puts the cellist in direct contact with the electronic processes
through an expanded knowledge of his/her sound and musical actions.

PLAN
Message_1) Vor dem Gesetz (duration from 6 to 15 minutes, default 10')
Message_2) The Wish to be a Red Indian (duration 4')
Message_3) Odradek (minimal duration approx. 10’)
Message_4) The Trees (from 10' to 15', default 13’)
The duration of “messages” (movements) 1 and 4 has to be set in advance by the performer;
otherwise it is left at its default.
Message 2 has a fixed short duration.
The section advances of message 3 depend on the sound of the cellist,
who decides the durations in real-time.

RECORDINGS
Studio recordings
Audio
K_1 https://soundcloud.com/nicola-baroni/vor-dem-gesetz
K_2 https://soundcloud.com/nicola-baroni/the-wish-to-be-a-red-indian
K_3 https://soundcloud.com/nicola-baroni/odradek
K_4 https://soundcloud.com/nicola-baroni/the-trees
Video
K_1 (Antonello Manzo) https://www.youtube.com/watch?v=FW3ho-6fPfk
K_4 https://youtu.be/-7NAoElhXPQ

Live recordings
Compilation https://www.dropbox.com/s/441azkw88z2usfo/kafkas.mp4?dl=0
K_1 https://youtu.be/KiivIwgPM7I
K_1 (Clea Friend) https://youtu.be/jbMAJYYDoOA
K_2 https://youtu.be/UwzEVNd_rjA
K_2 (duo) https://youtu.be/X9lqKw64TSE
K_3 https://youtu.be/MHpixGI1xMQ
K_4 https://youtu.be/-7NAoElhXPQ

EQUIPMENT
Movements 1, 2 and 3 require:
1 microphone for the audio,
1 pickup for the sound analysis,
1 sound card (at least 4 outputs),
1 laptop containing the Applications (or the native MAX patches).
My personal equipment involves a DPA 4099 microphone, a Fishman cello pickup and an RME UCX sound card.
A different set of equipment needs careful calibration of the analysis data
(see the technical section below).
Some main calibration parameters should in any case be checked before every performance,
at least for movements 1 and 3.
The pickup input has to guarantee full isolation of the cello sound from the environment
(not always achieved with piezoelectric or directional microphones).

Movement 4 “The Trees” requires the same equipment, but with the addition of:
- 2 small speakers
- 2 condenser microphones (possibly omnidirectional)
- 1 inertial sensor (3-axis accelerometer and gyroscope)
The extra speakers and microphones have to be positioned on stage, close to the cellist, in order to
produce controlled audio feedback. If the sound card is small,
a mini-mixer is necessary in order to provide the additional phantom-powered inputs.

Every single piece is a dedicated MAX application working on Mac OS X 10.8 (or above).
The performance is also possible through the original code if MAX/Msp 6.1 (or above) and the
externals listed in the appendix are installed: a dual-core Mac is sufficient, though a faster hard
drive is suggested, at least for Message_3. In Message_4, running the Orients_15 Motion Tracking
system, a Mac with native Bluetooth 4 is the minimum requirement (see page 49).
The spatialization is quadraphonic. Options for 8 speakers are provided.
Stereo diffusion is allowed but not ideal.

Message_1 “Vor dem Gesetz”
Video instructions at: https://www.dropbox.com/s/x6t37zoczi67xrg/K_1_instructions.mov?dl=0
Story at: http://www.kafka-online.info/before-the-law.html

COMPOSITION
Invent and perform a musical introduction segmented into four contrasting phrases.
Don't think of your music too much in terms of notes, but mainly in terms of contrasting timbres,
dynamic shapes and pitch registers, organised in four "phrases".
Each "phrase" is best conceived as a well-characterised sound-gesture, a sculpted sound
expression that changes internally and moves towards the next one.
Each gesture ("phrase") lasts 20", and the overall duration of these four divisions is precisely 1' 20”.

The machine "listens" to your music, but instead


of recording your sound it records, as flowing
numbers, five features of your analysed sound:
pitch, loudness, resonance, quality, density.
Your initial task is to find and perform music
contrasts and developments in these terms.

Fig.1-K_1 The interactive screen (Message K_1)

Two close red flashes in the upper part of your screen signal the beginning of every
phrase. One single red flash (after 10") tells you that you are passing the middle portion
of the phrase.

Fig.2-K_1 The Timer flash

You can see in the left part of the screen a monitor of what is tracked in real-time by the system,
and on the right how it is stored inside the machine's memory.

Fig.3-K_1 Sound analysis in real-time
Fig.4-K_1 Sound features stored in memory and later recalled during the performance

INTERACTION
After 1' 30" from the beginning the computer starts to play electronic music, and from this
moment until the end your cello performance is interactive. During the continuation of the piece,
the software predicts which one of the four “phrases” (the previous 20" sound-models) is matched
by the music you are currently playing.

Technically, the process happening in your 1’ 20” initial acoustic performance is called “machine
learning”, and the continuation until the end is called “gesture following”.

Fig.5-K_1 Machine learning and machine following

The computer performs a “data-driven” work of “feature recognition” within the time of your
music. In machine-learning mode you define and perform four music models; in machine-following
mode you vary these models in order to creatively interact. Compositionally you have to:
-carefully invent the four contrasting cello “sound-gestures” and shape them into an overall musical
introduction (you can write sketches or partial staves, or play by memory)
-vary these models in order to improvise new cello music, which contextually drives
the electronics by means of parameters of similarity/difference between the current and the previous
music (you will follow the screen monitors as an interactive score).

Machine following and music variations:
-If you try to repeat one of the opening phrases perfectly, the system should be able to tell
you precisely which portion of the phrase you are currently playing and whether your speed of
execution is the same or different.
-If you insert some variations compared to your previous performance, the machine starts to jump
in order to identify which part of your previous music you are imitating.
-If you radically change your sounds, the machine still attempts (with unpredictable results)
to recognise which is the model phrase and what are the point and speed of reproduction.

You can see all of this in the monitor on the right of your screen.
The machine is natively built to recognise perfect imitations of previous models and slight
time variations upon them, but I have increased its "tolerance" parameter in order to allow for
more variation in your performance.

SYSTEM
In the bottom part of the screen you see which one of the four initial gestures the machine thinks
you are performing. Since it is a statistical system, you can see the percentage probability with
which your performance pertains to one model rather than the others. The more strictly you imitate
a previous phrase, the more straightforward the machine recognition should be.
The more you vary the music, the more time the machine takes to reach a consistent
response, and the four models of similarity may hover around middle ranges of probability.
The figure shows two different possible states of sound recognition. In both cases it appears that
the current sound recalls the second phrase (“likeliest”), but the parameters of “likelihood” are different.

In the first case the system shows that the currently played sound is similar to the second
model-phrase, but it also has some degree of similarity with the third phrase.

In the second example the currently played music should be quite different from any previously
played model, because all the parameters of similarity are quite low and levelled.

Fig.6-K_1 Pattern recognition

The Time-Index parameter shows, from “the point of view“ of the machine, which portion
of the 20” model you are currently imitating (the upper example detects the final part of the phrase,
the second example the initial part); the same detection is shown through a moving bar inside
the bottom-right monitor (see fig. 8-K_1).
The Detected-Speed parameter predicts whether the imitation of the model is being performed more slowly or faster.

We are using this system as a musical instrument. You can influence the machine predictions
as you wish: any different prediction about how you are playing creates different electronic music.

Through previous rehearsals you will gain a refined technique of electroacoustic sound modelling
in real-time. Obviously a careful choice and timbre contrast of the sounds performed in the four
opening 20" music gestures is extremely important.

In other words you can compose live electronic music by carefully balancing imitations
and variations of your initial cello performance. The four parameters of “likelihood” act as a mixer
because they are mapped to the amplitudes of four different Virtual Music Instruments
(each VMI processes the live sound of the cello in a radically different way).
Therefore, as an example, if your current music is detected as totally similar to your second initial
phrase you will interact with the sounds of VMI nr. 2 at full amplitude; but if your music
ambiguously matches different initial models (as in the cases shown in the figures above) you will
produce a mix of different live electronics (VMIs) with variable relative amplitudes.
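A minimal sketch of this mixing logic follows, in Python rather than MAX/Msp; the function names, the normalisation step and the smoothing coefficient are illustrative assumptions, not the patch's actual code:

```python
# Illustrative sketch (not the MAX/Msp patch): mapping the gesture
# follower's likelihood vector onto the amplitudes of the four VMIs.

def mix_gains(likelihoods, previous_gains, smooth=0.9):
    """Normalise the four likelihoods so they sum to 1, then smooth
    them over time so the mix does not jump between frames."""
    total = sum(likelihoods) or 1.0            # avoid division by zero
    targets = [l / total for l in likelihoods]
    # one-pole smoothing: each gain drifts towards its target
    return [smooth * g + (1.0 - smooth) * t
            for g, t in zip(previous_gains, targets)]

# Example: the current playing clearly recalls phrase 2,
# with some similarity to phrase 3 (cf. Fig.6-K_1).
gains = [0.25, 0.25, 0.25, 0.25]               # initial flat mix
likelihoods = [0.05, 0.70, 0.20, 0.05]
for _ in range(20):                            # twenty analysis frames
    gains = mix_gains(likelihoods, gains)
print([round(g, 2) for g in gains])            # -> VMI 2 dominates the mix
```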
THE STORY
Kafka's story presents a countryman who has asked for a meeting with the emperor. The man stands
in front of the door of the fabulous palace waiting for admission. A porter tells the countryman
how difficult and dangerous it is to get inside; the countryman waits an immensely long time
to get in, until the porter closes the door, telling him: "the door was opened for you,
but now you are dying and I have to close it”. The music interaction asks the cellist to dramatise
the theme of expectancy, by provoking the question: "What if you were the countryman?".

The computer response could sound alien in terms of sound (and maybe also in terms of pattern
recognition!), as in Kafka's story, but the cellist is provided with many possible
solutions in order to gain "desired" music responses (in terms of music narration, experienced
anxiety, or interactive music play).

REAL-TIME COMPOSITION AND ANALYSIS

The common language between the cellist and the machine is the real-time analysis of the cello
timbre as it is performed: spectral analysis is the grammar of the computer, and sound
is the cellist's means of control over the machine and the real-time composition (RTC).
Obviously sound and spectrum are different aspects of a unique entity, and the cellist's means
of control over the system involves complex strategies of musical navigation, whose output could
be technically successful, confused or unexpected. The machine can "listen" to the cello
sound differently from human expectancies, because its system is abstract and computational.

The conceptual and technical virtuosity of the cellist could lie in finding a deep
reciprocal control and understanding with the machine, but also in challenging the system
with unexpected sounds, or otherwise in musically exploiting possible reciprocal misunderstandings
as an opportunity to produce interesting music.
The theme of the interaction is real-time composition through expectancy and variation, in which
Kafka's story represents the state and the affect of a "threshold of failure”.

The cellist is free to repurpose sequences perfectly identical, varied or mingled, in order to extract in
real-time different machine evaluations of similarity between music sequences. The parameters of
these evaluations are mapped to a macro-form (the "sound narrative") of the current electroacoustic
music. In this way the musical choices of the cellist, as high-level decisions, drive the composition,
treating the parameters of similarity vs. variation as means to control the overall result.

The choice to help or to confuse the machine through linear vs. scrambled patterns is a further
option, treating the possible misunderstandings of the human-computer interaction as a musical
instrument and opportunity. On the other hand, the success of the machine is not absolutely
guaranteed, and this could sometimes require the cellist to revise music strategies on-the-fly.

The whole composition is driven by the cello sound, which feeds the audio of the electronics
and at the same time modifies the kinds of electronic sound treatments by interacting with its own
audio analysis. This last compositional interplay can be developed by the cellist intuitively
or through a deeper understanding of the spectral data and mappings.
PERFORMANCE NOTES
The overall duration of the piece is 12 minutes by default, but it can be set differently
beforehand by the performer, by pressing the dedicated message-number.

Fig.7-K_1 Setting time-duration

Press the “Spacebar” to begin:
-during the first part (the acoustic interactive seed of the work) you will follow the red “Timer”
flashes (see p. 3) and the increasing section number.
The two bottom monitors show the incoming analysis of your cello sound.

During the electronic continuation you will follow the interactive mixer (called “likelihood”).
The bottom-right monitor will show the stored analysis data of the “phrase” model in action,
and the predicted point of time-occurrence of your music inside the currently active model-phrase.

Fig.8-K_1 Monitors of interaction

A number shows (in minutes) the time point of the current performance.

Fig.9-K_1 Monitor of duration

When the piece ends, the electronics fade out.

Rehearse mode:

To simply explore the sound analysis system without starting the piece, or to calibrate,
press the icon “DSP start” using the mouse.

Fig.10-K_1. Rehearse mode

Calibration:

A following section explains how to set up optimal parameters of sound


analysis

Fig.11-K_1 Calibration icon

Spatialisation:

Spatialisation is driven by Ambisonics (azimuth, distance, velocity of source shifts).
Each of the VMI outputs is automatically positioned in space according to features
coming from its individual real-time sound analysis.
You can monitor them.

Fig.12-K_1 Ambisonics interface

Spatialisation is quadraphonic by default, but it can be set to 8 speakers or to stereo.

Fig.13-K_1 Setting the speakers
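As an illustration of this kind of feature-driven positioning, here is a small Python sketch; which descriptor drives azimuth or distance is an assumption chosen for demonstration, since the actual patch defines its own per-VMI mappings:

```python
# Illustrative sketch of a feature-to-space mapping like the one described
# above. The choice of descriptors is an assumption, not the patch's code.
import math

def position_source(centroid_norm, amplitude_norm):
    """Return (azimuth in radians, distance 0..1) for one VMI output.
    Brighter sound -> wider azimuth sweep; louder sound -> closer."""
    azimuth = (centroid_norm * 2.0 - 1.0) * math.pi   # -pi .. pi
    distance = 1.0 - amplitude_norm                   # loud = near
    return azimuth, distance

az, dist = position_source(centroid_norm=0.8, amplitude_norm=0.3)
print(f"azimuth {math.degrees(az):.0f} deg, distance {dist:.2f}")
```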

MICRO-SHAPES
While the overall sound narrative is shaped by the main mixer called “likelihood” (see above),
multiple mappings allow for local micro-controls/influences upon the electronics.

The electroacoustic sound is in fact made from four Virtual Music Instruments, dynamically mixed
by the main parameter of likelihood.

The VMIs are fed by the cello sound, which they process in real-time through variable parameters;
these variables are at the same time modified by the same cello input.

In other words the cellist creates the material and the means of control of the electronics through
the same sound gestures. Despite the aurally distant result, the electronics thus keep an intimate
connection with the cello sound in terms of textural and gestural distributions of common materials.

This mixed music is not created beforehand by a composer, nor driven by a score:
it is functionally designed as a creative interaction, to be explored and revealed on-stage
by the cellist, who feeds the open system in terms of sound and control.

Artificial sounds are not intended as extensions of, direct processing of, or responses to the cello;
they are instead conceived as a parallel music.
The complexity of the performance underlines that it is a compositional task, sonically driven
by the cellist.

INSTRUMENTS
The software composition is based on the final mixing of the following VMIs:
1) Harmonizer, 2) FOG synthesis, 3) Sampler, 4) Delay plus feedback.

The output of the VMIs is strongly influenced by the cello sound through local mappings.
A detailed analysis of the complex internal mappings of each single instrument can be made by
navigating inside the MAX application and reading the annotated comments inside every abstraction.

The index number of each of the VMIs is the same number as tagged inside the main mixer called
“likelihood”, which controls the referenced amplitudes of every single VMI.
The parameters “time-index” and “detected speed”, shown in the bottom part of the screen,
influence the electronic sounds of each instrument in many ways. They refer to the time location
and speed of performance as predicted by the computer recalling the initial music phrases
(see the section “System”, p.5).

The VMIs are called: 1) cellos; 2) fog4_K; 3) sampler; 4) extra-amp.
It is suggested, but not mandatory, to feed the initial four phrases
(which reference the four instruments) with:
1) light/expressive sonorities;
2) dense textures;
3) extreme gestures;
4) subtle extended techniques.

Sonorities that fit well with the respective VMI should be found and explored.

A few small monitor cues are given, including the amplitude monitor of each effect.

Fig.14-K_1 VMIs interface

-1) “Cellos”.
The four-voice harmoniser multiplies the cello output.
Each voice can be independently pitch-transposed and shifted in time. The system drives a
non-continuous transformation of these parameters: globally, an increasing variability and
contrast of the live cello timbres pushes towards a much more dynamic and changing behaviour of
the voices, which conversely tend to be fixed in delay and transposition when the cellist plays more
softly and stably. Extreme sounds allow for bigger changes: very high/low pitches drive extreme
transpositions (up to 2 octaves up and down), while spectral centroid tracking pushes the
individual delayed copy of the cello sound up to 10 seconds away (when the performed timbre is harsh and
high-pitched), or brings it closer than 1 second when the performed timbre is low, soft and fat.
The performed noisiness increases the feedback (the density of transposing delays). The whole
harmoniser is modified in timbre, especially by the parameters of “time-index” and “speed-detection”
(the “imitation” parameters of the referenced “phrase” 1, shown by the monitors in the bottom
part of the screen). As shown in Fig.14-K_1, small yellow flashes signal when the main parameters
are changing, and the number box called “target” signals which of the four voices is involved.
-2) “Fog4_K”.
FOG synthesis granulates the cello sound, which is cyclically recorded live.
The most prominent influences on the parameters of granulation are:
-live cello density: a full live sound increases the speed of the effect, making it more intense,
rhythmic and similar to a recognisable cello sound; airy sounds and pizzicato slow down the effect
towards a sort of “spiritual drone”.
-live cello amplitude: by playing loud you rarefy and distance the grains of the sound output;
by playing soft you intensify the textural overlapping. Moreover, by lowering the “fog parameter”
inside the calibration module (from the default 10 down to 5 or less) you can in any case gain more
textural overlapping; by setting it higher (up to 20 or 25) you increase the tendency towards granular
rarefaction (see the calibration section, and the referenced calibration patcher).
-pitch and brightness of the live cello playing influence different timbre qualities
of the FOG synthesis (which are mainly to be experienced by listening).
-again, the parameters of change in the granulation are not continuous: the rate of change
of the granulation parameters is faster if the “time-index” of the imitated phrase approaches
the end of the model, and slower if it is at the beginning. A high “speed-detection” lessens
the smoothing parameter, driving impulsive changes.

-3) “Sampler”
outputs chunks of a prerecorded cello sound file (rhythmic and aggressive):
-the sound character of the live cello recalls similar sounds stored in the file
-high pitches and brightness of the live cello increase the overall density
-a high “time index” (final portion of the referenced phrase) decreases the overall density, while
the detection of its initial portion increases it
-crescendos allow for upward glides, decrescendos for downward glides
-pitch classes of the live cello like C, C sharp and D dramatically increase the density of the output;
B, B flat and A instead scatter the sampling into detached groups of sounds; the other middle notes
(in a chromatic scale) maintain the effect at middle ranges (see the sketch below)
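The pitch-class rule lends itself to a compact sketch; the three density values below are placeholders, while the grouping of the pitch classes follows the description above:

```python
# Illustrative sketch of the pitch-class -> density rule described above.
# Density values are assumptions; the pitch-class grouping is from the text.

DENSE = {0, 1, 2}        # pitch classes C, C#, D
SPARSE = {9, 10, 11}     # pitch classes A, A#, B

def sampler_density(midi_note):
    pc = midi_note % 12
    if pc in DENSE:
        return 1.0       # dramatically increased output density
    if pc in SPARSE:
        return 0.1       # scattered, detached groups of sounds
    return 0.5           # middle notes keep the effect at middle ranges

for note in (36, 45, 52):        # low C, A and E on the cello
    print(note, sampler_density(note))
```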
-4) ”Extra-amp” (variable delays)
-the output gain is set to high amplitudes in order to allow special effects such as noisy sounds,
extended techniques or subtle textures
-the “time index” crucially modifies the delay line and its feedback: when the initial portion
of the phrase is detected the output produces short delays (reverb-like); on approaching the final
part of the phrase the delays are spaced further apart
-live sound amplitude is mapped to up-down glides of the output
-inside instrument 4 the time-index parameter is much more prominent; it is therefore advisable
to invent model-phrase 4 (at the beginning of the performance) so as to start with a textural
sound developing towards a rhythmic, patterned continuation
-a small monitor of the parameters is given above the patcher

TECHNICAL REMARKS
AUDIO ANALYSIS
The five "timbre" descriptors feeding the interaction need a brief discussion.
Traditionally the word "timbre" indicates a sort of ill-defined condition: we clearly perceive timbre
features, but it is hard to define them objectively in shared words.

The five audio parameters centrally involved in the interaction are:
pitch, loudness, resonance, quality, density.
- Pitch and loudness are extracted from the central frequency and the amplitude of the cello sound.
Classical theory considers them as quantifiable aspects of sound independent of “timbre”.
They are indeed timbre aspects of sound, but in any case clear concepts to be experienced,
and straightforward to track spectrally in real time (respectively through the “yin” algorithm
and envelope-following).
Loudness needs calibration.

The other three parameters are more concerned with the traditional idea of timbre, and they need
specific levels of treatment, filtering and compression.
- Quality is the most direct timbre descriptor in this context, referencing the noisiness vs. periodicity
of the spectral components (through the “yin” algorithm). The noisy vs. purely tuned cello
sound can be intuitively monitored by the performer (but note that chords and double stops are
tracked as much less pure in “quality” than single notes). The numeric output in any case needs
compression in order to be tuned to the physical specs of a cello and to the individual character of
different performers: it can therefore be calibrated in order to obtain better nuanced responses.
- Resonance tracks the gradients of response between free vibration (i.e. after a soft pizzicato),
a soft airy bow conduction, a “full tone”, and a compressed sound production.
The more the cello bow stresses the string, the more the value of “resonance” falls towards zero.
The parameter is obtained through filtering, compression and scaling of the flowing value
of the spectral statistical distribution called “kurtosis", detecting the peakedness vs. flatness
of the real-time spectral envelope.

- Density combines the parameters of spectral centroid, spectral spread, central frequency and
amplitude in order to approximate the tracking of the “sul tasto” effect vs. a
full-expressive/near-the-bridge sound, generally obtained by variations in the speed and contact
point of the bow. Pizzicato styles should result in zero “density”.

A double threshold sets the main values to zero when the loudness is very low or when the sound
is definitely noisy. In this way noisiness can be detected as a special effect, and the sound
tracking can be performed within the range of normal cello playing (avoiding false
detections of environmental sounds, and unreliable analysis of noisy components).
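A minimal Python sketch of this gating-and-smoothing logic follows; the threshold on “quality” and the smoothing factor are assumptions, while the -65 dB silence threshold is the default mentioned in the Calibration section:

```python
# Illustrative sketch of the double-threshold gating and smoothing applied
# to the descriptors: zero the value on silence or on definitely noisy
# sound, then low-pass the stream to smooth meaningless peaks.

def gate_and_smooth(value, loudness_db, quality, previous,
                    silence_db=-65.0, noise_quality=0.15, smooth=0.8):
    """Return the next smoothed descriptor value.
    quality ~ 0 means noisy, ~ 1 means purely periodic."""
    if loudness_db < silence_db or quality < noise_quality:
        value = 0.0                      # silence or noise: gate to zero
    return smooth * previous + (1.0 - smooth) * value

density = 0.0
for frame in [(0.7, -20, 0.9), (0.8, -22, 0.9), (0.6, -70, 0.9)]:
    density = gate_and_smooth(*frame, previous=density)
    print(round(density, 3))             # last frame is gated as silence
```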

All the values are filtered in order to smooth meaningless peaks. These five parameters feed the
main motor of the composed interaction, through the Gesture Follower system.

Detailed local mappings drive the behaviours of the four VMIs, exploiting further audio descriptors
such as roughness, spectral flux, spectral centroid and spectral spread.

GESTURE FOLLOWER

The GF is a statistical, data-driven piece of software built on machine-learning technology.

Fig.15-K_1 The GF editor

It stores and indexes in its memory different lists of incoming numbers (learning): having
completed the learning phase, on receiving a new list the GF evaluates and compares it
with the items previously stored (following). By applying sensing data such
as descriptions of human physical movements (gestures) as inputs to the GF, the system is able
to reconstruct and compare previous gestures (stored and indexed during the learning phase) with
the new ones (following phase).
In this way it is possible to extract patterns of similarity between different gestures, and also
to compute the identity, the percentage of deviation, and the predicted speed of their execution
compared to the identified initial models.
In this composition the cello sound is segmented and treated as a shaped sound-gesture.
For this reason the cellist is initially asked to invent and perform the four different “sound gestures”
(conventionally called “phrases”) that feed the GF as learning models.
The continuation of the music (gesture-following mode) is the creative interplay between the cellist
and the GF: the detected patterns of similarity feed the main mixer, which is responsible
for the macro-form continuation of the piece. In addition, the secondary GF parameters “time-index”
and “speed detection” (the temporal point of performance and the speed of imitation
of the “learned” model with respect to the currently played sound) are mapped inside each
of the four VMIs. In this way the reproduction/deviation modes of the cello music with respect to
the previous models have a further influence upon the electronic sounds.
COMPOSITION
In this way the technique of machine learning is framed inside the composition, since the four initial
cello phrases (corresponding to the four machine-learning steps) are inscribed in the interaction
as automatic time lines.

After that (1’ 30” from the beginning) the GF is automatically set to “follow” mode, while
the player continues to perform with the free task of recalling his/her previous phrases,
choosing different degrees of similarity with his/her initial music.

Fig.16-K_1 The timeline to the GF and to the composition

Therefore the performer assumes responsibility for the overall form-building and the electronic
developments. The performance can be considered a "timbre-motivic" interplay driven by
time-remote connections between sound gestures, through the mediation of the machine.
Expectancy can be viewed as a non-obvious means of communication between the past,
the present and the future of the music, through similarity, identity and variation of the musical
and timbre patterns performed live.

The system is designed as a novel means of real-time composition (RTC). A strong focus is given
to timbral descriptions tuned to the cello's behaviours in terms of sound and performance techniques.
The four sound models segmented by the GF are conventionally called “phrases” inside
the performance instructions, but they lack "syntactical" lattice-based structuring:
in fact the intention is to segment their features in terms of timbre gestures/textures in order
to foster an electroacoustic approach.
In this context the problem of a cultural continuity between Western instrumental practices
and electronic musical thought is far from obvious, and the integration of their approaches
is a principal aim of this project. Giving a central creative role to the classical performer raises
the problem of which music theory acts behind the composition.

The first of the four K_messages explores this question most radically, since the entire macro-form
depends dramatically on the communication between timbre as it is performed on a cello
(and indirectly as it is perceived by the audience) and timbre as it is tracked and reordered
by a computational machine.
Moreover, timbre is non-linear and involves music strategies that cannot be framed inside
any conventional mapping strategy or theory.

Kafka’s story acts as a metaphor and a mirror of the interaction.

CALIBRATION

All the functions are contained in the main patcher “calibration”, allowing for:
-optional software balance of the two inputs (microphone and pickup)
-main calibration settings
-min/mean/max balance for the complex calibration of density and resonance
-storage of the new values
-simulation from sound files (in case of conferences)

Fig.17-K_1 The calibration patcher

The storage/recall of the last calibration setup is automatic (after saving the patcher before
closing it) only if the system runs as a MAX/Msp patch.
If the system is a MAX standalone application, the calibration values have to be stored manually
by pressing the message “store 1”: they will be automatically loaded at the next opening
(this manual procedure is obviously also possible inside the native MAX patch).

Calibration has to be performed when the system is in “rehearse mode” (DSP on, without starting
the interaction). The monitors coming from the yellow “receives” show the raw values coming
from the analysis modules; the calibration numbers on the left have to be set in order to obtain
the most meaningful normalisation, which is shown on the fly inside the main monitor positioned
in the left part of the performance patch.

Fig.18-K_1 Main calibration parameters

The calibration should be done by playing the cello and with the main data monitor visible.

- The compress, offset and scale parameters for density allow for a full-range normalised output
between 0. (soft pizzicato) and 1. (dense “saw-tooth-like” sound). The middle ranges should
be well shaped by the calibration in order to detect intermediate values for “sul tasto”
and “ordinario”, and for soft vs. full sounds. The density curve cannot be truly flat with respect
to the central frequency; the cellist therefore has to be aware that the system unexpectedly
interprets some specific notes as “denser” than others: this makes sense, since the timbre
responses of a cello are not linear.
- The min, max and compress parameters for kurtosis (“resonance”) require compression values near
zero (meaning very high compression): min and max are the thresholds of clipping between 0.
and 1. This calibration is a trial process that can be checked against the flowing
kurt-monitor number. A soft, low-resonating pizzicato will rapidly rise to the maximum
(clipped to 1.), but the lower values should be calibrated so as to give room to some meaningful
differences between “airy” and “intense” timbres.

- The quality_compress parameter (set to power 2 by default) should give focus to the middle
ranges of noisiness vs. pureness of the cello sound. Increasing the power number should improve
the tuning of the detection, but when the power coefficient is below 1 it could instead raise
the values of half-noisy sounds.

- The ampl_max calibration number has to be set to the maximum amplitude performable
by the cellist, so that the maximum amplitude reads as 1.

The threshold below which the analysis is interrupted (detecting silence) is set by default to -65 dB
(with respect to the signal coming from the pickup).
It can be optionally modified.

New adaptations of the 5 thresholds of the spectral flux can optionally be set.

The “FOG” parameter is explained in the “Instruments” section above (p. 9);
it refers to a specific threshold of amplitude following.

Fig.19-K_1 Further calibrations

After the main calibration, a finer calibration of density and resonance is possible
by pressing keys 1 and 2 respectively on the laptop keyboard.
After 2” the automatic calibration starts, lasting 5”: the cellist performs while the Min and Max
threshold values are stored on the fly, and the last number received is stored as the Mean value.
Min and Max will be clipped to 0. and 1.; the Mean will be fixed at 0.5. The procedure
is performed through the abstraction “calibrator” borrowed from the CNMAT MAX library.
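A sketch of this min/mean/max logic follows, modelled loosely on the idea described above; the actual CNMAT abstraction's internals may differ:

```python
# Illustrative sketch of the min/mean/max calibration: capture min and max
# during the 5" performance, keep the last value as the mean, then map
# min -> 0.0, mean -> 0.5, max -> 1.0 piecewise-linearly.

def learn(samples):
    """Capture phase: min and max of the performed values; the last
    received number is stored as the mean."""
    return min(samples), samples[-1], max(samples)

def calibrate(x, lo, mean, hi):
    """Piecewise-linear map: lo -> 0.0, mean -> 0.5, hi -> 1.0."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    if x < mean:
        return 0.5 * (x - lo) / (mean - lo)
    return 0.5 + 0.5 * (x - mean) / (hi - mean)

lo, mean, hi = learn([0.12, 0.80, 0.05, 0.95, 0.40])
print(calibrate(0.40, lo, mean, hi))   # the mean value reads as 0.5
```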

Message_2 “The Wish to be a Red Indian”
Video instructions at: https://www.dropbox.com/s/3o9vdfw3jtd5xs3/K_2-nstructions.mp4?dl=0

If one were only an Indian, instantly alert, and on a racing horse, leaning against the wind, kept on
quivering jerkily over the quivering ground, until one shed one's spurs, for there needed no spurs,
threw away the reins, for there needed no reins, and hardly saw that the land before one was
smoothly shorn heath when horse's neck and head would be already gone.

Fig. 1-K_2 The main interactive patch (Message K_2)

COMPOSITION
Following the interactive instructions, play with rhythmic flexibility and freedom.
Don’t avoid extreme variance in pitch, loudness and timbre. In a short time you should traverse
distant sounds and techniques, including scraping sounds and Bartók pizzicatos.

The system records your sounds, and you recompose them in many ways in real-time through
the sounds of your cello performance.

SYSTEM
This 2nd movement of the “K_messages” is a strict interaction between the performer
and a set of live-cello sound memories stored as audio files.
The files are cyclically loaded by the system, and the output sounds are interactively treated and
transformed through different cello playing styles. Again the hyper-cellist is responsible for creating
the input to the live electronics and for driving, through performance, the methods of the interaction.

Two different modes of performance are allowed:
a) the audio files come from live-recorded portions of the 1st movement, automatically stored on
the HD and recalled during the 2nd one;
b) the sound contents are directly live-recorded on-stage, as a part of the performance.

Press “Enter” to start the performance in mode a)
Press “Spacebar” to start the performance in mode b)

The electronic sound is created by four modules:
Sampler_1, Sampler_2, Sampler_3, Amplified/Flanged-cello.

All the electronic treatments are musically driven by the cellist interacting with the sound analysis
of his/her incoming cello signal. The live cello is analysed in pitch, amplitude, note onset,
note duration, and timbre (brightness, noisiness, centroid and spectral distributions).

These analysis data transform the sound of the audio recordings in order to drive the samplings
through a large-grain mosaic technique. The engine of the samplers is granular, but the file
fragments (scattering or accumulating) are mainly treated at a note-length time level.

The performer must be aware of the sound interactions described below, designed by an overall
automatic time segmentation (macro-form) inscribed in the software.
The performance will be improvised according to functional lines of interaction chosen in advance
by the cellist, following some essential graphic animations.

SOUNDS

1) Sampler_1 “femme” (upper left part of the screen) processes a fixed, external, prerecorded file.

Fig. 2-K_2 Sampler_1 (sound file interface)

The file is quite energetic and rhythmic and has to be freely recalled by the cellist in order
to increase musical contrast in the course of the performance.
Sampler_1 is a slightly varied version of the instrument called “sampler” operating
in the 1st movement. This provides a means of continuity between the two movements.

2) Sampler_2 “knn” (upper right part of the screen) is filled with live
recorded cello material. New materials are loaded every half minute.
The sound transformations partially and continuously alter the
timbre/textural contents coming from the cello sounds.

Fig 3-K_2 Sampler_2


3) Sampler_3 “onsets” (bottom right part of the screen) is also filled
with live recorded cello material. New materials are loaded every
minute.
Sampler_3 is the main module of the piece: it doesn’t transform
timbre, but it fragments, overlaps and transposes the sound contents
at differentiated note time-lengths in order to operate as an
algorithmic composer in real-time.

Fig. 4-K_2 Sampler_3

4) Direct sound: the amplified cello is sometimes “distorted” through a flanging technique,
which intensifies when the cello input is more aggressive and noisy.

TIME MACRO-FORM
Overall duration 3' 40”.
The development is created by the alternating filling of Sampler_2 and Sampler_3 with new sound
materials, and by their treatment in real-time.
You are required to play in sequence:
- 0’ -> ribattuto col legno;
- 1’ -> pizzicato;
- 2’ -> espressivo;
- 3’ -> tremolo.

Other interactive instructions regarding playing styles, such as Accentuato, Legato/Staccato and
Aggressive-pizzicato, are intermingled in order to drive the sampling treatments.

Every half minute you are prompted (through animated graphic-verbal screen indications)
to feed the music system with a different music style, which at the same time interactively
drives the methods of sampling.

Fig.5-K_2 Example of the animated score

The number, width and position of the small graphic bars suggest the density, power and pitch
register of the music gestures verbally described.
Sampler_3 is filled with new sound material (lasting 20”) every minute.
Sampler_2 is filled with new material (lasting 10”) every half minute.

Two red flashes near the buffers of the samplers signal the beginning of this process.
The filling happens in real-time: Sampler_2 loads sound material (and processes it at the same
time) for 10”, and during the following 20” the processing works on the already stored, fixed sounds.

The same happens with Sampler_3 at doubled time intervals.

Fig.6-K_2 Flashes signalling the beginning of the recording process
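The resulting schedule can be expressed compactly; in this Python sketch the cycle and record lengths come from the text, while the code itself is only illustrative:

```python
# Illustrative sketch of the recording schedule described above:
# Sampler_2 records the first 10" of every 30" cycle,
# Sampler_3 the first 20" of every 60" cycle.

def is_recording(t, cycle, record_len):
    """True while the sampler is filling its buffer at time t (seconds)."""
    return (t % cycle) < record_len

for t in (5, 15, 35, 50, 65):
    print(t, "S2:", is_recording(t, 30, 10), "S3:", is_recording(t, 60, 20))
```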

The buffers have a graphic interface: you can see some of the sound qualities of the stored sounds,
and the principal processing parameters whose evolutions depend on the sound you are producing
(see figures above and below).

The whole interaction starts immediately at the beginning of the piece: every minute
the electronics radically process new sound materials (Sampler_2 plus Sampler_3), and every half
minute the processed materials are only partially replaced (Sampler_2 only).

In this way Sampler_3 will be fed with four different and contrasting cello sound materials
(every minute), and Sampler_2 with seven different cello sound materials (every half minute).
This overall macro-form is shaped and underlined by a chain of automatic amplitude gains,
which drive the interaction through:
-an initial long fade-in,
-three mid culminating points,
-a final faster fade-out, allowing a possible brief acoustic cello conclusion.

SPACE MACRO-FORM
The spatialisation is organised in a mixed fashion.
Sampler_3 (the principal electronic instrument) outputs four layers of sampling, each located
in a fixed position of the quadraphonic field.

The other 4 electronic sources (Sampler_1, Sampler_2, Amplified_cello, Flanged_cello)
are spatialised through dynamic parameters tuned to live cello sound features:
-the locations in space of the amplified and flanged direct cello depend
on the pitch of the live performed cello
-the spatial movements of Sampler_1 and Sampler_2 mainly depend on the brightness
of the live cello sound.

Fig.7-K_2 Spatial monitor

Timbre and note densities of the live cello affect the velocities of the spatial displacements of each
source, whose distance from the centre of the audience room depends on the periodicity
vs. noisiness of the live cello.

Fig. 7-K_2 shows the monitor interface of spatialisation: each sound source is numbered,
and the monitor shows its current position inside the audience's listening space (the front speakers
are represented in the upper portion of the diagram, the rear in the bottom).

PERFORMANCE NOTES
The main interactive tasks for the cellist are:
-1- the choice of contrasting sound materials filling Sampler_3 and Sampler_2
-2- the amplitude balance between Sampler_1 and Sampler_2
-3- the interactive strategies towards each of the three samplers.

The controls of the flanger and of the spatialisation can be considered byproducts of the cello
performance, not necessarily to be strictly controlled and focused on by the cellist.

1) SAMPLING MATERIALS
In performance mode A, Sampler_3 will be filled by the same cello sounds as played at the
beginning of the 1st movement. In this case the four contrasting cello model-phrases
(lasting 20” each) already performed in the 1st movement make up the four subsequent
sound contents of Sampler_3.

In performance mode B, the contents of Sampler_3 will be fed in real-time by the cellist.
In this case the filling sound contents are to be contrasting in sound (as indicated by the verbal
animation).

2) AMPLITUDE BALANCE
The output amplitudes of Sampler_3 are directly responsive to the live cello resonance.

The final gains of Sampler_1 and Sampler_2 are instead balanced by the cellist:
-high pitches proportionally increase the amplitude of Sampler_1
(intensifying aggressive-rhythmic contrast)
-low pitches proportionally increase the amplitude of Sampler_2
(adding colour and textural density to the music)

Fig.8-K_2 Sound monitors of samplers 1 and 2

Middle-range pitches obviously mix these sound contents in different proportions.

Sampler_1 is especially suitable for brief energetic commentaries, matching the rhythms
of the sampler with high-pitched live cello excursions.

3) SAMPLERS
The samplers granulate the sounds after they have been recorded in their buffers, and output them
with a sound-mosaic technique.

Their main parameters are (see the sketch after this list):
-period: defines how often sound grains are output.
The parameter is set in milliseconds (i.e. period 1000 = 1 new sound grain output every second;
period 50 = 20 sound grains output every second)
-duration: defines the time length of the grain (in milliseconds); if the grain duration is lower than
the period the result will be a scattered sequence of sound pulses, while if the duration is higher than
the period the grains will overlap. In case of a large difference (as in the extreme case
of figure 10-K_2, where the period is 50 and the duration 8700) the result will be a dense overlapping
texture of very long sound grains
-resampling: means transposition, computed in cents of a semitone (i.e. 1200 = 1 octave)
-level: defines the amplitude increase or decrease with respect to the internal sound materials
-attack/release: affect the clarity and definition of the sound grains
-filtering: also influenced by the live cello
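A small worked example of how these parameters interact, using the extreme values quoted above (period 50, duration 8700) plus an assumed resampling of 1200 cents:

```python
# Sketch of the period/duration/resampling arithmetic described above.

def grain_stats(period_ms, duration_ms, resampling_cents):
    grains_per_second = 1000.0 / period_ms
    overlap = duration_ms / period_ms        # >1 means overlapping grains
    ratio = 2.0 ** (resampling_cents / 1200) # playback-speed ratio
    return grains_per_second, overlap, ratio

gps, overlap, ratio = grain_stats(50, 8700, 1200)
print(f"{gps:.0f} grains/s, overlap x{overlap:.0f}, transposed x{ratio:.1f}")
# -> 20 grains/s, overlap x174: a dense texture of very long grains,
#    one octave higher (1200 cents = ratio 2.0)
```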

Sampler_3 is the main engine of the music.

Sampler_1 and _2 have a subsidiary role affecting the overall sound character.

Sampler_1: (upper left of the main patch).
Its internal file is pre-recorded and very rhythmic.

The more the cellist plays high notes, the more loudly this sampler plays.

The internal sound file is pre-analysed1; a similar analysis is performed in real-time upon the live
cello sound: the system is therefore able to select and output the portions of the file most similar
in timbre to the sound gestures performed by the cellist. The different portions
of the sampled sounds are then “sewn” together in real-time through a sound-mosaic technique.

In addition the cellist can influence the length, the density and the intonation of the output
sound-mosaic portions in the following ways:
-if the cellist plays louder, the sound-file “grains” are transposed higher, but at the same time they
become shorter (extremely strong cello sounds output grains lasting from less than a second down
to a fifth of a second; soft cello sounds increase the length of the grains up to a few seconds):
the file-grains are in any case output at regularly time-cut “note” lengths
-the density of the events coming from the audio file (the number of grains output within a second)
increases if the cellist plays the first notes of the chromatic scale starting on C (C, C#, D...)
and progressively decreases on reaching A, A#, B...

By interacting through amplitude and pitch (pitch-classes) the cellist can obtain extremely varied
methods of sampling in terms of grain density, length and transposition.

More subtle timbre variations of the grains depend on timbre variations of the live cello.
The values of the grain-mosaic processing can be monitored on the screen as shown
in figure 9-K_2.

Fig.9-K_2 Sampler_1 engine

1 The timbre analysis is performed through Mel-cepstral coefficients stored in the software, and the
similarity between the stored and the real-time coefficients is computed in real-time by a KNN algorithm
(k-nearest-neighbour search).
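The matching idea of note 1 can be sketched as a k = 1 nearest-neighbour search; the three-component vectors below stand in for real Mel-cepstral coefficients:

```python
# Illustrative sketch of the timbre matching in note 1: the file is
# pre-analysed into per-segment coefficient vectors, and the live frame
# selects its nearest stored segment. Vectors here are made up.

def nearest_segment(live_vec, stored):
    """Return the index of the stored segment closest to the live frame
    (k-nearest-neighbour search with k = 1)."""
    def dist(seg):
        return sum((a - b) ** 2 for a, b in zip(live_vec, seg))
    return min(range(len(stored)), key=lambda i: dist(stored[i]))

stored = [(0.1, 0.5, 0.2), (0.9, 0.3, 0.7), (0.4, 0.4, 0.4)]  # file segments
live = (0.8, 0.35, 0.6)                                       # live cello frame
print("play segment", nearest_segment(live, stored))          # -> segment 1
```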
Sampler_2: (upper right of the main patch)
The sounds coming out of Sampler_2 have a textural/background quality.

Their amplitude increases when the live cello is performing low pitches/sounds, and decreases
in the presence of high-pitched sounds. The amplitude balance and the sound character of
Sampler_2 are opposite to those of Sampler_1, and easily controllable by the performer.

The main algorithm of Sampler_2 is also driven by the KNN search (see note 1). The core
system is a varied version of Sampler_1, but it processes dynamic sound contents loaded on
the fly. The output is a mosaic of sound fragments (like Sampler_1).

Fig.10-K_2 Sampler_2

The graphic interface of Sampler_2 shows the waveform and the dynamic parameters of interaction.

Pitch, amplitude and brightness of the cello are the principal means of interaction. The density
of the sound texture (shorter grain periods) increases in proportion to low cello pitches.

High pitches, soft sounds and a bright timbre together contribute to increasing the length
of the grains (i.e. a soft high pitch performed near the cello bridge will contribute to a maximum
grain length, as in the example of figure 10-K_2).

Cello amplitude affects transposition. The overall filtering of the sounds also depends
on the main live cello pitches, passing from no filtering for the lowest pitches, through different
light filtering options, to resonant options for mid-range notes (from middle C upwards).

Brightness, resonance and noisiness of the cello playing have a role in the filtering and clarity of the
sound mosaics, whose final sounds must be explored through performance rather than intellectually.

Sampler_3: (bottom right of the main patch).
Sampler_3 is the main sound engine of the system.

The sound filling its buffer (20” long) is segmented taking into account the onset energy of the
incoming sound. Therefore rhythmic sounds will produce many more segments than smoother
sounds. The buffer interface shows the attack-segments as black vertical bar lines.
The output is driven by an internal onset-detection system describing the live cello's
rhythmic behaviour.

Fig. 11-K_2 Sampler_3

The sampler outputs the sound segments following a mosaic technique (similarly to the other
samplers), but in this case the segments are output in the same sequence in which they were recorded.

The sound segments are distributed over four separate channels, each with different
velocities and densities of fragmentation: the final output is a “four-voice polyphony”
of differently fragmented portions of the live-recorded sound, where the segmentation is
guided by the rhythmic salience (onset detection) of the cello sound.

The note (onset) detector is placed above the sampler: a cross appears when a note
is detected inside the live cello performance, and disappears when a pause is detected.

Fig.12-K_2 Note detection monitor

The note is analysed in terms of its duration, onset distance (time distance from the previous onset),
initial pitch, amplitude and quality factor (periodicity vs. noisiness), all affecting the kind of output
segmentation inside the sampler. The last four “note detections” determine the parameters of each
of the four overlapping output channels of the sampler.

The parameter monitor shows the four groups of parameters organised in rows. When the “period” is lower than
the “duration” the sound segments will be output overlapping each other in a dense rhythmic
fashion, on the other hand low durations will output brief noise-like sounds, and higher durations
will make the sound qualities of the stored segmented sounds recognisable (possibly allowing
for more sound overlaps). Period and duration are computed in milliseconds. The parameter called
“resampling” is connected with the pitch transposition of the referenced sound segment
(i.e. 100 = 1 semitone higher; -100 = 1 semitone lower).
Further timbre details vary in accordance with the cello playing (as can be verified inside
the commented abstractions of the application).

The system detects a cello note when an amplitude threshold is passed.

A pause is detected through a double threshold detecting a partial decrease in amplitude.
A new note can be detected only after a pause (tiny or longer).
The system has good reactions with staccato, pizzicato or accented styles.
A straight legato style (with no amplitude decrease between notes) could behave unexpectedly
(showing one extremely long note until a next amplitude decrease happens).
Take care when long notes are really your interactive intention; in general the music system
is conceived primarily for nervous staccato styles, as suggested by the screen instructions
and the subject of the piece.

-A high density of sound attacks (note onsets) by the cellist increases the density of sound events
(the sound segments output by the sampler) in the proportion
Inter-Onset-Interval (of the live cello) -> Period (of the sampled sound segments)
-The duration of the note performed by the live cello proportionally affects the duration of the sampled
sound segments.
-The last detected pitch interval performed determines the transposition of the segments with
respect to the original recorded sound.
-Only one of the four streams of sampled sound segments continuously follows your cello
intonation in transposition, the others remaining fixed at their onset interval value
(the sketch below schematises these mappings).
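
In this hypothetical Python reduction, the last four detected notes each configure one of the four overlapping streams (the example values are invented):

    from collections import deque

    streams = deque(maxlen=4)     # one entry per overlapping output channel

    def on_note(ioi_ms, duration_ms, interval_semitones):
        """ioi_ms: inter-onset interval; duration_ms: detected note length;
        interval_semitones: last pitch interval performed."""
        streams.append({
            "period": ioi_ms,                        # IOI -> segment period
            "duration": duration_ms,                 # note length -> segment length
            "resampling": interval_semitones * 100,  # 100 = one semitone up
        })

    on_note(120, 90, 2)      # short staccato note, rising major second
    on_note(400, 350, -1)    # longer note, falling semitone
    print(list(streams))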

TECHNICAL NOTES
CALIBRATION
The main calibration parameter concerns the note detection.
Two thresholds work in parallel, detecting note-on and note-off when the cello amplitude crosses
one of the values of -20 and -30 dB.
Should the cello playing and sound-card adjustments be insufficient to focus the average
amplitudes around these dB values, a software calibration is suggested.

If you type a positive or negative value inside the “gain_trim” number box, the nominal amplitude
detection will change accordingly (“gain_trim” simply adds its value, positive or negative, to the
actual dB detection). Find the appropriate value for an effective note detection.

The system should remember any new calibration by just saving the patch before closing it.
A safer way to store data is available by pressing the “write” label inside the calibration set. Further
calibrations are present in the system, and they can be explored inside the internals of the patch.

The sound diffusion is quadraphonic by default; in case of a different choice, remember
to press the button above the chosen option before playing.

Message_3 “Odradek”

Video instructions at: https://www.dropbox.com/s/b84puaj3ovmkyyu/K_3_instructions.mov?dl=0


Story at: http://www.kafka.org/index.php?aid=284

Fig.1-K_3 Odradek’s application (Message K_3)

COMPOSITION
PERFORMANCE STYLE
By only using the bow on the open strings (no pizzicato and never exploiting the left hand)
you will produce different overtone pitches.
If the bow pressure is extremely soft, a specific overtone will arise, whose high pitch mainly
depends on the bow-bridge distance (the portion of the vibrating string).
Play with extreme softness (below the loudness-threshold of the normal cello tones), gently
sliding between different quantities of bow hair and different distances from the bridge.
Keep the bow-motion regular and wait the due time in order to let the harmonic sound grow,
as much as possible beyond the loudness of the fundamental frequency of the string: you can softly
change the pitch of the overtones as you wish.
You can also create transitions between the purest sounds and soft but rougher and gently scraping
tones. You can sometimes play full notes, even loudly, or even loud and noisy, taking into account
that all these choices affect the electronic interactive sound results.

When you play low notes with full tone, you can hear that your sound is doubled one octave lower.
This performance style can be considered a study on the construction of harmony through timbre.
In this way you extract actual significant components of the sound spectrum, building their sound
against the fundamental frequency of the open string.

In this work the main string pitch always acts as a background, just like Odradek’s “voice without
lungs”.

The spectral components of the sound are instead the main sound characters, reversing the
importance of note/timbre categories with respect to the classic performance styles: the electronics
simply enhance this timbre-based attitude, since they only expand what is actually contained in your sound.

INTERACTIONS

The performance is guided by interactive verbal instructions appearing on your screen.

Fig.2-K_3: an example of interactive verbal instructions

The electronic sounds, over which you have total control through cello timbre nuances, follow
you like a shadow. After a starting section, the electronics will alternate between two different
states, which correspond to performance mode A and performance mode B.

The time transitions between music sections and performance modes are the consequence of three
main sound events. You decide when to perform these three events; therefore the advancement
of the piece (and its global duration) is at your will.

Performance modes A and B require two different cello modes of interaction, since the electronics
will be slightly different, even if the general sound palettes are acoustically rather similar.

At the beginning of the piece you are requested to tune the cello strings unconventionally, from high
to low, as: F, H (a quarter tone lower), G (a sixth of a tone higher), H (a quarter tone lower),
at frequencies of 175, 120, 100 and 60 Hz.
Some of these relations foster special string resonances.

The number on the right of your screen tells you which frequency
you are currently playing.

Fig.3-K_3 Cello frequency monitor (tuning section)
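
A hypothetical helper mirroring this monitor, which compares the tracked frequency with the four scordatura targets and reports the offset in cents (the target labels are invented):

    import math

    TARGETS_HZ = {"F": 175.0, "H (-1/4)": 120.0,
                  "G (+1/6)": 100.0, "H low (-1/4)": 60.0}

    def tuning_readout(freq_hz):
        """Nearest scordatura target and the offset from it in cents."""
        name, target = min(TARGETS_HZ.items(),
                           key=lambda kv: abs(math.log2(freq_hz / kv[1])))
        cents = 1200 * math.log2(freq_hz / target)
        return name, round(cents, 1)

    print(tuning_readout(98.0))   # the G string before raising it a sixth of a tone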

SECTIONS
Beginning
The opening scene consists of the narrator's voice against your initial cello tuning.
Tuning should be accurate by means of long, soft, airy bowing (maybe intermingling with
a few pizzicatos).
This provocative inclusion at the beginning of the performance requires a theatrical focus: notice
that your sound transforms the quality of the spoken voice (as a consequence of single note
vs. double stop, sound vs. pause, different volumes).

Events
After this prologue, the whole composition advances as a consequence of the three main cello
timbre events “noisy”, “loud”, “soft”: they have to be performed during special moments
of the composition.

In the central part of your screen you can see the detectors of these events, represented
by colour flashes and one squared button: the events cause the transitions
through sections 1, 2, 3 and 4.

Fig.4-K_3 The event-to-sections monitors

Further monitors, but principally the electronic sounds, make you aware of every new section.
-Event 1 (noisy) is a loud-scraping crescendo:
from section 1 (tuning plus narrator's voice) to section 2 (performance mode A)
-Event 2 (loud) is a crescendo with full-pure sound
from section 2 (performance mode A) to section 3 (performance mode B)
-After 1’ 30” the system automatically folds back to performance mode A
-Event 3 (soft) is a long and extremely pure and soft sound
from section 3 (performance mode A) to section 4 (performance mode B)
The advancements of the sections allow the alternate performance of modes A and B.
In this way the composition develops 2 principal electronic sound states:

Performance modes A-B

A) filter-resonator effects come out of the 10-channel virtual mixer (bottom portion
of the patch). In performance mode A the volume-slider of the mixer is open,
but it is closed in performance mode B.

Fig.5-K_3 The 10-channel main mixer (performance mode A)

B) direct/reverberated enhanced cello sound (right bottom monitors)
“direct” displays the volume of your sound coming from the main microphone,
“verb” tells you the quantity of reverberated sound,
“low” tells you when your sound is doubled at the lower octave.

Fig.6-K_3 Direct-verb gain faders (performance mode B)

During the performance mode A the sliders of “direct” and “verb” are closed.
During the performance mode B they move following automatic envelopes.

Sounds A) are produced by filtering and additive synthesis, mirroring the spectral analysis of your
sound, captured from the input pickup.
Sounds B) enhance the microphone output, boosting the overtones that you are performing.

When the system is in mode A you can mix 10 different effects, just by playing and balancing
at different volumes full tones vs extremely soft overtones, or alternating very pure sounds with
soft bow-string noises.
Put briefly, you control the mixer by means of your amplitude and sound pureness (vs. noisiness).

In mode B you will play only extremely soft overtones, in order to prolong and overlap them
through the reverberation; but you will need to be cautious not to increase this effect
too much, because of the risk of audio feedback.

STRUCTURE
You will follow the interactive verbal instructions appearing at the upper right portion of the screen.
You are totally free in choosing the times and the characters of your timbre interaction.

The crucial aspect is the quality, continuity and harmonic interest of the overtones performed.
Alternating them with soft bow noises and full tones represents the main means of navigating
the system and the compositional form of the piece (in performance mode A).
A special sensitivity towards the chordal, harmonic nature of performance mode B offers a means
of balance.

Below is a detailed description of the event-section time structure, and later on some details
are given about the sound effects. The last part of the document involves calibration instructions.

PERFORMANCE

CELLO EVENTS AND MUSIC SECTIONS


After having opened the main application, and checked the presence of the audio card:

0) press the spacebar (start music)
and tune the cello to the indicated frequencies;
the overlapping voice of a narrator (reciting Kafka's story) is diffused;
you modulate the voice through frequency, amplitude and cello sound periodicity.
After completing the tuning, keep a long sound on the open D string, avoiding full tones:
by playing extremely softly with little hair, search for and keep sounding one D overtone,
or alternatively play a short sequence of D overtones, gently shifting the bow
(ordinario -> sul tasto -> ponticello) with slow, free and continuous patterns of bow “drains”.

1) perform a noisy crescendo until the red flash informs you that the "noisy" threshold
is reached.
After that, maintain the previous overtone performance style: you can occasionally change
the string you are playing on, but remember that you will be performing only on open strings
throughout the piece. After this “noisy” event, the gain of the 10-channel virtual mixer rises:
the effects are strong filtering and artificial resonators linked to the spectrum of your current sound.

The effects are distributed through the 10 channels of the virtual mixer, visible in the bottom
part of the patch: you mix the 10 channels through cello amplitude and timbre.

Fig.7-K_3 Main mixer in action (performance mode A)

Keep playing very softly, shifting between the effects. This is the performance mode A.

2) after 30” or 1’ ca. (as you wish),
perform a full-tone crescendo until the green flash detects that the “loud” threshold is reached.

At this point the direct cello gradually crossfades with the reverberated cello,
whilst the 10-channel effects slowly fade out. The artificial effects disappear,
and the overtones that you keep playing are now simply increased and sustained
by natural amplification and reverb.
This is the performance mode B.

Fig.8-K_3 Reverb in action (performance mode B)

3) after 1' 30" of this performance state
the 10-channel artificial effects automatically and gradually fade in again, while the direct
and reverberated gains fade out (performance mode A’).
After 1 further minute a big flash (and the crossed big button) informs you that the "soft" threshold
is now in listening state: freely perform following the interactive instructions until you decide
to enter the last section.

4) the last transition starts when you reach the "soft" threshold: the yellow number called
“soft_thresh” has to surpass 150 (if not differently calibrated).

When the big white button is crossed, the “soft threshold” starts its listening mode;
you do not need to reach the “soft threshold” immediately, and you can delay
the section advancement at will, navigating performance mode A in different ways.

Fig.9-K_3 Crossed button (listening mode of the “soft_thresh” enabled)

When you are ready for the last section, you have to raise the yellow number,
performing extremely soft and pure tones until the number reaches the set threshold.

Fig.10-K_3 Soft threshold monitor (high numbers = softer sound)

This process could be quick or slow, depending on the quality of your sound. This non-trivial
task requires a sustained sound, possibly below the -80 amplitude level but highly periodic
at the same time (keep the “periodicity” and “peak” number monitors in view).
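
A guess at how the “soft_thresh” counter could behave: it rises only while the sound is both extremely soft and highly periodic (the -80 dB and 150 values come from the text; the accumulation rate is assumed):

    soft_thresh = 0.0

    def update_soft(peak_db, periodicity, target=150.0):
        """Accumulate while the sound is both very soft and highly periodic;
        return True once the set threshold is surpassed."""
        global soft_thresh
        if peak_db < -80.0 and periodicity > 0.9:
            soft_thresh += periodicity     # purer tones raise the counter faster
        return soft_thresh > target

    for _ in range(200):
        if update_soft(peak_db=-85.0, periodicity=0.95):
            print("section 4 enabled at", round(soft_thresh, 1))
            break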

When this last section is enabled, performance mode B’ comes into action:
reverberated and direct sound gains fade in again, and the narrator's voice reappears softly.

5) after 1' 30” direct and reverberated gains automatically crossfade a last time
(now reaching higher gain values), until the whole electronics fade out, as underlined
by the crossing of the button in the left part of the patch.
You can continue to perform overtones without any electronics if you wish.
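
For orientation only, a schematic Python reduction of the event-to-section flow of steps 0-5 (the timed fold-backs and gain automations are omitted):

    SECTIONS = [
        ("section 1: tuning + narrator", "noisy"),   # event 1 -> mode A
        ("section 2: performance mode A", "loud"),   # event 2 -> mode B
        ("section 3: mode B, folding back to A", "soft"),
        ("section 4: performance mode B', final fades", None),
    ]

    state = 0

    def on_event(name):
        """Advance one section when the expected cello event is detected."""
        global state
        if state < len(SECTIONS) - 1 and name == SECTIONS[state][1]:
            state += 1
            print("->", SECTIONS[state][0])

    for event in ("noisy", "loud", "soft"):
        on_event(event)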

PERFORMANCE MODES
The whole performance will therefore be segmented by these 3 sound events (as displayed through
flashes and crossing buttons) and by a set of automatic gain fades. The principal performance style
is overtone production throughout the whole performance: but the timbre variations
and an extreme control of the cello loudness permit you to fully navigate the interactive system.

Performance mode A
The 10-channel virtual mixer outputs:
-5 artificial voices produced by the "effects" modules called:
waveshaping, oscillators, resonators, res_transform, banks_oscill
-3 strongly filtered cello voices (controlled by the module "filtered")
-2 delayed-feedback filtered copies of the cello sound (controlled by the module "feedback")

All these artificial and filtering treatments are direct consequences of your sound, analysed as:
fundamental frequency, spectral periodicity, amplitude, roughness,
and individual frequency/amplitude of the first 30 cello partials.

Fig.11-K_3 Cello timbre monitors (roughness-periodicity-loudness in dB)

The 5 artificial effects "sonificate" the spectral cello data as they are tracked in real-time.
The 3 filters excite the overtone dynamics as they are performed
The 2 feedback voices problematise the noisy components performed

More importantly, the 10 effects are mixed by the cellist through the amplitude and
periodicity of the cello sound. Visually, the sound intensities of each effect appear
from top to bottom inside the central mixer (the green moving lines indicating
the energies of the output sounds, Fig.7-K_3).

Fig.12-K_3 Gains of each effect (and notes about the cello peak-amplitudes enabling each effect)

Some details of the relations between the cello loudness and the gains of each individual effect
can be monitored in the central part of the screen (Fig.12-K_3).

The upper green lines of the main mixer represent the 5 artificial voices: they appear
when the cello plays louder, possibly with full tone (contrasting the main soft overtone
conduct). The 3rd effect (resonators) is boosted by a Mezzo-piano with very pure full
tone, the others by different gradations of higher intensity and timbre periodicity.

Fig.13-K_3 Artificial-effects modules


The lower green lines represent filters and feedback (the latter appearing in the 2 lowest lines).

Under the numeric gain monitors of these effects you can see the visual peak monitor
containing 4 main thresholds driving the most important mixing functions. You can see
the red monitor showing your current cello amplitude and the points at which
the different filtering channels are activated.

Fig.14-K_3 Cello amplitude tracker (and connections with the filtering channels activation)

The 5 artificial effects respond better to louder cello full tones.

1) "Waveshaping" returns a vibrating copy of your played note;
it is excited by playing pure tones Forte (-50->-40 dB),
your timbre roughness increasing the artificial vibrato
2) "Oscillators" returns a slightly inharmonic copy of your played note;
it is excited by playing slightly rough tones Fortissimo (-45->-35 dB),
your noisiness increasing its resonance
3) "Resonators" returns a little-bell-like image of your sound spectrum;
it is excited by playing extremely pure tones Mezzo-piano (-65->-50 dB),
your periodicity highly increasing its resonance
4) "Res_transform" returns a harmonic copy (sometimes gliding) of your played note;
it is excited by playing pure tones Mezzo-forte (-55->-45 dB),
your periodicity increasing its volume
5) "Banks_oscill" returns a scattering image of your sound if played noisily;
it is excited by playing very noisy tones extremely Fortissimo (-40->-30 dB),
your noisiness increasing its effect

The 3 filter effects require extremely soft and pure sounds, only playing overtones
instead of full notes. These 3 effects will be your main focus during performance mode A.
6) "Sweeping filter" (variable cutoff): this is excited by playing Piano overtones (-> -60 dB),
preferably responding to whistling al-ponticello sounds
7) "Hp filter" (high-pass): this is excited by playing Pianissimo overtones (-> -70 dB),
preferably it requires quick shifts between different bow-bridge distances maybe mingled with
small wood/string infra-sounds
8) "Res filter (resonant): this is excited by playing extremely Pianissimo overtones (-> -80 dB),
it requires the most soft amplitude and pureness, and it can sustain the most subtle overtones

9)-10) The 2 feedback filters react to soft-noisy sound stimuli (-> -50 dB);
they time-expand (through delay-feedback) the filtered cello noise;
one filter is high-pass and the other resonant (the sketch below encodes these amplitude windows).
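
In the following illustration the nominal dB values just listed are encoded as ranges; the open-ended filter ranges get an assumed -120 dB lower bound, and the periodicity/noisiness conditions are omitted for brevity:

    EFFECT_WINDOWS = {               # nominal dB ranges from the list above
        "waveshaping":     (-50, -40),
        "oscillators":     (-45, -35),
        "resonators":      (-65, -50),
        "res_transform":   (-55, -45),
        "banks_oscill":    (-40, -30),
        "sweeping filter": (-120, -60),
        "hp filter":       (-120, -70),
        "res filter":      (-120, -80),
        "feedback 1":      (-120, -50),
        "feedback 2":      (-120, -50),
    }

    def active_effects(peak_db):
        """Effects whose nominal amplitude window contains the current peak."""
        return [name for name, (low, high) in EFFECT_WINDOWS.items()
                if low <= peak_db <= high]

    print(active_effects(-75))   # a pianissimo overtone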

Most effort and time focus will be spent on the filtered overtones, taking the noisy-delays
and the full tone artificial sounds as a musical contrast. The transitions between filtered and
artificial effects will be spontaneously bridged by the “resonators” (3rd effect); the harsh
“banks_oscill” effect should preferably be performed only once during the whole performance.

By accessing the calibration section you can see that the above amplitude values are nominal,
and you can recalibrate the amplitude detection if necessary, in order to make your timbre controls
easier.

Performance mode B
This sound state captures the microphone sound, allowing for a refined, almost naturalistic timbre
quality. It ordinarily requires the soft overtone style of performance and, especially when
the reverb is high, it has the power to capture any overtone, raising it into a sustained resonance:
by changing overtones you can develop steady polyphonic textures.

But if the resonant system acquires too much power, it can produce unwanted feedback frequencies
(outside the natural harmony of the interaction): in this case you can soften the electronics just
by playing louder, maybe momentarily killing all the reverbs by playing brief cello noises.
The more purely and softly you play, the more you raise the resonant electronics of mode B
(and thus the natural enhancement of your overtones); obviously, by performing in the opposite
way (louder and/or scraping) you can counterbalance possible resonance excesses.

Despite the strict connections between cello and electronics, the work develops as an open form
steered by the choices of the cellist. The work can be considered minimalistic because of:
- the continuous search for natural overtones allowed by a special bow technique
- the challenging control afforded by extremely subtle volume balances

Sound tracking (and sound outputs as well) depends on different channels of calibration.
It is likely that the default values do not suit different cellists and instruments.
By double clicking the label “calibration” you can access the module, and the relative
“notes” embedded section.

CALIBRATION

The cello is fitted with 1 microphone (possibly a DPA) routed to adc~1, feeding only the outputs
of performance mode B. A contact pickup (possibly a Fishman) is additionally routed to adc~2,
feeding the outputs of performance mode A, and the sound analysis upon which
all the controls and processes rely. After having set the amplitude balance of the 2 input channels
from the audio card, further balance should be set inside the software.

The most important parameters are contained inside the calibration section B (amplitude
peak detection) and inside the section C-1 (direct-cello and reverberation expander).

Every new calibration should be remembered by the system after having saved and closed
the “calibration” patcher if the system is a native MAX patch. The option of pressing the "write"
message can be used, but it is mandatory if the system is a MAX standalone application.

A: cello inputs attenuation.
This is a general-purpose gain balance, which could be left unchanged, since more specialised
calibrations are contained inside sections B and D.
It can be useful as a pre-calibration amplitude balance test between the two inputs.

B: peak detection
Peak tracking is the main motor of the system in performance mode A. The cellist
balances the 10 effects through his/her performed loudness.
The “noise floor” parameter allows for a useful expansion of the salient cello
amplitude region: the value set inside the “noise floor” number box is shifted
in dB to a nominal -120 (noise floor). In this way piano and pianissimo sounds
can be better controlled, since their nominal values are expanded. Optionally,
the “gain/trim” message linearly adds its value (positive or negative) to the
nominal dB tracking.

Fig.15-K_3 Peak amplitude calibration

This calibration should be accomplished by taking principally into account the red monitor
of the main patch, in order to ensure a comfortable control of the lower thresholds by the cellist.
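
A sketch of the expansion this section describes, assuming a simple linear rescaling of [noise floor, 0] dB onto the nominal [-120, 0] dB range, plus the optional “gain/trim” offset:

    def expand_db(peak_db, noise_floor=-70.0, gain_trim=0.0):
        """Linearly map [noise_floor, 0] dB onto a nominal [-120, 0] dB,
        then add the optional linear trim."""
        expanded = (peak_db - noise_floor) * (120.0 / -noise_floor) - 120.0
        return expanded + gain_trim

    # a -65 dB pianissimo spreads down to about -111.4 nominal dB,
    # gaining resolution in the soft region the piece mostly inhabits
    print(round(expand_db(-65.0), 1))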

C: parameters calibration

Fig.16-K_3 Multiple calibrations

1) The values “diff” and “diff_v” add or subtract amplitude in dB to the automated gains
which dynamically modify respectively the direct and the reverberated cello: both signals come
from adc~1, therefore they are active only during performance mode B. These gains shift
throughout the whole performance (and they are slightly higher during the last part).
Depending on the room/equipment specs, a maximum on-stage amplitude has to be gained
without inducing feedback, since the direct and reverberated cello outputs must be enhanced
in order to raise and prolong the very soft cello overtones (if the cellist plays louder, the internal
gains automatically decrease).
These additional gain factors may need to be checked before the performance.

2) attenuation/boosting of: a) the effects coming out of the 10-channel virtual mixer b) the overall
final gain. Obviously < 1. attenuates and > 1. increases these final signals.

3) the sounds feeding the 10-channel mixer (performance mode A) need a strong attenuation, which
can be differently set.

4) the microphone signal (from adc~1) generally needs to be increased due to the soft performance
required, but a lower boost may help the C-1 calibration.

5) “amplist" scaling affects the amplitude of different filters and resonators: this involves
the amplitude scaling of the partials tracking inside the fft module.

6) the periodicity coefficient is the power to which the periodicity tracking (0 -> 1) is raised:
since the tracked values lie between 0 and 1, a higher exponent compresses the output towards
“noisy” and a lower one pushes it towards “periodic” (e.g. 0.8^2 = 0.64, while 0.8^0.5 ≈ 0.89).
Its mapping particularly affects the final gains of the filters. In case some more nominal
“periodicity” is needed, you can very slightly decrease the coefficient.

D: threshold setting
The parameters underlined in red set the
thresholds detecting the 3 cello events which
make the composition advance.
After having balanced the B and D sections, some
of these thresholds could be changed in case they
do not fit a fluent cello performance.
Fig.17-K_3 Thresholds calibration

1) the tags “_per” and “_peak” regard event 1 “noisy”.
They reference the “periodicity” (0 -> 1) and “peak” (-120 -> 0) main monitors: by increasing
their values the loud-noise detection is made more sensitive; by decreasing them the cellist
has to put more effort into accomplishing event 1, respectively in terms of noisiness and loudness.
2) the tag “_ampli” regards event 2 “loud”.
The small number (0 -> 1) on the right of the direct-verb gain sliders is referenced. As the cello
loudness increases, this number decreases. Therefore, if this threshold is too sensitive (triggering
an unwanted event) you have to lower it; in the opposite case, slightly raise it.
3) the tag “_advance” regards the event 3 “soft”.
The default value should be preferentially higher than 100, making the task hard but not too
difficult. The referenced yellow number “soft_thresh” rises when the (nominal) cello peak
amplitude is below -80 dB in the context of a high periodicity.

Message_4 “The Trees”

Fig.1-K_4 The main application interface (Message K_4)

“For we are like tree trunks in the snow. In appearance they lie sleekly and a little push should
be enough to set them rolling. No, it can't be done, for they are firmly wedded to the ground.
But see, even that is only appearance.”

Video instructions at: https://www.dropbox.com/s/cb9sdraxga0102x/K_4_instructions.mp4?dl=0

SETTINGS
Arrange on stage a small ensemble of microphones and studio-speakers, in order to conduct
a Trio for cello, audio-feedback and tape, through sound-gestural interaction.

The cello is fitted with a microphone and a pickup, exactly as it is in the previous movements
of the Kafka cycle.
These 2 cello inputs (possibly DPA and Fishman) are placed on the cello body.
In addition, 2 small speakers (placed on supports c. 1 m high) and at least 2 more microphones
are positioned on stage in order to raise a sound-magnetic field around the cello.
-1st speaker on the right side of the cello (c. 1 to 1.5 metres distant)
-2nd speaker on the left side of the cello, quite close to it in order to resonate with the cello
microphone
-1 omnidirectional microphone diagonally facing the main cone of the first speaker
-1 additional microphone positioned close to the right side of the cello bridge.

SOUND ENSEMBLE.
Stage space

-Cello
-Speaker_1: right
-Speaker_2: left
-Microphone_1:
on the cello (“cello-mic”)
-Microphone_2:
front/right of the cello (“mic2”)
-Microphone_3:
contact pickup (“cello-pick”)
-Microphone_4:
close to Speaker_1 (“mic4”)

Fig.2-K_4 Stage

Audience space

Quadraphonic spatialisation

Fig.3-K_4 Audience speakers

MAIN INTERACTIONS
The cello is moved and turned within the sound-magnetic field in order to alter the individual
contributions of each microphone, collectively creating dynamic “chord” shifts: the resulting
audio-feedback will be a small choir of whistles and modulations.
During the opening part of the work the cellist modulates the audio-feedback only by modifying
the cello position, inclination, and distance from the individual pieces of equipment; during
the continuation of the performance the cello sound will be treated also as an active source
of modulation of the feedback pitches.

-Microphone_1 should be placed in a way so as not to allow an exaggeratedly fixed pitch from
its speaker (distance and angle should be experimented with beforehand).
-Special cello drifts with respect to Microphone_2 increase, attenuate or modify the sound
contribution of this microphone: approaching the right sound-hole, it mostly picks up the first mode
of resonance, between 90 and 100 Hz, while different and not fully predictable resonances will
be raised by turning the sides and the back of the cello towards the microphone.
-The pickup has the function of isolating cello noises independently of the environment.
-Microphone_4 is responsible for the cello direct sound, naturally mixed with the feedback
occurring on the stage. By occasionally approaching the Speaker_1, the chordal feedback state
can be modified.

1 inertial sensor (3-axis accelerometer + 3-axis gyroscope) is positioned under the frog
of the bow (tracking Orientations, Energies and Bowing styles), in order to control
the spatialisation for the audience, and further balances and interactions during
the composition.

Fig.4-K_4 Bow motion tracking

COMPOSITION
SPATIAL SOUND

The overall sounds come from 2 clearly detached sound fields:
-The stage space
(physical cello plus stage monitors)
-The audience space
(four speakers around the audience)

Portions of the outputs are mixed inside these two spaces, but the opposite locations are
to be clearly perceptible, as a main central stage source and a distanced mirroring
audience space.

Fig.5-K_4 Overall setup

Reflecting the theme of reality and appearance, the spatial distributions are quite ambiguous.

The Stage Space mixes the acoustic sounds (cello plus audio feedback, with their natural patterns
of spatial radiation) with the 2 local speakers (routing opposite cello sound features and selected
portions of the processed sounds).
The Audience Space mixes the direct and processed sounds, fixing every source at a given location,
but also moving their images inside the outer quadraphonic space.

AUDIO FEEDBACK

The system is provided with multiple internal high-amplitude expanders chained with
compressors, filters and signal routings.
In this way feedback becomes an active eco-systemic component of the music, and it can
be modulated by the cellist in terms of sound, and of gain controls as well.
This multilayered system follows in part automations and in part the performing
actions of the cellist.

Fig.6-K_4 The audio feedback channels

TIME SECTIONS

Fig.7-K_4 Time sections

The overall duration is 13’ 30” (a 10’ reduced concert option is allowed). The different sections
of the composition afford different kinds of cello interaction with the eco-system.
Sections A are mainly focused on the audio feedback.
Section B involves the diffusion (in part interactive) of a 5’-long tape
mixing recorded sounds of audio feedback, cellos, environment and everyday objects.
Section C is a final commentary mixing the previous elements.
A’ (beginning)
On pressing “start”, multiple chains of amplitude gains progressively reach high levels.
The audio-feedback slowly emerges: the cellist modulates it by moving and turning the cello.
Different distances and angulations increase or stop emergent feedback frequencies coming from
stage-speakers and microphones.
Bow circular motions in the air slightly spatialise the sounds inside the audience space.
A” (until minute 2’)
The audio-feedback is now principally modulated by the sound of the cello
(without refraining from modulating through cello movements, as before). Slow microtonal waving
ornamental glides of the cello at the edges of the feedback pitches can induce:
-beats (particularly perceivable in low cello-feedback registers).
-the emergence of unexpected and changing feedback pitch patterns.
-resulting amplitude modulations (relating high, loud and steady feedback pitches).
Almost-in-tune fifth and octave intervals can also influence the chordal feedback response.
In case of high volumes, quasi-haptic string resonances can induce further pitch shifts.
A’” (3’ -> 5’)
The input/output gains inside the system are no longer fixed. From this section till the end,
the cellist can balance them through the amplitude-noisiness-resonance of the cello sound
(see performance notes below). The interaction is now more complex and requires an increase
in the cello sound contribution to the performance.
B (5’ -> 10’ 30”)
A pre-recorded sound file (treating cello sounds, recorded audio-feedback, real-life noises)
is diffused. The cello plus feedback performance “accompanies” the sound-file.
Impulsive bow movements output and treat in real-time selected chunks of the audio file.
C (10’ 30” -> 13’)
After the end of the previous “acousmatic” section, feedback, cello and file chunks are mixed
and scattered through the lens of a variable delay line controlled by the bowing styles,
until the final fade out.

INTERACTIONS
The performance is improvised and requires knowledge and rehearsal of the software circuits
as they change over time. The interactive components of the work are:
-setting of the analogue equipment in order to find an optimal response fitting the cello.
This arrangement will be site-specific with respect to the concert room, in order to find a balance
of the principal frequencies coming out of the audio-feedback system.
-cello sound (microtonal sustained glides, melodic fragments, harmonic contributions).
-cello rolling positions inducing feedback.
-automatic dynamic equalisation of the main feedback sources. The EQ curve continuously varies
throughout the piece, readapting the feedback sound colour.
-part of the quadraphonic main output treated with artificial reverb, whose intensity is dependent
on the cello pitch: high cello frequencies increase the reverb, low cello frequencies dry the output.
-spatialisation, in part fixed and in part driven by the bow movements.
-cello pitch and timbre modifying the filter parameters and the internal gains.

SOUND CIRCUITRY

Fig.8-K_4 Processing and routing of the 12 sound sources

Inputs:
4 channel live inputs:
-cello microphone and cello pickup (adc~1 “cello-mic”, adc~ 3, “cello-pick”)
-feedback microphones (adc~2 “mic2”, adc~ 4, “mic4”)
4 channel tape inputs
-stereo tape (active in section B)
-tape stereo fragments (active in section B and C)

Internals:
The 4 live inputs are processed:
-“filt1” = “cello-mic” resonant-filtered
-“filt2” = “mic2” lowpass-filtered
-“fb1” = “mic4+mic2” passing through a chain of high gains, compressors and EQ
-“fb2” = “cello-pick” passing through a chain of high gains, compressors and EQ

“fb1” is the main channel boosting the audio feedback;
“fb2” increases soft-noisy cello sounds, attenuating and lightening the audio feedback.

Audience Outputs:
4 live channels, 4 tape channels, 4 live-treated channels = 12 sources
The quadraphonic audience output mixes the 12 sources, assigning them to fixed locations.
Additionally they are moved in a circle by means of the Ambisonics system: the source movements
are driven by bowing styles and orientations through inertial motion tracking.
Section C foresees a multiple delay line of all 12 sources, with fixed output distribution.

The audience spatialisation creates a remote moving image of the sounds produced on stage.

Stage:
Central eco-system (cello, 4 microphones, 2 local speakers).
From the stage are diffused:
-the cello sound
-the emergent audio feedback
-the sounds routed to the stage-speakers:
2 feedback channels “fb1” and “fb2”, plus the tape stereo output (during section B).

Fig.9-K_4 Analog/digital map

I/O:
Taking into account the central role of the audio feedback, the inputs (microphones)
are to be considered as outputs at the same time, and vice versa the speakers (at least the stage
monitors) are outputs and inputs at the same time.

The system (including the cello performance as an agent among other agents) is not conceived
for a target music result, but instead for a final balance of different patterns of emergent sound
behaviours.

The sections A-B-C simply shift the essential conditions of the live interaction, in a sense putting
the same autonomous system inside different and pre-determined contextual conditions
of “survival”.

PERFORMANCE
No special software calibration is required, besides the analogue and site-specific setting
of the eco system. Quick performance notes are embedded in the software (module “notes”).

Position the charged IMU under the frog of your bow, and connect its basestation to the first
USB port; the file “OrientOSC.py” needs to be running on your machine.

By default the folder “nicola” has to be located inside the Desktop: you need to open the patch,
copy the code provided by the patch (“cd Desktop/nicola/orient5Sim python OrientOSC.py”)
and paste it inside the terminal.

Any different folder name and path obviously has to be typed or replaced inside the message.

SECTIONS
Press the Spacebar in order to start the interaction.

-Section A’ involves the silent interaction between cello positions and equipment.
From the beginning the amplitude sliders slowly fade in and after a short period the audio feedback
will be rising, emerging from the silence.

-When you feel the introduction is accomplished (not less than 1’, nor more than 2’), you start
to play the cello, entering section A’’. The cello performance will primarily be involved in giving
rise to different kinds of amplitude modulations (beats, roughness, phantom glides)
and pitch/chord shifts with respect to the audio feedback.

-Section A”’ starts when the amplitude sliders are no longer fixed, but shifting.
Now you control them through the amplitude, noisiness and resonance of your sound.
You can see on your screen 3 chained levels of gain and compression.

The upper part distributes your sound in 2 channels, higher amplitudes increasing channel
1 and softer amplitudes increasing channel 2 (channel 1 coming from the feedback
microphones, channel 2 from the cello pickup). The mid gain level depends on the resonance
of your sound. The final gain level boosts channel 1 the more your sound is periodic, and vice versa
boosts channel 2 the more your sound is noisy.
In this way the balance of feedback vs. soft cello-noises constitutes an added control to the previous
activities of sections A’ and A’’.
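
Schematically, the three chained levels could be reduced as follows (all scalings are assumptions; only the routing logic reflects the description above):

    def chained_gains(amp, resonance, periodicity):
        """All inputs normalised 0..1. Returns (channel_1, channel_2) gains:
        channel 1 = feedback microphones, channel 2 = cello pickup."""
        ch1 = amp                 # louder playing feeds the feedback chain
        ch2 = 1.0 - amp           # softer playing feeds the pickup chain
        mid = resonance           # the middle stage follows the sound's resonance
        return (round(ch1 * mid * periodicity, 3),
                round(ch2 * mid * (1.0 - periodicity), 3))

    print(chained_gains(amp=0.2, resonance=0.7, periodicity=0.1))   # soft and noisy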

-Each new section does not substitute modules or qualities of interaction: it just adds a new means
of control. No special music patterns nor music languages are suggested; the cello sounds will
be focused on the concept of interplay and modulation.
Melodic and ornamental cello patterns can be performed, alternating as a foreground with
the other sound components of the interaction.

-Section B triggers the audio file (developing in embedded contrasting sections containing
recorded audio feedback). Some preselected noisy portions of the file are to be output through
impulsive bow movements in the air.
Different file fragments are output through different directions of impulsive bow rotations (see the
last section “bow interactions”).

The tape must be accompanied by the cello improvisation. The gains of the live sources are again
fixed in order to output a reduced portion of audio-feedback.
The stereo components of the audio file are spatialised through bow-tremolo (stereo right portion)
and bow rotation (stereo left portion).
The overall amplitude of the file increases proportionally to the global velocity of the bow.

Details about the interactive controls can be explored by navigating inside the internal commented
modules of the application.

-At the end of the tape section C starts (you can notice the transition also by looking at the tape
sound monitor on the right). A system of 4 parallel tap-delays scatters all the sounds, while
the audio feedback emerges again. The sound sources most actively involved in this scattering
sound process are the audio feedback, cello percussive-noisy-aggressive sounds, the tape
fragments (still enabled despite the end of the main tape diffusion).

The delay lengths individually increase in response to the intensities of your bowing styles
(Tremolo, Staccato, Balzato, rotation).

-Final fade out.

SOUND SOURCES
-1) The four input microphones are subdivided into cello and feedback input couples.
Cello inputs:
-DPA (Microphone_1 “cello-mic”) focuses on cello full sound, in part mixed with the Speaker_1
feedback.
-pickup (Microphone_3 “cello-pick”) in particular amplifies and enhances low-amplitude cello
noises, which feed channel 2
Feedback inputs:
-Microphone_2 (“mic2”) mainly picks up the feedback coming from the cello sound-hole
(low frequencies) and/or any resonance/interference coming from the close cello placement.
-Microphone_4 (“mic4”), the most distant from the cello, is mainly involved in the feedback
coming from Speaker_2 (higher frequencies).
The ensemble of cello/microphones/stage-speakers is a partially predictable analogue circuitry,
whose complex interactive sound response (comprising the audio feedback) is naturally mixed
because of their close distances, and in which inputs and outputs therefore feed each other.

-2) Each input follows an individual treatment in terms of filtering.
The cello inputs are autonomously enhanced/compressed and equalised.
The feedback inputs are filtered respectively Low-pass and Resonant, with cut-offs depending
on the current cello fundamental frequency.
The main inputs are routed into 2 channels: the 1st enhancing feedback, and the 2nd enhancing
noisy/high-frequency soft cello sounds.

-3) We therefore have 3 groups of sound sources:
-direct sources (the 4 microphones);
-2 channels of enhanced sources, plus 2 channels of filtered sources;
-stereo tape, plus 2 channels of tape fragments.
Section A is mainly involved with the 2nd group (enhanced/filtered sources).
Section B is focused on the 3rd group (file sounds) with the addition of the 2nd group.
Section C is an overall mix.

BOW INTERACTIONS
The bow gestures act indirectly as digital controllers: the gestures can be impulsive (triggers) or
continuous (shapes). As detailed in Fig. 10 below, the system interacts through:
-4 rotational and 1 horizontal triggers
-Orientation (vertical and horizontal)
-Energy (quickness and rotation)
-Styles (tremolo and balzato)
The continuous bow movements affect the spatialisation (circular movements of the output sound
sources around the audience space).

These movements can be more effective when performed in the air, but they are active also when
the bow interacts with the strings, while playing the cello normally.
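
A minimal sketch of the trigger/shape distinction: an impulsive rotation fires a one-shot event, while continuous orientation is mapped to a spatial azimuth (the threshold and the wrapping are assumptions):

    def bow_controller(gyro_dps, azimuth_deg, threshold_dps=400.0):
        """gyro_dps: angular velocity in degrees/second; azimuth_deg: bow heading."""
        events = []
        if abs(gyro_dps) > threshold_dps:      # impulsive rotation: one-shot trigger
            events.append("trigger tape fragment")
        # continuous shape: wrap the heading to an Ambisonics azimuth of -180..180
        pan = ((azimuth_deg + 180.0) % 360.0) - 180.0
        events.append("pan source to %.0f degrees" % pan)
        return events

    print(bow_controller(gyro_dps=520.0, azimuth_deg=270.0))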

Fig.10-K_4 Bow as a digital controller

Through Horizontal and Vertical Orientations, the bow spatializes the 2 main direct sources
(“mic4” and “cello-mic”): the microphone close to speaker_1 and the cello DPA.

Through the intensities of Tremolo and rotation, the bow spatializes the 2 channels of enhanced
signal (“fb1” and “fb2”): respectively the most influencing part of the audio feedback and the small
cello noises.
The 2 channels of the stereo tape are spatialised in the same way.

Through the intensity of Balzato, the bow spatialises the file fragments, and the 2 channels
of filtered inputs (“filt1” and “filt2”).

Impulsive bow rotations in the directions Up-Down-Internal-External trigger different portions
of the tape fragments during sections B and C.

A very quick down-bow (preferably performed in the air, since more powerful as a gesture) freezes
the values of horizontal and vertical bow orientation at the moment of the triggering.
These orientation values affect the speed and transposition of the audio fragment when it is
triggered. (see inside the module “seeking” for more details).

MOTION TRACKING
Inertial Motion Tracking is tested with the Orients_15 System, developed by the Centre for
Speckled Computing of the University of Edinburgh (www.specknet.org), running through the
orientMac application. This application and the related Readme.txt document are contained
in the main folder of this software.
The system needs a native Bluetooth 4 Mac version as a minimal requirement.

A different Motion Tracking system can be used by substituting the abstraction “or_data” with a
different OSC udpreceive module, which must contain proper scaling and normalisation.
Details are given inside the module “or_data” and in the Readme text file.
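
As a sketch of such a substitute source, assuming the third-party python-osc package; the /orient address, the port and the 0-to-1 scaling are illustrative, since the actual format expected by “or_data” is documented in the module and in the Readme file:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7400)   # a Max [udpreceive 7400] would listen here

    def send_orientation(yaw_deg, pitch_deg, roll_deg):
        """Normalise each angle to 0..1 before forwarding to the patch."""
        values = [(angle % 360.0) / 360.0 for angle in (yaw_deg, pitch_deg, roll_deg)]
        client.send_message("/orient", values)

    send_orientation(90.0, -30.0, 10.0)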

SOFTWARE
K-Message_1
MAX/Msp 6.1 or K_1-GESETZ standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

ambiencode~, ambidecode~, ambimonitor (Jan Schacher)
http://trondlossius.no/articles/743-ambisonics-externals-for-maxmsp-and-pd

banger (Peter Elsea)
http://peterelsea.com/lobjects.html

bonk~ (Miller Puckette et al.)
http://vud.org/max/

chroma~ (Adam Stark)
http://c4dm.eecs.qmul.ac.uk/people/adams/chordrec/

f0.fold (Fredrik Olofsson)
http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.copy, ftm.mess, ftm.object,
gbr.fft, gbr.slice~, gbr.wind=, gbr.yin,
mnm.delta, mnm.moments, mnm.onepole,
FTM-Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

fiddle~ (Miller Puckette et al.)
http://vud.org/max/

fog~ (Michael Clarke and Xavier Rodet)
http://eprints.hud.ac.uk/2331/

gf (Frederic Bevilacqua et al.)
http://forumnet.ircam.fr/shop/en/forumnet/59-mu.html

M4L.gain1~, M4L.delay~ (abstractions)
https://cycling74.com

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)
http://www.thehiss.org/

imubu, mubu, mubu.granular~, mubu.knn, mubu.process, mubu.track,
readaptation of the abstraction mubu-mfcc-matching
pipo~ (IRCAM IMTR)
http://forumnet.ircam.fr/shop/en/forumnet/59-mu.html

roughness (John MacCallum)
readaptation of the abstraction rzcalib (Michael Zbyszynski)
http://www.cnmat.berkeley.edu/MAX

sadam.stat (Ádám Siska)
http://www.sadam.hu/en/software

supervp.trans~ (IRCAM Analysis/Synthesis Team)
readaptation of SuperVP.HarmTransVoice
http://forumnet.ircam.fr/product/supervp-max-en/

zsa.flux~ (zsa.easy_flux) (Mikhail Malt, Emmanuel Jourdan)
readaptation of the abstraction zsa.consonant tracking
http://www.e--j.com/index.php/download-zsa/

K-Message_2
MAX/Msp 6.1 or K_2-INDIAN standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

ambiencode~, ambidecode~, ambimonitor (Jan Schacher)
http://trondlossius.no/articles/743-ambisonics-externals-for-maxmsp-and-pd

bonk~ (Miller Puckette et al.)
http://vud.org/max/

chroma~ (Adam Stark)
http://c4dm.eecs.qmul.ac.uk/people/adams/chordrec/

dot.smooth, dot.std (Joseph Malloch et al.)
http://idmil.org/software/digital_orchestra_toolbox

ej.line (Emmanuel Jourdan)
http://www.e--j.com

f0.fold, f0.round (Fredrik Olofsson)
http://www.fredrikolofsson.com/pages/code-max.html

fiddle~ (Miller Puckette et al.)
http://vud.org/max/

ftm, ftm.copy, ftm.list, ftm.mess, ftm.object,
gbr.bands, gbr.fft, gbr.resample, gbr.slice~, gbr.wind=, gbr.yin,
mnm.list2row, mnm.moments, mnm.onepole, mnm.winfilter
FTM-Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

imubu, mubu, mubu.concat~, mubu.granular~, mubu.knn, mubu.process, mubu.record, mubu.record~,
mubu.track, pipo~
readaptation of the abstraction mubu-mfcc-matching (IRCAM IMTR)
http://forumnet.ircam.fr/shop/en/forumnet/59-mu.html

roughness (John MacCallum)
http://www.cnmat.berkeley.edu/MAX

K-Message_3
MAX/Msp 6.1 or K_3-ODRADEK standalone application
LIST OF EXTERNALS AND ABSTRACTIONS

chebyshape~ (Alex Harker)
http://www.alexanderjharker.co.uk/Software.html

ej.line (Emmanuel Jourdan)
http://www.e--j.com

f0.fold (Fredrik Olofsson)
http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.list, ftm.mess, ftm.object,
gbr.fft, gbr.harm, gbr.slice~, gbr.wind=, gbr.yin,
mnm.alphafilter, mnm.list2row, mnm.list2vec, mnm.onepole, mnm.winfilter
FTM-Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

fiddle~ (Miller Puckette et al.)
http://vud.org/max/

ircamverb~ (IRCAM Espaces Nouveaux)
http://forumnet.ircam.fr/product/spat-en

list-interpolate, resonators~, res-transform, sinusoids~ (Adrian Freed)
delta (Matt Wright, Michael Zbyszynski)
roughness (John MacCallum)
http://cnmat.berkeley.edu/downloads

sigmund~ (Miller Puckette et al.)
http://vud.org/max/

yin~ (Norbert Schnell)
http://imtr.ircam.fr/imtr/Max/MSP_externals

K-Message_4
MAX/Msp 6.1 or K_4-TREES standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

dag.statistic (Pierre Guillot)
http://www-irma.u-strasbg.fr/~guillot/

ej.line (Emmanuel Jourdan)
http://www.e--j.com

f0.distance, f0.round (Fredrik Olofsson)
http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.list, ftm.object,
gbr.fft, gbr.slice~, gbr.wind=, gbr.yin,
mnm.alphafilter, mnm.delta, mnm.list2col, mnm.list2row, mnm.list2vec, mnm.moments,
mnm.onepole, mnm.winfilter,
FMAT and Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)
http://www.thehiss.org/

OSC-route (Matt Wright)
http://www.cnmat.berkeley.edu/MAX

pipo (IRCAM IMTR)
http://forumnet.ircam.fr/shop/en/forumnet/59-mu.html

quat2car (freeware)
http://www.mat.ucsb.edu/~wakefield/soft/quat_release.zip

spat.oper, spat.spat~ (IRCAM Espaces Nouveaux)
http://forumnet.ircam.fr/product/spat-en

INDEX
K_messages
PRESENTATION
Plan
Recordings
studio recordings
videos
live compilation
Equipment
movement 4 “The Trees”
Message_1 “Vor dem Gesetz”
COMPOSITION
Interaction
machine following and music variations
System
The novel
Real-time composition and analysis
PERFORMANCE NOTES
rehearse mode
calibration
spatialisation
Micro-shapes
Instruments
“cellos”
“fog4_K”
“sampler”
“extra-amp”
TECHNICAL REMARKS
Audio analysis
Gesture follower
Composition
Calibration
Message_2 “The Wish to be a Red Indian”
COMPOSITION
System
Sounds
Time macro-form
Space macro-form
PERFORMANCE NOTES
Sampling materials
Amplitude balance
Samplers
sampler_1
sampler_2
sampler_3
TECHNICAL NOTES
Calibration
Message_3 “Odradek”
COMPOSITION
Performance style
Interactions
Sections
beginning
events
performance modes A-B
Structure
PERFORMANCE
Cello events and music sections
Performance modes
performance mode A
performance mode B
CALIBRATION
cello input attenuation
peak detection
parameters calibration
threshold settings
Message_4 “The Trees”
SETTINGS
Sound ensemble
stage space
audience space
Main interactions
COMPOSITION
Spatial sound
Audio feedback
Time sections
a’ (beginning)
a” (until minute 2’)
a’” (3’->5’)
b (5’->10’30”)
c (10’30”->13’)
Interactions
Sound circuitry
inputs
internals
audience outputs
stage
i/o
PERFORMANCE
Sections
Sound sources
Bow interactions
MOTION TRACKING
SOFTWARE
K-Message_1
List of externals
K-Message_2
List of externals
K-Message_3
List of externals
K-Message_4
List of externals
INDEX
List of figures

LIST OF FIGURES
Fig.1 Franz Kafka
Fig.1-K_1 The interactive screen (Message K_1)
Fig.2-K_1 The Timer flash
Fig.3-K_1 Sound analysis in real-time
Fig.4-K_1 Sound features stored in memory and later recalled during the performance
Fig.5-K_1 Machine learning and machine following
Fig.6-K_1 Pattern recognition
Fig.7-K_1 Setting time-duration
Fig.8-K_1 Monitors of interaction
Fig.9-K_1 Monitor of duration
Fig.10-K_1 Rehearse mode
Fig.11-K_1 Calibration icon
Fig.12-K_1 Ambisonics interface
Fig.13-K_1 Setting the speakers
Fig.14-K_1 VMIs interface
Fig.15-K_1 The GF editor
Fig.16-K_1 The timeline to the GF and to the composition
Fig.17-K_1 The calibration patcher
Fig.18-K_1 Main calibration parameters
Fig.19-K_1 Further calibrations
Fig.1-K_2 The main interactive patch (Message K_2)
Fig.2-K_2 Sampler_1 (sound file interface)
Fig.3-K_2 Sampler_2
Fig.4-K_2 Sampler_3
Fig.5-K_2 Example of the animated score
Fig.6-K_2 Flashes signalling the beginning of the recording process
Fig.7-K_2 Spatial monitor
Fig.8-K_2 Sound monitors of samplers 1 and 2
Fig.9-K_2 Sampler_1 engine
Fig.10-K_2 Sampler_2
Fig.11-K_2 Sampler_3
Fig.12-K_2 Note detection monitor
Fig.1-K_3 Odradek’s application (Message K_3)
Fig.2-K_3 An example of interactive verbal instructions
Fig.3-K_3 Cello frequency monitor (tuning section)
Fig.4-K_3 The event-to-sections monitors
Fig.5-K_3 The 10-channel main mixer (performance mode A)
Fig.6-K_3 Direct-verb gain faders (performance mode B)
Fig.7-K_3 Main mixer in action (performance mode A)
Fig.8-K_3 Reverb in action (performance mode B)
Fig.9-K_3 Crossed button (listening mode of the “soft_thresh” enabled)
Fig.10-K_3 Soft threshold monitor (high numbers = softer sound)
Fig.11-K_3 Cello timbre monitors (roughness-periodicity-loudness in dB)
Fig.12-K_3 Gains of each effect (and notes about the cello peak-amplitudes enabling each effect)
Fig.13-K_3 Artificial-effects modules
Fig.14-K_3 Cello amplitude tracker (and connections with the filtering channels activation)
Fig.15-K_3 Peak amplitude calibration
Fig.16-K_3 Multiple calibrations
Fig.17-K_3 Thresholds calibration
Fig.1-K_4 The main application interface (Message K_4)
Fig.2-K_4 Stage
Fig.3-K_4 Audience speakers
Fig.4-K_4 Bow motion tracking
Fig.5-K_4 Overall setup
Fig.6-K_4 The audio feedback channels
Fig.7-K_4 Time sections
Fig.8-K_4 Processing and routing of the 12 sound sources
Fig.9-K_4 Analog/digital map
Fig.10-K_4 Bow as a digital controller

Awakening
Interactive harp quartet

Dedicated to the Adria Harp Quartet


First performance FORFEST,
Kroměříž (Czech Republic)
June 23rd 2014
Duration 10’

Video instructions at: https://www.dropbox.com/s/gcv3jjseuzetfu2/Awakening-instructions.mp4?dl=0


Studio recording at: https://soundcloud.com/nicola-baroni/awakening
Video recording at: https://youtu.be/gasFAG5QilY

PRESENTATION

CONCEPT
The idea of shaping music as if it were an organic form, whose different articulations interleave and
grow like a living entity, could be viewed as a post-romantic heritage. On the other hand this
concept was traditionally related to the development of a drama, where the actions of imaginary or
abstract characters are codified within a score.
These structural relationships should eventually be felt as evolving sounds recalling one another in
the time domains of performance and attentive listening.

The interactive quartet Awakening is conceived as a concrete living organism, where there are no
special focus points on its history and its future, and in fact no scores are fixed.
The living interaction is made of sounds, actions, functions and symbols seeking balance and
actual boundaries with respect to their environment.
The performers are the actors of this search for an identity of the living sound system, which is
driven by the music-social intentions of the ensemble.

PRELIMINARY NOTE
Each harpist, specified as Harp_1, _2, _3 and _4, has a specific role inside the interaction. The
detail of the following presentation is essential for the musician dealing with the overall setup,
installation and concert location. This responsibility can be assumed by the composer, by an external
sound engineer, or else by one or more members of the ensemble.
Technical notes are given at page 24.

The conceptual involvement of Harp_1 and _2 requires knowledge of this presentation, which
could in part be skipped by Harp_3 and _4, who may go directly to the general and individual
performance notes.

64
INSTRUMENTS
The harps are amplified and live processed. The position on stage requires Harp_1 and _2 to be
placed at a certain distance from Harp_3 and _4 (approximately 4 meters) in order to avoid
sound interference between the microphones of the two pairs of players.

Three harps are required, since Harp_2 only interacts with the resonances of the
neighbouring instrument (Harp_1).

Fig.A_1 Arrangement on stage

65
COMPOSITION
ACTIONS
The piece is conceived as a harp quartet, but “Harp_2” doesn’t need to be a harpist (in a sense she is
more a composer than an instrumentalist), even though she interacts with the harp.
The composition could thus be seen as a harp trio plus flutist/vocalist.

Harp_1 (“Digital Harpist”) plays:

-single notes, in order to digitally select the sound effects of the live electronics;
-soft passages, adding electronic depth to the sound and shaping electro-acoustic nuances;
-and wears an IMU sensor on the left hand, through which the electronic sounds are spatialised.
Harp_2 (“The Breath of Technology”) never truly plays the harp:
-initially she scans the harp resonances, shifting the microphone near their nodes on the
surface of the instrument body, producing modulated audio-feedback;
-during the following sections she performs voiced/noisy sounds with her voice or any wind
instrument (excluding reed instruments), in order to shape the electronic timbres.
Harp_3 (“Free Harpist”):
receives an interactive animated graphic score upon which to improvise.
Harp_4 (“Classic Harpist”):
receives interactive scores in pentagram (staff) and common notation to sight-read.

Fig.A_2 Schema of the interactions. On the left, the actions performed by Harp_1 and _2 (composer-harpists).
The central part summarises the monitor interfaces and functions as they appear on the harp laptop screens.
On the right, an essential description of the digital interactions connected to the relative actions.
The bottom part shows the roles and scores of Harp_3 and _4 (performer harpists).

66
SCORES
Harp_1 and _2 share the graphic interface contained in Laptop_1.
The laptop has to be positioned in a music-stand fashion so that it is visible to both players.

The upper part of the screen (monitors and annotations) concerns only Harp_2.

The middle part of the graphic interface shows the actions performed by Harp_1:
the actions called "points" (single notes) are especially prominent, as on/off functions relevant to
both players.

Fig.A_3 Screen interface as it appears on Laptop_1 (interactive monitor for Harp_1 and _2).
The upper part concerns Harp_2 (voice/flute): it monitors the timbre analysis of the “blows”.
Just below is the matrix called “effects”; the mid-low part (“arpa”) is the interface of Harp_1.
At the bottom are all the settings.

67
Harp_3 and _4 share the interactive screen of Laptop_2
(receiving messages from Laptop_1).
This laptop too has to be positioned so that it is clearly visible to both players.
The upper part contains the graphic-verbal animated score for Harp_3.
The middle-lower portion contains a pentagram for Harp_4.

Fig.A_4 Screen of Laptop_2. At the top, the verbal-graphic animated score for Harp_3; just below, the interactive score for Harp_4.
At the bottom: variable BPM, number-section advance, various settings.

The lower part of both screens shows preliminary settings.

68
TIME DESIGN
Harp_1 and _2 (“Harpist-composers”) improvise, but in strict interaction with the software functions
and monitors. Harp_3 and _4 (“Harpist-performers”) perform the interactive scores.
A complex net of synchronised changes is automated inside the software.
These hidden agencies, coordinated by an internal timeline, show the performers the essential
time-event signals and interactive trajectories.
The composition, as if it were a sound installation, maintains an internal consistency upon which
the performers build their own strategies of interaction.

The designed time segmentation of the music (10’ as a default, but changeable inside the internal
settings) produces two complementary states.

TIME-SPACE DEVELOPMENT
Beginning and conclusion (“peripheral body of interaction”).
Audible eco-system.
The performers reveal the natural resonances of the harps through microphone-scanning.
The induced audio-feedback between instruments, technical equipment and room creates
ghost-pitches showing that the audible space doesn’t correspond to the visual boundaries between
stage and audience.
Central part (“central body of interaction”).
Social-digital composition.
The initial audio-feedback gradually grows and interleaves into a digitally mediated collective
composition. The ensemble splits itself into two complementary entities of composers and performers.
The combined musical actions of two harpists (Harp_1 and _2) generate and control the live
electronics fed by the sound of the other two harpists (Harp_3 and _4).

In addition, the sounds of the two harpist-composers are “interpreted” by the software and sent
as interactive scores to the pair of sight-reading harpist-performers.

AUTOMATED PERFORMANCE

Fig.A_5 Performing the acoustic feedback

69
Beginning-conclusion.

The transitions between the two states of performance (the Audible eco-system and the
Digital-social interaction) are technically realised through cross-fading amplitude gains of the input
microphones.

The audio-feedback is obtained by a chain of variable exaggerated gains operating upon the single
microphones (and compressed in order to keep the whistling sounds inside a meaningful, not
disturbing range).

This extra gain is synchronised with a message telling the relevant performer to stand up, take
the microphone off the stand and start moving it around the harp. By approaching the main
resonance nodes inside the body of the instrument with the microphone, the natural
resonances of the harp are revealed, pitch-colouring the audio-feedback. When the central
digital interaction starts, the gains automatically return to their normal levels.
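As a rough illustration of this gain chain (the actual processing lives in the MAX/Msp patch), the following Python sketch crossfades between a normal and an exaggerated microphone gain and compresses the result; all numeric values are assumptions, not settings taken from the software.

```python
# Minimal sketch, assuming illustrative gain and compressor values.
import numpy as np

def compress(block, threshold_db=-12.0, ratio=4.0):
    """Reduce gain above the threshold so feedback whistles stay bounded."""
    rms = np.sqrt(np.mean(block ** 2)) + 1e-12
    level_db = 20.0 * np.log10(rms)
    if level_db <= threshold_db:
        return block
    over = level_db - threshold_db
    gain_db = -over * (1.0 - 1.0 / ratio)   # bring the excess back down
    return block * (10.0 ** (gain_db / 20.0))

def feedback_stage(block, xfade):
    """xfade = 0.0 -> normal gain; xfade = 1.0 -> full extra (feedback) gain."""
    normal_gain, extra_gain = 1.0, 8.0       # assumed values
    gain = (1.0 - xfade) * normal_gain + xfade * extra_gain
    return compress(block * gain)

# Example: ramping into the feedback state over 10 blocks.
mic = np.random.randn(512) * 0.05
for i in range(10):
    out = feedback_stage(mic, xfade=i / 9.0)
```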

Central part
During the central digital interaction the software mediates the human actions through sound
analysis: it recognises and monitors features and patterns of the acoustic sounds of Harp_1 and _2.

These two harpist-composers, reading the monitored analysis of their own sounds, are allowed to
interact with the live electronics machine. In this way they choose and influence the types and
nuances of the sound treatments applied to the live sound of Harp_3 and _4, gaining the power to
shape the overall performance.

In addition, by “listening” to the sounds of Harp_1 and _2, the software transforms their sounds into
messages and symbols, sent as animated scores to Harp_3 and _4:
a symbolic resonance of the sounds invented by Harp_1 and _2.

The composers' tradition of codifying a successful improvisation on paper is here rendered in
real-time on stage, allowing for a social creative interaction.

70
Fig.A_6 From sound to interactive functions, to scores, to digital processing, to socially mediated music

AWAKENING
Awakening is the process through which the subtle energies of life, present inside our body,
empower and harmonise our physical, mental and emotional dimensions through the spiritual
practice of Yoga (whose meaning is union). The opening comparison of this harp quartet with a
living organism is justified by its physical origin inside sound gestures and natural resonances.
The performance arises from the extra energy of the environmental audio-feedback. On the other
hand, the mental-symbolic dimension of the scores is treated as a dynamic part of a collective search
for balance.

The non-obvious boundaries of the interaction are performed through circles of opposites:

-Sound as it is perceived vs. Sound as it is computed through models of analysis
-Physical agency (sound gestures) vs. Symbolic resonance (scores)
-Musical gesture (instrumental note/timbre) vs. Electro-acoustic sound
-Technology vs. Environment
-Improvisation vs. Composition in real-time
-Concert-based music vs. Sound installation

71
TIME SEGMENTATION

Fig.A_7 The piece starts with pure harp resonances, and gradually fades in the central digital interaction. From minute 7’, at different
times, the performers receive from their laptops a signal indicating the reprise of the resonance-scanning activity.

0'00" -> 1'00" Harp_2 starts to scan the harp resonances with the microphone;
the other harpists are silent, with the microphones muted.

1'00" -> 2'00" Harp_4 receives a first score; when she starts playing, this is the signal for Harp_1
to begin interacting; when Harp_3 receives the first score and starts playing, Harp_2
places the microphone back in the stand and proceeds with the central part of the interaction3.

2'00" -> 7'00" Central body of the digital interaction: Harp_1 and _2 drive the composition in
real-time, Harp_3 and _4 receive the scores.

7'00" -> 10'00" Progressive fading out, gradual transition from the digital interaction to the
ensemble scanning of the harp resonances. Harp_3 starts, then Harp_4, and finally Harp_2.

The action of leaving the previous position and grabbing the microphone for the scanning process is
signalled by messages inside the laptop interface.
When everybody is scanning, Harp_1 stops playing and limits her activity to spatialisation only,
until all the microphones fade out to silence.
The interaction finishes after a short time of silent gestures.

3 The scores are received at fixed times, routed through an automatic timeline.
72
GENERAL PERFORMANCE NOTES

PERFORMER ROLES
The production of the audible eco-system (harp resonances) involves Harp_2, _3 and _4
(see the following performance notes).

The central digital interaction requires a degree of conceptual involvement from Harp_1 and _2
(the “Harpist-composers”): this justifies the dense detail of the following individual explanations.

The verbal notes for Harp_3 and _4 are much lighter, since a great deal of the performance
instructions is embedded inside their actual interactive scores: in this sense their performance styles
are essentially those of classical players involved in contemporary music and graphic-score
interpretation.

REHEARSALS
Some preliminary section rehearsals by Harp_1 and _2 are suggested before meeting the whole
ensemble.
Prior individual training of Harp_1 and Harp_2 with the system is recommended
(these “harpist-composers” should have an overall knowledge of the above presentation).

Load the patch “Awakening” in Laptop_1.

1) Press the Spacebar to start the full rehearsal or performance
(Laptop_2, if connected by Ethernet, will react in its settings mode).

2) When the piece is finished, press Enter (close and reopen the patches before a new take).

3) Press the keyboard key A for “Section rehearsal”, key B for “Harp_1 training”, or key C for
“Harp_2 training”, in the case of a study session.

4) During a full rehearsal, you can press the keyboard key 2 for “Start from min2” or key 7 for
“Start from min7”, if you want to rehearse only a part of the composition.

73
HARP RESONANCES
Too high an amplitude on a microphone instantiates a feedback loop with nearby
loudspeakers, creating an audio-feedback effect. As a result, some standing waves (whose
frequency depends on distance, angle, room response and equipment specs) form a generally
disappointing group of fixed whistles. Here this effect is exploited in a controlled fashion, through a
chain of variable extra-amplitude and compressors: positioning the microphone close to special
nodes of the harp surface (or inside the holes of its body), specific harp modes of resonance start to
influence the pitch and the amplitude of the audio-feedback. This effect, when hybridised by digital
processing, has been called an audible eco-system4, since it involves the natural interferences of technology
and environment, in our case with the contribution of the resonant body of the harp. During the
normal setup your microphone is positioned on its stand and the sound gain is at a neutral level.
When you are requested to produce the eco-system, the software automatically raises the input
amplitude of your mike, and you should be able to feel a sort of sound-magnetic field.
After the start signal, grab the microphone in your hands, bring it close to the harp, and perform
creatively following these general suggestions.
1) Find the most interesting pitched nodes of resonance of your harp; generally the result is
more effective near the curved shapes of the body (see video instructions).
2) By positioning the mike close to and towards the base, near the lower strings, and very
slowly moving it towards the middle part of the strings and/or towards the low-middle pitch
register of the instrument, some chords can be produced.
3) By scanning with the microphone the holes in the back part of the column, or inside the
pedal holes, you can obtain powerful sounds, but there is a possibility of exaggerated and
distorted effects; here the performance has to be extremely careful.
4) You are advised not to hold the microphone too far from the harp, otherwise the effect could
interfere with the noisy components of the environment and fall out of control.
5) It is important to intuitively find the true boundary of this “magnetic field”: if the mike is
removed too quickly from the harp's proximity the wave may disappear, but moving it one
millimetre too close may distort the sound: find a good feeling and balance in your
gestures.
6) The audio-feedback standing wave takes time to emerge and stabilise: sometimes you have to
wait for it, with the mike still and close to the chosen nodal point (maybe longer than you
would expect). When the whistle begins, it is better to immediately move the mike away
by about 1 cm in order to avoid an uncontrolled sharpening of the effect: the movements
around the sound boundaries should be characterised by specific patterns of acceleration/
deceleration; after some rehearsals you can “tune” your gestures by listening.
7) When you decide to reach a different node (searching for a different pitch), move the mike
slowly and keep it inside the boundary of resonance (a correct distance, which you feel
through careful listening and soft airy gestures), otherwise the resonance disappears and you miss the
opportunity to shift the pitch.

4 Agostino Di Scipio, Contemporary Music Review, 33:1, 2014.


74
DIGITAL INTERACTION_INDIVIDUAL NOTES
HARP_1
The contribution of Harp_1 is crucial in designing the macro-form of the music.
During the performance Harp_1 chooses which effects are acting and in which
sequence. The density of the electronics (the kinds and the number of effects working in parallel)
is extremely important for shaping the music well in terms of variety, tension and interest.

Harp_1 is a hyper-harp: the way she makes music is sensed by the software (through sound
analysis) and directly affects the live electronics.
In addition, one accelerometer is fastened to her left hand in order to drive the spatialisation.

Vocabulary
1) POINTS (single detached notes);
2) CLUSTERS (soft glides and note-groups);
3) GESTURES (rotational movements in the air with the left hand wearing an accelerometer)

-1) Points: every note clearly detected by the system has the role of opening and closing a single
attached effect of the live electronics (transforming the sound of Harp_3 and _4).
-2) Clusters: improvised patterns of soft and continuous timbre commentaries performed in
contrasting pitch registers: they diffuse electronic sound-copies of what Harp_1 is playing (delays
effect).
-3) Gestures: orientation and speed of the left hand affect the final spatialisation.

Time
0’00” -> 1’00” No sound, only spatialisation through left-hand rotations.

1’00” (Harp_4 starts playing)

1’30” The “Points-matrix” is enabled: play Points, not yet Clusters.

2’00” -> 7’00” (Harp_2 takes her place in front of the microphone): full interaction.

7’00” -> 8’00” (Harp_3 and _4 start scanning the resonances): play Points, no more Clusters.

8’00” -> end (everybody is scanning the resonances): stand up, stop playing, only spatialisation
through left-hand rotations.

75
Points
The central part of Laptop_1 is the graphic monitor of the sound actions called Points.
You can see on the screen many virtual buttons (red points and crossed buttons mean that the effect
is turned on).

Fig.A_8 The “Points-matrix”: the effects-console of Harp_1

-Any note you perform, if played alone and detached, can be clearly detected
by the system, which activates a related sound effect.
-The notes are intended as pitch classes, therefore tracked independently of their octave:
generally they have the function of opening or closing one effect.
-When you play a note the effect is opened; on repeating the same note the effect is closed
(the note B is the only exception).
-It is possible to open many effects together in order to increase the density of the live
electronics; when all the effects are closed, only the acoustic sound of the harps will be heard.
-The notes are better detected if they are in the mid register: avoid high and low pitches if you
need precise control.
-The name of each note is shown on the screen near the on/off monitor and the name of the
related sound transformation (which affects the sound of Harp_3 and _4).

Even if the machine note detection is quite responsive and accurate, it can happen that some
unwanted notes ("false positives") are captured by the system (especially in the middle part of the
performance, when many notes are performed by the other players!); in this case some effort
will probably be needed to close unwanted effects.
A certain tolerance towards this independent behaviour of the machine makes the interaction more
interesting, provided the false positives are not excessive. In this sense the computer is more a
"composer assistant in real time" than a strict instrument.
It is extremely important that Harp_1 reaches a clear perception of the difference between the
sound effects, as shown by the video instructions.
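For clarity, here is a minimal Python sketch of the toggling logic described above, with the pitch-class table drawn from the list of connections that follows; the function name and the MIDI input are illustrative assumptions (the real detection works on the audio analysis inside the patch).

```python
# A minimal sketch of the Points logic (illustrative only). Pitch classes
# toggle effects on and off, octave-independently; B flat and B are special.
EFFECTS = {0: "ring modulation", 1: "transposition", 2: "micro-transposition",
           3: "delays", 4: "spectral decomposition 1",
           5: "spectral decomposition 2", 6: "playback", 7: "spectrum-freeze",
           8: "recording", 9: "flanger"}
state = {name: False for name in EFFECTS.values()}
harmonies = 0                        # freeze layers added by B flat

def on_note(midi_pitch):
    global harmonies
    pc = midi_pitch % 12             # pitch class: the octave is ignored
    if pc == 11:                     # B: clears all frozen harmonies, no toggle
        harmonies = 0
        return "harmonies cleared"
    if pc == 10:                     # B flat: each hit adds one freeze layer
        harmonies += 1
        return f"add-harmony ({harmonies} layers)"
    name = EFFECTS[pc]
    state[name] = not state[name]    # open on first hit, close on repeat
    return f"{name} {'on' if state[name] else 'off'}"

print(on_note(60))   # C -> ring modulation on
print(on_note(72))   # C in another octave -> ring modulation off
```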

76
List of the connections

Note C: ring modulation (bell-like effect)
Note C sharp: transposition (pitch glides, within the range of a major third)
Note D: micro-transposition (microtonal glides and beats)
Note E flat: delays (effect of multiplying the most recent sounds)
Note E: spectral decomposition 1 (splits the sound into its noisy vs. pure components)
Note F: spectral decomposition 2 (similar, but a more harmonic effect)
Note F sharp: playback (playback of a live-recorded fragment; A flat -> record)
Note G: spectrum-freeze (enables freezing the sound, as it is at that moment)
Note A flat: recording (records the last 2½" of sound; F sharp -> playback)
Note A: flanger (extreme artificial vibrato modulation, electric-guitar effect)
Note B flat: add-harmony (adds one layer of freezing, a new fixed harmony)
Note B: clear harmony (clears all the frozen harmonies)

On playing G the freeze is only enabled.
After enabling, play B flat when you want to make a sound: every new B flat adds
one more sound, creating a fixed harmony.
Play B when you want to silence the harmonies (a small number tells you how
many harmonies are playing).

Fig.A_9 Freeze

A few more interactions (less important to fine-control)

Notice that Harp_1 only activates the effects, which are instead internally modulated by
Harp_2. The only exception is the Flanger (activated by the note A), which is modulated by you:
take it easy and be intuitive!
But you may notice that:
-the brightness of your sound adds artificial pitched colour to the flanging,
-when your sound is very resonant, it increases the presence of the effect,
-your high pitches increase the flanger vibrato-reactivity,
-impulsive quick movements of the left hand dramatically increase the reverb of the effect.

The effects called "spectral decomposition", activated by E and F note detection,
can sound extremely harsh (noisy) or very subtle (pan-flute-like) depending on
the performance of Harp_2. You can influence the volume of these effects
(if you feel the need to better balance these sounds):

Fig.A_10 Volumes

77
-Low volume <- soft/touching sounds in the middle of the string,
or leaving the harp free to resonate.
-High volume <- aggressive/brilliant sounds.

You can also influence the playback module (triggered by the F sharp): the more consonant
(“puro”) your sound, the less the playback will be transposed (the Clusters will probably
transpose it, since their dissonance produces many pitch transpositions in the playback).
It is not necessary to pay close attention to the fact that your sound creates the graphic score of
Harp_3; just note that:
-the more numerous the effects opened by your notes (the Points), the greater the quantity of
overlapping instructions to Harp_3 (no Points opened, no verbal instructions to Harp_3);
-the energy of your sounds impacts the dynamic animation of the score.

Clusters
Harp_1 alternates the single-note performance with some improvised passages, characterised by
timbre density, very soft intensity, blurred pitch contents, and very low, or by contrast, very high
pitch registers (they shouldn't affect the note detection):
-soft glides inside narrow-band pitch contours,
-trills,
-soft scale-like passages with grace notes,
-slow nail vertical scratches,
-any other sound characterised by softness and timbre density.

Clusters transform only your own sound, through a chain of delays adding depth to your
amplification.
Clusters are never to be performed during the harp resonances (beginning and end of the piece).
An increasing requirement of Clusters is signalled by recurrent yellow
flashes in the upper right part of the screen, which also suggest that Harp_2
increase melodic variety.

Fig.A_11 Increasing Clusters

The choice of producing Clusters is left to your feeling of adding “fatness” to your
amplification.
This added fatness/depth is obtained by a delay system (you will hear
many echoes of your sound). You can influence the echoes in this way, just by playing the
harp (see the sketch after the figure below):
-high pitches -> distant echoes (low pitches -> close echoes, similar to a reverb)
-dissonant/rough sounds (the opposite of sound “puro”) -> more amplitude
-unstable timbre -> increase of the effect (feedback)
Noisy or dissonant passages and groupings should increase the overall echo density.
Fig.A_12 Delays
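The following Python sketch illustrates one plausible scaling of these three influences onto the delay parameters; the ranges and curve shapes are assumptions for illustration, not the values used in the patch.

```python
# Assumed mapping of three analysis features onto the Clusters delay chain:
# pitch -> delay time, roughness -> wet level, spectral flux -> feedback.
def cluster_delay_params(pitch_hz, roughness, flux):
    """roughness and flux are assumed normalised to 0..1; pitch is in Hz."""
    # high pitches -> distant echoes; low pitches -> short, reverb-like delays
    delay_ms = 30.0 + 970.0 * min(max((pitch_hz - 60.0) / 1500.0, 0.0), 1.0)
    wet = 0.2 + 0.8 * roughness        # dissonant/rough sounds -> more amplitude
    feedback = 0.1 + 0.7 * flux       # unstable timbre -> denser echo build-up
    return delay_ms, wet, feedback

print(cluster_delay_params(pitch_hz=1200.0, roughness=0.6, flux=0.3))
```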

78
But the way a machine detects timbre is not exactly the same as that of our ears, so some prior
individual study and experimentation is needed in order to reach a fine-tuned control of
the effect5.

Gestures
A three-axis accelerometer is fastened to your left hand6.
The direction and the velocity of your movements in the air are tracked, in order to move
the electronic sound sources around the space of the audience.

During the beginning and the end of the piece you can focus only on spatialisation. When you
are concentrating on the harp sounds, however, even your involuntary hand movements are still tracked
and spatialised: it is a good idea to give some attention to this during the pauses of your sound,
while it resonates.

Inside the spatial monitor (called "movements") the three sound
sources are coloured, and their colours correspond to the colours
beside the main graphic called "effects" (the effects appearing in
the first column are marked red, in the second column blue,
in the third green; they correspond to the digital sound effects
described above).
Fig.A_13 Spatial interface

The first part of the performance involves only one sound (the sounds of resonance extracted by
Harp_2) therefore only one coloured source will appear in the spatial monitor.

An intuitive approach to this kind of interaction is advised, but remember that your hand
has three straight positions corresponding to the high, horizontal and lateral axes, which
displace the three sound sources to the spatial positions shown here on the left. Any intermediate
position of the hand will move the sound sources accordingly.
The overall velocity of the hand affects the velocity of the shifting of the electronic sounds. In addition,
the hand velocity strongly increases the reverb of the flanger effect, when it is active.
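As an illustration of how three accelerometer axes could displace three sources, here is a minimal Python sketch; the tilt-to-position layout and the velocity estimate are assumptions, since the actual mapping is internal to the software.

```python
# A toy sketch: tilt angles estimated from the gravity components of the
# accelerometer displace three sources, and overall motion energy scales
# the speed of displacement. The layout below is an assumed convention.
import math

def hand_to_positions(ax, ay, az):
    """ax, ay, az: accelerometer readings in g. Returns three azimuths (deg)."""
    roll = math.degrees(math.atan2(ay, az))                     # lateral tilt
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))   # up/down tilt
    # one source per axis, the third on an assumed intermediate trajectory
    return (roll, pitch, (roll + pitch) / 2.0)

def motion_speed(ax, ay, az):
    """Deviation from 1 g as a rough proxy for hand velocity/energy."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)

print(hand_to_positions(0.0, 0.5, 0.85), motion_speed(0.1, 0.5, 0.85))
```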

These three methods of performance show an unconventional, gestural and highly conceptual way
of playing the harp. Perhaps not many notes are performed, but each sound is charged with a strong
compositional influence and an intriguing gestural aspect.
Sometimes you will be less focused on the "logic and beauty" of the music coming from your harp,
and much more on sensitivity towards the interaction.

5 Stable/unstable timbre is technically detected as “spectral flux”: sharp attacks, noise and pitch variability show
more unstable values. Dissonance is computed as “roughness” (timbre dissonance): melodic/harmonic dissonance
and scraping sounds, but also very low resonant notes, create the sensation of timbre dissonance.

6 A simplified tracking could be achieved with a mobile phone or an iPod tied around the right forearm.
79
HARP_2
Modes of performance
1) 0’00” -> 2’00” Resonances (on Harp_1)

2) 2’00” -> 8’00” Breaths/Voices/Blows – Digital Interaction (on your microphone)

3) 8’00 ->10’00” Resonances (on Harp_1 again)

Shift between “resonances” and “blows” without any hurry or strict sense of time.

Your resonances are the opening event of the music; the first sound will not appear
immediately: wait patiently for it and then start to very gently modulate and harmonise it.

Roughly, you move in front of your microphone when Harp_3 starts playing, and you return to
scan the harp resonances after both Harp_3 and _4 have begun that activity in the last part of the
music.

These two opposite performance modes have in common an airy and wireless relation with the
sounds of technology; in both cases the pitches emerge as byproducts of scratch and noise.
Resonances are sounds of the environment; in the mid part of the performance the “resonance”
becomes more conceptual, since your voice (or flute) timbre has the power to electronically
transform the sound of the harps: in other words, it is a hyper-instrument.

“Resonances” are explained above, “Digital Interaction” on the next page.

Your sounds also feed the pentagram score of Harp_4 during time-defined moments of the
performance.

The yellow button flashes during the score-feeding times of Harp_1:
if you wish, you can be more active and harmonic during these moments,
in this way putting some melody into the score.

Fig.A_14 Melody enhancer

80
Digital Interaction

Fig.A_15 Hyper-instrument console

During the Digital Interaction you have the power to modulate the electronic sounds derived
from Harp_3 and _4. Your timbre micro-shapes directly affect the live electronics.
In the upper part of the laptop screen you can monitor your timbre shapes, which share the same
curves affecting the electronic effects.

These modulations operate only inside the currently active effects; if several effects are open in
parallel, your timbre will be transforming many different effects at the same time.
The effects are opened by Harp_1, and you can see which ones are working by looking at the matrix
called "effects" below your monitors (red points mean open effects).

Sound analysis is active only when the intensity of your sound is not too low.

Description of the input sounds

The music is improvised, but the acoustic sound is only a partial focus, since every sound inflection
is aimed at shaping the live electronics of the harps.
The performance is conceived for voice and/or any kind of flute.
Your music shifts between sound/noise, pitch/breath, voiced/unvoiced.

Schema of the effects

Played feature -> Transforming technology (heard effect upon the amplified harps)

-1) PITCH -> ring modulation (large bell -> medium bell -> small bell)
-2) VOLUME -> melodic glissando (sitar -> harp -> sitar)
-3) BRIGHTNESS -> beats (normal -> dissonant -> detuned)
-4) DISSONANCE -> delays (echo -> multiplication -> resonance)
-5-6) PERIODICITY+DENSITY -> spectral decomposition (pan-flute -> artificial harp -> aggressive)
-7) VARIABILITY -> playback-rate (accordion -> normal speed -> fast harp)

81
Explanation of the effects
-1) Pitch to ring modulation
-ring modulation detunes the spectrum of the harps through a sound frequency that modulates them
-the result is a hybrid bell-like sound
-the machine detects your pitch in real-time and continuously tunes the modulating frequency to it
-you have control over the hybridising frequency, affecting the sensation of the width of the imagined
harp-bell (low pitch = large bell).
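A minimal Python sketch of this kind of pitch-tracked ring modulation follows; the detected pitch is assumed to be given (in the piece it comes from the analysis of Harp_2's voice or flute inside the patch).

```python
# Sketch of the pitch -> ring modulation mapping, assuming the detected
# pitch is already available from the analysis chain.
import numpy as np

def ring_modulate(harp_block, mod_freq_hz, sr=44100, phase=0.0):
    """Multiply the harp signal by a sinusoid at the detected pitch."""
    n = np.arange(len(harp_block))
    modulator = np.sin(phase + 2.0 * np.pi * mod_freq_hz * n / sr)
    # the spectrum shifts to sum/difference frequencies: a bell-like hybrid
    return harp_block * modulator

harp = np.random.randn(1024) * 0.1
out = ring_modulate(harp, mod_freq_hz=180.0)   # low pitch = "large bell"
```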

-2) “Volume” to melodic glissando

-variable pitch transpositions are applied to the harps (within a major 3rd range)
-the result is a continuous glide up and down (sitar-like effect)
-the machine doesn’t exactly detect your sound amplitude (“Volume” is here only a conventional
name); instead it detects how much your crescendo/decrescendo is intensifying or relaxing. Notice
that the resulting value is not the intensity of your de/crescendo but how much it “accelerates”:
in this way your “effort” is active, rather than your sound intensity
-if your crescendo increases in a linear proportion (or the decrescendo linearly decreases)
the harp sounds will be in tune; if you impulsively accentuate your crescendo (or restrain the
decrescendo) the harps make an upward glissando; if you release the push of your crescendo
(or immediately drop the decrescendo) the harps make a downward glissando (sitar-like effect).
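Since this descriptor is easy to misread, here is a small Python sketch of one way to compute it, as the second difference of a loudness envelope; the test envelopes are assumptions, and the real analysis may smooth or scale differently.

```python
# Sketch of the "Volume" descriptor: not loudness itself but how much the
# crescendo/decrescendo accelerates, i.e. the second difference in dB.
import numpy as np

def effort(loudness_db):
    """Second difference of a loudness envelope (positive = pushing harder)."""
    return np.diff(loudness_db, n=2)

env_linear = np.linspace(-40.0, -10.0, 32)                   # steady crescendo
env_pushed = np.concatenate([np.linspace(-40.0, -30.0, 16),
                             np.linspace(-30.0, -8.0, 16)])  # sudden extra push
print(effort(env_linear).max())   # ~0: harps stay in tune
print(effort(env_pushed).max())   # clearly positive: upward glissando
```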

-3) Brightness to beats/detuning

-microtonal pitch transpositions are applied to the harps
-extremely subtle glides give the impression of a rough beating timbre; if they increase beyond a
range of (approximately) 1-2 eighths of a tone, they are audible as detuned sounds
-brightness is enhanced by the high-frequency components of a sound and is connected with the
impression of its “brilliancy”: a noisy or hybrid sound is extremely bright, and a tense timbre is
brighter than a relaxed or resonant one. This system detects the variation in brightness: if you start a
soft sound and increase its tension, you should obtain a positive value; the transition from a voiced
sound to a breathy/noisy one also returns a positive value, and vice versa
-you can detune the harp sounds by navigating between contrasting timbres, or keep them tuned by
holding onto the same kind of sonority in terms of its brightness.
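A hedged Python sketch of this brightness-variation mapping, using the spectral centroid as the brightness measure (a common choice, though the patch may compute brightness differently); the scaling and the quarter-tone ceiling are assumptions.

```python
# Sketch: frame-to-frame change of the spectral centroid mapped to a
# microtonal detune in cents, clipped to roughly the 1/8-1/4 tone range.
import numpy as np

def centroid(frame, sr=44100):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))

def detune_cents(prev_frame, frame, scale=0.05, limit=50.0):
    """Positive when the sound gets brighter; clipped to about a 1/4 tone."""
    delta = centroid(frame) - centroid(prev_frame)
    return float(np.clip(delta * scale, -limit, limit))

t = np.arange(1024) / 44100.0
a = np.sin(2 * np.pi * 220 * t)                       # soft, dark tone
b = a + 0.3 * np.sin(2 * np.pi * 3300 * t)            # brighter version
print(detune_cents(a, b))   # positive value -> upward microtonal glide
```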

-4) Dissonance to delays

-artificial echoes (more or less repeating themselves) are applied to the harps:
one distant echo is one repetition of the sound; more repetitions at a short time distance (e.g.
a quarter to half a second) result in a dense overlapping sound texture; numerous echo repetitions in the
range of 20-50 milliseconds create a sensation similar to reverb
-roughness is a method of computing how “dissonant” the timbre is. Effects involving pitched
noise (such as jet whistles and rumbles) or small spectral shifts (e.g. exaggerated vibrato, or detuned
low pitches) sound more “dissonant” than pure noise; pure harmonic tones are not at all dissonant,
but their dissonance can increase in the case of quick melodic passages
-the more “pure” you are, the more a single detached echo is discernible; dissonant sounds
multiply the echoes and bring them closer in time; pitched noises (very rough) simulate reverb.

82
-5-6) Periodicity/density to spectral decomposition

-these effects, called “spectral decomposition” (more informally the
“pan-flute effect”), perform a splitting process between the sinusoidal
(harmonic) and the noisy (high-frequency) components of the harp sound

Fig.A_16 The “pan-flute effect”

-in this way it is possible to output only the sinusoidal part of the sound (“pan-flute effect”),
only the noisy part (“aggressive” effect), or some mix of the two. Take into account that the
reconstruction of the sound can be reduced to its most prominent components (resulting in an effect
perceivable as “artificial”), or expanded to a broad palette of partial components (allowing for
“realistic” to “hyper-real” or exaggerated effects)
-your interaction mixes periodic vs. noisy sounds, static vs. vibrato-like sounds,
light vs. dense timbre
-density and noisiness make the electronics aggressive; lightness (and pure high pitches) create the
“pan-flute” effect (enhanced when the sound is pure and periodic); trills, vibrato and breaths
enhance the artificiality of the effect.
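A crude Python sketch of such a sinusoidal/noise split follows; the patch relies on dedicated spectral externals, and the peak count here is only an assumed parameter.

```python
# A crude "spectral decomposition": keep the strongest spectral peaks as
# the "sinusoidal" (pan-flute) part, and the residual as the "noisy"
# (aggressive) part. Peak count is an assumed, adjustable parameter.
import numpy as np

def split_spectrum(frame, n_peaks=8):
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    mag = np.abs(spec)
    keep = np.argsort(mag)[-n_peaks:]          # indices of the strongest bins
    sinusoidal = np.zeros_like(spec)
    sinusoidal[keep] = spec[keep]
    noisy = spec - sinusoidal                  # everything that is not a peak
    return np.fft.irfft(sinusoidal), np.fft.irfft(noisy)

frame = np.sin(2 * np.pi * 440 * np.arange(2048) / 44100) \
        + 0.1 * np.random.randn(2048)
pure, noise = split_spectrum(frame)            # mix the two parts as desired
```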

-7) Spectral flux to playback speed

-short chunks (2½") of harp sound are live-recorded and played back as loops at variable speeds
-high speed increases the density of the loop; low speed stretches the sound, giving more focus to
the timbre; negative speed reverses the sound, which then resembles an accordion rather than a harp
-the spectral flux detects how variable the sound spectrum is (i.e. impulsive attacks, noisy contents,
complex-discontinuous timbres)
-therefore through high timbre variability you increase the velocity of the harp playback, while
through a very static sound the accordion effect becomes discernible.
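A small Python sketch of the spectral-flux measure and one assumed mapping to playback rate (the rate range, from reversed "accordion" to fast playback, is an illustrative scaling):

```python
# Spectral flux: the sum of positive frame-to-frame changes of the
# magnitude spectrum, so impulsive attacks and unstable timbres raise it.
import numpy as np

def spectral_flux(prev_frame, frame):
    m0 = np.abs(np.fft.rfft(prev_frame * np.hanning(len(prev_frame))))
    m1 = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    diff = m1 - m0
    return float(np.sum(diff[diff > 0.0]) / len(m1))   # positive changes only

def playback_rate(flux, lo=-0.5, hi=2.0, sensitivity=5.0):
    """Static sound -> slow/negative rate; variable sound -> fast playback."""
    x = min(flux * sensitivity, 1.0)
    return lo + (hi - lo) * x

t = np.arange(1024) / 44100.0
steady = np.sin(2 * np.pi * 440 * t)
noisy = steady + 0.5 * np.random.randn(1024)
print(playback_rate(spectral_flux(steady, steady)),   # low flux -> near -0.5
      playback_rate(spectral_flux(steady, noisy)))    # high flux -> faster
```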
Timbre analysis remarks
You will easily notice that the mapping-effects connected with pitch and “volume” are more natural
as controllers of the electronic sounds, but the other timbre mappings can be quite complex and
elusive.
Timbre is almost impossible to fully define with numbers. Machine sound analysis is often
modelled upon the patterns of human perception, but the two “languages” are not the same. Timbre
qualities are generally codified in our brain as music concepts (e.g. bright, expressive, nasal)
or as instrumental techniques (e.g. vibrato, flageolet).
The physical qualities of the sound vibration, detected in your monitor by the analysis machine, are
only traces of your timbre: a sort of interactive score upon which to find creative and meaningful
solutions for shaping the sounds of the electronic harps.

83
Towards a vocabulary
Harp_2 shapes the electronics through 8 sound descriptors (3 of which are variable quantities).
Below is a short vocabulary of instrumental techniques apparently influencing the 4 fixed
descriptors: Dissonance, Noisiness, Vibrato, Density. As observed above, there is no straight
correspondence between one technique and a reciprocal analysis feature, only an influence
happening under different sound conditions.

Dissonance
High value: Singing into the flute (detuned), jet whistle, breathy attack
Mid value: Breathy-sound, chromatic quick passage, deep low pitch

Noisiness
High value: Breath
Mid value: Breathy attack, trill, multiphonic

Vibrato
High value: Trill
Mid value: Vibrato, Flatterzunge, flageolet, breath

Density
High value: Low tense pitch
Mid value: Pitched noise, multiphonics, crescendo

84
HARP_3
1) 0’00” -> 2’00” Silence

2) 2’00” -> 7’00” Sound gestures (improvised harp effects)

3) 7’00” -> 10’00” Resonances (see section above)

You receive a graphic-verbal animated score.
You are the last performer to start playing: when the screen is empty, it means silence.
The performance happens when one or more verbal labels appear indicating sound effects upon
which to freely improvise:
-trills,
-arpeggios,
-nails (vertical scratch),
-pedals (sounds of just the pedals in this case),
-harmonics,
-claps (hand percussion on the instrument body),
-glides,
- high-pitches (extremely high passages).

The width and the position of the labels (sometimes moving) suggest modes of performance.
Sometimes the labels are numerous (their number is related to the overall density of the
electronics); in this case the performance has to become more intense, up to agitation or even
emotional explosion.
On the contrary, one or two single labels, their smaller dimensions, and slow or absent movements
suggest a reduced energy.
Some moving waves appear on the screen intersecting the written labels: feel and invent the
appropriate sounds.

The time occurrence of the animation is fixed, but not the contents, since they are shaped by the sound
of Harp_1.

When the background becomes black, this is the signal to stand up, grab the microphone from
the stand and start to scan the resonances: the other players will eventually join you in the
same activity.

At the very end, fading out the sound, maintain the movement for a while, as an empty gesture.

85
HARP_4
1) 0’00” -> 1’00” Silence

2) 1’00” -> 7’30” Phrases (sight-reading the interactive score)

3) 7’30” -> 10’00” Resonances (see section above)

The score appears at fixed times (as indicated in the bottom part of the screen).
A couple of times the note sequence is an exact repetition of the previous one.

When the score appears you have 30" to read it mentally; after that, a green pointer starts to shift
from left to right and you follow its position by playing.

Grey notes are to be played softer.

Don't accentuate the rhythmic values; the flowing time relationships are often underlined by
grace notes and sequences of rebounding notes.

Play with intensity.

Your score is a symbolic resonance of the sounds coming from Harp_2.

When the background becomes black, take the time to finish your score sequence: Harp_3 will
already have begun to scan the harp resonances. Without any hurry, join her in the same activity.

At the very end, fading out the sound, keep the movement for a while, as an empty gesture.

86
TECHNICAL NOTES

HARDWARE EQUIPMENT AND SETUP


-2 laptops positioned in front of the two pairs of performers, connected by one Ethernet cable
(1000 Mbit/s, at least 3 meters long).
-Laptop_1 handles interaction and sound processing; it must be a Mac
(2.4 GHz dual processor and 4 GB RAM as minimum requirements; more power is advised).
Laptop_2 receives data without processing audio, and sends its screen (the scores) to a projector:
it can be Mac or Windows.
-Sound card with at least 4 inputs (3 microphone inputs + 1 line input) and 4 outputs, connected
to Laptop_1. Optional mixer.
-Quadraphonic PA.
-2 small audio monitors positioned on stage, near the two pairs of performers, in order to
increase and modulate the audio-feedback.
-1 triaxial accelerometer for Harp_1 (mobile accelerometers could be a reduced option).
-Projector for video streaming of the animated scores of Laptop_2.

MICROPHONES (minimal requirements)

-1 specifically designed harp pickup (or a piezoelectric pickup positioned inside the back
column facing the string joints) for Harp_1: the pickup should be positioned in the middle part of
the soundboard in order to offer a middle-register sweet spot.
-1 condenser microphone for Harp_2.
-2 high-quality dynamic, or directional condenser, microphones for Harp_3 and _4 (one for each
harp).

MOTION TRACKING
Inertial Motion Tracking has been tested with the Orients_15 System, developed by the Centre for
Speckled Computing of the University of Edinburgh7, running through the orientMac application.
This application and the related Readme.txt document are contained in the main folder of this
software.
The system needs a native Bluetooth 4 Mac as a minimal requirement.

A different Motion Tracking system can be used by substituting the abstraction “or_data” with a
different OSC udpreceive module, which must contain proper scaling and normalisation (see the
sketch below).
Details are given inside the module “or_data” and in the Readme text file.
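As one hypothetical example of such a replacement front-end (not part of the distributed software), a Python script using the python-osc package could receive, normalise and forward the sensor data; the addresses, ports and value ranges below are assumptions to be adapted to the specifications in the Readme.

```python
# Hypothetical OSC bridge for a different motion-tracking system:
# receive raw data, scale/normalise it, and forward it to the patch.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)          # assumed patch port

def on_accel(address, x, y, z):
    # normalise raw readings (assumed range +/- 4 g) to -1..1 before forwarding
    norm = [max(-1.0, min(1.0, v / 4.0)) for v in (x, y, z)]
    client.send_message("/imu/accel", norm)

dispatcher = Dispatcher()
dispatcher.map("/raw/accel", on_accel)               # assumed incoming address

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()                               # run until interrupted
```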

7 www.specknet.org
87
AUDIO SETTINGS
The option of outputting portions of direct amplified sound from the instruments (through the audio
card or mixer) is to be carefully balanced before the concert, depending on the audio-feedback
response.

Sound setting before the concert is crucial; the dedicated software section is visible in Laptop_1.

Fig.A_17 Setting section in Laptop_1

Feedback extra-gains
-“s extra_gain” enhances the global feedback from 0.5 upwards
-“s extra_fl” and “s extra_harps” set the extra gain in dB
(separately for Harp_2 and for Harp_3 and _4)

Harp inputs
Sometimes the levels of the live-processed instruments (Harp_3 and _4) are not sufficiently high.
Pre-DSP software gains are then necessary: they affect both harps
-“s vol_H” sets the initial input amplitude (pre high-pass filtering)
-“s compr_H” sets the final input amplitude (post high-pass and pre-DSP)

Threshold of analysis
-“s gain_fl” sets the minimum signal level in dB sent to the analysis module, in order to allow only
the analysis of instrumental sounds, cutting the environmental noise

Final mix
-“pre-gain” sets the final amplitude of the processed harps (Harp_1 excluded)
-“gain_feedb” sets the final amplitude of the audio feedback
-“verb_feedb” sets the final amplitude of the reverb applied to the audio feedback
-“s final_gain” sets the final amplitude of the overall electronics
-“s final_verb” sets the final amplitude of the overall reverb

Dry and reverbed outputs are independent signals that are mixed together.

After the last saved setting, press “write” (bottom right of the patch).

88
SOFTWARE
The interaction is designed in MAX/Msp (6.1.10).
Laptop_1 exploits some Mac-specific externals; Laptop_2 can be Mac or Windows.
Each laptop runs a different patch; the two communicate through Ethernet.

Requirements: MAX/Msp 6.1, or the Awakening plus Awakscore standalone applications.
Python, plus the dedicated python folder, installed on Laptop_1 (or otherwise a different Motion
Tracking system, not excluding a simple mobile setup).

In the case of a different MT system, replace the “rec_orient” abstraction and the “p gyros” patcher with a
fitting module. In the case of mobile MT, the “mobile_data” abstraction is given inside the main folder.
See the Readme.txt for details on Motion Tracking installations.

LIST OF EXTERNALS AND ABSTRACTIONS


LAPTOP_1-Awakening

ambiencode~, ambidecode~, ambimonitor (Jan Schacher)
http://trondlossius.no/articles/743-ambisonics-externals-for-maxmsp-and-pd

chroma~ (Adam Stark)
http://c4dm.eecs.qmul.ac.uk/people/adams/chordrec/

contrast-enhancement (Michael Edwards)

dot.smooth, dot.std (Joseph Malloch et al.)
http://idmil.org/software/digital_orchestra_toolbox

ej.line (Emmanuel Jourdan)
http://www.e--j.com

fiddle~ (Miller Puckette et al.)
http://vud.org/max/

ftm, ftm.copy, ftm.list2col, ftm.mess, ftm.object,
gbr.fft, gbr.slice~, gbr.wind=, gbr.yin,
mnm.delta, mnm.moments, mnm.onepole
FTM and Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

89
jfc-spectral-tutorial3, melody2harmony (Jean Francois Charles)
https://cycling74.com/toolbox/live-spectral-processing-patches-for-expo-74-nyc-2011/#.Vh0sE2A-BE4

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)
http://www.thehiss.org/

newverb~ (free distribution)

OSC-route (Matt Wright)
roughness (John MacCallum)
http://www.cnmat.berkeley.edu/MAX

SpT.analsynth, SpT.makeharm (abstractions)
Spectral Toolbox (William A. Sethares et al.)
http://www.dynamictonality.com/spectools.htm

zsa.easy_flux (Mikhail Malt, Emmanuel Jourdan)
http://www.e--j.com/index.php/download-zsa/

LAPTOP_2-Awakscore

bach.roll, bach.score, bach.transcribe (Andrea Agostini, Daniele Ghisi)
http://www.bachproject.net

o.route (Adrian Freed)
http://cnmat.berkeley.edu/downloads

90
INDEX

p.64 Awakening
PRESENTATION
Concept
Preliminary note
Instruments
p.66 COMPOSITION
Actions
harp_1 (“Digital Harpist”)
harp_2 (“The Breath of Technology”)
harp_3 (“Free Harpist”)
harp_4 (“Classic Harpist”)
p.67 Scores
p.69 Time design
Time-space development
beginning and conclusion (“peripheral body of interaction”)
central part (“central body of interaction”)
Automated performance
p.70 beginning-conclusion
central part
p.71 Awakening
p.72 Time segmentation
p.73 GENERAL PERFORMANCE NOTES
Performer roles
Rehearsals
p.74 Harp resonances
p.75 DIGITAL INTERACTION_INDIVIDUAL NOTES
Harp_1
vocabulary
time
p.76 points
p.77 list of the connections
a few more interactions
p.78 clusters
p.79 gestures
p.80 HARP_2
modes of performance
p.81 digital interaction
description of the input sounds
schema of the effects
p.82 explanation of the effects
p.84 timbre analysis remarks
towards a vocabulary
p.85 HARP_3
p.86 HARP_4
p.87 TECHNICAL NOTES
Hardware equipment and setup
Microphones
Motion tracking
p.88 Audio settings
feedback extra-gains
harp inputs
threshold of analysis
final mix
p.89 Software
List of externals
Laptop_1-Awakening
p.90 Laptop_2-Awakscore
p.91 INDEX
p.92 List of figures

91
LIST OF FIGURES
p.65 Fig.A_1 Arrangement on stage
p.66 Fig.A_2 Schema of the interactions
p.67 Fig.A_3 Screen interface as it appears on Laptop_1
p.68 Fig.A_4 Screen of Laptop_2
p.69 Fig.A_5 Performing the acoustic feedback
p.71 Fig.A_6 From sound to interactive functions…
p.72 Fig.A_7 The piece starts with pure harp resonances…
p.76 Fig.A_8 The “Points-matrix”: the effects-console of Harp_1
p.77 Fig.A_9 Freeze
p.78 Fig.A_10 Volumes
Fig.A_11 Increasing Clusters
Fig.A_12 Delays
p.79 Fig.A_13 Spatial interface
p.80 Fig.A_14 Melody enhancer
p.81 Fig.A_15 Hyper-instrument console
p.83 Fig.A_16 The “pan-flute effect”
p.88 Fig.A_17 Setting section in Laptop_1

92
93
Les Demoiselles D’Avignon
Interactive Quartet

First performance: Reid Hall, Edinburgh, February 1st 2015,
by Dimitris Papageorgiou, Emma Lloyd, Clea Friend, Pete Furniss
Duration 11’. Dedicated to the Bologna Cello Project

Fig.D_1 The painting

Live recordings at:
https://youtu.be/Tj1VujA90QM
https://www.youtube.com/watch?v=AD0fknwQjRw&feature=youtu.be

PRESENTATION
MODEL
Les Demoiselles d'Avignon is an interactive quartet for bowed-string instruments.
The quotation of Pablo Picasso’s painting refers to the reconstruction of a reality where different
points of observation coexist, also because of the deformations of the space due to the implicit
action of the physical bodies.
The sculpted representation of the female figures (recalling African styles) resists the classic
concept of an objective visual-perceptual organisation. The portrait is not yet Cubist, but it clearly
anticipates a new spatial order, towards a plurality of dimensions and categories inside one single
collapsing surface.

FRAMEWORKS
The musical interaction of the quartet is obtained through five networked laptops: one for each
musician, plus one more generating the video in real-time.
The whole set of connections between the gestures and the digital interactions of each single player
creates the sound development and the aim of the work. The time-space (the musical form) of the
work is not imposed a priori; it emerges as a shared activity from the different points of view of
each musician acting as an augmented-instrument performer.
In this work the augmented cellos are based upon the real-time analysis of the sound as it combines
with the bow gestures performed on stage.

The musicians read the flows of analysis data, performing them as musical functions.
The software mappings are designed to build musical structures and to influence the other
players (processes, messages and scores), charging the habitual chamber-music gestures with
interactive extra-meanings.

94
The conscious feedback between the physical performance, its analytically monitored knowledge,
and its compositional use in real-time puts the human agents in a position to collectively
define shared spaces of action, mediation and exchange.
The current digital means allow actions and symbols to be unified in the same environment: in this
sense physical actions, scores and messages can be parts of a complex digital instrument allowing
real-time composition.

The system is conceived in order to recognise music expression by means of spectral sound
analysis, note detection and motion tracking (the latter organised for the description
and computation of some classical bowing styles).
The aim is to augment the traditional chamber music communication through a texture of functional
remote influences and a net of formal mediations between the performers.

NOTES
The work makes use of 5 MAX/Msp applications (one for every laptop).
Each system comprises music processing, scores and monitors, and calibration settings.

An external coordinator of the interaction is suggested, if possible a sound engineer. Due to the
interactive character of the composition, the musicians need detailed explanations of their system.
The scores are embedded in the real-time software and cannot be printed.

The musicians are here defined as Cello_1, _2, _3 and _4. The work can be performed by any type
of bowed-string ensemble, whether classical or experimental. The performers exploit a small
motion tracking sensor (Inertial Motion Unit, IMU) tracing their principal bowing styles
(Balzato, Tremolo, Staccato) and some bow dynamics. Cello_4 instead interacts only through sound:
if a different ensemble is used, “Cello_4” need not be a string instrument; any other monodic
instrument is allowed.

This document consists of:
-presentation
-interaction explanations
-general and individual performance notes
-setting details for the coordinator of the performance
-tech and audio specs

The performers should be aware of the following interaction notes before reading the performance
details. An explanatory video support for each performer is included.

95
INTERACTION
ROLES
Each musician has the responsibility to:
-invent and perform his/her chamber music
-interact with his/her specific digitally augmented cello
-generate, through music gestures, messages, scores and sound processing upon the other players
-influence, through the same music gestures, some global shapes of the overall music event.

The improvisation is thus shared, controlled and functional.

All the visual cues are received by the performers on the screens of their individual laptops,
positioned in front of them in a “music-stand” fashion.
The final behaviour of the composition is partly pre-designed in the software and partly
created interactively by the gestures of the musicians on stage.

These interactive global aspects are the following:

-Cello_1 generates the background colour of the screens of all the Apps, whose significance concerns some
modalities of the overall performance, above all an indication of the density of playing.
-Cello_2 sends an animated action score to the other musicians, and makes choices about the video.
-Cello_3 spatialises the electronic sounds produced by the ensemble, and in addition live-selects,
records, diffuses and processes sounds played on stage.
-Cello_4 sends a variable chord, as a shared tonal centre.

Other shapes can be preconfigured by the ensemble; a video is processed in real-time, responding
to the bowing styles and dynamics of the performers.
A full interactive score in pentagram form is received by Cello_4.

AUGMENTED INSTRUMENTS
The sensing system is based on motion tracking and audio analysis; the devices exploited as sensing
inputs are small inertial motion units (IMU) positioned under the frog of the bows, and contact
microphones on the bridges of the cellos. Each musician autonomously performs a totally different
augmented instrument in terms of sensing input, mapping space, kind of output and interactive role.
Each musician contributes to the consistency of the overall result through global controls
and remote dialogues: the same sound-gestural means (audio analysis and gesture
computing-recognition), aimed at global interactions, are driven by each musician towards
an individual electroacoustic sound palette.

Fig.D_2 The wireless IMU sensor under the bow frog

96
-Instrument_1 (Cello_1; Spectral): generates interactions by means of bowing styles captured
by the IMU. The sound output spectrally transforms its acoustic cello sound (freeze of the spectrum,
dynamic equalisation, transposition-decomposition of the sinusoidal/noisy partials).
Through bowing styles it sends variable background colours to the other players (colours are
intended as an interactive graphic score).

-Instrument_2 (Cello_2; Artificial): generates interactions by means of a hybrid gestural sensing
system (bowing styles captured by the IMU combined with spectral sound analysis). As output it
creates sounds of synthesis (physical models, additive synthesis, frequency modulation synthesis).
It sends graphic interactive scores to the other players.

-Instrument_3 (Cello_3; Sampler): generates interactions by means of bowing styles captured
by the IMU. It selects prerecorded files, and records and renders live fragments played by the other
musicians, applying transpositions, fragmentations and overlappings to the output materials.
It spatialises the sounds coming from the ensemble of augmented instruments.

-Instrument_4 (Cello_4; Harmoniser): generates interactions by means of the sound expressiveness
of the cello performance, captured through sound analysis computed at the note level.
The sound output is made of “canonic” transpositions of the input sound (four-voice harmoniser).
It sends variable chords to the other players (shared tonal centre) and receives an interactive score
in full common notation (built from the bowing styles of Cello_2).

SENSING SYSTEM
The digitally augmented instruments (hyper-instruments) are based on a motion tracking system
aiming to offer the musicians means of interacting with the digital composition through the same
gestures normally functional to the acoustic outcome: therefore without disturbing the classical
(or experimental) techniques.

Cello_1, _2 and _3 mainly interact through motion tracking, Cello_4 only through sound
(timbre analysis, note detection and expressiveness pattern recognition).

The former three musicians receive and monitor interpreted bowing styles, computed through
the interpretation of:
-Angle/Orientation of the bow movements on the horizontal axis
(from the low to the high string)
-Angle/Orientation of the bow movements on the vertical axis
(from “full-hair” bow position to “hair plus wood”)
-Global bow Energy (“velocity”)
-Energy of rotation (with respect to the instrument strings)
-Tremolo intensity
-Balzato intensity
(energy of orthogonal movement towards the string: Ricochet or Spiccato)
-Staccato intensity (Martelé, massive “alla corda” style)

97
The intensity of a bowing style is here intended as a sum of the global amplitude and velocity
of the pattern (a minimal detection sketch is given after the figure below).
Motion tracking applies to performed bowing styles as well as to silent bow movements in the air.

These seven continuous parameters are integrated with three types of impulsive bow-motion
recognition (functional to triggering and on/off interactions):
-impulsive rotations of the bow
-impulsive accelerations of the bow
-a hybrid system of bow-position recognition with respect to the strings

Below is a graphic summary of the principal gestures recognised and computed by the system. The
description of the audio-analysis parameters is reserved for the individual notes for Cello_4.

Fig.D_3 Motion Tracking functions
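The intensity sketch mentioned above, in Python: a toy estimate combining the amplitude and the velocity of an oscillatory IMU pattern; the window length and the weighting are illustrative assumptions, not the values computed by the system.

```python
# Toy "intensity" of a bowing style from a short window of IMU samples:
# a weighted sum of the pattern's amplitude and its velocity.
import numpy as np

def style_intensity(accel_window, dt=0.01):
    """accel_window: recent acceleration samples along one axis (in g)."""
    osc = accel_window - np.mean(accel_window)        # remove gravity/offset
    amplitude = float(np.max(np.abs(osc)))
    velocity = float(np.mean(np.abs(np.diff(osc)))) / dt
    return amplitude + 0.01 * velocity                # assumed weighting

tremolo = 0.8 * np.sin(np.linspace(0, 40 * np.pi, 200))   # fast oscillation
legato = 0.2 * np.sin(np.linspace(0, 2 * np.pi, 200))     # slow, small movement
print(style_intensity(tremolo) > style_intensity(legato))  # True
```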

98
COMPOSED INTERACTIONS
The augmented instruments have preferential trajectories of dialogue.

-Cello_1 and Cello_3 drive their electroacoustic processes through bowing styles, but the intensities
of some styles (Tremolo, Staccato, Balzato) are computed either individually or as a reciprocal
gradient of similarity between the two performers.
In this way the two musicians (contrasting or imitating each other) strongly influence the sampling
processes applied to the electronic sound of Cello_3.

-Cello_2 and Cello_4 drive together a crossed system highly reliant on the pitches and notes
produced in real-time. Cello_2, through the bowing styles Tremolo, Staccato and Balzato, generates
pitch and rhythm in one of its virtual instruments, but the same module also generates the
polyphony (density, rhythm and pitch transposition) of Cello_4, whose augmented instrument
is a four-voice harmoniser; in addition, the same module of Cello_2 generates the notes received
by Cello_4 in its interactive pentagram.
Part of the sound synthesis of Cello_2, the electronic polyphony of Cello_4, and its score are
therefore strictly correlated in terms of rhythm and intervals, since they are generated by the same
gestures, produced by Cello_2.
On the other hand, Cello_4 has the power to activate and mute the four virtual instruments of
Cello_2 by means of the melodic intervals of its performance.
Cello_4, through its musical expressivity, can also influence the resonance, intensities and shapes
of Cello_2’s electronic sounds.

As described above, all four musicians have a role in affecting the global development
of the composition:

-Shared tonal centre (interactive variable chord) sent by Cello_4
-Spatialisation driven by Cello_3
-Action score sent by Cello_2
-Background colour (as a graphic animated score) sent by Cello_1

Tonal centre
Cello_4 controls the shared tonal centre through an algorithm tracking his/her most frequently
performed recent notes.
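The exact algorithm is internal to the patch; one plausible reading, sketched in Python, keeps a sliding window of recent detected notes and takes the most frequent pitch class as the shared centre (the window size is assumed).

```python
# A plausible sketch of the tonal-centre logic, assuming a 24-note window.
from collections import Counter, deque

recent = deque(maxlen=24)            # sliding window of recent notes

def tonal_centre(new_midi_note):
    recent.append(new_midi_note % 12)
    pc, _count = Counter(recent).most_common(1)[0]
    return pc                        # 0 = C, 1 = C#, ... 11 = B

for n in [62, 69, 62, 65, 62, 60]:   # D appears most often
    centre = tonal_centre(n)
print(centre)                         # 2 -> D
```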

Spatialisation
Cello_4 has an autonomous, fixed quadraphonic output system.

The other three musicians output their sound in stereo; their stereo pairs are individually spatialised
by the bowing styles of Cello_3 (see Cello_3 individual notes).
The overall spatialisation can be octophonic or quadraphonic.

Action score
Through a conventional bow gesture Cello_2 interrupts, for a brief period, any current activity
of Cello_1, _2 and _3. An interactive image appears on their screens in sync and has to be
performed by everybody with intensity and impulse.

The image consists of a stylised cellist where:

-A quick-moving small segment shows the left-hand position on the
fingerboard and the four strings
-A coloured bar represents the bow:
black = more bow pressure on the string,
yellow = light pressure,
bar movements up and down =
bow from sul-tasto towards near-the-bridge,
red pointer = point of contact of the bow upon the string (between
frog and point)

The movements of the score are generated by the real-time sound
analysis of each cellist.
Fig.D_4 Action score

Background colours
The background colour of each laptop has a crucial impact on the macro-form.
Each musician receives the same colour sequence at the same time, but with different gradations
of brightness. At the beginning all the musicians will receive a black background (with the
exception of Cello_1, who receives a white background).

Black = Silence; White = play a solo.


The overall length of the work is 10 minutes, preceded by 30” of Cello_1 solo.
These default time lengths can be modified before the performance inside the settings of the
Cello_1 App (see setting section below).
During the initial solo, some bowing styles of Cello_1 are associated with colours.

After these 30” the ensemble interaction starts, as the musicians receive the colours created
by Cello_1: the background colour is a trace indicating how to play.
During the ensemble interaction the temporal development of the colours is 20 times slower
than that of the original bow gestures of Cello_1 which generated them.
The initial 30” are therefore the seed of a macro-formal message, and Cello_1 is aware of that
during his/her solo (see individual instructions).

The musical meaning of the colours should be a shared ensemble decision made in advance,
regarding character, intensities, mood and techniques of the performance
(but a loose improvised interpretation of the colours could also be appropriate).
The only fixed interpretation regards the meaning of the parameter of brightness vs. darkness.

Bright = increase in the active generation of original musical materials.
Intermediate = short musical commentary, accompaniment, dialogue, digital interaction.
Dark = decrease in originality, limiting the performance to sound gestures that only affect the
interaction with the other musicians and the overall musical shapes.

After a few rehearsals the musicians easily learn how different these two performance
styles are: the first flowing and expressive, the second discrete, atomised and functional.
The decreasing gradients of activity (solo, dialogue, accompaniment, commentary, single gesture,
silence) imply an increase of functional and structuring “compositional” detached sound gestures,
and two distinct performance modes emerge:

-1) fluid, improvisatory, individualistic

-2) objective, detached, compositional, influencing the external communications and the sounds
of the other people.

The composer has however predefined some internal envelopes of brightness, preserving the colour
but time-shaping the individual quantity of light differently for each performer.

All the musicians will therefore receive different gradations of brightness: in this way Cello_1
will be very active at the beginning, Cello_4 predominant towards the end of the piece, and Cello_2
and Cello_3 will perform with peaks of foreground action at intermediate points
of the performance.

CIRCUIT

Fig.D_5 The circuit

GENERAL INSTRUCTIONS FOR THE WHOLE ENSEMBLE

The laptop screen is at the same time a monitor and an interactive score, upon which to model the
ensemble improvisation, the reciprocal influences and the technical control.
Each musician is provided with a laptop (containing one individual MAX-Application), one sound
card, and at least one microphone (see the final section “hardware equipment”).
The laptops are linked via Ethernet.
The performance notes don’t include any scores: since the composition is an ensemble interaction,
it behaves much more like a collective instrument, requiring explanatory details rather than
notational instructions. Before starting the performance, a general review of the settings
is necessary. The settings are contained inside each App, sometimes accessible as hidden modules
by double-clicking the corresponding label.

SETTINGS
Each App contains three setting sections:
1) Setting: this section is positioned in the upper-left part of the screen.
Each label inside a red or yellow border can be opened by double-clicking on it.
These interfaces are called: “p network”, “p audio-settings”, “nb.bowings”.
2) Input/Output: sound monitors and number boxes (filled with default values).
These numbers set the input and output gains normalised between 0 and 1.
By typing or dragging the decimal numbers it is possible to modify the gains.
3) Calibration: number-boxes (monitors and settings).
Calibration can be manual or automatic.

A few parameters need to be checked inside these modules before every performance.
-the motion tracking monitors have to show flowing data
-check that the right sound card is active (double-click “p audio-settings”)
-the input/output gains should be appropriate
-in case of lack of motion tracking control, a new calibration needs to be performed
See video description.

PERFORMANCE: START!
After organising the settings (the first time it will be a rather complex procedure), the calibrations
should be remembered by the Applications, and only a brief checkup is recommended before every
new performance (above all the sound card check inside “audio-settings”).
The performance starts when Cello_4 presses “Spacebar” on his/her laptop. At this moment Cello_1
receives two off-beat flashes, and then immediately starts the opening solo.
During the solo the background colour of Cello_1 is white, while it is black for the other musicians.
After the time of the solo (30” by default) all the laptop backgrounds start to shift across different
colours, and then the ensemble performance starts to evolve. At the end, all the laptops will be black
again and the ensemble silent: Cello_4, being the last one to receive a steady black background,
turns off the system by pressing “Enter”.

CELLO_1. SPECTRAL
Video performance instructions at: https://www.dropbox.com/s/4suc3k9ecm9xcfj/cello1-instructions.mp4?dl=0

Fig.1-C_1 Cello_1 application

ELECTRONIC SOUNDS AND INTERACTIVE ROLE OF CELLO_1


Cello_1 is the instrument that explores the cello timbre most deeply. This feature is underlined
by its foreground role in the initial part of the composition, where the main sound qualities are
introduced, upon which the other players will act by development and contrast.
The electronic sound of Cello_1 involves spectral modulations of the live performed cello, therefore
not evolving by contrast, but instead deepening and interleaving the cello acoustics.

Performance could be shaped through:

-a frequent use of bow-wood sonorities: this bow rotation amplifies the direct cello sound
while attenuating the electronics amplitude, in a dry-wet fashion.
Performing with “full hair” on the strings instead increases the presence of the electronics.
-scraping, noisy and on-the-bridge sonorities induce broad and rich spectral responses, while a fuller,
more conventional cello sound produces more static responses from the electronics; energetic
bowing styles, sometimes also performed in the air, can help to mix extreme cello sounds with
more dynamic electronic interactions. Timbre is globally oriented towards a clear cohesion and
intimacy between cello and electronics.

The composition opens with a solo, immediately starting after two off-beat flashes on the screen.

The solo is improvised, but it should be prepared in advance, taking care of the following
responsibilities towards the ensemble:
-exposition of the sounds (obviously 1 or 2 virtual instruments are to be immediately chosen
and opened by bow triggers)
-development of a tonal centre (freely following the chord sequence written on the screen, which
during the solo is fixed with the pitches C, C-sharp, D-sharp, F-sharp, A-sharp)
-choice of a clear sequence of Tremolo and Balzato bowing styles with precise dynamics
of rotational energy across the strings: as described below, the evolution of the overall background
colour macro-form received by the whole ensemble throughout the performance depends on these
bowing styles alternating during the initial solo.
The opening solo should be created so as to musically mix these three obligations.
The solo is signalled by the white background colour: when colours start to appear the
ensemble music begins. These instructions take for granted that the section “composed interactions”
on pp. 4-5 is fully known. The overall time-envelopes of brightness/darkness show the tendency for
Cello_1 to fade out from an active foreground role at the beginning towards a final stillness.
Bright colours mean production of flowing and original musical materials; intermediate colours mean
gestural and more detached sound commentaries, performed with full attention towards the
ensemble. The darker the colour, the more the performance narrows to the dialogue
with Cello_3 and to its output (similarity vs. diversity in the intensities of Tremolo,
Staccato and Balzato between Cello_1 and _3 strongly affects the output sound of Cello_3).

FEATURES AND INTERACTIONS


From the very beginning the performance evolves progressively through these focal sections:
1) Solo, building a macro-form (white background)
2) Active flowing sound interactions (bright backgrounds)
3) Short commentaries and dialogues with Cello_3 through bowing styles (dark backgrounds)
4) Silence (black background)

-1) The solo sets a sound exposition and a melodic-tonal centre. This exposition has to be integrated
with a clear sequence and mix of the two bowing styles Tremolo and bouncing, performed with
different intensities of rotation across the strings. The intensities of Tremolo, bouncing and
bow-rotation are recorded by the system, transformed into colour messages, and sent to the
backgrounds of all the laptops when the solo is concluded. Tremolo generates Red intensity, Bounce
(Balzato) generates Green intensity, bow-rotation generates Blue intensity. Mixed bowing styles
generate mixed colours; a single bowing style generates a pure colour. Bow stillness creates
darkness, and bow hyperactivity brightness. These values are recorded only during the time of the
solo: at the precise moment the solo finishes the system stops recording and instead starts to
output the background colours for everyone (this is the signal that the ensemble part is beginning).
The output time of the colours is time-stretched and lasts until the end of the work. If the solo lasts
30” and the work 10’, the initial bowing styles will produce a flow of colours 20 times
slower, affecting the subsequent macro-form messages as a seed (e.g. the background colour
appearing after two minutes of ensemble performance corresponds to the bow
movements performed at six seconds into the solo).
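
A minimal Python sketch of this seed mechanism (hypothetical names and frame handling; the actual patch logic is not reproduced here):

def bowing_to_rgb(tremolo, balzato, rotation, activity):
    """Tremolo -> Red, Balzato (bounce) -> Green, bow rotation -> Blue;
    overall bow activity scales brightness (stillness -> darkness)."""
    return tuple(round(255 * activity * c) for c in (tremolo, balzato, rotation))

# one RGB frame per analysis tick, recorded only during the solo:
solo_frames = [bowing_to_rgb(0.8, 0.1, 0.2, 0.9),
               bowing_to_rgb(0.0, 0.7, 0.3, 0.5)]

def colour_at(t, frames, solo_len=30.0, piece_len=600.0):
    """Time-stretched playback: with a 30-second solo and a 10-minute
    piece, the colours flow piece_len / solo_len = 20 times slower."""
    solo_time = t * solo_len / piece_len       # map piece time to solo time
    idx = int(solo_time / solo_len * len(frames))
    return frames[min(idx, len(frames) - 1)]

print(colour_at(120.0, solo_frames))   # colour at 2' <- bow gestures at 6" of solo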

-2) The more active and flowing-creative sound interaction happens during the solo and during
the first part of the performance (the brightness of the background colour telling how active
to be). The sound interaction is afforded by bow-impulsive triggers opening and closing the virtual
instruments (see the last section below on virtual instruments and the video example).
The internal response of the virtual instruments is played through the bowing movements,
by internal mappings that can be visually followed from the laptop screen. The mappings avoid
linear parametric approaches, permitting a global and musical interplay allowed by the complex
behaviour of the modules employed (see footnotes for more info) and by mappings oriented
to navigation instead of punctual control. The aim is to treat the electronic sound as if it were
a “normal” musical instrument whose responses can be logical but highly complex and
non-obvious: they have to be mastered through practice and knowledge of the specific character
of each virtual instrument. The electronics can be attenuated, and the cello amplified more, when
the bow plays with less hair and more wood (tilt towards 0.), producing a slightly scratchy sound:
the same effect also happens if the bow moves in the air with the same tilt value.

-3) As the background darkens the performance starts to reduce to short detached sounds, and the
bow gestures are no longer oriented to producing a flowing sound but instead mainly interfere with
the sound of Cello_3, by means of reciprocal bow movements. The similarity or contrast of bowing
styles (Tremolo, bounce and Staccato) between Cello_1 and _3 strongly affects the sound output
of Cello_3, mainly involved in transforming sound files. Cello_1 takes no notice of this collateral
effect when focusing on making his/her own music, but as the protagonist role reduces, the aim
of interference and bow dialogue starts to be significant.
This bow dialogue is detected as the difference between the bowing intensities of the two
performers in relation to Staccato, Tremolo and Balzato:
-similar Staccato (irrespective of being intense, lazy or absent) -> more dense-overlapping sound
material from the sound-file output of Cello_3
-contrasting Staccato (i.e. one performer plays Staccato, the other one Legato) -> short-intermittent
output from the sound files of Cello_3
-similar intensities of Tremolo -> the files from Cello_3 are transposed lower (and different
intensities transpose higher)
-similar intensities of Balzato -> the direct sound of Cello_3 is transposed lower (and different
intensities transpose higher)
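
One plausible reading of this similarity computation, as a minimal Python sketch (the actual smoothing in the patch is not documented here): the per-style absolute difference of the two intensities, so 0. means identical behaviour and 1. full contrast.

def similarity_gradients(cello1, cello3):
    """Per-style absolute difference of bowing intensities (both 0.-1.):
    0. = same behaviour (e.g. both Staccato, or both absent),
    1. = full contrast (one Staccato, the other Legato)."""
    return {style: abs(cello1[style] - cello3[style])
            for style in ("tremolo", "staccato", "balzato")}

c1 = {"tremolo": 0.8, "staccato": 0.1, "balzato": 0.0}
c3 = {"tremolo": 0.7, "staccato": 0.9, "balzato": 0.0}
print(similarity_gradients(c1, c3))
# similar Tremolo -> files transposed lower; contrasting Staccato ->
# short-intermittent output from the sound files of Cello_3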

-4) Silence when the background is black (the second part of the performance progressively fades
out, leaving final prominence to Cello_4)

-5) Cello_2 sometimes sends an action score, which unexpectedly appears as
a window on the screen. The score arrives synchronously to Cello_3 and Cello_2
itself. During these brief periods stop any previous musical activity, and perform
the gestural suggestions collectively, with intensity (see “composed
interactions” above).

Fig.2-C_1 Action score

THE FOUR VIRTUAL INSTRUMENTS
The instruments are open and closed by the four quick-impulsive triggering rotations Up, Down,
Internal, External, better responding as in-the-air-bowings. If the virtual instruments are closed,
no sound at all will be output.
It is possible to keep open more than one instrument mixing the resulting sounds as an example one
freeze added to one of the real-time instruments could result in a dynamic live effect upon a groove.

The internal nuances of each instrument are consequences of the bowing styles performed live;
a detailed description of the virtual instruments’ internals therefore appears necessary.

Sound characters
The pair of freezes shown in the left part of the screen, when active (in the position ON),
live-record a tiny portion of input sound at the moment of the quick-impulsive down-bow
(the trigger is underlined by the yellow flash). The sonogram builds up the sound representation
of the recorded cello input: the stronger and more brilliant the captured sound, the darker and more
shaped the sonogram, and the more powerful the sound output. Good synchronisation is necessary
between the down-bow impulse (to be performed on the string or otherwise in the air) and the cello
sound the player decides to capture.

The freeze process captures the sound in both modules at the same time (if they are open in the
position ON). Any new freeze overwrites the previously captured sound, but the sound inside the
module still plays after the effect has been closed, until a new down-bow impulse is performed,
cancelling the recording.

-Static-freeze records a very short chunk of sound, whose length is determined by the tilt position
of the bow during the down-bow impulse (with tilt near zero the recorded chunk will be extremely
short): the length of the frozen sound is given as a number of frames inside the nearby number box.
-Dynamic-freeze instead records 2.5” of sound, allowing for a broader interaction whose nuances
upon the frozen sound are controlled by several bow gestures in combination.
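
As an illustration only (the frame rate and the linear mapping are assumptions, not taken from the patch), the tilt-to-length mapping could look like:

def freeze_frames(tilt, frames_per_second=86):
    """Static-freeze: the chunk length follows the bow tilt at the
    down-bow impulse (tilt near 0. -> extremely short). The App shows
    the resulting frame count in the nearby number box."""
    return max(1, round(tilt * frames_per_second))   # up to ~1" in this sketch

print(freeze_frames(0.05))   # near-zero tilt -> a couple of frames
print(freeze_frames(0.9))    # high tilt -> a much longer chunk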

The other two instruments, the real-time instruments, instead process the live sound directly.

-The third instrument, called spectral deconstruction, separates the sinusoidal
and the noisy components of the instrumental input. When sinusoidal components are enhanced, the
effect produces a sort of “pan flute” transformation: few or many sinusoids can contribute,
depending upon the bow controls of the effect. The noise-component enhancement instead pushes
the sound to be aggressive and very responsive: the sound can also be transposed and differently
modelled. Inside the spectral deconstruction effect the final result is highly dependent on the input,
in the sense of its flat, light, rich or dense timbre qualities.
-Dynamic equaliser (spectral delay) is a deep and selective 64-band equaliser operating
on the cello input. It is designed to shift between fixed-EQ states and very dynamic
changes, sounding like a sort of cascade-EQ. The single bands are subject to delays and feedback
in order to mix their individually different persistence.

All the effects are shaped and controlled in their time behaviours by the bowing styles.
The cellist therefore performs a double action upon the acoustic instrument and upon the electronics
whose shapes are influenced and mastered by the bow gestures.

Internal nuances and controls


The controls schematically summarised below are visually represented in the virtual-instrument
monitors of the App, and should be aurally-visually explored.

"
Fig.3-C_1 Freezing spectrum instruments

1) Dynamic-Freeze
The captured, frozen sound is dynamically shaped in this way:
-playing on low strings the portion of sound output is quite thin; rolling towards higher strings the
portion of sound performed is larger, and the output will sound more blurred and “confused”,
as shown by the blue zone inside the sonogram.
-the intensity of bouncing (Balzato) affects the playback velocity of the frozen
sound (little bouncing -> static sound; intense bouncing -> quick playback;
no bouncing -> reverse playback).
-the more intense the Tremolo, the more artificial and light the output: the denoise effect
proportionally selects the most prominent sinusoidal parts of the captured sound.
-the bow Energies of velocity and rotation help to define clear and quick transients, which are
otherwise blurred and slowed down when the bow is slow and moving on the same string.

Two small red and blue balls move inside the visual control space as bow monitors, though the
sonogram returns a more consistent visualisation of the sound processes in action, caused by the
bow interactions.

2) Static-Freeze
-an intensely bouncing (Balzato) bow increases the denoise effect
-an intense rotational bow activity increases the smoothing effect, making the sound static and dense.
These sound effects are again monitored through a small red ball moving inside the control space;
the size of the grey circle shows the time length of the tiny freeze, though the aural response
and the sonogram should be enough.

"
Fig.4-C_1 Real-time spectral instruments (dynamicEQ)

3) Spectral_deconstruction
-low vs. high strings bow-position moves the red ball high vs. low,
-Tremolo intensity moves it left and right.
It is a visual cue for navigating a non-obvious control space, intersecting different
nodes mapped to specific sound effects of density, transposition, sinusoid extraction and noisiness.

4) Spectral delay (dynamic equaliser)

-the intensity of Tremolo shuffles the resonance bands of the spectrum, moving from high to low
(graphically from right to left) in a wave fashion: this is represented in the dynamicEQ upper
monitor.
-by varying the intensities of bow-bouncing, the delay effect operates upon some detected
spectral bands: EQdelayed middle monitor.
-Staccato increases the “feedback” (the tendency of the effect to persist): EQfdback lower monitor.
-the intensity of rotation across the strings makes the equalisation shifts much quicker and more
dynamic, whereas playing on one same string keeps the equalisation fixed and still, as underlined
by the monitor shifts.
-bow velocity (quickness) and Tremolo irregularity increase the volume and the reverb of the
effect, as shown by the input and output volume-monitors.

Obviously this detailed account has to be experienced globally, inside the concrete aural/visual
interaction, in order to be effective.

CELLO_2. ARTIFICIAL
Video performance instructions at: https://www.dropbox.com/s/789z2g5lhwccvda/cello2-instructions.mp4?dl=0

Fig.1-C_2 Cello_2 application

ELECTRONIC SOUNDS AND INTERACTIVE ROLE OF CELLO_2


This system is very gestural, and it generates synthetic sounds. Unlike the other musicians, the
electronics act here as a contrast with respect to the cello sound, and the results will be far from
obvious. The cello sound, though, is crucial for influencing the artificial sounds, which are mainly
generated by the bow movements (on the string as well as in the air). 

You can gain good control of the electronics by mixing bowing styles and sounds creatively.
While the computer interprets your musical gestures and transforms them into sounds of synthesis,
it sends you visual monitors of your movements and of the sounds you are producing.
In this way you receive clear feedback about how to invent and organise your performance.

Your electronic sounds are not totally foreseeable, and in order to get interesting results you will
have to practise, listen and act intuitively. Notice that the acoustic cello sound adds to the electronic
sound: in a sense you are playing two different instruments, in coordination and at the same time.


Received messages

A variable chord (created by Cello_4 and sent to all the players) appears on the screen: freely
improvise upon this tonal centre.


The electronic sounds are created through four virtual instruments, represented by differently
coloured buttons located in the central part of your screen: when the button is crossed the
instrument is open, and below it you will see green movements representing the amount of sound
produced.

Fig.2-C_2 Four virtual instruments

But you don’t decide when to open and close these instruments (the choice comes from Cello_4).
You can only modulate the sounds internal to the effects when they are open.
The task is not simply to modulate the electronic sounds of your individual augmented cello: you
have at your disposal further techniques for interacting with the other musicians, in particular with
Cello_4 (whose electronic instrument multiplies and transposes his/her sound in a “canonic”
fashion).

Messages to Cello_4
-your bowing styles (Tremolo, Balzato, Staccato) determine the kinds of harmonisation upon
Cello_4
-you decide how many “voices” make up the Cello_4 “counterpoint”
-Cello_4 sometimes receives a staff score in real-time, whose notes are created by your
bowing styles.

Messages to the ensemble

-when you perform a special bow gesture in the air, an interactive action score appears on the
screens of Cello_1 and _3 (and yours too).
-some impulsive bow movements have the function of strongly modifying the video.


Timings
As described above in the general instructions, the background colour of the screen has a precise
meaning:

-Black (during Cello_1 solo): silence
-Dark (first part of the interaction): rarefied sound commentaries; perform some gestures as
messages to the ensemble.
-Bright (central part of the interaction): play more intensively; when the colour is white it signals
your solo
-fading to Dark (second part of the interaction): play less intensively, but keep alive the gestures
modifying the Cello_4 electronics (you will notice that he/she is playing much more in the second
part of the work).

-again Black (ending part): stillness
INTERACTIONS
Messages to the ensemble (first part of the interaction)

-“quick-impulsive horizontal-bow”.
An extremely impulsive and horizontal (parallel to the floor) bow gesture
immediately triggers an action score for Cello_1, _2 and _3: you will interrupt
any current activities in order to play this animated graphic score together.
The strength of your impulse has an impact on the length of this performative
window, which will in any case be short-lived.
The performance instructions for this score are explained above in the
general instructions (p. 7).

Fig.3-C_2 Action score

Don’t launch this effect too many times. It is especially suitable for offering immediate vitality
and contrast to the performance: find the right moment, and notice that the effect is active after
the initial solo of Cello_1 and will be disabled in the final part.

-“quick-impulsive triggering rotations”.
Four quick impulsive rotations of the bow in the air (up-down-external-internal) modify the state
of the video:

Internal = Black
External = Coloured
Up = visible score
Down = full video

Fig.4-C_2 Message to video

This interaction is active in the first part of the work; after the first receipt of the score by Cello_4
the process becomes automatic, and after that you’ll be able to concentrate better on the musical
aspects.

MESSAGES TO CELLO_4 (mainly in the last part of the interaction)
Electronic sounds

You decide how many voices “harmonise” Cello_4 (and you can change their number).

Each of your four strings is connected with one of the four voices of Cello_4.

When you produce a sufficiently clear sound attack, you will see on
your screen a yellow flash (named “bangstring”): at this point the system
attempts to predict which of your strings you are currently playing;
as a consequence the corresponding voice will be opened inside the electronic
polyphony of Cello_4. If you put your bow on a string and you pass from
the position “full-hair” to the position “hair plus wood”, you will close
the corresponding voice.

Fig.5-C_2 String recognition

These string positions are recognised by the system whether the bow is upon the string or flying
in the air.


In the bottom-left part of your screen you can find a small monitor showing
how many voices are active in the polyphonic system of Cello_4.
Four red buttons = all voices active; no red button = silence.
In this way you can model the density of the electronics of your colleague.

Fig.6-C_2 To Cello_4 polyphony

You can also model the quality of the last opened voice (without excessive worry about the
details, the global controls are):
-the intensity of Staccato increases the amplitude of that “voice”,
-the intensity of Balzato transposes it higher (and less or no Balzato = lower)
-the intensity of Tremolo increases the Delay (intense Tremolo = distant repetition up to 5”; less
or no Tremolo = close repetition down to 1/5 of a second)

It is worth noting that an identical system controls one of your virtual instruments, as will
be described later.

Score

A red flash in the upper part of the screen signals that the Cello_4 score
is building up as a consequence of your bowing styles (this will happen much
more frequently in the second part of the performance). A cross inside
the white square indicates that the process is working, until the cross disappears.

Fig.7-C_2 To Cello_4 score

The Cello_4 score is notated in 4/4 tempo. How to influence the interactive score:

-Legato produces long notes (2/4, 3/4, 4/4)
-Staccato: the more intense the Staccato, the quicker and more irregular the score rhythms will be
-Balzato: no Balzato = low notes; the intensity of Balzato raises the pitch register
up to very high notes in the treble clef
-Tremolo: when intense and irregular it increases the variability of the melodic contour.
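
A condensed Python sketch of one way these mappings could generate a 4/4 bar (the real generator lives in the Cello_2 patch; the duration pools, register range and contour mapping below are assumptions):

import random

def generate_bar(legato, staccato, balzato, tremolo, seed=None):
    """Returns a bar of (duration_in_quarters, midi_note) pairs.
    Legato -> long values; Staccato intensity -> quicker, more irregular
    rhythms; Balzato intensity -> higher register; Tremolo -> wider contour."""
    rng = random.Random(seed)
    centre = 48 + round(32 * balzato)     # from low notes up to high treble
    spread = 2 + round(10 * tremolo)      # melodic-contour variability
    notes, left = [], 4.0
    while left > 0:
        if legato >= staccato:
            dur = rng.choice([2.0, 3.0, 4.0])           # long notes
        else:
            pool = [1.0, 0.5, 0.75, 0.25][: 1 + round(3 * staccato)]
            dur = rng.choice(pool)                      # quick and irregular
        dur = min(dur, left)
        notes.append((dur, centre + rng.randint(-spread, spread)))
        left -= dur
    return notes

print(generate_bar(legato=0.2, staccato=0.9, balzato=0.6, tremolo=0.4, seed=1))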

THE ARTIFICIAL SOUNDS: VIRTUAL MUSIC INSTRUMENTS


After the details about the influences of Cello_2 upon the ensemble, this section describes the
internal sound interactions of Cello_2 as an individual augmented instrument. These aspects will
be more present in the central part of the performance, and their density will be tuned to the
background brightness of the laptop screen.

The musical gestures (sounds and bowing styles) modulate the effects of the Virtual Instruments
only when they are opened (by Cello_4): if several instruments are open together, the same
musical gestures act in parallel; if all the instruments are closed no sound will be audible, except
the amplified acoustic cello.

Fig.8-C_2 The instrument “Mandolins”

-White instrument “Mandolins”
The sound is produced by physical modelling synthesis, simulating an artificial plucked string.
The system is the same virtual engine affecting the Cello_4 “polyphony”, and obviously
the same gestures affecting the “mandolins” have a parallel influence upon the Cello_4 electronics.

Each cello string is connected to a different virtual mandolin, the actions upon the mandolins work
only when the effect is active (white button crossed).
The visual monitor is in the central-bottom part of the screen.

It is the only instrument working through an explicit and visible connection between the bow
gestures and the electronic sounds.

Functions:
A clear sound attack produces the yellow flash “bangstring”, signalling that the system is trying
to detect which string you are currently playing.

The corresponding mandolin starts to play and you can modulate its notes (if other mandolins are
active at this moment, they keep steady notes and rhythms as a drone, or “bordone”).

In the monitor “voices On-Off” you can see which mandolins are currently playing (crossed
or uncrossed buttons); the numbers in the red and yellow boxes flow only for the currently
selected “soloist” mandolin.


You modify the notes of the “soloist” mandolin with the same procedures described above:

-Staccato intensity -> quicker rhythms; Legato -> slower rhythms
-Tremolo intensity -> the mandolin note shifts down in small intervals
-Balzato intensity -> the mandolin note shifts down in larger intervals
If the bow stays still, the notes shift very high and at a very slow rhythm.

When a new string is selected (through the “bangstring” process), the last note values of the
previous “mandolin” remain unchanged. For this reason it is suggested to cross from one
“mandolin” to a new one in a very dynamic fashion, to avoid the last note pattern staying
fixed on high notes because of unconscious intermediate bow rests.

The overall velocity of the bow increases the mandolin volume, and the cello timbre has a slight
influence on the mandolin timbre.

When you decide to silence one mandolin, place the bow in position “full hair” on the
corresponding string, and shift it to the position “hair plus wood” on the same string (the process
is only gestural, you can perform it silently without playing).


Violet instrument: “Electric guitar”.

The sound is produced by frequency modulation synthesis (FM), with added flanger and reverb.
In this case the cello sound is extremely important as a source of the modulation.
The five carrier frequencies of the FM are the same as your main cello sound partials (sounds on
the bridge or very noisy sounds produce high pitch variability, a more conventional cello sound
retunes the sounds of synthesis together).
The overall amplitude of the effect is regulated by your cello amplitude.

Cello_4 can increase the timbral density of your effect through his/her expressiveness.
This instrument is complex and it has to be governed more by practice and listening than by
theory, but it is useful to know that:
-Tremolo intensity increases the volume of the flanger (a sort of “Hendrix” effect) if Cello_4 is
playing more “espressivo” at that moment.
-Playing the cello without changing string with the bow, the flanger starts to be massively coloured,
but in the presence of intense bow rotations across different strings the flanger effect decreases,
and the overall sound colour changes (only the FM remaining active).
The FM is mainly modelled by bow velocity, bow tilt (“full hair” vs. “hair plus wood”)
and different gradations of Staccato vs. Legato bowing.

In terms of FM synthesis the result will be:

Slow bow -> much more harmonic sound
Fast bow -> dense sound, intermingled in timbre
“Full hair” -> aggressive sound
“Hair plus wood” -> simpler and more resonant sound
Legato -> consonant timbre
Staccato intensity -> inharmonic timbre
Balzato intensity -> increasing reverb/resonance of the sound

Therefore a very slow-legato-wood playing style will allow a sort of “spiritual” sound effect
(enhanced if the bow is left still in the air with the cello resonating). Conversely, energetic bowing
styles afford different contrasting effects, all producing artificial copies of the cello sound.
Notice that if you wish to completely silence the effect you need to stop the string with your hand,
otherwise the effect will maintain the cello string resonances.
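
To fix ideas, here is a minimal single-oscillator FM sketch in Python with a hypothetical bow-to-parameter mapping consistent with the list above (the patch's five-carrier design and its actual mapping curves are not reproduced):

import math

def fm_sample(t, carrier_hz, mod_ratio, index, amp):
    """One sample of simple FM: a carrier phase-modulated by one modulator."""
    mod = math.sin(2 * math.pi * carrier_hz * mod_ratio * t)
    return amp * math.sin(2 * math.pi * carrier_hz * t + index * mod)

def bow_to_fm(velocity, tilt, staccato, partial_hz, cello_amp):
    """Hypothetical mapping: slow bow -> harmonic (near-integer ratio,
    low index); fast bow + full hair (tilt ~1) -> dense, aggressive
    (high index); Staccato -> inharmonic (non-integer ratio).
    The cello amplitude regulates the overall volume."""
    ratio = 1.0 + 0.37 * staccato            # inharmonicity with Staccato
    index = 0.5 + 6.0 * velocity * tilt      # brightness / aggression
    return dict(carrier_hz=partial_hz, mod_ratio=ratio, index=index, amp=cello_amp)

params = bow_to_fm(velocity=0.2, tilt=0.9, staccato=0.0,
                   partial_hz=220.0, cello_amp=0.5)
print(fm_sample(0.001, **params))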

Yellow instrument: “Glissati”

Additive synthesis: in this case too the partials of the cello sound feed the artificial ones
(resonators), but here there is no audible resemblance between the acoustic and electronic sound.
The timbre oscillates between pitched glides, whistles and light/foggy small bells.

This instrument's interaction is likewise impossible to describe in detail, because the musical
gestures interlace in a complex fashion; in this case too an “espressivo” performance by Cello_4
contributes to the overall volume and resonance of the effect.


Low notes (especially if dark in timbre, i.e. sul-tasto, resonant pizzicatos etc.) make the artificial
pitch glides slower and lower in tuning; high cello pitches (especially with Balzato bowing styles)
interrupt the continuity of the pitch glides. Tremolo intensity raises the volume and the presence
of hidden resonating sonorities. Contrasting timbral differences are produced if the bow crosses
the strings rapidly rather than playing on the same string.

The overall effect is quite interactive: the artificial sounds do not respond synchronously to the
cello, and the instrument requires some prior practice and exploration.

Green instrument: “Resonant percussions”

It is a double system of physical modelling synthesis:

A) deformation of the cello sound,
B) resonant percussion (like a huge gong or a beaten piano string-board).

-every energetic sound attack of the cello (monitored by the “bangstring” flash) produces
a percussive output

-a nervous and quick bow conduction builds up selective bands and contrasting spectral zones;
as a consequence extreme timbres and intonations arise when the percussion happens

-intensity of Staccato and bow-rotation amplify and characterise these nodes of resonance;
on the contrary, legato styles on the same string soften the timbre and make it more changeable.

-playing with a small portion of hair and no Balzato increases the resonance of the percussion;
the opposite contributes to a less aggressive and detailed sound

-Tremolo intensity increases the amplitude of the deformed cello

The overall augmented cello has to be explored with freedom and focus in order to memorise
these new connections between bow gestures and artificial sounds. The only direct and explicit
instrument is the first one, called “mandolins”; the others are to be understood in an intuitive,
instrumentally virtuosic fashion.

All the electronic nuances depend on the interrelations between the cello sound and the seven bow
movement detectors, monitored inside the laptop screen. The intensities of these bow movements
contribute together to the artificial sounds, and they are organised as bowing styles
(Tremolo, Staccato, Balzato), Energies (velocity and rotation) and Orientations (Horizontal from the
low to the high string, Vertical defining the bow inclination with respect to the string).

CELLO_3. SAMPLER
Video performance instructions at: https://www.dropbox.com/s/h0d0yi83x2pfkbe/cello3-instructions.mp4?dl=0

Fig.1-C_3 Cello_3 application

ELECTRONIC SOUNDS AND INTERACTIVE ROLE OF CELLO_3


Cello_3 shares with Cello_2 a developed focus on gesture. The sound of the cello is autonomous
from the electronics, which are produced by bowing styles.
The principal role of Cello_3 is to spatialise the sounds of the ensemble: this activity is a central
aim from the beginning (during the Cello_1 solo) until almost the end of the performance (when
Cello_4 closes the whole work alone, being the only musician provided with an independent
spatialisation).
Cello_3's electronic sounds will be in the foreground especially in the middle part of the
performance (following the evolution in the brightness of the laptop screen). The autonomous
electronics of Cello_3 consist in selecting and manipulating prerecorded audio files. In the second
part of the performance Cello_3 can live-record short portions of the sounds produced on stage by
single musicians, replacing the old files in real-time with some of the new ones.
The events created by Cello_3 (the sonically transformed audio files) have the function of creating
contrast and discontinuity, also in opposition to the unifying activity of spatialisation.

As described below, a preferential dialogue between Cello_3 and _1 concerns contrast/imitation
patterns in the production of the bowing styles Tremolo, Balzato and Staccato.

In addition, together with Cello_1 and _2, Cello_3 will receive an interactive
action score sent by Cello_2. The score arrives as an improvised and unforeseen
window inviting the players to interrupt all previous musical action for a short
period. In the presence of the action score all three musicians must perform the
gestural indications provided (“Interactions”, p. 7) together and with intensity.

Fig.2-C_3 Action score

SPATIALISATION
The spatialisation is driven by an Ambisonics algorithm.
The sound movements are rotatory inside the audience space.
The upper part of the monitor shows the frontal speakers, the lower part the rear ones.


Seven sources are spatialised:

-one copy of the player's own amplified cello sound (green source)
shifted by the Horizontal bow Orientation (rotations between the high
and low string of the cello)
-the stereo output of the Cello_3 sampling electronics (two red sources)
driven by the Vertical bow Orientation (from “full hair” to “hair plus
wood” positions)
-the double stereo of Cello_1 and _2 (four blue sources) respectively
shifted by Tremolo and Balzato (Cello_1 stereo), rotation and Staccato
(Cello_2 stereo).
Fig.3-C_3 Spatialisation

The overall bow velocity contributes to:


-accelerate the global sound shiftings
-increase the distance between the Cello_3 outputs (the red sources).
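
A minimal Python sketch of the panning logic (the actual Ambisonics encoding is handled by the patch; only a plausible bow-to-azimuth mapping is illustrated, in radians with 0 = front):

import math

def pan_positions(horizontal, vertical, velocity):
    """Rotatory panning: the horizontal bow orientation rotates the
    player's own cello copy (green source); the vertical orientation
    (tilt) rotates the sampler stereo pair (red sources); bow velocity
    widens the distance between the two red sources."""
    cello_az = horizontal * 2 * math.pi
    base = vertical * 2 * math.pi
    spread = (0.25 + 0.75 * velocity) * math.pi / 2   # red pair half-width
    return {"cello": cello_az,
            "sampler_L": base - spread,
            "sampler_R": base + spread}

print(pan_positions(horizontal=0.25, vertical=0.5, velocity=0.8))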

LIVE RECORDING
Through four rotational bow movements in the air (the up, down, internal, external “quick-impulsive
triggering rotations”) you will trigger the on-the-fly recording, respectively, of Cello_1, Cello_2,
Cello_3 (Cello) and Cello_3 (Electronics).

"
Fig.4-C_3 Red_Monitor

The live recording system is active and exploitable from minute 5 after the beginning: the
recognition of the four triggering movements (crossed/uncrossed buttons) is visible inside the
section “recording” shown in fig.4-C_3. Every newly recorded file progressively replaces the
audio stored from the beginning; the waveforms are visible inside the “Mubu” interface
(fig.5-C_3).
The file-selection procedure for outputting the sampled sounds remains unchanged and works
as follows:

SAMPLING

Fig.5-C_3 Sampling

Right from the beginning the system loads six short prerecorded sound fragments: you can select
them in this way:

-You have at your disposal six different zones of bow inclination: 0 corresponds to “bow-point
towards the floor and frog towards the ceiling”; 1-2-3-4 correspond to your four cello strings;
5 corresponds to “bow-point towards the ceiling and frog towards the floor”.
-Each of these zones of bow inclination relates to the specific inclination of your strings towards
the floor (the zones are detected both when your bow is on the string and when it is flying in the
air). A reference number (“sel_string”) shows which zone you are currently occupying.

-In order to put the detection in function you must make a sound with
a clear attack (a left-hand pizzicato might fit best, so you don’t
disturb the bow location): a yellow flash signals that the sound attack
is detected, and as a consequence the bow-zone number is activated. After
one second the corresponding indexed file is selected and starts playing.⁸
Fig.6-C_3 Choose-file


⁸ This latency offers more stability to your musical choices and avoids unwanted file selections. Notice that every
detected sound attack produces a new file selection. You therefore have to play a sharp sound when you wish to change
a sound file; otherwise, during the course of the overall performance, your sound style must be generally soft
and smooth. If the machine starts to make autonomous decisions (maybe too many for your taste) don’t be worried:
interact with imagination!
In other words each bow zone tags a different audio file stored in the system; the latency of
the system and the need for a triggering sound attack prevent continuous and meaningless file
selections, affording stability to the sampling interaction.
-the six files in numeric order contain: spoken voice, cello sounds, string quartet chords, electronic
sounds, electrified piano, water mixed with sounds
-after having selected the file, you embed sound transformations upon it through a granular system
which splices sound portions into lengths conceivable as “musical notes”: the audio files will be
kept more or less recognisable, at least in their timbral aspects.
-the methods of sound manipulation of these “sampled notes” are mainly controlled by your
bowing styles, but are partly influenced also by the Cello_1 bowing styles: attention and
coordination in the reciprocal bowing styles are therefore a chamber-music interactive duty.
-the sound manipulations of the files regard: volume, transposition (up and down) and density.
-the density is obtained through the durations and the frequency of occurrence of each fragment:
very short fragments output at longer time distances will produce rhythmic patterns;
long fragments output at high rates of occurrence will conversely produce overlaps, up to dense
textures. The joint variation of these two parameters (length and frequency of fragments) will
produce a high degree of variety in the sound emissions.
-further parameters influence filtering and other timbre features of the sampled files
-an internal system of audio analysis allows the software to automatically select the file portions
most similar in timbre to the cello sound you are currently performing: you can therefore have
an impact upon the sounds you are sampling through your cello sound; the other methods of
granular sampling are instead driven by your bowing styles.
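
A minimal Python sketch of the two central mechanisms — zone-based file selection and the grain schedule. File names, boundary angles and parameter ranges are illustrative assumptions; the one-second selection latency is the one described above.

import bisect

# illustrative file names matching the six contents listed above
ZONE_FILES = ["spoken_voice.wav", "cello_sounds.wav", "quartet_chords.wav",
              "electronic.wav", "electrified_piano.wav", "water_mix.wav"]

def zone_from_inclination(tilt_deg):
    """Zones 0-5: 0 = bow-point to the floor, 1-4 = the four string
    inclinations, 5 = bow-point to the ceiling (thresholds illustrative).
    A detected sharp attack, then one second of latency, confirms the
    selection."""
    edges = [-60, -25, 0, 25, 60]
    return bisect.bisect(edges, tilt_deg)

def grain_schedule(staccato, col_legno):
    """Grain length and inter-onset time (seconds): intense Staccato ->
    shorter fragments (the spliced musical notes); Col legno -> denser
    occurrence. Long grains at short intervals overlap into textures."""
    length = 0.8 - 0.7 * staccato
    interval = 0.6 - 0.5 * col_legno
    return length, interval

print(ZONE_FILES[zone_from_inclination(-40.0)])   # -> cello_sounds.wav
print(grain_schedule(staccato=0.9, col_legno=0.2))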

CELLO
The amplified copy of the cello sound will generally be soft, and it will sometimes be output.
Some of your bowing styles increase its presence; others produce slight gliding transpositions,
emerging as distant shadows alternating with your sampled audio files.

As with the other musicians, a variable chord is present on your screen: this shared tonal centre is
a free point of reference for your improvisation.

BOWING STYLES
Each subtle change of your electronics depends on the set of seven bow detections, whose monitor
is visible on your screen. It is pointless to control every single parameter individually without
influencing the others; the bow actions will be the result of global activities focused upon
goal-oriented interactions (as happens with every musical instrument).

Every single parameter has intensities between 0. and 1. (visually from white to black), the
parameters detect:
-Bowing styles (intensities of Tremolo, Staccato, Balzato)
-Energies (velocity and rotation)
-Orientation (Vertical -> “full hair” vs. “hair plus wood”, and horizontal -> from low to high string)

These seven parameters are common to the other players, but your system is provided with three
more parameters: the difference in intensity of Tremolo, Staccato and Balzato compared
to Cello_1. The more your bowing styles are similar, the more the difference values fall to
zero; the more they are different (i.e. one is playing Legato while the other is playing Staccato,
one player is playing Balzato and the other one not, etc.), the more the difference value rises
towards 1. These values are here defined as “Similarity/TR”, “Similarity/ST”, “Similarity/BLZ”.


Below is a list of the bow parameters and their effect upon the electronics (notice that the
parameters are affected through bowing styles performed upon the string, as well as in the air).

Amplified cello:
-Bow slow, with rotations between the strings -> increasing volume

-Intensity of Balzato -> transposition high
-Similarity/BLZ -> low glissato (high glissato if the value is opposite)

Sound files:
-Impulsive bow Acceleration Horizontal or Vertical (with no rotation!)
-> loud attack with slow fade out
-Intensity of Balzato -> contributes to increase the amplitude
-(Legato, Col-legno, Similarity/ST) -> indirect volume increase (more density of sounds)

-Intensity of Staccato -> shorter grain fragments
-Bow Legato -> longer fragments (more recognisable)

-Col legno -> more density of occurrence of the file grains
-“Full hair” -> rarified and intermittent occurrences of the file grains 

-Similarity/ST -> more density of occurrence of the file grains
-Similarity/TR -> lower transposition of the file sound (higher if the value is opposite)

-Velocity and rotation -> contributes to a clear timbre (in terms of attack/release) 

You may notice that some bow gestures overlap in their functions; this contributes to a better
flexibility of the system.

It could be helpful to summarise the most important interactions:


-Velocity impulse without any rotations = dramatic loud file impulse



-Intensity of Balzato = sustain in the file amplitude

-Legato, Col-legno = high density-manipulability of the file contents
-Intensity of Staccato = file fragmentation

-Slow rotational bowing = emergence of the cello
The dynamics of similarity/difference of the bowing styles compared to Cello_1 offer unforeseen
variables and the opportunity for a chamber-digital interaction.
Further sound connections can be freely found and explored.

CELLO_4. HARMONISER
Video performance instructions at: https://www.dropbox.com/s/l22t12bsv2tpwrd/cello4-instructions.mp4?dl=0

"
Fig.1-C_4 Cello_4 application

ELECTRONIC SOUNDS AND INTERACTIVE ROLE OF CELLO_4


You are the only musician not exploiting motion tracking. The kind of interaction is therefore
different: not gestural, but based on communication with the other players through notes, timbre
and musical expression. The main part of your interactive system is organised in (almost
traditional) terms of notes and models of expressivity.

Your performance will be split into two different modes:


A) interaction (with Cello_2),
B) free performance plus interactive score sight reading.

Mode A will be prevalent in the first part of the piece, mode B in the second, but mode A will
also be present until the end. The density and alternation of these two performance modes will
depend on the colours and brightness of your screen. The brighter your screen
(this will happen in the last part of the work), the more free and intense your music will be (mode B).
When the screen is dark you will limit the performance to the functions of dialogue
and control upon Cello_2 (mode A).

In performance mode A you decide which kinds of sonorities come out of the electronics
of Cello_2. On the other hand an important part of your electronics is controlled by Cello_2.
Your electronic sound is based on repetitions, accumulations and transpositions of the music as
you perform and improvise it. The bowing styles of Cello_2 determine the “harmonisation”
and time shifts of your multiplying electronic cello. Sometimes this harmoniser output could recall
almost classical ideas of “canonic” polyphony, which can be distorted, hidden or exaggerated
by the timbral choices you execute during the performance. In fact your cello timbre affects the
timbre of your electronic output.

START/STOP
You are responsible for the starting process of the interaction.
-The music starts for everybody when you press “Spacebar” from the laptop keyboard.
-The whole music ends when you press “Enter”, after having closed all the effects of Cello_2
and after a brief fading out musical pause.


INTERACTIONS
-After the start the initial part will be silent (your screen will be black)

-During the continuation (after the Cello_1 solo) your screen begins to be dark coloured, and as
soon as you see Cello_2 starting to play, you can progressively interact with him/her through brief
events of detached notes (this kind of interaction is explained below)

-The intensity of your interaction can increase as your screen becomes less dark

-A full and fluid performance is foreseen in the last part of the piece, when your screen will
be bright; a final solo will happen at the end, when your screen will be white

-Never forget to maintain control over the sounds of Cello_2.

The influence upon Cello_2 is crucial and consists of:

-Opening and closing his/her sound effects (several effects can be left open in parallel, increasing
in this way the overall density of the sounds of Cello_2)
-Contributing to increasing the volume and resonance of effects no. 2 and 3 of Cello_2.

"
Fig.2-C_4 Interval detection and messages to Cello_2

The activation of the Cello_2 effects is actuated by a system of recognition of your note-intervals.
The system responds only to the following intervals, no larger than one octave:


-The interval of a semitone controls the activation of the effect “Mandolins”
-The interval of a minor third controls the activation of the effect “Electric guitar”
-The interval of a tritone controls the activation of the effect “Glissati”
-The interval of a minor seventh controls the activation of the effect “Resonant percussion”.

A rising interval opens the corresponding effect, a downward interval closes it.

On the screen the interval number is visible as it is recognised by the machine: the effective
numbers are 1, -1, 3, -3, 6, -6, 10, -10, meaning the upward and downward semitone, minor third,
tritone and minor seventh.
At this point a cross appears, or disappears, inside a corresponding box (see fig. 2-C_4); the same
cross appears inside the screen of Cello_2, signalling the opening or closing of the sound effect.
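
The switching logic can be summarised in a short Python sketch (onset/offset detection happens upstream; class and method names are hypothetical):

EFFECTS = {1: "Mandolins", 3: "Electric guitar",
           6: "Glissati", 10: "Resonant percussion"}

class EffectSwitch:
    """Opens/closes the four Cello_2 effects from detected note intervals:
    a rising interval opens the effect, a falling one closes it."""
    def __init__(self):
        self.open = {name: False for name in EFFECTS.values()}
        self.last_note = None

    def note(self, midi_note):
        if self.last_note is not None:
            step = midi_note - self.last_note    # signed interval, semitones
            if abs(step) in EFFECTS:
                self.open[EFFECTS[abs(step)]] = step > 0
        self.last_note = midi_note
        return self.open

sw = EffectSwitch()
sw.note(60)
print(sw.note(61))    # +1 semitone: opens "Mandolins"
print(sw.note(55))    # -6, downward tritone: closes "Glissati"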


When your note is detected, if you perform it with more internal crescendos, the system recognises
a higher amount of expressivity through the parameters called “peak_pos” and “atk_max”.
In this way you can increase the volume and resonance of the effects of Cello_2.


It will be useful to detail the system of note analysis and detection.

NOTE DETECTION AND EXPRESSIVENESS

-1) In advance of the note-interval recognition the system must
identify the beginning and the end of the note (“onset” and “offset”):
you can monitor this through a pair of yellow flashes signalling the onset
and the offset (note-on and note-off).

Fig.3-C_4 Note-on/off detection

The system of analysis is able to recognise only one note at a time, and before a clear note
release (signalled by a note-off) the machine can never detect a new interval: the release can
be a subtle pause as well as a clear decrease in amplitude; two notes in “legato” style cannot
be distinguished. In addition, double-stops are not understood by the system, which is monophonic.
Some prior practice will be necessary in order to interact properly, due to these limitations
of the system.⁹
For this reason the performance style employed for the control upon Cello_2 must be based on
single and slightly detached notes. As a contrast the style exploited for playing the solos will be free
and fluid. The whole performance is based upon the contrast of these two different styles, which
allows two consistently different sound responses from the electronics.

-2) after having time-defined the whole note (after each note-off), the system returns a simple
estimate of a few parameters of expressiveness, computing whether the interior of that note
contains a sound variable in intensity, with crescendos, or whether it is more or less sustained
rather than decaying. Two small monitors in the lower part of your screen show these values
(relative to the last performed note), which are mapped to the intensity and resonance of two
electronic effects of Cello_2 (“glissati” and “electric guitar”).

⁹ Occasionally the machine can make mistakes in predicting the note interval; if this happens it is simply
necessary to perform new notes without breaking the continuity of the music. If too many mistakes occur, a new
calibration needs to be done.
This pair of parameters, more sensitive to the evaluation of
expressiveness, are “peak_pos” and “atk_max”, respectively indicating:

-where the peak of amplitude is located inside the note (beginning,
middle, end)

-how strong the attack is with respect to the maximum peak of the note.

Fig.4-C_4 Expressiveness parameters
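
One plausible computation of the two parameters from a note's amplitude envelope, as a minimal Python sketch (the patch's frame sizes and smoothing are not given; the attack window here is an assumption):

def expressiveness(env):
    """env: one amplitude value per analysis frame, note-on to note-off.
    peak_pos = where the amplitude peak sits inside the note (0. = onset,
    1. = release); atk_max = strength of the attack relative to the
    note's maximum peak."""
    peak = max(env)
    peak_pos = env.index(peak) / max(1, len(env) - 1)
    attack = max(env[: max(1, len(env) // 8)])   # first ~12% as the "attack"
    return {"peak_pos": peak_pos,
            "atk_max": attack / peak if peak else 0.0}

# a note swelling to a central crescendo reads as "espressivo":
print(expressiveness([0.1, 0.3, 0.6, 0.9, 1.0, 0.7, 0.4, 0.2]))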

TIMBRE
Your electronic system is in turn strongly influenced by the bowing styles of Cello_2, who
determines the density, transposition and time shifts of the electronic voices which harmonise your
sounds (and as described in a following section, Cello_2 is also responsible, through bowing styles,
for the notes filling your interactive score in real-time). You don’t control your own electronic
harmony, but you have a crucial influence on the timbre of your electronic voices, controlling
it by means of your own cello timbre.


-1) when the monitor “feedb-trans” rises (increasing its black
portion), it signals the increase in the amount of transposed
repetitions of your sound (increasing feedback); your
electronics can therefore become huge. If you lower it, you dry the effect.

Fig.5-C_4 Timbre-expression parameters

In order to increase the feedback, you should produce resonant sounds (i.e. soft low pizzicato,
double-bass style, or extremely light sul-tasto bowings); “on-the-bridge” bow techniques as well
as intense-compressed sound styles lower the feedback, reducing the amount of chained repetitions.
This control determines the overall mass of your electronics, but you can also gain detailed control
of their timbre.

-2) the “legato” threshold (Fig.28) fixes your electronic timbre at an intermediate default level
when you are playing “staccato”. But when you play “legato” (no significant pauses or spaces
between your notes) you will be able to model your electronic timbre: at this point you will see
some sliders moving, telling you how you are modifying the timbre. In this way the drier sonorities
used to interact with Cello_2 (mode A of performance) will not interfere with your electronic
timbre, which can instead be modelled during the free part of your performance (mode B). You can
visualise timbre shifts (as an aid to your listening) through three vertical sliders and two rotary
knobs.

"
Fig.6-C_4 Timbre interactions

-3) the two knobs “transp” and “scale” are sensitive to a dark-resonant cello timbre as opposed to
a bright-high-compressed-intense one (i.e. resonant pizzicatos or very sul-tasto strokes vs. full tone,
on-the-bridge; but also low notes vs. high notes): in other words they detect low vs. high frequency
spectral content. The two monitors will tend to move in opposite directions, but this timbre
detection is quite complex and will require experimentation, listening to the timbral causes
and effects (varying between deep, light and exaggerated output sounds).


-4) the three vertical sliders are sensitive to:

-extremely still and pure (in timbre, pitch and amplitude) cello sounds -> the parameter “sinus”
increases, producing a light-spiritual electronic sound

-highly variable cello amplitude -> the parameter “noise” increases, producing a more aggressive
electronic sound

-variable timbre (fast note changes, and/or quick shifts from ponticello to sul-tasto etc.) ->
the parameter “transient” increases, producing more clearly defined timbre edges.

Since instrumental timbre is a complex of interleaved phenomena and features, influencing
the timbre parameters is a global task: try to experiment, focus, and find your own style of
performance in order to build a vocabulary of effective timbral influences.

During longer pauses, the sliders could react unexpectedly to environmental sounds, producing
highly deformed timbres: you can exploit this extreme effect. If you decide to include noisy cello
sounds you can distort the electronics as well, which could afford the creation of a wider
electronic sound palette.

THE SCORE
The mode of performance B (second part of the piece) includes:
-continuation of mode A (interactions with Cello_2)
-free improvisation (density suggested by the brightness of the background color)
-timbral control upon the electronics (see previous section)
-reading the interactive score (density of score events also suggested by the screen brightness)

You decide when to receive a score. A strong impulsive sound attack (e.g. a
Bartok-pizz, or a sharp staccato at the frog) produces a yellow flash near the
label “writer”: it activates the score writing process. Cello_2 creates your score
through bowing styles (he/she is aware that you have called for your score).

Fig.7-C_4 Calling for the score

127
-Inside the bottom part of your screen the process of score building (20” long) is visible.
-The score appears immediately and has to be sight-read; as a single phrase, you have to
identify its character and musical direction on-the-fly
-Tempo is always fixed as 4/4 at 60 BPM
-If it helps, you can freely follow the yellow flashes as a visual metronome
-Tempo is not rigorous, but it must not be stretched too far (keep the musical direction)
-A new score can be called for only after the previous one has disappeared
-You are free to call for new scores at will
-Probably the score performance will keep the electronic timbre at its neutral-intermediate level
-Leave room for alternating score-readings with the other tasks of your mode B performance
-Sometimes an unwanted score could appear; a good calibration avoids misunderstandings.

"
Fig.8-C_4 The score

CALIBRATION
Verifying the parameters of calibration is a task for the coordinator of the performance, but some
crucial parameters can be easily rechecked inside the module “thresholds” (accessible by
double-clicking its label).
The values are highly dependent on individual features, and calibration has to be rechecked when
instrument, player or microphones change.

1) The main parameter affects the “writer” threshold
(how much sound attack you need in order to call for the score):
-lower the number above the label “s write_th” if it requires too much
effort
-increase the number if it is too sensitive and generates too many scores
-the minimum-maximum values of the “write threshold” should be
between 15 and 35.
2) If too many note-ons are detected you can raise its threshold
(or lower it if you need the opposite): the optimal range of
“s on_th” is between 0.5 and 2.
Fig.9-C_4 Calibration parameters

After having changed the calibration number, press the label “write”.
The other calibrations are detailed below inside the technical section.

128
LAPTOP_5
Laptop_5 is the main hardware connecting the whole circuitry, the one upon which the coordinator
of the interaction mainly operates.
The MAX application contains:
-the OSC receiver of the Motion Tracking system
-the network module sending the sensor data to the other laptops through Ethernet
-the video processing driven by the three IMUs placed under the bows of the performers

The motion tracking module “rec_orients-D” is set up according to the 2015 release of the Orients
system. In case of a different motion tracking system, the abstraction “rec_orients-D” has to be
substituted with a new one fitting the referenced motion tracking hardware.

The system receives accelerometer and gyroscope data in three dimensions.


By default the system also distributes quaternions; in case they are not available, please tell the
three performers exploiting motion tracking to press the spacebar before playing, in order to
enable an alternative bow-tracking reading.

All the data has to be sent to the performers already normalised between -1 and +1, in order to be
properly read inside the individual applications. In case of different systems or releases, the default
values inside the module “p normalise” must be manually changed according to the minimum and
maximum values sent by the inertial system.
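As a minimal sketch of this requirement (Python, illustrative rather than the actual “p normalise” module), assuming a hypothetical gyroscope range of +/-2000 degrees per second:

def normalise(value, lo, hi):
    # map [lo, hi] linearly onto [-1., +1.], clipping out-of-range data
    x = (value - lo) / (hi - lo)
    x = min(max(x, 0.0), 1.0)
    return x * 2.0 - 1.0

print(normalise(500.0, -2000.0, 2000.0))  # 0.25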

The laptop_5 patch is designed with MAX 6.1 and Jitter. The only MAX external employed is
“o.route” (see “software” section). The images are processed through Jitter, mixed inside the
abstraction "final_cut" and rendered through the object jit.window.

The video is rendered through Jitter and processes in real-time an image of Picasso’s painting
Les Demoiselles d’Avignon, occasionally mixed with images of bows and music scores.
The rendering is a process-based automation whose engine is preprogrammed by the composer,
taking the bowing parameters of the performers as dynamic algorithmic parameters. The performers
don’t need to be aware of the video processing. The video shifts between four different states:
1) black; 2) coloured; 3) score projection; 4) dynamic video.

In the first half of the performance the shifts between these states are controlled by Cello_2, in
the second half they are automatic.

In states 1 and 2 no images are projected, state 3 shows the interactive score as it is received by
Cello_4. State 4 is the complete video rendering.

129
SETTINGS

HARDWARE EQUIPMENT
Five laptops: each musician performs with an individual MAC; one more laptop is required
(MAC or Windows) for video and Motion Tracking collection/distribution.

The system is tested for Mac OS X 10.6 upwards, and built with MAX 6.1. For the App Cello_2
the minimum hardware requirement is a 2.4 GHz Intel Core 2 Duo, and the same minimum kind
of processor is recommended also for laptop_5.
The remaining three applications don’t show special CPU heaviness.

5 ethernet cables 1000 Mbit/s (3 meters length), 1 Ethernet Switch.

1 Master sound card with 6 inputs and 8 outputs (or 4 outputs as a minimal option), possibly
RME or MOTU, + 2 stereo sound cards (outputs plugged as inputs into the master card),
+ 1 sound card with 2 inputs and 4 outputs.

2 microphones (for Cello_1 and Cello_4): DPA, or condenser cardioids, or directional.
2 directional microphones, or cardioid DPA, or contact (for Cello_2 and Cello_3). 1 pickup for the
sound analysis of Cello_4 (offering input isolation; external microphones or piezo-pickups should
be avoided).
PA possibly octophonic.

Optionally mixer and more microphones (individual and/or panoramic) for the direct amplification
of the instruments.

Cello_1, _2 and _3 augment their acoustic instruments by means of a small IMU (Inertial Motion
Unit) developed by the Centre for Speckled Computing at the University of Edinburgh.
The current setup depends on the current specs and their updates.
Any different Inertial Motion Unit needs the substitution of the abstraction “or_data” with a new
fitting abstraction in Laptop_5, as described in the video instructions and in the Readme file.
IMUs must return accelerometer and gyroscope data in the 3 dimensions x-y-z, and possibly also
quaternions.

REAL-TIME SENSING AND ANALYSIS


Motion tracking
Each performer (with the exception of Cello_4) positions the sensor under the frog of the bow
with the help of tie sets (the power-chip pointing up). Motion tracking data come from laptop_5.
When the network is working the performers will see the data flowing inside the monitors (with the
bow vertical the tilt numbers should be higher, and with the frog pointing to the floor the roll data
should be lower: if this doesn’t happen, the sensor position has to be reversed under the frog).
The module “bowings” contains the MT computations, and cannot be modified.

130
The wireless Orient Motion Tracking system reads Acceleration, Gyroscope and Quaternion in the
three dimensions x, y, z: we therefore access angles and velocities of rotation, but no absolute
positions. Each sensor sends data to a central router talking with laptop_5, which decodes the values
through Python and sends them to the laptop_5 MAX Standalone through the OSC protocol.
Data are then distributed over the network through Ethernet.
Figure 4 at the beginning offers a general graphic explanation of the bow-tracking functions enabled
in the present composition. See the video instructions about calibration.
IMUs return raw data concerning bow Orientation (“angle positions” with respect to the floor),
angular velocities (in degrees per second) and Acceleration. The system implemented by this
composition develops methods of motion analysis in order to extract bow Orientations, Energies
and Bowing-styles.

1) Orientation. At a first stage calibration and filtering allow the detection of Orientation in the two
dimensions x and y (called here bow-roll and bow-tilt): mean and smoothed Acceleration help to
detect the bow angle-positions, taking the directions of the cello strings as a hypothetical fixed
reference.
2) Energies. Derivatives and delta root mean squares computed on single and global data from
accelerometers and gyroscopes allow the extraction of time information about the bow Energies.
3) Styles. Standard deviations and FFT are implemented in order to approximate the detection of
specific qualities and intensities of the bowing styles Tremolo, Balzato and Staccato.
The system globally responds in synchronised and dependable ways. It is at an experimental stage
and could be improved in the future.
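As an illustration of point 2 above, the sketch below (Python; the window length and the use of plain gyroscope magnitudes are assumptions, not the patch’s exact computation) derives a bow Energy value as a delta root mean square over recent frames, so that fast bow activity yields high values and stillness decays towards zero.

from collections import deque
import math

class BowEnergy:
    def __init__(self, window=20):
        self.frames = deque(maxlen=window)

    def update(self, gx, gy, gz):
        # store the gyroscope magnitude of the current frame
        self.frames.append(math.sqrt(gx * gx + gy * gy + gz * gz))
        if len(self.frames) < 2:
            return 0.0
        # root mean square of the frame-to-frame differences ("delta RMS")
        samples = list(self.frames)
        deltas = [b - a for a, b in zip(samples, samples[1:])]
        return math.sqrt(sum(d * d for d in deltas) / len(deltas))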

Sound analysis
Sound analysis is performed by the MAX objects gbr.yin, mnm.moments, sigmund~, analyzer~.
Envelope-following and pitch-tracking are computed upon the overall signal and sometimes at
the level of the principal spectral partials. Timbre features are extracted by means of periodicity
detection, spectral centroid (and its gaussian distributions), and computing some derivatives
and deviations upon the signal amplitude and the spectral centroid.
Sound analysis is performed as a sensing system inside the hybrid tracking system of Cello_2
and more in depth inside the sensing system of Cello_4.
The sensing system experimentally implemented in Cello_4 adds to the spectral feature extraction
a combined system of note detection and expressiveness description (performed not completely
in real time, but on-stage within the time of the performance).

Combining onset detection and pitch tracking, the principal notes and intervals are defined online.
At note level some parameters describe and quantify “legato”, timbre-stability, and
attack/continuation performance styles (some essential details are contained inside the Cello_4
performance instructions and in the calibration video).

131
SETUP

Fig.1-T Graphic interfaces common to any cello applications

132
Audio settings
Before any performance it is essential to verify the presence of the sound card.
Double-click the “audio-settings” icon in order to access the I/O module: if the sound card
is not automatically loaded and written inside the yellow menus, it has to be set by dragging
the two arrows in the right part of the yellow input and output menus, as shown below in Fig. 2-T
(the three menus should contain the label “Core Audio” in the upper menu, and the name
of the sound card in the other ones).

Fig.2-T Audio-settings and sound card

If it is necessary to modify the physical input channel of the sound card, the option is given
to type the number of the modified channel in the yellow number box above the message “s inp_1”.
The default number of outputs can also be modified by pressing the corresponding button above
the options “stereo_set”, “quadri_set”, “octo_set”. The default output architecture provides
stereo outputs for Cello_1 and Cello_2, a quadraphonic output for Cello_4 and an octophonic one
for Cello_3 (which drives the spatialisation, receiving the stereos of Cello_1 and _2 as inputs).

133
Network settings
Instructions:
1) Connect the Ethernet cable from the laptop to the Ethernet Switch
2) Navigate /system preferences /network /ethernet, inside your laptop
3) The Ethernet icon must be green, select it by clicking with the mouse
4) Set the position as automatic
5) Configure the IPv4 as manual, and then type the address and subnet mask as shown below.

"
Fig.3-T The network

The default addresses of the laptops are:


Cello_1: 128.0.0.1
Cello_2: 128.0.0.2
Cello_3: 128.0.0.3
Cello_4: 128.0.0.4
Laptop_5: 128.0.0.10
All with subnet mask 255.255.0.0

134
By double-clicking the icon "p network", you can access the network module of the App:
the addresses contained in the yellow boxes can be modified by typing inside (only in case of strict
necessity!). The IP addresses and the port numbers must obviously correspond across
the “p network” modules of all the applications, and also inside the system preferences network
of each laptop, as shown in Fig.3-T.

Input_output
Each App provides the I/O section in the upper-left part of the screen: by default the input channel
from each sound card is number 1 (adc~ 1 in the App), with the exception of Cello_4 which uses
two microphones (by default: input 1 for the sound production and input 2 for the internal sound
analysis).
As shown in Fig.1-T the I/O section contains visual sound-monitors and a couple of editable
number-boxes for a possible modification of the input and output gains (amplitudes between 0. and
1.), in case the sound card doesn’t allow optimal balance and the default amplitudes don’t fit.

Duration
The overall duration of the work (and also the length of the opening Cello_1 solo) can be set
differently from the Cello_1 application.

"
Fig.4-T Durations

CALIBRATION

The motion tracking calibration of Cello_1, _2 and _3 should be verified before every performance;
the Application memorises the last calibration settings as default.

Cello_4 instead calibrates only the sound analysis data, which can be left unchanged after the first
calibration, unless instrument, microphones or sound card are different.
Cello_4 calibration (onset/offset note detection, spectral description thresholds and parameters)
will be described in the Cello_4 instructions.

The first time, the calibration must be performed accurately (following the manual or the automatic
modality); after that, before any following performance, a manual check should be sufficient.

The automatic calibration starts by pressing the “auto-calib” button and following the interactive
instructions appearing on the screen.
If something fails during the process, it suffices to press “auto-calib” and start again from the
beginning. Alternatively a full manual calibration can be performed.

135
MOTION TRACKING
Motion tracking calibration is crucial in order to define the “performance bowing space” marking
the boundaries of the bow positions.
Finding the minimum and maximum points of bow rotation in relation to the cello body is
the requisite for a precise and meaningful bow interaction. Each value will be normalised within
the extreme positions of 0. and 1., which represent the actual extreme points reached by the bow
with respect to the cello strings during the performance.
The numbers referred to by B and D (shown in yellow inside Fig.5-T) are flowing quantities relative
to the current Orientations of the bow: “absolute-roll”, “absolute-tilt”, “angle_position”. The
numbers referred to by A and C will be set automatically (if auto-calibrated); otherwise the musician
has to write them by typing inside the boxes or dragging the decimal portions with the mouse. The
numbers referred to by E are further thresholds to be set if needed. Default values are provided by
the application, but notice that every further change is memorised when the App is reopened.

Fig.5-T Motion tracking calibration

136
Roll and tilt calibration
Absolute-roll and absolute-tilt are labels of the bow Orientation with respect to the x and y axes
(taking the floor as a point of reference). But the performance point of reference is instead
(from the point of view of the musician) the axis of the strings: we will therefore define “roll”
as the value changing when the bow crosses the strings from the lower to the higher one, and “tilt”
as the value changing when turning the bow (lying on the string) from the “full-hair” position to
the “wood+hair” position. Min/max roll calibration and min/max tilt calibration therefore define the
extreme boundaries that will be reached during the performance.

It is useful though, during the roll calibration setup, to leave a little room outside the
extreme boundaries of contact between the external strings and the bow: some off-the-string
downward and upward space, in order to allow some bow rotations in the air (especially for Cello_3).

If the musician is instead a violin or viola player, the roll values will obviously be reversed,
but the low string and high string remain in any case the right points of reference, and the final
result will be the same: in any case Roll calibration defines the bow boundaries of Orientation
from the low string towards the high string (the boundaries are here intended as the most
extreme bow-flexion points outside which the bow loses contact with the string, starting to “scrape”
the body of the instrument).

"
Fig.6-T Calibration box

General instructions
-Place the bow at the frog on the low string (C for the cello, G for the violin) at its maximum
external flexion, without losing contact with the string.
Observe the mean flowing number appearing in the absolute-roll box, then transcribe
it manually inside the min roll-calibration number box (typing or dragging with two-decimal
approximation).

-Place the bow at the frog on the high string (A for the cello, E for the violin) at its maximum
external flexion, without losing contact with the string.
Repeat the same procedure, transcribing the observed absolute-roll mean value inside the
max roll-calibration number box.

-Place the bow (in the middle) on the mid double strings as if playing with “full hair”.
Transcribe the appearing absolute-tilt mean flowing value inside the max tilt-calibration number
box (the number should be very close to 1.)

137
-Place the bow (in the middle) on the mid double strings again, as if playing with “wood+hair”.
Transcribe the appearing absolute-tilt mean flowing value inside the min tilt-calibration number
box (the number will be close to 0.)

If the numbers turn out to be reversed (i.e. the correct outcome is: high string -> 1, low string -> 0,
hair -> 1, wood -> 0), the sensor is probably not placed correctly under the frog
of the bow, and it has to be put in the right position.

In this way the bow performance space will fill the full range of values between 0. and 1.

Check the two number monitors under the calibration rectangle, or otherwise the main graphic
monitors.

String calibration (for Cello_2 and _3 only)


After roll calibration the “angle_position” values will define the rotational space between the
low and the high string (with a small added lateral space): this roll performance space is therefore
normalised between 0. and 1.

Some interactions require the currently playing string to be identified; for this reason string
calibration is required. This calibration is obtained by finding and fixing the “angle_position”
values as a reference while the bow is successively placed (at frog position) upon the three double
stops: low, middle and high. This three-step calibration is required for Cello_2. Cello_3 needs
the string calibration in 5 steps, in order to also detect the low and high off-the-string positions.

During the performance this detection is reached after a perceptible latency, and only as a
consequence of a clear sound attack, monitored with a flashing yellow bang: this procedure avoids
unstable, too frequent unwanted string-change detections, extracting only the most relevant changes
of string, monitored with a number tagged as “choose_string” (in this way the system suggests
leaving room for significant musical passages played on the same string).

Triggerings calibration
The calibration section E offers the possibility to soften or increase thresholds involved in some
triggering movements.

-“gyro-tolerance” regards quick rotations of the bow in the four principal directions Up, Down,
Forward, Back.
Softening the threshold below the default 0.92 could help towards a more natural style of playing,
but at the same time risks increasing unwanted triggerings because of a too tolerant threshold.
-“trigger threshold” is only for Cello_1 (“quick-impulsive down-bows”)
-“sound-attack thresh” defines the boundary of detection of the sound attack for the
“currently played string” function of Cello_2 and _3.

138
Calibration allows a fine-tuned rendering of the bow controls over the electronics and the
interactions. Bow movements are collected following three main typologies: Orientations (roll and
tilt), Energies (quickness and rotation), Styles (Tremolo, bounce and Staccato). Triggering involves
the detection of four types of “fast-impulsive-rotations” (Up, Down, Forward, Back),
“quick-impulsive down-bow” detection (only for Cello_1), and the “currently-played-string”
detection for Cello_2 and Cello_3.

AUDIO ANALYSIS (Cello_4 only)


Note analysis
The sound analysis modules involve timbre feature extractions, note detection and expressiveness10
pattern recognition (as a submodule of the note detection).

-Timbre is continuously monitored through the “gbr.yin” external detecting fundamental frequency,
overall amplitude, periodicity, spectral centroid, and spectral statistical distributions
-note detection is allowed by amplitude thresholds combined with transient deviation analysis
in order to approach a substantially reliable onset/offset system synchronised with fundamental
frequency detection
-estimations of “expressiveness” are computed inside the abstraction “expr_perf” which localises
the amplitude peak position and its intensity compared to the attack inside each note detected,
and computes standard deviation, mean and variance of the amplitude inside each note.

In this way it is possible to compute on the fly some consistent ratios between attack, sustain and
decay of the performed notes. In addition, every inter-onset interval (IOI) is compared to the “note
length” in order to extract a raw estimation of “legato” playing.
These data are returned immediately after each “note” is completed.
The “expressiveness” estimations, inside the time and place of performance, show evident
limitations, though being interesting and dependable subsidiary performance controls.
On the other hand, note detection has to be exploited with some caution and in relation to special
music contexts, as detailed inside the performance notes.
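A minimal sketch of the IOI comparison (Python; times in seconds, and the simple ratio form is an assumption, not the exact patch formula):

def legato_ratio(onset, offset, next_onset):
    # compare the note length with the inter-onset interval (IOI):
    # values near 1 mean the sound fills the interval (legato),
    # small values mean detached playing
    note_length = offset - onset
    ioi = next_onset - onset
    return min(note_length / ioi, 1.0) if ioi > 0 else 0.0

print(legato_ratio(0.0, 0.95, 1.0))  # 0.95 -> legato
print(legato_ratio(0.0, 0.30, 1.0))  # 0.30 -> detached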

Sound calibration
Calibration is highly dependent on individual features, and must be set up again when instrument,
player, microphones or sound card change. The system is provided with default values for cello
playing. The most crucial calibrations regard amplitude threshold and onset detection.

A deeper level of calibration involves offset detection and timbre “density”.


When the performing instrument is not a cello a full re-calibration is mandatory.

Sound analysis calibration is accessible inside the “p thresholds” module of Cello_4.


Each label is provided with a number, which can be changed by typing or dragging.
After having changed the calibration numbers, press the label “write” and save.

10 Canazza, De Poli, Drioli, Rodà, Vidolin, Proceedings of the IEEE, 92, 4, 2004


139
Instructions
Note detection needs to be carefully calibrated since it allows the performer to control global
aspects of the interaction, working on a net of on/off functions.
Timbre detection allows instead a looser and more continuous interactive interplay.

No sound analysis happens when the signal is below the threshold “s loud-factor”,
set by default at -80 dB. You can raise it, if necessary.

A) Note detection parameters

-1. The attack threshold “writer” (see Cello_4 application) flashes when the cellist calls for a
score, when his/her sound surpasses the threshold through a sharp sound attack
-2. Note-on threshold. Note-on is detected as an increasing amplitude ratio with respect to the
last 50 ms
-3. Note-off threshold. Note-off combines three different decay-tracking methods. Calibration can
be set only upon one of them: the decay difference with respect to the last 100 ms (in dB)
-4. Loudness gate disables the note-on detection when the cello amplitude is below its threshold,
in order to avoid background noise detection.

Fig.7-T Calibration parameters

1) score receiving (how much sound attack you need in order to call for the score):
-lower the number above the label “s write_th” if it requires too much effort
-increase the number if it is too sensitive and generates too many scores
-the minimum-maximum values of the threshold should be between 15 and 35.
2) If too many note-ons are detected you can raise its threshold (or lower it if you need
the opposite): the optimal range of “s on_th” is between 0.5 and 2.
3) Set an appropriate difference level in dB for “s off_th”
4) Raise the “s loud_th” if the background noise interferes; lower it when the note-on detection
cannot be performed playing softly.

The amplitude monitoring is performed through the peakamp~ MAX object
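The sketch below condenses the on/off logic described above (Python, not the MAX patch; the exact ratio and difference formulas and the default threshold figures are placeholders, and the real patch combines three decay-tracking methods):

import math

def db(amp):
    return 20.0 * math.log10(max(amp, 1e-6))

def detect(amp_now, amp_50ms_ago, amp_100ms_ago, note_is_on,
           on_th=1.0, off_th=-12.0, loud_th=-60.0):
    if not note_is_on:
        # note-on: amplitude rising with respect to ~50 ms ago,
        # gated so that background noise cannot trigger
        if db(amp_now) > loud_th and amp_now / max(amp_50ms_ago, 1e-6) > 1.0 + on_th:
            return "note-on"
    elif db(amp_now) - db(amp_100ms_ago) < off_th:
        # note-off: decay difference in dB with respect to ~100 ms ago
        return "note-off"
    return None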

140
B) Timbre detection parameters

“Timbre” detection affects the interaction by allowing the performer to modify the electronic timbre
of his/her augmented instrument (harmoniser).

It needs to be calibrated in depth when the instrument assuming the role of Cello_4 is not a cello.
In this case the values involved (amplitude, signal periodicity, spectral centroid, spectral kurtosis)
could be dramatically out of range with respect to the defaults, needing different kinds of
compression and scaling.

It could be assumed that for different cellists and cellos we don’t need a detailed timbre calibration,
since the interaction can be learned by the performer through direct interaction.

The most effective timbre calibration, maybe useful also for a cellist, is contained in the
“s pow_2feed” message. The referenced module computes the ratio between fundamental frequency
and spectral centroid (sounds full of low components will output a value approaching 1., sounds full
of high components will output values approaching 0.). The output, after being time-filtered,
is directly mapped to the feedback of the harmoniser delay chains.
In this way the performer controls the sound density and quality of his/her augmented instrument.

This ratio can be raised to a power coefficient (by default the power is 1, therefore the ratio is left
unchanged). If you raise the power coefficient inside the “s pow_2feed” module, the feedback values
will be compressed downward, making it difficult to boost high amounts of delayed repetitions;
conversely, if you lower the power coefficient between 0. and 1., the delay feedback will be
compressed upwards, more easily approaching the maximum value of 1.
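A minimal numeric sketch of this mapping (Python; the time filtering is omitted):

def feedback_amount(f0, centroid, power=1.0):
    # ratio near 1. for dark, resonant sounds; near 0. for bright ones
    ratio = min(max(f0 / centroid, 0.0), 1.0) if centroid > 0 else 0.0
    return ratio ** power  # power > 1 compresses downward, 0..1 upward

print(feedback_amount(110.0, 440.0))             # 0.25
print(feedback_amount(110.0, 440.0, power=0.5))  # 0.5 (reaches 1. more easily)
print(feedback_amount(110.0, 440.0, power=2.0))  # 0.0625 (harder to boost)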

All the other timbre calibrations coming from the input sound analysis dynamically affect the
sound parameters of the four-voice harmoniser, namely the sinus, noise, formant and transient
spectral components of the internal vocoder.

Detailed timbre calibration instructions are contained inside the Cello_4 application (“notes"
patcher inside the “p threshold” module).

141
SOFTWARE

Inertial Motion Tracking is tested with the Orients_15 System, developed by the Centre for
Speckled Computing of the University of Edinburgh, 11 running through the orientMac application.
This application and the related Readme.txt document are contained in the main folder of this
software. The system needs a native Bluetooth 4 Mac as a minimal requirement.
A different Motion Tracking system is allowed by substituting the abstraction “rec_orients-D” with a
different OSC udpreceive module, which must contain proper scaling and normalisation.
Details are given inside the Readme text file.

The motion tracking data are collected in Laptop_5 through the UDP and OSC MAX objects.
These data are sent by the Ethernet network to the cello standalones inside the other laptops, where
individual modules called “p network” route the messages, decoded inside the “bowings” individual
abstractions. The processed data are sent to the main functions of the patches as controllers and
interactive agents. All the patches and abstractions rely on the "pattr" system for calibration and
automatic data recalling.

After processing inside the single cello-applications, the motion tracking data are sent back to
Laptop_5 in order to generate the video in real-time.

CELLO_1.SPECTRAL
MAX/Msp 6.1 or CELLO_1 standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

analyzer~ (Tristan Jehan)


http://web.media.mit.edu/~tristan/maxmsp.html

contrast-enhancement (Michael Edwards)

dag.statistic (Pierre Guillot)


http://www-irma.u-strasbg.fr/~guillot/

dot.smooth, dot.std (Joseph Malloch et al.)


http://idmil.org/software/digital_orchestra_toolbox

ej.line (Emmanuel Jourdan)


http://www.e--j.com

11 www.specknet.org
142
expo74 abstraction: readaptation of 04-transit-freeze, 09-adapt-pvoc (Jean-François Charles)
https://cycling74.com/toolbox/live-spectral-processing-patches-for-expo-74-nyc-2011/#.Vh0sE2A-BE4

f0.distance, f0.round (Fredrik Olofsson)


http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.list, ftm.object, ftm.reschedule,
gbr.bands, gbr.fft, gbr.resample, gbr.slice~, gbr.wind, gbr.yin,
mnm.alphafilter, mnm.delta, mnm.list2col, mnm.list2row, mnm.list2vec, mnm.onepole, mnm.winfilter,
FTM-Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

jg.spectdelay~ (John Gibson)


http://pages.iu.edu/~johgibso/software.htm

msd (Nicolas Montgermont)


http://nim.on.free.fr/msd

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)


http://www.thehiss.org/

OSC-route (Matt Wright)


o.route (Adrian Freed)
http://www.cnmat.berkeley.edu/MAX

pipo (IRCAM IMTR)


http://forumnet.ircam.fr/shop/en/forumnet/59-mu.html

quat2car (freeware)
http://www.mat.ucsb.edu/~wakefield/soft/quat_release.zip

sigmund~ (Miller Puckette et al.)


http://vud.org/max/

SpT.transpose (abstraction)
Spectral Toolbox (William A. Sethares et al.)
http://www.dynamictonality.com/spectools.htm

143
CELLO_2.ARTIFICIAL
MAX/Msp 6.1 or CELLO_2 standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

analyzer~ (Tristan Jehan)


http://web.media.mit.edu/~tristan/maxmsp.html

dag.statistic (Pierre Guillot)


http://www-irma.u-strasbg.fr/~guillot/

dot.smooth, dot.std (Joseph Malloch et al.)


http://idmil.org/software/digital_orchestra_toolbox

ej.line (Emmanuel Jourdan)


http://www.e--j.com

f0.distance, f0.round (Fredrik Olofsson)


http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.list, ftm.mess, ftm.object, ftm.reschedule,
gbr.slice~, gbr.yin,
mnm.alphafilter, mnm.delta, mnm.list2col, mnm.list2row, mnm.list2vec, mnm.onepole, mnm.winfilter
FTM-Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

jgn.mesh~ (John Gibson)


http://pages.iu.edu/~johgibso/software.htm

modalys~, mlys.force, mlys.mono-string, mlys.point-output, mlys.script


(IRCAM Instrumental Acoustic Team)
http://forumnet.ircam.fr/product/modalys-en/

msd (Nicolas Montgermont)


http://nim.on.free.fr/msd

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)


http://www.thehiss.org/

OSC-route (Matt Wright)


o.route (Adrian Freed)
resonators~ (Adrian Freed et al.)
144
http://www.cnmat.berkeley.edu/MAX

pipo (IRCAM IMTR)


http://forumnet.ircam.fr/shop/en/forumnet/59-mu.html

quat2car (freeware)
http://www.mat.ucsb.edu/~wakefield/soft/quat_release.zip

sigmund~ (Miller Puckette et al.)


http://vud.org/max/

CELLO_3.SAMPLER
MAX/Msp 6.1 or CELLO_3 standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

ambiencode~, ambidecode~, ambimonitor (Jan Schacher)


http://trondlossius.no/articles/743-ambisonics-externals-for-maxmsp-and-pd

analyzer~ (Tristan Jehan)


http://web.media.mit.edu/~tristan/maxmsp.html

bonk~ (Miller Puckette et al.)


http://vud.org/max/

centroid~ (Ted Apel et al.)


http://vud.org/max/

dag.statistic (Pierre Guillot)


http://www-irma.u-strasbg.fr/~guillot/

dot.smooth, dot.std, dot.timedsmooth (Joseph Malloch et al.)


http://idmil.org/software/digital_orchestra_toolbox

ej.line (Emmanuel Jourdan)


http://www.e--j.com

f0.distance, f0.round (Fredrik Olofsson)


http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.list, ftm.object, ftm.reschedule,
mnm.alphafilter, mnm.delta, mnm.list2col, mnm.list2row, mnm.list2vec, mnm.onepole, mnm.winfilter
145
FTM (Frederic Bevilacqua et al.)
http://ftm.ircam.fr/index.php/Download

imubu, mubu, mubu.granular~, mubu.knn, mubu.process, mubu.record~, mubu.track,


pipo, pipo~ (IRCAM IMTR)
http://forumnet.ircam.fr/shop/en/forumnet/59-mu.html

msd (Nicolas Montgermont)


http://nim.on.free.fr/msd

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)


http://www.thehiss.org/

OSC-route (Matt Wright)


o.route (Adrian Freed)
http://www.cnmat.berkeley.edu/MAX

quat2car (freeware)
http://www.mat.ucsb.edu/~wakefield/soft/quat_release.zip

CELLO_4.HARMONISER
MAX/Msp 6.1 or CELLO_4 standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

analyzer~ (Tristan Jehan)


http://web.media.mit.edu/~tristan/maxmsp.html

bach.roll, bach.score, bach.transcribe (Andrea Agostini, Daniele Ghisi)


http://www.bachproject.net

dot.smooth, dot.std, dot.timedsmooth (Joseph Malloch et al.)


http://idmil.org/software/digital_orchestra_toolbox

ej.line (Emmanuel Jourdan)


http://www.e--j.com

f0.round (Fredrik Olofsson)


http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.copy, ftm.list, ftm.mess, ftm.object,


gbr.fft, gbr.slice~, gbr.wind, gbr.yin,
mnm.list2row, mnm.moments, mnm.winfilter
FTM-Gabor library (Norbert Schnell et al.)
146
http://ftm.ircam.fr/index.php/Download

lhigh (Peter Elsea)


http://peterelsea.com/lobjects.html

M4L.gain1~, M4L.delay~ (abstractions)


https://cycling74.com

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)


http://www.thehiss.org/

o.route (Adrian Freed)


http://www.cnmat.berkeley.edu/MAX

sadam.stat (Ádám Siska)


http://www.sadam.hu/en/software

supervp.trans~ (IRCAM Analysis/Synthesis Team)


readaptation of SuperVP.HarmTransVoice
http://forumnet.ircam.fr/product/supervp-max-en/

LAPTOP_5.VIDEO
MAX/Msp 6.1, Jitter or LAPTOP_5 standalone application

LIST OF EXTERNALS AND ABSTRACTIONS

o.route (Adrian Freed)


http://www.cnmat.berkeley.edu/MAX

sadam.stat
http://www.sadam.hu/en/software

147
INDEX

p. 94 Les Demoiselles d’Avignon


PRESENTATION
Model
Frameworks
p.95 Notes
p.96 INTERACTION
Roles
Augmented instruments
p. 97 Sensing system
p. 99 Composed interactions
tonal center
spatialisation
p.100 action score
background colours
p. 102 Circuit
p.103 GENERAL INSTRUCTIONS
Settings
Performance:start!
p.104 CELLO_1. SPECTRAL
Electronic sounds and interactive role of Cello_1
p.105 Features and interactions
p.107 The four virtual instruments
sound characters
p.108 internal nuances and controls
p.110 CELLO_2. ARTIFICIAL
Electronic sounds and interactive role of Cello_2
received messages
p.111 messages to cello_4
messages to the ensemble
timings
p.112 Interactions
messages to the ensemble
p.113 Messages to cello_4
electronic sounds
p.114 score
The artificial sounds: Virtual Music Instruments
p.115 white instrument “mandolins”
p.116 violet instrument “electric guitar”
yellow instrument “glissati"
p.117 green instrument “resonant percussion”
p.118 CELLO_3. SAMPLER
Electronic sounds and interactive role of Cello_3
p.119 Spatialization
Live recording
p.120 Sampling
p.121 Cello
Bowing styles
p.122 amplified cello
sound files
p.123 CELLO_4. HARMONISER
Electronic sounds and interactive role of Cello_4
p.124 Start/stop
Interactions
p.125 Note detection and expressiveness
p.126 Timbre
p.127 The score
148
p.128 Calibration
p.129 LAPTOP_5
p.130 SETTINGS
Hardware equipment
Real-time sensing and analysis
motion tracking
p.131 sound analysis
p.132 Setup
p.133 audio settings
p.134 network settings
p.135 input_output
durations
CALIBRATION
p.136 Motion tracking
p.137 roll and tilt calibration
general instructions
p.138 string calibration
triggerings calibration
p.139 Audio analysis
note analysis
sound calibration
p.140 instructions
p.142 SOFTWARE
Cello_1
List of externals
p.144 Cello_2
List of externals
p.145 Cello_3
List of externals
p.146 Cello_4
List of externals
p.147 Laptop_5
List of externals
p.148 INDEX
p.150 List of figures

149
LIST OF FIGURES
p.94 Fig.D_1 The painting
p.96 Fig.D_2 The wireless IMU sensor under the bow frog
p.98 Fig.D_3 Motion Tracking functions
p.100 Fig.D_4 Action score
p.102 Fig.D_5 The circuit
p.104 Fig.1-C_1 Cello_1 application
p.106 Fig.2-C_1 Action score
p.108 Fig.3-C_1 Action score Freezing spectrum instruments
p.109 Fig.4-C_1 Real-time spectral instruments (dynamicEQ)
p.110 Fig.1-C_2 Cello_2 application
p.111 Fig.2-C_2 Four virtual instruments
p.112 Fig.3-C_2 Action score
Fig.4-C_2 Message to video
p.113 Fig.5-C_2 String recognition
Fig.6-C_2 To Cello_4 polyphony
p.114 Fig.7-C_2 To Cello_4 score
Fig.8-C_2 The instrument “Mandolins”
p.118 Fig.1-C_3 Cello_3 application
p.119 Fig.2-C_3 Action score
Fig.3-C_3 Spatialisation
Fig.4-C_3 Rec_Monitor
p.120 Fig.5-C_3 Sampling
Fig.6-C_3 Choose-file
p.123 Fig.1-C_4 Cello_4 application
p.124 Fig.2-C_4 Interval detection and messages to Cello_2
p.125 Fig.3-C_4 Note-on/off detection
p.126 Fig.4-C_4 Expressiveness parameters
Fig.5-C_4 Timbre-expression parameters
Fig.6-C_4 Timbre interactions
p.127 Fig.7-C_4 Calling for the score
p.128 Fig.8-C_4 The score
Fig.9-C_4 Calibration parameters
p.132 Fig.1-T Graphic interfaces common to any cello applications
p.133 Fig.2-T Audio-settings and sound card
p.134 Fig.3-T The network
p.135 Fig.4-T Durations
p.136 Fig.5-T Motion tracking calibration
p.137 Fig.6-T Calibration box
p.140 Fig.7-T Calibration parameters

150
151
Wire’s
PRESENTATION
Studio recordings at: https://soundcloud.com/nicola-baroni/wires
https://soundcloud.com/nicola-baroni/shamans-wires
Live video at: http://www.youtube.com/watch?v=-E1B0DQmNFA
Performance instructions at: https://www.dropbox.com/s/evxyhf1obs487qx/Shaman-instructions.mp4?dl=0

SHAMAN’S WIRES
Shaman's Wires is a collaborative project involving the composer-vocalist Angelina Yershova
and the cellist-composer Nicola Baroni.
The project, in its concert-based facet, develops a 50'-long macro-form, whose narrative unfolds
through improvisation.
The stage setup is arranged as 2 parallel extended instrument assemblages:
-female voice and ethnic percussion (bodhrán) embedded in a live electronic environment
-prepared cello, augmented through interactive live electronics equipment.
The 2 personal stage setups are independent augmented instruments, partially interfacing
through technology.

Fig.W_1 Shaman’s wire

Yershova's background as a native Kazakh contemporary composer, and Baroni's position as a
cellist involved in computer interactive composition, fashion a cross-cultural project based on
ethnic composition, cello extended techniques and Live Electronics.

Fig. W_2 Angelina and Nicola

The performative role of the cello is centred upon new sound vocabularies absorbing Kazakh
sounds and techniques. The actions of the electronics disembody the sounds of the cello, helping
to reformulate them with respect to its native Western practices, and the same happens on the
other side for the Asian ethnic instruments.
Live Electronics, being unconventional in terms of practice and music "grammars", act as
a mediating open space between Western and Central Asian approaches to contemporary music.

152
SHAMAN'S SOUNDS
Syncretism, the state of "being both", is treated as an action of preserving traditions (Eastern and
Western as well), putting into question, whilst developing, their native motivations to make music.
On the other hand the electronics allow the mutation of the acoustic sounds onto imaginary
soundscapes, moving towards the shared compositional contexts animating the concert.
"Being both" is an actual shamanic state, and in fact the animistic Kazakh conception of music
maps sounds onto a transcendent and therapeutic space by which physical energy is charged:
in a word, music is more a spiritual practice than a form of "Art".
Inside our project the Western concert-based rite meets the Central Asian dimension of sound as
meditation, soundscape and self-emergence, through the mediation of the electronics.
The storytelling macro-form of the whole concert is traced upon the metaphor of the shamanic
harmony with the energies of the upper world, giving power to transform lower energies.
Originally the word Shaman (from the Sanskrit saman) means chant, which is a primary healing
practice; and our music is in fact a broad exploration of harmonic chant techniques, through
acoustic and electroacoustic instruments.

SOUNDS
The sweeping overtones emanating from extremely low pitches which distinguish the Kazakh-
Mongolian singing practice are boundaries of a corporeal resonance where sound meets mantra.
The tension between these simultaneous vibrations, obtained through special body techniques
(the shaman voice, or its corresponding instrumental sounds), reveals different corporeal energy
states, tuned to existential dimensions intertwined with health, balance, ethics, esoteric knowledge,
relationships with nature and ancestors.
Raucous vocalisations, beyond any musical representation, are symptoms of and means to dissolve
energy blockages and trauma, breaking up dense energies through sound rattles.
On the other hand the musical interrelation with tunes, melodies and note-scale systems is not
primarily viewed as a means for composition or perfect imitation, but as a sound link with universals
resonating within our energy-body. In this context rhythmic patterns show the emergence of interior
motions (leading to focussed textural densities), rather than being a form of metered time division.

CELLO AND KYL KOBIZ


Kyl Kobiz is the principal bowed string instrument of Kazakhstan: generally
it is not bought from a luthier, but it is assigned to the musician by a shaman,
as a sounding entity tuned to the inner personal qualities and relationships with
the environment and the ancestors. Being more than a tradition, these cultural
aspects persist inside the Kazakh classical and contemporary music community.
http://www.bukhara-carpets.com/kazakh-musical-instruments.html

The role of the Kyl Kobiz is represented inside this project by the cello, through
specialised research involving extended techniques and interactive live
electronics.
Fig.W_3 Kyl Kobiz

153
WIRE’S
The interactive system proposed here was natively built as the central part of the whole duo
Shaman's Wires. The composed interactions called Wire’s behave musically in a hyper-cello fashion,
and can be considered both an autonomous solo and a prominent cello section, possibly
accompanied or dialoguing in duo.

Any further performance of Wire's could be independent of this original program, developing totally
free and autonomous choices by the cellist.

The interactive system Wire's “listens to” and interprets the cello timbre as it is played during
the performance, and sensitively responds with electronic sounds shaping different levels of
attunement, abstraction and energetic intensity with respect to the cello sound.
The cellist structurally and interactively organises and influences the electronic evolutions
of the solo through music segmentations and timbre (whose features feed the composition
algorithms, operatively following the intentions of the cellist).

The electronic sounds have a perceptual (as well as imaginary) connection with the Kazakh music
styles, and their characters are musical resonances responding with their own qualities
and autonomy to the cello input. The electronic system is organised as a subliminal self-organising
sound entity. Its junctions cross-resonate with the energetic and reflexive music intentions
of the cellist, as if they were arousing chakra vortexes.

The performance develops as a free improvisation.

No playing instructions are given; the laptop screen application is displayed for settings, and the
electronic interaction is conducted by listening.

The following performance explanations are structured as:


-Ethnic sounds and cello extended techniques
-Software composed interactions
-Augmented cello improvisation and performance

154
ETHNIC SOUNDS
Even if each performance of Wire's could be independent of this program, the following notes offer
guidelines for a faithful performance with respect to the native Western-Kazakh electroacoustic
interaction. The notes regard styles and cello extended techniques, in accordance with the
performative analysis here documenting possible symmetries between cello and Kyl Kobiz, as they
were found through music rehearsals.

1) The cello will be tuned:

A flat - slightly lowered = 206 Hz
A flat - slightly lowered (1 octave below) = 103 Hz
G - 1/6 of a tone higher = 100 Hz
C sharp - slightly lowered = 69 Hz

Tuning can be carried out with the help of the fundamental frequency monitor located
inside the bottom left section of the laptop screen (turning on the DSP).

Fig.W_4 Tuning monitor
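Since the scordatura is given in hertz, the targets can be checked against the monitor in cents. A minimal sketch, assuming equal temperament with A4 = 440 Hz (“1/6 of a tone” is about 33 cents):

import math

def cents_from_equal_tempered(freq, ref=440.0):
    # deviation of freq from the nearest equal-tempered pitch, in cents
    semis = 12.0 * math.log2(freq / ref)
    return (semis - round(semis)) * 100.0

for name, f in [("Ab3", 206.0), ("Ab2", 103.0), ("G2", 100.0), ("C#2", 69.0)]:
    print(name, f, round(cents_from_equal_tempered(f), 1))
# Ab3 is about -13.8 cents ("slightly lowered"),
# G2 about +35.0 cents (roughly 1/6 of a tone higher)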

This tuning permits:


- A more relaxed and edgeless timbre throughout the high ranges (the upper strings are lowered).
- A more pushing sound in the low ranges (the lowest string is higher).
- A middle range mainly producing faint resonances, giving rise to microtonal beats.

The division of the open cello strings into these 3 pitch regions engages with:
-strong sustained low drone tones expanding to a preferential detuned 5th double-stop
-higher melody ranges springing from the low drone, but passing through the deconstructing
resonance of the middle strings
-higher tones showing a heterophonic attitude because of the role of the middle strings
-middle strings in addition helping to create a secondary higher drone, or a secondary inner voice
(more or less centred one octave below)

2) Low sustained tones are central to the performance and they can be:
- modulated in timbre (full tones, sweeping-sibilant bowing conductions, harsh vs. slow-increasing
modes of attack, gradations of grattato styles, sul-tasto -> ponticello zooms)
- enhanced through low double-stops
- the 3rd string allows for natural harmonic sweepings recalling the Asian harmonic chant
- the ordinary open-string style can occasionally be alternated with different sustained pitches
(by left-hand fingering)
- implicit rhythmic patterns vs. drones are means to structure the improvisation

155
3) Higher melodic sketches can be designed through short intervallic fragments and ornamentations
around a few selected pitches
-melody should be highly fragmented, interleaving with low and middle open-string resonances
-melodic fragments are often performed through left-finger half-pressure, quick glides, nail lateral
contact with the high string
-ornamentation is microtonal, gliding with the same finger, occasionally extending to larger
pitch-tremolos up to a minor third
-the middle strings offer the opportunity for secondary voices, intermediate drones, scraping
accompaniments, differentiated bariolages

4) Bow-string noises are more important than melodic fragments:
-extreme bow pressure, pitch-timbre clusters, fast-extended left-hand glides and sweeping
harmonics through different bow pressures
-timbre shaped through a rich vocabulary of bow roughness (half-pitches, fast tense tremolos,
small pressure distortions, unstable bow-bridge distance)
-the rotational bow activity across the strings increases beats, rough note attacks, collateral
bow-noises, re-bouncing resonances

5) The irregular melodic trend does not preclude occasionally developing a tune-like melody,
or shaping implicit rhythmic divisions
-a free unmetered performance can alternate with background rhythms conceived in an additive
fashion (similarly to the Indian Tala)
-rhythms can be freely chosen within slow or faster motion, strictly mingling binary and ternary
impulses in cyclic patterns (i.e. 4-4-2-3-2, 3-2-3-4, etc.)
-the additive rhythmic conduction frees the performer from the need to follow a fixed meter;
the rhythmic patterns can be dynamically changed during the performance

6) A small set of external objects preparing the cello is recommended:
-thin copper wires (worn by the low and/or high strings) for rumbling effects (raising
intermodulations and detuning the cello spectrum)
-small metal rings (such as key rings) fixed around the low string, rhythmically rattling against
the cello bridge
-small clips attached to mid portions of the strings in order to block the string vibration at special
nodes: the clip positions can be modified during the performance
These objects need to be easily fixed/removed during the performance.

A sequence of predefined time segments needs to be planned in advance, at least for the initial
2'-3' in order to shape an organised sound interaction with the electronics.

156
SOFTWARE INTERACTIONS

The electronic system develops music by means of a Self Organising Map (SOM), an unsupervised
Machine Learning technique based on the Adaptive Resonance Theory (ART)12.
The software program reads the continuous stream of 5 cello timbre descriptors, and consequently
behaves in response to how and when the music created by the cellist is performed.
At the beginning the SOM progressively expands its mapping space from an initial point zero.
Subsequently the SOM automatically fixes some nodes (relative to the cello input articulations
and time evolutions), until it fills its mapping space.
The self-organised mapping, even if apparently abstract, hence depends on the cello timbre and
the music segmentations as they are performed during the initial phrases of the solo.

CELLO TIMBRE
The system extracts the real-time analysis of the following cello features:
frequency, amplitude, periodicity, spectral kurtosis, timbre density.
-frequency individuates the pitch registers
-amplitude tracks the sound global volume intensity
-periodicity regards pure vs. noisy sounds
-kurtosis reveals if the timbre is resonant vs. ordinario vs. compressed
-density tracks timbre intensity (full tone) vs. airy timbre (i.e. sul tasto, light bowing, pizzicato)

The fluctuations of these 5 sound qualities are tracked in order to detect their evolutions along a
time span of 2.5 seconds.
They are computed as continuous values between 0. and 1.
Their streaming values are sent to the SOM.

Fig.W_5 Timbre cello descriptors

If the cellist plays contrasting episodes at the beginning of the solo, the SOM will be faster
to organise its mapping space. On the contrary, slow initial cello transitions showing longer music
segmentations of similar timbre contents will slow down the Machine Learning phase.
Slower initial transitions will be beneficial to the consistency of the automatic mapping process.
The cellist cannot control the detail and the timings of the SOM, as the system is self-regulating.
The cello improvisation therefore creates an indirect remote dialogue with the system. Rather than
a strict instrumental loop of control-reaction upon each effect, the cellist will be focussing
on high-level decisions involving the global music behaviour of the interaction.
In a sense the electronics behave more as an “alter-ego” than an objective instrument.

12 B.D. Smith, G.E. Garrett, 2012, http://www.nime.org/proceedings/2012/nime2012_68.pdf
157
SELF ORGANISING MAP
The SOM, taking as input the stream of the 5 cello timbre descriptors, creates nodes of
cello-timbre similarity. The system organises its mapping space by identifying cello timbre charac-
ters (as inputs), and creating (as output) a code related to the sequence of occurrence of these
timbres, as they were performed by the cellist during the initial part of the solo.
Wire’s is structured in order to make the SOM pro-
gressively generate output numbers from 0 to 19 in 2
x/y dimensions during the initial part of the solo per-
formance (the Machine Learning phase).
The output numbers X are mapped to the electro-
nic effects, the output numbers Y are mapped to
the density of the electronics (how many effects are
running in parallel).

Fig.W_6. Self organised mapping outputs

The figure above shows, as an example, a situation in which the mapping space Y (density of
effects) was filled quite briefly, reaching its maximum of density in a short time.
A smoother process would be preferable.

Two further SOM output dimensions are mapped to the spatialisation.

Fig. W_7 Output display monitor
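The sketch below (Python) illustrates the self-organising principle on a 20x20 grid of 5-dimensional weight vectors. It is a deliberately simplified competitive map, not the ART-based engine actually used by Wire’s; the radius-0 update and the learning rate are illustrative assumptions.

import random

SIZE, DIMS = 20, 5
grid = [[[random.random() for _ in range(DIMS)]
         for _ in range(SIZE)] for _ in range(SIZE)]

def best_matching_node(vec):
    # find the node whose weights are closest to the descriptor vector
    best, best_d = (0, 0), float("inf")
    for y in range(SIZE):
        for x in range(SIZE):
            d = sum((w - v) ** 2 for w, v in zip(grid[y][x], vec))
            if d < best_d:
                best, best_d = (x, y), d
    return best

def train_step(vec, rate=0.1):
    # pull the winning node towards the input (radius-0 update)
    x, y = best_matching_node(vec)
    grid[y][x] = [w + rate * (v - w) for w, v in zip(grid[y][x], vec)]
    return x, y  # x -> principal effect index, y -> effect density

# vec = [frequency, amplitude, periodicity, kurtosis, density], all 0..1
print(train_step([0.2, 0.8, 0.9, 0.4, 0.6]))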

ELECTRONIC RESPONSE
As listed below, there are 20 electronic effects in all; the electronic sounds with a low index
number are lighter in character. The more the index number rises, the more intense and abstract
the referenced effect will be.

Your laptop screen (as in Fig.5) shows the currently active “principal effect index” and
the “effect density” (when more effects are active in parallel they are neighbours of the principal
effect). When the cellist performs timbres very similar to those played at the beginning, the machine
should output very low numbers: the output numbers increase when the cellist performs
timbres previously played, but during subsequent music segmentations. In this way the machine
shifts between light, middle and stronger effects, with a narrower or larger band of effect density.

The cellist can monitor the mapping numbers output by the SOM, as shown in figure 5. But the
true effective monitor (consistent with the audible response) is afforded through the colour changes
of the sound modules appearing on the laptop screen. It is not necessary, however, to exploit the
laptop as if it were a graphic score (looking at it exaggeratedly), since Wire’s is an improvised
interaction.
158
After the start the laptop screen can be considered nothing more than a global monitor of the
electronic densities and directions, and it should not disturb the listening priority of the interaction
(the previous setup and calibration maybe requiring the help of an assistant).

Fig.W_8 Main “Wire’s” MAX patch

The previous image shows an example of your laptop screen in action. The SOM monitors are in
the upper portion. The lower left part contains the start/stop/gain settings.

Seven modules arranged on the screen contain the electronic effects: the coloured interfaces show
you dynamically which sound effects are currently active.
Three of the modules are actually double-modules, and each module can play one octave lower.

A red square on a module means “the effect is active and playing”
Blue means “the effect is performing one octave lower”
Violet means “the effect is performing at normal pitch plus its low octave”

159
ELECTRONIC SOUNDS
Taking into account all the internal variants, the system provides 20 sound effects.
Each effect is tagged as an index to the SOM output maps. Starting from a "point zero"
mapping, represented by the amplified cello alone, the most artificial effects are located
as the most distant points from the origin of the self-growing mapping space.

The effects are ordered by surrogacy, that is timbral abstractedness and distance from the cello:
1 amplified cello / 2 amplified cello, low octave
3 live recorded cello / 4 live recorded cello, low octave
5 filtered cello / 6 filtered cello, low octave
7 very amplified cello (ch1) / 8 very amplified cello (ch1), low octave
9 cello with delay-feedback / 10 cello with delay-feedback, low octave
11 FOG cello granulator / 12 FOG cello granulator, low octave
13 very amplified cello (ch2) / 14 very amplified cello (ch2), low octave
15 FOF synthesis 1 - artificial voice / 16 FOF synthesis 1 - artificial voice, low octave
17 FOF synthesis 2 - artificial voice / 18 FOF synthesis 2 - artificial voice, low octave
19 Modalys physical model / 20 Modalys physical model, low octave

INTERACTION
The more the cello performance is varied and contrasting, the richer the electronic performance will
be. The more the cello performance is slow-paced and reflective, the lighter, more controlled and
slowly evolving the electronic result will be. It may not be necessary to exploit the full range of the
electronics, and a climax could be an isolated event.

The internal response of each effect is quite complex and should be performed intuitively,
but the internal local mappings are detailed inside each module (by double-clicking the labels
inside the main app you access mappings and explanations).

Each electronic response, even the most subtle, depends on the cello timbre as you are performing
it. In other words you sonify the timbre analysis of yourself. The most important thing is to keep
a clear connection with the conceptual aspects of the mappings:
-the initial sound characters building the overall system response,
-the kind and density of the effects during the middle part of the performance,
-how and when to create points of climax.

You can also single out, at your own choice, a few elements from the complex detail of all the
mapping parameters upon which to interplay, giving nuance to the improvisation. Since timbre is
complex and multidimensional, it will be impossible to force the machine into deterministic
responses in terms of a precise chain of cause and effect: the electronics will be a further sound
dimension coupled with the music and the intentions coming out of the cello performance.

More details are contained inside the module “performance-notes” inside the main app:
-the module “setups” contains the messages for calibration,
-“audio-descriptors” shows the sound analysis functions,
-inside the module “matrix” all the internals of the system are running.

PERFORMANCE
SETUP

The “I/O settings” section allows you to set:
-the sound card and the logical input channels,
-the outputs (by default the system is quadraphonic; press the relative button for a different option).

Fig. W_9 I/O section

At the bottom left of the screen, the module “Setups” contains the calibration system:
-section 5 contains the optional, advanced possibility (requiring direct experimentation) of setting
the parameters of the SOM (plasticity, learning rate, neighbourhood).

-the amplitude calibration of section 4 is necessary: by pressing “Tab” (with DSP on, and before
starting the piece) you have 10” to play the loudest cello sound possible (two flashes signal the
beginning and the end of the calibration time window)13.

Fig.W_10 Amplitude calibration
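
The calibration can be pictured as a simple peak follower running over the 10” window. The
Python sketch below is only a conceptual stand-in for the patch’s internals (the function name,
block interface and RMS measure are assumptions):

```python
import numpy as np

def calibrate_amplitude(blocks, sr=44100, seconds=10):
    """Track the peak block RMS over a fixed calibration window; the
    loudest value found becomes the reference (like "amp_calib") used
    to normalise all later loudness analysis."""
    n_samples, seen, peak = sr * seconds, 0, 0.0
    for block in blocks:
        peak = max(peak, float(np.sqrt(np.mean(block ** 2))))
        seen += len(block)
        if seen >= n_samples:       # the second flash: window over
            break
    return peak

# Toy usage: ~11 seconds of fake input blocks of 1024 samples each.
blocks = (np.random.default_rng(7).uniform(-1, 1, 1024) for _ in range(500))
print(calibrate_amplitude(blocks))  # later: normalised = min(1, rms / peak)
```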

START

Optionally you can set the final gain (0.9 by default) by pressing one of the laptop keys
QWERTYUI. By pressing the Spacebar you start the music: after 10” the sounds will start coming
out, and after 15” the full interaction will be running. At the end, pressing Enter triggers a long
fadeout that steers the music to its conclusion.

Fig.W_11 Start-stop interface

More details can be found inside the module “laptop-controls” (double-click “performance-notes”)
inside the main app. The start and stop functions are the only physical interaction that the cellist has
to perform with respect to the computer. All the interactive music information is sent to the
system exclusively by means of the cello sound.

13The system automatically sets the number box “amp_calib” to the value returned inside its neighbouring number box
“amplitude monitor”. After that, press the message “write”: the application should remember your levels at the next use,
until a new calibration is stored.
FORM BUILDING
The music interaction develops as a consequence of the sound choices of the cellist.
From the beginning the SOM observes the cello sound as it develops during the performance and
automatically builds its self-growing and abstract mapping space.
The cellist has to improvise music characterised by different time regions (formal segments,
music phrases, contrasting timbral zones) possessing specific pitch ranges (static or evolving),
mean volumes (i.e. piano, mezzo-forte, fortissimo), and timbres (pure vs. noisy, light vs. dense,
resonant vs. intense).

Similar cello timbre inputs will move the system towards similar electronic outputs.
The consistency between cello timbres and electronic outputs is established during the initial part
of the solo: the time development of the initial timbre characters from the point of view of the
cellist, machine learning from the point of view of the computer.
Depending on the cello improvisation, Wire’s grows differently in terms of sound abstractness
(orders of sound surrogacy), density and spatialisation.
If during the initial 2’-3’ of the performance the cellist creates well-shaped, differentiated sound
regions (i.e. very low pitches / pianissimo / sul tasto; after ca. 10” to 20”, medium-low pitches /
mezzo-forte / dense bowing; after another 10” to 20”, high pitches / forte / noise-distorted, etc.),
the system should couple the initial region with low-surrogacy effects, few effects active, and
involving speaker 1.
As the system detects new sound characters it will start to progressively activate higher surrogacy
effects, a higher number of effects working in parallel and more dynamic spatial movements.
Similar cello sounds will always recall similar electronic situations.

If the cello improvisation develops through stable and slowly evolving music sounds, the system
response will be smoother; more cello contrast will recall higher degrees of electroacoustic entropy.
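
For readers unfamiliar with self-organising maps, the following minimal Python sketch shows the
kind of online update by which such a map comes to reflect the timbres actually played, so that
similar inputs keep landing on nearby units. It is a generic miniature SOM, not the code of the
ml.som external; the grid size, rates and six-dimensional descriptor frame are arbitrary assumptions:

```python
import numpy as np

class TinySOM:
    """Minimal online self-organising map for timbre-descriptor frames."""

    def __init__(self, grid=(8, 8), dim=6, lr=0.2, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((grid[0], grid[1], dim))  # codebook vectors
        self.lr, self.sigma = lr, sigma               # learning rate, neighbourhood

    def step(self, x):
        """Train on one frame and return its position on the map."""
        d = np.linalg.norm(self.w - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
        ii, jj = np.indices(d.shape)
        g = np.exp(-((ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2)
                   / (2 * self.sigma ** 2))
        self.w += self.lr * g[..., None] * (x - self.w)  # pull neighbourhood to x
        return bmu   # similar timbres keep landing on nearby units

som = TinySOM()
frame = np.array([0.2, 0.6, 0.1, 0.4, 0.3, 0.5])  # e.g. loudness, brightness, ...
print(som.step(frame))
```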

ARTIFICIAL SOUNDS
Inside each of the effect modules you can find the description of the internal mappings.
The electronic effects are built in order to recreate concrete or imaginary features recalling Kazakh
sounds and techniques: enhancing, mimicking or estranging timbre qualities.
Starting from the simpler effects and moving towards the last, more abstract ones, it can be noticed
that (a small mapping sketch follows the list):
A) Cello treatments
-all the effects are provided with a low-octave doubler, mimicking the deep throat-voiced Central
Asian style.
-filtering allows for a pseudo harmonic-chant result: quick and nervous loudness variations by the
cello increase the filter sweeps into different harmonic frequencies; a bright timbre (i.e. sul
ponticello) increases the effect.
-the intricate delay system suggests multiple heterophonic textures: you amplify this effect by
decreasing your cello volume.
-the extra-gain effect fits grungy, subtle noises: it is very sensitive to soft and very noisy sounds.
B) Synthetic sounds
-FOG enacts overtones/irregular-granulations/sound-distortions
-FOF modulates voiced/guttural/rumbling sounds
-physical models (Modalys) produce rumbles/percussions/abstract-bow-scrapings
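
As announced above, here is a hedged sketch of the kind of descriptor-to-parameter mappings at
work in the cello treatments. The real mappings live inside the MAX modules, so these functions,
thresholds and ranges are invented for illustration:

```python
# Hypothetical one-to-one mappings for the treatments described above.

def filter_sweep(loudness_delta: float, brightness: float) -> float:
    """Quick, nervous loudness variations push the filter towards new
    harmonic partials; a bright (sul ponticello) timbre strengthens it."""
    return min(1.0, abs(loudness_delta) * (0.5 + brightness))

def delay_feedback(loudness: float) -> float:
    """The heterophonic delay grows as the cello volume decreases."""
    return 1.0 - loudness            # inverse mapping: play softer, get more

def extra_gain(loudness: float, noisiness: float) -> float:
    """The extra-gain module favours soft but very noisy sounds."""
    return noisiness * (1.0 - loudness)

print(filter_sweep(0.6, 0.8), delay_feedback(0.2), extra_gain(0.3, 0.9))
```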

Augmented interaction
Each internal modulation of these artificial effects responds to the cello timbre in a consistent way,
but the more abstract the effects are, the less predictable their response will be.
The cello improvisation, through its non-obvious association with the electronics, acts as a
timbre-based corporeal meta-language accessing different levels of remoteness and energy response.

The cellist has indirect access to the means of electronic control, which are not in fact controls
at the low level of analytic instrument parameters. Especially at the level of macro-form the cellist,
preserving his/her traditional holistic approach to instrumental performance, gains the power
to influence the compositional behaviours. But these compositional behaviours are the result
of a mediation between the musician’s choices and the mirroring autonomous behaviour of the
self-organising abstract machine. Both the cellist and the system have a strict relation with their past
choices, which influence the present music events. The continuous negotiation between the cellist,
his/her mirroring self-organising alter ego and the past of both suggested the creation of such
a system as a means to convey the idea of energy exchanges through different spiritual dimensions.
In this sense the electronic sounds can be viewed as extra-corporeal extensions, localised
and accessed through a “shamanic” body-based, but multi-dominant and interdependent, balance.

PREPARED CELLO

-Copper wires and small mutes
-Rattling metal rings
-Clips

Fig.W_12 Modulating objects

HARDWARE EQUIPMENT

-1 microphone (phantom powered) for the audio, possibly DPA (adc~1)
-1 bridge-contact pickup for the sound analysis, possibly Fishman (adc~2)
-1 audio card (minimum 4 outputs)
-1 Mac laptop (minimum dual core, 2.4 GHz), running MAX/Msp (some externals requiring
IRCAM authorisation) or otherwise the Wire’s standalone
-PA, quadraphonic at least

Fig.W_13 Cello microphones

SOFTWARE
MAX/Msp 6.1, or Wire’s standalone application

LIST OF EXTERNALS AND ABSTRACTIONS


banger (Peter Elsea)
http://peterelsea.com/lobjects.html

contrast-enhancement (Michael Edwards)

dot.smooth, dot.std (Joseph Malloch et al.)
http://idmil.org/software/digital_orchestra_toolbox

f0.fold, f0.line_log, f0.round (Fredrik Olofsson)
http://www.fredrikolofsson.com/pages/code-max.html

fiddle~ (Miller Puckette et al.)
http://vud.org/max/

fof~ (Michael Clarke and Xavier Rodet)
http://eprints.hud.ac.uk/2331/

fog~ (Michael Clarke and Xavier Rodet)
http://eprints.hud.ac.uk/2331/

ftm, ftm.copy, ftm.mess, ftm.object,
gbr.fft, gbr.resample, gbr.slice~, gbr.wind=, gbr.yin
FTM Gabor library (Norbert Schnell et al.)
http://ftm.ircam.fr/index.php/Download

ml.som (Benjamin Smith, Guy Garnett)
http://nime.org/proceedings/2012/2012_68.pdf

modalys~, mlys.bi-string, mlys.bi-two-mass, mlys.bow, mlys.point-input, mlys.point-output,
mlys.position, mlys.signal, mlys.speed (IRCAM Instrumental Acoustic Team)
http://forumnet.ircam.fr/product/modalys-en/

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)
http://www.thehiss.org/

roughness (John MacCallum)
http://cnmat.berkeley.edu/downloads

sadam.stat (Ádám Siska)
http://www.sadam.hu/en/software

zsa.flux~ (zsa.easy_flux) (Mikhail Malt, Emmanuel Jourdan),
readaptation of the abstraction zsa.consonant tracking
http://www.e--j.com/index.php/download-zsa/

INDEX
Wire’s
PRESENTATION
Shaman’s Wire
Shaman’s sounds
Sounds
Cello and Kyl Kobiz
Wire’s
ETHNIC SOUNDS
SOFTWARE INTERACTIONS
Cello timbre
Self Organising Map
Electronic response
Electronic sounds
Interaction
PERFORMANCE
Setup
Start
Form building
Artificial sounds
cello treatments
synthetic sounds
augmented interaction
PREPARED CELLO
HARDWARE EQUIPMENT
SOFTWARE
List of externals
INDEX
List of Figures

LIST OF FIGURES
Fig.W_1 Shaman’s wire
Fig.W_2 Angelina and Nicola
Fig.W_3 Kyl Kobiz
Fig.W_4 Tuning monitor
Fig.W_5 Timbre cello descriptors
Fig.W_6 Self organised mapping outputs
Fig.W_7 Output display monitor
Fig.W_8 Main “Wire’s” MAX patch
Fig.W_9 I/O section
Fig.W_10 Amplitude calibration
Fig.W_11 Start-stop interface
Fig.W_12 Modulating objects
Fig.W_13 Cello microphones

Suite
audio-video interaction for 8 self-observing audio files
Stereo rendering at: https://youtu.be/BNzoDeourno

Suite is a formal self-regulating method of composition.
Eight stereo files are processed, mixed and spatialised in real-time.
The resulting audio output feeds a sound analysis module upon which, in a ring fashion, the
algorithmic processor creating the sound-video composition is based.
The internal analysis-synthesis loop is deterministic, but the video-sonic result is never the same,
although it shows asymmetric recursions.
The module continuously analysing the overall sound output is called the “Observer”.

The Observer interprets and slices the sound signal into a stream of features representing:
1) the overall amplitude, fundamental frequency, brightness, roughness, noisiness, onset time-point,
2) the local amplitudes of the 8 lowest Bark bands (the energy content detected inside the range
20-920 Hz, segmented as 8 perceptually relevant frequency bands).
-The overall sound qualities (1) are tracked in real-time;
-The Bark amplitudes (2) are instead analysed in their behavioural flow, taking into account their
individual stability, prominence and gait with respect to their own short-time “history”.

This streaming, complex analysis vector is assigned to a dense net of algorithmic decisions and
nuances operating upon the 8 stereo files, forming in this way the final audio.
A similar net processes 8 fixed images, producing the parallel video rendering.
The audio output thus obtained is in turn sent to the analyser (the Observer), circularly feeding the
algorithmic “decision-making” process.
In this way this structurally closed system is based on the absolute coincidence of input and output.
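
The structure of the ring can be summarised as in the Python sketch below. It is a conceptual
stand-in only: the patch performs the analysis with analyzer~, roughness~ and FTM objects, while
the feature reduction and the smoothing “decision” used here are deliberately crude inventions:

```python
import numpy as np

rng = np.random.default_rng(0)
files = rng.standard_normal((8, 1024))   # stand-ins for the 8 looping stereo files

def observer(out_block):
    """Crude stand-in for the analysis chain: 8 low-band energy fractions."""
    spectrum = np.abs(np.fft.rfft(out_block))
    edges = np.linspace(0, len(spectrum) - 1, 9).astype(int)[:-1]
    return np.add.reduceat(spectrum, edges) / (spectrum.sum() + 1e-9)

gains = np.zeros(8)
out_block = files.sum(axis=0)            # an initial mix bootstraps the ring
for _ in range(4):                       # output -> analysis -> decisions -> output
    bands = observer(out_block)
    gains = 0.9 * gains + 0.1 * bands    # stand-in for the decision net
    out_block = gains @ files            # the mix is itself the next analysis input
print(gains)
```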

Even if the compositional internals are strictly shaped, the result shows an organisationally open
behaviour, enhanced by the fact that even the most detailed analysis, being a conceptual
representation, apparently cannot avoid reductions and distortions. Besides the metaphor of a
virtual embodied knowledge of the sound upon itself, this dynamic system shows different levels of
resistance between its input sound matter and its human-organised reading, computationally
re-assembled inside the compositional machine. More than exploring the new-cybernetic idea of a
self-aware system able to grow and behave, this work is intended as a study upon the concept of
instrument, at the edges of its interesting boundary state of a complete input-output conjunction.

The crucial composition algorithms of Suite mostly rely on the Bark-band energy behaviours
(agencies) with respect to their recent time evolution (spanning up to 2 seconds).
Obviously a length of 2 seconds is not enough to speak of “history”, but it is a sufficient feedback
time for disengaging the mechanism from a straight real-time dimension. The compositional
decision-tree thus acts as the consequence of a virtual short-term action-reaction domain,
taking into account the mediating dimension of temporal expressivity, which could be defined as
“the time of the performance”.
Since these 8 audio files globally process themselves through audio analysis and automations,
we could say that the musical result is a sound output and an instrument at the same time.
A similar loop grounds the concept of the hyperinstrument, inside which the performative actions
(of the living instrumentalist) feed the output sound and the methods of machine processing in one
take. In this sense the compositional ecology of Suite could be viewed as a virtual hyper-performer.

The affordance of this interactive feedback loop in the context of a hyperinstrument can often be
motivated by the aim to extend the compositional tasks of the performer, or to dynamically ground
complex self-emergent structures in perceptible human performative gestures.
This kind of interaction puts the performative gestural dimension of a living sound at the same level
of information trade as the compositional dimension of abstract structural choices.
In other words it allows direct synergies between the low levels of signal analysis and control
(i.e. sonic parameters, raw energies, physical interactions and modulations) and the mid levels of
structural decisions (i.e. musical patterns, regularities, directions, densities, repetitions, formal
interactions), which are considered as linguistic-compositional tasks.

The choice of exploring the interactive potentials of fixed sound files inside an automatic and
autonomous performative time domain is linked to the operational necessity of aesthetically testing
the consistency of infrastructures which elicit non-obvious paradigms of composing by listening,
and symbolisms emerging from energetic perception/action gaps.

HISTORY OF THE COMPOSITION

The 8 sound files were originally composed as short electroacoustic commentaries for the play
“Il Padre de li Santi”14 (The Father of the Saints).
The concrete, satiric and psychoanalytic contents of the show, together with the theatrical
requirement of short, occasional sound commentaries, suggested to me the idea of brief,
time-expiring automatic systems: “found” sound files sampling and processing themselves through
the data obtained by chains of self-automatic sound analysis. The resulting audio files were devised
in order to comment on precise events happening during the course of the show: my choice was to
exploit anecdotal music quotations, whose recognisable content and deconstruction was intended to
convey the required satire; thus each quotation was self-transforming, self-processing and
self-expiring through a built-in strategy of automatic audio analysis.

After the play I decided to collect the separate short excerpts into an autonomous work.
How could I make these satiric, quotational and self-reflexive short sound entities dance and unify?
My decision was to extend their means of composition (automatic self-analysis) to a further global
meta-level: a process of automatic decision-making about the occurrence, fragmentation, mixing
and spatial movement of the assemblage thus obtained. At every new occurrence, each file is
exposed to a different pitch transposition, speed of reproduction, point of departure and
“life duration”; in addition each triggered file follows a trajectory of internal shifts between the
states of normal, slightly granulated, and dynamically equalised reproduction.

14 By Luigi Lunari, performed at the Teatro dell’Orologio in Rome from 29 October until
3 November 2013.
The individual files are called:

1-Ingresso (entrance/ouverture)

2-Campane (bells)

3-Folla (crowd)

4-Wanda (recalling Wanda Osiris)

5-Esorcista (exorcist)

6-Tamburi (drums)

7-Maggiordomo (butler)

8-Uccello di Fuoco (Firebird)

Fig.S_1 The pre-sampled sound materials

The only explicit historic quotation originally requested by the actors’ team was Stravinsky’s
Firebird. The consequent idea of structuring a formal system based on quotations and stolen sounds,
taking Stravinsky’s music as a structural model, appeared to be a quite rational solution. Just as the
music is a formal reshuffling of “stolen musics”, the video in parallel works on “stolen scores”,
intensively remixed by exactly the same sound analysis methods that manipulate the audio.

Fig.S_2 Screenshots from the video

ANALYSIS
The total output sound is sent to the analyser (the analyzer~ and roughness~ MAX objects).
The following features of the 8 lowest Bark bands are tracked in real-time (a sketch of these
running statistics follows the list):
-their amplitudes in dB
-their amplitude difference (positive or negative) inside a time window of 2”
-their standard deviation inside a time window of 500 ms
-their amplitude derivatives with respect to the previous sample (delta distance)
-a selection of the currently most increasing and most decreasing Bark bands (inside their last 2” window)
-the sorted index (and current delta value) of the 8 bands with respect to their energy of change
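
As announced, here is a rough Python model of these running statistics for a single band. The patch
computes them with FTM/MnM and Digital Orchestra Toolbox objects, so the frame rate and
window sizes below are assumptions:

```python
from collections import deque
import numpy as np

class BarkTracker:
    """Running statistics for one Bark band (illustrative stand-in)."""

    def __init__(self, fps=50):                   # ~50 analysis frames/s assumed
        self.win2s = deque(maxlen=2 * fps)        # 2" window
        self.win500 = deque(maxlen=fps // 2)      # 500 ms window
        self.prev = 0.0

    def update(self, amp_db):
        self.win2s.append(amp_db)
        self.win500.append(amp_db)
        delta, self.prev = amp_db - self.prev, amp_db
        return {"amp_db": amp_db,
                "diff_2s": amp_db - self.win2s[0],        # signed change in 2"
                "std_500ms": float(np.std(self.win500)),  # local stability
                "delta": delta}                           # frame-to-frame derivative

trackers = [BarkTracker() for _ in range(8)]
frames = [t.update(a) for t, a in zip(trackers, np.random.uniform(-60, 0, 8))]
order = sorted(range(8), key=lambda i: abs(frames[i]["delta"]), reverse=True)
print(order[0], order[-1])   # currently most- and least-changing bands
```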

METHODS
Layer A
Each of these 8 bands references a different output audio file (all the files are in loop mode;
a dispatch sketch follows this subsection):
-the positive and negative amplitude maxima determine the file occurrence (start/stop)
-the standard deviation influences the output amplitudes (volumes)
-the delta distances influence the spatial movements (speaker assignment)
-the absolute individual amplitudes determine the starting points of the files (the “seek” function)

Further mappings connect the Bark amplitudes to the parameters of granulation, equalisation and
delay, and determine the final video rendering. The onset attacks influence some step-by-step
processing modules (in opposition to further continuous effects).

The amplitude, frequency, noisiness, roughness and brightness vectors distribute their effect details
and internal shapes inside the audio-video processing machine.
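
As announced above, a minimal dispatch sketch of the Layer A logic, under invented names and
thresholds; the Player stub stands in for the 8 looping stereo files and the spatialisation of the
actual application:

```python
class Player:
    """Stub standing in for the looping stereo files and the speakers."""
    def start(self, i): print(f"file {i}: start")
    def stop(self, i): print(f"file {i}: stop")
    def set_volume(self, i, v): print(f"file {i}: volume {v:.2f}")
    def set_speaker(self, i, s): print(f"file {i}: speaker {s}")
    def seek(self, i, pos): print(f"file {i}: seek {pos:.2f}")

def layer_a(i, feats, player):
    """Send one band's running statistics to its file (thresholds invented)."""
    if feats["diff_2s"] > 6.0:            # positive amplitude maximum -> occurrence
        player.start(i)
    elif feats["diff_2s"] < -6.0:         # negative maximum -> stop
        player.stop(i)
    player.set_volume(i, min(1.0, feats["std_500ms"] / 12.0))        # deviation -> volume
    player.set_speaker(i, int(abs(feats["delta"]) * 10) % 8)         # delta -> speaker
    player.seek(i, max(0.0, min(1.0, (feats["amp_db"] + 60) / 60)))  # amplitude -> start point

layer_a(3, {"diff_2s": 8.0, "std_500ms": 4.0, "delta": 0.7, "amp_db": -20.0}, Player())
```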

Layer B
A second, collateral process is active: the quasi-random probabilistic selection of very short
excerpts taken from Stravinsky’s original Firebird, extremely fragmented and strongly filtered.
It acts as a nested sound skeleton of the overall music, able to offer a point of departure for the
analysis-composition loop, and an opportune background region filling the unavoidable intervals of
stillness occurring inside the “Layer A” procedure.
Part of this fragment selection, and also the whole filtering, is determined through mappings
by the global sound analysis engine (the Observer).
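
A hedged sketch of this quasi-random selection; the ranges and the way the Observer biases the
draw are illustrative values, not the patch’s actual numbers:

```python
import numpy as np

rng = np.random.default_rng()

def layer_b_fragment(total_len_s, observer_bias):
    """Pick a very short, strongly filtered Firebird excerpt.
    observer_bias (0..1, from the global analysis) skews both the region
    of the piece drawn from and the band-pass filter centre."""
    start = rng.uniform(0, total_len_s * (0.5 + 0.5 * observer_bias))
    dur = rng.uniform(0.05, 0.4)               # extreme fragmentation
    centre_hz = 200 + 3000 * observer_bias     # the Observer also drives the filter
    return start, dur, centre_hz

print(layer_b_fragment(170.0, 0.3))
```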

MOTIVATION
The automatic system was built with the aim of conveying formal abstract associations through its
internal parts, as a study on the form-bearing potentials of sound gesturally conceived and
segmented. The audio analysis treatments collect the energy behaviours of perceptually relevant
frequency bands (Barks), formally framed by long-term segmentation through onset detection.
By associating the selection of each sound file with recurrent global timbre qualities, the result is to
drive musical form through functional extensions of timbre.
The internal tensions and natural energy articulations of the analysed (output) sound add, through
global mappings, a further layer of pulses, selections, dynamic movements, cyclic ornaments and
contours to the principal semantic layout of the 8 original stereo files.
PERFORMANCE
The performance coincides with the real-time automatic audio-video rendering.
Every new performance will be different, since the system is a meta-composition. Potentially the
system could maintain its behaviour of output -> self-analysis -> self-structuring as an eternal
circular loop; that is why the final performance is conceived in the style of a sound installation and
does not require the audience to be positioned in a concert-like fashion. The audience can move
and walk around the central video projection and inside the space of the 8 speakers, ideally
arranged as an external circle.

Every speaker is intended as a singular, instrument-like sound source; therefore site-specific,
unconventional speaker deployments are advocated.

The demo version lasts 10’, and it can be differently or indefinitely extended in case of a
gallery-style performance. The overall duration (in minutes, i.e. 3 hours = 180 minutes) has to be
manually set in advance inside the application.

The rendering is completely automatic after the analogue system and a few software parameters
have been set, as described inside the main checklist of the application.

The performance interaction amounts to the minimal gesture of just pressing the start button.
As shown inside the main MAX application:
1) Check the sound card (audio settings at the bottom)
2) Select a different diffusion option if 8 speakers are not available
3) Press start (yellow button on the left)
Optionally:
-enable/disable the video rendering (by default performed by a second laptop)
-select different means of sound analysis (the self-observing state sets the “no-file” message);
otherwise you will define one or more different internal sound analysis sources
-set the duration time

Default: self-observing, 8 speakers, video off, 10’ demo length.
It is not recommended to perform audio and video on the same laptop.

Laptop_1 loads the application Suite for the audio rendering.
Laptop_2 loads the application Image for the video rendering.

HARDWARE EQUIPMENT
2 laptops in network (minimum OS X 10.8, 2 GHz)
1 Ethernet cable (1000 Mbit/s)
1 projector
1 sound interface (possibly RME or MOTU), minimum 8 outputs
8 speakers
Fig.S_3 Speaker arrangement

SOFTWARE

LAPTOP_1 LIST OF EXTERNALS AND ABSTRACTIONS


ambiencode~, ambidecode~, ambimonitor (Jan Schacher)
http://trondlossius.no/articles/743-ambisonics-externals-for-maxmsp-and-pd

analyzer~ (Tristan Jehan)
http://web.media.mit.edu/~tristan/maxmsp.html

dot.smooth, dot.std (Joseph Malloch et al.)
http://idmil.org/software/digital_orchestra_toolbox

ej.line (Emmanuel Jourdan)
http://www.e--j.com

f0.distance, f0.fold, f0.round (Fredrik Olofsson)
http://www.fredrikolofsson.com/pages/code-max.html

fiddle~ (Miller Puckette et al.)
http://vud.org/max/

ftm, ftm.list, ftm.mess, ftm.object,
mnm.list2row, mnm.minmax, mnm.onepole, mnm.sum, mnm.winfilter
FTM library (Frederic Bevilacqua et al.)
http://ftm.ircam.fr/index.php/Download

jg.granulate~, jg.spectdelay~ (John Gibson)
http://pages.iu.edu/~johgibso/software.htm

multiconvolve~ (Alex Harker and Pierre Alexandre Tremblay)
http://www.thehiss.org/

roughness (John MacCallum), o.route (Adrian Freed)
http://cnmat.berkeley.edu/downloads

LAPTOP_2 LIST OF EXTERNALS AND ABSTRACTIONS

dot.smooth, dot.std (Joseph Malloch et al.)
http://idmil.org/software/digital_orchestra_toolbox

f0.distance, f0.fold (Fredrik Olofsson)
http://www.fredrikolofsson.com/pages/code-max.html

ftm, ftm.list, ftm.mess, ftm.object,
mnm.list2row, mnm.minmax, mnm.onepole, mnm.sum, mnm.winfilter
FTM library (Frederic Bevilacqua et al.)
http://ftm.ircam.fr/index.php/Download

Jitter
https://cycling74.com/

o.route (Adrian Freed)
http://www.cnmat.berkeley.edu/MAX

Table 1. The 8 original external self-observing systems, from which the stereo files were created.

1-Ingresso (entrance/ouverture)

2-Campane (bells)

3-Folla (crowd)

4-Wanda (recalling Wanda Osiris)

5-Esorcista (exorcist)

6-Tamburi (drums)

7-Maggiordomo (butler)

8-Uccello di Fuoco (Firebird)

INDEX

Suite
History of the composition
Analysis
Methods
layer A
layer B
Motivation
PERFORMANCE
HARDWARE EQUIPMENT
SOFTWARE
Laptop_1 list of externals
Laptop_2 list of externals
INDEX

LIST OF FIGURES
Fig.S_1 The pre-sampled sound materials
Fig.S_2 Screenshots from the video
Fig.S_3 Speaker arrangement
Table 1. The 8 original external self-observing systems, from which the stereo files were created.
