Friday, 28 May 2010
oddmusic.com
Silent Construction 2 Composed and Performed by Jaime Oliver
Tuesday, 25 May 2010
Meeting with Hugues Genevois
- It may be interesting to think about using the controller to trigger and manipulate samples. Samples of real-world sounds allow naturally rich sounds and timbres to be used and manipulated in a relatively simple way (compared with attempting to synthesize such richness). Similar results could be obtained using physical modelling (PM) algorithms from open-source projects. This could be a nice way for the project to progress in terms of the synthesis algorithm: i.e. start by testing with fairly simple sine-tone generators, progress to more complex results using samples, and then move on to naturalistic control over the internals of the sound, such as timbre, via a PM.
- Hugues suggested gesture acceleration as an important and useful variable. It allows for more degrees of freedom without necessarily having to include extra transducers: for example, position on a linear pot can be complemented by the rate of change of position on the same pot (see the velocity-estimation sketch after this list). It also requires more physical energy, which improves control and feedback for performer and audience. This aspect is engaging for the audience because they can clearly see the performer changing aspects of the sound, whereas the difference in sound between one finger position and the next is perhaps not as obvious.
- The above point also indicates that the dimensions of the object are important. Given a slide pot, for example, it is more difficult to produce accurate gestures on a smaller slider than on a larger one. Emphasis should also be placed on having a substantial object to feel, touch and manipulate - the whole essence of my project! This is much more engaging for performer and audience, and a substantially sized instrument would allow more control using touch rather than visual feedback. That could let the user engage more with other musicians, both technically and in terms of showing enjoyment, and also with the audience. It also implies a greater possible degree of mastery, since experienced players tend to use feedback channels other than vision for control; a small instrument would perhaps require a lot of visual concentration.
- One could incorporate sensors of different sizes for various musical tasks: for example, a very long position sensor for long and accurate gestures (continuous bow-like control) and smaller FSRs for gestures requiring less long-term accuracy.
- The instrument should make use of the user's energy input in a big way in order to really connect the user. The timbre of the resulting sounds should be directly affected by the user's energy, i.e. changes in attack strength. This can be done fairly easily in many systems: a different sample (recorded at a different attack strength) can be played back depending on the measured attack strength; attack strength can be linked to the plucking, blowing or bowing strength variable of a PM; or attack strength can change the filter or the number of partials in a simple additive synthesis system - for example, more force introduces more high-pitched partials. Changing timbre with energy is perhaps more important than changing loudness.
- It is important to experiment with various synthesis algorithm set-ups in order to arrive at the best sound and the best control styles. Linking a controller with software is very much like a luthier fine-tuning an instrument's design: make small and careful changes, listen to the resulting changes in sound, and continue the process until the desired sound is reached.
- The way in which the signals resulting from the synthesis algorithm are transformed into sound is very important. The sound should be a living thing which is a direct result of the instrument. Using built-in speakers is a very good way to provide sound localization and vibrotactile feedback. It also increases engagement for the performer and the audience, and is a very important step in providing a link to the sounds. Multiple loudspeakers would be better than one, since loudspeakers are directional, which is very unnatural for music and sound in general. One could also consider a speaker pointing towards the musician for direct audio feedback.
- Non-linear mapping strategies will often be required for the kind of control desired.
- For tapping gestures like the ones I was thinking of, a piezo contact mic may be suitable. Using this method, control data would be derived from the audio signal of the mic rather than from pressure or resistance. The mic could be placed directly onto the top plate of the instrument, so that tapping and banging the instrument would produce a signal. This could be used, for example, to control the onset of a note, which could then be made to resonate (using Karplus-Strong, for example) with a decay time based on the intensity of the attack - see the sketch after this list.
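To make the gesture-velocity point above concrete, here is a minimal sketch (my own, not from the meeting notes) that estimates velocity and acceleration from successive position readings of a linear pot. The 100 Hz polling rate and the normalized position range are assumptions.

```python
# Minimal sketch: derive gesture velocity and acceleration from successive
# position readings of a linear pot (polling rate and scaling are assumptions).

DT = 0.01  # assumed 100 Hz sensor polling interval, in seconds

class GestureTracker:
    def __init__(self):
        self.prev_pos = None
        self.prev_vel = 0.0

    def update(self, pos):
        """pos: normalized position 0.0-1.0. Returns (velocity, acceleration)."""
        if self.prev_pos is None:
            self.prev_pos = pos
            return 0.0, 0.0
        vel = (pos - self.prev_pos) / DT   # first difference -> velocity
        acc = (vel - self.prev_vel) / DT   # second difference -> acceleration
        self.prev_pos, self.prev_vel = pos, vel
        return vel, acc

tracker = GestureTracker()
for p in [0.10, 0.12, 0.18, 0.30]:         # a quickening gesture
    print(tracker.update(p))
```

Both derived values could then be mapped to synthesis parameters alongside the raw position, giving extra degrees of freedom from a single transducer.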
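And as a sketch of the piezo-onset idea combined with the non-linear mapping point, the toy Karplus-Strong resonator below ties decay time to attack intensity through a square-root curve. The damping constants and the curve itself are assumptions for illustration, not a worked-out design.

```python
# Toy Karplus-Strong pluck whose decay time follows attack intensity
# (e.g. the amplitude of a piezo onset). Constants are assumptions.
import numpy as np

SR = 44100  # sample rate in Hz

def karplus_strong(freq, attack_intensity, duration=2.0):
    # Non-linear (square-root) mapping: soft hits still ring audibly,
    # while harder hits push the loop gain towards (but below) 1.0.
    damping = 0.990 + 0.0099 * attack_intensity ** 0.5
    n = int(SR / freq)                                   # delay-line length sets pitch
    buf = np.random.uniform(-1, 1, n) * attack_intensity # noise-burst excitation
    out = np.empty(int(SR * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # averaging filter with loss: each pass smooths and decays the loop
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

quiet = karplus_strong(220.0, attack_intensity=0.2)
loud = karplus_strong(220.0, attack_intensity=1.0)  # rings noticeably longer
```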
Great Resource
BoSSA: The Deconstructed Violin Reconstructed
Friday, 21 May 2010
Towards a Model for Instrumental Mapping in Expert Musical Interaction - A. Hunt
Project Path

An attempt to identify some key steps to take when designing the instrument. The image also shows the links between the steps. The diagram does not currently include a vibrotactile feedback channel, but it may have to be updated to do so in the future depending on the steps involved in that task. Updates may also be needed to reflect the possible implementation of a double mapping layer. The diagram is in fact a revision of the original steps I followed in order to shape my ideas. The original diagram is shown below:

Clearer Project Definition based on Research
- Create a DMI which attempts to allow its user to control the vast sound spaces afforded by digital synthesis algorithms, while providing a sense of connection with those sounds and a sense of engagement for performer and audience. The instrument should also present a learning curve and be reasonably challenging, so that practice is rewarded with increased control and mastery, yet novice players still find it approachable.
- Controller should provide transducers based on research into which gestures are best suited to performing certain musical functions. Ideally the transducers will be able to provide feedback, such as haptic feedback, in order to contribute to the connection mentioned above.
- Controller could also aim to give vibrotactile feedback and sound localization by using speaker(s) mounted in its body.
- Controller will not be any kind of sequencer.
- Controller likely to be an "instrument-inspired controller."
- Sensors connected to the computer via an Arduino, with the synthesis algorithm running in Max/MSP (see the serial-reading sketch after this list)
- Emphasis on controller design, so an open-source algorithm could potentially be used for the final model
- Mapping strategies to be properly considered, based on the suggestions of the research
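As a minimal sketch of the Arduino-to-computer link: the real patch would live in Max/MSP, but one could equally read the sensor stream in Python before forwarding it on. The port name, baud rate, and message format here are all assumptions.

```python
# Minimal sketch: read sensor values an Arduino prints over serial.
# Assumes the Arduino sketch does Serial.println() of comma-separated
# integers, e.g. "512,87,3" (port name and baud rate are assumptions).
import serial  # pyserial: pip install pyserial

PORT = "/dev/ttyUSB0"  # hypothetical; use whatever port your Arduino enumerates as
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        raw = ser.readline().decode("ascii", errors="ignore").strip()
        if not raw:
            continue  # timeout with no data
        try:
            slider, fsr, piezo = (int(v) for v in raw.split(","))
        except ValueError:
            continue  # skip malformed lines
        # Forward the values to the synthesis patch here, e.g. as OSC messages.
        print(slider, fsr, piezo)
```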
Thursday, 20 May 2010
Zendrum
SynthAxe
Wednesday, 19 May 2010
Monome
Digital Lutherie - Crafting Musical Computers for New Musics' Performance and Improvisation
The Studio for Electro-instrumental Music
Novel DMI From NIME
Tuesday, 18 May 2010
A few controllers taken from Sergi Jordà Puig's Thesis (above)
New Interfaces for Musical Expression (NIME)
Monday, 17 May 2010
Towards a New Conceptual Framework for Digital Musical Instruments
Gyrotyre: A dynamic handheld computer music controller based on a spinning wheel
PCMAG.COM's top 10 digital musical instruments
A Study of Gesture-based Electronic Musical Instruments
Wednesday, 12 May 2010
Physical Interface Design for Digital Musical Instruments - Mark Marshall
Monday, 10 May 2010
Misa Digital Guitar
Eigenharp
Expressiveness and Digital Musical Instrument Design - Daniel Arfib et al
Friday, 7 May 2010
Mapping Performer Parameters to Synthesis Engines - A Hunt, M Wanderley
Musical Taste
Gesture Analysis of Bow Strokes Using an Augmented Violin - Nicolas Hainiandry Rasamimanana
The research this article presents is aimed at finding new ways of interacting with a computer in a musical context. The authors use a traditional violin and bow which have been augmented with various sensors to provide different kinds of data readings that can be used for synthesis.
In this respect the article presents a good example of an augmented instrument and goes into detail about the sensor systems it employs (particularly in chapter 2, "Ircam's Augmented Violin").
Chapter 1, "State of the Art", begins by presenting definitions for "new music interfaces" (corresponding to the instrument-like, instrument-inspired and alternative controllers from previous definitions) and "augmented instruments". It also briefly presents some pros and cons of each approach to DMI creation.
Importantly, Chapter 1 also presents examples of existing systems, some of which I have been able to find resources for:
BoSSA - the Bowed-Sensor-Speaker-Array - an instrument which looks as weird as it does cool. The link provides a description and a lot of examples and resources, including a publication which I will review. For convenience, a demo video can be found here. Rasamimanana's article would clearly consider it a new music interface. One could possibly argue that it is an instrument-inspired controller, based on the violin; indeed the instrument directly attempts to mimic some of the violin's physical performance interface, but there is a question of how much a new controller must resemble an existing one in order to be considered "instrument-inspired". Personally, I feel comfortable enough giving it this label. Every aspect of this instrument is interesting. In terms of interaction it has many more transducers than it immediately appears to have, including pressure sensors and several accelerometers on both the bow and the fingerboard (which is also equipped with a linear position sensor for selecting notes). The most interesting aspect of this controller is that it attempts to reproduce the resonator of a violin using a spherical array of speakers which (I assume through some kind of DSP) can "reconstruct the radiative timbral qualities of violins in a traditional acoustic space". According to Rasamimanana, in the video in the above link the bow data was used to control a comb filter vibrato.
Note: This section refers to pressure sensors as FSRs (force sensing resistors)
Biomuse - The report cites the BioMuse created by Atau Tanaka. "Bioelectrical signals, and particularly electromyograms of his arm muscles, are digitized and mapped to sounds and images. Therefore, the movements of his body are directly interpreted to create music". The sound quality in the above video isn't great, but it does show very clearly the BioMuse being used. Some brief research also turned up the Biomuse Trio, where a similar instrument, again called the BioMuse, can be seen. This one, however, is being played by Ben Knapp, who is also cited as its creator. I'm not sure what connection, if any, there is between the two instruments.
Hypercello - See the video. An instrument created for Yo-Yo Ma by Tod Machover as part of his HyperInstruments research group. Again the cello has been augmented with various sensors, the signals from which are used as parameters for synthesis.
The rest of the report goes into detail about the analysis carried out on the signals received from this augmented instrument's sensors.
My aim from the beginning of this project was to look at ways in which some form of connection can be re-established with digitally synthesized music in order to allow it to be used with a greater degree of expressivity. This paper specifically states that, in the case of new musical interfaces, although a connection can be established using the right kinds of transducers, the simplicity of the interface (relative to acoustic instruments) "often means a poorer expressive interface". The report also highlights again the importance of haptic (touch) feedback when playing, stating that this is something still under research. Perhaps the best way to overcome these issues is to consider an augmented instrument approach?
Thursday, 6 May 2010
Gesture - Music by Claude Cadoz
The article highlights the importance of understanding the basic physiological behaviour of the human body when modelling the interaction between a human and a machine. In particular, it presents definitions for the terms isometric force and isotonic force used in Wanderley's "On the Choice of Transducer Technologies...", as follows:
- Isometric - muscular effort exerted in order to produce a force (without displacement)
- Isotonic - muscular effort exerted in order to produce a displacement
Cadoz also defines three functions of the gestural channel:
- The ergotic function - material action, modification and transformation of the environment;
- The epistemic function - perception of the environment;
- The semiotic function - communication of information towards the environment.
and three types of instrumental gesture:
- Excitation gesture;
- Modification gesture;
- Selection gesture.
Some Answers to Michaela's Questions
1)
It is possible to group digital musical instruments into four broad categories:
a - instrument-like - A control interface that tends to reproduce each feature of an existing instrument (e.g. an electric guitar, keyboard, sax, etc.)
b - instrument-inspired - A sub-group of instrument-like controllers. An interface that is largely inspired by an existing instrument but is intended for a different and often more general purpose (e.g. the Digital Trumpet)
c - augmented instrument - An existing instrument which has been fitted with additional sensors in order to provide extra elements of control/synthesis
d - alternative controller - A controller which does not follow the design of an established instrument
Originally in this project I was leaning towards an "instrument-inspired" controller. There were two main reasons for this. The first is conceptual ease of design: I felt that, because I don't have any background in product design and manufacture, it would help me when it came to actually designing the control interface if I already had some pre-existing notion of the general shape, size and materials my controller could consist of. The second reason is to address questions of ease of use and playability. Reports suggest that when one first comes across a new musical control interface, the key aspect in terms of playing is ease of use. Once the basics have been learned, the emphasis shifts to learnability - the ability to spend time with an instrument in order to learn more subtle and masterful control. I felt that using an instrument-inspired controller should help in the first instance with ease of use, since many players would already have, to a greater or lesser extent, a notion of the kinds of gestures suitable to such an instrument. At the same time, the choice of transducers could produce interesting challenges even for established players of the original instrument, since it could change, for example, the established attack gesture (say, a pressure pad used to trigger sounds on a guitar-like interface, changing the typical attack gesture from a plectrum attack to applying varying pressure or short percussive whacks).
Recently, David Creasey highlighted the importance of physical feedback when it comes to musical control, so this is something I'd really like to think about. My above idea involving a pressure pad may be a good one, since applying pressure to the pad would naturally produce some kinesthetic feedback. David suggested considering an augmented instrument controller in order to have physical feedback "built in". This is an interesting approach which I will look into, and I'll post some articles and examples.
I also remember an earlier post regarding a slightly ambitious idea of equipping an instrument-inspired controller with a resonator (somehow) and passing the synthesis signals through it via some form of transducer. Having this resonator could add an extra element of physical feedback useful for control, especially if combined with other techniques.
2)
one could say: "yes, we can build a lot of different kinds of new electronic musical instruments, but in the end it all comes down to doing the synthesis (in programmes like max/msp), so why bother, they can all sound the same..." what would your response be?
I think my response would be to talk about the expressivity possible with a given synthesis algorithm and controller combination. That is to say, not particularly the core of the sound which is the algorithm's responsibility, but rather how that sound is used to make music. Not the way it sounds but the way it is played. At the moment I could define two elements of expressivity. The first is the kind of physical feedback mentioned in (1).
The second element is related to the controller itself and its transducers. The choice of transducers for a controller can have a large impact on the possible expressivity of the resultant sound. To take an extreme example, one could compare a controller like the one from a past post to the Digital Trumpet. The first controller triggers sounds using a small pressure button. The second has a pressure sensor mounted behind a trumpet mouthpiece which detects even the smallest changes in the blowing style of the player. Hypothetically, if they both trigger the same synthesis algorithm, the Digital Trumpet would clearly allow sounds to be played with much greater control. Another question here, however, is ease of use and learnability. One could argue that the trumpet controller would be harder to use and almost impossible for most players to truly master (to a possibly greater level than the player in the video exhibits). In this respect, for a lot of users the simpler button controller would perhaps be more appropriate. Linked to ease of use, of course, is the style of the controller itself. Guitar players may find it much easier to use the first controller, whilst trumpet players would be more comfortable with the second. If an alternative controller could be used, all players would likely be of approximately equal physical skill to begin with.
I think it is possible to see that the choice and design of a controller is important, since it affects the way in which a player can create the sounds produced by the synthesis. Another important point, however, is that the synthesis algorithm must be able to respond to the parameters made available by the controller. For example, if a very simple algorithm which triggers sounds via a plain on/off state were controlled by the trumpet controller, much of its expressivity would be lost: it would no longer be possible, for example, to produce crescendos using increasing breath pressure.
It seems that neither the controller nor the algorithm should be underestimated in its contribution to the expressivity and playability of the complete system. I think this question is definitely something to be looked into further. It would also be worthwhile looking further into mapping techniques and the possibility of introducing mapping layers to allow controllers to be portable from one algorithm to the next (a small sketch of such a layer follows below)...
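Here is a minimal sketch of such a double mapping layer, with hypothetical parameter names: stage 1 turns raw sensor values into abstract musical parameters, and stage 2 maps those onto whichever algorithm is attached, so the same controller can drive a crude on/off gate or a breath-responsive synth.

```python
# Minimal sketch of a two-stage mapping layer (all names are hypothetical).

def controller_to_abstract(raw):
    """Stage 1: raw sensor values (0-1023, Arduino-style) -> abstract params."""
    return {
        "pitch": raw["slider"] / 1023.0,      # position along a linear sensor
        "intensity": raw["breath"] / 1023.0,  # e.g. breath or attack pressure
    }

def abstract_to_simple_gate(p):
    """Stage 2a: a crude on/off synth discards the intensity detail."""
    return {"gate": 1 if p["intensity"] > 0.1 else 0}

def abstract_to_breath_synth(p):
    """Stage 2b: a richer synth uses intensity continuously for crescendos."""
    return {"freq": 220 + p["pitch"] * 660, "amp": p["intensity"] ** 2}

raw = {"slider": 512, "breath": 700}
p = controller_to_abstract(raw)
print(abstract_to_simple_gate(p))   # {'gate': 1} - expressivity lost
print(abstract_to_breath_synth(p))  # continuous amplitude control preserved
```

Swapping the stage 2 function is all it takes to move the controller to a new algorithm, which is exactly the portability the double layer is meant to provide.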
3)
I think for this project I'd like to focus on ways of playing, since we've done quite a lot of synthesis work over the past two years. The issue, however, is that, as I mentioned above, however complex I make the controller, I will have to design an algorithm sufficiently complex to respond to that controller's parameters. Maybe a better answer, then, would be that I'll have to focus on both. Actually, David Creasey advised me that the best way to approach this kind of project is to try to keep the advancement of each section at roughly the same level, so that the complete system has a chance of coming together as a whole - or at least, if the end goal cannot be reached, the system can still function well together for a presentation (instead of, for example, having a very simple breadboard circuit of a controller and a complex synthesis program).
The fun thing for me would be actually playing the instrument and feeling at least somewhat involved in the sound-making process.
Monday, 3 May 2010
Yamaha Tenori-On
Digital Trumpet
Initial Project Spider Diagram

