Purpose

The purpose of this blog is to enable my university supervisors and me to easily share multimedia content regarding ideas for my Final Year Project, and to allow those ideas and opinions to be discussed.

Friday, 30 April 2010

Digital Musical Instrument Composition: Limits and Constraints - D. Andrew Stewart

The report can be found here.

Interestingly, in section 3.2 of the report the author details different requirements of a DMI from the point of view of the performer. The report also presents an interesting DMI - The T-Stick. More details of the T-Stick can be found here.

The T-Stick project has a focus on "expert use", in that the T-Stick is designed in such a way that it requires a reasonably large degree of training before it can be played expressively.

"The T-Stick can sense where and how much of it is touched, tapping, twisting, tilting, squeezing, and shaking. The output of the sensors is sent over USB to Max/MSP software, which processes the data and maps it to sound synthesis parameters."

Similar Projects

A brief search on YouTube for "digital musical instruments" returned two results that seem (at first glance anyway) remarkably similar to the picture I had in my head of how my DMI might turn out.

This is a fairly good example of the kind of "guitar-inspired" controller I had in mind. It presents an interesting idea for an exciter gesture transducer in the form of the pressure pad. This is a similar controller style. Of course, the synthesis behind both of these controllers is analogue and, in my opinion, sounds horrible, but the overall controller design is interesting. Both projects also seem to use this kind of sensor for the pitch-selecting controller, which I think could be a nice way of doing things. That video is for a Trossen Robotics product, and looking at their website reveals lots of interesting products, such as this, which could be used as an attack sensor as in the first video. It would (I think) allow discrete or continuous attacks as well as a kind of "after-touch" effect (see the sketch below).
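
To make the attack-sensor idea concrete, here is a minimal sketch of how a single continuous pressure reading might be turned into a discrete attack, a continuous after-touch stream, and a release. The thresholds and event names are entirely my own, not taken from either video:

```python
class PressurePadAttack:
    """State machine turning a pressure reading (0.0-1.0) into
    attack / after-touch / release events."""

    def __init__(self, on_threshold=0.15, off_threshold=0.05):
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold  # hysteresis avoids chatter
        self.active = False

    def update(self, pressure):
        """Call once per sensor reading; returns an event or None."""
        if not self.active and pressure >= self.on_threshold:
            self.active = True
            return ("attack", pressure)       # initial hit acts as velocity
        if self.active and pressure < self.off_threshold:
            self.active = False
            return ("release", None)
        if self.active:
            return ("aftertouch", pressure)   # sustained pressure
        return None
```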

One thing both of the instruments have in common is that they are monophonic. This raises the question: will my DMI be monophonic or polyphonic? I suppose polyphonic would allow the greatest level of expressivity, but it would necessarily lead to a more complex project.
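
For reference, here is roughly what the extra complexity of polyphony amounts to on the control side: a voice pool with a stealing policy. This is a deliberately simple sketch to scope the problem, not a design decision:

```python
class VoiceAllocator:
    """Minimal polyphony: a fixed pool of voices, stealing the
    oldest note when the pool is full. Purely illustrative."""

    def __init__(self, n_voices=4):
        self.n_voices = n_voices
        self.active = []              # sounding pitches, oldest first

    def note_on(self, pitch):
        if len(self.active) >= self.n_voices:
            self.active.pop(0)        # steal the oldest voice
        self.active.append(pitch)

    def note_off(self, pitch):
        if pitch in self.active:
            self.active.remove(pitch)

    def sounding(self):
        return list(self.active)

# With n_voices=1 the same structure degenerates to a monophonic,
# last-note-priority instrument.
```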

On the Choice of Transducer Technologies for Specific Musical Functions - Marcelo M. Wanderley et al

This article can be found by going here and selecting the first result.

This article was another important read for my project since it outlines the kinds of transducer technologies that are best suited to perform different musical tasks.

Section 2.1, Performer Actions, is an interesting one since it outlines the various functions of hand gestures with regard to musical instruments.

The most important sections are 3 and 4, which present a simple classification of musical functions with regard to instrument control, their associated gesture types, and the types of transducer best suited to each kind of control. I will use these definitions to select the best transducer type for the kinds of control my DMI will offer. This will be a very important decision since it will directly affect the ease of use and learnability of the DMI.

The basis for the definitions described above is largely taken from Towards a Musician's Cockpit: Transducers, Feedback and Musical Function - Roel Vertegaal. The paper can be found here (second entry). As well as these definitions the paper also makes some interesting comments about the relationship between the physical effort required to play an instrument and the musical tension of the piece being played.

Gestural Control of Music - Marcelo M. Wanderley (at IRCAM)


Here is the first of several interesting papers I have found regarding digital musical instrument control interfaces.

The paper begins by presenting the problem of how to design new computer-based musical instruments and goes on to outline some styles of Human-Computer Interaction. Importantly, the paper outlines four key parts of the subject of control devices for sound synthesis software:
  • Definition and typologies of gesture
  • Gesture acquisition and input device design
  • Mapping of gestural variables to synthesis variables
  • Synthesis algorithms
The paper focuses predominantly on the second and third aspects, but I think this is a nice way to break my suggested project up into four broad areas.

Possibly the most important section of this paper with regard to my project is section 3.3, where the paper outlines several established types of gestural controllers for sound synthesis. Reading this section, I came to the decision that it would perhaps be best for my controller to be an instrument-inspired controller. Designing the controller to be largely inspired by an existing instrument would present several benefits (familiarity of use; easier to design...), but the controller would at the same time be aimed at being a general device for controlling additive synthesis and not, for example, a physical model of the instrument it resembles.
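
For context, the additive synthesis the controller would target is conceptually simple: a sum of sinusoidal partials whose amplitudes act as the control variables. A minimal numpy sketch (static amplitudes only; a real Max/MSP patch would shape them over time):

```python
import numpy as np

def additive_tone(f0, partial_amps, duration=1.0, sr=44100):
    """Sum sinusoidal partials of a fundamental f0 (Hz).
    partial_amps[k] is the amplitude of harmonic k+1."""
    t = np.arange(int(duration * sr)) / sr
    tone = np.zeros_like(t)
    for k, amp in enumerate(partial_amps, start=1):
        tone += amp * np.sin(2 * np.pi * k * f0 * t)
    peak = np.max(np.abs(tone))
    return tone / peak if peak > 0 else tone

# e.g. a 220 Hz tone with gently decaying harmonics
samples = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25])
```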

Another key section of the paper is section 5, regarding mapping strategies. Interestingly, the paper outlines a study which found, to summarize, that a one-to-many mapping strategy seems to be preferred for complex modifications. That is, the output of one transducer would be used to control several aspects of the synthesis algorithm: one controller output to many synthesis inputs.
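
A tiny sketch of what such a divergent mapping might look like, with one tilt value fanned out to several synthesis parameters. The parameter names and scaling curves are my own illustration, not taken from the study:

```python
def one_to_many(tilt):
    """Divergent (one-to-many) mapping: one transducer output
    (tilt, 0.0-1.0) drives several synthesis parameters at once."""
    return {
        "brightness": tilt ** 2,         # more upper partials as tilt rises
        "vibrato_depth": 0.1 * tilt,     # coupled, subtle vibrato
        "amplitude": 0.5 + 0.5 * tilt,   # louder at greater tilt
    }
```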

Section 5.4 presents a model of mapping as two independent layers. It suggests that the outputs of the controller device could be mapped to an intermediate layer of controller values, which would then form the inputs to the synthesis algorithm. This may be a good approach to take with my controller: if I designed a controller suited to the control variables presented by general additive synthesis, the mapping layers would enable the controller to be used with almost any additive synthesis algorithm, not just the specific Max/MSP patch that I will design.
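
Here is a sketch of how those two layers might look in code, assuming intermediate parameters I invented for illustration (pitch, energy, timbre) and the additive synth sketched above. Only the second layer would need rewriting to retarget the controller at a different algorithm:

```python
def layer_one(sensors):
    """Layer 1: raw controller outputs -> abstract, synth-independent
    parameters. The abstract names are my own illustration."""
    return {
        "pitch": sensors["position"],
        "energy": sensors["pressure"],
        "timbre": sensors["tilt"],
    }

def layer_two_additive(abstract, n_partials=8):
    """Layer 2: abstract parameters -> the inputs of one particular
    additive synthesis algorithm."""
    f0 = 110.0 * 2 ** (abstract["pitch"] * 3)   # 3-octave pitch range
    amps = [abstract["energy"] / k ** (2.0 - abstract["timbre"])
            for k in range(1, n_partials + 1)]
    return {"f0": f0, "partial_amps": amps}

params = layer_two_additive(layer_one(
    {"position": 0.5, "pressure": 0.8, "tilt": 0.3}))
```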

Section 3, Digital Musical Instruments, talks about the separation in DMIs of the control interface and the sound producing unit, which can remove the sense of force/tactile feedback present in acoustic instruments. This sparked an idea for me. If the DMI's control interface could somehow be part of a resonator, it might be possible to output the results of the synthesis algorithm and drive the physical resonator of the instrument at a key point (the area of the bridge in a violin, for example). In this way the feedback loop would be restored, and the sound would be made all the more interesting by having a natural reverberance. I know electronic driving technologies exist in the form of plate resonators, but I'm not sure how complicated or expensive they would be. This idea is probably best treated as an optional extension of the main project.

This YouTube video is a simplistic example of the kind of driver I had in mind: a (basic) plate reverb effect is created using a piezo buzzer. These buzzers could potentially be placed on the resonator to produce the feedback effect mentioned above. To produce a more sophisticated result, it may be worth passing different frequency ranges to different buzzers mounted on the resonator. The frequency response of the buzzers would also have to be taken into consideration.
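
A rough sketch of that band-splitting idea, using standard Butterworth crossovers from scipy. The crossover frequencies are placeholders; in practice they would have to come from measuring each buzzer's actual response:

```python
from scipy.signal import butter, lfilter

def split_bands(signal, sr=44100, crossovers=(400.0, 2000.0)):
    """Split the synthesis output into low/mid/high bands, one per
    buzzer mounted on the resonator."""
    lo_c, hi_c = crossovers
    nyq = sr / 2.0
    b, a = butter(4, lo_c / nyq, btype="low")
    low = lfilter(b, a, signal)
    b, a = butter(4, [lo_c / nyq, hi_c / nyq], btype="band")
    mid = lfilter(b, a, signal)
    b, a = butter(4, hi_c / nyq, btype="high")
    high = lfilter(b, a, signal)
    return low, mid, high
```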

Thursday, 29 April 2010

Owl Project

A set of interesting digital music instrument interface designs can be found at Owl Project.

The first thing that struck me looking at the main page was the interesting choice of materials for these instruments, which often seems to be simply a log of wood. I found this interesting because it goes nicely with one of the inspirations behind my current project idea - my increasing fondness for "natural" (i.e. not mainly electronic/digital) music. A log of wood could perhaps be an interesting way to approach the choice of materials for my own digital music interface...

In terms of controllers, I think I would prefer my instrument to rely less on knobs and switches and more on things like distance and pressure sensors, in an attempt to allow more expressivity on the part of the player. Also, rather than creating a sequencer or generative music maker, I would rather focus on creating an "instrument" in the more traditional sense: the instrument should involve pitch selection and an attack to produce sound. In this sense the controller may be more "traditional" (ignoring possibilities like using ultrasonic sensors to determine pitch selection), but the sound space it gives access to is more contemporary, being based on digital synthesis techniques.
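
As a small illustration of traditional pitch selection from a non-traditional sensor, here is a sketch that quantizes a continuous reading (e.g. from an ultrasonic distance sensor) to the nearest semitone. The note range and the assumption of a 0.0-1.0 reading are arbitrary choices of mine:

```python
def sensor_to_pitch(position, low_note=48, n_semitones=24):
    """Quantize a 0.0-1.0 position reading to the nearest semitone
    and return its frequency in Hz (two octaves up from MIDI C3)."""
    midi_note = low_note + round(position * n_semitones)
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(sensor_to_pitch(0.5))   # mid-range -> C4, ~261.6 Hz
```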

In any case the Owl Project is certainly an interesting, if slightly weird, one! I especially liked the results of the Sound Lathe, which can be found in the audio section by playing "sound lathe at les urbaines".