Here is the first of several interesting papers I have found regarding digital musical instrument control interfaces.
The paper begins by presenting the problem of how to design new computer-based musical instruments and goes on to outline some styles of Human-Computer Interaction. Importantly, the paper outlines four key parts of the subject of control devices for sound synthesis software:
- Definition and typologies of gesture
- Gesture acquisition and input device design
- Mapping of gestural variables to synthesis variables
- Synthesis algorithms
The paper focuses predominantly on the second and third aspects, but I think this is a nice way to break my suggested project up into four broad areas.
Possibly the most important section of this paper with regard to my project is Section 3.3, which outlines several established types of gestural controllers for sound synthesis. Reading this section, I decided that it would perhaps be best for my controller to be an instrument-inspired controller. Designing the controller to be largely inspired by an existing instrument would bring several benefits (familiarity of use, easier design...), but the controller would at the same time be aimed at being a general device for controlling additive synthesis and not, for example, a physical model of the instrument it resembles.
Another key section of the paper is Section 5, on mapping strategies. Interestingly, the paper outlines a study which found, in summary, that a one-to-many mapping strategy seems to be preferred for complex modifications. That is, the output of one transducer is used to control several aspects of the synthesis algorithm: one controller output to many synthesis inputs.
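To make the idea concrete, here is a minimal sketch of a one-to-many mapping in Python (the real mappings for my project would live in Max/MSP; the sensor and parameter names here are my own invented examples, not taken from the paper):

```python
# Hypothetical one-to-many mapping: a single controller output
# (say, a pressure sensor normalised to 0..1) drives several
# additive-synthesis parameters at once.

def one_to_many(pressure: float) -> dict:
    """Map one transducer value to several synthesis inputs."""
    return {
        "amplitude": pressure,               # louder with more pressure
        "brightness": 0.2 + 0.8 * pressure,  # tilt the spectrum towards higher partials
        "vibrato_depth": 0.1 * pressure,     # a little more vibrato as pressure rises
    }

if __name__ == "__main__":
    for p in (0.0, 0.5, 1.0):
        print(p, one_to_many(p))
```

The appeal is that a single physical effort changes several aspects of the sound at once, rather than one knob per parameter.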
Section 5.4 presents a model of mapping as two independent layers. It suggests that the outputs of the controller device could be mapped to an intermediate layer of abstract control values, which would then form the inputs to the synthesis algorithm. This may be a good approach to take with my controller: if I design a controller suited to the control variables presented by general additive synthesis, the mapping layers would allow the controller to be used with almost any additive synthesis algorithm, not just the specific Max/MSP patch that I will design.
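A rough sketch of how those two layers might look in code, again with hypothetical names (the abstract parameters and the spectral roll-off formula are my own assumptions for illustration):

```python
# Two-layer mapping sketch: raw controller outputs are first mapped to a
# small set of abstract parameters, and a second, independent mapping
# turns those into per-partial amplitudes for an additive synth.

NUM_PARTIALS = 16

def controller_to_abstract(sensors: dict) -> dict:
    """First layer: raw sensor readings -> abstract control values."""
    return {
        "energy": sensors["pressure"],      # how hard the player is driving the note
        "brightness": sensors["position"],  # e.g. position along a ribbon sensor
    }

def abstract_to_synthesis(abstract: dict) -> list:
    """Second layer: abstract control values -> amplitude of each partial."""
    energy, brightness = abstract["energy"], abstract["brightness"]
    amplitudes = []
    for k in range(1, NUM_PARTIALS + 1):
        # Higher brightness flattens the roll-off, boosting upper partials.
        rolloff = 1.0 / (k ** (2.0 - 1.5 * brightness))
        amplitudes.append(energy * rolloff)
    return amplitudes

if __name__ == "__main__":
    sensors = {"pressure": 0.8, "position": 0.5}
    print(abstract_to_synthesis(controller_to_abstract(sensors)))
```

Swapping in a different additive synthesis engine would then only mean rewriting the second layer; the controller-to-abstract layer stays the same.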
Section 3, Digital Musical Instruments, talks about the separation in DMIs between the control interface and the sound-producing unit, which can remove the sense of force/tactile feedback present in acoustic instruments. This sparked an idea for me. If the DMI's control interface could somehow be part of a resonator, it might be possible to take the output of the synthesis algorithm and somehow drive the physical resonator of the instrument at a key point (the area of the bridge on a violin, for example). In this way the feedback loop would be restored and the sound would be all the more interesting for having a natural reverberance. I know electronic driving technologies exist in the form of plate resonators, but I'm not sure how complicated or expensive they would be. This idea is probably best kept as an optional extension of the main project.
This YouTube video is a simple example of the kind of driver I had in mind: a (basic) plate reverb effect created using a piezo buzzer. Buzzers like these could potentially be placed on the resonator to produce the feedback effect mentioned above. Thinking briefly about a more sophisticated result, it may be worth passing different frequency ranges to different buzzers mounted on the resonator for best effect; the frequency response of the buzzers would also have to be taken into consideration.
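As a back-of-the-envelope illustration, the band splitting could be as simple as a three-way crossover on the synthesis output, with each band sent to its own driver. This sketch uses SciPy Butterworth filters; the crossover frequencies (300 Hz and 3 kHz) are placeholder assumptions, not measured values:

```python
# Sketch of splitting the synthesis output into low/mid/high bands so
# that each band can be sent to a different piezo driver on the resonator.

import numpy as np
from scipy.signal import butter, lfilter

def split_bands(signal, fs, low_cut=300.0, high_cut=3000.0, order=4):
    """Return (low, mid, high) band signals for three separate drivers."""
    b_lo, a_lo = butter(order, low_cut, btype="lowpass", fs=fs)
    b_mid, a_mid = butter(order, [low_cut, high_cut], btype="bandpass", fs=fs)
    b_hi, a_hi = butter(order, high_cut, btype="highpass", fs=fs)
    return (lfilter(b_lo, a_lo, signal),
            lfilter(b_mid, a_mid, signal),
            lfilter(b_hi, a_hi, signal))

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs) / fs
    test = np.sin(2 * np.pi * 110 * t) + np.sin(2 * np.pi * 4000 * t)
    low, mid, high = split_bands(test, fs)
    print(low.shape, mid.shape, high.shape)
```

In practice the crossover points would be chosen around the measured frequency response of each buzzer.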
