A friend of mine referred me to Hugues Genevois of LAM. Hugues specializes in areas related to DMI production and has created and studied DMIs himself. I contacted him via e-mail, outlining my project and asking for any advice or suggestions, and he proposed that we meet to talk further. I spent a couple of hours today talking to Hugues at the LAM institute; below is a rough summary of our discussion. He seemed rather enthusiastic, shared many of his views with me, gave me plenty of advice, and suggested that we meet again in a few weeks to see how the project is progressing.
- It may be interesting to think about using the controller to trigger and manipulate samples. Samples of real-world sounds allow very rich, natural timbres to be used and manipulated in a reasonably simple way (compared with attempting to synthesize such richness). Similar results could be obtained using open-source physical modelling (PM) algorithms. This could be a nice way for the project to progress in terms of the synthesis algorithm: start by testing with fairly simple sine tone generators, progress to richer results using samples, and then move on to naturalistic control over the internals of the sound, such as timbre, via a PM (a minimal sketch of the first stage appears after this list).
- Hugues suggested gesture acceleration as an important and useful variable. It allows for more degrees of freedom without necessarily including extra transducers. For example, position on a linear pot can be complemented by the rate of change of position on the same pot (see the tracking sketch after this list). Faster gestures also require more physical energy, which improves control and feedback for performer and audience. This aspect is engaging for the audience too, because they can clearly see the performer changing aspects of the sound, whereas the difference in sound between one finger position and the next is perhaps not as obvious.
- The above point also indicates that the dimensions of the object are important. Given a slider pot, for example, it is more difficult to produce accurate gestures on a smaller slider than on a larger one. Emphasis should also be placed on having a substantial object to feel, touch and manipulate - the whole essence of my project! This is much more engaging for performer and audience, and a substantially sized instrument would allow more control by touch rather than visual feedback. This could let the player engage more with other musicians, both technically and in showing enjoyment, and with the audience. It also implies a greater possible degree of mastery, since experienced players tend to use feedback channels other than vision for control. A small instrument would perhaps demand a lot of visual concentration.
- One could incorporate sensors of different sizes for different musical tasks: for example, a very long position sensor for long, accurate gestures (continuous bow-like control, say) and smaller FSRs for gestures requiring less long-term accuracy.
- The instrument should make substantial use of the user's energy input in order to really connect the user to it. The timbre of the resulting sound should be directly affected by the user's energy, i.e. by changes in attack strength. This can be done fairly easily in many systems: a different sample (recorded at a different attack strength) can be played depending on attack strength; attack strength can be linked to the plucking, blowing or bowing strength variable of a PM; or attack strength can change a filter or the number of partials in a simple additive synthesis system - for example, more force introduces more high-pitched partials (see the additive sketch after this list). Changing timbre with energy is perhaps more important than changing loudness.
- It is important to experiment with various synthesis algorithm setups in order to arrive at the best sound and the best control styles. Linking a controller with software is very much like a luthier fine-tuning an instrument's design: make small, careful changes, listen to the resulting changes in sound, and continue the process until the desired sound is reached.
- The way in which the signals from the synthesis algorithm are turned into sound is very important. The sound should be a living thing that results directly from the instrument. Built-in speakers are a very good way to provide sound localization and vibrotactile feedback; they also increase engagement for performer and audience, and are an important step in linking the player to the sounds. Multiple loudspeakers would be better than one, since a single loudspeaker is directional, which is very unnatural for music and sound in general. One could also consider a speaker pointing towards the musician for direct audio feedback.
- Non-linear mapping strategies will often be required for the kind of control desired (see the exponential mapping sketch after this list).
- For tapping gestures like the ones I was thinking of, a piezo contact mic may be suitable. With this method, control data would be derived from the audio signal of the mic rather than from pressure or resistance. The mic could be placed directly onto the top plate of the instrument, so that tapping and banging the instrument would produce a signal. This could be used, for example, to control the onset of a note, which could then be made to resonate (using Karplus-Strong, for example) with a decay time based on the intensity of the attack (see the tap-to-resonance sketch after this list).
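A few quick code sketches of the points above follow. These are my own rough illustrations in Python, not anything Hugues prescribed, and every parameter value in them is an assumption of mine.

First, the starting point of the suggested synthesis progression: a bare sine tone generator, plus a trivial sample-trigger stage, whose parameters would eventually be driven by controller data. The sample rate and function names are placeholders.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz, an assumed rate

def sine_tone(freq, dur, amp=0.5):
    """Stage one: the simplest test oscillator, useful for
    checking that controller data reaches the synth at all."""
    t = np.arange(int(dur * SAMPLE_RATE)) / SAMPLE_RATE
    return amp * np.sin(2 * np.pi * freq * t)

def trigger_sample(sample, gain=1.0):
    """Stage two: replay a recorded real-world sample instead of
    synthesizing, scaled by a controller-derived gain."""
    return gain * np.asarray(sample)

tone = sine_tone(440.0, 1.0)  # a one-second 440 Hz test tone
```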
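Next, the gesture-acceleration idea: one simple way to get the extra degrees of freedom is to estimate velocity and acceleration from successive pot readings by finite differences. The reading interval and smoothing factor here are illustrative assumptions.

```python
class GestureTracker:
    """Derives velocity and acceleration from successive position
    readings of a single linear pot, by finite differences."""

    def __init__(self, dt, smooth=0.8):
        self.dt = dt          # seconds between readings (assumed)
        self.smooth = smooth  # one-pole smoothing to tame sensor noise
        self.pos = None
        self.vel = 0.0
        self.acc = 0.0

    def update(self, new_pos):
        if self.pos is not None:
            raw_vel = (new_pos - self.pos) / self.dt
            # smooth the velocity so noisy readings don't dominate
            new_vel = self.smooth * self.vel + (1 - self.smooth) * raw_vel
            self.acc = (new_vel - self.vel) / self.dt
            self.vel = new_vel
        self.pos = new_pos
        return self.pos, self.vel, self.acc
```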
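For the energy-to-timbre point, here is a minimal additive-synthesis sketch in which attack strength sets the number of partials, so a harder strike sounds brighter. The 1/n amplitude rolloff and the decay rate are assumptions.

```python
import numpy as np

SAMPLE_RATE = 44100

def struck_tone(freq, strength, dur=1.0, max_partials=16):
    """Additive tone whose brightness follows attack strength
    (0..1): a harder strike adds more high partials, a soft one
    leaves mostly the fundamental."""
    n_partials = max(1, int(round(strength * max_partials)))
    t = np.arange(int(dur * SAMPLE_RATE)) / SAMPLE_RATE
    out = np.zeros_like(t)
    for n in range(1, n_partials + 1):
        out += np.sin(2 * np.pi * freq * n * t) / n  # assumed 1/n rolloff
    env = np.exp(-3.0 * t)  # simple percussive decay envelope
    return strength * env * out / n_partials
```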
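For non-linear mapping, one common example is an exponential map from a normalized sensor value to frequency, so that equal sensor movements give equal musical intervals rather than equal steps in Hz. The frequency range is an arbitrary choice of mine.

```python
def exp_map(x, out_min=110.0, out_max=1760.0):
    """Maps x in [0, 1] exponentially between out_min and out_max;
    with these endpoints each quarter of travel spans one octave."""
    return out_min * (out_max / out_min) ** x

mid = exp_map(0.5)  # 440.0 Hz: the geometric, not arithmetic, midpoint
```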
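Finally, the piezo tap-to-resonance idea: detect an attack in the mic signal with a simple peak threshold, then excite a Karplus-Strong string whose damping depends on the detected intensity. The threshold and the intensity-to-damping mapping are assumptions.

```python
import numpy as np

SAMPLE_RATE = 44100

def detect_attack(block, threshold=0.1):
    """Crude onset detector: returns the peak amplitude of a block
    of piezo signal if it crosses the threshold, else None."""
    peak = float(np.max(np.abs(block)))
    return peak if peak > threshold else None

def karplus_strong(freq, intensity, dur=2.0):
    """Plucked-string resonance whose decay follows how hard the
    plate was tapped: harder taps (intensity near 1.0) ring longer."""
    n = int(dur * SAMPLE_RATE)
    delay = int(SAMPLE_RATE / freq)
    buf = intensity * (2 * np.random.rand(delay) - 1)  # noise burst
    # map intensity to a loss factor just below 1.0 (assumed mapping)
    loss = 0.990 + 0.009 * min(max(intensity, 0.0), 1.0)
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        buf[i % delay] = loss * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

peak = detect_attack(np.array([0.0, 0.4, 0.1]))
if peak is not None:
    note = karplus_strong(220.0, peak)
```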
It is most important to listen to sounds and try to understand what is happening. If one can develop a good grasp of the sonic consequences of a physical action, one can attempt to reproduce those consequences in a DMI. Another important point is that fairly simple algorithms combined with complex mappings often produce very good results, and that simple changes, such as moving a filter, can be convincing.
I'm going to attempt further research in various areas, including piezo mics and loudspeaker arrays, and try to create a fairly solid first-draft design of my DMI to show Hugues in a few weeks' time.
