{"id":2957,"date":"2013-11-24T22:25:17","date_gmt":"2013-11-24T22:25:17","guid":{"rendered":"http:\/\/blog.soton.ac.uk\/digitalhumanities\/?p=2957"},"modified":"2022-07-27T09:58:00","modified_gmt":"2022-07-27T09:58:00","slug":"sotondh-small-grants-investigation-into-synthesizer-parameter-mapping-and-interaction-for-sound-design-purposes-post-3","status":"publish","type":"post","link":"https:\/\/digitalhumanities.soton.ac.uk\/small-grants\/2957","title":{"rendered":"sotonDH Small Grants: Investigation into Synthesizer Parameter Mapping and Interaction for Sound Design Purposes \u2013 Post 3"},"content":{"rendered":"
In the previous blog posts, three research questions and supporting literature were presented in relation to how synthesizers are used for sound design. With these in mind, the next stage has been to consider how the presented research questions can begin to be evaluated and answered. In line with these, it is proposed that the primary focus of this investigation will be the synthesizer programming requirements of sound designers. These professionals tend to have extensive experience in this area, so when undertaking a sound creation task they are likely to have concrete ideas about the sound elements they require or the direction they want to take. Although techniques have been proposed that allow interpolation to be performed between different parameter sets, the sets themselves first need to be defined by the sound designer. This could be done using some form of resynthesis if the sound designer can supply "targets" for the texture they require. This task would ideally be "unsupervised", so that the sound designers would not have to spend time refining the synthesis generation and it would essentially be an automatic process.
Having defined the sound textures that the designer wishes to use in a particular creation, the mapping of the parameters will then need to be considered. This is a vital area, as the mapping will need to be done in a way that permits the sound space to be explored in a logical and intuitive manner. This sound space may be either a representation of the parameter space or a timbre space, if more perceptual mappings are being used. The actual mapping process could be done automatically or might be user defined. In addition, the interpolation systems developed so far offer straight interpolation between the parameters of the target sounds, whereas when sound designers work they will often apply different forms of manipulation to the sound textures they are using, such as layering, flipping, reversing, time stretching and pitch shifting. As a result, there would be an obvious advantage to an interpolation system that allowed not only the exploration of the available sound space, but also more flexible manipulation of the sound textures. Ideally this programming interface will be independent of the actual synthesis engine, so that it can be used with any form of synthesis and will mask the underlying architecture from the user. This will allow the sound space of different engines to be explored with the same interface without having to worry about the underlying implementation. In order to do this successfully, a suitable graphical interface will need to be created that allows the sound space to be explored in an intuitive way, whilst masking the underlying detail.
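To illustrate how such a programming interface could be kept independent of the underlying synthesis engine, the sketch below (written in Python for readability, rather than in Max/MSP) defines a small engine-agnostic wrapper. The class names, parameter names and the use of normalised 0–1 values are illustrative assumptions, not details of the system described above.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class SynthEngine(ABC):
    """Hypothetical wrapper that hides a specific synthesis
    architecture from the interpolation interface."""

    @abstractmethod
    def parameter_names(self) -> List[str]:
        """Names of the parameters this engine exposes."""

    @abstractmethod
    def set_parameters(self, params: Dict[str, float]) -> None:
        """Apply a full parameter set (values normalised to 0..1)."""

class SubtractiveEngine(SynthEngine):
    """Example engine; a real implementation would forward the values
    to the audio environment (e.g. a Max/MSP patch)."""

    def parameter_names(self) -> List[str]:
        return ["cutoff", "resonance", "attack", "release"]

    def set_parameters(self, params: Dict[str, float]) -> None:
        # Placeholder: print instead of driving a synthesis engine.
        print("applying", params)
```

Because the interpolation interface only ever deals with named, normalised parameters, one engine can be swapped for another without changing the interface the sound designer sees.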
Tablet computers that offer mobile multi-touch functionality, such as the Apple iPad, have become pervasive in modern society, particularly for content delivery. However, they have been less commonly used for content creation [1], and this is especially true in the area of sound design. Recent software application releases for these devices [2], [3], [4], and new audio/MIDI interfaces [5], [6], mean that this technology can now start to be used for content creation in this area. This poses some interesting new research questions, such as: Can multi-touch computers offer a viable interface for synthesizer programming and sound design? Will mobile technologies allow content creation to be performed collaboratively between multiple parties in different locations?
To answer these questions and give context to the previous research questions, a SYTER-style [7] interpolation system has been developed. When functional testing of this system has been completed, a secondary system will be created that will allow remote control of the interpolation via the iPad. This offers a couple of interesting areas for consideration. When implemented with a traditional mouse and screen for the control of interpolation, only one point can be moved at a time, whereas with the iPad multiple points can potentially be controlled simultaneously. In addition, as it is intended that the tablet device is used purely for control purposes, a number of users could potentially control the interpolation at the same time, opening up the possibility of collaborative sound design. These are exciting prospects and ones that warrant further investigation.
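The post does not specify a control protocol, but OSC over a local network is a common way of sending touch data from an iPad to a Max/MSP patch. The sketch below, using the python-osc package, shows a minimal receiver of that kind; the /interp/point address pattern, the port number and the message layout (touch id, x, y) are assumptions made purely for illustration.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Latest known position of each touch point; several touches can be
# active at once, which is what makes multi-point control possible.
points = {}

def update_point(address, point_id, x, y):
    # Hypothetical message: /interp/point <id> <x> <y>
    points[int(point_id)] = (float(x), float(y))

dispatcher = Dispatcher()
dispatcher.map("/interp/point", update_point)

# Listening on all interfaces lets any tablet on the network send
# control data, so several users could move points at the same time.
server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()
```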
The synthesis engine and interpolation have been realised using the Max/MSP visual programming environment [8], which offers relatively fast development with many standard building blocks. For testing purposes, the idea is to use the interpolation system with a variety of different synthesis engines. Fortunately, Max/MSP comes with several pre-built synthesis engines, so the interpolation interface has been designed to connect directly to the engines available.
The interface presented uses the same paradigm as the original SYTER Interpol system, where each sound, defined by a set of synthesizer parameters, is represented as a circle in the interpolation space [7]. An example of the created interpolation space is shown in Figure 1, where interpolation is being performed between six sounds.
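As a rough illustration of this circle-based paradigm, the sketch below blends six stored parameter sets according to how close the control point is to each circle, using simple inverse-distance weighting scaled by circle radius. The weighting scheme is an assumption for the sake of the example; the SYTER/Interpol implementation may combine the sounds differently.

```python
import numpy as np

def interpolate(cursor, centres, radii, param_sets, eps=1e-6):
    """Blend stored parameter sets by proximity of the cursor to each circle.

    cursor      -- (x, y) position in the interpolation space
    centres     -- (n, 2) array of circle centres
    radii       -- (n,) array of circle radii
    param_sets  -- (n, p) array, one synthesizer parameter set per sound
    """
    d = np.linalg.norm(centres - np.asarray(cursor), axis=1)
    # Larger circles exert influence over a wider area; a cursor close
    # to (or inside) a circle is dominated by that sound.
    weights = radii / (d + eps)
    weights /= weights.sum()
    return weights @ param_sets

# Example: blend between six stored sounds, each with eight parameters.
rng = np.random.default_rng(0)
centres = rng.random((6, 2))
radii = np.full(6, 0.15)
param_sets = rng.random((6, 8))
print(interpolate((0.5, 0.5), centres, radii, param_sets))
```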