{"id":2957,"date":"2013-11-24T22:25:17","date_gmt":"2013-11-24T22:25:17","guid":{"rendered":"http:\/\/blog.soton.ac.uk\/digitalhumanities\/?p=2957"},"modified":"2022-07-27T09:58:00","modified_gmt":"2022-07-27T09:58:00","slug":"sotondh-small-grants-investigation-into-synthesizer-parameter-mapping-and-interaction-for-sound-design-purposes-post-3","status":"publish","type":"post","link":"https:\/\/digitalhumanities.soton.ac.uk\/small-grants\/2957","title":{"rendered":"sotonDH Small Grants: Investigation into Synthesizer Parameter Mapping and Interaction for Sound Design Purposes \u2013 Post 3"},"content":{"rendered":"

sotonDH Small Grants: Investigation into Synthesizer Parameter Mapping and Interaction for Sound Design Purposes – Post 3 by Darrell Gibson

Research Group: Composition and Music Technology

Introduction

In the previous blog posts, three research questions and supporting literature were presented in relation to how synthesizers are used for sound design. With these in mind, the next stage has been to consider how the presented research questions can begin to be evaluated and answered. In line with these, it is proposed that the primary focus for this investigation will be the synthesizer programming requirements of sound designers. These professionals tend to have extensive experience in this area, so when undertaking a sound creation task they are likely to have concrete ideas for the sound elements they require or the direction they want to take. Although techniques have been proposed that allow interpolation to be performed between different parameter sets, the sets themselves first need to be defined by the sound designer. This could be done using some form of resynthesis if the sound designer can supply "targets" for the texture they require. Ideally this task would be "unsupervised", so that sound designers would not have to spend time refining the synthesis generation and the process would be essentially automatic.

Having defined the sound textures that the designer wishes to use in a particular creation, the mapping of the parameters will then need to be considered. This is a vital area, as the mapping will need to be done in a way that permits the sound space to be explored in a logical and intuitive manner. This sound space may be either a representation of the parameter space or of a timbre space, if a more perceptual mapping is used. The actual mapping process could be done automatically or might be user defined. In addition, the interpolation systems developed so far offer straight interpolation between the parameters of the target sounds, whereas when sound designers work they will often apply different forms of manipulation to the sound textures they are using, such as layering, flipping, reversing, time stretching and pitch shifting. As a result, there would be an obvious advantage to an interpolation system that allowed not only exploration of the available sound space, but also more flexible manipulation of the sound textures. Ideally this programming interface will be independent of the actual synthesis engine, so that it can be used with any form of synthesis and will mask the underlying architecture from the user. This will allow the sound space of different engines to be explored with the same interface without having to worry about the underlying implementation. To do this successfully, a suitable graphical interface will need to be created that allows the sound space to be explored in an intuitive way whilst masking the underlying detail.
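To make the idea of straight interpolation between parameter sets concrete, the following is a minimal sketch (written in Python, outside Max/MSP) of a linear blend between two presets. The parameter names and values are hypothetical, chosen only for illustration; they do not correspond to any particular synthesis engine.

```python
# Minimal sketch: linear interpolation between two synthesizer presets.
# Parameter names and values are illustrative only, not those of a real engine.

def interpolate_presets(preset_a, preset_b, t):
    """Return a preset blended between preset_a (t=0) and preset_b (t=1)."""
    return {name: (1.0 - t) * preset_a[name] + t * preset_b[name]
            for name in preset_a}

# Hypothetical presets with normalised (0..1) parameter values.
pad = {"cutoff": 0.30, "resonance": 0.10, "attack": 0.80, "release": 0.90}
stab = {"cutoff": 0.85, "resonance": 0.55, "attack": 0.05, "release": 0.20}

halfway = interpolate_presets(pad, stab, 0.5)
print(halfway)  # each parameter sits midway between the two source sounds
```

Sweeping t from 0 to 1 traces a path through the parameter space between the two source sounds, which is the basic operation the interpolation systems described above generalise to many sounds at once.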

Multi-Touch Screen Technology

Tablet computers that offer mobile multi-touch functionality, such as the Apple iPad, have become pervasive in modern society, particularly for content delivery. However, they have been less commonly used for content creation [1], and this is especially true in the area of sound design. Recent software application releases for these devices [2], [3], [4], and new audio/MIDI interfaces [5], [6], mean that this technology can now start to be used for content creation in this area. This poses some interesting new research questions, such as: Can multi-touch computers offer a viable interface for synthesizer programming and sound design? Will mobile technologies allow content creation to be performed collaboratively between multiple parties in different locations?

To answer these questions and give context to the previous research questions, a SYTER-style [7] interpolation system has been developed. When functional testing of this system has been completed, a secondary system will be created that will allow remote control of the interpolation via the iPad. This offers a couple of interesting areas for consideration. When implemented with a traditional mouse and screen, only one interpolation point can be moved at a time, whereas with the iPad multiple points can potentially be controlled simultaneously. In addition, as the tablet device is intended purely for control purposes, a number of users could potentially control the interpolation at the same time, opening up the possibility of collaborative sound design. These are exciting prospects and ones that warrant further investigation.

Work Completed

The synthesis engine and interpolation have been realised using the Max/MSP visual programming environment [8], which offers relatively fast development with many standard building blocks. For testing purposes, the idea is to use the interpolation system with a variety of different synthesis engines. Fortunately, Max/MSP comes with several pre-built synthesis engines, so the interpolation interface has been designed to connect directly to the engines available.

The interface presented uses the same paradigm as the original SYTER Interpol system, where each sound, defined by a set of synthesizer parameters, is represented as a circle in the interpolation space [7]. An example of the created interpolation space is shown in Figure 1, where interpolation is being performed between six sounds.

\"Figure<\/a>
Figure 1: Example of the Synthesizer Interpolation Space<\/figcaption><\/figure>\n

The diameter of each circle defines the "strength" that each sound will possess in the gravitational model. As the user moves the crosshairs, a smooth interpolation is performed between the sounds that the circles represent. This is achieved by performing a linear interpolation between pairs of parameters defined by the circles. In this way, new sounds can be identified by the sound designer, or the space can be used during a performance as a mechanism for creating morphing sounds.
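The post does not give the exact weighting formula used by the gravitational model, so the following Python sketch should be read as one plausible interpretation rather than the actual implementation: each circle's contribution is weighted by its strength divided by its distance from the crosshair, and the weighted parameter sets are then blended linearly. The positions, strengths and parameter names are illustrative assumptions.

```python
import math

# Sketch of a gravitational-style interpolator: each sound is a circle with a
# position, a "strength" (related to its diameter) and a full parameter set.
# Inverse-distance weighting scaled by strength is assumed here; the original
# SYTER/Interpol weighting may differ.

def interpolate_at(cursor, sounds):
    """cursor: (x, y); sounds: list of dicts with 'pos', 'strength', 'params'."""
    weights = []
    for s in sounds:
        d = math.dist(cursor, s["pos"])
        if d < 1e-9:                      # crosshair exactly on a circle centre
            return dict(s["params"])
        weights.append(s["strength"] / d)
    total = sum(weights)
    names = sounds[0]["params"].keys()
    return {n: sum(w * s["params"][n] for w, s in zip(weights, sounds)) / total
            for n in names}

sounds = [
    {"pos": (0.2, 0.3), "strength": 1.0, "params": {"cutoff": 0.2, "attack": 0.9}},
    {"pos": (0.8, 0.7), "strength": 2.0, "params": {"cutoff": 0.9, "attack": 0.1}},
]
print(interpolate_at((0.5, 0.5), sounds))  # a blend biased towards the stronger sound
```

Moving the cursor smoothly therefore produces a smooth trajectory through the parameter space, which is what gives the morphing behaviour described above.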

Further Work

The synthesis engine and the controlling interpolator have been realised on a laptop computer running the Max/MSP software. The next stage will be to implement the mobile multi-touch interface on the iPad. It is intended that this will be done using the Fantastick application [9], which allows drawing commands to be sent to the iPad from Max/MSP via a WiFi network. In this way, it should be possible to give a representation of the interpolator on the iPad that can then be synchronised with the software running on the computer. The user will then be able to use the multi-touch functionality of the iPad to control the interpolator.
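The data that needs to travel back over the WiFi network is small: essentially touch identifiers and normalised coordinates. The sketch below, written in Python with the python-osc library rather than in Max/MSP or Fantastick, simply illustrates sending such touch messages over UDP to a listening patch; the address pattern, port and IP address are assumptions for illustration, not part of the Fantastick protocol.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hedged sketch: forwarding normalised touch coordinates to the laptop running
# Max/MSP over WiFi. The address pattern "/interpolator/touch", port 8000 and
# IP address are illustrative assumptions, not the Fantastick protocol.

client = SimpleUDPClient("192.168.0.10", 8000)  # laptop's address on the WiFi network

def send_touch(touch_id, x, y):
    """Send one touch point (id plus 0..1 coordinates) as a single OSC message."""
    client.send_message("/interpolator/touch", [touch_id, x, y])

# Two simultaneous touches, e.g. two interpolation points being moved at once.
send_touch(0, 0.25, 0.60)
send_touch(1, 0.75, 0.40)
```

Because each touch carries its own identifier, several points can be moved at the same time, which is exactly the multi-point control that distinguishes the iPad interface from the mouse-driven one.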

When the system is working and functional testing has been completed, usability testing will be performed to try to answer the previously presented research questions. This will take the form of giving the participants specific sound design tasks so that quantitative results can be derived against "target" sounds. The tests are likely to consist of three separate parts. First, the participants will be asked to create a target sound directly with the synthesis engine, without the interpolated control. Before doing this, they will be allowed to explore the synthesizer's parameters; they will then be supplied with the target sound and asked to try to replicate it. In the second test, the participants will be supplied with the same synthesis engine, but this time controlled by the interpolation interface. However, interaction with the interpolator will be via a traditional computer mouse or trackpad and a standard computer monitor. Again, the participants will first be asked to explore the parameter space, this time defined by an interpolation space; they will then be supplied with a target sound and again asked to try to replicate it, but this time using the interpolator to control the synthesis engine. In the final test, the same interpolator and synthesis engine will be used, except that this time the multi-touch screen of the iPad will be used as the input device.

For all three of these tests, performance will be assessed through the amount of time it takes to create the target sound and the accuracy with which the sound is created. In addition to this quantitative data, qualitative results will also be gathered following the tests through the use of questionnaires and focus groups. It is intended that the generated data will be analysed using a combination of Grounded Theory and Distributed Cognition approaches [10]. When this has been completed, and based on the results obtained, a second round of usability testing will be designed, which will allow the exploration of using multiple iPads for collaborative sound design.
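The post does not fix how replication accuracy will be measured, so the following Python sketch shows one possible metric as an assumption: a root-mean-square error between the target preset and the participant's final settings, with all parameters normalised to the 0–1 range. A perceptual or spectral distance measure would be an alternative.

```python
import math

# One possible accuracy measure (an assumption; the post does not fix a metric):
# root-mean-square error between the target preset and the participant's result,
# with all parameters normalised to the 0..1 range. Smaller values mean a closer match.

def rms_error(target, result):
    diffs = [(target[name] - result[name]) ** 2 for name in target]
    return math.sqrt(sum(diffs) / len(diffs))

target = {"cutoff": 0.40, "resonance": 0.25, "attack": 0.10}
attempt = {"cutoff": 0.35, "resonance": 0.30, "attack": 0.20}
print(f"RMS error: {rms_error(target, attempt):.3f}")
```

Combined with the time taken per task, a measure of this kind would allow the three test conditions (direct programming, mouse-controlled interpolation, iPad-controlled interpolation) to be compared quantitatively.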

It is anticipated that building and functional testing of the iPad-based interpolator will take another two months. The usability testing and evaluation will probably require another six months of work.

References
1. Müller, H., Gove, J. L. & Webb, J. S., Understanding Tablet Use: A Multi-Method Exploration. Proceedings of the 14th Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2012), ACM, 2012.
2. https://itunes.apple.com/us/app/cubasis/id583976519?mt=8, Accessed 10/01/13.
3. https://itunes.apple.com/ca/app/bloom-hd/id373957864?mt=8, Accessed 10/01/13.
4. https://itunes.apple.com/gb/app/curtis-for-ipad/id384228003?mt=8, Accessed 10/01/13.
5. http://www.alesis.com/iodock, Accessed 10/01/13.
6. http://www.musicradar.com/gear/guitars/computers-software/peripherals/input-devices/audio-interfaces/iu2-562042, Accessed 10/01/13.
7. Allouis, J. & Bernier, J. Y., The SYTER Project: Sound Processor Design and Software Overview. In Proceedings of the 1982 International Computer Music Conference (ICMC), 232–240, 1982.
8. Manzo, V. J., Max/MSP/Jitter for Music: A Practical Guide to Developing Interactive Music Systems for Education and More. Oxford University Press, 2011.
9. https://itunes.apple.com/gb/app/fantastick/id302266079?mt=8, Accessed 10/01/13.
10. Rogers, Y., Sharp, H. & Preece, J., Interaction Design: Beyond Human-Computer Interaction, 3rd Edition. John Wiley & Sons, 2011.
