Enhanced Autonomous Control in Interactive Multi-Sensory Environments

Cited by: 0
Authors
Brown, Kenneth [1 ]
Ellis, Phil [1 ]
Affiliations
[1] Univ Sunderland, Fac ADM, Priestman Bldg, Green Terrace, Sunderland SR1 3PZ, Durham, England
Source
PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION (DSAI 2010) | 2010
Keywords
DOI
None available
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline classification code
0812
Abstract
iMUSE (Interactive Multi-Sensory Environments) promotes expression and wellbeing among disabled and older people through the expressive use of sound. Non-invasive technology gives participants the opportunity to exert a degree of control: they can use hand or arm movements to generate sound with accompanying images and vibration. However, secondary parameters such as audio timbre and the visualisation colour scheme currently have to be adjusted by the facilitator. This paper describes methods of enabling autonomous control of selected aspects of iMUSE in order to enhance the health and wellbeing of the participants. The project focuses on research into potential user interface types and the development of corresponding hardware and software that enable control through simple participant actions. The 'vibration level' and 'visualisation colour' were chosen as the parameters most relevant to the users. The system is currently being trialled, and some preliminary participant observations are given. It is already clear that the MIDI-to-control mapping methodology devised for this project has the potential to extend both to other aspects of iMUSE and to other projects requiring a similar methodology.
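
The abstract does not detail the MIDI-to-control mapping itself. The following is a minimal illustrative sketch (in Python) of how gesture-derived MIDI control-change messages might be mapped onto the two chosen parameters; the controller numbers, colour-scheme names, and the map_cc function are assumptions made for illustration, not the authors' actual implementation.

    # Hypothetical MIDI-to-control mapping sketch. CC numbers and parameter
    # names are illustrative assumptions, not the paper's actual design.

    CC_VIBRATION = 20   # assumed CC number carrying the vibration-level gesture
    CC_COLOUR = 21      # assumed CC number carrying the colour-scheme gesture

    COLOUR_SCHEMES = ["warm", "cool", "monochrome", "spectrum"]  # illustrative

    def map_cc(control: int, value: int) -> dict:
        """Translate one MIDI control-change (value 0-127) into a parameter update."""
        if control == CC_VIBRATION:
            # Scale the 0-127 MIDI range to a 0.0-1.0 vibration intensity.
            return {"vibration_level": value / 127.0}
        if control == CC_COLOUR:
            # Quantise 0-127 into one of the discrete colour schemes.
            index = value * len(COLOUR_SCHEMES) // 128
            return {"visualisation_colour": COLOUR_SCHEMES[index]}
        return {}  # ignore controllers outside the mapping

    if __name__ == "__main__":
        # Simulated participant gestures arriving as (controller, value) pairs.
        for control, value in [(20, 96), (21, 40), (7, 64)]:
            print(map_cc(control, value))

In a live system these messages would presumably arrive from a MIDI input device (a library such as mido could supply them); the input layer is omitted here to keep the sketch self-contained.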
Pages: 149-156
Page count: 8
Related papers
50 in total
  • [31] Multi-sensory Feedback Control in Door Approaching and Opening
    Winiarski, Tomasz
    Banachowicz, Konrad
    Seredynski, Dawid
    INTELLIGENT SYSTEMS'2014, VOL 2: TOOLS, ARCHITECTURES, SYSTEMS, APPLICATIONS, 2015, 323 : 57 - 70
  • [33] The Use of Multi-Sensory Environments in Schools Servicing Children with Severe Disabilities
    Carter, Mark
    Stephenson, Jennifer
    JOURNAL OF DEVELOPMENTAL AND PHYSICAL DISABILITIES, 2012, 24 (01) : 95 - 109
  • [34] Strengthening the affectivity of atmospheres in urban environments: the toolkit of multi-sensory experience
    Abusaada, Hisham
    ARCHNET-IJAR INTERNATIONAL JOURNAL OF ARCHITECTURAL RESEARCH, 2020, 14 (03) : 379 - 392
  • [35] The Golden Guardian: Multi-Sensory Immersive Gaming Through Multi-sensory Spatial Cues
    Chen, Taizhou
    Liu, Junyu
    Zhu, Kening
    Waliczky, Tamas
    SIGGRAPH ASIA 2017 VR SHOWCASE (SA'17), 2017,
  • [36] A multi-sensory interactive reading experience for visually impaired children; a user evaluation
    Edirisinghe C.
    Podari N.
    Cheok A.D.
PERSONAL AND UBIQUITOUS COMPUTING, 2022, 26 (03) : 807 - 819
  • [37] Indoor Multi-Sensory Self-Supervised Autonomous Mobile Robotic Navigation
    Xu, Junhong
    Guo, Hanqing
    Wu, Shaoen
    2018 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL INTERNET (ICII 2018), 2018, : 119 - 128
  • [38] Deep Multi-Sensory Object Category Recognition Using Interactive Behavioral Exploration
    Tatiya, Gyan
    Sinapov, Jivko
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 7872 - 7878
  • [39] Interactive, Tangible and Multi-sensory Technology for a Cultural Heritage Exhibition: The Battle of Pavia
    Cantoni, Virginio
    Lombardi, Luca
    Porta, Marco
    Setti, Alessandra
    INNOVATIVE APPROACHES AND SOLUTIONS IN ADVANCED INTELLIGENT SYSTEMS, 2016, 648 : 77 - 94
  • [40] Interactive Robotic Framework for Multi-sensory Therapy for Children with Autism Spectrum Disorder
    Bevill, Rachael
    Park, Chung Hyuk
    Jeon, Myounghoon
    Kim, Hyung Jung
    Lee, JongWon
    Rennie, Ariana
    Howard, Ayanna M.
    ELEVENTH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN ROBOT INTERACTION (HRI'16), 2016, : 421 - 422