A Generic Participatory Sensing Framework for Multi-modal Datasets

Cited by: 0
Authors:
Wu, Fang-Jing [1 ]
Luo, Tie [1 ]
Institutions:
[1] ASTAR, Inst Infocomm Res, Singapore, Singapore
Keywords:
Crowdsourcing; participatory sensing; pervasive computing; incentive mechanism; social network;
DOI: not available
CLC number: TP18 (Artificial intelligence theory)
Subject classification: 081104; 0812; 0835; 1405
Abstract
Participatory sensing has become a promising data collection approach to crowdsourcing data from multi-modal data sources. This paper proposes a generic participatory sensing framework that consists of a set of well-defined modules in support of diverse use cases. This framework incorporates the concept of "human-as-a-sensor" into participatory sensing and allows the public crowd to contribute human observations as well as sensor measurements from their mobile devices. We specifically address two issues: incentive and extensibility, where the former refers to motivating participants to contribute high-quality data while the latter refers to accommodating heterogeneous and uncertain data sources. To address the incentive issue, we design an incentive engine to attract high-quality data contributions independent of data modalities. This engine works together with a novel social network that we introduce into participatory sensing, where participants are linked together and interact with each other based on the quality and quantity of the data they have contributed. To address the extensibility issue, the proposed framework embodies an application-agnostic design and provides an interface to external datasets. To demonstrate and verify this framework, we have developed a prototype mobile application called imReporter, which crowdsources hybrid (image-text) reports from participants in an urban setting and incorporates an external dataset from a public data mall. A pilot study was also carried out with 15 participants over 3 consecutive weeks, and the results confirm that our proposed framework fulfills its design goals.
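The abstract describes an incentive engine that scores participants by the quality and quantity of their contributions, independent of data modality. The paper does not specify the scoring function, so the sketch below is purely illustrative: it assumes a simple quality-weighted contribution count and a hypothetical `rank_participants` helper, not the authors' actual mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    """A crowd participant contributing multi-modal reports (illustrative model)."""
    name: str
    # Each entry is (quality in [0, 1], number of reports at that quality).
    contributions: list = field(default_factory=list)

    def add_report(self, quality: float, count: int = 1) -> None:
        """Record contributed reports with an assessed quality rating."""
        self.contributions.append((quality, count))

    def incentive_score(self) -> float:
        """Quality-weighted contribution count: rewards both volume and quality."""
        return sum(q * n for q, n in self.contributions)

def rank_participants(participants):
    """Order participants by incentive score, highest first."""
    return sorted(participants, key=lambda p: p.incentive_score(), reverse=True)
```

A scoring rule of this shape is modality-agnostic because it operates only on post-hoc quality assessments, not on the raw data; the social-network layer described in the paper could then link participants according to these scores.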
Pages: 6