A Generic Participatory Sensing Framework for Multi-modal Datasets

Cited by: 0
Authors
Wu, Fang-Jing [1 ]
Luo, Tie [1 ]
Affiliations
[1] ASTAR, Inst Infocomm Res, Singapore, Singapore
Keywords
Crowdsourcing; participatory sensing; pervasive computing; incentive mechanism; social network
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Participatory sensing has become a promising approach to crowdsourcing data from multi-modal data sources. This paper proposes a generic participatory sensing framework that consists of a set of well-defined modules supporting diverse use cases. The framework incorporates the concept of "human-as-a-sensor" into participatory sensing, allowing the public crowd to contribute human observations as well as sensor measurements from their mobile devices. We specifically address two issues: incentive and extensibility, where the former refers to motivating participants to contribute high-quality data and the latter refers to accommodating heterogeneous and uncertain data sources. To address the incentive issue, we design an incentive engine that attracts high-quality data contributions independently of data modality. This engine works together with a novel social network that we introduce into participatory sensing, in which participants are linked together and interact with each other based on the quality and quantity of data they have contributed. To address the extensibility issue, the proposed framework embodies an application-agnostic design and provides an interface to external datasets. To demonstrate and verify this framework, we developed a prototype mobile application called imReporter, which crowdsources hybrid (image-text) reports from participants in an urban setting and incorporates an external dataset from a public data mall. A pilot study was carried out with 15 participants over 3 consecutive weeks, and the results confirm that the proposed framework fulfills its design goals.
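The abstract describes an incentive engine that ranks participants by both the quality and the quantity of their contributed data. The paper does not publish its scoring formula, so the following is only a minimal illustrative sketch, assuming a hypothetical weighted blend of average report quality and report count (the `Participant` class, the `alpha` weight, and the scoring rule are all assumptions, not the authors' design):

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    # Quality ratings in [0, 1] for each report this participant contributed.
    quality_scores: list = field(default_factory=list)

    def contribute(self, quality: float) -> None:
        """Record one contributed report with its quality rating."""
        self.quality_scores.append(quality)

    def incentive_score(self, alpha: float = 0.7) -> float:
        """Hypothetical incentive: blend average quality with quantity.

        alpha weights quality against raw report count; a real engine
        would likely normalize quantity and penalize low-quality spam.
        """
        if not self.quality_scores:
            return 0.0
        avg_quality = sum(self.quality_scores) / len(self.quality_scores)
        quantity = len(self.quality_scores)
        return alpha * avg_quality + (1 - alpha) * quantity

alice = Participant("alice")
for q in (0.9, 0.8, 1.0):
    alice.contribute(q)
print(round(alice.incentive_score(), 3))  # → 1.53
```

Scores like this could then drive the social-network component the abstract mentions, e.g. ranking participants on a leaderboard or weighting their visibility to peers.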
Pages: 6