Detecting Eating and Social Presence with All Day Wearable RGB-T

Cited by: 1
Authors
Shahi, Soroush [1 ]
Sen, Sougata [2 ]
Pedram, Mahdi [1 ]
Alharbi, Rawan [1 ]
Gao, Yang [1 ]
Katsaggelos, Aggelos K. [1 ]
Hester, Josiah [3 ]
Alshurafa, Nabil [1 ]
Affiliations
[1] Northwestern Univ, Evanston, IL 60208 USA
[2] Birla Inst Technol & Sci, Pilani, Goa, India
[3] Georgia Inst Technol, Atlanta, GA 30332 USA
Source
2023 IEEE/ACM CONFERENCE ON CONNECTED HEALTH: APPLICATIONS, SYSTEMS AND ENGINEERING TECHNOLOGIES (CHASE), 2023
Keywords
human activity recognition; wearable camera; deep learning; FOOD-INTAKE; IMPACT; REAL;
DOI
10.1145/3580252.3586974
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Social presence has been known to impact eating behavior among people with obesity; however, the dual study of eating behavior and social presence in real-world settings is challenging due to the inability to reliably confirm the co-occurrence of these important factors. High-resolution video cameras can detect timing while providing visual confirmation of behavior; however, their potential to capture all-day behavior is limited by short battery lifetime and lack of autonomy in detection. Low-resolution infrared (IR) sensors have shown promise in automating human behavior detection; however, it is unknown whether IR sensors contribute to behavior detection when combined with RGB cameras. To address these challenges, we designed and deployed a low-power, low-resolution RGB video camera in conjunction with a low-resolution IR sensor to test a learned model's ability to detect eating and social presence. We evaluated our system in the wild with 10 participants with obesity; our models displayed slight improvement when detecting eating (5%) and significant improvement when detecting social presence (44%) compared with a video-only approach. We analyzed device failure scenarios and their implications for future wearable camera design and machine learning pipelines. Lastly, we provide guidance for future studies using low-cost RGB and IR sensors to validate human behavior with context.
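The abstract describes fusing a low-resolution RGB camera with a low-resolution IR sensor for behavior detection, but does not specify the model. The sketch below is a minimal, hypothetical illustration of one way such RGB-T fusion could be set up (late fusion with a shared classification head for the two behaviors, eating and social presence); the PyTorch layers, tensor shapes, and layer sizes are assumptions, not the authors' architecture.

# Hypothetical sketch (not the paper's model): late fusion of a low-resolution
# RGB frame stream and a low-resolution IR grid for two binary detections.
import torch
import torch.nn as nn

class RGBTFusionDetector(nn.Module):
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # Small CNN over low-resolution RGB frames (assumed 3 x 112 x 112).
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tiny MLP over a flattened low-resolution IR grid (assumed 8 x 8 pixels).
        self.ir_encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * 8, 32), nn.ReLU(),
        )
        # Late fusion: concatenate features, one logit per behavior
        # (eating, social presence).
        self.head = nn.Linear(32 + 32, num_tasks)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.rgb_encoder(rgb), self.ir_encoder(ir)], dim=1)
        return self.head(feats)  # raw logits; apply sigmoid for probabilities

# Usage with dummy data: a batch of 4 RGB frames and matching IR grids.
model = RGBTFusionDetector()
rgb = torch.randn(4, 3, 112, 112)
ir = torch.randn(4, 1, 8, 8)
probs = torch.sigmoid(model(rgb, ir))  # columns: P(eating), P(social presence)
print(probs.shape)  # torch.Size([4, 2])

Late fusion is shown only because it keeps the two sensing streams separable for the kind of video-only vs. video-plus-IR comparison the abstract reports; the paper itself should be consulted for the actual pipeline.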
Pages: 68-79
Number of pages: 12