Detecting Eating and Social Presence with All-Day Wearable RGB-T

Cited by: 1
Authors
Shahi, Soroush [1 ]
Sen, Sougata [2 ]
Pedram, Mahdi [1 ]
Alharbi, Rawan [1 ]
Gao, Yang [1 ]
Katsaggelos, Aggelos K. [1 ]
Hester, Josiah [3 ]
Alshurafa, Nabil [1 ]
Affiliations
[1] Northwestern Univ, Evanston, IL 60208 USA
[2] Birla Inst Technol & Sci, Pilani, Goa, India
[3] Georgia Inst Technol, Atlanta, GA 30332 USA
Source
2023 IEEE/ACM CONFERENCE ON CONNECTED HEALTH: APPLICATIONS, SYSTEMS AND ENGINEERING TECHNOLOGIES, CHASE | 2023
Keywords
human activity recognition; wearable camera; deep learning; FOOD-INTAKE; IMPACT; REAL;
DOI
10.1145/3580252.3586974
CLC classification
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Social presence has been known to impact eating behavior among people with obesity; however, the dual study of eating behavior and social presence in real-world settings is challenging due to the inability to reliably confirm the co-occurrence of these important factors. High-resolution video cameras can detect timing while providing visual confirmation of behavior; however, their potential to capture all-day behavior is limited by short battery lifetime and lack of autonomy in detection. Low-resolution infrared (IR) sensors have shown promise in automating human behavior detection; however, it is unknown whether IR sensors contribute to behavior detection when combined with RGB cameras. To address these challenges, we designed and deployed a low-power, low-resolution RGB video camera, in conjunction with a low-resolution IR sensor, to test a learned model's ability to detect eating and social presence. We evaluated our system in the wild with 10 participants with obesity; compared with a video-only approach, our models showed a slight improvement when detecting eating (5%) and a significant improvement when detecting social presence (44%). We analyzed device failure scenarios and their implications for future wearable camera design and machine learning pipelines. Lastly, we provide guidance for future studies using low-cost RGB and IR sensors to validate human behavior with context.
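The abstract describes pairing a low-resolution RGB camera with a low-resolution IR sensor so a learned model can use both modalities. The paper does not reproduce its pipeline here; the sketch below shows one plausible late-fusion step under stated assumptions: simple per-frame RGB statistics concatenated with a flattened 8×8 thermopile-style IR grid (the frame sizes, feature choice, and function name `fuse_features` are illustrative, not taken from the paper).

```python
import numpy as np

def fuse_features(rgb_frame, ir_grid):
    """Concatenate simple per-frame RGB statistics with the
    flattened low-resolution IR grid into one feature vector.
    A minimal fusion sketch, not the authors' actual model."""
    rgb_stats = np.array([rgb_frame.mean(), rgb_frame.std()])  # 2 values
    return np.concatenate([rgb_stats, ir_grid.ravel()])        # 2 + H*W values

# Hypothetical sensor shapes: a 64x64 RGB frame and an 8x8 IR grid.
rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
ir = rng.random((8, 8))

feat = fuse_features(rgb, ir)
print(feat.shape)  # (66,)
```

A classifier (e.g., a small CNN or logistic layer) would then consume `feat`; whether fusion happens at the feature level, as here, or deeper in the network is a design choice the paper's comparison of video-only versus combined sensing motivates but does not fix.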
Pages: 68-79 (12 pages)
Related Papers
(50 total)
  • [31] Spatial exchanging fusion network for RGB-T crowd counting
    Rao, Chaoqun
    Wan, Lin
    NEUROCOMPUTING, 2024, 609
  • [32] A Lightweight RGB-T Fusion Network for Practical Semantic Segmentation
    Zhang, Haoyuan
    Li, Zifeng
    Wu, Zhenyu
    Wang, Danwei
    2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 4233 - 4238
  • [33] Multimodal Feature-Guided Pretraining for RGB-T Perception
    Ouyang, Junlin
    Jin, Pengcheng
    Wang, Qingwang
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 16041 - 16050
  • [34] A Survey of RGB-T Salient Object Detection
    Wu, Jintao
    Wang, Anzhi
    Ren, Chunhong
    INFRARED TECHNOLOGY, 2025, 47 (01) : 1 - 9
  • [35] Cross-modal collaborative propagation for RGB-T saliency detection
    Yu, Xiaosheng
    Pang, Yu
    Chi, Jianning
    Qi, Qi
    VISUAL COMPUTER, 2024, 40 (06): : 4337 - 4354
  • [36] AGFNet: Adaptive Gated Fusion Network for RGB-T Semantic Segmentation
    Zhou, Xiaofei
    Wu, Xiaoling
    Bao, Liuxin
    Yin, Haibing
    Jiang, Qiuping
    Zhang, Jiyong
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2025,
  • [37] RGB-T Saliency Detection Based on Multiscale Modal Reasoning Interaction
    Wu, Yunhe
    Jia, Tong
    Chang, Xingya
    Wang, Hao
    Chen, Dongyue
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [38] Modal complementary fusion network for RGB-T salient object detection
    Ma, Shuai
    Song, Kechen
    Dong, Hongwen
    Tian, Hongkun
    Yan, Yunhui
    APPLIED INTELLIGENCE, 2023, 53 (08) : 9038 - 9055
  • [39] MiLNet: Multiplex Interactive Learning Network for RGB-T Semantic Segmentation
    Liu, Jinfu
    Liu, Hong
    Li, Xia
    Ren, Jiale
    Xu, Xinhua
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 1686 - 1699
  • [40] RGB-T Object Tracking Algorithm Based on CNN Features
    Liu, Lian
    Li, Fusheng
    COMPUTER & DIGITAL ENGINEERING, 2024, (02) : 432 - 435