Impacts of Image Obfuscation on Fine-grained Activity Recognition in Egocentric Video

Citations: 0
Authors
Shahi, Soroush [1 ,3 ]
Alharbi, Rawan [1 ,3 ]
Gao, Yang [1 ,3 ]
Sen, Sougata [4 ]
Katsaggelos, Aggelos K. [1 ,2 ]
Hester, Josiah [1 ,2 ,3 ]
Alshurafa, Nabil [1 ,2 ,3 ]
Affiliations
[1] Northwestern Univ, Dept Comp Sci, Chicago, IL 60611 USA
[2] Northwestern Univ, Elect & Comp Engn, Chicago, IL 60611 USA
[3] Northwestern Univ, Dept Prevent Med, Chicago, IL 60611 USA
[4] BITS, Dept Comp Sci & Informat Syst, Pilani, Goa, India
Source
2022 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS AND OTHER AFFILIATED EVENTS (PERCOM WORKSHOPS) | 2022
Funding
U.S. National Science Foundation
Keywords
Human Activity Recognition; Wearable Camera; Deep Learning; Image Obfuscation;
DOI
10.1109/PerComWorkshops53856.2022.9767447
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Automated detection and validation of fine-grained human activities from egocentric vision has gained increased attention in recent years due to the rich information afforded by RGB images. However, it is not easy to discern how much of this rich information is necessary to reliably detect the activity of interest. Localizing hands and in-hand objects in the image has proven helpful in distinguishing between hand-related fine-grained activities. This paper describes the design of a hand-object-based mask obfuscation method (HOBM) and assesses its effect on automated recognition of fine-grained human activities. HOBM masks all pixels other than the hand and the object in hand, improving the protection of personal user information (PUI). We test a deep learning model trained with and without obfuscation on a public egocentric activity dataset with 86 class labels and achieve nearly identical classification accuracies (a 2% decrease with obfuscation). Our findings show that it is possible to protect PUI at a small cost in image utility (loss of accuracy).
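The masking step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `hobm_obfuscate` is hypothetical, and the binary hand/object mask is assumed to come from a separate segmentation model.

```python
import numpy as np

def hobm_obfuscate(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out every pixel outside the hand/object-in-hand region.

    frame: H x W x 3 RGB image.
    mask:  H x W boolean array, True where a hand or in-hand object
           was detected (e.g. by a hand-object segmentation model).
    """
    obfuscated = np.zeros_like(frame)
    obfuscated[mask] = frame[mask]  # keep only masked-in pixels
    return obfuscated

# Toy example: a 4x4 "frame" in which only a 2x2 region is retained.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = hobm_obfuscate(frame, mask)
```

All pixels outside the mask become black, so background scene content (and with it much of the PUI) is removed before the frame reaches the activity-recognition model.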
Pages: 6