Multi-modal person identification in a smart environment

Cited by: 0
Authors:
Ekenel, Hazim Kemal [1 ]
Fischer, Mika [1 ]
Jin, Qin [2 ]
Stiefelhagen, Rainer [1 ]
Affiliations:
[1] Univ Karlsruhe, ISL, D-76131 Karlsruhe, Germany
[2] Carnegie Mellon Univ, ISL, Pittsburgh, PA 15213 USA
Keywords:
DOI: Not available
Chinese Library Classification (CLC): TP31 [Computer Software]
Discipline Codes: 081202; 0835
Abstract:
In this paper, we present a detailed analysis of multi-modal fusion for person identification in a smart environment. The multi-modal system consists of a video-based face recognition system and a speaker identification system. We investigated different score normalization, modality weighting, and modality combination schemes for fusing the individual modalities, and introduced two new modality weighting schemes: the cumulative ratio of correct matches (CRCM) and the distance-to-second-closest (DT2ND) measures. We also assessed the effect of well-known score normalization and classifier combination methods on identification performance. Experimental results on the CLEAR 2007 evaluation corpus, which contains audio-visual recordings from different smart rooms, show that CRCM-based modality weighting significantly improves the correct identification rate.
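For orientation, below is a minimal sketch of the kind of score-level fusion the abstract describes: per-identity scores from the face and speaker systems are normalized and combined with a weighted sum before selecting the best match. The min-max normalization, the fixed weights, and the helper names (min_max_normalize, weighted_sum_fusion) are illustrative assumptions; the paper's CRCM and DT2ND weighting schemes are not reproduced here.

import numpy as np

def min_max_normalize(scores):
    # Map raw matcher scores to [0, 1]; one of several normalization
    # schemes a fusion system might use (an assumption, not necessarily the paper's choice).
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def weighted_sum_fusion(face_scores, speaker_scores, w_face=0.6, w_speaker=0.4):
    # face_scores / speaker_scores: one similarity score per enrolled identity,
    # higher = better match. The weights are hypothetical placeholders for a
    # modality weighting scheme such as CRCM or DT2ND.
    f = min_max_normalize(face_scores)
    s = min_max_normalize(speaker_scores)
    fused = w_face * f + w_speaker * s
    return int(np.argmax(fused))  # index of the identified person

# Made-up scores for four enrolled identities.
face = np.array([0.62, 0.58, 0.91, 0.40])
speech = np.array([0.30, 0.75, 0.80, 0.55])
print(weighted_sum_fusion(face, speech))  # -> 2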
Pages: 2984+
Number of pages: 3
Related Papers (50 in total):
  • [21] CONTEXTUAL PERSON DETECTION IN MULTI-MODAL OUTDOOR SURVEILLANCE
    Robertson, Neil M.
    Letham, Jonathan
    2012 PROCEEDINGS OF THE 20TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2012, : 1930 - 1934
  • [22] Person Tracking Association Using Multi-modal Systems
    Belmonte-Hernandez, A.
    Solachidis, V.
    Theodoridis, T.
    Hernandez-Penaloza, G.
    Conti, G.
    Vretos, N.
    Alvarez, F.
    Daras, P.
    2017 14TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS), 2017,
  • [23] EgoCom: A Multi-Person Multi-Modal Egocentric Communications Dataset
    Northcutt, Curtis G.
    Zha, Shengxin
    Lovegrove, Steven
    Newcombe, Richard
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (06) : 6783 - 6793
  • [24] User Identification from Gait Analysis Using Multi-Modal Sensors in Smart Insole
    Choi, Sang-Il
    Moon, Jucheol
    Park, Hee-Chan
    Choi, Sang Tae
    SENSORS, 2019, 19 (17)
  • [25] Passive multi-modal sensors for the urban environment
    Ladas, A.
    Frankel, R.
    Unattended Ground Sensor Technologies and Applications VII, 2005, 5796 : 477 - 486
  • [26] Multi-modal uniform deep learning for RGB-D person re-identification
    Ren, Liangliang
    Lu, Jiwen
    Feng, Jianjiang
    Zhou, Jie
    PATTERN RECOGNITION, 2017, 72 : 446 - 457
  • [27] TriReID: Towards Multi-Modal Person Re-Identification via Descriptive Fusion Model
    Zhai, Yajing
    Zeng, Yawen
    Cao, Da
    Lu, Shaofei
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2022, 2022, : 63 - 71
  • [28] Graph based Spatial-temporal Fusion for Multi-modal Person Re-identification
    Zhang, Yaobin
    Lv, Jianming
    Liu, Chen
    Cai, Hongmin
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 3736 - 3744
  • [29] Multi-modal Interaction System for Smart TV Environments
    Lee, Injae
    Cha, Jihun
    Kwon, Ohseok
    2014 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM), 2014, : 263 - 266
  • [30] A Multi-Modal Approach to Creating Routines for Smart Speakers
    Barricelli, Barbara Rita
    Fogli, Daniela
    Iemmolo, Letizia
    Locoro, Angela
    PROCEEDINGS OF THE WORKING CONFERENCE ON ADVANCED VISUAL INTERFACES AVI 2022, 2022,