Evaluating multi-modal mobile behavioral biometrics using public datasets

Cited by: 9
|
Authors
Ray-Dowling, Aratrika [1 ]
Hou, Daqing [1 ]
Schuckers, Stephanie [1 ]
Barbir, Abbie [2 ]
Affiliations
[1] Clarkson Univ, Dept Elect & Comp Engn, 8 Clarkson Ave, Potsdam, NY 13699 USA
[2] CVS Hlth, Mobile Secur Grp, Lowell, MA USA
Funding
National Science Foundation (NSF);
Keywords
Performance evaluation; Behavioral biometric; Continuous authentication; Multi-modality; Likelihood ratio-based score fusion; Support vector machine; CONTINUOUS AUTHENTICATION; PHONES;
DOI
10.1016/j.cose.2022.102868
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline classification code
0812 ;
Abstract
Behavioral biometric-based continuous user authentication is promising for securing mobile phones while complementing traditional security mechanisms. However, the existing state of the art performs continuous authentication to evaluate deep learning models, but lacks an examination of different feature sets over the data. Therefore, we evaluate the performance of user authentication based on acceleration, gyroscope (angular velocity), and swipe data from two public mobile datasets, HMOG (Hand-Movement, Orientation, and Grasp) (Sitova et al., 2015) and BB-MAS (Behavioral Biometrics Multi-device and multi-Activity data from Same users) (Belman et al., 2019), extracted with different feature sets to observe the variation in authentication performance. We evaluate the performances of both individual modalities and their fusion. Since the swipe data is intermittent but the motion event data is continuous, we evaluate fusion of swipes with motion events that occur within the swipes versus fusion of motion events outside of swipes. Moreover, we extract Frank et al.'s (2012) Touchalytics features on the swipe data, but three different feature sets (median, HMOG (Sitova et al., 2015), and Shen's (Shen et al., 2017)) on the motion event data, among which Shen's features were shown to perform the best. More specifically, we perform score-level fusion for a single modality utilizing binary SVMs (Support Vector Machines). Furthermore, we evaluate the fusion of multiple modalities using Nandakumar et al.'s (2007) likelihood ratio-based score fusion, utilizing both one-class and binary SVMs. The best EERs (Equal Error Rates) of fusing all three modalities when using the one-class SVMs are 8.8% and 0.9% for HMOG and BB-MAS respectively. On the other hand, the best EERs in the case of binary SVMs are 1.5% and 0.2% respectively.
Observing the better performance of BB-MAS compared to HMOG in the swipe-based experiments, we examine the differences in swipe trajectory between the two datasets and find that BB-MAS has longer swipes than HMOG, which would explain the performance difference in the experiments. (c) 2022 Elsevier Ltd. All rights reserved.
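The abstract's core evaluation pipeline, modeling per-modality match-score distributions, fusing them via a likelihood ratio, and reporting an EER, can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors' code: it assumes Gaussian score distributions per modality (Nandakumar et al.'s method uses Gaussian mixtures), and the function names and synthetic scores are hypothetical.

```python
import numpy as np

def fit_gaussian(scores):
    # Fit a single Gaussian to one modality's match scores
    # (a simplification of the GMM density estimation in the paper's fusion method)
    return scores.mean(), scores.std() + 1e-9

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def likelihood_ratio_fusion(train_genuine, train_impostor, test_scores):
    """Fuse per-modality scores into one likelihood ratio per test sample.

    train_genuine, train_impostor, test_scores: (n_samples, n_modalities)
    arrays of match scores. Returns the product over modalities of
    p(score | genuine) / p(score | impostor).
    """
    lr = np.ones(test_scores.shape[0])
    for m in range(test_scores.shape[1]):
        mu_g, sd_g = fit_gaussian(train_genuine[:, m])
        mu_i, sd_i = fit_gaussian(train_impostor[:, m])
        lr *= gaussian_pdf(test_scores[:, m], mu_g, sd_g) / (
            gaussian_pdf(test_scores[:, m], mu_i, sd_i) + 1e-12)
    return lr

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: the operating point where false accept rate equals false reject rate."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2
```

In the paper's setting, the per-modality scores would come from one-class or binary SVM decision values for the swipe and motion-event modalities; here, synthetic scores suffice to exercise the fusion and EER logic.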
Pages: 21
Related Papers
50 records
  • [21] The Authentication System for Multi-modal Behavior Biometrics Using Concurrent Pareto Learning SOM
    Dozono, Hiroshi
    Ito, Shinsuke
    Nakakuni, Masanori
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2011, PT II, 2011, 6792 : 197 - +
  • [22] Multi-modal sensor localization using a mobile access point
    Sadler, BM
    Kozick, RJ
    Tong, L
    2005 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS 1-5: SPEECH PROCESSING, 2005, : 753 - 756
  • [23] A secure multi-modal biometrics using deep ConvGRU neural networks based hashing
    Sasikala, T. S.
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 235
  • [24] Integration of multi-modal datasets to estimate human aging
    Ribeiro, Rogerio
    Moraes, Athos
    Moreno, Marta
    Ferreira, Pedro G.
    MACHINE LEARNING, 2024, 113 (10) : 7293 - 7317
  • [25] Biometrics and forensics integration using deep multi-modal semantic alignment and joint embedding
    Toor, Andeep S.
    Wechsler, Harry
    PATTERN RECOGNITION LETTERS, 2018, 113 : 29 - 37
  • [26] A Generic Participatory Sensing Framework for Multi-modal Datasets
    Wu, Fang-Jing
    Luo, Tie
    2014 IEEE NINTH INTERNATIONAL CONFERENCE ON INTELLIGENT SENSORS, SENSOR NETWORKS AND INFORMATION PROCESSING (IEEE ISSNIP 2014), 2014,
  • [27] Multi-modal Stance Detection: New Datasets and Model
    Liang, Bin
    Li, Ang
    Zhao, Jingqian
    Gui, Lin
    Yang, Min
    Yu, Yue
    Wong, Kam-Fai
    Xu, Ruifeng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 12373 - 12387
  • [28] TOD and Multi-modal Public Transport
    Mees, Paul
    PLANNING PRACTICE AND RESEARCH, 2014, 29 (05): : 461 - 470
  • [29] Extraction of Temporal Patterns in Multi-rate and Multi-modal Datasets
    Liutkus, Antoine
    Simsekli, Umut
    Cemgil, A. Taylan
    LATENT VARIABLE ANALYSIS AND SIGNAL SEPARATION, LVA/ICA 2015, 2015, 9237 : 135 - 142
  • [30] Ameliorating the Accuracy & Dimensional Reduction of Multi-modal Biometrics by Deep Learning
    Raiu, Viswanadha
    Vidyasree, P.
    Patel, Ashok
    2021 IEEE AEROSPACE CONFERENCE (AEROCONF 2021), 2021,