Human-Like Context Sensing for Robot Surveillance

Cited by: 0
Authors
Giunchiglia, Fausto [1 ]
Bignotti, Enrico [1 ]
Zeni, Mattia [1 ]
Affiliations
[1] Univ Trento, DISI, Via Sommar 9, I-38123 Trento, Italy
Funding
European Union Horizon 2020
Keywords
Ontologies; context modeling; robotics; surveillance; sensors;
DOI
10.1142/S1793351X1840007X
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Robot surveillance requires robots to make sense of what is happening around them, which is what humans do with contexts. This is critical when robots have to interact with people. The main issue is therefore how to model human-like context and map it to robots, so that they can mirror human understanding. We propose a context model organized according to the different dimensions of the environment. We then introduce the notions of endurants and perdurants to account for how space and time, respectively, aggregate context for humans. To map real-world data, i.e. sensory inputs, to our context model, we propose a system capable of both managing the robot's sensors and interacting with sensors from other devices. The proposed use case is a robot that uses the system to fuse sensory inputs with the context model while patrolling a university building.
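
To make the ideas in the abstract concrete, the following minimal Python sketch illustrates one possible reading of the approach: a context organized along environment dimensions, with endurants (spatially persistent entities such as a corridor) and perdurants (temporally unfolding entities such as one leg of a patrol) aggregating sensory inputs from the robot and from other devices. All class, attribute, and source names here are hypothetical illustrations, not the authors' actual model or API.

# A minimal, hypothetical sketch of a context model with endurants and
# perdurants aggregating sensory inputs (names are assumptions, not the
# authors' implementation).
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Endurant:
    """An entity persisting through time, e.g. a corridor or a person."""
    name: str

@dataclass
class SensorReading:
    """A raw sensory input from the robot or from an external device."""
    source: str          # e.g. "robot.lidar" or "building.camera_3" (illustrative)
    value: str
    timestamp: datetime

@dataclass
class Perdurant:
    """An entity unfolding in time, e.g. one leg of a patrol."""
    label: str
    location: Endurant
    readings: List[SensorReading] = field(default_factory=list)

    def ingest(self, reading: SensorReading) -> None:
        """Aggregate a sensory input into this temporal slice of the context."""
        self.readings.append(reading)

# Usage: a robot patrolling a building maps readings into the context model.
corridor = Endurant(name="DISI corridor, 2nd floor")
patrol_leg = Perdurant(label="evening patrol", location=corridor)
patrol_leg.ingest(SensorReading(source="robot.lidar",
                                value="obstacle at 1.2 m",
                                timestamp=datetime.now()))
print(f"{patrol_leg.label} @ {patrol_leg.location.name}: "
      f"{len(patrol_leg.readings)} reading(s)")
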
Pages: 129-148
Number of pages: 20