Multi-camera networks: Eyes from eyes

Cited by: 11
Authors
Fermüller, C [1]
Aloimonos, Y [1]
Baker, P [1]
Pless, R [1]
Neumann, J [1]
Stuart, B [1]
Affiliations
[1] Univ Maryland, Comp Vis Lab, College Park, MD 20742, USA
DOI
10.1109/OMNVIS.2000.853797
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Autonomous or semi-autonomous intelligent systems, in order to function appropriately, need to create models of their environment, i.e., models of space-time. These are descriptions of objects and scenes, and descriptions of changes of space over time, that is, events and actions. Despite the large amount of research on this problem, as a community we are still far from developing robust descriptions of a system's spatiotemporal environment from video input (image sequences). Undoubtedly, some progress has been made in understanding how to estimate the structure of visual space, but it has not led to solutions for specific applications. There is, however, an alternative approach, one in line with today's "zeitgeist": the vision of artificial systems can be enhanced by providing them with new eyes. By assembling conventional video cameras in various configurations, new sensors can be constructed that are far more powerful and whose way of "seeing" the world makes many problems of vision much easier to solve. This research is motivated by the wide variety of eye designs in the biological world, which inspires an ensemble of computational studies relating how a system sees to what that system does (i.e., relating perception to action). This, coupled with the geometry of multiple views, which has flourished in terms of theoretical results in the past few years, points to new ways of constructing powerful imaging devices that suit particular tasks in robotics, visualization, video processing, virtual reality, and various computer vision applications better than conventional cameras do. This paper presents a number of new sensors built from common video cameras and demonstrates their superiority for developing models of space and motion.
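The compound sensors the abstract describes combine several conventional cameras mounted rigidly together; the key geometric step is to express every pixel of every camera as a viewing ray in one shared rig coordinate frame, so the ensemble can be treated as a single generalized imaging device for structure and motion estimation. Below is a minimal sketch of that step, assuming calibrated pinhole cameras with known intrinsics K and known rig poses (R, t); the specific matrices and offsets are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def pixel_to_ray(u, v, K, R, t):
    """Back-project pixel (u, v) of one camera into the shared rig frame.

    K    : 3x3 intrinsic matrix (assumed known from calibration)
    R, t : rotation (3x3) and translation (3,) mapping camera frame -> rig frame
    Returns (origin, direction): a 3D viewing ray in the common frame.
    """
    # Direction of the viewing ray in the camera's own frame.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate into the rig frame; the camera center t is the ray origin.
    d_rig = R @ d_cam
    return t, d_rig / np.linalg.norm(d_rig)

# Hypothetical two-camera rig looking in different directions.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
cams = [
    (np.eye(3), np.zeros(3)),                       # camera 0 at rig origin
    (np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]]),  # camera 1 rotated 90 deg about y
     np.array([0.2, 0.0, 0.0])),                    # ... and offset by 20 cm
]
rays = [pixel_to_ray(320, 240, K, R, t) for R, t in cams]
for origin, direction in rays:
    print("origin", origin, "direction", direction)
```

Once all pixels are mapped to rays in one frame this way, the multi-camera assembly behaves like a single wide-field eye, which is what makes motion and structure estimation easier than with any one conventional camera.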
Pages: 11-18
Page count: 8
Related Papers
50 records in total
  • [1] Multi-Camera Networks for Coverage Control of Drones
    Huang, Sunan
    Teo, Rodney Swee Huat
    Leong, William Wai Lun
    DRONES, 2022, 6 (03)
  • [2] Distributed Sensing and Processing for Multi-Camera Networks
    Sankaranarayanan, Aswin C.
    Chellappa, Rama
    Baraniuk, Richard G.
    DISTRIBUTED VIDEO SENSOR NETWORKS, 2011, : 85 - 101
  • [3] Coverage Enhancement for Deployment of Multi-camera Networks
    Zhang, Xuebo
    Alarcon-Herrera, Jose Luis
    Chen, Xiang
    2015 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2015, : 909 - 914
  • [4] Scheduling for Multi-Camera Surveillance in LTE Networks
    Wang, Chih-Hang
    Yang, De-Nian
    Chen, Wen-Tsuen
2015 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2015
  • [5] Computing camera positions from a multi-camera head
    Roth, G
    THIRD INTERNATIONAL CONFERENCE ON 3-D DIGITAL IMAGING AND MODELING, PROCEEDINGS, 2001, : 135 - 142
  • [6] Multi-camera networks for motion parameter estimation of an aircraft
    Guan, Banglei
    Sun, Xiangyi
    Shang, Yang
    Zhang, Xiaohu
    Hofer, Manuel
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2017, 14 (01)
  • [7] Layered and collaborative gesture analysis in multi-camera networks
    Aghajan, Hamid
    2007 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PTS 1-3, 2007, : 1377 - 1380
  • [8] Improved Adaptivity and Robustness in Decentralised Multi-Camera Networks
    Esterle, Lukas
    Rinner, Bernhard
    Lewis, Peter R.
    Yao, Xin
2012 SIXTH INTERNATIONAL CONFERENCE ON DISTRIBUTED SMART CAMERAS (ICDSC), 2012
  • [9] Multi-camera people tracking using Bayesian Networks
    Tan, MH
    Ranganath, S
    ICICS-PCM 2003, VOLS 1-3, PROCEEDINGS, 2003, : 1335 - 1340
  • [10] An Efficient System for Vehicle Tracking in Multi-Camera Networks
    Dixon, Michael
    Jacobs, Nathan
    Pless, Robert
    2009 THIRD ACM/IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED SMART CAMERAS, 2009, : 232 - 239