Multi-camera networks: Eyes from eyes

Cited by: 11
Authors
Fermüller, C [1 ]
Aloimonos, Y [1 ]
Baker, P [1 ]
Pless, R [1 ]
Neumann, J [1 ]
Stuart, B [1 ]
Affiliation
[1] Univ Maryland, Comp Vis Lab, College Pk, MD 20742 USA
Source
IEEE WORKSHOP ON OMNIDIRECTIONAL VISION, PROCEEDINGS | 2000
DOI
10.1109/OMNVIS.2000.853797
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Autonomous or semi-autonomous intelligent systems, in order to function appropriately, need to create models of their environment, i.e., models of space-time. These are descriptions of objects and scenes and descriptions of changes of space over time, that is, events and actions. Despite the large amount of research on this problem, as a community we are still far from developing robust descriptions of a system's spatiotemporal environment using video input (image sequences). Undoubtedly, some progress has been made regarding the understanding of estimating the structure of visual space, but it has not led to solutions to specific applications. There is, however, an alternative approach which is in line with today's "zeitgeist." The vision of artificial systems can be enhanced by providing them with new eyes. If conventional video cameras are put together in various configurations, new sensors can be constructed that have much more power and the way they "see" the world makes it much easier to solve problems of vision. This research is motivated by examining the wide variety of eye design in the biological world and obtaining inspiration for an ensemble of computational studies that relate how a system sees to what that system does (i.e., relating perception to action). This, coupled with the geometry of multiple views that has flourished in terms of theoretical results in the past few years, points to new ways of constructing powerful imaging devices which suit particular tasks in robotics, visualization, video processing, virtual reality and various computer vision applications, better than conventional cameras. This paper presents a number of new sensors that we built using common video cameras and shows their superiority with regard to developing models of space and motion.
Pages: 11-18
Page count: 8
Related Papers
50 records total
  • [11] Eyes in the Sky: Decentralized Control for the Deployment of Robotic Camera Networks
    Schwager, Mac
    Julian, Brian J.
    Angermann, Michael
    Rus, Daniela
    PROCEEDINGS OF THE IEEE, 2011, 99 (09) : 1541 - 1561
  • [12] Multi-Camera Saliency
    Luo, Yan
    Jiang, Ming
    Wong, Yongkang
    Zhao, Qi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2015, 37 (10) : 2057 - 2070
  • [13] Eyes have it for visionary camera
    [Anonymous]
    MATERIALS WORLD, 2004, 12 (09) : 3 - 3
  • [14] A Pocket Camera With Many Eyes
    Laroia, Rajiv
    IEEE SPECTRUM, 2016, 53 (11) : 35 - 40
  • [15] Correlation-Aware Packet Scheduling in Multi-Camera Networks
    Toni, Laura
    Maugey, Thomas
    Frossard, Pascal
    IEEE TRANSACTIONS ON MULTIMEDIA, 2014, 16 (02) : 496 - 509
  • [16] A Task-oriented Approach for Multi-Camera Person Tracking in Distributed Camera Networks
    Monari, Eduardo
    Kroschel, Kristian
    TM-TECHNISCHES MESSEN, 2010, 77 (10) : 530 - 537
  • [17] Multi-Camera Cinematography and Production
    [Anonymous]
    SIGHT AND SOUND, 2024, 34 (03): : 66 - 66
  • [18] Multi-camera video surveillance
    Ellis, T
    36TH ANNUAL 2002 INTERNATIONAL CARNAHAN CONFERENCE ON SECURITY TECHNOLOGY, PROCEEDINGS, 2002, : 228 - 233
  • [19] Multi-camera colour tracking
    Orwell, J
    Remagnino, P
    Jones, GA
    SECOND IEEE WORKSHOP ON VISUAL SURVEILLANCE (VS'99), PROCEEDINGS, 1999, : 14 - 21
  • [20] Calibration of Multi-Camera Systems
    Dondo, Diego Gonzalez
    Trasobares, Fernando
    Yoaquino, Leandro
    Padilla, Julian
    Redolfi, Javier
    2015 XVI WORKSHOP ON INFORMATION PROCESSING AND CONTROL (RPIC), 2015,