Depth-Based Efficient PnP: A Rapid and Accurate Method for Camera Pose Estimation

Times Cited: 0
Authors
Xie, Xinyue [1 ]
Zou, Deyue [1 ,2 ]
Affiliations
[1] Dalian Univ Technol, Dalian 116024, Peoples R China
[2] Harbin Inst Technol, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cameras; Accuracy; Uncertainty; Three-dimensional displays; Computational efficiency; Pose estimation; Robustness; Optimization; perspective-n-point (PnP); real-time pose estimation; SLAM; vision-based navigation;
DOI
10.1109/LRA.2024.3438037
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202; 1405;
Abstract
This letter presents DEPnP (Depth-based Efficient PnP), a novel approach to the Perspective-n-Point (PnP) problem, which estimates the pose of a calibrated camera from the 2D projections of known 3D points onto the image plane and is central to vision-based navigation and SLAM (Simultaneous Localization and Mapping) in robotics and automation. The method uses eight variables to control the depths of the control points and the orientation of the camera, formulating camera pose estimation as an optimization task. By optimizing these variables with mean-subtracted rotation equations, rapid and accurate camera pose estimation is achieved. Notably, the careful choice of variables and objective function simplifies computation of the Jacobian matrix, ensuring computational efficiency. DEPnP is robust to noise and inlier disturbances, consistently delivering accurate camera pose estimates. Experimental evaluations validate the effectiveness and accuracy of DEPnP, positioning it as a competitive solution for real-time applications requiring precise camera pose estimation in robotics and automation. Our code has been open-sourced on GitHub.
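For readers unfamiliar with the problem setting, the sketch below illustrates the PnP objective the abstract refers to: given camera intrinsics and 3D-2D correspondences, a PnP solver seeks the pose (R, t) minimizing reprojection error. This is a minimal, hypothetical illustration with synthetic data, not the paper's DEPnP algorithm; all names (`project`, `reprojection_error`) and numeric values are assumptions for the example.

```python
import numpy as np

# Hypothetical illustration of the PnP objective (NOT the DEPnP solver):
# a calibrated camera's pose (R, t) is the minimizer of the reprojection
# error of known 3D points against their observed 2D image projections.

def project(points_3d, R, t, K):
    """Project 3D world points into the image using pose (R, t) and intrinsics K."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    uv = cam @ K.T                     # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective division

def reprojection_error(points_3d, points_2d, R, t, K):
    """Mean Euclidean reprojection error, the objective a PnP solver minimizes."""
    pred = project(points_3d, R, t, K)
    return np.linalg.norm(pred - points_2d, axis=1).mean()

# Synthetic setup: intrinsics, a ground-truth pose, and n >= 4 points in
# front of the camera (illustrative values only).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 5.0])
pts3d = np.random.default_rng(0).uniform(-1.0, 1.0, size=(6, 3))
pts2d = project(pts3d, R_true, t_true, K)

# The true pose attains (numerically) zero error; a perturbed pose scores worse.
err_true = reprojection_error(pts3d, pts2d, R_true, t_true, K)
err_bad = reprojection_error(pts3d, pts2d, R_true, t_true + np.array([0.5, 0.0, 0.0]), K)
```

A solver such as DEPnP searches over a pose parameterization (here, per the abstract, eight variables controlling control-point depths and camera orientation) to drive this error to its minimum.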
Pages: 9287-9294
Number of Pages: 8