The next-best-view for workpiece localization in robot workspace

Cited by: 2
Authors:
Hu, Jie [1 ]
Pagilla, Prabhakar R. [1 ]
Darbha, Swaroop [1 ]
Affiliations:
[1] Texas A&M Univ, Dept Mech Engn, College Stn, TX 77843 USA
Keywords: workpiece localization; robotics; manufacturing; next-best-view
DOI: 10.1109/AIM46487.2021.9517657
Chinese Library Classification: TP [Automation technology, Computer technology]
Discipline code: 0812
Abstract:
Workpiece localization is the process of obtaining the location of a workpiece in a reference frame of a robotic workspace. The location (position and orientation) is represented by the transformation between a local frame associated with the workpiece and the specified reference frame in the workspace. In this work, we study the workpiece localization problem without the two commonly adopted restrictive assumptions: that the data used to calculate the transformation is readily available and that the correspondence between the data sets used for calculation is known. The goal is to automate the localization process, from efficient data collection to determining the workpiece location in the workspace. We describe a strategy that includes the following aspects: predicting the correspondence between the measured data and the workpiece CAD model data; generating representative vectors that aid in determining the next-best-view for collecting new information about the workpiece location; evaluating a search region to find the next sensor location that satisfies both the robot kinematic and sensor field-of-view constraints while giving the maximum view gain; and calculating the rigid body transformation from the local frame to the world frame to localize the workpiece. Numerical simulation and experimental results are presented and discussed for the proposed strategy.
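The final step described in the abstract, computing the rigid body transformation between the workpiece's local frame and the world frame from corresponded point sets, is commonly solved in closed form with the Kabsch/SVD method. The sketch below is illustrative only, not the authors' implementation; the function and variable names are our own:

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate rotation R and translation t such that Q ≈ R @ P + t,
    given corresponded 3xN point sets P (model frame) and Q (world frame)."""
    p_bar = P.mean(axis=1, keepdims=True)      # centroid of model points
    q_bar = Q.mean(axis=1, keepdims=True)      # centroid of measured points
    H = (P - p_bar) @ (Q - q_bar).T            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Demo: recover a known transform from synthetic, noise-free data.
rng = np.random.default_rng(0)
P = rng.random((3, 20))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([[0.1], [0.2], [0.3]])
Q = R_true @ P + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```

In the paper's setting the correspondence between measured points and CAD model points is itself unknown and must be predicted, so a solver like this would sit inside an iterative match-then-align loop rather than be applied once.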
Pages: 1201-1206 (6 pages)