A technology for generation of space object optical image based on 3D point cloud model

Cited by: 0
Authors
Lu T. [1 ]
Li X. [1 ]
Zhang Y. [1 ]
Yan Y. [1 ]
Yang W. [2 ]
Affiliations
[1] Research and Development Department, China Academy of Launch Vehicle Technology, Beijing
[2] Key Laboratory of Grain Information Processing and Control, Ministry of Education, Henan University of Technology, Zhengzhou
Keywords
Artificial intelligence; Point cloud model; Projective transformation; Simulated image; Space object
DOI
10.13700/j.bh.1001-5965.2019.0189
Abstract
The lack of prior image data in space exploration tasks makes it difficult to quantitatively test and evaluate situation awareness and navigation algorithms based on optical images. Accordingly, in this paper, we present an algorithm for generating synthetic optical images of space objects based on a 3D point cloud model and the basic theory of projective transformation. First, the 3D point cloud model of the space object and the optical camera model were constructed. Then, the correspondences between all pixels in the image plane and the space points of the 3D point cloud model were obtained via projective transformation, the intensity of each pixel in the image plane was calculated from the lighting direction at its corresponding space point using the Lambertian reflection model, and finally the simulated image was generated. Extensive simulation experiments demonstrate that the proposed algorithm produces more vivid simulated images more rapidly than the traditional analytical image generation algorithm, and that the generated images can be applied to qualitatively and quantitatively testing and evaluating typical space application algorithms such as ellipse fitting, crater detection, optical navigation for planetary landing, automated rendezvous and docking of spacecraft, and 3D tracking of spacecraft. © 2020, Editorial Board of JBUAA. All rights reserved.
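The pipeline sketched in the abstract (pinhole projection of a point cloud followed by Lambertian shading) can be illustrated with a minimal Python sketch. This is not the authors' implementation; the function name, parameters, and the simple nearest-point z-buffer are illustrative assumptions, and the intensity follows the standard Lambertian model I = albedo · max(0, n · l).

```python
import numpy as np

def render_lambertian(points, normals, K, R, t, light_dir, shape, albedo=0.9):
    """Project a 3D point cloud into the image plane and shade it.

    points:    (N, 3) space points in world coordinates
    normals:   (N, 3) unit surface normals
    K:         (3, 3) camera intrinsic matrix
    R, t:      camera extrinsic rotation (3, 3) and translation (3,)
    light_dir: unit vector from the surface toward the light source
    shape:     (height, width) of the output image
    """
    h, w = shape
    img = np.zeros((h, w))
    depth = np.full((h, w), np.inf)

    cam = (R @ points.T).T + t           # world frame -> camera frame
    proj = (K @ cam.T).T                 # pinhole projective transformation
    uv = proj[:, :2] / proj[:, 2:3]      # perspective divide -> pixel coords

    # Lambertian reflection: intensity depends only on normal vs. light dir
    inten = albedo * np.clip(normals @ light_dir, 0.0, None)

    for (u, v), z, i in zip(uv, cam[:, 2], inten):
        c, r = int(round(u)), int(round(v))
        if 0 <= r < h and 0 <= c < w and z < depth[r, c]:
            depth[r, c] = z              # z-buffer: keep the nearest point
            img[r, c] = i
    return img
```

A dense point cloud makes this per-point splatting fill the image plane; the paper's pixel-to-space-point correspondence would additionally resolve pixels with no projected point, which this sketch leaves black.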
Pages: 274-286
Page count: 12