CULTURAL HERITAGE DIGITAL PRESERVATION THROUGH AI-DRIVEN ROBOTICS

Cited by: 0
Authors
Marchello, G. [2 ]
Giovanelli, R. [1 ,2 ,3 ]
Fontana, E. [2 ]
Cannella, F. [2 ]
Traviglia, A. [1 ]
Affiliations
[1] Ist Italiano Tecnol, Ctr Cultural Heritage Technol, I-30172 Venice, Italy
[2] Ist Italiano Tecnol, Ctr Convergent Technol, Ind Robot Facil, I-16163 Genoa, Italy
[3] Ca Foscari Univ Venice, DSU, I-3246 Venice, Italy
Keywords
Digital Twins; Robotics; Computer Vision; Structure from Motion; Artificial Intelligence; Cultural Heritage; PHOTOGRAMMETRY; MOTION;
DOI
10.5194/isprs-archives-XLVIII-M-2-2023-995-2023
CLC Number
K85 [Cultural Relics and Archaeology]
Subject Classification Code
0601 ;
Abstract
This paper introduces a novel methodology for creating 3D models of archaeological artifacts that reduces the time and effort required of operators. The approach uses a simple vision system mounted on a robotic arm that follows a predetermined path around the object to be reconstructed. The robotic system captures the object from different viewing angles and assigns to each acquisition the 3D coordinates of the robot's pose, allowing the trajectory to be adjusted to accommodate objects of various shapes and sizes. The angular displacement between consecutive acquisitions can also be fine-tuned according to the desired final resolution. This flexibility makes the approach suitable for different object sizes, textures, and levels of detail, from large volumes with low detail to small volumes with high detail. The recorded images and assigned coordinates are fed into a constrained implementation of the structure-from-motion (SfM) algorithm, which uses the scale-invariant feature transform (SIFT) method to detect key points in each image. By combining a priori knowledge of the coordinates with the SIFT algorithm, processing time is kept low while high accuracy is maintained in the final reconstruction. The use of a robotic system to acquire images at a pre-defined pace ensures high repeatability and consistency across different 3D reconstructions, eliminating operator error from the workflow. This approach not only allows comparisons between similar objects but also makes it possible to track structural changes in the same object over time. Overall, the proposed methodology offers a significant improvement over current photogrammetry techniques, reducing the time and effort required to create 3D models while maintaining a high level of accuracy and repeatability.
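The paper does not include an implementation; the following Python sketch is only a rough illustration of the pose-constrained SfM idea it describes, assuming OpenCV with SIFT support. The intrinsics K and the function names (circular_poses, triangulate_pair) are hypothetical, not taken from the paper: known robot poses are turned into projection matrices so that matched SIFT keypoints can be triangulated directly, skipping the camera-pose estimation stage of unconstrained SfM.

"""Pose-constrained two-view reconstruction sketch (assumes OpenCV)."""
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # hypothetical camera intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

def circular_poses(radius, n_views, height=0.0):
    """Camera poses on a circle around the object, one every
    360/n_views degrees (the tunable angular displacement)."""
    poses = []
    for i in range(n_views):
        theta = 2.0 * np.pi * i / n_views
        c = np.array([radius * np.cos(theta), radius * np.sin(theta), height])
        z = -c / np.linalg.norm(c)                  # optical axis toward object
        x = np.cross(np.array([0.0, 0.0, 1.0]), z)  # camera right axis
        x /= np.linalg.norm(x)
        y = np.cross(z, x)                          # completes right-handed frame
        R = np.stack([x, y, z])                     # world-to-camera rotation
        poses.append((R, -R @ c))                   # translation t = -R c
    return poses

def projection_matrix(R, t):
    """3x4 projection matrix P = K [R | t] from a known robot pose."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate_pair(img1, img2, pose1, pose2):
    """Match SIFT keypoints between two views and triangulate them
    using the a priori poses, with no pose-estimation step."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T
    pts4d = cv2.triangulatePoints(projection_matrix(*pose1),
                                  projection_matrix(*pose2), pts1, pts2)
    return (pts4d[:3] / pts4d[3]).T                 # Nx3 Euclidean points

In an unconstrained pipeline the relative pose between views would have to be recovered from an essential matrix and refined by bundle adjustment; supplying it from the robot's kinematics is what keeps processing time low and makes the reconstructions repeatable.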
Pages: 995 - 1000
Page count: 6
Related Papers
(50 records in total)
  • [42] INTELLECTUAL PROPERTY MANAGEMENT IN DIGITIZATION AND DIGITAL PRESERVATION OF CULTURAL HERITAGE
    Trencheva, Tereza
    Zdravkova-Velichkova, Evelina
    EDULEARN19: 11TH INTERNATIONAL CONFERENCE ON EDUCATION AND NEW LEARNING TECHNOLOGIES, 2019, : 6082 - 6087
  • [44] The Application of Virtual Reality Technology in the Digital Preservation of Cultural Heritage
    Zhong, Hong
    Wang, Leilei
    Zhang, Heqing
    COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2021, 18 (02) : 535 - 551
  • [45] Development of an AI-driven system for neurosurgery with a usability study: a step towards minimal invasive robotics
    Zeineldin, Ramy A.
    Junger, Denise
    Mathis-Ullrich, Franziska
    Burgert, Oliver
    AT-AUTOMATISIERUNGSTECHNIK, 2023, 71 (07) : 537 - 546
  • [46] Research on Digital Cultural Heritage Expansion Using AI Technology
    Park, Chan-Woo
    Kim, Hee-Kwon
    Lee, Jae-Ho
    2024 FIFTEENTH INTERNATIONAL CONFERENCE ON UBIQUITOUS AND FUTURE NETWORKS, ICUFN 2024, 2024, : 516 - 519
  • [47] The Cultural Heritage Preservation
    [Anonymous]
    DENKMALPFLEGE, 2014, 72 (01) : 3 - 3
  • [48] PRESERVATION OF CULTURAL HERITAGE
    Cristini, Valentina
    LOGGIA ARQUITECTURA & RESTAURACION, 2015, (28) : 152 - 152
  • [49] Enhancing Student Scholarly Writing Through AI-Driven Teaching Strategies
    Fritz, Ashlie
    Toothaker, Rebecca
    NURSE EDUCATOR, 2025,
  • [50] Resolving Engineering, Industrial and Healthcare Challenges through AI-Driven Applications
    Asvial, Muhamad
    Zagloel, Teuku Yuri M.
    Fitri, Ismi Rosyiana
    Kusrini, Eny
    Whulanza, Yudan
    INTERNATIONAL JOURNAL OF TECHNOLOGY, 2023, 14 (06) : 1177 - 1184