Evaluating the Work Productivity of Assembling Reinforcement through the Objects Detected by Deep Learning

Cited by: 8
Authors
Li, Jiaqi [1]
Zhao, Xuefeng [1,2]
Zhou, Guangyi [1,3]
Zhang, Mingyuan [1]
Li, Dongfang [1,3]
Zhou, Yaochen [3]
Affiliations
[1] Dalian Univ Technol, Fac Infrastruct Engn, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, State Key Lab Coastal & Offshore Engn, Dalian 116024, Peoples R China
[3] Northeast Branch China Construct Eighth Engn Div, Dalian 116019, Peoples R China
Keywords
construction engineering; construction management; work productivity; computer vision; deep learning; ACTIVITY RECOGNITION; SURVEILLANCE VIDEOS; CRACK DETECTION; CLASSIFICATION; FRAMEWORK; EFFICIENT; TRACKING;
DOI
10.3390/s21165598
CLC Classification Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
With the rapid development of deep learning, computer vision has helped solve a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a case study and using object information detected by a deep learning algorithm, a computer vision-based method for evaluating the productivity of assembling reinforcement is proposed. First, a CenterNet-based detector that can accurately distinguish the various entities involved in assembling reinforcement is established, with DLA34 selected as the backbone; its mAP reaches 0.9682, and detecting a single image takes as little as 0.076 s. Second, the trained detector is applied to video frames, yielding images annotated with detection boxes and files containing the box coordinates. The positional relationship between the detected work objects and the detected workers is used to determine how many workers (N) participate in the task, and the time (T) needed to complete the process is obtained from the change in the work object's coordinates. Finally, productivity is evaluated from N and T. The authors validate the method on four videos of actual construction work, and the results show that the productivity evaluation is generally consistent with actual site conditions. The contribution of this research to construction management is twofold: on the one hand, a connection between individual workers and the work object is established without interfering with workers' normal behavior, enabling work productivity evaluation; on the other hand, the proposed method helps improve the efficiency of construction management.
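The abstract only outlines how N and T are derived from the detected boxes. The following is a minimal illustrative sketch, not the authors' code: it assumes per-frame detections are already available as lists of {"label", "box"} dictionaries with labels "worker" and "work_object", and it assumes a pixel proximity threshold, a displacement threshold, a frame rate, and a productivity measure of one task per (N x T), all of which are hypothetical choices rather than parameters reported in the paper.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

FPS = 25.0           # assumed video frame rate
PROXIMITY_PX = 150.0 # assumed radius linking a worker to the work object
MOVE_EPS_PX = 20.0   # assumed displacement marking the work object as moved

def center(box: Box) -> Tuple[float, float]:
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def dist(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def evaluate_task(frames: List[List[Dict]]) -> Tuple[int, float, float]:
    """Estimate (N workers, T seconds, productivity) for one assembling task."""
    n_workers = 0           # max number of workers seen near the work object
    start_frame = None      # first frame with a worker at the work object
    end_frame = None        # first frame where the work object has clearly moved
    ref_center = None       # work-object position when it first appears
    for f_idx, detections in enumerate(frames):
        objects = [d for d in detections if d["label"] == "work_object"]
        workers = [d for d in detections if d["label"] == "worker"]
        if not objects:
            continue
        obj_c = center(objects[0]["box"])
        if ref_center is None:
            ref_center = obj_c
        # Count workers whose box centers fall within the proximity radius
        near = sum(1 for w in workers
                   if dist(center(w["box"]), obj_c) < PROXIMITY_PX)
        if near > 0:
            n_workers = max(n_workers, near)
            if start_frame is None:
                start_frame = f_idx
        # Treat a clear displacement of the work object as task completion
        if dist(obj_c, ref_center) > MOVE_EPS_PX:
            end_frame = f_idx
    if start_frame is None or end_frame is None or end_frame <= start_frame:
        return 0, 0.0, 0.0
    t_seconds = (end_frame - start_frame) / FPS
    productivity = 1.0 / (n_workers * t_seconds)  # tasks per worker-second
    return n_workers, t_seconds, productivity

if __name__ == "__main__":
    # Tiny synthetic example: two workers near the object, which moves at the end.
    frames = [
        [{"label": "work_object", "box": (100, 100, 200, 200)},
         {"label": "worker", "box": (150, 150, 220, 300)},
         {"label": "worker", "box": (60, 120, 130, 280)}],
        [{"label": "work_object", "box": (100, 100, 200, 200)},
         {"label": "worker", "box": (150, 150, 220, 300)}],
        [{"label": "work_object", "box": (400, 100, 500, 200)}],
    ]
    print(evaluate_task(frames))  # -> (2, 0.08, 6.25)
```

In this sketch N is taken as the peak number of workers simultaneously within the proximity radius, and T as the span between the first worker-object association and the first clear displacement of the work object; the paper itself only states that N comes from the worker/work-object positional relationship and T from the change of the work object's coordinates, so the thresholds and the exact rules here are assumptions for illustration.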
Pages: 20