Evaluating the Work Productivity of Assembling Reinforcement through the Objects Detected by Deep Learning

Cited by: 8
Authors
Li, Jiaqi [1 ]
Zhao, Xuefeng [1 ,2 ]
Zhou, Guangyi [1 ,3 ]
Zhang, Mingyuan [1 ]
Li, Dongfang [1 ,3 ]
Zhou, Yaochen [3 ]
Affiliations
[1] Dalian Univ Technol, Fac Infrastruct Engn, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, State Key Lab Coastal & Offshore Engn, Dalian 116024, Peoples R China
[3] Northeast Branch China Construct Eighth Engn Div, Dalian 116019, Peoples R China
Keywords
construction engineering; construction management; work productivity; computer vision; deep learning; ACTIVITY RECOGNITION; SURVEILLANCE VIDEOS; CRACK DETECTION; CLASSIFICATION; FRAMEWORK; EFFICIENT; TRACKING;
DOI
10.3390/s21165598
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
With the rapid development of deep learning, computer vision has helped solve a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a research case and using object information detected by a deep learning algorithm, a computer vision-based method for evaluating the productivity of assembling reinforcement is proposed. Firstly, a CenterNet-based detector that accurately distinguishes the various entities involved in assembling reinforcement is established, with DLA34 selected as the backbone; it reaches an mAP of 0.9682 and detects a single image in as little as 0.076 s. Secondly, the trained detector is applied to the video frames, yielding images with detection boxes and files of box coordinates. The spatial relationship between the detected work objects and the detected workers determines how many workers (N) participate in the task, and the time (T) taken to perform the process is obtained from the change in the work object's coordinates. Finally, productivity is evaluated according to N and T. Four actual construction videos are used for validation, and the results show that the productivity evaluation is generally consistent with actual site conditions. The contribution of this research to construction management is twofold: on the one hand, a connection between individual workers and the work object is established without affecting the workers' normal behavior, enabling work productivity evaluation; on the other hand, the proposed method helps improve the efficiency of construction management.
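The abstract states the N/T logic only at a high level. Below is a minimal sketch of that evaluation step, assuming the detector's per-frame output has already been parsed into bounding boxes; the helper names (near, evaluate_productivity), the 50-pixel proximity margin, and the final formula work_quantity / (N * T) are illustrative assumptions rather than the authors' published implementation.

    # Minimal sketch (hypothetical names/thresholds, not the authors' code):
    # derive N, T and a productivity figure from per-frame detection boxes.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Box:
        x1: float
        y1: float
        x2: float
        y2: float

    def near(worker: Box, obj: Box, margin: float = 50.0) -> bool:
        # Assumed proximity rule: the worker's box lies within `margin` pixels
        # of the work object's box in both axes.
        return not (worker.x2 < obj.x1 - margin or worker.x1 > obj.x2 + margin or
                    worker.y2 < obj.y1 - margin or worker.y1 > obj.y2 + margin)

    def evaluate_productivity(frames: List[Tuple[Box, List[Box]]],
                              fps: float,
                              work_quantity: float) -> Tuple[int, float, float]:
        # frames: one (work_object_box, worker_boxes) pair per video frame,
        # i.e. the coordinates exported by the trained detector.
        # N = peak number of workers near the work object in any frame.
        # T = duration (s) over which the work object's centre keeps moving.
        n_workers, active_frames, prev_centre = 0, 0, None
        for obj, workers in frames:
            n_workers = max(n_workers, sum(1 for w in workers if near(w, obj)))
            centre = ((obj.x1 + obj.x2) / 2.0, (obj.y1 + obj.y2) / 2.0)
            if prev_centre is not None and centre != prev_centre:
                active_frames += 1
            prev_centre = centre
        t_seconds = active_frames / fps
        rate = work_quantity / (n_workers * t_seconds) if n_workers and t_seconds else 0.0
        return n_workers, t_seconds, rate

For a validation video such as those mentioned above, work_quantity would be the amount of reinforcement assembled in the recorded task, so the returned rate reads as output per worker per second under these assumptions.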
Pages: 20
Related Papers
50 records in total
  • [21] Supply Chain Synchronization Through Deep Reinforcement Learning
    Jackson, Ilya
    TRANSBALTICA XII: TRANSPORTATION SCIENCE AND TECHNOLOGY, 2022, : 490 - 498
  • [22] Market Making With Signals Through Deep Reinforcement Learning
    Gasperov, Bruno
    Kostanjcar, Zvonko
    IEEE ACCESS, 2021, 9 : 61611 - 61622
  • [23] Detecting Phishing Websites through Deep Reinforcement Learning
    Chatterjee, Moitrayee
    Namin, Akbar Siami
    2019 IEEE 43RD ANNUAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE (COMPSAC), VOL 2, 2019, : 227 - 232
  • [24] Direct shape optimization through deep reinforcement learning
    Viquerat, Jonathan
    Rabault, Jean
    Kuhnle, Alexander
    Ghraieb, Hassan
    Larcher, Aurelien
    Hachem, Elie
    JOURNAL OF COMPUTATIONAL PHYSICS, 2021, 428
  • [25] Learn to Navigate Autonomously Through Deep Reinforcement Learning
    Wu, Keyu
    Wang, Han
    Esfahani, Mahdi Abolfazli
    Yuan, Shenghai
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2022, 69 (05) : 5342 - 5352
  • [26] Universal quantum control through deep reinforcement learning
    Niu, Murphy Yuezhen
    Boixo, Sergio
    Smelyanskiy, Vadim N.
    Neven, Hartmut
    NPJ QUANTUM INFORMATION, 2019, 5
  • [27] Maintaining flexibility in smart grid consumption through deep learning and deep reinforcement learning
    Gallego, Fernando
    Martin, Cristian
    Diaz, Manuel
    Garrido, Daniel
    ENERGY AND AI, 2023, 13
  • [28] Robust Deep Reinforcement Learning through Adversarial Loss
    Oikarinen, Tuomas
    Zhang, Wang
    Megretski, Alexandre
    Daniel, Luca
    Weng, Tsui-Wei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [29] Connectivity conservation planning through deep reinforcement learning
    Equihua, Julian
    Beckmann, Michael
    Seppelt, Ralf
    METHODS IN ECOLOGY AND EVOLUTION, 2024, 15 (04): 779 - 790
  • [30] Efficient Novelty Search Through Deep Reinforcement Learning
    Shi, Longxiang
    Li, Shijian
    Zheng, Qian
    Yao, Min
    Pan, Gang
    IEEE ACCESS, 2020, 8 : 128809 - 128818