Dynamic node selection in camera networks based on approximate reinforcement learning

Cited by: 0
Authors
Qian Li
Zhengxing Sun
Songle Chen
Shiming Xia
Institutions
[1] Nanjing University,State Key Laboratory for Novel Software Technology
[2] PLA University of Science and Technology,College of Meteorology and Oceanography
Keywords
Camera selection; Approximate reinforcement learning; Gaussian mixture model (GMM); Video analysis; Camera networks;
Abstract
In camera networks, dynamic node selection is an effective technique that enables video stream transmission with constrained network bandwidth, more economical node cooperation for nodes with constrained power supplies, and optimal use of a limited number of display terminals, particularly for applications that need to obtain high-quality video of specific targets. However, the nearest camera in a network cannot be identified by directional measurements alone. Furthermore, errors are introduced into computer vision algorithms by complex backgrounds, illumination, and other factors, causing unstable and jittery processing results. Consequently, in selecting camera network nodes, two issues must be addressed: First, a dynamic selection mechanism that can choose the most appropriate node is needed. Second, metrics to evaluate the visual information in a video stream must be modeled and adapted to various camera parameters, backgrounds, and scenes. This paper proposes a node selection method based on approximate reinforcement learning in which nodes are selected to obtain the maximum expected reward using approximate Q-learning. The Q-function is approximated by a Gaussian Mixture Model with parameters that are sequentially updated by a mini-batch stepwise Expectation–Maximization algorithm. To determine the most informative camera node dynamically, the immediate reward in Q-learning integrates the visibility, orientation, and image clarity of the object in view. Experimental results show that the proposed visual evaluation metrics can effectively capture the motion state of objects, and that the selection method reduces camera switching and related errors compared with state-of-the-art methods.
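The abstract describes approximate Q-learning in which the Q-function is approximated by a parametric model and the immediate reward combines visibility, orientation, and image clarity. The sketch below is illustrative only, not the paper's implementation: it substitutes fixed Gaussian (RBF) basis functions with learned linear weights for the full GMM fitted by mini-batch stepwise EM, and the reward weights and state encoding are assumptions.

```python
import math
import random

# Illustrative sketch of approximate Q-learning for camera-node selection.
# Assumptions: a scalar state in [0, 1], fixed Gaussian basis functions in
# place of the paper's GMM, and assumed reward weights for the three metrics.

CENTERS = [0.0, 0.5, 1.0]   # centers of the Gaussian basis functions
SIGMA = 0.25                # shared bandwidth of the basis functions

def features(state):
    """Gaussian basis features of a scalar state."""
    return [math.exp(-((state - c) ** 2) / (2 * SIGMA ** 2)) for c in CENTERS]

def q_value(weights, state, action):
    """Approximate Q(s, a) as a linear combination of Gaussian features."""
    return sum(w * f for w, f in zip(weights[action], features(state)))

def reward(visibility, orientation, clarity, a=0.4, b=0.3, c=0.3):
    """Immediate reward combining the three visual metrics
    (the weights a, b, c are assumed, not taken from the paper)."""
    return a * visibility + b * orientation + c * clarity

def select_node(weights, state, n_actions, eps=0.1):
    """Epsilon-greedy selection of a camera node by approximate Q-value."""
    if random.random() < eps:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_value(weights, state, a))

def q_update(weights, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.9):
    """One approximate Q-learning step: TD error drives a gradient update
    on the weights of the chosen action."""
    target = r + gamma * max(q_value(weights, s_next, b) for b in range(n_actions))
    td_error = target - q_value(weights, s, a)
    phi = features(s)
    weights[a] = [w + alpha * td_error * f for w, f in zip(weights[a], phi)]
```

In use, each decision step would select a node, observe the visual-metric reward from that node's frame, and call `q_update` so that future selections favor the more informative camera.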
Pages: 17393–17419
Number of pages: 26
Related Papers
50 items total
  • [21] Approximate reinforcement learning to control beaconing congestion in distributed networks
    J. Aznar-Poveda
    A.-J. García-Sánchez
    E. Egea-López
    J. García-Haro
    Scientific Reports, 2022, 12 (01)
  • [22] Reinforcement Learning-based Dynamic Service Placement in Vehicular Networks
    Talpur, Anum
    Gurusamy, Mohan
    2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), 2021
  • [24] Dynamic Selection of Priority Rules Based on Deep Reinforcement Learning for Rescheduling of RCPSP
    Wang, Teng
    Cheng, Wei
    Zhang, Yahui
    Hu, Xiaofeng
    IFAC-PapersOnLine, 2022, 55 (10): 2144-2149
  • [25] Dynamic Feature Selection for Solar Irradiance Forecasting Based on Deep Reinforcement Learning
    Lyu, Cheng
    Eftekharnejad, Sara
    Basumallik, Sagnik
    Xu, Chongfang
    IEEE Transactions on Industry Applications, 2023, 59 (01): 533-543
  • [26] Relay selection scheme based on deep reinforcement learning in wireless sensor networks
    Zhou, Dongmei
    Yan, Baowan
    Li, Cuiran
    Wang, Aihuan
    Wei, Haixia
    Physical Communication, 2022, 54
  • [27] Optimal Node Selection for Target Localization in Wireless Camera Sensor Networks
    Liu, Liang
    Zhang, Xi
    Ma, Huadong
    IEEE Transactions on Vehicular Technology, 2010, 59 (07): 3562-3576
  • [28] Automatic node selection and target tracking in wireless camera sensor networks
    Wang, Yong
    Wang, Dianhong
    Fang, Wu
    Computers & Electrical Engineering, 2014, 40 (02): 484-493
  • [29] Dynamic Leader Selection Based on Approximate Manipulability
    Sato, Hiroshi
    Kubo, Masao
    Shirakawa, Tomohiro
    Namatame, Akira
    2015 IEEE Congress on Evolutionary Computation (CEC), 2015: 1432-1437
  • [30] Data Driven State Reconstruction of Dynamical System Based on Approximate Dynamic Programming and Reinforcement Learning
    Da Silva, Fabio Nogueira
    Da Fonseca Neto, Joao Viana
    IEEE Access, 2021, 9 (09): 73299-73306