Reinforcement learning-based secure edge-enabled multi-task scheduling model for Internet of Everything applications

Cited by: 0
Authors
Kesavan, V. Thiruppathy [1 ,2 ]
Venkatesan, R. [3 ]
Wong, Wai Kit [4 ]
Ng, Poh Kiat [4 ]
Affiliations
[1] Dhanalakshmi Srinivasan Engn Coll, Fac Informat Technol, Perambalur 621212, Tamil Nadu, India
[2] Multimedia Univ, Fac Engn & Technol, Malacca, Malaysia
[3] SASTRA Deemed Univ, Sch Comp, Thanjavur, India
[4] Multimedia Univ, Fac Engn & Technol, Jalan Ayer Keroh Lama, Bukit Beruang 75450, Malaysia
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, No. 1
Keywords
Internet of everything; Security; Task scheduling; Reinforcement learning; Attacks; Energy utilization; Key generation; Internet of things;
DOI
10.1038/s41598-025-89726-2
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
The fast growth of the Internet of Everything (IoE) has resulted in an exponential rise in network data, increasing the demand for distributed computing. Data collection and management with job scheduling using wireless sensor networks are considered essential requirements of the IoE environment; however, security issues in online data scheduling and energy consumption must be addressed. The Secure Edge Enabled Multi-Task Scheduling (SEE-MTS) model is proposed to allocate jobs across machines while accounting for the availability of the relevant data and its copies. The approach leverages edge computing to improve the efficiency of IoE applications, addressing the growing need to manage the large volumes of data generated by IoE devices. The system protects users through dynamic updates, multi-key search generation, data encryption, and verification of search result accuracy. An MTS mechanism optimizes energy usage by allocating energy slots to the various data processing tasks. Energy requirements are assessed to allocate tasks and manage queues, preventing node overloading and minimizing system disruptions. In addition, reinforcement learning is applied to reduce the overall task completion time using minimal data. Results indicate that the SEE-MTS model achieves energy utilization of 4 J, a delay of 2 s, a reaction time of 4 s, energy efficiency of 89%, a security level of 96%, and a computation time of 6 s. SEE-MTS thus improves efficiency and security by reducing energy consumption, delay, reaction, and processing times, although real-world deployment may be constrained by the number of devices and the volume of incoming data.
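As a rough illustration of the scheduling idea summarized above, the sketch below shows a generic tabular Q-learning task scheduler that assigns incoming tasks to edge nodes, rewarding short completion times and penalizing energy-exhausted nodes. This is not the SEE-MTS algorithm: the number of nodes, state encoding, energy model, and reward weights are assumptions made purely for demonstration.

# Minimal sketch, assuming a generic tabular Q-learning scheduler
# (not the authors' implementation; all constants are illustrative).
import random
from collections import defaultdict

NUM_NODES = 3                      # hypothetical edge nodes
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = defaultdict(lambda: [0.0] * NUM_NODES)   # state -> Q-value per node

def node_state(loads):
    # Discretize each node's queue length into low/medium/high to keep the table small.
    return tuple(min(load // 2, 2) for load in loads)

for _ in range(EPISODES):
    loads = [0] * NUM_NODES        # tasks currently queued per node
    energy = [10.0] * NUM_NODES    # remaining energy slots per node
    for _task in range(20):        # one episode schedules 20 tasks
        state = node_state(loads)
        # Epsilon-greedy choice of the node that receives the task.
        if random.random() < EPSILON:
            action = random.randrange(NUM_NODES)
        else:
            action = max(range(NUM_NODES), key=lambda a: q_table[state][a])

        # Simulated cost: completion time grows with queue length; each task drains energy.
        completion_time = 1.0 + loads[action]
        energy[action] -= 0.5
        loads[action] += 1
        overload_penalty = 5.0 if energy[action] <= 0 else 0.0
        reward = -completion_time - overload_penalty

        # Standard Q-learning update toward the discounted best next value.
        next_state = node_state(loads)
        best_next = max(q_table[next_state])
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

After training, the greedy policy tends to spread tasks across nodes, which mirrors the stated goals of avoiding node overload and minimizing delay, though the paper's actual mechanism additionally handles encryption, multi-key search, and verification steps not modeled here.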
Pages: 30
Related Papers
50 items in total
  • [41] Deep Reinforcement Learning for Dynamic Task Scheduling in Edge-Cloud Environments
    Rani, D. Mamatha
    Supreethi, K. P.
    Jayasingh, Bipin Bihari
    INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, 2024, 15 (10) : 837 - 850
  • [42] A Multi-Layer Deep Reinforcement Learning Approach for Joint Task Offloading and Scheduling in Vehicular Edge Networks
    Wu, Jiaqi
    Ye, Ziyuan
    He, Lin
    Wang, Tong
    Gao, Lin
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3872 - 3877
  • [43] Digital Twin-Assisted Efficient Reinforcement Learning for Edge Task Scheduling
    Wang, Xiucheng
    Ma, Longfei
    Li, Haocheng
    Yin, Zhisheng
    Luan, Tom
    Cheng, Nan
    2022 IEEE 95TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-SPRING), 2022,
  • [44] Secure Transmission for Multi-UAV-Assisted Mobile Edge Computing Based on Reinforcement Learning
    Lu, Weidang
    Mo, Yandan
    Feng, Yunqi
    Gao, Yuan
    Zhao, Nan
    Wu, Yuan
    Nallanathan, Arumugam
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2023, 10 (03): : 1270 - 1282
  • [45] A Delay-Optimal Task Scheduling Strategy for Vehicle Edge Computing Based on the Multi-Agent Deep Reinforcement Learning Approach
    Nie, Xuefang
    Yan, Yunhui
    Zhou, Tianqing
    Chen, Xingbang
    Zhang, Dingding
    ELECTRONICS, 2023, 12 (07)
  • [46] Reinforcement learning-based task scheduling for heterogeneous computing in end-edge-cloud environment
    Shen, Wangbo
    Lin, Weiwei
    Wu, Wentai
    Wu, Haijie
    Li, Keqin
    CLUSTER COMPUTING, 2025, 28 (3)
  • [47] Multi-Agent Deep Reinforcement Learning Based Incentive Mechanism for Multi-Task Federated Edge Learning
    Zhao, Nan
    Pei, Yiyang
    Liang, Ying-Chang
    Niyato, Dusit
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (10) : 13530 - 13535
  • [48] Reputation-Aware Scheduling for Secure Internet of Drones: A Federated Multi-Agent Deep Reinforcement Learning Approach
    Moudoud, Hajar
    Abou El Houda, Zakaria
    Brik, Bouziane
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS, INFOCOM WKSHPS 2024, 2024,
  • [49] Mobile edge computing task distribution and offloading algorithm based on deep reinforcement learning in internet of vehicles
    Wang, Jianxi
    Wang, Liutao
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021,
  • [50] Deep Reinforcement Learning for Task Allocation in UAV-enabled Mobile Edge Computing
    Yu, Changliang
    Du, Wei
    Ren, Fan
    Zhao, Nan
    ADVANCES IN INTELLIGENT NETWORKING AND COLLABORATIVE SYSTEMS (INCOS-2021), 2022, 312 : 225 - 232