An Enterprise Multi-agent Model with Game Q-Learning Based on a Single Decision Factor

Cited: 1
Authors
Xu, Siying [1 ,2 ]
Zhang, Gaoyu [2 ]
Yuan, Xianzhi [3 ]
Affiliations
[1] Shanghai Univ Finance & Econ, Shanghai 200433, Peoples R China
[2] Shanghai Lixin Univ Accounting & Finance, Shanghai 201209, Peoples R China
[3] Chengdu Univ, Chengdu 610106, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SMEs; Multi-agent; Q-learning; Evolutionary gaming; PRODUCT INNOVATION; EVOLUTIONARY GAME; PROTOCOL;
DOI
10.1007/s10614-023-10524-x
CLC Number
F [Economics];
Subject Classification Code
02;
Abstract
In recent years, research on enterprise survival, development, and cooperation in the economic market has advanced rapidly. However, in most existing studies, traditional enterprise multi-agent models cannot effectively simulate the process of enterprise survival and development, because the fundamental characteristics used to describe enterprises in social networks, such as the values of enterprise multi-agent attributes, cannot change during the simulation. To address this problem, this article proposes an enterprise multi-agent model based on game Q-learning that simulates enterprise decision making, aiming to maximize enterprise benefits and optimize the effect of inter-firm cooperation. The Firm Q-Learning algorithm dynamically changes the attribute values of the enterprise multi-agent to optimize the game results in the evolutionary game model, thereby effectively simulating dynamic cooperation among enterprise agents. Simulation results show that the evolution of the enterprise multi-agent model based on game Q-learning reflects the process of real enterprise survival and development more realistically than a multi-agent simulation with fixed attribute values.
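The abstract describes agents that update their behavior via Q-learning while playing a repeated game with other firms. As a rough illustration only: the paper's actual Firm Q-Learning algorithm, payoffs, and attribute-update rules are not reproduced here; the sketch below applies plain stateless tabular Q-learning to a cooperate/defect game between two firm agents, with an entirely hypothetical prisoner's-dilemma-style payoff matrix and hyperparameters.

```python
import random

ACTIONS = ["cooperate", "defect"]

# Hypothetical payoff matrix: PAYOFF[(a, b)] = (reward to agent A, reward to agent B)
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

class FirmAgent:
    """Toy firm agent with a stateless (single-state) Q-table over two actions."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self):
        # Epsilon-greedy action selection: explore occasionally, else exploit.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Standard Q-learning update; with one state, the bootstrap term is
        # simply the maximum of the agent's own Q-values.
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (reward + self.gamma * best_next - self.q[action])

def simulate(rounds=5000, seed=0):
    """Run a repeated game between two learning firm agents."""
    random.seed(seed)
    a, b = FirmAgent(), FirmAgent()
    for _ in range(rounds):
        act_a, act_b = a.choose(), b.choose()
        r_a, r_b = PAYOFF[(act_a, act_b)]
        a.update(act_a, r_a)
        b.update(act_b, r_b)
    return a, b

agents = simulate()
```

In this simplified setting the learned Q-values play the role of the paper's mutable agent attributes: they start identical for all agents but diverge as each agent's game experience accumulates, which is the qualitative behavior the abstract contrasts with fixed-attribute simulations.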
Pages: 2523-2562 (40 pages)