Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization

Cited by: 1
Authors
Zhang, Qixin [1 ]
Deng, Zengde [2 ]
Jian, Xiangru [3 ]
Chen, Zaiyi [2 ]
Hu, Haoyuan [2 ]
Yang, Yu [1 ]
Affiliations
[1] City Univ Hong Kong, Hong Kong, Peoples R China
[2] Cainiao Network, Hangzhou, Peoples R China
[3] Univ Waterloo, Waterloo, ON, Canada
Keywords
distributed data mining; online learning; submodular maximization;
DOI
10.1145/3583780.3614817
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Maximizing a monotone submodular function is a fundamental task in data mining, machine learning, economics, and statistics. In this paper, we present two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and the per-round communication complexity from T^(3/2) to 1. The first one, One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW), achieves a (1 - 1/e)-regret bound of O(T^(4/5)). As far as we know, this is the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization. Next, inspired by the non-oblivious boosting function [29], we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm, which attains a (1 - 1/e)-regret of O(√T). To the best of our knowledge, this is the first result to obtain the optimal O(√T) regret against a (1 - 1/e)-approximation with only one gradient inquiry for each local objective function per step. Finally, various experimental results confirm the effectiveness of the proposed methods.
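To illustrate the boosting idea the abstract references (this is a centralized, offline sketch, not the paper's DOBGA procedure), the snippet below runs projected gradient ascent on the non-oblivious surrogate F(x) = ∫₀¹ (e^(z-1)/z) f(zx) dz, whose gradient ∇F(x) = ∫₀¹ e^(z-1) ∇f(zx) dz is estimated by Monte Carlo sampling. The quadratic objective, box constraint, and all variable names are illustrative assumptions for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Toy monotone continuous DR-submodular objective on [0, 1]^n:
# f(x) = b.x - 0.5 x^T A x with A entrywise nonnegative (DR-submodular)
# and b chosen so that grad f >= 0 on the box (monotone).
A = np.abs(rng.normal(size=(n, n)))
A = (A + A.T) / 2
b = A.sum(axis=1) + 1.0

def f(x):
    return b @ x - 0.5 * x @ A @ x

def grad_f(x):
    return b - A @ x

def boosted_grad(x, samples=64):
    # Monte Carlo estimate of the boosted gradient
    # grad F(x) = integral_0^1 e^(z-1) grad f(z * x) dz
    z = rng.uniform(0.0, 1.0, size=samples)
    return np.mean([np.exp(zi - 1.0) * grad_f(zi * x) for zi in z], axis=0)

def project_box(x):
    # Euclidean projection onto the feasible set [0, 1]^n
    return np.clip(x, 0.0, 1.0)

# Projected (boosted) gradient ascent
x = np.zeros(n)
eta = 0.05
for _ in range(300):
    x = project_box(x + eta * boosted_grad(x))
```

The point of the surrogate is that plain gradient ascent on F inherits a (1 - 1/e) approximation guarantee for the original monotone DR-submodular f, at the cost of a single (stochastic) gradient evaluation per step, which is what makes the one-query-per-round online variant possible.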
Pages: 3330-3339
Page count: 10
Related papers
50 records in total
  • [1] Sample Efficient Decentralized Stochastic Frank-Wolfe Methods for Continuous DR-Submodular Maximization
    Gao, Hongchang
    Xu, Hanzi
    Vucetic, Slobodan
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3501 - 3507
  • [2] Continuous DR-submodular Maximization: Structure and Algorithms
    Bian, An
    Levy, Kfir Y.
    Krause, Andreas
    Buhmann, Joachim M.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [3] Continuous Profit Maximization: A Study of Unconstrained Dr-Submodular Maximization
    Guo, Jianxiong
    Wu, Weili
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2021, 8 (03) : 768 - 779
  • [4] Online Non-Monotone DR-Submodular Maximization
    Nguyen Kim Thang
    Srivastav, Abhinav
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 9868 - 9876
  • [5] Online Continuous DR-Submodular Maximization with Long-Term Budget Constraints
    Sadeghi, Omid
    Fazel, Maryam
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 4410 - 4418
  • [6] Decentralized Gradient Tracking for Continuous DR-Submodular Maximization
    Xie, Jiahao
    Zhang, Chao
    Shen, Zebang
    Mi, Chao
    Qian, Hui
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [7] Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
    Niazadeh, Rad
    Roughgarden, Tim
    Wang, Joshua R.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [8] Online DR-Submodular Maximization: Minimizing Regret and Constraint Violation
    Raut, Prasanna
    Sadeghi, Omid
    Fazel, Maryam
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 9395 - 9402
  • [9] Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
    Niazadeh, Rad
    Roughgarden, Tim
    Wang, Joshua R.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21