Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization

Cited by: 1
Authors
Zhang, Qixin [1 ]
Deng, Zengde [2 ]
Jian, Xiangru [3 ]
Chen, Zaiyi [2 ]
Hu, Haoyuan [2 ]
Yang, Yu [1 ]
Affiliations
[1] City Univ Hong Kong, Hong Kong, Peoples R China
[2] Cainiao Network, Hangzhou, Peoples R China
[3] Univ Waterloo, Waterloo, ON, Canada
Keywords
distributed data mining; online learning; submodular maximization;
DOI
10.1145/3583780.3614817
CLC number
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Maximizing a monotone submodular function is a fundamental task in data mining, machine learning, economics, and statistics. In this paper, we present two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and the per-round communication complexity from T^{3/2} to 1. The first, the One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW) algorithm, achieves a (1 - 1/e)-regret bound of O(T^{4/5}). As far as we know, this is the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization. Next, inspired by the non-oblivious boosting function [29], we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm, which attains a (1 - 1/e)-regret of O(√T). To the best of our knowledge, this is the first result to obtain the optimal O(√T) regret against a (1 - 1/e)-approximation with only one gradient inquiry for each local objective function per step. Finally, various experimental results confirm the effectiveness of the proposed methods.
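For reference, the (1 - 1/e)-regret mentioned above is the standard performance measure for online monotone continuous DR-submodular maximization over a convex, compact decision set K; the definitions below are the standard ones, stated here for context rather than quoted from the paper:

    R_{1-1/e}(T) = \left(1 - \frac{1}{e}\right) \max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x) - \sum_{t=1}^{T} f_t(x_t),

and a differentiable f_t is DR-submodular when \nabla f_t(x) \ge \nabla f_t(y) holds coordinate-wise for all x \le y. The non-oblivious boosting function of [29] replaces f_t by an auxiliary objective whose stationary points already guarantee a (1 - 1/e) fraction of the optimum; one commonly used form is

    F_t(x) = \int_0^1 \frac{e^{z-1}}{z} f_t(z x)\, dz, \qquad \nabla F_t(x) = \int_0^1 e^{z-1} \nabla f_t(z x)\, dz,

so a single unbiased estimate of \nabla F_t needs only one gradient inquiry of f_t, which is the source of the per-step savings claimed above.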
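To make the one-query-per-round idea concrete, here is a minimal, hypothetical Python/NumPy sketch of a single node's round in a decentralized boosting-gradient-ascent scheme. It assumes a box constraint set, a doubly stochastic mixing weight vector, and the integral form of the boosted gradient shown above; the names project_onto_box, boosted_gradient, and local_update, the sampling scheme, and the step-size handling are illustrative and are not taken from the paper's DOBGA pseudocode.

import numpy as np

def project_onto_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto a box constraint set (assumed here for simplicity).
    return np.clip(x, lo, hi)

def boosted_gradient(grad_f, x, num_samples=1, rng=None):
    # Monte-Carlo estimate of the boosted gradient \int_0^1 e^{z-1} grad_f(z x) dz:
    # sample z on [0, 1] with density proportional to e^{z-1} via inverse-CDF sampling,
    # then rescale by the normalizing constant (1 - e^{-1}).
    rng = np.random.default_rng() if rng is None else rng
    total = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.random()
        z = 1.0 + np.log(u * (1.0 - np.exp(-1.0)) + np.exp(-1.0))
        total += grad_f(z * x)
    return (1.0 - np.exp(-1.0)) * total / num_samples

def local_update(x_i, neighbor_xs, weights, grad_f_i, step_size):
    # One node's round: mix neighbors' decisions with doubly stochastic weights,
    # then take a projected ascent step along a boosted-gradient estimate.
    consensus = sum(w * x for w, x in zip(weights, neighbor_xs))
    return project_onto_box(consensus + step_size * boosted_gradient(grad_f_i, x_i))

# Toy usage with a separable concave (hence DR-submodular) local objective sum(log(1 + x_i)):
# grad = lambda x: 1.0 / (1.0 + x)
# x_next = local_update(np.full(3, 0.5), [np.full(3, 0.5)], [1.0], grad, 0.1)

Setting num_samples=1 corresponds to the single gradient inquiry per local function per step highlighted in the abstract, and the one mixing exchange per round is consistent with the stated constant per-round communication.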
Pages: 3330-3339
Page count: 10