A Gradient Tracking Protocol for Optimization Over Nabla Fractional Multi-Agent Systems

Cited by: 4
Authors
Zhou, Shuaiyu [1]
Wei, Yiheng [1]
Liang, Shu [2,3]
Cao, Jinde [1]
Affiliations
[1] Southeast Univ, Sch Math, Nanjing 211189, Peoples R China
[2] Tongji Univ, Dept Control Sci & Engn, Shanghai 200092, Peoples R China
[3] Shanghai Res Inst Intelligent Autonomous Syst, Shanghai 201210, Peoples R China
Source
IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS | 2024, Vol. 10
Funding
National Natural Science Foundation of China
Keywords
Optimization; Convergence; Linear programming; Protocols; Information processing; Heuristic algorithms; Multi-agent systems; Gradient tracking; Nabla fractional system; Distributed optimization; Multi-agent network; Convex optimization; Linear convergence; Consensus; Algorithm
DOI
10.1109/TSIPN.2024.3402354
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
This paper investigates distributed consensus optimization over a class of nabla fractional multi-agent systems (nFMASs). The proposed approach builds upon conventional gradient tracking techniques and addresses the particular structure of the studied systems by introducing a fractional gradient tracking protocol based on global differential information of the optimization variables. The protocol applies to nabla fractional systems of any order less than 1 and extends to integer-order discrete-time systems. The distributed optimization algorithms derived from this protocol achieve globally exact convergence under fixed step sizes, thereby guaranteeing the feasibility of consensus optimization over nFMASs. Simulation results validate the effectiveness of the proposed algorithms.
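Note: the abstract names conventional gradient tracking as the foundation of the proposed fractional protocol. As a point of reference only, the sketch below implements the standard integer-order gradient tracking iteration x_i(k+1) = sum_j W_ij x_j(k) - alpha * y_i(k), y_i(k+1) = sum_j W_ij y_j(k) + grad f_i(x_i(k+1)) - grad f_i(x_i(k)); it is not the paper's nabla fractional algorithm, which involves a fractional nabla difference (one common Gruenwald-Letnikov-type definition of order 0 < a < 1 is nabla^a x(k) = sum_{j>=0} (-1)^j C(a, j) x(k - j)). The quadratic local costs, the 5-agent ring network, the Metropolis-style weight matrix, and the step size are illustrative assumptions, not values taken from the paper.

# Minimal sketch of standard (integer-order) gradient tracking over a network.
# All problem data below are illustrative assumptions; this is not the paper's
# nabla fractional protocol, only the conventional baseline it generalizes.
import numpy as np

n, d = 5, 2                           # number of agents, decision-variable dimension
rng = np.random.default_rng(0)

# Local strongly convex quadratic costs f_i(x) = 0.5 * x' A_i x - b_i' x.
A = [np.eye(d) * (1.0 + rng.random()) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

def grad(i, x):
    # Gradient of the i-th local cost.
    return A[i] @ x - b[i]

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.1                           # fixed step size
x = np.zeros((n, d))                  # local estimates, one row per agent
y = np.stack([grad(i, x[i]) for i in range(n)])   # gradient trackers, y_i(0) = grad f_i(x_i(0))

for _ in range(500):
    x_next = W @ x - alpha * y        # consensus step plus descent along the tracked gradient
    y = (W @ y
         + np.stack([grad(i, x_next[i]) for i in range(n)])
         - np.stack([grad(i, x[i]) for i in range(n)]))
    x = x_next

# Every row of x should approach the minimizer of the aggregate cost sum_i f_i.
x_star = np.linalg.solve(sum(A), sum(b))
print("distance of each agent to the global minimizer:", np.linalg.norm(x - x_star, axis=1))

For smooth, strongly convex local costs and a doubly stochastic W, this baseline converges linearly to the exact global minimizer under a sufficiently small fixed step size, which is the same kind of exact fixed-step convergence the abstract claims for the fractional protocol.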
Pages: 500-512
Page count: 13