Signal Processing, Volume 183, June 2021

Title: Convergence behavior of diffusion stochastic gradient descent algorithm

Authors: Abdorreza Savadi, Hadi Sadoghi Yazdi

Full-text access is not permitted by the authors.

Citation: BibTeX | EndNote


Stochastic gradient descent (SGD) is a well-known method in machine learning that takes advantage of lower computational complexity for large-scale problems. Distributed learning provides a good framework for managing such problems: it avoids aggregating data at a central workstation and saves time and energy. Diffusion strategies can be applied to solve distributed learning problems. In this paper, we present a new distributed algorithm of the SGD type based on diffusion strategies, called the diffusion SGD algorithm, and investigate its convergence behavior for solving linear prediction problems. We prove the convergence of the proposed algorithm by formulating its progression mathematically and derive an upper bound on the errors made by the update rule. Experiments on system identification problems confirm our theoretical findings. Simulation results compared with state-of-the-art methods show that the diffusion SGD algorithm achieves good performance and an acceptable convergence rate.
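As a rough illustration of the kind of algorithm the abstract describes, the sketch below implements a generic adapt-then-combine (ATC) diffusion SGD update for distributed linear prediction. The ring topology, uniform combination weights, step size, and data model here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, T = 5, 4, 2000              # nodes, model dimension, iterations
mu = 0.02                         # SGD step size (assumed)
w_star = rng.standard_normal(M)   # common unknown model shared by all nodes

# Ring topology with uniform combination weights; each column of A sums to 1.
A = np.zeros((K, K))
for k in range(K):
    for l in (k - 1, k, (k + 1) % K):
        A[l % K, k] = 1.0 / 3.0

W = np.zeros((K, M))              # one local estimate per node
for t in range(T):
    psi = np.empty_like(W)
    for k in range(K):
        x = rng.standard_normal(M)                      # regressor at node k
        d = x @ w_star + 0.01 * rng.standard_normal()   # noisy measurement
        psi[k] = W[k] + mu * (d - x @ W[k]) * x         # adapt: local SGD step
    W = A.T @ psi                                       # combine: weighted neighbor average

# Network mean-square deviation from the true model
msd = np.mean(np.sum((W - w_star) ** 2, axis=1))
```

Each node first takes a local SGD step on its own streaming data (adapt), then averages the intermediate estimates of its neighbors (combine); diffusion of information through the network is what lets every node converge toward the common model without central data aggregation.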


Keywords: Adaptive Networks, Diffusion Strategies, Distributed Learning, Stochastic Gradient Descent.

@article{Savadi2021,
author = {Savadi, Abdorreza and Sadoghi Yazdi, Hadi},
title = {Convergence behavior of diffusion stochastic gradient descent algorithm},
journal = {Signal Processing},
year = {2021},
volume = {183},
month = {June},
issn = {0165-1684},
keywords = {Adaptive Networks; Diffusion Strategies; Distributed Learning; Stochastic Gradient Descent},
}


%0 Journal Article
%T Convergence behavior of diffusion stochastic gradient descent algorithm
%A Savadi, Abdorreza
%A Sadoghi Yazdi, Hadi
%J Signal Processing
%@ 0165-1684
%D 2021