A General Framework to Distribute Iterative Algorithms with Localized Information over Networks
RISE Research Institutes of Sweden, Digital Systems, Computer Science. ORCID iD: 0000-0001-5091-6285
MIT Massachusetts Institute of Technology, USA.
Stockholm University, Sweden.
KTH Royal Institute of Technology, Sweden.
2023 (English). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 68, no. 12, p. 7358-. Journal article (peer-reviewed). Published.
Abstract [en]

Emerging applications in IoT (Internet of Things) and edge computing/learning have sparked massive renewed interest in developing distributed versions of existing (centralized) iterative algorithms, often used for optimization or machine learning purposes. While existing work in the literature exhibits similarities, for the tasks of both algorithm design and theoretical analysis, there is still no unified method or framework for accomplishing these tasks. This paper develops such a general framework for distributing the execution of (centralized) iterative algorithms over networks in which the required information or data is partitioned between the nodes of the network. The paper furthermore shows that the distributed iterative algorithm resulting from the proposed framework retains the convergence properties (rate) of the original (centralized) iterative algorithm. In addition, the paper applies the proposed general framework to several interesting example applications, obtaining results comparable to the state of the art for each such example while greatly simplifying and generalizing their convergence analysis. These example applications yield new results for distributed proximal versions of gradient descent, the heavy-ball method, and Newton's method. For example, these results show that the dependence on the condition number for the convergence rate of this distributed heavy-ball method is at least as good as for centralized gradient descent.
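The paper's framework itself is not reproduced in this record, but the general pattern it addresses — each node mixing its local iterate with those of its network neighbours, then applying a locally computed update step — can be illustrated with a minimal decentralized gradient descent (DGD) sketch. Everything below (the ring topology, the scalar quadratic local objectives, the step size, all variable names) is an illustrative assumption, not the paper's construction:

```python
# Hedged sketch (not the paper's framework): decentralized gradient
# descent (DGD) on a ring of n nodes. Node i privately holds
# f_i(x) = 0.5 * (x - a[i])**2, so the minimizer of sum_i f_i is the
# average of a. No node ever sees another node's data a[j]; information
# spreads only through the neighbour-averaging (consensus) step.
n = 5
a = [float(i) for i in range(n)]   # local data, one scalar per node
x = [0.0] * n                      # each node's local estimate
alpha = 0.1                        # step size

for _ in range(500):
    # Mixing step: average with the two ring neighbours; the weights
    # 0.25 / 0.5 / 0.25 form a doubly stochastic mixing matrix.
    mixed = [0.25 * x[(i - 1) % n] + 0.5 * x[i] + 0.25 * x[(i + 1) % n]
             for i in range(n)]
    # Local step: grad f_i(x_i) = x_i - a[i], computed entirely at node i.
    x = [mixed[i] - alpha * (x[i] - a[i]) for i in range(n)]

print(x)  # all estimates cluster near the centralized minimizer 2.0
```

With a constant step size, plain DGD of this form converges only to a neighbourhood of the centralized minimizer (the average of the iterates does reach it); obtaining the exact centralized convergence rate, as the abstract claims for the proposed framework, requires more structure than this sketch shows.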

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023. Vol. 68, no. 12, p. 7358-
Keywords [en]
agents and autonomous systems, communication networks, Convergence, Distributed algorithms, Heuristic algorithms, Internet of Things, Iterative algorithms, Newton method, Optimization, optimization algorithms, Distributed computer systems, Heuristic methods, Learning systems, Newton-Raphson method, Number theory, Parallel algorithms, Agent and autonomous system, Centralised, Communications networks, Gradient-descent, Heuristics algorithm, Iterative algorithm, Newton's methods, Optimisations
HSV category
Identifiers
URN: urn:nbn:se:ri:diva-65622. DOI: 10.1109/TAC.2023.3279901. Scopus ID: 2-s2.0-85161038450. OAI: oai:DiVA.org:ri-65622. DiVA, id: diva2:1777524
Available from: 2023-06-29. Created: 2023-06-29. Last updated: 2025-09-23. Bibliographically checked.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text | Scopus

Person

Ohlson Timoudas, Thomas
