A General Framework to Distribute Iterative Algorithms with Localized Information over Networks
RISE Research Institutes of Sweden, Digital Systems, Data Science. ORCID iD: 0000-0001-5091-6285
Massachusetts Institute of Technology (MIT), USA.
Stockholm University, Sweden.
KTH Royal Institute of Technology, Sweden.
2023 (English). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 68, no. 12, p. 7358-. Article in journal (Refereed). Published.
Abstract [en]

Emerging applications in the Internet of Things (IoT) and edge computing/learning have sparked massive renewed interest in developing distributed versions of existing (centralized) iterative algorithms, often used for optimization or machine learning. While existing work in the literature exhibits similarities in both algorithm design and theoretical analysis, there is still no unified method or framework for accomplishing these tasks. This paper develops such a general framework for distributing the execution of (centralized) iterative algorithms over networks in which the required information or data is partitioned between the nodes of the network. It further shows that the distributed iterative algorithm resulting from the proposed framework retains the convergence properties (rate) of the original (centralized) iterative algorithm. The paper then applies the proposed framework to several example applications, obtaining results comparable to the state of the art for each example while greatly simplifying and generalizing their convergence analysis. These applications yield new results for distributed proximal versions of gradient descent, the heavy-ball method, and Newton's method; for example, they show that the condition-number dependence of the convergence rate of the distributed heavy-ball method is at least as good as that of centralized gradient descent.
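To put the final claim in context: for an L-smooth, mu-strongly convex objective with condition number kappa = L/mu, the standard centralized rates (which I am confident of; they are textbook results) are

```latex
% Centralized gradient descent with step size \alpha = 2/(L+\mu):
\|x_{k+1} - x^\star\| \;\le\; \frac{\kappa - 1}{\kappa + 1}\,\|x_k - x^\star\|,
% Polyak's heavy-ball method (strongly convex quadratics, optimal parameters):
\rho_{\mathrm{HB}} \;=\; \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1},
```

so "at least as good as centralized gradient descent" means the distributed heavy-ball method's rate degrades no faster in kappa than (kappa - 1)/(kappa + 1).

Since no full text is available in DiVA, the paper's actual framework cannot be reproduced here. The sketch below is only a minimal, generic example of the kind of distributed iteration the abstract describes: decentralized gradient descent (DGD) for a least-squares problem whose data is partitioned across the nodes, each of which alternates a consensus-averaging step over the network with a gradient step on its local data. The problem sizes, ring topology, step size, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal, generic sketch of decentralized gradient descent (DGD) for a
# least-squares problem partitioned across n_nodes agents.  This is NOT the
# paper's framework (the full text is not in DiVA); it only illustrates the
# class of algorithms the abstract describes: each node mixes its iterate
# with its neighbours' (consensus) and then descends on its own local data.

rng = np.random.default_rng(0)
n_nodes, d, rows_per_node = 4, 3, 10

# Node i privately holds (A_i, b_i); the global objective is
#   f(x) = (1/2) * sum_i ||A_i x - b_i||^2,  minimized at x_true below.
A = [rng.standard_normal((rows_per_node, d)) for _ in range(n_nodes)]
x_true = rng.standard_normal(d)
b = [Ai @ x_true for Ai in A]

# Doubly stochastic mixing matrix for a ring network (each node talks only
# to its two neighbours); any connected topology with such weights works.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

alpha = 0.01                    # step size, assumed < 1/L for each local f_i
X = np.zeros((n_nodes, d))      # row i is node i's local copy of x

for _ in range(2000):
    mixed = W @ X               # consensus step: average with neighbours
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_nodes)])
    X = mixed - alpha * grads   # local gradient step on node-local data

print("max deviation of any node from x*:", float(np.abs(X - x_true).max()))
```

Adding a momentum term beta * (X - X_prev) to the same update gives a distributed heavy-ball variant of this pattern, and swapping the gradient step for a proximal or Newton step corresponds, in spirit only, to the other examples named in the abstract; the paper's own constructions may differ.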

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023. Vol. 68, no. 12, p. 7358-.
Keywords [en]
agents and autonomous systems, communication networks, convergence, distributed algorithms, heuristic algorithms, Internet of Things, iterative algorithms, Newton's method, optimization, optimization algorithms, distributed computer systems, heuristic methods, learning systems, Newton-Raphson method, number theory, parallel algorithms, centralized, gradient descent
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-65622
DOI: 10.1109/TAC.2023.3279901
Scopus ID: 2-s2.0-85161038450
OAI: oai:DiVA.org:ri-65622
DiVA, id: diva2:1777524
Available from: 2023-06-29. Created: 2023-06-29. Last updated: 2024-06-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Ohlson Timoudas, Thomas
