Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware
RISE Research Institutes of Sweden, Digital Systems, Data Science; University of California at Berkeley, USA. ORCID iD: 0000-0002-6032-6155
Intel Labs, USA; University of California at Berkeley, USA.
Netlight Consulting AB, Sweden.
Luleå University of Technology, Sweden.
2022 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 33, no. 4, p. 1688-1701. Article in journal (Refereed). Published.
Abstract [en]

We propose an approximation of echo state networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer ESN (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified on typical reservoir computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. Experiments on a field-programmable gate array confirm that the proposed intESN approach is much more energy efficient than the conventional ESN.
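
To make the update rule concrete, below is a minimal Python/NumPy sketch of the recurrence the abstract describes: the reservoir state is a vector of small integers, and the recurrent matrix multiplication is replaced by a cyclic shift. The clipping bound kappa, the bipolar (+1/-1) input encoding, and the names intesn_update and codebook are illustrative assumptions, not details taken from the record itself.

    import numpy as np

    def intesn_update(state, u_enc, kappa):
        """One intESN reservoir step: a cyclic shift stands in for the
        recurrent matrix multiplication, and clipping to [-kappa, kappa]
        keeps every state element within a small n-bit integer range."""
        shifted = np.roll(state, 1)              # cyclic shift of the reservoir
        return np.clip(shifted + u_enc, -kappa, kappa)

    # Hypothetical usage: a 1000-element reservoir, inputs encoded as random
    # bipolar (+1/-1) codes, with kappa = 3 so states fit in 3 bits plus sign.
    rng = np.random.default_rng(0)
    N, kappa = 1000, 3
    state = np.zeros(N, dtype=np.int8)
    codebook = rng.choice(np.array([-1, 1], dtype=np.int8), size=(10, N))

    for symbol in [2, 7, 1, 4]:                  # toy input sequence
        state = intesn_update(state, codebook[symbol], kappa).astype(np.int8)

Because the shift is a fixed permutation and the state stays in a few bits, each step needs only integer additions and comparisons, which is consistent with the memory and energy savings the abstract reports.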

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2022. Vol. 33, no. 4, p. 1688-1701.
Keywords [en]
Dynamic systems modeling, echo state networks (ESNs), hyperdimensional computing (HDC), memory capacity, reservoir computing (RC), time-series classification, vector symbolic architectures, computational efficiency, energy efficiency, recurrent neural networks, digital hardware, dynamic processes, matrix multiplication, memory footprint, performance loss, field-programmable gate arrays (FPGA)
National Category
Natural Sciences
Identifiers
URN: urn:nbn:se:ri:diva-51913
DOI: 10.1109/TNNLS.2020.3043309
Scopus ID: 85098778990
OAI: oai:DiVA.org:ri-51913
DiVA, id: diva2:1519850
Available from: 2021-01-19. Created: 2021-01-19. Last updated: 2023-12-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (Scopus)

Authority records

Kleyko, Denis
