Kleyko, Denis, Ph.D. ORCID iD: orcid.org/0000-0002-6032-6155
Publications (10 of 33)
Vergés, P., Heddes, M., Nunes, I., Kleyko, D., Givargis, T. & Nicolau, A. (2025). Classification using hyperdimensional computing: a review with comparative analysis. Artificial Intelligence Review, 58(6), Article ID 173.
Classification using hyperdimensional computing: a review with comparative analysis
2025 (English) In: Artificial Intelligence Review, ISSN 0269-2821, E-ISSN 1573-7462, Vol. 58, no. 6, article id 173. Article in journal (Refereed) Published
Abstract [en]

Hyperdimensional computing (HD), also known as vector symbolic architectures (VSA), is an emerging and promising paradigm for cognitive computing. At its core, HD/VSA is characterized by its distinctive approach to compositionally representing information using high-dimensional randomized vectors. The recent surge in research within this field gains momentum from its computational efficiency, stemming from low-resolution representations, and from its ability to excel in few-shot learning scenarios. Nonetheless, the current literature lacks a comprehensive comparative analysis of the various methods, since each uses a different benchmark to evaluate its performance. This gap obstructs the monitoring of the field’s state-of-the-art advancements and acts as a significant barrier to its overall progress. To address this gap, this review not only offers a conceptual overview of the latest literature but also introduces a comprehensive comparative study of HD/VSA classification methods. The exploration starts with an overview of the strategies proposed to encode information as high-dimensional vectors. These vectors serve as integral components in the construction of classification models. Furthermore, we evaluate the diverse classification methods proposed in the existing literature, including techniques such as retraining and regenerative training that augment a model’s performance. To conclude our study, we present a comprehensive empirical study that systematically compares various HD/VSA classification methods on two benchmarks: the first a set of seven popular datasets used in HD/VSA, and the second a subset of 121 datasets from the UCI Machine Learning Repository. To facilitate future research on classification with HD/VSA, we open-sourced the benchmarking framework and the implementations of the methods we review.
First, since the considered data are tabular, encodings based on key-value pairs emerge as optimal choices, boasting superior accuracy while maintaining high efficiency. Second, iterative adaptive methods demonstrate remarkable efficacy, potentially complemented by a regenerative strategy, depending on the specific problem. Furthermore, we show how HD/VSA is able to generalize while training with a limited number of training instances. Lastly, we demonstrate the robustness of HD/VSA methods by subjecting the model memory to a large number of bit flips. The results illustrate that the model’s performance remains reasonably stable until about 40% of the bits are flipped, after which it degrades drastically. Overall, this study performed a thorough performance evaluation of different methods: on the one hand, a positive trend was observed in terms of improving classification performance; on the other hand, these developments could often be surpassed by off-the-shelf methods. This calls for better integration with the broader machine learning literature; the developed benchmarking framework provides practical means for doing so. © The Author(s) 2025.
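For readers unfamiliar with the key-value encoding that the abstract singles out for tabular data, the following minimal numpy sketch illustrates the idea. All names, the uniform quantization, and the nearest-centroid model are illustrative assumptions, not the paper's open-sourced implementation: each feature index gets a random bipolar key vector, each quantized value a level vector, and a record is encoded by binding (elementwise multiplication) keys to levels and bundling (summation).

```python
import numpy as np

def make_codebooks(n_features, n_levels, dim, seed=0):
    # Random bipolar key vectors (one per feature) and level vectors
    # (one per quantization level of a feature's value).
    rng = np.random.default_rng(seed)
    keys = rng.choice([-1, 1], size=(n_features, dim))
    levels = rng.choice([-1, 1], size=(n_levels, dim))
    return keys, levels

def encode(x, keys, levels, n_levels):
    # Quantize each feature value in [0, 1] to a level, bind (elementwise
    # multiply) its key with the level vector, and bundle (sum) across features.
    idx = np.clip((x * n_levels).astype(int), 0, n_levels - 1)
    return np.sign((keys * levels[idx]).sum(axis=0))

class CentroidHD:
    # Nearest-centroid classifier over hypervector encodings.
    def fit(self, X, y, keys, levels, n_levels):
        self.keys, self.levels, self.n_levels = keys, levels, n_levels
        self.classes = np.unique(y)
        H = np.stack([encode(x, keys, levels, n_levels) for x in X])
        # Each class prototype bundles (sums) the encodings of its examples.
        self.protos = np.stack([H[y == c].sum(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        H = np.stack([encode(x, self.keys, self.levels, self.n_levels) for x in X])
        return self.classes[(H @ self.protos.T).argmax(axis=1)]
```

The retraining and regenerative-training techniques discussed in the review refine exactly such centroid prototypes iteratively.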

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Contrastive Learning; Health risks; Bit-flips; Classification methods; Comparative analyses; Distributed representation; High-dimensional; Higher-dimensional; Hyperdimensional computing; Machine-learning; Performance; Vector symbolic architecture; Benchmarking
HSV category
Identifiers
urn:nbn:se:ri:diva-78329 (URN), 10.1007/s10462-025-11181-2 (DOI), 2-s2.0-105000327360 (Scopus ID)
Available from: 2025-09-23 Created: 2025-09-23 Last updated: 2025-09-23 Bibliographically approved
Kleyko, D., Kymn, C. J., Thomas, A., Olshausen, B. A., Sommer, F. T. & Frady, E. P. (2025). Principled neuromorphic reservoir computing. Nature Communications, 16(1)
Principled neuromorphic reservoir computing
2025 (English) In: Nature Communications, E-ISSN 2041-1723, Vol. 16, no. 1. Article in journal (Refereed) Published
Abstract [en]

Reservoir computing advances the intriguing idea that a nonlinear recurrent neural circuit—the reservoir—can encode spatio-temporal input signals to enable efficient ways to perform tasks like classification or regression. However, recently the idea of a monolithic reservoir network that simultaneously buffers input signals and expands them into nonlinear features has been challenged. A representation scheme in which memory buffer and expansion into higher-order polynomial features can be configured separately has been shown to significantly outperform traditional reservoir computing in prediction of multivariate time-series. Here we propose a configurable neuromorphic representation scheme that provides competitive performance on prediction, but with significantly better scaling properties than directly materializing higher-order features as in prior work. Our approach combines the use of randomized representations from traditional reservoir computing with mathematical principles for approximating polynomial kernels via such representations. While the memory buffer can be realized with standard reservoir networks, computing higher-order features requires networks of ‘Sigma-Pi’ neurons, i.e., neurons that enable both summation as well as multiplication of inputs. Finally, we provide an implementation of the memory buffer and Sigma-Pi networks on Loihi 2, an existing neuromorphic hardware platform. 
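The scaling idea the abstract describes, approximating a polynomial kernel with randomized representations instead of materializing higher-order features, can be sketched in a few lines of numpy. This is a hedged illustration of the principle only (Gaussian projections, a degree-2 kernel, and the dimension are all illustrative assumptions; the paper's Loihi 2 implementation is not reproduced): multiplying two independent random projections of x elementwise, which is what a 'Sigma-Pi' neuron computes, yields features whose inner products approximate (x·y)².

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 10, 200_000            # input dimension, expanded dimension
R1 = rng.standard_normal((D, d))
R2 = rng.standard_normal((D, d))

def poly2_features(x):
    # A 'Sigma-Pi' layer: each unit sums its inputs through two random
    # weight sets and multiplies the two results (sum, then product).
    return (R1 @ x) * (R2 @ x)

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Inner products of the randomized features approximate the degree-2
# polynomial kernel without ever materializing the O(d^2) monomials.
approx = poly2_features(x) @ poly2_features(y) / D
exact = (x @ y) ** 2
```

The approximation error shrinks as the expanded dimension D grows, which is what gives the scheme its favorable scaling compared with explicit higher-order features.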

Place, publisher, year, edition, pages
Nature Research, 2025
Keywords
buffer; hardware; numerical method; prediction; adult; article; controlled study; female; human; nerve cell network; prediction; time series analysis
HSV category
Identifiers
urn:nbn:se:ri:diva-78006 (URN), 10.1038/s41467-025-55832-y (DOI), 2-s2.0-85215759192 (Scopus ID)
Note

D.K. has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 839179. The work of C.J.K. was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program. The work of C.J.K. and B.A.O. was supported by the Center for the Co-Design of Cognitive Systems (CoCoSys), one of seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, as well as NSF Awards 2147640 and 2313149. The work of D.K., B.A.O., and F.T.S. was supported in part by Intel’s THWAI program. F.T.S. was supported by NSF Grant IIS1718991, NIH Grant R01-EB026955 and by the Kavli Foundation. The authors acknowledge the EuroHPC Joint Undertaking for awarding this study access to the EuroHPC supercomputer LUMI (project No 465000448), hosted by CSC (Finland) and the LUMI consortium, through a EuroHPC Regular Access call.

Available from: 2025-09-15 Created: 2025-09-15 Last updated: 2025-09-23 Bibliographically approved
Schlegel, K., Rachkovskij, D., Kleyko, D., Gayler, R., Protzel, P. & Neubert, P. (2025). Structured temporal representation in time series classification with ROCKETs and hyperdimensional computing. Data mining and knowledge discovery, 39(6), Article ID 90.
Structured temporal representation in time series classification with ROCKETs and hyperdimensional computing
2025 (English) In: Data mining and knowledge discovery, ISSN 1384-5810, E-ISSN 1573-756X, Vol. 39, no. 6, article id 90. Article in journal (Refereed) Published
Abstract [en]

Time series classification poses significant challenges due to the inherent temporal order of the data points and the existence of sequential dependencies between them. The ROCKET family, featuring methods like MiniROCKET, MultiROCKET, and HYDRA, is currently a leading approach in this domain, leveraging convolution kernels to aggregate temporal features into encodings for linear classifiers. However, these models encode temporal features over short temporal windows and then aggregate them as an unordered set of encodings over the longer temporal window of the entire data sequence. This prevents these models from capturing any longer sequence structure. To address this design drawback, we propose integrating hyperdimensional computing into ROCKET methods to explicitly incorporate temporal order of the short-term features within the entire time series. This approach enhances the discriminative power of encodings generated by MiniROCKET, MultiROCKET, and HYDRA where longer-term structure exists in the data, leading to increased classification performance with minimal computational overhead. More specifically, we introduce a method to represent time series as high-dimensional vectors through multiplicative binding of ROCKET encodings with encodings representing temporal order, applying this approach across various ROCKET methods. Additionally, we explore different high-dimensional vector representations of temporal order, yielding diverse similarity kernels that enhance classification accuracy. Through experiments on synthetic datasets, we highlight the limitations of ROCKET methods in handling temporal dependencies and show how the methods based on hyperdimensional computing overcome these limitations. 
Furthermore, our extensive experimental evaluation with real-world datasets included in the recent UCR archive, validates the advantages of our approach, consistently achieving classification improvements across all ROCKET methods that integrate hyperdimensional computing. Notably, our best model achieves a relative error rate reduction of over 50% compared to the best ROCKET model on several UCR datasets.
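The multiplicative binding the abstract describes can be illustrated with a toy numpy sketch (a hedged sketch of the principle; the window encodings here are random stand-ins, not actual MiniROCKET/MultiROCKET/HYDRA features, and the random position codes are one of several order representations the paper explores): standard ROCKET pooling sums window features order-invariantly, while binding each window encoding to a position hypervector first makes temporal order recoverable.

```python
import numpy as np

def bundle_unordered(windows):
    # ROCKET-style pooling: an order-invariant sum over window encodings.
    return windows.sum(axis=0)

def bundle_with_position(windows, pos):
    # HD-style pooling: bind (elementwise multiply) each window encoding with
    # a position hypervector before summing, so temporal order is retained.
    return (windows * pos).sum(axis=0)

rng = np.random.default_rng(0)
T, D = 8, 4096
windows = rng.choice([-1.0, 1.0], size=(T, D))  # stand-in window encodings
pos = rng.choice([-1.0, 1.0], size=(T, D))      # one random vector per position

forward = windows
backward = windows[::-1]                        # same windows, reversed order

# Unordered pooling cannot tell the two orderings apart:
assert np.allclose(bundle_unordered(forward), bundle_unordered(backward))
# Position-bound encodings of the two orderings are nearly orthogonal:
sim = bundle_with_position(forward, pos) @ bundle_with_position(backward, pos) / (T * D)
```

Because binding and summation are elementwise, the extra cost over plain pooling is negligible, which matches the paper's "minimal computational overhead" claim.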

Place, publisher, year, edition, pages
Springer, 2025
Keywords
Compositional Representation, HYDRA, Hyperdimensional Computing, MiniROCKET, MultiROCKET, Position Encoding, ROCKET, Time Series Classification, Classification (of information), Encoding (symbols), Signal encoding, Time series, Encodings, Temporal features, Temporal ordering, Time series classifications, Rockets
HSV category
Identifiers
urn:nbn:se:ri:diva-79345 (URN), 10.1007/s10618-025-01162-y (DOI), 2-s2.0-105019187565 (Scopus ID)
Note

Article; Reviewed

Available from: 2025-11-28 Created: 2025-11-28 Last updated: 2025-11-28 Bibliographically approved
Yik, J., Kleyko, D. & Reddi, V. J. (2025). The neurobench framework for benchmarking neuromorphic computing algorithms and systems. Nature Communications, 16(1), Article ID 1545.
The neurobench framework for benchmarking neuromorphic computing algorithms and systems
2025 (English) In: Nature Communications, E-ISSN 2041-1723, Vol. 16, no. 1, article id 1545. Article in journal (Refereed) Published
Abstract [en]

Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. This article presents NeuroBench, a benchmark framework for neuromorphic algorithms and systems, which is collaboratively designed from an open community of researchers across industry and academia. NeuroBench introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent and hardware-dependent settings. For latest project updates, visit the project website (neurobench.ai). 

Place, publisher, year, edition, pages
Nature Research, 2025
Keywords
accuracy assessment; algorithm; benchmarking; hardware; performance assessment; technological development; academia; algorithm; artificial intelligence software; benchmarking; controlled study; diagnosis; review
HSV category
Identifiers
urn:nbn:se:ri:diva-78354 (URN), 10.1038/s41467-025-56739-4 (DOI), 2-s2.0-85218828097 (Scopus ID)
Note

Authors of this work have been supported in parts by Semiconductor Research Corporation (JY), the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101001448), a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No. CityU 11200922], ARC Laureate Fellowship FL210100156, and the EU H2020 project BeFerroSynaptic (871737). We acknowledge the financial support of the CogniGron research center and the Ubbo Emmius Funds (Univ. of Groningen). We acknowledge a contribution from the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union - NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS-Italy”). The work of SynSense was partially supported by the European Commission, under the Horizon grant Ferro4Edge AI (grant agreement 101135656). This work is partly funded by the German Federal Ministry of Education and Research (BMBF) and the free state of Saxony within the ScaDS.AI center of excellence for AI research and by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) under contract 01MN23004F (ESCADE). This work is partially supported by NSF Grant 2020624 AccelNet: Accelerating Research on Neuromorphic Perception, Action, and Cognition and NSF Grant 2332166 RCN-SC: Research Coordination Network for Neuromorphic Integrated Circuits. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC (NTESS), a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) under contract DE-NA0003525. This written work is authored by an employee of NTESS. The employee, not NTESS, owns the right, title and interest in and to the written work and is responsible for its contents.

Any subjective views or opinions that might be expressed in the written work do not necessarily represent the views of the U.S. Government. The publisher acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this written work or allow others to do so, for U.S. Government purposes. The DOE will provide public access to results of federally sponsored research in accordance with the DOE Public Access Plan. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.

Available from: 2025-09-22 Created: 2025-09-22 Last updated: 2025-09-23 Bibliographically approved
Kymn, C. J., Mazelet, S., Ng, A., Kleyko, D. & Olshausen, B. A. (2024). Compositional Factorization of Visual Scenes with Convolutional Sparse Coding and Resonator Networks. In: 2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024 - Proceedings. Paper presented at 2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024. La Jolla, USA. 23 April 2024 through 26 April 2024. Institute of Electrical and Electronics Engineers Inc.
Compositional Factorization of Visual Scenes with Convolutional Sparse Coding and Resonator Networks
2024 (English) In: 2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2024. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a system for visual scene analysis and recognition based on encoding the sparse, latent feature-representation of an image into a high-dimensional vector that is subsequently factorized to parse scene content. The sparse feature representation is learned from image statistics via convolutional sparse coding, while scene parsing is performed by a resonator network [1]. The integration of sparse coding with the resonator network increases the capacity of distributed representations and reduces collisions in the combinatorial search space during factorization. We find that for this problem the resonator network is capable of fast and accurate vector factorization, and we develop a confidence-based metric that assists in tracking the convergence of the resonator network. 
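The resonator network's factorization loop is compact enough to sketch. The following is a minimal two-factor bipolar version under the standard formulation, an illustrative assumption: the paper's system adds a convolutional sparse-coding front end and a confidence-based convergence metric, neither of which is reproduced here. Each update unbinds the current estimate of one factor from the composite vector and "cleans up" the result through the other factor's codebook.

```python
import numpy as np

def bsign(v):
    # Bipolar sign with ties broken to +1.
    return np.where(v >= 0, 1.0, -1.0)

def resonator(s, A, B, iters=30):
    # Factor s ≈ a * b (elementwise product), with a drawn from codebook A
    # and b from codebook B (each codebook: one candidate vector per row).
    a_hat = bsign(A.sum(axis=0))   # start from a superposition of all candidates
    b_hat = bsign(B.sum(axis=0))
    for _ in range(iters):
        # Unbind the other factor's estimate, then project onto the codebook
        # and back (the cleanup), and re-binarize.
        a_hat = bsign(A.T @ (A @ (s * b_hat)))
        b_hat = bsign(B.T @ (B @ (s * a_hat)))
    # Report the best-matching codebook entry for each factor.
    return (A @ a_hat).argmax(), (B @ b_hat).argmax()
```

The search runs in superposition: instead of testing all |A|·|B| combinations, the estimates converge to the generating pair in a handful of iterations for clean composites.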

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
Keywords
Convolution; Factorization; Network coding; Vectors; Combinatorial search; Computing-in-superposition; Feature representation; Hyperdimensional computing; Resonator network; Sparse coding; Vector factorizations; Vector symbolic architecture; Visual scene; Visual scene understanding; Resonators
HSV category
Identifiers
urn:nbn:se:ri:diva-74865 (URN), 10.1109/NICE61972.2024.10549719 (DOI), 2-s2.0-85196224848 (Scopus ID), 9798350390582 (ISBN)
Conference
2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024. La Jolla, USA. 23 April 2024 through 26 April 2024
Note

The work of CJK was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program. The work of SM was carried out as part of the ARPE program of ENS Paris-Saclay. The work of DK and BAO was supported in part by Intel's THWAI program. The work of CJK and BAO was supported by the Center for the Co-Design of Cognitive Systems (CoCoSys), one of seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. DK has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 839179.

Available from: 2024-08-20 Created: 2024-08-20 Last updated: 2025-09-23 Bibliographically approved
Kymn, C. J., Kleyko, D., Frady, E. P., Bybee, C., Kanerva, P., Sommer, F. T. & Olshausen, B. A. (2024). Computing With Residue Numbers in High-Dimensional Representation. Neural Computation, 37(1), 1-37
Computing With Residue Numbers in High-Dimensional Representation
2024 (English) In: Neural Computation, ISSN 0899-7667, E-ISSN 1530-888X, Vol. 37, no. 1, pp. 1-37. Article in journal (Refereed) Published
Abstract [en]

We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using resources that scale only logarithmically with the range, a vast improvement over previous methods. It also exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data. 
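The core representational trick can be sketched in numpy (an illustrative reconstruction of the encoding idea, not the authors' code; the phasor construction, the moduli 3, 5, 7, and the dimension are assumptions for the demo): for each modulus m, a random vector of m-th roots of unity is raised elementwise to the power x, and the per-modulus codes are bound together. Addition of encoded numbers then reduces to componentwise multiplication, and the representable range is the product of the moduli.

```python
import numpy as np

def residue_base(modulus, dim, rng):
    # Random phasor vector whose components are modulus-th roots of unity;
    # raising it elementwise to the power x encodes the residue x mod modulus.
    phases = rng.integers(0, modulus, size=dim)
    return np.exp(2j * np.pi * phases / modulus)

def encode(x, bases):
    # Bind the per-modulus codes (elementwise product) into one hypervector.
    v = np.ones_like(bases[0])
    for base in bases:
        v = v * base ** x
    return v

rng = np.random.default_rng(0)
dim = 2048
bases = [residue_base(m, dim, rng) for m in (3, 5, 7)]  # range 3*5*7 = 105
```

Each component-wise power wraps around its modulus automatically, so arithmetic over the full range 0..104 is carried out by parallel elementwise operations, which is what gives the framework its logarithmic resource scaling.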

Keywords
adult; article; controlled study; grid cell; human; human cell; machine learning; noise
HSV category
Identifiers
urn:nbn:se:ri:diva-76490 (URN), 10.1162/neco_a_01723 (DOI), 2-s2.0-85212593084 (Scopus ID)
Note

The work of C.J.K. was supported by the Department of Defense through the National Defense Science and Engineering Graduate Fellowship Program. The work of D.K., C.B., F.T.S., and B.A.O. was supported in part by Intel’s THWAI program. The work of C.J.K., C.B., P.K., and B.A.O. was supported by the Center for the Co-Design of Cognitive Systems, one of seven centers in JUMP 2.0, a Semiconductor Research Corporation program sponsored by DARPA. D.K. has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement 839179. The work of F.T.S. was supported in part by NIH under grant R01-EB026955 and in part by NSF under grant IIS-1718991.

Available from: 2025-01-27 Created: 2025-01-27 Last updated: 2025-09-23 Bibliographically approved
Schlegel, K., Kleyko, D., Brinkmann, B. H., Nurse, E. S., Gayler, R. W. & Neubert, P. (2024). Lessons from a challenge on forecasting epileptic seizures from non-cerebral signals. Nature Machine Intelligence, 6(2), 243-244
Lessons from a challenge on forecasting epileptic seizures from non-cerebral signals
2024 (English) In: Nature Machine Intelligence, ISSN 2522-5839, Vol. 6, no. 2, pp. 243-244. Article in journal (Refereed) Published
Abstract [en]

The “My Seizure Gauge” competition explored the challenge of forecasting epileptic seizures using non-invasive wearable devices without an electroencephalogram. The organizers and the winning team reflect on their experiences. 

Place, publisher, year, edition, pages
Nature Research, 2024
Keywords
Epileptic seizures; Wearable devices; Neurophysiology
HSV category
Identifiers
urn:nbn:se:ri:diva-73015 (URN), 10.1038/s42256-024-00799-6 (DOI), 2-s2.0-85185110710 (Scopus ID)
Available from: 2024-04-17 Created: 2024-04-17 Last updated: 2025-09-23 Bibliographically approved
Kleyko, D. & Rachkovskij, D. A. (2024). On Design Choices in Similarity-Preserving Sparse Randomized Embeddings. In: Proceedings Of The International Joint Conference On Neural Networks. Paper presented at International Joint Conference on Neural Networks, IJCNN 2024. Yokohama, Japan. 30 June 2024 through 5 July 2024. Institute of Electrical and Electronics Engineers Inc.
On Design Choices in Similarity-Preserving Sparse Randomized Embeddings
2024 (English) In: Proceedings Of The International Joint Conference On Neural Networks, Institute of Electrical and Electronics Engineers Inc., 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Expand & Sparsify is a principle observed in anatomically similar neural circuits found in the mushroom body (insects) and the cerebellum (mammals). Sensory data are projected randomly to a much higher-dimensional space (the expand part), where only a few of the most strongly excited neurons are activated (the sparsify part). This principle has been leveraged to design the FlyHash algorithm, which forms similarity-preserving sparse embeddings that have been found useful for tasks such as novelty detection, pattern recognition, and similarity search. Despite its simplicity, FlyHash has a number of design choices to be set, such as the preprocessing of the input data, the choice of sparsifying activation function, and the formation of the random projection matrix. In this paper, we explore the effect of these choices on the performance of similarity search with FlyHash embeddings. We find that the right combination of design choices can lead to drastic differences in search performance.
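As a concrete illustration, here is a minimal FlyHash sketch in numpy. The parameter values, the sparse binary projection matrix, and the k-winners-take-all activation are one particular combination of the design choices the paper studies, fixed here arbitrarily, not its recommended configuration.

```python
import numpy as np

def flyhash(X, n_out, k, density=0.1, seed=0):
    # Expand: project inputs through a sparse random binary matrix to
    # n_out >> n_in dimensions (one of the projection-matrix choices studied).
    rng = np.random.default_rng(seed)
    M = (rng.random((n_out, X.shape[1])) < density).astype(float)
    Y = X @ M.T
    # Sparsify: k-winners-take-all keeps only the k most strongly excited
    # units per input, yielding similarity-preserving sparse binary codes.
    H = np.zeros_like(Y)
    top = np.argsort(-Y, axis=1)[:, :k]
    np.put_along_axis(H, top, 1.0, axis=1)
    return H
```

Similar inputs excite largely the same winner set, so the overlap of two hash codes tracks input similarity; preprocessing such as mean-centering, another choice the paper examines, changes which units tend to win.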

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
Keywords
Neurons; Embeddings; Expand & sparsify; Hyperdimensional computing; Mushroom bodies; Neural circuits; Random projections; Similarity preserving; Similarity search; Sparse representation; Winner-take-all; Embeddings
HSV category
Identifiers
urn:nbn:se:ri:diva-76195 (URN), 10.1109/IJCNN60899.2024.10651277 (DOI), 2-s2.0-85205027306 (Scopus ID)
Conference
International Joint Conference on Neural Networks, IJCNN 2024. Yokohama, Japan. 30 June 2024 through 5 July 2024
Note

The work of DK was supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie Individual Fellowship Grant Agreement 839179. The work of DAR was supported in part by the Swedish Foundation for Strategic Research (SSF, grant nos. UKR22-0024 & UKR24-0014) and the Swedish Research Council Scholars at Risk Sweden (VR SAR, grant no. GU 2022/1963).

Available from: 2024-11-18 Created: 2024-11-18 Last updated: 2025-09-23 Bibliographically approved
Kleyko, D., Rosato, A., Frady, E. P., Panella, M. & Sommer, F. T. (2024). Perceptron Theory Can Predict the Accuracy of Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 35(7), 9885
Perceptron Theory Can Predict the Accuracy of Neural Networks
2024 (English) In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 35, no. 7, pp. 9885-. Article in journal (Refereed) Published
Abstract [en]

Multilayer neural networks set the current state of the art for many technical classification problems. However, these networks are still, essentially, black boxes in terms of analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict the performance of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas leveraging the signal statistics with increasing detail. The formulas are analytically intractable, but can be evaluated numerically. The description level that captures maximum detail requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory's predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict deep multilayer neural networks when applied to their output layer. While other methods for predicting neural network performance commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
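The final point, predicting accuracy from only the first two moments of the postsynaptic sums, can be illustrated with a small Monte Carlo sketch. This is a hedged reconstruction of the general idea under an independent-Gaussian assumption; the paper's actual formulas and their more detailed description levels are not reproduced here.

```python
import numpy as np

def predicted_accuracy(mu, sigma, n_samples=20000, seed=0):
    # mu[c, j] and sigma[c, j]: mean and std of output unit j's postsynaptic
    # sum when the true class is c. Classification picks the argmax unit, so
    # accuracy is the probability that the true class's unit wins, estimated
    # here by sampling the sums as independent Gaussians.
    rng = np.random.default_rng(seed)
    n_classes = mu.shape[0]
    hits = 0
    for c in range(n_classes):
        z = rng.normal(mu[c], sigma[c], size=(n_samples, n_classes))
        hits += int((z.argmax(axis=1) == c).sum())
    return hits / (n_samples * n_classes)
```

For two classes with unit-variance sums whose means are separated by 2, the closed form is Φ(2/√2) ≈ 0.92, which the sample estimate reproduces; no estimator model is trained, only the two moments are used.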

HSV category
Identifiers
urn:nbn:se:ri:diva-67822 (URN), 10.1109/TNNLS.2023.3237381 (DOI)
Available from: 2023-11-22 Created: 2023-11-22 Last updated: 2025-09-23 Bibliographically approved
Kleyko, D., Rachkovskij, D., Osipov, E. & Rahimi, A. (2023). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges. ACM Computing Surveys, 55(9), Article ID 3558000.
A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges
2023 (English) In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no. 9, article id 3558000. Article in journal (Refereed) Published
Abstract [en]

This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and vector distributed representations. Holographic Reduced Representations [321, 326] is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family. However, for the sake of consistency, we use HDC/VSA to refer to the field. Part I of this survey [222] covered foundational aspects of the field, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the Machine Learning/Artificial Intelligence domain; however, we also cover other applications to provide a complete picture. The survey is written to be useful for both newcomers and practitioners.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023
Keywords
analogical reasoning, applications, Artificial intelligence, binary spatter codes, cognitive architectures, cognitive computing, distributed representations, geometric analogue of holographic reduced representations, holographic reduced representations, hyperdimensional computing, machine learning, matrix binding of additive terms, modular composite representations, multiply-add-permute, sparse binary distributed representations, sparse block codes, tensor product representations, vector symbolic architectures, Architecture, Codes (symbols), Computer architecture, Holography, Metadata, Binary spatter code, Composite representations, Distributed representation, Geometric analog of holographic reduced representation, Machine-learning, Matrix binding, Matrix binding of additive term, Modular composite representation, Modulars, Multiply-add, Product representation, Sparse binary distributed representation, Sparse block code, Tensor product representation, Tensor products, Vector symbolic architecture, Vectors
HSV category
Identifiers
urn:nbn:se:ri:diva-64721 (URN), 10.1145/3558000 (DOI), 2-s2.0-85147845869 (Scopus ID)
Note

Funding details: Air Force Office of Scientific Research, AFOSR, FA9550-19-1-0241; Funding details: Intel Corporation; Funding details: H2020 Marie Skłodowska-Curie Actions, MSCA, 839179; Funding details: Stiftelsen för Strategisk Forskning, SSF, UKR22-0024; Funding details: National Academy of Sciences of Ukraine, NASU, 0117U002286, 0120U000122, 0121U000016, 0122U002151; Funding details: Ministry of Education and Science of Ukraine, MESU, 0121U000228, 0122U000818; Funding text: The work of DK was supported by the European Union’s Horizon 2020 Programme under the Marie Skłodowska-Curie Individual Fellowship Grant (839179). The work of DK was also supported in part by AFOSR FA9550-19-1-0241 and Intel’s THWAI program. The work of DAR was supported in part by the National Academy of Sciences of Ukraine (grant no. 0120U000122, 0121U000016, 0122U002151, and 0117U002286), the Ministry of Education and Science of Ukraine (grant no. 0121U000228 and 0122U000818), and the Swedish Foundation for Strategic Research (SSF, grant no. UKR22-0024).

Available from: 2023-05-15 Created: 2023-05-15 Last updated: 2025-09-23 Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-6032-6155