1 - 7 of 7
  • 1.
    Frady, Edward Paxon
    Intel Labs, USA; University of California, USA.
    Kleyko, Denis
    RISE Research Institutes of Sweden, Digital Systems, Data Science. Intel Labs, USA.
    Sommer, Friedrich T
    Intel Labs, USA; University of California.
    Variable Binding for Sparse Distributed Representations: Theory and Applications. 2023. In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 34, no. 5, p. 2191-2204. Article in journal (Refereed).
    Abstract [en]

    Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatorial expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation, which increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that reduce the tensor matrix to a single sparse vector. One binding method, for general sparse vectors, uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure (sparse block-codes). Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works, but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
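
    The sketch below illustrates the block-local circular convolution binding described above for sparse block-codes in which every block contains a single active unit. The block count and block length are assumed values for the example, not parameters from the paper.

    ```python
    # Minimal sketch of binding with sparse block-codes (illustrative, not the
    # authors' code): vectors are split into B blocks of length L, each block is
    # one-hot, and binding is circular convolution applied within each block.
    import numpy as np

    rng = np.random.default_rng(0)
    B, L = 10, 50  # number of blocks and block length (assumed values)

    def random_sparse_block_code():
        """Sparse block-code of shape (B, L): one active unit per block."""
        x = np.zeros((B, L))
        x[np.arange(B), rng.integers(0, L, size=B)] = 1.0
        return x

    def block_bind(x, y):
        """Block-local circular convolution (dimensionality-preserving binding)."""
        out = np.zeros_like(x)
        for b in range(B):
            out[b] = np.real(np.fft.ifft(np.fft.fft(x[b]) * np.fft.fft(y[b])))
        return out

    def block_unbind(z, y):
        """Approximate inverse of block_bind: block-local circular correlation."""
        out = np.zeros_like(z)
        for b in range(B):
            out[b] = np.real(np.fft.ifft(np.fft.fft(z[b]) * np.conj(np.fft.fft(y[b]))))
        return out

    role, filler = random_sparse_block_code(), random_sparse_block_code()
    bound = block_bind(role, filler)
    recovered = block_unbind(bound, role)
    print(np.allclose(recovered, filler))  # True: recovery is exact for one-hot blocks
    ```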

  • 2.
    Kleyko, Denis
    RISE Research Institutes of Sweden, Digital Systems, Data Science. University of California at Berkeley, USA.
    Frady, Edward
    Intel Labs, USA; University of California at Berkeley, USA.
    Kheffache, Mansour
    Netlight Consulting AB, Sweden.
    Osipov, Evgeny
    Luleå University of Technology, Sweden.
    Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware. 2022. In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 33, no. 4, p. 1688-1701. Article in journal (Refereed).
    Abstract [en]

    We propose an approximation of echo state networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer ESN (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified with typical tasks in reservoir computing: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. The experiments on a field-programmable gate array confirm that the proposed intESN approach is much more energy-efficient than the conventional ESN.
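
    A short sketch of the update described above, assuming a discrete input alphabet: the recurrent matrix multiplication is replaced by a cyclic shift of the integer state, the input is added as a random bipolar code, and clipping keeps every entry within a small integer range. The reservoir size, clipping threshold, and codebook below are assumptions for illustration, not the published intESN configuration.

    ```python
    # Illustrative intESN-style update (a sketch under assumed parameters).
    import numpy as np

    rng = np.random.default_rng(1)
    N = 1000        # reservoir size (assumed)
    kappa = 3       # clipping threshold; keeps the state in a few bits (assumed)
    n_symbols = 26  # size of a discrete input alphabet (assumed)

    codebook = rng.choice([-1, 1], size=(n_symbols, N))  # random bipolar input codes

    def update(state, symbol):
        shifted = np.roll(state, 1)          # cyclic shift replaces W_rec @ state
        new = shifted + codebook[symbol]     # integer addition of the input code
        return np.clip(new, -kappa, kappa)   # clipping bounds the integer range

    state = np.zeros(N, dtype=np.int32)
    for symbol in [0, 5, 12, 3]:             # toy input sequence
        state = update(state, symbol)
    print(state[:10])
    ```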

  • 3.
    Kleyko, Denis
    RISE Research Institutes of Sweden, Digital Systems, Data Science. University of California at Berkeley, USA.
    Frady, Edward
    University of California at Berkeley, USA; Intel Labs, USA.
    Sommer, Friedrich
    University of California at Berkeley, USA; Intel Labs, USA.
    Cellular Automata Can Reduce Memory Requirements of Collective-State Computing. 2022. In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 33, no. 6, p. 2701-2713. Article in journal (Refereed).
    Abstract [en]

    Various nonclassical approaches to distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant in computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables a space-time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off against computation by running CA90. We investigate the randomization behavior of CA90, in particular the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model in which CA90 expands representations on the fly from short seed patterns rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs on par with traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
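
    The expansion idea can be sketched directly: elementary rule 90 replaces each cell by the XOR of its two neighbors, so a short stored seed can be unrolled on the fly into a much longer quasi-random pattern. The seed length, boundary handling, and number of steps below are arbitrary choices for the example.

    ```python
    # Sketch of CA90 expansion of a short seed into a long quasi-random pattern.
    import numpy as np

    rng = np.random.default_rng(2)
    seed = rng.integers(0, 2, size=256)       # short seed pattern that is stored

    def ca90_step(state):
        """One step of elementary cellular automaton rule 90 (XOR of the two
        neighbors), here with periodic boundary conditions."""
        return np.roll(state, 1) ^ np.roll(state, -1)

    def expand(seed, steps):
        """Concatenate successive CA90 states to rebuild a long random-like code."""
        states, s = [seed], seed
        for _ in range(steps):
            s = ca90_step(s)
            states.append(s)
        return np.concatenate(states)

    code = expand(seed, steps=15)             # 16 x 256 = 4096-bit representation
    print(code.shape, code.mean())            # roughly balanced pattern of 0s and 1s
    ```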

  • 4.
    Kleyko, Denis
    RISE Research Institutes of Sweden, Digital Systems, Data Science. University of California, USA.
    Karunaratne, Geethan
    IBM Research, Switzerland.
    Rabaey, Jan M.
    IBM Research, Switzerland.
    Sebastian, Abu
    IBM Research, Switzerland.
    Rahimi, Abbas
    University of California, USA.
    Generalized Key-Value Memory to Flexibly Adjust Redundancy in Memory-Augmented Networks. 2023. In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 34, no. 12, p. 10993-10998. Article in journal (Refereed).
    Abstract [en]

    Memory-augmented neural networks enhance a neural network with an external key-value (KV) memory whose complexity is typically dominated by the number of support vectors in the key memory. We propose a generalized KV memory that decouples its dimension from the number of support vectors by introducing a free parameter that can arbitrarily add redundancy to, or remove it from, the key memory representation. In effect, it provides an additional degree of freedom to flexibly control the tradeoff between robustness and the resources required to store and compute the generalized KV memory. This is particularly useful for realizing the key memory on in-memory computing hardware, where it exploits nonideal but extremely efficient nonvolatile memory devices for dense storage and computation. Experimental results show that adapting this parameter on demand effectively mitigates up to 44% of nonidealities, at equal accuracy and number of devices, without any need for neural network retraining.
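
    One hedged reading of decoupling the key dimension from the number of support vectors is sketched below: support keys are mapped into an m-dimensional key memory, where m is the free redundancy parameter, and queries are answered by a similarity-weighted read-out of the value memory. The random projection, the sizes, and the softmax read-out are illustrative assumptions, not the paper's exact construction.

    ```python
    # Hedged sketch of a key-value memory with a tunable key dimension m.
    import numpy as np

    rng = np.random.default_rng(3)
    d, n_support, n_classes = 64, 100, 10   # assumed sizes
    m = 256                                 # free redundancy parameter (assumed)

    support_keys = rng.standard_normal((n_support, d))
    support_labels = rng.integers(0, n_classes, size=n_support)
    values = np.eye(n_classes)[support_labels]          # one-hot value memory

    # Redundant key memory: project d-dimensional keys into an m-dimensional space.
    P = rng.choice([-1.0, 1.0], size=(d, m)) / np.sqrt(m)
    key_memory = support_keys @ P                       # shape (n_support, m)

    def query(q):
        """Soft key lookup followed by a weighted read-out of the value memory."""
        scores = (q @ P) @ key_memory.T                 # similarity in the m-dim space
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ values                         # distribution over classes

    print(query(support_keys[0]).argmax(), support_labels[0])  # should agree
    ```

    In this reading, increasing m adds redundancy (and robustness), while decreasing it saves storage and computation, which is the tradeoff the abstract refers to.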

  • 5.
    Kleyko, Denis
    RISE Research Institutes of Sweden, Digital Systems, Data Science. University of California at Berkeley, USA.
    Kheffache, Mansour
    Netlight Consulting AB, Sweden.
    Frady, E Paxon
    University of California at Berkeley, USA.
    Wiklund, Urban
    Umeå University, Sweden.
    Osipov, Evgeny
    Luleå University of Technology, Sweden.
    Density Encoding Enables Resource-Efficient Randomly Connected Neural Networks. 2021. In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 32, no. 8, p. 3777-3783, article id 9174774. Article in journal (Refereed).
    Abstract [en]

    The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this brief, we focus on resource-efficient randomly connected neural networks known as random vector functional link (RVFL) networks, since their simple design and extremely fast training time make them very attractive for solving many applied classification tasks. We propose to represent input features via the density-based encoding known in the area of stochastic computing and to use the operations of binding and bundling from the area of hyperdimensional computing for obtaining the activations of the hidden neurons. Using a collection of 121 real-world data sets from the UCI machine learning repository, we empirically show that the proposed approach demonstrates higher average accuracy than the conventional RVFL. We also demonstrate that it is possible to represent the readout matrix using only integers in a limited range with minimal loss in accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, through field-programmable gate array (FPGA) hardware implementations, we show that such an approach consumes approximately 11 times less energy than the conventional RVFL.
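
    A hedged sketch of the encoding pipeline: each scalar feature is density (thermometer-style) encoded, bound to a random key vector, and the results are bundled by summation with clipping to give the hidden activations. The bipolar coding, Hadamard-product binding, dimensionality, and clipping threshold are assumptions for illustration and may differ from the paper's exact recipe.

    ```python
    # Illustrative density encoding plus binding/bundling for hidden activations.
    import numpy as np

    rng = np.random.default_rng(4)
    N = 512         # hidden dimensionality (assumed)
    kappa = 7       # clipping threshold for bundling (assumed)
    n_features = 4  # e.g. a four-feature sample with values scaled to [0, 1]

    def density_encode(x):
        """Thermometer-style code: the first round(x*N) entries are +1, the rest -1."""
        v = -np.ones(N)
        v[: int(round(x * N))] = 1.0
        return v

    feature_keys = rng.choice([-1.0, 1.0], size=(n_features, N))  # random key per feature

    def hidden_activations(features):
        """Bind each encoded feature to its key, bundle by summation, then clip."""
        bundled = np.zeros(N)
        for key, x in zip(feature_keys, features):
            bundled += key * density_encode(x)   # binding as a Hadamard product
        return np.clip(bundled, -kappa, kappa)   # bundling with clipping

    h = hidden_activations([0.1, 0.5, 0.9, 0.3])
    print(h[:10])
    ```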

  • 6.
    Kleyko, Denis
    RISE Research Institutes of Sweden, Digital Systems, Data Science. University of California at Berkeley, USA.
    Rosato, Antonello
    University of Rome “La Sapienza”, Italy.
    Frady, Edward Paxon
    Intel Labs, USA.
    Panella, Massimo
    University of Rome “La Sapienza”, Italy.
    Sommer, Friedrich T.
    Intel Labs, USA; University of California at Berkeley, USA.
    Perceptron Theory Can Predict the Accuracy of Neural Networks. 2023. In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388. Article in journal (Refereed).
    Abstract [en]

    Multilayer neural networks set the current state of the art for many technical classification problems. But these networks are still, essentially, black boxes in terms of analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict the performance of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas leveraging the signal statistics with increasing detail. The formulas are analytically intractable, but can be evaluated numerically. The description level that captures the most detail requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory's predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict the performance of deep multilayer neural networks when applied to their output layer. While other methods for predicting neural network performance commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
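
    The flavor of the prediction can be conveyed with a toy computation: given only the mean and standard deviation of the postsynaptic sums of each output neuron, estimate the probability that the correct neuron produces the largest sum. The Gaussian model and Monte-Carlo evaluation below are an illustrative stand-in for the theory's formulas, and the moment values are invented.

    ```python
    # Toy accuracy prediction from the first two moments of the output sums.
    import numpy as np

    rng = np.random.default_rng(5)
    n_classes = 10
    mu = np.zeros(n_classes)
    mu[3] = 1.0                      # assumed means: neuron 3 is the correct output
    sigma = np.full(n_classes, 0.6)  # assumed standard deviations

    def predicted_accuracy(mu, sigma, correct, n_samples=100_000):
        """Monte-Carlo estimate of P(correct output neuron has the largest sum)."""
        samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        return np.mean(samples.argmax(axis=1) == correct)

    print(predicted_accuracy(mu, sigma, correct=3))
    ```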

  • 7.
    Osipov, Evgeny
    Luleå University of Technology, Sweden.
    Kahawala, S.
    La Trobe University, Australia.
    Haputhanthri, D.
    La Trobe University, Australia.
    Kempitiya, T.
    La Trobe University, Australia.
    Silva, D. D.
    La Trobe University, Australia.
    Alahakoon, D.
    La Trobe University, Australia.
    Kleyko, Denis
    RISE Research Institutes of Sweden, Digital Systems, Data Science. University of California at Berkeley, USA.
    Hyperseed: Unsupervised Learning With Vector Symbolic Architectures. 2023. In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 12, no. 12, article id e202300141. Article in journal (Refereed).
    Abstract [en]

    Motivated by recent innovations in biologically inspired neuromorphic hardware, this article presents a novel unsupervised machine learning algorithm named Hyperseed that draws on the principles of vector symbolic architectures (VSAs) for fast learning of a topology-preserving feature map of unlabeled data. It relies on two major VSA operations, binding and bundling. The algorithmic part of Hyperseed is expressed within the Fourier holographic reduced representations (FHRR) model, which is specifically suited for implementation on spiking neuromorphic hardware. The two primary contributions of the Hyperseed algorithm are few-shot learning and a learning rule based on a single vector operation. These properties are empirically evaluated on synthetic datasets and on illustrative benchmark use cases: Iris classification and a language identification task using n-gram statistics. The results of these experiments confirm the capabilities of Hyperseed and its applications in neuromorphic hardware.
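
    The FHRR primitives that Hyperseed builds on can be sketched compactly: vectors are unit-magnitude complex phasors, binding is elementwise multiplication, unbinding multiplies by the complex conjugate, and bundling superimposes and renormalizes. The sketch shows these vector operations only, with an assumed dimensionality, not the Hyperseed learning rule itself.

    ```python
    # FHRR building blocks (phasor vectors, binding, unbinding, bundling).
    import numpy as np

    rng = np.random.default_rng(6)
    D = 1024  # dimensionality (assumed)

    def random_fhrr():
        """Random FHRR vector: unit-magnitude phasors with uniform random phases."""
        return np.exp(1j * rng.uniform(0, 2 * np.pi, size=D))

    def bind(a, b):
        return a * b                      # elementwise complex multiplication

    def unbind(c, b):
        return c * np.conj(b)             # exact inverse of bind for unit phasors

    def bundle(*vs):
        s = np.sum(vs, axis=0)            # superposition of several vectors
        return np.exp(1j * np.angle(s))   # renormalize back to unit phasors

    x, y = random_fhrr(), random_fhrr()
    print(np.abs(np.vdot(unbind(bind(x, y), y), x)) / D)  # ~1.0: binding is invertible
    print(np.abs(np.vdot(bundle(x, y), x)) / D)           # well above 0: x stays visible
    ```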
