Variable Binding for Sparse Distributed Representations: Theory and Applications
Intel Labs, USA; University of California, USA.
Kleyko, Denis: RISE Research Institutes of Sweden, Digital Systems, Data Science; Intel Labs, USA. ORCID iD: 0000-0002-6032-6155
Intel Labs, USA; University of California, USA.
2023 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 34, no. 5, p. 2191-2204. Article in journal (Refereed). Published.
Abstract [en]

Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatorial expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation, which increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method, for general sparse vectors, uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure, sparse block-codes. Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works, but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
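The block-local circular convolution binding summarized in the abstract can be sketched in a few lines: each vector is partitioned into equal-sized blocks, and binding applies circular convolution independently within each block, so dimensionality is preserved. For sparse block-codes (one active element per block), binding a block with active index a to one with active index b yields a block with active index (a+b) mod B. The sketch below is an illustrative reconstruction of that operation, not the authors' code; the function name `block_bind` and the NumPy/FFT formulation are assumptions.

```python
import numpy as np

def block_bind(x, y, block_size):
    """Bind two vectors by block-local circular convolution.

    Both inputs are partitioned into contiguous blocks of length
    `block_size`; circular convolution is applied within each block,
    so the output has the same dimensionality as the inputs.
    """
    assert x.size == y.size and x.size % block_size == 0
    n_blocks = x.size // block_size
    xb = x.reshape(n_blocks, block_size)
    yb = y.reshape(n_blocks, block_size)
    # Circular convolution per block, computed via the FFT
    # (convolution theorem); the result is real for real inputs.
    zb = np.fft.ifft(np.fft.fft(xb, axis=1) * np.fft.fft(yb, axis=1), axis=1).real
    return zb.reshape(x.size)

# Sparse block-codes: 8-dimensional vectors, two blocks of size 4,
# one active element per block.
x = np.zeros(8); x[1] = 1.0; x[6] = 1.0   # active indices 1 and 2 (within blocks)
y = np.zeros(8); y[2] = 1.0; y[7] = 1.0   # active indices 2 and 3 (within blocks)
z = block_bind(x, y, 4)
# Block 0: (1+2) mod 4 = 3; block 1: (2+3) mod 4 = 1
# -> z is again a sparse block-code, active at positions 3 and 5.
```

Note how the result of binding two sparse block-codes is itself a sparse block-code of the same dimension, which is the property that distinguishes this method from tensor product binding.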

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023. Vol. 34, no. 5, p. 2191-2204.
Keywords [en]
Brain; Cognition; Cognitive reasoning; Compressed sensing; Convolution; Data structures; Distributed representation; Matrix algebra; Network architecture; Neural networks; Neurons; Problem solving; Sparse block-code; Sparse distributed representation; Sparse matrices; Tensor product variable binding; Tensor products; Trees (mathematics); Variable binding; Vector symbolic architecture; Vectors
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-67764
DOI: 10.1109/TNNLS.2021.3105949
Scopus ID: 2-s2.0-85114716481
OAI: oai:DiVA.org:ri-67764
DiVA, id: diva2:1813838
Available from: 2023-11-22. Created: 2023-11-22. Last updated: 2023-12-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
Full text

Authority records

Kleyko, Denis

Search in DiVA

By author/editor
Kleyko, Denis
By organisation
Data Science
In the same journal
IEEE Transactions on Neural Networks and Learning Systems
Computer and Information Sciences

Search outside of DiVA

Google
Google Scholar
