On Design Choices in Similarity-Preserving Sparse Randomized Embeddings
RISE Research Institutes of Sweden, Digital Systems, Data Science; Örebro University, Sweden. ORCID iD: 0000-0002-6032-6155
Luleå University of Technology, Sweden; IRTC for IT and Systems, Ukraine.
2024 (English). In: Proceedings of the International Joint Conference on Neural Networks, Institute of Electrical and Electronics Engineers Inc., 2024. Conference paper, Published paper (Refereed).
Abstract [en]

Expand & Sparsify is a principle observed in anatomically similar neural circuits found in the mushroom body (insects) and the cerebellum (mammals). Sensory data are projected randomly to a much higher-dimensional space (the expand part), where only a few of the most strongly excited neurons are activated (the sparsify part). This principle has been leveraged to design the FlyHash algorithm, which forms similarity-preserving sparse embeddings that have been found useful for tasks such as novelty detection, pattern recognition, and similarity search. Despite its simplicity, FlyHash involves a number of design choices, such as the preprocessing of the input data, the choice of sparsifying activation function, and the formation of the random projection matrix. In this paper, we explore the effect of these choices on the performance of similarity search with FlyHash embeddings. We find that the right combination of design choices can lead to drastic differences in search performance.
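
For illustration, below is a minimal Python sketch of the Expand & Sparsify scheme that underlies FlyHash: a random projection to a much higher dimensionality followed by a winner-take-all step that keeps only the most strongly excited units. The dimensions, the projection-matrix density, and the hard top-k sparsification are illustrative assumptions, not the specific design choices evaluated in the paper.

    import numpy as np

    def flyhash(x, proj, k):
        # Expand: random projection of the d-dim input to m >> d dimensions.
        activations = proj @ x
        # Sparsify: winner-take-all keeps the k most excited units as a
        # binary code (hard top-k is one possible sparsifying activation).
        out = np.zeros(proj.shape[0])
        out[np.argsort(activations)[-k:]] = 1.0
        return out

    rng = np.random.default_rng(0)
    d, m, k = 50, 2000, 40                         # illustrative sizes
    # Sparse binary projection matrix: each expansion unit samples a
    # random subset of inputs (density 0.1 is an assumed value).
    proj = (rng.random((m, d)) < 0.1).astype(float)
    x = rng.random(d)
    h = flyhash(x, proj, k)
    assert h.sum() == k                            # exactly k active units

Because the code is binary and sparse, similar inputs tend to excite overlapping sets of winners, so overlap between codes can serve as a similarity measure for search.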

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024.
Keywords [en]
Neurons; Embeddings; Expand & sparsify; Hyperdimensional computing; Mushroom bodies; Neural circuits; Random projections; Similarity preserving; Similarity search; Sparse representation; Winner-take-all
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-76195
DOI: 10.1109/IJCNN60899.2024.10651277
Scopus ID: 2-s2.0-85205027306
OAI: oai:DiVA.org:ri-76195
DiVA, id: diva2:1914199
Conference
International Joint Conference on Neural Networks, IJCNN 2024. Yokohama, Japan. 30 June 2024 through 5 July 2024
Note

The work of DK was supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie Individual Fellowship Grant Agreement 839179. The work of DAR was supported in part by the Swedish Foundation for Strategic Research (SSF, grant nos. UKR22-0024 & UKR24-0014) and the Swedish Research Council Scholars at Risk Sweden (VR SAR, grant no. GU 2022/1963).

Available from: 2024-11-18. Created: 2024-11-18. Last updated: 2025-09-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kleyko, Denis
